
FRINX to present at OSN meet-up: Workflow, Inventory and Network Control

Major Russian internet player Mail.Ru Group will host the Open Source Networking meetup, co-organizing the event with the local Open Source networking community. FRINX will present its solution at the event.

The Open Source Networking meetup is taking place for the second time this year, this time on the premises of major Russian internet player Mail.Ru.

What are OSN User Groups about?

Open Source Networking User Groups (OSN User Groups) are locally managed groups that are passionate about network transformation through open source. Groups meet periodically to discuss practical applications and the latest innovations from the Linux Foundation networking projects: DPDK, FD.io, ONAP, OpenDaylight and SNAS, as well as other open source initiatives.

Mail.Ru’s Moscow HQ

These groups provide an opportunity to meet face to face with other open networking enthusiasts in the region, share ideas and insights, and work collaboratively to overcome the challenges of Software Defined Networking (SDN), Network Functions Virtualization (NFV), Management and Orchestration (MANO), cloud, data analytics, and the acceleration of underlying infrastructures.

What’s the plan?

The upcoming event will start on the 6th of March at 6 p.m. (GMT+3) at Mail.Ru’s HQ in Moscow. Speakers from three companies will present and discuss Open Source Networking topics. Martin Sunal from FRINX will start the guest segment at 6:30 p.m., presenting Workflow, Inventory and Network Control. Factor Group is co-organizing and attending the event.

Vladimir Gurevich from Barefoot Networks will present the P4 language for beginners at 8:30 p.m. The last guest speaker of the event comes from Noviflow, with a presentation dedicated to gRPC in SDN switches. The exact line-up for the event can be found here.

You won’t be able to attend in person but would still like to see the presentations – what now?

Don’t worry, the Open Source Networking Moscow User Group has its own YouTube channel. Last month’s event was fully documented, as was the event from July 2018. Click subscribe on the channel to be notified about new videos from this event.

FRINX Machine: Make different networks speak the same language

You won’t come across a smarter tool in the networking business. FRINX Machine manages workflow, inventory and network control. It packs a number of different functions into one user-friendly solution stack.

FRINX Machine can be used with networks of various sizes. It’s easy to install and use, and you need just three commands to set it up. You can run FRINX Machine on your local PC or on a high-end machine in the cloud.

FRINX UniConfig serves as the cornerstone

FRINX Machine is a dockerized deployment of three main elements. FRINX UniConfig allows FRINX Machine to translate between OpenConfig, vendor proprietary CLI and NETCONF dialects. It also connects to the devices in the network and keeps those connections alive. Furthermore, it pushes configuration data to devices and pulls configuration and operational data from them.

On top of that we’ve added workflow and inventory capabilities. Our pick for the job was Netflix Conductor, a microservice orchestration solution. It’s highly scalable, battle-proven and used, among others, by the biggest video streaming service. FRINX Machine uses Netflix Conductor to chain atomic tasks into complex workflows and helps to define, execute and monitor them.
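
To give a sense of what such a chained workflow looks like, here is a minimal Python sketch that registers a two-task workflow definition with Conductor. The workflow name, task names and base URL are illustrative assumptions, not part of FRINX Machine; the dict simply mirrors the JSON format Conductor accepts:

# Minimal sketch of a Conductor workflow definition, expressed as a Python dict.
# All names and the base URL are illustrative assumptions.
import requests

CONDUCTOR = "http://localhost:8080/api"   # assumed Conductor base URL; deployments differ

workflow_def = {
    "name": "configure_device_example",   # hypothetical workflow name
    "description": "Chain two atomic tasks into one workflow",
    "version": 1,
    "schemaVersion": 2,
    "tasks": [
        {
            "name": "collect_inventory",              # first atomic task (illustrative)
            "taskReferenceName": "collect_inventory_ref",
            "type": "SIMPLE",
            "inputParameters": {"device_id": "${workflow.input.device_id}"},
        },
        {
            "name": "push_config",                    # second atomic task (illustrative)
            "taskReferenceName": "push_config_ref",
            "type": "SIMPLE",
            # the output of the first task becomes the input of the second
            "inputParameters": {"config": "${collect_inventory_ref.output.config}"},
        },
    ],
}

# Register the definition with Conductor's metadata API so it can be executed and monitored.
requests.post(CONDUCTOR + "/metadata/workflow", json=workflow_def).raise_for_status()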

For the inventory system, we have picked Elasticsearch. It’s a proven tool that is pleasant to work with, and it comes with a good front end delivered by Kibana. In FRINX Machine it serves as inventory and log data storage.

The rest of the containerized package consists of the FRINX Microservices Engine, which works as a binder between Netflix Conductor and ODL, and FRINXit, which serves as a command line interface for FRINX UniConfig.

While creating FRINX Machine, the goal was to provide a platform enabling easy definition, execution and monitoring of complex workflows.

Easy to set up and use, highly available

With just three commands needed to install and run the whole thing, FRINX Machine is refreshingly simple to set up. First, run git clone to create a copy of the FRINX Machine repository. Second, run install to create all the containers in the infrastructure, and third, run startup. You can run everything in a minimal environment on your local PC, needing no more than 5 GB of RAM, which is ideal for developers. On the other hand, FRINX Machine can also run on high-end scalable machines in the cloud to match your processing and storage needs.

High availability is a hard requirement for many customers. When you run FRINX Machine in a production environment, the Execution Logic and State & Transformation components can run in clusters to provide high availability. On the other hand, when deploying into a development or less demanding environment, FRINX Machine allows you to scale down to a much smaller footprint.

Real world battlefield

The true strength of FRINX Machine comes to light in mission-critical applications. It is designed as a cloud native infrastructure workflow engine that manages execution state and related statistics. It provides events, triggers and full visibility into the data that is exchanged between tasks in a workflow. You can look at it as a real-time debugger for your infrastructure in a highly available, high-scale environment. Moreover, FRINX Machine offers the ability to write simple tasks, i.e. functions dealing with the execution logic, and string them together using a workflow.

Why we have decided to combine workflows with network data models

That opens this system up to personnel who were not trained to deal with model-based systems. Now anyone who is capable of writing code (e.g. Python) can contribute to these workflows, while interacting with the workflow engine, which manages the administration and execution of all tasks. This approach enables new features and capabilities to move from development and test to staging and production in the shortest amount of time possible. Learn more about why we have decided to combine workflows with network data models in the article below.


About FRINX

FRINX was founded in 2016 in Bratislava and consists of a team of passionate developers and industry professionals who want to change the way networking software is developed, deployed and used. FRINX offers distributions of OpenDaylight and FD.io in conjunction with support services. We are proud to have service provider and enterprise customers from the Fortune Global 500 list.

Why we have decided to combine workflows with network data models

A couple of words about our decision to combine workflows with network data models. We think it’s a match made in heaven. Here is why.

Network operators worldwide have embraced YANG models for services and devices, and we have implemented that in FRINX UniConfig. What we see now is that our customers write their own workflow execution logic, for example in Python, that interacts with our UniConfig APIs, with Ansible and with other systems, among them IPAM, CRM, inventory, AWS, GCP and many more.

Easy to build and hard to operate? Not anymore

Infrastructure engineers and network admins who write their own workflow execution logic usually tackle the following challenges. It is easy to build, but hard to operate a system that consists of many interdependent scripts, functions and playbooks. In systems where the workflow execution state is implicitly encoded in the functions and scripts that are calling each other, it is hard to determine to what execution stage the workflow has progressed, where it stopped or how its overall health is trending.

Modifying and augmenting existing workflows in such systems is hard. Only specialists with deep knowledge of the code and system behavior can modify them, which often reduces a team of specialists further down to the one person who implemented the code in the first place. If that person is no longer around, the alternatives are “don’t touch” or “full re-write”.

Difficult tasks made simple

This challenge has been addressed by providing a cloud native infrastructure workflow engine that manages execution state and related statistics and provides events and triggers. Furthermore, it provides full visibility into the data being exchanged between tasks in a workflow. Think of it as an execution environment and real-time debugger for your infrastructure. Workflows can be started via REST APIs or via the GUI and are executed in a fully decentralized way.

We use “workers” that implement the execution logic by polling the workflow engine. Those workers can be scaled up and down based on workload needs. The persistence layer uses Redis for state and Elasticsearch for statistics and logs. This approach allows users to run workflows in parallel, scaled by the number of “workers”, with full visibility into execution state and statistics.
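
For illustration, here is a minimal sketch of such a worker in Python, polling Conductor over REST for one task type, executing it and reporting the result back. The task type, worker id and base URL are assumptions made for the example:

# Minimal sketch of a polling "worker". Task type, worker id and URL are assumptions.
import time
import requests

CONDUCTOR = "http://localhost:8080/api"   # assumed Conductor base URL

def execute(task_input):
    # the actual execution logic would go here (call UniConfig, Ansible, AWS, ...)
    return {"result": "ok"}

while True:
    # poll the workflow engine for pending work of one task type
    resp = requests.get(CONDUCTOR + "/tasks/poll/push_config",
                        params={"workerid": "worker-1"})
    if resp.status_code != 200 or not resp.text:
        time.sleep(1)          # no pending task right now
        continue
    task = resp.json()
    output = execute(task.get("inputData", {}))
    # report the result back; the engine persists state and statistics
    requests.post(CONDUCTOR + "/tasks", json={
        "taskId": task["taskId"],
        "workflowInstanceId": task["workflowInstanceId"],
        "status": "COMPLETED",
        "outputData": output,
    })

Because each worker only needs HTTP access to the engine, workers can be added or removed at any time to match the workload.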

Everyone in the team can contribute

Having the ability to write simple tasks, i.e. functions dealing with the execution logic, and string them together using a workflow opens this system up to personnel that previously did not have the expertise to deal with model-based systems.

Anyone who is capable of writing code (e.g. Python) can contribute to these workflows, while interacting with one logical function, the workflow engine, which manages the administration and execution of all tasks. This approach enables new features and capabilities to move from development and test to staging and production in the shortest amount of time possible.

A single workflow for cloud and on-prem assets

Here is an example where we have combined two tasks that are often handled by separate teams into a single workflow. Infrastructure engineers and network admins need to create assets in the cloud and need to configure on-prem assets to connect with those cloud assets.

We have created a workflow that uses Terraform to create a VPC in AWS and that configures on-prem networking equipment with the correct IP addresses, VLAN information and BGP configuration to establish the direct connection to AWS. The result is that infrastructure engineers can provide a single, well-defined API to northbound systems that activate the service. Examples of such northbound systems are ServiceNow, Salesforce, jBPM-based systems or any business process management system with a REST interface.

Let’s have a look

Here is the graph of our workflow before we start the execution. The graph is defined in JSON and can be customized and augmented by users as needed.

Before we execute the workflow, we need to provide the necessary input parameters, either via the GUI, as shown below, or programmatically via a REST call.
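
A programmatic start could look roughly like the Python sketch below; the workflow name, input parameter names and base URL are assumptions made for illustration:

# Starting the workflow via REST instead of the GUI; all names are illustrative.
import requests

CONDUCTOR = "http://localhost:8080/api"   # assumed Conductor base URL

workflow_input = {
    "aws_region": "eu-west-1",   # illustrative input parameters
    "vlan_id": 1111,
    "device_id": "leaf-1",
}

resp = requests.post(CONDUCTOR + "/workflow/Create_VPC_and_connect_on_prem",
                     json=workflow_input)
resp.raise_for_status()
print("started workflow instance:", resp.text)   # Conductor returns the new instance id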

After we have provided all parameters and have started the execution, we can monitor the progress of the workflow by seeing its color change from orange (Running) to green (Completed) or to red (Failure).

After the first two tasks have completed, we see that a new VPC and NAT Gateway were created in our AWS data center.

Terraform provides us with information from AWS. The second task in our workflow, “Terraform apply”, provides output variables that can be used by the following tasks. Here we receive the public IP address, VLAN and other information that we can process for our network device configuration.

In the workflow definition we see that we use the variable from the output of the terraform apply task to configure the BGP neighbor on the network device.
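
In Conductor’s expression syntax this is done with ${taskReferenceName.output.field} references. Here is a hedged fragment of what such a task input can look like; the reference name and field names are made up for illustration:

# Fragment of a task definition that consumes the "Terraform apply" output.
# The reference name "terraform_apply_ref" and the output field names are assumptions.
configure_bgp_task = {
    "name": "configure_bgp_neighbor",
    "taskReferenceName": "configure_bgp_ref",
    "type": "SIMPLE",
    "inputParameters": {
        # resolved at runtime from the output of the preceding Terraform apply task
        "neighbor_ip": "${terraform_apply_ref.output.peer_ip}",
        "vlan": "${terraform_apply_ref.output.vlan_id}",
    },
}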

In the next steps we mount a network device and configure it to provide connectivity to the AWS VPC. For demonstration purposes, we use two techniques for device configuration. One method uses templates with a parameter dictionary and the second method uses the FRINX UniConfig APIs with OpenConfig semantics and full transactionality.
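
For orientation, mounting a CLI device in UniConfig is typically a RESTCONF call along the lines of the Python sketch below. The URL, credentials and node fields follow the general FRINX ODL pattern but vary between releases, so treat them as approximations rather than exact API documentation:

# Approximate sketch of mounting a CLI device via RESTCONF; fields and URL are illustrative.
import requests

ODL = "http://localhost:8181/restconf"    # assumed UniConfig/ODL address
AUTH = ("admin", "admin")                 # assumed default credentials

mount_body = {
    "network-topology:node": [{
        "node-id": "leaf-1",
        "cli-topology:host": "192.0.2.10",
        "cli-topology:port": 22,
        "cli-topology:transport-type": "ssh",
        "cli-topology:device-type": "ios xr",     # selects the matching translation units
        "cli-topology:device-version": "6.*",
        "cli-topology:username": "cisco",
        "cli-topology:password": "cisco",
    }]
}

requests.put(ODL + "/config/network-topology:network-topology/topology/cli/node/leaf-1",
             json=mount_body, auth=AUTH).raise_for_status()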

A task that implements device configuration via a template is shown below. A template (“template”) containing variables and a dictionary of parameters (“params”), either static or obtained dynamically from other tasks in the workflow, are passed to the task; the rendered configuration is then executed on the device. All text is escaped so it can be handled in JSON.
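
As an illustration of this approach, the task input could look like the sketch below. Only the “template” and “params” keys come from the description above; the variable syntax and all values are invented for the example:

# Illustrative input for a templated configuration task; variable syntax and values are assumptions.
template_task_input = {
    "template": ("configure terminal\n"
                 "interface Vlan$vlan_id\n"
                 " description routed vlan $vlan_id interface for vpc $vpc_id\n"
                 " ip address $ip_address\n"
                 "end\n"),
    "params": {
        # static values or values taken from the output of earlier tasks
        "vlan_id": "1111",
        "vpc_id": "vpc-123412341234",
        "ip_address": "10.30.32.54/31",
    },
}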

The second method to apply configuration takes advantage of the UniConfig API. It starts with a “sync-from-network” call to obtain the latest configuration state from the network device that is to be configured. The next step is to load our intent, the BGP configuration, into the UniConfig datastore. Finally, we issue a commit to apply the configuration to the network device. If this step fails, UniConfig takes care of rolling the configuration back and restoring the device to its state before the configuration attempt. Transactions can be performed on one device or across multiple devices.
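
In Python, that sequence can be sketched roughly as follows. The RPC names, RESTCONF paths and the abbreviated BGP payload follow the general UniConfig pattern but are approximations, not exact API documentation:

# Rough sketch of the sync-from-network / write intent / commit sequence.
# Paths, RPC names and payload are approximate and abbreviated.
import requests

ODL = "http://localhost:8181/restconf"    # assumed UniConfig/ODL address
AUTH = ("admin", "admin")                 # assumed default credentials
NODES = {"input": {"target-nodes": {"node": ["leaf-1"]}}}

# 1) pull the latest configuration state from the device
requests.post(ODL + "/operations/uniconfig-manager:sync-from-network",
              json=NODES, auth=AUTH).raise_for_status()

# 2) load our intent (the BGP neighbor) into the UniConfig datastore
bgp_intent = {
    "frinx-openconfig-network-instance:protocol": [{
        "identifier": "BGP",
        "name": "default",
        "bgp": {"neighbors": {"neighbor": [{
            "neighbor-address": "63.33.91.252",
            "config": {"neighbor-address": "63.33.91.252", "peer-as": 65071},
        }]}},
    }]
}
# path abbreviated and approximate; it targets the node's configuration subtree
requests.put(ODL + "/config/network-topology:network-topology/topology/uniconfig/"
             "node/leaf-1/frinx-uniconfig-topology:configuration/"
             "frinx-openconfig-network-instance:network-instances/"
             "network-instance/default/protocols/protocol/BGP/default",
             json=bgp_intent, auth=AUTH).raise_for_status()

# 3) commit; on failure UniConfig rolls the device back to its previous state
requests.post(ODL + "/operations/uniconfig-manager:commit",
              json=NODES, auth=AUTH).raise_for_status()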

Finally, we can see the journal, which reflects all information that has been sent to the device, by examining the task output of the “read journal” task.

Below you can find the unescaped version of the journal content:

2018-11-21T16:46:59.919: show running-config
2018-11-21T16:47:02.478: show history all | include Configured from
2018-11-21T16:47:02.628: show running-config
2018-11-21T16:47:08.85: configure terminal
vlan 1111
!
vlan configuration 1111
!
end
2018-11-21T16:47:11.516: configure terminal
interface Vlan1111
description routed vlan 1111 interface for vpc vpc-123412341234
no shutdown
mtu 9000
no bfd echo
no ip redirects
ip address 10.30.32.54/31
ip unreachables
!
end
2018-11-21T16:47:15.927: configure terminal
interface Ethernet1/3
switchport trunk allowed vlan add 1111
!
end
2018-11-21T16:47:18.475: show history all | include Configured from
2018-11-21T16:47:18.516: show running-config
2018-11-21T16:47:34.545: configure terminal
router bgp 65000
neighbor 63.33.91.252 remote-as 65071
neighbor 63.33.91.252 description ^:tier2:bgp:int:uplink:vlan::abc:eth1/1:1111:trusted:abcdef
neighbor 63.33.91.252 update-source Vlan1111
neighbor 63.33.91.252 route-map DEFAULT_ONLY out
address-family ipv4
neighbor 63.33.91.252 activate
exit
end
2018-11-21T16:47:34.974: show history all | include Configured from

This concludes the workflow. Similarly, we can design other workflows to change or clean up all assets in case the connection is decommissioned.

 

About FRINX

FRINX was founded in 2016 in Bratislava and consists of a team of passionate developers and industry professionals who want to change the way networking software is developed, deployed and used. FRINX offers distributions of OpenDaylight and FD.io in conjunction with support services and is proud to have service provider and enterprise customers from the Fortune Global 500 list.

FRINX UniConfig: The World’s Leading OSS Network Device Library for Structured Data

We just realized that we have built the world’s largest open source network device library for structured data. NAPALM is not even close, and Ansible deals with unstructured data. Also, Ansible requires users to implement their own data store, device version control, parsers, mapping between models and transaction logic. All of that and more is part of FRINX UniConfig.

Our customers want us to control a wide range of network devices from many different vendors. Together with our technology partners and customers we have built an open source device library that we are proud of. FRINX’s UniConfig device library covers JunOS, classic IOS (back to 12.0, including the latest versions), IOS XR (4, 5, 6, 7), IOS XE, Nexus OS, Huawei VRP, Brocade, Dasan and many more to come. Our library supports access to devices via CLI and NETCONF and provides translation from vendor-specific CLI and YANG models to standards-based OpenConfig models. Our upcoming release (FRINX ODL 4.2.x) also supports native YANG models with our new “UniConfig native” feature.

Choosing the right way

We made some deliberate choices early on when we decided how to build our library. We had to decide whether to use templating tools or code to implement it. In the end we chose code to implement the readers and writers in our library. We have built a framework that reduces the amount of code that has to be written to a minimum and allows contributors to focus on parsing, transforming and writing commands.

Since making that decision, we have encountered many situations that required complex mappings and platform-specific dynamic behavior. Those experiences confirmed our decision to implement our device library in code. But all of that is only useful if users can easily find what is supported in which release. Hence, we have built automated documentation that allows customers to search for the features and platforms supported in our library. The documentation is part of our build process and is always up to date with our latest developments and additions.

Same tools for different devices

FRINX set out to build a network control product that our users love. Our goal is to give them the same tools that they use for software development, such as diff, snapshots, sync and commit, and make those available for interacting with their networks. We read and write network configurations, and our built-in datastore allows us to support stateful applications.

We compare intent with actual device configuration and can perform atomic transactions on a single device or even across multiple network devices. We execute disjoint sets of transactions in parallel and scale up to thousands of connections. FRINX UniConfig enables transactions and rollback across devices that have no native support for rollback or transactions.

We appreciate your feedback about which devices and features you want us to add next, or which ones you would be interested in contributing to our library (no worries, we are happy to help). We look forward to seeing many new use cases of FRINX UniConfig and hearing back from you.

LEARN MORE:

FRINX UniConfig Device Library on GitHub

https://github.com/FRINXio/cli-units
https://github.com/FRINXio/unitopo-units

Device Library Documentation (auto-generated)

https://frinxio.github.io/translation-units-docs/

Device Library Use Case Configuration (manually maintained)

https://github.com/FRINXio/translation-units-docs

Download FRINX Machine and start using our library

https://github.com/FRINXio/FRINX-machine

About FRINX

FRINX was founded in 2016 in Bratislava and consists of a team of passionate developers and industry professionals who want to change the way networking software is developed, deployed and used. FRINX offers distributions of OpenDaylight and FD.io in conjunction with support services and is proud to have service provider and enterprise customers from the Fortune Global 500 list.

FRINX joins Intel Network Builders program

Intel Network Builders is an ecosystem of independent companies from a broad spectrum of the networking industry. Software vendors, operating system vendors, original equipment manufacturers, telecom equipment manufacturers, system integrators and communications service providers are all part of Intel’s program.

These companies are coming together to accelerate the adoption of solutions based on network functions virtualization (NFV) and software defined networking (SDN) in telecommunications networks and in public, private enterprise and hybrid clouds. Intel Network Builders connects service providers and end users with the infrastructure, software and technology vendors that are driving new solutions to market. It offers technical support, matchmaking and co-marketing opportunities to help facilitate joint collaboration from the discovery phase to the eventual trial and deployment of NFV and SDN solutions.

The Intel Network Builders program seeks to increase ecosystem alignment and build a strong, sustainable market advantage for its members through solution-centered ecosystem collaboration based on Intel architecture. Today, the program counts more than 260 partners, a growing number of end user members and increasing opportunities for collaboration. We are proud to have joined the Intel Network Builders program, which is focused on building the networks of the future.

About FRINX

FRINX was founded in 2016 in Bratislava and consists of a team of passionate developers and industry professionals who want to change the way networking software is developed, deployed and used. FRINX offers distributions of OpenDaylight and FD.io in conjunction with support services and is proud to have service provider and enterprise customers from the Fortune Global 500 list.