FRINX Learning Labs: How to easily create LACP link bundles with a Python application

In our new lab you will learn how to create and use a simple Python application that leverages the FRINX UniConfig framework to provision LACP link bundles between two network devices.

The purpose of this application is to show developers how to use the UniConfig API in FRINX ODL to configure services in network devices.

For a simple and efficient programming experience, we provide a Python client library, generated by Swagger, that exposes the UniConfig API. You can inspect the library by unzipping the .whl file provided together with the lacp app source code.

The application that we wrote is called lacp-service.py and automates the process of configuring LACP link bundles between two network devices using the UniConfig API. The application was created for demonstration purposes and is not meant for use in production.
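To give a flavor of what such an application does, here is a minimal sketch of building an OpenConfig-style payload for a LACP bundle. The function name, interface names and payload layout below are illustrative assumptions, not the actual lacp-service.py code:

```python
def build_lacp_bundle_payload(bundle_id, member_interfaces):
    """Build an OpenConfig-style payload for a LACP bundle and its members.

    Illustrative only: the bundle naming convention and exact payload layout
    are assumptions, not the structures used by lacp-service.py."""
    bundle_name = f"Bundle-Ether{bundle_id}"
    bundle = {
        "name": bundle_name,
        "config": {"name": bundle_name, "type": "iana-if-type:ieee8023adLag"},
        "aggregation": {"config": {"lag-type": "LACP"}},
    }
    members = [
        {
            "name": member,
            "config": {"name": member},
            # each member interface points at the bundle it belongs to
            "ethernet": {"config": {"aggregate-id": bundle_name}},
        }
        for member in member_interfaces
    ]
    return {"interface": [bundle] + members}

payload = build_lacp_bundle_payload(
    100, ["GigabitEthernet0/0/0/1", "GigabitEthernet0/0/0/2"])
```

A payload like this would then be sent to both devices through the Swagger-generated client, followed by a commit.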

Try out our lab HERE.

 

Why we have decided to combine workflows with network data models

A couple of words about our decision to combine workflows with network data models. We think it’s a match made in heaven. Here is why.

Network operators worldwide have embraced YANG models for services and devices, and we have implemented that in FRINX UniConfig. What we see now is that our customers write their own workflow execution logic, for example in languages like Python, that interacts with our UniConfig APIs, with Ansible and with other systems. Among them are IPAM, CRM, inventory, AWS, GCP and many more.

Easy to build and hard to operate? Not anymore

Infrastructure engineers and network admins who write their own workflow execution logic usually tackle the following challenges. It is easy to build, but hard to operate a system that consists of many interdependent scripts, functions and playbooks. In systems where the workflow execution state is implicitly encoded in the functions and scripts that are calling each other, it is hard to determine to what execution stage the workflow has progressed, where it stopped or how its overall health is trending.

Modifying and augmenting existing workflows in such systems is hard. Only specialists with deep knowledge of the code and system behavior can modify them, and often that pool shrinks to the one person who implemented the code in the first place. If that person is no longer around, the alternatives are “don’t touch” or “full re-write”.

Difficult tasks made simple

We address this challenge by providing a cloud-native infrastructure workflow engine that manages execution state and related statistics and provides events and triggers. Furthermore, it provides full visibility into the data being exchanged between tasks in a workflow. Think of it as an execution environment and real-time debugger for your infrastructure. Workflows can be started via REST APIs or via the GUI and are executed in a fully decentralized way.

We use “workers” that implement the execution logic by polling the workflow engine. Those workers can be scaled up and down based on workload needs. The persistence layer uses Redis for state and Elasticsearch for statistics and logs. This approach allows users to run workflows in parallel, scaled by the number of “workers”, with full visibility into execution state and statistics.
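A worker in this model can be as small as a polling loop. The sketch below separates the task logic from the transport; the `/tasks/poll` and `/tasks` endpoints and field names are assumptions modeled on Conductor-style engines, not a documented FRINX API:

```python
import json
import urllib.request

ENGINE = "http://localhost:8080/api"  # assumed address of the workflow engine

def execute(task_input):
    """The execution logic of this worker: a trivial demo task."""
    return {"status": "COMPLETED",
            "output": {"greeting": f"hello {task_input['name']}"}}

def poll_once(task_type, worker_id):
    """Poll the engine for one pending task, run it, and report the result.

    Endpoint paths and payload fields are assumptions (Conductor-style)."""
    with urllib.request.urlopen(
            f"{ENGINE}/tasks/poll/{task_type}?workerid={worker_id}") as resp:
        body = resp.read()
    if not body:
        return None  # nothing queued for this task type
    task = json.loads(body)
    result = execute(task["inputData"])
    update = json.dumps({
        "taskId": task["taskId"],
        "workflowInstanceId": task["workflowInstanceId"],
        "status": result["status"],
        "outputData": result["output"],
    }).encode()
    urllib.request.urlopen(urllib.request.Request(
        f"{ENGINE}/tasks", data=update, method="POST",
        headers={"Content-Type": "application/json"}))
    return task["taskId"]
```

Scaling out then means simply running more copies of this loop; the engine hands each polled task to exactly one worker.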

Everyone in the team can contribute

Having the ability to write simple tasks, i.e. functions dealing with the execution logic, and stringing them together using a workflow, opens this system up to personnel that previously did not have the expertise to deal with model based systems.

Anyone who is capable of writing code (e.g. in Python) can contribute to these workflows, while interacting with one logical component, the workflow engine, that manages the administration and execution of all tasks. This approach enables new features and capabilities to move from development and test to staging and production in the shortest amount of time possible.

A single workflow for cloud and on-prem assets

Here is an example where we have combined two tasks that are often handled by separate teams into a single workflow. Infrastructure engineers and network admins need to create assets in the cloud and need to configure on-prem assets to connect with those cloud assets.

We have created a workflow that uses Terraform to create a VPC in AWS and that configures on-prem networking equipment with the correct IP addresses, VLAN information and BGP configuration to establish the direct connection to AWS. The result is that infrastructure engineers can provide a single, well-defined API to northbound systems that activate the service. Examples of such northbound systems are ServiceNow, Salesforce, jBPM-based systems or any business process management system with a REST interface.

Let’s have a look

Here is the graph of our workflow before we have started the execution. The graph is defined by JSON and can be customized and augmented as needed by users.

Before we execute the workflow we need to provide the necessary input parameters, either via GUI, like shown below, or programmatically via a REST call.
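Starting a workflow programmatically then boils down to a single POST with the input parameters. The `/api/workflow/{name}` endpoint below is an assumption modeled on Conductor-style engines, and the workflow name and inputs are hypothetical:

```python
import json
import urllib.request

def build_start_request(base_url, workflow_name, inputs):
    """Build the URL and JSON body that start one workflow run."""
    return f"{base_url}/api/workflow/{workflow_name}", json.dumps(inputs)

def start_workflow(base_url, workflow_name, inputs):
    """POST the inputs to the engine; the response body is the new run's id."""
    url, body = build_start_request(base_url, workflow_name, inputs)
    req = urllib.request.Request(url, data=body.encode(),
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

The GUI form shown below fills in exactly the same input dictionary that `start_workflow` would send.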

After we have provided all parameters and have started the execution, we can monitor the progress of the workflow by seeing its color change from orange (Running) to green (Completed) or to red (Failure).

After the first two tasks have completed, we see that a new VPC and NAT gateway were created in our AWS account.

Terraform provides us with information from AWS. The second task in our workflow “Terraform apply” provides output variables that can be used by the following tasks. Here we receive the public IP address, VLAN and other information that we can process for our network device configuration.

In the workflow definition we see that we use the variable from the output of the terraform apply task to configure the BGP neighbor on the network device.
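As an illustration, a task definition that consumes such an output variable might look like this. The task name, reference name and variable names are hypothetical; the `${task_ref.output.variable}` syntax is the engine's expression format for wiring one task's output into another task's input:

```json
{
  "name": "configure_bgp_neighbor",
  "taskReferenceName": "configure_bgp",
  "type": "SIMPLE",
  "inputParameters": {
    "neighbor_address": "${terraform_apply.output.public_ip}",
    "vlan_id": "${terraform_apply.output.vlan}"
  }
}
```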

In the next steps we mount a network device and configure it to provide connectivity to the AWS VPC. For demonstration purposes, we use two techniques for device configuration. One method uses templates with a parameter dictionary; the second uses the FRINX UniConfig APIs with OpenConfig semantics and full transactionality.

A task that implements device configuration via a template is shown below. A template (“template”) containing variables and a dictionary of parameters (“params”), either static or obtained dynamically from other tasks in the workflow, are passed in; the rendered configuration is then executed on the device. All text is escaped so it can be handled in JSON.
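A minimal sketch of this templating approach, using Python's standard-library `string.Template`. The template text and parameter values below are illustrative (borrowed from the journal output later in this post), not the actual task definition:

```python
from string import Template

# "template": configuration text with $-prefixed variables
template = Template(
    "interface Vlan$vlan_id\n"
    " description routed vlan $vlan_id interface for vpc $vpc_id\n"
    " ip address $ip_address\n"
    " no shutdown"
)

# "params": static values or outputs of earlier workflow tasks
params = {"vlan_id": "1111",
          "vpc_id": "vpc-123412341234",
          "ip_address": "10.30.32.54/31"}

config = template.substitute(params)  # rendered text sent to the device
```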

The second method to apply configuration takes advantage of the UniConfig API. It starts with a “sync-from-network” call to obtain the latest configuration state from the network device that is to be configured. The next step is to load our intent, the BGP configuration, into the UniConfig datastore. Finally we issue a commit to apply the configuration to the network device. If this step fails, UniConfig takes care of rolling the configuration back and restoring the device to its state before the configuration attempt. Transactions can be performed on one or across multiple devices.
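The same three-step sequence can be sketched as plain RESTCONF calls. The helpers below only build the requests; the RPC paths follow uniconfig-manager conventions of FRINX ODL, but treat the exact URLs, ports and node names as assumptions:

```python
import json

BASE = "http://localhost:8181/restconf"  # assumed FRINX ODL RESTCONF root

def sync_from_network(nodes):
    """Build the RPC call that pulls the latest config from the devices."""
    body = {"input": {"target-nodes": {"node": nodes}}}
    return ("POST", f"{BASE}/operations/uniconfig-manager:sync-from-network",
            json.dumps(body))

def put_intent(node, subtree, config):
    """Build the call that writes intent into the uniconfig datastore.

    Nothing touches the device yet; the intent only lives in the datastore."""
    url = (f"{BASE}/config/network-topology:network-topology/"
           f"topology/uniconfig/node/{node}/{subtree}")
    return "PUT", url, json.dumps(config)

def commit(nodes):
    """Build the commit RPC that applies the intent to the devices.

    On failure, UniConfig rolls the devices back to their previous state."""
    body = {"input": {"target-nodes": {"node": nodes}}}
    return ("POST", f"{BASE}/operations/uniconfig-manager:commit",
            json.dumps(body))
```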

Finally, by examining the task output of the “read journal” task, we can see the journal that reflects all information sent to the device.

Below you can find the unescaped version of the journal content:

2018-11-21T16:46:59.919: show running-config
2018-11-21T16:47:02.478: show history all | include Configured from
2018-11-21T16:47:02.628: show running-config
2018-11-21T16:47:08.85: configure terminal
vlan 1111
!
vlan configuration 1111
!
end
2018-11-21T16:47:11.516: configure terminal
interface Vlan1111
description routed vlan 1111 interface for vpc vpc-123412341234
no shutdown
mtu 9000
no bfd echo
no ip redirects
ip address 10.30.32.54/31
ip unreachables
!
end
2018-11-21T16:47:15.927: configure terminal
interface Ethernet1/3
switchport trunk allowed vlan add 1111
!
end
2018-11-21T16:47:18.475: show history all | include Configured from
2018-11-21T16:47:18.516: show running-config
2018-11-21T16:47:34.545: configure terminal
router bgp 65000
neighbor 63.33.91.252 remote-as 65071
neighbor 63.33.91.252 description ^:tier2:bgp:int:uplink:vlan::abc:eth1/1:1111:trusted:abcdef
neighbor 63.33.91.252 update-source Vlan1111
neighbor 63.33.91.252 route-map DEFAULT_ONLY out
address-family ipv4
neighbor 63.33.91.252 activate
exit
end
2018-11-21T16:47:34.974: show history all | include Configured from

This concludes the workflow. In the same way, we can design other workflows to change or clean up all assets in case the connection is decommissioned.

 

About FRINX

FRINX was founded in 2016 in Bratislava and consists of a team of passionate developers and industry professionals who want to change the way networking software is developed, deployed and used. FRINX offers distributions of OpenDaylight and FD.io in conjunction with support services and is proud to have service provider and enterprise customers from the Fortune Global 500 list.

FRINX UniConfig: Worldwide leading OSS Network Device Library for Structured Data

We just realized that we have built the world’s largest open source network device library for structured data. Napalm is not even close, and Ansible deals with unstructured data. Ansible also requires users to implement their own data store, device version control, parsers, the mapping between models and the transaction logic. All of that and more is part of FRINX UniConfig.

Our customers want us to control a wide range of network devices from many different vendors. Together with our technology partners and customers we have built an open source device library that we are proud of. FRINX’s UniConfig device library contains JunOS, IOS classic (back to 12.0, including the latest versions), IOS XR (versions 4 through 7), IOS XE, Nexus OS, Huawei VRP, Brocade, Dasan and many more to come. Our library supports access to devices via CLI and NETCONF and provides translation from vendor-specific CLI and YANG models to standards-based OpenConfig models. Our upcoming release (FRINX ODL 4.2.x) also supports native YANG models with our new “UniConfig native” feature.

Choosing the right way

We made some deliberate choices early on when we decided how to build our library. We had to decide whether to use templating tools or code to implement it. In the end, we chose code to implement the readers and writers in our library. We have built a framework that reduces the amount of code that has to be written to a minimum and allows contributors to focus on parsing, transforming and writing commands.

Since making that decision, we have encountered many situations that required complex mappings and platform-specific dynamic behavior. Those experiences confirmed our choice to implement the device library in code. But all of that is only useful if users can easily find out what is supported in which release. Hence, we have built automated documentation that lets customers search for the features and platforms supported in our library. The documentation is part of our build process and is always up to date with our latest developments and additions.

Same tools for different devices

FRINX set out to build a network control product that our users love. Our goal is to give them the same tools that they use for software development like diff, snapshots, sync, and commits and make those available to interact with their networks. We read and write network configurations and our built-in datastore allows us to support stateful applications.

We compare intent with actual device configuration and can perform atomic transactions on one device or even across multiple network devices. We execute disjoint sets of transactions in parallel and scale up to thousands of connections. FRINX UniConfig enables transactions and rollback across devices that have no native support for either.

We appreciate your feedback about which devices and features you want us to add next, or which ones you would be interested in contributing to our library (no worries, we are happy to help). We look forward to seeing many new use cases of FRINX UniConfig and hearing back from you.

LEARN MORE:

FRINX UniConfig Device Library on GitHub

https://github.com/FRINXio/cli-units
https://github.com/FRINXio/unitopo-units

Device Library Documentation (auto-generated)

https://frinxio.github.io/translation-units-docs/

Device Library Use Case Configuration (manually maintained)

https://github.com/FRINXio/translation-units-docs

Download FRINX Machine and start using our library

https://github.com/FRINXio/FRINX-machine


FRINX and China Telecom Beijing Research Institute Collaborate

FRINX and China Telecom Beijing Research Institute are working together to develop an open source-based control system for China Telecom.

The Chinese telecommunications giant is a leader in developing open source-based solutions. FRINX provides parts of the source code, libraries and services to support China Telecom’s development. The Slovak start-up has developed the UniConfig Framework, which China Telecom Beijing Research Institute (CTBRI) uses to configure its networking services. CTBRI is automating L3VPN service provisioning with FRINX’s L3VPN service module, which implements the IETF data model standard (RFC 8299).

“Open Source has become the de facto standard for developing new networking services and support functions. China Telecom is leading the way by leveraging IETF standards for service models, OpenConfig for device abstraction and OpenDaylight for implementing its controller applications. FRINX is proud to be part of China Telecom’s controller project,” said FRINX co-founder and CEO Gerhard Wieser.

Giant found a small partner

“China Telecom is developing its network control system based on open source components to manage its network services and to introduce new services faster and more easily into our network. FRINX is the appropriate partner to help us master the key components of OpenDaylight for this project. The conscientiousness and dedication of the FRINX colleagues impressed us deeply and we look forward to cooperation opportunities in the future,” said Mr. Aijun Wang, Manager of the L3VPN PoC project at China Telecom Beijing Research Institute.

CTBRI’s main R&D areas include communication and information technology development trends and strategic research; communication and information technology development policy research; corporate decision-making methodology research; corporate strategy development research; communication networks, technology and business development planning research; communication technology system and standards research; evaluation of new technologies, new equipment and new products in networks; development of supporting systems such as network management and business management; application software research and system integration; and development and promotion of new information products and value-added services for communication information.

About FRINX

Our company offers solutions and services for open source network control and automation. FRINX consists of a team of passionate developers and industry professionals who want to change the way networking software is created, deployed and operated. FRINX offers network automation products and distributions of OpenDaylight and FD.io in conjunction with support services and is proud to count service providers and enterprise companies from the Fortune Global 500 list among its customers.

Read more about the FRINX ODL Distribution and FRINX UniConfig here:

https://frinx.io/

https://frinx.io/odl_distribution

FRINX joins Intel Network Builders program

Intel Network Builders is an ecosystem of independent companies from a broad spectrum of the networking industry. Software vendors, operating system vendors, original equipment manufacturers, telecom equipment manufacturers, system integrators and communications service providers are all part of Intel’s program.

These companies are coming together to accelerate the adoption of network functions virtualization (NFV)- and software-defined networking (SDN)-based solutions in telecommunications networks and in public, private enterprise and hybrid clouds. Intel Network Builders connects service providers and end users with the infrastructure, software and technology vendors that are driving new solutions to the market. Intel Network Builders offers technical support, matchmaking and co-marketing opportunities to help facilitate joint collaboration from the discovery phase to the eventual trial and deployment of NFV and SDN solutions.

The Intel Network Builders program seeks to increase ecosystem alignment and build a strong and sustainable market advantage for its members through solution-centered ecosystem collaboration based on Intel architecture. Today, the Network Builders program counts more than 260 partners, a growing number of end-user members and increasing opportunities for collaboration. We are proud to have joined the Intel Network Builders program, which is focused on building the networks of the future.


FRINX and SoftBank Collaborate to Create Open-Source Based Network Operations System

FRINX and SoftBank are collaborating and have created an open-source based operating solution for SoftBank’s networks. Slovak startup FRINX is providing support for its distribution of OpenDaylight, enabling SoftBank to focus on creating their applications.

FRINX offers a fully supported version of OpenDaylight (FRINX ODL Distribution), which gives SoftBank a reliable infrastructure on which to build applications that manage its networks. Meanwhile, FRINX provides support, quality assurance and performance enhancements to the code base and manages the upstreaming process. SoftBank also uses the FRINX Smart Build Engine, FRINX’s development and test environment, which allows customers to automate the build process and system tests for OpenDaylight and its applications in minutes, compared to weeks or months with do-it-yourself solutions.

“Open Source has become the de facto standard for developing new applications in the networking space. SoftBank is one of the leaders in the telecommunications industry leveraging that trend,” said FRINX co-founder and CEO Gerhard Wieser. FRINX and SoftBank are moving the industry forward by building applications on a supported open-source foundation to evolve SoftBank’s network operations.

Read more about the FRINX ODL Distribution and the FRINX Smart Build Engine here:

https://frinx.io/odl_distribution

https://frinx.io/sbe

About SoftBank

SoftBank Corp., a subsidiary of SoftBank Group Corp. (TOKYO: 9984), provides mobile communication, fixed-line communication and Internet connection services to customers in Japan. Leveraging synergies with other companies in the SoftBank Group, SoftBank Corp. aims to transform lifestyles through ICT and expand into other business areas including AI, smart robotics, IoT and energy. To learn more, please visit www.SoftBank.jp/en/corp/group/sbm/
