Here is a brief update on what we have been up to lately. We added more functionality (e.g., config timestamp checks) and more parallelism (e.g., parallel commits across groups of non-overlapping devices) to our UniConfig framework. UniConfig is deployed at large telcos in Asia. We are super happy about that, as you can imagine.

We completed a new application on top of our UniConfig framework that lets us build physical L0/1 topologies across different vendors. We read LLDP information from the devices and translate it into one common L0/1 topology based on the IETF topology model. The app is massively parallel and can handle a large number of endpoints. You can use the L0/1 topology for many things, such as audits, business rules, and input for automatic device/service configuration. Here is a 3 min summary of how it works.

Also, we started to work with Conductor, the microservice orchestration engine from Netflix. We like the approach Netflix took: a massively scalable microservice orchestration engine that is battle-tested in their own environment. We implement all services in the workflow as REST endpoints, and Conductor helps us formalize tasks and workflows via JSON definitions. Conductor uses Redis/Dynomite and Elasticsearch as persistence layers. We use clustered ODL/UniConfig as our API into the network elements and services. Here are a few snapshots.

We have created a sample workflow that we start by passing the name of a device in a REST POST call to Conductor. The following tasks then execute in sequence: 1) fetch device interface information in OpenConfig format from ODL, 2) send that information to a microservice (implemented in Python) that selects the next available interface on the device, and finally 3) post the selected interface name to our Slack channel.
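To give an idea of what step 2 does, here is a minimal sketch of the selection logic. The payload shape follows the OpenConfig interfaces model returned by ODL; the "available" criterion (no description configured) and the function name are assumptions for illustration, not our actual business rules.

```python
# Hypothetical sketch of the interface-selection microservice logic (task 2).
# Assumption: an interface with no configured description counts as "available".
def select_next_interface(openconfig_payload):
    """Return the name of the first interface without a description, or None."""
    interfaces = (
        openconfig_payload
        .get("frinx-openconfig-interfaces:interfaces", {})
        .get("interface", [])
    )
    for iface in sorted(interfaces, key=lambda i: i.get("name", "")):
        config = iface.get("config", {})
        if not config.get("description"):  # treated here as "available"
            return iface["name"]
    return None

# Example payload in the shape ODL returns for the OpenConfig interfaces model
sample = {
    "frinx-openconfig-interfaces:interfaces": {
        "interface": [
            {"name": "GigabitEthernet0/0/0/0", "config": {"description": "uplink"}},
            {"name": "GigabitEthernet0/0/0/1", "config": {}},
        ]
    }
}
print(select_next_interface(sample))  # -> GigabitEthernet0/0/0/1
```

In the real workflow this logic sits behind the `/select_next_interface` REST endpoint that Conductor's HTTP task calls.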

Visibility into every step is very good: you can see the start and end times of tasks and workflows, and the input and output of every task and of the whole workflow. Every workflow execution is documented and all stats are available.
Workflow shown as a flow diagram:
All workflow and task inputs and outputs are visible for every run.
Workflow inputs and outputs are visible:
Finally, for our demo, we send the output to Slack. The workflow output can be reused as input to another workflow.
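As a sketch of how that chaining could look, Conductor's SUB_WORKFLOW task type lets one workflow invoke another and hand it a previous task's output. The fragment below shows a fourth task that could be appended to our sample workflow; the sub-workflow name `configure_interface_wf01` and its input parameters are hypothetical.

```json
{
  "name": "configure_interface_sub",
  "taskReferenceName": "configure_interface_sub",
  "inputParameters": {
    "node": "${workflow.input.node}",
    "interface": "${run_external_application_second.output.response.body}"
  },
  "type": "SUB_WORKFLOW",
  "subWorkflowParam": {
    "name": "configure_interface_wf01",
    "version": 1
  }
}
```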
Our sample workflow definition looks like this:
**** Create Tasks

curl -X POST --header 'Content-Type:application/json' -H 'Accept:application/json' -d '
[{
"name": "odl_get_openconfig_interfaces01",
"retryCount": 3,
"timeoutSeconds": 9,
"timeoutPolicy": "TIME_OUT_WF",
"retryLogic": "FIXED",
"retryDelaySeconds": 3,
"responseTimeoutSeconds": 9
}]' 'http://192.168.1.51:8080/api/metadata/taskdefs'

curl -X POST --header 'Content-Type:application/json' -H 'Accept:application/json' -d '
[{
"name": "ni_utils_select_next_available_interface01",
"retryCount": 3,
"timeoutSeconds": 9,
"timeoutPolicy": "TIME_OUT_WF",
"retryLogic": "FIXED",
"retryDelaySeconds": 3,
"responseTimeoutSeconds": 9
}]' 'http://192.168.1.51:8080/api/metadata/taskdefs'

curl -X POST --header 'Content-Type:application/json' -H 'Accept:application/json' -d '
[{
"name": "post_message_to_slack",
"retryCount": 3,
"timeoutSeconds": 9,
"timeoutPolicy": "TIME_OUT_WF",
"retryLogic": "FIXED",
"retryDelaySeconds": 3,
"responseTimeoutSeconds": 9
}]' 'http://192.168.1.51:8080/api/metadata/taskdefs'

**** Create Workflow

curl -H 'Content-Type:application/json' -H 'Accept:application/json' -X POST http://192.168.1.51:8080/api/metadata/workflow -d '{
"name": "get_next_interface_wf01",
"description": "Select the next available interface from a network node",
"version": 4,
"tasks": [
{
"name": "odl_get_openconfig_interfaces01",
"taskReferenceName": "get_cli_topology",
"inputParameters": {
"node": "${workflow.input.node}",
"http_request": {
"uri": "http://192.168.1.1:8181/restconf/operational/network-topology:network-topology/topology/unified/node/${workflow.input.node}/yang-ext:mount/frinx-openconfig-interfaces:interfaces",
"method": "GET",
"headers": {
"Authorization": "Basic YWRtaW46YWRtaW4="
}
}
},
"type": "HTTP"
},
{
"name": "ni_utils_select_next_available_interface01",
"taskReferenceName": "run_external_application_second",
"inputParameters": {
"http_request": {
"uri": "http://192.168.1.51:5555/select_next_interface",
"method": "POST",
"headers": {
"Accept": "application/json",
"Content-Type": "application/json"
},
"body" : "${get_cli_topology.output.response.body}"
}
},
"type": "HTTP"
},
{
"name": "post_message_to_slack",
"taskReferenceName": "post_message_to_slack",
"inputParameters": {
"http_request": {
"uri": "https://hooks.slack.com/services/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",
"method": "POST",
"headers": {
"Accept": "application/json",
"Content-Type": "application/json"
},
"body" : "{ \"text\": \"${run_external_application_second.output.response.body}\" }"
}
},
"type": "HTTP"
}
],
"outputParameters": {
"g_output_workflow": "${run_external_application_second.output.response.body}"
},
"schemaVersion": 2
}'

The workflow can be started with the following POST call; Conductor returns the id of the new workflow instance as plain text:

curl -X POST --header 'Content-Type: application/json' --header 'Accept: text/plain' -d '{"node": "Leaf01"}' 'http://192.168.1.51:8080/api/workflow/get_next_interface_wf01'

 

-Gerhard Wieser