VLAN Setup

Topology

Config

Classifier API (ACLs)

To disable ICMP traffic on an interface, invoke the following commands in VAT:
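A minimal sketch (the interface name GigabitEthernet0/8/0 and table index 0 are examples, and argument details such as buckets or skip/match counts may need adjusting for your VPP version; VAT prints the index of the newly created table):

classify_add_del_table mask l3 ip4 proto buckets 16
classify_add_del_session acl-hit-next deny table-index 0 match l3 ip4 proto 1
input_acl_set_interface intfc GigabitEthernet0/8/0 ip4-table 0

The table matches on the IP protocol field, the session drops packets with protocol 1 (ICMP), and the last command attaches the table as an input ACL on the interface.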

Similarly, to disable traffic from a certain IP, replace the table and session creation with:
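For example, to drop traffic coming from 192.168.1.10 (the address is an example):

classify_add_del_table mask l3 ip4 src buckets 16
classify_add_del_session acl-hit-next deny table-index 0 match l3 ip4 src 192.168.1.10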

Qemu + VPP + Vhost user

This article shows how to set up a VM or a bare-metal machine with QEMU and KVM, with VPP handling the L2 networking between VMs.

Resources

The main resource for the setup was: https://gist.github.com/egernst/5982ae6f0590cd83330faafacc3fd545

Install Qemu/KVM

The installation is straightforward when following this guide (Make sure KVM is available): https://help.ubuntu.com/community/KVM/Installation

Inside PROXMOX

It is also possible to do all this within a VM utilizing nested virtualisation. With e.g. PROXMOX, it’s just a matter of turning on nested virtualisation according to https://pve.proxmox.com/wiki/Nested_Virtualization. After that, a VM can be installed with Qemu and KVM as if it were a bare metal machine.

Install VPP

Just install VPP (see this guide) and start it up…

Configure VPP

The following configuration can be used to connect two VMs using VPP as the vhost interface server side:
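A minimal sketch (socket paths and interface names are examples; on newer VPP releases the command is 'create vhost' instead of 'create vhost-user'):

create vhost-user socket /tmp/sock1.sock server
create vhost-user socket /tmp/sock2.sock server
set interface state VirtualEthernet0/0/0 up
set interface state VirtualEthernet0/0/1 up
set interface l2 bridge VirtualEthernet0/0/0 1
set interface l2 bridge VirtualEthernet0/0/1 1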

In this configuration, VPP acts as the vhost-user SERVER and the VMs act as CLIENTs, and a bridge domain connects the interfaces inside VPP.

Start VMs

FreeBSD
The following command starts the FreeBSD VM and attaches it to the socket opened by VPP:
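A sketch of the qemu invocation (image path, memory size and VNC display are examples; the file-backed shared memory is required for vhost-user):

sudo qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 1024M \
  -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 -mem-prealloc \
  -chardev socket,id=chr0,path=/tmp/sock1.sock \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0 \
  -drive file=FreeBSD-10.3-RELEASE-amd64.qcow2,if=virtio \
  -vnc :1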

The image was taken from: http://ftp.freebsd.org/pub/FreeBSD/releases/VM-IMAGES/10.3-RELEASE/amd64/Latest/

Ubuntu 14
The following command starts the Ubuntu 14 VM and attaches it to the socket opened by VPP:
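A sketch along the lines of the NoCloud section of the guide referenced below: a small seed image enables password login (the password matches the credentials listed later), and the VM attaches to the second vhost-user socket:

cat > user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
printf "instance-id: iid-ubuntu14\nlocal-hostname: ubuntu14\n" > meta-data
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

sudo qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 1024M \
  -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 -mem-prealloc \
  -chardev socket,id=chr0,path=/tmp/sock2.sock \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0 \
  -drive file=trusty-server-cloudimg-amd64-disk1.img,if=virtio \
  -drive file=seed.iso,if=virtio \
  -vnc :2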

The image was taken from https://uec-images.ubuntu.com/releases/14.04/release/. The command does not use the qcow image directly; instead, password login to the VM is configured according to this guide: https://help.ubuntu.com/community/UEC/Images (Section: Ubuntu Cloud Guest images on 12.04 LTS (Precise) and beyond using NoCloud)

… this is necessary because Ubuntu cloud images do not allow password login (e.g. via VNC) by default.

Network config in VMs

Accessing VMs
Each VM can be accessed using VNC at:
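<host-ip>:<display>

where <display> is the number passed to qemu with the -vnc option.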

So for the earlier FreeBSD example, with the host IP being e.g. 10.10.199.72, it would be:
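vncviewer 10.10.199.72:1

(assuming the FreeBSD VM was started with -vnc :1 as in the sketch above)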

Credentials:

The default credentials for VMs are:

FreeBSD: root, no password
Ubuntu: ubuntu, passw0rd

Ifconfig

The configuration for the VMs could look like:
FreeBSD:
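A sketch (addresses are examples; vtnet0 is the FreeBSD name for the virtio NIC):

ifconfig vtnet0 inet 192.168.1.1/24 up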

Ubuntu:
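A sketch (addresses are examples):

sudo ifconfig eth0 192.168.1.2 netmask 255.255.255.0 up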

Now, ping and any other traffic should be working fine between VMs.

Troubleshooting

VPP vhost user bug: Fix: https://gerrit.fd.io/r/#/c/6735

IPsec Setup

Configure VPP from the console using the following commands:

VPP1 (hub)
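A rough sketch of what the hub-side configuration can look like (interface name, subnets, SPIs and keys are placeholders; each spoke uses the mirror image of this with its own SAs and subnets):

ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec sa add 20 spi 1000 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58
ipsec spd add 1
set interface ipsec spd GigabitEthernet0/8/0 1
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 192.168.1.0 - 192.168.1.255 remote-ip-range 192.168.2.0 - 192.168.2.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 192.168.1.0 - 192.168.1.255 remote-ip-range 192.168.2.0 - 192.168.2.255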

VPP2 (spoke1)

VPP3 (spoke2)

IPFIX Setup

Configure VPP using VAT

Set the interface at index 1 up and assign it an IP address:
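For example (the address matches the one pinged later in this section):

sw_interface_set_flags sw_if_index 1 admin-up
sw_interface_add_del_address sw_if_index 1 192.168.1.87/24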

Configure IPFIX export from interface 1’s IP address to the collector at 192.168.1.101, port 4739:
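A sketch (the source address is the interface address configured above; path MTU and template interval are example values):

set_ipfix_exporter collector_address 192.168.1.101 collector_port 4739 src_address 192.168.1.87 path_mtu 1450 template_interval 20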

… this configures VPP’s interface with index 1 to export IPFIX statistics and send them to the UDP collector at 192.168.1.101:4739

Start a UDP server
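One simple option is netcat piping into a hex dump (use 'nc -ul -p 4739' with traditional netcat):

nc -ul 4739 | xxd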

… this UDP server will accept the IPFIX data and display it on screen. A better way of reading the data is Wireshark (you can ping 192.168.1.87 to generate more statistics). Sample capture from Wireshark

Further configuration options

To export IP source, IP destination and Protocol, configure the classifier with:
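A sketch of the table mask (the new table then has to be registered with the IPFIX classifier; table index 0 is an example and the registration syntax may differ between versions):

classify_add_del_table mask l3 ip4 src dst proto buckets 32
ipfix_classify_table_add_del add table 0 ip4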

To also export ports, configure with:
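For example (again registering the new table with the IPFIX classifier as above):

classify_add_del_table mask l3 ip4 src dst proto l4 src_port dst_port buckets 32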

IPFIX limitations as of 17.01

IPFIX has some limitations that might limit its usage in real use cases:

  1. Only inbound/ingress traffic is matched/exported by IPFIX
  2. When using both IPFIX and IPsec, traffic always goes through the IPFIX node before IPsec decrypt, so IPFIX does not work at all – there is an issue with the node graph order
  3. Matching of L4 ports is also triggered for port-less protocols like ICMP, exporting each ICMP packet as a new flow (since the port fields carry random values), which makes the exported IPFIX packets too big
  4. Each flow/connection creates a new classify session in order to be reportable via IPFIX, but the sessions are never garbage-collected, which effectively makes this a memory leak
  5. Because a new session is created for each flow, IPFIX matching cannot be customized, e.g. to match only TCP/UDP protocols, or any IP with a port range

Installation

VPP Installation

Described at: VPP install packages

If VPP does not pick up interfaces, it helps to build and install it manually:
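A sketch of the build from source (make targets as of the 17.x branches; use pkg-rpm and rpm/yum on CentOS):

git clone https://gerrit.fd.io/r/vpp
cd vpp
make install-dep
make bootstrap
make build-release
make pkg-deb
sudo dpkg -i build-root/*.deb

Alternatively, interfaces can be whitelisted explicitly with dev <pci-address> entries in the dpdk section of /etc/vpp/startup.conf.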

The following resources are useful for figuring out how to do (configure) something with VPP:

  1. https://wiki.fd.io/view/VPP
  2. https://docs.fd.io/vpp/17.04/
  3. Google
  4. CSIT (VPP integration tests) – they use the VAT tool to provide complex configuration to VPP. Each test case logs the exact configuration steps, so looking at the log file from the latest VPP functional test run can help: https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-ubuntu1604-virl/ (if the log file does not open directly, download it and reopen it in a browser). Look at a test case, its list of keywords, and the VAT command executed in every step.
  5. vpp-dev@lists.fd.io

Install VPP in a docker container

After Docker is installed, start a CentOS 7 container:
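For example (the container name is arbitrary; --privileged is typically needed so VPP can use hugepages and devices):

sudo docker run -it --name vpp-centos7 --privileged centos:7 bash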

Then, inside the container, install VPP, configure it and start it:
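A sketch using the public fd.io release repository (repository URL as documented on the fd.io wiki; since there is no systemd in a plain container, VPP is started directly):

cat > /etc/yum.repos.d/fdio-release.repo <<'EOF'
[fdio-release]
name=fd.io release branch latest merge
baseurl=https://nexus.fd.io/content/repositories/fd.io.centos7/
enabled=1
gpgcheck=0
EOF
yum install -y vpp vpp-plugins
/usr/bin/vpp -c /etc/vpp/startup.conf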

To save the modified container, invoke from the host:
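For example (using the container name from above and an arbitrary image name):

sudo docker commit vpp-centos7 vpp-centos7-configured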

FRINX VPP distribution

FRINX provides an FD.io distribution.
FD.io is an open-source project that, among other things, provides the Vector Packet Processor (VPP). More information at FD.io.
This page contains the details about the FRINX fd.io distribution.

Features

Project imported from open source:

  •     VPP
    • No changes

Internal projects:

  •     VPP-monitoring-agent (magent)

Operations

CI/CD

There is a custom CI/CD pipeline for the FRINX fd.io distribution. It is based on SBE.
The following diagram shows the relationships between FD.io and FD.FRINX.io:

SBE installation

Refer to SBE for FDio installation page for install instructions.

Deployment

The CI/CD lives in a dedicated VM inside Siecit.

The credentials and access details are listed on the credentials page.

The static IPs can be found on the IP allocation page.

Public access

Admin credentials for the services: admin (password: sbe4fdio)

Installing binary packages

Instructions for consuming publicly available binary packages of FD.FRINX.io distribution

Centos7

In file:

/etc/yum.repos.d/frinx-fdio-release.repo

Set content:

[frinx-fdio-release]
name=FRINX fd.io release branch latest merge
baseurl=https://:@nexus.fd.frinx.io/nexus/content/repositories/fd.io.centos7/
enabled=1
gpgcheck=0
sslverify=0

Make sure to change the CustomerID and password in the repository settings!

And the installation with YUM should be:

sudo yum install vpp vpp-plugins vpp-monitoring-agent

Jenkins jobs

The following diagrams list the Jenkins jobs imported from open-source FD.io (green marks imported):

How-Tos

Releasing

The basic setup does not take care of the release process. Releases have to be managed manually.

Basically, once we are happy with the packages in the stable repository, we need to manually copy them over into a release repository.

FDio sync

There is no automated sync between FDio codebases and FRINX’s FDio forks.

In order to perform a sync, use the import_fdio.sh script from ci-management/frinx. It will update all specified projects, all branches.

Triggering the required merge jobs then builds and deploys all the packages.

Adding customer account

Create the account by creating an ldif file in the sbe container:

docker exec -it sbe-FDio-sbe vi /data/instances/FDio/ldap/customer.ldif

and pasting the following content:

dn: uid=customer,ou=accounts,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
uid: customer
cn: Generic Customer
displayName: Generic Customer
sn: customer
givenName: Customer
mail: nobody@exists.like.this
userpassword: customer

Make sure to update the password

Save the file and invoke the following command:

./sbe -i FDio run ldap-import customer.ldif

This will create the customer account. Afterwards, delete the customer.ldif file from the container.

Adding to customers group

Similarly to before, create an ldif file:

docker exec -it sbe-FDio-sbe vi /data/instances/FDio/ldap/customerToGroup.ldif

and set the content to:

dn: cn=customers,ou=accounts,dc=example,dc=com
objectClass: groupOfNames
objectClass: top
cn: customers
member: uid=customer,ou=accounts,dc=example,dc=com

and invoke the update:

./sbe -i FDio run ldap-import customerToGroup.ldif

In case the group already exists, set the content of customerToGroup to:

dn: cn=customers,ou=accounts,dc=example,dc=com
changetype: modify
add: member
member: uid=customer2,ou=accounts,dc=example,dc=com

and make sure to remove the ‘-a’ switch from the ldap-import script in the sbe container.

Configuring customers group rights

Nexus

In Nexus, open the LDAP configuration and set the group mapping to:

Next, go to the Roles configuration, add an external mapping for the LDAP ‘customers’ group, and set its privileges to:

  • All Repositories (view)
  • All Repositories (read)
  • UI: Base UI
  • UI: Repository browser
  • UI: Search
  • Nexus YUM reader

and save.

Now it is important to disable Anonymous access in the Server configuration in order to have customer-private Nexus repositories.

Jenkins

The default Jenkins LDAP configuration is ready for the account groups. Read-only access for the customers group can be enabled in Global Security Settings under the Authorization section:

Resources

  1. http://fd.io

Diagrams on draw.io

ci-management fork for FD.FRINX.io

General

Dump VPP message table

VPP APIs work with messages. To check all the available messages and their indices, use the following command in VAT:
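For example:

dump_msg_api_table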

View VPP node graph

VPP is implemented as a set of ordered graph nodes. To see them live, use the following command in the CLI:
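For example:

show vlib graph

(‘show runtime’ additionally shows per-node counters.)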

Trace VPP’s API executions

To record the execution of its APIs, use the following command outside of VPP:
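One way to do this from the host shell is via vppctl (the trace file name is an example; custom-dump prints the recorded calls in VAT syntax):

vppctl api trace on
vppctl api trace save apitrace
vppctl api trace custom-dump /tmp/apitrace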