VLAN Setup

Topology

Config

Classifier API (ACLs)

To disable ICMP traffic on an interface, invoke the following commands in VAT:
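The original listing did not survive; an equivalent configuration through VPP's debug CLI, with a hypothetical interface name, would look roughly like this (IP protocol 1 is ICMP):

```
classify table mask l3 ip4 proto
classify session acl-hit-next deny table-index 0 match l3 ip4 proto 1
set interface input acl intfc GigabitEthernet0/8/0 ip4-table 0
```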

Similarly, to disable traffic from a certain IP address, replace the table and session creation with:
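Again the listing is missing; a debug-CLI sketch, with 192.168.1.10 as a hypothetical source address to block (the interface is attached the same way as above):

```
classify table mask l3 ip4 src
classify session acl-hit-next deny table-index 0 match l3 ip4 src 192.168.1.10
```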

Qemu + VPP + Vhost user

This article shows how to set up a bare-metal machine (or a VM) with QEMU and KVM, with VPP handling L2 networking between the VMs.

Resources

The main resource for the setup was: https://gist.github.com/egernst/5982ae6f0590cd83330faafacc3fd545

Install Qemu/KVM

The installation is straightforward when following this guide (make sure KVM is available): https://help.ubuntu.com/community/KVM/Installation

Inside PROXMOX

It is also possible to do all of this inside a VM using nested virtualisation. With e.g. PROXMOX, it is just a matter of turning on nested virtualisation according to: https://pve.proxmox.com/wiki/Nested_Virtualization. After that, a VM can be set up with QEMU and KVM as if it were a bare-metal machine.
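On the PROXMOX host this boils down to the following (Intel CPUs shown; use the kvm-amd module on AMD), per the linked wiki page:

```shell
# Check whether nested virtualisation is already enabled (prints Y or 1 if so)
cat /sys/module/kvm_intel/parameters/nested
# Enable it persistently and reload the module (no VMs may be running)
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel
modprobe kvm_intel
```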

Install VPP

Just install VPP (see this guide) and start it up…
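When installed from packages, VPP can be started and verified like this (service name assumed from the stock packages):

```shell
sudo service vpp start
# Verify that VPP is up and responding
sudo vppctl show version
```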

Configure VPP

The following configuration can be used to connect two VMs using VPP as the vhost interface server side:
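The listing itself is missing; a debug-CLI sketch with example socket paths and the interface names VPP typically assigns (the command is spelled `create vhost-user` in some releases):

```
create vhost socket /tmp/sock1.sock server
create vhost socket /tmp/sock2.sock server
set interface state VirtualEthernet0/0/0 up
set interface state VirtualEthernet0/0/1 up
set interface l2 bridge VirtualEthernet0/0/0 1
set interface l2 bridge VirtualEthernet0/0/1 1
```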

In this configuration, VPP is the vhost-user SERVER and the VMs are CLIENTs, and a bridge domain inside VPP connects the two interfaces.

Start VMs

FreeBSD
The following command starts the FreeBSD VM and attaches it to the socket opened by VPP:
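The original command is missing; a reconstruction with example file names, socket path, and memory size (shared hugepage-backed memory is mandatory for vhost-user):

```shell
qemu-system-x86_64 -enable-kvm -cpu host -m 1024 \
  -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -drive file=FreeBSD-10.3-RELEASE-amd64.qcow2,if=virtio \
  -chardev socket,id=char1,path=/tmp/sock1.sock \
  -netdev type=vhost-user,id=net1,chardev=char1 \
  -device virtio-net-pci,netdev=net1 \
  -vnc :1
```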

The image was taken from: http://ftp.freebsd.org/pub/FreeBSD/releases/VM-IMAGES/10.3-RELEASE/amd64/Latest/

Ubuntu 14
The following command starts the Ubuntu 14 VM and attaches it to the socket opened by VPP:
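The original command is missing; a reconstruction with example file names, where seed.img is a NoCloud seed image built according to the guide linked below:

```shell
qemu-system-x86_64 -enable-kvm -cpu host -m 1024 \
  -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -drive file=trusty-server-cloudimg-amd64-disk1.img,if=virtio \
  -drive file=seed.img,if=virtio \
  -chardev socket,id=char2,path=/tmp/sock2.sock \
  -netdev type=vhost-user,id=net2,chardev=char2 \
  -device virtio-net-pci,netdev=net2 \
  -vnc :2
```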

The image was taken from: https://uec-images.ubuntu.com/releases/14.04/release/. The command does not use the qcow image directly; instead, password login to the VM is configured according to this guide: https://help.ubuntu.com/community/UEC/Images (section: Ubuntu Cloud Guest images on 12.04 LTS (Precise) and beyond using NoCloud).

… this is necessary because Ubuntu cloud images do not allow password login (e.g. via VNC) by default
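Following the NoCloud section of that guide, the seed image could be built like this (file names are examples; cloud-localds is part of the cloud-image-utils package, and the password matches the credentials listed below):

```shell
cat > user-data <<'EOF'
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
EOF
# Build the NoCloud seed image that qemu attaches as a second drive
cloud-localds seed.img user-data
```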

Network config in VMs

Accessing VMs
Each VM can be accessed using VNC at:

So, for the earlier FreeBSD example with the host IP being e.g. 10.10.199.72, it would be:
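Assuming the FreeBSD VM was started with `-vnc :1` (VNC listens on TCP port 5900 + display number), the endpoint would be:

```
10.10.199.72:5901
```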

Credentials:

The default credentials for VMs are:

FreeBSD: root, no password
Ubuntu: ubuntu, passw0rd

Ifconfig

The configuration for the VMs could look like:
FreeBSD:
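For example (addresses are arbitrary; the virtio NIC shows up as vtnet0 on FreeBSD):

```
ifconfig vtnet0 192.168.1.1/24 up
```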

Ubuntu:
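For example (the virtio NIC typically shows up as eth0 on Ubuntu 14.04; the address is arbitrary but must be in the same subnet as the other VM):

```
sudo ifconfig eth0 192.168.1.2 netmask 255.255.255.0 up
```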

Now, ping and any other traffic should be working fine between VMs.

Troubleshooting

A known VPP vhost-user bug can affect this setup; the fix is at: https://gerrit.fd.io/r/#/c/6735

IPsec Setup

Configure VPP from the console using the following commands:

VPP1 (hub)

VPP2 (spoke1)

VPP3 (spoke2)
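The per-node listings are missing. As a heavily hedged sketch for one hub–spoke pair (all interface names, addresses, SPIs, and keys are made up; the spoke mirrors the hub with the inbound/outbound SAs and tunnel endpoints swapped):

```
ipsec spd add 1
set interface ipsec spd GigabitEthernet0/8/0 1
ipsec sa add 10 spi 1001 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58 tunnel-src 192.168.100.1 tunnel-dst 192.168.100.2
ipsec sa add 20 spi 1002 esp crypto-alg aes-cbc-128 crypto-key 4a506a794f574265564551694d653768 integ-alg sha1-96 integ-key 4339314b55523947594d6d3547666b45764e6a58 tunnel-src 192.168.100.2 tunnel-dst 192.168.100.1
ipsec policy add spd 1 priority 10 outbound action protect sa 10 local-ip-range 10.0.1.0 - 10.0.1.255 remote-ip-range 10.0.2.0 - 10.0.2.255
ipsec policy add spd 1 priority 10 inbound action protect sa 20 local-ip-range 10.0.1.0 - 10.0.1.255 remote-ip-range 10.0.2.0 - 10.0.2.255
```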

IPFIX Setup

Configure VPP using VAT

Set interface on index 1: up + IP
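The original VAT listing is missing; assuming the VAT messages of the 17.x API, it might look like this (192.168.1.87 matches the source address used later in this section):

```
sw_interface_set_flags sw_if_index 1 admin-up
sw_interface_add_del_address sw_if_index 1 192.168.1.87/24
```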

Configure IPFIX export from interface 1’s IP address to collector at 192.168.1.101 port 4739
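The listing is missing here; the debug-CLI equivalent (the corresponding VAT message is `set_ipfix_exporter`; the template interval is an example value) would be roughly:

```
set ipfix exporter collector 192.168.1.101 port 4739 src 192.168.1.87 template-interval 20
```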

… this configures VPP's interface with index 1 to export IPFIX statistics and send them to the UDP collector at 192.168.1.101:4739

Start a UDP server
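The original script is gone; a minimal stand-in using netcat (flag spelling differs between netcat variants):

```shell
# -u UDP, -l listen, -k keep listening after each datagram; pipe through xxd to see the raw bytes
nc -klu 4739 | xxd
```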

… this bash UDP server accepts the IPFIX data and displays it on screen. A better way of reading the data is Wireshark (you can ping 192.168.1.87 to generate more statistics). Sample capture from Wireshark:

Further configuration options

To export source IP, destination IP, and protocol, configure the classifier with:
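A hedged sketch (table index 0 assumes this is the first classify table created on the instance):

```
classify table mask l3 ip4 src dst proto
ipfix classify table add 0 ip4
```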

To also export ports, configure with:
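Again assuming table index 0, extending the mask with the L4 ports:

```
classify table mask l3 ip4 src dst proto l4 src_port dst_port
ipfix classify table add 0 ip4 udp
```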

IPFIX limitations as of 17.01

IPFIX has some limitations that (might) limit its usage in real use cases:

  1. Only inbound/ingress traffic is matched/exported by IPFIX
  2. When using both IPFIX and IPsec, traffic always passes through the IPFIX node before IPsec decryption, which makes IPFIX not work at all – there is an issue with the node graph order
  3. Matching L4 ports is also triggered for port-less protocols like ICMP, so each ICMP packet is exported as a new flow (the port fields contain random values), which makes the IPFIX export packets too big
  4. Each flow/connection creates a new classify session so that it can be reported via IPFIX; however, the sessions are never “garbage collected”, which effectively makes this a memory leak
  5. Because a new session is created for each flow, it is impossible to customize IPFIX matching, e.g. to match only TCP/UDP, or any IP with a port range

Installation

VPP Installation

Described at: VPP install packages

If VPP does not pick up interfaces, it helps to build and install it manually:
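A build from source, using the make targets of the 17.x tree (Debian/Ubuntu shown; use `make pkg-rpm` on CentOS):

```shell
git clone https://gerrit.fd.io/r/vpp
cd vpp
# Install build dependencies, set up the build environment, then build
make install-dep
make bootstrap
make build-release
# Build and install the Debian packages
make pkg-deb
sudo dpkg -i build-root/*.deb
```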

The following resources are useful for figuring out how to do (configure) something with VPP:

  1. https://wiki.fd.io/view/VPP
  2. https://docs.fd.io/vpp/17.04/
  3. Google
  4. CSIT (VPP integration tests) – they use the VAT tool to apply complex configuration to VPP, and each test case logs its exact configuration steps, so looking at a log file from the latest VPP functional test run can help: https://jenkins.fd.io/view/csit/job/csit-vpp-functional-master-ubuntu1604-virl/ (if a log file does not open directly, download it and reopen it in a browser). Look at a test case, its list of keywords, and the VAT command executed in every step.
  5. vpp-dev@lists.fd.io

Install VPP in a docker container

After Docker is installed, start a CentOS 7 container:
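For example (the container name is arbitrary; `--privileged` is needed so VPP can access hugepages and devices):

```shell
docker run -it --name vpp-centos --privileged centos:7 bash
```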

Then, inside the container, install VPP, configure it, and start it:
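A sketch assuming the fd.io release repository on packagecloud; note that systemd is not available in a plain container, so VPP is started directly rather than as a service:

```shell
# Add the fd.io release package repository
curl -s https://packagecloud.io/install/repositories/fdio/release/script.rpm.sh | bash
yum install -y vpp vpp-plugins
# Start VPP in the foreground with the default configuration
/usr/bin/vpp -c /etc/vpp/startup.conf
```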

To save the modified container, invoke from the host:
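Assuming the container was named vpp-centos (the image tag is arbitrary):

```shell
docker commit vpp-centos vpp-centos:configured
```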

Installing binary packages

Instructions for consuming the publicly available binary packages of the FD.FRINX.io distribution.

Centos7

In file:

Set content:
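The original filename and repository URL did not survive and the real values come from FRINX; the general shape of such a yum repo definition, e.g. in a hypothetical /etc/yum.repos.d/frinx.repo, is:

```
[frinx-vpp]
name=FRINX VPP packages
baseurl=https://CustomerID:password@<frinx-repository-host>/centos7
enabled=1
gpgcheck=0
```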

Make sure to change the CustomerID and password in the repository settings

Installation with YUM can be done with:
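Assuming the standard VPP package names:

```shell
sudo yum install vpp vpp-plugins
```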

General

Dump VPP message table

VPP's APIs work with messages. To check all the available messages and their indices, use the following command in VAT:
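In VAT:

```
dump_msg_api_table
```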

View VPP node graph

VPP is implemented as a set of ordered graph nodes. To see them live, use the following command in the CLI:
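In the VPP CLI:

```
show vlib graph
```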

Trace VPP’s API executions

To record the execution of VPP's APIs, use the following commands outside of VPP:
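Via vppctl from the host shell (subcommand names per the 17.x CLI; trace files are saved under /tmp):

```shell
vppctl api trace on
# ... execute some API calls against VPP ...
vppctl api trace save trace-file
# Inspect the saved trace
vppctl api trace custom-dump /tmp/trace-file
```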