Linux Bridge vs Open vSwitch

As a short follow-up:

justindpettit wrote a nice blog post about recent performance improvements in OVS:

What does this mean for a simple KVM/OpenStack setup? To me it shows that the Linux bridge is still superior when only doing Layer 2 forwarding. OVS might be just as fast, but it takes considerably more CPU power, which is taken away from your VMs. And it seems to be even worse with the old OVS versions shipped in LTS distributions.

First steps with Open vSwitch and a comparison to the Linux bridge

Some weeks ago we finally found the time to install our new workhorse server “muli”. It is meant to host some VMs of former tutors, managed with libvirt. Instead of the traditional network bridge device we thought about experimenting with Open vSwitch (OVS). While the integrated Linux bridge is pretty fast and featureful, OVS brings remote programming possibilities, VXLAN, OpenFlow, GRE, integrated QoS, NetFlow and many, many more. And it might also be interesting later on for my PhD thesis. But first things first.

There seems to be a common misunderstanding here: the Linux bridge is in no way a hub; it is a full-featured bridge with a forwarding database, ageing times and even STP support. You don’t need OVS for that!
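For completeness, these classic bridge features can be poked at with brctl from the bridge-utils package (the bridge name here is just an example):

```shell
# Classic Linux bridge: forwarding database, ageing and STP,
# all without OVS (requires root and bridge-utils).
brctl addbr br-test          # create a bridge
brctl stp br-test on         # enable Spanning Tree
brctl setageing br-test 300  # FDB ageing time in seconds
brctl showmacs br-test       # dump the forwarding database
brctl delbr br-test          # clean up again
```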

We are on Debian Wheezy (stable at the time of writing). It only ships OVS 1.4, but to get a taste of it, that seemed to be enough:

apt-get install openvswitch-switch openvswitch-datapath-dkms

Our setup is a 2×1G LACP trunk carrying two tagged VLANs and one untagged VLAN, which shall make up the uplink interface of the switch. On the downlink side we thought about one virtual interface per VLAN per VM, plus one interface in the untagged VLAN for the Debian host itself. Obviously there is a third physical network card which gives us an out-of-band path, just in case :-).

The UI and documentation of OVS are, how to put it in a friendly way, a mess. I suppose this stems from its focus as an SDN switch. In the end, most of the useful documentation turned out to be in the FAQ.

First we create a new Open vSwitch (aka bridge) and add the LACP trunk (aka bond, aka EtherChannel) to it:

ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth1 eth2
ovs-vsctl set port bond0 lacp=active
ovs-vsctl set port bond0 bond_mode=balance-tcp
ovs-vsctl set port bond0 vlan_mode=native-untagged trunks=[1,3] tag=2
ovs-vsctl list port bond0

This gives us:

_uuid : 889e1057-d6fb-45bf-a3cc-582be1e7b4b8
bond_downdelay : 0
bond_fake_iface : false
bond_mode : balance-tcp
bond_updelay : 0
external_ids : {}
fake_bridge : false
interfaces : [2cc3b8dd-ecae-41cc-a67c-d92764713eac, bfc1c7a7-fa16-493d-b76d-8459bca7ea46]
lacp : active
mac : []
name : "bond0"
other_config : {}
qos : []
statistics : {}
status : {}
tag : 2
trunks : [1, 3]
vlan_mode : native-untagged
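To see whether the LACP negotiation with the physical switch actually succeeded, ovs-appctl can dump the live bond and LACP state (a sketch, using the bond0 configured above; requires a running ovs-vswitchd):

```shell
# Inspect the runtime state of the bond and the LACP negotiation.
ovs-appctl bond/show bond0
ovs-appctl lacp/show bond0
```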

There are quite a lot of configuration options for bonds; they are documented in ovs-vswitchd.conf.db(5). One might also use “trunk”, “access” or “native-tagged” as vlan_mode. The untagged VLAN may or may not appear in the “trunks” list, but always has to be set with “tag”.
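As a sketch of the other modes, a pure trunk port and a pure access port would be configured like this (the port names are hypothetical):

```shell
# vlan_mode=trunk: only tagged frames, for the listed VLANs
ovs-vsctl set port uplink0 vlan_mode=trunk trunks=[1,2,3]
# vlan_mode=access: only untagged frames, mapped to one VLAN
ovs-vsctl set port access0 vlan_mode=access tag=2
```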

Now we can list interfaces and ports:

ovs-vsctl list bridge
ovs-vsctl list-ifaces br0
ovs-vsctl list interface
ovs-vsctl list port
ovs-vsctl show

To add a new virtual (internal) interface we use:

ovs-vsctl add-port br0 hv-vlan2 tag=2 -- set interface hv-vlan2 type=internal

This port is meant to be the interface of the hypervisor (i.e. the Linux host, aka muli) and can get an IP address in /etc/network/interfaces:

auto hv-vlan2
iface hv-vlan2 inet dhcp

# we also have to make sure to bring up all the other
# interfaces (/ifconfig up/ them):
auto eth1
iface eth1 inet manual
up ifconfig $IFACE up
down ifconfig $IFACE down

auto eth2
iface eth2 inet manual
up ifconfig $IFACE up
down ifconfig $IFACE down

auto br0
iface br0 inet manual
up ifconfig $IFACE up
down ifconfig $IFACE down

Finally, we create devices for all three VLANs that might be used by virtual machines:

ovs-vsctl add-port br0 vm-vlan1 vlan_mode=access tag=1 -- set interface vm-vlan1 type=internal
ovs-vsctl add-port br0 vm-vlan2 vlan_mode=access tag=2 -- set interface vm-vlan2 type=internal
ovs-vsctl add-port br0 vm-vlan3 vlan_mode=access tag=3 -- set interface vm-vlan3 type=internal
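To double-check what ended up in the database, the type and tag of each new port can be queried directly (assuming the names from above):

```shell
# Verify that the ports were created as internal devices
# with the intended VLAN tag.
ovs-vsctl get interface vm-vlan2 type   # should print internal
ovs-vsctl get port vm-vlan2 tag         # should print 2
ip link show vm-vlan2                   # visible as a normal netdev
```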

Once again, one should not forget to add them to /etc/network/interfaces so that they come up when the host boots!
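For example, analogous to the stanzas above, each VM-facing port could get an entry like this (shown for vm-vlan1; repeat for the others):

```
auto vm-vlan1
iface vm-vlan1 inet manual
up ifconfig $IFACE up
down ifconfig $IFACE down
```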

For the integration with libvirt we decided it would be simplest to use passthrough mode on the virtual port:

<interface type='direct'>
      <mac address='XX:XX:XX:XX:XX:XX'/>
      <source dev='vm-vlan2' mode='passthrough'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>

Easy, wasn’t it?