KVM with bridge

In the following, I will describe how to set up kvm in an environment where you want the guest virtual machines to appear as independent servers, indistinguishable from the ordinary metal boxes sitting on the floor of the office.

Host Configuration

The server, called "octopus", is a powerful HP 380 G5 machine with two quad-core Intel CPUs. These processors have hardware virtualization support, which you can check with:

egrep '(vmx|svm)' /proc/cpuinfo

If you get output from this command, your processor has either vmx (Intel) or svm (AMD) capability. If not, you can give up running qemu right here: it will be s.l.o.w. In our case, the server is pretty damn fast. It has the equivalent of 8 CPUs and is equipped with 12 GB RAM. We intend to run 8 guest servers on it, each for a particular task, e.g. web server, name server, etc.
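
As a quick sanity check of the core count (a minimal sketch; the figure of 8 applies to this particular machine), you can also count the logical processors the kernel sees:

# count the logical CPUs known to the kernel (reports 8 on octopus)
grep -c ^processor /proc/cpuinfo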

I started out with a freshly installed Ubuntu Server, version 7.10, “Gutsy Gibbon”.

Installing kvm and qemu

First, install kvm and qemu:

octopus# apt-get install kvm qemu
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  bochsbios bridge-utils vgabios
Suggested packages:
  kvm-source debto debootstrap etherboot
Recommended packages:
  vde2 sharutils proll openhackware
The following NEW packages will be installed:
  bochsbios bridge-utils kvm qemu vgabios
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 4788kB of archives.
After unpacking 13.1MB of additional disk space will be used.
Do you want to continue [Y/n]? y

As you can see from the above, kvm pulls in a number of other packages that are needed. In particular, the bridge-utils package is important. The very first thing to do is to make the host "octopus" function as a network bridge, so that it accepts network traffic arriving for a guest's IP address and relays it to the guest's virtual network interface.

Load kernel module

Next, we need to load a kvm kernel module:

modprobe kvm-intel

or “kvm-amd” if that is your hardware platform. (In 8.04 (Hardy) you can skip this step, since the kvm module is loaded automatically.)

That command inserts the module in the currently running kernel. However, to have it loaded at each boot, insert the module name (kvm-intel/kvm-amd) in the file /etc/modules.
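
One way to do that (a minimal sketch; substitute kvm-amd on AMD hardware) is simply to append the module name to the file:

# load the module on every boot (use kvm-amd on AMD hardware)
echo kvm-intel | sudo tee -a /etc/modules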

Define the bridge interface

Edit the file /etc/network/interfaces, which on an Ubuntu or Debian system contains the configuration of the network interfaces. This is what the file looked like before editing:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
        address 192.168.0.101
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.0.4
        dns-search mydomain.net

We edit the file, so it gets the following content:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br0
iface br0 inet static
        address 192.168.0.101
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 5
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 192.168.0.4
        dns-search mydomain.net

We see that the IP address of "octopus" is now assigned to the br0 interface, and the original eth0 is tied to the bridge via the bridge_ports keyword. If your primary network interface is eth1, for example, you need to use that name instead.

Bring Up Bridge

Either reboot, or cycle the interfaces if you have local access to the machine:

sudo ifup br0
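
Once br0 is up, you can verify that eth0 is attached to it using the brctl tool that came with the bridge-utils package; a quick check looks like this:

# list all bridges and their member interfaces; eth0 should be listed under br0
brctl show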

Check the host

Once the server comes back up, you should test that it still has proper network connectivity on both sides of the gateway, e.g. using ping.
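
A minimal check could look like this (the gateway address is the one configured above; the external hostname is just an example):

# ping the local gateway and an external host from octopus
ping -c 3 192.168.0.1
ping -c 3 ubuntu.com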

Now, check the network interfaces using the ip addr command. Notice in the following listing that eth0 no longer has an associated IP address, while the new bridge device br0 has taken over the IP address.

octopus# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1c:c4:5e:18:5e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::21c:c4ff:fe5e:185e/64 scope link
       valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:1c:c4:5e:18:5e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.101/24 brd 192.168.0.255 scope global eth0
    inet6 fe80::21c:c4ff:fe5e:185e/64 scope link
       valid_lft forever preferred_lft forever

libvirt Guest Configuration

In virt-manager, a “bridge” configuration is now selectable when creating a new VM. To modify existing VMs, you can change the XML definition (in /etc/libvirt/qemu) for the network interface, adjusting the mac address as desired:

<interface type='bridge'>
  <mac address='00:11:22:33:44:55'/>
  <source bridge='br0'/>
</interface>
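
After editing the XML, one way to make libvirt pick up the change (a sketch; octo1.xml is the assumed file name for the guest used in this article) is to re-define the domain from the edited file; otherwise the restart below also reloads it:

# reload the edited definition into libvirt
virsh define /etc/libvirt/qemu/octo1.xml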

Finally, restart libvirtd (make sure your VMs are shut down first):

sudo /etc/init.d/libvirtd restart

KVM-only Guest Configuration

When kvm is started with the -net tap option (as we will do below), the host gets a new network device, in this case called tap0:

tap0      Link encap:Ethernet  HWaddr 00:FF:F9:C5:39:4A
          inet6 addr: fe80::2ff:f9ff:fec5:394a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2492 errors:0 dropped:0 overruns:0 frame:0
          TX packets:88109 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:279707 (273.1 KB)  TX bytes:11379233 (10.8 MB)
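
For the guest's traffic to actually flow, the tap device has to be attached to br0. kvm/qemu does this through an ifup helper script that it runs when the tap device is created; a minimal sketch of such a script, assuming the stock location /etc/qemu-ifup and the bridge name br0 used above (your distribution's kvm package may ship its own variant under a different path), could look like this:

#!/bin/sh
# bring the newly created tap device up and attach it to the bridge
ip link set "$1" up
brctl addif br0 "$1"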

Start the virtual machine

At this point, I assume that you already have access to a virtual server machine; how to create it is described elsewhere. On octopus, the virtual server "octo1" is a 5 GB image file stored on the /home partition, and based on the Ubuntu JeOS server.

We now start the virtual machine:

octopus# kvm -m 512 -net nic -net tap  /home/octopus-jeos.img

kvm will now boot the virtual machine in an X window. You will see the normal boot sequence inside the window. Eventually, the virtual machine comes up, and you should be able to log on. Beware that kvm's window grabs control of your mouse, so the rest of the system seems completely unresponsive! You need to press Ctrl and Alt simultaneously for kvm to release the mouse.

Configure the guest machine

Now is the time to configure the guest machine, in this case octo1. Notice that you should configure the guest machine exactly like you would configure a normal box sitting under your desk. In our case, the network on octo1 is set up, again, in the file /etc/network/interfaces, which on octo1 looks like this:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
        address 192.168.0.102
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        dns-nameservers 192.168.0.4
        dns-search mydomain.net

That’s it! The guest machine (you may have to reboot it) should now have access to the internet, and you should be able to reach it from the outside. You can now configure it further.

When you are done with the configuration, you will likely want to start up the virtual machine as a standalone process without a graphical console. That can be achieved using the -daemonize and -nographic options:

kvm -m 512 -net nic -net tap -daemonize -nographic /home/octopus-jeos.img
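
If you still want occasional console access to a headless guest, one variant (a sketch; it assumes your kvm build supports the usual qemu -vnc option, which it normally does) is to expose the display over VNC instead of disabling it entirely:

# run detached, with the console reachable on VNC display :1 (TCP port 5901)
kvm -m 512 -net nic -net tap -daemonize -vnc :1 /home/octopus-jeos.img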

Several virtual machines on the same host

Soon, you will want to have several virtual machines running on the same server. Of course, each guest must have its own unique IP address, but in addition, you need to make sure that each virtual machine also has a unique mac-address. The network card in your computer has a hardwired mac-address, but of course this is not so with a virtual machine.

The mac-address consists of six bytes, usually given in hexadecimal. Normally, it encodes the vendor and a serial number. However, you can make up your own mac addresses, which can be any number as long as the first byte is 02. This is a so-called locally administered address.

octopus# kvm -m 512 -net nic,macaddr=02:00:00:00:00:01 -net tap  /home/octopus-jeos.img
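
If you do not want to pick the remaining bytes by hand, a quick way to generate one (a throwaway bash sketch, nothing kvm-specific) is:

# print a random locally administered mac address (first byte 02)
printf '02:%02x:%02x:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))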

If you have been using the virtual machine before, you will need to edit its network configuration again: the first time you invoke kvm with the macaddr option, the guest sees a network interface with a new mac address, and its persistent device naming will typically create a new device, for example eth1, which you will then have to configure with the correct IP address etc.
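
Inside the guest, one way to see what happened (a sketch; the exact file name depends on the release, but on Ubuntu/Debian guests of this era it is usually /etc/udev/rules.d/70-persistent-net.rules) is to look at the persistent naming rules, which show which mac address is bound to which ethX name:

# inside the guest: show the mac-to-ethX mapping recorded by udev
cat /etc/udev/rules.d/70-persistent-net.rules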


This post was originally authored for the Ubuntu Wiki, reformatted for inclusion in this blog 2015-05-15.


Why abandon CentOS for Ubuntu?

My Linux experience

My first Linux experience was around 1996 with the Slackware distribution, installed from floppies. I quickly moved on to RedHat 3.03 and followed RedHat all the way from there. When RedHat stopped their free version, we went to Fedora Core 1. However, I quickly got fed up with Fedora's perpetual upgrade circus. It was just too much work upgrading all the time, and with no apparent gains (other than eye-candy stuff, but who cares). So when RedHat EL4 appeared some years ago, we jumped to CentOS 4.

So, for a couple of years, we've had a constant platform (EL4) on which to base (1) our own software development, and (2) the adaptation of other people's software (mainly for macromolecular crystallography and bioinformatics). But along the way, I have become aware of the inherent weakness of the RedHat/CentOS platform for our needs.

Why jump to Ubuntu/Debian?

CentOS is based on RedHat, so the base of the distribution is RedHat’s packages, which also means that the packaging policy is defined by RedHat. However, they only supply ~2500 packages, which is not sufficient for our use. We very often need software that is not available from RedHat. This is where the 3rd party repositories started playing a role.

A few years ago – in my naiveté – I had a whole bunch of repos defined in my sources.list.d directory. I really believed in the “let a thousand repos blossom” idea. However, I gradually had to remove repos from the list, because software on our systems started breaking.

Finally, I was down to just two repos when, one day, I came in and discovered that a lot of our local software that relied on fftw2 did not work anymore, because one repo had suddenly decided to provide fftw3, which replaced fftw2 and broke the local software that needed it.

It is necessary to solve these problems if people are to rely on keeping a system updated over long periods of time. But RedHat's and Fedora's basic strategy is to supply a system that you have to upgrade regularly, replacing all the base software in one fell swoop and then recompiling all add-on software. The constant update cycle of Fedora disguised this fact for a while, but it has become visible during the two-year CentOS period.

This is when I realized that CentOS is never going to work for us. We can’t continue using third party repos, but this problem is not going away anytime soon and people don’t seem to care about it.

With over 10 years of experience packaging RPMs, it has not been easy to come to the conclusion that Debian is the way to go. It is not the RPM format per se that is the problem. Although Debian's packaging mechanism is a bit more advanced than RPM's, I believe that the RPM packaging system is sufficiently capable, and it is easier to use and has a gentler learning curve. The problem is RedHat's packaging policy. Basically, RedHat has ~2500 packages that they care about. The rest is added by third party repos who basically make up their own policies. If at all.

My first Ubuntu experience

In the meantime, I have had a Kubuntu system at home for six months now, and it has been an amazing experience. There are over 20K packages available from the repos, and I have not once run into problems. I can have different packages that rely on different versions of the same libraries installed side by side. It just works. I did a dist-upgrade from edgy to feisty without a single problem. In addition, I have access to all the multimedia packages that I care to install, the latest KDE/Gnome environments, games, and so on (which is what our users ask for on the workstations ;-)).

So I decided that rather than porting my repo to EL5, I would port it to Debian and be in with Flynn :-)

What packaging means to the lab

We currently have around 25 workstations; some are in a big room in the basement, some are on people's personal desks. We also have a small HPC cluster with 5 × 2 CPUs. All users have an NFS-mounted home directory which sits on one of two large disk arrays. The other disk array is on a server in a remote building and is used for backups. We use rsnapshot to save snapshots from the last 40 days. The backup array is mounted on a virtual partition so every user can visit their own backup directory.

All workstations have the exact same configuration of software; this is achieved by using cfengine, apt, and a few cron scripts.

To keep this setup running without a large support team, we absolutely need to have all software packaged. Until now, I have packaged everything as RPM packages. Proprietary software – which we unfortunately are forced to use in some cases – is also packaged, but stored on an apt partition which cannot be reached from outside our network.

At the time of writing (6 June 2007) I have begun porting my RPM packages to .deb format, and when that process is completed, we will start a gradual move from CentOS to Kubuntu on the workstations. The servers will probably be running Debian stable.