Installing Xen Project Hypervisor on Debian 10

Table of Contents:

  1. Installing the Debian base operating system
  2. Installing the Xen hypervisor and host/control domain (“Dom0”)
  3. Installing a guest domain (“DomU”)

The purpose of this tutorial is to describe how to install and configure a Xen Project hypervisor with host/control and guest domains using Debian as the base operating system. Note that this tutorial uses Xen version 4.11 as included in the current stable release of Debian 10 (buster).

For your reference, note that we previously published an in-depth version of this tutorial for Debian 9 (stretch) titled Installing Xen Project Hypervisor on Debian 9: An in-depth beginner’s guide. Feel free to refer to the Debian 9 tutorial for more detailed information and reasoning for any of the steps provided below.

Hardware

Concerning hardware, a Lenovo ThinkCentre M83 SFF Pro desktop computer was used to test this tutorial with the following main components:

  • Intel Core i5-4570 processor (4 cores),
  • 32GB DDR3 memory,
  • 120GB SSD,
  • 2 x 1TB HDDs, and
  • Gigabit Ethernet port (with Internet connectivity).

Three-step process

Before we dive into the tutorial, here’s a quick tip: it’s helpful to think of the Xen system installation as a three-step process:

  1. installing the Debian base operating system,
  2. installing the Xen hypervisor and host/control domain (“Dom0”), and
  3. installing guest domains (“DomU”).

For the uninitiated, the term “domain” is used in Xen parlance to refer to a specific virtual machine or instance.

Step 1 – Installing the Debian base operating system

During the Xen installation process, the Debian base operating system is used to install and configure the Xen hypervisor and Dom0 virtual machine only. After Xen is installed and configured, GRUB will be configured to boot directly into the Xen hypervisor and Dom0 operating system where additional virtual machines can then be installed and configured.

In Step 1, we assume you either have a Debian ISO image written to a USB thumb drive, or have another method of booting the computer into the Debian installer. If you need help writing the ISO to a USB drive, detailed instructions are provided in the Debian 9 Xen Project hypervisor tutorial.

1.1 Run Debian installer and partition the primary disk drive

Once the USB drive is loaded with the Debian 10 ISO, boot the computer with the USB drive plugged in to initiate the Debian installer. Make sure the computer’s BIOS is configured to boot from the USB drive ahead of the system’s disk drive(s).

Follow the Debian installer prompts, setting up networking and user accounts as necessary. When you reach the installer’s partition configuration section, choose the “manual” option and set up the primary disk drive partition(s) as follows:

  • a 20GB primary partition, filesystem type “ext4”, marked “bootable”, mounted on “/”,
  • no swap partition, as we will allocate 8GB of the computer’s 32GB of ram to Dom0, and
  • the remaining disk space left empty, as we will create an LVM volume group from it later on.

Note that we chose to create the 20GB primary partition on the 120GB SSD. The SSD will be used for all our virtual machine operating system partitions, whereas the 1TB HDDs on the test machine will be allocated to virtual machines for file storage.

1.2 Partition and ram statistics

To give context to the partition choices above, note that we decided to install Debian 10 with the MATE desktop environment. After installing the operating system and all Xen Project software, only 8GB of the 20GB primary partition was used.

For comparison, in the Debian 9 Xen Project hypervisor tutorial, a minimal install of Debian 9 with the SSH server, standard system utilities, and no desktop environment used only about 900MB of the 4GB primary partition. Even with all Xen Project software installed, primary partition usage came in at around 1.1GB.

Concerning ram and swap space, we found that 8GB of ram and no swap is more than enough for running Debian 10 with the Xen Project hypervisor and the MATE desktop environment, just as 1GB of ram and 1GB of swap was more than enough for the bare-bones Debian 9 Xen Project hypervisor install.

Note that we provide instructions for allocating ram to Dom0 in Step 2, below.

Step 2 – Installing the Xen hypervisor and host/control domain (“Dom0”)

If you made it through Step 1, you should have a fresh Debian 10 operating system installed.

In Step 2 we install the Xen Project hypervisor software package, and configure the Dom0 virtual machine by way of the Debian base operating system. Once all preliminary configuration is complete, we will reboot the computer and automatically boot into the Xen hypervisor/Dom0 virtual machine to explore the new system.

2.1 Install Xen hypervisor

If you need to add a non-root user, the sudo package, non-free firmware, a firewall, or other necessary packages, now is the time.

Use apt-get to update the Debian base operating system package index files, and upgrade all currently installed packages. As root (or using sudo) run the following command:

# apt-get update && apt-get upgrade

Next, use apt-get to install the Xen Project hypervisor meta-package. Run the following command as root, adjusting the architecture suffix to suit your hardware:

# apt-get install xen-system-amd64
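
If you would like to confirm that the Xen packages were pulled in, one simple optional check is to list the Xen-related packages known to dpkg (exact package names and versions will vary with your Debian release):

# dpkg -l | grep xen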

2.2 Bridge the network interface

We have found that the easiest way to give DomUs access to the computer’s Ethernet device is through a Linux Ethernet bridge in Dom0. In this tutorial, the Debian package bridge-utils is used to configure the bridge. The bridge-utils package should have been installed automatically along with the xen-system-amd64 meta-package.

Just to be safe, let’s back up the “interfaces” file before we do any editing. As root, make a copy of the file “/etc/network/interfaces”:

# cp /etc/network/interfaces /etc/network/interfaces.backup

Now edit the “interfaces” file using the nano file editor:

# nano /etc/network/interfaces

The original interfaces file (now preserved as interfaces.backup) contains the following lines:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

Note that the primary network interface on your computer may have a name other than “eno1”, the name on our test machine. Always use the name found in your computer’s interfaces file; in other words, don’t use eno1 unless it’s the name found on your system.
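
If you are unsure of the interface name, you can list the system’s network interfaces with the ip command from the iproute2 package (installed by default on Debian), and look for the physical Ethernet device:

# ip a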

Edit the “interfaces” file to include the following modifications and new lines making sure to substitute “eno1” with your system’s interface name:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eno1
iface eno1 inet manual

# Setup Xen interface bridge
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eno1

If you want your networking configured via DHCP, it is important to set the primary network interface to “manual” and the bridge to “dhcp”. Also, note that “xenbr0” is the standard name used for the Linux Ethernet bridge in Xen Project installations.

If you need to apply the networking configuration changes immediately, execute the following systemctl command. However, since we are going to reboot the computer soon, you can also let the reboot apply the networking changes automatically.

Optional: Initiate the networking configuration changes:

# systemctl restart networking
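
Once networking has been restarted (or after the reboot in Step 2.6), you can check that the bridge exists using the brctl utility from the bridge-utils package mentioned above:

# brctl show

You should see xenbr0 listed with your physical interface (eno1 in our example) attached as one of its interfaces.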

2.3 Autoballoon and memory allocation – xl.conf

We will turn off the memory “autoballoon” feature for the Debian 10 Xen Project install, as we had good results doing the same with our prior install on Debian 9. Note that we have not tested the autoballoon feature, so we cannot speak positively or negatively about its use.

Concerning autoballoon, the manpage for xl.conf states the following:

You are strongly recommended to set this to “off” (or “auto”) if you use the “dom0_mem” hypervisor command line to reduce the amount of memory given to domain 0 by default.

Given the xl.conf recommendation, since we will restrict Dom0 memory through GRUB and the hypervisor command line below, we will set autoballoon to 0 [zero] or “off” to avoid any issues.

For the autoballoon configuration in the Debian 9 tutorial, we made changes to both the /etc/xen/xl.conf and /etc/xen/xend-config.sxp configuration files. However, in the Debian 10 Xen Project installation the xend-config.sxp file doesn’t exist, so we only need to modify the xl.conf file.

As root, create a backup copy of file “/etc/xen/xl.conf”:

# cp /etc/xen/xl.conf /etc/xen/xl.conf.backup

Then change the line in the xl.conf file that reads…

#autoballoon="auto"

to read…

autoballoon=0

2.4 Autoballoon and memory allocation – /etc/default/grub.d/xen.cfg

Since our plan is to restrict Dom0 memory to 8GB through GRUB and the hypervisor command line, we need to do a quick edit of the /etc/default/grub.d/xen.cfg configuration file.

As root, create a backup copy of file “/etc/default/grub.d/xen.cfg”:

# cp /etc/default/grub.d/xen.cfg /etc/default/grub.d/xen.cfg.backup

Then open the xen.cfg file in nano and add the line:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=8G,max:8G"

As mentioned above, allocating 8GB ram to a Dom0 running Debian 10 with the MATE desktop environment provides more than enough memory. In fact, for the Debian 9 tutorial, which did not have a desktop environment installed, only 1GB of ram was allocated and the Dom0 performed flawlessly.

IMPORTANT: You must update GRUB or the changes will not take effect! Note that rebooting the computer does not update GRUB!

To update GRUB run the following command as root:

# update-grub
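
If you want to double-check that the new hypervisor option made it into the generated GRUB configuration, you can search for it; the path below is the standard Debian location, so adjust it if your system differs:

# grep dom0_mem /boot/grub/grub.cfg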

2.5 Allocate Dom0 virtual CPUs

As the test computer has only four cores in its processor, no configuration was made to allocate CPUs to Dom0.

By default, Xen makes all processor cores available to Dom0, which then shares these cores with the DomUs when the virtual machines are created. This configuration is in line with the Xen Project guide Tuning Xen for Performance, which states that “In general you should not assign less than 4 vCPUs to Dom0”.
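
If you are working with a machine that has many more cores and later decide to cap the number of vCPUs given to Dom0, Xen provides hypervisor boot options for this purpose. As a rough, untested sketch only (we did not use this on our four-core test machine), the xen.cfg line from Step 2.4 could be extended along these lines, followed by another run of update-grub:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=8G,max:8G dom0_max_vcpus=4 dom0_vcpus_pin"

The dom0_max_vcpus and dom0_vcpus_pin options are documented in the Xen command-line reference; check the documentation for your Xen version before relying on them.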

2.6 Reboot computer and boot into Xen hypervisor/Dom0 virtual machine

All configuration in the Debian base operating system is now complete. Once you are ready, run the reboot command as root. The computer should automatically boot into the Xen Project hypervisor and Dom0 virtual machine.

# reboot

If you have a monitor plugged in, after the computer reboots, you will see the GRUB menu with “Debian GNU/Linux, with Xen hypervisor” set as default. This is your new Xen hypervisor/Dom0 installation, and our current destination.

If you’re using SSH on a headless system, you will have to wait until the computer passes the GRUB menu and loads the Xen hypervisor/Dom0, which activates the SSH server daemon. Note that if the SSH server was enabled in the Debian base operating system, it will be enabled in the Xen hypervisor/Dom0.

Whether proceeding with or without a monitor, once you reach the familiar terminal login screen, use your username and password from the Debian base operating system to log in to the new Xen hypervisor/Dom0 virtual machine.

Now that you’re in Dom0, check your work to make sure that all configurations made in the Debian base operating system were passed on to Dom0. Start by running the free command to verify that Dom0 has been allocated its 8GB of memory:

$ free -h

If the free command does not display the amount of memory you specified in the xen.cfg file, verify that you used the precise syntax to modify xen.cfg as specified above, and also that the update-grub command was properly executed after the configuration file was modified (see Step 2.4, above).

2.7 The xl Command

One way to confirm you are in Dom0 is by running an xl command. Try any of the following xl commands as root user:

# xl list

# xl info

# xl top
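
On a freshly installed system, xl list should show only Dom0, with output roughly like the following (your memory, vCPU, and time values will differ):

Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  8192     4     r-----      40.5

Another useful check at this point: xl dmesg prints the hypervisor boot log, which records the Xen command line, so you can confirm that the dom0_mem option from Step 2.4 was applied:

# xl dmesg | grep dom0_mem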

Step 3 – Installing a guest domain (“DomU”)

Now that the Xen hypervisor/Dom0 virtual machine is configured and up and running, it’s time to configure and create a DomU. Just like we did in the Debian 9 tutorial, we will now install and use the xen-tools package to automate the steps involved in creating a paravirtualized (“PV”) DomU.

3.1 Install Debian package xen-tools

Use apt-get to update the Xen hypervisor/Dom0 package index files, and upgrade all currently installed packages. As root run the following command:

# apt-get update && apt-get upgrade

Next, use apt-get to install the xen-tools package from the Debian buster repository, which currently ships xen-tools version 4.8-1. Run the following command as root:

# apt-get install xen-tools
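
If you want to confirm which xen-tools version was installed from the repository, you can query apt:

# apt-cache policy xen-tools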

3.2 Set up new partition and volume group

Before we use xen-tools to create a new DomU, we need to create an LVM volume group that will be used to provision disk space for new virtual machines. In this step, we assume Dom0 was installed on the /dev/sda1 partition of your primary disk drive, and that all other /dev/sda drive space is empty.

If you already have a partition on the empty drive space from which you’ll provision DomU disks, you can simply follow the step below to add it to a volume group (vgcreate). If you already have a volume group on the space from which you’ll provision DomU disks, take note of the volume group name; you are then ready for Step 3.3.

Assuming there is no existing partition or volume group, we first use cfdisk to create a partition on the empty space. The drive with the empty space on the test computer is /dev/sda, so we use cfdisk to modify that drive, select the empty drive space, and create a partition with Id/Type “83 Linux”. The partition should not be bootable.

Don’t touch the /dev/sda1 partition, as it contains the Dom0 operating system!

# cfdisk /dev/sda

Make sure to “write” your changes, then quit.

If you followed the above step, you should end up with a new /dev/sda2 partition.

Next, we add the /dev/sda2 partition to a new LVM volume group called “vg0”:

# vgcreate vg0 /dev/sda2
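
To confirm that the volume group was created and to see how much space is available for provisioning DomU disks, you can use the standard LVM reporting commands:

# vgs

# vgdisplay vg0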

For more information on configuring LVM volume groups, refer to the article A Quick Guide for Configuring LVM.

3.3 Use xen-tools to create a new DomU

Now that the volume group is set up, everything is in place to create a new PV DomU virtual machine. We can now pass xen-tools some configuration options, which will automate the creation of the new DomU. Then all we need to do is start it!

# xen-create-image --hostname=apache --lvm=vg0 --dhcp --pygrub --memory=4G --maxmem=4G --noswap --size=4G --passwd

Here is an explanation of the xen-create-image options as detailed above:

  • “hostname” sets the hostname of the new DomU
  • “lvm” tells xen-tools which volume group to provision disk space from
  • “dhcp” tells xen-tools that the new DomU will be using DHCP for networking
  • “pygrub” tells xen-tools to boot the new DomU using pygrub
  • “memory” tells xen-tools how much ram to allocate to the new DomU
  • “maxmem” tells xen-tools the maximum amount of ram to allocate to the new DomU
  • “noswap” tells xen-tools not to create swap space for the new DomU
  • “size” tells xen-tools how much disk space to allocate to the new DomU
  • “passwd” tells xen-tools to prompt the user to set the root password for the new DomU

After running the xen-create-image command, the process will take over the Dom0 console while the new DomU image is built. When the process is complete, a new DomU configuration file will be created in the “/etc/xen/” folder. In the example above, the configuration file will be named “apache.cfg”.
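
Before starting the guest, you can verify what xen-create-image produced: the configuration file in /etc/xen/ and the logical volume(s) carved out of vg0. With the --noswap option there should be a single disk volume; note that the exact logical volume name is chosen by xen-tools:

# ls /etc/xen/

# lvs vg0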

Refer to the hyperlinks provided at the end of this tutorial for more information on creating, configuring, and deleting DomU.

3.4 Start the new DomU

Now that xen-tools has helped create a new DomU, it’s time to boot the new virtual machine. Use the xl create command and pass it the path of the newly created DomU configuration file.

# xl create /etc/xen/apache.cfg -c

The “-c” option attaches the Dom0 terminal to the console of the booting DomU. To get back to Dom0 from the DomU’s terminal, use the key combination “Ctrl + ]”.

To get back to the DomU from Dom0’s terminal, use the command:

# xl console {name-of-domu}

Note that, once you go back to the DomU, you may need to press the “Enter” key; otherwise the terminal may appear inactive!

3.5 A few more tips

Before we wrap up, here are a few navigational tips.

While in DomU’s console:

  • To shut down the DomU, as root run the command “shutdown -h now”
  • To reboot the DomU, as root run the command “reboot”
  • To exit the DomU’s console and go back to Dom0, use the “Ctrl + ]” key combination

While in Dom0’s console:

  • To shut down a DomU, as root run the command “xl shutdown {domain-id}”
  • To reboot a DomU, as root run the command “xl reboot {domain-id}”
  • To exit Dom0’s console and go back to a DomU, as root run the command “xl console {domain-id}”
  • To start a DomU for the first time, or to restart it after shutdown, as root run the command “xl create {full-path-to-DomU .cfg file}”

If it appears you are in a frozen terminal in any one of the transitions above, try pressing “Enter” again, and you will likely find out that you are not!

It is important to note that the DomU operating system, like any real operating system, will retain its configuration and filesystem changes even after it is shut down and restarted. Simply configure and use the DomU virtual machine as you would any computer.

3.6 Helpful tutorials and guides

Where do you go from here? Please check out these other useful tutorials for more information on creating/deleting DomUs, passing USB drives to a DomU, and configuring the Xen Project hypervisor.