VM hosting (deb/3.0/CLI)

Eight questions you may have:

  1. What is a VM host?
  2. How do I make LXD available for hosting?
  3. How do I set up SSH for use by libvirt?
  4. How do I add a VM host?
  5. How do I see resources for a specific NUMA node?
  6. How do I see resources for NUMA-node-bearing VM hosts?
  7. How do I see the alignment between VM host interfaces and NUMA nodes?
  8. How do I configure and use hugepages on my VMs?

VM hosts are particularly useful for Juju integration, allowing dynamic allocation of VMs with custom interface constraints. Alternatively, if you would like to use MAAS to manage a collection of VMs, the web UI lets you easily create and manage VMs, logically grouped by VM host. Six notable features include:

  1. Juju integration
  2. At-a-glance visual tools for easy resource management
  3. Set overcommit ratios for physical resources such as CPU and RAM
  4. Assign VMs to resource pools to segregate your VMs into logical groupings
  5. Track VM host storage pool usage and assign default storage pools
  6. Create VMs on multiple networks, specified by space, subnet, VLAN, or IP address

This section will lead you through the creation, usage, and management of VM hosts.

What is a VM host?

Simply put, a VM host is a machine designated to run virtual machines (VMs). A VM host divides its resources (CPU cores, RAM, storage) among the VMs you create, based on the choices you make when composing each VM. It is also possible to overcommit resources – that is, promise more resources than the VM host actually has available – as long as you use the VMs carefully. Once MAAS has enlisted, commissioned, and acquired a newly-added machine, you can deploy it as a VM host.
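
If you want to adjust overcommit from the CLI rather than the UI, a minimal sketch looks like the following; it assumes your VM host's ID is in $VM_HOST_ID and uses the cpu_over_commit_ratio and memory_over_commit_ratio parameters of the VM host API (confirm the exact parameter names with the CLI help for vm-host update in your release):

maas $PROFILE vm-host update $VM_HOST_ID cpu_over_commit_ratio=2 memory_over_commit_ratio=1.5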

VMs and NUMA

MAAS provides extensive optimisation tools for using NUMA with virtual machines. Earlier versions of MAAS guaranteed that a machine was assigned to a single NUMA node containing all of the machine’s resources. As of 2.9, MAAS also lets you see how many VMs are allocated to each NUMA node, along with the allocation of cores, storage, and memory. You can quickly spot a VM running across multiple NUMA nodes and optimise accordingly, with instant updates on pinning and allocations. You can also tell which VMs are currently running.

In addition, you can get a bird’s-eye view of network configuration:

  1. You can see which VM NIC/bond is connected to which NUMA node.
  2. You can tell when a NIC is connected to a different NUMA node.
  3. You can tell if one of multiple NICs is not in the correct node.
  4. You can confirm the subnet and space connecting to a VM.
  5. You can confirm that a VM has the desired network properties, such as latency and throughput.
  6. You can identify NICs that support SR-IOV and tell how many VFs are available.

MAAS also shows hugepages information (if they are in use) and prevents overcommit when using them. Hugepages are memory pages that are much larger than the default size (for example, 2 MB or 1 GB rather than 4 KB), which reduces the number of address translations a core must perform when accessing memory. Because memory is then reserved and handled in these larger units, optimising their usage can be complex. MAAS helps you make these optimisations by giving you a discrete view of the hugepages associated with your VM, helping you decide whether you need them or not.
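
If you want to see whether a host already has hugepages reserved before relying on them, a quick check (plain Linux, not MAAS-specific) is:

grep Huge /proc/meminfo

HugePages_Total, HugePages_Free, and Hugepagesize in the output tell you how many pages are reserved and at what size.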

Support for NUMA, SR-IOV, and hugepages

VM host management has been redesigned to support NUMA/SR-IOV configurations and hugepages from the API/CLI. Users can:

  1. See resources per NUMA node.
  2. See resources for VM hosts bearing NUMA nodes.
  3. See the alignment between VM host interfaces and NUMA nodes.
  4. Configure and use hugepages.

This section explains how to access this new functionality.

Examine NUMA node resources

You can examine the resources of a NUMA node with the MAAS CLI. A very basic way to do so is to enter the following command for a configured VM:

maas $PROFILE machine read $SYSTEM_ID

In the resulting JSON output, look for the array entry numanode_set, which will show the NUMA details for that specific VM:

"numanode_set": [
     {
          "index": 0,
	  "memory": 16384,
	  "cores": [
	      0,
	      2,
	      1,
	      3,
	  ],
	  "hugepages_set": [
	      {
	          "page_size": 2097152,
		  "total": 0
              }
	  ]
     }
]
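
If you only want that NUMA block, you can filter the JSON with jq (assuming jq is installed on the machine where you run the MAAS CLI):

maas $PROFILE machine read $SYSTEM_ID | jq '.numanode_set'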

Examine resources for NUMA-node-bearing VM hosts

With the MAAS CLI, you can get an overview of resource usage for an LXD host that’s running NUMA VMs with the following command:

maas $PROFILE virtual-machines read

Currently, the API does not give you aggregated usage figures the way the UI does, so you’ll have to look at the individual VMs and sum the usage data yourself. The output does include each VM’s pinned cores, and it shows how machines align with NUMA nodes.
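
As a rough sketch of summing that data yourself, the following assumes each entry returned by virtual-machines read exposes memory (in MiB) and pinned_cores fields; confirm the field names against your own output before relying on it:

maas $PROFILE virtual-machines read | jq '{vm_count: length, total_memory_mib: ([.[].memory] | add), pinned_cores: [.[].pinned_cores]}'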

Examine the alignment between VM host interfaces and NUMA nodes

To see an alignment of VM host interfaces and NUMA nodes via the CLI, you can use the command mentioned above:

maas $PROFILE machine read $SYSTEM_ID

and focus on the interface_set section in the resulting JSON. This will give you the alignment information you’re seeking.
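
For a quicker summary, you can pull out just the interface names and their NUMA node indexes with jq; this sketch assumes each entry in interface_set includes a numa_node field, which you should verify against your own output:

maas $PROFILE machine read $SYSTEM_ID | jq '[.interface_set[] | {name, numa_node}]'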

Configure and use hugepages on your VMs

Configuring hugepages for VM use consists of two steps:

  1. Creating a tag which includes a kernel option to use hugepages.
  2. Composing a VM backed with hugepages, tagged with the newly-created tag.

Here are the specific commands:

maas $PROFILE tags create name=use-hugepages kernel_opts="default_hugepagesz=1G hugepages=20"

maas $PROFILE vm-host compose $VM_HOST_ID pinned_cores=$CORE_NUMBER hugepages_backed=true
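
To verify that the composed VM really is backed by hugepages, you can read the machine MAAS created for it and look at the hugepages_set entries shown earlier (assuming its system ID is in $SYSTEM_ID and jq is available):

maas $PROFILE machine read $SYSTEM_ID | jq '.numanode_set[].hugepages_set'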

Make LXD available for VM hosting

To use LXD VM hosts, you need to install the correct version of LXD. Prior to the release of Ubuntu 20.04, LXD was installed using Debian packages, and that Debian-packaged version of LXD is too old to use with MAAS. If you have the Debian packages installed, you’ll need to remove them and install the snap version instead. Note that you cannot have both the Debian and snap versions installed, as this creates a conflict.
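
If you are not sure which LXD you currently have, you can check for both packagings; the Debian package and the snap are both simply named lxd:

apt list --installed 2>/dev/null | grep '^lxd/'
snap list lxd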

Removing older versions of LXD

If you’re on a version of Ubuntu older than 20.04, or you have the Debian version of LXD, start the uninstall process with the following command:

sudo apt-get purge -y *lxd* *lxc*

This command should result in output that looks something like this:

Reading package lists... Done
Building dependency tree      
Reading state information... Done
Note, selecting 'lxde-core' for glob '*lxd*'
Note, selecting 'python-pylxd-doc' for glob '*lxd*'
Note, selecting 'python3-pylxd' for glob '*lxd*'
Note, selecting 'python-nova-lxd' for glob '*lxd*'
Note, selecting 'lxde-common' for glob '*lxd*'
Note, selecting 'lxde-icon-theme' for glob '*lxd*'
Note, selecting 'lxde-settings-daemon' for glob '*lxd*'
Note, selecting 'lxde' for glob '*lxd*'
Note, selecting 'lxdm' for glob '*lxd*'
Note, selecting 'lxd' for glob '*lxd*'
Note, selecting 'lxd-tools' for glob '*lxd*'
Note, selecting 'python-pylxd' for glob '*lxd*'
Note, selecting 'lxdm-dbg' for glob '*lxd*'
Note, selecting 'lxde-session' for glob '*lxd*'
Note, selecting 'nova-compute-lxd' for glob '*lxd*'
Note, selecting 'openbox-lxde-session' for glob '*lxd*'
Note, selecting 'python-nova.lxd' for glob '*lxd*'
Note, selecting 'lxd-client' for glob '*lxd*'
Note, selecting 'openbox-lxde-session' instead of 'lxde-session'
Note, selecting 'lxctl' for glob '*lxc*'
Note, selecting 'lxc-common' for glob '*lxc*'
Note, selecting 'python3-lxc' for glob '*lxc*'
Note, selecting 'libclxclient-dev' for glob '*lxc*'
Note, selecting 'lxc-templates' for glob '*lxc*'
Note, selecting 'lxc1' for glob '*lxc*'
Note, selecting 'lxc-dev' for glob '*lxc*'
Note, selecting 'lxc' for glob '*lxc*'
Note, selecting 'liblxc1' for glob '*lxc*'
Note, selecting 'lxc-utils' for glob '*lxc*'
Note, selecting 'vagrant-lxc' for glob '*lxc*'
Note, selecting 'libclxclient3' for glob '*lxc*'
Note, selecting 'liblxc-dev' for glob '*lxc*'
Note, selecting 'nova-compute-lxc' for glob '*lxc*'
Note, selecting 'python-lxc' for glob '*lxc*'
Note, selecting 'liblxc-common' for glob '*lxc*'
Note, selecting 'golang-gopkg-lxc-go-lxc.v2-dev' for glob '*lxc*'
Note, selecting 'lxcfs' for glob '*lxc*'
Note, selecting 'liblxc-common' instead of 'lxc-common'
Package 'golang-gopkg-lxc-go-lxc.v2-dev' is not installed, so not removed
Package 'libclxclient-dev' is not installed, so not removed
Package 'libclxclient3' is not installed, so not removed
Package 'lxc-templates' is not installed, so not removed
Package 'lxctl' is not installed, so not removed
Package 'lxde' is not installed, so not removed
Package 'lxde-common' is not installed, so not removed
Package 'lxde-core' is not installed, so not removed
Package 'lxde-icon-theme' is not installed, so not removed
Package 'lxde-settings-daemon' is not installed, so not removed
Package 'lxdm' is not installed, so not removed
Package 'lxdm-dbg' is not installed, so not removed
Package 'openbox-lxde-session' is not installed, so not removed
Package 'python-lxc' is not installed, so not removed
Package 'python3-lxc' is not installed, so not removed
Package 'vagrant-lxc' is not installed, so not removed
Package 'liblxc-dev' is not installed, so not removed
Package 'lxc-dev' is not installed, so not removed
Package 'nova-compute-lxc' is not installed, so not removed
Package 'nova-compute-lxd' is not installed, so not removed
Package 'python-nova-lxd' is not installed, so not removed
Package 'python-pylxd' is not installed, so not removed
Package 'python-pylxd-doc' is not installed, so not removed
Package 'lxc' is not installed, so not removed
Package 'lxc-utils' is not installed, so not removed
Package 'lxc1' is not installed, so not removed
Package 'lxd-tools' is not installed, so not removed
Package 'python-nova.lxd' is not installed, so not removed
Package 'python3-pylxd' is not installed, so not removed
The following packages were automatically installed and are no longer required:
  dns-root-data dnsmasq-base ebtables libuv1 uidmap xdelta3
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  liblxc-common* liblxc1* lxcfs* lxd* lxd-client*
0 upgraded, 0 newly installed, 5 to remove and 21 not upgraded.
After this operation, 34.1 MB disk space will be freed.
(Reading database ... 67032 files and directories currently installed.)
Removing lxd (3.0.3-0ubuntu1~18.04.1) ...
Removing lxd dnsmasq configuration
Removing lxcfs (3.0.3-0ubuntu1~18.04.2) ...
Removing lxd-client (3.0.3-0ubuntu1~18.04.1) ...
Removing liblxc-common (3.0.3-0ubuntu1~18.04.1) ...
Removing liblxc1 (3.0.3-0ubuntu1~18.04.1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
(Reading database ... 66786 files and directories currently installed.)
Purging configuration files for liblxc-common (3.0.3-0ubuntu1~18.04.1) ...
Purging configuration files for lxd (3.0.3-0ubuntu1~18.04.1) ...
Purging configuration files for lxcfs (3.0.3-0ubuntu1~18.04.2) ...
Processing triggers for systemd (237-3ubuntu10.40) ...
Processing triggers for ureadahead (0.100.0-21) ...

You should also autoremove the packages that are no longer needed now that LXD has been removed:

$ sudo apt-get autoremove -y

Output from this command should be similar to:

Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages will be REMOVED:
  dns-root-data dnsmasq-base ebtables libuv1 uidmap xdelta3
0 upgraded, 0 newly installed, 6 to remove and 21 not upgraded.
After this operation, 1860 kB disk space will be freed.
(Reading database ... 66769 files and directories currently installed.)
Removing dns-root-data (2018013001) ...
Removing dnsmasq-base (2.79-1) ...
Removing ebtables (2.0.10.4-3.5ubuntu2.18.04.3) ...
Removing libuv1:amd64 (1.18.0-3) ...
Removing uidmap (1:4.5-1ubuntu2) ...
Removing xdelta3 (3.0.11-dfsg-1ubuntu1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...

Now install LXD from the Snap:

$ sudo snap install lxd
2020-05-20T22:02:57Z INFO Waiting for restart...
lxd 4.1 from Canonical✓ installed

Refreshing LXD on 20.04

If you are on 20.04 or above, LXD should be installed by default, but it’s a good idea to make sure it’s up to date:

$ sudo snap refresh
All snaps up to date.

Initialise LXD prior to use

Once LXD is installed it needs to be configured with lxd init before first use:

$ sudo lxd init

Your interactive output should look something like the following. Note a few important points about these questions:

  1. Would you like to use LXD clustering? (yes/no) [default=no]: no - MAAS does not support LXD clusters in this version.

  2. Name of the storage back-end to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: dir - Testing has primarily been with dir; other options should work, but less testing has been done, so use at your own risk.

  3. Would you like to connect to a MAAS server? (yes/no) [default=no]: no - When LXD is connected to MAAS, containers or virtual machines created by LXD will be automatically added to MAAS as devices. This feature should work, but has had limited testing in this version.

  4. Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes - The bridge LXD would otherwise create is isolated and not managed by MAAS. If that bridge were used, you would be able to add the LXD VM host and compose virtual machines, but commissioning, deploying, and any other MAAS action which uses the network would fail – so yes is the correct answer here.

  5. Name of the existing bridge or host interface: br0 - br0 is the name of the bridge the user configured (see sections above) which is connected to a MAAS-managed network.

  6. Trust password for new clients: - This is the password the user will enter when connecting with MAAS.

Would you like to use LXD clustering? (yes/no) [default=no]: no
Do you want to configure a new storage pool? (yes/no) [default=yes]: yes
Name of the new storage pool [default=default]:  
Name of the storage back-end to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: br0
Would you like LXD to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]:
Port to bind LXD to [default=8443]:
Trust password for new clients:
Again:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

Once that’s done, the LXD host is now ready to be added to MAAS as an LXD VM host. Upon adding the VM host, its own commissioning information will be refreshed.
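
For reference, registering the host from the CLI looks roughly like the sketch below; treat it as a hedged example rather than definitive syntax, since parameter names can vary slightly between releases. $LXD_ADDRESS is the address you told LXD to listen on, and the password is the trust password you set during lxd init:

maas $PROFILE vm-hosts create type=lxd power_address=$LXD_ADDRESS password=$TRUST_PASSWORD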

When composing a virtual machine with LXD, MAAS uses either the ‘maas’ LXD profile, or (if that doesn’t exist) the ‘default’ LXD profile. The profile is used to determine which bridge to use. Users may also add additional LXD options to the profile which are not yet supported in MAAS.
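
You can inspect or seed those profiles with standard LXD client commands, for example viewing the default profile or copying it to a new 'maas' profile:

lxc profile show default
lxc profile copy default maas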

