How to customise machines (snap/3.1/UI)

MAAS machines can be customised in a number of ways, including:

  • machine storage.
  • commissioning and deployment configurations (known as “pre-seeding”).
  • custom Ubuntu kernels.
  • kernel boot options.
  • resource pools.

In short, this article will explain these possible customisations, and provide detailed instructions on how to customise your own machines as desired.

About customising machines

About customising machine storage

You have significant latitude when choosing the final storage configuration of a deployed machine. MAAS supports traditional disk partitioning, as well as more complex options such as LVM, RAID, and bcache. MAAS also supports UEFI as a boot mechanism. This article explains boot mechanisms and layouts, and offers some advice on how to configure layouts and manage storage.

A machine’s storage is dependent upon the underlying system’s disks, but its configuration (i.e., disk usage) is the result of a storage template. In MAAS, this template is called a layout, and MAAS applies it to a machine during commissioning. Once a layout is applied, a regular user can make modifications at the filesystem level to arrive at the machine’s final storage configuration. When a machine is no longer needed, a user can choose from among several disk erasure types before releasing it.

MAAS supports storage configuration for CentOS and RHEL deployments. Support includes RAID, LVM, and custom partitioning with different file systems (ZFS and bcache excluded). This support requires a newer version of Curtin, available as a PPA.

About UEFI booting

Every layout type supports a machine booting with UEFI. In such a case, MAAS automatically creates an EFI boot partition (/boot/efi). Other than setting the machine to boot from UEFI, the user does not need to take any additional action.

Warning: UEFI must be enabled or disabled for the lifespan of the machine. For example, do not enlist a machine with UEFI enabled, and then disable it before commissioning. It won’t work!

The EFI partition, if created, will be the first partition (sda1) and will have a FAT32 filesystem with a size of 512 MB.

About block devices

Once the initial storage layout has been configured on a machine, you can perform many operations to view and adjust the entire storage layout for the machine. In MAAS there are two different types of block devices.

Physical

A physical block device is a physically attached block device such as a 100GB hard drive connected to a server.

Virtual

A virtual block device is a block device that is exposed by the Linux kernel when an operation is performed. Almost all the operations on a physical block device can be performed on a virtual block device, such as a RAID device exposed as md0.

About partitions

As with block devices (see Block devices), MAAS and the MAAS API offer a great deal of control over the creation, formatting, mounting and deletion of partitions.
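
While this article focuses on the web UI, the same partition operations are exposed through the MAAS CLI. The following is a rough sketch rather than a definitive recipe: $PROFILE, $SYSTEM_ID, $DEVICE_ID, and $PARTITION_ID are placeholders for your login profile and the IDs that MAAS reports, and the machine must have a status of ‘Ready’:

# List the machine's block devices and their partitions
maas $PROFILE block-devices read $SYSTEM_ID
# Create a 10 GB partition (size given in bytes), format it, and mount it
maas $PROFILE partitions create $SYSTEM_ID $DEVICE_ID size=10737418240
maas $PROFILE partition format $SYSTEM_ID $DEVICE_ID $PARTITION_ID fstype=ext4
maas $PROFILE partition mount $SYSTEM_ID $DEVICE_ID $PARTITION_ID mount_point=/srv/data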

About storage restrictions

There are three restrictions for the storage configuration:

  1. An EFI partition is required to be on the boot disk for UEFI.
  2. You cannot place partitions on logical volumes.
  3. You cannot use a logical volume as a bcache backing device.

Violating these restrictions will prevent a successful deployment.

About VMFS datastores

MAAS can configure custom local VMware VMFS Datastore layouts to maximise the usage of your local disks when deploying VMware ESXi. As VMware ESXi requires specific partitions for operating system usage, you must first apply the VMFS6 storage layout. This layout creates a VMFS Datastore named datastore1 which uses the disk space left over on the boot disk after MAAS creates the operating system partitions.

About final storage modifications

Once MAAS provisions a machine with block devices, via a layout or administrator customisation, a regular user can modify the resulting storage configuration at the filesystem level.

About disk erasure

Disk erasure pertains to the erasing of data on each of a machine’s disks when the machine has been released (see Release action) back into the pool of available machines. The user can choose from among three erasure types before confirming the Release action. A default erasure configuration can also be set.

About disk erasure types

The three disk erasure types are:

  1. Standard erasure - Overwrites all data with zeros.
  2. Secure erasure - Although effectively equivalent to standard erasure, secure erasure is much faster because the disk’s firmware performs the operation. For the same reason, some disks (SCSI, SAS, and FC disks in particular) may not be able to perform this erasure type.
  3. Quick erasure - Same as Standard erase but only targets the first 1 MB and the last 1 MB of each disk. This removes the partition tables and/or superblock from the disk, making data recovery difficult but not impossible.

If all three options are checked when the machine is released, the following order of preference is applied:

  1. Use ‘secure erase’ if the disk supports it.
  2. If it does not, then use ‘quick erase’.

It is very important to pay close attention to your selections when erasing disks.
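
For reference, the same erasure options can be requested when releasing a machine via the MAAS CLI. A minimal sketch, where $PROFILE and $SYSTEM_ID are placeholders:

maas $PROFILE machine release $SYSTEM_ID erase=true secure_erase=true quick_erase=true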

About pre-seeding

During machine enlistment, commissioning, deployment, and installation, MAAS sends Tempita-derived configuration files to the cloud-init process running on the target machine. MAAS refers to this process as preseeding. These preseed files are used to configure a machine’s ephemeral and installation environments, and they can be modified or augmented to achieve a custom machine configuration.

Preseeding in MAAS can be achieved in two ways:

  1. Curtin, a preseeding system similar to Kickstart or d-i (Debian Installer), applies customisation during operating system (OS) image installation. MAAS performs these changes on deployment, during OS installation, but before the machine reboots into the installed OS. Curtin customisations are perfect for administrators who want their deployments to have identical setups all the time, every time. This blog post contains an excellent high-level overview of custom MAAS installs using Curtin.

  2. Cloud-init, a system for setting up machines immediately after instantiation. cloud-init applies customisations after the first boot, when MAAS changes a machine’s status to ‘Deployed.’ Customisations are per-instance, meaning that user-supplied scripts must be re-specified on redeployment. Cloud-init customisations are the best way for MAAS users to customise their deployments, similar to how the various cloud services prepare VMs when launching instances.

About templates

The Tempita template files are found in the /var/snap/maas/current/preseeds/ directory on the region controller. Each template uses a filename prefix that corresponds to a particular phase of MAAS machine deployment:

Phase          Filename prefix
Enlistment     enlist
Commissioning  commissioning
Installation   curtin (Curtin)

Additionally, the template for each phase typically consists of two files. The first is a higher-level file that often contains little more than a URL or a link to further credentials, while a second file contains the executable logic.

The enlist template, for example, contains only minimal variables, whereas enlist_userdata includes both user variables and initialisation logic.

Tempita’s inheritance mechanism is the reverse of what you might expect. Inherited files, such as enlist_userdata, become the new template which can then reference variables from the higher-level file, such as enlist.
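
As a purely illustrative sketch of the Tempita syntax involved (the variable names below are hypothetical, not necessarily those MAAS exposes), a lower-level template might substitute and branch on values defined in the higher-level file:

# {{name}} substitutes a variable; {{if}}/{{endif}} branch on one
metadata_url: http://{{server_host}}/MAAS/metadata/
{{if release == 'xenial'}}
# release-specific configuration would go here
{{endif}}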

About template naming

MAAS interprets templates in lexical order by their filename. This order allows for base configuration options and parameters to be overridden based on a combination of operating system, architecture, sub-architecture, release, and machine name.

Some earlier versions of MAAS only support Ubuntu. If the machine operating system is Ubuntu, then filenames without {os} will also be tried, to maintain backward compatibility.

Consequently, template files are interpreted in the following order:

  1. {prefix}_{os}_{node_arch}_{node_subarch}_{release}_{node_name} or {prefix}_{node_arch}_{node_subarch}_{release}_{node_name}

  2. {prefix}_{os}_{node_arch}_{node_subarch}_{release} or {prefix}_{node_arch}_{node_subarch}_{release}

  3. {prefix}_{os}_{node_arch}_{node_subarch} or {prefix}_{node_arch}_{node_subarch}

  4. {prefix}_{os}_{node_arch} or {prefix}_{node_arch}

  5. {prefix}_{os}

  6. {prefix}

  7. generic

Here, {node_name} must be the machine’s name, as shown in the web UI URL.

The prefix can be either enlist, enlist_userdata, commissioning, curtin, curtin_userdata or preseed_master. Alternatively, you can omit the prefix and the following underscore.

For example, to create a generic configuration template for Ubuntu 16.04 (Xenial) running on an amd64 architecture, the file would need to be called ubuntu_amd64_generic_xenial_node.

To create the equivalent template for curtin_userdata, the file would be called curtin_userdata_ubuntu_amd64_generic_xenial_node.

Any file targeting a specific machine will replace the values and configuration held within the more generic files. If those values are still needed, you will need to copy them from the generic templates into your new file.
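
For example, to override the Curtin user data for a single (hypothetical) machine named node7 on a snap installation, you might start from a copy of the generic template so that its values are preserved:

cd /var/snap/maas/current/preseeds
cp curtin_userdata curtin_userdata_ubuntu_amd64_generic_xenial_node7
# then edit the copy, keeping any generic values you still need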

About Ubuntu kernels

MAAS supports four types of kernels for its Ubuntu machines:

  1. General availability kernels
  2. Hardware enablement kernels
  3. Hardware enablement kernels (pre-release)
  4. Low latency kernels

About general availability kernels

The general availability (GA) kernel is based on the generic kernel that ships with a new Ubuntu version. Subsequent fixes are applied regularly by the ‘stable’ stream used when setting up the global image source for MAAS.

MAAS denotes a GA kernel like this:

ga-<version>: The GA kernel reflects the major kernel version of the shipped Ubuntu release. For example, ‘ga-16.04’ is based on the ‘generic’ 4.4 Ubuntu kernel. As per Ubuntu policy, a GA kernel will never have its major version upgraded until the underlying release is upgraded.

About hardware enablement kernels

New hardware gets released all the time. If an Ubuntu release’s kernel predates that hardware, a machine deployed with the GA kernel is unlikely to support it. Canonical makes every effort to back-port more recent kernels, enabling more hardware; the result is a hardware enablement kernel, and the acronym HWE stands for “Hardware Enablement.”

You also gain kernel improvements and new features when installing an HWE kernel.

There is the notion of an HWE stack, which refers to the window manager and kernel when the Ubuntu host is running a desktop environment. HWE stacks do not apply to MAAS since machines are provisioned strictly as non-graphical servers.

Note that these back-ported/HWE kernels are only available for LTS releases (e.g. Trusty, Xenial, etc.). For example, the first available HWE kernel for Ubuntu 16.04 LTS (Xenial) will be the GA kernel from Ubuntu 16.10 (Yakkety).

Before MAAS 2.1 on Xenial, HWE kernels are referred to by the notation hwe-<release letter>. So, to install the Yakkety HWE kernel on Xenial, the hwe-y kernel is used. By default, when using the web UI, MAAS imports all available HWE kernels along with its generic boot images. So if you are importing Trusty images, then the following HWE kernels are included: hwe-u, hwe-v, hwe-w, hwe-x (presuming the Xenial HWE kernel is available).

In MAAS 2.1, starting with Xenial kernels, the notation has changed. The following is used to refer to the latest HWE kernel available for Xenial: hwe-16.04.

See LTS Enablement Stack (Ubuntu wiki) for the latest information on HWE.

About pre-release hardware enablement kernels

The pre-release HWE kernel is known as the edge HWE kernel.

MAAS denotes the edge kernel like this: hwe-<version>-edge.

So ‘hwe-16.04’ is considered older than ‘hwe-16.04-edge’.

See Rolling LTS Enablement Stack (Ubuntu wiki) for more information.

About low latency kernels

The low-latency kernel is based on the GA kernel, but uses a more aggressive configuration to reduce latency. It is categorised as a soft real-time kernel. For more information, see Criteria for real-time computing (Wikipedia).

MAAS denotes a low latency kernel in three ways:

  1. hwe-x-lowlatency: the Xenial low latency HWE kernel for Trusty
  2. ga-16.04-lowlatency: the low latency GA kernel for Xenial
  3. hwe-16.04-lowlatency: the low latency HWE kernel for Xenial

About choosing a kernel

The kernel installed on a machine during deployment is, by default, the Ubuntu release’s native kernel (GA). However, it is possible to tell MAAS to use a different kernel. Via the Web UI, MAAS can help you choose one of these kernels. There are three different contexts for your choice:

  1. globally (default minimum enlistment and commissioning kernel)
  2. per machine (minimum deploy kernel)
  3. per machine during deployment (specific deploy kernel)

About kernel boot options

MAAS can specify kernel boot options to machines on both a global basis (UI and CLI) and a per-machine basis (CLI-only). A full catalogue of available options can be found in the Linux kernel parameters list at kernel.org.

About resource pools

Resource pools allow administrators to logically group resources – machines and VM hosts – into pools. Pools can help you budget machines for a particular set of functions. For example, if you’re using MAAS to manage a hospital data centre, you may want to keep a certain number of machines reserved for provider use, whether that be for the charts, documentation, or orders application. You can use resource pools to reserve those machines, regardless of which of the three applications you end up loading onto a particular machine at any given time.

Administrators can manage resource pools on the Machines page in the web UI, under the Resource pools tab, or with the MAAS CLI. Also note that all MAAS installations have a resource pool named “default.” MAAS automatically adds new machines to the default resource pool.
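
For example, an administrator could create a pool and assign a machine to it with the CLI. A minimal sketch, where $PROFILE and $SYSTEM_ID are placeholders and the pool name is arbitrary:

maas $PROFILE resource-pools create name=providers description="Machines reserved for provider use"
maas $PROFILE machine update $SYSTEM_ID pool=providers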

How to customise machines

How to customise machine storage

Note that layouts can be set globally and on a per-machine basis. For additional information on storage layouts, see the Storage layouts reference section at the end of this article.

How to set global storage layouts

All machines will have a default layout applied when commissioned. An administrator can configure the default layout on the ‘Settings’ page, under the ‘Storage’ tab.

Important: The new default will only apply to newly-commissioned machines.

How to set per-machine storage layouts

An administrator can change the layout for a single machine, and customise that layout, provided this is done while the machine has a status of ‘Ready’. This is only possible via the CLI; see the CLI version of this article for instructions.

Only an administrator can modify storage at the block device level (again, provided the machine has a status of ‘Ready’).
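
As a rough sketch of what the CLI version of this article covers, applying a named layout to a ‘Ready’ machine looks like this ($PROFILE and $SYSTEM_ID are placeholders):

maas $PROFILE machine set-storage-layout $SYSTEM_ID storage_layout=flat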

How to set the default erasure configuration

A default erasure configuration can be set on the ‘Settings’ page by selecting the ‘Storage’ tab.

If the option “Erase machines’ disks prior to releasing” is chosen, users will be compelled to use disk erasure: the option is pre-selected in the machine’s view, and the user will be unable to deselect it.

With this default set, when the Release action is chosen, disk erasure is pre-selected in the machine’s view, and the user can still configure ‘secure erase’ and ‘quick erase’.

How to pre-seed with curtin

You can customise the Curtin installation by either editing the existing curtin_userdata template or by adding a custom file as described above.

Curtin provides hooks to execute custom code before and after installation takes place. These hooks are named early and late respectively, and both can be overridden to execute custom configuration in the ephemeral environment. Additionally, the late hook can be used to execute configuration for the machine being installed, a state known as in-target.

Curtin commands look like this:

foo: ["command", "--command-arg", "command-arg-value"]

Each component of the given command makes up an item in an array. Note, however, that the following won’t work:

foo: ["sh", "-c", "/bin/echo", "foobar"]

This syntax won’t work because the value of sh's -c argument is itself an entire command. The correct way to express this is:

foo: ["sh", "-c", "/bin/echo foobar"]

The following is an example of an early command that will run before the installation takes place in the ephemeral environment. The command pings an external machine to signal that the installation is about to start:

early_commands:
  signal: ["wget", "--no-proxy", "http://example.com/", "--post-data", "system_id=&signal=starting_install", "-O", "/dev/null"]

The following is an example of two late commands that run after installation is complete. Both run in-target, on the machine being installed.

The first command adds a PPA to the machine. The second command creates a file containing the machine’s system ID:

late_commands:
  add_repo: ["curtin", "in-target", "--", "add-apt-repository", "-y", "ppa:my/ppa"]
  custom: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo -en 'Installed ' > /tmp/maas_system_id"]

How to pre-seed with cloud-init

It’s easy to customise cloud-init via the web UI. Select a machine and choose ‘Take action >> Deploy’, then select a viable release (in this case, “Ubuntu 18.04…”) and check the box labelled “Cloud-init user-data…”.

Paste the desired script directly into the box that appears, and select “Start deployment for machine.” For example, to import an SSH key immediately after your machine deployment, you could paste this script:

#!/bin/bash
# Log the time of the import, then fetch the named user's public SSH key.
(
echo === $(date) ===
ssh-import-id foobar_user
) | tee /ssh-key-import.log

No script validation of any kind is provided with this capability. You will need to test and debug your own cloud-init scripts.

How to choose Ubuntu kernels

How to set a default minimum kernel for enlistment and commissioning

To set the default minimum enlistment and commissioning kernel (based on Ubuntu release: GA kernel) for all machines, visit the ‘General’ tab of the ‘Settings’ page and select a kernel in the ‘Default Minimum Kernel Version’ field of the Commissioning section. Don’t forget to click ‘Save’.

How to set a minimum deployment kernel for a machine

To set the minimum deploy kernel on a per-machine basis, click on a machine from the ‘Machines’ page of the web UI and switch to its ‘Configuration’ page. Click ‘Edit’ in the ‘Machine configuration’ section, select a kernel in the ‘Minimum Kernel’ field, and then click ‘Save changes’.

How to set a specific kernel during machine deployment

To set a specific kernel during deployment, select a machine from the ‘Machines’ page and choose ‘Deploy’ under ‘Take action’. Then choose a kernel from the (third) kernel field. Hit ‘Deploy machine’ to initiate the deployment.

MAAS verifies that the specified kernel is available for the given Ubuntu release (series) before deploying the machine.
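
The CLI equivalent is a deploy call that names the kernel. A minimal sketch, with $PROFILE and $SYSTEM_ID as placeholders:

maas $PROFILE machine deploy $SYSTEM_ID distro_series=xenial hwe_kernel=hwe-16.04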

How to set global kernel boot options

To set kernel boot options globally, as an admin, open the ‘Settings’ page and, on the ‘General’ tab, scroll down to the ‘Global Kernel Parameters’ section.

Type in the desired (space-separated) options and click ‘Save’. The contents of the field are used exactly as entered, so avoid extraneous characters.
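
For example, to send console output to the first serial port and disable kernel mode setting, the field might contain:

console=ttyS0,115200n8 nomodeset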

See How can I set kernel boot options for a specific machine? to set per-machine kernel boot options.

How to use resource pools

How to add a resource pool

Use the Add pool button to add a new resource pool. After giving your new pool a name and description, click Add pool to confirm.

How to delete a resource pool

To delete a resource pool, click the trashcan icon next to the pool.

If you delete a resource pool, all machines that belong to that resource pool will return to the default pool.

How to add a machine to a resource pool

To add a machine to a resource pool, on the Machines page, select the machine you want to add to the resource pool. Next, select the Configuration tab. Now select the resource pool and click the Save changes button.

How to remove a machine from a resource pool

To remove a machine from a resource pool, follow the same procedure you would use to add a machine, but select “default” as the new resource pool. This action will return the machine to the default resource pool.

How to add a VM host to a resource pool

You can add a VM host to a resource pool when you create a new VM host, or afterwards by editing the VM host’s configuration.

How to remove a VM host from a resource pool

To remove a VM host from a resource pool, follow the same procedure you would use to add a VM host to a resource pool, except select “default” as the new resource pool. This action will return the VM host to the default resource pool.

Storage layouts reference

There are three layout types:

  1. Flat layout
  2. LVM layout
  3. bcache layout

The layout descriptions below will include the EFI partition. If your system is not using UEFI, regard sda2 as sda1 (with an additional 512 MB available to it).

Flat layout storage reference

With the Flat layout, a partition spans the entire boot disk. The partition is formatted with the ext4 filesystem and uses the / mount point:

Name  Size         Type  Filesystem  Mount point
sda   -            disk
sda1  512 MB       part  FAT32       /boot/efi
sda2  rest of sda  part  ext4        /

The following three options are supported:

  1. boot_size: Size of the boot partition on the boot disk. Default is 0, meaning not to create the boot partition. The ‘/boot’ will be placed on the root filesystem.

  2. root_device: The block device on which to place the root partition. The default is the boot disk.

  3. root_size: Size of the root partition. Default is 100%, meaning the entire size of the root device.

LVM storage layout reference

The LVM layout creates the volume group vgroot on a partition that spans the entire boot disk. A logical volume lvroot is created for the full size of the volume group; is formatted with the ext4 filesystem; and uses the / mount point:

Name    Size         Type  Filesystem      Mount point
sda     -            disk
sda1    512 MB       part  FAT32           /boot/efi
sda2    rest of sda  part  lvm-pv(vgroot)
lvroot  rest of sda  lvm   ext4            /
vgroot  rest of sda  lvm

The following six options are supported:

  1. boot_size: Size of the boot partition on the boot disk. Default is 0, meaning not to create the boot partition. The ‘/boot’ will be placed on the root filesystem.
  2. root_device: The block device on which to place the root partition. The default is the boot disk.
  3. root_size: Size of the root partition. Default is 100%, meaning the entire size of the root device.
  4. vg_name: Name of the created volume group. Default is vgroot.
  5. lv_name: Name of the created logical volume. Default is lvroot.
  6. lv_size: Size of the created logical volume. Default is 100%, meaning the entire size of the volume group.
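
With the MAAS CLI, these options are passed alongside the layout name. A hedged sketch, where the profile, system ID, and volume names are placeholders:

maas $PROFILE machine set-storage-layout $SYSTEM_ID storage_layout=lvm vg_name=vgdata lv_name=lvdata lv_size=50%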

bcache storage layout reference

A bcache layout will create a partition that spans the entire boot disk as the backing device. It uses the smallest block device tagged with ‘ssd’ as the cache device. The bcache device is formatted with the ext4 filesystem and uses the / mount point. If there are no ‘ssd’ tagged block devices on the machine, then the bcache device will not be created, and the Flat layout will be used instead:

Name       Size          Type  Filesystem  Mount point
sda        -             disk
sda1       512 MB        part  FAT32       /boot/efi
sda2       rest of sda   part  bc-backing
sdb (ssd)  -             disk
sdb1       100% of sdb   part  bc-cache
bcache0    same as sda2  disk  ext4        /

The following seven options are supported:

  1. boot_size: Size of the boot partition on the boot disk. Default is 0, meaning not to create the boot partition. The ‘/boot’ will be placed on the root filesystem.
  2. root_device: The block device upon which to place the root partition. The default is the boot disk.
  3. root_size: Size of the root partition. Default is 100%, meaning the entire size of the root device.
  4. cache_device: The block device to use as the cache device. Default is the smallest block device tagged ssd.
  5. cache_mode: The cache mode to which MAAS should set the created bcache device. The default is writethrough.
  6. cache_size: The size of the partition on the cache device. Default is 100%, meaning the entire size of the cache device.
  7. cache_no_part: Whether or not to create a partition on the cache device. Default is false, meaning to create a partition using the given cache_size. If set to true, no partition will be created, and the raw cache device will be used as the cache.
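
The same pattern applies to the bcache options. A hedged sketch, with $PROFILE and $SYSTEM_ID as placeholders:

maas $PROFILE machine set-storage-layout $SYSTEM_ID storage_layout=bcache cache_mode=writeback cache_no_part=true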

VMFS6 storage layout reference

The VMFS6 layout is used for VMware ESXi deployments only, and it is required when configuring VMware VMFS Datastores. This layout creates all operating system partitions in addition to the default datastore. The datastore may be modified; new datastores may be created or extended to include other storage devices. The base operating system partitions may not be modified, because VMware ESXi requires them. Once this layout is applied, the only way to remove the operating system partitions is to apply another storage layout.

Name  Size       Type  Use
sda   -          disk
sda1  3 MB       part  EFI
sda2  4 GB       part  Basic Data
sda3  Remaining  part  VMFS Datastore 1
sda4  -          -     skipped
sda5  249 MB     part  Basic Data
sda6  249 MB     part  Basic Data
sda7  109 MB     part  VMware Diagnostic
sda8  285 MB     part  Basic Data
sda9  2.5 GB     part  VMware Diagnostic

The following options are supported:

  1. root_device: The block device upon which to place the root partition. Default is the boot disk.

  2. root_size: Size of the default VMFS Datastore. Default is 100%, meaning the remaining size of the root disk.

Blank storage layout reference

The blank layout removes all storage configuration from all storage devices. It is useful when you need to apply a custom storage configuration.

Warning: Machines with the blank layout applied are not deployable; you must first configure storage manually.

