Deploying OpenStack all day long: The maths behind the magic.

by Canonical on 2 June 2014

Reflecting on my team’s experience at the OpenStack Summit in Atlanta, I can say it was a lot of fun and truly inspiring.

Once again, Canonical’s presence at the OpenStack Summit was driven by doing things “for real”, whether in Mark Shuttleworth’s keynote, where he deployed multiple OpenStack clouds on multiple platforms, or live on the Ubuntu stand in the Marketplace Expo Hall.

Many have already commented on the stunning bits of equipment (informally known as “Orange Boxes”) we were using to demonstrate Metal as a Service (MAAS), Juju, OpenStack and other workloads. However, I’d like to talk a little bit more about what the team was doing behind the scenes and what made it possible.

OpenStack deployments at scale

OpenStack at scale usually means very large deployments of hundreds or thousands of physical servers. And while Canonical is certainly no stranger to that (most of the largest production OpenStack clouds run on Ubuntu, and many of those are backed by Canonical’s “Ubuntu Advantage” support programme), this deployment at scale was different. Our challenge was to show each person or group who came to our stand a fresh deployment of OpenStack while they asked us questions and we got a chance to explore their requirements.

To give you an idea of scale, we had five of the 10-node “Orange Box” clusters on the stand and two more in private meeting rooms where we met with customers and partners. The OpenStack Summit Marketplace Expo Hall was open for visitors around 9 hours each day.

9 hours x 3 days gives us 27 hours of live demo time in the hall. Not everyone wanted to see us deploying OpenStack: some wanted to see workloads running on top of OpenStack, while others just wanted a high-level overview of what we could do. So let’s cut that back by a generous 20% to account for the different needs of visitors to our stand. That still leaves us with around 21 hours of demo time.

As mentioned, we had a total of seven clusters running all the time, and we had refined the process so that each run took us about 10 minutes to prepare and 20 minutes to deploy, i.e. half an hour per deployment of OpenStack. A bit more simple maths gives us: 21 hours / 0.5 hours = 42 deployments of OpenStack per cluster over that period. Multiplying 42 by the seven clusters gives us 294, or around 300 deployments of OpenStack performed during the course of that 3-day event.
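The back-of-the-envelope maths above can be checked in a few lines of shell, using the same figures quoted in the text:

```shell
# Back-of-the-envelope maths behind the "~300 deployments" figure.
hall_hours=27      # 9 hours x 3 days of expo time
demo_hours=21      # after trimming ~20% for non-deployment demos
cycle_minutes=30   # ~10 min prep + ~20 min deploy per run
clusters=7

per_cluster=$(( demo_hours * 60 / cycle_minutes ))
total=$(( per_cluster * clusters ))
echo "${per_cluster} deployments per cluster, ${total} in total"
# prints: 42 deployments per cluster, 294 in total
```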

One of many live OpenStack installs done over the 3 days.

300 Deployments of OpenStack in 3 days… how?

So the first and most obvious question is: how did we do it? The answer is twofold.

Total Automation

The first and probably most important factor is the total automation of everything using Juju and MAAS. MAAS handles the hardware provisioning part of the process: it takes your servers from bare metal to a state where they are ready to have OpenStack installed on them. Juju is the service orchestration tool that takes your available resources, installs the right software components on them in the right order, and creates the relationships between those components.

MAAS and Juju are fully open source and available to download and try for anyone who is interested.
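To give a flavour of what that automation looks like in practice, here is a minimal sketch of the command flow. It assumes MAAS is already set up with commissioned machines and registered with Juju as a cloud called `mymaas`; the cloud name is illustrative, and exact charm and bundle names vary between Juju releases:

```shell
# Sketch only: assumes MAAS has commissioned machines available and that
# Juju has been configured to use it as a cloud named "mymaas".
juju bootstrap mymaas          # MAAS powers on a node and installs the Juju controller
juju deploy openstack-base     # deploy a reference OpenStack bundle across the nodes
juju status                    # watch the services come up and relate to each other
```

Because MAAS handles the provisioning and Juju the orchestration, re-running a deployment is mostly a matter of releasing the machines back to MAAS and issuing the deploy command again, which is what made a 30-minute cycle repeatable on the show floor.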

Talented Experts

The majority of the demonstrations were delivered by the Canonical Sales Engineering team, a group that I’m very proud to lead. However, we were also supported by colleagues from Cloud DevOps, Canonical Technical Services and other groups.

We also had members of our OpenStack, Juju, MAAS and Landscape engineering teams manning the stand at one point or another, and a number of our Cloud Architects and Cloud Consultants pitched in between the sessions they were attending.

If we can do this on a show floor … imagine what we could do for your datacentre

Naturally, a quick 30-minute demonstration is nothing like a full production deployment with uptime and performance SLAs. However, this should give you an idea of what is possible.

OpenStack is complex, but by using tools like MAAS and Juju to provide a best-of-breed OpenStack deployment experience, you can automate much of that complexity, freeing up your time and resources to focus on delivering your business objectives and speeding up your time to production.

If you’d like to find out more about how we can help you go from bare metal to a fully deployed production OpenStack cloud in your datacentre, then please get in touch with us and we’d be happy to help.
