Need to set up servers in remote locations?

by Marcin Bednarz on 6 June 2019

Use bare metal provisioning with a top-of-the-rack switch

When deploying a small-footprint environment such as an edge computing site, 5G low-latency services, a site support cabinet or a baseband unit, it's critical to establish the optimal number of physical servers needed for the setup. While several approaches exist, bare metal provisioning through KVM can often be the most reliable option. Here's why.

For every physical server in such a constrained physical environment, there is an associated cost.

In the case of an edge deployment, this cost can be measured in (among other properties):

  • Capital and operational expenses
  • Power usage
  • Dissipated heat
  • The actual real estate it occupies

Ways to set up servers in remote locations

One approach would be to ship a dedicated server to every remote location to act as an infrastructure node. Typically, this would require an additional node (or committed shared resources), which might not align with remote-site footprint constraints.


Another option is stretching the provisioning and management network across the WAN and provisioning all the servers from a central location. This approach might, however, introduce unnecessary latency and delays in server provisioning. It also requires fairly sophisticated network configuration to account for the security, reliability and scale of remote-site deployments.

So what other options exist? What common infrastructure component is always present in every remote location? The answer is quite straightforward – every single site needs basic network connectivity, provided through a top-of-the-rack (or site) switch. It is this critical component that enables servers to communicate with the rest of the network and deliver required functions such as application servers, VNFs, and container and virtualisation platforms.

How do I re-purpose nodes to provision different operating systems?

Modern switches can run Linux as their underlying operating system, enabling infrastructure operators to run applications directly on these top-of-the-rack devices, either through KVM or through snap support.

A great example of a workload that can run on a top-of-the-rack switch is a bare metal provisioning solution such as MAAS. By deploying MAAS, we can solve the system provisioning challenge without unnecessary complexity. Running a lightweight version of MAAS on a top-of-the-rack switch reduces friction in small-footprint environments while providing an open, API-driven way to provision and repurpose nodes in every remote location. This not only enables fast and efficient server provisioning but also eliminates the drawbacks of the alternatives mentioned above.
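To illustrate what "API-driven" means in practice, here is a minimal sketch of remotely provisioning and repurposing a node against a MAAS endpoint running at the edge site. It assumes the python-libmaas client library and uses a placeholder URL and API key; the exact release names and keyword arguments may differ depending on your MAAS version.

    # Sketch: provision and repurpose a node via the MAAS API.
    # Assumes python-libmaas (pip install python-libmaas); the URL and
    # API key are placeholders for a MAAS running on a site switch.
    from maas.client import connect

    client = connect(
        "http://tor-switch.example.com:5240/MAAS/",
        apikey="<api-key-from-maas>",
    )

    # Allocate any machine that is currently unused ("Ready") at this site.
    machine = client.machines.allocate()

    # Deploy it with a chosen Ubuntu release; MAAS drives PXE boot,
    # imaging and first-boot configuration over the local network.
    machine.deploy(distro_series="bionic")

    # Once the workload is retired, release the node so it returns to the
    # pool and can be repurposed for something else.
    machine.release()

The same calls work whether MAAS runs on a dedicated infrastructure node or on the switch itself, which is what makes the switch-hosted approach attractive for constrained sites.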

Contact us to learn more
