Software-Defined. Hardware?!

In the first blog post from our IO team, we talk about software-defined infrastructure.

By Edward Worthington

The current buzz around Data Centre expansion and growth seems to be about the concept of everything being software-defined. Here at VIA, it’s no different, and we’ve learnt the hard way that if something is software-defined, you’d better buy the right hardware.

For the last few months in our lab, we have been grappling with a new virtualised environment to run the VIA Cloud. We set out, as has always been our ethos, to keep things simple. After all, the simpler something is, the less likely it is to go wrong, and if it does go wrong, it’s easier to fix. The goal of this new project for the IO (Infrastructure and Operations) team was to make the provisioning and management of anything needed by our team or the dev team quick and straightforward. Need a new production-ready SQL Always-On cluster? “Sure, it’ll be ready in 15 minutes”. A second Redis test lab for a dev team member? “No problem, 5 minutes”. We knew from the outset that we needed automation and instantly available scale, and we recognised we could accomplish this far more easily if everything in the Data Centre was software-defined.

Software-Defined Hardware

We’ve always used Hewlett Packard (now HPE) servers in our Data Centres. From our years in the IT industry, we know they’re the best: not the cheapest, nor the most popular, but the best. (Now, we know this will split opinion, but this article isn’t going into why we think that.) So when we first started, we built out a small 3-server lab to test multiple “cloud” or “hypervisor” technologies. We grappled with OpenStack for a month and, after a number of unsuccessful attempts with various distributions, we were feeling pretty defeated. Microsoft appear to have made a lot of progress with their software-defined Data Centre suite, mainly, we assume, through running thousands of servers in their own Data Centres for the Azure Cloud. So, we decided to give Windows Server 2016 a go.

Whilst we know OpenStack is a very powerful product, it’s designed for massive scale, the kind of massive scale we don’t currently have. This perhaps put us on the wrong foot when we started to trial Windows Server 2016. We repurposed the same hardware we’d used for the OpenStack trial and quickly learnt how heavily a software-defined system relies on the underlying hardware. We were testing Microsoft’s software-defined networking solution and their software-defined storage solution, known as Storage Spaces Direct. Whilst we were checking the list of features REQUIRED by Microsoft against the list of features SUPPORTED by the HPE NICs and storage controllers, we realised that the two sides sometimes implement these technologies in very different ways. Good examples include RDMA (Remote Direct Memory Access), RSS (Receive Side Scaling) and VMQ (Virtual Machine Queue), all of which can get very complicated, very quickly, because each of these features has been implemented differently by the software manufacturers and the hardware manufacturers.
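
To give a flavour of that checking exercise, here is a minimal sketch (not part of our actual tooling) of how you could query those three features on a Windows Server 2016 host from Python, by shelling out to the standard NetAdapter PowerShell cmdlets (Get-NetAdapterRdma, Get-NetAdapterRss, Get-NetAdapterVmq). The feature-to-cmdlet mapping and the pass/fail output are illustrative assumptions; whether a feature shows as “Enabled” here says nothing about how well two vendors’ implementations of it will actually cooperate.

```python
# Illustrative sketch only: list RDMA, RSS and VMQ status for each NIC on a
# Windows Server 2016 host by calling the standard NetAdapter PowerShell
# cmdlets. This checks what the adapters report as enabled; it is not a
# substitute for the software vendor's own hardware compatibility guidance.
import json
import subprocess

# Features Microsoft's software-defined networking/storage stack leans on,
# mapped to the cmdlet that reports each one (assumed mapping for this sketch).
REQUIRED_FEATURES = {
    "RDMA": "Get-NetAdapterRdma",
    "RSS":  "Get-NetAdapterRss",
    "VMQ":  "Get-NetAdapterVmq",
}

def query(cmdlet: str):
    """Run a NetAdapter cmdlet and return its output as a list of dicts."""
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"{cmdlet} | Select-Object Name, Enabled | ConvertTo-Json"],
        capture_output=True, text=True, check=True,
    ).stdout
    data = json.loads(out) if out.strip() else []
    # ConvertTo-Json returns a bare object when there is only one adapter.
    return data if isinstance(data, list) else [data]

if __name__ == "__main__":
    for feature, cmdlet in REQUIRED_FEATURES.items():
        for adapter in query(cmdlet):
            status = "enabled" if adapter.get("Enabled") else "MISSING/DISABLED"
            print(f"{feature:4} on {adapter.get('Name', '?')}: {status}")
```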

So, in short, the lesson learnt here is that just because the vendors have built their systems to a standard doesn’t mean those systems like talking to each other. After all the trial and error, it’s clear that best practice is always to check with the software manufacturer that the hardware for anything “software-defined” is fully supported.

On a side note, after all our experiences, we’ve also decided that software-defined should really be called software-controlled, because the software is now so much more heavily reliant on the hardware and vice versa. But we won’t hold our breath waiting for the industry to adopt our term!
