When considering two or more items, there is the concept of “comparing apples with apples” – i.e. making sure that what is under consideration is being compared objectively. Comparing a car journey with a flight for getting between London and Edinburgh is therefore reasonable; the same is not true for London to New York.
The same problems come up in the world of virtualised hosting. Here, the concept of a standard unit of compute power has been wrestled with for some time, and the results have led to confusion. Amazon Web Services (AWS) works against an EC2 Compute Unit (ECU), Lunacloud against a virtual CPU (vCPU). Others have their own units, such as Hybrid Compute Units (HCUs) or Universal Compute Units (UCUs) – while others do not make a statement of a nominal unit at all.
Behind the confusion lies a real problem: the underlying physical hardware is not a constant. As new servers and CPU chips emerge, hosting companies will procure the best price/performance option for their general workhorse servers. Therefore, over time there could be a range of older and newer generation Xeon CPUs with different chipsets and different memory types on the motherboard. Abstracting these systems into a pool of virtual resources should allow for a method of providing comparable units of compute power – but each provider seems to have decided that its own choice of unit is the one to stick with, and so true comparisons are difficult. Even if a single comparative unit could be agreed on, it would remain pretty meaningless.
Let’s take two of the examples listed earlier – AWS and Lunacloud. 1 AWS ECU is stated as being the “equivalent of a 1.0-1.2 GHz 2007 (AMD) Opteron or 2007 (Intel) Xeon processor”. AWS then goes on to say that this is also the “equivalent of an early-2006 1.7GHz Xeon processor referenced in our original documentation”. No reference to memory or any other resource, so just a pure CPU measure here. Further, Amazon’s documentation states that AWS reserves the right to add, change or delete any definitions as time progresses.
Lunacloud presents its vCPU as the equivalent of a 2010 1.5GHz Xeon processor – again, a pure CPU measure.
Note the problem here – the CPUs being compared are three years apart, and with a 50% spread on clock speed. This is also where the granularity gets murky: a 2007 Xeon chip could have been built on the Allendale, Kentsfield, Wolfdale or Harpertown Intel architectures. The first two of these were 65 nm architectures, the second two 45 nm. The differences in possible performance were up to 30% across these architectures – depending on workload. A 2010 Xeon processor would have been built on the Beckton 45 nm architecture.
Now, here’s a bit of a challenge: Intel’s comprehensive list of Xeon processors (see http://www.intel.com/pressroom/kits/quickreffam.htm) does not list a 1.0-1.2 GHz Xeon processor from 2007 (or any other year), other than a Pentium III Xeon from 2000. So where has this mysterious 1.0 or 1.2GHz Xeon processor come from? What we see is the creation of a nominal, convenient unit of compute power that the hosting company can use as a commercial unit. The value to the purchaser is in being able to order more of the same from the one hosting company – not in being able to compare actual capabilities between providers.
Furthermore, the CPU (or a virtual equivalent) is not the end of the problem. Any compute environment has dependencies between the CPU, its supporting chipsets, the memory and storage systems and the network knitting everything together. Surely, though, a gigabyte of memory is a gigabyte of memory, and 10GB of storage is 10GB of storage? Unfortunately not – there are many different types of memory that can be used, and the acronyms get more technical and confusing here. As a base physical memory technology, is the hosting company using DDR RDIMMs, DDR2 FB-DIMMs or even DDR3? Is the base storage just a RAIDed JBOD, DAS, NAS, a high-speed SAN or an SSD-based PCI-X attached array? How are such resources virtualised, and how are the virtual resource pools then allocated and managed?
How is the physical network addressed? Many hosting companies do not use a virtualised network, so network performance is purely down to how the physical network is managed. Others have implemented full fabric networking with automated virtual routing and failover, providing different levels of priority and quality of service capabilities.
A single definition of a “compute unit” that allows off-the-page comparisons between the capabilities of one environment and another for a specific workload is unlikely to emerge. Even if it could be done, it still wouldn’t help to define the complete end user experience, as the wide area network connectivity then comes into play.
Can anything be done? Yes – back in the dim, dark depths of the physical world, a data centre manager would take servers from different vendors when looking to carry out a comparison and run some benchmarks or standard workloads against them. As the servers were being tested in a standardised manner under the control of the organisation, the results were comparable – so apples were being compared to apples.
The same approach has to be taken when it comes to hosting providers. Any prospective buyer should set themselves a financial ceiling and then try to create an environment for testing that fits within that ceiling. This ceiling is not necessarily aimed at creating a full run-time environment, and may be as low as a few tens of pounds. Once an environment has been created, load up a standardised workload that is similar to what the run-time workload is likely to be, and measure key performance metrics. Comparing these key metrics will then provide the real-world comparison that is needed – and arguments around ECU, vCPU, HCU, UCU or any other nominal unit become moot.
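As an illustration, such a standardised workload can be as simple as a script run identically on each candidate environment. The sketch below (in Python, using only the standard library) times three crude placeholder tasks – one CPU-bound, one memory-bound, one disk-bound. The task choices and sizes are arbitrary assumptions for illustration, not a recommended benchmark suite; a real test should mimic the actual run-time workload as closely as possible.

```python
import os
import tempfile
import time


def time_it(fn):
    """Run fn once and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start


def cpu_task(n=50_000):
    """CPU-bound placeholder: naive prime counting up to n."""
    count = 0
    for candidate in range(2, n):
        d = 2
        while d * d <= candidate:
            if candidate % d == 0:
                break
            d += 1
        else:
            count += 1
    return count


def memory_task(size_mb=32):
    """Memory-bound placeholder: allocate and copy a large buffer."""
    buf = bytearray(size_mb * 1024 * 1024)
    return len(bytes(buf))


def disk_task(size_mb=8):
    """I/O-bound placeholder: write and fsync a temporary file."""
    chunk = b"\0" * (1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
        path = f.name
    os.remove(path)


def run_benchmarks():
    """Return elapsed seconds per task; lower is better."""
    return {
        "cpu_seconds": time_it(cpu_task),
        "memory_seconds": time_it(memory_task),
        "disk_seconds": time_it(disk_task),
    }


if __name__ == "__main__":
    for metric, seconds in run_benchmarks().items():
        print(f"{metric}: {seconds:.3f}")
```

Because the same script runs under the buyer’s control on every provider, the resulting timings are directly comparable, regardless of how each provider labels its nominal compute unit.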
Only through such real-world measurement will an apple be seen to be an apple – as sure as eggs are eggs.