Don’t head down a cloud cul-de-sac

Cloud computing promises much when it comes to moving workloads between dedicated private and shared public infrastructure, so that the use of resources can grow and shrink as needed. As mentioned in the last post from Quocirca, the strong growth in the adoption of private cloud is good for public cloud providers, provided there is the capability to port workloads between the two.

The promise is good, but in many cases the implementation has left much to be desired. The main problem is that there is a multitude of cloud platforms that have been built either on the underpinnings of old-style operating systems and application server stacks (and as such struggle to scale and share resources), or in a proprietary manner (and as such can only share workloads or resources among themselves, not with different systems).

All that is required is a set of standards to enable a reasonable level of commonality at the compute, storage and network layers, and everything will be OK. On the face of it, there should be few problems in creating such standards. Like the proverbial bus, stand around for long enough and a whole load of standards will come along at the same time.

It is all well and good for the various industry bodies – such as the Institute of Electrical and Electronics Engineers (IEEE), the Cloud Standards Customer Council (CSCC), the Storage Networking Industry Association (SNIA), the Distributed Management Task Force (DMTF), the Open Data Center Alliance (ODCA), the Cloud Security Alliance (CSA) and the several tens of others working assiduously in this space – to create de jure standards, but unless they reflect the real needs of users in the market, and do so quickly, the cloud world will already have gone proprietary.

And here lies the biggest problem: your standard may not be my standard, and we’ll need a third standard to act as the bridge between what I am using and what you are using. The problem with de jure standards is that they can take ages to agree – and far less time for the vendors nominally supporting them to break, by adding “extensions” here and there.

However, cloud has been around for a while now, and there are some identifiable winning bets out there. The 800-pound gorilla has to be Amazon Web Services (AWS) with its Elastic Compute Cloud (EC2) and its Simple Storage Service (S3). However, for a number of reasons, AWS is not suitable for many organisations looking to move to a cloud environment, whether this is down to cost, contracts or specific geographic needs. What is important is not to shut any doors on integration between existing internal and external applications and services and a chosen public cloud platform. At the storage level, S3 seems to be the direction the crowd is moving in; at the compute level, EC2 is not quite such a certain bet, due to the extra complexities of dealing with compute workloads compared with storage workloads – and because different cloud platform providers seem keener to compete in this area.

This is where the use of application programming interfaces (APIs) comes in. By utilising the same APIs, cloud providers can make it easier for workloads to be ported across different platforms. Lunacloud uses the Cloudian storage system, which along with other cloud platforms such as Eucalyptus and the open source OpenStack (backed by Rackspace) supports the S3 APIs.
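To illustrate why a shared storage API matters, here is a minimal sketch of how an S3-style request is signed under Signature Version 2, the scheme in use at the time of writing. The point is that the signing logic is identical whichever S3-compatible endpoint receives the request – only the target host changes. The credentials, bucket and object names below are hypothetical, for illustration only.

```python
import base64
import hashlib
import hmac
from email.utils import formatdate


def s3_v2_auth_header(access_key, secret_key, verb, resource, date):
    """Build an S3 Signature Version 2 Authorization header.

    The string-to-sign is: verb, Content-MD5, Content-Type, Date and
    the canonicalised resource, joined by newlines (MD5 and Content-Type
    are empty here for a simple GET).
    """
    string_to_sign = "\n".join([verb, "", "", date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return "AWS %s:%s" % (access_key, signature)


# Hypothetical credentials and bucket -- illustration only.
date = formatdate(usegmt=True)
header = s3_v2_auth_header("EXAMPLEKEY", "examplesecret",
                           "GET", "/mybucket/myobject", date)

# The identical request, signed identically, could be sent to
# s3.amazonaws.com or to any S3-compatible store, such as a
# Cloudian, Eucalyptus or OpenStack installation.
```

Because the request format and signing are shared, moving stored data between providers becomes a matter of re-pointing the endpoint rather than rewriting the application.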
What is still needed is agreement over compute APIs. Some platforms already support the EC2 API, but only at a basic level, and this does not mean that compute workloads are portable across different cloud platforms. Only time will tell whether the world has to wait for an agreed de jure standard, or whether some company railroads through its own way of doing this. Cloud can only deliver fully on its promise when compute portability is fully in place, enabling organisations to choose where a specific workload should run: in their private cloud, or in a public cloud environment.

It may well be that the answer is not to force through a base-level standard at the platform level, but to create what is essentially a cloud enterprise service bus (ESB), where different connectors can be created to link different cloud compute services together, enabling workloads to be ported, on the fly, between platforms.
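The connector idea above can be sketched in code. This is a speculative illustration, not any existing product: each hypothetical connector translates between one platform's native API and a neutral workload description, and the bus routes workloads between registered connectors. All class and method names here are invented for the sketch.

```python
from abc import ABC, abstractmethod


class CloudConnector(ABC):
    """Adapter between the bus's neutral workload format and one
    platform's native API (hypothetical interface)."""

    @abstractmethod
    def export_workload(self, workload_id):
        """Return the workload in a provider-neutral form."""

    @abstractmethod
    def import_workload(self, neutral_workload):
        """Instantiate a neutral workload on this platform."""


class CloudServiceBus:
    """Routes workloads between platforms via their connectors."""

    def __init__(self):
        self._connectors = {}

    def register(self, name, connector):
        self._connectors[name] = connector

    def move(self, workload_id, source, target):
        # Export from the source platform, import into the target:
        # neither platform needs to know about the other's API.
        neutral = self._connectors[source].export_workload(workload_id)
        return self._connectors[target].import_workload(neutral)


class InMemoryConnector(CloudConnector):
    """Toy connector standing in for a real platform adapter."""

    def __init__(self):
        self.workloads = {}

    def export_workload(self, workload_id):
        return {"id": workload_id, "spec": self.workloads[workload_id]}

    def import_workload(self, neutral_workload):
        self.workloads[neutral_workload["id"]] = neutral_workload["spec"]
        return neutral_workload["id"]


# Move a workload from a "private" to a "public" platform.
bus = CloudServiceBus()
private, public = InMemoryConnector(), InMemoryConnector()
private.workloads["web-tier"] = {"vcpus": 2}
bus.register("private", private)
bus.register("public", public)
bus.move("web-tier", "private", "public")
```

The appeal of this shape is that each new platform needs only one connector to the bus, rather than a pairwise bridge to every other platform.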

The world cannot wait for the de jure groups to create coherent, cohesive and holistic cloud standards – this is like trying to boil the ocean while the world changes around you. Basic de facto APIs are already available at the storage level, and the network layer is largely covered by existing network standards and approaches. The key remains compute compatibility: whether AWS EC2 will follow S3 in becoming the de facto standard, or whether a cloud ESB or some alternative approach becomes the winner, is as yet unclear.

Organisations wanting to gain the early adopter benefits of cloud need to know that they are not adopting something that will either push them down a cul-de-sac or involve them in constant change as they chase some level of working interoperability. Quocirca recommends that organisations choose carefully – any provider should be able to discuss its future plans around interoperability openly. Beware those that sound closed to the idea of moving workloads between platforms.
