In a previous post, Quocirca discussed how the cloud can be used to provide levels of business continuity and disaster recovery to meet an organisation’s needs around its own business risk profile.
However, data can be stored in many formats, and the granularity of that storage affects how well an organisation can recover information, functions or transactions. There are three basic levels to consider: files, storage images and applications.
First, at the file level, the most common data recovery need is the loss of a single file. A user may have deleted the file by mistake, may have over-written it or may simply have mislaid it. The best way to recover such a file is to have a mirrored copy of the primary file store it resides in, provided that a degree of intelligence is built in. Direct mirroring of all actions carried out on a file store not only ensures that all files saved are replicated; it also means that all files deleted or modified are reflected in the mirror as well. A user deleting a file in one file store therefore deletes it in all mirrors, and so is no better off when trying to recover it.
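The weakness of direct mirroring can be shown with a minimal sketch in Python, using in-memory dictionaries as stand-ins for real file stores (the class and file names are illustrative, not any particular product's API):

```python
class MirroredStore:
    """Direct mirroring: every action on the primary is replayed on all mirrors."""

    def __init__(self, mirror_count=2):
        self.primary = {}
        self.mirrors = [{} for _ in range(mirror_count)]

    def save(self, name, data):
        self.primary[name] = data
        for mirror in self.mirrors:
            mirror[name] = data   # saves are faithfully replicated...

    def delete(self, name):
        self.primary.pop(name, None)
        for mirror in self.mirrors:
            mirror.pop(name, None)   # ...but so are deletions

store = MirroredStore()
store.save("report.doc", "draft v1")
store.delete("report.doc")   # an accidental delete
# The file is now gone from the primary *and* every mirror, so the
# mirror offers no protection against the most common loss scenario.
print(any("report.doc" in m for m in store.mirrors))  # False
```

Because deletes propagate just as reliably as saves, the mirror protects against hardware failure but not against user error.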
Phased mirroring can be implemented, but is of limited use. Here, a time delay is built into the mirroring, so that if a file is deleted or changed there is a grace period in which the user can change their mind before the action is reflected in the mirror. However, the delay also applies to file saves, and it raises the question of how long such a grace period should be: a few seconds, minutes, hours, days?
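A rough sketch of phased mirroring, again with an in-memory stand-in (the names and the queue-based design are assumptions for illustration only), makes the trade-off visible: the grace period that protects against a hasty delete also leaves fresh saves unprotected until it elapses.

```python
import time

class PhasedMirror:
    """Actions are queued and applied to the mirror only after a grace period."""

    def __init__(self, grace_seconds):
        self.grace = grace_seconds
        self.mirror = {}
        self.pending = []   # list of (due_time, action, name, data)

    def record(self, action, name, data=None):
        # Every action, save or delete, waits out the same grace period.
        self.pending.append((time.time() + self.grace, action, name, data))

    def flush(self, now=None):
        """Apply any queued actions whose grace period has expired."""
        now = time.time() if now is None else now
        still_pending = []
        for due, action, name, data in self.pending:
            if due > now:
                still_pending.append((due, action, name, data))
            elif action == "save":
                self.mirror[name] = data
            else:   # delete
                self.mirror.pop(name, None)
        self.pending = still_pending

pm = PhasedMirror(grace_seconds=3600)
pm.record("save", "report.doc", "v1")
pm.flush()
# Within the grace period the save has NOT reached the mirror yet:
# if the primary store fails now, the new file is simply lost.
print("report.doc" in pm.mirror)  # False
```

The grace period thus trades one failure mode (propagated deletes) for another (unreplicated saves), which is why the post argues it is of limited use.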
A far better way is to build in basic versioning, where a number of copies of the file are kept as it is saved. The number of versions retained should reflect the importance of the data held within the file: information of lower importance may have just one earlier version stored, whereas more important project documents, or information that may be required to feed into governance and compliance systems, may have many more versions enabled.
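The idea of keeping more versions for more important data can be sketched as follows (a toy in-memory model, not any vendor's versioning API; the file names and limits are invented for illustration):

```python
from collections import defaultdict, deque

class VersionedStore:
    """Keep the last N versions of each file; N reflects the file's importance."""

    def __init__(self, default_versions=2):
        self.default_versions = default_versions
        self.limits = {}                   # per-file overrides for important data
        self.history = defaultdict(deque)  # name -> versions, newest last

    def set_importance(self, name, versions):
        self.limits[name] = versions

    def save(self, name, data):
        versions = self.history[name]
        versions.append(data)
        limit = self.limits.get(name, self.default_versions)
        while len(versions) > limit:
            versions.popleft()             # discard the oldest version

    def recover(self, name, versions_back=0):
        """versions_back=0 is the latest save, 1 the one before, and so on."""
        return self.history[name][-1 - versions_back]

store = VersionedStore(default_versions=2)
store.set_importance("contract.doc", 5)   # compliance-critical: keep more history
for draft in ("v1", "v2", "v3"):
    store.save("contract.doc", draft)
print(store.recover("contract.doc", versions_back=1))  # v2
```

Note that, unlike a plain mirror, an accidental over-write here is recoverable, because earlier versions survive each save.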
Doing such version control within an organisation can rapidly give rise to massive growth in storage requirements, and many organisations would struggle to put in place the infrastructure required to manage this. This is where cloud storage comes into its own: it is easy to “thin provision” storage volumes and manage them dynamically, sharing the underlying costs of a mass storage platform between the cloud provider’s many customers. Another key benefit of using cloud storage in this way is that it abstracts the file from the user’s immediate environment (i.e. offsite), so the data is protected from device failure or even from site failure. Data replicated within an organisation’s own data centre may not survive a catastrophic large-scale failure.
Beyond replicating files, the second level is the need to back up disk images. A user with a full-function device (such as a PC or laptop with installed software and data) can find themselves incapable of working should such a device fail. Rebuilding a machine can take a long time, and if the associated data has been lost, the cost to the organisation can be high.
Taking a full image of a device’s storage systems means that, on failure, the device can be rebuilt very rapidly, or a new device can be provisioned using the saved image and the employee can soon be working again. An image can also be mounted as a virtual device, giving the user access to a virtual desktop while a new physical device is provisioned. Again, the cloud provides a cost-effective means of delivering such functionality, without the customer organisation having to own all the underlying hardware, operating systems and software stacks that underpin it, and again with the benefit of storage being off-site.
Some may think that using such image files negates the need for file mirrors. As all files are included in an image backup, individual files can indeed be recovered from one. However, continuous imaging is not practical, so files created or changed between image snapshots can be lost. To recover a file, the correct image has to be identified, mounted and opened, and the file system then interrogated to give the user the capability to recover the one file they are looking for. This may not be the best way to do things, and it does not easily allow for versioning.
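The recovery path just described can be sketched as follows, with plain dictionaries standing in for mounted images (a real image would of course have to be mounted through the relevant filesystem tooling; the timestamps and file names are invented):

```python
# Each image is a (timestamp, contents) pair; contents stands in for the
# file system visible once that image has been mounted.
images = [
    (1, {"report.doc": "v1", "notes.txt": "a"}),
    (2, {"report.doc": "v2"}),    # notes.txt was deleted before this image
    (3, {"budget.xls": "q3"}),    # report.doc was deleted before this image
]

def recover_file(images, name):
    """Walk the images newest-first until one contains the requested file."""
    for timestamp, contents in sorted(images, key=lambda p: p[0], reverse=True):
        if name in contents:
            return timestamp, contents[name]
    return None   # file was saved and lost entirely between two image runs

print(recover_file(images, "report.doc"))  # (2, 'v2')
```

Even in this toy form, the drawbacks the paragraph notes are visible: every recovery means searching across whole images, only the versions that happened to exist at snapshot time are available, and a file whose whole life fell between two snapshots is unrecoverable.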
The third level is the need to keep full applications running. In the world of virtualisation, it is now possible to package a complete application as a virtual machine, including everything from the operating system upwards in the stack: the application server platform, middleware connectors, additional services the application depends on, and so on. Such virtual machines (VMs) should not include any live data, however, as this would mean the standby VM has to be kept synchronised with the live instance at all times. Data should be stored outside of the VM and mirrored separately. By creating backups of VMs, should anything happen to the live instance (e.g. a failure in the physical underpinnings or corruption of the image), a new instance of the image can be spun up rapidly to enable work to continue.
Each of the three levels of granularity has its part to play in how an organisation should seek to ensure it has the best approach to business continuity and disaster recovery. Although all three could be carried out in-house, cloud computing brings technical and business benefits to the fore: from domain expertise in how to manage data, through economies of scale in providing large storage capabilities, to multi-level data management in which the provider’s own backup and restore policies build on your organisation’s own. For many organisations struggling to “do more with less”, the cloud is the only way to gain access to such levels of technical information assurance; it brings large-organisation capabilities into the reach of many mid-market and small and medium enterprise (SME) organisations.
In fact, such capabilities are increasingly available from specialist providers of business continuity and disaster recovery services, and many of these do not even run their own storage infrastructure. How? You’ve guessed it: they turn to other cloud service providers for the functionality itself.