Cloud computing has led to a paradigm shift in client-server technology. Just as the mainframe morphed into mini-computing, which in turn led to the client-server model, the ubiquity of the cloud is the next phase in the evolution of IT. Applications, data and services are now being moved beyond the edge of the enterprise data centre.
For any CIO looking to take advantage of cloud computing to lower IT spend and mitigate risk, there are many options:
- Move budget and functionality directly to the business (Shadow IT) and empower the use of public cloud options
- Move to a managed service – private cloud for the skittish
- Create a private cloud with the ability to burst to a public cloud (i.e., hybrid cloud)
- Move 100 per cent to a public cloud provider managed by a smaller IT department
The three key considerations for an organisation’s database needs are performance, security and compliance. Each of the options above has distinct strengths and weaknesses, and the importance of each depends on your organisation’s specific requirements.
On premises/private cloud
One of the main advantages of this type of database deployment is that it gives an enterprise control over its own environment, which can be customised to its specific business and security needs. This does mean, however, that security measures and disaster recovery must both be architected into the solution. This greater level of involvement for an in-house IT department can sometimes impair an enterprise’s ability to go to market quickly.
Location can be an issue: if users located in different parts of the globe need to access data via mobile devices, latency can become a problem, affecting the user experience. Moreover, some industries, such as financial services and healthcare, have security regulations that require strict compliance. Countries like Canada and Germany are drafting stricter data residency and sovereignty laws that require data to remain in the country to protect their citizens’ personal information.
Traditionally, the break-even ROI for on-premises deployment – between hardware, software and all required components – is between 24 and 36 months, which can be too long for some organisations. It’s therefore important to examine expected ROI before moving to an on-premises/private cloud database and ensure that the timeline for that ROI fits your organisational needs.
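The break-even arithmetic can be sketched in a few lines. The figures below are invented for illustration only – they are not benchmarks – but they show how the crossover point emerges from a large upfront Capex against a pay-as-you-go alternative:

```python
# Illustrative break-even calculation with hypothetical figures: a large
# upfront Capex for on-premises versus a monthly cloud subscription.
ONPREM_CAPEX = 500_000       # hardware, software, licences (assumed)
ONPREM_MONTHLY_OPEX = 8_000  # power, support, administration (assumed)
CLOUD_MONTHLY_COST = 25_000  # subscription for equivalent capacity (assumed)

def break_even_month(capex, onprem_opex, cloud_cost, horizon=60):
    """Return the first month at which cumulative on-premises cost drops
    below cumulative cloud cost, or None if it never does within the horizon."""
    for month in range(1, horizon + 1):
        onprem_total = capex + onprem_opex * month
        cloud_total = cloud_cost * month
        if onprem_total < cloud_total:
            return month
    return None

print(break_even_month(ONPREM_CAPEX, ONPREM_MONTHLY_OPEX, CLOUD_MONTHLY_COST))
```

With these assumed numbers the crossover lands at month 30 – inside the 24–36 month window described above. An organisation can plug in its own quotes to see whether the timeline fits.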
Hybrid cloud

Hybrid cloud is flexible and customisable, allowing managers to stick with the private cloud for some elements and to “cloud burst” to the public cloud when required – for example, when experiencing a spike in data volume on an application during a particularly busy period.
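The burst decision itself is simple in principle: serve from the private cloud until a utilisation threshold is crossed, then direct the overflow to the public cloud. The capacity and threshold below are invented for illustration:

```python
# Hypothetical sketch of a cloud-burst policy. Real platforms implement this
# in their orchestration layer; the numbers here are illustrative only.
PRIVATE_CAPACITY = 100  # concurrent requests the private cloud handles (assumed)
BURST_THRESHOLD = 0.8   # burst once 80 per cent utilised (assumed policy)

def route(current_load):
    """Return 'private' or 'public' for the next request, given current load."""
    if current_load < PRIVATE_CAPACITY * BURST_THRESHOLD:
        return "private"
    return "public"  # burst: overflow goes to the public cloud

for load in (40, 79, 80, 120):
    print(load, route(load))
```

In practice the threshold is tuned so that bursting absorbs genuine spikes without routinely shifting steady-state load – and cost – to the public side.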
Importantly, disaster recovery is built into a hybrid solution, removing a key concern. An organisation can also mitigate some of the constraints of data sovereignty and security laws with a hybrid cloud: some data can stay local while some goes into the cloud.
However, integration in a hybrid cloud is complex, and may lead to security issues. Hybrid cloud can also lead to sprawl, where the growth of computing resources underlying IT services becomes uncontrolled and exceeds what is actually required. It’s important to have a way to govern and manage this sprawl. Equally important is having a data migration strategy architected into a hybrid cloud; this helps reduce complexity while enhancing security.
Public cloud

The main advantages of public cloud are its almost infinite scalability and its pay-as-you-go model, which result in faster go-to-market capabilities. However, public clouds are often homogeneous by nature, intended to satisfy many different enterprises’ needs – this means that customisation can be a challenge.
As with a hybrid cloud, sprawl can also be a problem in the public cloud. Without a strategy to manage and control a public cloud platform, costs can spiral and negate the expected savings and efficiency gains.
Data visibility is another downside; once data goes into a public cloud, it can be hard to determine where it actually resides, and sovereignty laws can come into play for global enterprises.
While public cloud is Opex-friendly, it can get expensive after the first 36 months. Keep total cost of ownership (TCO) in mind when deploying a workload: consider its lifecycle and overall cost benefit, as well as how the true cost of the application will be tracked.
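One minimal way to make the true cost of an application visible is to tag every billing line item with the workload it belongs to and total spend per tag. The sketch below uses a made-up billing export; the tags and figures are illustrative, not from any real provider:

```python
# Illustrative cost-tracking sketch: tag each cloud billing line item with
# the application it belongs to, then total spend per application.
from collections import defaultdict

billing_items = [  # (application tag, month, cost) -- hypothetical export
    ("crm",       "2024-01", 4_200.0),
    ("crm",       "2024-02", 4_650.0),
    ("analytics", "2024-01", 9_800.0),
    ("untagged",  "2024-01", 1_100.0),  # sprawl often shows up as untagged spend
]

def cost_per_app(items):
    """Total cost per application tag across all billing periods."""
    totals = defaultdict(float)
    for app, _month, cost in items:
        totals[app] += cost
    return dict(totals)

print(cost_per_app(billing_items))
```

A growing “untagged” bucket is itself a useful signal: it is exactly the uncontrolled sprawl described above, surfaced as a number.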
Security with a public cloud is always a challenge, but can be mitigated with proper measures such as at-rest encryption and well-thought-out access management tools or processes.
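At its core, access management means mapping users to roles and roles to permissions, then checking every request against that mapping. The users, roles and permissions below are invented for illustration; a real deployment would lean on the cloud provider’s IAM service rather than hand-rolled code:

```python
# Minimal role-based access sketch for a cloud-hosted database.
# All names are hypothetical; this illustrates the principle only.
ROLE_PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
    "admin":    {"read", "write", "grant"},
}
USER_ROLES = {"alice": "admin", "bob": "analyst"}

def is_allowed(user, action):
    """Check whether a user's role grants the requested action.
    Unknown users or roles are denied by default."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("bob", "write"))    # analysts cannot write
print(is_allowed("alice", "grant"))
```

The key design choice is deny-by-default: anything not explicitly granted is refused, which limits the blast radius of a compromised account.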
Database appliance

Traditionally, this is an on-premises solution – either managed by a vendor or in an enterprise’s own data centre. There are many popular vendors that provide this solution, and using a single vendor to control the complete solution can offer performance and support gains.
However, that can also lock an enterprise into that vendor, and appliance-based databases therefore tend to be a niche, use-case-specific option. Vendor selection is an essential process to ensure that the partnership works both now and in the future.
Appliance databases, because of their specialised, task-specific nature, are expensive. They can, however, be cost-effective over time if they are deployed properly and with the right partner.
Virtualisation

With virtualisation, you can consolidate multiple applications onto a single piece of hardware. Whilst this often entails higher upfront Capex due to the cost of installing the database, over time Opex is reduced because consolidation allows many processes to be automated. This leads to a quicker ROI and a lower total cost of ownership. However, licensing costs can get expensive.
An enterprise can also achieve better data centre resource utilisation because of the smaller footprint, which saves on the costs of running servers and allows an enterprise to host multiple virtual databases on the same physical machine while maintaining complete isolation of the operating system layer.
The ability to scale is built into a virtualised environment, and administration is simple, with a number of existing tools to administer a virtualised environment.
Virtualisation does leave the enterprise with a single point of failure: if the hardware fails, the VMs go down. Robust disaster recovery is therefore a major concern and must be well architected.
There can be network traffic issues because multiple applications will be trying to use the same network card. The server an enterprise employs must be purpose-built for the virtualised environment.
Virtualisation is ideal for repurposing older hardware to some extent, because IT can consolidate many applications onto hardware that might have been written off. It is well suited to clustering; being able to cluster multiple VMs over multiple servers is a key benefit as far as disaster recovery goes.
Selecting the right database
Clearly, there are a variety of approaches to database management and deployment, and cost is not the only important factor. Every enterprise has its own challenges, goals and needs, and there is no one-size-fits-all recommendation when selecting a database. It’s worth carefully examining your own infrastructure as well as ROI expectations, long-term business goals, sovereignty laws, IT capabilities and resource allocation to determine which of these deployment options is the right one for your enterprise – now and years down the line.
Carl Davies, CEO at TmaxSoft UK