Achieving true balance with enterprise storage


For many years, the enterprise storage world was a relatively simple place. The biggest decision organisations had to make was often whether to use SAN or NAS technology to best meet their technical and business challenges, after which it was a simple case of negotiating the best price per gigabyte given the workload.

However, times change and new technologies emerge. Where the lowest decision point in the stack was once simply whether to use slow archive drives or fast enterprise drives, many organisations are now faced with a bewildering array of traditional hard disk drives, solid state/flash drives, and hybrid technologies.

Without a doubt, flash memory can be a huge help when dealing with I/O bottlenecks in storage, but in certain circumstances it can do nothing for performance, or even make the problem worse. With the advancements in flash modules, solid state drives, and server-based flash cards, customers have a number of options available that can help lower storage latency and increase I/O rates; however, these may not always be the most appropriate tool for the job.

When looking at options for data storage, or in fact any IT solution, most organisations weigh three primary elements: the effect on cost, the effect on risk, and any constraints or effect on growth. Depending on the organisation's current drivers, these can be weighted differently when solutions are being designed. Given the current economic conditions, many customers look for a truly balanced approach.

The purpose of this article is to outline the various approaches available, and then to look at the effect of each on cost, risk and growth (in terms of both performance and capacity).

Option 1: Adding solid state drives to existing enterprise storage


Most traditional storage vendors now offer the ability to add solid state drives to existing storage area network (SAN) arrays. Whilst this is technically possible, it comes with limits on the number of drives you can deploy, due to back-end bandwidth constraints. Most traditional storage arrays were designed with one challenge in mind: getting the most from hard disk drives. Adding solid state drives into such an architecture can seriously undermine the whole design philosophy of that array.

Once the drives are installed, utilising them often requires manual movement of data between Tier 0 (SSD) and Tier 1 (HDD) LUNs, which adds complexity and a degree of risk: predictions are often wrong, and in truly random workloads the hot data set changes frequently.

The alternative is to combine multiple tiers of disk with automated tiering software that uses historical-based or cache-based tiering. The issue here is that many organisations have discovered that historical activity is not a particularly good predictor in true random I/O environments; a good analogy is that it's like driving whilst looking through your rear-view mirror rather than the windscreen. With a cache-based approach, all workloads go through the solid state layer regardless of the performance payback available, so whilst some workloads may benefit, many will actually suffer as a result. Furthermore, the software licence and maintenance costs for these add-on elements often dramatically increase the cost of the solution over time.
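To see why the rear-view mirror analogy holds, consider a minimal simulation. This is a sketch only, with assumed extent counts, flash sizes and trace lengths, and it is not any vendor's algorithm: a policy that promotes yesterday's hottest extents works well when the hot set is stable, but captures almost nothing in a truly random workload.

```python
# A minimal sketch of historical tiering: promote yesterday's hottest
# extents to flash, then measure how much of today's I/O they absorb.
# Extent counts, flash size and trace lengths are illustrative assumptions.
import random

random.seed(42)
EXTENTS = 10_000        # addressable extents on the array
FLASH_SLOTS = 500       # extents the flash tier can hold (5%)
IOS_PER_DAY = 100_000

def daily_io(pattern):
    """Return one day's I/O trace: 'stable' reuses a fixed hot set,
    'random' touches extents uniformly at random."""
    if pattern == "stable":
        hot = list(range(FLASH_SLOTS))          # same hot set every day
        return [random.choice(hot) for _ in range(IOS_PER_DAY)]
    return [random.randrange(EXTENTS) for _ in range(IOS_PER_DAY)]

for pattern in ("stable", "random"):
    yesterday = daily_io(pattern)
    heat = {}
    for ext in yesterday:                       # build yesterday's heat map
        heat[ext] = heat.get(ext, 0) + 1
    promoted = set(sorted(heat, key=heat.get, reverse=True)[:FLASH_SLOTS])
    today = daily_io(pattern)
    hits = sum(ext in promoted for ext in today)
    print(f"{pattern:>6} workload: {hits / IOS_PER_DAY:.0%} of today's I/O lands on flash")
```

With a stable workload the promoted set absorbs essentially all of today's I/O; with a random workload it absorbs roughly the five per cent you would expect by chance, which is exactly the failure mode described above.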

Ultimately, while this option may present itself as a simple, relatively low-risk way to address bottlenecks, because it preserves existing storage management practices, be warned: it is a costly way of addressing the issue, and the technical challenges involved constrain future growth.

Option 2: Adding flash cards to servers


Opening servers and adding PCI Express flash cards to the storage design - often believed to be the simplest way to address performance challenges - can significantly limit the growth and functionality of any end-to-end virtualised solution.

Whilst this approach can be useful for single-application, single-server environments, it is not well suited to enterprise solutions. This is due to the constraints found particularly in virtualised environments, where tools such as vMotion cannot be deployed because the data is held captive on storage local to one server.

Similar to option one, this option significantly limits growth. Should growth be required, then storage or application administrators need to manually move data between Tier 0 and Tier 1 and/or utilise a costly data movement tool.

Ultimately this is a high cost approach, due to the cost per gigabyte of the cards themselves. It can constrain both data growth and solution growth, but it is perceived to be low risk because only the host configuration is being modified, not the enterprise storage architecture itself.
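To put the cost-per-gigabyte point in perspective, a rough back-of-envelope comparison helps; the per-GB prices below are illustrative assumptions for the sake of the arithmetic, not vendor quotes.

```python
# Back-of-envelope cost comparison for 10 TB of usable capacity.
# All per-GB prices are assumed figures for illustration only.
capacity_tb = 10

usd_per_gb = {
    "server PCIe flash cards": 10.0,   # assumed premium for card capacity
    "enterprise SSDs in array": 5.0,   # assumed
    "hybrid flash + HDD pool": 1.5,    # assumed blended rate
}

for option, price in usd_per_gb.items():
    print(f"{option:<25} ~${capacity_tb * 1024 * price:,.0f}")
```

Whatever the exact figures, capacity bought as PCIe cards is also captive to a single server, so the premium buys neither shared access nor mobility.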

Option 3: Utilising all-flash arrays


The last 12-18 months have seen a plethora of all-flash arrays (AFAs) come to market. If we put aside the risks that come with dealing with "startup" organisations (such as service capabilities, warranty fulfilment, company stability, etc.) and look at the technology alone, then there are some good - if pricey - technologies to choose from.

While all-flash arrays can provide extremely high I/O rates, they are really designed for one job: high I/O, low latency data delivery. In some cases, such as low latency trading, the cost outlay (often in excess of $100k/TB) can be justified. But for most, projected costs of $100k/TB simply undermine and distort the cost justification for standard enterprise-class projects.
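A worked example makes the distortion concrete. The $100k/TB figure is the one cited above; the hybrid figures are assumptions chosen only to illustrate the shape of the comparison.

```python
# Cost comparison for a 20 TB dataset of which only 5% is truly "hot".
# The AFA price is the figure cited above; the rest are assumptions.
dataset_tb = 20
hot_fraction = 0.05                 # assumed share of latency-critical data

afa_usd_per_tb = 100_000            # cited above
flash_tier_usd_per_tb = 100_000     # assume flash capacity costs the same
hdd_tier_usd_per_tb = 2_000         # assumed enterprise HDD cost

afa_cost = dataset_tb * afa_usd_per_tb
hybrid_cost = (dataset_tb * hot_fraction * flash_tier_usd_per_tb
               + dataset_tb * (1 - hot_fraction) * hdd_tier_usd_per_tb)

print(f"All-flash array : ${afa_cost:,.0f}")      # $2,000,000
print(f"Hybrid array    : ${hybrid_cost:,.0f}")   # $138,000
```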

Due to the high cost, AFAs are deployed in very small capacities, leading to the issue of either manually tiering the data or deploying costly software that attempts to predict data usage. Again, relying on predictive software is dangerous and, as mentioned before, is like driving whilst looking through the rear-view mirror. In addition, many organisations are merely trying to achieve low latency storage without any need for 500,000-plus IOPS; for them, the I/O performance of an AFA is overkill when all that's required is lower latency.
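The gap between "low latency" and "half a million IOPS" follows directly from Little's Law (throughput = outstanding I/Os ÷ latency). A short calculation, using assumed but typical figures, shows how modest the IOPS requirement of a latency-sensitive application usually is.

```python
# Little's Law: IOPS = outstanding I/Os / latency (in seconds).
# Concurrency and latency figures below are illustrative assumptions.
def iops(outstanding_ios, latency_ms):
    return outstanding_ios / (latency_ms / 1000.0)

# A database keeping 8 I/Os in flight and needing 1 ms response times:
print(f"{iops(8, 1.0):,.0f} IOPS required")        # 8,000 IOPS
# To consume 500,000 IOPS at 1 ms, an application would need ~500
# I/Os permanently in flight -- far beyond most enterprise workloads:
print(f"{iops(500, 1.0):,.0f} IOPS")               # 500,000 IOPS
```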

Ultimately, this is probably the most unbalanced approach of the four. It requires extremely high capital and operating expenditure, presents the most risk to the business, and constrains growth.

It should be noted that a number of new products have emerged that aim to lower the cost barrier to entry for AFAs by utilising consumer grade flash, taking a simple x86 server, adding some laptop grade SSDs, and then marketing it as an enterprise grade all-flash array. Whilst this will undoubtedly lower cost, the risks associated with placing consumer grade hardware in an enterprise environment are all too obvious.

Option 4: Utilising hybrid storage


In order to address the challenges found in virtualised environments, we not only need to achieve balance amongst the core elements of cost, growth and risk; we also need an approach that handles both predictable and unpredictable workloads without manual intervention.

Flash storage is merely one tool that can be utilised to address performance issues challenging storage designers, rather than the solution itself.

By combining flash and HDDs into a single pool, the most appropriate tool can be utilised, be it cache memory, solid state storage or traditional hard disk storage. However, even within hybrid storage architectures, there are different approaches to incorporating flash and controlling data access patterns. The key element here is the ability to move data between different tiers, in real-time, without any manual intervention, based upon I/O activity, rather than merely workload predictions based upon file activity.

One example of this is X-IO's "Continuous Adaptive Data Placement" (CADP) software. It provides an architecture that "fuses" SSDs and HDDs, placing "hot", active data onto flash memory. CADP also performs an ROI calculation and will only place data onto flash if the application will experience a real performance improvement, with almost no overhead to the storage system.
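CADP itself is proprietary, so the following is only a generic sketch of the idea of ROI-gated promotion, with assumed latencies and costs; it is not X-IO's implementation. The decision rule is simple: promote an extent only when its recent I/O rate means the latency saved will pay back the cost of moving it.

```python
# Generic sketch of ROI-gated tiering (illustrative, not X-IO's CADP).
# Promote an extent to flash only when the projected latency saving
# over the decision horizon exceeds the one-off cost of the migration.
HDD_LATENCY_MS = 8.0        # assumed average HDD service time
SSD_LATENCY_MS = 0.2        # assumed average SSD service time
MIGRATION_COST_MS = 500.0   # assumed one-off cost of copying the extent

def worth_promoting(recent_iops: float, horizon_s: float = 60.0) -> bool:
    """True if promoting this extent pays for itself within the horizon."""
    saving_ms = recent_iops * horizon_s * (HDD_LATENCY_MS - SSD_LATENCY_MS)
    return saving_ms > MIGRATION_COST_MS

print(worth_promoting(0.05))   # cold extent: False, it stays on HDD
print(worth_promoting(50.0))   # hot extent: True, it moves to flash
```

Because a rule like this is evaluated continuously against live I/O counters rather than a historical heat map, cold data never occupies flash, and a shifting hot set is picked up as it shifts.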

Data storage is still architected in a traditional manner, as a simple Fibre Channel array, but at a much lower capital expenditure price point than an all-flash array. Hybrid storage also lowers risk because existing storage management practices (including virtualisation tools such as vMotion) can still be utilised. Growth can be achieved using a modular approach rather than the traditional "big bang" approach of wide-striped enterprise storage, or the complex flash server based card approach.

In essence, hybrid storage with real-time tiering is the only methodology to address the storage architecture challenge presented by VDI workloads, providing a true balance of cost, growth and risk.

All drives are not the same: Component quality matters

Once you've got over the hurdle of deciding the most appropriate and balanced way to improve storage performance, the next challenge is often which class of storage to look at. Should you lower the initial capital cost by using consumer grade components, or should you stick with enterprise class hardware? What are the risks involved?

Whilst it is technically possible to use consumer grade components (more on this later), it's a good idea to look at what the market thinks right now. A survey was prepared and conducted by independent research firm Vanson Bourne in February 2013. The data was collected via an online survey, completed by a nationally representative sample of 100 IT managers from key industry sectors such as financial services, manufacturing, retail, distribution, transport and the commercial sector from across the United Kingdom. These companies employed either 1,000-3,000 staff, or more than 3,000 employees.

[Chart: survey responses]

When this survey asked the question, "Would you ever knowingly deploy consumer-grade storage, whether HDDs or flash, for your enterprise applications?" the response was very clear (see above).

Interestingly, while some vendors are open about the use of consumer grade flash in their products (albeit claiming that software offsets the risk), they are deliberately vague about the underlying architectures. The huge risks this presents need to be highlighted more prominently.

The history of the enterprise storage marketplace is littered with examples of organisations learning the valuable lesson of "use the right tool for the job." Whilst some have made the error of overspending on technology, many others have made the mistake of underspending, and therefore adding unnecessary risk. A great example of this is the use of low cost SATA hard disk drives in enterprise environments, the "old school" equivalent of using consumer grade flash in hybrid and all-flash arrays. Whilst it undoubtedly lowers the initial capital expenditure, the operating cost and downtime cost incurred will ruin any seemingly good business case in the long term.

Many people would have you believe the old myth that "all hard disk drives are the same." That couldn't be further from the truth, whether the misconception stems from the bifurcation of the PC/desktop market and the enterprise IT sector, or from the margin that storage providers have placed upon enterprise drives. There are key differences relating not only to performance, but to reliability, data integrity, and overall ruggedness for 24×7 utilisation, as contrasted with a drive designed to be used for eight hours a day, or one designed for a continuous but less strenuous workload. Because hard disk drives look the same on the outside, a good number of people assume they are all the same, and the problem is exacerbated when drives are used in environments they were never designed for. All of this relates to how and why drives fail, or at least seem to fail.
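The duty-cycle difference is easy to quantify. The figures below are typical of the workload ratings drive vendors published at the time, used here as assumptions rather than as any specific datasheet.

```python
# Rough duty-cycle arithmetic behind "all drives are not the same".
# Ratings are assumed, typical-of-era figures, not a specific datasheet.
HOURS_PER_YEAR = 24 * 365

drive_classes = {
    "desktop":    {"rated_hours": 8 * 5 * 52, "tb_per_year": 55},
    "nearline":   {"rated_hours": HOURS_PER_YEAR, "tb_per_year": 180},
    "enterprise": {"rated_hours": HOURS_PER_YEAR, "tb_per_year": 550},
}

for name, spec in drive_classes.items():
    duty = spec["rated_hours"] / HOURS_PER_YEAR
    print(f"{name:<10} designed for {duty:.0%} power-on time, "
          f"~{spec['tb_per_year']} TB/year of workload")
```

Put a drive designed for roughly a quarter of the year's hours under a 24×7 duty cycle and a ten-fold workload, and the failures that follow are not bad luck; they are the design envelope being exceeded.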
Enterprise drives versus nearline and desktop drives are a perennial subject of debate, and whether they are worth the premium tends to bookend these conversations. The point is that they all have a place, but only when used appropriately.
