How the intersection of data growth and performance will shape the future of storage

Anyone charged with managing storage today knows that we are facing a crisis of “more” – more data, more multimedia consumption, more devices attached to the Internet generating data, more mobility and more demand for real-time analytics.

The more-more-more nature of the market is pushing IT and storage teams to respond in kind by adding more of the same storage. However, that reactionary instinct perpetuates a cycle that is both inefficient and ineffective.

As demands on storage performance continue to grow, IT teams will need to respond in more strategic ways. Simply adding storage capacity or arrays to address performance deficits no longer makes sense, and IT budgets cannot absorb the inefficiency.

Investing in performance to solve performance problems

Traditionally, organisations buy more storage capacity (by adding disk spindles or new arrays) to meet performance requirements as content demands increase.

However, these are inefficient and expensive ways to address degraded storage performance in virtualised environments. Buying capacity to support performance demand is a never-ending cycle, and enterprises need to find better ways to manage the performance of their storage.

New innovations suggest new solutions

One focused way that organisations are approaching the problem of storage performance is with all-flash arrays – but these are not cost-effective for every workload and use case.

While all-flash arrays are popular because they deliver an enormous number of IOPS, they do so without regard for $/IOPS or $/GB. Many workloads don’t need that level of performance, and the cost per gigabyte makes all-flash prohibitively expensive for them. Enterprises that do choose all-flash deployments typically do so because their applications demand guaranteed low latency across the board.

For the rest of the world, a set of newer flash-based technologies is emerging that lends itself to alternative performance-oriented solutions. For example, lower-latency persistent memory solutions such as NVDIMM are starting to come to market.

Like most new technologies, they are currently too expensive for widespread use. However, as they mature, they will garner interest for their ability to complete writes in less than 10 microseconds, an order of magnitude faster than current solutions.
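
To put that order-of-magnitude claim in context, the short sketch below compares write latencies. Only the sub-10-microsecond NVDIMM figure comes from the discussion above; the other numbers are assumed ballpark values used purely for illustration.

```python
# Rough, illustrative write-latency comparison (microseconds).
# Only the NVDIMM figure reflects the ~10 us claim above; the other
# values are assumed ballpark numbers for the sake of the arithmetic.
write_latency_us = {
    "NVDIMM (persistent memory)": 10,
    "NVMe flash SSD (assumed)": 100,
    "Networked all-flash array (assumed)": 500,
}

baseline = write_latency_us["NVDIMM (persistent memory)"]
for device, latency in write_latency_us.items():
    print(f"{device}: {latency} us (~{latency / baseline:.0f}x the NVDIMM write latency)")
```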

Low-latency solutions like these, tightly coupled with host-side CPU resources, lend themselves to the creation of a server-side performance layer within a storage architecture. Such an architecture creates a focused pool of performance resources that is separate and distinct from capacity management.
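
The sketch below is a minimal illustration of that idea, assuming a simple read-through, write-through model in which hot data is served from host-local media and a separate capacity tier remains the durable back end. The class and method names are hypothetical and not drawn from any particular product.

```python
# Minimal sketch of a server-side performance layer sitting in front of a
# separate capacity tier. A real implementation would also handle eviction
# policy, persistence and failure cases; names here are hypothetical.

class CapacityTier:
    """Stands in for a shared array, object store or cloud back end."""
    def __init__(self):
        self._blocks = {}

    def read(self, block_id):
        return self._blocks.get(block_id)

    def write(self, block_id, data):
        self._blocks[block_id] = data


class ServerSidePerformanceLayer:
    """Host-local flash/persistent-memory layer that serves hot data
    without a round trip across the storage network."""
    def __init__(self, capacity_tier, max_blocks=1024):
        self._cache = {}
        self._capacity = capacity_tier
        self._max_blocks = max_blocks

    def read(self, block_id):
        if block_id in self._cache:           # hot: served from local media
            return self._cache[block_id]
        data = self._capacity.read(block_id)  # cold: fetched from capacity tier
        if data is not None:
            self._admit(block_id, data)
        return data

    def write(self, block_id, data):
        self._admit(block_id, data)
        self._capacity.write(block_id, data)  # write-through for durability

    def _admit(self, block_id, data):
        if len(self._cache) >= self._max_blocks:
            self._cache.pop(next(iter(self._cache)))  # naive FIFO eviction
        self._cache[block_id] = data


# Usage: the application server reads and writes through the local layer,
# while the capacity tier can be purchased and managed independently.
tier = CapacityTier()
layer = ServerSidePerformanceLayer(tier)
layer.write("vm42-block-7", b"payload")
assert layer.read("vm42-block-7") == b"payload"
```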

At the other end of the spectrum, innovations and trends in capacity optimisation also lend themselves to an architecture that decouples storage performance from storage capacity.

For example, shingled magnetic recording (SMR) drives will gain ground as an option for better capacity value – 2.5x greater than previous generations – providing more cost-effective capacity for unstructured data and object storage that doesn’t require high-speed, low-latency access. Similarly, low-cost, off-premises cloud storage will provide an option for inexpensive capacity with lower performance requirements.

Considering a dedicated storage performance layer

A dedicated storage performance layer that is decoupled from a capacity layer brings several benefits, some of them technological, and some of them economic.

  • Higher performance: When low-latency media is co-located in the application servers, performance improves. Requests no longer need to traverse the storage network to be satisfied; they can be served directly from high-speed flash in the same servers that host the applications.
  • Virtualised performance: Separating storage performance into its own layer allows the performance tier to be applied to specific workloads, independent of the arrays or other physical resources backing it.
  • Scale-out performance: Performance grows as servers are added to support greater application load, in contrast to traditional centralised arrays, whose performance does not increase as that load grows.
  • Improved cost/IOPS: Flash devices and drives located in servers are significantly less expensive than those sold in proprietary disk arrays, even when they have the same performance characteristics. Taking advantage of the commodity nature of servers and buying high-performance media within that footprint drives the cost of performance down significantly compared with storage-side resources (see the worked example after this list).
  • Improved cost/capacity: Once shared arrays are designated for capacity storage and data management (rather than performance), capacity can be purchased far more efficiently. Rather than buying unnecessary capacity to meet performance needs, or buying marked-up proprietary versions of high-speed drives, organisations can buy the least expensive drives that will hold data not requiring low-latency access.
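
To make the cost argument concrete, the sketch below walks through the $/IOPS and $/GB arithmetic behind decoupled purchasing. All prices and performance figures are illustrative assumptions, not quotes or benchmarks; only the shape of the calculation matters.

```python
# Illustrative $/IOPS and $/GB arithmetic for decoupled purchasing.
# All prices and performance figures below are assumed for the sake of
# the example; substitute real quotes before drawing any conclusions.

def cost_per_iops(price_usd, iops):
    return price_usd / iops

def cost_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

# Performance bought as commodity server-side flash (assumed figures).
server_nvme = {"price": 400.0, "iops": 500_000, "capacity_gb": 1_600}
# The same class of media bought inside a proprietary array (assumed markup).
array_flash = {"price": 2_000.0, "iops": 500_000, "capacity_gb": 1_600}
# Capacity bought as high-density disk once performance is handled elsewhere.
capacity_hdd = {"price": 300.0, "iops": 200, "capacity_gb": 16_000}

for name, d in [("server NVMe", server_nvme),
                ("array flash", array_flash),
                ("capacity HDD", capacity_hdd)]:
    print(f"{name}: ${cost_per_iops(d['price'], d['iops']):.4f}/IOPS, "
          f"${cost_per_gb(d['price'], d['capacity_gb']):.3f}/GB")
```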

This brings the discussion full circle. We began by talking about how much data is being generated for IT professionals to manage.

This architecture, which separates not just the management but also the acquisition of storage capacity from storage performance, is best suited to handle this deluge of data.

Considerations for storage’s “more-driven” future

As the market continues to evolve in the face of more data and more aggressive performance expectations, we’ll also see more focus on the potential of software-based storage acceleration.

For the vast number of enterprises pursuing better storage performance across their virtualised environments while needing to make more cost-effective IT decisions, separating performance investments from capacity investments will become an increasingly popular path.

Scott Davis is CTO at Infinio.