IBM aims to slash big data costs with "Elastic Storage" software-defined products

Big data places enormous demands on storage, and in many cases conventional technologies are struggling to keep up.

In an effort to deliver improved economics while enabling organisations to access and process any type of data, on any type of storage device, anywhere in the world, IBM has unveiled a portfolio of software-defined storage products.

Codenamed "Elastic Storage" - insert your own drapery joke here - the portfolio of products claims to offer higher performance, infinite scale, and be capable of reducing storage costs up to 90 per cent by automatically moving data onto the most economical storage device.

"Digital information is growing at such a rapid rate and in such dramatic volumes that traditional storage systems used to house and manage it will eventually run out of runway," said Tom Rosamilia, the senior vice president of IBM's systems and technology group. "Our technology offers the advances in speed, scalability and cost savings that clients require to operate in a world where data is the basis of competitive advantage."

Developed by IBM Research Labs, the software is ideally suited for the most data-intensive applications which require high-speed access to massive volumes of information. It provides a set of capabilities to automatically manage data both locally and globally, offering faster access, easier administration and greater scalability.

IBM has demonstrated that Elastic Storage can successfully scan 10 billion files on a single cluster in just 43 minutes, a rate of roughly 3.9 million files per second. The software has its roots in the technology used by IBM's Watson supercomputer for its appearance on the Jeopardy! TV show.

Among its capabilities, Elastic Storage can exploit server-side flash to deliver up to a sixfold increase in performance compared with standard SAS disks. It recognises when a server has flash storage and uses it as a cache to boost performance. Storage is virtualised, allowing multiple systems to share common pools. Because it does not rely on centralised management, it can ensure continuous access to data, working around software and hardware failures.
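
IBM has not published how the flash caching works internally; as a purely illustrative sketch, server-side flash can act as a read-through cache in front of slower disk, serving repeat reads from the fast tier and falling back to disk on a miss. The FlashReadCache class, block keys and capacity below are hypothetical, not IBM's implementation.

```python
from collections import OrderedDict

class FlashReadCache:
    """Illustrative read-through cache: recently read blocks are kept on a
    fast flash tier so repeat reads avoid the slower disk back end."""

    def __init__(self, read_from_disk, capacity_blocks=1024):
        self.read_from_disk = read_from_disk   # callable: block_id -> bytes
        self.capacity = capacity_blocks
        self.cache = OrderedDict()             # block_id -> bytes (stands in for flash)

    def read(self, block_id):
        if block_id in self.cache:             # cache hit: serve from "flash"
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.read_from_disk(block_id)   # cache miss: go to disk
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:    # evict the least-recently-used block
            self.cache.popitem(last=False)
        return data

# Example usage: the backing "disk" is simulated with a dictionary.
disk = {i: bytes(64) for i in range(10)}
cache = FlashReadCache(disk.__getitem__, capacity_blocks=4)
assert cache.read(3) == cache.read(3)          # second read is served from the cache
```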

It features native encryption and secure-erase options, and supports OpenStack cloud management, allowing customers to spread data across private, public and hybrid clouds.