What does 2016 hold for object storage?

In today's environment, companies are struggling to manage ever-larger volumes of information and to keep on top of massive growth in unstructured data. The limitations of traditional storage arrays are becoming increasingly apparent, especially when it comes to scaling.

As its name suggests, object storage works by enabling the storage system to manage data as objects rather than as a file hierarchy or as blocks within sectors. Objects include the data as well as valuable metadata, making it easier to quickly find, classify and understand the information stored.
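The model described above can be sketched in a few lines of Python. This is a minimal, hypothetical in-memory illustration (not any vendor's implementation): each object pairs its data with descriptive metadata in a flat namespace of unique identifiers, so objects can be located by what they are rather than where they sit in a directory tree.

```python
import uuid

class ObjectStore:
    """Minimal sketch of an object store: a flat namespace of objects,
    each carrying its data together with descriptive metadata."""

    def __init__(self):
        # Object ID -> (data, metadata). Note there is no directory tree.
        self._objects = {}

    def put(self, data, metadata):
        object_id = str(uuid.uuid4())  # flat, globally unique identifier
        self._objects[object_id] = (data, metadata)
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

    def find(self, **criteria):
        """Locate objects by their metadata rather than by a file path."""
        return [oid for oid, (_, meta) in self._objects.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

# Usage: store a document with metadata, then find it by attributes alone.
store = ObjectStore()
oid = store.put(b"...scan bytes...", {"type": "x-ray", "patient": "1234"})
assert store.find(type="x-ray") == [oid]
```

The point of the sketch is the `find` method: because metadata travels with the object, classification and retrieval do not depend on a carefully maintained folder hierarchy.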

Object storage provides scalability advantages as well, because it is not constrained by the limits of a file system and it consolidates the storage silos that proliferate with traditional file systems. It provides a better remote backup solution with automatic data replication to a secondary site. Software-defined object storage also delivers a significant improvement in total cost of ownership (TCO) because it runs on standard hardware and avoids the rip-and-replace expense usually associated with expanding or upgrading classic storage.

Object storage enables real-time archival data access while providing a more robust ecosystem for data centre environments. It also manages unstructured data better with metadata and makes it more accessible. For enterprises experiencing exabyte scale growth, the data accessibility provided by object storage could be extremely valuable.

The need for a software approach

Today, enterprises seeking to scale from small clusters of data to exabytes of data are looking for a software-based approach that provides the most cost-effective scalability and avoids vendor lock-in. The pay-as-you-grow licensing model available with cloud-based object storage is increasingly attractive because it matches how customers prefer to pay for their consumption, rather than paying for three to five years upfront. Organisations can use object storage to deploy and manage a private storage cloud under their own control, with freedom of choice in access and protection, within and across data centres.

But to match the requirements of organisations of all shapes and sizes, both today and across upcoming deployment and upgrade cycles, the new breed of software-based object storage will need to unify file- and object-based storage in a single solution, provide multiple API-based ways to access stored data, and offer file sync and share capabilities.

Why now?

One of the major reasons why object storage could finally be on the verge of making a name for itself in the storage market, after nearly 20 years, is the driving force of Amazon S3, the leader in public cloud storage. S3's huge success is generating greater interest in object storage generally, which is helping to build customer awareness of what it can do.

Another possible driver for object storage adoption is OpenStack, the free, open-source software platform for cloud computing that enables solutions to be deployed as infrastructure-as-a-service (IaaS). OpenStack Object Storage, also known as Swift, is a scalable, redundant storage system that has been embraced by companies reliant on very large amounts of data, especially those in the supercomputing industry such as life sciences, or in media and entertainment.

One of the biggest technical challenges in storage is compatibility and interoperability between various systems and resources. Most organisations have a highly heterogeneous storage environment, and they will expect the same flexibility from object storage, with solutions that support the OpenStack Swift and AWS S3 APIs. What they want is affordable, easy-to-consume options that enable them to optimise their resources based on their unique business needs.
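To make the interoperability point concrete, the sketch below (an illustration, not any particular product's code) shows the same logical operation — "store this object in that container" — expressed as the request paths the two APIs conventionally use. No request is actually sent; the account, bucket, and container names are hypothetical.

```python
def s3_put_path(bucket, key):
    # AWS S3 API convention: PUT /{bucket}/{key} (path-style addressing)
    return f"PUT /{bucket}/{key}"

def swift_put_path(account, container, obj):
    # OpenStack Swift API convention: PUT /v1/{account}/{container}/{object}
    return f"PUT /v1/{account}/{container}/{obj}"

# The same stored object, addressed through either front end.
assert s3_put_path("archive", "reports/2016-q1.pdf") == \
    "PUT /archive/reports/2016-q1.pdf"
assert swift_put_path("AUTH_acme", "archive", "reports/2016-q1.pdf") == \
    "PUT /v1/AUTH_acme/archive/reports/2016-q1.pdf"
```

A storage product that exposes both conventions over one back end lets applications written against either API share the same pool of objects, which is exactly the flexibility heterogeneous environments demand.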

Awareness around object storage is growing and that should, in turn, spur more development, leading to more solutions. At long last, the conditions could be right for object storage to stake its claim in the storage firmament.

Mario Blandini, chief evangelist, SwiftStack
