Software Composable Infrastructure (SCI): The latest revolution in enterprise infrastructure

Silicon Valley, as we know, is chock-full of technological innovation across all industries and touches virtually every corner of our lives. Data is central to these new technologies, and as more of it is generated, we need new technologies to accommodate its exploding volume and diversity. Market intelligence firm IDC forecasts that by 2025 the global datasphere will grow to 163ZB (one zettabyte is a trillion gigabytes), ten times the 16.1ZB of data generated in 2016.

Modern data technologies such as Hadoop, Kafka and Cassandra were created to help manage these growing workloads and their datasets, which are too large to be processed by traditional computing. These workloads have typically been deployed on clusters built on scale-out architectures, which offer a low-cost option for managing big data. But until recently, legacy on-premises computing architectures lacked the agility of the public cloud and the ability to quickly and easily scale the compute and storage resources needed for each workload up or down.

Absent parallel innovations in infrastructure technologies, enterprises simply will not be able to manage this data flood efficiently and effectively. An emerging technology called Software Composable Infrastructure (SCI) offers a new solution to the challenges that big data teams face daily, and it provides the ability to create the right infrastructure when needed via a simple, easy-to-use interface.

What is SCI?

Software Composable Infrastructure (SCI) works by combining separate compute and storage resources into “physical” servers and clusters under software control. That is, after composition, it appears to applications and system software that the resources are physically attached to their respective servers. SCI thus allows servers to be provisioned and re-provisioned to suit the demands of particular workloads via tightly integrated software and hardware components. This model gives data centre administrators public-cloud-like flexibility while still providing the scalability, reliability and security of an on-premises installation. SCI also maintains the performance and cost advantages of bare metal, so no processing power is lost along the way.

With SCI, any drive can be attached to any server, effectively “composing” servers and clusters that are optimised for the needs of a particular workload. If a workload needs additional compute or storage resources, adding more to the cluster can be as easy as a few keystrokes. Once a workload is complete, those resources can be returned to the pool for use by other applications. Composition and recomposition happen entirely under software control, which is fast and requires no one to physically touch or reconfigure any of the equipment. Resources are no longer trapped in separate silos.
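
To make the model concrete, here is a minimal sketch of what composing a cluster might look like in Python against a hypothetical SCI manager’s REST API. The endpoint, resource names and fields (SCI_API, /nodes, node_id and so on) are illustrative assumptions, not any particular vendor’s interface.

```python
import requests

# Hypothetical SCI management endpoint; real products each expose their
# own REST or CLI interface for composition, so treat this as a sketch.
SCI_API = "https://sci-manager.example.com/api/v1"

def compose_node(name: str, cores: int, ram_gb: int, nvme_drives: int) -> str:
    """Compose a "physical" server from pooled compute and storage.

    After composition, the drives appear to the OS as locally attached
    block devices, so applications and system software need no changes.
    """
    spec = {
        "name": name,
        "compute": {"cores": cores, "ram_gb": ram_gb},
        "storage": {"drive_type": "nvme", "count": nvme_drives},
    }
    resp = requests.post(f"{SCI_API}/nodes", json=spec, timeout=30)
    resp.raise_for_status()
    return resp.json()["node_id"]

# Compose three Hadoop workers, each with 16 cores, 128GB of RAM and
# four NVMe drives drawn from the shared pool.
workers = [compose_node(f"hadoop-worker-{i}", 16, 128, 4) for i in range(1, 4)]
print("Composed nodes:", workers)
```

Growing the cluster later is just another call with a larger drive count; no one has to recable anything in the rack.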

Why enterprises need SCI

But why would large enterprises and organisations need SCI?

While the public cloud has come a long way, there are still concerns around security, processing speed and scalability. One company that faced these issues is Clearsense, a medical analytics firm offering a SaaS product built on Hadoop analytics. The service started on AWS and grew into a very large deployment, but Clearsense became frustrated with the cost and unpredictability of AWS and began to consider a move to on-premises infrastructure. Traditional server infrastructure, however, just didn’t offer the flexibility Clearsense wanted.

A second reason enterprises should consider SCI is the inherent limitations of legacy data centre technologies. Because of the time and expense of forklift upgrades, legacy data centres can be slow, lack agility, and require over-provisioning to ensure that data needs and workloads are sufficiently accommodated. As anyone in IT knows, it is often difficult to know up front how much compute and storage a given project or company will need. As a result, projects are often delayed while IT teams reconfigure all of the required systems, impinging on profitability. To combat this, teams over-provision, driving up costs and decreasing overall utilisation.

With SCI, organisations have the flexibility to recompose compute and storage resources on the fly, adjusting as highly dynamic modern workloads continue to scale. Adopting SCI can also significantly lower IT costs.
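
Continuing the hypothetical sketch from earlier, returning a composed node’s resources to the shared pool once a workload finishes is equally programmatic (again, the endpoint and IDs are illustrative assumptions):

```python
import requests

# Same illustrative endpoint as in the composition sketch above.
SCI_API = "https://sci-manager.example.com/api/v1"

def decompose_node(node_id: str) -> None:
    """Release a composed node's drives and compute back to the shared pool."""
    resp = requests.delete(f"{SCI_API}/nodes/{node_id}", timeout=30)
    resp.raise_for_status()

# When the batch job completes, free each worker's resources for other
# teams; the IDs are those returned by compose_node earlier.
for node_id in ("node-101", "node-102", "node-103"):
    decompose_node(node_id)
```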

Because the accumulation of data shows no sign of slowing down (the digital universe is projected to double at least every two years, a 50-fold increase from 2010 to 2020), and because quantum computing and DNA storage aren’t here yet, organisations need viable, fast, simple and cost-effective ways to manage data and allocate resources. SCI is one such tool, and one whose time has come.

S.K. Vinod is the VP of Product Management at DriveScale