Chief information officers and IT managers might tell you that life can be difficult, but it is rarely boring. Given the accelerating pace of technological change, as soon as these professionals become comfortable with cutting-edge technology, another compelling advance demands their immediate attention.
For many IT leaders, containers have become one of the top items on their “to-do” lists. These standards-based code wrappers emerged from the open source community a few years ago, and have experienced rapid maturation and increasing adoption ever since.
Containers promise a wide range of benefits, including many related to public cloud deployments. But the technology is new enough that it can still be quite challenging for many companies to fully exploit its potential.
Unlike virtual machines, which run on top of a hypervisor layer and contain their own operating system instances, applications, and support libraries, containers can be far more compact and faster to start. That is because containers require neither their own OS nor a hypervisor. Instead, they use the host’s OS to wrap application code, runtimes, tools, and libraries into isolated units that can easily be moved and deployed across different platforms and environments.
Among the companies that have already adopted containers, there is evidence of both enthusiasm and caution. In early 2017, Forrester Research surveyed nearly 200 such organisations across several countries and found that 63 per cent already had more than 100 containers deployed, while 82 per cent expected to have more than 100 containers in use within two years.
These early adopters were also exploiting one of containers’ key features – their portability. Among the survey respondents, 82 per cent had deployed containers in private clouds, 53 per cent in public clouds, and 36 per cent on traditional infrastructure. Among the many benefits cited were increased speed, improved security, and a consistent deployment process.
Container technologies, such as the open source offerings from Docker, provide a ‘lightweight’ virtual environment that enables multiple applications to share an operating system while keeping each app’s individual processes isolated. Stacking multiple containers within a single cloud instance therefore allows multiple applications to share the same OS and instance without interfering with one another. Container stacking can be an effective way to maximise your application density and optimise your utilisation of purchased cloud resources.
Using containers leverages a key feature of virtualisation: overcommit. As the term suggests, this means assigning more resources to the workloads in an environment than the physical infrastructure can support simultaneously. This is safe only if the workloads do not all peak at the same time; when they do not, it allows high stacking ratios that translate into significant cloud cost savings.
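To make the arithmetic behind overcommit concrete, here is a minimal sketch using entirely hypothetical numbers: three workloads whose vCPU demands peak at different times of day, stacked on a single 8-vCPU host. Sizing each workload for its own peak would require more capacity than the host has, yet their combined demand never exceeds it.

```python
# Illustrative only: hypothetical vCPU demand for three workloads,
# sampled in four 6-hour buckets across one day.
day_peak   = [1, 6, 6, 1]   # busy during business hours
night_peak = [6, 1, 1, 6]   # busy overnight
steady     = [1, 1, 1, 1]   # constant background load

host_vcpus = 8

# Provisioning each workload for its individual peak:
sum_of_peaks = max(day_peak) + max(night_peak) + max(steady)

# Stacked together, what the host actually needs at its busiest moment:
combined_peak = max(a + b + c
                    for a, b, c in zip(day_peak, night_peak, steady))

print(f"Sized separately: {sum_of_peaks} vCPUs")          # 13 vCPUs
print(f"Stacked combined peak: {combined_peak} vCPUs "    # 8 vCPUs
      f"(fits on an {host_vcpus}-vCPU host)")
```

With these invented profiles, overcommit lets 13 vCPUs’ worth of individually sized workloads run safely on 8, precisely because the peaks never coincide.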
The concept of overcommit is familiar to anyone who regularly flies on commercial airlines. Based on experience, the airline knows that not every passenger who has booked the flight will show up. So they book more passengers than they have seats to ensure every seat is filled on every flight.
Of course, anyone who regularly flies also knows that sometimes all the booked passengers do show up, and someone gets bumped off the flight. That is because airlines make their booking predictions from averages for each route, rather than an in-depth predictive analysis of each passenger’s individual travel patterns. But that is exactly the kind of analysis you will want to perform to avoid any of your apps getting ‘bumped’. An in-depth understanding of each containerised workload over time is essential for stacking workloads for greatest efficiency while ensuring adequate resources are available at all times.
Critical to this is taking an analytics-driven approach to stacking containers. The key to safely maximising utilisation and leveraging overcommit is ‘dovetailing’ containers based on their detailed intra-day workload patterns. Visualise a Tetris game, with each tile representing the workload pattern of an application within a container: stacking the tiles so that every space is filled ensures optimum use of the instance. You might pair an application that is busy in the morning with one that is busy at night, or a CPU-intensive application with one that is memory intensive. This is no easy task, and it requires analytics that use historical workload patterns to predict future requirements in order to dovetail the apps safely. Get it right, and the payoff is significant.
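As a rough illustration of the dovetailing idea (a simplified sketch, not any vendor’s actual method), the snippet below greedily pairs workloads whose hourly CPU profiles are most complementary, so each pair’s combined peak fits within the instance. All workload names, profiles, and the capacity figure are invented for illustration.

```python
# Hypothetical vCPU profiles (four 6-hour buckets) for four workloads.
profiles = {
    "morning-batch": [7, 7, 2, 1],
    "evening-web":   [1, 2, 7, 7],
    "daytime-api":   [2, 6, 6, 2],
    "nightly-etl":   [6, 2, 2, 6],
}

capacity = 9  # vCPUs per cloud instance (assumed)

def combined_peak(a, b):
    """Peak demand when two profiles share one instance."""
    return max(x + y for x, y in zip(a, b))

# Greedy dovetailing: repeatedly take the pair with the lowest combined
# peak. (Workloads whose best pairing exceeds capacity would have to run
# unstacked; that fallback is not handled in this sketch.)
unplaced = dict(profiles)
pairs = []
while len(unplaced) > 1:
    names = list(unplaced)
    best = min(
        ((a, b) for i, a in enumerate(names) for b in names[i + 1:]),
        key=lambda p: combined_peak(unplaced[p[0]], unplaced[p[1]]),
    )
    peak = combined_peak(unplaced[best[0]], unplaced[best[1]])
    if peak <= capacity:
        pairs.append((best, peak))
    for name in best:
        del unplaced[name]

for (a, b), peak in pairs:
    print(f"{a} + {b}: combined peak {peak} vCPUs (capacity {capacity})")
```

With these made-up profiles, the day-heavy API dovetails with the night-heavy ETL job, and the morning batch pairs with the evening web tier, so two instances host four workloads. Real tooling would, of course, work from measured historical data and handle far more dimensions than CPU alone.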
Densify is an analytics service that does just that; it recently published a case study highlighting an 82 per cent saving from moving 983 workloads off individual AWS instances to a container stacking strategy.
It is important to note, however, that container stacking may not be appropriate for every application. Open source container technology is still maturing and does not yet offer the robust management, security, and resiliency that mission-critical applications demand. For non-critical apps, though, container stacking can be a great option. The key is being aware of the gaps in the management ecosystem surrounding containers today, and making thoughtful decisions about which workloads belong there and which do not.
Realising the promise
The container landscape is starting to gel and fill in gaps in standards, management systems, orchestration tools, and other advances that will address many of the technology’s most pressing needs. The major public cloud providers have also rushed to implement support for container-based applications and microservices, recognising the many synergies that exist between containers and the cloud.
As more organisations grow comfortable with container technology, and as they see others realising these kinds of savings, the steady move toward containers will likely become a stampede. But creating, deploying, and optimising containers at broader scale will demand that companies have deep visibility into how these building blocks consume the underlying resources they share. CIOs and IT managers who crack this, and who harness the advanced strategy of container stacking within cloud instances, will reap dramatic savings before their attention is diverted to the next technological development. Their lives are never boring!
Yama Habibzai is CMO of Densify
Image Credit: Rawpixel / Shutterstock