No longer an emerging trend, containers are becoming the go-to technology for enterprises building new applications or refactoring existing applications for the cloud.
Containers enable application developers to package small, focused code into independent, portable modules containing everything needed to run it, and their use has grown dramatically in the wake of the coronavirus crisis.
According to Gartner, the current rate of adoption of containers means that by 2022, more than 75 percent of global organizations will be running containerized applications in production, up from less than 30 percent today.
By decoupling the application and its dependencies from the underlying infrastructure, containerization looks set to play a central role in the enterprise as organizations strive to become more agile and cloud-ready.
Containerization comes of age
Last year proved a pivotal moment for containerization as enterprises looked to enable new digital capabilities and make more of their applications cloud-native. Offering a slick approach to delivering compute, storage and networking for microservices, containers proved an increasingly popular choice for organizations looking to deploy applications fast in any environment.
Indeed, a recent study of data protection strategies for containers found that 67 percent of respondents were already running containers for production applications, with the remaining 33 percent planning to do so in the next 12 months. At this rate of growth, containers are set to become the most widely used platform for production deployment, ahead of virtual machines.
However, as containers rise in popularity, organizations will need to rethink their data protection strategies quickly, because traditional approaches to data protection are not suited to a modern containerized business environment.
That is because container-based applications cannot be backed up in the same way as on-premises, VM-based applications.
The importance of protecting next-gen containerized applications
Containers and cloud-native workloads are not immune to disruptive events and data loss. These include human error, outages and, most importantly, ransomware.
Ransomware attacks are no longer specific to applications run on physical or virtual servers. As container adoption grows, so too does the targeting of these applications. For example, malware is being designed and built specifically to target Kubernetes clusters. Once they are in, attackers can run malicious containers and spread to other nodes within the cluster.
For this reason, DevOps and IT teams should be thinking long and hard about their data protection and recovery strategies, because as containers continue to penetrate production environments, data protection SLAs will become even more critical. Failing to address the need for resilience risks disruptions that could result in significant business interruption costs.
Understanding the challenges involved
Since containers democratize the ability to provision infrastructure, data protection is becoming a shared mandate that involves IT operations teams, who provide the infrastructure, and the application development and cloud platform teams that create and deploy applications via containers.
However, this shared mandate often creates a disconnect in terms of who holds responsibility (the development team) and who is accountable (IT operations), which in turn increases the risk of improper protection being implemented across production applications.
To add further complexity to the challenge, containers can run across on-premises and public cloud environments. While in the past, IT teams operating virtualization architectures knew that application data was stored in a VMDK or on shared storage, containers change the rules of the game.
As a result, data protection approaches will need to be rethought because containers can move data storage to external data storage services in the cloud or on-premises, all of which impacts visibility into the state of data protection across these environments.
Finally, containers differ from mature virtual environments in one other significant respect: they offer fewer ways of ensuring that new workloads are configured correctly for data protection. Even next-gen applications, built with internal availability and resilience in mind, often lack a streamlined or simple way to recover from risks such as human error or malicious intent.
Protecting next-gen containerized applications
Traditional backup approaches lack the rigor needed to take care of data protection in a multi-cloud estate that features containers and virtual machines. Indeed, opting for non-native solutions from legacy backup and disaster recovery providers adds time, resources and barriers to application development.
However, adopting a native solution that drives a 'data protection as code' strategy ensures that data protection and disaster recovery operations are integrated into the application development lifecycle from the get-go, so that applications are effectively born protected: their protection requirements are defined as part of the deployment code.
There are multiple benefits to adopting this approach. The teams creating container-based workloads are able to apply pre-defined policies in a way that makes sense for them at the Kubernetes resource and object level, using annotations to automatically ensure all related persistent data elements are protected. Effectively, this should feel like an extension of 'infrastructure as code', with the additional backup of data in the container registry or at the artifact repository level providing true end-to-end resilience.
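As a rough sketch of what annotation-driven, policy-based protection can look like, the following Python snippet models the pattern described above. The annotation key (`protection.example.com/policy`) and the policy names are hypothetical placeholders, not any specific vendor's API: the point is that the workload's manifest declares its protection requirements, and tooling resolves them against policies that IT operations pre-defines.

```python
# Sketch of annotation-driven data protection ("data protection as code").
# The annotation key and policy names are hypothetical, not a real vendor API.

PROTECTION_ANNOTATION = "protection.example.com/policy"

# Pre-defined policies published by IT operations; developers only reference them.
POLICIES = {
    "gold": {"rpo_minutes": 5, "retention_days": 30},
    "bronze": {"rpo_minutes": 60, "retention_days": 7},
}

def resolve_policy(manifest):
    """Return the protection policy a Kubernetes-style manifest opts into, or None."""
    annotations = manifest.get("metadata", {}).get("annotations", {})
    return POLICIES.get(annotations.get(PROTECTION_ANNOTATION))

# The deployment declares its protection requirements alongside its other metadata,
# so the workload is effectively "born protected" when the manifest is applied.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "orders-api",
        "annotations": {PROTECTION_ANNOTATION: "gold"},
    },
}

print(resolve_policy(deployment))  # {'rpo_minutes': 5, 'retention_days': 30}
```

In practice a backup controller watching the cluster would perform this resolution, but the division of labor is the same: developers annotate, operations defines the policies behind the names.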
By eliminating any need to configure policies or build a separate data protection infrastructure, developers are free to consume containers in a self-service, on-demand manner, applying the policies they know will ensure data protection is taken care of, and leaving IT operations free to focus on policy-based management to retain visibility and assure compliance.
As a bonus, by using continuous data protection that is built into the application lifecycle rather than bolted on as an afterthought, recovery operations can be fully orchestrated from a granular, consistent point in time across all resources, a marked departure from nightly snapshots of data.
Delivering agility and resilience
Utilizing today’s innovative containerized technologies demands a new approach to data protection and disaster recovery. As many organizations have discovered to their cost, utilizing outdated monolithic backup apps risks compromising efficiency, application resilience and data protection capabilities.
By viewing containers and their data as a single entity and promoting continuous data protection as code for containerized applications, organizations can integrate resilience and data protection into their existing continuous integration/continuous delivery workflows in a manner that frees up both developer and IT operations cycles. All of which allows organizations to pursue a microservices architecture in a truly agile way, and without the risk of sacrificing resilience or data protection.
Deepak Verma, VP of Product Management at Zerto, a Hewlett Packard Enterprise company