
DevOps’ missing ingredient: Fast, secure data


Over the past several years, DevOps has evolved from a “trendy idea” into a tried-and-true approach for delivering enterprise applications. DevOps practices have done more than just pick up momentum over the last decade; they have become a mainstay of the IT industry, with more and more companies, startups and enterprises alike, adopting DevOps at scale.

As the latest IDC study shows, the DevOps software market is estimated to reach $6.6 billion in 2022. I’ve had a front-row seat to this prolific adoption, watching companies transform their software delivery pipelines by adopting Continuous Delivery, making infrastructure agile and manageable as code, and undergoing organisational and cultural transformation at scale. However, even from the most mature DevOps adopters today, I hear about a missing ingredient that continues to disrupt their otherwise highly automated pipelines.

That missing piece is fast, secure access to high-quality data. Provisioning data for dev and test environments can still take days or even weeks, causing serious delays in the development process and creating a bottleneck that prevents a holistic DevOps practice. To conquer these challenges, DevOps practitioners need to turn their data from a liability into an asset by rethinking how they approach data delivery.

Here are the emerging trends shaping the relationship between data and DevOps today, to inform the next leg of your DevOps journey and help you truly achieve Continuous Delivery.

More data in more places

As organisations migrate to transformational technologies like the cloud and begin to break large monolithic applications into microservices-based architectures, they are seeing the need to re-architect their data repositories too: moving from large, highly normalised databases to datastores that each hold the specific subset of data required by a set of microservices.

In any microservices-based architecture, you end up moving from a classical monolithic data store to several fit-for-purpose datastores. Every microservice team uses its own datastore because it needs only a specific subset of the data. To achieve this, organisations need to better categorise and segment their data to determine what goes into which datastore and where each kind of data should be allowed to live. Both the data architecture and the governance of that data therefore need to evolve to meet the needs of modern applications.
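
As a rough illustration, here is a minimal sketch of what that segmentation could look like in practice, assuming a hypothetical catalogue of per-service datastores. The service names, fields, and the `allowed` helper are invented for the example; a real system would drive this from a data catalogue and policy engine rather than a hard-coded list.

```python
# Hypothetical example: each microservice declares the subset of data it owns,
# plus a sensitivity tag that governance tooling can act on.
from dataclasses import dataclass, field

@dataclass
class DatastoreSpec:
    service: str                     # owning microservice
    engine: str                      # fit-for-purpose store (relational, document, key-value...)
    fields: list[str]                # the only fields this service may hold
    sensitive: set[str] = field(default_factory=set)  # fields needing masking or consent

CATALOG = [
    DatastoreSpec("orders",   "relational", ["order_id", "sku", "quantity", "customer_id"]),
    DatastoreSpec("profiles", "document",   ["customer_id", "name", "email"],
                  sensitive={"name", "email"}),
    DatastoreSpec("sessions", "key-value",  ["session_id", "customer_id", "expires_at"]),
]

def allowed(service: str, column: str) -> bool:
    """Governance check: may this service store this column at all?"""
    spec = next(s for s in CATALOG if s.service == service)
    return column in spec.fields

if __name__ == "__main__":
    print(allowed("orders", "email"))    # False: email never leaves the profiles store
    print(allowed("profiles", "email"))  # True, but flagged as sensitive
```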

Treating data like code

The real value of DevOps is realised when all three core components of the technology stack (infrastructure, application code, and data) can be changed, versioned, managed, and collaborated around in the same way. With the evolution of infrastructure-as-code technologies, infrastructure is now managed on a par with application code: we have change management, versioning, and collaboration around infrastructure definition and configuration, all expressed as code. The next step is to do the same with data in non-production environments. We must be able to manage, change, version, branch, and collaborate around data just as we do with application code, bringing it up to the same level.
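
To make the idea concrete, below is a conceptual sketch, not the API of any real product, of what branching and resetting a dataset could look like; the `DataRepo` class and its methods are hypothetical.

```python
# Conceptual sketch of "data like code": lightweight, versioned branches of a
# dataset that developers can create, modify, and discard without touching the
# source copy. The DataRepo API is invented for illustration.
import copy
from datetime import datetime, timezone

class DataRepo:
    def __init__(self, source_rows):
        self.versions = {"main": [dict(r) for r in source_rows]}  # branch name -> rows
        self.log = []

    def branch(self, name, from_branch="main"):
        """Give a dev/test environment its own writable version of the data."""
        self.versions[name] = copy.deepcopy(self.versions[from_branch])
        self.log.append((datetime.now(timezone.utc), f"branch {name} from {from_branch}"))
        return self.versions[name]

    def reset(self, name, to_branch="main"):
        """Throw away local changes and rewind the branch, much like a code checkout."""
        self.versions[name] = copy.deepcopy(self.versions[to_branch])
        self.log.append((datetime.now(timezone.utc), f"reset {name} to {to_branch}"))

repo = DataRepo([{"id": 1, "status": "active"}])
repo.branch("feature-checkout")                          # test environment gets its own version
repo.versions["feature-checkout"][0]["status"] = "void"  # experiment freely on the branch...
repo.reset("feature-checkout")                           # ...then rewind in seconds, not days
print(repo.versions["feature-checkout"])                 # [{'id': 1, 'status': 'active'}]
```

Real implementations typically rely on copy-on-write snapshots at the storage layer rather than full copies, which is what makes minutes-scale provisioning realistic.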

Moving from DevOps to DevSecOps

While DevOps is all about applying lean principles to accelerate feedback and improve time to value, there are three important dimensions of security in a DevOps enterprise, a combination now being referred to as DevSecOps.

The first is securing the perimeter: controlling access to data environments, both production and non-production. Next, you must secure the delivery pipeline itself, which includes eliminating vulnerabilities related to the software supply chain, insider attacks, errors within the development project, and weaknesses tied to design, code, and integration. Your security practices must ensure that no one with access to the delivery pipeline can insert malicious code or improperly access production data. The third is securing the application itself, ensuring that proper identity management controls access to the application and its associated data when running in production.
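
As a rough sketch of the first two dimensions, the snippet below shows a pipeline gate that checks whether the caller's roles permit provisioning a given environment. The role names and policy table are invented for the example; a real pipeline would delegate this decision to an identity provider or policy engine rather than an in-script table.

```python
# Hypothetical pipeline gate: enforce the perimeter (who may touch which
# environment) and fail the pipeline stage rather than silently hand
# production data to an unauthorised job.
POLICY = {
    "production":     {"release-managers"},
    "non-production": {"developers", "testers", "release-managers"},
}

class AccessDenied(Exception):
    pass

def authorise(user_roles: set[str], environment: str) -> None:
    """Raise (and so fail the stage) unless the caller holds an allowed role."""
    allowed_roles = POLICY.get(environment, set())
    if not user_roles & allowed_roles:
        raise AccessDenied(f"roles {sorted(user_roles)} may not provision {environment}")

# Example: a CI job run by a developer refreshing a test environment.
authorise({"developers"}, "non-production")   # passes
try:
    authorise({"developers"}, "production")   # blocked at the perimeter
except AccessDenied as err:
    print(f"pipeline stage failed: {err}")
```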

Define data privacy versus data security

While there has been a lot of focus on integrating security processes within DevOps, data privacy has taken a back seat; that can no longer be the case. First gaining the spotlight after incidents like Equifax’s massive breach and the infamous Facebook-Cambridge Analytica scandal, data privacy is now front and centre in the public eye. As calls for increased privacy regulation like Europe’s GDPR grow across the world, so does the responsibility of companies to safeguard data.

Data privacy is all about mitigating risk within the data, independent of who has access to it, whereas security has to do with ensuring that only the right people have access to the data.

For example, consider what happens when a developer decides to collect new location data from a mobile app. What are the privacy implications? A successful DevOps operation considers the risks and puts “guardrails” around sensitive data to avoid mishandling. Technology such as automated data masking and obfuscation exists today and can be fully integrated into the delivery pipeline’s automation framework for enhanced protection, without sacrificing speed.
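
As a minimal sketch of how such masking can sit inside the provisioning step, the snippet below replaces sensitive fields with deterministic, irreversible tokens. The field names and rules are illustrative; production-grade masking tools also preserve formats and referential integrity in ways this toy example does not.

```python
# Illustrative masking step applied while provisioning production data into a
# test environment; field names and masking rules are invented for the example.
import hashlib

SENSITIVE_FIELDS = {"email", "phone", "location"}

def mask_value(field: str, value: str) -> str:
    """Replace a sensitive value with a deterministic, irreversible token."""
    digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:12]
    return f"masked_{digest}"

def mask_record(record: dict) -> dict:
    """Copy a record, obfuscating sensitive fields and leaving the rest intact."""
    return {k: mask_value(k, str(v)) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

production_row = {"id": 42, "email": "jane@example.com", "location": "51.5,-0.12"}
print(mask_record(production_row))
# {'id': 42, 'email': 'masked_...', 'location': 'masked_...'} - usable for tests, safe to share
```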

As DevOps continues to evolve and scale in a world driven by the modernisation of applications, so grows the need for rapid data delivery to fuel these projects. To reach that challenging final step of your DevOps journey, data needs to be freely available, in a secure and compliant manner, in both production and non-production environments. Not in days, not in hours, but in minutes.

Developers, testers, and other practitioners and stakeholders throughout the organisation’s delivery pipeline need to be able to get the data they need, when and how they need it, while that data remains secure and compliant.

This is easier said than done, of course, and it will require the focus of organisations as they continue down the DevOps adoption path this year and beyond.

Sanjeev Sharma, Global Practice Director for Data Transformation, Delphix
Image Credit: Profit_Image / Shutterstock