
Everything became code (and why that’s good)

(Image credit: Shutterstock/McIek)

The debate – if there ever was much of one – is now over. The IT industry is clearly moving to a world of everything as code. More and more, organisations are embracing the idea of modelling the components of software development and delivery programmatically in code. If they can’t treat literally everything as code, they are incorporating as much as possible, including infrastructure, networks, delivery workflows, environments and more. Why? Because capturing these elements as code enables them to be versioned, shared, reused and refined through collaboration. In short, organisations that adhere to the everything as code concept are applying proven software development practices to other domains, enabling them to increase automation and repeatability, reduce errors and accelerate delivery pipelines.

In a world where businesses must deliver high-quality software fast to remain competitive, IT organisations are turning to practices such as continuous delivery and DevOps. These practices focus on establishing the ability to deliver software reliably and repeatedly. We will take a detailed look at how codifying delivery through everything as code makes the entire process repeatable, enables a DevOps approach and allows the business to realise value by getting innovation to market faster.

Applying proven software development best practices in new domains

Code is, at its heart, a set of text files. Over the decades, the industry has developed a wide array of tools and best practices for creating those text files as quickly as possible and with as few errors as possible. For example, virtually every development team uses Git or another version control system – often hosted on a platform such as GitHub – along with common versioning practices to manage multiple variants of their code as it is developed. These tools and practices are designed to help teams track differences in the code and collaborate more efficiently and with greater transparency. Further, they support auditability and traceability, so teams know who changed what and when. They also support roll-backs and repeatability, enabling teams to reliably construct any version of their application from their versioned code at any time.
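
As a sketch of those rollback and repeatability benefits, the following shell session (assuming Git is installed; the file name and values are hypothetical) versions a small configuration file and then restores a known-good revision:

```shell
set -e
workdir=$(mktemp -d)              # throwaway repository for the demonstration
cd "$workdir"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo"

echo "replicas: 2" > deploy.yaml  # first version of the configuration
git add deploy.yaml
git commit -qm "Initial environment definition"

echo "replicas: 4" > deploy.yaml  # a later change that turns out to be bad
git commit -qam "Scale up for launch"

# Roll back: restore the file from the previous, known-safe commit.
git checkout -q HEAD~1 -- deploy.yaml
cat deploy.yaml                   # prints "replicas: 2"
```

Because every revision is captured, any historical state of the file – and, by extension, of an entire codified environment – can be reconstructed on demand.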

Everything as code extends the benefits of versioning tools and practices – as well as numerous other software development tools and practices – to virtually every other aspect of the software development and delivery process. For example, once you have captured a configuration, workflow or environment via YAML, JSON or any human-readable syntax, you can always recreate what you have captured. Using traditional software development tooling and practices, you realise the same auditability, traceability, repeatability, reuse and related benefits that you get for your application code across other software delivery domains.
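
For instance, a hypothetical environment definition captured in YAML (all names and values below are illustrative) is just versionable text, so it can be diffed, reviewed and recreated like any other code:

```yaml
# Illustrative environment definition, kept under version control
environment:
  name: staging
  region: eu-west-1          # example region name
  services:
    web:
      image: example/web:1.4.2
      replicas: 3
    database:
      image: postgres:15
      storage: 20Gi
```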

What other domains can be managed using an everything as code approach? More than you might expect:

  • Networks – With software-defined networking, teams programmatically initialise, control, change and manage network behaviour dynamically via open interfaces.
  • Build and deployment workflows – Development and operations teams model software delivery pipelines in code to enable continuous delivery and speed up release cycles.
  • Infrastructure – Infrastructure as code, using Chef, Puppet, Ansible and similar tools, enables organisations to orchestrate system and environment configurations quickly and at large scale.
  • Environments – Using container technology, developers define entire environments in code, as self-contained systems. For example, Docker uses text-based Dockerfiles to build shareable images that fit specific infrastructure requirements.
  • Tests – Both functional tests (including unit tests, acceptance tests and tests created for behaviour-driven development) and non-functional tests (such as performance and load tests) are defined and managed programmatically via a wide variety of tools and scripting languages.
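
To illustrate the environments point above, a minimal, hypothetical Dockerfile captures an entire runtime environment as versionable text (the base image and file names are illustrative):

```dockerfile
# Illustrative Dockerfile: the whole runtime environment is plain text,
# so the same image can be rebuilt and shared by every team member.
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```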

Harnessing the power of a programmatic, normalised representation

Aside from making it possible to reap the rewards of time-tested development tools and best practices, everything as code provides a normalised representation of the software delivery process. Normalised means a shared view across stakeholders – one that defines the process clearly and unambiguously, providing insights that can otherwise be difficult to come by, as well as a mechanism for stakeholders to collaborate more effectively.

Nearly all organisations have a wide range of domain-specific tools that drive the development and delivery pipeline. Traditionally, each tool must be accessed via its own dedicated user interface (UI) in order to configure aspects of the process. In this environment, the entire process is fragmented across disparate UIs, making it difficult to achieve a common view of the delivery pipeline.

In contrast, with an everything as code approach, tools are controlled and configured programmatically via application programming interfaces (APIs), SDKs and text-based representations, enabling the process to be codified into a shared, normalised representation. Such representations can be readily shared with other teams – either within the organisation or, via platforms such as GitHub, outside it – who can then provide feedback and make improvements. Defining the production environment programmatically – with containers, for example – makes it possible to model the end state of the process early on, which in itself reduces errors.

Any aspect of development and delivery implemented in code can be validated using tools that employ syntax checks or higher-level logic. And finally, because all changes are captured and versioned, when an error is identified downstream in the delivery process, teams are equipped to immediately roll back to a known safe version, diagnose the error and correct it.
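
As a minimal sketch of that kind of validation – the function name, required keys and JSON layout here are all hypothetical – a codified pipeline definition can be checked for both syntax errors and higher-level rules before it ever runs:

```python
import json

def validate_pipeline_config(text):
    """Syntax-check a JSON pipeline definition, then apply higher-level rules."""
    try:
        config = json.loads(text)          # syntax check
    except json.JSONDecodeError as err:
        return False, f"Syntax error at line {err.lineno}: {err.msg}"
    # Higher-level logic: require the keys this (hypothetical) pipeline needs.
    missing = [key for key in ("stages", "environment") if key not in config]
    if missing:
        return False, f"Missing required keys: {missing}"
    return True, "OK"

good = '{"stages": ["build", "test"], "environment": "staging"}'
bad = '{"stages": ["build", "test"]'     # unbalanced brace: syntax error

print(validate_pipeline_config(good)[0])  # → True
print(validate_pipeline_config(bad)[0])   # → False
```

Because the definition is plain text, the same check can run in an editor, a pre-commit hook or the pipeline itself.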

Caveats and coming attractions

Of course, transitioning from treating only application code as code to everything as code will be easier for some organisations than for others, and there are considerations to bear in mind. Adopting an everything as code approach, for example, tends to shift organisations towards a more collaborative culture, much like a DevOps transformation. Bringing configurations and processes out of UIs and exposing them all as code that is shared and versioned increases transparency. Agile shops may already be comfortable with this transparency and culture, but more traditional software organisations may find the initial change more jarring.

Another caveat to take into account is that not everyone takes to code naturally. UIs were developed for a reason – specifically, to make it easier for users to complete tasks without editing files and running tools from the command line. Some organisations will face a longer learning curve as they transition from UIs that guide users through steps to understanding how to implement those steps in code. Fortunately, this potential stumbling block was recognised at the earliest stages of the everything as code movement, so there has been a sustained effort to provide solutions and technologies that make the transition easier.

One example from the Jenkins ecosystem of making these technologies approachable for users of all skill levels is the Blue Ocean project, which enables teams to visually create a code-based representation of their continuous delivery (CD) pipeline without detailed coding. The user experience provides guidance on defining CD pipelines while retaining the benefits of everything as code. In a similar way, the newer Declarative Pipeline gives teams a way to define how they want their pipelines to work using configuration-style files rather than scripts. The idea is that you express what you want to accomplish in a flat text file, and tools then interpret the declarations in the file, convert them into more complex operations and execute those operations.
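
As a sketch of that idea, a minimal Jenkins Declarative Pipeline (the stage names and `make` targets below are illustrative) reads more like configuration than script:

```groovy
// Illustrative Jenkinsfile using Declarative Pipeline syntax
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'     // example build command
            }
        }
        stage('Test') {
            steps {
                sh 'make test'      // example test command
            }
        }
        stage('Deploy') {
            when { branch 'main' }  // only deploy from the main branch
            steps {
                sh 'make deploy'    // example deploy command
            }
        }
    }
}
```

The file declares what the pipeline should do; Jenkins interprets the declarations and carries out the underlying operations.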

Brian Dawson, DevOps evangelist and practitioner, CloudBees

Brian Dawson
Brian is currently a DevOps evangelist and practitioner at CloudBees where he helps the open source community and CloudBees customers in the implementation of agile, continuous integration (CI), continuous delivery (CD) and DevOps practices. Before CloudBees, Brian spent over 22 years as a software professional in multiple domains including QA, engineering and management. Most recently he led an agile transformation consulting practice helping organizations small and large implement CI, CD and DevOps.