Disruption is everywhere, irrespective of industry. Customers are constantly looking for cheaper, faster and better services, and companies like Netflix in media, Amazon in retail and WhatsApp in telephony are providing them. Software-defined disruption is now at the forefront of many industries, and while it is unsettling for some, others have thrived, and start-ups have left larger, more established enterprises in their wake.
This is the landscape that Canonical and our customers operate in, and one which is shaking up the thinking of savvy enterprise companies. When we get a call, it is typically to solve one of two problems as expressed by the customer. Either the company is looking to adopt an entirely new workload: AI, machine learning, Kubernetes or containers, for example. Or, and this is increasingly the case, it has identified a competitor who poses an existential threat to its business, and the conversation is about how best to head off that threat through new capability. In practice, these often become the same meeting. Amazon buying Whole Foods puts every established supermarket chain on watch for the inevitable disruption this will bring. Our conversation is about leaning in to beat that trend rather than being left behind by it.
Be brave and take a leap of faith with software innovation
One weapon available to everyone, incumbent or disruptor, is open-source software. At its core, open source gives you a great deal of capability, available both in libraries of data and code and in the millions of individuals working on those libraries. But open source also means a chance to share and crowd-source innovation. By sharing your efforts to solve big, hard problems, you invite the world to help improve the answers in a way that any single company would struggle to do alone. It is now normal to find entities ranging from Walmart and Carrefour to eBay loudly open-sourcing their work and inviting anyone interested in retail innovation to contribute to and leverage this shared body of work. Sharing is always a leap of faith, but one that is well worth the effort in this era of disruption.
And while open source has typically been assumed to refer to software, the concept is just as applicable to operations, i.e. how that software is run in a data centre. For example, Deutsche Telekom and Bell Canada, two large telcos on different continents, share a similar approach to operations for their next-generation network infrastructure. They have decided that sharing the same underlying models of their IT infrastructure gives everyone an advantage: if one of them makes a marginal improvement to a piece of the stack, everyone else using that stack benefits. Differentiation can now be focused on the services delivered to end-users rather than the underlying servers they run on. We are going to see a world where infrastructure and operational knowledge become a commodity, and where crowd-sourcing of IT becomes something we do as a matter of course.
How to cope with infrastructure complexity
One reason that this crowd-sourcing of IT is inevitable is that we are now dealing with a different class of software.
Most legacy infrastructure, which takes up the bulk of the budget and floor space in enterprise data centres, typically comprises monolithic, slow-changing applications - a database server, say - running on a relatively small number of machines. But take any cutting-edge software capability today - machine learning, big data, or indeed an OpenStack architecture - and it must be integrated, configured and tuned in a way that is specific to each group of users. The resulting solution is typically assembled from multiple, disparate sources and then deployed across elastic infrastructure that can scale to thousands of servers. Change - patches, versions, configuration updates - is assumed to be part of the daily beat, not a special event. Operations at this scale and speed is a different, and far more complex, problem.
We coined the term “Big Software” to describe this class of at-scale software, which organisations now rely on to stay ahead. Any innovating organisation must expect to ingest and rely on growing amounts of Big Software. There is simply no way that any organisation can ramp up and maintain operational expertise of this new type rapidly enough to keep pace with the business imperative. But it can get there by open-sourcing its IT operations knowledge. Doing this involves encapsulating operational expertise in intelligent, open-source ‘models’ that are iterated on by many organisations at once. Those models become the automation backbone for Big Software, delivering speed and economics that legacy IT approaches can only dream of.
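To make the idea of an operational ‘model’ concrete, here is a minimal sketch in Python. The schema and names (applications, machines, relations) are illustrative assumptions, not any specific tool's format; the point is that the knowledge of how a workload fits together lives in shareable, checkable data rather than in a site-specific runbook.

```python
# Hypothetical operational model: the expertise to run a big-data
# workload captured as declarative data that many organisations could
# share and improve. All field names here are illustrative assumptions.
big_data_model = {
    "applications": {
        "spark-master": {"machines": 1, "config": {"heap_gb": 8}},
        "spark-worker": {"machines": 100, "config": {"heap_gb": 16}},
    },
    # Relations record which applications must be wired together.
    "relations": [("spark-master", "spark-worker")],
}

def validate(model: dict) -> bool:
    """Sanity check: every relation endpoint must be a declared application."""
    apps = set(model["applications"])
    return all(a in apps and b in apps for a, b in model["relations"])

print(validate(big_data_model))  # prints True for the model above
```

Because the model is just data, improvements contributed by one organisation - a better default heap size, a missing relation - can be reviewed, versioned and picked up by everyone else using it.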
Let automation set you free
We believe that this shift to ‘model-driven’ automation of IT is inevitable, because the economics say so.
Companies routinely pour 80 per cent of their IT budget into simply operating existing infrastructure: running required installations and upgrades and basically keeping the lights on. That leaves just 20 per cent for innovation, and this is the shortfall that disruptors exploit. If your business is to grow and remain competitive in this software-defined age, that dial needs to move the other way, and quite substantially.
It starts with getting past the mindset that IT operations have to be done by hand. That approach was adequate for the decade gone by, but in a world of at-scale infrastructure and agile, oft-changing, composable workloads ingested from a variety of sources, every IT organisation has to think about running its data centres the way a Google or an Amazon runs theirs. This means automation as the default: moving beyond simple scripting of batch processes to truly intelligent, model-driven operations that allow IT staff to offload the routine entirely and spend their time on competitive differentiation.
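The difference between scripting and model-driven operations can be sketched as a reconcile loop: instead of hand-running a sequence of steps, an operator declares the desired state and software computes whatever actions close the gap. The function and action names below are hypothetical, chosen only to illustrate the pattern.

```python
# Sketch of model-driven operations as reconciliation, under assumed
# (illustrative) names. A desired model maps each application to its
# target unit count; actual state is what is observed in the data centre.
def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired model against observed state and emit the
    actions needed to converge the two."""
    actions = []
    for app, spec in desired.items():
        if app not in actual:
            actions.append(("deploy", app, spec["units"]))
        elif actual[app] != spec["units"]:
            actions.append(("scale", app, spec["units"]))
    for app in actual:
        if app not in desired:
            actions.append(("remove", app, 0))
    return actions

desired = {"db": {"units": 3}, "api": {"units": 5}}
actual = {"db": 3, "worker": 2}
print(reconcile(desired, actual))
# prints [('deploy', 'api', 5), ('remove', 'worker', 0)]
```

Run continuously, a loop like this makes change - patches, scaling, removals - routine rather than a special event, which is exactly the property at-scale workloads demand.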
We work with customers across industries to bring the power of model-driven automation to their data centres. For example, we built a research cloud for a pharmaceutical customer. In under 90 days they went from concept to being able to deploy, configure, stop and start applications across thousands of machines. Where once their IT team would have done manual, one-off work across tens of machines, model-driven operations now allows that team to crunch big data sets, at will, on cloud infrastructure that just works. The pay-off is in the pace of discovery - shorter time-to-market for new drugs, chemicals or molecular entities - with the potential return measured in billions per patent. True model-driven automation pays for itself and then some.
Creative disruption is dominating every industry. To keep up with market leaders, businesses must let go and let software do the work, freeing their employees to be smarter and to innovate. There is a real appetite for disruption, and ‘disrupt or die’ is the new battle cry in this software-defined era.
Anand Krishnan, EVP & GM Cloud, Canonical
Image Credit: TeroVesalainen / Pixabay