The saying used to go that “Nobody ever got fired for buying IBM.” Next, it was “... for buying Oracle.” Today, it’s “... for buying AWS.” Talk about a not-so-subtle connotation.
What clichés like this really mean is, “It’s OK to play it safe and choose the dominant vendor of the day. In fact, if you like your job, it’s probably advisable.” That might have been true historically, when the primary function of IT was running back-office applications and keeping the lights on, but it’s becoming increasingly untrue by the day. In 2003, Nicholas Carr proclaimed in the Harvard Business Review that “IT Doesn’t Matter.” In 2018, nothing could be further from the truth.
Today, CIOs are being tasked not just with managing information technology, but actually with facilitating and empowering digital innovation. Moving some applications to a public cloud platform is only a small first step in accomplishing these mandates, which are quite a bit loftier than making sure the database and email server stay up.
Developers are revered, but underused
A good number of executives appear to have received this message. For example, online payment leader Stripe recently published survey results showing that many C-level executives think a lack of access to application developers is a bigger threat to their companies than is a lack of access to capital. If that isn’t a paean to the value of developers, I don’t know what is.
The same survey, however, also suggests that executives don’t always appear to understand how to harness that developer talent for good. It found that developers currently spend an average of 17 hours per week maintaining legacy applications and fixing bad code—a lack of efficiency that results in an estimated $300 billion worth of lost GDP every year.
Cloud-native software company Pivotal and research partners Longitude Ltd. and Ovum Ltd. also recently conducted an independent survey of more than 1,600 IT executives globally that dives a little deeper into the factors behind these attitudes and how organizations are addressing them.
For example, 51 percent of U.S. respondents report continuous or daily feedback from customers (a number that would probably be higher if more people knew where to look for feedback), which helps to explain why 55 percent of them are now deploying code continuously, hourly or daily. The uptick in customer engagement, and the resulting uptick in deployment velocity, is a big reason why access to developers is so important. (Interestingly, although only 43 percent of German respondents claimed at least daily customer feedback, 72 percent of them reported deploying code at least daily.)
The survey also underscores the concern about developer talent and investment being wasted on low-value tasks. Looking strictly at the United States (these numbers are lower globally, on average), it found that:
- Only 45 percent of applications have been built or refactored to run in the cloud.
- Only 51 percent of companies are spending more on developing new applications and refactoring legacy ones than they are on maintaining legacy code.
- Fully half of U.S. respondents said their IT budgets are fully committed at the beginning of the year, meaning it can be difficult to find money for new projects as the year goes on.
Enterprises are headed in the right direction, but legacy IT is still weighing down their initiatives to modernize via software. That’s not a great place to be in a world where tech companies are increasingly using their software and systems prowess to attack new markets, and where customers and developers can always find greener grass someplace else.
The continuously deployed business
One way for executives to bring their understanding of what they know they need to do (hire more developers and build better applications) together with their actual IT processes (spending altogether too much time maintaining legacy systems) is to embrace the concept of continuous deployment across their companies, and at every level of the software stack. There’s a lot more to this than can be captured here, but here’s a 5-step framework for thinking about how to do it:
1. Put culture before computing
The first step in doing company-wide continuous deployment actually has little to do with technology and much more to do with culture. From corporate leadership down to individual developers, people need to feel empowered to take chances, jump at new opportunities and fix things that are broken. There’s some inherent risk in moving quickly—which is where planning and CD concepts such as version control come into play—but risk in the name of improvement is better than paralysis because of fear.
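One lightweight mechanism that lets teams take chances without betting the whole user base is a percentage-based feature flag: risky changes ship dark, reach a small slice of users first, and widen only as confidence grows. Here is a minimal sketch of that idea; the feature name, user ID and rollout percentages are illustrative, not from any particular flagging product.

```python
import hashlib

def flag_enabled(feature: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into a percentage-based rollout.

    Hashing (feature, user_id) assigns each user a stable bucket from 0-99,
    so the same user keeps seeing the same variant as the rollout widens.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Start a risky change at 5 percent of users; raise the percentage once
# metrics look healthy, or drop it to 0 to roll back instantly.
if flag_enabled("new-checkout-flow", "user-42", 5):
    pass  # serve the new code path
else:
    pass  # serve the existing code path
```

Because the bucketing is deterministic, turning the dial from 5 to 50 percent only adds users to the new path; nobody flaps between variants, which keeps measurement clean.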
2. Think about IT as an opportunity center
Because software drives so much product development today, and is often the primary connection between companies and their customers, it’s only natural that the IT department should take on a more pronounced role in solving business-level issues. Executives might present IT leaders with problems or opportunities and task their teams with devising the right solutions. Or, CIOs and CEOs trying to take advantage of hot trends (e.g., artificial intelligence or serverless computing) might task IT with identifying easy first implementations.
3. Invest in modern technology platforms
Actually choosing and deploying new technologies is where the rubber of high-falutin’ digital transformation talk meets the road of actually building software better. The ultimate goal is to empower developers to act faster and safer by embracing modern application architectures, development patterns and open source technologies. A modern platform (from infrastructure up to OS) facilitates this by maximizing productivity, efficiency and automation, while minimizing fear of breaking changes, security holes and other operational concerns. Strictly infrastructure-level solutions like “going to the cloud” or “doing Docker” can demand undue effort for little actual gain in speed, flexibility, efficiency or security.
4. Deal with legacy applications as needed
At the risk of being obvious, it’s always worth a reminder that although many legacy workloads can’t disappear, many legacy applications can. Some can simply be replaced by SaaS, which all but eliminates the overhead of maintaining those systems and code bases. In other situations, applications can be refactored to fit more naturally into modern development and application lifecycle practices. Containers, microservices and functions (aka serverless) are all options here, as long as the new architecture meets the ultimate goal of increasing productivity (without sacrificing security or blowing up the budget, of course).
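To make the functions option concrete, here is a toy sketch of what extracting one piece of a legacy batch job into a stateless, serverless-style handler can look like: a small function that takes an event in and returns a response, the shape most function-as-a-service platforms expect. The handler name and event fields are hypothetical, not tied to any specific provider.

```python
import json

def handle_invoice_total(event: dict) -> dict:
    """Hypothetical refactor: one step of a legacy invoicing batch job,
    rewritten as a stateless event-in/response-out handler.
    """
    items = event.get("line_items", [])
    total = sum(i["quantity"] * i["unit_price"] for i in items)
    return {
        "statusCode": 200,
        "body": json.dumps({"invoice_total": round(total, 2)}),
    }

# Local invocation for testing; on a real platform the event would arrive
# from an HTTP trigger or a message queue instead.
resp = handle_invoice_total(
    {"line_items": [{"quantity": 2, "unit_price": 9.99}]}
)
```

Because the handler holds no state of its own, it can be deployed, scaled and rolled back independently of the rest of the legacy system, which is exactly the lifecycle win the refactoring is after.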
5. Measure the right things
Try quantifying success at the business level, rather than focusing solely on application performance. That might mean measuring whether a new feature or refactored application is improving sales, customer engagement or bounce rates, as well as softer metrics such as customer service complaints or the tone of social media conversations. Developer productivity should be another important metric, because productive developers are happy developers. The beautiful part about continuous deployment is that it encourages a constant cycle of measurement, iteration and testing that should keep things headed in the right direction overall, despite a hiccup here and there.
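As a small illustration of business-level measurement, the sketch below compares a hypothetical checkout conversion rate before and after a release window, instead of only watching CPU and latency dashboards. The numbers and metric are invented for the example.

```python
def conversion_rate(sessions: int, purchases: int) -> float:
    """Fraction of sessions that ended in a purchase."""
    return purchases / sessions if sessions else 0.0

def release_delta(before: tuple, after: tuple) -> float:
    """Change in conversion rate across a release window.

    Each argument is a (sessions, purchases) pair; the result is the
    difference in rate, e.g. +0.006 means +0.6 percentage points.
    """
    return conversion_rate(*after) - conversion_rate(*before)

# 10,000 sessions / 300 purchases before the deploy;
# 10,000 sessions / 360 purchases after it.
delta = release_delta((10_000, 300), (10_000, 360))
```

Tracking a per-release delta like this is what closes the loop the paragraph describes: each deploy produces a measurement, and the measurement drives the next iteration.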
Unfortunately, hiring all the developers in the world can’t fix decades’ worth of reliance on brittle systems, inefficient processes, and annual product release and software update cycles. Successful companies will need to take things further and view today’s dynamic IT landscape as what it is—an opportunity to be a better business by quickly capitalizing on opportunities, fixing flaws and never settling for good enough. In a choice between being Equifax and becoming Netflix, the choice to modernize seems like a no-brainer.
Derrick Harris, Product Marketing Manager at Pivotal