The high-profile implosion of General Electric’s (GE) ambitious digital strategy has caught the attention of many practitioners and IT leaders over the last few months. GE is an iconic business conglomerate with major operations in aerospace, transportation, and power, along with a major financing arm.
GE embarked on a massive digital transformation strategy a few years ago, with GE Digital ostensibly in charge of driving disruptive digital capabilities into the various business units. GE’s digital operation was touted as building software capabilities that drive business differentiation and ROI across aircraft engines, supply chains, transportation, and power. GE has also been on record touting billions of dollars in cost savings attributed to this digital transformation, mainly due to fault tolerance and lower operating costs. According to Reuters, GE expects to book around $12 billion in digital revenue in 2020. For perspective, GE’s total revenue in 2017 was nearly $124 billion.
The corporate tumult at GE that led to the resignation of Jeff Immelt as CEO seems to have affected GE Digital as well. Predix, GE’s industrial internet platform, has had technical challenges and delays that resulted in a scaling back of ambitions and investment. After six years and $4 billion in spend, the new CEO, John Flannery, called a two-month timeout to fix Predix’s problems. Flannery also announced that once the issues have been fixed, future sales efforts will focus on existing sectors rather than on new industries.
Digital transformation is critical to the competitiveness and survival of businesses today. We all hope that Predix and GE will overcome their challenges and benefit from their digital transformation journey. However, this high-profile failure is an example of why digital or cloud transformation projects rarely succeed on the first couple of tries.
As a result of digital transformation, there is significant pressure on IT infrastructures in three major ways:
- to adapt to this new way of doing things and to offer multiple channels and applications for consumers to engage with the business
- to offer ‘smart’ applications that can be more easily updated and extended, and that often employ AI and predictive analytics/ML to detect customer preferences and provide value-added services on the fly, services that not only provide a better experience but also help build a longer-term customer relationship
- to help the business prototype, test, refine, and rapidly develop and release new business capabilities and applications at a much faster pace
Today, every company is a software company – from financial services to retail, mobile and telecommunications, IoT, automotive, and even utilities and government agencies. The core nature of corporate IT is thus changing: from an organization dedicated to keeping the trains running on time to one that is expected to be a key driver and enabler of a DevOps or digital transformation, focusing on innovative approaches to provide, consume, and operate IT resources to benefit the organization (for example, offering IT as a service, automating IT Ops, and supporting cloud-native applications).
Given all this, there are four major reasons why digital transformation projects typically fail – from small organizations to the GEs of the world. As you map your journey and IT processes to this new digital world, keep these principles in mind to avoid some of the common pitfalls on the road ahead.
Reason #1: Your Digital Initiative Lacks Business Focus and a Digestible, Quantifiable Rollout Plan
The first major lesson gleaned from the GE case or similar stories is that a Digital Platform must have a solid business focus on solving quantifiable business challenges. Excitement and hype generated by executives have little meaning unless each use case is backed by solid ROI in three critical prongs:
- Cost reduction
- Revenue generation – from new customers, and retaining existing ones
- The development of new business models and market opportunities
Yes, digital transformation is disruptive and crucial for an organization. But you cannot boil the ocean; you need specific goals and a digestible, agile rollout plan for what you want to attack first. You also need to measure your progress, so you can either succeed or fail fast and iterate (rather than finding out years and $4 billion later).
Approaches built around ‘wholesale’ business disruption end up failing due to political challenges, too much change too soon, and the lack of a clear focus and manageable action plan – and because they can consume resources and disrupt existing business models that have been performing reasonably well. Line-of-business applications with clear, discrete challenges that can benefit from a digital approach are the most likely candidates for transformation.
Once you see success, you can roll it out to other projects and continue to expand your transformation. For example, once a core application has been architected and built with reusability in mind, legacy applications can use an API-based approach to integrate with infrastructure and cloud capabilities, forming a digital ecosystem of applications. These applications can ultimately be offered as SaaS for business partners and customers to consume. The end state of the digital platform thus becomes a business system that is not only responsive to customer interactions but also supports massive scale in terms of connected services, data, users, partners, and employees.
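As a minimal sketch of the API-based approach above (all names and formats here are hypothetical, not from any real GE system): a thin adapter can expose a legacy routine behind a stable, versioned interface, so newer digital services reuse it without touching the legacy code.

```python
# Hypothetical sketch: wrapping a legacy capability behind a stable,
# versioned API layer so new digital services can reuse it.

def legacy_inventory_lookup(raw_part_code: str) -> str:
    """Stand-in for an existing legacy routine (quirky input/output formats)."""
    return f"OK:{raw_part_code.strip().upper()}"

class InventoryAPI:
    """Modern facade: validated inputs, structured outputs, easy to put behind HTTP."""

    API_VERSION = "v1"

    def get_part_status(self, part_code: str) -> dict:
        # Validate and normalize before delegating to the legacy system.
        if not part_code or not part_code.strip():
            raise ValueError("part_code is required")
        raw = legacy_inventory_lookup(part_code)
        status, code = raw.split(":", 1)
        return {"version": self.API_VERSION, "part": code, "available": status == "OK"}

api = InventoryAPI()
print(api.get_part_status("  ge-9x "))  # structured result instead of a raw legacy string
```

The design choice is the point: callers depend on the versioned facade, so the legacy routine can later be migrated or replaced without breaking the ecosystem built on top of it.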
Reason #2: Don’t Pigeonhole Your Technology Investments
While the modern technologies and architectural patterns that often accompany digital initiatives offer innovative possibilities, existing legacy applications and systems that are critical to your operations (and still keep the lights on!) cannot be neglected.
While greenfield applications can be built using new technologies or architectures for a range of use cases across verticals, it is important to keep in mind that the shiny new tech needs to be balanced with support for the legacy. For instance, container-based software development and deployment methodologies are all the rage, while legacy applications that function well on VMs still make up large application estates. These applications can be migrated to new architectures over time, but many organizations will need to support “legacy” systems for a long time to come, alongside more modern applications.
In addition, remember that the enterprise ‘must haves’ must all be supported across both the legacy and the new technology investments – security, scalability, ACLs, auditing, monitoring, logging, provisioning processes and quotas, upgrades, rollbacks, billing, disaster recovery, DevOps CI/CD pipelines, and more.
Lastly – and no less critical – while you may choose to re-architect your applications, modernize your infrastructure, or refactor your processes to take advantage of the most recent, cool new technology out there, remember that:
- It takes time
- No technology is a silver bullet or one-size-fits-all answer for all your needs or applications
- You never know what shiny, better thing will come along tomorrow, offering additional benefits that you’ll want to take advantage of
Reason #3: Not Enabling Your Developers
One of the biggest reasons digital transformations and organizational change initiatives fail is a real or perceived lack of innovative business functionality – which depends greatly on how well we enable our engineers to move fast and get it done. Developers often lead the digital transformation de facto, driving the development and adoption of new software development and delivery capabilities, when they are productive and free to choose tools and processes that let them easily build and ship products.
The most common ways in which developers are denied a seamless experience include:
- enforcing long procurement cycles for IT resources
- adopting complex cloud management products that do not provide “self-service” but instead force developers to open tickets and wait
- lack of automation through friendly cloud APIs
- poor cloud governance capabilities
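To make the contrast with ticket queues concrete, here is a minimal, purely illustrative sketch (all class and team names are hypothetical) of self-service provisioning with governance built in: developers call an API and get an immediate grant or denial, while quotas and an audit trail enforce policy automatically.

```python
# Hypothetical sketch of self-service provisioning with built-in governance:
# developers get resources via an API call, while quotas and an audit log
# enforce policy automatically (no tickets, no waiting).

class SelfServiceProvisioner:
    def __init__(self, quotas: dict):
        self.quotas = quotas          # e.g. {"team-a": 3}: max VMs per team
        self.usage = {}               # VMs currently allocated per team
        self.audit_log = []           # every request is recorded for compliance

    def provision_vm(self, team: str, count: int = 1) -> bool:
        used = self.usage.get(team, 0)
        allowed = used + count <= self.quotas.get(team, 0)
        self.audit_log.append((team, count, "granted" if allowed else "denied"))
        if allowed:
            self.usage[team] = used + count
        return allowed

iaas = SelfServiceProvisioner(quotas={"team-a": 3})
print(iaas.provision_vm("team-a", 2))  # True: within quota, granted instantly
print(iaas.provision_vm("team-a", 2))  # False: would exceed quota, denied
```

The same pattern scales up in real platforms: the grant path stays instant for developers, while IT retains control through the quota policy and the audit trail rather than through manual approvals.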
Reason #4: Not Experimenting Along the Way
Going ‘digital’ means rethinking core products and services. What really brings an enterprise closer to a startup mentality – and is a key tenet of DevOps and digital transformation – is a culture of constant experimentation and trying out new things. Digital disruptors recognize that a digital and cloud transformation is a multi-year journey, and they make sure to enable their IT teams to experiment along the way, adjust course, and choose tools and processes that enable other teams (such as developers and application architects) to easily experiment themselves.
It is worth noting that much of the innovation and experimentation in software delivery typically happens in open source communities – and digital disruptors encourage their engineers and IT staff to follow open source developments, learn from them, and take advantage of open source in their organizations.
Digital maturity will vary from enterprise to enterprise. An incremental approach that adheres to a clear business objective, KPIs, and a digestible rollout plan will enable organizations to steadily progress on their journey towards a ‘Digital Optimized’ model. To recap some of the key learnings from our experience of working with customers across verticals:
1. Drive the digital business case with economics and business value in mind. The first few applications will be the proving ground. Boil-the-ocean approaches tend to fail, resulting in a loss of credibility for the CXO team and frustration across the organization.
2. Consider that you’ll need to support a range of hybrid cloud architectures and applications – from legacy to containers, serverless, and whatever shiny new tech the future brings. Avoid lock-in to cloud or tools providers or to technology stacks as much as possible, so you can easily port between environments or enhance your code and processes. Rely as much as possible on widely adopted open source solutions to future-proof for innovation and portability. Remember that not all your applications will be hosted on one public cloud; a hybrid cloud setting is the most common (private clouds are also a way to de-risk cost and cloud lock-in). The biggest pain point in running a hybrid cloud is OpEx maintenance cost, so consider a managed SaaS solution that deploys, monitors, troubleshoots, and seamlessly updates your on-premises data centers and can also manage your public cloud footprint. That way, you get advanced private cloud management at a low operational cost for years to come.
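One common way to keep application code portable across clouds, as the point above recommends, is to program against a thin provider-neutral interface and isolate vendor specifics in small adapters. The sketch below is illustrative only (the interface and adapter names are invented for this example; real adapters would wrap a vendor SDK):

```python
# Illustrative sketch: application logic depends on a provider-neutral
# interface, so only the thin adapters change when you switch clouds.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Minimal provider-neutral storage interface (hypothetical)."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; real ones would wrap S3, Azure Blob, Swift, etc."""
    def __init__(self):
        self._data = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

def archive_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application code never imports a vendor SDK directly.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q1.csv", b"revenue,124")
print(store.get("reports/q1.csv"))
```

The trade-off is worth naming: an abstraction like this limits you to the common denominator of provider features, but it is exactly what makes porting between environments (or running hybrid) a configuration change rather than a rewrite.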
3. Multi-cloud management is a challenge that IT will need to deal with, and that executives will need to plan for in the overall business case – from the perspectives of economics, value realization, headcount planning, chargeback, etc. The ‘single pane of management’ is a worthy goal to aspire to. However, beware of vendors selling ‘integrated’ stacks: these are as much a lock-in as the public cloud APIs.
4. Leveraging successful blueprints and patterns around vertical industry use cases and digital transformation will accelerate your success. How are leaders in your industry using the cloud for specific use cases common to everyone operating in the vertical?
5. Investing in a unified experience and a single pane of glass to manage all your types of infrastructure – VM workloads, containers, and serverless, across private and public clouds – is key. This will de-risk and increase the efficiency of your IT and data center investments, and will greatly simplify your operations in an IT reality that otherwise keeps getting more and more complex.
Vamsi Chemitiganti, Chief Strategist at Platform 9
Image Credit: Wichy / Shutterstock