For any enterprise embarking on a digital transformation journey, there is an unavoidable tinge of excitement as the organisation prepares to change and evolve. Yet, despite this enthusiasm, all companies preparing to undergo digital transformation face a multitude of challenges, most of which cannot be anticipated. There is, however, one common fear facing enterprises: the fear of the unknown.
The truth is that few companies fully understand what they are getting themselves into. Justification for digital transformation is often a matter of survival: the need to drive efficiency, embrace business agility, elevate the customer experience, take Web 2.0 competitors head-on, and many other compelling business outcomes.
These top-down goals are often broad, leaving IT teams to transform their application and network stacks with little knowledge of what exactly is required to meet the high-level objectives envisioned for their newly “transformed” company. Given traditional infrastructure, skills, and tools, existing teams are rarely equipped to pursue initiatives such as cloudification, the integration of omnichannel into the user experience, the introduction of DevOps processes and toolsets, and the building of hybrid-cloud apps that can accelerate their organisation.
However, the benefits of successful transformation are undeniable. And for enterprises hoping to evolve their services and continue to cater to an ever-digital world, the pressure is on to take the digital transformation bull by the horns and overcome the challenges that come with these projects.
The new complex
Enterprise applications have changed drastically in recent years, becoming hybrid assemblies of a wide mix of digital technologies. Take retail banking, for example: applications supporting ATMs have evolved into 24/7 multi-platform online banking apps that have gone from solely accessing information on legacy servers to incorporating SaaS and cloud components such as Amazon Alexa for speech recognition, Google Maps to locate nearby branches, hosted machine learning to provide intelligent breakdowns of user spending by segment, and countless other pieces hosted on diverse infrastructure. As a result, user experience (UX) is determined by a myriad of application components, all interacting in complex application chains, interconnected by multiple networks, and living in private and public clouds. The user is largely unaware of, and uninterested in, how any given app is built, judging it simply on performance: availability, ‘lag’, and simplicity of navigation.
This complex hybrid application and cloud infrastructure means that there are many more points of failure and layers of application complexity that risk causing degraded user experience. With so many components, many of which are outside the control of the enterprise, isolating and resolving performance issues quickly is mandatory, but largely impossible to achieve with tools designed to monitor static, private data centers and pre-SDN networking environments.
Virtualisation compounds these problems, as ‘lift and shift’ programs that virtualise inter-dependent servers (e.g., a front-end web application server that relies on back-end database, authentication, and file storage servers) can separate those servers across space and time, introducing intolerable amounts of latency and loss that applications written for mainframe and private server environments were never designed to accommodate.
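To see how separating chatty, inter-dependent tiers can break a legacy application, consider this hypothetical sketch: the hop names, round-trip times, and the 50-round-trip figure are all invented for illustration, but they show how per-hop latency compounds along the request path once a tier moves to a remote region.

```python
# Hypothetical illustration: when inter-dependent tiers are split across
# sites, per-hop round-trip times (RTTs) compound along the request path.
# All hop names and latency figures below are invented for this sketch.

def end_to_end_latency_ms(hops, db_round_trips=1):
    """Sum per-hop RTTs; chatty protocols multiply the database hop."""
    total = 0.0
    for name, rtt_ms in hops:
        multiplier = db_round_trips if name == "app->db" else 1
        total += rtt_ms * multiplier
    return total

# Same-rack deployment: sub-millisecond hops.
local = [("web->app", 0.3), ("app->db", 0.5), ("app->auth", 0.4)]

# After 'lift and shift': the database moved to a remote cloud region.
hybrid = [("web->app", 0.3), ("app->db", 35.0), ("app->auth", 0.4)]

# A legacy app making 50 sequential DB round trips per page load:
print(end_to_end_latency_ms(local, db_round_trips=50))   # ~25.7 ms
print(end_to_end_latency_ms(hybrid, db_round_trips=50))  # ~1750.7 ms
```

A sub-second page load becomes a multi-second one, even though no single component "failed", which is exactly the class of degradation that pre-virtualisation monitoring tools never had to explain.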
Even attempting to monitor this increasingly complex web of applications means taking on the unenviable task of monitoring all endpoints at the same time and correlating the information gathered by each individual monitoring tool. What’s more, the arrival of new technologies and approaches for spinning up digital services quickly (e.g., microservices, DevOps) further adds to the burden of bringing visibility across all endpoints and services.
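The correlation step described above can be sketched in a few lines. This is a minimal, assumed illustration (the tool names "apm" and "npm" and the sample records are hypothetical): records from separate monitoring tools are merged onto one timeline by timestamp, so events that each silo sees in isolation become visible together.

```python
# Sketch of correlating measurements from separate monitoring tools by
# timestamp, so one slow interval can be traced across domains.
# The source names and sample data are hypothetical.

from collections import defaultdict

def correlate(*streams):
    """Merge (timestamp, source, metric, value) records into a single
    timeline keyed by timestamp."""
    timeline = defaultdict(dict)
    for stream in streams:
        for ts, source, metric, value in stream:
            timeline[ts][f"{source}.{metric}"] = value
    return dict(sorted(timeline.items()))

apm = [(100, "apm", "response_ms", 1200), (160, "apm", "response_ms", 180)]
npm = [(100, "npm", "wan_loss_pct", 4.0), (160, "npm", "wan_loss_pct", 0.1)]

merged = correlate(apm, npm)
# At t=100 the slow response and the WAN packet loss appear side by side,
# pointing the investigation at the network rather than the code.
```

Real platforms must also reconcile clock skew, sampling rates, and incompatible metric schemas, which is why this simple join is so hard to do across siloed tools in practice.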
But technology isn’t the only challenge for IT engineers. Digital transformation also demands cultural change, driven by the need to unify organisational silos and their diverse toolsets, procedures, and levels of visibility into the user experience. Many enterprises today still use—and require—traditional performance management solutions (i.e. single purpose solutions designed to monitor a specific part of the application chain, with little or no visibility into what happens outside specific IT silos) that bring their own challenges.
For example, developers can see how their code is performing but have little visibility into the network that links servers to each other, to the cloud, and to the end-user. Meanwhile, network operations staff have tools to monitor Local Area Networks (LANs) and Wide Area Networks (WANs), but cannot see how application transactions and server responsiveness are impacting users. The result is longer Mean Time to Resolution (MTTR), as IT teams struggle to work together to identify and solve issues along the application chain. Ultimately, this hurts both internal teams and end-users, who wait longer for issues to be resolved.
More visibility, fewer problems
There is only one answer to this increased complexity: better visibility. To achieve it, enterprises need to arm themselves with a new generation of monitoring tools to complement and unify their existing systems. This requires an integrated approach to performance monitoring, whereby IT teams have a common view of all performance tools and of all points on the application chain. By establishing a unified view into applications, networks, servers, and users, teams can establish a ‘single source of truth’ that allows silos to redefine how they work together to serve common business goals. It is only then that domain-specific tools can shine, as context is provided by a higher-level vantage point.
By improving performance monitoring to span the entire application stack across all digital assets, enterprises will see several benefits begin to emerge. They will better understand network and application usage and performance through a holistic view of the entire digital infrastructure. They will learn how certain user activities impact performance. They will see where the real bottlenecks are. For example: are certain transactions, server requests, web pages, or network links degrading user experience? Are virtual machines over- or under-provisioned? How many users are currently affected, where are they, and what is it costing the business?
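Once every segment of a transaction is timed, the "where is the real bottleneck?" question above reduces to simple attribution. The sketch below assumes such per-segment timings already exist; the segment names and millisecond figures are invented for illustration.

```python
# Hypothetical sketch: given per-segment timings for one transaction,
# the dominant bottleneck falls out of simple attribution.
# Segment names and timings are invented for illustration.

def bottleneck(segments):
    """Return the segment contributing the largest share of total time."""
    total = sum(segments.values())
    name = max(segments, key=segments.get)
    return name, segments[name] / total

transaction = {
    "dns_lookup": 12,        # milliseconds per segment
    "wan_transit": 48,
    "tls_handshake": 30,
    "server_processing": 310,
    "cdn_delivery": 25,
}

name, share = bottleneck(transaction)
print(f"{name} accounts for {share:.0%} of the transaction")
```

The hard part in practice is not this arithmetic but obtaining trustworthy timings for every segment, including the SaaS and cloud components outside the enterprise's direct control.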
From the understanding and knowledge gathered through pervasive, all-layer application and network performance monitoring, enterprises will be better placed to detect and mitigate abnormal behaviours along the application chain, which, in turn, will help IT teams adjust capacity to optimise Quality of Experience (QoE) according to need. Costs can be optimised, and the freed-up cash and resources applied to accelerate revenue-generating projects.
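Detecting "abnormal behaviour" along the chain can be as simple as flagging measurements that deviate sharply from a baseline. As a minimal sketch (real platforms use far more robust methods), a z-score over a latency series stands in for whatever detector a production tool would apply; the latency samples are invented.

```python
# Minimal anomaly-detection sketch: flag samples that deviate sharply
# from the mean of the series. A z-score stands in for the more robust
# methods a real monitoring platform would use; data is invented.

import statistics

def anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` population
    standard deviations from the mean of the series."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

latency_ms = [21, 19, 22, 20, 23, 21, 240, 20, 22]  # one obvious spike
print(anomalies(latency_ms))  # prints [6]: the spike at index 6
```

Flagging the spike is only the first step; the value of unified monitoring is being able to ask, in the same breath, which network link, server, or code path caused it.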
Ultimately, a successful performance monitoring strategy will rest on tooling that spans the physical and virtual application chain as well as the user experience (UX). Only through this “all-encompassing” approach can enterprise IT teams overcome the fragmentation and silos created by existing monitoring solutions.
Success in digital transformation
Finding success in digital transformation is nothing short of challenging. For enterprises, new digital services and pressures from end-users mean increased complexity across all points of the application chain. While this presents several obstacles, it also presents a myriad of opportunities to optimise the IT stack in a way that increases performance, boosts QoE and Quality of Service (QoS), and reduces OpEx.
However, getting digital transformation right without the correct monitoring tools is a little bit like baking without an oven, so isn’t it time that enterprises rethink the utensils used for their digital transformation recipe?
Sergio Bea, VP Global Enterprise and Channels at Accedian