
Machine learning: the key to preventing unplanned application downtime in the digital economy

UK employees expect faster access to applications and data than ever before. But, as many IT teams and CIOs will know, meeting this expectation is a constant challenge.

Recent research undertaken by Nimble Storage found that nearly two-thirds of British office workers believe the speed of their work applications impacts their performance and productivity. Waiting a few seconds for an app to load may not seem like anything to fuss over, but that time quickly adds up.

Many organisations understand the importance of investing heavily in digital transformation initiatives to retain a competitive edge. What many business leaders may not realise, however, is that these new projects and processes depend on hundreds, even thousands, of applications running smoothly and seamlessly.

Moments wasted waiting for applications to load can have a real impact on business output, success and even employee retention. At present, employees experience an average of four software delays each day at work, each lasting approximately seven seconds. Again, what is the fuss about? Measured against the UK's average hourly wage, this delay costs the British economy a staggering £744,235,520 annually.
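To see how seemingly trivial delays scale to a figure of that order, the arithmetic can be sketched as follows. The inputs below are illustrative assumptions only, not the figures used in the research:

```python
# Illustrative inputs only -- assumed values, not Nimble Storage's actual figures.
DELAYS_PER_DAY = 4           # average software delays per employee per day (from the article)
SECONDS_PER_DELAY = 7        # average length of each delay (from the article)
WORKING_DAYS_PER_YEAR = 230  # assumed working days per year
HOURLY_WAGE_GBP = 13.00      # assumed average UK hourly wage
WORKERS = 32_000_000         # assumed number of affected workers

# 28 seconds a day per worker becomes hours of lost time over a year...
lost_hours_per_worker = DELAYS_PER_DAY * SECONDS_PER_DELAY * WORKING_DAYS_PER_YEAR / 3600

# ...which, priced at an hourly wage across a workforce, reaches hundreds of millions.
annual_cost_gbp = lost_hours_per_worker * HOURLY_WAGE_GBP * WORKERS
print(f"~£{annual_cost_gbp:,.0f} per year")
```

With these assumed inputs the total lands in the same hundreds-of-millions range as the quoted figure, which is the point: small per-person delays compound into an economy-scale cost.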

The challenges of the app-data gap

The app-data gap is behind many of these application and software issues. The gap arises when there are delivery delays between an application and the data it depends on. Information is no longer instantly available, processes slow down as a result, and the outcome is a performance bottleneck that dents employee productivity and hinders smooth, seamless business operation.

Responsibility for remediating such issues often falls on the IT team. When application performance is hit by the app-data gap, the team is forced into a reactive approach. The best-case scenario is that employee complaints trigger a troubleshooting process, which then descends into a vicious circle of finger pointing between the storage, network, development and VM teams. The worst-case scenario is that downtime leads to an 'all hands on deck' or 'fire drill' response.

This is a concerning state of affairs for IT leaders. With the team constantly monitoring for and responding to application issues, very little bandwidth is left to bring proactive ideas and strategies to the business. As a result, IT is often perceived as a blocker to productivity rather than recognised as a driver of competitiveness.

Breaking down the walls to fast data delivery

To unravel the many issues across a company's IT infrastructure that delay the delivery of data to applications, IT leaders must carry out thorough forensics and analysis.

Storage is often identified as the first culprit for slow app-data delivery. However, the app-data gap is actually most commonly a result of complexity across the entire data centre.    

Nimble Storage research across more than 7,500 companies found that application breakdowns most often originated in configuration issues, interoperability problems and failures to follow best practice, all unrelated to storage.

From these origins, a common chain of events followed: application breakdowns created an app-data gap, which in turn disrupted the delivery of data to end-user applications.

The primary reason for this is the way data centre infrastructure is purchased today. Whether sold by multiple vendors or a single vendor, most components are designed independently of each other. Far too often these technologies are built by start-ups seeking to optimise individual functions rather than overall infrastructure interoperability.

These start-ups are then frequently acquired by larger IT vendors looking to bolster their portfolios, which can result in interoperability issues even between adjacent products from the same company. Add desktop and server virtualisation into the mix and the resulting infrastructure stack becomes complex and diverse.

Optimisation: a data driven approach

Optimisation across the whole data centre requires IT teams to analyse the interactions between its various components. Data science and machine learning are increasingly being deployed to harness the big data gathered across the data centre and help remediate these issues.

By rolling out such IT solutions, IT teams can:   

· Analyse performance metrics gathered from a large volume of high-performing environments
· Correlate elements across the infrastructure to pinpoint the root cause of a problem
· Prevent problems from arising by highlighting interoperability issues early

Fast, uninterrupted applications have become the lifeblood of business today: from strengthening product development and enhancing customer interaction, to running the back office.
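The second step above, correlating infrastructure elements to pinpoint a root cause, can be sketched in miniature. This is a minimal illustration with invented telemetry, not Nimble Storage's actual method: it ranks hypothetical per-component metrics by how strongly each correlates with application latency.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical telemetry samples: app latency alongside per-component metrics.
latency_ms = [12, 15, 31, 45, 22, 18, 52, 40]
metrics = {
    "storage_io_wait_ms": [3, 4, 5, 6, 4, 3, 7, 5],
    "net_retransmits":    [1, 1, 9, 14, 5, 2, 17, 12],
    "host_cpu_pct":       [40, 42, 41, 45, 39, 44, 43, 40],
}

# Rank components by strength of correlation with latency; the top entry
# is the most likely contributor to the app-data gap in this toy data set.
ranked = sorted(metrics.items(),
                key=lambda kv: abs(pearson(latency_ms, kv[1])),
                reverse=True)
likely_cause = ranked[0][0]
print(likely_cause)  # prints "net_retransmits" for this sample data
```

Real systems do far more than pairwise correlation, of course, but the principle is the same: gather metrics from every layer of the stack, then let the data, rather than inter-team finger pointing, point to the component most associated with the slowdown.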

Closing the app-data gap and deploying a data-driven approach will be the key to survival in the digital economy.

Paul Scarrott, Director, Nimble Storage

Image Credit: Razum / Shutterstock