Think small to solve big problems in enterprise IT infrastructure

(Image credit: SFIO CRACHO / Shutterstock)

Most IT organisations within UK enterprises would describe themselves as “on a transformation journey.” For some, modernising the IT infrastructure – encompassing legacy hardware and software – is at the heart of this, but for many others the digital transformation mandate is underpinned by ‘broken’ infrastructure, with no plan in place for fixing it. 

The result is transformation projects racing ahead but enterprise IT departments walking a tightrope, with multiple areas of their IT infrastructure not fit for purpose.   

With so many pressing IT priorities demanding attention, many of which could have significant consequences for business performance if they go wrong, IT teams are left to make tough choices. 

They must therefore ‘think small’ to fix big problems in their IT infrastructure and tackle major issues – such as downtime – before they strike. 

Fixing ‘the most broken thing’   

In many cases, operational IT teams – infrastructure managers, IT managers and directors, systems engineers – are consumed with firefighting and addressing the most pressing challenges. They have a list of ‘broken things’ and their job is to prioritise which are ‘most broken’ and potentially damaging to the business, and which are the ‘least broken’ and can be put on the back burner.

In this scenario, the ‘least broken’ things, which aren’t damaging company performance daily, get de-prioritised or, worse, ignored completely. In some instances, the vast sprawl and complexity of an enterprise IT infrastructure makes it very hard to understand how ‘broken’ a process is. In others, the problem is understood, but it is perceived to be too hard, too expensive, or simply too time-consuming to address.

Disaster recovery and backup is a common example that fits this description. Some organisations think they have a robust solution, only to discover during downtime, when their backups are put under strain, that the solution doesn’t meet their current operational or compliance needs. Others know their solutions are not fit for purpose and hope they will never be tested on their watch.

But with cloud and virtualisation now mainstream amongst most – if not all – enterprises, is there an excuse for this de-prioritisation of ‘the least broken thing’ and can availability justifiably be overlooked? 

Start small and scale in the cloud 

We need to change the perception that some infrastructure jobs are huge, insurmountable tasks, or that they are too complex to unravel. For example, a few years ago, extending data centre infrastructure to a hyperscale public cloud felt like a futile endeavour that involved connectivity issues, security problems and a mix of other unpleasant surprises.

Today, the market is ready to accept adoption of hybrid cloud architectures, from both the infrastructure and application sides. This approach enables organisations to embrace disaster recovery as a service (DRaaS), moving data into the cloud in stages: starting small with minimal investment of time and money, then scaling up once the approach is proven to work.

Hybrid cloud is undoubtedly a trend which organisations are adopting to solve multiple infrastructure challenges. IDC predicts organisations will require a mainly cloud-based IT environment by 2019, and 451 Research claims that public cloud storage spend will double in the next two years as demand for on-prem storage declines. Predictions of hybrid cloud’s promise have seen the world’s largest technology organisations begin to lay major plans for a hybrid future. For example, Hewlett Packard Enterprise (HPE) and Microsoft recently created an innovation centre in Seattle that will speed up hybrid cloud adoption and help customers test hybrid solutions and use cases, such as HPE/Azure Stack environments.

To take advantage of this opportunity, enterprises must apply to data storage in the cloud the same principles they use to maintain multiple data centres. A single cloud will no longer suffice. The move towards hybrid cloud makes the integrity of data and services a major priority for enterprises. It will therefore be essential to strike a fine balance between on-prem and various as-a-service offerings, to ensure data is always available and synchronised across multiple platforms.
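Keeping copies synchronised across platforms ultimately comes down to verifying that every replica matches the primary. As a minimal sketch of that idea (the directory layout and function names here are illustrative, not any particular vendor's tooling), one can compare content checksums between an on-prem copy and a cloud-mirrored copy:

```python
import hashlib
from pathlib import Path


def checksums(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }


def out_of_sync(primary: Path, replica: Path) -> set:
    """Files missing from the replica, or whose contents differ from the primary."""
    a, b = checksums(primary), checksums(replica)
    return {path for path, digest in a.items() if b.get(path) != digest}
```

Real platforms expose equivalents of this (object ETags, storage-gateway sync reports), but the principle is the same: drift between copies must be detected before a failover, not during one.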

Hybrid is gaining in popularity as it offers not only the flexibility and data deployment benefits of public cloud, but also the security assurance of on-prem, private cloud – effectively giving businesses the best of both worlds. This means organisations can now store their most important or sensitive data on the private cloud, whilst using the public cloud for capacity that can be rapidly provisioned and just as rapidly released, scaling up or down as demand requires.

Counting the cost of downtime 

The level of flexibility afforded by virtualised environments and hybrid cloud infrastructure means IT departments can be far more agile and address areas of concern across their IT stack. De-prioritising the least broken thing shouldn’t therefore be an issue, especially if the availability strategy is the least broken thing. 

The recent Amazon Web Services (AWS) outage is a timely reminder about the importance of having a robust availability strategy and embracing hybrid cloud. The outage took several large websites down. During the four-hour disruption, S&P 500 companies lost $150 million, according to Cyence, while US financial services companies lost an estimated $160 million. Downtime costs not only revenue but also brand reputation, and the hit to consumer confidence forces enterprises to reassess their multi-cloud strategies.

The AWS outage is proof that even best-in-class solutions can suffer downtime. The ripple effect felt across AWS business customers – from Business Insider to Slack – goes to show how reliant businesses are on a single source of backup. 

This example makes a clear case for backup – or, more specifically, data availability – across a hybrid cloud architecture, ensuring that the ‘crown jewels’ of any business are backed up locally and operations can continue when another source goes down. Relying on a single source to back up vital information when that service itself is down presents an obvious dilemma.
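The principle of not depending on a single source can be sketched in a few lines. This is a simplified illustration, not a production design: the function and path names are hypothetical, and real deployments would use replicated object stores or DR orchestration rather than plain directories. The point is only the ordering: try the primary copy, then fall through to each independent backup in turn.

```python
from pathlib import Path


def read_with_fallback(relative: str, sources: list) -> bytes:
    """Return the first readable copy of a file, trying each backup source in order."""
    for root in sources:
        candidate = Path(root) / relative
        try:
            return candidate.read_bytes()
        except OSError:
            continue  # this source is down or lacks the file; try the next copy
    raise FileNotFoundError(
        f"{relative!r} unavailable in all {len(sources)} configured sources"
    )
```

With a local copy listed among the sources, a cloud provider's disruption degrades read latency rather than halting the business.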

Solving big problems in 2017 

IT departments will always have a list of broken things to fix. But we are living in a new digital economy where old rules don’t apply and approaches to problem solving must adapt. Businesses of all sizes are challenged with delivering services at any time, from anywhere, while at the same time streamlining costs and increasing efficiency. No critical infrastructure should be de-prioritised or considered too broken to fix. It’s time to embrace business transformation through the cloud.

Update, 23/3/2017:

It was brought to our attention that AWS did not, in fact, suffer an outage, as it was written here.

HK Strategies' Kishan Mistry writes:

"I understand it’s an opinion piece but Richard has mentioned that there was a “AWS outage” which is technically incorrect. There was an AWS S3 disruption in that region not the whole of AWS was effected."

More can be found on the AWS blog here

Richard Agnew, VP NW EMEA at Veeam 

Richard Agnew
Richard brings a broad base of sales and management experience to Code42, gained through years leading regional teams within internationally recognized brands such as Veeam, NetApp, and Dell.