
An opportunity to learn from the mistakes of others


The global Covid-19 pandemic has accelerated transformation projects in almost every walk of life – from businesses learning how to manage data distributed across the homes of all their employees, to governments learning how to manage massive-scale testing, allocate exam results without pupils sitting exams, and implement contact tracing.  The scale – and publicity – of these government projects is often much greater than that of those being rolled out by businesses, but there are also many commonalities, which means there are lessons IT managers can take away from the mistakes made by others.

There’s little doubt that the scale and timeframes demanded by these national digital transformation projects are enough to give project managers nightmares.  There are a lot of variables in play and the pressure is immense, but many IT managers have found themselves in a similar position since the start of the pandemic.  So, to make things a little simpler, I’ll focus on three core areas: scalability, data quality and interoperability. 

Scalability

Scalability has become a watchword for good IT planning over the past 10 years.  The ability to go from 10 to 10,000 in hours is the primary selling point of cloud services, and it’s proven invaluable for businesses time and time again. 

But how did this work for the government?  From the word go, Test and Trace was going to have to be scalable.  As Covid cases grew almost exponentially, so the service would need to grow – and the inverse was also true, making sure the system wasn’t wasting resources when they weren’t needed.  In reality, most of the scalability challenges around Test and Trace have been logistical rather than technical, with one notable exception.  Picking a service or software that is fit for purpose, and that is predictably and reliably scalable, was vital for this exercise.  It’s possible that a finance-oriented spreadsheet program was not quite the right choice, but hindsight is a wonderful thing. 
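The spreadsheet lesson can be reduced to a simple principle: know your target's capacity, and fail loudly rather than truncate silently.  As a minimal sketch (the 65,536-row figure is the documented worksheet limit of the legacy .xls format; the function and its behavior are illustrative, not any real system's code):

```python
# Illustrative sketch: refuse to export more records than the target
# can hold, instead of letting the write truncate silently.

XLS_MAX_ROWS = 65_536  # legacy .xls worksheet row limit

def export_cases(records, max_rows=XLS_MAX_ROWS):
    """Raise rather than silently drop rows past the target's capacity."""
    records = list(records)
    if len(records) > max_rows:
        raise ValueError(
            f"{len(records)} records exceed target capacity of {max_rows}; "
            "pick a store that scales instead of truncating"
        )
    return records  # stand-in for the real write

# A small batch exports fine; an oversized one raises instead of
# quietly losing the rows off the end.
small = export_cases(range(100))
try:
    export_cases(range(70_000))
    overflow_caught = False
except ValueError:
    overflow_caught = True
```

The point is not the specific limit but the guard: a capacity assumption that is checked in code is a predictable scaling boundary; one that isn't is a silent data-loss bug waiting to happen.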

Regardless, this highlights one of the biggest lessons to be taken from the program.  Data continues to grow at phenomenal rates, and with home-working now the norm, company data is becoming more fragmented than ever before.  Workloads and platform use are also spreading, and complexity has risen massively as a result.  Data management needs to reflect this fact.  Companies are themselves going from 10 to 10,000 overnight, and scalability, reliability and flexibility are all key to ensuring trustworthy data management.

Data quality

When the pandemic ensured that A-level examinations were not going to be a possibility, the question became ‘what next?’.  How could the Department for Education and the various examination boards fairly and evenly award such important educational qualifications to students?  The answer came in the form of an algorithm, although strictly speaking it was actually an equation. 

The details of that equation, and their political ramifications, aren’t my concern here – but the gaps in it are.  Specifically, the equation called for information that in many cases wasn’t available.  It was known, understood and ‘accounted for’ that this might be the case, but without that data the results became less reliable.  The more incomplete the data presented, the greater the likelihood that the outcome would simply be incorrect.  And this happened. A lot. 

In tech developer terms, this is known as GIGO (garbage in, garbage out), and it’s hardly a new concept.  Quality data has been at the center of every scientific endeavor and discovery for hundreds of years, but apparently the lesson still hasn’t been properly learnt.  Here’s another chance, and we should take it.  Data quality is everything.  Do IT managers even know what data they have?  Do they know what’s in it, whether it’s used, or even useful?  Veritas’ own Databerg research suggests that as much as 53 percent is unknown data (so-called ‘dark’ data), and another 28 percent is redundant, obsolete or trivial (ROT).  That is a huge amount of worthless or unvalued data for businesses and IT managers to cope with, and they must get a handle on it to be effective in their own efforts.  Getting insight into your own data is key to avoiding the mistakes made in the exam algorithm program.
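One practical defense against GIGO is to measure completeness before processing, and route incomplete records for review instead of producing an unreliable answer.  A minimal sketch – the field names and threshold here are hypothetical, for illustration only:

```python
# Illustrative sketch: score each record's completeness and triage
# records that are too incomplete to process reliably.
# REQUIRED_FIELDS is a hypothetical schema, not the real algorithm's inputs.

REQUIRED_FIELDS = ["prior_grades", "school_history", "cohort_size"]

def completeness(record):
    """Fraction of required fields that are present and non-empty."""
    present = sum(
        1 for f in REQUIRED_FIELDS if record.get(f) not in (None, "", [])
    )
    return present / len(REQUIRED_FIELDS)

def triage(records, threshold=1.0):
    """Split records into those safe to process and those needing review."""
    usable, review = [], []
    for r in records:
        (usable if completeness(r) >= threshold else review).append(r)
    return usable, review

students = [
    {"prior_grades": [7, 8], "school_history": "A", "cohort_size": 30},
    {"prior_grades": [6], "school_history": "", "cohort_size": 25},  # gap
]
usable, review = triage(students)
```

The design choice worth copying is the explicit `review` queue: a gap that is surfaced can be investigated, while a gap that is silently ‘accounted for’ turns into a wrong answer downstream.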

Interoperability

The government-organized track and trace app was something of a stop-start adventure. Initially commissioned in March, the final fully working app took until September 24th to launch.  The reasons for this were surprisingly simple.  Despite a great deal of knowledge, innovation and experience, the first version of the app didn’t work on iPhones.  The technical reasons for this (largely around data security and privacy) are irrelevant here.  What matters is that, from the outset, the app’s development cut out almost half of all UK smartphones.  The original app had to be scrapped and a new approach taken, setting the process back by months and costing many millions of pounds.

In effect, this expensive stumble was caused by a simple issue of interoperability between different data sources.  It’s not the first time such a thing has hampered government IT projects, but there’s still a lesson to be had here.  Businesses are currently facing an explosion of data across a swiftly growing hybrid- and multi-cloud infrastructure.  Different workloads and targets are creating environments far more complex than that offered by the two main mobile phone platforms, and interoperability is a problem when it comes to managing that data.  Companies must look at this situation and deal with it from the start.  They must work with technologies that have proven interoperability with the widest possible range of workloads and targets, to ensure there is no break in the chain.
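A common way to guard against interoperability gaps like this is to normalize every source into one shared schema at the boundary, and fail loudly on anything that doesn’t map.  A minimal sketch, where the sources, field names and adapters are all hypothetical:

```python
# Illustrative sketch: adapt records from two hypothetical sources into
# one canonical schema, rejecting anything that cannot be normalized.

CANONICAL_FIELDS = {"id", "timestamp", "result"}

def from_lab_a(raw):
    # Hypothetical source A uses its own field names.
    return {"id": raw["sample_id"],
            "timestamp": raw["tested_at"],
            "result": raw["outcome"]}

def from_lab_b(raw):
    # Hypothetical source B uses different names for the same data.
    return {"id": raw["ref"], "timestamp": raw["ts"], "result": raw["res"]}

ADAPTERS = {"lab_a": from_lab_a, "lab_b": from_lab_b}

def normalize(source, raw):
    """Convert a source-specific record to the canonical schema, or fail loudly."""
    try:
        record = ADAPTERS[source](raw)
    except KeyError as exc:
        raise ValueError(f"cannot normalize record from {source!r}: missing {exc}")
    if set(record) != CANONICAL_FIELDS:
        raise ValueError(f"adapter for {source!r} broke the canonical schema")
    return record

a = normalize("lab_a", {"sample_id": 1, "tested_at": "2020-09-01", "outcome": "neg"})
b = normalize("lab_b", {"ref": 2, "ts": "2020-09-02", "res": "pos"})
```

Once everything downstream consumes only the canonical shape, adding a new source means writing one adapter – the rest of the chain stays unbroken.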

Same same, but different

More or less everyone working in the private business sector can be grateful that they don’t usually face the same scrutiny as those deploying these national projects.  But the publicity gives us a rare insight into how other organizations are doing things, and where they sometimes go wrong along the way.  That insight is a gift, especially at a time where so many of us are working on our own transformation projects, where we’re also strapped for time and resources. 

None of these mistakes is particularly original.  Businesses of all sorts have made them before, and no doubt will do so again, unless they take this opportunity to remind themselves of some core areas of focus:

  • Make sure your systems can scale at the drop of a hat
  • Ensure you have great visibility into your data, so you can understand what’s missing and where the gaps may be
  • Check interoperability – make sure your different data sources can work with each other

Ian Wood, Senior Director, Head of Technology UK&I, Veritas Technologies