All data management professionals with any tenure know the familiar pattern of hype that accompanies new, potentially innovative technologies and software solutions. We hear a buzzword or two at a conference — or some new, exciting vendor suddenly becomes known to us — and in short order, the concept or company is being hailed as the next big thing. This pattern is so familiar that Gartner created its Hype Cycle methodology, featuring phases such as the “Peak of Inflated Expectations” and the forlorn “Trough of Disillusionment.”
While we’ve seen this all before, Artificial Intelligence (AI) seems to be enjoying (or suffering from) the longest-running period of untethered promises in the “hype cycle” in recent memory. It’s not hard to understand why: Tangible AI-generated breakthroughs are occurring daily across many industries and applications, and some very tantalising advancements appear ever more attainable.
Despite all this potential, however, many organisations have experienced frustrating and costly failures with AI investments.
What explains this chasm between buzzy AI headlines and the real-world disappointment of so many companies? In my view, one factor behind the dissonance is that a number of software vendors have irresponsibly co-opted the term “AI” and applied it to technologies that are little more than partially automated business rules.
But there is another, more fundamental reason that AI isn’t meeting companies’ expectations – and it has nothing to do with the technology itself.
It’s still all about the data
When comparing many organisations’ current data management capabilities with the requirements imposed by AI, an obvious gap emerges. Too many organisations still struggle with:
- Basic data access: exposing first-party data assets to internal users, managing rights-based access, and generally knocking down self-imposed walls that block users
- Seemingly impenetrable walls around critical data assets: typically, the product of organisational “fiefdoms” that take a proprietary view toward “their” data
- Lack of basic data documentation: data fields and values are frequently not documented in a way that facilitates business-based analytics, if they are documented at all
- Disparate and unintegrated first-party data: creating a full view of customer relationships, business processes, and markets almost always requires bringing together multiple first-party data sources, and many companies have not done this basic (but demanding) work
- Unresolved processing errors that create anomalous records every day: many legacy systems that have undergone multiple upgrades and version changes generate data that is not what users believe it to be
Most telling: Some data management professionals don’t even know the totality of first-party data available within their own organisations, which means they aren’t in a position to leverage it for value creation.
Some of the more ungrounded AI-related claims suggest the technology has an almost magical ability to see through data-quality and integration challenges, thereby eliminating the need for the profoundly unglamorous task of maintaining a reliable data management infrastructure. The promise of a shortcut is understandably attractive, and AI technologies can and should be a core component of pre-analysis data assessment practices. But too much AI-related excitement seems to be coming at the expense of traditional data management priorities, and as with anything, there is no AI “free lunch.”
Dealing with these long-standing data management issues is painful and challenging (that’s how they came to be “long-standing”), and sadly there is no AI silver bullet to quickly change this state of affairs. But to harness the true value of AI, organisations must first correct these issues.
How to move toward true AI: Three tips
Most organisations are likely to see short-term returns from investing in basic data stewardship, and should do so before putting their money on AI solutions.
Specifically, data management professionals should focus on fully inventorying and understanding their own first-party data assets and comparing them with business objectives. If structured properly, this process readily translates to tactical plans for capturing integration synergies, addressing deficiencies, and implementing quality and access measures.
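An inventory of this kind often starts with simple field-level profiling: for each field in a data source, how often is it missing, and how many distinct values does it hold? The following is a minimal Python sketch of that first step; the `profile_records` function and the sample customer records are hypothetical illustrations, not part of any specific vendor tool.

```python
from collections import Counter

def profile_records(records):
    """Build a minimal field-level profile of a record set:
    for each field, count missing values and distinct non-missing values."""
    profile = {}
    for record in records:
        for field, value in record.items():
            stats = profile.setdefault(field, {"missing": 0, "values": Counter()})
            if value in (None, ""):
                stats["missing"] += 1
            else:
                stats["values"][value] += 1
    # Summarise each field as a missing count plus a distinct-value count
    return {
        field: {"missing": stats["missing"], "distinct": len(stats["values"])}
        for field, stats in profile.items()
    }

# Example: two customer records from a hypothetical first-party source
records = [
    {"customer_id": "C1", "region": "EMEA", "segment": ""},
    {"customer_id": "C2", "region": "EMEA", "segment": "SMB"},
]
summary = profile_records(records)
```

Even a summary this crude surfaces the documentation and quality gaps described above, such as a `segment` field that is silently empty for half the records, and gives the inventory exercise something concrete to compare against business objectives.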
We can see validation for this “data first” perspective in the actions of IBM’s Watson group. Watson is arguably the most prominent commercial AI platform, and was a critical pioneer in the market, with impressive technical capabilities.
However, a new class of open-source entrants, probably best characterised by H2O.ai and TensorFlow, is increasing its market presence. IBM’s response to this situation does not seem to emphasise AI technologies or software wizardry. Rather, IBM seems to be recognising the primacy of data content over technology via its purchases of companies such as The Weather Company (curator of amazingly detailed weather data) and the healthcare data content specialists Explorys and Phytel. Data analysts may wish to replicate this set of priorities when formulating their AI strategies.
In addition to focusing on data curation and management, data management professionals must check their AI-related enthusiasm and take inflated management expectations with a grain of salt.
Unsurprisingly, this situation seems to be primarily the result of irrational exuberance created by software vendors. Increasingly, software solutions are described with terms such as “AI-enabled” or “powered by AI,” but even the most charitable comparison of many software feature sets to generally accepted definitions of “AI” can reveal a wide gulf.
Exaggerated software claims are, of course, nothing new. But the very real promise of AI seems to be creating particularly fertile ground for irresponsible claims. Data analysts must be vigilant about such claims and should not expect breakthrough results from technologies that are not, in fact, breakthrough in design and functionality.
Finally, the cost of failure for AI-related investments can be very high. Given the specialised knowledge required to fully vet AI technologies, many organisations contemplating their future state may be well served by engaging a neutral third party to participate in the process of choosing and implementing an AI solution. Utilising this type of objective resource may also help internal associates responsibly frame the AI debate with their management.
Where AI is headed next
AI promises — and is delivering on — incredible new benefits. But it is not a solution that will quickly fix your core data challenges.
Just as with older technologies, the quality and reliability of AI-generated analytics are firmly chained to the quality and value of the data content being analysed – and that means data must be fully understood from a business perspective if profitable results are to be expected. Get your data right, and you’ll be in prime position to reap the benefits of the truly revolutionary AI capabilities that loom just over the horizon.
Jonathan Hill, Chief Architect of Data & Insights, RRD