
AI is dead, long live AI

(Image credit: Geralt / Pixabay)

What does the term ‘AI’ mean to you? For most, the answer is an abbreviation of ‘artificial intelligence’. However, as technology becomes more sophisticated, the term is used far too often in the wrong context. Moreover, businesses are finding it increasingly difficult to develop an effective strategy for incorporating AI into their wider operations.

It may seem pedantic to dive into the semantics of AI, but misconceptions and hype around the topic cause genuine problems for businesses today. Personally, having a clear definition of AI has helped me achieve my mandate of bringing clarity and leadership to organisations that find it difficult to create a cohesive strategy encompassing big data and AI.

The tendency to apply the term ‘artificial intelligence’ without discretion becomes problematic in many ways in the modern business context. One of the underlying sources of confusion is that the term is often used incorrectly, or casually. Conceptions drawn from science fiction and pop culture further widen the divide between what we imagine and what is, in fact, reality. Consequently, new terminology is beginning to emerge: for example, ‘augmented intelligence’ is increasingly used to distinguish between fully automated methods and those that work in support of human decision makers.

The changing perception of AI

The way we use the terms ‘AI’ or ‘artificial intelligence’ is becoming outdated as the field evolves at an ever-increasing rate. What we now refer to as artificial intelligence no longer means what it once did.

Indeed, if we go back to the days of artificial intelligence’s very inception, many of its definitions involved humans: serving humans, mimicking human intent, projecting human behaviour and behaving ethically.

These are all anthropomorphic terms, and when we anthropomorphise, we project human properties onto our machines. However, what actually goes on inside those machines is as inhuman as you can get. Even the term ‘machine learning’ is problematic: much of it is simply mathematical regression. Soft skills native to human learning, such as intuition, empathy, and aspiration, are far beyond any machine learning model of today. As a result, machine learning algorithms can sometimes fail at the simplest human tasks.
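To make the "mathematical regression" point concrete, here is a minimal sketch of ordinary least-squares line fitting in plain Python. The data and function name are illustrative, not from the article; real machine learning systems use far larger models, but the underlying operation is often this kind of curve fitting rather than anything resembling human learning.

```python
# Fitting y = slope * x + intercept by ordinary least squares,
# the kind of "mathematical regression" underlying much of machine learning.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates for slope and intercept.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Noise-free example: points lying on y = 2x + 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

There is no intuition or empathy in this procedure, only arithmetic over observed data points, which is the contrast the paragraph above is drawing.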

There are other methods that are intended to act more like how we understand the brain to behave; these are sometimes called neuromorphic methods. Designed to model real-world biology and augmented with bespoke hardware, these methods could be applied to sophisticated tasks such as recognising objects and faces or toppling the top human chess players.

How new methods shape AI

One of the biggest problems with many new AI methods, including neuromorphic methods, ensemble methods (where multiple methods combine to produce a single result) and cognitive algorithms, is that they are hard to explain, even though they often produce better results. Because they can solve more complex problems, we naturally prefer them in certain situations, especially those where there isn’t sufficient prior data to provide examples for a machine learning approach. What we are left with is an approach that produces better results, but in a way that even the most senior technology experts cannot explain, because of the thousands upon thousands of possibilities considered in producing an answer. Indeed, what happens inside these algorithms is the very reason they are so difficult to understand.
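The ensemble idea mentioned above can be sketched very simply: several independent models each make a prediction, and the combined result is taken by majority vote. The toy "models" and labels below are invented for illustration; real ensembles combine far more sophisticated components, which is precisely why their combined behaviour becomes hard to explain.

```python
from collections import Counter

# Three deliberately simple "models" that each classify a message.
# Their rules are hypothetical, purely for illustration.
def model_a(text):
    return "spam" if "win" in text else "ham"

def model_b(text):
    return "spam" if "free" in text else "ham"

def model_c(text):
    return "spam" if len(text) > 40 else "ham"

def ensemble_predict(models, text):
    # Ensemble step: each model votes, and the majority label wins.
    votes = Counter(model(text) for model in models)
    return votes.most_common(1)[0][0]

label = ensemble_predict([model_a, model_b, model_c], "win a free prize now")
```

Even in this three-rule toy, explaining *why* the ensemble chose a label already requires unpicking each member's reasoning; with thousands of learned components, that unpicking becomes practically impossible.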

This problem, often called ‘explainability’, isn’t an issue in lower-stakes applications; it is only a relatively minor problem if we cannot completely explain why a chess program made a brilliant yet entirely unpredictable move (compared with a human). It becomes a much more significant problem, however, when outcomes have a significant human impact, such as accidents involving autonomous vehicles.

These more serious incidents are the biggest concern, and one of the reasons why such technologies are not yet used on a wider scale. Ways of providing insight into the rationale and behaviour of this technology are needed before it enters the mainstream, to help prevent disastrous outcomes.

The benefits of augmented intelligence

Applications where users work alongside digital agents, applying their own thinking, intent, or morals to an algorithm, are increasingly being described as ‘augmented intelligence’. These algorithms offer support to the user, improving overall performance, but ultimately leave the human in charge while allowing the machine to learn from which advice is taken.

Augmented intelligence allows the human to make the final decision. Human intelligence is augmented by something able to manage amounts of data that humans could only dream of managing. These approaches will also be able to advise humans on what to do based on what they have done in the past. If an action was successful (as defined by goal functions and well-established criteria), the method can advise the human agent to do something similar. Likewise, if outcomes were unfavourable, it can advise the human to do something differently. This sort of “learning by doing” is very difficult for humans to do on their own: human bias, overwhelming amounts of data, and a changing environment all make complex tasks increasingly difficult.
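The "learning by doing" loop described above can be sketched in a few lines: record whether past actions succeeded, then advise the action with the best observed success rate, leaving the human free to accept or reject the advice. The action names, outcomes, and the simple success-rate criterion are all invented for illustration; a real system would use richer goal functions.

```python
from collections import defaultdict

# For each action, track [successes, trials].
history = defaultdict(lambda: [0, 0])

def record(action, success):
    # Log one observed outcome for an action.
    history[action][1] += 1
    if success:
        history[action][0] += 1

def advise():
    # Advise the action with the highest observed success rate.
    # The human remains free to accept or override this suggestion.
    return max(history, key=lambda a: history[a][0] / history[a][1])

# Hypothetical past outcomes.
record("discount_offer", True)
record("discount_offer", True)
record("cold_call", False)
record("cold_call", True)
suggestion = advise()
```

The key design point, matching the paragraph above, is that the system only *advises*: the final decision, and the feedback that trains the system, both come from the human.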

Augmented intelligence could allow users to do things better; it can help to remove the inconsistencies that are in our nature and provide a much more stable, empirical structure for decision-making.

The potential benefit for businesses that use augmented intelligence approaches correctly is exciting. These methods could save employees hours by completing administrative tasks more efficiently, improve awareness and consideration of bias, and help capture the subtle, nuanced aspects of human decision-making. To make full use of these approaches, however, businesses need to understand how AI should shape their practices, using it to aid employees and freeing up resources for more sophisticated, value-added activities.

What is the future of AI?

So, what does all this evolution mean for the future of AI and for how businesses approach their evolving strategies? First, by using terminology carefully (for example, clearly separating ‘artificial intelligence’ from ‘augmented intelligence’), businesses can arrive at a much clearer definition of how the technology should be used for a specific outcome. Companies that cannot clearly define emerging technology such as AI will face increasing frustration, in part because of that lack of clarity.

More importantly, clearer definitions allow businesses to home in on the specific parts of their technology and make it more fit for human purpose. As a result, AI can be used to drive human development in new and exciting ways, addressing more complex and meaningful opportunities. New technology will always present new opportunities and new risks; for a modern business, focus on this emerging technology will no doubt present both. It will be exciting indeed to see what comes next.

Anthony Scriffignano, PhD, Senior Vice President and Chief Data Scientist, Dun & Bradstreet

Anthony Scriffignano is the SVP and Chief Data Scientist of data and analytics firm, Dun & Bradstreet. He is an internationally recognised data scientist, with over 35 years of experience.