AI as a cognitive multiplier: Tackling problems that are hard for humans and problems that are hard for computers

While the all-knowing, all-purpose robotic companion – think HAL 9000 from “2001”, the ship’s computer on “Star Trek”, or Samantha from “Her” – remains “just around the corner”, the latest generation of AI technologies is having an unprecedented impact on our personal and professional lives. These technologies provide “cognitive multipliers”, or power tools, for mental rather than physical tasks. They enable individuals to perform their tasks faster, more accurately, more consistently, and at greater scale than they could before. They also support super-human behaviours, such as solving problems that are beyond the capability of any single individual.

To achieve this level of effectiveness, AI systems have had to overcome numerous challenges, not all of which are obvious. More specifically, it has been necessary to address 1) tasks that are difficult for humans, and 2) those that are easy for humans but hard for computers.

Doing what computers do well

Since their earliest days, computers have been built to handle greater volumes of data than people could process. Traditionally, this required bespoke applications and carefully curated data to support them. Today, more general approaches enable powerful capabilities, using more diverse data, to be developed faster and at lower cost.

What’s Different? 

Today’s AI solutions, and machine learning (ML) in particular, have benefitted from some important changes that together make things possible that would have been impossible even a few years back.  These include:

·         Data Volume and Ubiquity: Information about a vast range of our personal and professional lives is available online and in a computer-accessible form. Enterprises track business transactions, finances, operations, client interactions, and more. Consumer information about who bought what, when, and where is tracked. Communication, social media, and entertainment providers collect data from our online interactions. Electronic medical records, activity trackers, and healthcare monitors provide additional personal data. Add data about weather, traffic, vehicle tracking, polling, telemetry, trading systems, etc., and the amount of information available to machine learning applications is almost boundless.

·         Algorithms, Platforms, and Computing Power: New(er) ML approaches, including deep learning, reinforcement learning, and transfer learning, enable practical solutions to previously intractable problems. Inexpensive and easily scalable computing power greatly lowers the bar to ML experimentation and solution deployment. Lastly, numerous platforms are automating increasing levels of model selection, building, assessment, and management functionality, reducing reliance on scarce data scientist resources.

·         Ubiquitous Access: People are connected to more systems and networks than ever before. This not only provides opportunities to collect data but to impact people via applications that leverage that data. The nascent “internet of things” is rapidly increasing the connectivity between people and electronic systems.

Pattern Finding, Predictions, Recommendations, Human Mimicry

Given this wealth of data, increasingly easy ways to leverage it, and ever-growing opportunities to interact with users, AI is enabling automated systems to do things they never could before. These include finding patterns and drawing conclusions that are difficult, if not impossible, for people to match, often requiring the observation of thousands of variables (or more!) and their possible interactions. By drawing on vast collections of past examples, these systems are able to make highly accurate predictions and offer personalised recommendations even if neither the system nor its human developers completely understand the bases for those conclusions. Applications include movie recommendations on Netflix, identifying likely areas in which crimes may occur, pre-provisioning Amazon delivery trucks with likely purchases, routing traffic, recommending financial products, presenting articles on Facebook, matching people on dating sites, and more.
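The recommendation idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of collaborative filtering: unseen items are scored by similarity-weighted ratings from other users. The toy ratings matrix is invented; production recommenders operate over millions of users and many more signals.

```python
# Toy user-item ratings; real systems use vastly larger, sparser data.
from math import sqrt

ratings = {
    "alice": {"Movie A": 5, "Movie B": 3, "Movie C": 4},
    "bob":   {"Movie A": 5, "Movie B": 3, "Movie D": 5},
    "carol": {"Movie B": 1, "Movie C": 5, "Movie D": 2},
}

def cosine(u, v):
    # Cosine similarity between two sparse rating vectors.
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(user):
    # Score items the user has not rated by similarity-weighted ratings.
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their)
        for item, r in their.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # suggests items alice has not yet rated
```

Note that the system never "understands" why a film appeals to a viewer; it simply exploits correlations in past behaviour, which is exactly the point made above.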

Doing what humans do well

Humans have evolved many skills, which we take for granted, that have turned out to be very difficult to implement in computers. It is clear why it is beneficial to train computers to do things humans don’t do well, but perhaps less clear why they need to learn human-like skills. One clear example is obtaining some level of (human) language understanding, both to facilitate communication with human users and to access human-created artifacts (such as documents, email, texts, queries, commands, etc.). Computer vision is critical for performing many real-world tasks. Emotional computing helps computers “understand” humans and be understood by them. There are many other areas, but let’s focus here on language understanding.

To be clear, no computer today is capable of deeply understanding natural language (spoken or written) at anything even approaching the level that a human can. In other words, do not give a computer a non-trivial document and expect it to be able to answer arbitrary questions about its content – including things that are implicit in it – except in the most constrained and controlled situations. However, all is not lost: there are significant beneficial tasks that computers can perform, even with less than full language comprehension. Some of these are described below:

Information Retrieval

A dog that fetches the newspaper is performing a helpful task, but there is no expectation that the dog understands the newspaper’s contents. But what if that dog, or perhaps a virtual assistant, could fetch the appropriate document from a law library and open it to the right page for you? It turns out that systems can be taught or trained to find “relevant” documents for a variety of tasks without any deep understanding of the document content. This can be done either by providing the system with sufficient examples of relevant documents, by supplying an ontology of relevant concepts, or by combining the two.
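One classic way to rank documents without understanding them is TF-IDF term weighting: words that are frequent in one document but rare across the collection mark it as relevant. The following sketch uses an invented three-document corpus; real retrieval systems layer stemming, phrase matching, and learned ranking on top of this idea.

```python
# TF-IDF relevance ranking over a toy corpus -- no "understanding" needed.
import math
from collections import Counter

docs = {
    "contracts": "a contract requires offer acceptance and consideration",
    "torts": "negligence requires duty breach causation and damages",
    "weather": "tomorrow will be sunny with light winds",
}

def tfidf_vectors(corpus):
    # Weight each term by in-document frequency times corpus rarity.
    tokenised = {name: text.split() for name, text in corpus.items()}
    df = Counter(t for words in tokenised.values() for t in set(words))
    n = len(corpus)
    return {
        name: {t: c / len(words) * math.log(n / df[t])
               for t, c in Counter(words).items()}
        for name, words in tokenised.items()
    }

def rank(query, corpus):
    # Score each document by the summed weights of the query terms.
    vecs = tfidf_vectors(corpus)
    terms = query.lower().split()
    scores = {name: sum(vec.get(t, 0.0) for t in terms)
              for name, vec in vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank("negligence duty damages", docs)[0])
```

The ranker matches surface statistics of words, not meaning, which is why it can "fetch the right document" while remaining as oblivious to its contents as the dog.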

Factoid Extraction

Unstructured documents often contain data that can be useful for programs to reason with, but which may not (yet) be available in a structured, computer-accessible form. For instance, dosage or contraindication information for drugs may be found within their textual data sheets rather than as structured information within a database table. Where the type of information sought is known a priori and the source documents are somewhat standardised in how they present this information, machine learning models can be developed to extract that information from the text and make it available for further processing.
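As a minimal illustration of a priori factoid extraction, the sketch below pulls a dosage field out of a semi-standardised data sheet with a single pattern. The sheet text and drug name are invented, and production systems would typically use trained sequence-labelling models rather than one regular expression.

```python
# Extracting a known field (dosage) from semi-standardised text.
import re

sheet = """
Drug: Examplol (hypothetical)
Dosage: 250 mg twice daily
Contraindications: do not combine with grapefruit juice
"""

DOSAGE = re.compile(r"Dosage:\s*(\d+)\s*(mg|g)\s+(.*)", re.IGNORECASE)

def extract_dosage(text):
    # Return the dosage as structured data, or None if absent.
    m = DOSAGE.search(text)
    if not m:
        return None
    amount, unit, schedule = m.groups()
    return {"amount": int(amount), "unit": unit, "schedule": schedule.strip()}

print(extract_dosage(sheet))
# e.g. {'amount': 250, 'unit': 'mg', 'schedule': 'twice daily'}
```

Once extracted, the factoid lives in a structured form that downstream programs can query or reason with, which is the whole value of the exercise.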

Selecting Queries or Commands

One of the most visible examples of partial language understanding can be found in the use of chatbots and similar natural language interfaces (such as Siri, Google Now, Alexa, Cortana, etc.). While they may appear to understand spoken language, they are actually performing a much simpler classification task. That is, each has a limited set of commands (often called intents or skills) that it can perform, so the job of the natural language front-end is to recognise which of those commands the spoken (or typed) utterance best corresponds to, and to extract the necessary parameter values from the utterance. For instance, understanding “What’s the weather in Chicago tomorrow?” requires recognising that the “weather forecast” service is the best match and that the location and time to pass it are “Chicago” and “tomorrow”, respectively.
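The classify-then-extract pipeline can be sketched very simply. This toy version matches the weather example above; the intent names, keyword lists, and slot patterns are invented for illustration, and real assistants use trained classifiers and sequence models instead of keyword overlap.

```python
# Intent classification plus slot filling, reduced to its simplest form.
import re

INTENTS = {
    "weather_forecast": {"weather", "forecast", "rain", "sunny"},
    "set_alarm": {"alarm", "wake", "remind"},
}

def classify(utterance):
    # Pick the intent whose keyword set overlaps the utterance most.
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    return max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))

def extract_slots(utterance):
    # Pull out the parameters the chosen service needs.
    slots = {}
    m = re.search(r"\bin ([A-Z][a-z]+)", utterance)
    if m:
        slots["location"] = m.group(1)
    m = re.search(r"\b(today|tomorrow|tonight)\b", utterance.lower())
    if m:
        slots["time"] = m.group(1)
    return slots

q = "What's the weather in Chicago tomorrow?"
print(classify(q), extract_slots(q))
```

Note how little "understanding" is required: the front-end only has to route the utterance to one of a handful of services and fill in its parameters.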

Sentiment Analysis

Another approach to leveraging a partial understanding of natural language is sentiment analysis. It is often useful to understand what positive or negative feelings are being reflected by a document – think of product reviews or help-desk sessions – and what is triggering those reactions, even if the document content is not fully understood.
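The simplest form of sentiment analysis is lexicon-based scoring, sketched below. The word lists here are tiny and invented; real systems use large sentiment lexicons or trained classifiers, handle negation, and also try to attribute sentiment to specific aspects of the product or session.

```python
# Lexicon-based sentiment scoring: count positive vs negative words.
POSITIVE = {"great", "excellent", "love", "fast", "helpful"}
NEGATIVE = {"terrible", "slow", "broken", "hate", "refund"}

def sentiment(text):
    # Net score > 0 is positive, < 0 negative, 0 neutral.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Love the camera but shipping was terrible and slow"))
```

Even this crude approach can usefully flag unhappy customers at scale, despite having no model of what the review is actually about.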

Mimicking Human Responses

By looking at large corpora of human activities, systems are (at least partially) able to mimic some human behaviours (such as machine translation or conversational agents) even without a deep understanding of how humans perform those tasks. A system that behaves in a manner that is similar to how humans have behaved in almost identical situations in the past may not reflect true understanding, but can be useful in many scenarios.
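Mimicry by retrieval can be sketched as follows: answer a new utterance with the recorded human response to the most similar past utterance. The dialogue corpus and similarity measure here are invented toys; real conversational agents learn from far larger logs with far richer matching.

```python
# Retrieval-based mimicry: reuse the human reply to the closest past query.
def similarity(a, b):
    # Jaccard overlap of word sets -- crude, but enough to illustrate.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

past_dialogues = [
    ("how do I reset my password", "Click 'Forgot password' on the sign-in page."),
    ("where is my order", "You can track your order from the Orders page."),
    ("how do I cancel my subscription", "Open Settings and choose Cancel subscription."),
]

def respond(utterance):
    # Find the most similar past query and echo its recorded answer.
    best = max(past_dialogues, key=lambda pair: similarity(utterance, pair[0]))
    return best[1]

print(respond("I need to reset the password"))
```

The system has no idea what a password is; it behaves sensibly only because a human behaved sensibly in a nearly identical situation before, which is precisely the limitation (and the utility) described above.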

Leveraging Human Knowledge and Reasoning

Learning from data or examples can be facilitated by using domain knowledge to craft the features used in machine learning solutions. Alternatively, a knowledge-based approach can be used, in which a system is explicitly taught (rather than trained) by manually authoring computer-accessible models of domain-specific (or more general) knowledge. Automated inference can then perform human-like reasoning over these explicit knowledge models (i.e., ontologies or knowledge bases). While this approach may be more labour-intensive than ML, it offers greater transparency and explainability of its results, and potentially greater maintainability.
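A small sketch of explicit knowledge plus inference: hand-authored class-subclass facts, with a transitive rule that derives new facts and can show its chain of reasoning. The taxonomy is invented for illustration; real ontologies carry many relation types and richer inference rules.

```python
# Hand-authored subclass facts -- the system is taught, not trained.
SUBCLASS_OF = {
    "beagle": "dog",
    "dog": "mammal",
    "mammal": "animal",
}

def is_a(concept, ancestor):
    # Follow subclass links transitively; every step is inspectable,
    # which is the transparency benefit mentioned above.
    chain = [concept]
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        chain.append(concept)
        if concept == ancestor:
            return True, " -> ".join(chain)
    return False, " -> ".join(chain)

print(is_a("beagle", "animal"))  # (True, 'beagle -> dog -> mammal -> animal')
```

Unlike a trained model, the conclusion comes with its justification attached: the chain of explicit facts that produced it.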


Developing successful AI solutions requires recognising that things that are hard for humans are often simple for computers, and vice versa. Traditional IT solutions have played to computers’ strengths, focusing on crunching numbers and manipulating massive amounts of data. Applications that will be more enmeshed in our day-to-day lives must give equal consideration to supporting human-like activities and reasoning. AI solutions, whether trained from experience or taught via curated knowledge models, offer a promising approach to doing just that.

Larry Lefkowitz, Ph.D, Chief Scientist, Publicis.Sapient
Image Credit: John Williams RUS / Shutterstock