
The 4 types of AI and where you encounter them

(Image credit: John Williams RUS / Shutterstock)

AI, or artificial intelligence, is becoming increasingly prevalent in today’s society, but many people don’t realise there are actually four distinct types of artificial intelligence. Keep reading to discover each type, how they differ, and examples of how you might see each one play out in real-life scenarios. 

1. Reactive machines

Think of this type of AI as the most basic variety. As the name suggests, it merely reacts to the current scenario and cannot rely on taught or recalled data to make decisions in the present. 

Most of the earliest robots came into existence after engineers relied on maps and other very detailed data to tell the gadgets how to move throughout their environments. Reactive machines do away with maps and other forms of pre-planning altogether and focus on live observations of the environment. 

Where might you see a reactive machine at work? Google’s AlphaGo reactive AI machine generated lots of headlines when it beat a top human Go player, a feat many AI researchers had believed was still years away. But even the technology behind AlphaGo isn’t extremely advanced. It uses a neural network to watch developments in the game and respond accordingly. 

Reactive machines are given certain tasks and don’t have capabilities beyond those duties. They’re what you’re most likely to see when witnessing a robot playing a game, like chess, against a human. That’s impressive, but because reactive machines have no memory or wider model of the world, they respond to identical situations in exactly the same way every time those scenarios are encountered.
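That property is easy to see in code. Below is a minimal sketch of a reactive agent (a hypothetical tic-tac-toe example, not AlphaGo's actual architecture): it maps the current state to a move using a fixed rule, keeps no memory between calls, and therefore always answers an identical state with an identical move.

```python
def reactive_move(board):
    """Pick a move for 'X' in tic-tac-toe from the current board only.

    board is a tuple of 9 cells: 'X', 'O', or ' '. The agent prefers an
    immediate winning move, otherwise takes the first empty cell.
    """
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        cells = [board[a], board[b], board[c]]
        if cells.count('X') == 2 and cells.count(' ') == 1:
            return (a, b, c)[cells.index(' ')]  # complete the winning line
    return board.index(' ')                     # otherwise, first free cell

state = ('X', 'O', ' ',
         'X', 'O', ' ',
         ' ', ' ', ' ')
# No stored history: presenting the same state twice yields the same move.
assert reactive_move(state) == reactive_move(state) == 6
```

The agent "decides" well within its one assigned task, but nothing it sees ever changes how it will respond to the same board later.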

2. Limited memory 

AI that works off the principle of limited memory depends on both pre-programmed knowledge and observations carried out over time. In the case of the latter insight, the AI looks at certain things within an environment and detects how they change, then makes necessary adjustments. 

This kind of technology is already used in some autonomous cars. They observe how other vehicles are moving around them, in the present, and as time passes. That ongoing, collected data gets added to the static data within the AI machine, such as lane markers and traffic lights. 
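The combination of static knowledge and transient observations can be sketched as follows. This is a toy illustration (the class, numbers, and speed-limit rule are all invented for the example, not taken from any real autonomous-driving system): a short rolling window of recent observations is kept, older data ages out, and decisions blend that window with fixed map knowledge.

```python
from collections import deque

SPEED_LIMIT = 30.0  # static knowledge, e.g. from map data (m/s, assumed)

class LimitedMemoryTracker:
    def __init__(self, window=3):
        # Only the last `window` observations are retained: limited memory.
        self.positions = deque(maxlen=window)

    def observe(self, position_m):
        """Record another vehicle's position, one observation per second."""
        self.positions.append(position_m)

    def estimated_speed(self):
        """Average speed (m/s) over the retained window of observations."""
        if len(self.positions) < 2:
            return 0.0
        return (self.positions[-1] - self.positions[0]) / (len(self.positions) - 1)

    def is_speeding(self):
        # The decision combines transient observations with static knowledge.
        return self.estimated_speed() > SPEED_LIMIT

tracker = LimitedMemoryTracker(window=3)
for pos in [0.0, 35.0, 70.0, 105.0]:  # another car observed each second
    tracker.observe(pos)
print(tracker.estimated_speed())  # 35.0 m/s over the last three observations
print(tracker.is_speeding())      # True
```

Unlike a reactive machine, this agent's answer depends on what it saw a moment ago, but its memory is deliberately shallow: the first observation has already been discarded.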

Mitsubishi Electric is a company that has been figuring out how to improve such technology for applications like self-driving cars. Representatives there say it has made major headway. 

Autonomous cars are put to the test much more when driving through neighbourhoods than on highways because in the former setting, they are more likely to encounter things like bicyclists and pedestrians. Until recently, it took some onboard AI systems in self-driving cars about 100 seconds to make judgments. 

Thanks to a system called “compact AI” developed by Mitsubishi Electric, autonomous cars can compute things in significantly shorter amounts of time thanks to a filtering technology that only looks at the information necessary for certain types of analysis. 
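The filtering idea can be illustrated with a short sketch. Everything here is invented for illustration (it is not Mitsubishi Electric's design): before running expensive analysis, the system discards detected objects that cannot affect the decision at hand, so the downstream computation touches far less data.

```python
# Hypothetical detections from a car's sensors (structure is assumed).
detections = [
    {"kind": "pedestrian", "distance_m": 8.0},
    {"kind": "bicyclist",  "distance_m": 15.0},
    {"kind": "parked car", "distance_m": 60.0},
    {"kind": "tree",       "distance_m": 4.0},
]

RELEVANT_KINDS = {"pedestrian", "bicyclist"}  # assumed relevance rule
HORIZON_M = 30.0                              # assumed analysis horizon

def filter_relevant(objs):
    """Keep only objects the downstream planner actually needs to analyse."""
    return [o for o in objs
            if o["kind"] in RELEVANT_KINDS and o["distance_m"] <= HORIZON_M]

print([o["kind"] for o in filter_relevant(detections)])
# ['pedestrian', 'bicyclist'] — the planner now analyses 2 objects, not 4
```

However the real system is built, the principle is the same: analysing only the necessary information is what shrinks the judgment time.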

Because of this improvement, we may soon see autonomous cars, and other applications of limited memory AI, that work faster than ever. 

3. Theory of mind

Theory of mind AI represents a very advanced class of technology whereby the respective applications interpret their worlds, plus the people in them. This kind of AI requires a thorough understanding that the people and things within an environment can alter feelings and behaviours. 

As such, a robot working off of theory of mind AI would be able to gauge things within its world and recognise that the people in that environment have their own minds, unique emotions, learned experiences and so on. Theory of mind AI can pick up on people’s intentions and predict how they’ll behave, too. 

Theory of mind AI hasn’t been frequently developed in society yet, but research suggests the way to make progress is to start by designing robots that can carry out some of the things kids can do early in the developmental process, such as detect faces and eye movements and follow others’ gazes. 

Humans do those things, and others, to show they’re paying attention, and worthwhile theory of mind AI must be able to do the same. 

One real-world example of theory of mind AI is Kismet, a robot head made in the late 1990s by a Massachusetts Institute of Technology researcher. Kismet can mimic human emotions and recognise them. Both abilities are key advancements in theory of mind AI, but Kismet can’t follow gazes or convey attention to humans. 

More recently, a team of engineers at Imperial College London developed a robotic arm that paints pictures for a person controlling the device with only their eyes. Although that example doesn’t specifically relate to theory of mind AI, it does show how scientists are figuring out how to make AI recognise minute cues in humans. 

Someday, that insight could be beneficial for theory of mind AI.

Analysts believe that if theory of mind AI progresses enough, it could be used in caregiving roles, such as to assist elderly or disabled people with everyday tasks. It may even offer comfort to lonely people who don’t have nearby human companions. 

4. Self-aware AI

This most advanced type of artificial intelligence involves machines that possess consciousness of their own, rather than merely recognising it in humans. Scientists haven’t been able to build something that displays this type of AI yet, but when it happens, the machine should be able to demonstrate desire for certain things and recognise its own internal feelings. 

Self-aware AI is an extension of theory of mind AI. It means the respective devices are tuned into cues from humans, such as attention spans and emotions, but are also able to display self-driven reactions. 

Imagine a researcher developed a humanoid robot that could go grocery shopping and used self-aware AI to make decisions about what its owner wants most at the supermarket. 

The robot might feel excitement that a favourite product is part of a buy-one-get-one-free sale and decide to pick it up, even though it wasn’t initially directed to do so, precisely because it remembers that the owner uses that product and has almost run out of it. 

Conversely, the same robot might feel genuine disappointment that the supermarket was sold out of an owner’s most-beloved breakfast cereal, an item the robot was told to stock up on if possible. It might even get impatient if the lines at the checkout are extremely long and the cashiers aren’t working at top efficiency levels.

Now that you know about the four types of AI, it’s easier to start recognising them in society. You can also begin to get even more excited about what may be possible as AI technology continually advances. 


Kayla Matthews
Kayla Matthews is a technology writer and cybersecurity blogger. You can read more posts from Kayla on Datanami, CloudTweaks, VentureBeat and Motherboard.