
How to teach a machine to think like a human


Artificial intelligence (AI) and machine learning (ML) have become buzzwords over the last several years. Everyone talks about them, explains them as concepts, and runs workshops on them, and the majority of companies have now invested in them to drive internal and business systems.

However, we should understand the difference between these phenomena. 

AI versus ML

Simply put, AI means making machines capable of performing tasks that would otherwise require human intelligence. Its key components are automation and intelligence, and its development is commonly described in three stages:

  1. ML: a set of algorithms used by intelligent systems to learn from experience
  2. Machine intelligence: a more advanced set of algorithms used by machines to learn from experience (deep neural networks are a good example, and the technology is currently at this stage)
  3. Machine consciousness: self-learning from experience without the need for external data

In other words, machine learning is the branch of computer science that makes AI happen. But how does it all work?

Thinking machines

It's worth saying that computers outperform humans in many functions: they are faster, don't need rest, cannot be distracted, and are perfect at crunching numbers. But how can a computer know what a cat looks like, how to drive a car, or how to play a strategy game? Machine learning comes to the rescue.

Its basic principle is to teach machines to follow algorithms that are guided by data. Machine learning algorithms use training sets of real-world data to infer models that are more accurate and sophisticated than humans could develop on their own.
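To make "inferring a model from a training set" concrete, here is a deliberately tiny sketch (hypothetical data and function names, standard library only): instead of hand-coding a temperature cutoff, the program derives one from labeled examples.

```python
# Toy illustration: "learn" a decision rule from labeled training data
# instead of hard-coding it. Data and names are made up for this sketch.

# Training set: (body temperature in Celsius, label) pairs
training = [(36.8, "warm-blooded"), (37.2, "warm-blooded"),
            (38.1, "warm-blooded"), (21.0, "cold-blooded"),
            (18.5, "cold-blooded"), (24.3, "cold-blooded")]

def train_threshold(samples):
    """Infer a decision threshold: the midpoint between the class means."""
    warm = [t for t, label in samples if label == "warm-blooded"]
    cold = [t for t, label in samples if label == "cold-blooded"]
    return (sum(warm) / len(warm) + sum(cold) / len(cold)) / 2

def classify(threshold, temperature):
    return "warm-blooded" if temperature >= threshold else "cold-blooded"

threshold = train_threshold(training)
print(classify(threshold, 36.5))  # a new, unseen measurement
```

The "model" here is just one number, but the workflow is the same as in real systems: fit parameters to training data, then apply them to inputs the program has never seen.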

With machine learning systems, computers learn to recognize speech, objects, and faces. Unlike programs that follow manually created rules for specific tasks, machine learning gives the program or system the ability to identify patterns and make predictions. Today, businesses leverage this ability.

Companies that deliver services like voice, face, and object recognition, text-to-speech, speech-to-text, translation, and other common tasks use a range of pre-trained APIs that add intelligence to applications and services. Since machine learning technologies have been maturing for quite some time, models covering most standard functions already exist, so there is no need to develop and train new ones. Amazon, Google, and Microsoft Azure all offer such APIs and services.
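As a sketch of what calling such a pre-trained API looks like, consider Amazon Rekognition's `detect_labels` operation via the boto3 SDK. The bucket and image names below are made up, and actually invoking the service requires AWS credentials, so this sketch only builds and prints the request.

```python
# Hypothetical sketch: labeling an image with a pre-trained vision API
# (Amazon Rekognition via boto3). Bucket and key names are illustrative.

def build_request(bucket, key, max_labels=10):
    """Build a detect_labels request for an image stored in S3."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,
        "MinConfidence": 80.0,  # only return reasonably confident labels
    }

def detect_labels(bucket, key):
    import boto3  # imported here so the sketch runs without AWS set up
    client = boto3.client("rekognition")
    response = client.detect_labels(**build_request(bucket, key))
    return [label["Name"] for label in response["Labels"]]

# detect_labels("my-bucket", "cat.jpg") might return ["Cat", "Pet", ...]
print(build_request("my-bucket", "cat.jpg"))
```

Note that no model is trained here at all: the intelligence lives behind the API, and the developer's job is integration.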

For example, Amazon's Rekognition service helps businesses easily add visual search and image classification to applications. With it, solutions can detect and analyze objects, scenes, activities, and faces. Moreover, Amazon provides libraries with tons of images and pre-trained models that already know how to do the work; what developers need to know is how to integrate them.

Training matters here, too: Amazon runs educational workshops that guide you step by step through the process of developing machine learning-based solutions. But what if the service or application the company is working on has nothing to do with popular recognition tasks? What if the task is so specific that Amazon does not offer a service for it?

Solving non-standard tasks 

In instances like this, the company has to start from square one and create a neural network.

A neural network is a set of algorithms, modeled loosely after the human brain, designed to recognize patterns. The network interprets sensory data through a kind of machine perception, labeling or clustering raw input. The patterns it recognizes are numerical, contained in vectors.

All real-world data, be it images, sound, text, or time series, must be translated into this numerical format. By classifying, clustering, storing, analyzing, and managing data, and by drawing on previous experience, the network behaves like the human brain, which means it can learn.
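The "translate everything into vectors" step can be as simple as encoding each trait as a number. A hypothetical sketch (the animals and features are made up for illustration):

```python
# Hypothetical sketch: turning raw descriptions into the numeric
# feature vectors a neural network actually consumes.

def to_vector(animal):
    """Encode an animal's traits as a binary feature vector."""
    return [
        1 if animal["legs"] == 4 else 0,     # has four legs
        1 if animal["warm_blooded"] else 0,  # is warm-blooded
        1 if animal["fur"] else 0,           # has fur
    ]

cat = {"legs": 4, "warm_blooded": True, "fur": True}
whale = {"legs": 0, "warm_blooded": True, "fur": False}

print(to_vector(cat))    # [1, 1, 1]
print(to_vector(whale))  # [0, 1, 0]
```

Images and sound go through the same idea at much larger scale: pixel intensities or audio samples become long vectors of numbers.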

Neural networks are well-suited to identifying non-linear patterns, where there isn’t a one-to-one relationship between the input and the output. The networks identify patterns between combinations of inputs and a given output. 

Let’s say you are creating a system that distinguishes different types of animals – cats, lizards, and whales – based on the presence or absence of certain features. In this case, the presence of four legs or warm blood doesn’t do a good job of predicting whether an animal is a cat or not, as the former could also describe a lizard while the latter would also describe a whale. 

However, the presence of four legs and warm blood is a pretty good indicator (at least in this case) that we have a cat. Multiply the number of features and labels by a few thousand or million, and you’ll have a good idea of how the networks work.
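The "four legs AND warm blood" rule is exactly the kind of feature combination a network can learn from examples rather than being told. As a minimal sketch, here is a single artificial neuron trained with the classic perceptron update rule on the animal data above (the dataset and learning settings are illustrative):

```python
# Minimal sketch: one neuron learning that the COMBINATION of four
# legs and warm blood predicts "cat", while either feature alone
# does not. Classic perceptron update rule, standard library only.

# Features: [has_four_legs, is_warm_blooded] -> 1 if cat, else 0
training = [
    ([1, 1], 1),  # cat: four legs, warm blood
    ([1, 0], 0),  # lizard: four legs, cold blood
    ([0, 1], 0),  # whale: no legs, warm blood
    ([0, 0], 0),  # e.g. a snake
]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

for _ in range(20):  # a few passes over the training set
    for x, target in training:
        error = target - predict(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(x) for x, _ in training])  # [1, 0, 0, 0]
```

With millions of features and many layers of such units, the same principle scales up to the recognition systems described above.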

But let's go back to the point where a company creates a service for a specific task. To bring it to market, the company has to create a neural network, which also means mastering a framework for training it. There are numerous frameworks available today; on one of our latest projects, we used TensorFlow.

Developed by Google, TensorFlow uses a system of multi-layered nodes that lets you quickly set up, train, and deploy artificial neural networks with large datasets. The project I mentioned involved developing an application for determining the size of wounds; so far, no off-the-shelf service can do that.
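A minimal TensorFlow sketch of the "multi-layered nodes" idea might look like the following. The layer sizes and input shape are purely illustrative, and a real image-analysis model like the wound application would use convolutional layers and far more data.

```python
import numpy as np
import tensorflow as tf

# Hypothetical sketch: a small multi-layer network in TensorFlow/Keras.
# Sizes are illustrative, not taken from the project in the article.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),               # 4 input features
    tf.keras.layers.Dense(16, activation="relu"),    # hidden layer, 16 nodes
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Training would then be: model.fit(features, labels, epochs=...)
prediction = model.predict(np.zeros((1, 4)))
print(prediction.shape)  # one prediction for one input row
```

The framework handles the heavy lifting (gradients, optimization, deployment); the team's work goes into the data, the architecture, and the training process.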

So the team created its own set of algorithms (a neural network) to analyze the size and type of wounds from photos. In other words, we went almost the whole way to give the application intelligence: this is the case where pre-trained models hold no such information and no similar algorithms can help perform the task. The app achieves about 90% computational accuracy, which is essential for emergency crews and pre-op labs.

As can be seen, many tasks today are being handed over to intelligent machines, most of them created to perform one specific automated task. Even though machines are quicker and more precise, some things remain out of their reach: creativity, feeling and understanding emotions, and using common sense to solve new problems. Even with multiple tech advances, humans will remain the masters of these three skills, likely for a long time.

However, no one would refuse an intelligent machine that cleans the house. Applying machine learning to make our lives better therefore seems like a wise decision.


Further reading

To learn more about AI and machine learning, take a look at how both are transforming transactions in retail; how to get a machine learning platform working perfectly; how to use machine learning algorithms to supercharge your business; and how the AI and ML community is ready to help with the climate crisis.

Alexey Zhukovsky is a Delivery Director at Intetics. He has been involved in business analysis and project management for over 10 years and is heavily engaged in projects related to the development of AI/ML solutions and mobile applications.