Artificial intelligence and machine learning have become buzzwords over the last several years. Everyone talks about them, explains them, and runs workshops on them. Today, the whole world revolves around intelligent computers, and companies invest in them to drive internal and business systems.
However, we should understand the difference between these phenomena.
AI versus ML
Simply put, artificial intelligence (AI) means making machines capable of performing intelligent tasks the way human beings do. Its key components are automation and intelligence: AI performs automated tasks using intelligence. Artificial intelligence develops in three stages:
Stage 1. Machine Learning (ML) - a set of algorithms used by intelligent systems to learn from experience.
Stage 2. Machine Intelligence - a more advanced set of algorithms that machines use to learn from experience. Deep neural networks are a good example. The technology is currently at this stage.
Stage 3. Machine Consciousness – self-learning from experience without the need for external data.
In other words, machine learning is the branch of computer science that makes AI happen. But how does it all work?
It is worth noting that computers outperform humans at many tasks. They are faster, don’t need rest, cannot be distracted, and are perfect at crunching long series of numbers. But how can a computer know what a cat looks like? Or how to drive a car? Or how to play a strategy game? Machine learning comes to the rescue.
Its basic principle is to teach machines by having them follow algorithms that are guided by data. Machine learning algorithms use training sets of real-world data to infer models that are more accurate and sophisticated than any a human could develop alone.
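To make the principle concrete, here is a minimal sketch of "learning from data": the program is never told the classification rule; it infers one from labeled training examples. This is a nearest-centroid classifier, one of the simplest learning algorithms, and all the numbers below are made-up illustrative data.

```python
# Each class is described only by labeled examples: (weight in kg, height in cm).
# The "model" is inferred from the data, not hand-coded.
training = {
    "cat":   [(4.0, 30.0), (5.0, 28.0), (3.5, 25.0)],
    "whale": [(30000.0, 800.0), (36000.0, 900.0)],
}

def centroid(points):
    # average each coordinate across the training examples of one class
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# "Training": compute one centroid per class from the data.
model = {label: centroid(pts) for label, pts in training.items()}

def classify(point):
    # label a new point by the class whose learned centroid is closest
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(point, c))
    return min(model, key=lambda label: sq_dist(model[label]))

print(classify((4.5, 27.0)))       # a small, light animal -> "cat"
print(classify((29000.0, 850.0)))  # an enormous one -> "whale"
```

Real systems use far richer models, but the shape is the same: data in, inferred model out, predictions on new inputs.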
With machine learning systems, computers learn to recognize speech, objects, and faces. Unlike programs that follow manually written rules for specific tasks, machine learning gives a program or system the ability to identify patterns and make predictions. Today, businesses leverage this capability.
Companies that deliver services like voice/face/object recognition, text-to-speech, speech-to-text, translation, and similar tasks use a range of pre-trained APIs that add intelligence to applications and services. Since machine learning technologies have been developing for quite some time, most systems covering a more or less standard set of functions already exist, so there is no need to develop new models and train systems from scratch. Amazon, Google, and Microsoft Azure offer such APIs and services.
For example, Amazon's package of vision services helps businesses easily add visual search and image classification to their applications. With them, solutions can detect and analyze objects, scenes, activities, and faces. Moreover, Amazon provides libraries with tons of images and pre-trained models that are ready to work; what developers need to know is how to integrate them. Therefore, training matters a great deal, and Amazon runs educational workshops that guide developers step by step through the process of building machine learning-based solutions.
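As an illustration of how little code such a pre-trained API requires, here is a sketch of a request to Amazon Rekognition's DetectLabels operation via the boto3 SDK. The bucket and file names are placeholders, and an actual call needs AWS credentials, so the network call itself is shown only as a comment.

```python
# Hypothetical request to Amazon Rekognition's DetectLabels API.
# "my-photo-bucket" and "garden.jpg" are placeholder names.
request = {
    "Image": {"S3Object": {"Bucket": "my-photo-bucket", "Name": "garden.jpg"}},
    "MaxLabels": 10,       # return at most 10 detected labels
    "MinConfidence": 80,   # skip labels the model is less than 80% sure about
}

# With AWS credentials configured, the call would look like:
# import boto3
# client = boto3.client("rekognition")
# response = client.detect_labels(**request)
# for label in response["Labels"]:
#     print(label["Name"], label["Confidence"])
```

No model training is involved on the developer's side; the intelligence is already in the service, and integration is reduced to composing a request.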
However, what if the service or application the company works on has nothing to do with the popular recognition tasks? What if the task is specific and Amazon does not have such a service?
Solving Non-Standard Tasks
In cases like this, the company has to go back to square one and create a neural network of its own.
A neural network is a set of algorithms, modeled loosely on the human brain, designed to recognize patterns. The network interprets sensory data through a kind of machine perception, labeling or clustering raw input. The patterns it recognizes are numerical and contained in vectors, so all real-world data, be it images, sound, text, or time series, must be translated into this format. By classifying, clustering, storing, analyzing, and managing data, and by using previous experience, the network behaves like the human brain, which means it can learn.
Neural networks are well-suited to identifying non-linear patterns, where there isn’t a one-to-one relationship between the input and the output. The networks identify patterns between combinations of inputs and a given output.
Let’s say you are creating a system that distinguishes different types of animals – cats, lizards, and whales – based on the presence or absence of certain features. In this case, the presence of four legs or warm blood alone doesn’t do a good job of predicting whether an animal is a cat: the former also describes a lizard, while the latter also describes a whale. However, the presence of four legs and warm blood together is a pretty good indicator (at least among these three) that we have a cat. Multiply the number of features and labels by a few thousand or million, and you’ll have a good idea of how such networks work.
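The cat/lizard/whale example can be reproduced with the smallest possible neural unit, a single perceptron. It is never told the rule "four legs AND warm blood"; it discovers the right combination of features from four labeled examples. The data encoding and learning rate below are illustrative choices.

```python
# Features: (has four legs, is warm-blooded). Labels: 1 = cat, 0 = not a cat.
data = [
    ((1, 1), 1),  # cat: four legs and warm blood
    ((1, 0), 0),  # lizard: four legs, cold blood
    ((0, 1), 0),  # whale: warm blood, no legs
    ((0, 0), 0),  # neither feature
]

w = [0.0, 0.0]  # one weight per feature
b = 0.0         # bias term

def predict(x):
    # fire (output 1) only if the weighted evidence crosses the threshold
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron learning rule: nudge weights toward each mistake's fix.
for _ in range(20):  # a few passes over the data are enough here
    errors = 0
    for x, y in data:
        err = y - predict(x)
        if err != 0:
            w[0] += err * x[0]
            w[1] += err * x[1]
            b += err
            errors += 1
    if errors == 0:
        break  # every example is classified correctly

print(predict((1, 1)))  # -> 1: only the combination of both features means "cat"
```

Neither feature alone can separate cats from the rest, but the learned weights and bias encode the conjunction, which is exactly the "patterns between combinations of inputs" idea described above.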
But let’s go back to the point where a company creates a service with specific tasks. To deliver it to the market, the company has to create its own neural network, and along with that, learn a framework to train it. Today there are numerous frameworks; on one of our latest projects, for example, we used TensorFlow. Developed by Google, TensorFlow uses a system of multi-layered nodes that allows developers to quickly set up, train, and deploy artificial neural networks with large datasets.
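To show what those "multi-layered nodes" do beneath a framework like TensorFlow, here is a toy two-layer network written directly in NumPy, trained by gradient descent on XOR, a pattern no single linear rule can capture. The layer sizes, learning rate, and epoch count are arbitrary illustrative choices; a real framework automates exactly this forward/backward loop at scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the output is 1 only when exactly one input is 1 - a non-linear pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.uniform(-1, 1, (2, 4))  # input -> hidden layer weights
b1 = np.zeros((1, 4))
W2 = rng.uniform(-1, 1, (4, 1))  # hidden -> output layer weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # forward pass: data flows through the layers of nodes
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # backward pass: gradients of the mean squared error w.r.t. each weight
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h) / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

In TensorFlow the same network would be a few lines of high-level API calls, with the gradient computation derived automatically instead of written by hand.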
The project I mentioned involved developing an application for determining the size of wounds. So far, no off-the-shelf service can do that, so the team created its own set of algorithms – a neural network – to analyze the size and type of wounds based on photos.
In other words, we went almost the whole way to give the application intelligence. This is the case when pre-trained modules offer no algorithms, similar or otherwise, that could help perform the task. The app achieves about 90% computational accuracy, which is quite essential for emergency crews and preoperative labs.
As we can see, lots of tasks today are being delegated to intelligent machines, most of which are created to perform one specific automated task. Even though machines are quicker and more precise, some things remain inaccessible to them: creativity, feeling and understanding emotions, and using common sense to solve new problems. Even with multiple tech advances, humans will remain the masters of these three skills, likely for a long time.
However, no one would refuse an intelligent machine that cleans the house. Applying machine learning mechanisms to make our lives better therefore seems like a wise decision.
Alexey Zhukovsky, Delivery Director at Intetics
Image Credit: Sarah Holmlund / Shutterstock