The renaissance of machine learning is already here


“Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?”

This famous quote from the film “I, Robot”, inspired by science fiction writer Isaac Asimov’s collection of short stories, poses questions that current technology can already answer.

Computer programs with machine learning capabilities can compose sonatas, songs and classical pieces, and can even paint pictures on a par with high art. The intelligent computer or robot, capable of making decisions independently, is taking shape before our very eyes.

The idea of intelligent machines was already being fostered in the 1950s, when research in the field of Artificial Intelligence reached an early peak. At the time, there was even an expectation that smart applications, machines and robots would serve the general public in day-to-day tasks within a short span of time. However, many believe that this expectation was ahead of its time: the computing power of the day simply could not support it, and research on Artificial Intelligence was largely abandoned for decades.

Now, decades later, the idea has been reborn, and applications with Machine Learning capabilities are taking over various aspects of our lives. Beyond the works of art mentioned above, there are applications such as facial recognition software, real-time language translation in voice calls (such as Skype Translator), ride-sharing services (such as Uber), medical diagnostic tools, smart data security solutions, and more.

While we are standing at the threshold of an exciting renaissance in the field of machine learning, another concept, Deep Learning, has emerged. The difference between Machine Learning and Deep Learning tells us a great deal about the future toward which the computing world is moving.

Machine learning algorithms need data as input, but they also need a human to “instruct” them with a set of rules and classifications so that they learn to distinguish and identify what is required of them. For example, if we upload several photos of cats and tell the algorithm that each one is a cat, it will eventually learn to identify new pictures it has not yet encountered on its own. In Deep Learning, however, the algorithm does not need the human to define those distinguishing rules and features for it. All it needs is a much larger volume of data, from which it teaches itself how to define, classify and identify the cat.
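
To make the distinction concrete, here is a minimal sketch in Python (using NumPy and scikit-learn, with synthetic stand-in data rather than real photographs): the classical route feeds the model only the features a human chose, while the “deep” route is handed raw pixels and left to learn its own internal representation.

```python
# Illustrative sketch only: synthetic data, not a real cat classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for 8x8 grayscale photos: label 1 = "cat", 0 = "not cat".
images = rng.random((500, 8, 8))
labels = rng.integers(0, 2, size=500)

# Classical machine learning: a human decides which features matter
# (here, crude brightness statistics) and feeds only those to the model.
hand_crafted = np.stack([images.mean(axis=(1, 2)),
                         images.std(axis=(1, 2))], axis=1)
classical_model = LogisticRegression().fit(hand_crafted, labels)

# Deep learning (in miniature): the raw pixels go straight into a neural
# network, which learns its own intermediate features in its hidden layers.
raw_pixels = images.reshape(len(images), -1)
deep_model = MLPClassifier(hidden_layer_sizes=(64, 32),
                           max_iter=300).fit(raw_pixels, labels)

new_image = rng.random((1, 8, 8))
print("classical:", classical_model.predict(
    np.array([[new_image.mean(), new_image.std()]])))
print("deep:", deep_model.predict(new_image.reshape(1, -1)))
```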

Thanks to Deep Learning, computers have now achieved image recognition that is faster and more accurate than a human’s. Imagine how significant this can be when detecting cancerous growths in radiology scans; it may even tip the scales between life and death.

It can be said that Deep Learning is closer to the Artificial Intelligence that scientists dreamed of decades ago, and it may be the very intelligence that Isaac Asimov and many other science fiction writers wrote about. Either way, it is the revolution that will shape our lives in the coming decades. It does require more computing power and more data, but it will also make it possible to solve problems that machines could not solve to date.

The key to the success of Deep Learning, in this second cycle, lies in the ability to process immense volumes of information, since Deep Learning algorithms perform better in proportion to the number of examples they can “learn” from.

Clearing a path forward

The more examples we feed in, the higher the algorithm’s accuracy. A Deep Learning method (an artificial neural network) can be compared to the human brain, which learns from its experiences. Think for a moment about a small infant who begins learning about the world: about objects, animals, food, and everything around him. Returning to the cats mentioned earlier, the more pictures of cats the infant’s mother shows him - different breeds, different coloured fur, lying in different positions, against different backgrounds and at different sizes - the more accurately he will identify them and distinguish them from other animals.

This is how the Deep Learning algorithm learns. The more examples of cats we feed in, the greater its accuracy, and it will be able to identify every type of cat, in every shape and scene, even when the cat does not appear in its classic, familiar form (for example, just an ear and a tail peeking out from behind a sofa).
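
As a rough illustration of where all those extra examples can come from, the sketch below uses NumPy to apply simple data augmentation: one synthetic image is turned into several variants, mirroring the way the infant benefits from seeing cats in different positions, framings and backgrounds.

```python
# Data augmentation sketch: each original "photo" becomes several variants
# (mirrored, rotated, shifted, cropped), so the model sees the same cat in
# many poses and framings. The 16x16 array is a stand-in for a real image.
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return simple variants of one image: flips, rotations and crops."""
    variants = [image]
    variants.append(np.fliplr(image))                 # mirror left/right
    variants.append(np.flipud(image))                 # mirror top/bottom
    variants.extend(np.rot90(image, k=k) for k in (1, 2, 3))  # rotations
    variants.append(image[2:, 2:])                    # crop: cat partly out of frame
    variants.append(np.roll(image, shift=3, axis=1))  # shift sideways
    return variants

original = np.random.default_rng(1).random((16, 16))
training_set = augment(original)
print(f"1 original image became {len(training_set)} training examples")
```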

However, unlike the human brain, the algorithm can learn in a more decentralised and parallel manner, processing significantly more examples than a human can. The advanced computation behind Deep Learning therefore requires powerful graphics processors (GPUs) capable of massive parallelism, together with the ability to store and access immense volumes of data quickly and at a financially feasible cost.
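
A hedged sketch of what this looks like in practice, assuming PyTorch (any comparable framework follows the same pattern): a DataLoader uses several worker processes to read and prepare batches in parallel while the GPU processes every example in each batch simultaneously, so if the storage layer cannot keep up, the expensive GPU simply sits idle.

```python
# Sketch of a parallel data pipeline feeding a GPU; synthetic data only.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in for a large labelled image dataset.
images = torch.randn(10_000, 3 * 32 * 32)
labels = torch.randint(0, 2, (10_000,))
dataset = TensorDataset(images, labels)

# num_workers > 0: batches are read and prepared by parallel worker processes.
loader = DataLoader(dataset, batch_size=256, shuffle=True,
                    num_workers=4, pin_memory=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
                      nn.Linear(256, 2)).to(device)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

if __name__ == "__main__":  # guard needed because of the worker processes
    for batch_images, batch_labels in loader:
        # Each batch moves from storage/CPU memory to the GPU, which then
        # processes all the examples in the batch in parallel.
        batch_images = batch_images.to(device)
        batch_labels = batch_labels.to(device)
        optimiser.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimiser.step()
```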

In recent years we have seen older storage technologies begin to give way to All-Flash technologies; however, these have not yet delivered the goods the world is waiting for. Although they offer improved performance, they are significantly more expensive than storage on rotating disks, and they have not enabled storage at the massive scales required, nor the ability to grow.

In order to fulfil the true potential of Machine Learning, the cost of storage will need to decrease significantly. Currently, All-Flash media is still 10-15 times more expensive than rotating disk technology, and according to all forecasts this ratio will hold even as storage costs in general decrease across the market.
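
The arithmetic is easy to sketch. The per-gigabyte prices below are purely illustrative assumptions, not quoted market prices; only the roughly 12x ratio mirrors the gap described above, yet at the multi-petabyte scale deep learning needs, the difference becomes hard to ignore.

```python
# Back-of-the-envelope cost comparison with hypothetical prices.
DISK_COST_PER_GB = 0.03          # hypothetical $/GB for rotating disk
FLASH_COST_PER_GB = 0.03 * 12    # hypothetical All-Flash at ~12x the disk price

capacity_gb = 5_000_000          # a 5 PB training-data repository

disk_total = capacity_gb * DISK_COST_PER_GB
flash_total = capacity_gb * FLASH_COST_PER_GB

print(f"Rotating disk: ${disk_total:,.0f}")    # Rotating disk: $150,000
print(f"All-Flash:     ${flash_total:,.0f}")   # All-Flash:     $1,800,000
print(f"Difference:    ${flash_total - disk_total:,.0f}")  # $1,650,000
```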

This is the time to recognise innovative hyper-storage technologies, which deliver higher performance than All-Flash at a price comparable to rotating disk media. These technologies are based on smart software that knows how to use relatively simple hardware resources to store immense volumes at high density and in a small footprint, without compromising on reliability or performance.

The fascination and curiosity surrounding the topic have not faded, and new forms of computing are waiting for us around the corner if we know how to utilise our data resources properly. The day is not far off when we can all travel in autonomous cars, have a program answer our emails, or receive intelligent, well-informed investment advice from bots. The basis for the success of Deep Learning is the storage infrastructure - the future is already here, and this time there is no reason to compromise on it.

Jacob Broido is Chief Product Officer at INFINIDAT
Image Credit: Shutterstock/Mopic