The concept of machine learning has been around for some time. Deep learning is an area of research that aims to take things further, moving closer to an artificial intelligence system by using neural networks in a way that imitates the human brain.
Sometimes also referred to as hierarchical learning or deep structured learning, it seeks to model data in order to solve problems like object and facial recognition, natural language processing and speech recognition.
It’s called deep learning because the data is processed through a number of layers, usually in a neural network, with the output from one layer forming the input for the next. This allows machines to learn largely unsupervised, as high-level features are derived from low-level ones to create a hierarchical representation of the data.
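The layered flow described above can be sketched in a few lines of NumPy. The layer sizes and weights here are arbitrary, untrained placeholders chosen for illustration, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # a simple non-linearity applied after each layer
    return np.maximum(0, z)

# random (untrained) weights for a three-layer network: 8 inputs -> 16 -> 8 -> 4 outputs
sizes = [8, 16, 8, 4]
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(8)   # an input vector, e.g. raw pixel values
activation = x
for W in weights:
    # the output of one layer forms the input for the next
    activation = relu(activation @ W)

print(activation.shape)
```

Each pass through the loop transforms the previous layer's output, which is how successively higher-level representations are built up.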
The basic idea behind deep learning is to mimic the type of activity that happens in the human brain. The brain uses layers of neurons in order to ‘think’ and deep learning systems echo this layered structure.
This idea of a ‘neural network’ isn’t new; it’s been around since the 1950s, but in the past it was impractical because of the amount of computing power required. Early versions were therefore only able to recognise simple patterns. In recent years the availability of more powerful systems has made it possible to model more layers, making the detection of more complex patterns practical. Graphics company Nvidia, for example, has developed systems using GPUs to boost deep learning performance.
How it works
Deep learning works by stacking multiple layers of virtual neurons, each of which can be trained to build on the work of the one before, much as the brain does. For example, the first layer would be taught to spot simple features like the edge of a shape or a portion of a sound. This information is then passed to the next layer, which looks for a more complex feature like the junction of two lines or a group of sounds. This progresses through multiple neuron layers, each looking for something a little more complex, until the system is able to accurately spot a particular object or word.
Of course this can be applied to any kind of pattern, so deep learning can be used to scan unstructured data in order to extract useful insights. The clever bit is that the computer doesn’t need to be programmed to carry out the recognition task. Instead it is given a learning algorithm and then exposed to large amounts of training data containing examples of the things it’s looking for. This allows it to work out for itself how to spot the objects or patterns required.
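The learning process amounts to repeatedly adjusting the network's weights to reduce its error on the training data. A minimal sketch of that idea, here training a tiny two-layer network by gradient descent on the classic XOR pattern (the dataset, layer size and learning rate are illustrative choices, not from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: a simple pattern that a single layer cannot capture
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 4 neurons, weights initialised at random
W1 = rng.standard_normal((2, 4))
W2 = rng.standard_normal((4, 1))

initial_loss = None
for step in range(5000):
    # forward pass: each layer's output feeds the next
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    loss = np.mean((out - y) ** 2)
    if initial_loss is None:
        initial_loss = loss
    # backward pass: nudge the weights to reduce the error
    grad_out = 2 * (out - y) * out * (1 - out) / len(X)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W2 -= 0.5 * grad_W2
    W1 -= 0.5 * grad_W1

print(float(loss) < float(initial_loss))
```

No rule for recognising XOR is ever written down; the network works it out for itself from the examples, which is the point the paragraph above is making.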
To work effectively, therefore, deep learning needs a large sample of training data in order to take in as many variations as possible on the pattern being sought – facial recognition, for example, needs to take account of different expressions, skin tones, lighting conditions and so on. Too small a sample leads to higher error rates. Even with large volumes of training information, the best deep learning systems currently still have an error rate of around 10 percent.
Deep learning has the potential to address the huge amounts of data now being collected from Internet of Things devices and elsewhere. One of the uses researchers are most excited about is that it could speed up medical and life sciences research by allowing large volumes of information to be processed effectively. A computer, for example, can scan thousands of X-ray and other scan images looking for anomalies much faster than a human could.
It also has practical uses in things like self-driving cars and driver assistance, allowing systems to recognise objects like landmarks, pedestrians, road signs and other vehicles. Other uses for deep learning include speech recognition and translation as well as natural language processing. Facebook has been recruiting researchers with the aim of allowing the social networking site to perform tasks like automatically tagging photographs using facial recognition. IBM’s Watson system is also being boosted by the use of some deep learning techniques.
Google has been busy experimenting with deep learning too. In 2012 it built a system that was able to browse YouTube videos and identify cats with an accuracy of almost 75 percent. The company bought DeepMind Technologies in 2014, and in 2016 demonstrated a system called AlphaGo, which used deep learning to defeat a professional human Go player. Google has also applied deep learning techniques to cut speech recognition error rates in the latest versions of the Android mobile operating system. In addition, DeepMind’s technology has been used in Google’s data centres and is said to have improved their energy efficiency by 15 percent through better management of power and cooling.
For businesses deep learning could be applied to customer relationship management helping to analyse the effectiveness of marketing campaigns and predicting customer activity. It also has potential to predict a customer’s ability to repay a loan, or to build accurate filtering systems to identify spam or phishing emails.
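The spam filtering idea mentioned above can be illustrated with a toy classifier. The emails, keyword list and single-layer model here are all invented for illustration (a production system would use far richer features and deeper models), but the learning recipe is the same one a deep network uses:

```python
import numpy as np

# toy "emails" and labels (1 = spam) -- invented examples for illustration
emails = [
    ("win a free prize now", 1),
    ("claim your free money", 1),
    ("meeting agenda for monday", 0),
    ("quarterly report attached", 0),
]
keywords = ["free", "win", "prize", "money", "meeting", "report"]

def featurise(text):
    # bag-of-keywords feature vector
    return np.array([text.count(k) for k in keywords], dtype=float)

X = np.array([featurise(t) for t, _ in emails])
y = np.array([label for _, label in emails], dtype=float)

# a single logistic unit trained by gradient descent
w = np.zeros(len(keywords))
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)

def predict(text):
    # True if the learned model scores the text as more likely spam than not
    return 1.0 / (1.0 + np.exp(-(featurise(text) @ w))) > 0.5

print(predict("free prize inside"))
```

The classifier is never told which words signal spam; it infers that from the labelled examples, which is what makes the approach attractive for filtering tasks.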
Although it’s not likely that we’ll see machines that can properly ‘think’ in the human sense for some years yet, increases in computing power mean that the theory of neural networks and deep learning can now begin to be applied in practical ways.
This has created a surge of interest in the technology. Research company CB Insights says that equity funding of AI startup companies reached a new high in 2016.
There’s no doubt that deep learning will have a major influence on many areas of everyday life, and the more it’s used the more effective it will get. As technologies like quantum computing become more available, having the processing power to make the most of neural networks won’t be an issue. For now deep learning is still in its relatively early stages, but if you want to find out more there’s an active deep learning community on Google+ and an official website at deeplearning.net which lists research groups and more.