So apparently computers can have dreams. No, they don't count little robotic sheep; instead they have highly vivid and very strange dreams featuring snails, dogs and bananas...
The computers Google normally uses for its high-powered image recognition technology were let loose, free to identify the subtlest and smallest of features in images, and the result was a set of very skewed pictures.
Google, along with a multitude of other companies, has been working to simulate how the human brain thinks. The leading research at the moment is on artificial neural networks, which rely on extremely complex mathematical algorithms, and it is these networks that have the image recognition computers producing such strange images.
Blurring the lines between humans and machines is very much a hot topic at the moment, with Hollywood taking the concept to heart in releases such as Ex Machina, I, Robot and the Terminator series. A bit closer to home, the UK TV series Humans on Channel 4 is enjoying rave reviews for its portrayal of our reliance on technology. The series is set in a parallel present where the must-have gadget is an AI robot called a 'Synth': a highly developed, artificially intelligent servant that caters for the characters' every need.
With the lines between fiction and reality blurring further year after year, it really does make you wonder how far off we are from having our own AI robot servants. At least now, with the help of Google's image recognition AI, we may know what they 'dream' of.
The way the Google image recognition computers arrive at these 'dreams' is by passing an image through multiple layers of artificial neurons, gradually building up an interpretation of what the image may be. By showing the network millions of examples, and adjusting it according to whether its answers are right or wrong, the computer is capable of learning.
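That learning-by-adjustment idea can be sketched with a single artificial neuron. The snippet below is a minimal toy illustration in Python with NumPy, not Google's actual system: a neuron is shown four hypothetical two-pixel 'images' and is nudged only when its answer is wrong, until it gets them all right.

```python
import numpy as np

# Four tiny two-pixel 'images' (made-up data): label 1 when the first
# pixel is brighter than the second, 0 otherwise.
X = np.array([[1.0, 0.0], [0.8, 0.2], [0.2, 0.8], [0.0, 1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

# One artificial neuron; it is adjusted only when its answer is wrong.
w = np.zeros(2)
b = 0.0
for _ in range(10):                          # repeated passes over the examples
    for xi, yi in zip(X, y):
        pred = 1.0 if xi @ w + b > 0 else 0.0
        error = yi - pred                    # 0 when correct, +/-1 when wrong
        w += error * xi                      # nudge the weights towards the answer
        b += error

predictions = np.array([1.0 if xi @ w + b > 0 else 0.0 for xi in X])
```

After a couple of passes the neuron classifies all four examples correctly; real networks apply the same correct-or-adjust principle across millions of neurons and examples.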
The layers work as quite a simple system in which each layer builds up more and more information about the image. The first layer, for example, may pick out the very basic corners, outlines and edges. This information is then passed on to the next layer, which identifies what those edges or outlines belong to, with further layers combining everything that has been recognised into an overall interpretation of the image.
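A minimal sketch of that layer-by-layer build-up, using hypothetical hand-set filters rather than learnt ones: the first layer detects vertical and horizontal edges in a toy image of a bright square, and a second layer combines those edge maps into a higher-level 'corner' response.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image ('valid' mode), producing a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Layer 1 filters: hand-set edge detectors (real networks learn these).
vertical_edges = np.array([[1.0, -1.0], [1.0, -1.0]])
horizontal_edges = np.array([[1.0, 1.0], [-1.0, -1.0]])

# A toy 5x5 'image': a bright square on a dark background.
image = np.zeros((5, 5))
image[1:4, 1:4] = 1.0

# Layer 1 output: where the image contains each kind of edge.
layer1_v = np.maximum(convolve2d(image, vertical_edges), 0)    # keep positive responses
layer1_h = np.maximum(convolve2d(image, horizontal_edges), 0)

# Layer 2: responds where both edge types coincide, i.e. at a corner.
layer2 = layer1_v * layer1_h
```

Here `layer2` peaks at the square's bottom-right corner: a feature neither first-layer filter could see on its own, built purely from their outputs.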
Normally this process will produce what we would recognise as a normal picture, be it a house, a dog or a landscape. But Google reversed the process: the engineers effectively told the computers what to see at the final layers, tasked them with recognising features of that target in the image, and had them emphasise whatever features they found. The modified image is then fed back into the network, and the process is repeated again and again until the feedback loop has altered the picture beyond recognition.
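The feedback loop itself can be sketched in a few lines. In this toy stand-in for Google's method, a simple horizontal-contrast filter plays the role of 'what a layer recognises', and its response is repeatedly amplified and fed straight back in (the real system amplifies features by back-propagating through a deep network).

```python
import numpy as np

def edge_response(image):
    """A stand-in for what one layer 'recognises': horizontal contrast."""
    resp = np.zeros_like(image)
    resp[:, :-1] = image[:, 1:] - image[:, :-1]
    return resp

def dream(image, steps=20, strength=0.1):
    """The feedback loop: exaggerate whatever the layer responds to,
    then feed the modified image back in, again and again."""
    img = image.copy()
    for _ in range(steps):
        img += strength * edge_response(img)  # emphasise the detected features
        img = np.clip(img, 0.0, 1.0)          # keep pixel values in range
    return img

# A toy 10x10 greyscale ramp: brightness increasing left to right.
image = np.tile(np.linspace(0.0, 1.0, 10), (10, 1))
dreamed = dream(image)
```

Each pass pushes the image further towards whatever the filter responds to; with a deep network in place of `edge_response`, the same loop is what warps a photo beyond recognition.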
Some of the most impressive images were created by running the software on random noise, with the clever robots producing images that are wholly of their own imagination.
Here we see the computer has generated an image of bananas from white noise.
One of the more trippy images was created when the computer was asked to identify buildings on an otherwise bland and featureless image:
On the company's research blog, Google engineers have tried to explain these pictures, describing, for instance, how they created the banana image from random noise:
“One way to visualise what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation,” they add. “Say you want to know what sort of image would result in ‘banana’. Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana.”
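That 'start with noise, tweak towards banana' procedure amounts to hill-climbing on a score. Below is a toy sketch in which a hypothetical linear 'banana scorer' stands in for the real network: for a linear model the gradient of the score with respect to the image is just the weight vector, so each tweak nudges every pixel along it.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear 'classifier' over an 8-pixel image: its weights stand in
# for what the network thinks 'banana' looks like (hypothetical values).
banana_weights = rng.normal(size=8)

def banana_score(image):
    return float(banana_weights @ image)

# Start with an image full of random noise, as the engineers describe.
image = rng.normal(scale=0.1, size=8)
start = banana_score(image)

# Gradually tweak the image towards what the model considers a banana:
# gradient ascent on the score, one small step at a time.
learning_rate = 0.05
for _ in range(100):
    image += learning_rate * banana_weights
```

After the loop the image scores far higher as a 'banana' than the noise it started from; with a deep network, the same climb is what turns static into bananas.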
Aside from the novelty factor of the trippy images, the software has made it into some important products, including Google Photos.