In the video below, Evan Longoria, a baseball player with the Tampa Bay Rays, is seen to make a spectacular grab to save a reporter from certain death – or at least serious injury. Granted, Evan may have had a little help from video editing, but at the professional level at least, comparable performances no doubt occur every time an umpire gives the command to play ball.
The computations a man-made machine would need to perform to detect and track an incoming threat, like an errant ball, and simultaneously make the motor adjustments to intercept it are certainly not trivial. Yet, for a human brain, the computations underlying such virtuosity pale in comparison to the massive background processing that runs alongside them, creating the awareness needed to perform the task in the first place – or to choose a different course of action on, say, the tenth run of the scenario.
The recently proposed Brain Activity Map (BAM) project suggests that it may be possible to record not only all the synaptic connections in a brain, but also their spike activity. Ignoring for the moment the difficulty of actually doing such a thing, if we were able to capture Evan’s BAM for two seconds from the crack of the bat to the catch, how would we begin to identify computations? In short, we couldn’t.
The problem is that each spike is in turn the result of billions if not trillions of interactions within a neuron, and we haven’t a clue how many of them are necessary to capture the essential behaviour. Rather than the sequential program of a computer, the computations in Evan’s brain may be more like the actions of a pack of ants carrying a particle of food home to the nest. Closer inspection would likely show that much of the time the particle is actually traveling away from the colony as the ants quibble over which route to take, and perhaps even over whether they are carrying food or a poison berry.
In theory, it should be possible to capture some of the essence of simpler ant behaviour with a computer, but if each individual is represented by just a few equations, you will not see any rich behaviour from your model – such as, for example, the spontaneous organisation of ants into bridges across a water obstacle.
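The point can be sketched in a few lines of Python, under assumptions of my own invention: a single ant on a one-dimensional track, nest at position zero, with a fixed probability of stepping homeward on each move. Even this crude model shows net homeward drift despite a large fraction of individual steps moving away – and it equally shows why a handful of equations per ant yields nothing as rich as bridge-building.

```python
import random

def simulate_ant(steps=1000, p_toward=0.6, start=50, seed=None):
    """One hypothetical ant on a 1-D track, nest at position 0.

    With probability p_toward the ant steps toward the nest,
    otherwise it steps away -- a crude stand-in for 'quibbling' ants.
    Returns the final position and the fraction of steps taken
    away from the nest.
    """
    rng = random.Random(seed)
    pos = start
    away_moves = 0
    for _ in range(steps):
        if rng.random() < p_toward:
            pos = max(pos - 1, 0)   # step toward the nest (stop at it)
        else:
            pos += 1                # step away from the nest
            away_moves += 1
    return pos, away_moves / steps

pos, frac_away = simulate_ant(seed=1)
```

With `p_toward=0.6`, the ant reliably ends up near the nest, yet roughly 40 per cent of its individual steps move away from the colony – progress at the group scale, confusion at the individual scale. Nothing resembling collective bridge-building can emerge from a rule set this impoverished, which is the limitation at issue.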
The same limitation exists for the recently funded European Human Brain Project, which seeks to compute the behaviour of neurons by dividing them into tiny compartments and describing their electrical behaviour with equations. The problem is that the vast majority of the computations taking place inside a neuron do not appear to be about generating spikes; they seem to be about building and rebuilding complex bridges to other neurons.
To take another equally confounding system, consider a gaggle of geese milling about a lake. At the beginning of the migration season the geese will likely raise a fuss each morning, rising in flight for a few loops around the lake in various meaningful formations. By the end of the day they may have lost a few tribal elders and perhaps gained a few wayward stragglers. While you can argue that the geese are computing the optimal time to head south, and which among them might have a clue as to which way to go, the computing geese don’t outwardly appear much different from randomly moving geese.
In the same way, neurons in the brain don’t appear to be performing any computation in a way that is familiar to us. Neurons communicating with spikes are in many ways similar to geese exchanging honks. You can poke at either and get a response, but left alone, neither is very predictable.
A computation that is strictly constrained by an algorithm may still incorporate some level of unpredictability. Random number generators, for example, are at the heart of many computational methods. The Monte Carlo simulations first programmed on the ENIAC by von Neumann, and more recently genetic algorithms, both rely on quasi-random numbers drawn from distributions or generated by recursive methods. Computations can also incorporate inputs from other machines or from the environment.
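The Monte Carlo idea is simple enough to sketch in a few lines of Python – an illustration only, not the ENIAC program, which simulated neutron diffusion. Here pseudo-random points stand in for samples, and the fraction landing inside a quarter circle estimates pi:

```python
import random

def estimate_pi(samples=100_000, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in
    the unit square that fall inside the quarter circle is pi/4."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

pi_hat = estimate_pi()
```

The answer is never exact, but it converges as the sample count grows – a fully algorithmic computation whose individual steps are nonetheless unpredictable, which is precisely the mixture at issue here.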
It is here, in the use of information from the environment, that the difference between a computation and a random process might begin to be understood – but only by degree. In addition to simply using sensory input from the environment, the class of machines we call brains are made by sensory input. The majority of a brain’s activity consists of sending and responding to the open-ended probes it puts to the environment (the senses), as well as to itself. A complete description of a brain therefore requires a description of its environment.
The most widely accepted mathematical description of computability is Turing’s suggestion that, if a particular computation eventually finishes – if it doesn’t get stuck in an infinite loop – then it is computable. The thing to realise is that a brain will never stop until it is destroyed, and furthermore it is impossible to give it just one computation independent of the many on-going computations which are essential for its survival.
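Turing's criterion can be illustrated with a toy "step budget", a hypothetical construction of my own: computations written as Python generators, run for a bounded number of steps. A budget can report that a computation has not halted yet; Turing showed that no general procedure can decide whether it never will.

```python
def halts_within(gen, budget):
    """Run a step-wise computation (a generator) for at most
    `budget` steps. True if it finished, False if the budget ran
    out -- which is NOT a proof that it runs forever."""
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return True   # the computation halted
    return False          # budget exhausted; halting undecided

def countdown(n):
    """A computation that plainly halts after n steps."""
    while n > 0:
        yield
        n -= 1

def loop_forever():
    """A computation that plainly never halts."""
    while True:
        yield

halts_within(countdown(10), budget=100)   # finishes within budget
halts_within(loop_forever(), budget=100)  # budget runs out
```

The budget answers "halted within N steps", never "will halt" – and a brain, as noted above, never stops at all until it is destroyed, so the criterion does not even apply cleanly.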
In this light the computability question is a silly one. Each of the billions of cells in the brain is performing computations simultaneously, and each is knitted into a machine that rewrites its own architecture on timescales far shorter than any we could hope to observe. If the machine itself is changing, the only way to define its computations would be to know how it changes.
In the foul ball scenario above, by a tenth trial, Evan’s hand might have become so swollen that he might simply decide to push the reporter along with himself out of the way rather than attempt yet another stoic grab. A machine could also do the same, but in order to set priority for itself, rather than just obey imperatives programmed in by humans, the machine must first learn to feel.
So what are we really asking, when we ask if brains are computable?
Many machines can obviously compute some things our brains cannot, and similarly brains can compute things that the machines we have today cannot. The question we are really trying to ask is: Can we build a computer that can compute everything a brain can? In other words, can we build a computer that can compute consciousness, or more generally, the ability to feel?
In a recent public statement during the AAAS meeting sponsored by Science magazine, Miguel Nicolelis said that the brain is not computable. His comments were meant as an antidote to the on-going claims from the Kurzweil camp that we will soon be able to make machines that are conscious. Nicolelis has just published a paper describing how he gave rats the ability to feel infrared light, and another describing a brain-to-brain interface he built for them, so a lot of folks have been paying attention to what he says.
The only way to settle the Nicolelis-Kurzweil debate would be to actually build a conscious machine, and then figure out a way to actually prove it was conscious. It is an easy matter to say that we agree on what it means to be conscious, and what it means to feel, but a little tougher to know when you can have one without the other.
For example, people can be born with peripheral nervous system disorders that leave them unable to feel most kinds of physical pain, including heat, cold, touch or pressure. Usually this condition is dangerous and disfiguring, since normal self-precautions are not taken. Even more unnerving are rare cases of so-called Urbach-Wiethe disease, in which the parts of the brain known as the amygdalae have atrophied, with the startling result that the person has no fear. We might then ask: would it be possible for an individual to have a combination of mutations that leaves them unable to feel anything, physical or emotional, yet still be quite conscious of themselves and the world around them?
While there is no single answer to this question, a recent study of people with this disease suggests that they may hyper-compensate in some ways for their deficit, actually experiencing some things in greater detail. For example, when they inhaled 35 per cent carbon dioxide under experimental conditions, they felt immediate deep panic and fear of suffocation, whereas control subjects showed nothing like the same response, at least not until further into the experiment. Apparently mechanisms in the brain besides those in the amygdala detect the acidification of the blood caused by CO2 and create the associated feelings in response.
It is a bit of a stretch to imagine building machines that feel fear when their input voltage dips by a few per cent during peak hours, or a burning sensation when their circuits start to overheat. But if it eventually becomes possible for machines to compute consciousness, so to speak, additional considerations would be in order. If we empower computing machines with consciousness so that they might make the world a better place for us, we will also have to make sure that we are not creating a hell for them in the process. This is not a misplaced concern: we need look no further than the daily news for accounts of untold physical misery levied upon those who did nothing to cause it.
From the casual observation that over 10 per cent of all photographs in the entire history of photography were taken in the last year, we might further suggest that the explosion of information available about the present moment tends to diminish the relevance of the past. The divergence of possibility created by this explosion makes the future less knowable, yet more immediate. It might then be fair to say that your comparative fitness in the environment is increasingly evaluated according to your present power – what you can do right now as opposed to what you might eventually learn or acquire. The drive to incorporate machine aid will therefore be a strong one, not to get ahead, but rather just to keep up. In building machines that are more brain-like, and brains that are more machine-like, we may soon have a better answer to our question of whether a conscious machine can be built.