“I was clutching at the face of a rock but it would not hold. Gravel gave way. I grasped for a shrub, but it pulled loose, and in cold terror I fell into the abyss.”
Thus spoke amateur rock climber Heinz Pagels, the real-life inspiration for mathematician Ian Malcolm in Michael Crichton’s Jurassic Park novels, about a recurring nightmare he had been having. If continuity of consciousness could one day be guaranteed through software or hardware backups of the mind, risky activities like climbing might become a lot more popular. Mind uploads and copies, conscious machine intelligences and connectomes are the kinds of things that keep a man like Ray Kurzweil up at night.
With the publication of his 2005 book, “The Singularity Is Near,” Kurzweil single-handedly brought the concept of a technological singularity into the mainstream. Presumably, when the singularity is reached, machine intelligence will have attained human-level equivalence and therefore be capable of consciousness.
The future fate of humanity would at that point become unpredictable, then unknowable. Kurzweil’s estimate that the singularity will begin in less than 30 years has raised eyebrows – and controversy. With his newly released book “How to Create a Mind,” Kurzweil seeks to better ground his earlier work, and in the process takes stock of the current state of the art in brain mapping and machine intelligence, and of how we came to be where we are today.
Among its many gems, Kurzweil’s new release sets out ways in which the human brain might overcome its limitations by either merging with, or being uploaded to, the hardware of our machines. The most palatable option would be to expand the natural architecture of the brain, augmenting it by sharing resources with the cloud of the future through an appropriate interface. The more daring option would be for future minds to abandon flesh-and-blood hardware, migrating instead to neuromorphic hardware or to software running on more traditional computing elements.
Consider, for example, a box containing two hard drives, each with Windows 8 installed. How much more complex is this box than one containing just a single Windows 8 drive? If complexity is defined as thermodynamic depth – essentially a measure of how hard it is to put something together from elementary pieces – then the answer is: not much. On the other hand, a box containing both Jayne and Joan Boyd, the original Doublemint Twins of Wrigley’s fame, would have roughly double the thermodynamic depth of a box containing just Joan.
How can one be so confident in such a measure? As any mother will tell you, putting something together – a human from elementary pieces, even starting with an egg – is pretty hard. While each of the twins began life with a similar set of genetic instructions and form, their unique experiences in their own varied environments since birth make up the bulk of the information that constitutes their being. The cells in their brains can therefore be understood as devices for directly converting the flow of sensory information into synaptic and intracellular structure.
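The duplication intuition can be sketched with compression, treating compressed size as a crude proxy for description length – a simplifying assumption of this sketch, not a measure Kurzweil himself uses. A duplicate of existing data adds almost nothing to the total, while genuinely independent data roughly doubles it:

```python
import os
import zlib

# 10 kB of random bytes stands in for one "hard drive"; it must fit
# inside zlib's 32 kB match window so the duplicate can be recognised.
drive = os.urandom(10_000)
other_drive = os.urandom(10_000)

one_copy = len(zlib.compress(drive))
two_copies = len(zlib.compress(drive + drive))        # duplicated data
two_drives = len(zlib.compress(drive + other_drive))  # independent data

# The duplicate compresses to back-references into the first copy,
# so two_copies is barely larger than one_copy, while two independent
# drives cost roughly twice as much to describe.
print(one_copy, two_copies, two_drives)
```

Like the box of Windows drives, the duplicated data is "shallow": it can be regenerated cheaply from one original, whereas the independent drive – like a twin’s lifetime of distinct experience – must be described in full.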
The notion of brains being hard to copy flies in the face of ideas for recreating a mind through the readout of a detailed connectome. One difficulty lies in the fact that during the finite time required to read and recreate any part of a living connectome, the connectome itself has changed by a significant amount. As fidelity is increased, so must the readout time increase, and with it the divergence of the new connectome from the original.
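The trade-off can be put in a toy model. Every number below is purely illustrative – the synapse count, scan speed, and remodelling rate are assumptions for the sake of the sketch, not measured values:

```python
import math

def fraction_stale(synapses, scan_rate, change_rate):
    """Expected fraction of synapses that change during one full scan.

    Toy model: synapses are read serially at `scan_rate` per second,
    and each synapse independently remodels at `change_rate` per second
    (Poisson assumption), so staleness is 1 - exp(-rate * scan_time).
    """
    scan_time = synapses / scan_rate
    return 1 - math.exp(-change_rate * scan_time)

# Illustrative: 1e14 synapses, read at 1e9 per second, each synapse
# remodelling on a timescale of roughly a day (~1e-5 per second).
print(fraction_stale(1e14, 1e9, 1e-5))
```

Under these made-up numbers, well over half the connectome has changed by the time the scan finishes; reading more synapses (higher fidelity) only lengthens the scan and worsens the staleness.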
While traditionally falling under the rubric of philosophy, these observations lead to the alarming proposition that technology could in short order subsume huge chunks of the field, and with it perhaps much of ethics. Both face being reduced to trailing plaintively behind an ever-widening gap. Indeed, that is the major point of the singularity and why it warrants ample inspection in any discussion of technology. Ethics can only inform a technology that is understood.
Copies and robots
In his discussion of copies and robots, Kurzweil maintains that if a robot looked and acted sufficiently like a human, he would accept it on faith as a fellow conscious entity. For all intents and purposes then, it would be. If similar robots were then mass produced on an assembly line and downloaded with appropriate software, well, according to Pagels, they could not be accepted as such.
If the logic of thermodynamic depth is extended down to individual atoms and their governing principles, it might be said that consciousness requires continuity to be maintained on all scales. That would include everything from the motions of individual molecules, up to cells, and to conscious collections of cells. Thus any attempts to impose consciousness with a top-down approach would only result in a pretender, zombie, or impostor. Accordingly, if the universe is viewed as a giant simulation, then only the universe itself can create consciousness. Entities arising within that universe cannot create it except in accordance with the founding laws of the simulation.
Any attempts to create consciousness in hardware, even in detailed neuromorphic chips, would then be doomed to failure. In these chips, artificial neurons are powered top-down from without. In other words, their “spikes,” which are their sole means of output and communication with their neighbours, reflect nothing of themselves, only their dendritic inputs. In real neurons, spikes are the energetic end result of every activity inside the cell. In addition to the dendritic integration of neighbouring input, they are the result of the activity of the thousands of mitochondria in the cell competing for every molecule of oxygen and glucose metabolite in their domain.
In a real brain, collections of cells continue to create novel thoughts in the absence of sensory input. A similar connectome composed of neuromorphic neurons might very well do the same. After a short while, however, the activities of these two networks will have diverged so far as to bear little relation to each other – much like the proverbial butterfly flapping its wings and changing the weather. Without rich internal activity that can support large-scale growth and remodelling, neuromorphic collections are reduced to the less vital activity dictated by the mundane stable points of their shallow dynamics.
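The butterfly comparison can be made concrete with the smallest standard example of chaos, the logistic map. The parameter r = 3.9 and the one-part-in-ten-billion perturbation are illustrative choices:

```python
# Two copies of a chaotic system, started a hair apart, soon bear
# little relation to each other. The logistic map x -> r*x*(1-x)
# at r = 3.9 is a minimal example of such sensitive dependence.
r = 3.9
x, y = 0.4, 0.4 + 1e-10  # identical but for one part in ten billion

diffs = []
for step in range(100):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    diffs.append(abs(x - y))

# The gap starts microscopic and grows until the two trajectories
# are effectively unrelated.
print(diffs[0], max(diffs))
```

The same logic applies to a biological network and its neuromorphic copy: a perfect snapshot at one instant says ever less about either system as their dynamics carry them apart.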
What will become of brains and machines when the singularity is achieved? Kurzweil has left this door open and others have jumped in with wild speculation. One extreme idea is that minds will transition to a technology operating on scales smaller than the nanoscale we have today. At this “femtoscale,” communications and computations using elementary particles at highly compressed scales would enable thought to occur much faster than before. This could be a risky business, as it is well established that when matter is compressed beyond a certain limit, it is transformed by its own gravity into a black hole.
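Where that limit sits can be checked with a back-of-envelope calculation using the Schwarzschild radius, the size below which a given mass collapses into a black hole. The 1.4 kg “brain’s worth of mass” is an illustrative assumption:

```python
# Schwarzschild radius r_s = 2*G*M/c^2: compress mass M below this
# radius and it collapses into a black hole.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
mass = 1.4      # kg, roughly a human brain (illustrative)

r_s = 2 * G * mass / c**2
print(f"{r_s:.2e} m")  # on the order of 1e-27 m
```

For so small a mass, the collapse threshold lies about twelve orders of magnitude below a femtometre, so the gravitational danger would arise only for vastly denser aggregations of computing matter, not for a single brain’s worth.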
There is no evidence that future minds would have the power to organise and control a survivable collapse to a black hole. The idea is a topic of much discussion among singularitarians because it suggests one explanation of the Fermi Paradox – the question being, if intelligence is so likely to arise in the universe, where is everybody? If the idea is preposterous, then it should be easy to disprove. Otherwise, disproving it must be among the formative pursuits in the development of the technological singularity.
Once again Kurzweil has rallied the troops, not by declaring an enemy or instilling fear, but by preaching cautious optimism. The achievements of artificial intelligence, in which Kurzweil himself has played a significant part – including speech-to-text and the backbone of new technologies like Siri – are impressive, but they do not by themselves lead directly to consciousness or the technological singularity. We must be the singularity, using and merging with our machines along the way, maintaining the thermodynamically deep links to the elementary building blocks of nature as they continue to quicken.