The evolution of life and the future of computing

The evolution of computers, and the evolution of life, share the common constraint that in going forward beyond a certain level of complexity, the advantage goes to that which can build on what is already in hand rather than redesigning from scratch. Life’s crowning achievement, the human brain, seeks to mould for itself the power and directness of the computing machine, while endowing the machine with its own economy of thought and movement. To predict the form their inevitable convergence might eventually take, we can look back now with greater understanding to the early drivers which shaped life, and reinvigorate these ideas to guide our construction.

"Life is, in effect, a side reaction of an energy-harnessing reaction. It requires vast amounts of energy to go on."

Nick Lane, author of a new paper in the journal Cell, was speaking here about critical processes in the origin of life, though his words would also be an apt description of computing in general. His paper puts forth bold new ideas for how proto-life forms originated in deep-sea hydrothermal vents by harnessing energy gradients. The strategies employed by life offer some insights into how we might build the ultimate processor of the future.

Many of us have read claims regarding the information storage capacity and processing rate of the human brain, and have wondered – how do they measure that? Well, the fact is they don’t. With our limited understanding of how living systems like the brain work, it is folly at this point to attempt any direct comparison with the operation of computing machines. Empirical guesswork is often attempted, but in the end it is little more than hand waving.

Google, while clearly not a brain of any kind, certainly processes a lot of information. We might ask, how well does it actually perform? It is easy enough to verify that a typical search query takes less than 0.2 seconds. Each server that touches the operation spends perhaps a few thousandths of a second on it. Google’s engineers have estimated the total work involved with indexing and retrieval amounts to about 0.0003 kWh of energy per search.

They did not indicate how they estimated this number, but it is a fascinating result, despite their unfortunate choice of unit prefix. Suppose we take the liberty of defining this quantity, the energy per search, as a googlewatt. Such a measure would be a convenient way to characterise a computing ecosystem, in much the same way that the Reynolds number qualitatively characterises flow conditions across aerodynamic systems.
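A quick sketch of the unit conversions involved (the variable names are mine; only the 0.0003 kWh figure comes from Google):

```python
# Unit conversions for the figure quoted above; the variable names
# are illustrative, only the 0.0003 kWh per search comes from Google.
KWH_PER_SEARCH = 0.0003

joules_per_search = KWH_PER_SEARCH * 3.6e6  # 1 kWh = 3.6e6 J
wh_per_search = KWH_PER_SEARCH * 1e3        # 1 kWh = 1000 Wh

print(joules_per_search)  # roughly 1 kJ per search
print(wh_per_search)      # roughly 0.3 Wh per search
```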

One might then ask: if the size of a completely indexed web crawl is constantly expanding while the energy per elementary search operation contracts with improvements in processor efficiency, how might the googlewatt scale as the ecosystem continues to evolve? In other words, can we hope to continue to query a rapidly expanding database at 0.3 Wh per search, or in monetary terms, at £0.0003 per search?

To put energy-per-search in more familiar terms, Google notes that the average adult requires 8000 kilojoules (kJ) a day from food, and concludes that a search is equivalent to the energy a person would burn in about 10 seconds. No doubt brains perform search very differently from Google, but efforts to explore energy use by brains have proved confounding.
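The equivalence is easy to check; a sketch of the arithmetic, using only the figures quoted above:

```python
# Sanity check of the "10 seconds of food energy" comparison,
# using only the figures quoted in the text.
DAILY_INTAKE_KJ = 8000.0
SECONDS_PER_DAY = 86400.0

metabolic_rate_kj_per_s = DAILY_INTAKE_KJ / SECONDS_PER_DAY  # ~0.093 kJ/s, i.e. ~93 W
energy_in_10_s_kwh = metabolic_rate_kj_per_s * 10.0 / 3600.0  # kJ -> kWh

print(round(energy_in_10_s_kwh, 5))  # ~0.00026 kWh, close to the 0.0003 kWh per search
```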

PET scanning, for example, is not a very reliable tool for localising function to specific parts of the brain, and its temporal resolution is pitiful. It is, however, not too bad at measuring global glucose utilisation, from which energy use can be inferred. Subjects having their brains imaged by PET scanner while performing a memory retrieval task frequently appear to use less energy than when resting. So if we accept the bigger picture in some of these studies, we often see the counterintuitive result that the googlewatt for a brain, at least transiently and locally, can sometimes take on a negative value. This is not totally unexpected, since inhibition balances excitation in neural circuits at nearly every turn. The situation may be likened to a teacher silencing the background din of an unruly class and demanding attention at the start of a lesson.

Each of Watson’s 90 Power 750 server nodes has four processor chips with eight cores apiece, making for a total of 32 cores per node. Each 567 mm² chip, fabricated with a 45 nm process, has 1.2 billion transistors. The Power 750 server was based on the earlier Power 575, but was designed to be more energy efficient and to run without water cooling. As it is air-cooled, the 750 cannot consume more than 1600 watts of power and is therefore limited to 3.3 GHz. The 575 could handle 5400 watts and run a bit higher, at 4.7 GHz.

Just in case you might be wondering where these processor speeds come from in the first place, it may be comforting to know that they are probably not just pulled out of a hat. They appear to be part of a sequence known as the E6 preferred number series, which IBM must have a special fondness for, and which is, of course, eminently practical.
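For the curious, the E6 decade (standardised in IEC 60063) contains six values per decade, and both clock speeds quoted above land on it; a quick check, purely illustrative:

```python
# The E6 preferred-number decade, as standardised in IEC 60063:
# six values per decade, roughly a geometric progression of 10**(1/6).
E6 = [1.0, 1.5, 2.2, 3.3, 4.7, 6.8]

# Both server clock speeds quoted above land on E6 values.
clock_speeds_ghz = [3.3, 4.7]
print(all(speed in E6 for speed in clock_speeds_ghz))  # True
```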

The point of these details is that to marginally outperform a human at a memory game, Watson churns out about 60 MFLOPS per watt while consuming a whopping 140 kilowatts. The human may be running a 50-watt system, with only about 10 or so watts being used by the brain. So if life is a side effect of energy-harnessing reactions, computing in silicon looks more like a side effect of an energy-dissipating system!
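Taken at face value, those figures imply a system-level throughput; a rough sketch of the arithmetic (the derived numbers are mine, not from the source):

```python
# Back-of-envelope numbers implied by the figures quoted above;
# the derived values are illustrative, not from the source.
MFLOPS_PER_WATT = 60.0
WATSON_WATTS = 140e3   # 140 kW total draw
BRAIN_WATTS = 10.0     # the ~10 W attributed to the brain

implied_tflops = MFLOPS_PER_WATT * WATSON_WATTS / 1e6  # MFLOPS -> TFLOPS
power_ratio = WATSON_WATTS / BRAIN_WATTS

print(implied_tflops)  # ~8.4 TFLOPS for the whole system
print(power_ratio)     # Watson draws ~14,000x the brain's power budget
```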

In fact, the cooling system for IBM’s new 3-petaflop supercomputer, SuperMUC, uses waste heat from the machine to warm the Leibniz Supercomputing Centre where it is housed. It seems our brains still have a few tricks to teach their silicon brethren. How then might we apply some of these tricks to computing?

Philip Ball published articles last month in Nature and Scientific American in which he suggests that future supercomputers might not be powered by electrical currents borne along metal wires, but instead driven electrochemically by ions in the coolant flow. The idea of supplying power along with the coolant is not entirely new – jet fuel has long been used to cool aircraft electronics.

The problem with wires or circuit board traces is that routing dedicated power and ground traces to each transistor is, beyond a certain scale, a poor use of volume. Designers mitigate these problems in multi-layer boards by employing entire planes for ground, and for any of several supply voltages that might be needed. In large computers, 2D boards are stacked along with their cooling apparatus into 3D forms, but critically, the opportunity for efficient and local 3D interconnectivity is sacrificed.

If power were accessible anywhere in the volume, the efficient form of a 2D folded surface within a 3D volume might more readily be brought to bear. Elements in frequent communication but widely separated on a 2D surface can be closely apposed when folded. One may argue that high-speed optical interconnects in computers reduce transmission delays and obviate the need for a more complex geometry. Typically, however, the system of interconnects and opto-electronic hardware required to make them work takes up an exorbitant volume.
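A toy model makes the folding argument concrete; here a strip is folded in half so that its two ends, maximally separated when flat, become near neighbours (all numbers are illustrative assumptions):

```python
import math

# Toy model: two elements at opposite ends of a strip of length 10.
# Laid flat, their separation is the full length of the strip.
STRIP_LEN = 10.0
LAYER_GAP = 0.1  # assumed spacing between the folded layers

def folded_position(x):
    """Map a position along the strip to 2D coordinates after
    folding the strip in half about its midpoint."""
    if x <= STRIP_LEN / 2:
        return (x, 0.0)
    return (STRIP_LEN - x, LAYER_GAP)  # reflected back onto the second layer

flat_distance = 10.0 - 0.0
ax, ay = folded_position(0.0)
bx, by = folded_position(10.0)
folded_distance = math.hypot(bx - ax, by - ay)

print(flat_distance, folded_distance)  # the ends become near neighbours
```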

Life takes on a unique geometry at all scales according to need. Single-celled algae, for example, and the mitochondria that power cells, optimally pack catalytic surface into their volumes using convoluted folds that continually evolve and intercalate. Mitochondria divide when demand for their products increases, and they undergo fusion to perform error correction when their DNA is mutated by toxic oxygen metabolites.

The cerebral cortex also uses an extensively folded and connected surface, as do the dynamic synaptic sculptures within it. Folding and re-folding might even be said to be one of the central preoccupations of life. This holds true whether one is referring to proteins, membranes, or organs. The ability of membranes in particular to enclose and isolate equipotential volumes for use as local power sources or sinks as demand arises is life’s calling card.

It appears that life takes origin not by chance but in the most predictable, inevitable, and simplest way possible. Life needs an energy gradient to drive things, but a gradient not so great that any nascent structure is destroyed before it might be stabilised. While lightning, volcanism, and even cosmic ray bombardment might forge molecular precursors to life, deep sea hydrothermal vents have emerged as the prebiotic mill through which life consistently percolates. The major ions which were segregated by primordial membranes appear to have been H+ and Na+, both logical choices in an ocean environment.

Their early selection accounts for the present ubiquity of these ions as the currency for cellular charge. Iron sulfide reactions of the type that led to early forms of metabolism still occur in the thermal vents today. Vast mineral deposits are also found near these vents, in particular rich “manganese nodules.” (Incidentally, these minerals provided a convenient cover story for the CIA’s dramatic effort to recover the Soviet K-129 submarine with the Howard Hughes Glomar Explorer in 1974, but that is a story for another time.)

Nick Lane’s origin-of-life paper is notable for the primacy it places on membrane bioenergetics as the base upon which life coalesced. The primitive membrane forms were initially devoid of the protein machinery found in modern organisms. One such protein is a rotary engine known as an ATPase. Modern ATPases are clusters of up to 300 protein elements printed with a 5 Å (angstrom) process, self-assembled, and typically run at around 9000 RPM. Initially leaky and randomly structured, early membranes harnessed energy gradients with marginal efficiency.

Over time, those that persisted grew increasingly complex by incorporating these nanomachines, culminating in the exquisitely malleable geometry of the brain. Similarly, our first computers made for a dreadful waste of energy. Over time their efficiency has dramatically improved, and the number of atoms per transistor has shrunk to near the limits at which current silicon technology can be reliably supported. New technologies incorporating circuits just a few atoms wide promise dramatic miniaturisation of the current state of the art.

Increasingly, heat looms as the single greatest obstacle to processor speed. As we saw with the Power 575, speed and investment in cooling go hand in hand. Years ago, the maximum rate of information processing in a given volume was derived under the assumption that irreversible computation would be limited only by the rate at which heat can be removed from that volume. Indeed, our understanding of any such limits evolves with our grasp of natural phenomena. The maximum amount of information that might be stored within a spherical volume is known as the Bekenstein Bound. This is a more esoteric quantity, but for those inclined to investigate such measures, it equals the entropy of a black hole whose event horizon has the corresponding surface area.
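For those inclined to investigate further, the Bekenstein Bound is conventionally written as follows (a standard statement of the result, not taken from the source):

```latex
% Bekenstein bound: the entropy S of a system of total energy E
% enclosed within a sphere of radius R satisfies
S \le \frac{2\pi k_B R E}{\hbar c}
% or, expressed as an information capacity in bits,
I \le \frac{2\pi R E}{\hbar c \ln 2}
```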

In 2000, Seth Lloyd, one of the pioneers of quantum computing, envisioned the ultimate laptop [PDF] as a 1 kg mass occupying a 1 litre volume, and calculated that its maximum speed would be 10^50 operations per second. Needless to say, any such device would quickly be consumed in a ball of plasma. Cells and their lipid membranes, which sustain life only within a few degrees of 37°C (98.6°F), won’t stand a chance against whatever technology might approach the ultimate laptop, but for now at least, they still reign supreme.
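Lloyd’s figure can be reproduced from the Margolus–Levitin theorem, which bounds the rate of elementary operations at roughly 2E/(πħ) for a system of total energy E; a sketch using E = mc² for the laptop’s 1 kg mass (constants rounded):

```python
import math

# Reproducing Lloyd's figure: the Margolus-Levitin theorem bounds the
# rate of elementary operations at about 2E / (pi * hbar) for total
# energy E; for the ultimate laptop, E = mc^2 of its 1 kg mass.
M_KG = 1.0
C = 2.998e8        # speed of light, m/s
HBAR = 1.0546e-34  # reduced Planck constant, J*s

energy_j = M_KG * C**2                      # ~9e16 J
ops_per_second = 2 * energy_j / (math.pi * HBAR)

print(f"{ops_per_second:.1e}")  # ~5.4e50 operations per second
```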

Is it even possible to measure how good life is at what it does? One thing that makes life the envy of all things hardware is its ability to replicate itself. Imagine the power of a supercomputer that could replicate processors in situ as demand arises, and then absorb them just as fast (or faster) when they began to accumulate errors or became superfluous. Along these lines, a recent theoretical paper [PDF] seeks to define how efficiently an E. coli bacterium can produce a copy of itself.

The astounding result is that the excess heat generated by real bacteria is only about three times that which is optimally possible. It is difficult to imagine what an optimal assembly of a bacterium from its constituent atoms might look like as there are no doubt many near-optimal ways to go about it.

If we are to conclude anything at this point, it might be that any machine that could duplicate one of its processors using roughly the same amount of energy in the same amount of time that the processor itself uses during normal operation, would be something of unimaginable power. In the same breath, any computer which has the ability to direct its own assembly would potentially be ominous, perhaps – some might say – a bit too much like us.