Computer scientists at University College London have created a self-repairing computer that never crashes. This technology could find its way into mission-critical systems that must not fail, or it could improve the overall reliability of computers in general – which is a good thing, considering that almost everything we interact with is governed by a digital computer.
In a modern computer, instructions are carried out procedurally – one instruction at a time, usually in a fixed order. Your computer might give the illusion that it’s working on multiple processes at the same time (multitasking), but in actuality it’s just switching very rapidly between multiple processes. As long as the task scheduler ensures that no process is left waiting for more than a few milliseconds, you are none the wiser. This method has served us very well, but it also introduces a single point of failure – if the processor or process crashes, the house of cards usually comes tumbling down.
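To make that single point of failure concrete, here is a minimal toy sketch (invented for illustration, not UCL's code) of a conventional sequential scheduler: tasks run one at a time, and an unhandled fault in any one task halts the entire run.

```python
# Toy illustration of sequential execution: tasks run in a fixed order,
# and one faulty task brings the whole computation down.
def run_sequential(tasks):
    results = []
    for task in tasks:
        results.append(task())   # a crash here stops everything after it
    return results

ok = lambda: "ok"
crash = lambda: 1 // 0           # simulated fault (ZeroDivisionError)

try:
    run_sequential([ok, crash, ok])
except ZeroDivisionError:
    print("whole computation lost")  # the third task never even ran
```

The point is simply that in this model, fault tolerance has to be bolted on from outside; nothing in the execution model itself survives a crash.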
In nature, however, where massively parallel brains and nervous systems rule supreme, processes are distributed, decentralised, and fault-tolerant. If a single process misfires in some way – and your brain misfires a lot – then the process is simply run again, perhaps in a slightly different way. Brains and nervous systems can also self-heal, as you will surely know if you’ve ever seen someone being rehabilitated after a stroke or similar neurological event.
UCL’s computer is crash-proof by virtue of mirroring nature’s approach. Instead of being procedural, UCL’s computer is “systemic” – each process that it executes is actually its own system, with its own data and instructions. Instead of these processes being executed sequentially, a pseudorandom number generator acts as the task scheduler. Each of the systems acts simultaneously, and independently of other systems – except for a piece of memory where similar systems can share context-sensitive data.
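A rough sketch of that model might look like the following – the `System` class, `systemic_run` function, and `count` instruction are all invented names for illustration, not UCL's implementation. Each system bundles its own data and instructions, a pseudorandom number generator (rather than a fixed order) picks which system runs next, and a shared dictionary stands in for the context-sensitive shared memory.

```python
import random

class System:
    """One 'system': its own data plus its own instruction."""
    def __init__(self, name, data, instruction):
        self.name = name
        self.data = data
        self.instruction = instruction   # a function of (data, shared)

    def step(self, shared):
        self.data = self.instruction(self.data, shared)

def systemic_run(systems, shared, steps, seed=0):
    rng = random.Random(seed)            # the PRNG acts as task scheduler
    for _ in range(steps):
        rng.choice(systems).step(shared) # systems run in random order
    return {s.name: s.data for s in systems}

# Two independent counters that also tally into shared context memory
def count(data, shared):
    shared["total"] = shared.get("total", 0) + 1
    return data + 1

shared = {}
result = systemic_run([System("a", 0, count), System("b", 0, count)],
                      shared, steps=10)
print(result, shared)   # the per-system counts always sum to shared["total"]
```

Note that no system depends on any other having run first – which is exactly the property that lets a crashed system be rebuilt and rerun without derailing the rest.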
According to New Scientist, each systemic computer is resilient and self-healing because it stores multiple copies of its data/instructions spread across multiple systems. If one system crashes or becomes corrupted, the computer can just rebuild that system and start again. Ultimately, the systems are executed in a random order, and thus interact in a random way, until the result of the calculation is achieved.
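One simple way to picture that rebuild step – a hedged sketch, not the paper's actual mechanism – is majority voting over redundant replicas: if one copy of a system's data or instructions is corrupted, the healthy copies outvote it and the damaged replica is repaired.

```python
from collections import Counter

def rebuild(copies):
    # Pick the value most replicas agree on, then repair every replica to it.
    healthy, _ = Counter(copies).most_common(1)[0]
    return [healthy] * len(copies)

copies = ["instr:add", "instr:add", "instr:add"]
copies[1] = "instr:garbage"          # simulate corruption of one replica
copies = rebuild(copies)
print(copies)  # all three replicas restored to "instr:add"
```

With the copies repaired, the rebuilt system can simply be scheduled again by the PRNG, and the overall computation carries on.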
The image below shows the suggested programming methodology for a systemic computer:
As for how the systemic computer actually runs in the real world, UCL uses an FPGA that has been specially programmed to execute the scientists’ nascent implementation of systemic computation. Performance isn’t fantastic, but it is reportedly “a lot faster than expected.” Peter Bentley, one of the computer’s developers, will present his findings at the IEEE International Conference on Evolvable Systems in April – but if you want a sneak peek at how UCL’s computer actually works in practice, the (open access) research paper is linked below.
Moving forward, Bentley and his colleague Christos Sakellariou are now working on imbuing their systemic computer with an artificial intelligence that can rewrite its own code on the fly. The idea, of course, is to build a system that is not only resilient and reliable, but also capable of learning from experience and reacting to its environment.
For further reading, you might want to have a browse of our article How to create a mind, or die in the attempt.