Cognitive computing – an entirely new computational paradigm which draws its inspiration from the human brain – sounds like science fiction, but IBM is spending a fortune trying to turn it into science fact. We chat to the leader of the project Dr. Dharmendra Modha about what could prove to be a tipping point in the history of computing.
Dr. Modha’s team – made up of members from four world-class universities, six IBM labs and fabs, government labs, and experts from the fields of neuroscience, supercomputing, and nanotechnology – has been working for years to produce the first prototypes of a cognitive computing processor, and last week it announced the fruits of its labour: two tiny processors that could spark a major change in the way computing works.
“As I speak to you,” an animated Modha explains during our interview, “I hold in my hand a tiny, tiny, tiny chip that represents our first cognitive computing core. It brings together the three key architectural elements of the brain: neurons, representing computation; synapses, representing memory; and axons, representing communications – in working silicon.”
That’s all well and good, but what use is a cut-down brain – the chips Modha refers to contain a mere 256 neurons each, compared to the estimated 10-100 billion found inside your skull right now – to anyone? “We aim to create a new generation of computers that can lead to brain-like applications, and fundamentally complement – not replace – complement today’s computers, while bypassing the technology, architecture, and programming limitations that they suffer from.
“I want to emphasise that very clearly,” Modha stops to explain. “We are not trying to replace, compete with, or in any way challenge today’s computers. Today’s computers are a foundation of our society, and will be with us in perpetuity. They are loved, but we are adding another member to the family so that we can carry out brain-like computation more efficiently in real time and embed that within physical environments.
“To give you the contrast between today’s computers and brains, let me call today’s computers ‘left-brain machines’,” Modha explains. “They’re sequential, analytical, they can deal with text, numbers, symbols, they’re good for front-end or back-end intelligence, they’re centralised, sequential, clock-driven. They have a bus, they have cache memories, they constantly overwrite the registers, they need programming, they’re hard-wired and therefore – I would say – fault-prone, and they use algorithms, and are fundamentally computation driven, right?
“So they count flops – ‘how many flops can I compute per unit of time?’ But they ignore power, or energy. On the other hand, we can think of cognitive computers as ‘right-brain’ computers – they’re parallel, they close the sensory motor feedback loop, they’re symbolic, they’re distributed, I would say event-driven. They don’t have a bus, no cache memories, they update the registers’ state only when things change, they’re learning and are fundamentally energy-conscious. They do what is necessary when it is necessary and only that which is necessary. Different paradigms, right? Day and night. And we need both – because we are trying to achieve a balanced computing stack.”
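The event-driven paradigm Modha describes – updating state only when something changes, rather than on every clock tick – can be illustrated with a toy example. The following is a minimal sketch, not IBM's actual design: a simple leaky integrate-and-fire neuron (a standard model from computational neuroscience) whose membrane potential is only recomputed when an input spike event arrives, so no work is done between events.

```python
# Toy event-driven spiking neuron: a minimal sketch of the
# "do what is necessary when it is necessary" idea. This is an
# illustrative model, not a description of IBM's chip.
from dataclasses import dataclass
import math

@dataclass
class LIFNeuron:
    """Leaky integrate-and-fire neuron, updated per event."""
    tau: float = 20.0        # membrane time constant (ms)
    threshold: float = 1.0   # firing threshold
    v: float = 0.0           # membrane potential
    last_t: float = 0.0      # time of the last update (ms)

    def receive(self, t: float, weight: float) -> bool:
        """Process an input spike at time t; return True if the neuron fires."""
        # Apply the decay for the whole elapsed interval in one step,
        # then integrate the incoming spike -- nothing happens between events.
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0     # reset after firing
            return True
        return False

neuron = LIFNeuron()
spikes = [(1.0, 0.6), (3.0, 0.6), (80.0, 0.6)]  # (time in ms, weight)
fired = [neuron.receive(t, w) for t, w in spikes]
print(fired)  # two close-together spikes sum and fire; the late one has decayed
```

The two early spikes arrive close together, so their contributions sum past the threshold and the neuron fires; by the time the third arrives, the potential has leaked away. A clock-driven simulation would have recomputed the potential at every tick of those 80 milliseconds; the event-driven version does three updates in total.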
That balanced stack is the true goal of Modha’s team: a joining of the left and right brains of a computer into the first truly flexible computer, capable of learning for itself, feats of associative memory, and pattern recognition without preprogramming on a scale unimaginable with today’s technology.
“Imagine instrumenting a grocer’s glove with sensors for temperature, smell, sight – and being able to flag bad or contaminated produce,” Modha enthuses, his speech becoming more rapid as he warms to his subject. “Or you could have smarter transportation systems, or we could take the world’s financial markets and instrument them 24/7 – stocks, bonds, commodities, real estate, currencies, derivatives – and keep an eye out for large-scale patterns, associations, opportunities, anomalies, so we can create fundamental improvements in how we invest.”
The tasks cognitive processors would be given are vastly different to those a standard computer processor would run. While they excel at pattern matching, it’s not possible for a cognitive processor to crunch numbers at anywhere near the speed of a traditional chip, but they are capable of feats of ‘logic’ which make them ideally suited to working with sensors and looking for patterns that a pre-written program could miss. In Modha’s opinion, however, one cannot function successfully without the other.
“Imagine a different evolution of the world in which we had built brain-like computers today,” says Modha. “There’ll be some other Ahdom – which is my last name spelt backwards – who will be telling you how the brain is so terrible for this rational computation, right? The same thing that I’m telling you can be made to work in reverse: I can make the same argument to say how the brain is terrible for processing your social security numbers, how the brain is terrible for remembering an employee’s age, gender, salary, date of birth, social security number, tax benefits, addresses, license plates, driver’s ID – you see what I’m saying?
“The very same factors that allow the brain to be fault-tolerant – a couple of synapses in your brain and my brain can fail, but our mother’s sweet face, which is our most cherished memory, never fails us, right? Now imagine a couple of bits are flipped in your bank account – I mean, you’ll freak out! So, that’s the real reason why neither has a prayer in challenging the other. It’s really a genuinely collaborative hybrid environment in which both must continue to exist.”
When Modha starts to speak about what he calls “quote-unquote neurons, quote-unquote axons, and quote-unquote synapses,” and his tone takes on the wistful mannerisms of a storyteller, it’s easy to imagine him as little more than a fantasist espousing the latest variant of the perpetual motion machine, completely caught up in his own vision and paying little heed to the commercial realities that surround him. If that’s the case, however, he’s managed to convince an awful lot of people to follow him in his flight of fancy – including the Defense Advanced Research Projects Agency, best known for creating ARPANET, which grew into the Internet that we know and love.
“In a typical meeting we might have in the room a psychiatrist, a psychologist, a neuroscientist, a cognitive neuroscientist, a supercomputing expert, a simulation expert, a computational neuroscientist, an electronic design automation expert, experts in analog VLSI, asynchronous VLSI, digital VLSI, technology experts, experts in packaging, experts in visualisation, experts in sensors, experts in actuators, experts in power consumption,” Modha reels off from memory. “The mind goes crazy! These people would never interact with each other meaningfully in the normal course of their lives, but they’re together in a room filled with creative tensions, because it’s one team, one dream.
“We work with DARPA, and DARPA demands we deliver our dream on a deadline,” Modha says with an edge to his voice missing from his talk of cognitive processors and the applications thereof. “This isn’t just blue sky research – our heads are in the clouds, but our feet are firmly grounded in the engineering, the technology, the scientific reality of today. This isn’t about ‘some day, maybe’ – this is ‘what can we do now, how can we really innovate, how can we mine Mother Nature’s patent fund, combine it with the very best that science, technology, and engineering has to offer today, and then make a humble effort at improving productivity of this society?'”
That’s the key to what Modha is trying to do, and the reason why IBM and the US government are investing vast quantities of money in his project: the promise of vastly improved productivity in almost every aspect of business. “We are trying to fundamentally increase revenues of the corporation, decrease their costs, and to increase the productivity of the individuals, and to create a better future for us, our planet, and our civilisation – and it is an exploration that we must do. If everyone is thinking the same thing,” Modha jokes, “somebody isn’t thinking.”
It’s true that the prototypes Modha’s team has created are somewhat limited at present. He explains that a visit to his lab would reveal a system powered by a cognitive processor capable of automatic classification of a digit, feats of associative memory, a game of Pong – “perhaps it will beat you, on a good day,” he jokes – and a pathfinding system that keeps a virtual car from crashing in a virtual maze.
Recognising the English nature of our publication – after joking that, had he thought of it first, his chips would have been called ‘thinq’ – Modha ends the interview paraphrasing Winston Churchill: “This isn’t the end,” he says, his smile evident in his voice. “This isn’t the beginning of the end. This is perhaps the end of the beginning.”