Qualcomm plans to start making computer chips with a new processor design which "mimics the human brain and nervous system" as early as next year, the mobile chip maker said last week.
Dubbed Neural Processing Units (NPUs), these future chips will "not only mimic human-like perception but also have the ability to learn how biological brains do," the company said. Qualcomm's forthcoming line of NPUs will be branded "Zeroth" after the "Zeroth Law" of robotics from the late Isaac Asimov's Robot novels, and the chip maker hopes to partner with other companies to design and manufacture them in 2014.
Qualcomm aims to produce Zeroth chips "for applications ranging from artificial vision sensors to robot controllers and even brain implants," the company's chief technology officer, Matt Grob, said in a talk at the MIT Technology Review's EmTech conference last week, the MIT news site reported.
Grob said Qualcomm and partner Brain Corporation, a self-described "pioneer in developing novel algorithms based on the functionality of the nervous system," have been working together for several years to develop a chip design and software platform based on how human brains take in and process information.
"This 'neuromorphic' hardware is biologically inspired — a completely different architecture — and can solve a very different class of problems that conventional architecture is not good at. It really uses physical structures derived from real neurons — parallel and distributed," Grob was quoted as saying by the MIT Technology Review.
"What is new now is the ability to drop down large numbers of these structures on silicon. The tools we can create are very sophisticated. The promise of this is a kind of machine that can learn, and be programmed without software — be programmed the way you teach your kid."
Qualcomm isn't ready to produce Zeroth chips in volume yet, but the company has been testing early designs in devices like the robot in the video above, which was taught to visit only white boxes in an environment containing boxes of many different colours.
"We did this through dopaminergic-based learning, a.k.a. positive reinforcement—not by programming lines of code," the company said.
In a blog post introducing Zeroth, Qualcomm director of business development Samir Kumar said the chip maker hopes to "define and standardize" an NPU architecture which could be used in stand-alone devices or in conjunction with traditional microprocessors, and even "live side-by-side in future system-on-chips."
"This way you can develop programs using traditional programing languages, or tap into the NPU to train the device for human-like interaction and behavior," Kumar said.
Qualcomm is not the only prominent technology company exploring brain-based hardware and software designs. In August, IBM revealed some details about a new computer programming framework it is developing which draws inspiration from the way the human brain receives data, processes it, and instructs the body to act upon it, all while requiring relatively tiny amounts of energy to do so.
IBM and research partners Cornell University and iniLabs have now completed the second phase of their approximately $53 million (£33 million) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project. With $12 million (£7.5 million) in new funding from the Defense Advanced Research Projects Agency (DARPA), IBM said a few months ago that work was set to commence on Phase 3, which involves an ambitious plan to develop intelligent sensor networks built on a "brain-inspired chip architecture" using a "scalable, interconnected, configurable network of 'neurosynaptic cores'."
Meanwhile, Intel and others have long been developing similar technologies. In fact, scientists and engineers have been attempting to build artificial neural networks that mimic brain processes since the 1940s, when Warren McCulloch and Walter Pitts devised their "threshold logic" computational model and Donald Hebb articulated what became known as Hebbian learning.
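Those two 1940s ideas are simple enough to show directly: a McCulloch-Pitts neuron fires when its weighted inputs reach a threshold, and Hebb's rule strengthens a connection when input and output are active together. The weights and values below are toy numbers chosen for illustration, not a historical reconstruction:

```python
def threshold_unit(inputs, weights, threshold):
    # McCulloch-Pitts neuron: fire (1) iff the weighted input sum
    # reaches the threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def hebbian_update(weights, inputs, output, rate=0.1):
    # Hebb's rule: strengthen a weight when its input and the
    # output are active at the same time.
    return [w + rate * x * output for w, x in zip(weights, inputs)]

# With weights [1, 1] and threshold 2, the unit computes logical AND.
assert threshold_unit([1, 1], [1, 1], 2) == 1
assert threshold_unit([1, 0], [1, 1], 2) == 0

# The co-active input's weight grows (to ~1.1); the inactive one is unchanged.
w = hebbian_update([1.0, 1.0], inputs=[1, 0], output=1)
print(w)
```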
Over the past few decades, important developments in the refinement of neural networks have included parallel distributed processing and the creation of algorithmic hierarchies of conceptual inputs known as deep learning.
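The "hierarchy" in deep learning simply means that each layer transforms the previous layer's output, so later layers operate on increasingly abstract features. A minimal forward pass makes the idea concrete; the network shape and weights here are arbitrary illustrative numbers, not a trained model:

```python
import math

def layer(inputs, weights, biases):
    # One fully connected layer with a sigmoid nonlinearity.
    return [
        1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
        for row, b in zip(weights, biases)
    ]

def forward(x, network):
    # Feed the input through each layer in turn: the hierarchy.
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# A 2-input -> 3-unit -> 1-unit network with made-up weights.
network = [
    ([[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]], [0.0, 0.1, -0.1]),
    ([[1.0, -1.0, 0.5]], [0.2]),
]
output = forward([1.0, 0.0], network)
print(output)  # a single value between 0 and 1
```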
In just the past few years, the recurrent and deep feedforward neural networks created by the Dalle Molle Institute for Artificial Intelligence Research (IDSIA) have been recognised for pushing machine learning and pattern recognition to new heights.