The IBM Quantum Experience: Feynman’s Vision Comes into Focus

In 1981, at a conference co-organized by MIT and IBM, the famously brilliant and irreverent physicist Richard Feynman urged the world to build a quantum computer. He said, “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical, and by golly it’s a wonderful problem, because it doesn’t look so easy.”

Quantum theory was a revolutionary advance in physics and chemistry in the early 20th century, an elegant mathematical theory that explained the bizarre behavior of subatomic particles and led to great technological advances such as the laser and the transistor. But only in the last decade of the 20th century was it realized that quantum theory applies not only to atoms and molecules, but also to bits and logic operations in a computer.

Now, in the 21st century, this realization is bringing about a revolution in the science and technology of information processing, making possible kinds of computing and communication unforeseen by the founders of the information revolution. But to explore and develop these possibilities, we will need to learn how to build a quantum computer, that is to say a device able to store and process this delicate new form of information as reliably and smoothly as an ordinary computer manipulates its bits.

Computation, as carried out in your phone, laptop, or web server, is based on information processing that a physicist typically refers to as classical. For most of the 20th century, quantum mechanical effects in these systems were regarded as a pain and a potential nuisance. This stems from the fact that the Heisenberg uncertainty principle makes computing devices behave less reliably than their classical ideals, so quantum mechanics was treated as something that introduces noise that can’t be removed.

These quantum mechanical non-idealities become especially evident as Moore’s Law scaling continues. As transistors reach characteristic dimensions on the order of an atomic layer, quantum tunneling and heating make reliable computation increasingly difficult. So then, if Moore’s Law is fundamentally limited by quantum physics, how do we continue to push the boundaries of computing? What is the next frontier?

Well, this is where we can in fact change the way we think about computation, taking the problem of quantum effects harming our processing and turning it to our advantage. To do this, we need to put quantum physics back into the model of computation. This has led to the exciting field of quantum information science, and it is why our team at IBM Research is working with superconducting qubits towards the construction of quantum computers.

A quantum computer performs calculations using devices that follow the laws of quantum mechanics. These laws allow two particles to exist in an entangled state, causing them to behave in ways that cannot be explained by classical physics. This principle, along with other ideas from quantum theory, led Peter Shor to show in 1994 that it’s theoretically possible to efficiently break a very large number into its prime factors with a quantum computer. This factoring problem is believed to be hard for a classical computer, and it underpins many of today’s encryption systems.
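
To make the structure of Shor’s idea concrete, here is a minimal Python sketch of the algorithm’s classical skeleton (function names are ours, for illustration only). Factoring reduces to finding the period, or order, of a^x mod N; below, that order is found by brute force, which takes exponential time, and it is exactly this step that a quantum computer accelerates with the quantum Fourier transform.

```python
import random
from math import gcd

def classical_order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force.
    This is the step a quantum computer speeds up via the quantum
    Fourier transform; classically it takes exponential time."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N):
    """Classical skeleton of Shor's algorithm (illustration only)."""
    while True:
        a = random.randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d             # lucky: a already shares a factor with N
        r = classical_order(a, N)
        if r % 2 == 1:
            continue             # need an even order; pick another a
        y = pow(a, r // 2, N)    # a square root of 1 (mod N)
        if y == N - 1:
            continue             # trivial square root; pick another a
        return gcd(y - 1, N)     # shares a nontrivial factor with N

print(shor_factor(15))           # prints 3 or 5
```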

Now, this breakthrough idea for the use of quantum computing was revolutionary because it revealed that one of the following must be true: our classical model of computing is incomplete, factoring is actually easy, or quantum mechanics is wrong.

Many researchers have investigated the hardness of the factoring problem, and the validity of quantum mechanics has been confirmed in numerous experiments, suggesting that the first option is the likely truth: there is a separation between classical and quantum computing. Since Shor’s landmark idea, numerous other quantum algorithms offering a quantum speed-up have been proposed. An up-to-date list can be found at the Quantum Algorithm Zoo.

Another promising application area where quantum computers hold a wealth of potential is quantum chemistry. The field of quantum chemistry explores the physical nature underlying chemical structure in the materials all around us, those that make up life and the earth. Yet the complexity of such problems can be astounding, especially for molecules made up of more than just a handful of atoms. Quantum computers offer a route around this complexity because, fortunately, a quantum computer works the same way that nature does, the same way the chemicals that make up life and matter work. This offers the possibility of simulating (and therefore understanding and improving upon) nature better than any conventional computer ever could.
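
To give a flavor of what “simulating nature” means algorithmically, here is a toy sketch (using our own two-qubit example Hamiltonian, not IBM’s chemistry stack) of the Trotter product formula, a standard route to quantum simulation: split the Hamiltonian H into simple pieces and alternate short evolutions under each. Classically, storing the evolution operator costs memory exponential in the number of qubits; on a quantum device each piece becomes a short gate sequence.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# A toy two-qubit Hamiltonian, H = H1 + H2 (Ising-like, chosen purely for illustration)
H1 = np.kron(Z, Z)
H2 = 0.5 * (np.kron(X, I2) + np.kron(I2, X))
H = H1 + H2

t, n = 1.0, 100                            # total evolution time, number of Trotter steps
exact = expm(-1j * H * t)                  # exact evolution exp(-iHt)
step = expm(-1j * H1 * t / n) @ expm(-1j * H2 * t / n)
trotter = np.linalg.matrix_power(step, n)  # (e^{-iH1 t/n} e^{-iH2 t/n})^n

# The approximation error shrinks roughly as O(t^2 / n)
print(np.linalg.norm(exact - trotter))
```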

Resolving the quantum conflict between noise and controllability
So why now? Why has it taken more than 30 years to make this “quantum leap” towards practical quantum computing? Well, it helps to understand what it means to be quantum, or entangled.

Quantum effects are not so easily observed in everyday life. Entanglement is non-intuitive, and the world around us appears to follow classical physics. This is because in the realm of quantum physics, information becomes extremely fragile and delicate. Any minuscule disturbance from heat, noise, or vibration, and all the quantum effects just disappear. So it takes a lot of care to build a quantum computer: the circuitry must be designed so that virtually no unwanted perturbation can ever touch the system, yet at the same time so that we, as operators of the quantum circuits, can still control the input and output of the individual qubits (or quantum bits) that make up a quantum processor. This is the grand challenge of building a quantum computer: balancing the preservation of the fragile quantum state (referred to as quantum coherence) against user controllability. Estimates of this balancing act indicate that we must reduce errors to only 1 error per 100 tera-operations (that’s 1 error per 10^14 operations) in order to build a functioning quantum computer.

Achieving such a minute proportion of errors is an extremely difficult challenge. Fortunately, theoretical work on quantum information processing has led to the framework of quantum error correction. By encoding quantum information into logical qubits, it is possible to tolerate much higher physical error rates, more like 1 error per 100 to 10,000 operations, a massive improvement that is within reach of current experimental devices.
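
To see why encoding helps, consider the simplest classical analogue: a 3-bit repetition code decoded by majority vote (a toy sketch assuming independent bit-flip errors; real quantum codes must also correct phase errors and tolerate faulty syndrome measurements). If each physical qubit fails with probability p, the encoded bit fails only when two or more copies fail, with probability 3p^2(1-p) + p^3, roughly 3p^2, so encoding wins whenever p < 1/2.

```python
import random

def logical_error_rate(p, trials=1_000_000):
    """Monte Carlo estimate of the logical error rate of a 3-bit
    repetition code under independent bit-flip errors of probability p."""
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(3))
        if flips >= 2:               # majority vote decodes incorrectly
            failures += 1
    return failures / trials

for p in (0.1, 0.01, 0.001):
    print(f"physical p = {p:<6} -> logical ~ {logical_error_rate(p):.1e}")
```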

As we move to larger systems, it is important to show, using these error correction principles, that fault-tolerant quantum computing is possible. This will be a hallmark challenge for our group and for the broader experimental quantum computing community. At the same time, as we build these more complicated quantum devices with increasing numbers of qubits, we will also look for near-term applications in the realm of quantum chemistry, trying novel simulations that might be out of reach of classical computers. With current devices in our lab at around 7-10 qubits, we will soon be constructing processors nearing 40-50 qubits. At that level, such devices will have enough complexity that no classical computer, anywhere, will be able to emulate them. Unearthing the quantum advantage contained within these systems, and realizing Feynman’s dream, will be within reach.

The Quantum Experience
Quantum computing’s full potential is still unknown, but we believe it reaches beyond our expectations. And until we have the hardware and a community of dedicated users, this potential will remain untapped. So, rather than limiting our calculations to classical computers, we have built the