Sixteen years ago, on a cold February day at Yale University, a poster caught Gil Kalai’s eye. It advertised a series of lectures by Michel Devoret, a well-known expert on experimental efforts in quantum computing. The talks promised to explore the question “Quantum Computer: Miracle or Mirage?” Kalai expected a vigorous discussion of the pros and cons of quantum computing. Instead, he recalled, “the skeptical direction was a little bit neglected.” He set out to explore that skeptical view himself.
Today, Kalai, a mathematician at Hebrew University in Jerusalem, is one of the most prominent of a loose group of mathematicians, physicists and computer scientists arguing that quantum computing, for all its theoretical promise, is something of a mirage. Some argue that there exist good theoretical reasons why the innards of a quantum computer — the “qubits” — will never be able to consistently perform the complex choreography asked of them. Others say that the machines will never work in practice, or that if they are built, their advantages won’t be great enough to make up for the expense.
Kalai has approached the issue from the perspective of a mathematician and computer scientist. He has analyzed the issue by looking at computational complexity and, critically, the issue of noise. All physical systems are noisy, he argues, and qubits kept in highly sensitive “superpositions” will inevitably be corrupted by any interaction with the outside world. Getting the noise down isn’t just a matter of engineering, he says. Doing so would violate certain fundamental theorems of computation.
Kalai knows that his is a minority view. Companies like IBM, Intel and Microsoft have invested heavily in quantum computing; venture capitalists are funding quantum computing startups (such as Quantum Circuits, a firm set up by Devoret and two of his Yale colleagues). Other nations — most notably China — are pouring billions of dollars into the sector.
Quanta Magazine recently spoke with Kalai about quantum computing, noise and the possibility that a decade of work will be proven wrong within a matter of weeks. A condensed and edited version of that conversation follows.
When did you first have doubts about quantum computers?
At first, I was quite enthusiastic, like everybody else. But at a lecture in 2002 by Michel Devoret called “Quantum Computer: Miracle or Mirage,” I had a feeling that the skeptical direction was a little bit neglected. Despite the title, the talk was very much the usual rhetoric about how wonderful quantum computing is. The mirage side was not well-presented.
And so you began to research the mirage.
Only in 2005 did I decide to work on it myself. I saw a scientific opportunity and some possible connection with my earlier work from 1999 with Itai Benjamini and Oded Schramm on concepts called noise sensitivity and noise stability.
What do you mean by “noise”?
By noise I mean the errors in a process, and sensitivity to noise is a measure of how likely it is that the noise — the errors — will affect the outcome of the process. Quantum computing is like any similar process in nature — noisy, with random fluctuations and errors. When a quantum computer executes an action, in every computer cycle there is some probability that a qubit will get corrupted.
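The per-cycle corruption Kalai describes compounds quickly over a long computation. A toy simulation makes the point (the 0.1 percent per-cycle error rate and the qubit and cycle counts are illustrative assumptions, not figures from the interview):

```python
import random

def corrupted_qubits(n_qubits, p_error, n_cycles, seed=0):
    """Count how many qubits are hit by at least one error over a run
    of n_cycles, with an independent per-cycle error probability
    p_error for each qubit. Purely illustrative noise model."""
    rng = random.Random(seed)
    hit = [False] * n_qubits
    for _ in range(n_cycles):
        for q in range(n_qubits):
            if rng.random() < p_error:
                hit[q] = True
    return sum(hit)

# With a 0.1% error rate per cycle, the expected fraction of qubits
# untouched after 1,000 cycles is (1 - 0.001)**1000, roughly 37% —
# most qubits in a 50-qubit register will have been corrupted.
print(corrupted_qubits(50, 0.001, 1000))
```

Even a tiny per-cycle rate leaves almost no qubit unscathed over a realistic number of cycles, which is why error correction is unavoidable.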
And so this corruption is the key problem?
We need what’s known as quantum error correction. But this will require 100 or even 500 “physical” qubits to represent a single “logical” qubit of very high quality. And then to build and use such quantum error-correcting codes, the amount of noise has to go below a certain level, or threshold.
To determine the required threshold mathematically, we must effectively model the noise. I thought it would be an interesting challenge.
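The threshold idea can be illustrated with a classical stand-in: a repetition code with majority voting. Real quantum error-correcting codes are far more involved — hence Kalai’s 100-to-500-physical-qubit figure — but this sketch shows why a threshold exists at all (the parameters are illustrative):

```python
from math import comb

def logical_error_rate(p, n_copies=3):
    """Probability that a majority vote over n_copies independent
    copies, each flipped with probability p, returns the wrong answer.
    A classical analogy for the threshold behind quantum error
    correction: redundancy helps only if p is below a threshold."""
    k_min = n_copies // 2 + 1  # smallest number of flips that wins the vote
    return sum(comb(n_copies, k) * p**k * (1 - p)**(n_copies - k)
               for k in range(k_min, n_copies + 1))

for p in (0.3, 0.1, 0.01):
    print(p, logical_error_rate(p, 3))
# logical_error_rate(0.01, 3) = 0.000298: below threshold, three noisy
# copies give one far more reliable "logical" bit.
```

Below the threshold, adding redundancy suppresses the logical error rate; above it, redundancy only multiplies the opportunities for failure. Kalai’s argument is that the physical noise cannot be brought under that threshold in the first place.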
What exactly did you do?
I tried to understand what happens if the errors due to noise are correlated — or connected. There is a Hebrew proverb that says that trouble comes in clusters. In English you would say: When it rains, it pours. In other words, interacting systems will have a tendency for errors to be correlated. There will be a probability that errors will affect many qubits all at once.
So over the past decade or so, I’ve been studying what kind of correlations emerge from complicated quantum computations and what kind of correlations will cause a quantum computer to fail.
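A small simulation shows why correlations are so damaging. With independent noise, two qubits rarely fail in the same cycle; with a single shared failure event, joint errors become as likely as individual ones (the error rate here is an illustrative assumption):

```python
import random

def p_both_fail(correlated, p=0.01, trials=200_000, seed=1):
    """Estimate the probability that two qubits BOTH err in one cycle.
    Independent noise: each qubit errs on its own coin flip.
    Correlated noise: one shared event knocks out both together."""
    rng = random.Random(seed)
    both = 0
    for _ in range(trials):
        if correlated:
            e1 = e2 = (rng.random() < p)          # one shared failure event
        else:
            e1, e2 = (rng.random() < p), (rng.random() < p)
        both += e1 and e2
    return both / trials

print(p_both_fail(correlated=False))  # ≈ p² = 0.0001
print(p_both_fail(correlated=True))   # ≈ p  = 0.01
```

Error correction is designed around the assumption that multi-qubit errors are roughly as rare as products of single-qubit rates; correlated “when it rains, it pours” noise breaks that assumption.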
In my earlier work on noise we used a mathematical approach called Fourier analysis, which says that it’s possible to break down complex waveforms into simpler components. We found that if the frequencies of these broken-up waves are low, the process is stable, and if they are high, the process is prone to error.
That previous work brought me to my more recent paper that I wrote in 2014 with a Hebrew University computer scientist, Guy Kindler. Our calculations suggest that the noise in a quantum computer will kill all the high-frequency waves in the Fourier decomposition. If you think about the computational process as a Beethoven symphony, the noise will allow us to hear only the basses, but not the cellos, violas and violins.
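The low-versus-high frequency contrast can be seen in a small experiment on Boolean functions, the setting of the earlier Benjamini–Kalai–Schramm work. Majority concentrates its Fourier weight on low frequencies and is noise-stable; parity lives entirely at the highest frequency and is noise-sensitive. (The particular functions, input size and noise rate below are illustrative choices, not taken from the papers.)

```python
import random

def flip_probability(f, n_bits, eps, trials=20_000, seed=2):
    """Estimate how often the output of Boolean function f changes when
    each input bit is independently flipped with probability eps."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n_bits)]
        y = [b ^ (rng.random() < eps) for b in x]  # eps-noisy copy of x
        changed += f(x) != f(y)
    return changed / trials

def majority(x):
    return sum(x) * 2 > len(x)

def parity(x):
    return sum(x) % 2

print(flip_probability(majority, 101, 0.05))  # small: majority is noise-stable
print(flip_probability(parity, 101, 0.05))    # near 1/2: parity is noise-sensitive
```

Under 5 percent noise, parity’s output is essentially a coin flip, while majority is disturbed only a modest fraction of the time — the “low frequencies survive, high frequencies die” picture in miniature.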
These results also give good reasons to think that noise levels cannot be sufficiently reduced; they will still be much higher than what is needed to demonstrate quantum supremacy and quantum error correction.
Why can’t we push the noise level below this threshold?
Many researchers believe that we can get below the threshold, and that constructing a quantum computer is merely an engineering challenge of lowering the noise. However, our first result shows that the noise level cannot be reduced, because doing so would contradict an insight from the theory of computing about the power of primitive computational devices. Noisy quantum computers at the small and intermediate scale deliver only primitive computational power. They are too primitive to reach “quantum supremacy” — and if quantum supremacy is not possible, then creating quantum error-correcting codes, which is harder, is also impossible.
What do your critics say to that?
Critics point out that my work with Kindler deals with a restricted form of quantum computing and argue that our model for noise is not physical, but a mathematical simplification of an actual physical situation. I’m quite certain that what we have demonstrated for our simplified model is a real and general phenomenon.
My critics also point to two things that they find strange in my analysis: The first is my attempt to draw conclusions about engineering of physical devices from considerations about computation. The second is drawing conclusions about small-scale quantum systems from insights of the theory of computation that are usually applied to large systems. I agree that these are unusual and perhaps even strange lines of analysis.
And finally, they argue that these engineering difficulties are not fundamental barriers, and that with sufficient hard work and resources, the noise can be driven down to as close to zero as needed. But I think that the effort required to obtain a low enough error level for any implementation of universal quantum circuits increases exponentially with the number of qubits, and thus, quantum computers are not possible.
How can you be certain?
I am pretty certain, while a little nervous to be proven wrong. Our results state that noise will corrupt the computation, and that the noisy outcomes will be very easy to simulate on a classical computer. This prediction can already be tested; you don’t even need 50 qubits for that. I believe that 10 to 20 qubits will suffice. For quantum computers of the kind Google and IBM are building, when you run, as they plan to do, certain computational processes, they expect robust outcomes that are increasingly hard to simulate on a classical computer. Well, I expect very different outcomes. So I don’t need to be certain; I can simply wait and see.