Quantum Algorithms Struggle Against Old Foe: Clever Computers


The quest for “quantum supremacy” – unambiguous proof that a quantum computer does something faster than an ordinary computer – has paradoxically led to a boom in quasi-quantum classical algorithms.

A popular misconception is that the potential — and the limits — of quantum computing must come from hardware. In the digital age, we’ve gotten used to marking advances in clock speed and memory. Likewise, the 50-qubit quantum machines now coming online from the likes of Intel and IBM have inspired predictions that we are nearing “quantum supremacy” — a nebulous frontier where quantum computers begin to do things beyond the ability of classical machines.

But quantum supremacy is not a single, sweeping victory to be sought — a broad Rubicon to be crossed — but rather a drawn-out series of small duels. It will be established problem by problem, quantum algorithm versus classical algorithm. “With quantum computers, progress is not just about speed,” said Michael Bremner, a quantum theorist at the University of Technology Sydney. “It’s much more about the intricacy of the algorithms at play.”

And the goalposts are shifting. “When it comes to saying where the supremacy threshold is, it depends on how good the best classical algorithms are,” said John Preskill, a theoretical physicist at the California Institute of Technology. “As they get better, we have to move that boundary.”

‘It Doesn’t Look So Easy’

Before the dream of a quantum computer took shape in the 1980s, most computer scientists took for granted that classical computing was all there was. The field’s pioneers had convincingly argued that classical computers — epitomized by the mathematical abstraction known as a Turing machine — should be able to compute everything that is computable in the physical universe, from basic arithmetic to stock trades to black hole collisions.

Classical machines couldn’t necessarily do all these computations efficiently, though. Let’s say you wanted to understand something like the chemical behavior of a molecule. This behavior depends on the behavior of the electrons in the molecule, which exist in a superposition of many classical states. Making things messier, the quantum state of each electron depends on the states of all the others — due to the quantum-mechanical phenomenon known as entanglement. Classically calculating these entangled states in even very simple molecules can become a nightmare of exponentially increasing complexity.

A quantum computer, by contrast, can deal with the intertwined fates of the electrons under study by superposing and entangling its own quantum bits. This enables the computer to process extraordinary amounts of information. Each single qubit you add doubles the states the system can simultaneously store: Two qubits can store four states, three qubits can store eight states, and so on. Thus, you might need just 50 entangled qubits to model quantum states that would require exponentially many classical bits — 2^50, or about 1.13 quadrillion — to encode.
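
To make that doubling concrete, here is a minimal back-of-the-envelope sketch in Python. The 16-bytes-per-amplitude figure assumes double-precision complex storage, an assumption of ours for illustration rather than anything from the researchers quoted here:

```python
# Each added qubit doubles the number of basis states, and a classical
# simulation must track one complex amplitude per state.
for n in (2, 3, 50):
    print(f"{n} qubits -> {2 ** n:,} states")
# 2 qubits -> 4 states
# 3 qubits -> 8 states
# 50 qubits -> 1,125,899,906,842,624 states

# At 16 bytes per double-precision complex amplitude, a full 50-qubit
# state vector would occupy roughly 18 petabytes of classical memory.
print(f"{2 ** 50 * 16 / 1e15:.1f} petabytes")   # 18.0 petabytes
```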

A quantum machine could therefore make the classically intractable problem of simulating large quantum-mechanical systems tractable, or so it appeared. “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” the physicist Richard Feynman famously quipped in 1981. “And by golly it’s a wonderful problem, because it doesn’t look so easy.”

It wasn’t, of course.

Even before anyone began tinkering with quantum hardware, theorists struggled to come up with suitable software. Early on, Feynman and David Deutsch, a physicist at the University of Oxford, learned that they could control quantum information with mathematical operations borrowed from linear algebra, which they called gates. As analogues to classical logic gates, quantum gates manipulate qubits in all sorts of ways — guiding them into a succession of superpositions and entanglements and then measuring their output. By mixing and matching gates to form circuits, the theorists could easily assemble quantum algorithms.
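
To see what "borrowed from linear algebra" means in practice, here is a minimal sketch in Python with NumPy. It wires two textbook gates, a Hadamard and a CNOT, into a circuit acting on two qubits; these particular gates are standard examples, not ones singled out by Feynman or Deutsch:

```python
import numpy as np

# Gates are unitary matrices; qubit registers are state vectors.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])             # entangling gate: flips qubit 2 if qubit 1 is 1

ket00 = np.array([1, 0, 0, 0])              # two qubits, both in state |0>

# "Mixing and matching gates to form circuits" is just matrix multiplication:
circuit = CNOT @ np.kron(H, np.eye(2))      # H on qubit 1, then CNOT
state = circuit @ ket00

print(state.round(3))         # [0.707 0. 0. 0.707]: an entangled (Bell) state
print(np.abs(state) ** 2)     # measurement probabilities: 50% |00>, 50% |11>
```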

Conceiving algorithms that promised clear computational benefits proved more difficult. By the early 2000s, mathematicians had come up with only a few good candidates. Most famously, in 1994, a young staffer at Bell Laboratories named Peter Shor proposed a quantum algorithm that factors integers exponentially faster than any known classical algorithm — an efficiency that could allow it to crack many popular encryption schemes. Two years later, Shor’s Bell Labs colleague Lov Grover devised an algorithm that speeds up the classically tedious process of searching through unsorted databases. “There were a variety of examples that indicated quantum computing power should be greater than classical,” said Richard Jozsa, a quantum information scientist at the University of Cambridge.

But Jozsa, along with other researchers, would also discover a variety of examples that indicated just the opposite. “It turns out that many beautiful quantum processes look like they should be complicated” and therefore hard to simulate on a classical computer, Jozsa said. “But with clever, subtle mathematical techniques, you can figure out what they will do.” He and his colleagues found that they could use these techniques to efficiently simulate — or “de-quantize” — a surprising number of quantum circuits. For instance, circuits that omit entanglement fall into this trap, as do those that entangle only a limited number of qubits or use only certain kinds of entangling gates.
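
The entanglement-free case is the easiest of these traps to see. If a circuit never entangles its qubits, the state stays a product of independent single-qubit states, so a classical simulation can track n small vectors instead of 2^n amplitudes. A minimal sketch of that bookkeeping:

```python
import numpy as np

# Why entanglement-free circuits "de-quantize": if a circuit applies only
# single-qubit gates, the state stays a product of per-qubit 2-vectors.
# Tracking n small vectors costs O(n) memory instead of O(2**n).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])

n = 1000                                    # far beyond brute force (2**1000 amplitudes)
qubits = [np.array([1.0, 0.0]) for _ in range(n)]   # every qubit starts in |0>

# Apply a layer of H then T to every qubit, one 2x2 multiply at a time.
qubits = [T @ (H @ q) for q in qubits]

# Per-qubit measurement probabilities come straight from each 2-vector.
print(np.abs(qubits[0]) ** 2)               # [0.5 0.5]
```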

What, then, guarantees that an algorithm like Shor’s is uniquely powerful? “That’s very much an open question,” Jozsa said. “We never really succeeded in understanding why some [algorithms] are easy to simulate classically and others are not. Clearly entanglement is important, but it’s not the end of the story.” Experts began to wonder whether many of the quantum algorithms that they believed were superior might turn out to be only ordinary.

Sampling Struggle

Until recently, the pursuit of quantum power was largely an abstract one. “We weren’t really concerned with implementing our algorithms because nobody believed that in the reasonable future we’d have a quantum computer to do it,” Jozsa said. Running Shor’s algorithm for integers large enough to crack standard RSA encryption keys, for instance, would require thousands of qubits — plus probably many thousands more to correct for errors. Experimentalists, meanwhile, were fumbling while trying to control more than a handful.

But by 2011, things were starting to look up. That fall, at a conference in Brussels, Preskill speculated that “the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world” might not be far off. Recent laboratory results, he said, could soon lead to quantum machines on the order of 100 qubits. Getting them to pull off some “super-classical” feat maybe wasn’t out of the question. (Although D-Wave Systems’ commercial quantum processors could by then wrangle 128 qubits and now boast more than 2,000, they tackle only specific optimization problems; many experts doubt they can outperform classical computers.)

“I was just trying to emphasize we were getting close — that we might finally reach a real milestone in human civilization where quantum technology becomes the most powerful information technology that we have,” Preskill said. He called this milestone “quantum supremacy.” The name — and the optimism — stuck. “It took off to an extent I didn’t suspect.”

The buzz about quantum supremacy reflected a growing excitement in the field — over experimental progress, yes, but perhaps more so over a series of theoretical breakthroughs that began with a 2004 paper by the IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand quantum assets, the pair had turned their attention to rudimentary quantum puzzles known as sampling problems. In time, this class of problems would become experimentalists’ greatest hope for demonstrating an unambiguous speedup on early quantum machines.

Sampling problems exploit the elusive nature of quantum information. Say you apply a sequence of gates to 100 qubits. This circuit may whip the qubits into a mathematical monstrosity equivalent to something on the order of 2^100 classical bits. But once you measure the system, its complexity collapses to a string of only 100 bits. The system will spit out a particular string — or sample — with some probability determined by your circuit.

In a sampling problem, the goal is to produce a series of samples that look as though they came from this circuit. It’s like repeatedly tossing a coin to show that it will (on average) come up 50 percent heads and 50 percent tails. Except here, the outcome of each “toss” isn’t a single value — heads or tails — it’s a string of many values, each of which may be influenced by some (or even all) of the other values.

For a well-oiled quantum computer, this exercise is a no-brainer. It’s what it does naturally. Classical computers, on the other hand, seem to have a tougher time. In the worst circumstances, they must do the unwieldy work of computing probabilities for all possible output strings — all 2^100 of them — and then randomly select samples from that distribution. “People always conjectured this was the case,” particularly for very complex quantum circuits, said Ashley Montanaro, an expert in quantum algorithms at the University of Bristol.
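
Here is what that worst-case classical strategy looks like in miniature, as a hedged sketch: a random normalized amplitude vector stands in for the circuit's final state (a real circuit's amplitudes would be plugged in the same way), and the exponential cost of enumerating every output string is the point:

```python
import numpy as np

# The classical worst case, in miniature: build the full output
# distribution of an n-qubit circuit, then sample from it.
# The cost is 2**n probabilities -- fine for n = 10, hopeless for n = 100.
rng = np.random.default_rng(0)
n = 10

# Stand-in for a circuit's final state: a random normalized vector of
# 2**n amplitudes (a real experiment's amplitudes come from its gates).
amps = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
amps /= np.linalg.norm(amps)

probs = np.abs(amps) ** 2                    # one probability per bit string
samples = rng.choice(2 ** n, size=5, p=probs)

for s in samples:
    print(format(int(s), f"0{n}b"))          # each sample is an n-bit string
```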

Terhal and DiVincenzo showed that even some simple quantum circuits should still be hard to sample by classical means. Hence, a bar was set. If experimentalists could get a quantum system to spit out these samples, they would have good reason to believe that they’d done something classically unmatchable.

Theorists soon expanded this line of thought to include other sorts of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In work posted on the scientific preprint site arxiv.org in 2010, they described a quantum machine that sends photons through an optical circuit, which shifts and splits the light in quantum-mechanical ways, thereby generating output patterns with specific probabilities. Reproducing these patterns became known as boson sampling. Aaronson and Arkhipov reasoned that boson sampling would start to strain classical resources at around 30 photons — a plausible experimental target.

Similarly enticing were computations called instantaneous quantum polynomial, or IQP, circuits. An IQP circuit has gates that all commute, meaning they can act in any order without changing the outcome — in the same way 2 + 5 = 5 + 2. This quality makes IQP circuits mathematically pleasing. “We started studying them because they were easier to analyze,” Bremner said. But he discovered that they have other merits. In work that began in 2010 and culminated in a 2016 paper with Montanaro and Dan Shepherd, now at the National Cyber Security Centre in the U.K., Bremner explained why IQP circuits can be extremely powerful: Even for physically realistic systems of hundreds — or perhaps even dozens — of qubits, sampling would quickly become a classically thorny problem.
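
The commuting property itself is ordinary matrix algebra: IQP gates are all diagonal in one shared basis, and diagonal matrices multiply entrywise, so order cannot matter. A two-line check, with random diagonal phase gates standing in for a real IQP circuit's gates:

```python
import numpy as np

# IQP-style gates commute because they are all diagonal in the same
# basis -- the gate analogue of 2 + 5 = 5 + 2.
rng = np.random.default_rng(1)
A = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4)))   # random two-qubit phase gate
B = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4)))

print(np.allclose(A @ B, B @ A))   # True: applying A then B equals B then A
```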

By 2016, boson samplers had yet to extend beyond 6 photons. Teams at Google and IBM, however, were verging on chips nearing 50 qubits; that August, Google quietly posted a draft paper laying out a road map for demonstrating quantum supremacy on these “near-term” devices.

Google’s team had considered sampling from an IQP circuit. But a closer look by Bremner and his collaborators suggested that the circuit would likely need some error correction — which would require extra gates and at least a couple hundred extra qubits — in order to unequivocally hamstring the best classical algorithms. So instead, the team used arguments akin to Aaronson’s and Bremner’s to show that circuits made of non-commuting gates, although likely harder to build and analyze than IQP circuits, would also be harder for a classical device to simulate. To make the classical computation even more challenging, the team proposed sampling from a circuit chosen at random. That way, classical competitors would be unable to exploit any familiar features of the circuit’s structure to better guess its behavior.

But there was nothing to stop the classical algorithms from getting more resourceful. In fact, in October 2017, a team at IBM showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits on as many as 56 qubits — provided the circuits don’t involve too much depth (layers of gates). Similarly, a more able algorithm has recently nudged the classical limits of boson sampling, to around 50 photons.

These upgrades, however, are still dreadfully inefficient. IBM’s simulation, for instance, took two days to do what a quantum computer is expected to do in less than one-tenth of a millisecond. Add a couple more qubits — or a little more depth — and quantum contenders could slip freely into supremacy territory. “Generally speaking, when it comes to emulating highly entangled systems, there has not been a [classical] breakthrough that has really changed the game,” Preskill said. “We’re just nibbling at the boundary rather than exploding it.”

That’s not to say there will be a clear victory. “Where the frontier is is a thing people will continue to debate,” Bremner said. Imagine this scenario: Researchers sample from a 50-qubit circuit of some depth — or maybe a slightly larger one of less depth — and claim supremacy. But the circuit is pretty noisy — the qubits are misbehaving, or the gates don’t work that well. So then some crackerjack classical theorists swoop in and simulate the quantum circuit, no sweat, because “with noise, things you think are hard become not so hard from a classical point of view,” Bremner explained. “Probably that will happen.”

What’s more certain is that the first “supreme” quantum machines, if and when they arrive, aren’t going to be cracking encryption codes or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” Montanaro said. “The first wave of problems we solve are ones for which we don’t really care about the answers.”

Yet these early wins, however small, will assure scientists that they are on the right track — that a new regime of computation really is possible. Then it’s anyone’s guess what the next wave of problems will be.

The Era of Quantum Computing Is Here. Outlook: Cloudy


Quantum computers should soon be able to beat classical computers at certain basic tasks. But before they’re truly powerful, researchers have to overcome a number of fundamental roadblocks.

Quantum computers have to deal with the problem of noise, which can quickly derail any calculation.

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.

There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

Connie Zhou for IBM

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits — 1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.
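
That "well-defined probability" is literal. A qubit's state is a pair of complex amplitudes, and the squared magnitudes give the odds of each measurement outcome. The sketch below simulates repeated measurements of a qubit weighted 80/20 (illustrative numbers of ours):

```python
import numpy as np

# A qubit in superposition: amplitudes alpha and beta give well-defined
# measurement probabilities |alpha|**2 and |beta|**2.
alpha, beta = np.sqrt(0.8), np.sqrt(0.2)     # must satisfy |a|^2 + |b|^2 = 1

rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=[abs(alpha)**2, abs(beta)**2])

print(outcomes.mean())   # ~0.2: the qubit reads 1 about 20% of the time
```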

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states — a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.

Note that I’ve not said — as it often is said — that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions — some of the time — but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

Connie Zhou for IBM

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with — you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
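
The classical fix really is that simple. A sketch of the backup-copy trick, using three copies so that a majority vote can out the flipped one:

```python
# Classical error handling by redundancy: store each bit three times and
# let the majority vote expose a randomly flipped copy.
def majority(bits):
    return 1 if sum(bits) >= 2 else 0

stored = [1, 1, 1]       # three backup copies of a 1
stored[0] = 0            # noise flips one copy
print(majority(stored))  # 1 -- the flipped copy stands out as the odd one out
```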

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead — all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

Photo by John T. Consoli/University of Maryland

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
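
A classical analogue conveys the idea, though it omits all the quantum subtlety: parity checks between neighboring bits can locate a single flipped bit in a three-bit code without ever reading any bit on its own, which is roughly the role the ancilla plays:

```python
# A classical analogue of syndrome measurement: the parities b1^b2 and
# b2^b3 locate a single flipped bit without ever reading a bit on its
# own. In the quantum version, ancilla qubits record exactly these
# parities, letting you find the error without collapsing the data.
def syndrome(b1, b2, b3):
    s1, s2 = b1 ^ b2, b2 ^ b3
    return {(0, 0): "no error", (1, 0): "bit 1 flipped",
            (1, 1): "bit 2 flipped", (0, 1): "bit 3 flipped"}[(s1, s2)]

print(syndrome(1, 1, 1))   # no error
print(syndrome(1, 0, 1))   # bit 2 flipped
```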

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit — a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
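
In outline, the extrapolation works like the sketch below, where a fake linear noise model (our assumption, purely for illustration, not IBM's actual scheme) stands in for the noisy device:

```python
import numpy as np

# Zero-noise extrapolation, as a toy: run the "same computation" at
# deliberately amplified noise levels, then extrapolate the results
# back to the zero-noise limit.
true_value = 0.42
noise_levels = np.array([1.0, 1.5, 2.0])                 # noise scale factors
measured = true_value - 0.1 * noise_levels               # faked noisy readouts

slope, intercept = np.polyfit(noise_levels, measured, 1) # fit a line
print(round(intercept, 3))   # 0.42 -- the extrapolated zero-noise estimate
```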

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties — such as stability and chemical reactivity — of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
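
A toy version of the VQE loop fits in a few lines: a trial state with one knob, an energy readout, and a classical optimizer turning the knob. Here the "quantum device" is simulated by a 2x2 Hamiltonian we made up for illustration; the real algorithm evaluates the energy on quantum hardware:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy VQE: minimize the energy <psi(theta)|H|psi(theta)> of a trial
# state over its single parameter. The Hamiltonian is an arbitrary
# single-qubit example, not a molecular one.
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
Hm = Z + 0.5 * X

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])   # trial state
    return psi @ Hm @ psi                                    # expectation value

result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print(round(result.fun, 4))                     # -1.118: the optimized energy
print(round(min(np.linalg.eigvalsh(Hm)), 4))    # -1.118: exact lowest energy
```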

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”
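
Multiplying those targets by the overhead estimates quoted earlier gives a sense of scale (the ratios below are Aspuru-Guzik's and Eisert's figures from above, not a forecast of ours):

```python
# Physical-qubit requirements implied by the logical-qubit targets and
# the overhead estimates quoted in this article.
for logical in (200, 5_000):
    for per_logical in (10_000, 800):        # Aspuru-Guzik's and Eisert's figures
        print(f"{logical:>5} logical x {per_logical:>6}/logical "
              f"= {logical * per_logical:>13,} physical qubits")
```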

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just — or even mainly — how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched — if this time is too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
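
The budget arithmetic is blunt: the usable depth is roughly the coherence time divided by the gate time. The specific times below are illustrative assumptions, not measurements of any particular machine:

```python
# Roughly how many sequential gate operations fit inside the coherence
# window. Both times are illustrative assumptions.
coherence_time_us = 5.0     # "a few microseconds," per the text
gate_time_us = 0.05         # assumed gate-switching time

max_depth = round(coherence_time_us / gate_time_us)
print(max_depth)            # 100 -- a hard ceiling on circuit depth
```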

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
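
As a caricature of the idea (a deliberately simplified stand-in, since IBM's actual metric also folds in connectivity and gate quality), a volume-like score grows only when qubit count and usable depth grow together:

```python
# A toy volume-like score: adding qubits helps only if circuits of
# comparable depth can still run before errors take over. This min-based
# form is a simplification for illustration, not IBM's definition.
def toy_quantum_volume(n_qubits, usable_depth):
    return 2 ** min(n_qubits, usable_depth)

print(toy_quantum_volume(50, 5))   # 32: many qubits, but shallow usable circuits
print(toy_quantum_volume(8, 20))   # 256: fewer but better qubits score higher
```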

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert — it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

Quantum Chemistry Solves The Question of Why Life Needs So Many Amino Acids


One of the oldest and most fundamental questions in biochemistry is why the 20 amino acids that support life are all needed, when the original core of 13 would do – and quantum chemistry might have just provided us with the answer.

According to new research, it’s the extra chemical reactivity of the newer seven amino acids that makes them so vital to life, even though they don’t add anything different in terms of their spatial structure.

Quantum chemistry is a way of taking some of the principles of quantum mechanics – describing particles according to probabilistic, wave-like properties – and applying them to the way atoms behave in chemical reactions.

The international team of scientists behind the new study used quantum chemistry techniques to compare amino acids found in space (and left here by meteorite fragments) with amino acids supporting life today on Earth.

“The transition from the dead chemistry out there in space to our own biochemistry here today was marked by an increase in softness and thus an enhanced reactivity of the building blocks,” says one of the researchers, Bernd Moosmann from Johannes Gutenberg University Mainz in Germany.

It’s the job of amino acids to form proteins, as instructed by our DNA. These acids were formed right after Earth itself came into being, about 4.54 billion years ago, and so represent one of the earliest building blocks of life.

However, why evolution decided that we needed 20 amino acids to handle this genetic encoding has never been clear, because the first 13 that developed should have been enough for the task.

The greater “softness” of the extra seven amino acids identified by the researchers means they are more readily reactive and more flexible in terms of chemical changes.
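
"Softness" here is the conceptual-chemistry quantity, conventionally defined from the ionization energy I and electron affinity A: hardness is eta = (I - A)/2, and softness is its inverse. A small sketch with made-up placeholder values (not numbers from the study):

```python
# Chemical hardness eta = (I - A) / 2; softness is its inverse. Softer
# molecules have smaller gaps and react more readily. Values below are
# hypothetical placeholders, not data from the paper.
def hardness(ionization_eV, electron_affinity_eV):
    return (ionization_eV - electron_affinity_eV) / 2

def softness(ionization_eV, electron_affinity_eV):
    return 1 / (2 * hardness(ionization_eV, electron_affinity_eV))

early = softness(9.0, 0.5)    # a harder, less reactive building block
late = softness(7.5, 1.5)     # a softer, more reactive one
print(round(early, 3), round(late, 3))   # 0.118 0.167 -- softer reacts more readily
```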

If you were representing the amino acids as circles, they could be drawn as multiple concentric circles representing differing energy levels, rather than one single circle of the same chemical hardness and energy level – kind of like in the photo below.

Amino acids represented as concentric circles of differing energy levels (Michael Plenikowski)

Having arrived at this hypothesis through quantum chemistry calculations, the scientists were able to back up their ideas with a series of biochemical experiments.

Along the way the team determined that the extra amino acids – particularly methionine, tryptophan, and selenocysteine – could well have evolved as a response to increasing levels of oxygen in the biosphere during the planet’s early days.

Peering so far back in time is difficult, as the first organic compounds never left fossils behind for us to analyse, but this may have been part of the process that kicked off the formation of life on Earth.

As the very earliest living cells tried to deal with the extra oxidative stress, it was a case of survival of the fittest. The cells best able to cope with that additional oxygen – through the protection of the new amino acids – were the ones that lived on and flourished.

“With this in view, we could characterise oxygen as the author adding the very final touch to the genetic code,” says Moosmann.

The research has been published in PNAS.

You thought quantum mechanics was weird: check out entangled time


Photo by Alan Levine/Flickr

In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics. The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly. ‘I know of course how the hocus pocus works mathematically,’ he wrote to Einstein on 13 July 1935. ‘But I do not like such a theory.’ Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.

The problem is that entanglement violates how the world ought to work. Information can’t travel faster than the speed of light, for one. But in a 1935 paper, Einstein and his co-authors showed how entanglement leads to what’s now called quantum nonlocality, the eerie link that appears to exist between entangled particles. If two quantum systems meet and then separate, even across a distance of thousands of lightyears, it becomes impossible to measure the features of one system (such as its position, momentum and polarity) without instantly steering the other into a corresponding state.

Up to today, most experiments have tested entanglement over spatial gaps. The assumption is that the ‘nonlocal’ part of quantum nonlocality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal nonlocality?

The answer, as it turns out, is yes. Just when you thought quantum mechanics couldn’t get any weirder, a team of physicists at the Hebrew University of Jerusalem reported in 2013 that they had successfully entangled photons that never coexisted. Previous experiments involving a technique called ‘entanglement swapping’ had already shown quantum correlations across time, by delaying the measurement of one of the coexisting entangled particles; but Eli Megidish and his collaborators were the first to show entanglement between photons whose lifespans did not overlap at all.

Here’s how they did it. First, they created an entangled pair of photons, ‘1-2’ (step I in the diagram below). Soon after, they measured the polarisation of photon 1 (a property describing the direction of light’s oscillation) – thus ‘killing’ it (step II). Photon 2 was sent on a wild goose chase while a new entangled pair, ‘3-4’, was created (step III). Photon 3 was then measured along with the itinerant photon 2 in such a way that the entanglement relation was ‘swapped’ from the old pairs (‘1-2’ and ‘3-4’) onto the new ‘2-3’ combo (step IV). Some time later (step V), the polarisation of the lone survivor, photon 4, is measured, and the results are compared with those of the long-dead photon 1 (back at step II).

Figure 1. Time line diagram: (I) Birth of photons 1 and 2, (II) detection of photon 1, (III) birth of photons 3 and 4, (IV) Bell projection of photons 2 and 3, (V) detection of photon 4.
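
The swap itself can be checked with a short state-vector calculation. The sketch below (NumPy, with both pairs existing simultaneously, so it captures the correlation structure but not the never-coexisting timing that makes the experiment remarkable) projects photons 2 and 3 onto one Bell outcome and shows photons 1 and 4 left entangled:

```python
import numpy as np

# A minimal sketch of entanglement swapping, simulated with plain state
# vectors. Photons 1-2 form one entangled pair, photons 3-4 the other.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)

# Steps I & III: two independent entangled pairs, full state on qubits 1,2,3,4.
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)    # indices (q1, q2, q3, q4)

# Step IV: Bell measurement on photons 2 and 3. Project onto <Phi+|_{23}
# by contracting the middle two indices with the Bell vector.
bell23 = bell.reshape(2, 2)
phi14 = np.einsum('abcd,bc->ad', psi, bell23.conj())

prob = np.sum(np.abs(phi14) ** 2)                # chance of this Bell outcome
phi14 = phi14 / np.sqrt(prob)                    # post-measurement state of 1 and 4

print(np.round(phi14, 3))   # (|00> + |11>)/sqrt(2): photons 1 and 4 now entangled
print(round(float(prob), 3))  # 0.25
```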

The upshot? The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.

What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter. Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

Lest this scenario strike you as too outlandish, Megidish and his colleagues can’t resist speculating on possible and rather spooky interpretations of their results. Perhaps the measurement of photon 1’s polarisation at step II somehow steers the future polarisation of 4, or the measurement of photon 4’s polarisation at step V somehow rewrites the past polarisation state of photon 1. In both forward and backward directions, quantum correlations span the causal void between the death of one photon and the birth of the other.

Just a spoonful of relativity helps the spookiness go down, though. In developing his theory of special relativity, Einstein deposed the concept of simultaneity from its Newtonian pedestal. As a consequence, simultaneity went from being an absolute property to being a relative one. There is no single timekeeper for the Universe; precisely when something occurs depends on your location relative to what you are observing, known as your frame of reference. So the key to avoiding strange causal behaviour (steering the future or rewriting the past) in instances of temporal separation is to accept that calling events ‘simultaneous’ carries little metaphysical weight. It is only a frame-specific property, a choice among many alternative but equally viable ones – a matter of convention, or record-keeping.

The lesson carries over directly to both spatial and temporal quantum nonlocality. Mysteries regarding entangled pairs of particles amount to disagreements about labelling, brought about by relativity. Einstein showed that no sequence of events can be metaphysically privileged – can be considered more real – than any other. Only by accepting this insight can one make headway on such quantum puzzles.

The various frames of reference in the Hebrew University experiment (the lab’s frame, photon 1’s frame, photon 4’s frame, and so on) have their own ‘historians’, so to speak. While these historians will disagree about how things went down, not one of them can claim a corner on truth. A different sequence of events unfolds within each one, according to that spatiotemporal point of view. Clearly, then, any attempt at assigning frame-specific properties generally, or tying general properties to one particular frame, will cause disputes among the historians. But here’s the thing: while there might be legitimate disagreement about which properties should be assigned to which particles and when, there shouldn’t be disagreement about the very existence of these properties, particles, and events.

These findings drive yet another wedge between our beloved classical intuitions and the empirical realities of quantum mechanics. As was true for Schrödinger and his contemporaries, scientific progress is going to involve investigating the limitations of certain metaphysical views. Schrödinger’s cat, half-alive and half-dead, was created to illustrate how the entanglement of systems leads to macroscopic phenomena that defy our usual understanding of the relations between objects and their properties: an organism such as a cat is either dead or alive. No middle ground there.

Most contemporary philosophical accounts of the relationship between objects and their properties embrace entanglement solely from the perspective of spatial nonlocality. But there’s still significant work to be done on incorporating temporal nonlocality – not only in object-property discussions, but also in debates over material composition (such as the relation between a lump of clay and the statue it forms), and part-whole relations (such as how a hand relates to a limb, or a limb to a person). For example, the ‘puzzle’ of how parts fit with an overall whole presumes clear-cut spatial boundaries among underlying components, yet spatial nonlocality cautions against this view. Temporal nonlocality further complicates this picture: how does one describe an entity whose constituent parts are not even coexistent?

Discerning the nature of entanglement might at times be an uncomfortable project. It’s not clear what substantive metaphysics might emerge from scrutiny of fascinating new research by the likes of Megidish and other physicists. In a letter to Einstein, Schrödinger notes wryly (and deploying an odd metaphor): ‘One has the feeling that it is precisely the most important statements of the new theory that can really be squeezed into these Spanish boots – but only with difficulty.’ We cannot afford to ignore spatial or temporal nonlocality in future metaphysics: whether or not the boots fit, we’ll have to wear ’em.

The Era of Quantum Computing Is Here. Outlook: Cloudy


Quantum computers should soon be able to beat classical computers at certain basic tasks. But before they’re truly powerful, researchers have to overcome a number of fundamental roadblocks.

Quantum computers have to deal with the problem of noise, which can quickly derail any calculation.

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.

There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

Connie Zhou for IBM

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits — 1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states — a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.

Note that I’ve not said — as it often is said — that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions — some of the time — but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

Connie Zhou for IBM

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with — you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead — all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

Photo by John T. Consoli/University of Maryland

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit — a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.

Lucy Reading-Ikkanda/Quanta Magazine

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties — such as stability and chemical reactivity — of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September, when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
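
To illustrate the division of labor in a VQE, here is a toy version simulated entirely classically: a numpy state vector stands in for the quantum processor, and a classical optimizer tunes the circuit parameter. The one-qubit “Hamiltonian” and single-parameter ansatz are invented for illustration, not a real molecule.

```python
# A toy variational quantum eigensolver (VQE) loop, simulated classically.
import numpy as np
from scipy.optimize import minimize_scalar

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X  # illustrative one-qubit "molecular" Hamiltonian

def ansatz(theta):
    # One-parameter trial state: Ry(theta) applied to |0>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # The quantity the hardware would estimate by repeated measurement.
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical optimizer proposes parameters; the "device" returns energies.
result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print(f"VQE energy:   {result.fun:.4f}")
print(f"exact ground: {np.linalg.eigvalsh(H)[0]:.4f}")  # -1.1180
```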

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just — or even mainly — how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched — if the switching time is too slow relative to the decoherence time, it really doesn’t matter how many qubits you have at your disposal. The number of sequential layers of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether shallow circuits can be used to perform useful calculations.
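
A back-of-envelope calculation shows why depth is the binding constraint. Both figures below are illustrative assumptions in the ranges quoted for superconducting devices of this era, not the specs of any particular machine.

```python
# Rough depth budget: how many sequential gate layers fit inside a
# coherence window? Numbers are illustrative assumptions.
decoherence_time = 5e-6   # a few microseconds of qubit coherence
gate_time = 50e-9         # tens of nanoseconds per gate layer

depth_budget = decoherence_time / gate_time
print(f"roughly {depth_budget:.0f} sequential gate layers")  # ~100
```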

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
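
One simplified formulation that circulated around IBM's 2017 proposal estimates the largest “square” circuit — equal width and depth — a device can run before gate errors swamp the result; treat the exact formula below as an assumption, since the metric has since been refined.

```python
# A sketch of the quantum-volume idea: with n active qubits and an
# effective two-qubit error rate eps, achievable depth is roughly
# 1 / (n * eps). Quantum volume rewards the best square circuit.
def quantum_volume(n_qubits, eps):
    best = 0.0
    for n in range(1, n_qubits + 1):
        achievable_depth = 1.0 / (n * eps)
        best = max(best, min(n, achievable_depth) ** 2)
    return best

# More qubits with worse gates can yield LESS quantum volume:
print(quantum_volume(50, 0.01))   # 50 noisy qubits   -> 100.0
print(quantum_volume(20, 0.001))  # 20 cleaner qubits -> 400.0
```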

This is one reason why the much-vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert — it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q Experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

Quantum Algorithms Struggle Against Old Foe: Clever Computers


The quest for “quantum supremacy” – unambiguous proof that a quantum computer does something faster than an ordinary computer – has paradoxically led to a boom in quasi-quantum classical algorithms.

For Cristian Calude, doubt began with a puzzle so simple, he said, that “even a child can understand it.” Here it is: Suppose you have a mysterious box that takes one of two possible inputs — you can press a red button or a blue button, say — and gives back one of two possible outputs — a red ball or a blue ball. If the box always returns the same color ball no matter what, it’s said to be constant; if the color of the ball changes with the color of the button, it’s balanced. Your assignment is to determine which type of box you’ve got by asking it to perform its secret act only once.

At first glance, the task might seem hopeless. Indeed, when the physicist David Deutsch described this thought experiment in 1985, computer scientists believed that no machine operating by the rules of classical physics could learn the box’s identity with fewer than two queries: one for each input.

Deutsch, however, found that by translating the problem into the strange language of quantum mechanics, he could in fact achieve a one-query solution. He proposed a simple five-step algorithm that could run on a quantum computer of just two qubits — the basic units of quantum information. (Experimentalists wouldn’t build an actual quantum machine capable of running the algorithm until 1998.)
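
Deutsch's circuit is small enough to simulate directly. Below is a numpy sketch of the one-query algorithm: the oracle maps |x, y⟩ to |x, y XOR f(x)⟩, and measuring the first qubit distinguishes constant from balanced functions in a single query.

```python
# A numpy simulation of Deutsch's one-query algorithm on two qubits.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def oracle(f):
    # 4x4 unitary implementing U_f |x, y> = |x, y XOR f(x)>
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])   # start in |0>|1>
    state = np.kron(H, H) @ state     # Hadamard both qubits
    state = oracle(f) @ state         # one oracle query
    state = np.kron(H, I) @ state     # Hadamard the first qubit
    p1 = state[2] ** 2 + state[3] ** 2  # chance first qubit reads 1
    return "balanced" if p1 > 0.5 else "constant"

for name, f in [("f=0", lambda x: 0), ("f=1", lambda x: 1),
                ("f=x", lambda x: x), ("f=not x", lambda x: 1 - x)]:
    print(name, "->", deutsch(f))
```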

Although it has no practical use, Deutsch’s algorithm — the first quantum algorithm — became a ubiquitous illustration of the inimitable power of quantum computation, which might one day transform such fields as cryptography, drug discovery and materials engineering. “If you open a textbook in quantum computing written before the last 10 years or so, it will start with this example,” said Calude, a mathematician and computer scientist at the University of Auckland in New Zealand. “It appeared everywhere.”

But something bothered him. If Deutsch’s algorithm were truly superior, as the early textbooks claimed, no classical algorithm of comparable ability could exist. Was that really true? “I’m a mathematician — I am by training an unbeliever,” Calude said. “When I see a claim like this, I start thinking: How do I prove it?”

He couldn’t. Instead, he showed it was false. In a 2007 paper, he broke down Deutsch’s algorithm into its constituent quantum parts (for instance, the ability to represent two classical bits as a “superposition” of both at once) and sidestepped these instructions with classical operations — a process Calude calls “de-quantization.” In this way, he constructed an elegant classical solution to Deutsch’s black-box riddle. The quantum solution, it turned out, wasn’t always better after all.

And Calude’s story is not unique. Paradoxically, reports of powerful quantum computations are motivating improvements to classical ones, making it harder for quantum machines to gain an advantage. “Most of the time when people talk about quantum computing, classical computing is dismissed, like something that is past its prime,” Calude said. “But that is not the case. This is an ongoing competition.”

Conceiving algorithms that promised clear computational benefits proved difficult, however. By the early 2000s, mathematicians had come up with only a few good candidates. Most famously, in 1994, a young staffer at Bell Laboratories named Peter Shor proposed a quantum algorithm that factors integers exponentially faster than any known classical algorithm — an efficiency that could allow it to crack many popular encryption schemes. Two years later, Shor’s Bell Labs colleague Lov Grover devised an algorithm that speeds up the classically tedious process of searching through unsorted databases. “There were a variety of examples that indicated quantum computing power should be greater than classical,” said Richard Jozsa, a quantum information scientist at the University of Cambridge who, in 1992, helped Deutsch extend his black-box problem to boxes that have many possible inputs, for which the quantum solution remains unrivaled.
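
The heart of Shor's proposal is a classical reduction from factoring to period finding; only the period finding needs a quantum computer. The sketch below runs that reduction on a toy number, with classical brute force standing in for the quantum subroutine.

```python
# The number-theoretic core of Shor's algorithm, run classically for a
# toy case. The quantum speedup comes entirely from finding the period r
# of a^k mod N; here brute force stands in for the quantum step.
from math import gcd

def shor_core(N, a):
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # a lucky guess already factors N
    # Quantum step (brute-forced here): find the order r of a mod N.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2:
        return None               # odd period: retry with another a
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    if p > 1 and q > 1 and p * q == N:
        return p, q
    return None

print(shor_core(15, 7))  # (3, 5): the period of 7 mod 15 is 4
```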

But Jozsa, along with Calude and others, would also discover a variety of examples that, like Deutsch’s original algorithm, indicated just the opposite. “It turns out that many beautiful quantum processes look like they should be complicated” and therefore hard to simulate on a classical computer, Jozsa said. “But with clever, subtle mathematical techniques, you can figure out what they will do.” He and his colleagues found that they could use these techniques to efficiently simulate — or de-quantize, as Calude would say — a surprising number of quantum circuits. For instance, circuits that omit entanglement fall into this trap, as do those that entangle only a limited number of qubits or use only certain kinds of entangling gates.

What, then, guarantees that an algorithm like Shor’s is uniquely powerful? “That’s very much an open question,” Jozsa said. “We never really succeeded in understanding why some [algorithms] are easy to simulate classically and others are not. Clearly entanglement is important, but it’s not the end of the story.” Experts began to wonder whether many of the quantum algorithms that they believed were superior might turn out to be only ordinary.

Sampling Struggle

Until recently, the pursuit of quantum power was largely an abstract one. “We weren’t really concerned with implementing our algorithms because nobody believed that in the reasonable future we’d have a quantum computer to do it,” Jozsa said. Running Shor’s algorithm on integers large enough to break a standard RSA encryption key, for instance, would require thousands of qubits — plus probably many thousands more to correct for errors. Experimentalists, meanwhile, were fumbling while trying to control more than a handful.

But by 2011, things were starting to look up. That fall, at a conference in Brussels, Preskill speculated that “the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world” might not be far off. Recent laboratory results, he said, could soon lead to quantum machines on the order of 100 qubits. Getting them to pull off some “super-classical” feat maybe wasn’t out of the question. (Although D-Wave Systems’ commercial quantum processors could by then wrangle 128 qubits and now boast more than 2,000, they tackle only specific optimization problems; many experts doubt they can outperform classical computers.)

“I was just trying to emphasize we were getting close — that we might finally reach a real milestone in human civilization where quantum technology becomes the most powerful information technology that we have,” Preskill said. He called this milestone “quantum supremacy.” The name — and the optimism — stuck. “It took off to an extent I didn’t suspect.”

The buzz about quantum supremacy reflected a growing excitement in the field — over experimental progress, yes, but perhaps more so over a series of theoretical breakthroughs that began with a 2004 paper by the IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand quantum assets, the pair had turned their attention to rudimentary quantum puzzles known as sampling problems. In time, this class of problems would become experimentalists’ greatest hope for demonstrating an unambiguous speedup on early quantum machines.

Sampling problems exploit the elusive nature of quantum information. Say you apply a sequence of gates to 100 qubits. This circuit may whip the qubits into a mathematical monstrosity equivalent to something on the order of 2^100 classical bits. But once you measure the system, its complexity collapses to a string of only 100 bits. The system will spit out a particular string — or sample — with some probability determined by your circuit.

In a sampling problem, the goal is to produce a series of samples that look as though they came from this circuit. It’s like repeatedly tossing a coin to show that it will (on average) come up 50 percent heads and 50 percent tails. Except here, the outcome of each “toss” isn’t a single value — heads or tails — it’s a string of many values, each of which may be influenced by some (or even all) of the other values.

For a well-oiled quantum computer, this exercise is a no-brainer. It’s what it does naturally. Classical computers, on the other hand, seem to have a tougher time. In the worst circumstances, they must do the unwieldy work of computing probabilities for all possible output strings — all 2^100 of them — and then randomly select samples from that distribution. “People always conjectured this was the case,” particularly for very complex quantum circuits, said Ashley Montanaro, an expert in quantum algorithms at the University of Bristol.
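
In miniature, that brute-force classical strategy looks like the sketch below, where a random state vector stands in for the output of a random circuit. The 2^n memory and time cost is what makes circuits of around 50 qubits so painful for supercomputers.

```python
# Brute-force classical sampling from an n-qubit state, in miniature.
import numpy as np

rng = np.random.default_rng(0)
n = 10                       # raise this and watch 2^n blow up
dim = 2 ** n

# A random normalized state stands in for a random circuit's output.
amplitudes = rng.normal(size=dim) + 1j * rng.normal(size=dim)
amplitudes /= np.linalg.norm(amplitudes)

probs = np.abs(amplitudes) ** 2          # all 2^n outcome probabilities
samples = rng.choice(dim, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])  # five sampled bit strings
```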

Terhal and DiVincenzo showed that even some simple quantum circuits should still be hard to sample by classical means. Hence, a bar was set. If experimentalists could get a quantum system to spit out these samples, they would have good reason to believe that they’d done something classically unmatchable.

Theorists soon expanded this line of thought to include other sorts of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In work posted on the scientific preprint site arxiv.org in 2010, they described a quantum machine that sends photons through an optical circuit, which shifts and splits the light in quantum-mechanical ways, thereby generating output patterns with specific probabilities. Reproducing these patterns became known as boson sampling. Aaronson and Arkhipov reasoned that boson sampling would start to strain classical resources at around 30 photons — a plausible experimental target.
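
The classical hardness here has a concrete source: each boson-sampling output probability is governed by the permanent of a submatrix of the optical circuit, and the best known exact methods for permanents, such as Ryser's formula sketched below, take time exponential in the number of photons.

```python
# Why boson sampling strains classical machines: computing a matrix
# permanent exactly takes time exponential in n (Ryser's formula).
import numpy as np
from itertools import combinations

def permanent(A):
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # Product over rows of the sum of the chosen columns.
            total += (-1) ** k * np.prod(A[:, list(cols)].sum(axis=1))
    return (-1) ** n * total

A = np.ones((4, 4))      # permanent of the all-ones n x n matrix is n!
print(permanent(A))      # 24.0
```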

Similarly enticing were computations called instantaneous quantum polynomial, or IQP, circuits. An IQP circuit has gates that all commute, meaning they can act in any order without changing the outcome — in the same way 2 + 5 = 5 + 2. This quality makes IQP circuits mathematically pleasing. “We started studying them because they were easier to analyze,” Bremner said. But he discovered that they have other merits. In work that began in 2010 and culminated in a 2016 paper with Montanaro and Dan Shepherd, now at the National Cyber Security Centre in the U.K., Bremner explained why IQP circuits can be extremely powerful: Even for physically realistic systems of hundreds — or perhaps even dozens — of qubits, sampling would quickly become a classically thorny problem.
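
Concretely, IQP gates are all diagonal in the same basis, and diagonal matrices always commute. A two-gate check, with illustrative phase values:

```python
# Two commuting two-qubit gates: a phase gate on the second qubit and a
# controlled-Z. Both are diagonal, so their order never matters.
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4), 1, np.exp(1j * np.pi / 4)])
CZ = np.diag([1, 1, 1, -1])

print(np.allclose(T @ CZ, CZ @ T))  # True
```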

By 2016, boson samplers had yet to extend beyond 6 photons. Teams at Google and IBM, however, were verging on chips nearing 50 qubits; that August, Google quietly posted a draft paper laying out a road map for demonstrating quantum supremacy on these “near-term” devices.

Google’s team had considered sampling from an IQP circuit. But a closer look by Bremner and his collaborators suggested that the circuit would likely need some error correction — which would require extra gates and at least a couple hundred extra qubits — in order to unequivocally hamstring the best classical algorithms. So instead, the team used arguments akin to Aaronson’s and Bremner’s to show that circuits made of non-commuting gates, although likely harder to build and analyze than IQP circuits, would also be harder for a classical device to simulate. To make the classical computation even more challenging, the team proposed sampling from a circuit chosen at random. That way, classical competitors would be unable to exploit any familiar features of the circuit’s structure to better guess its behavior.

But there was nothing to stop the classical algorithms from getting more resourceful. In fact, in October 2017, a team at IBM showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits on as many as 56 qubits — provided the circuits don’t involve too much depth (layers of gates). Similarly, a more able algorithm has recently nudged the classical limits of boson sampling, to around 50 photons.

These upgrades, however, are still dreadfully inefficient. IBM’s simulation, for instance, took two days to do what a quantum computer is expected to do in less than one-tenth of a millisecond. Add a couple more qubits — or a little more depth — and quantum contenders could slip freely into supremacy territory. “Generally speaking, when it comes to emulating highly entangled systems, there has not been a [classical] breakthrough that has really changed the game,” Preskill said. “We’re just nibbling at the boundary rather than exploding it.”

That’s not to say there will be a clear victory. “Where the frontier is is a thing people will continue to debate,” Bremner said. Imagine this scenario: Researchers sample from a 50-qubit circuit of some depth — or maybe a slightly larger one of less depth — and claim supremacy. But the circuit is pretty noisy — the qubits are misbehaving, or the gates don’t work that well. So then some crackerjack classical theorists swoop in and simulate the quantum circuit, no sweat, because “with noise, things you think are hard become not so hard from a classical point of view,” Bremner explained. “Probably that will happen.”

What’s more certain is that the first “supreme” quantum machines, if and when they arrive, aren’t going to be cracking encryption codes or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” Montanaro said. “The first wave of problems we solve are ones for which we don’t really care about the answers.”

Yet these early wins, however small, will assure scientists that they are on the right track — that a new regime of computation really is possible. Then it’s anyone’s guess what the next wave of problems will be.

Job One for Quantum Computers: Boost Artificial Intelligence


The fusion of quantum computing and machine learning has become a booming research area. Can it possibly live up to its high expectations?

In the early ’90s, Elizabeth Behrman, a physics professor at Wichita State University, began working to combine quantum physics with artificial intelligence — in particular, the then-maverick technology of neural networks. Most people thought she was mixing oil and water. “I had a heck of a time getting published,” she recalled. “The neural-network journals would say, ‘What is this quantum mechanics?’ and the physics journals would say, ‘What is this neural-network garbage?’”

Today the mashup of the two seems the most natural thing in the world. Neural networks and other machine-learning systems have become the most disruptive technology of the 21st century. They out-human humans, beating us not just at tasks most of us were never really good at, such as chess and data-mining, but also at the very types of things our brains evolved for, such as recognizing faces, translating languages and negotiating four-way stops. These systems have been made possible by vast computing power, so it was inevitable that tech companies would seek out computers that were not just bigger, but a new class of machine altogether.

Quantum computers, after decades of research, have nearly enough oomph to perform calculations beyond any other computer on Earth. Their killer app is usually said to be factoring large numbers, which are the key to modern encryption. That’s still another decade off, at least. But even today’s rudimentary quantum processors are uncannily matched to the needs of machine learning. They manipulate vast arrays of data in a single step, pick out subtle patterns that classical computers are blind to, and don’t choke on incomplete or uncertain data. “There is a natural combination between the intrinsic statistical nature of quantum computing … and machine learning,” said Johannes Otterbach, a physicist at Rigetti Computing, a quantum-computer company in Berkeley, California.

If anything, the pendulum has now swung to the other extreme. Google, Microsoft, IBM and other tech giants are pouring money into quantum machine learning, and a startup incubator at the University of Toronto is devoted to it. “‘Machine learning’ is becoming a buzzword,” said Jacob Biamonte, a quantum physicist at the Skolkovo Institute of Science and Technology in Moscow. “When you mix that with ‘quantum,’ it becomes a mega-buzzword.”

Yet nothing with the word “quantum” in it is ever quite what it seems. Although you might think a quantum machine-learning system should be powerful, it suffers from a kind of locked-in syndrome. It operates on quantum states, not on human-readable data, and translating between the two can negate its apparent advantages. It’s like an iPhone X that, for all its impressive specs, ends up being just as slow as your old phone, because your network is as awful as ever. For a few special cases, physicists can overcome this input-output bottleneck, but whether those cases arise in practical machine-learning tasks is still unknown. “We don’t have clear answers yet,” said Scott Aaronson, a computer scientist at the University of Texas, Austin, who is always the voice of sobriety when it comes to quantum computing. “People have often been very cavalier about whether these algorithms give a speedup.”

Quantum Neurons

The main job of a neural network, be it classical or quantum, is to recognize patterns. Inspired by the human brain, it is a grid of basic computing units — the “neurons.” Each can be as simple as an on-off device. A neuron monitors the output of multiple other neurons, as if taking a vote, and switches on if enough of them are on. Typically, the neurons are arranged in layers. An initial layer accepts input (such as image pixels), intermediate layers create various combinations of the input (representing structures such as edges and geometric shapes) and a final layer produces output (a high-level description of the image content).

Crucially, the wiring is not fixed in advance, but adapts in a process of trial and error. The network might be fed images labeled “kitten” or “puppy.” For each image, it assigns a label, checks whether it was right, and tweaks the neuronal connections if not. Its guesses are random at first, but get better; after perhaps 10,000 examples, it knows its pets. A serious neural network can have a billion interconnections, all of which need to be tuned.
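
That trial-and-error loop fits in a few lines. Below is a minimal classical neuron of the kind just described; the toy data, labels and learning rate are invented for illustration.

```python
# A single classical neuron: take a weighted "vote" of the inputs,
# guess, and nudge the weights after each mistake (perceptron rule).
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 2-feature points, labeled 1 when the features sum above 1.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(int)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for xi, yi in zip(X, y):
        guess = int(w @ xi + b > 0)   # the neuron "votes"
        error = yi - guess
        w += lr * error * xi          # tweak the connections if wrong
        b += lr * error

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"training accuracy: {accuracy:.2f}")
```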

On a classical computer, all these interconnections are represented by a ginormous matrix of numbers, and running the network means doing matrix algebra. Conventionally, these matrix operations are outsourced to a specialized chip such as a graphics processing unit. But nothing does matrices like a quantum computer. “Manipulation of large matrices and large vectors are exponentially faster on a quantum computer,” said Seth Lloyd, a physicist at the Massachusetts Institute of Technology and a quantum-computing pioneer.

For this task, quantum computers are able to take advantage of the exponential nature of a quantum system. The vast bulk of a quantum system’s information storage capacity resides not in its individual data units — its qubits, the quantum counterpart of classical computer bits — but in the collective properties of those qubits. Two qubits have four joint states: both on, both off, on/off, and off/on. Each has a certain weighting, or “amplitude,” that can represent a neuron. If you add a third qubit, you can represent eight neurons; a fourth, 16. The capacity of the machine grows exponentially. In effect, the neurons are smeared out over the entire system. When you act on a state of four qubits, you are processing 16 numbers at a stroke, whereas a classical computer would have to go through those numbers one by one.
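
The bookkeeping is easy to show: four qubits correspond to a state vector of 2^4 = 16 amplitudes, and applying even a single gate transforms the whole vector in one matrix-vector product. Classically we pay for every entry; the quantum device holds them natively.

```python
# "16 numbers at a stroke": one gate on a four-qubit register updates
# all 2^4 = 16 amplitudes at once.
import numpy as np

n = 4
state = np.zeros(2 ** n)
state[0] = 1.0                                  # |0000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
U = np.kron(H, np.eye(2 ** (n - 1)))            # H on the first qubit only

state = U @ state                               # one stroke, 16 numbers
print(state.round(3))   # equal superposition of |0000> and |1000>
```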

Lloyd estimates that 60 qubits would be enough to encode an amount of data equivalent to that produced by humanity in a year, and 300 could carry the classical information content of the observable universe. (The biggest quantum computers at the moment, built by IBM, Intel and Google, have 50-ish qubits.) And that’s assuming each amplitude is just a single classical bit. In fact, amplitudes are continuous quantities (and, indeed, complex numbers) and, for a plausible experimental precision, one might store as many as 15 bits, Aaronson said.

But a quantum computer’s ability to store information compactly doesn’t make it faster. You need to be able to use those qubits. In 2008, Lloyd, the physicist Aram Harrow of MIT and Avinatan Hassidim, a computer scientist at Bar-Ilan University in Israel, showed how to do the crucial algebraic operation of inverting a matrix. They broke it down into a sequence of logic operations that can be executed on a quantum computer. Their algorithm works for a huge variety of machine-learning techniques. And it doesn’t require nearly as many algorithmic steps as, say, factoring a large number does. A computer could zip through a classification task before noise — the big limiting factor with today’s technology — has a chance to foul it up. “You might have a quantum advantage before you have a fully universal, fault-tolerant quantum computer,” said Kristan Temme of IBM’s Thomas J. Watson Research Center.

Let Nature Solve the Problem

So far, though, machine learning based on quantum matrix algebra has been demonstrated only on machines with just four qubits. Most of the experimental successes of quantum machine learning to date have taken a different approach, in which the quantum system does not merely simulate the network; it is the network. Each qubit stands for one neuron. Though it gives up the exponential compactness of storing neurons in amplitudes, a device like this can avail itself of other features of quantum physics.

The largest such device, with some 2,000 qubits, is the quantum processor manufactured by D-Wave Systems, based near Vancouver, British Columbia. It is not what most people think of as a computer. Instead of starting with some input data, executing a series of operations and displaying the output, it works by finding internal consistency. Each of its qubits is a superconducting electric loop that acts as a tiny electromagnet oriented up, down, or up and down — a superposition. Qubits are “wired” together by allowing them to interact magnetically.

Processors made by D-Wave Systems are being used for machine learning applications.

To run the system, you first impose a horizontal magnetic field, which initializes the qubits to an equal superposition of up and down — the equivalent of a blank slate. There are a couple of ways to enter data. In some cases, you fix a layer of qubits to the desired input values; more often, you incorporate the input into the strength of the interactions. Then you let the qubits interact. Some seek to align in the same direction, some in the opposite direction, and under the influence of the horizontal field, they flip to their preferred orientation. In so doing, they might trigger other qubits to flip. Initially that happens a lot, since so many of them are misaligned. Over time, though, they settle down, and you can turn off the horizontal field to lock them in place. At that point, the qubits are in a pattern of up and down that ensures the output follows from the input.

It’s not at all obvious what the final arrangement of qubits will be, and that’s the point. The system, just by doing what comes naturally, is solving a problem that an ordinary computer would struggle with. “We don’t need an algorithm,” explained Hidetoshi Nishimori, a physicist at the Tokyo Institute of Technology who developed the principles on which D-Wave machines operate. “It’s completely different from conventional programming. Nature solves the problem.”

The qubit-flipping is driven by quantum tunneling, a natural tendency that quantum systems have to seek out their optimal configuration, rather than settle for second best. You could build a classical network that worked on analogous principles, using random jiggling rather than tunneling to get bits to flip, and in some cases it would actually work better. But, interestingly, for the types of problems that arise in machine learning, the quantum network seems to reach the optimum faster.
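
Such a classical analogue is easy to sketch: simulated annealing on a small, randomly coupled Ising network, with thermal “jiggling” in place of tunneling. All parameters below are arbitrary illustrative choices.

```python
# Simulated annealing on a toy Ising network: random flips, accepted
# more freely at high temperature, settle into a low-energy pattern as
# the system cools. (A D-Wave-style device uses tunneling instead.)
import numpy as np

rng = np.random.default_rng(3)
n = 8
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                 # symmetric couplings
np.fill_diagonal(J, 0)
spins = rng.choice([-1, 1], size=n)

def energy(s):
    return -0.5 * s @ J @ s

T = 2.0
for step in range(5000):
    i = rng.integers(n)
    delta = 2 * spins[i] * (J[i] @ spins)   # energy cost of flipping spin i
    if delta < 0 or rng.random() < np.exp(-delta / T):
        spins[i] *= -1                      # accept the flip
    T *= 0.999                              # cool gradually

print("final spins:", spins, " energy:", round(energy(spins), 3))
```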

The D-Wave machine has had its detractors. It is extremely noisy and, in its current incarnation, can perform only a limited menu of operations. Machine-learning algorithms, though, are noise-tolerant by their very nature. They’re useful precisely because they can make sense of a messy reality, sorting kittens from puppies against a backdrop of red herrings. “Neural networks are famously robust to noise,” Behrman said.

In 2009 a team led by Hartmut Neven, a computer scientist at Google who pioneered augmented reality — he co-founded the Google Glass project — and then took up quantum information processing, showed how an early D-Wave machine could do a respectable machine-learning task. They used it as, essentially, a single-layer neural network that sorted images into two classes: “car” or “no car” in a library of 20,000 street scenes. The machine had only 52 working qubits, far too few to take in a whole image. (Remember: the D-Wave machine is of a very different type from the state-of-the-art 50-qubit systems coming online in 2018.) So Neven’s team combined the machine with a classical computer, which analyzed various statistical quantities of the images and calculated how sensitive these quantities were to the presence of a car — usually not very, but at least better than a coin flip. Some combination of these quantities could, together, spot a car reliably, but it wasn’t obvious which. It was the network’s job to find out.

The team assigned a qubit to each quantity. If that qubit settled into a value of 1, it flagged the corresponding quantity as useful; 0 meant don’t bother. The qubits’ magnetic interactions encoded the demands of the problem, such as including only the most discriminating quantities, so as to keep the final selection as compact as possible. The result was able to spot a car.

Last year a group led by Maria Spiropulu, a particle physicist at the California Institute of Technology, and Daniel Lidar, a physicist at USC, applied the algorithm to a practical physics problem: classifying proton collisions as “Higgs boson” or “no Higgs boson.” Limiting their attention to collisions that spat out photons, they used basic particle theory to predict which photon properties might betray the fleeting existence of the Higgs, such as momentum in excess of some threshold. They considered eight such properties and 28 combinations thereof, for a total of 36 candidate signals, and let a late-model D-Wave at the University of Southern California find the optimal selection. It identified 16 of the variables as useful and three as the absolute best. The quantum machine needed less data than standard procedures to perform an accurate identification. “Provided that the training set was small, then the quantum approach did provide an accuracy advantage over traditional methods used in the high-energy physics community,” Lidar said.

Maria Spiropulu, a physicist at the California Institute of Technology, used quantum machine learning to find Higgs bosons.

In December, Rigetti demonstrated a way to automatically group objects using a general-purpose quantum computer with 19 qubits. The researchers did the equivalent of feeding the machine a list of cities and the distances between them, and asked it to sort the cities into two geographic regions. What makes this problem hard is that the designation of one city depends on the designation of all the others, so you have to solve the whole system at once.

The Rigetti team effectively assigned each city a qubit, indicating which group it was assigned to. Through the interactions of the qubits (which, in Rigetti’s system, are electrical rather than magnetic), each pair of qubits sought to take on opposite values — their energy was minimized when they did so. Clearly, for any system with more than two qubits, some pairs of qubits had to consent to be assigned to the same group. Nearby cities assented more readily since the energetic cost for them to be in the same group was lower than for more-distant cities.

To drive the system to its lowest energy, the Rigetti team took an approach similar in some ways to the D-Wave annealer. They initialized the qubits to a superposition of all possible cluster assignments. They allowed qubits to interact briefly, which biased them toward assuming the same or opposite values. Then they applied the analogue of a horizontal magnetic field, allowing the qubits to flip if they were so inclined, pushing the system a little way toward its lowest-energy state. They repeated this two-step process — interact then flip — until the system minimized its energy, thus sorting the cities into two distinct regions.
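
Stripped of the hardware, the task is a small energy minimization. In the sketch below (with invented coordinates), assigning two cities to the same region costs energy that grows with their separation, as in the description above, so the lowest-energy labeling groups neighbors together. Here all 2^6 assignments are simply enumerated rather than found by qubit dynamics.

```python
# The clustering task as brute-force energy minimization over labels.
import numpy as np
from itertools import product

cities = np.array([[0, 0], [1, 0], [0, 1],     # a western cluster
                   [8, 8], [9, 8], [8, 9]])    # an eastern cluster
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)

def cost(assign):
    # Penalty for each pair sharing a region, weighted by distance.
    same = np.equal.outer(assign, assign)
    return (dist * same).sum()

best = min(product([-1, 1], repeat=len(cities)), key=cost)
print("regions:", best)   # e.g. (1, 1, 1, -1, -1, -1)
```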

These classification tasks are useful but straightforward. The real frontier of machine learning is in generative models, which do not simply recognize puppies and kittens, but can generate novel archetypes — animals that never existed, but are every bit as cute as those that did. They might even figure out the categories of “kitten” and “puppy” on their own, or reconstruct images missing a tail or paw. “These techniques are very powerful and very useful in machine learning, but they are very hard,” said Mohammad Amin, the chief scientist at D-Wave. A quantum assist would be most welcome.

D-Wave and other research teams have taken on this challenge. Training such a model means tuning the magnetic or electrical interactions among qubits so the network can reproduce some sample data. To do this, you combine the network with an ordinary computer. The network does the heavy lifting — figuring out what a given choice of interactions means for the final network configuration — and its partner computer uses this information to adjust the interactions. In one demonstration last year, Alejandro Perdomo-Ortiz, a researcher at NASA’s Quantum Artificial Intelligence Lab, and his team exposed a D-Wave system to images of handwritten digits. It discerned that there were 10 categories, matching the digits 0 through 9, and generated its own scrawled numbers.

Bottlenecks Into the Tunnels

Well, that’s the good news. The bad is that it doesn’t much matter how awesome your processor is if you can’t get your data into it. In matrix-algebra algorithms, a single operation may manipulate a matrix of 16 numbers, but it still takes 16 operations to load the matrix. “State preparation — putting classical data into a quantum state — is completely shunned, and I think this is one of the most important parts,” said Maria Schuld, a researcher at the quantum-computing startup Xanadu and one of the first people to receive a doctorate in quantum machine learning. Machine-learning systems that are laid out in physical form face parallel difficulties of how to embed a problem in a network of qubits and get the qubits to interact as they should.

Once you do manage to enter your data, you need to store it in such a way that a quantum system can interact with it without collapsing the ongoing calculation. Lloyd and his colleagues have proposed a quantum RAM that uses photons, but no one has an analogous contraption for superconducting qubits or trapped ions, the technologies found in the leading quantum computers. “That’s an additional huge technological problem beyond the problem of building a quantum computer itself,” Aaronson said. “The impression I get from the experimentalists I talk to is that they are frightened. They have no idea how to begin to build this.”

And finally, how do you get your data out? That means measuring the quantum state of the machine, and not only does a measurement return only a single number at a time, drawn at random, it collapses the whole state, wiping out the rest of the data before you even have a chance to retrieve it. You’d have to run the algorithm over and over again to extract all the information.

Yet all is not lost. For some types of problems, you can exploit quantum interference. That is, you can choreograph the operations so that wrong answers cancel themselves out and right ones reinforce themselves; that way, when you go to measure the quantum state, it won’t give you just any random value, but the desired answer. But only a few algorithms, such as brute-force search, can make good use of interference, and the speedup is usually modest.

In some cases, researchers have found shortcuts to getting data in and out. In 2015 Lloyd, Silvano Garnerone of the University of Waterloo in Canada, and Paolo Zanardi at USC showed that, for some kinds of statistical analysis, you don’t need to enter or store the entire data set. Likewise, you don’t need to read out all the data when a few key values would suffice. For instance, tech companies use machine learning to suggest shows to watch or things to buy based on a humongous matrix of consumer habits. “If you’re Netflix or Amazon or whatever, you don’t actually need the matrix written down anywhere,” Aaronson said. “What you really need is just to generate recommendations for a user.”

All this invites the question: If a quantum machine is powerful only in special cases, might a classical machine also be powerful in those cases? This is the major unresolved question of the field. Ordinary computers are, after all, extremely capable. The usual method of choice for handling large data sets — random sampling — is actually very similar in spirit to a quantum computer, which, whatever may go on inside it, ends up returning a random result. Schuld remarked: “I’ve done a lot of algorithms where I felt, ‘This is amazing. We’ve got this speedup,’ and then I actually, just for fun, write a sampling technique for a classical computer, and I realize you can do the same thing with sampling.”

If you look back at the successes that quantum machine learning has had so far, they all come with asterisks. Take the D-Wave machine. When classifying car images and Higgs bosons, it was no faster than a classical machine. “One of the things we do not talk about in this paper is quantum speedup,” said Alex Mott, a computer scientist at Google DeepMind who was a member of the Higgs research team. Matrix-algebra approaches such as the Harrow-Hassidim-Lloyd algorithm show a speedup only if the matrices are sparse — mostly filled with zeroes. “No one ever asks, are sparse data sets actually interesting in machine learning?” Schuld noted.

Quantum Intelligence

On the other hand, even the occasional incremental improvement over existing techniques would make tech companies happy. “These advantages that you end up seeing, they’re modest; they’re not exponential, but they are quadratic,” said Nathan Wiebe, a quantum-computing researcher at Microsoft Research. “Given a big enough and fast enough quantum computer, we could revolutionize many areas of machine learning.” And in the course of using the systems, computer scientists might solve the theoretical puzzle of whether they are inherently faster, and for what.

Schuld also sees scope for innovation on the software side. Machine learning is more than a bunch of calculations. It is a complex of problems that have their own particular structure. “The algorithms that people construct are removed from the things that make machine learning interesting and beautiful,” she said. “This is why I started to work the other way around and think: If I have this quantum computer already — these small-scale ones — what machine-learning model can it actually implement? Maybe it is a model that has not been invented yet.” If physicists want to impress machine-learning experts, they’ll need to do more than just make quantum versions of existing models.

Just as many neuroscientists now think that the structure of human thought reflects the requirements of having a body, so, too, are machine-learning systems embodied. The images, language and most other data that flow through them come from the physical world and reflect its qualities. Quantum machine learning is similarly embodied — but in a richer world than ours. The one area where it will undoubtedly shine is in processing data that is already quantum. When the data is not an image, but the product of a physics or chemistry experiment, the quantum machine will be in its element. The input problem goes away, and classical computers are left in the dust.

In a neatly self-referential loop, the first quantum machine-learning systems may help to design their successors. “One way we might actually want to use these systems is to build quantum computers themselves,” Wiebe said. “For some debugging tasks, it’s the only approach that we have.” Maybe they could even debug us. Leaving aside whether the human brain is a quantum computer — a highly contentious question — it sometimes acts as if it were one. Human behavior is notoriously contextual; our preferences are formed by the choices we are given, in ways that defy logic. In this, we are like quantum particles. “The way you ask questions and the ordering matters, and that is something that is very typical in quantum data sets,” Perdomo-Ortiz said. So a quantum machine-learning system might be a natural way to study human cognitive biases.

Neural networks and quantum processors have one thing in common: It is amazing they work at all. It was never obvious that you could train a network, and for decades most people doubted it would ever be possible. Likewise, it is not obvious that quantum physics could ever be harnessed for computation, since the distinctive effects of quantum physics are so well hidden from us. And yet both work — not always, but more often than we had any right to expect. On this precedent, it seems likely that their union will also find its place.

Quantum Computing Without Qubits


A quantum computing pioneer explains why analog simulators may beat out general-purpose digital quantum machines — for now.

Ivan Deutsch is building quantum computers out of base-16 “qudits,” quantum information units that can occupy any of d states rather than a qubit’s two.

For more than 20 years, Ivan H. Deutsch has struggled to design the guts of a working quantum computer. He has not been alone. The quest to harness the computational might of quantum weirdness continues to occupy hundreds of researchers around the world. Why hasn’t there been more to show for their work? As physicists have known since quantum computing’s beginnings, the same characteristics that make quantum computing exponentially powerful also make it devilishly difficult to control. The quantum computing “nightmare” has always been that a quantum computer’s advantages in speed would be wiped out by the machine’s complexity.

Yet progress is arriving on two main fronts. First, researchers are developing unique quantum error-correction techniques that will help keep quantum processors up and running for the time needed to complete a calculation. Second, physicists are working with so-called analog quantum simulators — machines that can’t act like a general-purpose computer, but rather are designed to explore specific problems in quantum physics. A classical computer would have to run for thousands of years to compute the quantum equations of motion for just 100 atoms. A quantum simulator could do it in less than a second.

Quanta Magazine spoke with Deutsch about recent progress in the field, his hopes for the near future, and his own work at the University of New Mexico’s Center for Quantum Information and Control on scaling up binary quantum bits into base-16 digits.

QUANTA MAGAZINE: Why would a universal quantum machine be so uniquely powerful?

IVAN DEUTSCH: In a classical computer, information is stored in retrievable bits binary coded as 0 or 1. But in a quantum computer, elementary particles inhabit a probabilistic limbo called superposition where a “qubit” can be coded as 0 and 1.

Here is the magic: Each qubit can be entangled with the other qubits in the machine. The intertwining of quantum “states” exponentially increases the number of 0s and 1s that can be simultaneously processed by an array of qubits. Machines that can harness the power of quantum logic can deal with exponentially greater levels of complexity than the most powerful classical computer. Problems that would take a state-of-the-art classical computer the age of our universe to solve can, in theory, be solved by a universal quantum computer in hours.

What is the quantum computing “nightmare”?

The same quantum effects that make a quantum computer so blazingly fast also make it incredibly difficult to operate. From the beginning, it has not been clear whether the exponential speedup provided by a quantum computer would be cancelled out by the exponential complexity needed to protect the system from crashing.

Is the situation hopeless?

Not at all. We now know that a universal quantum computer will not require exponential complexity in design. But it is still very hard.

So what’s the problem, and how do we get around it?

The hardware problem is that the superposition is so fragile that the random interaction of a single qubit with the molecules composing its immediate surroundings can cause the entire network of entangled qubits to delink or collapse. The ongoing calculation is destroyed as each qubit transforms into a digitized classical bit holding a single value: 0 or 1.

In classical computers, we reduce the inevitable loss of information by designing a lot of redundancy into the system. Error-correcting algorithms compare multiple copies of the output. They select the most frequent answer and discard the rest of the data as noise. We cannot do that with a quantum computer, because trying to directly compare qubits will crash the program. But we are gradually learning how to keep systems of entangled qubits from collapsing.

The major obstacle, to my mind, is creating error-correcting software that can keep data from being corrupted as the calculation proceeds toward the final readout. The great trick is to design and implement an algorithm that only measures the errors and not the data, thus preserving the superposition that contains the correct answer.

Will that end the nightmare?

It turns out that the error correction technique itself introduces errors. One of the most wonderful advances in quantum computing was recognizing that, in theory, we can correct the new errors without requiring 100 percent precision, allowing minor background noise to pollute the calculation as it rolls along. We cannot actually do this — yet. The main reason that we do not have a working universal quantum computer is that we are still experimenting with how to implant such a “fault-tolerant” algorithm into a quantum circuit. Right now we can control 10 qubits reasonably well. But there is no error-correcting technique, to my knowledge, capable of controlling the thousands of qubits needed to construct a universal machine.

Is that what you’re working on?

I study the information processing capabilities of trapped atoms. My colleague Poul Jessen at the University of Arizona and I are pushing the logical power beyond binary-based qubits. For example, what if we can control the superposition of an atom with, say, 16 different energy levels? Using base 16, we can then store what we call a “qudit” in a single atom. That would move us beyond the information processing speed obtainable by a base 2 system, the qubit.

What other options do we have?

There may be significant applications available for making non-universal machines: Special purpose, analog quantum simulators designed to solve specific problems, such as how room-temperature superconductors work or how a particular protein folds.

Are these actually computers?

They are not universal machines capable of solving any type of question. But say that I want to model global climate change. One way to do this is to write down a mathematical model and then solve the equations on a digital computer. That is typically what climate scientists do. Another way is to try to simulate some aspect of the earth’s climate in a controllable experiment. I can create a simple physical system that obeys the same laws of motion as the system I’m trying to model — mixing nitrogen, oxygen, and hydrogen in a tank, for example. What goes on inside the tank is a real-world computation that tells me something about atmospheric turbulence under certain conditions.

It is the same with an analog quantum simulator — I use one controllable physical system to simulate another. For example, successfully simulating a superconductor with such a device would reveal the quantum mechanics of high-temperature superconductivity. That could lead to the manufacture of non-brittle superconducting materials for many uses, including building less-fragile quantum circuits. Hopefully, we can learn how to build a robust universal digital computer by experimenting with analog simulators.

Has anyone built a working analog quantum simulator?

In 2002, a group at the Max Planck Institute in Germany built an optical lattice — a super-chilled egg carton made of light — and controlled it by pulsing different strengths of laser beams at it. This was a fundamentally analog device designed to obey quantum mechanical equations of motion. The short story is that it successfully simulated how atoms transition between acting as superfluids or insulators. That experiment has sparked a lot of research in analog quantum computing with optical lattices and cold atom traps.

What are the main challenges for these quantum simulators?

Because the evolution of the analog simulation is not digitized, the software cannot correct the tiny errors that accumulate during the calculation as we could error-correct noise on a universal machine. The analog device must keep a quantum superposition intact long enough for the simulation to run its course without resorting to digital error correction. This is a particular challenge for the analog approach to quantum simulation.

Is the D-Wave machine a quantum simulator?

The D-Wave prototype is not a universal quantum computer. It is not digital, nor error-correcting, nor fault tolerant. It is a purely analog machine designed to solve a particular optimization problem. It is unclear if it qualifies as a quantum device.

Will a scalable quantum computer be deployed during your lifetime?

We are pushing past the nightmare. Around the world, many university-based labs are working hard to remove or bypass the roadblock of fault tolerance. Academic researchers are leading the way, intellectually. For example, the groups of Rob Schoelkopf and Michel H. Devoret at Yale are taking superconducting technologies close to fault tolerance.

But constructing a working universal digital quantum computer will likely require mobilizing industrial-scale resources. To that end, IBM is exploring quantum computing with superconducting circuits with personnel largely from the Yale groups. Google is working with John Martinis’s lab at the University of California, Santa Barbara. HRL Laboratories is working on silicon-based quantum computing. Lockheed Martin is exploring ion traps. And who knows what the National Security Agency is up to.

But generally in academic labs, without these industrial-scale resources, scientists are focusing more and more on learning how to control analog quantum simulators. There is short-term fruit to be picked in that arena — both intellectually and in the currency of academics: publishable papers.

Are you willing to settle for analog?

I favor pursuing the digital approach full force. Before I die, I would love to see just one universal logical qubit that can be indefinitely error corrected. It would instantly be classified by the government, of course. But I dream on, regardless.

Why Quantum Computers Might Not Break Cryptography


A new paper claims that a common digital security system could be tweaked to withstand attacks even from a powerful quantum computer.

Math is hard. Indeed, much of the modern infrastructure for secure communication depends heavily on the difficulty of elementary mathematics — of factoring, to be exact. It’s easy to reduce a small number like 15 to its prime factors (3 x 5), but factoring numbers with a few hundred digits is still exceedingly difficult. For this reason, the RSA cryptosystem, an encryption scheme that derives its security from the difficulty of integer factorization, remains a popular tool for secure communication.
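To make the asymmetry concrete, here is a deliberately naive trial-division factorizer in Python, a sketch rather than a serious factoring method. Its running time grows roughly with the square root of N, which is exponential in the number of digits, so it disposes of 15 instantly but would never finish on a few-hundred-digit RSA modulus.

```python
# A minimal trial-division factorizer -- illustrative only.
def factor(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factor(15))          # [3, 5] -- instant
print(factor(2**31 - 1))   # [2147483647], a Mersenne prime -- still fast
# factor() on a 300-digit RSA modulus would, in effect, never finish.
```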

Research suggests, however, that a quantum computer would be able to factor a large number far more quickly than the best available methods today. If researchers could build a quantum computer that outperformed classical supercomputers, the thinking goes, an attacker could run a procedure called Shor’s algorithm to render the RSA cryptosystem unsalvageable. The deadline to avert this may arrive sooner than we think: Google recently claimed that its quantum computers will be able to perform a calculation that’s beyond the reach of any classical computer by the end of the year. In light of this, cryptographers are scrambling to find a new quantum-proof security standard.

Yet perhaps RSA isn’t in as much trouble as researchers have assumed. A few weeks ago, a paper surfaced on the Cryptology ePrint Archive that asked: “Is it actually true that quantum computers will kill RSA?” The authors note that even though a quantum computer running Shor’s algorithm would be faster than a classical computer, the RSA algorithm is faster than both. And the larger the RSA “key” — the number that must be factored — the greater the speed difference.

The authors of the paper estimate that attacking a terabyte-size key using Shor’s algorithm would require around 2^100 operations on a quantum computer, an enormous number comparable to the total number of bacterial cells on Earth. The authors don’t convert this to a concrete time estimate, but current research suggests that a real quantum computer wouldn’t be able to accomplish this in any reasonable amount of time. “RSA is not entirely dead even if quantum computers are practical,” said Nadia Heninger, an assistant professor of computer and information science at the University of Pennsylvania and a co-author of the paper. The paper also shows how to implement such a massive RSA key, which had not been done before.
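A quick, purely illustrative calculation gives a sense of that scale (the gigahertz operation rate is an assumption; no real quantum hardware runs its logical operations anywhere near this fast):

```python
# Illustrative arithmetic only: 2**100 operations at a hypothetical
# rate of one billion quantum operations per second.
ops = 2**100
rate = 1e9                              # operations per second (assumed)
years = ops / rate / (3600 * 24 * 365)
print(f"{ops:.3e} operations take ~{years:.1e} years at that rate")
```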

Still, a terabyte-size key isn’t exactly easy to work with. (The largest RSA keys right now are a few thousand bits; a terabyte is many trillions of bits.) The authors report that generating a terabyte-size RSA key and carrying out the encryption-decryption process takes about five days. “The encryption and decryption cost is terrible for most applications,” said Scott Aaronson, the director of the Quantum Information Center at the University of Texas, Austin. What’s more, the security we gain from using enormous RSA keys is “extremely precarious, vulnerable to even a modest improvement in algorithms or hardware, or a determined and well-funded-enough adversary.”

“Scott is thinking in a theoretical sense,” said Heninger, who maintains that the gap is enough “from a concrete security point of view.” “More importantly,” the paper states, “it is interesting to see that the conventional wisdom is wrong.”

The Era of Quantum Computing Is Here. Outlook: Cloudy


Quantum computers should soon be able to beat classical computers at certain basic tasks. But before they’re truly powerful, researchers have to overcome a number of fundamental roadblocks.

Quantum computers have to deal with the problem of noise, which can quickly derail any calculation.

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.

There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

Connie Zhou for IBM

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits — 1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states — a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.
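One concrete way to see the doubling, without any claim about where quantum speedup comes from, is the cost of brute-force classical simulation: a state vector over n qubits holds 2^n complex amplitudes. A short sketch, assuming the usual 16 bytes per double-precision complex amplitude:

```python
# Bookkeeping for a brute-force classical simulation: n qubits need a
# state vector of 2**n complex amplitudes (16 bytes each in double precision).
for n in [1, 2, 3, 10, 50]:
    amplitudes = 2**n
    gib = amplitudes * 16 / 2**30
    print(f"{n:2d} qubits -> {amplitudes:.3e} amplitudes (~{gib:.3e} GiB)")
```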

Note that I’ve not said — as it often is said — that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions — some of the time — but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

Connie Zhou for IBM

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with — you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
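Here is that classical fix in miniature, a three-copy repetition code with a majority vote, sketched in Python:

```python
import random

# Classical repetition code: store each bit three times and take a
# majority vote, so a single randomly flipped copy is outvoted.
def encode(bit):
    return [bit, bit, bit]

def correct(copies):
    return 1 if sum(copies) >= 2 else 0

word = encode(1)
word[random.randrange(3)] ^= 1    # flip one copy at random
print(word, "->", correct(word))  # the majority vote recovers 1
```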

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead — all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

Photo by John T. Consoli/University of Maryland

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
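The quantum circuitry itself is beyond a few lines, but the parity bookkeeping behind it is not. In this toy classical sketch of the three-qubit bit-flip code, the two computed parities stand in for what the ancilla measurements would report; the resulting “syndrome” locates a flipped qubit without ever reading out the encoded value:

```python
# Toy sketch of syndrome extraction for the three-qubit bit-flip code.
# In the quantum version, ancilla qubits entangled with pairs of data
# qubits are measured; the parity outcomes locate an error without
# collapsing the encoded state. Here we only mimic the parity logic.
def syndrome(q):                       # q = three data bits after a possible flip
    return (q[0] ^ q[1], q[1] ^ q[2])  # parities an ancilla readout would report

LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}  # syndrome -> flipped qubit

data = [0, 0, 0]       # stand-in for the codeword; [1, 1, 1] works equally well
data[1] ^= 1           # an error flips qubit 1
s = syndrome(data)
data[LOOKUP[s]] ^= 1   # apply the correction
print(f"syndrome {s} -> corrected qubit {LOOKUP[s]}; data restored to {data}")
```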

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit — a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
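The extrapolation idea can be sketched in a few lines. This toy version fabricates noisy measurements from a made-up linear noise model (the numbers are assumptions, not IBM’s procedure) and then fits a line back to the zero-noise limit:

```python
import numpy as np

# Toy zero-noise extrapolation: run the "same circuit" at stretched
# noise levels, then extrapolate the expectation value to zero noise.
true_value = 0.75                        # what a noiseless device would report

def noisy_run(scale, slope=-0.12):       # made-up linear noise model
    return true_value + slope * scale + np.random.normal(0, 0.005)

scales = np.array([1.0, 1.5, 2.0, 3.0])  # noise stretch factors
values = np.array([noisy_run(s) for s in scales])

slope, intercept = np.polyfit(scales, values, 1)  # fit a line, read it at 0
print(f"raw value at scale 1: {values[0]:.3f}")
print(f"extrapolated to zero noise: {intercept:.3f} (true: {true_value})")
```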

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties — such as stability and chemical reactivity — of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September, when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
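The flavor of VQE can be conveyed with a deliberately tiny example: a single qubit and a stand-in 2x2 Hamiltonian, nothing like the lithium hydride calculation itself. A classical optimizer tunes the ansatz parameter to minimize the measured energy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal single-qubit VQE sketch (illustrative, not the IBM experiment).
# Ansatz: |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])             # stand-in Hermitian "molecule"

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi                # expectation value <psi|H|psi>

res = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H)[0]        # exact ground-state energy
print(f"VQE estimate: {res.fun:.4f}, exact: {exact:.4f}")
```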

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just — or even mainly — how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched — if this time is too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
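The arithmetic of that budget is simple; the timings below are purely illustrative, since coherence and gate times vary widely across platforms:

```python
# Back-of-the-envelope depth budget (illustrative numbers): the whole
# computation must finish before decoherence scrambles the qubits.
coherence_time = 100e-6   # say, 100 microseconds of coherence
gate_time = 50e-9         # say, 50 nanoseconds per gate operation
max_depth = int(coherence_time / gate_time)
print(f"rough depth budget: ~{max_depth} sequential gate operations")
```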

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
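IBM’s actual benchmark is a specific randomized-circuit protocol, but a toy model conveys the intuition: find the largest square circuit (m qubits by m layers) whose success probability, under an assumed per-gate error rate and success threshold, stays acceptable:

```python
# Toy reading of "quantum volume": the largest m-by-m circuit a device
# runs reliably, given an assumed per-gate error rate. The 2/3 success
# threshold and the error model are assumptions, not IBM's protocol.
def quantum_volume(gate_error, threshold=2/3):
    m = 1
    while (1 - gate_error) ** (m * m) > threshold:
        m += 1
    m -= 1
    return m, 2**m

for err in [0.01, 0.005, 0.001]:
    m, qv = quantum_volume(err)
    print(f"gate error {err}: largest square circuit m={m}, quantum volume {qv}")
```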

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert — it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”