The Era of Quantum Computing Is Here. Outlook: Cloudy


Quantum computers should soon be able to beat classical computers at certain basic tasks. But before they’re truly powerful, researchers have to overcome a number of fundamental roadblocks.

Quantum computers have to deal with the problem of noise, which can quickly derail any calculation.

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.

There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

Connie Zhou for IBM

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits — 1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states — a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.
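
To see that doubling concretely, here is a minimal sketch in Python with NumPy (my own illustration, not anyone’s production code): the joint state of n qubits is the tensor product of the individual states, so every added qubit doubles the number of amplitudes needed to describe the machine.

```python
import numpy as np

# One qubit in an equal superposition of |0> and |1>:
# two complex amplitudes describe its state.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# The joint state of n qubits is the tensor (Kronecker) product of the
# individual states, so each added qubit doubles the amplitude count.
state = qubit
for n in range(2, 6):
    state = np.kron(state, qubit)
    print(f"{n} qubits -> {state.size} amplitudes")

# At 50 qubits the count is 2**50 -- about 10**15 amplitudes -- which is
# why a 50-qubit machine is so hard to match classically.
print(f"50 qubits -> {2**50:,} amplitudes")
```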

Note that I’ve not said — as it often is said — that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions — some of the time — but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

Connie Zhou for IBM

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with — you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
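
The classical fix is easy to sketch. Here is a toy majority-vote repetition scheme in Python (an illustration of the principle, not real error-correction machinery):

```python
import random
from collections import Counter

def store(bit, copies=3):
    """Classical redundancy: keep several copies of each bit."""
    return [bit] * copies

def add_noise(copies, p=0.1):
    """Each stored copy flips with probability p."""
    return [b ^ 1 if random.random() < p else b for b in copies]

def majority(copies):
    """A randomly flipped copy stands out as the odd one out."""
    return Counter(copies).most_common(1)[0][0]

random.seed(0)
noisy = add_noise(store(1))
print(noisy, "->", majority(noisy))
```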

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead — all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

Photo by John T. Consoli/University of Maryland

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
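
The textbook version of this idea is the three-qubit bit-flip code. The sketch below tracks it as classical bookkeeping of which errors occurred (not a full quantum simulation): the ancilla measurements return only parities of neighbouring data qubits, so they locate an error without ever revealing the encoded value.

```python
# Three-qubit bit-flip code, tracked as error locations only.
# Ancillas measure the parities of qubit pairs (0,1) and (1,2) -- the
# "syndrome" -- which pinpoints a flipped qubit without reading the data.

def syndrome(errors):
    """errors[i] = 1 if data qubit i suffered a bit flip."""
    return (errors[0] ^ errors[1], errors[1] ^ errors[2])

# Each syndrome points at the qubit (if any) that must be flipped back.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

for flipped in (None, 0, 1, 2):
    errors = [0, 0, 0]
    if flipped is not None:
        errors[flipped] = 1
    s = syndrome(errors)
    print(f"flip on {flipped} -> syndrome {s} -> correct qubit {CORRECTION[s]}")
```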

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit — a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
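
In its simplest form, this “zero-noise extrapolation” is just a curve fit. A toy version with synthetic numbers (illustrative only, not IBM’s actual scheme or data):

```python
import numpy as np

# Run the same circuit with the noise deliberately amplified by a factor c
# and record the noisy expectation value E(c) each time (synthetic data).
noise_scales = np.array([1.0, 1.5, 2.0, 2.5])
measured = np.array([0.82, 0.74, 0.65, 0.57])

# Fit E(c) = a*c + b and read off the intercept at c = 0,
# the estimated "zero noise" result.
a, b = np.polyfit(noise_scales, measured, 1)
print(f"zero-noise extrapolated estimate: {b:.3f}")
```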

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.

Lucy Reading-Ikkanda/Quanta Magazine

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties — such as stability and chemical reactivity — of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
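
The flavor of VQE can be captured in a few lines. Below is a deliberately tiny single-qubit version (my own toy, nothing like the published molecular experiments): a parameterized trial state, a Hamiltonian whose lowest energy we want, and a classical optimizer closing the loop. On real hardware the energy would be estimated from repeated noisy measurements rather than exact linear algebra.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# A toy 2x2 Hamiltonian; a real VQE run targets a molecular Hamiltonian
# mapped onto many qubits.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """<psi|H|psi> for the trial state cos(theta/2)|0> + sin(theta/2)|1>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# The classical optimizer varies the circuit parameter theta.
result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print(f"VQE estimate:        {result.fun:.4f}")
print(f"exact ground energy: {np.linalg.eigvalsh(H).min():.4f}")
```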

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just — or even mainly — how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched — if this time is too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
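
The resulting depth budget is easy to estimate. With illustrative numbers (assumed for the sake of the arithmetic, not taken from any particular device):

```python
# How many sequential gate operations fit inside the coherence window?
coherence_time_s = 5e-6   # "a few microseconds", as quoted above
gate_time_s = 50e-9       # assumed gate-switching time of 50 nanoseconds

max_depth = int(coherence_time_s / gate_time_s)
print(f"roughly {max_depth} gate operations before decoherence wins")  # ~100
```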

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
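
An early IBM formulation of quantum volume can be sketched as follows. This is a simplification, using an assumed toy error model in which achievable depth falls off as 1/(n × error rate); IBM’s exact definition has since evolved.

```python
def achievable_depth(n, gate_error):
    """Toy model: circuits stay faithful for depth ~ 1/(n * gate_error)."""
    return int(1 / (n * gate_error))

def quantum_volume(num_qubits, gate_error):
    """Best balance of circuit width n against achievable depth d(n)."""
    return max(min(n, achievable_depth(n, gate_error)) ** 2
               for n in range(1, num_qubits + 1))

# More qubits alone do little; better gates move the number far more.
for qubits, err in [(5, 0.01), (50, 0.01), (50, 0.001)]:
    print(f"{qubits} qubits at error {err}: volume {quantum_volume(qubits, err)}")
```

The point the toy numbers make is the one in the text: going from 5 to 50 qubits barely helps if the error rate stays put, while a tenfold reduction in error rate raises the volume dramatically.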

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert — it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

Quantum Computing Without Qubits


A quantum computing pioneer explains why analog simulators may beat out general-purpose digital quantum machines — for now.

Ivan Deutsch is building quantum computers out of base-16 “qudits,” quantum information units that can assume any number “d” of states.

For more than 20 years, Ivan H. Deutsch has struggled to design the guts of a working quantum computer. He has not been alone. The quest to harness the computational might of quantum weirdness continues to occupy hundreds of researchers around the world. Why hasn’t there been more to show for their work? As physicists have known since quantum computing’s beginnings, the same characteristics that make quantum computing exponentially powerful also make it devilishly difficult to control. The quantum computing “nightmare” has always been that a quantum computer’s advantages in speed would be wiped out by the machine’s complexity.

Yet progress is arriving on two main fronts. First, researchers are developing unique quantum error-correction techniques that will help keep quantum processors up and running for the time needed to complete a calculation. Second, physicists are working with so-called analog quantum simulators — machines that can’t act like a general-purpose computer, but rather are designed to explore specific problems in quantum physics. A classical computer would have to run for thousands of years to compute the quantum equations of motion for just 100 atoms. A quantum simulator could do it in less than a second.

Quanta Magazine spoke with Deutsch about recent progress in the field, his hopes for the near future, and his own work at the University of New Mexico’s Center for Quantum Information and Control on scaling up binary quantum bits into base-16 digits.

QUANTA MAGAZINE: Why would a universal quantum machine be so uniquely powerful?

IVAN DEUTSCH: In a classical computer, information is stored in retrievable bits binary coded as 0 or 1. But in a quantum computer, elementary particles inhabit a probabilistic limbo called superposition where a “qubit” can be coded as 0 and 1.

Here is the magic: Each qubit can be entangled with the other qubits in the machine. The intertwining of quantum “states” exponentially increases the number of 0s and 1s that can be simultaneously processed by an array of qubits. Machines that can harness the power of quantum logic can deal with exponentially greater levels of complexity than the most powerful classical computer. Problems that would take a state-of-the-art classical computer the age of our universe to solve can, in theory, be solved by a universal quantum computer in hours.

What is the quantum computing “nightmare”?

The same quantum effects that make a quantum computer so blazingly fast also make it incredibly difficult to operate. From the beginning, it has not been clear whether the exponential speedup provided by a quantum computer would be cancelled out by the exponential complexity needed to protect the system from crashing.

Is the situation hopeless?

Not at all. We now know that a universal quantum computer will not require exponential complexity in design. But it is still very hard.

So what’s the problem, and how do we get around it?

The hardware problem is that the superposition is so fragile that the random interaction of a single qubit with the molecules composing its immediate surroundings can cause the entire network of entangled qubits to delink or collapse. The ongoing calculation is destroyed as each qubit transforms into a digitized classical bit holding a single value: 0 or 1.

In classical computers, we reduce the inevitable loss of information by designing a lot of redundancy into the system. Error-correcting algorithms compare multiple copies of the output. They select the most frequent answer and discard the rest of the data as noise. We cannot do that with a quantum computer, because trying to directly compare qubits will crash the program. But we are gradually learning how to keep systems of entangled qubits from collapsing.

The major obstacle, to my mind, is creating error-correcting software that can keep data from being corrupted as the calculation proceeds toward the final readout. The great trick is to design and implement an algorithm that only measures the errors and not the data, thus preserving the superposition that contains the correct answer.

Will that end the nightmare?

It turns out that the error correction technique itself introduces errors. One of the most wonderful advances in quantum computing was recognizing that, in theory, we can correct the new errors without requiring 100 percent precision, allowing minor background noise to pollute the calculation as it rolls along. We cannot actually do this — yet. The main reason that we do not have a working universal quantum computer is that we are still experimenting with how to implant such a “fault-tolerant” algorithm into a quantum circuit. Right now we can control 10 qubits reasonably well. But there is no error-correcting technique, to my knowledge, capable of controlling the thousands of qubits needed to construct a universal machine.

Is that what you’re working on?

I study the information processing capabilities of trapped atoms. My colleague Poul Jessen at the University of Arizona and I are pushing the logical power beyond binary-based qubits. For example, what if we can control the superposition of an atom with, say, 16 different energy levels? Using base 16, we can then store what we call a “qudit” in a single atom. That would move us beyond the information processing speed obtainable by a base 2 system, the qubit.
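
The information-counting behind that is simple: a d-level system carries log2(d) qubits’ worth of state space, so one base-16 atom stands in for four qubits. A quick check in Python (illustrative only):

```python
import math

# A d-level "qudit" spans the state space of log2(d) qubits.
for d in (2, 4, 16):
    print(f"d = {d:2d} levels -> worth {math.log2(d):.0f} qubit(s)")

# n base-16 qudits span the same state space as 4n qubits:
n = 10
print(f"{n} qudits (d=16) -> {16**n} = 2**{4 * n} basis states")
```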

What other options do we have?

There may be significant applications available for making non-universal machines: special-purpose analog quantum simulators designed to solve specific problems, such as how room-temperature superconductors work or how a particular protein folds.

Are these actually computers?

They are not universal machines capable of solving any type of question. But say that I want to model global climate change. One way to do this is to write down a mathematical model and then solve the equations on a digital computer. That is typically what climate scientists do. Another way is to try to simulate some aspect of the earth’s climate in a controllable experiment. I can create a simple physical system that obeys the same laws of motion as the system I’m trying to model — mixing nitrogen, oxygen, and hydrogen in a tank, for example. What goes on inside the tank is a real-world computation that tells me something about atmospheric turbulence under certain conditions.

It is the same with an analog quantum simulator — I use one controllable physical system to simulate another. For example, successfully simulating a superconductor with such a device would reveal the quantum mechanics of high-temperature superconductivity. That could lead to the manufacture of non-brittle superconducting materials for many uses, including building less-fragile quantum circuits. Hopefully, we can learn how to build a robust universal digital computer by experimenting with analog simulators.

Has anyone built a working analog quantum simulator?

In 2002, a group at the Max Planck Institute in Germany built an optical lattice — a super-chilled egg carton made of light — and controlled it by pulsing different strengths of laser beams at it. This was a fundamentally analog device designed to obey quantum mechanical equations of motion. The short story is that it successfully simulated how atoms transition between acting as superfluids or insulators. That experiment has sparked a lot of research in analog quantum computing with optical lattices and cold atom traps.

What are the main challenges for these quantum simulators?

Because the evolution of the analog simulation is not digitized, the software cannot correct the tiny errors that accumulate during the calculation as we could error-correct noise on a universal machine. The analog device must keep a quantum superposition intact long enough for the simulation to run its course without resorting to digital error correction. This is a particular challenge for the analog approach to quantum simulation.

Is the D-Wave machine a quantum simulator?

The D-Wave prototype is not a universal quantum computer. It is not digital, nor error-correcting, nor fault tolerant. It is a purely analog machine designed to solve a particular optimization problem. It is unclear if it qualifies as a quantum device.

Will a scalable quantum computer be deployed during your lifetime?

We are pushing past the nightmare. Around the world, many university-based labs are working hard to remove or bypass the road block of fault tolerance. Academic researchers are leading the way, intellectually. For example, the groups of Rob Schoelkopf and Michel H. Devoret at Yale are taking superconducting technologies close to fault-tolerance.

But constructing a working universal digital quantum computer will likely require mobilizing industrial-scale resources. To that end, IBM is exploring quantum computing with superconducting circuits with personnel largely from the Yale groups. Google is working with John Martinis’s lab at the University of California, Santa Barbara. HRL Laboratories is working on silicon-based quantum computing. Lockheed Martin is exploring ion traps. And who knows what the National Security Agency is up to.

But generally in academic labs, without these industrial-scale resources, scientists are focusing more and more on learning how to control analog quantum simulators. There is short-term fruit to be picked in that arena — both intellectually and in the currency of academics: publishable papers.

Are you willing to settle for analog?

I favor pursuing the digital approach full force. Before I die, I would love to see just one universal logical qubit that can be indefinitely error corrected. It would instantly be classified by the government, of course. But I dream on, regardless.

Quantum computing will bring immense processing possibilities


The one thing everyone knows about quantum mechanics is its legendary weirdness, in which the basic tenets of the world it describes seem alien to the world we live in. Superposition, where things can be in two states simultaneously: a switch both on and off, a cat both dead and alive. Or entanglement, what Einstein called “spooky action at a distance,” in which objects are invisibly linked even when separated by huge distances.

But weird or not, quantum theory is approaching a century old and has found many applications in daily life. As John von Neumann once said: “You don’t understand quantum mechanics, you just get used to it.” Much of electronics is based on quantum physics, and the application of quantum theory to computing could open up huge possibilities for the complex calculations and data processing we see today.

Imagine a computer processor able to harness superposition, to calculate the result of an arbitrarily large number of permutations of a complex problem simultaneously. Imagine how entanglement could be used to allow systems on different sides of the world to be linked and their efforts combined, despite their physical separation. Quantum computing has immense potential, making light work of some of the most difficult tasks, such as simulating the body’s response to drugs, predicting weather patterns, or analysing big datasets.

Such processing possibilities are needed. The first transistors could only just be held in the hand, while today they measure just 14 nm – 500 times smaller than a red blood cell. This relentless shrinking, predicted by Intel co-founder Gordon Moore as Moore’s law, has held true for 50 years, but it cannot hold indefinitely. Silicon can only be shrunk so far, and if we are to continue benefiting from the performance gains we have become used to, we need a different approach.

Quantum fabrication

Advances in nanoscale fabrication have made it possible to mass-produce quantum-scale semiconductors – electronic circuits that exhibit quantum effects such as superposition and entanglement.

Replica of the first ever transistor, manufactured at Bell Labs in 1947.

The image, captured at the atomic scale, shows a cross-section through one potential candidate for the building blocks of a quantum computer, a semiconductor nano-ring. Electrons trapped in these rings exhibit the strange properties of quantum mechanics, and semiconductor fabrication processes are poised to integrate the elements required to build a quantum computer. While we may be able to construct a quantum computer using structures like these, there are still major challenges involved.

In a classical computer processor a huge number of transistors interact conditionally and predictably with one another. But quantum behaviour is highly fragile; for example, under quantum physics even measuring the state of the system, such as checking whether the switch is on or off, actually changes what is being observed. Conducting an orchestra of quantum systems to produce useful output that couldn’t easily be handled by a classical computer is extremely difficult.

But there have been huge investments: the UK government announced £270m funding for quantum technologies in 2014 for example, and the likes of Google, NASA and Lockheed Martin are also working in the field. It’s difficult to predict the pace of progress, but a useful quantum computer could be ten years away.

The basic element of a quantum computer is known as a qubit, the quantum equivalent to the bits used in traditional computers. To date, scientists have harnessed quantum effects to represent qubits in many different ways, ranging from defects in diamonds, to semiconductor nano-structures or tiny superconducting circuits. Each of these has its own advantages and disadvantages, but none yet has met all the requirements for a quantum computer, known as the DiVincenzo Criteria.

The most impressive progress has come from D-Wave Systems, a firm that has managed to pack hundreds of qubits on to a small chip similar in appearance to a traditional processor.

Quantum circuitry.

Quantum secrets

The benefits of harnessing quantum effects aren’t limited to computing, however. Whether or not quantum computing will extend or augment digital computing, the same physics can be harnessed for other means. The most mature example is quantum communications.

Quantum physics has been proposed as a means to prevent forgery of valuable objects, such as a banknote or diamond, as illustrated in the image below. Here, the unusual negative rules embedded within quantum mechanics prove useful: perfect copies of unknown states cannot be made, and measurements change the systems they are measuring. These two limitations are combined in this quantum anti-counterfeiting scheme, making it impossible to copy the identity of the object in which they are stored.

The concept of quantum money is, unfortunately, highly impractical, but the same idea has been successfully extended to communications. The idea is straightforward: the act of measuring quantum superposition states alters what you try to measure, so it’s possible to detect the presence of an eavesdropper making such measurements. With the correct protocol, such as BB84, it is possible to communicate privately, with that privacy guaranteed by fundamental laws of physics.
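
The sifting step at the heart of BB84 is easy to simulate classically. A toy sketch in Python (no eavesdropper, no error reconciliation, random choices standing in for photon measurements):

```python
import random

random.seed(1)
n = 20

# Alice encodes random bits in randomly chosen bases (0 or 1).
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

# Bob measures in his own random bases. When the bases match he reads
# Alice's bit exactly; when they differ his outcome is random.
bob_bases = [random.randint(0, 1) for _ in range(n)]
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: they publicly compare bases (never bits) and keep only the
# positions where the bases agreed -- the shared secret key.
key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print("sifted key:", key)
```

An eavesdropper who measures in the wrong basis disturbs the states, showing up as errors when Alice and Bob later compare a sample of the sifted key.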

Adding a quantum secret to a standard barcode prevents tampering or forgery of valuable goods.

Quantum communication systems are commercially available today from firms such as Toshiba and ID Quantique. While the implementation is clunky and expensive now, it will become more streamlined and miniaturised, just as transistors have miniaturised over the last 60 years.

Improvements to nanoscale fabrication techniques will greatly accelerate the development of quantum-based technologies. And while useful quantum computing still appears to be some way off, its future is very exciting indeed.

 

Can Quantum Computing Reveal the True Meaning of Quantum Mechanics?


Quantum mechanics says not merely that the world is probabilistic, but that it uses rules of probability that no science fiction writer would have had the imagination to invent. These rules involve complex numbers, called “amplitudes,” rather than just probabilities (which are real numbers between 0 and 1). As long as a physical object isn’t interacting with anything else, its state is a huge wave of these amplitudes, one for every configuration that the system could be found in upon measuring it. Left to itself, the wave of amplitudes evolves in a linear, deterministic way. But when you measure the object, you see some definite configuration, with a probability equal to the squared absolute value of its amplitude. The interaction with the measuring device “collapses” the object to whichever configuration you saw.
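
In code, the rule reads almost exactly as stated. A small NumPy sketch (with illustrative amplitudes of my own choosing):

```python
import numpy as np

# A state over four configurations, given as complex amplitudes.
amplitudes = np.array([0.5 + 0.5j, 0.5, 0.0, -0.5], dtype=complex)

# Born rule: the probability of each configuration is the squared
# absolute value of its amplitude.
probs = np.abs(amplitudes) ** 2
assert np.isclose(probs.sum(), 1.0)  # a valid state is normalized

# "Measurement" yields one definite configuration with those probabilities.
rng = np.random.default_rng(42)
print("probabilities:", probs)
print("observed configuration:", rng.choice(len(amplitudes), p=probs))
```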

Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?


Could quantum computing help reveal what the laws of quantum mechanics really mean? Adapted from an image by Flickr user Politropix under a Creative Commons license.

There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude. In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the conservative option: the one that adds the least to the bare math.

A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles, each of which could be in either of two states, quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^n grows so rapidly as a function of n.
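
Feynman’s point is easy to check numerically. The sketch below (a back-of-the-envelope tally, assuming 16 bytes per complex amplitude) shows how fast a direct list of 2^n amplitudes outgrows any classical memory.

```python
# Memory needed to store the full state of n two-level particles,
# assuming one 16-byte complex number per amplitude.
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"n = {n:3d}: 2^n = {amplitudes:.2e} amplitudes, ~{gigabytes:.2e} GB")
```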

But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely that each camp will claim its ideas were vindicated!)

At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2^n amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!

For this reason, the only hope for a quantum-computing advantage comes from interference: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, decreasing the number of ways for the photon to get somewhere can increase the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
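
In toy numbers (chosen purely for illustration): suppose each slit contributes an amplitude of equal magnitude but opposite sign at one of the dark patches.

```python
# Double-slit arithmetic at a "dark patch": amplitudes add, then get squared.
a_slit1 = +0.5                      # contribution from slit 1
a_slit2 = -0.5                      # contribution from slit 2, opposite sign

print(abs(a_slit1 + a_slit2) ** 2)  # both slits open: probability 0.0
print(abs(a_slit1) ** 2)            # slit 2 closed: probability 0.25
```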

Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.
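
The simplest worked example of that choreography is the Hadamard transform applied twice, shown below as a small state-vector calculation (standard textbook material, not any one named algorithm): the two paths to outcome 1 arrive with opposite signs and cancel, while the paths to outcome 0 reinforce, so the measurement is no longer a random draw.

```python
# Interference doing useful work: H applied twice to the state |0>.
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)  # Hadamard transform

state = np.array([1.0, 0.0])          # start in configuration 0
state = H @ state                     # equal superposition: both amplitudes +0.707
state = H @ state                     # paths to 1 cancel, paths to 0 reinforce

print(np.abs(state) ** 2)             # [1. 0.]: outcome 0 with certainty
```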

It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
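
In rough numbers (an illustrative calculation, not part of the theorem itself): a brute-force search over N candidates costs about N checks, while Grover’s method needs about (pi/4) * sqrt(N) quantum steps.

```python
# The square-root ceiling for "naive" quantum search, in concrete numbers.
import math

for N in (10**6, 10**12):
    grover = (math.pi / 4) * math.sqrt(N)   # iterations for Grover search
    print(f"N = {N:.0e}: classical ~{N:.1e} checks, Grover ~{grover:.1e} steps")
```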

What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist might give, if she wanted to answer Deutsch’s argument on its own terms.

There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, and Bob then measures the qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^n amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems that Alice and Bob can provably solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, can Bob figure out which one it lies in? Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.

So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically-irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many times, rather than only at the beginning of the universe.

But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^(1/5) steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^(1/3) by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.
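
For a feel of the classical baseline here, the toy search below (a deliberately tiny “birthday” collision hunt against a truncated hash, with parameters chosen for illustration) finds two colliding messages after roughly sqrt(H) evaluations, the mark against which the quantum ~H^(1/3) bound is measured.

```python
# Classical birthday-style collision search against a toy 20-bit hash.
import hashlib
import itertools

H_BITS = 20                              # H = 2^20 possible hash values

def toy_hash(msg: bytes) -> int:
    # Truncate SHA-256 to 20 bits so collisions are easy to demonstrate.
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") % (1 << H_BITS)

seen = {}
for i in itertools.count():
    msg = str(i).encode()
    h = toy_hash(msg)
    if h in seen:
        print(f"collision after {i + 1} tries (sqrt(H) = 1024): "
              f"{seen[h]!r} and {msg!r} -> {h}")
        break
    seen[h] = msg
```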

What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at multiple times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe them. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

New chip architecture may provide foundation for quantum computer


A photograph of the completed BGA trap assembly. The trap chip is at the center, sitting atop the larger interposer chip that fans out the wiring. The trap chip surface area is 1mm x 3mm, while the interposer is roughly 1 cm square. Credit: D. Youngner, Honeywell

Quantum computers are in theory capable of simulating the interactions of molecules at a level of detail far beyond the capabilities of even the largest supercomputers today. Such simulations could revolutionize chemistry, biology and material science, but the development of quantum computers has been limited by the ability to increase the number of quantum bits, or qubits, that encode, store and access large amounts of data.

In a paper appearing this week in the Journal of Applied Physics, from AIP Publishing, a team of researchers at Georgia Tech Research Institute and Honeywell International have demonstrated a new device that allows more electrodes to be placed on a chip—an important step that could help increase qubit densities and bring us one step closer to a quantum computer that can simulate molecules or perform other algorithms of interest.

“To write down the quantum state of a system of just 300 qubits, you would need 2^300 numbers, roughly the number of protons in the known universe, so no amount of Moore’s Law scaling will ever make it possible for a classical computer to process that many numbers,” said Nicholas Guise, who led the research. “This is why it’s impossible to fully simulate even a modest sized quantum system, let alone something like chemistry of complex molecules, unless we can build a quantum computer to do it.”
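
The arithmetic behind that quote is easy to reproduce (a quick aside, not from the paper itself): Python’s arbitrary-precision integers can at least write the count down, even though no classical memory could hold that many amplitudes.

```python
# Counting the amplitudes for 300 qubits; storing them is another matter.
n_amplitudes = 2 ** 300
print(f"2^300 is a {len(str(n_amplitudes))}-digit number")  # 91 digits
print(f"roughly {float(n_amplitudes):.2e}")                 # ~2.04e+90
```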

While existing computers use classical bits of information, quantum computers use “quantum bits,” or qubits, to store information. Classical bits use either a 0 or 1, but a qubit, exploiting a weird quantum property called superposition, can actually be in both 0 and 1 simultaneously, allowing much more information to be encoded. Since qubits can be correlated with each other in a way that classical bits cannot, they allow a new sort of massively parallel computation, but only if many qubits at a time can be produced and controlled. The challenge that the field has faced is scaling this technology up, much like moving from the first transistors to the first computers.

Fluorescence images of calcium ions confined in the BGA trap. 

Creating the Building Blocks for Quantum Computing

One leading qubit candidate is individual ions trapped inside a vacuum chamber and manipulated with lasers. The scalability of current trap architectures is limited because the connections for the electrodes needed to generate the trapping fields are made at the edge of the chip, and their number is therefore limited by the chip perimeter.

The GTRI/Honeywell approach uses new microfabrication techniques that allow more electrodes to fit onto the chip while preserving the laser access needed.

The team’s design borrows ideas from a type of packaging called a ball grid array (BGA) that is used to mount integrated circuits. The ball grid array’s key feature is that it can bring electrical signals directly from the backside of the mount to the surface, thus increasing the potential density of electrical connections.

The researchers also freed up more chip space by replacing area-intensive surface or edge capacitors with trench capacitors and strategically moving wire connections.

SEMs of the trench capacitor and TSV (thru-substrate via) structures fabricated into the trap chip. These make electrical connections to the trap electrodes while filtering out RF pickup.
The space-saving moves allowed tight focusing of an addressing laser beam for fast operations on single qubits. Despite early difficulties bonding the chips, a solution was developed in collaboration with Honeywell, and the device was trapping ions from the very first day.

The team was excited by the results. “Ions are very sensitive to stray electric fields and other noise sources, and a few microns of the wrong material in the wrong place can ruin a trap. But when we ran the BGA trap through a series of benchmarking tests we were pleasantly surprised that it performed at least as well as all our previous traps,” Guise said.

Working with trapped ions currently requires a room full of bulky equipment and several graduate students to make it all run properly, so the researchers say much work remains to be done to shrink the technology. The BGA project demonstrated that it’s possible to fit more and more electrodes on a surface trap chip while wiring them from the back of the chip in a compact and extensible way. However, there are a host of engineering challenges that still need to be addressed to turn this into a miniaturized, robust and nicely packaged system that would enable practical quantum computing, the researchers say.

In the meantime, these advances have applications beyond quantum computing. “We all hope that someday quantum computers will fulfill their vast promise, and this research gets us one step closer to that,” Guise said. “But another reason that we work on such difficult problems is that it forces us to come up with solutions that may be useful elsewhere. For example, [techniques] like those demonstrated here for ion traps are also very relevant for making miniature atomic devices like sensors, magnetometers and chip-scale atomic clocks.”

Rice-sized laser, powered one electron at a time, bodes well for quantum computing


Princeton University researchers have built a rice grain-sized laser powered by single electrons tunneling through artificial atoms known as quantum dots. The tiny microwave laser, or “maser,” is a demonstration of the fundamental interactions between light and moving electrons.

The researchers built the device—which uses about one-billionth the electric current needed to power a hair dryer—while exploring how to use quantum dots, which are bits of semiconductor material that act like single atoms, as components for quantum computers.

“It is basically as small as you can go with these single-electron devices,” said Jason Petta, an associate professor of physics at Princeton who led the study, which was published in the journal Science.

The device demonstrates a major step forward for efforts to build systems out of semiconductor materials, according to co-author and collaborator Jacob Taylor, an adjunct assistant professor at the Joint Quantum Institute, University of Maryland-National Institute of Standards and Technology. “I consider this to be a really important result for our long-term goal, which is entanglement between quantum bits in semiconductor-based devices,” Taylor said.

The original aim of the project was not to build a maser, but to explore how to use double quantum dots—which are two quantum dots joined together—as quantum bits, or qubits, the basic units of information in quantum computers.

“The goal was to get the double quantum dots to communicate with each other,” said Yinyu Liu, a physics graduate student in Petta’s lab. The team also included graduate student Jiri Stehlik and associate research scholar Christopher Eichler in Princeton’s Department of Physics, as well as postdoctoral researcher Michael Gullans of the Joint Quantum Institute.

Double quantum dot as imaged by a scanning electron microscope. Current flows one electron at a time through two quantum dots (red circles) that are formed in an indium arsenide nanowire. Credit: Science

Because quantum dots can communicate through the entanglement of light particles, or photons, the researchers designed dots that emit photons when single electrons leap from a higher energy level to a lower energy level to cross the double dot.

Each double quantum dot can only transfer one electron at a time, Petta explained. “It is like a line of people crossing a wide stream by leaping onto a rock so small that it can only hold one person,” he said. “They are forced to cross the stream one at a time. These double quantum dots are zero-dimensional as far as the electrons are concerned—they are trapped in all three spatial dimensions.”

The researchers fabricated the double quantum dots from extremely thin nanowires (about 50 nanometers in diameter; a nanometer is a billionth of a meter) made of a semiconductor called indium arsenide. They patterned the indium arsenide wires over other even smaller metal wires that act as gate electrodes, which control the energy levels in the dots.

To construct the maser, they placed the two double dots about 6 millimeters apart in a cavity made of a superconducting material, niobium, which requires a temperature near absolute zero, around minus 459 degrees Fahrenheit. “This is the first time that the team at Princeton has demonstrated that there is a connection between two double quantum dots separated by nearly a centimeter, a substantial distance,” Taylor said.

When the device was switched on, electrons flowed single-file through each double quantum dot, causing them to emit photons in the microwave region of the spectrum. These photons then bounced off mirrors at each end of the cavity to build into a coherent beam of microwave light.

One advantage of the new maser is that the energy levels inside the dots can be fine-tuned to produce light at other frequencies, which cannot be done with other semiconductor lasers in which the frequency is fixed during manufacturing, Petta said. The larger the energy difference between the two levels, the higher the frequency of light emitted.
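
As a rough illustration of that tuning knob (the splittings below are assumed values, not figures from the paper): the photon’s frequency follows from the energy difference via f = dE / h, so doubling the splitting doubles the output frequency.

```python
# Photon frequency from an energy-level splitting: f = dE / h.
PLANCK = 6.62607015e-34      # Planck constant, J*s
EV = 1.602176634e-19         # joules per electronvolt

for dE_ueV in (20, 40, 80):              # assumed splittings, micro-eV
    dE = dE_ueV * 1e-6 * EV
    f_ghz = dE / PLANCK / 1e9
    print(f"{dE_ueV} micro-eV splitting -> photon near {f_ghz:.1f} GHz")
```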

Claire Gmachl, who was not involved in the research and is Princeton’s Eugene Higgins Professor of Electrical Engineering and a pioneer in the field of semiconductor lasers, said that because lasers, masers and other forms of coherent light sources are used in communications, sensing, medicine and many other aspects of modern life, the study is an important one.

“In this paper the researchers dig down deep into the fundamental interaction between light and the moving electron,” Gmachl said. “The double quantum dot allows them full control over the motion of even a single electron, and in return they show how the coherent microwave field is created and amplified. Learning to control these fundamental light-matter interaction processes will help in the future development of light sources.”

The Revolutionary Quantum Computer That May Not Be Quantum at All


Google owns a lot of computers—perhaps a million servers stitched together into the fastest, most powerful artificial intelligence on the planet. But last August, Google teamed up with NASA to acquire what may be the search giant’s most powerful piece of hardware yet. It’s certainly the strangest.
Located at NASA Ames Research Center in Mountain View, California, a couple of miles from the Googleplex, the machine is literally a black box, 10 feet high. It’s mostly a freezer, and it contains a single, remarkable computer chip—based not on the usual silicon but on tiny loops of niobium wire, cooled to a temperature 150 times colder than deep space. The name of the box, and also the company that built it, is written in big, science-fiction-y letters on one side: D-WAVE. Executives from the company that built it say that the black box is the world’s first practical quantum computer, a device that uses radical new physics to crunch numbers faster than any comparable machine on earth. If they’re right, it’s a profound breakthrough. The question is: Are they?
Hartmut Neven, a computer scientist at Google, persuaded his bosses to go in with NASA on the D-Wave. His lab is now partly dedicated to pounding on the machine, throwing problems at it to see what it can do. An animated, academic-tongued German, Neven founded one of the first successful image-recognition firms; Google bought it in 2006 to do computer-vision work for projects ranging from Picasa to Google Glass. He works on a category of computational problems called optimization—finding the solution to mathematical conundrums with lots of constraints, like the best path among many possible routes to a destination, the right place to drill for oil, and efficient moves for a manufacturing robot. Optimization is a key part of Google’s seemingly magical facility with data, and Neven says the techniques the company uses are starting to peak. “They’re about as fast as they’ll ever be,” he says.
That leaves Google—and all of computer science, really—just two choices: Build ever bigger, more power-hungry silicon-based computers. Or find a new way out, a radical new approach to computation that can do in an instant what all those other million traditional machines, working together, could never pull off, even if they worked for years.

That, Neven hopes, is a quantum computer. A typical laptop and the hangars full of servers that power Google—what quantum scientists charmingly call “classical machines”—do math with “bits” that flip between 1 and 0, representing a single number in a calculation. But quantum computers use quantum bits, qubits, which can exist as 1s and 0s at the same time. They can operate on many numbers simultaneously. It’s a mind-bending, late-night-in-the-dorm-room concept that lets a quantum computer calculate at ridiculously fast speeds.
Unless it’s not a quantum computer at all. Quantum computing is so new and so weird that no one is entirely sure whether the D-Wave is a quantum computer or just a very quirky classical one. Not even the people who build it know exactly how it works and what it can do. That’s what Neven is trying to figure out, sitting in his lab, week in, week out, patiently learning to talk to the D-Wave. If he can figure out the puzzle—what this box can do that nothing else can, and how—then boom. “It’s what we call ‘quantum supremacy,’” he says. “Essentially, something that cannot be matched anymore by classical machines.” It would be, in short, a new computer age.
A former wrestler short-listed for Canada’s Olympic team, D-Wave founder Geordie Rose is barrel-chested and possessed of arms that look ready to pin skeptics to the ground. When I meet him at D-Wave’s headquarters in Burnaby, British Columbia, he wears a persistent, slight frown beneath bushy eyebrows. “We want to be the kind of company that Intel, Microsoft, Google are,” Rose says. “The big flagship $100 billion enterprises that spawn entirely new types of technology and ecosystems. And I think we’re close. What we’re trying to do is build the most kick-ass computers that have ever existed in the history of the world.”
The office is a bustle of activity; in the back rooms technicians peer into microscopes, looking for imperfections in the latest batch of quantum chips to come out of their fab lab. A pair of shoulder-high helium tanks stand next to three massive black metal cases, where more techs attempt to weave together their spilt guts of wires. Jeremy Hilton, D-Wave’s vice president of processor development, gestures to one of the cases. “They look nice, but appropriately for a startup, they’re all just inexpensive custom components. We buy that stuff and snap it together.” The really expensive work was figuring out how to build a quantum computer in the first place.
Like a lot of exciting ideas in physics, this one originates with Richard Feynman. In the 1980s, he suggested that quantum computing would allow for some radical new math. Up here in the macroscale universe, to our macroscale brains, matter looks pretty stable. But that’s because we can’t perceive the subatomic, quantum scale. Way down there, matter is much stranger. Photons—electromagnetic energy such as light and x-rays—can act like waves or like particles, depending on how you look at them, for example. Or, even more weirdly, if you link the quantum properties of two subatomic particles, changing one changes the other in the exact same way. It’s called entanglement, and it works even if they’re miles apart, via an unknown mechanism that seems to move faster than the speed of light.
Knowing all this, Feynman suggested that if you could control the properties of subatomic particles, you could hold them in a state of superposition—being more than one thing at once. This would, he argued, allow for new forms of computation. In a classical computer, bits are actually electrical charge—on or off, 1 or 0. In a quantum computer, they could be both at the same time.
It was just a thought experiment until 1994, when mathematician Peter Shor hit upon a killer app: a quantum algorithm that could find the prime factors of massive numbers. Cryptography, the science of making and breaking codes, relies on a quirk of math, which is that if you multiply two large prime numbers together, it’s devilishly hard to break the answer back down into its constituent parts. You need huge amounts of processing power and lots of time. But if you had a quantum computer and Shor’s algorithm, you could cheat that math—and destroy all existing cryptography. “Suddenly,” says John Smolin, a quantum computer researcher at IBM, “everybody was into it.”
That includes Geordie Rose. A child of two academics, he grew up in the backwoods of Ontario and became fascinated by physics and artificial intelligence. While pursuing his doctorate at the University of British Columbia in 1999, he read Explorations in Quantum Computing, one of the first books to theorize how a quantum computer might work, written by NASA scientist—and former research assistant to Stephen Hawking—Colin Williams. (Williams now works at D-Wave.)
Reading the book, Rose had two epiphanies. First, he wasn’t going to make it in academia. “I never was able to find a place in science,” he says. But he felt he had the bullheaded tenacity, honed by years of wrestling, to be an entrepreneur. “I was good at putting together things that were really ambitious, without thinking they were impossible.” At a time when lots of smart people argued that quantum computers could never work, he fell in love with the idea of not only making one but selling it.
With about $100,000 in seed funding from an entrepreneurship professor, Rose and a group of university colleagues founded D-Wave. They aimed at an incubator model, setting out to find and invest in whoever was on track to make a practical, working device. The problem: Nobody was close.
At the time, most scientists were pursuing a version of quantum computing called the gate model. In this architecture, you trap individual ions or photons to use as qubits and chain them together in logic gates like the ones in regular computer circuits—the ands, ors, nots, and so on that assemble into how a computer thinks. The difference, of course, is that the qubits could interact in much more complex ways, thanks to superposition, entanglement, and interference.
But qubits really don’t like to stay in a state of superposition, what’s called coherence. A single molecule of air can knock a qubit out of coherence. The simple act of observing the quantum world collapses all of its every-number-at-once quantumness into stochastic, humdrum, nonquantum reality. So you have to shield qubits—from everything. Heat or other “noise,” in physics terms, screws up a quantum computer, rendering it useless.
You’re left with a gorgeous paradox: Even if you successfully run a calculation, you can’t easily find that out, because looking at it collapses your superpositioned quantum calculation to a single state, picked at random from all possible superpositions and thus likely totally wrong. You ask the computer for the answer and get garbage.
Lashed to these unforgiving physics, scientists had built systems with only two or three qubits at best. They were wickedly fast but too underpowered to solve any but the most prosaic, lab-scale problems. But Rose didn’t want just two or three qubits. He wanted 1,000. And he wanted a device he could sell, within 10 years. He needed a way to make qubits that weren’t so fragile.
“WHAT WE’RE TRYING TO DO IS BUILD THE MOST KICK-ASS COMPUTERS THAT HAVE EVER EXISTED IN THE HISTORY OF THE WORLD.”
In 2003, he found one. Rose met Eric Ladizinsky, a tall, sporty scientist at NASA’s Jet Propulsion Lab who was an expert in superconducting quantum interference devices, or Squids. When Ladizinsky supercooled teensy loops of niobium metal to near absolute zero, magnetic fields ran around the loops in two opposite directions at once. To a physicist, electricity and magnetism are the same thing, so Ladizinsky realized he was seeing superpositioning of electrons. He also suspected these loops could become entangled, and that the charges could quantum-tunnel through the chip from one loop to another. In other words, he could use the niobium loops as qubits. (The field running in one direction would be a 1; the opposing field would be a 0.) The best part: The loops themselves were relatively big, a fraction of a millimeter. A regular microchip fab lab could build them.
The two men thought about using the niobium loops to make a gate-model computer, but they worried the gate model would be too susceptible to noise and timing errors. They had an alternative, though—an architecture that seemed easier to build. Called adiabatic annealing, it could perform only one specific computational trick: solving those rule-laden optimization problems. It wouldn’t be a general-purpose computer, but optimization is enormously valuable. Anyone who uses machine learning—Google, Wall Street, medicine—does it all the time. It’s how you train an artificial intelligence to recognize patterns. It’s familiar. It’s hard. And, Rose realized, it would have an immediate market value if they could do it faster.
In a traditional computer, annealing works like this: You mathematically translate your problem into a landscape of peaks and valleys. The goal is to try to find the lowest valley, which represents the optimized state of the system. In this metaphor, the computer rolls a rock around the problem-scape until it settles into the lowest-possible valley, and that’s your answer. But a conventional computer often gets stuck in a valley that isn’t really lowest at all. The algorithm can’t see over the edge of the nearest mountain to know if there’s an even lower vale. A quantum annealer, Rose and Ladizinsky realized, could perform tricks that avoid this limitation. They could take a chip full of qubits and tune each one to a higher or lower energy state, turning the chip into a representation of the rocky landscape. But thanks to superposition and entanglement between the qubits, the chip could computationally tunnel through the landscape. It would be far less likely to get stuck in a valley that wasn’t the lowest, and it would find an answer far more quickly.
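
For readers who want the rock-and-landscape metaphor in runnable form, here is a classical simulated-annealing toy (an illustration of ordinary annealing on a made-up landscape, emphatically not D-Wave’s quantum version): random moves with a slowly falling “temperature” let the rock occasionally hop uphill and escape shallow valleys.

```python
# Classical simulated annealing on a bumpy 1-D "landscape".
import math
import random

def energy(x: float) -> float:
    # A bowl with ripples: many local valleys, one global minimum region.
    return 0.1 * x * x + math.sin(3 * x)

x = random.uniform(-10, 10)
temperature = 5.0
while temperature > 1e-3:
    candidate = x + random.gauss(0, 1)
    delta = energy(candidate) - energy(x)
    # Always roll downhill; sometimes hop uphill, less often as things cool.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999

print(f"settled near x = {x:.2f} with energy {energy(x):.3f}")
```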
INSIDE THE BLACK BOX
The guts of a D-Wave don’t look like any other computer. Instead of metals etched into silicon, the central processor is made of loops of the metal niobium, surrounded by components designed to protect it from heat, vibration, and electromagnetic noise. Isolate those niobium loops well enough from the outside world and you get a quantum computer, thousands of times faster than the machine on your desk—or so the company claims. —Cameron Bird

Quantum computing explained: harnessing particle physics to work faster


Work is underway around the world to revolutionise computers using the principles of quantum mechanics.

The D-Wave, the world’s only commercially available quantum computer.

Around the world teams of scientists are working on the next technological revolution: quantum computing. But what makes it so special? And why do we need it? We ask physicist Dr Ruth Oulton of the University of Bristol to explain.

 

In a normal computer, information is stored as bits. How is it different in a quantum computer?

A normal computer has bits and each bit [is either] zero or one. A quantum computer has quantum bits. These are made out of quantum particles that can be zero, one, or some kind of state in between – [in other words they can have both values] at the same time.

 

So a quantum bit is made from a physical particle?

It pretty much could be any fundamental particle, so it could be a photon or an electron or it could be a nucleus, for example. It’s a particle that can have two different properties [at once]. [For example], the particle can be in both one place and the other place at the same time.

 

How does this help with computing?

In a normal computer, a particular calculation might have to go through all the different possibilities of zeros and ones one at a time. Because a quantum computer can be in all the states at the same time, you just do one calculation [testing a vast number of possibilities simultaneously]. So it can be much quicker.

 

What’s the biggest challenge?

You need a very good control over individual particles. You can’t just shove [all the particles] together because they would interact with each other [in an unpredictable way]. You need to be able to trap and direct them, but when the particles interact [with the trap itself] it makes them lose their information, so you need to make sure that you design the trap well.

 

What are the applications?

The biggest and most important one is the ability to factorise a very large number into two prime numbers. That’s really important because that’s what almost all encryption for internet computing is based on. A quantum computer should be able to do that relatively quickly to get back the prime numbers, and that will mean that basically anything that has been [protected] with [that] encryption can be decrypted. If you were to do it with the classical computers we have now, it would take longer than the age of the universe to work back to the primes.
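
A toy example makes the link concrete (an illustrative sketch using the textbook case N = 15, nowhere near real key sizes): in Shor’s algorithm the quantum hardware finds the period r of a^x mod N exponentially faster than the brute-force loop below, and everything after that is easy classical arithmetic.

```python
# Factoring via period-finding, with the period found by brute force.
import math

N = 15                       # toy modulus (= 3 * 5)
a = 7                        # a base coprime to N

# The step a quantum computer does exponentially faster: find the smallest
# r with a^r = 1 (mod N).
r = 1
while pow(a, r, N) != 1:
    r += 1

# Classical post-processing: for even r, gcds reveal the factors.
assert r % 2 == 0
y = pow(a, r // 2, N)        # 7^2 mod 15 = 4
print(f"period r = {r}; factors {math.gcd(y - 1, N)} and {math.gcd(y + 1, N)}")
```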

 

Are there other scientific uses?

Calculating the positions of individual atoms in very large molecules like polymers and in viruses. The way that the particles interact with each other – there’s so many different possibilities that normally they say that you can’t calculate anything properly [with] more than about 10 atoms inside the molecule. So if you have a quantum computer you could use it to develop drugs and understand how molecules work a bit better.

 

Are there commercial quantum computers?

There is a commercial computer out there but it’s very expensive ($10m), it has very limited computing power and it hasn’t yet been verified by anybody externally [as to] what it’s actually doing.

 

Will quantum computers look like our desktops and laptops do now?

We are completely re-designing the computer. The very first quantum computers will probably fill a room. It’s going to take us a while to get to desktops. Really, actually what is going to happen [is] you are going to have a hybrid laptop with a quantum chip and a classical chip.

Quantum Computers Check Each Other’s Work


Image courtesy of Equinox Graphics

Check it twice. Quantum computers rely on these clusters of entangled qubits—units of data that embody many states at once—to achieve superspeedy processing. New research shows one such computer can verify the solutions of another.

Quantum computers can solve problems far too complex for normal computers, at least in theory. That’s why research teams around the globe have strived to build them for decades. But this extraordinary power raises a troubling question: How will we know whether a quantum computer’s results are true if there is no way to check them? The answer, scientists now reveal, is that a simple quantum computer—whose results humans can verify—can in turn check the results of other dramatically more powerful quantum machines.

Quantum computers rely on the odd behavior of quantum mechanics, in which atoms and other particles can seemingly exist in two or more places at once, or become “entangled” with partners, meaning they can instantaneously influence each other regardless of distance. Whereas classical computers symbolize data as bits—a series of ones and zeroes that they express by flicking switchlike transistors either on or off—quantum computers use quantum bits (qubits) that can essentially be on and off at the same time, or in any on/off combination, such as 32% on and 68% off.

Because each qubit can embody so many different states, quantum computers could compute certain classes of problems dramatically faster than regular computers by running through every combination of possibilities at once. For instance, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the universe.

Currently, all quantum computers involve only a few qubits “and thus can be easily verified by a classical computer, or on a piece of paper,” says quantum physicist Philip Walther of the University of Vienna. But their capabilities could outstrip conventional computers “in the not-so-far future,” he warns, which raises the verification problem.

Scientists have suggested a few ways out of this conundrum that would involve computers with large numbers of qubits or two entangled quantum computers. But these still lie outside the reach of present technology.

Now, quantum physicist Stefanie Barz at the University of Vienna, along with Walther and their colleagues, has a new strategy for verification. It relies on a technique known as blind quantum computing, an idea which they first demonstrated in a 2012 Science paper. A quantum computer receives qubits and completes a task with them, but it remains blind to what the input and output were, and even what computation it performed.

To test a machine’s accuracy, the researchers peppered a computing task with “traps”—short intermediate calculations to which the user knows the result in advance. “In case the quantum computer does not do its job properly, the trap delivers a result that differs from the expected one,” Walther explains. These traps allow the user to recognize when the quantum computer is inaccurate, the researchers report online today in Nature Physics. The results show experimentally that one quantum computer can verify the results of another, and that theoretically any size of quantum computer can verify any other, Walther says.
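
A loose classical analogy may help (a drastic simplification of my own; real blind quantum computing hides the traps using quantum states, not mere shuffling): mix small jobs with known answers in among the real ones, and distrust any server that gets a trap wrong.

```python
# Trap-based spot-checking of an untrusted server, in classical miniature.
import random

def untrusted_server(task):
    # Stand-in for the powerful machine being tested; set FAULTY = True
    # to watch the traps catch a wrong (or dishonest) result.
    FAULTY = False
    answer = sum(task)
    return answer + 1 if FAULTY else answer

real_tasks = [[1, 2, 3], [4, 5, 6]]          # answers unknown to the user
traps = [([7, 8], 15), ([2, 2], 4)]          # answers known in advance

jobs = [(t, None) for t in real_tasks] + list(traps)
random.shuffle(jobs)                         # traps look like normal work

passed = all(untrusted_server(task) == expected
             for task, expected in jobs if expected is not None)
print("all traps passed" if passed else "a trap failed: reject the results")
```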

The existence of undetectable errors will depend on the particular quantum computer and the computation it carries out. Still, the more traps users build into the tasks, the better they can ensure the quantum computer they test is computing accurately. “The test is designed in such a way that the quantum computer cannot distinguish the trap from its normal tasks,” Walther says.

The researchers used a 4-qubit quantum computer as the verifier, but any size will do, and the more qubits the better, Walther notes. The technique is scalable, so it could be used even on computers with hundreds of qubits, he says, and it can be applied to any of the many existing quantum computing platforms.

“Like almost all current quantum computing experiments, this currently has the status of a fun demonstration proof of concept, rather than anything that’s directly useful yet,” says theoretical computer scientist Scott Aaronson at the Massachusetts Institute of Technology in Cambridge. But that doesn’t detract from the importance of these demonstrations, he adds. “I’m very happy that they’re done, as they’re necessary first steps if we’re ever going to have useful quantum computers.”