‘Quantum Atmospheres’ May Reveal Secrets of Matter


A new theory proposes that the quantum properties of an object extend into an “atmosphere” that surrounds the material.
 

Over the past several years, some materials have proved to be a playground for physicists. These materials aren’t made of anything special — just normal particles such as protons, neutrons and electrons. But they are more than the sum of their parts. These materials boast a range of remarkable properties and phenomena and have even led physicists to new phases of matter — beyond the solid, gas and liquid phases we’re most familiar with.

One class of material that especially excites physicists is the topological insulator — and, more broadly, topological phases, whose theoretical foundations earned their discoverers a Nobel Prize in 2016. On the surface of a topological insulator, electrons flow smoothly, while on the inside, electrons are immobile. Its surface is thus a metal-like conductor, yet its interior is a ceramic-like insulator. Topological insulators have drawn attention for their unusual physics as well as for their potential use in quantum computers and so-called spintronic devices, which utilize electrons’ spins as well as their charge.

But such exotic behaviors aren’t always obvious. “You can’t just tell easily by looking at the material in conventional ways whether it has these kinds of properties,” said Frank Wilczek, a physicist at the Massachusetts Institute of Technology and winner of the 2004 Nobel Prize in Physics.

This means a host of seemingly ordinary materials might harbor hidden — yet unusual and possibly useful — properties. In a paper recently posted online, Wilczek and Qing-Dong Jiang, a physicist at Stockholm University, propose a new way to discover such properties: by probing a thin aura that surrounds the material, something they’ve dubbed a quantum atmosphere.

Some of a material’s fundamental quantum properties could manifest in this atmosphere, which physicists could then measure. If confirmed in experiments, not only would this phenomenon be one of only a few macroscopic consequences of quantum mechanics, Wilczek said, but it could also be a powerful tool for exploring an array of new materials.

“Had you asked me if something like this could occur, I would’ve said that seems like a reasonable idea,” said Taylor Hughes, a condensed matter theorist at the University of Illinois, Urbana-Champaign. But, he added, “I would imagine the effect to be very small.” In the new analysis, however, Jiang and Wilczek calculated that, in principle, a quantum atmospheric effect would be well within the range of detectability.

Not only that, Wilczek said, but detecting such effects may be achievable sooner rather than later.

A Zone of Influence

A quantum atmosphere, Wilczek explained, is a thin zone of influence around a material. According to quantum mechanics, a vacuum isn’t completely empty; rather, it’s filled with quantum fluctuations. For example, if you take two uncharged plates and bring them together in a vacuum, only quantum fluctuations with wavelengths shorter than the distance between the plates can squeeze between them. Outside the plates, however, fluctuations of all wavelengths can fit. The energy outside will be greater than inside, resulting in a net force that pushes the plates together. Called the Casimir effect, this phenomenon is similar to the influence from a quantum atmosphere, Wilczek said.
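
For reference, the standard textbook expression for the Casimir attraction between two ideal, perfectly conducting parallel plates a distance d apart is the following (this is the well-known result the comparison invokes, not anything specific to Jiang and Wilczek's paper):

```latex
% Casimir pressure (force per unit area) between ideal conducting plates
% separated by a distance d; the minus sign indicates attraction.
\[
  \frac{F}{A} \;=\; -\,\frac{\pi^{2}\hbar c}{240\,d^{4}}
\]
```

The steep 1/d^4 dependence is why the effect is felt only very close to the plates, just as a quantum atmosphere is expected to be a thin zone hugging a material's surface.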

Just as a plate feels a stronger force as it nears another one, a needlelike probe would feel an effect from the quantum atmosphere as it approaches a material. “It’s just like any atmosphere,” Wilczek said. “You get close to it, and you start to see its influence.” And the nature of that influence depends on the quantum properties of the material itself.


Antimony can behave as a topological insulator — a material that acts as an insulator everywhere except across its surface.

 

Those properties can be extraordinary. Certain materials act like their own universes with their own physical laws, as if comprising what’s recently been called a materials multiverse. “A very important idea in modern condensed matter physics is that we’re in possession of these materials — say, a topological insulator — which have different sets of rules inside,” said Peter Armitage, a condensed matter physicist at Johns Hopkins University.

Some materials, for example, harbor objects that act as magnetic monopoles — point-like magnets with a north pole but no south pole. Physicists have also detected so-called quasiparticles with fractional electric charge and quasiparticles that act as their own antimatter, with the ability to annihilate themselves.

If similarly exotic properties exist in other materials, they could reveal themselves in quantum atmospheres. You could, in principle, discover all sorts of new properties simply by probing the atmospheres of materials, Wilczek said.

To demonstrate their idea, Jiang and Wilczek focused on an unorthodox set of rules called axion electrodynamics, which could give rise to unique properties. Wilczek came up with the theory in 1987 to describe how a hypothetical particle called an axion would interact with electricity and magnetism. (Physicists had previously proposed the axion as a solution to one of physics’ biggest unsolved questions: why interactions involving the strong force are the same even when particles are swapped with their antiparticles and reflected in a mirror, preserving so-called charge and parity symmetry.) To this day, no one has found any evidence that axions exist, even though they’ve recently garnered renewed interest as a candidate for dark matter.

While these rules don’t seem to be valid in most of the universe, it turns out they can come into play inside a material such as a topological insulator. “The way electromagnetic fields interact with these new kinds of matter called topological insulators is basically the same way they would interact with a collection of axions,” Wilczek said.

Diamond Defects

If a material such as a topological insulator obeys axion electrodynamics, its quantum atmosphere could induce a telltale effect on anything that crosses into the atmosphere. Jiang and Wilczek calculated that such an effect would be similar to that of a magnetic field. In particular, they found that if you were to place some system of atoms or molecules in the atmosphere, their quantum energy levels would be altered. A researcher could then measure these altered levels using standard laboratory techniques. “It’s kind of an unconventional but a quite interesting idea,” said Armitage.

One such potential system is a diamond probe imbued with features called nitrogen-vacancy (NV) centers. An NV center is a type of defect in a diamond’s crystal structure where some of the diamond’s carbon atoms are swapped out for nitrogen atoms, and where the spot adjacent to the nitrogen is empty. The quantum state of this system is highly sensitive, allowing NV centers to sniff out even very weak magnetic fields. This property makes them powerful sensors that can be used for a variety of applications in geology and biology.
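
As a rough illustration of why such level shifts are measurable, here is a minimal sketch, using textbook NV-center numbers rather than anything from Jiang and Wilczek's paper, of how the defect's spin transition frequencies move in a magnetic field; a quantum atmosphere would imprint an analogous shift.

```python
# Minimal sketch with approximate textbook NV-center values; not from the paper.
D_ZFS_GHZ = 2.87          # zero-field splitting of the NV ground state, ~2.87 GHz
GAMMA_GHZ_PER_T = 28.0    # electron gyromagnetic ratio, ~28 GHz per tesla

def nv_transition_freqs_ghz(b_axial_tesla):
    """Frequencies (GHz) of the m_s = 0 -> m_s = -1 and m_s = 0 -> m_s = +1
    transitions for a field along the NV axis (simplified model that ignores
    strain and transverse fields)."""
    zeeman = GAMMA_GHZ_PER_T * b_axial_tesla
    return D_ZFS_GHZ - zeeman, D_ZFS_GHZ + zeeman

for b in (0.0, 1e-6, 1e-3):   # zero field, one microtesla, one millitesla
    lo, hi = nv_transition_freqs_ghz(b)
    print(f"B = {b:g} T  ->  transitions near {lo:.6f} and {hi:.6f} GHz")
```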

“This is a nice proof of principle,” Hughes said. One application, he added, could be to map out a material’s properties. By passing an NV center across a material like a topological insulator, you can determine how its properties may vary along the surface.

Jiang and Wilczek’s paper, which they have submitted to Physical Review Letters, details only the quantum atmospheric influence derived from axion electrodynamics. To determine how other kinds of properties affect an atmosphere, Wilczek said, you would have to do different calculations.

Breaking Symmetries

Fundamentally, the properties that quantum atmospheres unmask are symmetries. Different phases of matter, and the properties unique to a phase, can be thought of in terms of symmetry. In a solid crystal, for example, the atoms are arranged in a symmetric lattice: shift or rotate the lattice in just the right way and you get back an identical pattern. When you apply heat, however, the bonds break, the lattice structure collapses, and the material — now a liquid with markedly different properties — loses its symmetry.

Materials can break other fundamental symmetries such as the time-reversal symmetry that most laws of physics obey. Or phenomena may be different when looked at in the mirror, a violation of parity symmetry.

Whether these symmetries are broken in a material could signify previously unknown phase transitions and potentially exotic properties. A material with certain broken symmetries would induce the same violations in a probe that’s inside its quantum atmosphere, Wilczek said. For example, in a material that adheres to axion electrodynamics, time and parity symmetry are each broken, but the combination of the two is not. By probing a material’s atmosphere, you could learn whether it follows this symmetry-breaking pattern and to what extent — and thus what bizarre behaviors it may have, he said.

“Some materials will be secretly breaking symmetries that we didn’t know about and that we didn’t suspect,” he said. “They seem very innocent, but somehow they’ve been hiding in secret.”

Wilczek said he’s already talked with experimentalists who are interested in testing the idea. What’s more, he said, experiments should be readily feasible, hopefully coming to fruition not in years, but in only weeks and months.

If everything works out, then the term “quantum atmosphere” may find a permanent spot in the physics lexicon. Wilczek has previously coined terms like axions, anyons (quasiparticles that may be useful for quantum computing) and time crystals (structures that move in regular and repeating patterns without using energy). He has a good track record of coming up with names that stick, Armitage said. “‘Quantum atmospheres’ is another good one.”


Should Quantum Anomalies Make Us Rethink Reality?


Inexplicable lab results may be telling us we’re on the cusp of a new scientific paradigm


Every generation tends to believe that its views on the nature of reality are either true or quite close to the truth. We are no exception to this: although we know that the ideas of earlier generations were each time supplanted by those of a later one, we still believe that this time we got it right. Our ancestors were naïve and superstitious, but we are objective—or so we tell ourselves. We know that matter/energy, outside and independent of mind, is the fundamental stuff of nature, everything else being derived from it—or do we?

In fact, studies have shown that there is an intimate relationship between the world we perceive and the conceptual categories encoded in the language we speak. We don’t perceive a purely objective world out there, but one subliminally pre-partitioned and pre-interpreted according to culture-bound categories. For instance, “color words in a given language shape human perception of color.” A brain imaging study suggests that language processing areas are directly involved even in the simplest discriminations of basic colors. Moreover, this kind of “categorical perception is a phenomenon that has been reported not only for color, but for other perceptual continua, such as phonemes, musical tones and facial expressions.” In an important sense, we see what our unexamined cultural categories teach us to see, which may help explain why every generation is so confident in their own worldview. Allow me to elaborate.

The conceptual-ladenness of perception isn’t a new insight. Back in 1957, philosopher Owen Barfield wrote:

“I do not perceive any thing with my sense-organs alone.… Thus, I may say, loosely, that I ‘hear a thrush singing.’ But in strict truth all that I ever merely ‘hear’—all that I ever hear simply by virtue of having ears—is sound. When I ‘hear a thrush singing,’ I am hearing … with all sorts of other things like mental habits, memory, imagination, feeling and … will.” (Saving the Appearances)

As argued by philosopher Thomas Kuhn in his book The Structure of Scientific Revolutions, science itself falls prey to this inherent subjectivity of perception. Defining a “paradigm” as an “implicit body of intertwined theoretical and methodological belief,” he wrote:

“something like a paradigm is prerequisite to perception itself. What a man sees depends both upon what he looks at and also upon what his previous visual-conceptual experience has taught him to see. In the absence of such training there can only be, in William James’s phrase, ‘a bloomin’ buzzin’ confusion.’”

Hence, because we perceive and experiment on things and events partly defined by an implicit paradigm, these things and events tend to confirm, by construction, the paradigm. No wonder then that we are so confident today that nature consists of arrangements of matter/energy outside and independent of mind.

Yet, as Kuhn pointed out, when enough “anomalies”—empirically undeniable observations that cannot be accommodated by the reigning belief system—accumulate over time and reach critical mass, paradigms change. We may be close to one such defining moment today, as an increasing body of evidence from quantum mechanics (QM) renders the current paradigm untenable.

Indeed, according to the current paradigm, the properties of an object should exist and have definite values even when the object is not being observed: the moon should exist and have whatever weight, shape, size and color it has even when nobody is looking at it. Moreover, a mere act of observation should not change the values of these properties. Operationally, all this is captured in the notion of “non-contextuality”: the outcome of an observation should not depend on the way other, separate but simultaneous observations are performed. After all, what I perceive when I look at the night sky should not depend on the way other people look at the night sky along with me, for the properties of the night sky uncovered by my observation should not depend on theirs.

The problem is that, according to QM, the outcome of an observation can depend on the way another, separate but simultaneous, observation is performed. This happens with so-called “quantum entanglement” and it contradicts the current paradigm in an important sense, as discussed above. Although Einstein argued in 1935 that the contradiction arose merely because QM is incomplete, John Bell proved mathematically, in 1964, that the predictions of QM regarding entanglement cannot be accounted for by Einstein’s alleged incompleteness.
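
The quantitative core of Bell's argument can be stated compactly. The sketch below (standard textbook physics, not a description of any particular experiment) computes the CHSH combination of correlations for two spins in the entangled singlet state, for which quantum mechanics predicts E(a, b) = -cos(a - b); any local, non-contextual account is bounded by |S| <= 2, while the quantum prediction reaches 2*sqrt(2).

```python
import math

def E(a, b):
    """Quantum correlation between spin measurements along angles a and b
    (radians) for two particles in the singlet state."""
    return -math.cos(a - b)

# Standard CHSH measurement settings.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(f"|S| = {abs(S):.4f}  (any local, non-contextual model obeys |S| <= 2)")
# Prints ~2.8284, i.e. 2*sqrt(2): the quantum prediction the experiments confirm.
```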

So to salvage the current paradigm there is an important sense in which one has to reject the predictions of QM regarding entanglement. Yet, since Alain Aspect’s seminal experiments in 1981–82, these predictions have been repeatedly confirmed, with potential experimental loopholes closed one by one. 1998 was a particularly fruitful year, with two remarkable experiments performed in Switzerland and Austria. In 2011 and 2015, new experiments again challenged non-contextuality. Commenting on this, physicist Anton Zeilinger has been quoted as saying that “there is no sense in assuming that what we do not measure [that is, observe] about a system has [an independent] reality.” Finally, Dutch researchers successfully performed a test closing all remaining potential loopholes, which was considered by Nature the “toughest test yet.”

The only alternative left for those holding on to the current paradigm is to postulate some form of non-locality: nature must have—or so they speculate—observation-independent hidden properties, entirely missed by QM, which are “smeared out” across spacetime. It is this allegedly omnipresent, invisible but objective background that supposedly orchestrates entanglement from “behind the scenes.”

It turns out, however, that some predictions of QM are incompatible with non-contextuality even for a large and important class of non-local theories. Experimental results reported in 2007 and 2010 have confirmed these predictions. To reconcile these results with the current paradigm would require a profoundly counterintuitive redefinition of what we call “objectivity.” And since contemporary culture has come to associate objectivity with reality itself, the science press felt compelled to report on this by pronouncing, “Quantum physics says goodbye to reality.”

The tension between the anomalies and the current paradigm can only be tolerated by ignoring the anomalies. This has been possible so far because the anomalies are only observed in laboratories. Yet we know that they are there, for their existence has been confirmed beyond reasonable doubt. Therefore, when we believe that we see objects and events outside and independent of mind, we are wrong in at least some essential sense. A new paradigm is needed to accommodate and make sense of the anomalies; one wherein mind itself is understood to be the essence—cognitively but also physically—of what we perceive when we look at the world around ourselves.

A Strange Quantum Effect Could Give Rise to a Completely New Kind of Star


A near cousin to black holes.

We might have to add a brand new category of star to the textbooks: an advanced mathematical model has revealed that a certain ultracompact star configuration could in fact exist, even though scientists had previously thought it impossible.


This model mixes the repulsive effect of quantum vacuum polarisation – the idea that a vacuum isn’t actually empty but is filled with quantum energy and particles – with the attractive principles of general relativity.

The calculations are the work of Raúl Carballo-Rubio from the International School for Advanced Studies in Italy, and describe a hypothesis where a massive star doesn’t follow the usual instructions laid down by astrophysics.

“The novelty in this analysis is that, for the first time, all these ingredients have been assembled together in a fully consistent model,” says Carballo-Rubio.

“Moreover, it has been shown that there exist new stellar configurations, and that these can be described in a surprisingly simple manner.”

Due to the push and pull of gigantic forces, massive stars collapse under their own weight when they run out of fuel to burn. They then either explode as supernovae and become neutron stars, or collapse completely into a black hole, depending on their mass.

There’s a particular mass threshold at which the dying star goes one way or another.

 But what if extra quantum mechanical forces were at play? That’s the question Carballo-Rubio is asking, and he suggests the rules of quantum mechanics would create a different set of thresholds or equilibriums at the end of a massive star’s life.

Thanks to quantum vacuum polarisation, we’d be left with something that would look like a black hole while behaving differently, according to the new model. These new types of stars have been dubbed “semiclassical relativistic stars” because they are the result of both classical and quantum physics.

One of the differences would be that the star would be horizonless – like another theoretical star made possible by quantum physics, the gravastar. There wouldn’t be the same ‘point of no return’ for light and matter as there is around a black hole.

The next step is to see if we can actually spot any of them – or rather spot any of the ripples they create through the rest of space. One possibility is that these strange types of stars wouldn’t exist for very long at all.

“It is not clear yet whether these configurations can be dynamically realised in astrophysical scenarios, or how long would they last if this is the case,” says Carballo-Rubio.

Interest in this field of astrophysics has been boosted by the progress scientists have been making in detecting gravitational waves, and it’s because of that work that it might be possible to find these variations on black holes.

The observatories and instruments coming online in the next few years will give scientists the chance to put this intriguing hypothesis to the test.

“If there are very dense and ultracompact stars in the Universe, similar to black holes but with no horizons, it should be possible to detect them in the next decades,” says Carballo-Rubio.

Does a Quantum Equation Govern Some of the Universe’s Large Structures?


A new paper uses the Schrödinger equation to describe debris disks around stars and black holes—and provides an object lesson about what “quantum” really means

This artist’s concept shows a swirling debris disk of gas and dust surrounding a young protostar.

Researchers who want to predict the behavior of systems governed by quantum mechanics—an electron in an atom, say, or a photon of light traveling through space—typically turn to the Schrödinger equation. Devised by Austrian physicist Erwin Schrödinger in 1925, it describes subatomic particles and how they may display wavelike properties such as interference. It contains the essence of all that appears strange and counterintuitive about the quantum world.

But it seems the Schrödinger equation is not confined to that realm. In a paper just published in Monthly Notices of the Royal Astronomical Society, planetary scientist Konstantin Batygin of the California Institute of Technology claims this equation can also be used to understand the emergence and behavior of self-gravitating astrophysical disks. That is, objects such as the rings of the worlds Saturn and Uranus or the halos of dust and gas that surround young stars and supply the raw material for the formation of a planetary system or even the accretion disks of debris spiraling into a black hole.

And yet there’s nothing “quantum” about these things at all. They could be anything from tiny dust grains to big chunks of rock the size of asteroids or planets. Nevertheless, Batygin says, the Schrödinger equation supplies a convenient way of calculating what shape such a disk will have, and how stable it will be against buckling or distorting. “This a fascinating approach, synthesizing very old techniques to make a brand-new analysis of a challenging problem,” says astrophysicist Duncan Forgan of the University of Saint Andrews in Scotland, who was not part of the research. “The Schrödinger equation has been so well studied for almost a century that this connection is clearly handy.”

From Classical to Quantum

This equation is so often regarded as the distilled essence of “quantumness” that it is easy to forget what it really represents. In some ways Schrödinger pulled it out of a hat when challenged to come up with a mathematical formula for French physicist Louis de Broglie’s hypothesis that quantum particles could behave like waves. Schrödinger drew on his deep knowledge of classical mechanics, and his equation in many ways resembles those used for ordinary waves. One difference is that in quantum mechanics the energies of “particle–waves” are quantized: confined to discrete values set by the so-called Planck constant h, first introduced by German physicist Max Planck in 1900.
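
For reference, here is the equation in its standard textbook form, for a particle of mass m moving in a potential V. (Batygin's disk version substitutes disk quantities for the wave function and for h-bar; that mapping is in his paper and is not reproduced here.)

```latex
% Linear time-dependent Schroedinger equation (textbook form).
\[
  i\hbar\,\frac{\partial \psi}{\partial t}
  \;=\;
  -\,\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi \;+\; V(\mathbf{r})\,\psi
\]
```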

This relation of the Schrödinger equation to classical waves is already revealed in the way that a variant called the nonlinear Schrödinger equation is commonly used to describe other classical wave systems—for example in optics and even in ocean waves, where it provides a mathematical picture of unusually large and robust “rogue waves.”

But the normal “quantum” version—the linear Schrödinger equation—has not previously turned up in a classical context. Batygin says it does so here because the way he sets up the problem of self-gravitating disks creates a quantity that sets a particular “scale” in the problem, much as h does in quantum systems.

Loopy Physics

Whether around a young star or a supermassive black hole, the many mutually interacting objects in a self-gravitating debris disk are complicated to describe mathematically. But Batygin uses a simplified model in which the disk’s constituents are smeared and stretched into thin “wires” that loop in concentric ellipses right around the disk. Because the wires interact with one another through gravity, they can exchange orbital angular momentum between them, rather like the transfer of movement between the gear bearings and the axle of a bicycle.

This approach uses ideas developed in the 18th century by the mathematicians Pierre-Simon Laplace and Joseph-Louis Lagrange. Laplace was one of the first to study how a rotating clump of objects can collapse into a disklike shape. In 1796 he proposed our solar system formed from a great cloud of gas and dust spinning around the young sun.

Batygin and others had used this “wire” approximation before, but he decided to look at the extreme case in which the looped wires are made thinner and thinner until they merge into a continuous disk. In that limit he found the equation describing the system is the same as Schrödinger’s, with the disk itself being described by the analog of the wave function that defines the distribution of possible positions of a quantum particle. In effect, the shape of the disk is like the wave function of a quantum particle bouncing around in a cavity with walls at the disk’s inner and outer edges.
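
The cavity analogy is the familiar “particle in a box” of introductory quantum mechanics. The standard result, quoted here only to make the analogy concrete (it is not Batygin's disk equation), is that a particle confined between hard walls a distance L apart supports a discrete ladder of standing-wave modes:

```latex
% Particle-in-a-box modes and energies (standard textbook result).
\[
  \psi_n(x) \propto \sin\!\left(\frac{n\pi x}{L}\right),
  \qquad
  E_n = \frac{n^{2}\pi^{2}\hbar^{2}}{2mL^{2}},
  \qquad n = 1, 2, 3, \ldots
\]
```

In the disk analogue the inner and outer edges play the role of the walls, and the discrete standing waves correspond to the vibrational modes discussed below.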

The resulting disk has a series of vibrational “modes,” rather like resonances in a tuning fork, that might be excited by small disturbances — think of a planet-forming stellar disk nudged by a passing star or of a black hole accretion disk in which material is falling into the center unevenly. Batygin deduces the conditions under which a disk will warp in response or, conversely, will behave like a rigid body held fast by its own mutual gravity. This comes down to a matter of timescales, he says. If the objects orbiting in the disk exchange angular momentum on a timescale much shorter than the perturbation’s duration, the disk will remain rigid. “If on the other hand the self-interaction timescale is long compared with the perturbation timescale, the disk will warp,” he says.

Is “Quantumness” Really So Weird?

When he first saw the Schrödinger equation materialize out of his theoretical analysis, Batygin says he was stunned. “But in retrospect it almost seems obvious to me that it must emerge in this problem,” he adds.

What this means, though, is the Schrödinger equation can itself be derived from classical physics known since the 18th century. It doesn’t depend on “quantumness” at all—although it turns out to be applicable to that case.

That’s not as strange as it might seem. For one thing, science is full of examples of equations devised for one phenomenon turning out to apply to a totally different one, too. Equations concocted to describe a kind of chemical reaction have been applied to the modeling of crime, for example, and very recently a mathematical description of magnets was shown also to describe the fruiting patterns of trees in pistachio orchards.

But doesn’t quantum physics involve a rather uniquely odd sort of behavior? Not really. The Schrödinger equation does not so much describe what quantum particles are actually “doing,” rather it supplies a way of predicting what might be observed for systems governed by particular wavelike probability laws. In fact, other researchers have already shown the key phenomena of quantum theory emerge from a generalization of probability theory that could, too, have been in principle devised in the 18th century, before there was any inkling that tiny particles behave this way.

The advantage of his approach is its simplicity, Batygin notes. Instead of having to track all the movements of every particle in the disk using complicated computer models (so-called N-body simulations), the disk can be treated as a kind of smooth sheet that evolves over time and oscillates like a drumskin. That makes it, Batygin says, ideal for systems in which the central object is much more massive than the disk, such as protoplanetary disks and the rings of stars orbiting supermassive black holes. It will not work for galactic disks, however, like the spiral that forms our Milky Way.

But Ken Rice of The Royal Observatory in Scotland, who was not involved with the work, says that in the scenario in which the central object is much more massive than the disk, the dominant gravitational influence is the central object. “It’s then not entirely clear how including the disk self-gravity would influence the evolution,” he says. “My simple guess would be that it wouldn’t have much influence, but I might be wrong.” Which suggests the chief application of Batygin’s formalism may not be to model a wide range of systems but rather to make models for a narrow range of systems far less computationally expensive than N-body simulations.

Astrophysicist Scott Tremaine of the Institute for Advanced Study in Princeton, N.J., also not part of the study, agrees these equations might be easier to solve than those that describe the self-gravitating rings more precisely. But he says this simplification comes at the cost of neglecting the long reach of gravitational forces, because in the Schrödinger version only interactions between adjacent “wire” rings are taken into account. “It’s a rather drastic simplification of the system that only works for certain cases,” he says, “and won’t provide new insights into these disks for experts.” But he thinks the approach could have useful pedagogical value, not least in showing that the Schrödinger equation “isn’t some magic result just for quantum mechanics, but describes a variety of physical systems.”

But Saint Andrews’s Forgan thinks Batygin’s approach could be particularly useful for modeling black hole accretion disks that are warped by companion stars. “There are a lot of interesting results about binary supermassive black holes with ‘torn’ disks that this may be applicable to,” he says.

Quantum Algorithms Struggle Against Old Foe: Clever Computers


The quest for “quantum supremacy” – unambiguous proof that a quantum computer does something faster than an ordinary computer – has paradoxically led to a boom in quasi-quantum classical algorithms.

A popular misconception is that the potential — and the limits — of quantum computing must come from hardware. In the digital age, we’ve gotten used to marking advances in clock speed and memory. Likewise, the 50-qubit quantum machines now coming online from the likes of Intel and IBM have inspired predictions that we are nearing “quantum supremacy” — a nebulous frontier where quantum computers begin to do things beyond the ability of classical machines.

But quantum supremacy is not a single, sweeping victory to be sought — a broad Rubicon to be crossed — but rather a drawn-out series of small duels. It will be established problem by problem, quantum algorithm versus classical algorithm. “With quantum computers, progress is not just about speed,” said Michael Bremner, a quantum theorist at the University of Technology Sydney. “It’s much more about the intricacy of the algorithms at play.”

And the goalposts are shifting. “When it comes to saying where the supremacy threshold is, it depends on how good the best classical algorithms are,” said John Preskill, a theoretical physicist at the California Institute of Technology. “As they get better, we have to move that boundary.”

‘It Doesn’t Look So Easy’

Before the dream of a quantum computer took shape in the 1980s, most computer scientists took for granted that classical computing was all there was. The field’s pioneers had convincingly argued that classical computers — epitomized by the mathematical abstraction known as a Turing machine — should be able to compute everything that is computable in the physical universe, from basic arithmetic to stock trades to black hole collisions.

Classical machines couldn’t necessarily do all these computations efficiently, though. Let’s say you wanted to understand something like the chemical behavior of a molecule. This behavior depends on the behavior of the electrons in the molecule, which exist in a superposition of many classical states. Making things messier, the quantum state of each electron depends on the states of all the others — due to the quantum-mechanical phenomenon known as entanglement. Classically calculating these entangled states in even very simple molecules can become a nightmare of exponentially increasing complexity.

A quantum computer, by contrast, can deal with the intertwined fates of the electrons under study by superposing and entangling its own quantum bits. This enables the computer to process extraordinary amounts of information. Each single qubit you add doubles the states the system can simultaneously store: Two qubits can store four states, three qubits can store eight states, and so on. Thus, you might need just 50 entangled qubits to model quantum states that would require exponentially many classical bits — 1.125 quadrillion to be exact — to encode.
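
A quick check of the arithmetic behind that last figure (a toy calculation, nothing more):

```python
# Each added qubit doubles the number of basis states a register spans.
for n_qubits in (2, 3, 10, 50):
    print(f"{n_qubits:2d} qubits -> {2**n_qubits:,} basis states")
# 50 qubits -> 1,125,899,906,842,624 basis states, about 1.125 quadrillion.
```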

A quantum machine could therefore make the classically intractable problem of simulating large quantum-mechanical systems tractable, or so it appeared. “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” the physicist Richard Feynman famously quipped in 1981. “And by golly it’s a wonderful problem, because it doesn’t look so easy.”

It wasn’t, of course.

Even before anyone began tinkering with quantum hardware, theorists struggled to come up with suitable software. Early on, Feynman and David Deutsch, a physicist at the University of Oxford, learned that they could control quantum information with mathematical operations borrowed from linear algebra, which they called gates. As analogues to classical logic gates, quantum gates manipulate qubits in all sorts of ways — guiding them into a succession of superpositions and entanglements and then measuring their output. By mixing and matching gates to form circuits, the theorists could easily assemble quantum algorithms.

Conceiving algorithms that promised clear computational benefits proved more difficult. By the early 2000s, mathematicians had come up with only a few good candidates. Most famously, in 1994, a young staffer at Bell Laboratories named Peter Shor proposed a quantum algorithm that factors integers exponentially faster than any known classical algorithm — an efficiency that could allow it to crack many popular encryption schemes. Two years later, Shor’s Bell Labs colleague Lov Grover devised an algorithm that speeds up the classically tedious process of searching through unsorted databases. “There were a variety of examples that indicated quantum computing power should be greater than classical,” said Richard Jozsa, a quantum information scientist at the University of Cambridge.

But Jozsa, along with other researchers, would also discover a variety of examples that indicated just the opposite. “It turns out that many beautiful quantum processes look like they should be complicated” and therefore hard to simulate on a classical computer, Jozsa said. “But with clever, subtle mathematical techniques, you can figure out what they will do.” He and his colleagues found that they could use these techniques to efficiently simulate — or “de-quantize,” as the computer scientist Cristian Calude would say — a surprising number of quantum circuits. For instance, circuits that omit entanglement fall into this trap, as do those that entangle only a limited number of qubits or use only certain kinds of entangling gates.

What, then, guarantees that an algorithm like Shor’s is uniquely powerful? “That’s very much an open question,” Jozsa said. “We never really succeeded in understanding why some [algorithms] are easy to simulate classically and others are not. Clearly entanglement is important, but it’s not the end of the story.” Experts began to wonder whether many of the quantum algorithms that they believed were superior might turn out to be only ordinary.

Sampling Struggle

Until recently, the pursuit of quantum power was largely an abstract one. “We weren’t really concerned with implementing our algorithms because nobody believed that in the reasonable future we’d have a quantum computer to do it,” Jozsa said. Running Shor’s algorithm for integers large enough to unlock a standard 128-bit encryption key, for instance, would require thousands of qubits — plus probably many thousands more to correct for errors. Experimentalists, meanwhile, were fumbling while trying to control more than a handful.

But by 2011, things were starting to look up. That fall, at a conference in Brussels, Preskill speculated that “the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world” might not be far off. Recent laboratory results, he said, could soon lead to quantum machines on the order of 100 qubits. Getting them to pull off some “super-classical” feat maybe wasn’t out of the question. (Although D-Wave Systems’ commercial quantum processors could by then wrangle 128 qubits and now boast more than 2,000, they tackle only specific optimization problems; many experts doubt they can outperform classical computers.)

“I was just trying to emphasize we were getting close — that we might finally reach a real milestone in human civilization where quantum technology becomes the most powerful information technology that we have,” Preskill said. He called this milestone “quantum supremacy.” The name — and the optimism — stuck. “It took off to an extent I didn’t suspect.”

The buzz about quantum supremacy reflected a growing excitement in the field — over experimental progress, yes, but perhaps more so over a series of theoretical breakthroughs that began with a 2004 paper by the IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand quantum assets, the pair had turned their attention to rudimentary quantum puzzles known as sampling problems. In time, this class of problems would become experimentalists’ greatest hope for demonstrating an unambiguous speedup on early quantum machines.

Sampling problems exploit the elusive nature of quantum information. Say you apply a sequence of gates to 100 qubits. This circuit may whip the qubits into a mathematical monstrosity equivalent to something on the order of 2^100 classical bits. But once you measure the system, its complexity collapses to a string of only 100 bits. The system will spit out a particular string — or sample — with some probability determined by your circuit.

In a sampling problem, the goal is to produce a series of samples that look as though they came from this circuit. It’s like repeatedly tossing a coin to show that it will (on average) come up 50 percent heads and 50 percent tails. Except here, the outcome of each “toss” isn’t a single value — heads or tails — it’s a string of many values, each of which may be influenced by some (or even all) of the other values.

For a well-oiled quantum computer, this exercise is a no-brainer. It’s what it does naturally. Classical computers, on the other hand, seem to have a tougher time. In the worst circumstances, they must do the unwieldy work of computing probabilities for all possible output strings — all 2^100 of them — and then randomly select samples from that distribution. “People always conjectured this was the case,” particularly for very complex quantum circuits, said Ashley Montanaro, an expert in quantum algorithms at the University of Bristol.
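
A toy sketch of that brute-force classical strategy, with a made-up stand-in for the circuit's probabilities, shows where the cost comes from: the probability table itself has 2^n entries, which is manageable for a handful of bits and hopeless for a hundred.

```python
import itertools
import math
import random

def toy_output_probability(bits):
    """Made-up stand-in for the expensive amplitude calculation of a real circuit."""
    return math.exp(-sum(bits) / 2.0)   # arbitrary non-uniform weighting

def brute_force_sample(n_bits, n_samples, seed=0):
    strings = list(itertools.product((0, 1), repeat=n_bits))   # 2**n_bits entries
    weights = [toy_output_probability(s) for s in strings]
    rng = random.Random(seed)
    return rng.choices(strings, weights=weights, k=n_samples)

for sample in brute_force_sample(n_bits=4, n_samples=3):
    print("".join(map(str, sample)))
# Fine for n_bits = 4 (16 table entries); utterly infeasible for n_bits = 100.
```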

Terhal and DiVincenzo showed that even some simple quantum circuits should still be hard to sample by classical means. Hence, a bar was set. If experimentalists could get a quantum system to spit out these samples, they would have good reason to believe that they’d done something classically unmatchable.

Theorists soon expanded this line of thought to include other sorts of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In work posted on the scientific preprint site arxiv.org in 2010, they described a quantum machine that sends photons through an optical circuit, which shifts and splits the light in quantum-mechanical ways, thereby generating output patterns with specific probabilities. Reproducing these patterns became known as boson sampling. Aaronson and Arkhipov reasoned that boson sampling would start to strain classical resources at around 30 photons — a plausible experimental target.

Similarly enticing were computations called instantaneous quantum polynomial, or IQP, circuits. An IQP circuit has gates that all commute, meaning they can act in any order without changing the outcome — in the same way 2 + 5 = 5 + 2. This quality makes IQP circuits mathematically pleasing. “We started studying them because they were easier to analyze,” Bremner said. But he discovered that they have other merits. In work that began in 2010 and culminated in a 2016 paper with Montanaro and Dan Shepherd, now at the National Cyber Security Center in the U.K., Bremner explained why IQP circuits can be extremely powerful: Even for physically realistic systems of hundreds — or perhaps even dozens — of qubits, sampling would quickly become a classically thorny problem.

By 2016, boson samplers had yet to extend beyond 6 photons. Teams at Google and IBM, however, were verging on chips nearing 50 qubits; that August, Google quietly posted a draft paper laying out a road map for demonstrating quantum supremacy on these “near-term” devices.

Google’s team had considered sampling from an IQP circuit. But a closer look by Bremner and his collaborators suggested that the circuit would likely need some error correction — which would require extra gates and at least a couple hundred extra qubits — in order to unequivocally hamstring the best classical algorithms. So instead, the team used arguments akin to Aaronson’s and Bremner’s to show that circuits made of non-commuting gates, although likely harder to build and analyze than IQP circuits, would also be harder for a classical device to simulate. To make the classical computation even more challenging, the team proposed sampling from a circuit chosen at random. That way, classical competitors would be unable to exploit any familiar features of the circuit’s structure to better guess its behavior.

But there was nothing to stop the classical algorithms from getting more resourceful. In fact, in October 2017, a team at IBM showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits on as many as 56 qubits — provided the circuits don’t involve too much depth (layers of gates). Similarly, a more able algorithm has recently nudged the classical limits of boson sampling, to around 50 photons.

These upgrades, however, are still dreadfully inefficient. IBM’s simulation, for instance, took two days to do what a quantum computer is expected to do in less than one-tenth of a millisecond. Add a couple more qubits — or a little more depth — and quantum contenders could slip freely into supremacy territory. “Generally speaking, when it comes to emulating highly entangled systems, there has not been a [classical] breakthrough that has really changed the game,” Preskill said. “We’re just nibbling at the boundary rather than exploding it.”

That’s not to say there will be a clear victory. “Where the frontier is is a thing people will continue to debate,” Bremner said. Imagine this scenario: Researchers sample from a 50-qubit circuit of some depth — or maybe a slightly larger one of less depth — and claim supremacy. But the circuit is pretty noisy — the qubits are misbehaving, or the gates don’t work that well. So then some crackerjack classical theorists swoop in and simulate the quantum circuit, no sweat, because “with noise, things you think are hard become not so hard from a classical point of view,” Bremner explained. “Probably that will happen.”

What’s more certain is that the first “supreme” quantum machines, if and when they arrive, aren’t going to be cracking encryption codes or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” Montanaro said. “The first wave of problems we solve are ones for which we don’t really care about the answers.”

Yet these early wins, however small, will assure scientists that they are on the right track — that a new regime of computation really is possible. Then it’s anyone’s guess what the next wave of problems will be.

The Era of Quantum Computing Is Here.


 Quantum computers should soon be able to beat classical computers at certain basic tasks. But before they’re truly powerful, researchers have to overcome a number of fundamental roadblocks.

Quantum computers have to deal with the problem of noise, which can quickly derail any calculation.


After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.



Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits — 1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.
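
A minimal numerical illustration of that last sentence (a toy simulation, not a description of real hardware): a qubit with amplitudes a and b on the states 0 and 1 returns each answer with probability |a|^2 or |b|^2.

```python
import random

def measure(amp0, amp1, rng):
    """Return 0 or 1 with the Born-rule probabilities |amp0|^2 and |amp1|^2."""
    p0 = abs(amp0) ** 2 / (abs(amp0) ** 2 + abs(amp1) ** 2)
    return 0 if rng.random() < p0 else 1

rng = random.Random(1)
amp = 2 ** -0.5                      # equal superposition: amplitudes 1/sqrt(2)
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(amp, amp, rng)] += 1
print(counts)                        # roughly 5,000 of each outcome
```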

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states — a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.

Note that I’ve not said — as it often is said — that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions — some of the time — but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.


It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits — and hence the potential to interact with the environment — increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with — you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
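
Here is the classical trick in miniature, a sketch of a three-copy repetition code with a majority vote; the quantum case is harder precisely because qubits cannot simply be copied and read out this way.

```python
import random

def noisy_copy(bit, flip_prob, rng):
    """Copy a bit, flipping it with probability flip_prob."""
    return bit ^ (rng.random() < flip_prob)

def majority_vote(bits):
    return int(sum(bits) > len(bits) // 2)

rng = random.Random(42)
original = 1
copies = [noisy_copy(original, flip_prob=0.1, rng=rng) for _ in range(3)]
print(copies, "->", majority_vote(copies))
# A single flipped copy is outvoted; the vote fails only if two or more
# copies flip, which is far less likely than a single flip.
```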

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead — all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.


A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit — a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
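
A rough sketch of that extrapolation idea: run the same computation at several deliberately amplified noise levels, fit the trend, and read off the fitted value at zero noise. The numbers below are invented placeholders; on a real device the noisy expectation values would come from measurements.

```python
import numpy as np

noise_scale = np.array([1.0, 1.5, 2.0, 3.0])       # noise amplification factors
measured    = np.array([0.82, 0.74, 0.66, 0.51])   # noisy expectation values (invented)

# Fit measured ~ slope * scale + intercept and evaluate the fit at zero noise.
slope, intercept = np.polyfit(noise_scale, measured, deg=1)
print(f"zero-noise estimate: {intercept:.3f}")      # the mitigated value
```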

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties — such as stability and chemical reactivity — of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
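
To make the variational idea behind the VQE concrete, here is a heavily simplified, purely classical toy: prepare a trial state that depends on one parameter, compute its energy, and tune the parameter to push the energy down. The 2-by-2 “Hamiltonian” is invented for illustration; in the real algorithm the trial state lives on noisy qubits and the energy comes from measurements.

```python
import numpy as np

H = np.array([[-1.0, 0.5],
              [ 0.5, 0.3]])           # invented stand-in for a tiny molecular Hamiltonian

def trial_state(theta):
    return np.array([np.cos(theta), np.sin(theta)])   # normalized one-parameter ansatz

def energy(theta):
    psi = trial_state(theta)
    return psi @ H @ psi               # expectation value <psi|H|psi>

thetas = np.linspace(0.0, np.pi, 2001)
best = min(thetas, key=energy)
print(f"variational minimum : {energy(best):.6f}")
print(f"exact ground energy : {np.linalg.eigvalsh(H).min():.6f}")
# The two agree here because this one-parameter ansatz can reach the exact
# ground state of a real 2x2 Hamiltonian; for molecules the ansatz is richer.
```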

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just — or even mainly — how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched — if the gates switch too slowly, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
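
As a rough illustration of that budget: the coherence figure below follows the “few microseconds” above, while the gate time is a hypothetical round number, not a spec for any particular machine.

```python
# Back-of-envelope depth budget: how many sequential gate operations fit inside
# the coherence window. The numbers are illustrative assumptions, not measured
# values for any real device.
coherence_time_s = 5e-6     # "a few microseconds", per the article
gate_time_s = 100e-9        # hypothetical gate duration (~100 ns)

max_depth = int(coherence_time_s / gate_time_s)
print(f"Rough maximum circuit depth before decoherence: ~{max_depth} gates")
```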

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert — it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

Quantum Chemistry Solves The Question of Why Life Needs So Many Amino Acids


One of the oldest and most fundamental questions in biochemistry is why the 20 amino acids that support life are all needed, when the original core of 13 would do – and quantum chemistry might have just provided us with the answer.

According to new research, it’s the extra chemical reactivity of the newer seven amino acids that makes them so vital to life, even though they don’t add anything different in terms of their spatial structure.

Quantum chemistry is a way of taking some of the principles of quantum mechanics – describing particles according to probabilistic, wave-like properties – and applying them to the way atoms behave in chemical reactions.

The international team of scientists behind the new study used quantum chemistry techniques to compare amino acids found in space (and left here by meteorite fragments) with amino acids supporting life today on Earth.

“The transition from the dead chemistry out there in space to our own biochemistry here today was marked by an increase in softness and thus an enhanced reactivity of the building blocks,” says one of the researchers, Bernd Moosmann from Johannes Gutenberg University Mainz in Germany.

It’s the job of amino acids to form proteins, as instructed by our DNA. These acids were formed right after Earth itself came into being, about 4.54 billion years ago, and so represent one of the earliest building blocks of life.

However, why evolution decided that we needed 20 amino acids to handle this genetic encoding has never been clear, because the first 13 that developed should have been enough for the task.

The greater “softness” of the extra seven amino acids identified by the researchers means they are more readily reactive and more flexible in terms of chemical changes.
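
For readers curious what “softness” means quantitatively: in the conceptual-chemistry language such studies draw on, chemical hardness is often approximated from the ionization energy I and electron affinity A as eta = (I − A)/2, with softness taken (up to a conventional factor) as its inverse. The sketch below uses hypothetical numbers purely to illustrate the comparison, not values from the paper.

```python
# Illustrative only: Pearson-style hardness and softness from hypothetical
# ionization energies (I) and electron affinities (A), in electronvolts.
def hardness(I_ev, A_ev):
    return (I_ev - A_ev) / 2.0          # eta = (I - A) / 2

def softness(I_ev, A_ev):
    return 1.0 / hardness(I_ev, A_ev)   # softer = more readily reactive

print(f"'harder' molecule: softness = {softness(9.0, 0.5):.2f} 1/eV")
print(f"'softer' molecule: softness = {softness(8.0, 1.5):.2f} 1/eV")
```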

If you were to represent the amino acids as circles, they could be drawn as multiple concentric circles of differing energy levels, rather than as a single circle with one chemical hardness and energy level – kind of like in the image below.

Image: amino acids depicted as concentric circles of differing energy levels (Michael Plenikowski)

Having arrived at this hypothesis through quantum chemistry calculations, the scientists were able to back it up with a series of biochemical experiments.

Along the way the team determined that the extra amino acids – particularly methionine, tryptophan, and selenocysteine – could well have evolved as a response to increasing levels of oxygen in the biosphere early in the planet’s history.

Peering so far back in time is difficult, as the first organic compounds never left fossils behind for us to analyse, but this may have been part of the process that kicked off the formation of life on Earth.

As the very earliest living cells tried to deal with the extra oxidative stress, it was a case of survival of the fittest. The cells best able to cope with that additional oxygen – through the protection of the new amino acids – were the ones that lived on and flourished.

“With this in view, we could characterise oxygen as the author adding the very final touch to the genetic code,” says Moosmann.

The research has been published in PNAS.

You thought quantum mechanics was weird: check out entangled time


Photo by Alan Levine/Flickr

In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics. The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly. ‘I know of course how the hocus pocus works mathematically,’ he wrote to Einstein on 13 July 1935. ‘But I do not like such a theory.’ Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.

The problem is that entanglement violates how the world ought to work. Information can’t travel faster than the speed of light, for one. But in a 1935 paper, Einstein and his co-authors showed how entanglement leads to what’s now called quantum nonlocality, the eerie link that appears to exist between entangled particles. If two quantum systems meet and then separate, even across a distance of thousands of lightyears, it becomes impossible to measure the features of one system (such as its position, momentum and polarity) without instantly steering the other into a corresponding state.

Up to today, most experiments have tested entanglement over spatial gaps. The assumption is that the ‘nonlocal’ part of quantum nonlocality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal nonlocality?

The answer, as it turns out, is yes. Just when you thought quantum mechanics couldn’t get any weirder, a team of physicists at the Hebrew University of Jerusalem reported in 2013 that they had successfully entangled photons that never coexisted. Previous experiments involving a technique called ‘entanglement swapping’ had already shown quantum correlations across time, by delaying the measurement of one of the coexisting entangled particles; but Eli Megidish and his collaborators were the first to show entanglement between photons whose lifespans did not overlap at all.

Here’s how they did it. First, they created an entangled pair of photons, ‘1-2’ (step I in the diagram below). Soon after, they measured the polarisation of photon 1 (a property describing the direction of light’s oscillation) – thus ‘killing’ it (step II). Photon 2 was sent on a wild goose chase while a new entangled pair, ‘3-4’, was created (step III). Photon 3 was then measured along with the itinerant photon 2 in such a way that the entanglement relation was ‘swapped’ from the old pairs (‘1-2’ and ‘3-4’) onto the new ‘2-3’ combo (step IV). Some time later (step V), the polarisation of the lone survivor, photon 4, was measured, and the results were compared with those of the long-dead photon 1 (back at step II).

Figure 1. Time line diagram: (I) Birth of photons 1 and 2, (II) detection of photon 1, (III) birth of photons 3 and 4, (IV) Bell projection of photons 2 and 3, (V) detection of photon 4.

The upshot? The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.
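
A toy state-vector calculation shows the mechanism behind the swap. The sketch below is hypothetical in the sense that it models the ordinary spatial version of entanglement swapping, not the time-ordered detections of the Megidish experiment: it simply projects photons 2 and 3 onto a Bell state and checks that photons 1 and 4 come out entangled.

```python
# Entanglement swapping in miniature: photons 1-2 and 3-4 each start in a Bell
# pair; projecting photons 2 and 3 onto a Bell state leaves 1 and 4 entangled.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # |Phi+>

# Four-photon state |Phi+>_{12} (x) |Phi+>_{34}, tensor order: photons 1,2,3,4.
psi = np.kron(bell, bell)

# Operator that projects photons 2 and 3 onto |Phi+> and discards them:
# <Phi+|_{23} sandwiched between identities on photons 1 and 4.
I2 = np.eye(2)
M = np.kron(np.kron(I2, bell.reshape(1, 4)), I2)   # maps 16-dim -> 4-dim

unnormalized_14 = M @ psi
prob = np.vdot(unnormalized_14, unnormalized_14).real   # chance of this outcome
state_14 = unnormalized_14 / np.sqrt(prob)

fidelity = abs(np.vdot(bell, state_14)) ** 2
print(f"P(Phi+ outcome on photons 2,3) = {prob:.2f}")            # 0.25
print(f"Fidelity of photons 1,4 with |Phi+>: {fidelity:.2f}")     # 1.00
```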

What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter. Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

Lest this scenario strike you as too outlandish, Megidish and his colleagues can’t resist speculating on possible and rather spooky interpretations of their results. Perhaps the measurement of photon 1’s polarisation at step II somehow steers the future polarisation of 4, or the measurement of photon 4’s polarisation at step V somehow rewrites the past polarisation state of photon 1. In both forward and backward directions, quantum correlations span the causal void between the death of one photon and the birth of the other.

Just a spoonful of relativity helps the spookiness go down, though. In developing his theory of special relativity, Einstein deposed the concept of simultaneity from its Newtonian pedestal. As a consequence, simultaneity went from being an absolute property to being a relative one. There is no single timekeeper for the Universe; when something occurs depends on your location relative to what you are observing, known as your frame of reference. So the key to avoiding strange causal behaviour (steering the future or rewriting the past) in instances of temporal separation is to accept that calling events ‘simultaneous’ carries little metaphysical weight. It is only a frame-specific property, a choice among many alternative but equally viable ones – a matter of convention, or record-keeping.

The lesson carries over directly to both spatial and temporal quantum nonlocality. Mysteries regarding entangled pairs of particles amount to disagreements about labelling, brought about by relativity. Einstein showed that no sequence of events can be metaphysically privileged – can be considered more real – than any other. Only by accepting this insight can one make headway on such quantum puzzles.

The various frames of reference in the Hebrew University experiment (the lab’s frame, photon 1’s frame, photon 4’s frame, and so on) have their own ‘historians’, so to speak. While these historians will disagree about how things went down, not one of them can claim a corner on truth. A different sequence of events unfolds within each one, according to that spatiotemporal point of view. Clearly, then, any attempt at assigning frame-specific properties generally, or tying general properties to one particular frame, will cause disputes among the historians. But here’s the thing: while there might be legitimate disagreement about which properties should be assigned to which particles and when, there shouldn’t be disagreement about the very existence of these properties, particles, and events.

These findings drive yet another wedge between our beloved classical intuitions and the empirical realities of quantum mechanics. As was true for Schrödinger and his contemporaries, scientific progress is going to involve investigating the limitations of certain metaphysical views. Schrödinger’s cat, half-alive and half-dead, was created to illustrate how the entanglement of systems leads to macroscopic phenomena that defy our usual understanding of the relations between objects and their properties: an organism such as a cat is either dead or alive. No middle ground there.

Most contemporary philosophical accounts of the relationship between objects and their properties embrace entanglement solely from the perspective of spatial nonlocality. But there’s still significant work to be done on incorporating temporal nonlocality – not only in object-property discussions, but also in debates over material composition (such as the relation between a lump of clay and the statue it forms), and part-whole relations (such as how a hand relates to a limb, or a limb to a person). For example, the ‘puzzle’ of how parts fit with an overall whole presumes clear-cut spatial boundaries among underlying components, yet spatial nonlocality cautions against this view. Temporal nonlocality further complicates this picture: how does one describe an entity whose constituent parts are not even coexistent?

Discerning the nature of entanglement might at times be an uncomfortable project. It’s not clear what substantive metaphysics might emerge from scrutiny of fascinating new research by the likes of Megidish and other physicists. In a letter to Einstein, Schrödinger notes wryly (deploying an odd metaphor): ‘One has the feeling that it is precisely the most important statements of the new theory that can really be squeezed into these Spanish boots – but only with difficulty.’ We cannot afford to ignore spatial or temporal nonlocality in future metaphysics: whether or not the boots fit, we’ll have to wear ’em.

Quantum Algorithms Struggle Against Old Foe: Clever Computers


The quest for “quantum supremacy” – unambiguous proof that a quantum computer does something faster than an ordinary computer – has paradoxically led to a boom in quasi-quantum classical algorithms.

For Cristian Calude, doubt began with a puzzle so simple, he said, that “even a child can understand it.” Here it is: Suppose you have a mysterious box that takes one of two possible inputs — you can press a red button or a blue button, say — and gives back one of two possible outputs — a red ball or a blue ball. If the box always returns the same color ball no matter what, it’s said to be constant; if the color of the ball changes with the color of the button, it’s balanced. Your assignment is to determine which type of box you’ve got by asking it to perform its secret act only once.

At first glance, the task might seem hopeless. Indeed, when the physicist David Deutsch described this thought experiment in 1985, computer scientists believed that no machine operating by the rules of classical physics could learn the box’s identity with fewer than two queries: one for each input.

Deutsch, however, found that by translating the problem into the strange language of quantum mechanics, he could in fact achieve a one-query solution. He proposed a simple five-step algorithm that could run on a quantum computer of just two qubits — the basic units of quantum information. (Experimentalists wouldn’t build an actual quantum machine capable of running the algorithm until 1998.)
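
For the curious, here is a small NumPy simulation of the standard two-qubit textbook version of the algorithm. The oracle encodes f as a reversible gate that flips a second “answer” qubit whenever f(x) = 1 — the usual modern construction, which may differ in presentation from Deutsch’s original 1985 recipe.

```python
# Deutsch's algorithm in miniature: one oracle call distinguishes a constant
# box from a balanced one. The oracle is the standard |x, y> -> |x, y XOR f(x)>.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

def oracle(f):
    """Build the 4x4 permutation matrix U_f |x, y> = |x, y XOR f(x)>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    psi = np.kron([1, 0], [0, 1])                 # start in |0>|1>
    psi = np.kron(H, H) @ psi                     # Hadamards on both qubits
    psi = oracle(f) @ psi                         # single query to the box
    psi = np.kron(H, I2) @ psi                    # Hadamard on the query qubit
    p_one = abs(psi[2]) ** 2 + abs(psi[3]) ** 2   # probability of measuring 1
    return "balanced" if p_one > 0.5 else "constant"

print(deutsch(lambda x: 0))       # -> "constant"
print(deutsch(lambda x: x))       # -> "balanced"
print(deutsch(lambda x: 1 - x))   # -> "balanced"
```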

Although it has no practical use, Deutsch’s algorithm — the first quantum algorithm — became a ubiquitous illustration of the inimitable power of quantum computation, which might one day transform such fields as cryptography, drug discovery and materials engineering. “If you open a textbook in quantum computing written before the last 10 years or so, it will start with this example,” said Calude, a mathematician and computer scientist at the University of Auckland in New Zealand. “It appeared everywhere.”

But something bothered him. If Deutsch’s algorithm were truly superior, as the early textbooks claimed, no classical algorithm of comparable ability could exist. Was that really true? “I’m a mathematician — I am by training an unbeliever,” Calude said. “When I see a claim like this, I start thinking: How do I prove it?”

He couldn’t. Instead, he showed it was false. In a 2007 paper, he broke down Deutsch’s algorithm into its constituent quantum parts (for instance, the ability to represent two classical bits as a “superposition” of both at once) and sidestepped these instructions with classical operations — a process Calude calls “de-quantization.” In this way, he constructed an elegant classical solution to Deutsch’s black-box riddle. The quantum solution, it turned out, wasn’t always better after all.

A popular misconception is that the potential — and the limits — of quantum computing must come from hardware. In the digital age, we’ve gotten used to marking advances in clock speed and memory. Likewise, the 50-qubit quantum machines now coming online from the likes of Intel and IBM have inspired predictions that we are nearing “quantum supremacy” — a nebulous frontier where quantum computers begin to do things beyond the ability of classical machines.

But quantum supremacy is not a single, sweeping victory to be sought — a broad Rubicon to be crossed — but rather a drawn-out series of small duels. It will be established problem by problem, quantum algorithm versus classical algorithm. “With quantum computers, progress is not just about speed,” said Michael Bremner, a quantum theorist at the University of Technology Sydney. “It’s much more about the intricacy of the algorithms at play.”

Calude’s story, in other words, is not unique. Paradoxically, reports of powerful quantum computations are motivating improvements to classical ones, making it harder for quantum machines to gain an advantage. “Most of the time when people talk about quantum computing, classical computing is dismissed, like something that is past its prime,” Calude said. “But that is not the case. This is an ongoing competition.”

And the goalposts are shifting. “When it comes to saying where the supremacy threshold is, it depends on how good the best classical algorithms are,” said John Preskill, a theoretical physicist at the California Institute of Technology. “As they get better, we have to move that boundary.”

‘It Doesn’t Look So Easy’

Before the dream of a quantum computer took shape in the 1980s, most computer scientists took for granted that classical computing was all there was. The field’s pioneers had convincingly argued that classical computers — epitomized by the mathematical abstraction known as a Turing machine — should be able to compute everything that is computable in the physical universe, from basic arithmetic to stock trades to black hole collisions.

Classical machines couldn’t necessarily do all these computations efficiently, though. Let’s say you wanted to understand something like the chemical behavior of a molecule. This behavior depends on the behavior of the electrons in the molecule, which exist in a superposition of many classical states. Making things messier, the quantum state of each electron depends on the states of all the others — due to the quantum-mechanical phenomenon known as entanglement. Classically calculating these entangled states in even very simple molecules can become a nightmare of exponentially increasing complexity.

A quantum computer, by contrast, can deal with the intertwined fates of the electrons under study by superposing and entangling its own quantum bits. This enables the computer to process extraordinary amounts of information. Each qubit you add doubles the number of states the system can simultaneously store: Two qubits can store four states, three qubits can store eight states, and so on. Thus, you might need just 50 entangled qubits to model quantum states that would require exponentially many classical bits — 2^50, or about 1.13 quadrillion — to encode.
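
A quick back-of-envelope check of that figure, assuming the usual bookkeeping of 16 bytes per double-precision complex amplitude:

```python
# The doubling claim, made concrete: n qubits span 2**n basis states, and a
# general n-qubit state needs one complex amplitude per basis state.
for n in (2, 3, 50):
    print(f"{n} qubits -> {2 ** n:,} basis states")

bytes_needed = 2 ** 50 * 16    # assuming 16-byte (complex128) amplitudes
print(f"Storing a general 50-qubit state: ~{bytes_needed / 1e15:.0f} petabytes")
```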

A quantum machine could therefore make the classically intractable problem of simulating large quantum-mechanical systems tractable, or so it appeared. “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” the physicist Richard Feynman famously quipped in 1981. “And by golly it’s a wonderful problem, because it doesn’t look so easy.”

It wasn’t, of course.

Even before anyone began tinkering with quantum hardware, theorists struggled to come up with suitable software. Early on, Feynman and Deutsch learned that they could control quantum information with mathematical operations borrowed from linear algebra, which they called gates. As analogues to classical logic gates, quantum gates manipulate qubits in all sorts of ways — guiding them into a succession of superpositions and entanglements and then measuring their output. By mixing and matching gates to form circuits, the theorists could easily assemble quantum algorithms.
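
A minimal sketch of that linear-algebra picture: each gate is a unitary matrix, a circuit is a matrix product, and entanglement appears when the product is applied to a simple starting state. (Plain NumPy; no claim that this is how Feynman or Deutsch wrote things down.)

```python
# Gates as matrices, circuits as matrix products.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                    # controlled-NOT gate
I2 = np.eye(2)

circuit = CNOT @ np.kron(H, I2)                    # H on qubit 0, then CNOT
psi = circuit @ np.array([1, 0, 0, 0])             # apply to |00>
print(psi)   # [0.707, 0, 0, 0.707]: the entangled state (|00> + |11>)/sqrt(2)
```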

Conceiving algorithms that promised clear computational benefits proved more difficult. By the early 2000s, mathematicians had come up with only a few good candidates. Most famously, in 1994, a young staffer at Bell Laboratories named Peter Shor proposed a quantum algorithm that factors integers exponentially faster than any known classical algorithm — an efficiency that could allow it to crack many popular encryption schemes. Two years later, Shor’s Bell Labs colleague Lov Grover devised an algorithm that speeds up the classically tedious process of searching through unsorted databases. “There were a variety of examples that indicated quantum computing power should be greater than classical,” said Richard Jozsa, a quantum information scientist at the University of Cambridge who, in 1992, helped Deutsch extend his black-box problem to boxes that have many possible inputs, for which the quantum solution remains unrivaled.

But Jozsa, along with Calude and others, would also discover a variety of examples that, like Deutsch’s original algorithm, indicated just the opposite. “It turns out that many beautiful quantum processes look like they should be complicated” and therefore hard to simulate on a classical computer, Jozsa said. “But with clever, subtle mathematical techniques, you can figure out what they will do.” He and his colleagues found that they could use these techniques to efficiently simulate — or de-quantize, as Calude would say — a surprising number of quantum circuits. For instance, circuits that omit entanglement fall into this trap, as do those that entangle only a limited number of qubits or use only certain kinds of entangling gates.

What, then, guarantees that an algorithm like Shor’s is uniquely powerful? “That’s very much an open question,” Jozsa said. “We never really succeeded in understanding why some [algorithms] are easy to simulate classically and others are not. Clearly entanglement is important, but it’s not the end of the story.” Experts began to wonder whether many of the quantum algorithms that they believed were superior might turn out to be only ordinary.

Sampling Struggle

Until recently, the pursuit of quantum power was largely an abstract one. “We weren’t really concerned with implementing our algorithms because nobody believed that in the reasonable future we’d have a quantum computer to do it,” Jozsa said. Running Shor’s algorithm for integers large enough to unlock a standard 128-bit encryption key, for instance, would require thousands of qubits — plus probably many thousands more to correct for errors. Experimentalists, meanwhile, were fumbling while trying to control more than a handful.

But by 2011, things were starting to look up. That fall, at a conference in Brussels, Preskill speculated that “the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world” might not be far off. Recent laboratory results, he said, could soon lead to quantum machines on the order of 100 qubits. Getting them to pull off some “super-classical” feat maybe wasn’t out of the question. (Although D-Wave Systems’ commercial quantum processors could by then wrangle 128 qubits and now boast more than 2,000, they tackle only specific optimization problems; many experts doubt they can outperform classical computers.)

“I was just trying to emphasize we were getting close — that we might finally reach a real milestone in human civilization where quantum technology becomes the most powerful information technology that we have,” Preskill said. He called this milestone “quantum supremacy.” The name — and the optimism — stuck. “It took off to an extent I didn’t suspect.”

The buzz about quantum supremacy reflected a growing excitement in the field — over experimental progress, yes, but perhaps more so over a series of theoretical breakthroughs that began with a 2004 paper by the IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand quantum assets, the pair had turned their attention to rudimentary quantum puzzles known as sampling problems. In time, this class of problems would become experimentalists’ greatest hope for demonstrating an unambiguous speedup on early quantum machines.

Sampling problems exploit the elusive nature of quantum information. Say you apply a sequence of gates to 100 qubits. This circuit may whip the qubits into a mathematical monstrosity equivalent to something on the order of 2^100 classical bits. But once you measure the system, its complexity collapses to a string of only 100 bits. The system will spit out a particular string — or sample — with some probability determined by your circuit.

In a sampling problem, the goal is to produce a series of samples that look as though they came from this circuit. It’s like repeatedly tossing a coin to show that it will (on average) come up 50 percent heads and 50 percent tails. Except here, the outcome of each “toss” isn’t a single value — heads or tails — it’s a string of many values, each of which may be influenced by some (or even all) of the other values.

For a well-oiled quantum computer, this exercise is a no-brainer. It’s what it does naturally. Classical computers, on the other hand, seem to have a tougher time. In the worst circumstances, they must do the unwieldy work of computing probabilities for all possible output strings — all 2^100 of them — and then randomly select samples from that distribution. “People always conjectured this was the case,” particularly for very complex quantum circuits, said Ashley Montanaro, an expert in quantum algorithms at the University of Bristol.
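
Here is that worst-case classical strategy in miniature, with a Haar-random 4-qubit unitary standing in (as a simplifying assumption) for a random sequence of gates. For 100 qubits the probability table below would need 2^100 entries, which is exactly the problem.

```python
# Brute-force classical sampling from a toy "random circuit": compute every
# output probability, then draw samples from that distribution.
import numpy as np

rng = np.random.default_rng(0)
n = 4
dim = 2 ** n

# Haar-random unitary via QR decomposition of a complex Gaussian matrix.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Q, R = np.linalg.qr(A)
U = Q * (np.diag(R) / np.abs(np.diag(R)))

psi = U[:, 0]                          # circuit applied to |0...0>
probs = np.abs(psi) ** 2               # Born-rule output distribution
samples = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n}b") for s in samples])   # five sampled bit strings
```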

Terhal and DiVincenzo showed that even some simple quantum circuits should still be hard to sample by classical means. Hence, a bar was set. If experimentalists could get a quantum system to spit out these samples, they would have good reason to believe that they’d done something classically unmatchable.

Theorists soon expanded this line of thought to include other sorts of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In work posted on the scientific preprint site arxiv.org in 2010, they described a quantum machine that sends photons through an optical circuit, which shifts and splits the light in quantum-mechanical ways, thereby generating output patterns with specific probabilities. Reproducing these patterns became known as boson sampling. Aaronson and Arkhipov reasoned that boson sampling would start to strain classical resources at around 30 photons — a plausible experimental target.

Similarly enticing were computations called instantaneous quantum polynomial, or IQP, circuits. An IQP circuit has gates that all commute, meaning they can act in any order without changing the outcome — in the same way 2 + 5 = 5 + 2. This quality makes IQP circuits mathematically pleasing. “We started studying them because they were easier to analyze,” Bremner said. But he discovered that they have other merits. In work that began in 2010 and culminated in a 2016 paper with Montanaro and Dan Shepherd, now at the National Cyber Security Centre in the U.K., Bremner explained why IQP circuits can be extremely powerful: Even for physically realistic systems of hundreds — or perhaps even dozens — of qubits, sampling would quickly become a classically thorny problem.
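
The commuting property is easy to check directly: the gates inside an IQP circuit are all diagonal in the same basis, and diagonal matrices commute. A quick NumPy check, using a controlled-Z and an arbitrary two-qubit phase gate as hypothetical examples:

```python
# Diagonal gates commute, so their order doesn't change the circuit.
import numpy as np

CZ = np.diag([1, 1, 1, -1])
theta = 0.7                                   # arbitrary phase angle
ZZ = np.diag(np.exp(1j * theta * np.array([1, -1, -1, 1])))

print(np.allclose(CZ @ ZZ, ZZ @ CZ))          # True: order doesn't matter
```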

By 2016, boson samplers had yet to extend beyond 6 photons. Teams at Google and IBM, however, were verging on chips nearing 50 qubits; that August, Google quietly posted a draft paper laying out a road map for demonstrating quantum supremacy on these “near-term” devices.

Google’s team had considered sampling from an IQP circuit. But a closer look by Bremner and his collaborators suggested that the circuit would likely need some error correction — which would require extra gates and at least a couple hundred extra qubits — in order to unequivocally hamstring the best classical algorithms. So instead, the team used arguments akin to Aaronson’s and Bremner’s to show that circuits made of non-commuting gates, although likely harder to build and analyze than IQP circuits, would also be harder for a classical device to simulate. To make the classical computation even more challenging, the team proposed sampling from a circuit chosen at random. That way, classical competitors would be unable to exploit any familiar features of the circuit’s structure to better guess its behavior.

But there was nothing to stop the classical algorithms from getting more resourceful. In fact, in October 2017, a team at IBM showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits on as many as 56 qubits — provided the circuits don’t involve too much depth (layers of gates). Similarly, a more able algorithm has recently nudged the classical limits of boson sampling, to around 50 photons.

These upgrades, however, are still dreadfully inefficient. IBM’s simulation, for instance, took two days to do what a quantum computer is expected to do in less than one-tenth of a millisecond. Add a couple more qubits — or a little more depth — and quantum contenders could slip freely into supremacy territory. “Generally speaking, when it comes to emulating highly entangled systems, there has not been a [classical] breakthrough that has really changed the game,” Preskill said. “We’re just nibbling at the boundary rather than exploding it.”

That’s not to say there will be a clear victory. “Where the frontier is is a thing people will continue to debate,” Bremner said. Imagine this scenario: Researchers sample from a 50-qubit circuit of some depth — or maybe a slightly larger one of less depth — and claim supremacy. But the circuit is pretty noisy — the qubits are misbehaving, or the gates don’t work that well. So then some crackerjack classical theorists swoop in and simulate the quantum circuit, no sweat, because “with noise, things you think are hard become not so hard from a classical point of view,” Bremner explained. “Probably that will happen.”

What’s more certain is that the first “supreme” quantum machines, if and when they arrive, aren’t going to be cracking encryption codes or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” Montanaro said. “The first wave of problems we solve are ones for which we don’t really care about the answers.”

Yet these early wins, however small, will assure scientists that they are on the right track — that a new regime of computation really is possible. Then it’s anyone’s guess what the next wave of problems will be.
