The Universe’s Ultimate Complexity Revealed by Simple Quantum Games


A two-player game can reveal whether the universe has an infinite amount of complexity.

Art for "The Universe’s Ultimate Complexity Revealed by Simple Quantum Games"

How many independent properties does the universe possess? A simple game might reveal the answer.

One of the biggest and most basic questions in physics involves the number of ways to configure the matter in the universe. If you took all that matter and rearranged it, then rearranged it again, then rearranged it again, would you ever exhaust the possible configurations, or could you go on reconfiguring forever?

Physicists don’t know, but in the absence of certain knowledge, they make assumptions. And those assumptions differ depending on the area of physics they happen to be in. In one area they assume the number of configurations is finite. In another they assume it’s infinite. For now, at least, there’s no way to tell who’s right.

But over the last couple years, a select group of mathematicians and computer scientists has been busy creating games that could theoretically settle the question. The games involve two players placed in isolation from each other. The players are asked questions, and they win if their answers are coordinated in a certain way. In all of these games, the rate at which players win has implications for the number of different ways the universe can be configured.

“There’s this philosophical question: Is the universe finite or infinite-dimensional?” said Henry Yuen, a theoretical computer scientist at the University of Toronto. “People will think this is something you can never test, but one possible way of resolving this is with a game like what William came up with.”

Yuen was referring to William Slofstra, a mathematician at the University of Waterloo. In 2016 Slofstra invented a game that involves two players who assign values to variables in hundreds of simple equations. Under normal circumstances even the most cunning players will sometimes lose. But Slofstra proved that if you give them access to an infinite amount of an unorthodox resource — entangled quantum particles — it becomes possible for the players to win this game all the time.

Other researchers have since refined Slofstra’s result. They’ve proved that you don’t need a game with hundreds of questions to reach the same conclusion Slofstra did. In 2017 three researchers proved that there are games with just five questions that can be won 100 percent of the time if the players have access to an unlimited number of entangled particles.

These games are all modeled on games invented more than 50 years ago by the physicist John Stewart Bell. Bell developed the games to test one of the strangest propositions about the physical world made by the theory of quantum mechanics. A half-century later, his ideas may turn out to be useful for much more than that.

Magic Squares

Bell came up with “nonlocal” games, which require players to be at a distance from each other with no way to communicate. Each player answers a question. The players win or lose based on the compatibility of their answers.

One such game is the magic square game. There are two players, Alice and Bob, each with a 3-by-3 grid. A referee tells Alice to fill out one particular row in the grid — say the second row — by putting either a 1 or a 0 in each box, such that the sum of the numbers in that row is odd. The referee tells Bob to fill out one column in the grid — say the first column — by putting either a 1 or a 0 in each box, such that the sum of the numbers in that column is even. Alice and Bob win the game if Alice’s numbers give an odd sum, Bob’s give an even sum, and — most important — they’ve each written down the same number in the one square where their row and column intersect.

Here’s the catch: Alice and Bob don’t know which row or column the other has been asked to fill out. “It’s a game that would be trivial for the two players if they could communicate,” said Richard Cleve, who studies quantum computing at the University of Waterloo. “But the fact that Alice doesn’t know what question Bob was asked and vice versa means it’s a little tricky.”

[Graphic: the magic square game. Credit: Lucy Reading-Ikkanda/Quanta Magazine]

In the magic square game, and other games like it, there doesn’t seem to be a way for the players to win 100 percent of the time. And indeed, in a world completely explained by classical physics, about 89 percent (8 wins out of every 9 rounds, on average) is the best Alice and Bob can do.
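
One way to verify that ceiling is to brute-force every deterministic classical strategy; shared randomness can do no better than the best deterministic choice. The short Python sketch below, which assumes the rules exactly as described above, finds that the best such strategy wins 8 of the 9 possible question pairs.

    from itertools import product

    # Alice's legal answers: rows of three 0/1 entries whose sum is odd.
    odd_rows = [r for r in product([0, 1], repeat=3) if sum(r) % 2 == 1]
    # Bob's legal answers: columns of three 0/1 entries whose sum is even.
    even_cols = [c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0]

    best = 0
    # A deterministic strategy fixes one answer for each of the three possible questions.
    for alice in product(odd_rows, repeat=3):      # alice[r] = Alice's filling of row r
        for bob in product(even_cols, repeat=3):   # bob[c] = Bob's filling of column c
            wins = sum(alice[r][c] == bob[c][r] for r in range(3) for c in range(3))
            best = max(best, wins)

    print(best, "of 9 question pairs")             # prints: 8 of 9 question pairs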

But quantum mechanics — specifically, the bizarre quantum phenomenon of “entanglement” — allows Alice and Bob to do better.

In quantum mechanics, the properties of fundamental particles like electrons don’t exist until the moment you measure them. Imagine, for example, an electron moving rapidly around the circumference of a circle. To find its position you perform a measurement. But prior to the measurement, the electron has no definite position at all. Instead, the electron is characterized by a mathematical formula expressing the likelihood that it’s in any given position.

When two particles are entangled, the complex probability amplitudes that describe their properties are intertwined. Imagine two electrons that were entangled such that if a measurement identifies the first electron in one position around the circle, the other must occupy a position directly across the circle from it. This relationship between the two electrons holds when they’re right next to each other and when they’re light-years apart: Even at that distance, if you measure the position of one electron, the position of the other becomes instantly determined, even though no signal has passed between them.

The phenomenon seems preposterous because there’s nothing about our non-quantum-scale experience to suggest such a thing is possible. Albert Einstein famously derided entanglement as “spooky action at a distance” and argued for years that it couldn’t be true.

To implement a quantum strategy in the magic square game, Alice and Bob each take one of a pair of entangled particles. To determine which numbers to write down, they measure properties of their particles — almost as if they were rolling correlated dice to guide their choice of answers.

What Bell calculated, and what many subsequent experiments have shown, is that by exploiting the strange quantum correlations found in entanglement, players of games like the magic square game can coordinate their answers with greater exactness and win the game more than 89 percent of the time.
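
A perfect quantum strategy is in fact known for this particular game. It is built on the well-known Mermin-Peres operator square, in which each cell of the grid holds a two-qubit measurement (a product of Pauli operators) rather than a fixed number. The sketch below is a minimal check using one common sign convention (the prose above states the constraints in an equivalent 0/1-parity form); it verifies only the key identities: every row of operators multiplies to the identity, while exactly one column multiplies to minus the identity. No assignment of fixed plus-or-minus-one numbers can reproduce that sign pattern, which is why classical players fall short.

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]])

    # One standard Mermin-Peres square: each entry is a two-qubit Pauli observable.
    grid = [
        [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
        [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
        [np.kron(X, Z),  np.kron(Z, X),  np.kron(Y, Y)],
    ]

    for r in range(3):
        prod = grid[r][0] @ grid[r][1] @ grid[r][2]
        print("row", r, "multiplies to +I:", np.allclose(prod, np.eye(4)))

    for c in range(3):
        prod = grid[0][c] @ grid[1][c] @ grid[2][c]
        if np.allclose(prod, np.eye(4)):
            print("column", c, "multiplies to +I")
        elif np.allclose(prod, -np.eye(4)):
            print("column", c, "multiplies to -I")   # happens for exactly one column

Roughly speaking, in the quantum strategy Alice measures the three observables of her assigned row on her half of a shared entangled state, Bob measures his assigned column on his half, and these identities are what force their outcomes to satisfy the parity rules and to agree on the shared cell.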

Bell came up with nonlocal games as a way to show that entanglement was real, and that our classical view of the world was incomplete — a conclusion that was very much up for grabs in Bell’s time. “Bell came up with this experiment you could do in a laboratory,” Cleve said. If you recorded higher-than-expected success rates in these experimental games, you’d know the players had to be exploiting some feature of the physical world not explained by classical physics.

What Slofstra and others have done since then is similar in strategy, but different in scope. They’ve shown that not only do Bell’s games imply the reality of entanglement, but some games have the power to imply a whole lot more — like whether there is any limit to the number of configurations the universe can take.

More Entanglement, Please

In his 2016 paper Slofstra proposed a kind of nonlocal game involving two players who provide answers to simple questions. To win, they have to give responses that are coordinated in a certain way, as in the magic square game.

Imagine, for example, a game that involves two players, Alice and Bob, who have to match socks from their respective sock drawers. Each player has to choose a single sock, without any knowledge of the sock the other has chosen. The players can’t coordinate ahead of time. If their sock choices form a matching pair, they win.

Given these uncertainties, it’s unclear which socks Alice and Bob should pick in the morning — at least in a classical world. But if they can employ entangled particles, they have a better chance of matching. By basing their color choices on the results of measurements of a single pair of entangled particles, they could coordinate that one attribute of their socks.

Yet they’d still be guessing blindly about all the other attributes — whether they were wool or cotton, ankle-height or crew. But with additional entangled particles they’d get access to more measurements. They could use one set to correlate their choice of material and another to correlate their choice of sock height. In the end, because they were able to coordinate their choices for many attributes, they’d be more likely to end up with a matching pair than if they’d only been able to coordinate for one.

“More complicated systems allow for more correlated measurements, which enable coordination at more complicated tasks,” Slofstra said.

The questions in Slofstra’s game aren’t really about socks. They involve simple equations built from expressions such as a + b + c and b + c + d. Alice can set each variable to either 1 or 0 (and the values have to remain consistent across the equations — b has to take the same value in every equation where it appears), and each expression has to sum to a specified number.

Bob is given just one of Alice’s variables, say b, and asked to assign a value to it: 0 or 1. The players win if they both assign the same value to whichever variable Bob is given.

If you and a friend were to play this game, there’s no way you could win it all the time. But with the aid of a pair of entangled particles, you could win more consistently, as in the sock game.
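
To get a feel for the structure of such a game without reproducing Slofstra’s actual system of equations, here is a toy version (the three equations below are hypothetical, chosen only because they have no global 0/1 solution). The sketch brute-forces every deterministic classical strategy: Alice answers each equation with some satisfying assignment, Bob answers each variable with a fixed value, and they win a round when their answers agree on the variable the referee checks.

    from itertools import product

    # A toy binary linear system with no global 0/1 solution (hypothetical, not Slofstra's system):
    #   a + b = 0,   b + c = 0,   a + c = 1   (all sums taken mod 2)
    equations = [((0, 1), 0), ((1, 2), 0), ((0, 2), 1)]  # (variable indices, required parity)

    def satisfying(eq):
        vars_, parity = eq
        return [vals for vals in product([0, 1], repeat=len(vars_)) if sum(vals) % 2 == parity]

    best = 0.0
    for alice in product(*[satisfying(eq) for eq in equations]):  # one satisfying answer per equation
        for bob in product([0, 1], repeat=3):                     # one fixed value per variable
            wins = trials = 0
            for i, (vars_, _) in enumerate(equations):
                for j, v in enumerate(vars_):                     # referee checks variable v of equation i
                    trials += 1
                    wins += (alice[i][j] == bob[v])
            best = max(best, wins / trials)

    print(best)  # 0.8333... : no classical strategy wins every round

Because the toy system has no consistent solution, the best classical teams top out at five wins in every six rounds. The toy is only meant to show the shape of the game; it is Slofstra’s much larger system that has the property that more and more entanglement keeps pushing the winning rate toward 100 percent.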

Slofstra was interested in understanding whether there is an amount of entanglement past which a team’s winning probability stops increasing. Perhaps players could achieve an optimal strategy if they shared five pairs of entangled particles, or 500. “We’d hoped you could say, ‘You need this much entanglement to play it optimally,’” Slofstra said. “That’s not what is true.”

He found that adding more pairs of entangled particles always increased the winning percentage. Moreover, if you could somehow exploit an infinite number of entangled particles, you would be able to play the game perfectly, winning 100 percent of the time. This clearly can’t be done in a game with socks — ultimately you’d run out of sock features to coordinate. But as Slofstra’s game has made clear, the universe can be far knottier than a sock drawer.

Is the Universe Infinite?

Slofstra’s result came as a shock. Eleven days after his paper appeared, the computer scientist Scott Aaronson wrote that Slofstra’s result touches “on a question of almost metaphysical significance: namely, what sorts of experimental evidence could possibly bear on whether the universe was discrete or continuous?”

Aaronson was referring to the different states the universe can take — where a state is a particular configuration of all the matter within it. Every physical system has its own state space, which is an index of all the different states it can take.

Researchers talk about a state space as having a certain number of dimensions, reflecting the number of independent characteristics you can adjust in the underlying system.

For example, even a sock drawer has a state space. Any sock might be described by its color, its length, its material, and how raggedy and worn it is. In this case, the dimension of the sock drawer’s state space is four.

A deep question about the physical world is whether there’s a limit to the size of the state space of the universe (or of any physical system). If there is a limit, it means that no matter how large and complicated your physical system is, there are still only so many ways it can be configured. “The question is whether physics allows there to be physical systems that have an infinite number of properties that are independent of each other that you could in principle observe,” said Thomas Vidick, a computer scientist at the California Institute of Technology.

The field of physics is undecided on this point. In fact, it maintains two contradictory views.

On the one hand, students in an introductory quantum mechanics course are taught to think in terms of infinite-dimensional state spaces. If they model the position of an electron moving around a circle, for instance, they’ll assign a probability to each point on the circle. Because there are infinitely many points, the state space describing the electron’s position will be infinite-dimensional.

“In order to describe the system we need a parameter for every possible position the electron can be in,” Yuen said. “There are infinitely many positions, so you need infinitely many parameters. Even in one-dimensional space [like the circle], the state space of the particle is infinite-dimensional.”
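
A toy way to see why the parameter count explodes: approximate the circle by N evenly spaced points and store one complex amplitude per point. Any finite N is only an approximation; describing every point on the circle would take infinitely many parameters. (The sketch below is an illustration, not a physical model, and the amplitudes are chosen arbitrarily.)

    import numpy as np

    # Discretize the electron's position on a circle into N points and store one amplitude each.
    for N in (4, 16, 256):
        theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
        amps = np.exp(1j * theta)            # arbitrary illustrative amplitudes
        amps /= np.linalg.norm(amps)         # normalize so the probabilities sum to 1
        probs = np.abs(amps) ** 2            # probability of finding the electron at each point
        print(N, "parameters, total probability", round(float(probs.sum()), 6))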

But perhaps the idea of infinite-dimensional state spaces is nonsense. In the 1970s, the physicists Jacob Bekenstein and Stephen Hawking calculated that a black hole is the most complicated physical system in the universe, yet even its state can be specified by a huge but finite number of parameters — approximately 10^69 bits of information per square meter of the black hole’s event horizon. This number — the “Bekenstein bound” — suggests that if a black hole doesn’t require an infinite-dimensional state space, then nothing does.
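
For a rough sense of scale (a back-of-the-envelope sketch, using a solar-mass black hole purely for illustration), the bound still allows an astronomically large, yet finite, number of bits:

    # Order-of-magnitude estimate: bits storable on the horizon of a solar-mass black hole,
    # at roughly 10^69 bits per square meter (the figure quoted above).
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30      # SI units
    r_s = 2 * G * M_sun / c**2                      # Schwarzschild radius, about 3 km
    area = 4 * 3.14159 * r_s**2                     # horizon area, about 1e8 square meters
    print(f"{area * 1e69:.1e} bits")                # about 1e77 bits: enormous, but finite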

These competing perspectives on state spaces reflect fundamentally different views about the nature of physical reality. If state spaces are truly finite-dimensional, this means that at the smallest scale, nature is pixelated. But if electrons require infinite-dimensional state spaces, physical reality is fundamentally continuous — an unbroken sheet even at the finest resolution.

So which is it? Physics hasn’t devised an answer, but games like Slofstra’s could, in principle, provide one. Slofstra’s work suggests a way to test the distinction: Play a game that can only be won 100 percent of the time if the universe allows for infinite-dimensional state spaces. If you observe players winning every time they play, it means they’re taking advantage of the kinds of correlations that can only be generated through measurements on a physical system with an infinite number of independently tunable parameters.

“He gives an experiment such that, if it can be realized, then we conclude the system that produced the statistics that were observed must have infinite degrees of freedom,” Vidick said.

There are barriers to actually carrying out Slofstra’s experiment. For one thing, it’s impossible to certify any laboratory result as occurring 100 percent of the time.

“In the real world you’re limited by your experimental setup,” Yuen said. “How do you distinguish between 100 percent and 99.9999 percent?”

But practical considerations aside, Slofstra has shown that there is, mathematically at least, a way of assessing a fundamental feature of the universe that might otherwise have seemed beyond our ken. When Bell first came up with nonlocal games, he hoped that they’d be useful for probing one of the most beguiling phenomena in the universe. Fifty years later, his invention has proved to have even more depth than that.


A New Spin on the Quantum Brain


A new theory explains how fragile quantum states may be able to exist for hours or even days in our warm, wet brain. Experiments should soon test the idea.
 

[Animation: davidope for Quanta Magazine]

The mere mention of “quantum consciousness” makes most physicists cringe, as the phrase seems to evoke the vague, insipid musings of a New Age guru. But if a new hypothesis proves to be correct, quantum effects might indeed play some role in human cognition. Matthew Fisher, a physicist at the University of California, Santa Barbara, raised eyebrows late last year when he published a paper in Annals of Physics proposing that the nuclear spins of phosphorus atoms could serve as rudimentary “qubits” in the brain — which would essentially enable the brain to function like a quantum computer.

As recently as 10 years ago, Fisher’s hypothesis would have been dismissed by many as nonsense. Physicists have been burned by this sort of thing before, most notably in 1989, when Roger Penrose proposed that mysterious protein structures called “microtubules” played a role in human consciousness by exploiting quantum effects. Few researchers believe such a hypothesis plausible. Patricia Churchland, a neurophilosopher at the University of California, San Diego, memorably opined that one might as well invoke “pixie dust in the synapses” to explain human cognition.

Fisher’s hypothesis faces the same daunting obstacle that has plagued microtubules: a phenomenon called quantum decoherence. To build an operating quantum computer, you need to connect qubits — quantum bits of information — in a process called entanglement. But entangled qubits exist in a fragile state. They must be carefully shielded from any noise in the surrounding environment. Just one photon bumping into your qubit would be enough to make the entire system “decohere,” destroying the entanglement and wiping out the quantum properties of the system. It’s challenging enough to do quantum processing in a carefully controlled laboratory environment, never mind the warm, wet, complicated mess that is human biology, where maintaining coherence for sufficiently long periods of time is well nigh impossible.

Over the past decade, however, growing evidence suggests that certain biological systems might employ quantum mechanics. In photosynthesis, for example, quantum effects help plants turn sunlight into fuel. Scientists have also proposed that migratory birds have a “quantum compass” enabling them to exploit Earth’s magnetic fields for navigation, or that the human sense of smell could be rooted in quantum mechanics.

Fisher’s notion of quantum processing in the brain broadly fits into this emerging field of quantum biology. Call it quantum neuroscience. He has developed a complicated hypothesis, incorporating nuclear and quantum physics, organic chemistry, neuroscience and biology. While his ideas have met with plenty of justifiable skepticism, some researchers are starting to pay attention. “Those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy,” wrote John Preskill, a physicist at the California Institute of Technology, after Fisher gave a talk there. “He may be on to something. At least he’s raising some very interesting questions.”

Senthil Todadri, a physicist at the Massachusetts Institute of Technology and Fisher’s longtime friend and colleague, is skeptical, but he thinks that Fisher has rephrased the central question — is quantum processing happening in the brain? — in such a way that it lays out a road map to test the hypothesis rigorously. “The general assumption has been that of course there is no quantum information processing that’s possible in the brain,” Todadri said. “He makes the case that there’s precisely one loophole. So the next step is to see if that loophole can be closed.” Indeed, Fisher has begun to bring together a team to do laboratory tests to answer this question once and for all.

Finding the Spin

Fisher belongs to something of a physics dynasty: His father, Michael E. Fisher, is a prominent physicist at the University of Maryland, College Park, whose work in statistical physics has garnered numerous honors and awards over the course of his career. His brother, Daniel Fisher, is an applied physicist at Stanford University who specializes in evolutionary dynamics. Matthew Fisher has followed in their footsteps, carving out a highly successful physics career. He shared the prestigious Oliver E. Buckley Prize in 2015 for his research on quantum phase transitions.

So what drove him to move away from mainstream physics and toward the controversial and notoriously messy interface of biology, chemistry, neuroscience and quantum physics? His own struggles with clinical depression.

Fisher vividly remembers that February 1986 day when he woke up feeling numb and jet-lagged, as if he hadn’t slept in a week. “I felt like I had been drugged,” he said. Extra sleep didn’t help. Adjusting his diet and exercise regime proved futile, and blood tests showed nothing amiss. But his condition persisted for two full years. “It felt like a migraine headache over my entire body every waking minute,” he said. It got so bad he contemplated suicide, although the birth of his first daughter gave him a reason to keep fighting through the fog of depression.

Eventually he found a psychiatrist who prescribed a tricyclic antidepressant, and within three weeks his mental state started to lift. “The metaphorical fog that had so enshrouded me that I couldn’t even see the sun — that cloud was a little less dense, and I saw there was a light behind it,” Fisher said. Within nine months he felt reborn, despite some significant side effects from the medication, including soaring blood pressure. He later switched to Prozac and has continuously monitored and tweaked his specific drug regimen ever since.

His experience convinced him that the drugs worked. But Fisher was surprised to discover that neuroscientists understand little about the precise mechanisms behind how they work. That aroused his curiosity, and given his expertise in quantum mechanics, he found himself pondering the possibility of quantum processing in the brain. Five years ago he threw himself into learning more about the subject, drawing on his own experience with antidepressants as a starting point.

Since nearly all psychiatric medications are complicated molecules, he focused on one of the simplest, lithium, which is just a single atom — a spherical cow, so to speak, that would be an easier model to study than Prozac, for instance. The analogy is particularly appropriate because a lithium atom is a sphere of electrons surrounding the nucleus, Fisher said. He zeroed in on the fact that the lithium available by prescription from your local pharmacy is mostly a common isotope called lithium-7. Would a different isotope, like the much rarer lithium-6, produce the same results? In theory it should, since the two isotopes are chemically identical. They differ only in the number of neutrons in the nucleus.

When Fisher searched the literature, he found that an experiment comparing the effects of lithium-6 and lithium-7 had been done. In 1986, scientists at Cornell University examined the effects of the two isotopes on the behavior of rats. Pregnant rats were separated into three groups: One group was given lithium-7, one group was given the isotope lithium-6, and the third served as the control group. Once the pups were born, the mother rats that received lithium-6 showed much stronger maternal behaviors, such as grooming, nursing and nest-building, than the rats in either the lithium-7 or control groups.

This floored Fisher. Not only should the chemistry of the two isotopes be the same, but the slight difference in atomic mass also largely washes out in the watery environment of the body. So what could account for the differences in behavior those researchers observed?

Fisher believes the secret might lie in the nuclear spin, which is a quantum property that affects how long each atom can remain coherent — that is, isolated from its environment. The lower the spin, the less the nucleus interacts with electric and magnetic fields, and the less quickly it decoheres.

Because lithium-7 and lithium-6 have different numbers of neutrons, they also have different spins. As a result, lithium-7 decoheres too quickly for the purposes of quantum cognition, while lithium-6 can remain entangled longer.

Fisher had found two substances, alike in all important respects save for quantum spin, and found that they could have very different effects on behavior. For Fisher, this was a tantalizing hint that quantum processes might indeed play a functional role in cognitive processing.

 

Quantum Protection Scheme

That said, going from an intriguing hypothesis to actually demonstrating that quantum processing plays a role in the brain is a daunting challenge. The brain would need some mechanism for storing quantum information in qubits for sufficiently long times. There must be a mechanism for entangling multiple qubits, and that entanglement must then have some chemically feasible means of influencing how neurons fire. There must also be some means of transporting quantum information stored in the qubits throughout the brain.

This is a tall order. Over the course of his five-year quest, Fisher has identified just one credible candidate for storing quantum information in the brain: phosphorus, the only common biological element other than hydrogen with a nuclear spin of one-half, a low value that makes longer coherence times possible. Phosphorus can’t make a stable qubit on its own, but its coherence time can be extended further, according to Fisher, if you bind phosphorus with calcium ions to form clusters.

In 1975, Aaron Posner, a Cornell University scientist, noticed an odd clustering of calcium and phosphorus atoms in his X-rays of bone. He made drawings of the structure of those clusters: nine calcium atoms and six phosphorus atoms, later called “Posner molecules” in his honor. The clusters popped up again in the 2000s, when scientists simulating bone growth in artificial fluid noticed them floating in the fluid. Subsequent experiments found evidence of the clusters in the body. Fisher thinks that Posner molecules could serve as a natural qubit in the brain as well.

That’s the big picture scenario, but the devil is in the details that Fisher has spent the past few years hammering out. The process starts in the cell with a chemical compound called pyrophosphate. It is made of two phosphates bonded together — each composed of a phosphorus atom surrounded by multiple oxygen atoms with zero spin. The interaction between the spins of the phosphates causes them to become entangled. They can pair up in four different ways: Three of the configurations add up to a total spin of one (a “triplet” state that is only weakly entangled), but the fourth possibility produces a zero spin, or “singlet” state of maximum entanglement, which is crucial for quantum computing.
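
The difference between the singlet and the triplet components can be made concrete with a few lines of linear algebra. The sketch below is a generic two-spin illustration, not a model of the actual biochemistry: it builds the singlet state and one of the triplet states, then computes how much entanglement each carries, measured as the entropy of one spin after the other is ignored.

    import numpy as np

    # Two spin-1/2 particles; basis ordering |00>, |01>, |10>, |11>.
    ket = lambda bits: np.eye(4)[int(bits, 2)]

    singlet = (ket("01") - ket("10")) / np.sqrt(2)   # total spin 0, the "singlet"
    triplet_up = ket("00")                           # one of the total-spin-1 ("triplet") states

    def entanglement_entropy(state):
        """Von Neumann entropy (in bits) of one spin after tracing out the other."""
        rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
        rho_a = np.einsum('ijkj->ik', rho)           # partial trace over the second spin
        evals = np.linalg.eigvalsh(rho_a)
        evals = evals[evals > 1e-12]
        return float(-(evals * np.log2(evals)).sum() + 0.0)

    print(entanglement_entropy(singlet))     # 1.0 -> maximally entangled
    print(entanglement_entropy(triplet_up))  # 0.0 -> no entanglement at all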

Next, enzymes break apart the entangled phosphates into two free phosphate ions. Crucially, these remain entangled even as they move apart. This process happens much more quickly, Fisher argues, with the singlet state. These ions can then combine in turn with calcium ions and oxygen atoms to become Posner molecules. Neither the calcium nor the oxygen atoms have a nuclear spin, preserving the one-half total spin crucial for lengthening coherence times. So those clusters protect the entangled pairs from outside interference so that they can maintain coherence for much longer periods of time — Fisher roughly estimates it might last for hours, days or even weeks.

In this way, the entanglement can be distributed over fairly long distances in the brain, influencing the release of neurotransmitters and the firing of synapses between neurons — spooky action at work in the brain.

Testing the Theory

Researchers who work in quantum biology are cautiously intrigued by Fisher’s proposal. Alexandra Olaya-Castro, a physicist at University College London who has worked on quantum photosynthesis, calls it “a well-thought hypothesis. It doesn’t give answers, it opens questions that might then lead to how we could test particular steps in the hypothesis.”

University of Oxford chemist Peter Hore, who investigates whether migratory birds’ navigational systems make use of quantum effects, concurs. “Here’s a theoretical physicist who is proposing specific molecules, specific mechanics, all the way through to how this could affect brain activity,” he said. “That opens up the possibility of experimental testing.”

Experimental testing is precisely what Fisher is now trying to do. He just spent a sabbatical at Stanford University working with researchers there to replicate the 1986 study with pregnant rats. He acknowledged the preliminary results were disappointing, in that the data didn’t provide much information, but thinks if it’s repeated with a protocol closer to the original 1986 experiment, the results might be more conclusive.

Fisher has applied for funding to conduct further in-depth quantum chemistry experiments. He has cobbled together a small group of scientists from various disciplines at UCSB and the University of California, San Francisco, as collaborators. First and foremost, he would like to investigate whether calcium phosphate really does form stable Posner molecules, and whether the phosphorus nuclear spins of these molecules can be entangled for sufficiently long periods of time.

Even Hore and Olaya-Castro are skeptical of the latter, particularly Fisher’s rough estimate that the coherence could last a day or more. “I think it’s very unlikely, to be honest,” Olaya-Castro said. “The longest time scale relevant for the biochemical activity that’s happening here is the scale of seconds, and that’s too long.” (Neurons can store information for microseconds.) Hore calls the prospect “remote,” pegging the limit at one second at best. “That doesn’t invalidate the whole idea, but I think he would need a different molecule to get long coherence times,” he said. “I don’t think the Posner molecule is it. But I’m looking forward to hearing how it goes.”

Others see no need to invoke quantum processing to explain brain function. “The evidence is building up that we can explain everything interesting about the mind in terms of interactions of neurons,” said Paul Thagard, a neurophilosopher at the University of Waterloo in Ontario, Canada, to New Scientist. (Thagard declined our request to comment further.)

Plenty of other aspects of Fisher’s hypothesis also require deeper examination, and he hopes to be able to conduct the experiments to do so. Is the Posner molecule’s structure symmetrical? And how isolated are the nuclear spins?

Most important, what if all those experiments ultimately prove his hypothesis wrong? It might be time to give up on the notion of quantum cognition altogether. “I believe that if phosphorus nuclear spin is not being used for quantum processing, then quantum mechanics is not operative in longtime scales in cognition,” Fisher said. “Ruling that out is important scientifically. It would be good for science to know.”


Quantum Monism Could Save the Soul of Physics


The multiverse may be an artifact of a deeper reality that is comprehensible and unique


“The most incomprehensible thing about the universe is that it is comprehensible,” Albert Einstein famously said. These days, however, it is far from a matter of consensus that the universe is comprehensible, or even that it is unique. Fundamental physics is facing a crisis, tied to two frequently invoked ideas summarized tellingly by the buzzwords “multiverse” and “uglyverse.”

Multiverse proponents advocate the idea that there may exist innumerable other universes, some of them with totally different physics and numbers of spatial dimensions; and that you, I and everything else may exist in countless copies. “The multiverse may be the most dangerous idea in physics,” argues the South African cosmologist George Ellis.

Ever since the early days of science, an unlikely coincidence has prompted an urge to explain it, a motivation to search for the hidden reason behind it. One modern example: the laws of physics appear to be finely tuned to permit the existence of intelligent beings who can discover those laws — a coincidence that demands explanation.

With the advent of the multiverse, this has changed: As unlikely as a coincidence may appear, in the zillions of universes that compose the multiverse, it will exist somewhere. And if the coincidence seems to favor the emergence of complex structures, life or consciousness, we shouldn’t even be surprised to find ourselves in a universe that allows us to exist in the first place. But this “anthropic reasoning” in turn implies that we can’t predict anything anymore. There is no obvious guiding principle for the CERN physicists searching for new particles. And there is no fundamental law to be discovered behind the accidental properties of the universe.

Quite different but no less dangerous is the other challenge — the “uglyverse”: According to the theoretical physicist Sabine Hossenfelder, modern physics has been led astray by its bias for “beauty,” giving rise to mathematically elegant, speculative fantasies without any contact with experiment. Physics has been “lost in math,” she argues. But then, what physicists call “beauty” are structures and symmetries. If we can’t rely on such concepts anymore, the difference between comprehension and a mere fit to experimental data will be blurred.

Both challenges have some justification. “Why should the laws of nature care what I find beautiful?” Hossenfelder rightly asks, and the answer is: They shouldn’t. Of course, nature could be complicated, messy and incomprehensible—if it were classical. But nature isn’t. Nature is quantum mechanical. Classical physics is the science of our daily life, where objects are separable, individual things: the condition of your car, for example, is not related to the color of your wife’s dress. Quantum mechanics is different. Things that were once in causal contact remain correlated, in what Einstein described as “spooky action at a distance.” Such correlations constitute structure, and structure is beauty.

In contrast, the multiverse appears difficult to deny. Quantum mechanics in particular seems to be enamored with it. Firing individual electrons at a screen with two slits results in an interference pattern on a detector behind the screen, as if each electron had gone through both slits at once.

Quantum physics is the science behind nuclear explosions, smartphones and particle collisions—and it is infamous for its weirdness, such as Schrödinger’s cat existing in a limbo of being simultaneously dead and alive. In quantum mechanics, different realities (such as “particle here” and “particle there,” or “cat alive” and “cat dead”) can be superimposed, like waves on the surface of a lake. The particle can be in a “half here and half there” state. This is called a “superposition,” and for particles or waves it gives rise to interference patterns.
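
The interference claim is easy to reproduce numerically. In the sketch below (a toy calculation with made-up units that ignores the single-slit diffraction envelope), the amplitudes from the two slits are added and then squared, producing fringes; adding the two intensities directly, as a classical either-or picture would suggest, gives a flat pattern.

    import numpy as np

    wavelength, d, L = 1.0, 5.0, 1000.0          # made-up units: slit spacing d, screen distance L
    x = np.linspace(-200, 200, 9)                # a few detector positions on the screen
    r1 = np.sqrt(L**2 + (x - d / 2)**2)          # path length from slit 1
    r2 = np.sqrt(L**2 + (x + d / 2)**2)          # path length from slit 2
    amp = np.exp(2j * np.pi * r1 / wavelength) + np.exp(2j * np.pi * r2 / wavelength)
    quantum = np.abs(amp)**2                     # amplitudes add first: fringes between 0 and 4
    classical = np.full_like(x, 2.0)             # intensities add: each slit contributes 1, no fringes
    print(np.round(quantum, 2))
    print(classical)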

Originally devised to describe the microscopic world, quantum mechanics in recent years has been shown to govern increasingly large objects—if they are sufficiently isolated from their environment. Somehow, however, our daily life seems to be protected from experiencing too much quantum weirdness: Nobody has ever seen an undead cat, and whenever you measure the position of a particle you get a definite result.

A straightforward interpretation assumes that all possible options are realized, albeit in different, parallel realities or “Everett branches”—named after Hugh Everett, who first advocated this view, now known as the “many-worlds interpretation” of quantum mechanics. Everett’s “many worlds” are in fact one example of a multiverse—one out of four, if you follow Max Tegmark’s Scientific American feature from May 2003. Two of the others are not that interesting: one is not really a multiverse but rather different regions in our own universe, and another is based on the highly speculative idea that matter is nothing but math. The remaining multiverse is the “string theory landscape,” to which we will return later.

By appealing to quantum mechanics in order to justify the beauty of physics, it seems that we sacrificed the uniqueness of the universe. But this conclusion results from a superficial consideration. What is typically overlooked in this picture is that Everett’s multiverse is not fundamental. It is only apparent or “emergent,” as philosopher David Wallace at the University of Southern California insists.

To appreciate this point one needs to understand the principle behind both quantum measurements and “spooky action at a distance.” Instrumental for both phenomena is a concept known as “entanglement,” pointed out in 1935 by Einstein, Boris Podolsky and Nathan Rosen: In quantum mechanics, a system of two entangled spins adding up to zero can be described as a superposition of pairs of spins pointing in opposite directions, while the direction of each individual spin remains completely undetermined. Entanglement is nature’s way of integrating parts into a whole; individual properties of constituents cease to exist for the benefit of a strongly correlated total system.

Whenever a quantum system is measured or coupled to its environment, entanglement plays a crucial role: Quantum system, observer and the rest of the universe become interwoven with each other. From the perspective of the local observer, information is dispersed into the unknown environment and a process called “decoherence”—first described by H. Dieter Zeh in 1970—sets in. Decoherence is the agent of classicality: It describes the loss of quantum properties when a quantum system interacts with its surroundings. Decoherence acts as if it opened a zipper between quantum physics’ parallel realities. From the observer’s perspective, the universe and she herself seem to “split” into separate Everett branches. The observer sees a live cat or a dead cat but nothing in between. The world looks classical to her, while from a global perspective it is still quantum mechanical. In fact, in this view the entire universe is a quantum object.

This is where “quantum monism,” as championed by the Rutgers University philosopher Jonathan Schaffer, enters the stage. Schaffer has mused over the question of what the universe is made of. According to quantum monism, the fundamental layer of reality is not made of particles or strings but of the universe itself—understood not as the sum of the things making it up but rather as a single, entangled quantum state.

Similar thoughts have been expressed earlier, for example by the physicist and philosopher Carl Friedrich von Weizsäcker: Taking quantum mechanics seriously predicts a unique, single quantum reality underlying the multiverse. The homogeneity and the tiny temperature fluctuations of the cosmic microwave background, which indicate that our observable universe can be traced back to a single quantum state, usually identified with the quantum field that fuels primordial inflation, support this view.

Moreover, this conclusion extends to other multiverse concepts such as different laws of physics in the various valleys of the “string theory landscape” or other “baby universes” popping up in eternal cosmological inflation. Since entanglement is universal, it doesn’t stop at the boundary of our cosmic patch. Whatever multiverse you have, when you adopt quantum monism they are all part of an integrated whole: There always is a more fundamental layer of reality underlying the many universes within the multiverse, and that layer is unique.

Both quantum monism and Everett’s many worlds are predictions of quantum mechanics taken seriously. What distinguishes these views is only the perspective: What looks like “many worlds” from the perspective of a local observer is indeed a single, unique universe from a global perspective (such as that of someone who would be able to look from outside onto the entire universe).

In other words: Many worlds is what quantum monism looks like to an observer who has only limited information about the universe. In fact, Everett’s original motivation was to develop a quantum description of the entire universe in terms of a “universal wave function.” It is as if you were looking out through a muntin window: Nature looks divided into separate pieces, but this is an artifact of your perspective.

Both monism and many worlds can be avoided, but only when one either changes the formalism of quantum mechanics—typically in ways that are in conflict with Einstein’s theory of special relativity—or if one understands quantum mechanics not as a theory about nature but as a theory about knowledge: a humanities concept rather than science.

As it stands, quantum monism should be considered as a key concept in modern physics: It explains why “beauty,” understood as structure, correlation and symmetry among apparently independent realms of nature, isn’t an “ill-conceived aesthetic ideal” but a consequence of nature descending from a single quantum state. In addition, quantum monism also removes the thorn of the multiverse as it predicts correlations realized not only in a specific baby universe but in any single branch of the multiverse—such as the opposite directions of entangled spins in the Einstein-Podolsky-Rosen state.

Finally, quantum monism soothes the crisis in experimental fundamental physics, which relies on increasingly large colliders to study ever smaller constituents of nature, simply because the smallest constituents are not the fundamental layer of reality. Studying the foundations of quantum mechanics, new realms in quantum field theory or the largest structures in cosmology may turn out to be equally useful.

This doesn’t mean that every observed coincidence points to the foundations of physics or that any notion of beauty should be realized in nature—but it tells us we shouldn’t stop seeking. As such, quantum monism has the potential to save the soul of science: the conviction that there is a unique, comprehensible and fundamental reality.

Reality is an illusion: The scientific proof everything is energy and reality isn’t real


Quantum physicists are discovering facts about the world that we would never have thought possible.

The scientific breakthroughs that have taken place in the last few years are as significant to our understanding of reality as Copernicus’s model of the solar system.

The problem? Many of us simply do not understand quantum physics. And it all began roughly a hundred years ago, when physicists started challenging the assumption that the physical world we see around us is actually “real.”

Scientists decided that to prove that reality was not, in fact, simply an illusion, they had to discover the “point particle”, and this would be accomplished with innovations like the Large Hadron Collider.

This machine was initially built to smash particles into one another, and this is where they made the greatest discovery: the physical world is not as physical as we believe. Reality as we see it is an illusion. Instead, everything around us is just energy.

How Reality Is Just Energy

We think of the atom as an organized group of electrons zooming around a nucleus of protons and neutrons, but this picture is wrong.

The elementary particles that make up atoms have no known internal structure and no measurable size.

They have no height, length or width; in the equations of particle physics they behave as zero-dimensional points, more like events than tiny objects.

Electrons also do not have a single, fixed character—they behave as particles or as waves depending on how they are observed.

Before they are measured, they are not confined to a single location at a single moment; instead they occupy several possible locations at the same time.

Scientists also discovered what is known as “superposition,” in which particles other than electrons can likewise be shown to exist in multiple possible states at once.



What does all this mean?

It means that the more we discover about the subatomic world, the more we discover that we know nothing about the true nature of reality at all.

The Copenhagen Interpretation

Many scientists have adopted the Copenhagen Interpretation as their framework for understanding reality.

The Copenhagen Interpretation grew out of the founding school of quantum mechanics, and it holds that reality does not take definite form without an observer to observe it.

If reality is nothing more than energy (what gives us physicality if the smallest parts of us have no physical characteristics?), then that energy takes on definite form only when consciousness is observing it.

This may be difficult to understand.

Think of it this way: since a particle exists in several places at the same time, it must respond to an observation by “choosing” a single location, giving the observer something definite to observe.

A number of researchers in this field argue that reality exists only because human consciousness wills it to exist, by interacting with the energy that makes up the universe.



Understanding the Universe as Information

Another mind-blowing discovery in quantum physics is entanglement.

Entanglement occurs when a pair of particles interact in a way that links their properties, such as their spins, to each other.

What’s strange is that once these two particles have become entangled, their fates remain linked.

No matter how far apart they travel, a measurement of the spin of one particle is always matched by a correlated result for the spin of the other.
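
To make the idea of correlated spins concrete, here is a minimal sketch, an illustration of standard quantum mechanics rather than anything described in this article, that simulates measurements on a pair of spin-1/2 particles prepared in the entangled “singlet” state. Quantum theory predicts a correlation of -cos(a - b) between spin measurements made along directions a and b, so measuring both particles along the same axis always yields opposite results.

    # A minimal sketch (not from the article): measurement statistics for two
    # spin-1/2 particles in the entangled singlet state. Quantum mechanics
    # predicts the correlation E(a, b) = -cos(a - b) between spin measurements
    # along directions a and b.
    import numpy as np

    def singlet_correlation(angle_a, angle_b, shots=100_000, rng=np.random.default_rng(0)):
        """Sample +1/-1 outcomes for both particles; return the mean product."""
        delta = angle_a - angle_b
        p_same = np.sin(delta / 2) ** 2          # P(outcomes agree) for the singlet
        same = rng.random(shots) < p_same
        return np.where(same, 1.0, -1.0).mean()  # +1 if they agree, -1 if they differ

    print(singlet_correlation(0.0, 0.0))         # -1.0: same axis, perfectly opposite
    print(singlet_correlation(0.0, np.pi / 2))   # ~0.0: perpendicular axes, uncorrelated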

Researchers have reported similar long-distance correlations in living cells. In one famous experiment, researchers grew algae cells in a petri dish. They then separated these cells into two halves, taking one half to another laboratory.

What they found was that no matter how much they separated the two dishes, a low-voltage current applied on one dish would always affect the cells in the other dish in the exact same way at the exact same moment.

How is this possible?

Understanding this requires shifting the way we think of the universe. We can no longer think of the universe as a physical realm in which the things we observe and sense are all that exists.

Instead, as famous physicist Sir Roger Penrose theorized, we must envision the universe as nothing but information.

We must believe that the physical universe is just a product of an abstract universe, in which we are all connected in an unobservable way.

Information is simply embedded in the constructs of the physical universe, but it is transmitted to our physical states from an abstract realm of the kind first theorized by the Greek philosopher Plato.

As Erwin Schrödinger famously stated, “What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. Particles are just appearances.”

Simply put, everything is nothing but energy.

Coping With A Different Reality

There are certain questions and realizations you must come to terms with after learning this true state of reality. You could obsess over the implications indefinitely, but here are a few to start you off:

  • You have never touched anything, and you never will. The electrons in your atoms repel the electrons of every other physical object, making it impossible for your atoms to ever come into direct contact with other matter.
  • If we are not touching anything, then what is it that we feel when we “touch”?
  • How is the world physical when the building blocks that make it have no dimensions?
  • How is anything real, and what does real mean?
  • Is reality determined by physicality?

‘Quantum Atmospheres’ May Reveal Secrets of Matter


A new theory proposes that the quantum properties of an object extend into an “atmosphere” that surrounds the material.
 

Over the past several years, some materials have proved to be a playground for physicists. These materials aren’t made of anything special — just normal particles such as protons, neutrons and electrons. But they are more than the sum of their parts. These materials boast a range of remarkable properties and phenomena and have even led physicists to new phases of matter — beyond the solid, gas and liquid phases we’re most familiar with.

One class of material that especially excites physicists is the topological insulator — and, more broadly, topological phases, whose theoretical foundations earned their discoverers a Nobel Prize in 2016. On the surface of a topological insulator, electrons flow smoothly, while on the inside, electrons are immobile. Its surface is thus a metal-like conductor, yet its interior is a ceramic-like insulator. Topological insulators have drawn attention for their unusual physics as well as for their potential use in quantum computers and so-called spintronic devices, which utilize electrons’ spins as well as their charge.

But such exotic behaviors aren’t always obvious. “You can’t just tell easily by looking at the material in conventional ways whether it has these kinds of properties,” said Frank Wilczek, a physicist at the Massachusetts Institute of Technology and winner of the 2004 Nobel Prize in Physics.

This means a host of seemingly ordinary materials might harbor hidden — yet unusual and possibly useful — properties. In a paper recently posted online, Wilczek and Qing-Dong Jiang, a physicist at Stockholm University, propose a new way to discover such properties: by probing a thin aura that surrounds the material, something they’ve dubbed a quantum atmosphere.

Some of a material’s fundamental quantum properties could manifest in this atmosphere, which physicists could then measure. If confirmed in experiments, not only would this phenomenon be one of only a few macroscopic consequences of quantum mechanics, Wilczek said, but it could also be a powerful tool for exploring an array of new materials.

“Had you asked me if something like this could occur, I would’ve said that seems like a reasonable idea,” said Taylor Hughes, a condensed matter theorist at the University of Illinois, Urbana-Champaign. But, he added, “I would imagine the effect to be very small.” In the new analysis, however, Jiang and Wilczek calculated that, in principle, a quantum atmospheric effect would be well within the range of detectability.

Not only that, Wilczek said, but detecting such effects may be achievable sooner rather than later.

A Zone of Influence

A quantum atmosphere, Wilczek explained, is a thin zone of influence around a material. According to quantum mechanics, a vacuum isn’t completely empty; rather, it’s filled with quantum fluctuations. For example, if you take two uncharged plates and bring them together in a vacuum, only quantum fluctuations with wavelengths shorter than the distance between the plates can squeeze between them. Outside the plates, however, fluctuations of all wavelengths can fit. The energy outside will be greater than inside, resulting in a net force that pushes the plates together. Called the Casimir effect, this phenomenon is similar to the influence from a quantum atmosphere, Wilczek said.
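
For a sense of scale, here is a back-of-the-envelope sketch, using the textbook formula for ideal plates rather than anything from Jiang and Wilczek’s paper: the Casimir pressure between two perfectly conducting, parallel, uncharged plates a distance d apart is pi^2 * hbar * c / (240 * d^4), which grows steeply as the gap shrinks.

    # A rough illustration (textbook formula, not from the paper): the Casimir
    # pressure between two ideal parallel plates separated by a distance d.
    import math

    hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
    c = 2.997_924_58e8         # speed of light, m/s

    def casimir_pressure(d):
        """Attractive pressure (N/m^2) between ideal plates a distance d (m) apart."""
        return math.pi**2 * hbar * c / (240 * d**4)

    for d_nm in (1000, 100, 10):
        d = d_nm * 1e-9
        print(f"gap = {d_nm:5d} nm -> pressure ~ {casimir_pressure(d):.3e} N/m^2")
    # The pressure scales as 1/d^4, which is why the effect only matters at
    # very small separations.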

Just as a plate feels a stronger force as it nears another one, a needlelike probe would feel an effect from the quantum atmosphere as it approaches a material. “It’s just like any atmosphere,” Wilczek said. “You get close to it, and you start to see its influence.” And the nature of that influence depends on the quantum properties of the material itself.

Antimony can behave as a topological insulator — a material that acts as an insulator everywhere except across its surface.

Those properties can be extraordinary. Certain materials act like their own universes with their own physical laws, as if comprising what’s recently been called a materials multiverse. “A very important idea in modern condensed matter physics is that we’re in possession of these materials — say, a topological insulator — which have different sets of rules inside,” said Peter Armitage, a condensed matter physicist at Johns Hopkins University.

Some materials, for example, harbor objects that act as magnetic monopoles — point-like magnets with a north pole but no south pole. Physicists have also detected so-called quasiparticles with fractional electric charge and quasiparticles that act as their own antimatter, with the ability to annihilate themselves.

If similarly exotic properties exist in other materials, they could reveal themselves in quantum atmospheres. You could, in principle, discover all sorts of new properties simply by probing the atmospheres of materials, Wilczek said.

To demonstrate their idea, Jiang and Wilczek focused on an unorthodox set of rules called axion electrodynamics, which could give rise to unique properties. Wilczek came up with the theory in 1987 to describe how a hypothetical particle called an axion would interact with electricity and magnetism. (Physicists had previously proposed the axion as a solution to one of physics’ biggest unsolved questions: why interactions involving the strong force are the same even when particles are swapped with their antiparticles and reflected in a mirror, preserving so-called charge and parity symmetry.) To this day, no one has found any evidence that axions exist, even though they’ve recently garnered renewed interest as a candidate for dark matter.

While these rules don’t seem to be valid in most of the universe, it turns out they can come into play inside a material such as a topological insulator. “The way electromagnetic fields interact with these new kinds of matter called topological insulators is basically the same way they would interact with a collection of axions,” Wilczek said.

Diamond Defects

If a material such as a topological insulator obeys axion electrodynamics, its quantum atmosphere could induce a telltale effect on anything that crosses into the atmosphere. Jiang and Wilczek calculated that such an effect would be similar to that of a magnetic field. In particular, they found that if you were to place some system of atoms or molecules in the atmosphere, their quantum energy levels would be altered. A researcher could then measure these altered levels using standard laboratory techniques. “It’s kind of an unconventional but a quite interesting idea,” said Armitage.

One such potential system is a diamond probe imbued with features called nitrogen-vacancy (NV) centers. An NV center is a type of defect in a diamond’s crystal structure where some of the diamond’s carbon atoms are swapped out for nitrogen atoms, and where the spot adjacent to the nitrogen is empty. The quantum state of this system is highly sensitive, allowing NV centers to sniff out even very weak magnetic fields. This property makes them powerful sensors that can be used for a variety of applications in geology and biology.
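
As a rough illustration of why NV centers make such sensitive magnetometers, using generic textbook numbers rather than figures from the paper: an applied magnetic field shifts the defect’s spin energy levels through the Zeeman effect by roughly g times the Bohr magneton times the field strength, which works out to about 28 GHz of frequency shift per tesla.

    # A rough sketch (generic values, not from the paper): the Zeeman shift of
    # an electron-like spin, Delta E ~ g * mu_B * B, expressed as a frequency.
    g = 2.0                      # electron g-factor (approximately)
    mu_B = 9.274_010_08e-24      # Bohr magneton, J/T
    h = 6.626_070_15e-34         # Planck constant, J*s

    def zeeman_shift_hz(b_field_tesla):
        """Approximate frequency shift (Hz) of a spin level in a field B (T)."""
        return g * mu_B * b_field_tesla / h

    for b in (1e-6, 1e-3, 1.0):
        print(f"B = {b:8.1e} T -> shift ~ {zeeman_shift_hz(b):.3e} Hz")
    # Even a microtesla-scale field shifts the levels by ~28 kHz, which standard
    # magnetic-resonance techniques resolve easily.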

“This is a nice proof of principle,” Hughes said. One application, he added, could be to map out a material’s properties. By passing an NV center across a material like a topological insulator, you can determine how its properties may vary along the surface.

Jiang and Wilczek’s paper, which they have submitted to Physical Review Letters, details only the quantum atmospheric influence derived from axion electrodynamics. To determine how other kinds of properties affect an atmosphere, Wilczek said, you would have to do different calculations.

Breaking Symmetries

Fundamentally, the properties that quantum atmospheres unmask are symmetries. Different phases of matter, and the properties unique to a phase, can be thought of in terms of symmetry. In a solid crystal, for example, atoms are arranged in a lattice whose pattern looks the same when it is shifted or rotated in certain ways. When you apply heat, however, the bonds break, the lattice structure collapses, and the material — now a liquid with markedly different properties — loses its symmetry.

Materials can break other fundamental symmetries such as the time-reversal symmetry that most laws of physics obey. Or phenomena may be different when looked at in the mirror, a violation of parity symmetry.

Whether these symmetries are broken in a material could signify previously unknown phase transitions and potentially exotic properties. A material with certain broken symmetries would induce the same violations in a probe that’s inside its quantum atmosphere, Wilczek said. For example, in a material that adheres to axion electrodynamics, time and parity symmetry are each broken, but the combination of the two is not. By probing a material’s atmosphere, you could learn whether it follows this symmetry-breaking pattern and to what extent — and thus what bizarre behaviors it may have, he said.

“Some materials will be secretly breaking symmetries that we didn’t know about and that we didn’t suspect,” he said. “They seem very innocent, but somehow they’ve been hiding in secret.”

Wilczek said he’s already talked with experimentalists who are interested in testing the idea. What’s more, he said, experiments should be readily feasible, hopefully coming to fruition not in years, but in only weeks and months.

If everything works out, then the term “quantum atmosphere” may find a permanent spot in the physics lexicon. Wilczek has previously coined terms like axions, anyons (quasiparticles that may be useful for quantum computing) and time crystals (structures that move in regular and repeating patterns without using energy). He has a good track record of coming up with names that stick, Armitage said. “‘Quantum atmospheres’ is another good one.”

Should Quantum Anomalies Make Us Rethink Reality?


Inexplicable lab results may be telling us we’re on the cusp of a new scientific paradigm


Every generation tends to believe that its views on the nature of reality are either true or quite close to the truth. We are no exception to this: although we know that the ideas of earlier generations were each time supplanted by those of a later one, we still believe that this time we got it right. Our ancestors were naïve and superstitious, but we are objective—or so we tell ourselves. We know that matter/energy, outside and independent of mind, is the fundamental stuff of nature, everything else being derived from it—or do we?

In fact, studies have shown that there is an intimate relationship between the world we perceive and the conceptual categories encoded in the language we speak. We don’t perceive a purely objective world out there, but one subliminally pre-partitioned and pre-interpreted according to culture-bound categories. For instance, “color words in a given language shape human perception of color.” A brain imaging study suggests that language processing areas are directly involved even in the simplest discriminations of basic colors. Moreover, this kind of “categorical perception is a phenomenon that has been reported not only for color, but for other perceptual continua, such as phonemes, musical tones and facial expressions.” In an important sense, we see what our unexamined cultural categories teach us to see, which may help explain why every generation is so confident in their own worldview. Allow me to elaborate.

The conceptual-ladenness of perception isn’t a new insight. Back in 1957, philosopher Owen Barfield wrote:

“I do not perceive any thing with my sense-organs alone.… Thus, I may say, loosely, that I ‘hear a thrush singing.’ But in strict truth all that I ever merely ‘hear’—all that I ever hear simply by virtue of having ears—is sound. When I ‘hear a thrush singing,’ I am hearing … with all sorts of other things like mental habits, memory, imagination, feeling and … will.” (Saving the Appearances)

As argued by philosopher Thomas Kuhn in his book The Structure of Scientific Revolutions, science itself falls prey to this inherent subjectivity of perception. Defining a “paradigm” as an “implicit body of intertwined theoretical and methodological belief,” he wrote:

“something like a paradigm is prerequisite to perception itself. What a man sees depends both upon what he looks at and also upon what his previous visual-conceptual experience has taught him to see. In the absence of such training there can only be, in William James’s phrase, ‘a bloomin’ buzzin’ confusion.’”

Hence, because we perceive and experiment on things and events partly defined by an implicit paradigm, these things and events tend to confirm, by construction, the paradigm. No wonder then that we are so confident today that nature consists of arrangements of matter/energy outside and independent of mind.

Yet, as Kuhn pointed out, when enough “anomalies”—empirically undeniable observations that cannot be accommodated by the reigning belief system—accumulate over time and reach critical mass, paradigms change. We may be close to one such defining moment today, as an increasing body of evidence from quantum mechanics (QM) renders the current paradigm untenable.

Indeed, according to the current paradigm, the properties of an object should exist and have definite values even when the object is not being observed: the moon should exist and have whatever weight, shape, size and color it has even when nobody is looking at it. Moreover, a mere act of observation should not change the values of these properties. Operationally, all this is captured in the notion of “non-contextuality”: the outcome of an observation should not depend on the way other, separate but simultaneous observations are performed. After all, what I perceive when I look at the night sky should not depend on the way other people look at the night sky along with me, for the properties of the night sky uncovered by my observation should not depend on theirs.

The problem is that, according to QM, the outcome of an observation can depend on the way another, separate but simultaneous, observation is performed. This happens with so-called “quantum entanglement” and it contradicts the current paradigm in an important sense, as discussed above. Although Einstein argued in 1935 that the contradiction arose merely because QM is incomplete, John Bell proved mathematically, in 1964, that the predictions of QM regarding entanglement cannot be accounted for by Einstein’s alleged incompleteness.
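
To see what Bell’s argument quantifies, here is a minimal sketch of the CHSH version of the test; it is standard textbook material, not a calculation from this essay. Any local hidden-variable model obeys |S| <= 2 for a particular combination S of spin correlations, while quantum mechanics predicts values up to 2*sqrt(2), roughly 2.83, for entangled particles, and that is what experiments keep finding.

    # A minimal sketch (textbook CHSH test, illustration only): the quantum
    # correlation for the singlet state is E(a, b) = -cos(a - b). Local
    # hidden-variable models obey |S| <= 2; quantum mechanics reaches 2*sqrt(2).
    import numpy as np

    def E(a, b):
        """Singlet-state correlation for measurements along angles a and b."""
        return -np.cos(a - b)

    a, a_prime = 0.0, np.pi / 2            # Alice's two measurement settings
    b, b_prime = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

    S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
    print(abs(S))                          # ~2.828, beating the classical bound of 2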

So to salvage the current paradigm there is an important sense in which one has to reject the predictions of QM regarding entanglement. Yet, since Alain Aspect’s seminal experiments in 1981–82, these predictions have been repeatedly confirmed, with potential experimental loopholes closed one by one. 1998 was a particularly fruitful year, with two remarkable experiments performed in Switzerland and Austria. In 2011 and 2015, new experiments again challenged non-contextuality. Commenting on this, physicist Anton Zeilinger has been quoted as saying that “there is no sense in assuming that what we do not measure [that is, observe] about a system has [an independent] reality.” Finally, Dutch researchers successfully performed a test closing all remaining potential loopholes, which was considered by Nature the “toughest test yet.”

The only alternative left for those holding on to the current paradigm is to postulate some form of non-locality: nature must have—or so they speculate—observation-independent hidden properties, entirely missed by QM, which are “smeared out” across spacetime. It is this allegedly omnipresent, invisible but objective background that supposedly orchestrates entanglement from “behind the scenes.”

It turns out, however, that some predictions of QM are incompatible with non-contextuality even for a large and important class of non-local theories. Experimental results reported in 2007 and 2010 have confirmed these predictions. To reconcile these results with the current paradigm would require a profoundly counterintuitive redefinition of what we call “objectivity.” And since contemporary culture has come to associate objectivity with reality itself, the science press felt compelled to report on this by pronouncing, “Quantum physics says goodbye to reality.”

The tension between the anomalies and the current paradigm can only be tolerated by ignoring the anomalies. This has been possible so far because the anomalies are only observed in laboratories. Yet we know that they are there, for their existence has been confirmed beyond reasonable doubt. Therefore, when we believe that we see objects and events outside and independent of mind, we are wrong in at least some essential sense. A new paradigm is needed to accommodate and make sense of the anomalies; one wherein mind itself is understood to be the essence—cognitively but also physically—of what we perceive when we look at the world around ourselves.

A Strange Quantum Effect Could Give Rise to a Completely New Kind of Star


A near cousin to black holes.

We might have to add a brand new category of star to the textbooks: an advanced mathematical model has revealed that a certain ultracompact star configuration could in fact exist, one that scientists had previously thought impossible.


This model mixes the repulsive effect of quantum vacuum polarisation – the idea that a vacuum isn’t actually empty but is filled with quantum energy and particles – with the attractive principles of general relativity.

The calculations are the work of Raúl Carballo-Rubio from the International School for Advanced Studies in Italy, and describe a hypothesis where a massive star doesn’t follow the usual instructions laid down by astrophysics.

“The novelty in this analysis is that, for the first time, all these ingredients have been assembled together in a fully consistent model,” says Carballo-Rubio.

“Moreover, it has been shown that there exist new stellar configurations, and that these can be described in a surprisingly simple manner.”

Due to the push and pull of gigantic forces, massive stars collapse under their own weight when they run out of fuel to burn. They then either explode as supernovae and become neutron stars, or collapse completely into a black hole, depending on their mass.

There’s a particular mass threshold at which the dying star goes one way or another.
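
For orientation, here are the conventional ballpark thresholds, approximate textbook values rather than numbers from Carballo-Rubio’s model, which is precisely about how quantum effects might redraw this picture.

    # A rough sketch of the standard thresholds (approximate values; the exact
    # numbers depend on composition, rotation and the nuclear equation of state).
    CHANDRASEKHAR_LIMIT = 1.4   # solar masses: max mass a white dwarf can support
    TOV_LIMIT = 2.2             # solar masses: rough upper mass of a neutron star

    def remnant(core_mass_solar):
        """Classify the likely remnant of a collapsing stellar core."""
        if core_mass_solar <= CHANDRASEKHAR_LIMIT:
            return "white dwarf (electron degeneracy pressure holds)"
        if core_mass_solar <= TOV_LIMIT:
            return "neutron star (neutron degeneracy pressure holds)"
        return "black hole (no known pressure halts the collapse)"

    for m in (1.0, 1.8, 3.0):
        print(f"{m:.1f} solar-mass core -> {remnant(m)}")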

But what if extra quantum mechanical forces were at play? That’s the question Carballo-Rubio is asking, and he suggests the rules of quantum mechanics would create a different set of thresholds or equilibria at the end of a massive star’s life.

Thanks to quantum vacuum polarisation, we’d be left with something that would look like a black hole while behaving differently, according to the new model. These new types of stars have been dubbed “semiclassical relativistic stars” because they are the result of both classical and quantum physics.

One of the differences would be that the star would be horizonless – like another theoretical star made possible by quantum physics, the gravastar. There wouldn’t be the same ‘point of no return’ for light and matter as there is around a black hole.

The next step is to see if we can actually spot any of them – or rather spot any of the ripples they create through the rest of space. One possibility is that these strange types of stars wouldn’t exist for very long at all.

“It is not clear yet whether these configurations can be dynamically realised in astrophysical scenarios, or how long would they last if this is the case,” says Carballo-Rubio.

Interest in this field of astrophysics has been boosted by the progress scientists have been making in detecting gravitational waves, and it’s because of that work that it might be possible to find these variations on black holes.

The observatories and instruments coming online in the next few years will give scientists the chance to put this intriguing hypothesis to the test.

“If there are very dense and ultracompact stars in the Universe, similar to black holes but with no horizons, it should be possible to detect them in the next decades,” says Carballo-Rubio.

Does a Quantum Equation Govern Some of the Universe’s Large Structures?


A new paper uses the Schrödinger equation to describe debris disks around stars and black holes—and provides an object lesson about what “quantum” really means

This artist’s concept shows a swirling debris disk of gas and dust surrounding a young protostar.

Researchers who want to predict the behavior of systems governed by quantum mechanics—an electron in an atom, say, or a photon of light traveling through space—typically turn to the Schrödinger equation. Devised by Austrian physicist Erwin Schrödinger in 1925, it describes subatomic particles and how they may display wavelike properties such as interference. It contains the essence of all that appears strange and counterintuitive about the quantum world.

But it seems the Schrödinger equation is not confined to that realm. In a paper just published in Monthly Notices of the Royal Astronomical Society, planetary scientist Konstantin Batygin of the California Institute of Technology claims this equation can also be used to understand the emergence and behavior of self-gravitating astrophysical disks. That is, objects such as the rings of the worlds Saturn and Uranus or the halos of dust and gas that surround young stars and supply the raw material for the formation of a planetary system or even the accretion disks of debris spiraling into a black hole.

And yet there’s nothing “quantum” about these things at all. They could be anything from tiny dust grains to big chunks of rock the size of asteroids or planets. Nevertheless, Batygin says, the Schrödinger equation supplies a convenient way of calculating what shape such a disk will have, and how stable it will be against buckling or distorting. “This a fascinating approach, synthesizing very old techniques to make a brand-new analysis of a challenging problem,” says astrophysicist Duncan Forgan of the University of Saint Andrews in Scotland, who was not part of the research. “The Schrödinger equation has been so well studied for almost a century that this connection is clearly handy.”

From Classical to Quantum

This equation is so often regarded as the distilled essence of “quantumness” that it is easy to forget what it really represents. In some ways Schrödinger pulled it out of a hat when challenged to come up with a mathematical formula for French physicist Louis de Broglie’s hypothesis that quantum particles could behave like waves. Schrödinger drew on his deep knowledge of classical mechanics, and his equation in many ways resembles those used for ordinary waves. One difference is that in quantum mechanics the energies of “particle–waves” are quantized: confined to discrete values set by Planck’s constant h, first introduced by the German physicist Max Planck in 1900.

This relation of the Schrödinger equation to classical waves is already revealed in the way that a variant called the nonlinear Schrödinger equation is commonly used to describe other classical wave systems—for example in optics and even in ocean waves, where it provides a mathematical picture of unusually large and robust “rogue waves.”

But the normal “quantum” version—the linear Schrödinger equation—has not previously turned up in a classical context. Batygin says it does so here because the way he sets up the problem of self-gravitating disks creates a quantity that sets a particular “scale” in the problem, much as h does in quantum systems.

Loopy Physics

Whether around a young star or a supermassive black hole, the many mutually interacting objects in a self-gravitating debris disk are complicated to describe mathematically. But Batygin uses a simplified model in which the disk’s constituents are smeared and stretched into thin “wires” that loop in concentric ellipses right around the disk. Because the wires interact with one another through gravity, they can exchange orbital angular momentum between them, rather like the transfer of movement between the gear bearings and the axle of a bicycle.

This approach uses ideas developed in the 18th century by the mathematicians Pierre-Simon Laplace and Joseph-Louis Lagrange. Laplace was one of the first to study how a rotating clump of objects can collapse into a disklike shape. In 1796 he proposed our solar system formed from a great cloud of gas and dust spinning around the young sun.

Batygin and others had used this “wire” approximation before, but he decided to look at the extreme case in which the looped wires are made thinner and thinner until they merge into a continuous disk. In that limit he found the equation describing the system is the same as Schrödinger’s, with the disk itself being described by the analog of the wave function that defines the distribution of possible positions of a quantum particle. In effect, the shape of the disk is like the wave function of a quantum particle bouncing around in a cavity with walls at the disk’s inner and outer edges.
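
To see the analogy in miniature, here is a toy sketch, with units and constants set to 1 and no pretense of reproducing Batygin’s actual disk equations: the time-independent Schrödinger equation for a “particle in a box,” solved by finite differences, where the box walls stand in for the disk’s inner and outer edges and the discrete eigenvalues play the role of the disk’s vibrational modes.

    # A toy sketch of the analogy (assumptions mine, hbar = mass = 1): solve
    # -(1/2) * psi'' = E * psi with psi = 0 at both walls by finite differences.
    import numpy as np

    n, L = 500, 1.0                     # grid points, width of the "cavity"
    dx = L / (n + 1)
    off = np.ones(n - 1)
    laplacian = (np.diag(np.full(n, -2.0)) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
    H = -0.5 * laplacian                # the Hamiltonian of a particle in a box

    energies, modes = np.linalg.eigh(H)
    print(energies[:4])                                        # lowest few "mode" energies
    print([0.5 * (k * np.pi / L) ** 2 for k in (1, 2, 3, 4)])  # exact particle-in-a-box values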

The resulting disk has a series of vibrational “modes,” rather like resonances in a tuning fork, that might be excited by small disturbances—think of a planet-forming stellar disk nudged by a passing star or of a black hole accretion disk in which material is falling into the center unevenly. Batygin deduces the conditions under which a disk will warp in response or, conversely, will behave like a rigid body held fast by its own mutual gravity. This comes down to a matter of timescales, he says. If the orbiting objects exchange angular momentum on a timescale much shorter than the duration of the perturbation, the disk will remain rigid. “If on the other hand the self-interaction timescale is long compared with the perturbation timescale, the disk will warp,” he says.

Is “Quantumness” Really So Weird?

When he first saw the Schrödinger equation materialize out of his theoretical analysis, Batygin says he was stunned. “But in retrospect it almost seems obvious to me that it must emerge in this problem,” he adds.

What this means, though, is the Schrödinger equation can itself be derived from classical physics known since the 18th century. It doesn’t depend on “quantumness” at all—although it turns out to be applicable to that case.

That’s not as strange as it might seem. For one thing, science is full of examples of equations devised for one phenomenon turning out to apply to a totally different one, too. Equations concocted to describe a kind of chemical reaction have been applied to the modeling of crime, for example, and very recently a mathematical description of magnets was shown also to describe the fruiting patterns of trees in pistachio orchards.

But doesn’t quantum physics involve a rather uniquely odd sort of behavior? Not really. The Schrödinger equation does not so much describe what quantum particles are actually “doing,” rather it supplies a way of predicting what might be observed for systems governed by particular wavelike probability laws. In fact, other researchers have already shown the key phenomena of quantum theory emerge from a generalization of probability theory that could, too, have been in principle devised in the 18th century, before there was any inkling that tiny particles behave this way.

The advantage of his approach is its simplicity, Batygin notes. Instead of having to track all the movements of every particle in the disk using complicated computer models (so-called N-body simulations), the disk can be treated as a kind of smooth sheet that evolves over time and oscillates like a drumskin. That makes it, Batygin says, ideal for systems in which the central object is much more massive than the disk, such as protoplanetary disks and the rings of stars orbiting supermassive black holes. It will not work for galactic disks, however, like the spiral that forms our Milky Way.

But Ken Rice of The Royal Observatory in Scotland, who was not involved with the work, says that in the scenario in which the central object is much more massive than the disk, the dominant gravitational influence is the central object. “It’s then not entirely clear how including the disk self-gravity would influence the evolution,” he says. “My simple guess would be that it wouldn’t have much influence, but I might be wrong.” This suggests the chief application of Batygin’s formalism may not be to model a wide range of systems but rather to make models for a narrow range of systems far less computationally expensive than N-body simulations.

Astrophysicist Scott Tremaine of the Institute for Advanced Study in Princeton, N.J., also not part of the study, agrees these equations might be easier to solve than those that describe the self-gravitating rings more precisely. But he says this simplification comes at the cost of neglecting the long reach of gravitational forces, because in the Schrödinger version only interactions between adjacent “wire” rings are taken into account. “It’s a rather drastic simplification of the system that only works for certain cases,” he says, “and won’t provide new insights into these disks for experts.” But he thinks the approach could have useful pedagogical value, not least in showing that the Schrödinger equation “isn’t some magic result just for quantum mechanics, but describes a variety of physical systems.”

But Saint Andrews’s Forgan thinks Batygin’s approach could be particularly useful for modeling black hole accretion disks that are warped by companion stars. “There are a lot of interesting results about binary supermassive black holes with ‘torn’ disks that this may be applicable to,” he says.

Quantum Algorithms Struggle Against Old Foe: Clever Computers


The quest for “quantum supremacy” – unambiguous proof that a quantum computer does something faster than an ordinary computer – has paradoxically led to a boom in quasi-quantum classical algorithms.

A popular misconception is that the potential — and the limits — of quantum computing must come from hardware. In the digital age, we’ve gotten used to marking advances in clock speed and memory. Likewise, the 50-qubit quantum machines now coming online from the likes of Intel and IBM have inspired predictions that we are nearing “quantum supremacy” — a nebulous frontier where quantum computers begin to do things beyond the ability of classical machines.

But quantum supremacy is not a single, sweeping victory to be sought — a broad Rubicon to be crossed — but rather a drawn-out series of small duels. It will be established problem by problem, quantum algorithm versus classical algorithm. “With quantum computers, progress is not just about speed,” said Michael Bremner, a quantum theorist at the University of Technology Sydney. “It’s much more about the intricacy of the algorithms at play.”

And the goalposts are shifting. “When it comes to saying where the supremacy threshold is, it depends on how good the best classical algorithms are,” said John Preskill, a theoretical physicist at the California Institute of Technology. “As they get better, we have to move that boundary.”

‘It Doesn’t Look So Easy’

Before the dream of a quantum computer took shape in the 1980s, most computer scientists took for granted that classical computing was all there was. The field’s pioneers had convincingly argued that classical computers — epitomized by the mathematical abstraction known as a Turing machine — should be able to compute everything that is computable in the physical universe, from basic arithmetic to stock trades to black hole collisions.

Classical machines couldn’t necessarily do all these computations efficiently, though. Let’s say you wanted to understand something like the chemical behavior of a molecule. This behavior depends on the behavior of the electrons in the molecule, which exist in a superposition of many classical states. Making things messier, the quantum state of each electron depends on the states of all the others — due to the quantum-mechanical phenomenon known as entanglement. Classically calculating these entangled states in even very simple molecules can become a nightmare of exponentially increasing complexity.

A quantum computer, by contrast, can deal with the intertwined fates of the electrons under study by superposing and entangling its own quantum bits. This enables the computer to process extraordinary amounts of information. Each single qubit you add doubles the states the system can simultaneously store: Two qubits can store four states, three qubits can store eight states, and so on. Thus, you might need just 50 entangled qubits to model quantum states that would require exponentially many classical bits — 1.125 quadrillion to be exact — to encode.
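
To put that scaling in concrete terms, here is a quick sketch of the classical memory needed just to store the full state vector of n qubits, assuming 16 bytes per double-precision complex amplitude; the figures are my own arithmetic, not numbers quoted by the researchers.

    # A quick sketch of the scaling: an n-qubit state vector holds 2**n complex
    # amplitudes, and the storage doubles with every added qubit.
    def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (10, 30, 50):
        print(f"{n:2d} qubits -> {2**n:>22,} amplitudes ~ {state_vector_bytes(n):,} bytes")
    # 30 qubits already needs about 16 GiB; 50 qubits needs roughly 16 pebibytes
    # (over 16 million GiB) just to hold the state, never mind manipulating it.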

A quantum machine could therefore make the classically intractable problem of simulating large quantum-mechanical systems tractable, or so it appeared. “Nature isn’t classical, dammit, and if you want to make a simulation of nature, you’d better make it quantum mechanical,” the physicist Richard Feynman famously quipped in 1981. “And by golly it’s a wonderful problem, because it doesn’t look so easy.”

It wasn’t, of course.

Even before anyone began tinkering with quantum hardware, theorists struggled to come up with suitable software. Early on, Feynman and David Deutsch, a physicist at the University of Oxford, learned that they could control quantum information with mathematical operations borrowed from linear algebra, which they called gates. As analogues to classical logic gates, quantum gates manipulate qubits in all sorts of ways — guiding them into a succession of superpositions and entanglements and then measuring their output. By mixing and matching gates to form circuits, the theorists could easily assemble quantum algorithms.
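
Here is a minimal linear-algebra sketch of that idea; it is a standard textbook construction, not code from any of the researchers mentioned. A Hadamard gate followed by a CNOT, each just a small matrix, turns two qubits that start in the state |00> into the entangled Bell state (|00> + |11>) / sqrt(2).

    # A minimal "gates as matrices" sketch (textbook example, illustration only).
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],                   # flips the second qubit
                     [0, 1, 0, 0],                   # when the first qubit is 1
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4)
    state[0] = 1.0                                   # start in |00>
    state = np.kron(H, I) @ state                    # Hadamard on the first qubit
    state = CNOT @ state                             # entangle the pair
    print(state)                                     # [0.707, 0, 0, 0.707]
    print(np.abs(state) ** 2)                        # only |00> and |11> are ever observed,
                                                     # each with probability 1/2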

Conceiving algorithms that promised clear computational benefits proved more difficult. By the early 2000s, mathematicians had come up with only a few good candidates. Most famously, in 1994, a young staffer at Bell Laboratories named Peter Shor proposed a quantum algorithm that factors integers exponentially faster than any known classical algorithm — an efficiency that could allow it to crack many popular encryption schemes. Two years later, Shor’s Bell Labs colleague Lov Grover devised an algorithm that speeds up the classically tedious process of searching through unsorted databases. “There were a variety of examples that indicated quantum computing power should be greater than classical,” said Richard Jozsa, a quantum information scientist at the University of Cambridge.

But Jozsa, along with other researchers, would also discover a variety of examples that indicated just the opposite. “It turns out that many beautiful quantum processes look like they should be complicated” and therefore hard to simulate on a classical computer, Jozsa said. “But with clever, subtle mathematical techniques, you can figure out what they will do.” He and his colleagues found that they could use these techniques to efficiently simulate — or “de-quantize,” as Calude would say — a surprising number of quantum circuits. For instance, circuits that omit entanglement fall into this trap, as do those that entangle only a limited number of qubits or use only certain kinds of entangling gates.

What, then, guarantees that an algorithm like Shor’s is uniquely powerful? “That’s very much an open question,” Jozsa said. “We never really succeeded in understanding why some [algorithms] are easy to simulate classically and others are not. Clearly entanglement is important, but it’s not the end of the story.” Experts began to wonder whether many of the quantum algorithms that they believed were superior might turn out to be only ordinary.

Sampling Struggle

Until recently, the pursuit of quantum power was largely an abstract one. “We weren’t really concerned with implementing our algorithms because nobody believed that in the reasonable future we’d have a quantum computer to do it,” Jozsa said. Running Shor’s algorithm for integers large enough to unlock a standard 128-bit encryption key, for instance, would require thousands of qubits — plus probably many thousands more to correct for errors. Experimentalists, meanwhile, were fumbling while trying to control more than a handful.

But by 2011, things were starting to look up. That fall, at a conference in Brussels, Preskill speculated that “the day when well-controlled quantum systems can perform tasks surpassing what can be done in the classical world” might not be far off. Recent laboratory results, he said, could soon lead to quantum machines on the order of 100 qubits. Getting them to pull off some “super-classical” feat maybe wasn’t out of the question. (Although D-Wave Systems’ commercial quantum processors could by then wrangle 128 qubits and now boast more than 2,000, they tackle only specific optimization problems; many experts doubt they can outperform classical computers.)

“I was just trying to emphasize we were getting close — that we might finally reach a real milestone in human civilization where quantum technology becomes the most powerful information technology that we have,” Preskill said. He called this milestone “quantum supremacy.” The name — and the optimism — stuck. “It took off to an extent I didn’t suspect.”

The buzz about quantum supremacy reflected a growing excitement in the field — over experimental progress, yes, but perhaps more so over a series of theoretical breakthroughs that began with a 2004 paper by the IBM physicists Barbara Terhal and David DiVincenzo. In their effort to understand quantum assets, the pair had turned their attention to rudimentary quantum puzzles known as sampling problems. In time, this class of problems would become experimentalists’ greatest hope for demonstrating an unambiguous speedup on early quantum machines.

Sampling problems exploit the elusive nature of quantum information. Say you apply a sequence of gates to 100 qubits. This circuit may whip the qubits into a mathematical monstrosity equivalent to something on the order of 2^100 classical bits. But once you measure the system, its complexity collapses to a string of only 100 bits. The system will spit out a particular string — or sample — with some probability determined by your circuit.

In a sampling problem, the goal is to produce a series of samples that look as though they came from this circuit. It’s like repeatedly tossing a coin to show that it will (on average) come up 50 percent heads and 50 percent tails. Except here, the outcome of each “toss” isn’t a single value — heads or tails — it’s a string of many values, each of which may be influenced by some (or even all) of the other values.

For a well-oiled quantum computer, this exercise is a no-brainer. It’s what it does naturally. Classical computers, on the other hand, seem to have a tougher time. In the worst circumstances, they must do the unwieldy work of computing probabilities for all possible output strings — all 2^100 of them — and then randomly select samples from that distribution. “People always conjectured this was the case,” particularly for very complex quantum circuits, said Ashley Montanaro, an expert in quantum algorithms at the University of Bristol.
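
The brute-force classical route looks something like the following toy sketch. It is my own simplification: a single random unitary stands in for a layered circuit, and only 8 qubits are simulated, but the arrays involved double in size with every added qubit, which is exactly why this approach becomes hopeless at around 50 qubits.

    # A toy illustration of brute-force classical sampling: simulate a small
    # random circuit exactly, compute every output probability, then sample.
    import numpy as np

    rng = np.random.default_rng(1)
    n_qubits, depth = 8, 10
    dim = 2 ** n_qubits

    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0
    for _ in range(depth):
        # Stand-in for a layer of gates: a random unitary acting on all qubits.
        z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        q, _ = np.linalg.qr(z)
        state = q @ state

    probs = np.abs(state) ** 2            # all 2**n output probabilities
    probs /= probs.sum()                  # tidy up floating-point drift
    samples = rng.choice(dim, size=5, p=probs)
    print([format(s, f"0{n_qubits}b") for s in samples])   # a few sampled bit strings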

Terhal and DiVincenzo showed that even some simple quantum circuits should still be hard to sample by classical means. Hence, a bar was set. If experimentalists could get a quantum system to spit out these samples, they would have good reason to believe that they’d done something classically unmatchable.

Theorists soon expanded this line of thought to include other sorts of sampling problems. One of the most promising proposals came from Scott Aaronson, a computer scientist then at the Massachusetts Institute of Technology, and his doctoral student Alex Arkhipov. In work posted on the scientific preprint site arxiv.org in 2010, they described a quantum machine that sends photons through an optical circuit, which shifts and splits the light in quantum-mechanical ways, thereby generating output patterns with specific probabilities. Reproducing these patterns became known as boson sampling. Aaronson and Arkhipov reasoned that boson sampling would start to strain classical resources at around 30 photons — a plausible experimental target.

Similarly enticing were computations called instantaneous quantum polynomial, or IQP, circuits. An IQP circuit has gates that all commute, meaning they can act in any order without changing the outcome — in the same way 2 + 5 = 5 + 2. This quality makes IQP circuits mathematically pleasing. “We started studying them because they were easier to analyze,” Bremner said. But he discovered that they have other merits. In work that began in 2010 and culminated in a 2016 paper with Montanaro and Dan Shepherd, now at the National Cyber Security Center in the U.K., Bremner explained why IQP circuits can be extremely powerful: Even for physically realistic systems of hundreds — or perhaps even dozens — of qubits, sampling would quickly become a classically thorny problem.
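
A small sketch of the commuting-gate property, illustrating the general idea rather than the specific circuits in Bremner’s papers: gates that are all diagonal in the same basis commute as matrices, so the order in which they are applied cannot change the outcome.

    # Diagonal gates commute, so their order never matters (illustration only).
    import numpy as np

    rng = np.random.default_rng(2)

    def random_phase_gate(n_qubits):
        """A random gate that is diagonal in the computational basis."""
        phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=2 ** n_qubits))
        return np.diag(phases)

    A = random_phase_gate(3)
    B = random_phase_gate(3)
    print(np.allclose(A @ B, B @ A))   # True: A then B equals B then A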

By 2016, boson samplers had yet to extend beyond 6 photons. Teams at Google and IBM, however, were verging on chips nearing 50 qubits; that August, Google quietly posted a draft paper laying out a road map for demonstrating quantum supremacy on these “near-term” devices.

Google’s team had considered sampling from an IQP circuit. But a closer look by Bremner and his collaborators suggested that the circuit would likely need some error correction — which would require extra gates and at least a couple hundred extra qubits — in order to unequivocally hamstring the best classical algorithms. So instead, the team used arguments akin to Aaronson’s and Bremner’s to show that circuits made of non-commuting gates, although likely harder to build and analyze than IQP circuits, would also be harder for a classical device to simulate. To make the classical computation even more challenging, the team proposed sampling from a circuit chosen at random. That way, classical competitors would be unable to exploit any familiar features of the circuit’s structure to better guess its behavior.

But there was nothing to stop the classical algorithms from getting more resourceful. In fact, in October 2017, a team at IBM showed how, with a bit of classical ingenuity, a supercomputer can simulate sampling from random circuits on as many as 56 qubits — provided the circuits don’t involve too much depth (layers of gates). Similarly, a more able algorithm has recently nudged the classical limits of boson sampling, to around 50 photons.

These upgrades, however, are still dreadfully inefficient. IBM’s simulation, for instance, took two days to do what a quantum computer is expected to do in less than one-tenth of a millisecond. Add a couple more qubits — or a little more depth — and quantum contenders could slip freely into supremacy territory. “Generally speaking, when it comes to emulating highly entangled systems, there has not been a [classical] breakthrough that has really changed the game,” Preskill said. “We’re just nibbling at the boundary rather than exploding it.”

That’s not to say there will be a clear victory. “Where the frontier is is a thing people will continue to debate,” Bremner said. Imagine this scenario: Researchers sample from a 50-qubit circuit of some depth — or maybe a slightly larger one of less depth — and claim supremacy. But the circuit is pretty noisy — the qubits are misbehaving, or the gates don’t work that well. So then some crackerjack classical theorists swoop in and simulate the quantum circuit, no sweat, because “with noise, things you think are hard become not so hard from a classical point of view,” Bremner explained. “Probably that will happen.”

What’s more certain is that the first “supreme” quantum machines, if and when they arrive, aren’t going to be cracking encryption codes or simulating novel pharmaceutical molecules. “That’s the funny thing about supremacy,” Montanaro said. “The first wave of problems we solve are ones for which we don’t really care about the answers.”

Yet these early wins, however small, will assure scientists that they are on the right track — that a new regime of computation really is possible. Then it’s anyone’s guess what the next wave of problems will be.
