Quantum teleportation was just achieved over more than 7 km of city fibre

It’s getting real.

Quantum teleportation just moved out of the lab and into the real world, with two independent teams of scientists successfully sending quantum information across several kilometres of optical fibre networks in Calgary, Canada, and Hefei, China.

The experiments show that not only is quantum teleportation very much real, it’s also feasible technology that could one day help us build unhackable quantum communication systems that stretch across cities and maybe even continents.

Quantum teleportation relies on a strange phenomenon called quantum entanglement. Basically, quantum entanglement means that two particles are inextricably linked, so that measuring the state of one immediately affects the state of the other, no matter how far apart the two are – which led Einstein to call entanglement “spooky action at a distance”.

Using that property, quantum teleportation allows the quantum state of one particle to be transferred to its partner, no matter the distance between the two, without anything physical passing between them.

That’s not like the teleportation you see in sci-fi shows like Star Trek – only information can be sent via quantum teleportation, not people.

What it is, though, is a great way to create an unhackable, totally encrypted form of communication – just imagine receiving information that can only be interpreted once you know the state of your entangled particle.

In the latest experiments, both published in Nature Photonics (here and here), the teams had slightly different set-ups and results. But what they both had in common is the fact that they teleported their information across existing optical fibre networks – which is important if we ever want to build useable quantum communication systems.

In fact, quantum teleportation has been achieved over greater distances in the past – in 2012, researchers from Austria set a record by teleporting information across 143 km of space using lasers, but that technology isn’t as useful for practical networks as optical fibre.

To understand the experiments, Anil Ananthaswamy over at New Scientist nicely breaks them down like this: picture three people involved – Alice, Bob, and Charlie.

Alice and Bob want to share cryptographic keys, and to do that, they need Charlie’s help. Alice sends a particle to Charlie, while Bob entangles two particles and sends just one of them to Charlie.

Charlie then measures the two particles he’s received – one from each of them – so that they can no longer be differentiated, and that results in the quantum state of Alice’s particle being transferred to Bob’s entangled particle.

So basically, the quantum state of Alice’s particle eventually ends up in Bob’s particle, via a way station in the form of Charlie.
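The Alice-Bob-Charlie picture above is the textbook teleportation protocol, which can be sketched in a few lines of linear algebra. The following is a toy statevector simulation of basic qubit teleportation (not the papers' actual photonic set-up): Alice's unknown state ends up on Bob's half of the entangled pair for every possible measurement outcome, once the right correction is applied.

```python
import numpy as np

# Toy statevector simulation of textbook qubit teleportation.
# Qubit 0: Alice's unknown state; qubits 1 and 2: the entangled pair
# (qubit 1 goes into the Bell measurement, qubit 2 is Bob's).
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def op(gate, qubit, n=3):
    """Embed a single-qubit gate on `qubit` in an n-qubit register."""
    mats = [I] * n
    mats[qubit] = gate
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target, n=3):
    """Permutation matrix for a CNOT between two qubits."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

a, b = 0.6, 0.8                                  # Alice's unknown amplitudes
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # shared entangled pair
psi = np.kron([a, b], bell)

# Bell measurement on qubits 0 and 1 = CNOT then Hadamard, then read out
psi = op(H, 0) @ cnot(0, 1) @ psi

for m0 in (0, 1):
    for m1 in (0, 1):
        proj = psi.reshape(2, 2, 2)[m0, m1, :]   # project onto outcome (m0, m1)
        proj = proj / np.linalg.norm(proj)
        # Bob applies X if m1 = 1, then Z if m0 = 1
        fixed = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ proj
        assert np.allclose(fixed, [a, b])        # Bob now holds Alice's state
print("all four outcomes teleport the state correctly")
```

Note that Bob's correction depends on the two measurement bits, which must reach him classically – which is also why teleportation can't send information faster than light.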

The Canadian experiment followed this same process, and was able to send quantum information across 6.2 km of Calgary’s fibre-optic network, using a line that’s not in regular use.

“The distance between Charlie and Bob, that’s the distance that counts,” lead researcher of the Canadian experiment, Wolfgang Tittel, from the University of Calgary in Alberta, told New Scientist. “We have shown that this works across a metropolitan fibre network, over 6.2 kilometres, as the crow flies.”

The Chinese researchers were able to extend their teleportation further, over a 12.5 km area, but they had a slightly different set-up. It was Charlie in the middle who created the entangled particles and sent one to Bob, instead of the other way around.

This resulted in more accurate communication, and could work best for a quantum network where a central quantum computer (Charlie) communicates with lots of Alices and Bobs around a city. But the Calgary model could spread even greater distances, because Bob could work like a quantum repeater, sending the information further and further down the line.

The downside to both experiments was that they couldn’t send very much information. The Calgary experiment was the fastest, managing to send just 17 photons a minute.

And while many people assume that quantum teleportation would result in faster communication, in reality, decrypting the quantum state of the entangled particle requires a key, which needs to be sent via regular, slow communication – so quantum teleportation wouldn’t actually be any faster than the internet we already have, just more secure.
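The "more secure, not faster" point can be made concrete with a toy example. Assuming the two parties have already distilled a shared secret key (in the quantum case, from entangled-photon measurements), the message itself still travels over an ordinary classical channel; the key just makes it unreadable to eavesdroppers. This sketch uses a one-time pad, the textbook case:

```python
import secrets

# Toy illustration of "secure, not faster": the ciphertext crosses the
# network at ordinary speed, but without the shared key it is pure noise.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # stand-in for a quantum-derived key
ciphertext = xor_bytes(message, key)      # what actually travels classically

assert xor_bytes(ciphertext, key) == message   # only the key holder can read it
```

A one-time pad with a truly random, never-reused key is information-theoretically unbreakable, which is exactly the guarantee quantum key distribution aims to supply.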

But the fact that both teams were able to use existing telecommunications infrastructure to achieve such long-distance teleportation at all is a huge deal – and something that hasn’t been done outside of the lab before.

It’s going to take a lot more tweaking and investigation before it’s something that we can use in our daily lives, but we’re definitely getting closer.

Japanese Scientists Prove Teleportation is Possible.

The future is already here: for the first time in the world, a team of Japanese scientists has managed to implement teleportation, transferring a beam of light from point A to point B. For the purpose of the experiment, Noriyuki Lee and his colleagues divided the light into elementary particles – photons – and kept only one photon, which carried the information about the rest of the beam.

This photon was entangled at the quantum level with another photon located at point B. The two photons instantaneously affected each other despite being physically located in different places, and thanks to this phenomenon, the original beam was recreated at the same moment elsewhere, using the information carried by the photon.

It is interesting that the possibility of quantum entanglement of elementary particles was suggested by Albert Einstein in 1935, but at the time even the physicist himself considered the idea absurd. Physicists have since proved that quantum entanglement exists, and these days some companies have built secure communication technology on the basis of this phenomenon.

Furthermore, among other things, the phenomenon of quantum entanglement has been suggested as possible evidence for the existence of parallel universes.

Doing more with less: Steering a quantum path to improved internet security

Research conducted at Griffith University in Queensland, Australia, may lead to greatly improved security of information transfer over the internet.

In a paper published in the online journal Nature Communications, physicists from Griffith’s Centre for Quantum Dynamics demonstrate the potential for “quantum steering” to be used to enhance data security over long distances, discourage hackers and eavesdroppers, and resolve issues of trust with communication devices.

“Quantum physics promises the possibility of absolutely secure communication, where your credit card details or other personal data sent over the internet could be completely isolated from hackers,” says project leader Professor Geoff Pryde.

“In an ideal world, such perfectly secure long-distance communication between any two parties is simple. They could share strongly entangled quantum systems—such as particles of light called photons—to generate truly random and uncrackable codes.

“Unfortunately, in the real world the two parties cannot share sufficiently strong entanglement over long distances due to transmission and detection losses. As the photons travel through the communication network, some are lost, thus providing a loophole for outsiders to attack their code.”

A backup solution—and the focus of the Griffith research—is quantum steering, where a measurement made on one party’s quantum system changes, or steers, the system held by another.

Professor Pryde says that, despite being a weaker form of entanglement, quantum steering operates paradoxically to maintain communication security while tolerating greater real-world loss and removing the need for absolute trust in devices.

“Quantum entanglement is a wonderful resource for safe and secure communication, but you need to verify it is really there to be certain any eavesdroppers are kept out of the loop,” he says.

“Our new technique does so without requiring any trust in the communication devices and it should work in long distance scenarios where standard methods fail.”

The Griffith team used special photon quantum states to program a measurement apparatus at each step of sending the code.

Because of “Heisenberg’s uncertainty principle” – which states that one can never be certain of both the position and speed of a microscopic particle – a hacker cannot reliably determine these quantum states even if they have hacked an apparatus. Remarkably, this means the apparatus can still be used securely.
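A minimal sketch of why uncertainty protects such states (this is the generic conjugate-bases argument from quantum cryptography, not the Griffith team's specific protocol): a qubit prepared in one basis reads out deterministically in that basis, but gives a 50/50 coin flip in the conjugate basis, so a hacker who doesn't know the preparation basis cannot reliably read it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Z-basis states |0>, |1> and conjugate X-basis states |+>, |->
s = 1 / np.sqrt(2)
states = {('Z', 0): np.array([1.0, 0.0]), ('Z', 1): np.array([0.0, 1.0]),
          ('X', 0): np.array([s, s]),     ('X', 1): np.array([s, -s])}

H = np.array([[1, 1], [1, -1]]) * s   # Hadamard rotates the X basis into Z

def measure(state, basis, rng):
    """Measure a qubit in the Z or X basis; return the outcome bit."""
    if basis == 'X':
        state = H @ state
    p0 = abs(state[0]) ** 2
    return 0 if rng.random() < p0 else 1

# Correct basis: the prepared bit is recovered deterministically
assert measure(states[('X', 1)], 'X', rng) == 1

# Wrong basis: the outcome is a 50/50 coin flip, so the eavesdropper
# learns essentially nothing reliable about the prepared state
trials = 10_000
hits = sum(measure(states[('Z', 0)], 'X', rng) == 0 for _ in range(trials))
print(hits / trials)   # close to 0.5
```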

In the experimental demonstration, measurement devices representing the two parties were constructed and received entangled photons from a quantum source. Another photon source, representing the referee, was used to prepare the quantum states for programming one apparatus.

After many runs of the protocol, the referee could use the measurement results from both parties to perform a mathematical test for genuine quantum steering, as derived by Griffith theoretical physicist Dr Michael Hall.

“The team showed that the quantum-refereed steering protocol can match tests for strong entanglement in not requiring trust in the measuring devices, and has the further advantage of being robust to noise,” says Dr Hall, adding that researchers hope to use the technique in a full quantum secure coding demonstration.

Read more at: http://phys.org/news/2015-01-quantum-path-internet.html#jCp

Physicists propose way to use quantum bidding in bridge

A team of physicists in Europe, led by Marcin Pawlowski, has proposed a way to use entangled quantum particles to improve the odds of winning in the game of bridge. As the team notes in their paper published in Physical Review X, their proposition appears to be the first example of using quantum enhancement of information transfer as it applies to a real-world (non-physics) application.

Entanglement is where pairs of quantum particles are generated where their quantum state is no longer defined independently—instead a quantum state comes to exist that defines them as a single unit. The team in Europe has taken this concept and applied it to the game of bridge, increasing the odds of winning by a pair of players that successfully employs the strategy they’ve devised.
Bridge is a card game played by teams of paired individuals—successful players learn to communicate meaningfully with one another to convey information each needs about their partner’s hand, without being specific—that’s against the rules. In this new scenario, Pawlowski et al. suggest that in addition to a handful of cards, players are given the means to receive (and measure) one particle of an entangled pair from their partner—doing so, they note, would not violate the rules of bridge, as quantum entanglement cannot be used to send messages.

Use of entanglement in bridge would only come up during certain parts of play, such as when hands are dealt and players on a team are trying to determine what cards their partner holds that could fill the gaps in their own hand. Here, team members would use combined measurements on the entangled particles, along with information provided via coded bids, to come up with partial information that helps them better understand each other’s cards. Such a strategy isn’t foolproof, of course – it would improve the odds of winning rather than provide a clear path to victory. Pawlowski and his team have calculated that using their strategy would increase the probability of one team member guessing what cards their partner is holding from 87.5% to 89.5%, a gain of 2 percentage points, but one that, for a team playing tournament style, could mean a decided advantage.

The paper’s abstract sums up the result: “Quantum methods allow us to reduce communication complexity of some computational tasks, with several separated partners, beyond classical constraints. Nevertheless, experimental demonstrations of this have thus far been limited to some abstract problems, far away from real-life tasks. We show here, and demonstrate experimentally, that the power of reduction of communication complexity can be harnessed to gain an advantage in a famous, immensely popular, card game—bridge. The essence of a winning strategy in bridge is efficient communication between the partners. The rules of the game allow only a specific form of communication, of very low complexity (effectively, one has strong limitations on the number of exchanged bits). Surprisingly, our quantum technique does not violate the existing rules of the game (as there is no increase in information flow). We show that our quantum bridge auction corresponds to a biased nonlocal Clauser-Horne-Shimony-Holt game, which is equivalent to a 2→1 quantum random access code. Thus, our experiment is also a realization of such protocols. However, this correspondence is not complete, which enables the bridge players to have efficient strategies regardless of the quality of their detectors.”
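The 2→1 quantum random access code mentioned in the abstract is easy to simulate numerically. This sketch is the generic QRAC (its numbers differ from the paper's biased bridge variant): Alice packs two bits into a single qubit, and Bob recovers whichever bit he chooses with probability cos²(π/8) ≈ 85.4%, against a best classical success rate of 75%.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic 2->1 quantum random access code: Alice encodes bits (x0, x1) in one
# qubit with Bloch vector ((-1)^x1, 0, (-1)^x0) / sqrt(2); Bob measures
# Z to read x0 or X to read x1.
def qrac_success(trials=20_000):
    wins = 0
    for _ in range(trials):
        x0, x1 = rng.integers(0, 2, size=2)    # Alice's two bits
        bz = (1 - 2 * x0) / np.sqrt(2)         # Bloch z-component
        bx = (1 - 2 * x1) / np.sqrt(2)         # Bloch x-component
        which = rng.integers(0, 2)             # which bit Bob wants
        comp = bz if which == 0 else bx
        # Born rule: P(outcome 0) = (1 + <Pauli>) / 2
        outcome = 0 if rng.random() < (1 + comp) / 2 else 1
        wins += outcome == (x0 if which == 0 else x1)
    return wins / trials

p = qrac_success()
print(p)   # close to cos^2(pi/8) ~ 0.854; best classical strategy: 0.75
```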

New Quantum Theory Could Explain the Flow of Time.

Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”

But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.

Original story reprinted with permission from Simons Science News, an editorially independent division of SimonsFoundation.org whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

“In classical physics, we were struggling,” said Sandu Popescu, a professor of physics at the University of Bristol in the United Kingdom. “If I knew more, could I reverse the event, put together all the molecules of the egg that broke? Why am I relevant?”

Surely, he said, time’s arrow is not steered by human ignorance. And yet, since the birth of thermodynamics in the 1850s, the only known approach for calculating the spread of energy was to formulate statistical distributions of the unknown trajectories of particles, and show that, over time, the ignorance smeared things out.

Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”

“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”

Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters. Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site arXiv.org in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.

A watershed paper by Noah Linden, left, Sandu Popescu, Tony Short and Andreas Winter (not pictured) in 2009 showed that entanglement causes objects to evolve toward equilibrium. The generality of the proof is “extraordinarily surprising,” Popescu says. “The fact that a system reaches equilibrium is universal.” The paper triggered further research on the role of entanglement in directing the arrow of time. Photo: Courtesy of Tony Short

The tendency of coffee — and everything else — to reach equilibrium is “very intuitive,” said Nicolas Brunner, a quantum physicist at the University of Geneva. “But when it comes to explaining why it happens, this is the first time it has been derived on firm grounds by considering a microscopic theory.”
If the new line of research is correct, then the story of time’s arrow begins with the quantum mechanical idea that, deep down, nature is inherently uncertain. An elementary particle lacks definite physical properties and is defined only by probabilities of being in various states. For example, at a particular moment, a particle might have a 50 percent chance of spinning clockwise and a 50 percent chance of spinning counterclockwise. An experimentally tested theorem by the Northern Irish physicist John Bell says there is no “true” state of the particle; the probabilities are the only reality that can be ascribed to it.
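Bell's theorem is usually tested through the Clauser-Horne-Shimony-Holt (CHSH) inequality, which can be checked with a few lines of arithmetic. For a singlet pair, the quantum correlation of spin measurements at analyser angles a and b is E(a, b) = -cos(a - b); any model in which the particles carry "true" pre-existing states keeps |S| ≤ 2, while the quantum prediction reaches 2√2:

```python
import numpy as np

# CHSH check for a singlet pair: local hidden-variable models obey |S| <= 2,
# quantum mechanics reaches 2*sqrt(2) (Tsirelson's bound).
def E(a, b):
    """Quantum correlation of singlet spin measurements at angles a, b."""
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2             # Alice's two analyser settings
b1, b2 = np.pi / 4, 3 * np.pi / 4   # Bob's two analyser settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828..., i.e. 2*sqrt(2), beyond the classical bound of 2
```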

Quantum uncertainty then gives rise to entanglement, the putative source of the arrow of time.

When two particles interact, they can no longer even be described by their own, independently evolving probabilities, called “pure states.” Instead, they become entangled components of a more complicated probability distribution that describes both particles together. It might dictate, for example, that the particles spin in opposite directions. The system as a whole is in a pure state, but the state of each individual particle is “mixed” with that of its acquaintance. The two could travel light-years apart, and the spin of each would remain correlated with that of the other, a feature Albert Einstein famously described as “spooky action at a distance.”
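The "pure whole, mixed parts" statement above is a two-line calculation: tracing one qubit out of a Bell pair leaves the other in the maximally mixed state, even though the pair as a whole is perfectly pure.

```python
import numpy as np

# The entangled pair is pure (purity Tr(rho^2) = 1), but each half on its
# own is maximally mixed (purity 1/2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())                # density matrix of the pair

purity_full = np.trace(rho @ rho).real

# Partial trace over qubit B: reshape to indices (a, b, a', b'), sum b = b'
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
purity_A = np.trace(rho_A @ rho_A).real

print(purity_full, purity_A)   # ~1.0 and 0.5
```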

“Entanglement is in some sense the essence of quantum mechanics,” or the laws governing interactions on the subatomic scale, Brunner said. The phenomenon underlies quantum computing, quantum cryptography and quantum teleportation.

The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago, when he was a 23-year-old philosophy graduate student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time.

Seth Lloyd, now an MIT professor, came up with the idea that entanglement might explain the arrow of time while he was in graduate school at Cambridge University in the 1980s. Photo: Courtesy of Seth Lloyd

Using an obscure approach to quantum mechanics that treated units of information as its basic building blocks, Lloyd spent several years studying the evolution of particles in terms of shuffling 1s and 0s. He found that as the particles became increasingly entangled with one another, the information that originally described them (a “1” for clockwise spin and a “0” for counterclockwise, for example) would shift to describe the system of entangled particles as a whole. It was as though the particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature.
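Lloyd's picture can be illustrated numerically: start a handful of qubits in a product state, apply random pairwise interactions, and watch one qubit's entanglement entropy climb from zero toward its 1-bit maximum. This toy simulation only illustrates the general idea, not Lloyd's actual information-theoretic calculation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "coffee equilibrates with the room": six qubits start unentangled;
# random two-qubit interactions spread information into correlations.
n = 6
state = np.zeros(2 ** n)
state[0] = 1.0                                   # every qubit in |0>

def random_unitary(rng):
    """Random 4x4 unitary (QR decomposition of a complex Gaussian matrix)."""
    m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    q, _ = np.linalg.qr(m)
    return q

def apply_on_pair(state, u, i, j, n):
    """Apply a two-qubit unitary u to qubits i and j of an n-qubit state."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, (i, j), (0, 1)).reshape(4, -1)
    psi = u @ psi
    psi = np.moveaxis(psi.reshape([2, 2] + [2] * (n - 2)), (0, 1), (i, j))
    return psi.reshape(-1)

def entropy_first_qubit(state):
    """Entanglement entropy (in bits) of qubit 0 with the rest."""
    s = np.linalg.svd(state.reshape(2, -1), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(max(0.0, -(p * np.log2(p)).sum()))

entropies = [entropy_first_qubit(state)]
for _ in range(30):
    i, j = rng.choice(n, size=2, replace=False)
    state = apply_on_pair(state, random_unitary(rng), int(i), int(j), n)
    entropies.append(entropy_first_qubit(state))

print(round(entropies[0], 3), round(entropies[-1], 3))  # starts at 0, ends near 1
```

Once the entropy saturates near its maximum, the individual qubit's state has stopped changing: the information originally describing it now lives in the correlations, which is exactly Lloyd's "arrow of increasing correlations".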

“What’s really going on is things are becoming more correlated with each other,” Lloyd recalls realizing. “The arrow of time is an arrow of increasing correlations.”

The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” he remembers one physicist telling him.

“I was darn close to driving a taxicab,” Lloyd said.

Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system.

“When Lloyd proposed the idea in his thesis, the world was not ready,” said Renato Renner, head of the Institute for Theoretical Physics at ETH Zurich. “No one understood it. Sometimes you have to have the idea at the right time.”

As a hot cup of coffee equilibrates with the surrounding air, coffee particles (white) and air particles (brown) interact and become entangled mixtures of brown and white states. After some time, most of the particles in the coffee are correlated with air particles; the coffee has reached thermal equilibrium. Image: Lidia del Rio

In 2009, the Bristol group’s proof resonated with quantum information theorists, opening up new uses for their techniques. It showed that as objects interact with their surroundings — as the particles in a cup of coffee collide with the air, for example — information about their properties “leaks out and becomes smeared over the entire environment,” Popescu explained. This local information loss causes the state of the coffee to stagnate even as the pure state of the entire room continues to evolve. Except for rare, random fluctuations, he said, “its state stops changing in time.”

Consequently, a tepid cup of coffee does not spontaneously warm up. In principle, as the pure state of the room evolves, the coffee could suddenly become unmixed from the air and enter a pure state of its own. But there are so many more mixed states than pure states available to the coffee that this practically never happens — one would have to outlive the universe to witness it. This statistical unlikelihood gives time’s arrow the appearance of irreversibility. “Essentially entanglement opens a very large space for you,” Popescu said. “It’s like you are at the park and you start next to the gate, far from equilibrium. Then you enter and you have this enormous place and you get lost in it. And you never come back to the gate.”

In the new story of the arrow of time, it is the loss of information through quantum entanglement, rather than a subjective lack of human knowledge, that drives a cup of coffee into equilibrium with the surrounding room. The room eventually equilibrates with the outside environment, and the environment drifts even more slowly toward equilibrium with the rest of the universe. The giants of 19th century thermodynamics viewed this process as a gradual dispersal of energy that increases the overall entropy, or disorder, of the universe. Today, Lloyd, Popescu and others in their field see the arrow of time differently. In their view, information becomes increasingly diffuse, but it never disappears completely. So, they assert, although entropy increases locally, the overall entropy of the universe stays constant at zero.

“The universe as a whole is in a pure state,” Lloyd said. “But individual pieces of it, because they are entangled with the rest of the universe, are in mixtures.”

One aspect of time’s arrow remains unsolved. “There is nothing in these works to say why you started at the gate,” Popescu said, referring to the park analogy. “In other words, they don’t explain why the initial state of the universe was far from equilibrium.” He said this is a question about the nature of the Big Bang.

Despite the recent progress in calculating equilibration time scales, the new approach has yet to make headway as a tool for parsing the thermodynamic properties of specific things, like coffee, glass or exotic states of matter. (Several traditional thermodynamicists reported being only vaguely aware of the new approach.) “The thing is to find the criteria for which things behave like window glass and which things behave like a cup of tea,” Renner said. “I would see the new papers as a step in this direction, but much more needs to be done.”

Some researchers expressed doubt that this abstract approach to thermodynamics will ever be up to the task of addressing the “hard nitty-gritty of how specific observables behave,” as Lloyd put it. But the conceptual advance and new mathematical formalism are already helping researchers address theoretical questions about thermodynamics, such as the fundamental limits of quantum computers and even the ultimate fate of the universe.

“We’ve been thinking more and more about what we can do with quantum machines,” said Paul Skrzypczyk of the Institute of Photonic Sciences in Barcelona. “Given that a system is not yet at equilibrium, we want to get work out of it. How much useful work can we extract? How can I intervene to do something interesting?”

Sean Carroll, a theoretical cosmologist at the California Institute of Technology, is employing the new formalism in his latest work on time’s arrow in cosmology. “I’m interested in the ultra-long-term fate of cosmological space-times,” said Carroll, author of “From Eternity to Here: The Quest for the Ultimate Theory of Time.” “That’s a situation where we don’t really know all of the relevant laws of physics, so it makes sense to think on a very abstract level, which is why I found this basic quantum-mechanical treatment useful.”

Twenty-six years after Lloyd’s big idea about time’s arrow fell flat, he is pleased to be witnessing its rise and has been applying the ideas in recent work on the black hole information paradox. “I think now the consensus would be that there is physics in this,” he said.

Not to mention a bit of philosophy.

According to the scientists, our ability to remember the past but not the future, another historically confounding manifestation of time’s arrow, can also be understood as a buildup of correlations between interacting particles. When you read a message on a piece of paper, your brain becomes correlated with it through the photons that reach your eyes. Only from that moment on will you be capable of remembering what the message says. As Lloyd put it: “The present can be defined by the process of becoming correlated with our surroundings.”

The backdrop for the steady growth of entanglement throughout the universe is, of course, time itself. The physicists stress that despite great advances in understanding how changes in time occur, they have made no progress in uncovering the nature of time itself or why it seems different (both perceptually and in the equations of quantum mechanics) than the three dimensions of space. Popescu calls this “one of the greatest unknowns in physics.”

“We can discuss the fact that an hour ago, our brains were in a state that was correlated with fewer things,” he said. “But our perception that time is flowing — that is a different matter altogether. Most probably, we will need a further revolution in physics that will tell us about that.”

Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it.

When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.

It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities, making it impossible to make sense of the results.

Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinities—a huge advance.

But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.

This conundrum, which physicists call ‘the problem of time’, has proved to be a thorn in the side of modern physicists, who have tried to ignore it, with little success.

Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.

Entanglement is a deep and powerful link, and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.

But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.

In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.

But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of the entangled particles compared with everything else is a measure of time.

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equation predicts.
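The internal-versus-external contrast can be sketched numerically. The toy model below is a hypothetical illustration of the Page-Wootters mechanism, not a simulation of any experiment described here: a "history state" entangles a small clock register with a qubit that rotates a little at each tick. Conditioning on a clock reading gives an evolving system (the internal view), while ignoring the clock leaves a fixed mixture with no time dependence at all (the external view). The tick count and rotation angle are invented for the demo.

```python
import numpy as np

# Toy Page-Wootters "history state": a clock of T ticks entangled with a
# system qubit that rotates by theta at each tick. (Hypothetical demo
# parameters; this is not the INRIM photon experiment.)
T = 4
theta = np.pi / 8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # per-tick evolution

psi0 = np.array([1.0, 0.0])  # system starts in |0>

# |Psi> = (1/sqrt(T)) * sum_t |t>_clock (x) U^t |psi0>_system
history = np.zeros((T, 2))
state = psi0.copy()
for t in range(T):
    history[t] = state
    state = U @ state
history /= np.sqrt(T)

# Internal observer: condition on the clock reading t -> the system evolves.
for t in range(T):
    cond = history[t] * np.sqrt(T)   # renormalised conditional state
    print(t, abs(cond @ psi0) ** 2)  # overlap with the initial state shifts

# External observer: ignore the clock (trace it out) -> a fixed mixed
# state, with no time dependence left anywhere.
rho_sys = history.T @ history  # reduced density matrix of the system
print(np.round(rho_sys, 3))
```

The conditional overlaps fall tick by tick, while the traced-out density matrix is a single fixed object: the "time" lives entirely in the correlations with the clock.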

Of course, without experimental verification, Page and Wootters’ ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.

Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of the Page-Wootters idea. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.

The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.

The experimental details are straightforward. Each entangled photon has a polarisation that can be changed by passing it through a birefringent plate. In the first set-up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.

In the second set-up, the photons again both pass through the birefringent plates, which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.

In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.

“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.

That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.

What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That’s why Moreva and co’s work is significant. It places an abstract and exotic idea on a firm experimental footing for the first time.

Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.

The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons; it’s quite another to show how it emerges for larger things such as humans and train timetables.

And therein lies another challenge.

A closet of hidden phenomena

Humankind is racing toward the day when the world’s fastest computers are quantum mechanical. These devices owe their superiority to a phenomenon scientists don’t fully understand. Enter John Stewart Bell, to make matters worse.

Science has rarely been counter-intuitive to our understanding of reality, and its elegant rationalism at every step of the way has been reassuring. This is why Bell’s theorem has been one of the strangest concepts of reality scientists have come across: it is hardly intuitive, hardly rational, and hardly reassuring.


To someone interested in the bigger picture, the theorem is the line before which quantum mechanics ends and after which classical mechanics begins. It’s the line in the sand between the Max Planck and the Albert Einstein weltanschauungen.

Einstein, and many others before him, worked with gravity, finding a way to explain the macrocosm and its large-scale dance of birth and destruction. Planck, and many others after him, have helped describe the world of the atom and its innards using extremely small packets of energy called particles, swimming around in a pool of exotic forces.

At the nexus of a crisis

Over time, however, as physicists studied the work of both men and of others, it started to become clear that the two fields were mutually exclusive, never coming together to apply to the same idea. At this tenuous nexus, the Northern Irish physicist John Stewart Bell cleared his throat.

Bell’s theorem states, in simple terms, that for quantum mechanics to be a complete theory – applicable everywhere and always – either locality or realism must be untrue. Locality is the idea that instantaneous or superluminal communication is impossible. Realism is the idea that even if an object cannot be detected at some times, its existence cannot be disputed – like the moon in the morning.
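The trade-off can be made concrete in the CHSH form of Bell's inequality. The sketch below uses standard textbook ingredients, not anything from the experiments discussed here: exhaustively enumerating every deterministic local-realistic strategy yields the classical bound of 2, while the quantum prediction for a pair of maximally entangled particles, E(a, b) = -cos(a - b), exceeds it.

```python
import itertools
import numpy as np

# CHSH form of Bell's inequality: local realism bounds |S| <= 2, while
# quantum mechanics predicts up to 2*sqrt(2) for entangled particles.

# Local realism: outcomes A(a), A(a'), B(b), B(b') are all fixed at +/-1
# before any measurement. Enumerate every deterministic strategy.
best_local = max(
    abs(Aa * Bb - Aa * Bb2 + Aa2 * Bb + Aa2 * Bb2)
    for Aa, Aa2, Bb, Bb2 in itertools.product([1, -1], repeat=4)
)
print(best_local)  # 2 -- the classical bound

# Quantum prediction for the singlet state: E(a, b) = -cos(a - b),
# evaluated at the standard optimal angle choices.
E = lambda a, b: -np.cos(a - b)
a, a2, b, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # about 2.828 -- violates the local bound
```

No assignment of pre-existing values gets past 2; the entangled prediction does, which is exactly the tension between locality-plus-realism and quantum mechanics.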

The paradox is obvious. Classical mechanics is applicable everywhere, even with subatomic particles that are billionths of nanometres across. That it appears not to be is only because its dominant player, the gravitational force, is overshadowed by other, stronger forces. Quantum mechanics, on the other hand, is not so straightforward with its offering. It could be applied in the macroscopic world – but the theory has trouble dealing with gravity and the strong nuclear force, both of which have something to do with mass.

This means if quantum mechanics is to have a smooth transition at some scale into a classical reality… it can’t. At that scale, one of locality or realism must snap back to life. This is why confronting the idea that one of them isn’t true is unsettling. They are both fundamental hypotheses of physics.

The newcomer

A few days ago, I found a paper on arXiv titled Violation of Bell’s inequality in fluid mechanics (May 28, 2013). Its abstract stated that “… a classical fluid mechanical system can violate Bell’s inequality because the fluid motion is correlated over very large distances”. Given that Bell stands between Planck’s individuated notion of quantum mechanics and Einstein’s waltz-like continuum of the cosmos, it was intriguing to see scientists attempting to describe a quantum mechanical phenomenon in a classical system.

The correlation that the paper’s authors talk about implies that fluid flow in one region of space-time is somehow correlated with fluid flow in another region of space-time. This is a violation of locality. However, fluid mechanics has been, and still is, a purely classical subject: its behaviour can be traced to Newton’s ideas from the 17th century. This means all flow events are, rather have to be, decidedly real and local.

To make their point, the authors use the mathematical equations modelling fluid flow, conceived by Leonhard Euler in the 18th century, and show how they could explain vortices – regions of a fluid where the flow is mostly a spinning motion about an axis.


Assigning fictitious particles to different parts of the equation, the scientists demonstrate how the particles in one region of flow could continuously and instantaneously affect particles in another region of fluid flow. In quantum mechanics, this phenomenon is called entanglement. It has no classical counterpart.

Coincidental correlation

However, there is nothing quantum about fluid flow, much less about Euler’s equations. Then again, if the paper is right, would that mean flowing fluids are a quantum mechanical system? Occam’s razor comes to the rescue: because fluid flow is classical but still shows signs of nonlocality, there is a possibility that purely local interactions could explain quantum mechanical phenomena.

Think about it. A purely classical system also shows signs of quantum mechanical behaviour. This means that some phenomena in the fluid could be explained by both classical and quantum mechanical models, i.e. the two models correspond.

There is a stumbling block, however. Occam’s razor only provides evidence of a classical solution for nonlocality, not a direct correspondence between micro- and macroscopic physics. In other words, it could easily be a post hoc ergo propter hoc inference: Because nonlocality came after application of local mathematics, local mathematics must have caused nonlocality.

“Not quite,” said Robert Brady, one of the authors on the paper. “Bell’s hypothesis is often said to be about ‘locality’, and so it is common to say that quantum mechanical systems are ‘nonlocal’ because Bell’s hypothesis does not apply to them. If you choose this description, then fluid mechanics is also ‘non-local’, since Bell’s hypothesis does not apply to them either.”

“However, in fluid mechanics it is usual to look at this from a different angle, since Bell’s hypothesis would not be thought reasonable in that field.”

Brady’s clarification brings up an important point: even though the lines don’t exactly blur between the two domains, knowing where to apply which model makes a large difference. If you misstep, classical fluid flow could pass for quantum fluid flow simply because it displays some pseudo-effects.

In fact, experiments to test Bell’s hypothesis have been riddled with such small yet nagging stumbling blocks. Even if a suitable domain of applicability has been chosen, an efficient experiment has to be designed that fully exploits the domain’s properties to arrive at a conclusion – and this has proved very difficult. Inspired by the purely theoretical EPR paradox put forth in 1935, Bell stated his theorem in 1964. It is now 2013 and no experiment has successfully been able to decide if Bell was right or wrong.

Three musketeers

The three most prevalent problems such experiments face are called the failure of rotational invariance, the no-communication loophole, and the fair sampling assumption.

In any Bell experiment, two particles are allowed to interact in some way – such as being born from the same source – and are then separated across a large distance. Scientists then measure the particles’ properties using detectors. This happens again and again until any patterns among paired particles can be found or ruled out.

Whatever property the scientists are going to measure, the different values that the property can take must be equally likely. For example, if I have a bag filled with 200 blue balls, 300 red balls and 100 yellow balls, I shouldn’t think something quantum mechanical was at play if one in every two balls pulled out was red. That’s just probability at work. And when such probability can’t be completely excluded from the results, it’s called a failure of rotational invariance.
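The bag-of-balls arithmetic checks out directly, using the counts from the example above:

```python
# With 300 red balls out of 600, pulling red one time in two is just
# probability at work, not anything quantum mechanical.
counts = {"blue": 200, "red": 300, "yellow": 100}
p_red = counts["red"] / sum(counts.values())
print(p_red)  # 0.5
```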

For the experiment to measure only the particles’ properties, the detectors must not be allowed to communicate with each other. If they were allowed to communicate, scientists wouldn’t know if a detection arose due to the particles or due to glitches in the detectors. Unfortunately, in a perfect setup, the detectors wouldn’t communicate at all and be decidedly local – putting them in no position to reveal any violation of locality! This problem is called the no-communication loophole.

The final problem – fair sampling – is a statistical issue. If an experiment involves 1,000 pairs of particles, and only 800 pairs have been picked up by the detectors and studied, the experiment cannot be counted as successful. Why? Because the other 200 pairs could have changed the outcome had they been picked up. There is a chance. Thus, the detectors would have to be 100 per cent efficient in a successful experiment.
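The worry can be illustrated with the numbers from the example above, plus one invented assumption: suppose the 200 missed pairs all happened to be anti-correlated ones. The detected subset then shows a correlation the full sample does not have.

```python
# 1,000 pairs, recording the product of the two outcomes (+1 or -1).
pairs = [+1] * 500 + [-1] * 500
true_corr = sum(pairs) / len(pairs)
print(true_corr)  # 0.0 -- no correlation in the full sample

# Detectors miss 200 of the anti-correlated pairs (800 detected).
detected = [+1] * 500 + [-1] * 300
measured_corr = sum(detected) / len(detected)
print(measured_corr)  # 0.25 -- a spurious correlation appears
```

A biased subset of 80 per cent is enough to manufacture a correlation out of nothing, which is why unfair sampling can mimic, or mask, a Bell violation.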

In fact, the example was a gross exaggeration: detectors are only 5-30 per cent efficient.

One (step) at a time

Resolution of the no-communication problem came in 1998 from scientists in Austria, who also closed the rotational-invariance loophole. The fair-sampling assumption was addressed by a team of scientists from the USA in 2001, one of whom was David Wineland, the 2012 physics Nobel laureate. However, they used only two ions to make the measurements. A more thorough experiment’s results were announced just last month.

Researchers from the Institute for Quantum Optics and Quantum Communication, Austria, had used detectors called transition-edge sensors that could pick up individual photons with 98 per cent efficiency. These sensors were developed by the National Institute of Standards and Technology, Maryland, USA. In keeping with tradition, the experiment admitted the no-communication loophole.

Unfortunately, for an experiment to be a successful Bell-experiment, it must get rid of all three problems at the same time. This hasn’t been possible to date, which is why a conclusive Bell’s test, and the key to quantum mechanics’ closet of hidden phenomena, eludes us. It is as if nature uses one loophole or the other to deceive the experimenters.*

The silver lining is that the photon has become the first particle for which all three loopholes have been closed, albeit in different experiments. We’re probably getting there, loopholes relenting. The reward, of course, could be the greatest of all: We will finally know if nature is described by quantum mechanics, with its deceptive trove of exotic phenomena, or by classical mechanics and general relativity, with its reassuring embrace of locality and realism.

*In 1974, John Clauser and Michael Horne found a curious workaround for the fair-sampling problem that they realised could be used to look for new physics. They called this the no-enhancement problem. They had calculated that if some method was found to amplify the photons’ signals in the experiment and circumvent the low detection efficiency, the method would also become a part of the result. Therefore, if the result came out that quantum mechanics was nonlocal, then the method would be a nonlocal entity. So, using different methods, scientists distinguish between previously unknown local and nonlocal processes.

Source: The Hindu