Physicists Aim to Classify All Possible Phases of Matter

A complete classification could lead to a wealth of new materials and technologies. But some exotic phases continue to resist understanding.

In the last three decades, condensed matter physicists have discovered a wonderland of exotic new phases of matter: emergent, collective states of interacting particles that are nothing like the solids, liquids and gases of common experience.

The phases, some realized in the lab and others identified as theoretical possibilities, arise when matter is chilled almost to absolute-zero temperature, hundreds of degrees below the point at which water freezes into ice. In these frigid conditions, particles can interact in ways that cause them to shed all traces of their original identities. Experiments in the 1980s revealed that in some situations electrons split en masse into fractions of particles that make braidable trails through space-time; in other cases, they collectively whip up massless versions of themselves. A lattice of spinning atoms becomes a fluid of swirling loops or branching strings; crystals that began as insulators start conducting electricity over their surfaces. One phase that shocked experts when recognized as a mathematical possibility in 2011 features strange, particle-like “fractons” that lock together in fractal patterns.

Now, research groups at Microsoft and elsewhere are racing to encode quantum information in the braids and loops of some of these phases for the purpose of developing a quantum computer. Meanwhile, condensed matter theorists have recently made major strides in understanding the pattern behind the different collective behaviors that can arise, with the goal of enumerating and classifying all possible phases of matter. If a complete classification is achieved, it would not only account for all phases seen in nature so far, but also potentially point the way toward new materials and technologies.

Led by dozens of top theorists, with input from mathematicians, researchers have already classified a huge swath of phases that can arise in one or two spatial dimensions by relating them to topology: the math that describes invariant properties of shapes like the sphere and the torus. They’ve also begun to explore the wilderness of phases that can arise near absolute zero in 3-D matter.

“It’s not a particular law of physics” that these scientists seek, said Michael Zaletel, a condensed matter theorist at Princeton University. “It’s the space of all possibilities, which is a more beautiful or deeper idea in some ways.” Perhaps surprisingly, Zaletel said, the space of all consistent phases is itself a mathematical object that “has this incredibly rich structure that we think ends up, in 1-D and 2-D, in one-to-one correspondence with these beautiful topological structures.”

In the landscape of phases, there is “an economy of options,” said Ashvin Vishwanath of Harvard University. “It all seems comprehensible” — a stroke of luck that mystifies him. Enumerating phases of matter could have been “like stamp collecting,” Vishwanath said, “each a little different, and with no connection between the different stamps.” Instead, the classification of phases is “more like a periodic table. There are many elements, but they fall into categories and we can understand the categories.”

While classifying emergent particle behaviors might not seem fundamental, some experts, including Xiao-Gang Wen of the Massachusetts Institute of Technology, say the new rules of emergent phases show how the elementary particles themselves might arise from an underlying network of entangled bits of quantum information, which Wen calls the “qubit ocean.” For example, a phase called a “string-net liquid” that can emerge in a three-dimensional system of qubits has excitations that look like all the known elementary particles. “A real electron and a real photon are maybe just fluctuations of the string-net,” Wen said.

A New Topological Order

Before these zero-temperature phases cropped up, physicists thought they had phases all figured out. By the 1950s, they could explain what happens when, for example, water freezes into ice, by describing it as the breaking of a symmetry: Whereas liquid water has rotational symmetry at the atomic scale (it looks the same in every direction), the H2O molecules in ice are locked in crystalline rows and columns.

Things changed in 1982 with the discovery of phases called fractional quantum Hall states in an ultracold, two-dimensional gas of electrons. These strange states of matter feature emergent particles with fractions of an electron’s charge that take fractions of steps in a one-way march around the perimeter of the system. “There was no way to use different symmetry to distinguish those phases,” Wen said.

A new paradigm was needed. In 1989, Wen imagined phases like the fractional quantum Hall states arising not on a plane, but on different topological manifolds — connected spaces such as the surface of a sphere or a torus. Topology concerns global, invariant properties of such spaces that can’t be changed by local deformations. Famously, to a topologist, you can turn a doughnut into a coffee cup by simply deforming its surface, since both surfaces have one hole and are therefore equivalent topologically. You can stretch and squeeze all you like, but even the most malleable doughnut will refuse to become a pretzel.

Wen found that new properties of the zero-temperature phases were revealed in the different topological settings, and he coined the term “topological order” to describe the essence of these phases. Other theorists were also uncovering links to topology. With the discovery of many more exotic phases — so many that researchers say they can barely keep up — it became clear that topology, together with symmetry, offers a good organizing schema.

The topological phases only show up near absolute zero, because only at such low temperatures can systems of particles settle into their lowest-energy quantum “ground state.” In the ground state, the delicate interactions that correlate particles’ identities — effects that are destroyed at higher temperatures — link up particles in global patterns of quantum entanglement. Instead of having individual mathematical descriptions, particles become components of a more complicated function that describes all of them at once, often with entirely new particles emerging as the excitations of the global phase. The long-range entanglement patterns that arise are topological, or impervious to local changes, like the number of holes in a manifold.

Consider the simplest topological phase in a system — called a “quantum spin liquid” — that consists of a 2-D lattice of “spins,” or particles that can point up, down, or some probability of each simultaneously. At zero temperature, the spin liquid develops strings of spins that all point down, and these strings form closed loops. As the directions of spins fluctuate quantum-mechanically, the pattern of loops throughout the material also fluctuates: Loops of down spins merge into bigger loops and divide into smaller loops. In this quantum-spin-liquid phase, the system’s ground state is the quantum superposition of all possible loop patterns.

To understand this entanglement pattern as a type of topological order, imagine, as Wen did, that the quantum spin liquid is spilling around the surface of a torus, with some loops winding around the torus’s hole. Because of these hole windings, instead of having a single ground state associated with the superposition of all loop patterns, the spin liquid will now exist in one of four distinct ground states, tied to four different superpositions of loop patterns. One state consists of all possible loop patterns with an even number of loops winding around the torus’s hole and an even number winding through the hole. Another state has an even number of loops around the hole and an odd number through the hole; the third and fourth ground states correspond to odd and even, and odd and odd, numbers of hole windings, respectively.

Which of these ground states the system is in stays fixed, even as the loop pattern fluctuates locally. If, for instance, the spin liquid has an even number of loops winding around the torus’s hole, two of these loops might touch and combine, suddenly becoming a loop that doesn’t wrap around the hole at all. Long-way loops decrease by two, but the number remains even. The system’s ground state is a topologically invariant property that withstands local changes.
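The invariance described above can be sketched in a toy simulation (an illustration, not a model of real spin dynamics): if local fluctuations can only create, merge, or split loops, changing each winding number by 0 or 2 at a time, then the winding parities, and hence the ground-state sector, never change.

```python
import random

# Toy model of a spin liquid on a torus: track how many loops wind
# around the torus's hole ("around") and through it ("through").
# Local moves change each winding number by 0 or +/-2, because loops
# can only appear, merge, or split in pairs.
def bump(n, delta):
    return n + delta if n + delta >= 0 else n  # can't have negative loops

def local_move(state):
    around, through = state
    return (bump(around, random.choice([-2, 0, 2])),
            bump(through, random.choice([-2, 0, 2])))

def sector(state):
    # The ground-state sector is the pair of winding parities.
    return (state[0] % 2, state[1] % 2)

state = (4, 3)                 # even around the hole, odd through it
initial_sector = sector(state)
for _ in range(10_000):        # many random local fluctuations
    state = local_move(state)
    assert sector(state) == initial_sector  # the sector never changes

print(initial_sector)          # -> (0, 1)
```

Storing information in the sector rather than in any individual spin is what makes such states candidates for robust quantum memory.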

Future quantum computers could take advantage of this invariant quality. Having four topological ground states that aren’t affected by local deformations or environmental error “gives you a way to store quantum information, because your bit could be what ground state it’s in,” explained Zaletel, who has studied the topological properties of spin liquids and other quantum phases. Systems like spin liquids don’t really need to wrap around a torus to have topologically protected ground states. A favorite playground of researchers is the toric code, a phase theoretically constructed by the condensed matter theorist Alexei Kitaev of the California Institute of Technology in 1997 and demonstrated in experiments over the past decade. The toric code can live on a plane and still maintain the multiple ground states of a torus. (Loops of spins are essentially able to move off the edge of the system and re-enter on the opposite side, allowing them to wind around the system like loops around a torus’s hole.) “We know how to translate between the ground-state properties on a torus and what the behavior of the particles would be,” Zaletel said.

Spin liquids can also enter other phases, in which spins, instead of forming closed loops, sprout branching networks of strings. This is the string-net liquid phase that, according to Wen, “can produce the Standard Model” of particle physics starting from a 3-D qubit ocean.

The Universe of Phases

Research by several groups in 2009 and 2010 completed the classification of “gapped” phases of matter in one dimension, such as in chains of particles. A gapped phase is one with a ground state: a lowest-energy configuration sufficiently removed or “gapped” from higher-energy states that the system stably settles into it. Only gapped quantum phases have well-defined excitations in the form of particles. Gapless phases are like swirling matter miasmas or quantum soups and remain largely unknown territory in the landscape of phases.

For a 1-D chain of bosons — particles like photons that have integer values of quantum spin, which means they return to their initial quantum states after swapping positions — there is only one gapped topological phase. In this phase, first studied by the Princeton theorist Duncan Haldane, who, along with David Thouless and J. Michael Kosterlitz, won the 2016 Nobel Prize for decades of work on topological phases, the chain of integer spins gives rise to effective spin-1/2 particles at its two ends. Two gapped topological phases exist for chains of fermions — particles like electrons and quarks that have half-integer values of spin, meaning their states become negative when they switch positions. The topological order in all these 1-D chains stems not from long-range quantum entanglement, but from local symmetries acting between neighboring particles. Called “symmetry-protected topological phases,” they correspond to “cocycles of the cohomology group,” mathematical objects related to invariants like the number of holes in a manifold.

Two-dimensional phases are more plentiful and more interesting. They can have what some experts consider “true” topological order: the kind associated with long-range patterns of quantum entanglement, like the fluctuating loop patterns in a spin liquid. In the last few years, researchers have shown that these entanglement patterns correspond to topological structures called tensor categories, which enumerate the different ways that objects can possibly fuse and braid around one another. “The tensor categories give you a way [to describe] particles that fuse and braid in a consistent way,” said David Pérez-García of Complutense University of Madrid.

Researchers like Pérez-García are working to mathematically prove that the known classes of 2-D gapped topological phases are complete. He helped close the 1-D case in 2010, at least under the widely-held assumption that these phases are always well-approximated by quantum field theories — mathematical descriptions that treat the particles’ environments as smooth. “These tensor categories are conjectured to cover all 2-D phases, but there is no mathematical proof yet,” Pérez-García said. “Of course, it would be much more interesting if one can prove that this is not all. Exotic things are always interesting because they have new physics, and they’re maybe useful.”

Gapless quantum phases represent another kingdom of possibilities to explore, but these impenetrable fogs of matter resist most theoretical methods. “The language of particles is not useful, and there are supreme challenges that we are starting to confront,” said Senthil Todadri, a condensed matter theorist at MIT. Gapless phases present the main barrier in the quest to understand high-temperature superconductivity, for instance. And they hinder quantum gravity researchers in the “it from qubit” movement, who believe that not only elementary particles, but also space-time and gravity, arise from patterns of entanglement in some kind of underlying qubit ocean. “In it from qubit, we spend much of our time on gapless states because this is where one gets gravity, at least in our current understanding,” said Brian Swingle, a theoretical physicist at the University of Maryland. Some researchers try to use mathematical dualities to convert the quantum-soup picture into an equivalent particle description in one higher dimension. “It should be viewed in the spirit of exploring,” Todadri said.

Even more enthusiastic exploration is happening in 3-D. What’s already clear is that, when spins and other particles spill from their chains and flatlands and fill the full three spatial dimensions of reality, unimaginably strange patterns of quantum entanglement can emerge. “In 3-D, there are things that escape, so far, this tensor-category picture,” said Pérez-García. “The excitations are very wild.”

The Haah Code

The very wildest of the 3-D phases appeared seven years ago. A talented Caltech graduate student named Jeongwan Haah discovered the phase in a computer search while looking for what’s known as the “dream code”: a quantum ground state so robust that it can be used to securely store quantum memory, even at room temperature.

For this, Haah had to turn to 3-D matter. In 2-D topological phases like the toric code, a significant source of error is “stringlike operators”: perturbations to the system that cause new strings of spins to accidentally form. These strings will sometimes wind new loops around the torus’s hole, bumping the number of windings from even to odd or vice versa and converting the toric code to one of its three other quantum ground states. Because strings grow uncontrollably and wrap around things, experts say there cannot be good quantum memories in 2-D.

Jeongwan Haah, a condensed matter theorist now working at Microsoft Research in Redmond, Washington, discovered a bizarre 3-D phase of matter with fractal properties.

Haah wrote an algorithm to search for 3-D phases that avoid the usual kinds of stringlike operators. The computer coughed up 17 exact solutions that he then studied by hand. Four of the phases were confirmed to be free of stringlike operators; the one with the highest symmetry was what’s now known as the Haah code.

As well as being potentially useful for storing quantum memory, the Haah code was also profoundly weird. Xie Chen, a condensed matter theorist at Caltech, recalled hearing the news as a graduate student in 2011, within a month or two of Haah’s disorienting discovery. “Everyone was totally shocked,” she said. “We didn’t know anything we could do about it. And now, that’s been the situation for many years.”

The Haah code is relatively simple on paper: It’s the solution of a two-term energy formula, describing spins that interact with their eight nearest neighbors in a cubic lattice. But the resulting phase “strains our imaginations,” Todadri said.

The code features particle-like entities called fractons that, unlike the loopy patterns in, say, a quantum spin liquid, are nonliquid and locked in place; the fractons can only hop between positions in the lattice if those positions are operated upon in a fractal pattern. That is, you have to inject energy into the system at each corner of, say, a tetrahedron connecting four fractons in order to make them switch positions, but when you zoom in, you see that what you treated as a point-like corner was actually the four corners of a smaller tetrahedron, and you have to inject energy into the corners of that one as well. At a finer scale, you see an even smaller tetrahedron, and so on, all the way down to the finest scale of the lattice. This fractal behavior means that the Haah code never forgets the underlying lattice it comes from, and it can never be approximated by a smoothed-out description of the lattice, as in a quantum field theory. What’s more, the number of ground states in the Haah code grows with the size of the underlying lattice — a decidedly non-topological property. (Stretch a torus, and it’s still a torus.)
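The self-similar pattern of corners can be visualized with a much simpler 2-D stand-in (the Haah code itself is 3-D and far more intricate): marking the sites where an entry of Pascal's triangle is odd produces a Sierpinski triangle, the kind of fractal support that such an operator must touch at every scale.

```python
# Simplified 2-D stand-in for a fractal operator's support (the Haah
# code itself lives in 3-D): sites where Pascal's triangle is odd
# form a Sierpinski triangle, self-similar down to the lattice scale.
def fractal_support(rows):
    grid, row = [], [1]
    for _ in range(rows):
        grid.append(row)
        # next row of Pascal's triangle, kept mod 2
        row = [1] + [(a + b) % 2 for a, b in zip(row, row[1:])] + [1]
    return grid

for row in fractal_support(16):
    print("".join("#" if x else "." for x in row))
```

In the first 2^n rows there are exactly 3^n marked sites, so the pattern never smooths out no matter how far you zoom; this is the sense in which such a phase “never forgets” the lattice it comes from.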

The quantum state of the Haah code is extraordinarily secure, since a “fractal operator” that perfectly hits all the marks is unlikely to come along at random. Experts say a realizable version of the code would be of great technological interest.

Haah’s phase has also generated a surge of theoretical speculation. Haah helped matters along in 2015 when he and two collaborators at MIT discovered many examples of a class of phases now known as “fracton models” that are simpler cousins of the Haah code. (The first model in this family was introduced by Claudio Chamon of Boston University in 2005.) Chen and others have since been studying the topology of these fracton systems, some of which permit particles to move along lines or sheets within a 3-D volume and might aid conceptual understanding or be easier to realize experimentally. “It’s opening the door to many more exotic things,” Chen said of the Haah code. “It’s an indication about how little we know about 3-D and higher dimensions. And because we don’t yet have a systematic picture of what is going on, there might be a lot of things lying out there waiting to be explored.”

No one knows yet where the Haah code and its cousins belong in the landscape of possible phases, or how much bigger this space of possibilities might be. According to Todadri, the community has made progress in classifying the simplest gapped 3-D phases, but more exploration is needed in 3-D before a program of complete classification can begin there. What’s clear, he said, is that “when the classification of gapped phases of matter is taken up in 3-D, it will have to confront these weird possibilities that Haah first discovered.”

Many researchers think new classifying concepts, and even whole new frameworks, might be necessary to capture the Haah code’s fractal nature and reveal the full scope of possibilities for 3-D quantum matter. Wen said, “You need a new type of theory, new thinking.” Perhaps, he said, we need a new picture of nonliquid patterns of long-range entanglement. “We have some vague ideas but don’t have a very systematic mathematics to do them,” he said. “We have some feeling what it looks like. The detailed systematics are still lacking. But that’s exciting.”

New Device Lets Scientists Explore The Weird Physics Near Absolute Zero

Strange things happen at the limits of science.

Scientists have long been intrigued by the physics near absolute zero – 0 kelvin, or -273.15°C, the temperature at which particles reach their minimum possible motion – ever since this limit was theorised.

Yet reaching absolute zero has been called impossible: as you continue to remove heat from a system to cool it, the work needed to extract each additional unit of heat grows, diverging to infinity as the temperature approaches absolute zero.
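A rough sense of that divergence comes from the ideal (Carnot) refrigerator, which needs at least W = Q * (T_hot / T_cold - 1) of work to pump heat Q from a cold system into warmer surroundings; the numbers below are purely illustrative.

```python
# Minimum (Carnot) work to pump Q joules of heat out of a system at
# T_cold into surroundings at T_hot: W = Q * (T_hot / T_cold - 1).
# The cost grows without bound as T_cold approaches absolute zero.
T_HOT = 293.0   # room-temperature surroundings, in kelvin (illustrative)
Q = 1.0         # one joule of heat to remove

def carnot_work(t_cold):
    return Q * (T_HOT / t_cold - 1.0)

for t_cold in [77.0, 1.0, 1e-3, 1e-6, 1e-9]:
    print(f"T_cold = {t_cold:g} K  ->  W >= {carnot_work(t_cold):.3g} J")
```

Each thousandfold drop in the target temperature multiplies the minimum work by roughly a thousand, which is why the final approach to 0 K is the hardest part.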

This doesn’t discourage scientists from trying, however. A group at the University of Basel recently developed a device that gets us closer than ever before to the coldest of cold, and it could allow them to explore the strange physics thought to occur near absolute zero.

The team developed a nanoelectronic chip that can cool to a record 2.8 millikelvin. The device uses magnetic cooling, via an applied magnetic field, to chill the chip’s electrical connections down to 150 microkelvin.

Using a purpose-built magnetic field system, the team also cooled a thermometer to measure the temperature, and kept the chip cold for seven hours. This extended period allowed the team to explore the ultra-cooled state.

While the chip itself is impressive, it’s not even the most exciting part of this experiment.

This chip opens up enormous potential to better understand what happens to physics near absolute zero, and the researchers hope to improve the device and their experiment to eventually reach a bone-chilling 1 millikelvin.

Within the past year, the observable limit (also known as the quantum backaction limit) of how low you can theoretically cool an object has been experimentally challenged.

Researchers were able to cool an object to less than one-fifth of one quantum, pushing it more than two decibels below the quantum backaction limit.

But scientists have since proved the third law of thermodynamics, showing that it is mathematically impossible to cool a system to absolute zero.

The strange physical space on the cusp of reaching absolute zero is relatively unexplored, as it has been so difficult to reach experimentally.

With our newfound ability to research this uber-cold state, there is still much to learn about physics at these temperatures.

Additionally, understanding absolute zero could potentially improve modern electronics.

The performance of transistors is greatly affected by temperature. Traditional transistors run into a whole host of issues when they overheat, which happens commonly.

The transistors used in computers, smart devices, and other commercially available electronics have been shown to be much more efficient at extremely low temperatures, like those barely above absolute zero.

The use of extremely low temperatures could improve not only household electronics, but also support the technologies we use to explore the far reaches of the cosmos. Infrared cameras built for space imaging need to operate at the lowest possible temperatures, which allow them to operate with maximum sensitivity.

These temperatures could also be integral in advancing medical imaging technologies.

After 100 years of debate, hitting absolute zero has been declared mathematically impossible.

The third law of thermodynamics finally gets its proof.

After more than 100 years of debate featuring the likes of Einstein himself, physicists have finally offered up mathematical proof of the third law of thermodynamics, which states that a temperature of absolute zero cannot be physically achieved because it’s impossible for the entropy (or disorder) of a system to hit zero.

While scientists have long suspected that there’s an intrinsic ‘speed limit’ on the act of cooling in our Universe that prevents us from ever achieving absolute zero (0 Kelvin, -273.15°C, or -459.67°F), this is the strongest evidence yet that our current laws of physics hold true when it comes to the lowest possible temperature.

“We show that you can’t actually cool a system to absolute zero with a finite amount of resources and we went a step further,” one of the team, Lluis Masanes from University College London, told IFLScience.

“We then conclude that it is impossible to cool a system to absolute zero in a finite time, and we established a relation between time and the lowest possible temperature. It’s the speed of cooling.”

What Masanes is referring to here are two fundamental assumptions that the third law of thermodynamics depends on for its validity.

The first is that in order to achieve absolute zero in a physical system, the system’s entropy has to also hit zero.

The second rule is known as the unattainability principle, which holds that absolute zero is physically unreachable: no process can reduce a system’s entropy to zero in a finite number of steps.

The first rule was proposed by German chemist Walther Nernst in 1906, and while it earned him a Nobel Prize in Chemistry, heavyweights like Albert Einstein and Max Planck weren’t convinced by his proof, and came up with their own versions of the cooling limit of the Universe.

This prompted Nernst to double down on his thinking and propose the second rule in 1912, declaring absolute zero to be physically impossible.

Together, these rules are now acknowledged as the third law of thermodynamics, and while this law appears to hold true, its foundations have always seemed a little rocky – when it comes to the laws of thermodynamics, the third one has been a bit of a black sheep.

“[B]ecause earlier arguments focused only on specific mechanisms or were crippled by questionable assumptions, some physicists have always remained unconvinced of its validity,” Leah Crane explains for New Scientist.

In order to test how robust the assumptions of the third law of thermodynamics actually are in both classical and quantum systems, Masanes and his colleague Jonathan Oppenheim decided to test if it is mathematically possible to reach absolute zero when restricted to finite time and resources.

Masanes compares this act of cooling to computation – we can watch a computer solve an algorithm and record how long it takes, and in the same way, we can calculate how long it takes for a system to be cooled to its theoretical limit by counting the steps required to remove its heat.

You can think of cooling as effectively ‘shovelling’ out the existing heat in a system and depositing it into the surrounding environment.

How much heat the system started with will determine how many steps it will take for you to shovel it all out, and the size of the ‘reservoir’ into which that heat is being deposited will also limit your cooling ability.
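A toy version of this shovelling picture (a cartoon, not the paper's actual bound): if each step can only remove a fixed fraction of the remaining heat, the temperature decays geometrically and never hits zero in any finite number of steps.

```python
# Cartoon of cooling as "shovelling" heat out of a system: each step
# removes a fixed fraction of what's left, so any finite number of
# steps leaves the temperature strictly above zero.
def cool(t_start, fraction, steps):
    t = t_start
    for _ in range(steps):
        t *= (1.0 - fraction)   # shovel out this fraction of the heat
    return t

final = cool(300.0, 0.5, 50)    # 50 halving steps from room temperature
print(final)                     # astronomically small, but not zero
assert final > 0.0               # absolute zero is never reached
```

Only with infinitely many steps (or an infinitely large reservoir to dump the heat into) would the limit of zero actually be attained, which is the intuition behind the unattainability principle.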

Using mathematical techniques derived from quantum information theory – something that Einstein had pushed for in his own formulations of the third law of thermodynamics – Masanes and Oppenheim found that you could only reach absolute zero if you had both infinite steps and an infinite reservoir.

And that’s not exactly something any of us are going to get our hands on any time soon.

This is something that physicists have long suspected, because the second law of thermodynamics states that heat will spontaneously move from a warmer system to a cooler system, so the object you’re trying to cool down will constantly be taking in heat from its surroundings.

And when there’s any amount of heat within an object, that means there’s thermal motion inside, which ensures some degree of entropy will always remain.

This explains why, no matter where you look, every single thing in the Universe is moving ever so slightly – nothing in existence is completely still according to the third law of thermodynamics.

The researchers say they “hope the present work puts the third law on a footing more in line with those of the other laws of thermodynamics”, while at the same time presenting the fastest theoretical rate at which we can actually cool something down.

In other words, they’ve used maths to quantify the steps of cooling, allowing researchers to define a speed limit for how cold a system can get in a finite amount of time.

And that’s important, because even if we can never reach absolute zero, we can get pretty damn close, as NASA demonstrated recently with its Cold Atom Laboratory, which can hit a mere billionth of a degree above absolute zero, or 100 million times colder than the depths of space.

At these kinds of temperatures, we’ll be able to see strange atomic behaviours that have never been witnessed before. And being able to remove as much heat as possible from a system is going to be crucial in the race to finally build a functional quantum computer.

And the best part is, while this study has taken absolute zero off the table for good, no one has even gotten close to reaching the temperatures or cooling speeds that it’s set as the physical limits – despite some impressive efforts of late.

“The work is important – the third law is one of the fundamental issues of contemporary physics,” Ronnie Kosloff at the Hebrew University of Jerusalem, Israel who was not involved in the study, told New Scientist.

“It relates thermodynamics, quantum mechanics, information theory – it’s a meeting point of many things.”

Source: Nature Communications.

It’s Finally Settled: Absolute Zero Is Impossible

Just how cold can it get? The answer may be more important than you think: scientists study absolute zero to figure out all the wacky stuff that happens to molecules when the chilly temperatures slow them way down. But up until recently, absolute zero has had a shadow of controversy surrounding it, one that two researchers decided to take head-on. Grab your scarves and coat for this one.

What All the Fuss is About

Absolute zero is the lowest temperature that is theoretically possible—0 Kelvin, or about -273.15 degrees Celsius. Entropy, on the other hand, is the measure of disorder in a system. In 1906, as described by New Scientist, the German chemist Walther Nernst put forward the principle that, as a system’s temperature approaches absolute zero, the system’s entropy goes to zero. In 1912, he added the unattainability principle, stating that absolute zero is actually impossible to reach. Taken together, the principles form the third law of thermodynamics. However, some physicists have not considered the third law a true law, and it has remained controversial for decades. But a new study from researchers at University College London may settle the matter once and for all.

The problem is this: at 0 Kelvin, a system has minimal motion—but not a lack of motion altogether. That’s because of the Heisenberg uncertainty principle, which states that we can’t know both the exact position and momentum of a particle at the same time. There may still be small fluctuations of movement. So, how could a system’s entropy go down to zero?

The short answer: it can’t.
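That residual motion can be made concrete with the quantum harmonic oscillator, whose ground-state (“zero-point”) energy E0 = hbar * omega / 2 stays nonzero even at 0 Kelvin; the vibration frequency below is just an illustrative value.

```python
# Zero-point energy of a quantum harmonic oscillator, E0 = hbar*omega/2:
# nonzero even at 0 Kelvin, as the uncertainty principle demands.
import math

HBAR = 1.054571817e-34           # reduced Planck constant, in J*s

def zero_point_energy(omega):
    return 0.5 * HBAR * omega

omega = 2 * math.pi * 1e13       # ~10 THz lattice vibration (illustrative)
print(zero_point_energy(omega))  # tiny, but strictly positive
```

Because this energy can never be removed, some motion, and therefore some entropy, always remains.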

Solving the Riddle

A new study from researchers Jonathan Oppenheim and Lluís Masanes sheds light on this riddle, by showing that reaching 0 Kelvin is physically impossible. Think of it this way: as described by Science Alert, the cooling of a system is essentially the “shoveling” out of heat from that system into the surrounding environment. But cooling has its limits, determined by how many steps it takes to shovel the heat out, and the size of the surrounding environment. You can only reach absolute zero, then, if you have both infinite steps and an infinite surrounding environment.

Dr. Lluís Masanes told IFL Science that their study shows “it is impossible to cool a system to absolute zero in a finite time” and that they “established a relation between time and the lowest possible temperature. It’s the speed of cooling.”

The researchers used quantum mechanics to arrive at their conclusion, viewing the cooling process as a computation, according to IFL Science. A longstanding debate about the third law of thermodynamics has finally been put to bed.


Coldest spot in the known universe: NASA to study near-absolute-zero matter at the ISS.

The International Space Station (Credit: STS-122 Shuttle Crew, NASA)
NASA has revealed its plans to create the coldest spot in the known universe on board the International Space Station in 2016. The researchers are preparing to study matter at temperatures near absolute zero, opening a window onto the world of quantum mechanics.

The US space agency has announced that its researchers are currently working on the Cold Atom Laboratory, “the coolest spot in the universe”, which will be ready for installation inside the International Space Station by December 2015.

There are several reasons underlying the scientific drive to explore characteristics and qualities of matter in conditions that are difficult to replicate on Earth. Space’s low temperatures, unattainable in terrestrial laboratories, reveal the wave nature of atoms, as well as possibly new phenomena. The absence of gravity additionally allows such experiments to last longer – up to 20 seconds.

“We’re going to study matter at temperatures far colder than are found naturally,” said the project’s head scientist Rob Thompson of Jet Propulsion Laboratory (JPL). “We aim to push effective temperatures down to 100 pico-Kelvin.”

One hundred pico-Kelvin is remarkable in that it is a mere one ten-billionth of a degree above absolute zero (0 K, or −273.15°C) – a point on an imaginary thermometer where all thermal activity of atoms theoretically halts. When temperatures are so low, our traditional ideas of atomic behavior cease to apply. Matter is no longer solid, liquid or gas – its atoms instead form quantum states of matter.
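A quick numeric sanity check on the figures quoted above (the conversion function is a trivial helper, not anything from NASA’s project):

```python
# 100 picokelvin is 100e-12 K, i.e. one ten-billionth of a kelvin
# above absolute zero; absolute zero itself is -273.15 degrees Celsius.

def kelvin_to_celsius(t_k: float) -> float:
    """Convert a temperature from kelvin to degrees Celsius."""
    return t_k - 273.15

print(kelvin_to_celsius(0.0))          # -273.15: absolute zero in Celsius
print(100e-12)                         # 1e-10 K: one hundred picokelvin
print(100e-12 == 1 / 10_000_000_000)   # True: one ten-billionth of a kelvin
```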

Quantum mechanics is a branch of physics that describes intricate and bizarre light and matter rules on an atomic scale. It is a wonderland where nothing is certain, where objects behave both as particles and as waves, and where matter can be in two places at once. “We’re entering the unknown,” said Thompson.

The Cold Atom Lab mission poster

With the help of the Cold Atom Lab, the researchers will be able to conduct many exciting experiments.

“We’ll begin by studying Bose-Einstein Condensates,” he said. “The Cold Atom Lab will allow us to study these objects at perhaps the lowest temperatures ever.”

The condensates, named after Satyendra Nath Bose and Albert Einstein, who predicted them at the beginning of the 20th century, were in fact first created only in 1995. In 2001, Eric Cornell and Carl Wieman shared the Nobel Prize with Wolfgang Ketterle for their independent discoveries of the intriguing capacity of rubidium and sodium atoms to form a single wave of matter when cooled to temperatures just above absolute zero.

The research planned by NASA is aimed at studying ultra-cold quantum gases in the microgravity of the ISS, among other experiments.

The technology enabling such experiments includes an atom chip with on-window wires, allowing simultaneous magnetic trapping and optical manipulation, as well as a compound silicon-and-glass substrate that provides both magnetic and optical control of the ultra-cold atoms.

The Cold Atom Lab, designed “for use by multiple investigators” and “upgradable and maintainable on orbit,” is scheduled to be launched to the ISS in early 2016, where it is expected to operate for five years.

Quantum gas goes below absolute zero.

It may sound less likely than hell freezing over, but physicists have created an atomic gas with a sub-absolute-zero temperature for the first time [1]. Their technique opens the door to generating negative-Kelvin materials and new quantum devices, and it could even help to solve a cosmological mystery.

Lord Kelvin defined the absolute temperature scale in the mid-1800s in such a way that nothing could be colder than absolute zero. Physicists later realized that the absolute temperature of a gas is related to the average energy of its particles. Absolute zero corresponds to the theoretical state in which particles have no energy at all, and higher temperatures correspond to higher average energies.

However, by the 1950s, physicists working with more exotic systems began to realise that this isn’t always true. Technically, you read off the temperature of a system from a graph that plots the probabilities of its particles being found with certain energies. Normally, most particles have average or near-average energies, with only a few particles zipping around at higher energies. In theory, if the situation is reversed, with more particles having higher, rather than lower, energies, the plot would flip over and the sign of the temperature would change from a positive to a negative absolute temperature, explains Ulrich Schneider, a physicist at the Ludwig Maximilian University in Munich, Germany.
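The “flipped plot” can be sketched with a minimal two-level Boltzmann model, where the probability of a state with energy E is proportional to exp(-E/kT) (the Boltzmann constant k is set to 1 here for clarity; this is a textbook illustration, not the experiment’s actual many-body system):

```python
# Two-level Boltzmann sketch: at positive temperature the low-energy state
# dominates; at negative temperature the populations invert and the
# high-energy state dominates.
import math

def populations(energies, temperature):
    """Normalized Boltzmann occupation probabilities, with k = 1."""
    weights = [math.exp(-e / temperature) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

energies = [0.0, 1.0]  # one low-energy and one high-energy state

p_pos = populations(energies, temperature=+0.5)
p_neg = populations(energies, temperature=-0.5)

print(p_pos)  # low-energy state is more likely: ordinary positive T
print(p_neg)  # high-energy state is more likely: population inversion, T < 0
print(p_pos[0] > p_pos[1], p_neg[1] > p_neg[0])  # True True
```

Plugging a negative temperature into the same formula is exactly the sign flip Schneider describes: the distribution over energies turns upside down.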

Schneider and his colleagues reached such sub-absolute-zero temperatures with an ultracold quantum gas made up of potassium atoms. Using lasers and magnetic fields, they kept the individual atoms in a lattice arrangement. At positive temperatures, the atoms repel, making the configuration stable. The team then quickly adjusted the magnetic fields, causing the atoms to attract rather than repel each other. “This suddenly shifts the atoms from their most stable, lowest-energy state to the highest possible energy state, before they can react,” says Schneider. “It’s like walking through a valley, then instantly finding yourself on the mountain peak.”

At positive temperatures, such a reversal would be unstable and the atoms would collapse inwards. But the team also adjusted the trapping laser field to make it more energetically favourable for the atoms to stick in their positions. This result, described today in Science [1], marks the gas’s transition from just above absolute zero to a few billionths of a Kelvin below absolute zero.

Wolfgang Ketterle, a physicist and Nobel laureate at the Massachusetts Institute of Technology in Cambridge, who has previously demonstrated negative absolute temperatures in a magnetic system [2], calls the latest work an “experimental tour de force”. Exotic high-energy states that are hard to generate in the laboratory at positive temperatures become stable at negative absolute temperatures — “as though you can stand a pyramid on its head and not worry about it toppling over,” he notes — and so such techniques can allow these states to be studied in detail. “This may be a way to create new forms of matter in the laboratory,” Ketterle adds.

If built, such systems would behave in strange ways, says Achim Rosch, a theoretical physicist at the University of Cologne in Germany, who proposed the technique used by Schneider and his team [3]. For instance, Rosch and his colleagues have calculated that whereas clouds of atoms would normally be pulled downwards by gravity, if part of the cloud is at a negative absolute temperature, some atoms will move upwards, apparently defying gravity [4].

Another peculiarity of the sub-absolute-zero gas is that it mimics ‘dark energy’, the mysterious force that pushes the Universe to expand at an ever-faster rate against the inward pull of gravity. Schneider notes that the attractive atoms in the gas produced by the team also want to collapse inwards, but do not because the negative absolute temperature stabilises them. “It’s interesting that this weird feature pops up in the Universe and also in the lab,” he says. “This may be something that cosmologists should look at more closely.”

Accidental discovery dramatically improves electrical conductivity.

Quite by accident, Washington State University researchers have achieved a 400-fold increase in the electrical conductivity of a crystal simply by exposing it to light. The effect, which lasted for days after the light was turned off, could dramatically improve the performance of devices like computer chips.

Strontium Titanate

WSU doctoral student Marianne Tarun chanced upon the discovery when she noticed that the conductivity of some strontium titanate shot up after it was left out one day. At first, she and her fellow researchers thought the sample was contaminated, but a series of experiments showed the effect was from light.

“It came by accident,” said Tarun. “It’s not something we expected. That makes it very exciting to share.”

The phenomenon they witnessed—“persistent photoconductivity”—is a far cry from superconductivity, the complete lack of electrical resistance pursued by other physicists, usually at temperatures near absolute zero. But the fact that they’ve achieved this effect at room temperature makes the phenomenon more immediately practical.

And while other researchers have created persistent photoconductivity in other materials, this is the most dramatic display of the phenomenon.

The research, which was funded by the National Science Foundation, appears this month in the journal Physical Review Letters.

“The discovery of this effect at room temperature opens up new possibilities for practical devices,” said Matthew McCluskey, co-author of the paper and chair of WSU’s physics department. “In standard computer memory, information is stored on the surface of a computer chip or hard drive. A device using persistent photoconductivity, however, could store information throughout the entire volume of a crystal.”

This approach, called holographic memory, “could lead to huge increases in information capacity,” McCluskey said.

Strontium titanate and other oxides, which contain oxygen and two or more other elements, often display a dizzying variety of electronic phenomena, from the high resistance used for insulation to superconductivity’s lack of resistance.

“These diverse properties provide a fascinating playground for scientists but applications so far have been limited,” said McCluskey.

McCluskey, Tarun and physicist Farida Selim, now at Bowling Green State University, exposed a sample of strontium titanate to light for 10 minutes. Its improved conductivity lasted for days. They theorize that the light frees electrons in the material, letting it carry more current.