Sniffing and Peeking at Mars


The ExoMars Trace Gas Orbiter gets into position and takes some new pictures


Although it arrived at Mars back in October 2016, the ESA/Roscosmos mission called ExoMars Trace Gas Orbiter (and no, I couldn’t see a nice acronym in there either) has spent the last 11 months getting into a working orbit.

Using aerobraking, the spacecraft has shrunk its highly elliptical capture orbit to a relatively tight, near-circular path around Mars, about 400 km above the surface. This is the prime science-mission configuration. Although the highest-profile science goal for the orbiter is arguably its study of gases like methane in the martian atmosphere, it’s got some other nifty science instruments on board.

One of those is the CaSSIS camera, capable of taking stereoscopic images of the planetary surface at a resolution of some 4.5 meters. Developed at the University of Bern in Switzerland, CaSSIS has been returning data since reaching Mars, but in the new orbit these pictures are taking on a new level of detail. Taken with a set of three color filters skewed towards the red and infrared bands, the following image shows a 40 km long stretch of Korolev Crater at high northern latitudes. Bright-looking material is ice.

Credit: ESA, Roscosmos and CaSSIS

With a close up of one area shown here:

Credit: ESA, Roscosmos and CaSSIS

Images like these will help add to our increasingly detailed maps of Mars. In many respects this alien surface is already better mapped out than the Earth’s ocean floors. The CaSSIS data will also help improve our understanding of the comings and goings of volatiles like water and carbon dioxide on Mars, linking these data with the spectroscopic study of trace gases.

Is, for example, methane on Mars coming from specific locations? And is there evidence that it could have a biological origin?

These are big questions, and as ExoMars goes about its business we’re going to get closer to some answers.

 


Can Rover sniff out life on Mars?



Practice run – putting the rover through its paces on Mount Teide in Tenerife

FIRST it was Beagle 2 that put Stevenage on the map of space exploration.

Now aerospace company EADS-Astrium hopes to redeem itself after that failure with the Rover, a robot vehicle it is hoped will crawl across the surface of Mars at little more than a snail’s pace.

Beagle 2 vanished without trace minutes before it was due to land on the Martian surface on Christmas Day 2003, leaving red faces at EADS-Astrium; a later report delivered a further rebuke, saying the project was flawed and under-funded.

Now, though, the Rover, which is costing £154m, has shown it has the technology and the ability to crawl on Mars by completing a series of tests on a mountain top in Tenerife.

The landscape around the summit of Mount Teide, the world’s third largest volcano, which last erupted in 1909, proved the perfect obstacle course, and after a week crawling around the barren, rock-strewn tundra, the two Stevenage engineers who carried out the experiments reported they were satisfied with the rover.

The vehicle is a prototype of the rover that will be sent off in the direction of the Red Planet some time in 2011, and it is the central feature of the European Space Agency’s £400m project known as ExoMars.

The Rover is a six-wheeled device that may answer many of the questions about Mars, including whether there is life on the planet.

With a top speed of just one tenth of a mile an hour, Rover was put through its paces by two scientists with a remote control.

“For a prototype it worked very well,” said project head Lester Waugh.

“It demonstrated its capabilities very well and now we have to work further on the semi-autonomous navigation system and the other science packages, including the instruments that will, hopefully, scan, drill and sample the Martian surface.”

With the same team that gave life to Beagle 2 now working on the rover, there would be cause to celebrate if it actually got to Mars, and a major party at the EADS-Astrium site in Gunnels Wood Road if it found its Stevenage mate Beagle 2.

 


Pluto’s landscape is so complex that Nasa scientists aren’t sure how it formed, after New Horizons images reveal what could be a huge field of dunes


How the dunes, craters and huge mountains on the dwarf planet could have been formed is ‘a head-scratcher’, say scientists.

This synthetic perspective view of Pluto, based on the latest high-resolution images downlinked from NASA’s New Horizons spacecraft, shows what you would see from approximately 1,100 miles (1,800 kilometers) above Pluto’s equatorial area. NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute

The surface of Pluto is so complex that scientists aren’t sure how it formed, they have said, after images beamed back from New Horizons revealed an incredibly varied landscape.

The pictures show that the dwarf planet might have huge fields of dunes, massive nitrogen ice flows and valleys that could have formed as materials flowed over its surface. The complexity has stunned scientists: features like dunes shouldn’t be there, since the atmosphere is so thin.

“Pluto is showing us a diversity of landforms and complexity of processes that rival anything we’ve seen in the solar system,” New Horizons Principal Investigator Alan Stern said in a statement. “If an artist had painted this Pluto before our flyby, I probably would have called it over the top — but that’s what is actually there.”

This 220-mile (350-kilometer) wide view of Pluto from NASA’s New Horizons spacecraft illustrates the incredible diversity of surface reflectivities and geological landforms on the dwarf planet

Now scientists are trying to work out what happened to get the stunning range of complexity of features onto Pluto.

“Seeing dunes on Pluto — if that is what they are — would be completely wild, because Pluto’s atmosphere today is so thin,” William B. McKinnon, part of New Horizons’ Geology, Geophysics and Imaging team said in a statement. “Either Pluto had a thicker atmosphere in the past, or some process we haven’t figured out is at work. It’s a head-scratcher.”

Scientists have also been surprised to find that the haze in Pluto’s atmosphere has more layers than they had thought. That creates a kind of twilight effect, meaning that terrain is lit up at sunset, giving a kind of visibility that they’d never expected.

Two different versions of an image of Pluto’s haze layers, taken by New Horizons as it looked back at Pluto’s dark side nearly 16 hours after close approach, from a distance of 480,000 miles (770,000 kilometers), at a phase angle of 166 degrees

Last weekend, New Horizons started sending images back to Earth after its flyby in July — a process that will take a year, in all. The new pictures have enabled the New Horizons team to see Pluto in much more detail than before, giving resolutions as high as 400 meters per pixel.

Pluto could be reclassified by scientists as a planet


International Astronomical Union is being encouraged to reconsider its definition of ‘planet’

In 2006, Pluto was relegated to the status of dwarf planet by the International Astronomical Union NASA/JPL-Caltech

Three years ago, Nasa’s New Horizons, the fastest spaceship ever launched, raced past Pluto, spectacularly revealing the wonders of that newly seen world.

This coming New Year’s Eve – if all goes well on board this small robot operating extremely far from home – it will treat us to images of the most distant body ever explored, provisionally named Ultima Thule.

We know very little about it, but we do know it’s not a planet. Pluto, by contrast – despite what you’ve heard – is.

Why do we say this? We are planetary scientists, meaning we’ve spent our careers exploring and studying objects that orbit stars.

We use “planet” to describe worlds with certain qualities. When we see one like Pluto, with its many familiar features – mountains of ice, glaciers of nitrogen, a blue sky with layers of smog – we and our colleagues quite naturally find ourselves using the word “planet” to describe it and compare it to other planets that we know and love.

In 2006, the International Astronomical Union (IAU) announced an attempted redefinition of the word “planet” that excluded many objects, including Pluto. We think that decision was flawed, and that a logical and useful definition of planet will include many more worlds.

We find ourselves using the word planet to describe the largest “moons” in the solar system.

Moon refers to the fact that they orbit around other worlds which themselves orbit our star, but when we discuss a world such as Saturn’s Titan, which is larger than the planet Mercury, and has mountains, dunes and canyons, rivers, lakes and clouds, you will find us – in the literature and at our conferences – calling it a planet.

This usage is not a mistake or a throwback. It is increasingly common in our profession and it is accurate.

Most essentially, planetary worlds (including planetary moons) are those large enough to have pulled themselves into a ball by the strength of their own gravity.

Below a certain size, the strength of ice and rock is enough to resist rounding by gravity, and so the smallest worlds are lumpy.

This is how, even before New Horizons arrives, we know that Ultima Thule is not a planet. Among the few facts we’ve been able to ascertain about this body is that it is tiny (just 17 miles across) and distinctly non-spherical.

This gives us a natural, physical criterion to separate planets from all the small bodies orbiting in space – boulders, icy comets or rocky and metallic asteroids, all of which are small and lumpy because their gravity is too weak for self-rounding.

The desire to reconsider the meaning of “planet” arose because of two thrilling discoveries about our universe: There are planets in unbelievable abundance beyond our solar system – called “exoplanets” – orbiting nearly every star we see in the sky. And there are a great many small icy objects orbiting our sun out in Pluto’s realm, beyond the zone of the rocky inner worlds or “terrestrial planets” (such as Earth), the “gas giants” (such as Jupiter) and the “ice giants” (such as Neptune).

In light of these discoveries, it did then make sense to ask which objects discovered orbiting other stars should be considered planets. Some, at the largest end, are more like stars themselves. And just as stars like our sun are known as “dwarf stars” and still considered stars, it made some sense to consider small icy worlds like Pluto to occupy another subcategory of planet: “dwarf planet.”

But the process for redefining planet was deeply flawed and widely criticised even by those who accepted the outcome.

At the 2006 IAU conference, which was held in Prague, the few scientists remaining at the very end of the week-long meeting (less than 4 per cent of the world’s astronomers and an even smaller percentage of the world’s planetary scientists) ratified a hastily drawn definition that contains obvious flaws. For one thing, it defines a planet as an object orbiting around our sun – thereby disqualifying the planets around other stars, ignoring the exoplanet revolution, and decreeing that essentially all the planets in the universe are not, in fact, planets.

Even within our solar system, the IAU scientists defined “planet” in a strange way, declaring that if an orbiting world has “cleared its zone”, or thrown its weight around enough to eject all other nearby objects, it is a planet. Otherwise it is not.

This criterion is imprecise and leaves many borderline cases, but what’s worse is that they chose a definition that discounts the actual physical properties of a potential planet, electing instead to define “planet” in terms of the other objects that are – or are not – orbiting nearby.

This leads to many bizarre and absurd conclusions. For example, it would mean that Earth was not a planet for its first 500 million years of history, because it orbited among a swarm of debris until that time, and also that if you took Earth today and moved it somewhere else, say out to the asteroid belt, it would cease being a planet.

To add insult to injury, they amended their convoluted definition with the vindictive and linguistically paradoxical statement that “a dwarf planet is not a planet”. This seemingly served no purpose but to satisfy those motivated by a desire – for whatever reason – to ensure that Pluto was “demoted” by the new definition.

By and large, astronomers ignore the new definition of “planet” every time they discuss all of the exciting discoveries of planets orbiting other stars.

And those of us who actually study planets for a living also discuss dwarf planets without adding an asterisk. But it gets old having to address the misconceptions among the public who think that because Pluto was “demoted” (not exactly a neutral term) it must be more like a lumpy little asteroid than the complex and vibrant planet it is.

It is this confusion among students and the public – fostered by journalists and textbook authors who mistakenly accepted the authority of the IAU as the final word – that makes this worth addressing.

Last March, in Houston, planetary scientists gathered to share new results and ideas at the annual Lunar and Planetary Science Conference. One presentation, titled “A Geophysical Planet Definition”, intended to set the record straight.

It stated: “In keeping with both sound scientific classification and peoples’ intuition, we propose a geophysically-based definition of ‘planet’ that importantly emphasises a body’s intrinsic physical properties over its extrinsic orbital properties.”

After giving a precise and nerdy definition, it offered: “A simple paraphrase of our planet definition – especially suitable for elementary school students – could be, ’round objects in space that are smaller than stars’.”

It seems very likely that at some point the IAU will reconsider its flawed definition. In the meantime, people will keep referring to the planets being discovered around other stars as planets, and we’ll keep referring to round objects in our solar system and elsewhere as planets. Eventually, “official” nomenclature will catch up to both common sense and scientific usage. The word “planet” predates and transcends science. Language is malleable and responsive to culture. Words are not defined by voting. Neither is a scientific paradigm.

Grinspoon is an astrobiologist who studies climate evolution and habitability of other worlds. Stern is the principal investigator of the New Horizons mission to Pluto and the Kuiper belt. Their book “Chasing New Horizons: Inside the Epic First Mission to Pluto,” was published May 1 by Picador.


Escape from Proxima b


A civilization in the habitable zone of a dwarf star like Proxima Centauri might find it hard to get into interstellar space with conventional rockets

Artist’s impression of the exoplanet Proxima Centauri b.

Almost all space missions launched so far by our civilization have been based on chemical propulsion. The fundamental limitation here is easy to understand: a rocket is pushed forward by ejecting burnt fuel gases backwards through its exhaust. The characteristic composition and temperature of the burnt fuel set the exhaust speed to a typical value of a few kilometers per second. Momentum conservation implies that the terminal speed of the rocket is given by this exhaust speed times the natural logarithm of the ratio between the initial and final mass of the rocket.

To exceed the exhaust speed by some large factor requires an initial fuel mass that exceeds the final payload mass by the exponential of this factor. Since the required fuel mass grows exponentially with terminal speed, it is not practical for chemical rockets to exceed a terminal speed that is more than an order of magnitude larger than the exhaust speed, namely a few tens of kilometers per second. Indeed, this has been the speed limit of all spacecraft launched so far by NASA or other space agencies.
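To make the exponential penalty concrete, here is a minimal sketch of that arithmetic. The 4.5 km/s exhaust speed is an assumed, typical value for chemical propulsion (within the "few kilometers per second" quoted above), not a figure from the article.

```python
import math

def mass_ratio(delta_v_kms, exhaust_kms):
    """Tsiolkovsky rocket equation, inverted: the initial-to-final mass
    ratio needed to reach a given terminal speed from rest."""
    return math.exp(delta_v_kms / exhaust_kms)

V_EXHAUST = 4.5  # km/s, assumed typical chemical exhaust speed

for dv in (5, 10, 20, 40, 80):  # terminal speeds in km/s
    print(f"delta-v = {dv:>2} km/s -> launch mass must be "
          f"{mass_ratio(dv, V_EXHAUST):,.0f}x the final payload")
```

Because the ratio is an exponential, doubling the terminal speed squares the required launch-to-payload mass ratio, which is why a few tens of kilometers per second is a practical ceiling for chemical rockets.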

By a fortunate coincidence, the escape speed from the surface of the Earth, 11 kilometers per second, and the escape speed from the location of the Earth around the sun, 42 kilometers per second, are close to the speed limit attainable by chemical propulsion. This miracle allowed our civilization to design missions, such as Voyager 1 and 2 or New Horizons, that could escape from the solar system into interstellar space. But is this fortune shared by other civilizations on habitable planets outside the solar system?

Life “as we know it” requires liquid water, which can exist on planets with a surface temperature and a mass similar to Earth’s. Surface heating is needed to keep water from freezing into ice, and an Earth-like gravity is needed to retain the planet’s atmosphere, which is also essential, since ice turns directly into gas in the absence of an external atmospheric pressure. Just look next door at Mars, which has a tenth of an Earth mass and lost most of its atmosphere long ago.

Since the surface temperature of a warm planet is dictated by the flux of stellar irradiation, the distance of the habitable zone around any arbitrary star scales roughly as the square root of the star’s luminosity. For low mass stars, the stellar luminosity scales roughly as the stellar mass to the third power. The escape speed scales as the square root of the stellar mass over the distance from the star.

Taken together, these considerations imply that the escape speed from the habitable zone of a star scales inversely with stellar mass to the power of one quarter. Paradoxically, the gravitational potential well is deeper in the habitable zone around lower mass stars. A civilization born near a dwarf star would need to launch rockets at a higher speed than we do in order to escape the gravitational pull of its star, even though the star is less massive than the Sun.
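Written out, the scaling argument of the last two paragraphs is a one-line derivation (a sketch using the same rough power laws quoted above):

```latex
d_{\mathrm{HZ}} \propto L^{1/2}, \qquad L \propto M^{3}
\;\Rightarrow\; d_{\mathrm{HZ}} \propto M^{3/2},
\qquad
v_{\mathrm{esc}} \propto \sqrt{\frac{M}{d_{\mathrm{HZ}}}}
\propto \sqrt{\frac{M}{M^{3/2}}} = M^{-1/4}
```

So halving the stellar mass raises the escape speed from the habitable zone by a factor of about 1.2, and a star of a tenth of a solar mass raises it by nearly 80 percent.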

As it turns out, the lowest mass stars happen to be the most abundant of them all. It is therefore not surprising that the nearest star to the sun, Proxima Centauri, has 12 percent of the mass of the sun. This star also hosts a planet, Proxima b, in its habitable zone at a distance that is 20 times smaller than the Earth-Sun separation. The escape speed from the location of Proxima b to interstellar space is about 65 kilometers per second. Launching a rocket from rest at that location requires the fuel-to-payload weight ratio to be larger than a few billions in order for the rocket to escape the gravitational pull of Proxima Centauri.

In other words, freeing one gram’s worth of technological equipment from the position of Proxima b to interstellar space requires a chemical fuel tank that weighs millions of kilograms, similar to that used for liftoff of the space shuttle. Increasing the final payload weight to a kilogram, the scale of our smallest CubeSat, requires a thousand times more fuel than carried by the space shuttle.
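The escape speed and fuel-to-payload ratio quoted above can be checked with a few lines of arithmetic. The stellar mass and orbital distance come from the text; the 3 km/s exhaust speed is an assumption at the low end of chemical propulsion, so treat this as a back-of-the-envelope check rather than a mission calculation.

```python
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
AU    = 1.496e11    # m

M_star  = 0.12 * M_SUN   # Proxima Centauri: ~12% of a solar mass
r_orbit = AU / 20        # Proxima b: ~20x closer to its star than Earth is to the Sun

v_esc = math.sqrt(2 * G * M_star / r_orbit)   # escape speed from Proxima b's orbit
ratio = math.exp(v_esc / 3e3)                 # rocket equation with assumed 3 km/s exhaust

print(f"escape speed    ~ {v_esc / 1e3:.0f} km/s")   # ~65 km/s
print(f"fuel-to-payload ~ {ratio:.1e}")              # ~3e9, i.e. a few billion
```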

This is bad news for technological civilizations in the habitable zone of dwarf stars.

Their space missions would barely be capable of escaping into interstellar space using chemical propulsion alone. Of course, the extraterrestrials (E.T.s) can take advantage, as we do, of gravitational assists by optimally designing the spacecraft trajectory around their host star and surrounding planets.

In particular, launching a rocket in the direction of motion of the planet would reduce the propulsion boost needed for interstellar escape down to the practical range of 30 kilometers per second. The E.T.s could also employ more advanced propulsion technologies, such as light sails or nuclear engines.

Nevertheless, this global perspective should make us feel fortunate that we live in the habitable zone of a rare star as bright as the sun. Not only do we have liquid water and a comfortable climate to maintain a good quality of life, but we also inhabit a platform from which we can escape with ease into interstellar space. We should take advantage of this fortune to find real estate on extrasolar planets in anticipation of a future time when life on our own planet will become impossible.

This unfortunate fate will inevitably confront us in less than a billion years, when the sun will heat up enough to boil all water off the face of the Earth. With proper planning we could relocate to a new home by then. Some of the most desirable destinations would be systems of multiple planets around low mass stars, such as the nearby dwarf star TRAPPIST-1, which weighs 9 percent of a solar mass and hosts seven Earth-size planets.

Once we get to the habitable zone of TRAPPIST-1, however, there would be no rush to escape. Such stars burn hydrogen so slowly that they could keep us warm for ten trillion years, about a thousand times longer than the lifetime of the sun.

There Are Giant Plasma Tubes Floating Above Earth


We’d long suspected them, but in 2015 astronomers for the first time captured visual evidence of tubular plasma structures in the inner layers of the magnetosphere surrounding the Earth.


“For over 60 years, scientists believed these structures existed but by imaging them for the first time, we’ve provided visual evidence that they are really there,” said Cleo Loi of the ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) and the School of Physics at the University of Sydney back in 2015.

Loi was lead author on this research, done as part of her award-winning undergraduate thesis and published in the journal Geophysical Research Letters.

“The discovery of the structures is important because they cause unwanted signal distortions that could, as one example, affect our civilian and military satellite-based navigation systems. So we need to understand them,” she said.


The region of space around the Earth occupied by its magnetic field, called the magnetosphere, is filled with plasma created by the atmosphere being ionised by sunlight.

The innermost layer of the magnetosphere is the ionosphere, and above that is the plasmasphere. They are embedded with a variety of strangely shaped plasma structures, including the tubes.

“We measured their position to be about 600 km above the ground, in the upper ionosphere, and they appear to be continuing upwards into the plasmasphere. This is around where the neutral atmosphere ends, and we are transitioning to the plasma of outer space,” Loi said.


Using the Murchison Widefield Array, a radio telescope in the Western Australian desert, Loi found that she could map large patches of the sky and exploit the array’s rapid snapshot capabilities to create a movie – effectively capturing the real-time movements of the plasma.

Loi was awarded the 2015 Bok Prize of the Astronomical Society of Australia for her work.

Whisper From the First Stars Sets Off Loud Dark Matter Debate


A surprise discovery announced a month ago suggested that the early universe looked very different than previously believed. Initial theories that the discrepancy was due to dark matter have come under fire.

Illustration for first stars

Evidence pointing to the discovery of the earliest stars inspired excitement among cosmologists, but also skepticism.

The news about the first stars in the universe always seemed a little off. Last July, Rennan Barkana, a cosmologist at Tel Aviv University, received an email from one of his longtime collaborators, Judd Bowman. Bowman leads a small group of five astronomers who built and deployed a radio telescope in remote western Australia. Its goal: to find the whisper of the first stars. Bowman and his team had picked up a signal that didn’t quite make sense. He asked Barkana to help him think through what could possibly be going on.

For years, as radio telescopes scanned the sky, astronomers have hoped to glimpse signs of the first stars in the universe. Those objects are too faint and, at over 13 billion light-years away, too distant to be picked up by ordinary telescopes. Instead, astronomers search for the stars’ effects on the surrounding gas. Bowman’s instrument, like the others involved in the search, attempts to pick out a particular dip in radio waves coming from the distant universe.

The measurement is exceedingly difficult to make, since the potential signal can get swamped not only by the myriad radio sources of modern society — one reason the experiment is deep in the Australian outback — but by nearby cosmic sources such as our own Milky Way galaxy. Still, after years of methodical work, Bowman and his colleagues with the Experiment to Detect the Global Epoch of Reionization Signature (EDGES) concluded not only that they had found the first stars, but that they had found evidence that the young cosmos was significantly colder than anyone had thought.

Barkana was skeptical, however. “On the one hand, it looks like a very solid measurement,” he said. “On the other hand, it is something very surprising.”

What could make the early universe appear cold? Barkana thought through the possibilities and realized that it could be a consequence of the presence of dark matter — the mysterious substance that pervades the universe yet escapes every attempt to understand what it is or how it works. He found that the EDGES result could be interpreted as a completely new way that ordinary material might be interacting with dark matter.

The EDGES group announced the details of this signal and the detection of the first stars in the March 1 issue of Nature. Accompanying their article was Barkana’s paper describing his novel dark matter idea. News outlets worldwide carried news of the discovery. “Astronomers Glimpse Cosmic Dawn, When the Stars Switched On,” the Associated Press reported, adding that “they may have detected mysterious dark matter at work, too.”

Yet in the weeks since the announcement, cosmologists around the world have expressed a mix of excitement and skepticism. Researchers who saw the EDGES result for the first time when it appeared in Nature have done their own analysis, showing that even if some kind of dark matter is responsible, as Barkana suggested, no more than a small fraction of it could be involved in producing the effect. (Barkana himself has been involved in some of these studies.) And experimental astronomers have said that while they respect the EDGES team and the careful work that they’ve done, such a measurement is too difficult to trust entirely. “If this weren’t a groundbreaking discovery, it would be a lot easier for people to just believe the results,” said Daniel Price, an astronomer at Swinburne University of Technology in Australia who works on similar experiments. “Great claims require great evidence.”

This message has echoed through the cosmology community since those Nature papers appeared.

The Source of a Whisper

The day after Bowman contacted Barkana to tell him about the surprising EDGES signal, Barkana drove with his family to his in-laws’ house. During the drive, he said, he contemplated this signal, telling his wife about the interesting puzzle Bowman had handed him.

Bowman and the EDGES team had been probing the neutral hydrogen gas that filled the universe during the first few hundred million years after the Big Bang. This gas tended to absorb ambient light, leading to what cosmologists poetically call the universe’s “dark ages.” Although the cosmos was filled with a diffuse ambient light from the cosmic microwave background (CMB) — the so-called afterglow of the Big Bang — this neutral gas absorbed it at specific wavelengths. EDGES searched for this absorption pattern.

As stars began to turn on in the universe, their energy would have heated the gas. Eventually the gas reached a high enough temperature that it no longer absorbed CMB radiation. The absorption signal disappeared, and the dark ages ended.

The absorption signal as measured by EDGES contains an immense amount of information. As the absorption pattern traveled across the expanding universe, the signal stretched. Astronomers can use that stretch to infer how long the signal has been traveling, and thus, when the first stars flicked on. In addition, the width of the detected signal corresponds to the amount of time that the gas was absorbing the CMB light. And the intensity of the signal — how much light was absorbed — relates to the temperature of the gas and the amount of light that was floating around at the time.
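As an illustration of how the stretch translates into a date: the absorption is imprinted on the 21-centimeter line of neutral hydrogen, whose rest frequency is 1420 MHz, and the published EDGES dip is centered near 78 MHz (a figure from the Nature paper, not quoted in this excerpt). The observed frequency then gives the redshift directly:

```latex
1 + z \;=\; \frac{\nu_{\mathrm{rest}}}{\nu_{\mathrm{obs}}}
\;\approx\; \frac{1420\ \mathrm{MHz}}{78\ \mathrm{MHz}} \;\approx\; 18
\qquad\Longrightarrow\qquad z \approx 17
```

which places the signal in the first couple of hundred million years after the Big Bang, squarely in the dark ages described above. The width and depth of the dip carry the other two pieces of information just mentioned.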

Many researchers find this final characteristic the most intriguing. “It’s a much stronger absorption than we had thought possible,” said Steven Furlanetto, a cosmologist at the University of California, Los Angeles, who has examined what the EDGES data would mean for the formation of the earliest galaxies.

Graph showing absorption profiles for the early universe.

Lucy Reading-Ikkanda/Quanta Magazine; Source: arXiv:1609.02312v3 Figure 1 (expected); doi:10.1038/nature25792 Figure 2 (observed)

The most obvious explanation for such a strong signal is that the neutral gas was colder than predicted, which would have allowed it to absorb even more background radiation. But how could the universe have unexpectedly cooled? “We’re talking about a period of time when stars are beginning to form,” Barkana said — the darkness before the dawn. “So everything is as cold as it can be. The question is: What could be even colder?”

As he parked at his in-laws’ house that July day, an idea came to him: Could it be dark matter? After all, dark matter doesn’t seem to interact with normal matter via the electromagnetic force — it doesn’t emit or absorb heat. So dark matter could have started out colder or been cooling much longer than normal matter at the beginning of the universe, and then continued to cool.

Over the next week, he worked on a theory of how a hypothetical form of dark matter called “millicharged” dark matter could have been responsible. Millicharged dark matter could interact with ordinary matter, but only very weakly. Intergalactic gas might then have cooled by “basically dumping heat into the dark matter sector where you can’t see it anymore,” Furlanetto explained. Barkana wrote the idea up and sent it off to Nature.

Then he began to work through the idea in more detail with several colleagues. Others did as well. As soon as the Nature papers appeared, several groups of theoretical cosmologists started to compare the behavior of this unexpected type of dark matter to what we know about the universe — the decades’ worth of CMB observations, data from supernova explosions, the results of collisions at particle accelerators like the Large Hadron Collider, and astronomers’ understanding of how the Big Bang produced hydrogen, helium and lithium during the universe’s first few minutes. If millicharged dark matter was out there, did all these other observations make sense?

They did not. More precisely, these researchers found that millicharged dark matter can only make up a small fraction of the total dark matter in the universe — too small a fraction to create the observed dip in the EDGES data. “You cannot have 100 percent of dark matter interacting,” said Anastasia Fialkov, an astrophysicist at Harvard University and the first author of a paper submitted to Physical Review Letters. Another paper that Barkana and colleagues posted on the preprint site arxiv.org concludes that this dark matter has an even smaller presence: it couldn’t account for more than 1 to 2 percent of the total dark matter content. Independent groups have reached similar conclusions.

If it’s not millicharged dark matter, then what might explain EDGES’ stronger-than-expected absorption signal? Another possibility is that extra background light existed during the cosmic dawn. If there were more radio waves than expected in the early universe, then “the absorption would appear stronger even though the gas itself is unchanged,” Furlanetto said. Perhaps the CMB wasn’t the only ambient light during the toddler years of our universe.

This idea doesn’t come entirely out of left field. In 2011, a balloon-lofted experiment called ARCADE 2 reported a background radio signal that was stronger than would have been expected from the CMB alone. Scientists haven’t yet been able to explain this result.

After the EDGES detection, a few groups of astronomers revisited these data. One group looked at black holes as a possible explanation, since black holes are the brightest extragalactic radio sources in the sky. Yet black holes also produce other forms of radiation, like X-rays, that haven’t been seen in the early universe. Because of this, astronomers remain skeptical that black holes are the answer.

Is It Real?

Perhaps the simplest explanation is that the data are just wrong. The measurement is incredibly difficult, after all. Yet by all accounts the EDGES team took exceptional care to cross-check all their data — Price called the experiment “exquisite” — which means that if there is a flaw in the data, it will be exceptionally hard to find.

Photo of the antenna for EDGES located in remote western Australia

This antenna for EDGES was deployed in 2015 at a remote location in western Australia where it would experience little radio interference.

The EDGES team deployed their radio antenna in September 2015. By December, they were seeing a signal, said Raul Monsalve, an experimental cosmologist at the University of Colorado, Boulder, and a member of the EDGES team. “We became suspicious immediately, because it was stronger than expected.”

And so they began what became a marathon of due diligence. They built a similar antenna and installed it about 150 meters away from the first one. They rotated the antennas to rule out environmental and instrumental effects. They used separate calibration and analysis techniques. “We made many, many kinds of cuts and comparisons and cross-checks to try to rule out the signal as coming from the environment or from some other source,” Monsalve said. “We didn’t believe ourselves at the beginning. We thought it was very suspicious for the signal to be this strong, and that’s why we took so long to publish.” They are convinced that they’re seeing a signal, and that the signal is unexpectedly strong.

“I do believe the result,” Price said, but he emphasized that testing for systematic errors in the data is still needed. He mentioned one area where the experiment could have overlooked a potential error: Any antenna’s sensitivity varies depending on the frequency it’s observing and the direction from which a signal is coming. Astronomers can account for these imperfections by either measuring them or modeling them. Bowman and colleagues chose to model them. Price suggests that the EDGES team members instead find a way to measure them and then reanalyze their signal with that measured effect taken into account.

The next step is for a second radio detector to see this signal, which would imply it’s from the sky and not from the EDGES antenna or model. Scientists with the Large-Aperture Experiment to Detect the Dark Ages (LEDA) project, located in California’s Owens Valley, are currently analyzing that instrument’s data. Then researchers will need to confirm that the signal is actually cosmological and not produced by our own Milky Way. This is not a simple problem. Our galaxy’s radio emission can be thousands of times stronger than cosmological signals.

On the whole, researchers regard both the EDGES measurement itself and its interpretation with a healthy skepticism, as Barkana and many others have put it. Scientists should be skeptical of a first-of-its-kind measurement — that’s how they ensure that the observation is sound, the analysis was completed accurately, and the experiment wasn’t in error. This is, ultimately, how science is supposed to work. “We ask the questions, we investigate, we exclude every wrong possibility,” said Tomer Volansky, a particle physicist at Tel Aviv University who collaborated with Barkana on one of his follow-up analyses. “We’re after the truth. If the truth is that it’s not dark matter, then it’s not dark matter.”

In a Multiverse, What Are the Odds?


Testing the multiverse hypothesis requires measuring whether our universe is statistically typical among the infinite variety of universes. But infinity does a number on statistics.

The theory of eternal inflation casts our universe as one of countless bubbles in an eternally frothing sea.

If modern physics is to be believed, we shouldn’t be here. The meager dose of energy infusing empty space, which at higher levels would rip the cosmos apart, is a trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times tinier than theory predicts. And the minuscule mass of the Higgs boson, whose relative smallness allows big structures such as galaxies and humans to form, falls roughly 100 quadrillion times short of expectations. Dialing up either of these constants even a little would render the universe unlivable.

To account for our incredible luck, leading cosmologists like Alan Guth and Stephen Hawking envision our universe as one of countless bubbles in an eternally frothing sea. This infinite “multiverse” would contain universes with constants tuned to any and all possible values, including some outliers, like ours, that have just the right properties to support life. In this scenario, our good luck is inevitable: A peculiar, life-friendly bubble is all we could expect to observe.

Many physicists loathe the multiverse hypothesis, deeming it a cop-out of infinite proportions. But as attempts to paint our universe as an inevitable, self-contained structure falter, the multiverse camp is growing.

The problem remains how to test the hypothesis. Proponents of the multiverse idea must show that, among the rare universes that support life, ours is statistically typical. The exact dose of vacuum energy, the precise mass of our underweight Higgs boson, and other anomalies must have high odds within the subset of habitable universes. If the properties of this universe still seem atypical even in the habitable subset, then the multiverse explanation fails.

But infinity sabotages statistical analysis. In an eternally inflating multiverse, where any bubble that can form does so infinitely many times, how do you measure “typical”?

Guth, a professor of physics at the Massachusetts Institute of Technology, resorts to freaks of nature to pose this “measure problem.” “In a single universe, cows born with two heads are rarer than cows born with one head,” he said. But in an infinitely branching multiverse, “there are an infinite number of one-headed cows and an infinite number of two-headed cows. What happens to the ratio?”

For years, the inability to calculate ratios of infinite quantities has prevented the multiverse hypothesis from making testable predictions about the properties of this universe. For the hypothesis to mature into a full-fledged theory of physics, the two-headed-cow question demands an answer.
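A toy calculation shows why the ratio is ill-defined. Each enumeration below runs through infinitely many cows of both kinds; only the order in which a finite cutoff counts them differs, yet the measured fraction of two-headed cows converges to different values. (This is an illustrative sketch of cutoff-dependence, not a model of any particular multiverse measure.)

```python
from itertools import islice

def fraction_two_headed(ordering, cutoff):
    """Count cows in the given enumeration order up to a finite cutoff
    and return the fraction of them that are two-headed."""
    sample = list(islice(ordering, cutoff))
    return sample.count("two-headed") / cutoff

def alternating():
    # ...one-headed, two-headed, one-headed, two-headed, ...
    while True:
        yield "one-headed"
        yield "two-headed"

def two_to_one():
    # The same two infinite populations, enumerated in a different order:
    # two one-headed cows for every two-headed cow.
    while True:
        yield "one-headed"
        yield "one-headed"
        yield "two-headed"

for cutoff in (1_000, 1_000_000):
    print(cutoff,
          round(fraction_two_headed(alternating(), cutoff), 3),   # -> 0.5
          round(fraction_two_headed(two_to_one(), cutoff), 3))    # -> ~0.333
```

Because each stream contains infinitely many cows of each kind, neither limiting fraction is more correct than the other; the answer is fixed entirely by the cutoff rule, which is the measure problem in miniature.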

Eternal Inflation

As a junior researcher trying to explain the smoothness and flatness of the universe, Guth proposed in 1980 that a split second of exponential growth may have occurred at the start of the Big Bang. This would have ironed out any spatial variations as if they were wrinkles on the surface of an inflating balloon. The inflation hypothesis, though it is still being tested, gels with all available astrophysical data and is widely accepted by physicists.

Video: MIT cosmologist Alan Guth, 67, discusses why two-headed cows are an important problem in an infinite multiverse.

Katherine Taylor for Quanta Magazine

In the years that followed, Andrei Linde, now of Stanford University, Guth and other cosmologists reasoned that inflation would almost inevitably beget an infinite number of universes. “Once inflation starts, it never stops completely,” Guth explained. In a region where it does stop — through a kind of decay that settles it into a stable state — space and time gently swell into a universe like ours. Everywhere else, space-time continues to expand exponentially, bubbling forever.

Each disconnected space-time bubble grows under the influence of different initial conditions tied to decays of varying amounts of energy. Some bubbles expand and then contract, while others spawn endless streams of daughter universes. The scientists presumed that the eternally inflating multiverse would everywhere obey the conservation of energy, the speed of light, thermodynamics, general relativity and quantum mechanics. But the values of the constants coordinated by these laws were likely to vary randomly from bubble to bubble.

Paul Steinhardt, a theoretical physicist at Princeton University and one of the early contributors to the theory of eternal inflation, saw the multiverse as a “fatal flaw” in the reasoning he had helped advance, and he remains stridently anti-multiverse today. “Our universe has a simple, natural structure,” he said in September. “The multiverse idea is baroque, unnatural, untestable and, in the end, dangerous to science and society.”

Steinhardt and other critics believe the multiverse hypothesis leads science away from uniquely explaining the properties of nature. When deep questions about matter, space and time have been elegantly answered over the past century through ever more powerful theories, deeming the universe’s remaining unexplained properties “random” feels, to them, like giving up. On the other hand, randomness has sometimes been the answer to scientific questions, as when early astronomers searched in vain for order in the solar system’s haphazard planetary orbits. As inflationary cosmology gains acceptance, more physicists are conceding that a multiverse of random universes might exist, just as there is a cosmos full of star systems arranged by chance and chaos.

“When I heard about eternal inflation in 1986, it made me sick to my stomach,” said John Donoghue, a physicist at the University of Massachusetts, Amherst. “But when I thought about it more, it made sense.”

One for the Multiverse

The multiverse hypothesis gained considerable traction in 1987, when the Nobel laureate Steven Weinberg used it to predict the infinitesimal amount of energy infusing the vacuum of empty space, a number known as the cosmological constant, denoted by the Greek letter Λ (lambda). Vacuum energy is gravitationally repulsive, meaning it causes space-time to stretch apart. Consequently, a universe with a positive value for Λ expands — faster and faster, in fact, as the amount of empty space grows — toward a future as a matter-free void. Universes with negative Λ eventually contract in a “big crunch.”

Physicists had not yet measured the value of Λ in our universe in 1987, but the relatively sedate rate of cosmic expansion indicated that its value was close to zero. This flew in the face of quantum mechanical calculations suggesting Λ should be enormous, implying a density of vacuum energy so large it would tear atoms apart. Somehow, it seemed our universe was greatly diluted.

Weinberg turned to a concept called anthropic selection in response to “the continued failure to find a microscopic explanation of the smallness of the cosmological constant,” as he wrote in Physical Review Letters (PRL). He posited that life forms, from which observers of universes are drawn, require the existence of galaxies. The only values of Λ that can be observed are therefore those that allow the universe to expand slowly enough for matter to clump together into galaxies. In his PRL paper, Weinberg reported the maximum possible value of Λ in a universe that has galaxies. It was a multiverse-generated prediction of the most likely density of vacuum energy to be observed, given that observers must exist to observe it.

A decade later, astronomers discovered that the expansion of the cosmos was accelerating at a rate that pegged Λ at 10^-123 (in units of “Planck energy density”). A value of exactly zero might have implied an unknown symmetry in the laws of quantum mechanics — an explanation without a multiverse. But this absurdly tiny value of the cosmological constant appeared random. And it fell strikingly close to Weinberg’s prediction.

“It was a tremendous success, and very influential,” said Matthew Kleban, a multiverse theorist at New York University. The prediction seemed to show that the multiverse could have explanatory power after all.

Close on the heels of Weinberg’s success, Donoghue and colleagues used the same anthropic approach to calculate the range of possible values for the mass of the Higgs boson. The Higgs doles out mass to other elementary particles, and these interactions dial its mass up or down in a feedback effect. This feedback would be expected to yield a mass for the Higgs that is far larger than its observed value, making its mass appear to have been reduced by accidental cancellations between the effects of all the individual particles. Donoghue’s group argued that this accidentally tiny Higgs was to be expected, given anthropic selection: If the Higgs boson were just five times heavier, then complex, life-engendering elements like carbon could not arise. Thus, a universe with much heavier Higgs particles could never be observed.

Until recently, the leading explanation for the smallness of the Higgs mass was a theory called supersymmetry, but the simplest versions of the theory have failed extensive tests at the Large Hadron Collider near Geneva. Although new alternatives have been proposed, many particle physicists who considered the multiverse unscientific just a few years ago are now grudgingly opening up to the idea. “I wish it would go away,” said Nathan Seiberg, a professor of physics at the Institute for Advanced Study in Princeton, N.J., who contributed to supersymmetry in the 1980s. “But you have to face the facts.”

However, even as the impetus for a predictive multiverse theory has increased, researchers have realized that the predictions by Weinberg and others were too naive. Weinberg estimated the largest Λ compatible with the formation of galaxies, but that was before astronomers discovered mini “dwarf galaxies” that could form in universes in which Λ is 1,000 times larger. These more prevalent universes can also contain observers, making our universe seem atypical among observable universes. On the other hand, dwarf galaxies presumably contain fewer observers than full-size ones, and universes with only dwarf galaxies would therefore have lower odds of being observed.

Researchers realized it wasn’t enough to differentiate between observable and unobservable bubbles. To accurately predict the expected properties of our universe, they needed to weight the likelihood of observing certain bubbles according to the number of observers they contained. Enter the measure problem.

Measuring the Multiverse

Guth and other scientists sought a measure to gauge the odds of observing different kinds of universes. This would allow them to make predictions about the assortment of fundamental constants in this universe, all of which should have reasonably high odds of being observed. The scientists’ early attempts involved constructing mathematical models of eternal inflation and calculating the statistical distribution of observable bubbles based on how many of each type arose in a given time interval. But with time serving as the measure, the final tally of universes at the end depended on how the scientists defined time in the first place.

Berkeley physicist Raphael Bousso, 43, extrapolated from the physics of black holes to devise a novel way of measuring the multiverse, one that successfully explains many of our universe’s features.

Courtesy of Raphael Bousso

“People were getting wildly different answers depending on which random cutoff rule they chose,” said Raphael Bousso, a theoretical physicist at the University of California, Berkeley.

Alex Vilenkin, director of the Institute of Cosmology at Tufts University in Medford, Mass., has proposed and discarded several multiverse measures during the last two decades, looking for one that would transcend his arbitrary assumptions. Two years ago, he and Jaume Garriga of the University of Barcelona in Spain proposed a measure in the form of an immortal “watcher” who soars through the multiverse counting events, such as the number of observers. The frequencies of events are then converted to probabilities, thus solving the measure problem. But the proposal assumes the impossible up front: The watcher miraculously survives crunching bubbles, like an avatar in a video game dying and bouncing back to life.

In 2011, Guth and Vitaly Vanchurin, now of the University of Minnesota Duluth, imagined a finite “sample space,” a randomly selected slice of space-time within the infinite multiverse. As the sample space expands, approaching but never reaching infinite size, it cuts through bubble universes encountering events, such as proton formations, star formations or intergalactic wars. The events are logged in a hypothetical databank until the sampling ends. The relative frequency of different events translates into probabilities and thus provides a predictive power. “Anything that can happen will happen, but not with equal probability,” Guth said.

Still, beyond the strangeness of immortal watchers and imaginary databanks, both of these approaches necessitate arbitrary choices about which events should serve as proxies for life, and thus for observations of universes to be counted and converted into probabilities. Protons seem necessary for life; space wars do not — but do observers require stars, or is this too limited a concept of life? With either measure, choices can be made so that the odds stack in favor of our inhabiting a universe like ours. The degree of speculation raises doubts.

The Causal Diamond

Bousso first encountered the measure problem in the 1990s as a graduate student working with Stephen Hawking, the doyen of black hole physics. Black holes prove there is no such thing as an omniscient measurer, because someone inside a black hole’s “event horizon,” beyond which no light can escape, has access to different information and events from someone outside, and vice versa. Bousso and other black hole specialists came to think such a rule “must be more general,” he said, precluding solutions to the measure problem along the lines of the immortal watcher. “Physics is universal, so we’ve got to formulate what an observer can, in principle, measure.”

This insight led Bousso to develop a multiverse measure that removes infinity from the equation altogether. Instead of looking at all of space-time, he homes in on a finite patch of the multiverse called a “causal diamond,” representing the largest swath accessible to a single observer traveling from the beginning of time to the end of time. The finite boundaries of a causal diamond are formed by the intersection of two cones of light, like the dispersing rays from a pair of flashlights pointed toward each other in the dark. One cone points outward from the moment matter was created after a Big Bang — the earliest conceivable birth of an observer — and the other aims backward from the farthest reach of our future horizon, the moment when the causal diamond becomes an empty, timeless void and the observer can no longer access information linking cause to effect.

The infinite multiverse can be divided into finite regions called causal diamonds that range from large and rare with many observers (left) to small and common with few observers (right). In this scenario, causal diamonds like ours should be large enough to give rise to many observers but small enough to be relatively common.

Olena Shmahalo / Quanta Magazine, source: Raphael Bousso, Roni Harnik, Graham Kribs and Gilad Perez

Bousso is not interested in what goes on outside the causal diamond, where infinitely variable, endlessly recursive events are unknowable, in the same way that information about what goes on outside a black hole cannot be accessed by the poor soul trapped inside. If one accepts that the finite diamond, “being all anyone can ever measure, is also all there is,” Bousso said, “then there is indeed no longer a measure problem.”

In 2006, Bousso realized that his causal-diamond measure lent itself to an evenhanded way of predicting the expected value of the cosmological constant. Causal diamonds with smaller values of Λ would produce more entropy — a quantity related to disorder, or degradation of energy — and Bousso postulated that entropy could serve as a proxy for complexity and thus for the presence of observers. Unlike other ways of counting observers, entropy can be calculated using trusted thermodynamic equations. With this approach, Bousso said, “comparing universes is no more exotic than comparing pools of water to roomfuls of air.”

Using astrophysical data, Bousso and his collaborators Roni Harnik, Graham Kribs and Gilad Perez calculated the overall rate of entropy production in our universe, which primarily comes from light scattering off cosmic dust. The calculation predicted a statistical range of expected values of Λ. The known value, 10^-123, rests just left of the median. “We honestly didn’t see it coming,” Bousso said. “It’s really nice, because the prediction is very robust.”

Making Predictions

Bousso and his collaborators’ causal-diamond measure has now racked up a number of successes. It offers a solution to a mystery of cosmology called the “why now?” problem, which asks why we happen to live at a time when the effects of matter and vacuum energy are comparable, so that the expansion of the universe recently switched from slowing down (signifying a matter-dominated epoch) to speeding up (a vacuum energy-dominated epoch). Bousso’s theory suggests it is only natural that we find ourselves at this juncture. The most entropy is produced, and therefore the most observers exist, when universes contain equal parts vacuum energy and matter.

In 2010 Harnik and Bousso used their idea to explain the flatness of the universe and the amount of infrared radiation emitted by cosmic dust. Last year, Bousso and his Berkeley colleague Lawrence Hall reported that observers made of protons and neutrons, like us, will live in universes where the amount of ordinary matter and dark matter are comparable, as is the case here.

“Right now the causal patch looks really good,” Bousso said. “A lot of things work out unexpectedly well, and I do not know of other measures that come anywhere close to reproducing these successes or featuring comparable successes.”

The causal-diamond measure falls short in a few ways, however. It does not gauge the probabilities of universes with negative values of the cosmological constant. And its predictions depend sensitively on assumptions about the early universe, at the inception of the future-pointing light cone. But researchers in the field recognize its promise. By sidestepping the infinities underlying the measure problem, the causal diamond “is an oasis of finitude into which we can sink our teeth,” said Andreas Albrecht, a theoretical physicist at the University of California, Davis, and one of the early architects of inflation.

Kleban, who like Bousso began his career as a black hole specialist, said the idea of a causal patch such as an entropy-producing diamond is “bound to be an ingredient of the final solution to the measure problem.” He, Guth, Vilenkin and many other physicists consider it a powerful and compelling approach, but they continue to work on their own measures of the multiverse. Few consider the problem to be solved.

Every measure involves many assumptions, beyond merely that the multiverse exists. For example, predictions of the expected range of constants like Λ and the Higgs mass always assume that bubbles tend to have larger values of those constants. Clearly, this is a work in progress.

“The multiverse is regarded either as an open question or off the wall,” Guth said. “But ultimately, if the multiverse does become a standard part of science, it will be on the basis that it’s the most plausible explanation of the fine-tunings that we see in nature.”

Perhaps these multiverse theorists have chosen a Sisyphean task. Perhaps they will never settle the two-headed-cow question. Some researchers are taking a different route to testing the multiverse. Rather than rifle through the infinite possibilities of the equations, they are scanning the finite sky for the ultimate Hail Mary pass — the faint tremor from an ancient bubble collision.


Wormholes Untangle a Black Hole Paradox


A bold new idea aims to link two famously discordant descriptions of nature. In doing so, it may also reveal how space-time owes its existence to the spooky connections of quantum information.
 

Like initials carved in a tree, ER = EPR, as the new idea is known, is a shorthand that joins two ideas proposed by Einstein in 1935. One involved the paradox implied by what he called “spooky action at a distance” between quantum particles (the EPR paradox, named for its authors, Einstein, Boris Podolsky and Nathan Rosen). The other showed how two black holes could be connected across far reaches of space by “wormholes” (ER, for Einstein-Rosen bridges). At the time that Einstein put forth these ideas — and for most of the eight decades since — they were thought to be entirely unrelated.

But if ER = EPR is correct, the ideas aren’t disconnected — they’re two manifestations of the same thing. And this underlying connectedness would form the foundation of all space-time. Quantum entanglement — the action at a distance that so troubled Einstein — could be creating the “spatial connectivity” that “sews space together,” according to Leonard Susskind, a physicist at Stanford University and one of the idea’s main architects. Without these connections, all of space would “atomize,” according to Juan Maldacena, a physicist at the Institute for Advanced Study in Princeton, N.J., who developed the idea together with Susskind. “In other words, the solid and reliable structure of space-time is due to the ghostly features of entanglement,” he said. What’s more, ER = EPR has the potential to address how gravity fits together with quantum mechanics.

Not everyone’s buying it, of course (nor should they; the idea is in “its infancy,” said Susskind). Joe Polchinski, a researcher at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, whose own stunning paradox about firewalls in the throats of black holes triggered the latest advances, is cautious, but intrigued. “I don’t know where it’s going,” he said, “but it’s a fun time right now.”

The Black Hole Wars

The road that led to ER = EPR is a Möbius strip of tangled twists and turns that folds back on itself, like a drawing by M.C. Escher.

A fair place to start might be quantum entanglement. If two quantum particles are entangled, they become, in effect, two parts of a single unit. What happens to one entangled particle happens to the other, no matter how far apart they are.

Maldacena sometimes uses a pair of gloves as an analogy: If you come upon the right-handed glove, you instantaneously know the other is left-handed. There’s nothing spooky about that. But in the quantum version, both gloves are actually left- and right-handed (and everything in between) up until the moment you observe them. Spookier still, the left-handed glove doesn’t become left until you observe the right-handed one — at which moment both instantly gain a definite handedness.
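In quantum notation, the glove analogy corresponds to a standard Bell-type entangled state (written here for illustration; it is not tied to any particular black-hole setup):

$$ |\psi\rangle = \frac{1}{\sqrt{2}}\big(\,|L\rangle_A\,|R\rangle_B + |R\rangle_A\,|L\rangle_B\,\big). $$

Neither glove has a definite handedness on its own; only the correlation is fixed. The instant A is found to be right-handed, B is left-handed, however far apart the two are.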

Entanglement played a key role in Stephen Hawking’s 1974 discovery that black holes could evaporate. This, too, involved entangled pairs of particles. Throughout space, short-lived “virtual” particles of matter and anti-matter continually pop into and out of existence. Hawking realized that if one particle fell into a black hole and the other escaped, the hole would emit radiation, glowing like a dying ember. Given enough time, the hole would evaporate into nothing, raising the question of what happened to the information content of the stuff that fell into it.

But the rules of quantum mechanics forbid the complete destruction of information. (Hopelessly scrambling information is another story, which is why documents can be burned and hard drives smashed. There’s nothing in the laws of physics that prevents the information lost in a book’s smoke and ashes from being reconstructed, at least in principle.) So the question became: Would the information that originally went into the black hole just get scrambled? Or would it be truly lost? The arguments set off what Susskind called the “black hole wars,” which have generated enough stories to fill many books. (Susskind’s was subtitled “My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics.”)

Eventually Susskind — in a discovery that shocked even him — realized (with Gerard ’t Hooft) that all the information that fell down the hole was actually trapped on the black hole’s two-dimensional event horizon, the surface that marks the point of no return. The horizon encoded everything inside, like a hologram. It was as if the bits needed to re-create your house and everything in it could fit on the walls. The information wasn’t lost — it was scrambled and stored out of reach.
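The quantitative version of "everything inside fits on the horizon" is the Bekenstein–Hawking entropy, a standard result quoted here for reference: a black hole's information capacity grows with the area of its horizon, not with the volume it encloses.

$$ S_{\text{BH}} = \frac{k_B c^3}{4 G \hbar}\,A = k_B\,\frac{A}{4\,\ell_P^{\,2}}, $$

roughly one bit for every few Planck areas of horizon, which is why the walls of the house really could, in principle, hold the bits describing its contents.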

Susskind continued to work on the idea with Maldacena, whom Susskind calls “the master,” and others. Holography began to be used not just to understand black holes, but any region of space that can be described by its boundary. Over the past decade or so, the seemingly crazy idea that space is a kind of hologram has become rather humdrum, a tool of modern physics used in everything from cosmology to condensed matter. “One of the things that happen to scientific ideas is they often go from wild conjecture to reasonable conjecture to working tools,” Susskind said. “It’s gotten routine.”

Holography was concerned with what happens on boundaries, including black hole horizons. That left open the question of what goes on in the interiors, said Susskind, and answers to that “were all over the map.” After all, since no information could ever escape from inside a black hole’s horizon, the laws of physics prevented scientists from ever directly testing what was going on inside.

Then in 2012 Polchinski, along with Ahmed Almheiri, Donald Marolf and James Sully, all of them at the time at Santa Barbara, came up with an insight so startling it basically said to physicists: Hold everything. We know nothing.

The so-called AMPS paper (after its authors’ initials) presented a doozy of an entanglement paradox — one so stark it implied that black holes might not, in effect, even have insides, for a “firewall” just inside the horizon would fry anyone or anything attempting to find out its secrets.

Scaling the Firewall       

Here’s the heart of their argument: If a black hole’s event horizon is a smooth, seemingly ordinary place, as relativity predicts (the authors call this the “no drama” condition), the particles coming out of the black hole must be entangled with particles falling into the black hole. Yet for information not to be lost, the particles coming out of the black hole must also be entangled with particles that left long ago and are now scattered about in a fog of Hawking radiation. That’s one too many kinds of entanglements, the AMPS authors realized. One of them would have to go.

The reason is that maximum entanglements have to be monogamous, existing between just two particles. Two maximum entanglements at once — quantum polygamy — simply cannot happen, which suggests that the smooth, continuous space-time inside the throats of black holes can’t exist. A break in the entanglement at the horizon would imply a discontinuity in space, a pileup of energy: the “firewall.”
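The monogamy step can be made precise with a short, standard argument (sketched here; it is a general fact about entanglement, not something specific to the AMPS paper): if the outgoing particle B is maximally entangled with its infalling partner A, the pair AB is in a pure state, and any third system C (here the early Hawking radiation) must factor out:

$$ \rho_{ABC} = \rho_{AB} \otimes \rho_C \quad\Longrightarrow\quad I(B\!:\!C) = 0, $$

so B can share no entanglement, and in fact no correlation at all, with the radiation that left long ago. Insisting on both entanglements at once is exactly the contradiction AMPS exposed.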

Video: David Kaplan explores one of the biggest mysteries in physics: the apparent contradiction between general relativity and quantum mechanics. Filming by Petr Stepanek. Editing and motion graphics by MK12. Music by Steven Gutheinz.

The AMPS paper became a “real trigger,” said Stephen Shenker, a physicist at Stanford, and “cast in sharp relief” just how much was not understood. Of course, physicists love such paradoxes, because they’re fertile ground for discovery.

Both Susskind and Maldacena got on it immediately. They’d been thinking about entanglement and wormholes, and both were inspired by the work of Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, who had conducted a pivotal thought experiment suggesting that entanglement and space-time are intimately related.

“Then one day,” said Susskind, “Juan sent me a very cryptic message that contained the equation ER = EPR. I instantly saw what he was getting at, and from there we went back and forth expanding the idea.”

Their investigations, which they presented in a 2013 paper, “Cool Horizons for Entangled Black Holes,” argued for a kind of entanglement they said the AMPS authors had overlooked — the one that “hooks space together,” according to Susskind. AMPS assumed that the parts of space inside and outside of the event horizon were independent. But Susskind and Maldacena suggested that, in fact, particles on either side of the border could be connected by a wormhole. The ER = EPR entanglement could “kind of get around the apparent paradox,” said Van Raamsdonk. The paper contained a graphic that some refer to half-jokingly as the “octopus picture” — with multiple wormholes leading from the inside of a black hole to Hawking radiation on the outside.

In other words, there was no need for an entanglement that would create a kink in the smooth surface of the black hole’s throat. The particles still inside the hole would be directly connected to particles that left long ago. No need to pass through the horizon, no need to pass Go. The particles on the inside and the far-out ones could be considered one and the same, Maldacena explained — like me, myself and I. The complex “octopus” wormhole would link the interior of the black hole directly to particles in the long-departed cloud of Hawking radiation.

Holes in the Wormhole

No one is sure yet whether ER = EPR will solve the firewall problem. John Preskill, a physicist at the California Institute of Technology in Pasadena, reminded readers of Quantum Frontiers, the blog for Caltech’s Institute for Quantum Information and Matter, that sometimes physicists rely on their “sense of smell” to sniff out which theories have promise. “At first whiff,” he wrote, “ER = EPR may smell fresh and sweet, but it will have to ripen on the shelf for a while.”

Whatever happens, the correspondence between entangled quantum particles and the geometry of smoothly warped space-time is a “big new insight,” said Shenker. It’s allowed him and his collaborator Douglas Stanford, a researcher at the Institute for Advanced Study, to tackle complex problems in quantum chaos through what Shenker calls “simple geometry that even I can understand.”

To be sure, ER = EPR does not yet apply to just any kind of space, or any kind of entanglement. It takes a special type of entanglement and a special type of wormhole. “Lenny and Juan are completely aware of this,” said Marolf, who recently co-authored a paper describing wormholes with more than two ends. ER = EPR works in very specific situations, he said, but AMPS argues that the firewall presents a much broader challenge.

Like Polchinski and others, Marolf worries that ER = EPR modifies standard quantum mechanics. “A lot of people are really interested in the ER = EPR conjecture,” said Marolf. “But there’s a sense that no one but Lenny and Juan really understand what it is.” Still, “it’s an interesting time to be in the field.”

Why Stephen Hawking’s Black Hole Puzzle Keeps Puzzling


The renowned British physicist, who died at 76, left behind a riddle that could eventually lead his successors to the theory of quantum gravity.
 

The physicist Stephen Hawking in 1979 in Princeton, New Jersey.


The renowned British physicist Stephen Hawking, who died today at 76, was something of a betting man, regularly entering into friendly wagers with his colleagues over key questions in theoretical physics. “I sensed when Stephen and I first met that he would enjoy being treated irreverently,” wrote John Preskill, a physicist at the California Institute of Technology, earlier today on Twitter. “So in the middle of a scientific discussion I could interject, ‘What makes you so sure of that, Mr. Know-It-All?’ knowing that Stephen would respond with his eyes twinkling: ‘Wanna bet?’”

And bet they did. In 1991, Hawking and Kip Thorne bet Preskill that information that falls into a black hole gets destroyed and can never be retrieved. Called the black hole information paradox, this prospect follows from Hawking’s landmark 1974 discovery about black holes — regions of inescapable gravity, where space-time curves steeply toward a central point known as the singularity. Hawking had shown that black holes are not truly black. Quantum uncertainty causes them to radiate a small amount of heat, dubbed “Hawking radiation.” They lose mass in the process and ultimately evaporate away. This evaporation leads to a paradox: Anything that falls into a black hole will seemingly be lost forever, violating “unitarity” — a central principle of quantum mechanics that says the present always preserves information about the past.
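The standard formula behind that evaporation (a reference point, not something from the bet itself) gives a black hole of mass M a temperature inversely proportional to its mass:

$$ T_H = \frac{\hbar c^3}{8\pi G M k_B} \approx 6\times 10^{-8}\ \mathrm{K}\ \times \frac{M_\odot}{M}, $$

so a stellar-mass black hole today is far colder than the cosmic microwave background, and the time to evaporate grows like M³, on the order of 10^67 years for one solar mass. The paradox is about what that trickle of radiation carries, not how fast it flows.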

Hawking and Thorne argued that the radiation emitted by a black hole would be too hopelessly scrambled to retrieve any useful information about what fell into it, even in principle. Preskill bet that information somehow escapes black holes, even though physicists would presumably need a complete theory of quantum gravity to understand the mechanism behind how this could happen.

Physicists thought they resolved the paradox in 2004 with the notion of black hole complementarity. According to this proposal, information that crosses the event horizon of a black hole both reflects back out and passes inside, never to escape. Because no single observer can ever be both inside and outside the black hole’s horizon, no one can witness both situations simultaneously, and no contradiction arises. The argument was sufficient to convince Hawking to concede the bet. During a July 2004 talk in Dublin, Ireland, he presented Preskill with the eighth edition of Total Baseball: The Ultimate Baseball Encyclopedia, “from which information can be retrieved at will.”

Thorne, however, refused to concede, and it seems he was right to do so. In 2012, a new twist on the paradox emerged. Nobody had explained precisely how information would get out of a black hole, and that lack of a specific mechanism inspired Joseph Polchinski and three colleagues to revisit the problem. Conventional wisdom had long held that once someone passed the event horizon, they would slowly be pulled apart by the extreme gravity as they fell toward the singularity. Polchinski and his co-authors argued that instead, in-falling observers would encounter a literal wall of fire at the event horizon, burning up before ever getting near the singularity.

At the heart of the firewall puzzle lies a conflict between three fundamental postulates. The first is the equivalence principle of Albert Einstein’s general theory of relativity: Because there’s no difference between acceleration due to gravity and the acceleration of a rocket, an astronaut named Alice shouldn’t feel anything amiss as she crosses a black hole horizon. The second is unitarity, which implies that information cannot be destroyed. Lastly, there’s locality, which holds that events happening at a particular point in space can only influence nearby points. This means that the laws of physics should work as expected far away from a black hole, even if they break down at some point within the black hole — either at the singularity or at the event horizon.

To resolve the paradox, one of the three postulates must be sacrificed, and nobody can agree on which one should get the axe. The simplest solution is to have the equivalence principle break down at the event horizon, thereby giving rise to a firewall. But several other possible solutions have been proposed in the ensuing years.

Video: David Kaplan explores one of the biggest mysteries in physics: the apparent contradiction between general relativity and quantum mechanics. Filming by Petr Stepanek. Editing and motion graphics by MK12.

For instance, a few years before the firewalls paper, Samir Mathur, a string theorist at Ohio State University, raised similar issues with his notion of black hole fuzzballs. Fuzzballs aren’t empty pits, like traditional black holes. They are packed full of strings (the kind from string theory) and have a surface like a star or planet. They also emit heat in the form of radiation. The spectrum of that radiation, Mathur found, exactly matches the prediction for Hawking radiation. His “fuzzball conjecture” resolves the paradox by declaring it to be an illusion. How can information be lost beyond the event horizon if there is no event horizon?

Hawking himself weighed in on the firewall debate along similar lines by way of a two-page, equation-free paper posted to the scientific preprint site arxiv.org in late January 2014 — a summation of informal remarks he’d made via Skype for a small conference the previous spring. He proposed a rethinking of the event horizon. Instead of a definite line in the sky from which nothing could escape, he suggested there could be an “apparent horizon.” Information is only temporarily confined behind that horizon. The information eventually escapes, but in such a scrambled form that it can never be interpreted. He likened the task to weather forecasting: “One can’t predict the weather more than a few days in advance.”

In 2013, Leonard Susskind and Juan Maldacena, theoretical physicists at Stanford University and the Institute for Advanced Study, respectively, made a radical attempt to preserve locality that they dubbed “ER = EPR.” According to this idea, maybe what we think are faraway points in space-time aren’t that far away after all. Perhaps entanglement creates invisible microscopic wormholes connecting seemingly distant points. Shaped a bit like an octopus, such a wormhole would link the interior of the black hole directly to the Hawking radiation, so the particles still inside the hole would be directly connected to particles that escaped long ago, avoiding the need for information to pass through the event horizon.

Physicists have yet to reach a consensus on any one of these proposed solutions. It’s a tribute to Hawking’s unique genius that they continue to argue about the black hole information paradox so many decades after his work first suggested it.
