Testing the multiverse hypothesis requires measuring whether our universe is statistically typical among the infinite variety of universes. But infinity does a number on statistics.
If modern physics is to be believed, we shouldn’t be here. The meager dose of energy infusing empty space, which at higher levels would rip the cosmos apart, is a trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion times tinier than theory predicts. And the minuscule mass of the Higgs boson, whose relative smallness allows big structures such as galaxies and humans to form, falls roughly 100 quadrillion times short of expectations. Dialing up either of these constants even a little would render the universe unlivable.
To account for our incredible luck, leading cosmologists like Alan Guth and Stephen Hawking envision our universe as one of countless bubbles in an eternally frothing sea. This infinite “multiverse” would contain universes with constants tuned to any and all possible values, including some outliers, like ours, that have just the right properties to support life. In this scenario, our good luck is inevitable: A peculiar, life-friendly bubble is all we could expect to observe.
Many physicists loathe the multiverse hypothesis, deeming it a cop-out of infinite proportions. But as attempts to paint our universe as an inevitable, self-contained structure falter, the multiverse camp is growing.
The problem remains how to test the hypothesis. Proponents of the multiverse idea must show that, among the rare universes that support life, ours is statistically typical. The exact dose of vacuum energy, the precise mass of our underweight Higgs boson, and other anomalies must have high odds within the subset of habitable universes. If the properties of this universe still seem atypical even in the habitable subset, then the multiverse explanation fails.
But infinity sabotages statistical analysis. In an eternally inflating multiverse, where any bubble that can form does so infinitely many times, how do you measure “typical”?
Guth, a professor of physics at the Massachusetts Institute of Technology, resorts to freaks of nature to pose this “measure problem.” “In a single universe, cows born with two heads are rarer than cows born with one head,” he said. But in an infinitely branching multiverse, “there are an infinite number of one-headed cows and an infinite number of two-headed cows. What happens to the ratio?”
For years, the inability to calculate ratios of infinite quantities has prevented the multiverse hypothesis from making testable predictions about the properties of this universe. For the hypothesis to mature into a full-fledged theory of physics, the two-headed-cow question demands an answer.
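Guth's cow puzzle is, at bottom, a statement about counting. The toy sketch below (illustrative only, not from the article) takes a single infinite herd containing infinitely many cows of each kind and computes the one-headed-to-two-headed ratio under a finite cutoff. The answer depends entirely on the order in which the cows are enumerated, which is exactly why a choice of "measure" is needed:

```python
def head_ratio(is_two_headed, n):
    """Ratio of one-headed to two-headed cows among the
    first n cows of a given enumeration."""
    two = sum(1 for i in range(n) if is_two_headed(i))
    one = n - two
    return one / two

# Enumeration A: a two-headed cow appears once every 1001 births.
ratio_a = head_ratio(lambda i: i % 1001 == 0, 1_001_000)

# Enumeration B: the *same* infinite collection reordered so the
# two kinds alternate.  Both kinds still occur infinitely often.
ratio_b = head_ratio(lambda i: i % 2 == 0, 1_001_000)

print(ratio_a)  # 1000.0
print(ratio_b)  # 1.0
```

Because both enumerations contain infinitely many cows of each kind, no cutoff-free answer exists; any ratio at all can be produced by reordering.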
As a junior researcher trying to explain the smoothness and flatness of the universe, Guth proposed in 1980 that a split second of exponential growth may have occurred at the start of the Big Bang. This would have ironed out any spatial variations as if they were wrinkles on the surface of an inflating balloon. The inflation hypothesis, though it is still being tested, gels with all available astrophysical data and is widely accepted by physicists.
In the years that followed, Andrei Linde, now of Stanford University, Guth and other cosmologists reasoned that inflation would almost inevitably beget an infinite number of universes. “Once inflation starts, it never stops completely,” Guth explained. In a region where it does stop — through a kind of decay that settles it into a stable state — space and time gently swell into a universe like ours. Everywhere else, space-time continues to expand exponentially, bubbling forever.
Each disconnected space-time bubble grows under the influence of different initial conditions tied to decays of varying amounts of energy. Some bubbles expand and then contract, while others spawn endless streams of daughter universes. The scientists presumed that the eternally inflating multiverse would everywhere obey the conservation of energy, the speed of light, thermodynamics, general relativity and quantum mechanics. But the values of the constants coordinated by these laws were likely to vary randomly from bubble to bubble.
Paul Steinhardt, a theoretical physicist at Princeton University and one of the early contributors to the theory of eternal inflation, saw the multiverse as a “fatal flaw” in the reasoning he had helped advance, and he remains stridently anti-multiverse today. “Our universe has a simple, natural structure,” he said in September. “The multiverse idea is baroque, unnatural, untestable and, in the end, dangerous to science and society.”
Steinhardt and other critics believe the multiverse hypothesis leads science away from uniquely explaining the properties of nature. When deep questions about matter, space and time have been elegantly answered over the past century through ever more powerful theories, deeming the universe’s remaining unexplained properties “random” feels, to them, like giving up. On the other hand, randomness has sometimes been the answer to scientific questions, as when early astronomers searched in vain for order in the solar system’s haphazard planetary orbits. As inflationary cosmology gains acceptance, more physicists are conceding that a multiverse of random universes might exist, just as there is a cosmos full of star systems arranged by chance and chaos.
“When I heard about eternal inflation in 1986, it made me sick to my stomach,” said John Donoghue, a physicist at the University of Massachusetts, Amherst. “But when I thought about it more, it made sense.”
One for the Multiverse
The multiverse hypothesis gained considerable traction in 1987, when the Nobel laureate Steven Weinberg used it to predict the infinitesimal amount of energy infusing the vacuum of empty space, a number known as the cosmological constant, denoted by the Greek letter Λ (lambda). Vacuum energy is gravitationally repulsive, meaning it causes space-time to stretch apart. Consequently, a universe with a positive value for Λ expands — faster and faster, in fact, as the amount of empty space grows — toward a future as a matter-free void. Universes with negative Λ eventually contract in a “big crunch.”
Physicists had not yet measured the value of Λ in our universe in 1987, but the relatively sedate rate of cosmic expansion indicated that its value was close to zero. This flew in the face of quantum mechanical calculations suggesting Λ should be enormous, implying a density of vacuum energy so large it would tear atoms apart. Somehow, it seemed our universe was greatly diluted.
Weinberg turned to a concept called anthropic selection in response to “the continued failure to find a microscopic explanation of the smallness of the cosmological constant,” as he wrote in Physical Review Letters (PRL). He posited that life forms, from which observers of universes are drawn, require the existence of galaxies. The only values of Λ that can be observed are therefore those that allow the universe to expand slowly enough for matter to clump together into galaxies. In his PRL paper, Weinberg reported the maximum possible value of Λ in a universe that has galaxies. It was a multiverse-generated prediction of the most likely density of vacuum energy to be observed, given that observers must exist to observe it.
A decade later, astronomers discovered that the expansion of the cosmos was accelerating at a rate that pegged Λ at 10⁻¹²³ (in units of “Planck energy density”). A value of exactly zero might have implied an unknown symmetry in the laws of quantum mechanics — an explanation without a multiverse. But this absurdly tiny value of the cosmological constant appeared random. And it fell strikingly close to Weinberg’s prediction.
“It was a tremendous success, and very influential,” said Matthew Kleban, a multiverse theorist at New York University. The prediction seemed to show that the multiverse could have explanatory power after all.
Close on the heels of Weinberg’s success, Donoghue and colleagues used the same anthropic approach to calculate the range of possible values for the mass of the Higgs boson. The Higgs doles out mass to other elementary particles, and these interactions dial its mass up or down in a feedback effect. This feedback would be expected to yield a mass for the Higgs that is far larger than its observed value, making its mass appear to have been reduced by accidental cancellations between the effects of all the individual particles. Donoghue’s group argued that this accidentally tiny Higgs was to be expected, given anthropic selection: If the Higgs boson were just five times heavier, then complex, life-engendering elements like carbon could not arise. Thus, a universe with much heavier Higgs particles could never be observed.
Until recently, the leading explanation for the smallness of the Higgs mass was a theory called supersymmetry, but the simplest versions of the theory have failed extensive tests at the Large Hadron Collider near Geneva. Although new alternatives have been proposed, many particle physicists who considered the multiverse unscientific just a few years ago are now grudgingly opening up to the idea. “I wish it would go away,” said Nathan Seiberg, a professor of physics at the Institute for Advanced Study in Princeton, N.J., who contributed to supersymmetry in the 1980s. “But you have to face the facts.”
However, even as the impetus for a predictive multiverse theory has increased, researchers have realized that the predictions by Weinberg and others were too naive. Weinberg estimated the largest Λ compatible with the formation of galaxies, but that was before astronomers discovered mini “dwarf galaxies” that could form in universes in which Λ is 1,000 times larger. These more prevalent universes can also contain observers, making our universe seem atypical among observable universes. On the other hand, dwarf galaxies presumably contain fewer observers than full-size ones, and universes with only dwarf galaxies would therefore have lower odds of being observed.
Researchers realized it wasn’t enough to differentiate between observable and unobservable bubbles. To accurately predict the expected properties of our universe, they needed to weight the likelihood of observing certain bubbles according to the number of observers they contained. Enter the measure problem.
Measuring the Multiverse
Guth and other scientists sought a measure to gauge the odds of observing different kinds of universes. This would allow them to make predictions about the assortment of fundamental constants in this universe, all of which should have reasonably high odds of being observed. The scientists’ early attempts involved constructing mathematical models of eternal inflation and calculating the statistical distribution of observable bubbles based on how many of each type arose in a given time interval. But with time serving as the measure, the final tally of universes at the end depended on how the scientists defined time in the first place.
“People were getting wildly different answers depending on which random cutoff rule they chose,” said Raphael Bousso, a theoretical physicist at the University of California, Berkeley.
Alex Vilenkin, director of the Institute of Cosmology at Tufts University in Medford, Mass., has proposed and discarded several multiverse measures during the last two decades, looking for one that would transcend his arbitrary assumptions. Two years ago, he and Jaume Garriga of the University of Barcelona in Spain proposed a measure in the form of an immortal “watcher” who soars through the multiverse counting events, such as the number of observers. The frequencies of events are then converted to probabilities, thus solving the measure problem. But the proposal assumes the impossible up front: The watcher miraculously survives crunching bubbles, like an avatar in a video game dying and bouncing back to life.
In 2011, Guth and Vitaly Vanchurin, now of the University of Minnesota Duluth, imagined a finite “sample space,” a randomly selected slice of space-time within the infinite multiverse. As the sample space expands, approaching but never reaching infinite size, it cuts through bubble universes encountering events, such as proton formations, star formations or intergalactic wars. The events are logged in a hypothetical databank until the sampling ends. The relative frequency of different events translates into probabilities and thus provides a predictive power. “Anything that can happen will happen, but not with equal probability,” Guth said.
Still, beyond the strangeness of immortal watchers and imaginary databanks, both of these approaches necessitate arbitrary choices about which events should serve as proxies for life, and thus for observations of universes to be counted and converted into probabilities. Protons seem necessary for life; space wars do not — but do observers require stars, or is this too limited a concept of life? With either measure, choices can be made so that the odds stack in favor of our inhabiting a universe like ours. The degree of speculation raises doubts.
The Causal Diamond
Bousso first encountered the measure problem in the 1990s as a graduate student working with Stephen Hawking, the doyen of black hole physics. Black holes prove there is no such thing as an omniscient measurer, because someone inside a black hole’s “event horizon,” beyond which no light can escape, has access to different information and events from someone outside, and vice versa. Bousso and other black hole specialists came to think such a rule “must be more general,” he said, precluding solutions to the measure problem along the lines of the immortal watcher. “Physics is universal, so we’ve got to formulate what an observer can, in principle, measure.”
This insight led Bousso to develop a multiverse measure that removes infinity from the equation altogether. Instead of looking at all of space-time, he homes in on a finite patch of the multiverse called a “causal diamond,” representing the largest swath accessible to a single observer traveling from the beginning of time to the end of time. The finite boundaries of a causal diamond are formed by the intersection of two cones of light, like the dispersing rays from a pair of flashlights pointed toward each other in the dark. One cone points outward from the moment matter was created after a Big Bang — the earliest conceivable birth of an observer — and the other aims backward from the farthest reach of our future horizon, the moment when the causal diamond becomes an empty, timeless void and the observer can no longer access information linking cause to effect.
Bousso is not interested in what goes on outside the causal diamond, where infinitely variable, endlessly recursive events are unknowable, in the same way that information about what goes on outside a black hole cannot be accessed by the poor soul trapped inside. If one accepts that the finite diamond, “being all anyone can ever measure, is also all there is,” Bousso said, “then there is indeed no longer a measure problem.”
In 2006, Bousso realized that his causal-diamond measure lent itself to an evenhanded way of predicting the expected value of the cosmological constant. Causal diamonds with smaller values of Λ would produce more entropy — a quantity related to disorder, or degradation of energy — and Bousso postulated that entropy could serve as a proxy for complexity and thus for the presence of observers. Unlike other ways of counting observers, entropy can be calculated using trusted thermodynamic equations. With this approach, Bousso said, “comparing universes is no more exotic than comparing pools of water to roomfuls of air.”
Using astrophysical data, Bousso and his collaborators Roni Harnik, Graham Kribs and Gilad Perez calculated the overall rate of entropy production in our universe, which primarily comes from light scattering off cosmic dust. The calculation predicted a statistical range of expected values of Λ. The known value, 10⁻¹²³, rests just left of the median. “We honestly didn’t see it coming,” Bousso said. “It’s really nice, because the prediction is very robust.”
Bousso and his collaborators’ causal-diamond measure has now racked up a number of successes. It offers a solution to a mystery of cosmology called the “why now?” problem, which asks why we happen to live at a time when the effects of matter and vacuum energy are comparable, so that the expansion of the universe recently switched from slowing down (signifying a matter-dominated epoch) to speeding up (a vacuum energy-dominated epoch). Bousso’s theory suggests it is only natural that we find ourselves at this juncture. The most entropy is produced, and therefore the most observers exist, when universes contain equal parts vacuum energy and matter.
In 2010 Harnik and Bousso used their idea to explain the flatness of the universe and the amount of infrared radiation emitted by cosmic dust. Last year, Bousso and his Berkeley colleague Lawrence Hall reported that observers made of protons and neutrons, like us, will live in universes where the amount of ordinary matter and dark matter are comparable, as is the case here.
“Right now the causal patch looks really good,” Bousso said. “A lot of things work out unexpectedly well, and I do not know of other measures that come anywhere close to reproducing these successes or featuring comparable successes.”
The causal-diamond measure falls short in a few ways, however. It does not gauge the probabilities of universes with negative values of the cosmological constant. And its predictions depend sensitively on assumptions about the early universe, at the inception of the future-pointing light cone. But researchers in the field recognize its promise. By sidestepping the infinities underlying the measure problem, the causal diamond “is an oasis of finitude into which we can sink our teeth,” said Andreas Albrecht, a theoretical physicist at the University of California, Davis, and one of the early architects of inflation.
Kleban, who like Bousso began his career as a black hole specialist, said the idea of a causal patch such as an entropy-producing diamond is “bound to be an ingredient of the final solution to the measure problem.” He, Guth, Vilenkin and many other physicists consider it a powerful and compelling approach, but they continue to work on their own measures of the multiverse. Few consider the problem to be solved.
Every measure involves many assumptions, beyond merely that the multiverse exists. For example, predictions of the expected range of constants like Λ and the Higgs mass rest on the additional assumption that bubbles tend to have larger constants. Clearly, this is a work in progress.
“The multiverse is regarded either as an open question or off the wall,” Guth said. “But ultimately, if the multiverse does become a standard part of science, it will be on the basis that it’s the most plausible explanation of the fine-tunings that we see in nature.”
Perhaps these multiverse theorists have chosen a Sisyphean task. Perhaps they will never settle the two-headed-cow question. Some researchers are taking a different route to testing the multiverse. Rather than rifle through the infinite possibilities of the equations, they are scanning the finite sky for the ultimate Hail Mary pass — the faint tremor from an ancient bubble collision.
A bold new idea aims to link two famously discordant descriptions of nature. In doing so, it may also reveal how space-time owes its existence to the spooky connections of quantum information.
Like initials carved in a tree, ER = EPR, as the new idea is known, is a shorthand that joins two ideas proposed by Einstein in 1935. One involved the paradox implied by what he called “spooky action at a distance” between quantum particles (the EPR paradox, named for its authors, Einstein, Boris Podolsky and Nathan Rosen). The other showed how two black holes could be connected across far reaches of space through “wormholes” (ER, for Einstein-Rosen bridges). At the time that Einstein put forth these ideas — and for most of the eight decades since — they were thought to be entirely unrelated.
But if ER = EPR is correct, the ideas aren’t disconnected — they’re two manifestations of the same thing. And this underlying connectedness would form the foundation of all space-time. Quantum entanglement — the action at a distance that so troubled Einstein — could be creating the “spatial connectivity” that “sews space together,” according to Leonard Susskind, a physicist at Stanford University and one of the idea’s main architects. Without these connections, all of space would “atomize,” according to Juan Maldacena, a physicist at the Institute for Advanced Study in Princeton, N.J., who developed the idea together with Susskind. “In other words, the solid and reliable structure of space-time is due to the ghostly features of entanglement,” he said. What’s more, ER = EPR has the potential to address how gravity fits together with quantum mechanics.
Not everyone’s buying it, of course (nor should they; the idea is in “its infancy,” said Susskind). Joe Polchinski, a researcher at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, whose own stunning paradox about firewalls in the throats of black holes triggered the latest advances, is cautious, but intrigued. “I don’t know where it’s going,” he said, “but it’s a fun time right now.”
The Black Hole Wars
The road that led to ER = EPR is a Möbius strip of tangled twists and turns that folds back on itself, like a drawing by M.C. Escher.
A fair place to start might be quantum entanglement. If two quantum particles are entangled, they become, in effect, two parts of a single unit. What happens to one entangled particle happens to the other, no matter how far apart they are.
Maldacena sometimes uses a pair of gloves as an analogy: If you come upon the right-handed glove, you instantaneously know the other is left-handed. There’s nothing spooky about that. But in the quantum version, both gloves are actually left- and right-handed (and everything in between) up until the moment you observe them. Spookier still, the left-handed glove doesn’t become left until you observe the right-handed one — at which moment both instantly gain a definite handedness.
Entanglement played a key role in Stephen Hawking’s 1974 discovery that black holes could evaporate. This, too, involved entangled pairs of particles. Throughout space, short-lived “virtual” particles of matter and anti-matter continually pop into and out of existence. Hawking realized that if one particle fell into a black hole and the other escaped, the hole would emit radiation, glowing like a dying ember. Given enough time, the hole would evaporate into nothing, raising the question of what happened to the information content of the stuff that fell into it.
But the rules of quantum mechanics forbid the complete destruction of information. (Hopelessly scrambling information is another story, which is why documents can be burned and hard drives smashed. There’s nothing in the laws of physics that prevents the information lost in a book’s smoke and ashes from being reconstructed, at least in principle.) So the question became: Would the information that originally went into the black hole just get scrambled? Or would it be truly lost? The arguments set off what Susskind called the “black hole wars,” which have generated enough stories to fill many books. (Susskind’s was subtitled “My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics.”)
Eventually Susskind — in a discovery that shocked even him — realized (with Gerard ’t Hooft) that all the information that fell down the hole was actually trapped on the black hole’s two-dimensional event horizon, the surface that marks the point of no return. The horizon encoded everything inside, like a hologram. It was as if the bits needed to re-create your house and everything in it could fit on the walls. The information wasn’t lost — it was scrambled and stored out of reach.
Susskind continued to work on the idea with Maldacena, whom Susskind calls “the master,” and others. Holography began to be used not just to understand black holes, but any region of space that can be described by its boundary. Over the past decade or so, the seemingly crazy idea that space is a kind of hologram has become rather humdrum, a tool of modern physics used in everything from cosmology to condensed matter. “One of the things that happen to scientific ideas is they often go from wild conjecture to reasonable conjecture to working tools,” Susskind said. “It’s gotten routine.”
Holography was concerned with what happens on boundaries, including black hole horizons. That left open the question of what goes on in the interiors, said Susskind, and answers to that “were all over the map.” After all, since no information could ever escape from inside a black hole’s horizon, the laws of physics prevented scientists from ever directly testing what was going on inside.
Then in 2012 Polchinski, along with Ahmed Almheiri, Donald Marolf and James Sully, all of them at the time at Santa Barbara, came up with an insight so startling it basically said to physicists: Hold everything. We know nothing.
The so-called AMPS paper (after its authors’ initials) presented a doozy of an entanglement paradox — one so stark it implied that black holes might not, in effect, even have insides, for a “firewall” just inside the horizon would fry anyone or anything attempting to find out its secrets.
Scaling the Firewall
Here’s the heart of their argument: If a black hole’s event horizon is a smooth, seemingly ordinary place, as relativity predicts (the authors call this the “no drama” condition), the particles coming out of the black hole must be entangled with particles falling into the black hole. Yet for information not to be lost, the particles coming out of the black hole must also be entangled with particles that left long ago and are now scattered about in a fog of Hawking radiation. That’s one too many kinds of entanglements, the AMPS authors realized. One of them would have to go.
The reason is that maximum entanglements have to be monogamous, existing between just two particles. Two maximum entanglements at once — quantum polygamy — simply cannot happen, which suggests that the smooth, continuous space-time inside the throats of black holes can’t exist. A break in the entanglement at the horizon would imply a discontinuity in space, a pileup of energy: the “firewall.”
The AMPS paper became a “real trigger,” said Stephen Shenker, a physicist at Stanford, and “cast in sharp relief” just how much was not understood. Of course, physicists love such paradoxes, because they’re fertile ground for discovery.
Both Susskind and Maldacena got on it immediately. They’d been thinking about entanglement and wormholes, and both were inspired by the work of Mark Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, who had conducted a pivotal thought experiment suggesting that entanglement and space-time are intimately related.
“Then one day,” said Susskind, “Juan sent me a very cryptic message that contained the equation ER = EPR. I instantly saw what he was getting at, and from there we went back and forth expanding the idea.”
Their investigations, which they presented in a 2013 paper, “Cool Horizons for Entangled Black Holes,” argued for a kind of entanglement they said the AMPS authors had overlooked — the one that “hooks space together,” according to Susskind. AMPS assumed that the parts of space inside and outside of the event horizon were independent. But Susskind and Maldacena suggest that, in fact, particles on either side of the border could be connected by a wormhole. The ER = EPR entanglement could “kind of get around the apparent paradox,” said Van Raamsdonk. The paper contained a graphic that some refer to half-jokingly as the “octopus picture” — with multiple wormholes leading from the inside of a black hole to Hawking radiation on the outside.
In other words, there was no need for an entanglement that would create a kink in the smooth surface of the black hole’s throat. The particles still inside the hole would be directly connected to particles that left long ago. No need to pass through the horizon, no need to pass Go. The particles on the inside and the far-out ones could be considered one and the same, Maldacena explained — like me, myself and I. The complex “octopus” wormhole would link the interior of the black hole directly to particles in the long-departed cloud of Hawking radiation.
Holes in the Wormhole
No one is sure yet whether ER = EPR will solve the firewall problem. John Preskill, a physicist at the California Institute of Technology in Pasadena, reminded readers of Quantum Frontiers, the blog for Caltech’s Institute for Quantum Information and Matter, that sometimes physicists rely on their “sense of smell” to sniff out which theories have promise. “At first whiff,” he wrote, “ER = EPR may smell fresh and sweet, but it will have to ripen on the shelf for a while.”
To be sure, ER = EPR does not yet apply to just any kind of space, or any kind of entanglement. It takes a special type of entanglement and a special type of wormhole. “Lenny and Juan are completely aware of this,” said Marolf, who recently co-authored a paper describing wormholes with more than two ends. ER = EPR works in very specific situations, he said, but AMPS argues that the firewall presents a much broader challenge.
Like Polchinski and others, Marolf worries that ER = EPR modifies standard quantum mechanics. “A lot of people are really interested in the ER = EPR conjecture,” said Marolf. “But there’s a sense that no one but Lenny and Juan really understand what it is.” Still, “it’s an interesting time to be in the field.”
The renowned British physicist, who died at 76, left behind a riddle that could eventually lead his successors to the theory of quantum gravity.
The renowned British physicist Stephen Hawking, who died today at 76, was something of a betting man, regularly entering into friendly wagers with his colleagues over key questions in theoretical physics. “I sensed when Stephen and I first met that he would enjoy being treated irreverently,” wrote John Preskill, a physicist at the California Institute of Technology, earlier today on Twitter. “So in the middle of a scientific discussion I could interject, ‘What makes you so sure of that, Mr. Know-It-All?’ knowing that Stephen would respond with his eyes twinkling: ‘Wanna bet?’”
And bet they did. In 1991, Hawking and Kip Thorne bet Preskill that information that falls into a black hole gets destroyed and can never be retrieved. Called the black hole information paradox, this prospect follows from Hawking’s landmark 1974 discovery about black holes — regions of inescapable gravity, where space-time curves steeply toward a central point known as the singularity. Hawking had shown that black holes are not truly black. Quantum uncertainty causes them to radiate a small amount of heat, dubbed “Hawking radiation.” They lose mass in the process and ultimately evaporate away. This evaporation leads to a paradox: Anything that falls into a black hole will seemingly be lost forever, violating “unitarity” — a central principle of quantum mechanics that says the present always preserves information about the past.
Hawking and Thorne argued that the radiation emitted by a black hole would be too hopelessly scrambled to retrieve any useful information about what fell into it, even in principle. Preskill bet that information somehow escapes black holes, even though physicists would presumably need a complete theory of quantum gravity to understand the mechanism behind how this could happen.
Physicists thought they had resolved the paradox with the notion of black hole complementarity, developed in the 1990s. According to this proposal, information that crosses the event horizon of a black hole both reflects back out and passes inside, never to escape. Because no single observer can ever be both inside and outside the black hole’s horizon, no one can witness both situations simultaneously, and no contradiction arises. The argument was sufficient to convince Hawking to concede the bet. During a July 2004 talk in Dublin, Ireland, he presented Preskill with the eighth edition of Total Baseball: The Ultimate Baseball Encyclopedia, “from which information can be retrieved at will.”
Thorne, however, refused to concede, and it seems he was right to do so. In 2012, a new twist on the paradox emerged. Nobody had explained precisely how information would get out of a black hole, and that lack of a specific mechanism inspired Joseph Polchinski and three colleagues to revisit the problem. Conventional wisdom had long held that once someone passed the event horizon, they would slowly be pulled apart by the extreme gravity as they fell toward the singularity. Polchinski and his co-authors argued that instead, in-falling observers would encounter a literal wall of fire at the event horizon, burning up before ever getting near the singularity.
At the heart of the firewall puzzle lies a conflict between three fundamental postulates. The first is the equivalence principle of Albert Einstein’s general theory of relativity: Because there’s no difference between acceleration due to gravity and the acceleration of a rocket, an astronaut named Alice shouldn’t feel anything amiss as she crosses a black hole horizon. The second is unitarity, which implies that information cannot be destroyed. Lastly, there’s locality, which holds that events happening at a particular point in space can only influence nearby points. This means that the laws of physics should work as expected far away from a black hole, even if they break down at some point within the black hole — either at the singularity or at the event horizon.
To resolve the paradox, one of the three postulates must be sacrificed, and nobody can agree on which one should get the axe. The simplest solution is to have the equivalence principle break down at the event horizon, thereby giving rise to a firewall. But several other possible solutions have been proposed in the ensuing years.
For instance, a few years before the firewalls paper, Samir Mathur, a string theorist at Ohio State University, raised similar issues with his notion of black hole fuzzballs. Fuzzballs aren’t empty pits, like traditional black holes. They are packed full of strings (the kind from string theory) and have a surface like a star or planet. They also emit heat in the form of radiation. The spectrum of that radiation, Mathur found, exactly matches the prediction for Hawking radiation. His “fuzzball conjecture” resolves the paradox by declaring it to be an illusion. How can information be lost beyond the event horizon if there is no event horizon?
Hawking himself weighed in on the firewall debate along similar lines by way of a two-page, equation-free paper posted to the scientific preprint site arxiv.org in late January 2014 — a summation of informal remarks he’d made via Skype for a small conference the previous spring. He proposed a rethinking of the event horizon. Instead of a definite line in the sky from which nothing could escape, he suggested there could be an “apparent horizon.” Information is only temporarily confined behind that horizon. The information eventually escapes, but in such a scrambled form that it can never be interpreted. He likened the task to weather forecasting: “One can’t predict the weather more than a few days in advance.”
In 2013, Leonard Susskind and Juan Maldacena, theoretical physicists at Stanford University and the Institute for Advanced Study, respectively, made a radical attempt to preserve locality that they dubbed “ER = EPR.” According to this idea, maybe what we think are faraway points in space-time aren’t that far away after all. Perhaps entanglement creates invisible microscopic wormholes connecting seemingly distant points. Shaped a bit like an octopus, such a wormhole would link the interior of the black hole directly to the Hawking radiation, so the particles still inside the hole would be directly connected to particles that escaped long ago, avoiding the need for information to pass through the event horizon.
Physicists have yet to reach a consensus on any one of these proposed solutions. It’s a tribute to Hawking’s unique genius that they continue to argue about the black hole information paradox so many decades after his work first suggested it.
We might have to add a brand new category of star to the textbooks: an advanced mathematical model has revealed that a certain ultracompact star configuration could in fact exist – one scientists had previously thought impossible.
The calculations are the work of Raúl Carballo-Rubio from the International School for Advanced Studies in Italy, and describe a hypothesis where a massive star doesn’t follow the usual instructions laid down by astrophysics.
“The novelty in this analysis is that, for the first time, all these ingredients have been assembled together in a fully consistent model,” says Carballo-Rubio.
“Moreover, it has been shown that there exist new stellar configurations, and that these can be described in a surprisingly simple manner.”
Due to the push and pull of gigantic forces, massive stars collapse under their own weight when they run out of fuel to burn. They then either explode as supernovae and become neutron stars, or collapse completely into a black hole, depending on their mass.
There’s a particular mass threshold at which the dying star goes one way or another.
But what if extra quantum mechanical forces were at play? That’s the question Carballo-Rubio is asking, and he suggests the rules of quantum mechanics would create a different set of thresholds or equilibriums at the end of a massive star’s life.
Thanks to quantum vacuum polarisation, we’d be left with something that would look like a black hole while behaving differently, according to the new model. These new types of stars have been dubbed “semiclassical relativistic stars” because they are the result of both classical and quantum physics.
One of the differences would be that the star would be horizonless – like another theoretical star made possible by quantum physics, the gravastar. There wouldn’t be the same ‘point of no return’ for light and matter as there is around a black hole.
The next step is to see if we can actually spot any of them – or rather spot any of the ripples they create through the rest of space. One possibility is that these strange types of stars wouldn’t exist for very long at all.
“It is not clear yet whether these configurations can be dynamically realised in astrophysical scenarios, or how long would they last if this is the case,” says Carballo-Rubio.
Interest in this field of astrophysics has been boosted by the progress scientists have been making in detecting gravitational waves, and it’s because of that work that it might be possible to find these variations on black holes.
The observatories and instruments coming online in the next few years will give scientists the chance to put this intriguing hypothesis to the test.
“If there are very dense and ultracompact stars in the Universe, similar to black holes but with no horizons, it should be possible to detect them in the next decades,” says Carballo-Rubio.
A new paper uses the Schrödinger equation to describe debris disks around stars and black holes—and provides an object lesson about what “quantum” really means
Researchers who want to predict the behavior of systems governed by quantum mechanics—an electron in an atom, say, or a photon of light traveling through space—typically turn to the Schrödinger equation. Devised by Austrian physicist Erwin Schrödinger in 1925, it describes subatomic particles and how they may display wavelike properties such as interference. It contains the essence of all that appears strange and counterintuitive about the quantum world.
But it seems the Schrödinger equation is not confined to that realm. In a paper just published in Monthly Notices of the Royal Astronomical Society, planetary scientist Konstantin Batygin of the California Institute of Technology claims this equation can also be used to understand the emergence and behavior of self-gravitating astrophysical disks. That is, objects such as the rings of the worlds Saturn and Uranus or the halos of dust and gas that surround young stars and supply the raw material for the formation of a planetary system or even the accretion disks of debris spiraling into a black hole.
And yet there’s nothing “quantum” about these things at all. They could be anything from tiny dust grains to big chunks of rock the size of asteroids or planets. Nevertheless, Batygin says, the Schrödinger equation supplies a convenient way of calculating what shape such a disk will have, and how stable it will be against buckling or distorting. “This a fascinating approach, synthesizing very old techniques to make a brand-new analysis of a challenging problem,” says astrophysicist Duncan Forgan of the University of Saint Andrews in Scotland, who was not part of the research. “The Schrödinger equation has been so well studied for almost a century that this connection is clearly handy.”
From Classical to Quantum
This equation is so often regarded as the distilled essence of “quantumness” that it is easy to forget what it really represents. In some ways Schrödinger pulled it out of a hat when challenged to come up with a mathematical formula for French physicist Louis de Broglie’s hypothesis that quantum particles could behave like waves. Schrödinger drew on his deep knowledge of classical mechanics, and his equation in many ways resembles those used for ordinary waves. One difference is that in quantum mechanics the energies of “particle–waves” are quantized: confined to discrete values that are multiples of the so-called Planck constant h, first introduced by German physicist Max Planck in 1900.
This relation of the Schrödinger equation to classical waves is already revealed in the way that a variant called the nonlinear Schrödinger equation is commonly used to describe other classical wave systems—for example in optics and even in ocean waves, where it provides a mathematical picture of unusually large and robust “rogue waves.”
But the normal “quantum” version—the linear Schrödinger equation—has not previously turned up in a classical context. Batygin says it does so here because the way he sets up the problem of self-gravitating disks creates a quantity that sets a particular “scale” in the problem, much as h does in quantum systems.
Whether around a young star or a supermassive black hole, the many mutually interacting objects in a self-gravitating debris disk are complicated to describe mathematically. But Batygin uses a simplified model in which the disk’s constituents are smeared and stretched into thin “wires” that loop in concentric ellipses right around the disk. Because the wires interact with one another through gravity, they can exchange orbital angular momentum between them, rather like the transfer of movement between the gear bearings and the axle of a bicycle.
This approach uses ideas developed in the 18th century by the mathematicians Pierre-Simon Laplace and Joseph-Louis Lagrange. Laplace was one of the first to study how a rotating clump of objects can collapse into a disklike shape. In 1796 he proposed our solar system formed from a great cloud of gas and dust spinning around the young sun.
Batygin and others had used this “wire” approximation before, but he decided to look at the extreme case in which the looped wires are made thinner and thinner until they merge into a continuous disk. In that limit he found the equation describing the system is the same as Schrödinger’s, with the disk itself being described by the analog of the wave function that defines the distribution of possible positions of a quantum particle. In effect, the shape of the disk is like the wave function of a quantum particle bouncing around in a cavity with walls at the disk’s inner and outer edges.
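The “particle in a cavity” analogy can be made concrete with a minimal numerical sketch (illustrative only, not Batygin’s disk calculation): discretize the one-dimensional Schrödinger equation with hard walls and check that the lowest eigenvalues match the textbook particle-in-a-box energies E_n = n²π²ħ²/(2mL²).

```python
import numpy as np

# Particle in a box of length L = 1, with hbar = m = 1 (illustrative units).
N = 400                # number of interior grid points
dx = 1.0 / (N + 1)     # grid spacing; psi = 0 at both walls

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 with hard-wall boundaries.
main = np.full(N, 1.0 / dx**2)         # diagonal of -(1/2) * (-2/dx^2)
off = np.full(N - 1, -0.5 / dx**2)     # off-diagonals of -(1/2) * (1/dx^2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Eigenvalues (sorted ascending) are the allowed "mode" energies;
# eigenvectors are the standing-wave mode shapes, like the disk's modes.
energies, modes = np.linalg.eigh(H)

# The lowest modes should approach E_n = (n*pi)^2 / 2 for n = 1, 2, 3, ...
exact = np.array([(n * np.pi)**2 / 2 for n in (1, 2, 3)])
print(energies[:3])  # close to the exact values 4.93, 19.74, 44.41
```

The standing-wave eigenvectors here play the role Batygin assigns to the disk’s bending modes: discrete shapes confined between the inner and outer edges.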
The resulting disk has a series of vibrational “modes,” rather like resonances in a tuning fork, that might be excited by small disturbances—think of a planet-forming stellar disk nudged by a passing star or of a black hole accretion disk in which material is falling into the center unevenly. Batygin deduces the conditions under which a disk will warp in response or, conversely, will behave like a rigid body held fast by its own mutual gravity. This comes down to a matter of timescales, he says. If the angular momentum of the objects orbiting in the disk is transferred from one to another much more rapidly than the perturbation’s duration, the disk will remain rigid. “If on the other hand the self-interaction timescale is long compared with the perturbation timescale, the disk will warp,” he says.
Is “Quantumness” Really So Weird?
When he first saw the Schrödinger equation materialize out of his theoretical analysis, Batygin says he was stunned. “But in retrospect it almost seems obvious to me that it must emerge in this problem,” he adds.
What this means, though, is the Schrödinger equation can itself be derived from classical physics known since the 18th century. It doesn’t depend on “quantumness” at all—although it turns out to be applicable to that case.
That’s not as strange as it might seem. For one thing, science is full of examples of equations devised for one phenomenon turning out to apply to a totally different one, too. Equations concocted to describe a kind of chemical reaction have been applied to the modeling of crime, for example, and very recently a mathematical description of magnets was shown also to describe the fruiting patterns of trees in pistachio orchards.
But doesn’t quantum physics involve a rather uniquely odd sort of behavior? Not really. The Schrödinger equation does not so much describe what quantum particles are actually “doing”; rather, it supplies a way of predicting what might be observed for systems governed by particular wavelike probability laws. In fact, other researchers have already shown that the key phenomena of quantum theory emerge from a generalization of probability theory that could, in principle, have been devised in the 18th century, before there was any inkling that tiny particles behave this way.
The advantage of his approach is its simplicity, Batygin notes. Instead of having to track all the movements of every particle in the disk using complicated computer models (so-called N-body simulations), the disk can be treated as a kind of smooth sheet that evolves over time and oscillates like a drumskin. That makes it, Batygin says, ideal for systems in which the central object is much more massive than the disk, such as protoplanetary disks and the rings of stars orbiting supermassive black holes. It will not work for galactic disks, however, like the spiral that forms our Milky Way.
But Ken Rice of The Royal Observatory in Scotland, who was not involved with the work, says that in the scenario in which the central object is much more massive than the disk, the dominant gravitational influence is the central object. “It’s then not entirely clear how including the disk self-gravity would influence the evolution,” he says. “My simple guess would be that it wouldn’t have much influence, but I might be wrong.” That suggests the chief application of Batygin’s formalism may not be to model a wide range of systems but rather to make models for a narrow range of systems far less computationally expensive than N-body simulations.
Astrophysicist Scott Tremaine of the Institute for Advanced Study in Princeton, N.J., also not part of the study, agrees these equations might be easier to solve than those that describe the self-gravitating rings more precisely. But he says this simplification comes at the cost of neglecting the long reach of gravitational forces, because in the Schrödinger version only interactions between adjacent “wire” rings are taken into account. “It’s a rather drastic simplification of the system that only works for certain cases,” he says, “and won’t provide new insights into these disks for experts.” But he thinks the approach could have useful pedagogical value, not least in showing that the Schrödinger equation “isn’t some magic result just for quantum mechanics, but describes a variety of physical systems.”
But Saint Andrews’s Forgan thinks Batygin’s approach could be particularly useful for modeling black hole accretion disks that are warped by companion stars. “There are a lot of interesting results about binary supermassive black holes with ‘torn’ disks that this may be applicable to,” he says.
Vacuum birefringence has been observed by a team of scientists for the first time ever using the European Southern Observatory’s (ESO) Very Large Telescope (VLT).
The team observed neutron star RX J1856.5-375, which is about 400 light-years from Earth, with just visible light, pushing the limits of existing telescope technology.
A Little Less Strange
Vacuum birefringence is a weird quantum phenomenon that had never before been directly observed. It occurs when a neutron star is surrounded by a magnetic field so intense that it gives rise to a region in empty space where matter randomly appears and vanishes – polarizing the light that passes through.
This polarization of light in a vacuum due to strong magnetic fields was first thought to be possible in the 1930s by physicists Werner Heisenberg and Hans Heinrich Euler as a product of the theory of quantum electrodynamics (QED). The theory describes how light and matter interact.
Now, for the first time ever, this strange quantum effect has been observed by a team of scientists from INAF Milan (Italy) and from the University of Zielona Gora (Poland).
Using the European Southern Observatory’s (ESO) Very Large Telescope (VLT), a research team led by Roberto Mignani observed neutron star RX J1856.5-375, which is about 400 light-years from Earth.
Neutron stars are rather dim, yet they pack more mass than our Sun into a sphere only tens of kilometres across. As such, they have extremely strong magnetic fields permeating their surface and surroundings.
Vacuums are supposedly empty spaces (according to Einstein and Newton, at least) where light can pass through uninhibited and unchanged. But according to QED, space is full of virtual particles continually popping in and out of existence. Very strong magnetic fields, like those surrounding neutron stars, can modify these vacuum regions.
Using the FORS2 instrument on the VLT, the researchers were able to observe the neutron star with just visible light, pushing the limits of existing telescope technology.
Studying VLT data on the star, the researchers measured a significant degree of linear polarization – around 16 percent – which is very likely due to vacuum birefringence in the area surrounding RX J1856.5-375.
“The high linear [polarization] that we measured with the VLT can’t be easily explained by our models unless the vacuum birefringence effects predicted by QED are included,” said Mignani.
Given the limited technology used, Mignani believes that future telescopes can discover more about similar strange quantum effects by studying other neutron stars. “[Polarization] measurements with the next generation of telescopes, such as ESO’s European Extremely Large Telescope (EELT), could play a crucial role in testing QED predictions of vacuum birefringence effects around many more neutron stars,” he said.
“This measurement, made for the first time now in visible light, also paves the way to similar measurements to be carried out at X-ray wavelengths,” researcher Kinwah Wu said.
Researchers have directly observed the scattering electrons behind the shifting patterns of light called pulsating auroras, confirming models of how charged solar winds interact with our planet’s magnetic field.
Dazzling curtains of light that shimmer over Earth’s poles have captured our imagination since prehistoric times, and the fundamental processes behind the eerie glow of the aurora borealis and aurora australis – the northern and southern lights – are fairly well known.
Charged particles, spat out of the Sun by coronal mass ejections and other solar phenomena, wash over our planet in waves. As they hit Earth’s magnetic field, most of the particles are deflected around the globe. Some are funnelled down towards the poles, where they smash into the gases making up our atmosphere and cause them to glow in sheets of dazzling greens, blues, and reds.
Those are typically called active auroras, and are often photographed to make up the gorgeous curtains we put onto calendars and desktop wallpapers.
But pulsating auroras are a little different.
Rather than shimmer as a curtain of light, they grow and fade over tens of seconds like slow lightning. They also tend to form higher up than their active cousins at the poles and closer to the equator, making them harder to study.
This kind of aurora is thought to be caused by sudden rearrangements in the magnetic field lines releasing their stored solar energy, sending showers of electrons crashing into the atmosphere in cycles of brightening called aurora substorms.
“They are characterised by auroral brightening from dusk to midnight, followed by violent motions of distinct auroral arcs that eventually break up, and emerge as diffuse, pulsating auroral patches at dawn,” lead author Satoshi Kasahara from the University of Tokyo explains in their report.
Confirming that specific changes in the magnetic field are truly responsible for these waves of electrons isn’t easy. For one thing, mapping the magnetic field lines with precision requires putting equipment into the right place at the right time in order to track charged particles trapped within them.
While the rearrangements of the magnetic field seem likely, there’s still the question of whether there are enough electrons in these surges to account for the pulsating auroras.
This latest study has now put that question to rest.
The researchers directly observed the scattering of electrons produced by shifts in channelled currents of charged particles, or plasma, called chorus waves.
Electron bursts have been linked with chorus waves before, with previous research spotting electron showers that coincide with the ‘whistling’ tunes of these shifting plasma currents. But now researchers have shown that the resulting eruption of charged particles is intense enough to do the trick.
“The precipitating electron flux was sufficiently intense to generate pulsating aurora,” says Kasahara.
The next step for the researchers is to use the ERG spacecraft to comprehensively analyse the nature of these electron bursts in conjunction with phenomena such as auroras.
These amazing light shows are spectacular to watch, but they also have a darker side.
Those light showers of particles can turn into storms under the right conditions. While they’re harmless enough high overhead, a sufficiently powerful solar storm can cause charged particles to disrupt electronics in satellites and devices closer to the surface.
Just last year the largest flare to erupt from the Sun in over a decade temporarily knocked out high frequency radio and disrupted low-frequency navigation technology.
Getting a grip on what’s between us and the Sun might help us plan better when even bigger storms strike.
Black holes don’t just sit there munching away constantly on the space around them. Eventually they run out of nearby matter and go quiet, lying in wait until a stray bit of gas passes by.
Then a black hole devours again, belching out a giant jet of particles. And now scientists have captured one doing so not once, but twice – the first time this has been observed.
The two burps, occurring within the span of 100,000 years, confirm that supermassive black holes go through cycles of hibernation and activity.
It’s actually not as animalistic as all that, since black holes aren’t living or sentient, but it’s a decent-enough metaphor for the way black holes devour material, drawing it in with their tremendous gravity.
But even though we’re used to thinking that nothing ever comes back out of a black hole, the curious thing is that they don’t retain everything they capture.
When they consume matter such as gas or stars, they also generate a powerful outflow of high-energy particles from material close to the event horizon – matter that hasn’t yet crossed the point of no return.
“Black holes are voracious eaters, but it also turns out they don’t have very good table manners,” said lead researcher Julie Comerford, an astronomer at the University of Colorado Boulder.
“We know a lot of examples of black holes with single burps emanating out, but we discovered a galaxy with a supermassive black hole that has not one but two burps.”
The black hole in question is the supermassive beast at the centre of a galaxy called SDSS J1354+1327, or just J1354 for short. It’s about 800 million light-years from Earth, and it showed up in Chandra data as a very bright point of X-ray emission – the telltale signature of a supermassive black hole millions or even billions of times more massive than our Sun.
The team of researchers compared X-ray data from the Chandra X-ray observatory to visible-light images from the Hubble Space Telescope, and found that the black hole is surrounded by a thick cloud of dust and gas.
“We are seeing this object feast, burp, and nap, and then feast and burp once again, which theory had predicted,” Comerford said. “Fortunately, we happened to observe this galaxy at a time when we could clearly see evidence for both events.”
That evidence consists of two bubbles in the gas – one above and one below the black hole, expulsions of particles following a meal. And they were able to gauge that the two bubbles had occurred at different times.
The southern bubble had expanded 30,000 light-years from the galactic centre, while the northern bubble had expanded just 3,000 light-years from the galactic centre. These are known as Fermi bubbles, and they are usually seen after a black hole feeding event.
From the movement speed of these bubbles, the team was able to work out they occurred roughly 100,000 years apart.
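That estimate is simple kinematics: if both bubbles expand at roughly the same speed, the gap between feeding events is the difference in distance divided by that speed. A toy version (the 0.27c outflow speed below is an illustrative assumption chosen to reproduce the quoted gap, not a figure from the study):

```python
def feeding_gap_years(d_far_ly, d_near_ly, speed_c):
    """Years between two outflow events, assuming both bubbles expand
    at the same constant speed, given as a fraction of the speed of light.
    Distances are in light-years, so distance/(speed in c) is already years."""
    return (d_far_ly - d_near_ly) / speed_c

# Illustrative numbers: 30,000 ly and 3,000 ly bubbles, assumed 0.27c outflow.
gap = feeding_gap_years(30_000, 3_000, 0.27)  # roughly 100,000 years
```

The convenient trick here is the unit choice: because light travels one light-year per year, dividing a light-year distance by a speed expressed in fractions of c gives a time directly in years.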
So what’s the black hole eating that’s giving it such epic indigestion? Another galaxy. A companion galaxy is connected to J1354 by streams of stars and gas, due to a collision between the two. It is clumps of material from this second galaxy that swirled towards the black hole and got eaten up.
“We were able to show that the gas from the northern part of the galaxy was consistent with an advancing edge of a shock wave, and the gas from the south was consistent with an older outflow from the black hole,” said Comerford.
The Milky Way also has Fermi bubbles following a feeding event by Sagittarius A*, the black hole in its centre. And, just as J1354’s black hole fed, slept, then fed again, astronomers believe Sagittarius A* will wake to feed again too.
The research was presented at the 231st meeting of the American Astronomical Society, and has also been published in The Astrophysical Journal.
The NASA New Horizons probe just set a new interstellar exploration record, taking pictures from further out in space than ever before – it snapped the shots you see above some 6.12 billion kilometres (3.79 billion miles) away from Earth.
That’s about 6 million kilometres (3.7 million miles) further out than the Voyager 1 spacecraft was when it captured the famous Pale Blue Dot image of Earth back in 1990. Since Voyager 1’s cameras were turned off shortly after that shot was taken, the record has stood for the past 27 years.
The new record-breaking photos show two Kuiper Belt objects, 2012 HZ84 and 2012 HE85. As fuzzy as they are, they’re the closest look we’ve ever got at any objects inside this vast icy ring, which circles the Sun about 30 to 55 times further out than Earth.
“New Horizons has long been a mission of firsts – first to explore Pluto, first to explore the Kuiper Belt, fastest spacecraft ever launched,” says New Horizons Principal Investigator Alan Stern, from the Southwest Research Institute in Boulder, Colorado.
“And now, we’ve been able to make images farther from Earth than any spacecraft in history.”
In fact, New Horizons broke the record twice in quick succession, first snapping a shot of a group of distant stars called the Wishing Well, around 1,300 light-years away from our planet. That was followed up with the shots of the Kuiper Belt two hours later.
New Horizons first left Earth in 2006 with the aim of flying by Pluto, which it did in 2015, taking some dramatic photos along the way. Since then it’s been heading into the Kuiper Belt, and will carry out a flyby of Kuiper Belt object (KBO) 2014 MU69 in January 2019.
New Horizons is speeding away from Earth at more than 1.1 million kilometres (about 700,000 miles) per day – and as anyone who’s ever tried to keep a camera steady will know, taking pictures at that speed is an impressive feat.
Before we eventually lose touch with New Horizons, it’s hoped that it will tell us plenty more about the Kuiper Belt. The probe is measuring levels of plasma, dust, and gases as it travels, and will eventually take a look at more than 20 other KBOs.
New Horizons is going to get nudged out of hibernation again on the 4th of June. In the meantime, we can marvel at these record-breaking deep space photographs.
Four years ago, a monster sunspot complex broke a solar record. As astronomers have just figured out, it was the source of the strongest magnetic field ever measured on the surface of the Sun.
It was, researchers at the National Astronomical Observatory of Japan concluded after analysing five days of data, caused by gas flowing out of one sunspot in the complex and pushing against another sunspot.
The sunspot complex, AR1967 (the AR stands for “active region”), appeared in February 2014, during which time astronomers around the globe took measurements.
At over 180,000 kilometres (111,847 miles) across, it was wider than Jupiter, and went on to morph into AR1990 and spit out an enormous X4.9-class solar flare on February 25.
But the record magnetic field appeared earlier in the month; the team’s data started from February 4.
Sunspots are called “active regions” for a reason. They look a bit like holes in the Sun, and are much darker and cooler than the rest of the visible surface, the layer of the Sun called the photosphere.
These regions are caused by magnetic fields, and usually occur in east-west pairs, with opposite polarities.
The magnetic fields are strongest in the darkest part of the sunspot, known as the umbra. Here, the magnetic field is around 1,000 times stronger than in the surrounding photosphere, and it extends vertically.
In the lighter region, the penumbra, the magnetic field is weaker and extends horizontally.
Gas flows outward along the horizontal threads of the magnetic field in a sunspot’s penumbra.
Joten Okamoto and Takashi Sakurai of the NAOJ were analysing data taken of AR1967 by the Solar Optical Telescope aboard the HINODE spacecraft when they found something really unusual – a signature for strongly magnetised iron atoms.
When they crunched the numbers, they found that the magnetic field had a strength of 6,250 gauss – more than twice the strength of the 3,000 gauss found in most other sunspots.
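For reference, the unit conversion and ratio behind that comparison (a quick sanity check; 1 tesla = 10,000 gauss):

```python
GAUSS_PER_TESLA = 10_000

record_gauss = 6_250    # strongest field measured in AR1967
typical_gauss = 3_000   # field in most other sunspot umbrae

record_tesla = record_gauss / GAUSS_PER_TESLA  # 0.625 T
ratio = record_gauss / typical_gauss           # about 2.08, i.e. "more than twice"
```

For comparison, a typical refrigerator magnet is on the order of 0.01 tesla, and Earth’s surface field is about 0.5 gauss.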
And it wasn’t in the umbra, either, but in the bright region between two sunspots of the complex.
Because HINODE observed the sunspot for a period of time, the researchers were able to check the data over the next few days. The strong magnetic field stayed in the bright region between the two umbrae.
They concluded that the strong field belonged to the southern spot’s penumbra, the horizontal gas flows of which were compressing the northern spot’s horizontal magnetic fields.