Stephen Hawking’s final theory: untangling a peculiar black-hole paradox


The British theoretical physicist Stephen Hawking is perhaps best known for his landmark work on black holes and, by extension, how they affect our understanding of the Universe. In the years before his death in 2018, he was still immersed in black hole theory, endeavouring to solve a puzzle that his own work had given rise to several decades earlier.

To put it succinctly, in the 1970s, Hawking discovered that black holes appear to be capable of destroying physical information – a characteristic very much at odds with contemporary quantum mechanics. Adapted from a 2016 paper that Hawking co-authored with the US theoretical physicist Andrew Strominger and the UK theoretical physicist Malcolm Perry, this animation offers a sophisticated-but-digestible – and frequently quite clever – visual presentation of Hawking’s final work, which proposes one potential solution to the ‘information paradox’.

The case for taking AI seriously as a threat to humanity


Why some people fear AI, explained.

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important questions in biology such as predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook News Feed. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing the detailed features of a problem, we let the computer system discover them by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users.
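As a toy illustration of that first point (the groups and numbers here are invented, not real criminal-justice data): a model that simply learns historical flag rates from biased records will reproduce the skew in its predictions.

```python
# Synthetic sketch: a "model" that learns re-offense-flag rates from biased
# historical records inherits the bias. All data below is made up.

historical = [
    ("A", 0), ("A", 0), ("A", 1), ("A", 0),
    ("B", 1), ("B", 1), ("B", 0), ("B", 1),  # group B is over-flagged in the data
]

def learned_rate(group):
    """Stand-in model: the flag rate per group observed in the training data."""
    labels = [label for g, label in historical if g == group]
    return sum(labels) / len(labels)

# The model's "predictions" simply echo the skew of its inputs:
print(learned_rate("A"), learned_rate("B"))  # -> 0.25 0.75
```

Nothing about the model is malicious; it faithfully summarizes what it was given, and what it was given was biased.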

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.
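A minimal sketch of that dynamic (a made-up "game," not any real training setup): an optimizer that sees only the score metric will choose the exploit, because by the metric it is the best strategy.

```python
# Toy illustration: an optimizer given only a score metric picks whichever
# strategy maximizes it, including strategies that exploit the metric
# rather than doing the task. Strategies and point values are invented.

def score(strategy):
    """The metric we *told* the system to maximize."""
    points = {
        "play_skillfully": 120,          # what we actually wanted
        "play_poorly": 10,
        "hack_score_counter": 999_999,   # a bug the designers never intended
    }
    return points[strategy]

strategies = ["play_skillfully", "play_poorly", "hack_score_counter"]

# The optimizer is indifferent to our intent; it only sees the metric.
best = max(strategies, key=score)
print(best)  # -> hack_score_counter
```

The system is "doing great" by the number we gave it, which is exactly the problem.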

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We don’t know how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

With all those limitations, one might conclude that even if it’s possible to make a computer as smart as a person, it’s certainly a long way away. But that conclusion doesn’t necessarily follow.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play Atari games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.
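The compounding behind that estimate is easy to sketch. Assuming the factor-of-10-per-decade decline holds (it is an extrapolation, not a law), the cost of a fixed computation falls like this:

```python
# Back-of-envelope sketch: project the price of a fixed workload forward,
# assuming cost per unit of computing falls 10x per decade (an assumed
# trend, not a guarantee).

def projected_cost(cost_today, years, factor_per_decade=10):
    """Cost of the same computation after `years` at a steady decline."""
    return cost_today / (factor_per_decade ** (years / 10))

print(projected_cost(1000.0, 10))  # -> 100.0  (one decade: 10x cheaper)
print(projected_cost(1000.0, 20))  # -> 10.0   (two decades: 100x cheaper)
```

A computation that is prohibitively expensive today can become routine within a couple of decades on this trend, which is why "the algorithm doesn't work yet" is weak evidence about what will be feasible later.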

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.
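A deliberately crude toy model of that loop (the numbers are invented for illustration): if each generation of engineer-AI improves its successor by even a modest fraction, capability compounds geometrically.

```python
# Illustrative-only model of recursive self-improvement: each generation
# multiplies capability by (1 + gain_per_step). The gain value is an
# arbitrary assumption, not a prediction.

def self_improve(capability, gain_per_step, steps):
    """Compound `capability` over `steps` generations of improvement."""
    for _ in range(steps):
        capability *= 1 + gain_per_step
    return capability

# A system starting at 1.0 with a modest 10% gain per generation:
print(round(self_improve(1.0, 0.10, 50), 1))  # -> 117.4
```

The point of the sketch is not the particular number but the shape of the curve: steady small gains in the ability to make gains produce runaway growth.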

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could it wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So the people working to build safe AI systems often have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.
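A hypothetical sketch of that example (the names and numbers are invented): when the fitness function measures only how high the "feet" get, growing into a tall pole that flips beats actually jumping.

```python
# Toy version of the specification-gaming example: the proxy metric is
# peak foot height, so gaming the metric outscores honest jumping.
# Candidate bodies and heights are made up for illustration.

def fitness(body):
    """Proxy metric: peak height of the feet above the ground."""
    return body["peak_foot_height"]

candidates = [
    {"name": "jumper", "peak_foot_height": 0.5},          # what we wanted
    {"name": "tall_pole_flip", "peak_foot_height": 2.0},  # games the metric
]

winner = max(candidates, key=fitness)
print(winner["name"])  # -> tall_pole_flip
```

The selection process did exactly what it was told; the failure is entirely in the gap between the metric and the intent.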

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, allowing it to earn a higher score. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. … For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2008 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Machine Intelligence Research Institute (MIRI) in Berkeley, an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and that the overselling could doom it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground: these are possibilities worth thinking about, investing in, and researching, so we have guidelines ready when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and nonprofits (the Elon Musk-founded OpenAI is another major player in the field).

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017 and 2018.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers working full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much more scary, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. Only a few people work full time on AI forecasting, and one thing current researchers are trying to nail down is their models and the reasons for the remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals their training environment set them up for, and those goals will almost certainly fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. A success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

At a major conference in early December, Google’s DeepMind cracked open a longstanding problem in biology: predicting how proteins fold. “Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” its announcement concludes.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. Whether or not humanity should be afraid, we should definitely be doing our homework.

Why Stephen Hawking’s Black Hole Puzzle Keeps Puzzling


The renowned British physicist, who died at 76, left behind a riddle that could eventually lead his successors to the theory of quantum gravity.

The physicist Stephen Hawking in 1979 in Princeton, New Jersey.

The renowned British physicist Stephen Hawking, who died today at 76, was something of a betting man, regularly entering into friendly wagers with his colleagues over key questions in theoretical physics. “I sensed when Stephen and I first met that he would enjoy being treated irreverently,” wrote John Preskill, a physicist at the California Institute of Technology, earlier today on Twitter. “So in the middle of a scientific discussion I could interject, ‘What makes you so sure of that, Mr. Know-It-All?’ knowing that Stephen would respond with his eyes twinkling: ‘Wanna bet?’”

And bet they did. In 1991, Hawking and Kip Thorne bet Preskill that information that falls into a black hole gets destroyed and can never be retrieved. Called the black hole information paradox, this prospect follows from Hawking’s landmark 1974 discovery about black holes — regions of inescapable gravity, where space-time curves steeply toward a central point known as the singularity. Hawking had shown that black holes are not truly black. Quantum uncertainty causes them to radiate a small amount of heat, dubbed “Hawking radiation.” They lose mass in the process and ultimately evaporate away. This evaporation leads to a paradox: Anything that falls into a black hole will seemingly be lost forever, violating “unitarity” — a central principle of quantum mechanics that says the present always preserves information about the past.

Hawking and Thorne argued that the radiation emitted by a black hole would be too hopelessly scrambled to retrieve any useful information about what fell into it, even in principle. Preskill bet that information somehow escapes black holes, even though physicists would presumably need a complete theory of quantum gravity to understand the mechanism behind how this could happen.

Physicists thought they resolved the paradox in 2004 with the notion of black hole complementarity. According to this proposal, information that crosses the event horizon of a black hole both reflects back out and passes inside, never to escape. Because no single observer can ever be both inside and outside the black hole’s horizon, no one can witness both situations simultaneously, and no contradiction arises. The argument was sufficient to convince Hawking to concede the bet. During a July 2004 talk in Dublin, Ireland, he presented Preskill with the eighth edition of Total Baseball: The Ultimate Baseball Encyclopedia, “from which information can be retrieved at will.”

Thorne, however, refused to concede, and it seems he was right to do so. In 2012, a new twist on the paradox emerged. Nobody had explained precisely how information would get out of a black hole, and that lack of a specific mechanism inspired Joseph Polchinski and three colleagues to revisit the problem. Conventional wisdom had long held that once someone passed the event horizon, they would slowly be pulled apart by the extreme gravity as they fell toward the singularity. Polchinski and his co-authors argued that instead, in-falling observers would encounter a literal wall of fire at the event horizon, burning up before ever getting near the singularity.

At the heart of the firewall puzzle lies a conflict between three fundamental postulates. The first is the equivalence principle of Albert Einstein’s general theory of relativity: Because there’s no difference between acceleration due to gravity and the acceleration of a rocket, an astronaut named Alice shouldn’t feel anything amiss as she crosses a black hole horizon. The second is unitarity, which implies that information cannot be destroyed. Lastly, there’s locality, which holds that events happening at a particular point in space can only influence nearby points. This means that the laws of physics should work as expected far away from a black hole, even if they break down at some point within the black hole — either at the singularity or at the event horizon.

To resolve the paradox, one of the three postulates must be sacrificed, and nobody can agree on which one should get the axe. The simplest solution is to have the equivalence principle break down at the event horizon, thereby giving rise to a firewall. But several other possible solutions have been proposed in the ensuing years.

Video: David Kaplan explores one of the biggest mysteries in physics: the apparent contradiction between general relativity and quantum mechanics.

Filming by Petr Stepanek. Editing and motion graphics by MK12. Music by Steven Gutheinz.

For instance, a few years before the firewalls paper, Samir Mathur, a string theorist at Ohio State University, raised similar issues with his notion of black hole fuzzballs. Fuzzballs aren’t empty pits, like traditional black holes. They are packed full of strings (the kind from string theory) and have a surface like a star or planet. They also emit heat in the form of radiation. The spectrum of that radiation, Mathur found, exactly matches the prediction for Hawking radiation. His “fuzzball conjecture” resolves the paradox by declaring it to be an illusion. How can information be lost beyond the event horizon if there is no event horizon?

Hawking himself weighed in on the firewall debate along similar lines by way of a two-page, equation-free paper posted to the scientific preprint site arxiv.org in late January 2014 — a summation of informal remarks he’d made via Skype for a small conference the previous spring. He proposed a rethinking of the event horizon. Instead of a definite line in the sky from which nothing could escape, he suggested there could be an “apparent horizon.” Information is only temporarily confined behind that horizon. The information eventually escapes, but in such a scrambled form that it can never be interpreted. He likened the task to weather forecasting: “One can’t predict the weather more than a few days in advance.”

In 2013, Leonard Susskind and Juan Maldacena, theoretical physicists at Stanford University and the Institute for Advanced Study, respectively, made a radical attempt to preserve locality that they dubbed “ER = EPR.” According to this idea, maybe what we think are faraway points in space-time aren’t that far away after all. Perhaps entanglement creates invisible microscopic wormholes connecting seemingly distant points. Shaped a bit like an octopus, such a wormhole would link the interior of the black hole directly to the Hawking radiation, so the particles still inside the hole would be directly connected to particles that escaped long ago, avoiding the need for information to pass through the event horizon.

Physicists have yet to reach a consensus on any one of these proposed solutions. It’s a tribute to Hawking’s unique genius that they continue to argue about the black hole information paradox so many decades after his work first suggested it.

Black holes and soft hair: why Stephen Hawking’s final work is important


Malcolm Perry, who worked with Hawking on his final paper, explains how it improves our understanding of one of the universe’s enduring mysteries

An artist’s impression of a star being torn apart by a black hole.
Photograph: Nasa’s Goddard Space Flight Center

The information paradox is perhaps the most puzzling problem in fundamental theoretical physics today. It was discovered by Stephen Hawking 43 years ago and remains unresolved.

In 2015, Stephen, Andrew Strominger and I began to wonder whether we could find a way out of this difficulty by questioning the basic assumptions that underlie it. We published our first paper on the subject in 2016 and have been working hard on this problem ever since.

The most recent work, and perhaps the last paper that Stephen was involved in, has just come out. While we have not solved the information paradox, we hope that we have paved the way, and we are continuing our intensive work in this area.

Physics is really about being able to predict the future given how things are now. For example, if you throw a ball, once you know its initial position and velocity, then you can figure out where it will be in the future. That kind of reasoning is fine for what we call classical physics but for small things, like atoms and electrons, the rules need some modifications, as described by quantum mechanics. In quantum mechanics, instead of describing precise outcomes, one finds that one can only calculate the probabilities for various things to happen. In the case of a ball being thrown, one would not know its precise trajectory, but only the probability that it would be in some particular place given its initial conditions.
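The classical determinism described above can be illustrated with a toy calculation (a minimal sketch, assuming simple projectile motion with constant gravity and no air resistance; the function name and numbers are illustrative, not from the article):

```python
# Classical physics is deterministic: given a ball's initial position and
# velocity, its future position follows exactly from the equations of motion.
# This sketch assumes constant gravity (9.81 m/s^2) and no air resistance.

G = 9.81  # gravitational acceleration, m/s^2

def height(x0, v0, t, g=G):
    """Height (m) of a ball thrown straight up, t seconds after release."""
    return x0 + v0 * t - 0.5 * g * t ** 2

# A ball thrown upward at 20 m/s from the ground is at 15.095 m after 1 s:
print(height(0.0, 20.0, 1.0))
```

In quantum mechanics, by contrast, the same initial data would yield only a probability distribution over outcomes, not a single definite trajectory.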

What Hawking discovered was that in black hole physics, there seemed to be even greater uncertainty than in quantum mechanics. However, this kind of uncertainty seemed to be completely unacceptable in that it resulted in many of the laws of physics appearing to break down. It would deprive us of the ability to predict anything about the future of a black hole.

That might not have mattered – except that black holes are real physical objects. There are huge black holes at the centres of many galaxies. We know this because observations of the centre of our own galaxy reveal a compact object there with a mass a few million times that of our sun; such a huge concentration of mass could only be a black hole. Quasars, extremely luminous objects at the centres of very distant galaxies, are powered by matter falling onto black holes. The observatory Ligo has recently discovered ripples in spacetime, gravitational waves, produced by the collision of black holes.

The root of the problem is that it was once thought that black holes were completely described by their mass and their spin. If you threw something into a black hole, once it was inside you would be unable to tell what it was that was thrown in.

These ideas were encapsulated in the phrase “a black hole has no hair”. We can often tell people apart by looking at their hair, but black holes seemed to be completely bald. Back in 1974, Stephen discovered that black holes, rather than being perfect absorbers, behave more like what we call “black bodies”. A black body is characterised by a temperature, and all bodies with a temperature produce thermal radiation.

If you go to a doctor, it is quite likely your temperature will be measured by having a device pointed at you. This is an infrared sensor and it measures your temperature by detecting the thermal radiation you produce. A piece of metal heated up in a fire will glow because it produces thermal radiation.

Black holes are no different. They have a temperature and produce thermal radiation. The formula for this temperature, universally known as the Hawking temperature, is inscribed on the memorial to Stephen’s life in Westminster Abbey. Any object that has a temperature also has an entropy. The entropy is a measure of how many different ways an object could be made from its microscopic ingredients and still look the same. So, for a particular piece of red hot metal, it would be the number of ways the atoms that make it up could be arranged so as to look like the lump of metal you were observing. Stephen’s formula for the temperature of a black hole allowed him to find the entropy of a black hole.
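For reference, the formula inscribed on the memorial, the Hawking temperature of a black hole of mass M, is the standard result:

```latex
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}}
```

Note that heavier black holes are colder; applying ordinary thermodynamics to this temperature is what allowed Hawking to deduce the black hole’s entropy.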

The problem then was: how did this entropy arise? Since all black holes appear to be the same, the origin of the entropy was at the centre of the information paradox.

What we have done recently is to discover a gap in the mathematics that led to the idea that black holes are totally bald. In 2016, Stephen, Andy and I found that black holes have an infinite collection of what we call “soft hair”. This discovery allows us to question the idea that black holes lead to a breakdown in the laws of physics.

Stephen kept working with us up to the end of his life, and we have now published a paper that describes our current thoughts on the matter. In this paper, we describe a way of calculating the entropy of black holes. The entropy is basically a quantitative measure of what one knows about a black hole apart from its mass or spin.

While this is not a resolution of the information paradox, we believe it provides some considerable insight into it. Further work is needed but we feel greatly encouraged to continue our research in this area. The information paradox is intimately tied up with our quest to find a theory of gravity that is compatible with quantum mechanics.

Einstein’s general theory of relativity is extremely successful at describing spacetime and gravitation on large scales, but to see how the world works on small scales requires quantum theory. There are spectacularly successful theories of the non-gravitational forces of nature as explained by the “standard model” of particle physics. Such theories have been exhaustively tested and the recent discovery of the Higgs particle at Cern by the Large Hadron Collider is a marvellous confirmation of these ideas.

Yet the incorporation of gravitation into this picture is still something that eludes us. As well as his work on black holes, Stephen was pursuing ideas that he hoped would lead to a unification of gravitation with the other forces of nature in a way that would unite Einstein’s ideas with those of quantum theory. Our work on black holes does indeed shed light on this other puzzle. Sadly, Stephen is no longer with us to share our excitement about the possibility of resolving these issues, which have now been around for half a century.

‘Mind over matter’: Stephen Hawking – obituary by Roger Penrose


Theoretical physicist who made revolutionary contributions to our understanding of the nature of the universe.

 

Stephen Hawking at his office at the department of applied mathematics and theoretical physics at Cambridge University in 2005.

The image of Stephen Hawking – who has died aged 76 – in his motorised wheelchair, with head contorted slightly to one side and hands crossed over to work the controls, caught the public imagination, as a true symbol of the triumph of mind over matter. As with the Delphic oracle of ancient Greece, physical impairment seemed compensated by almost supernatural gifts, which allowed his mind to roam the universe freely, upon occasion enigmatically revealing some of its secrets hidden from ordinary mortal view.

Of course, such a romanticised image can represent but a partial truth. Those who knew Hawking would clearly appreciate the dominating presence of a real human being, with an enormous zest for life, great humour, and tremendous determination, yet with normal human weaknesses, as well as his more obvious strengths. It seems clear that he took great delight in his commonly perceived role as “the No 1 celebrity scientist”; huge audiences would attend his public lectures, perhaps not always just for scientific edification.

The scientific community might well form a more sober assessment. He was extremely highly regarded, in view of his many greatly impressive, sometimes revolutionary, contributions to the understanding of the physics and the geometry of the universe.

Hawking had been diagnosed shortly after his 21st birthday as suffering from an unspecified incurable disease, which was then identified as the fatal degenerative motor neurone disease amyotrophic lateral sclerosis, or ALS. Soon afterwards, rather than succumbing to depression, as others might have done, he began to set his sights on some of the most fundamental questions concerning the physical nature of the universe. In due course, he would achieve extraordinary successes against the severest physical disabilities. Defying established medical opinion, he managed to live another 55 years.

His background was academic, though not directly in mathematics or physics. His father, Frank, was an expert in tropical diseases and his mother, Isobel (nee Walker), was a free-thinking radical who had a great influence on him. He was born in Oxford and moved to St Albans, Hertfordshire, at eight. Educated at St Albans school, he won a scholarship to study physics at University College, Oxford. He was recognised as unusually capable by his tutors, but did not take his work altogether seriously. Although he obtained a first-class degree in 1962, it was not a particularly outstanding one.

He decided to continue his career in physics at Trinity Hall, Cambridge, proposing to study under the distinguished cosmologist Fred Hoyle. He was disappointed to find that Hoyle was unable to take him, the person available in that area being Dennis Sciama, unknown to Hawking at the time. In fact, this proved fortuitous, for Sciama was becoming an outstandingly stimulating figure in British cosmology, and would supervise several students who were to make impressive names for themselves in later years (including the future astronomer royal Lord Rees of Ludlow).

Sciama seemed to know everything that was going on in physics at the time, especially in cosmology, and he conveyed an infectious excitement to all who encountered him. He was also very effective in bringing together people who might have things of significance to communicate with one another.

When Hawking was in his second year of research at Cambridge, I (at Birkbeck College in London) had established a certain mathematical theorem of relevance. This showed, on the basis of a few plausible assumptions (by the use of global/topological techniques largely unfamiliar to physicists at the time) that a collapsing over-massive star would result in a singularity in space-time – a place where it would be expected that densities and space-time curvatures would become infinite – giving us the picture of what we now refer to as a “black hole”. Such a space-time singularity would lie deep within a “horizon”, through which no signal or material body can escape. (This picture had been put forward by J Robert Oppenheimer and Hartland Snyder in 1939, but only in the special circumstance where exact spherical symmetry was assumed. The purpose of this new theorem was to obviate such unrealistic symmetry assumptions.) At this central singularity, Einstein’s classical theory of general relativity would have reached its limits.

Meanwhile, Hawking had also been thinking about this kind of problem with George Ellis, who was working on a PhD at St John’s College, Cambridge. The two men had been working on a more limited type of “singularity theorem” that required an unreasonably restrictive assumption. Sciama made a point of bringing Hawking and me together, and it did not take Hawking long to find a way to use my theorem in an unexpected way, so that it could be applied (in a time-reversed form) in a cosmological setting, to show that the space-time singularity referred to as the “big bang” was also a feature not just of the standard highly symmetrical cosmological models, but also of any qualitatively similar but asymmetrical model.

Some of the assumptions in my original theorem seem less natural in the cosmological setting than they do for collapse to a black hole. In order to generalise the mathematical result so as to remove such assumptions, Hawking embarked on a study of new mathematical techniques that appeared relevant to the problem.

A powerful body of mathematical work known as Morse theory had been part of the machinery of mathematicians active in the global (topological) study of Riemannian spaces. However, the spaces that are used in Einstein’s theory are really pseudo-Riemannian and the relevant Morse theory differs in subtle but important ways. Hawking developed the necessary theory for himself (aided, in certain respects, by Charles Misner, Robert Geroch and Brandon Carter) and was able to use it to produce new theorems of a more powerful nature, in which the assumptions of my theorem could be considerably weakened, showing that a big-bang-type singularity was a necessary implication of Einstein’s general relativity in broad circumstances.

A few years later (in a paper published by the Royal Society in 1970, by which time Hawking had become a fellow “for distinction in science” of Gonville and Caius College, Cambridge), he and I joined forces to publish an even more powerful theorem which subsumed almost all the work in this area that had gone before.

In 1967, Werner Israel published a remarkable paper that had the implication that non-rotating black holes, when they had finally settled down to become stationary, would necessarily become completely spherically symmetrical. Subsequent results by Carter, David Robinson and others generalised this to include rotating black holes, the implication being that the final space-time geometry must necessarily accord with an explicit family of solutions of Einstein’s equations found by Roy Kerr in 1963. A key ingredient to the full argument was that if there is any rotation present, then there must be complete axial symmetry. This ingredient was basically supplied by Hawking in 1972.

The very remarkable conclusion of all this is that the black holes that we expect to find in nature have to conform to this Kerr geometry. As the great theoretical astrophysicist Subrahmanyan Chandrasekhar subsequently commented, black holes are the most perfect macroscopic objects in the universe, being constructed just out of space and time; moreover, they are the simplest as well, since they can be exactly described by an explicitly known geometry (that of Kerr).

Following his work in this area, Hawking established a number of important results about black holes, such as an argument that a black hole’s event horizon (its bounding surface) must have the topology of a sphere. In collaboration with Carter and James Bardeen, in work published in 1973, he established some remarkable analogies between the behaviour of black holes and the basic laws of thermodynamics, where the horizon’s surface area and its surface gravity were shown to be analogous, respectively, to the thermodynamic quantities of entropy and temperature. It would be fair to say that in his highly active period leading up to this work, Hawking’s research in classical general relativity was the best anywhere in the world at that time.

Hawking, Bardeen and Carter took their “thermodynamic” behaviour of black holes to be little more than just an analogy, with no literal physical content. A year or so earlier, Jacob Bekenstein had shown that the demands of physical consistency imply – in the context of quantum mechanics – that a black hole must indeed have an actual physical entropy (“entropy” being a physicist’s measure of “disorder”) that is proportional to its horizon’s surface area, but he was unable to establish the proportionality factor precisely. Yet it had seemed, on the other hand, that the physical temperature of a black hole must be exactly zero, inconsistently with this analogy, since no form of energy could escape from it, which is why Hawking and his colleagues were not prepared to take their analogy completely seriously.

Hawking had then turned his attention to quantum effects in relation to black holes, and he embarked on a calculation to determine whether tiny rotating black holes that might perhaps be created in the big bang would radiate away their rotational energy. He was startled to find that irrespective of any rotation they would radiate away their energy – which, by Einstein’s E=mc2, means their mass. Accordingly, any black hole actually has a non-zero temperature, agreeing precisely with the Bardeen-Carter-Hawking analogy. Moreover, Hawking was able to supply the precise value “one quarter” for the entropy proportionality constant that Bekenstein had been unable to determine.
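With Hawking’s proportionality factor of one quarter, the resulting relation, now known as the Bekenstein–Hawking entropy of a black hole with horizon area A, is the standard result:

```latex
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}} c^{3} A}{4 \hbar G}
```

That is, the entropy equals one quarter of the horizon’s surface area measured in units of the Planck area.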

This radiation coming from black holes that Hawking predicted is now, very appropriately, referred to as Hawking radiation. For any black hole that is expected to arise in normal astrophysical processes, however, the Hawking radiation would be exceedingly tiny, and certainly unobservable directly by any techniques known today. But he argued that very tiny black holes could have been produced in the big bang itself, and the Hawking radiation from such holes would build up into a final explosion that might be observed. There appears to be no evidence for such explosions, showing that the big bang was not so accommodating as Hawking wished, and this was a great disappointment to him.

These achievements were certainly important on the theoretical side. They established the theory of black-hole thermodynamics: by combining the procedures of quantum (field) theory with those of general relativity, Hawking showed that it is necessary also to bring in a third subject, thermodynamics. They are generally regarded as Hawking’s greatest contributions. That they have deep implications for future theories of fundamental physics is undeniable, but the detailed nature of these implications is still a matter of much heated debate.

Hawking himself was able to conclude from all this (though not with universal acceptance by particle physicists) that those fundamental constituents of ordinary matter – the protons – must ultimately disintegrate, although with a decay rate that is beyond present-day techniques for observing it. He also provided reasons for suspecting that the very rules of quantum mechanics might need modification, a viewpoint that he seemed originally to favour. But later (unfortunately, in my own opinion) he came to a different view, and at the Dublin international conference on gravity in July 2004, he publicly announced a change of mind (thereby conceding a bet with the Caltech physicist John Preskill) concerning his originally predicted “information loss” inside black holes.

Following his black-hole work, Hawking turned his attentions to the problem of quantum gravity, developing ingenious ideas for resolving some of the basic issues. Quantum gravity, which involves correctly imposing the quantum procedures of particle physics on to the very structure of space-time, is generally regarded as the most fundamental unsolved foundational issue in physics. One of its stated aims is to find a physical theory that is powerful enough to deal with the space-time singularities of classical general relativity in black holes and the big bang.

Hawking’s work, up to this point, although it had involved the procedures of quantum mechanics in the curved space-time setting of Einstein’s general theory of relativity, did not provide a quantum gravity theory. That would require the “quantisation” procedures to be applied to Einstein’s curved space-time itself, not just to physical fields within curved space-time.

With James Hartle, Hawking developed a quantum procedure for handling the big-bang singularity. This is referred to as the “no-boundary” idea, whereby the singularity is replaced by a smooth “cap”, this being likened to what happens at the north pole of the Earth, where the concept of longitude loses meaning (becomes singular) while the north pole itself has a perfectly good geometry.

To make sense of this idea, Hawking needed to invoke his notion of “imaginary time” (or “Euclideanisation”), which has the effect of converting the “pseudo-Riemannian” geometry of Einstein’s space-time into a more standard Riemannian one. Despite the ingenuity of many of these ideas, grave difficulties remain (one of these being how similar procedures could be applied to the singularities inside black holes, which is fundamentally problematic).
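The effect of “imaginary time” can be illustrated with the flat-space line element (a standard textbook example, not given in the text): substituting t = −iτ flips the sign of the time term, turning the Lorentzian, pseudo-Riemannian signature (−, +, +, +) into a positive-definite Riemannian one:

```latex
ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2
\;\;\xrightarrow{\;t \,=\, -i\tau\;}\;\;
ds^2 = c^2\,d\tau^2 + dx^2 + dy^2 + dz^2
```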

There are many other approaches to quantum gravity being pursued worldwide, and Hawking’s procedures, though greatly respected and still investigated, are not the most popularly followed, although all others have their share of fundamental difficulties also.

To the end of his life, Hawking continued with his research into the quantum-gravity problem, and the related issues of cosmology. But concurrently with his strictly research interests, he became increasingly involved with the popularisation of science, and of his own ideas in particular. This began with the writing of his astoundingly successful book A Brief History of Time (1988), which was translated into some 40 languages and sold over 25m copies worldwide.

Undoubtedly, the brilliant title was a contributing factor to the book’s phenomenal success. Also, the subject matter is something that grips the public imagination. And there is a directness and clarity of style, which Hawking must have developed as a matter of necessity when trying to cope with the limitations imposed by his physical disabilities. Before needing to rely on his computerised speech, he could talk only with great difficulty and expenditure of effort, so he had to do what he could with short sentences that were directly to the point. In addition, it is hard to deny that his physical condition must itself have caught the public’s imagination.

Although the dissemination of science among a broader public was certainly one of Hawking’s aims in writing his book, he also had the serious purpose of making money. His financial needs were considerable, as his entourage of family, nurses, healthcare helpers and increasingly expensive equipment demanded. Some, but not all, of this was covered by grants.

To invite Hawking to a conference always involved the organisers in serious calculations. The travel and accommodation expenses would be enormous, not least because of the sheer number of people who would need to accompany him. But a popular lecture by him would always be a sell-out, and special arrangements would be needed to find a lecture hall that was big enough. An additional factor was ensuring that all entrances, stairways, lifts and so on would be adequate for disabled people in general, and for his wheelchair in particular.

He clearly enjoyed his fame, taking many opportunities to travel and to have unusual experiences (such as going down a mine shaft, visiting the south pole and undergoing the zero-gravity of free fall), and to meet other distinguished people.

The presentational polish of his public lectures increased with the years. Originally, the visual material would be line drawings on transparencies, presented by a student. But in later years impressive computer-generated visuals were used. He controlled the verbal material, sentence by sentence, as it would be delivered by his computer-generated American-accented voice. High-quality pictures and computer-generated graphics also featured in his later popular books The Illustrated Brief History of Time (1996) and The Universe in a Nutshell (2001). With his daughter Lucy he wrote the expository children’s science book George’s Secret Key to the Universe (2007), and he served as an editor, co-author and commentator for many other works of popular science.

He received many high accolades and honours. In particular, he was elected a fellow of the Royal Society at the remarkably early age of 32 and received its highest honour, the Copley medal, in 2006. In 1979, he became the 17th holder of the Lucasian chair of mathematics in Cambridge, some 310 years after Sir Isaac Newton became its second holder. He became a Companion of Honour in 1989. He made a guest appearance on the television programme Star Trek: The Next Generation, appeared in cartoon form on The Simpsons and was portrayed in the movie The Theory of Everything (2014).

It is clear that he owed a great deal to his first wife, Jane Wilde, whom he married in 1965, and with whom he had three children, Robert, Lucy and Timothy. Jane was exceptionally supportive of him in many ways. One of the most important of these may well have been in allowing him to do things for himself to an unusual extent.

He was an extraordinarily determined person. He would insist that he should do things for himself. This, in turn, perhaps kept his muscles active in a way that delayed their atrophy, thereby slowing the progress of the disease. Nevertheless, his condition continued to deteriorate, until he had almost no movement left, and his speech could barely be made out at all except by a very few who knew him well.

He contracted pneumonia while in Switzerland in 1985, and a tracheotomy was necessary to save his life. Strangely, after this brush with death, the progress of his degenerative disease seemed to slow to a virtual halt. His tracheotomy prevented any form of speech, however, so that acquiring a computerised speech synthesiser came as a necessity at that time.

In the aftermath of his encounter with pneumonia, the Hawkings’ home was almost taken over by nurses and medical attendants, and he and Jane drifted apart. They were divorced in 1995. In the same year, Hawking married Elaine Mason, who had been one of his nurses. Her support took a different form from Jane’s. In his far weaker physical state, the love, care and attention that she provided sustained him in all his activities. Yet this relationship also came to an end, and he and Elaine were divorced in 2007.

Despite his terrible physical circumstance, he almost always remained positive about life. He enjoyed his work, the company of other scientists, the arts, the fruits of his fame, his travels. He took great pleasure in children, sometimes entertaining them by swivelling around in his motorised wheelchair. Social issues concerned him. He promoted scientific understanding. He could be generous and was very often witty. On occasion he could display something of the arrogance that is not uncommon among physicists working at the cutting edge, and he had an autocratic streak. Yet he could also show a true humility that is the mark of greatness.

Hawking had many students, some of whom later made significant names for themselves. Yet being a student of his was not easy. He had been known to run his wheelchair over the foot of a student who caused him irritation. His pronouncements carried great authority, but his physical difficulties often caused them to be enigmatic in their brevity. An able colleague might be able to disentangle the intent behind them, but it would be a different matter for an inexperienced student.

To such a student, a meeting with Hawking could be a daunting experience. Hawking might ask the student to pursue some obscure route, the reason for which could seem deeply mysterious. Clarification was not available, and the student would be presented with what seemed indeed to be like the revelation of an oracle – something whose truth was not to be questioned, but which if correctly interpreted and developed would surely lead onwards to a profound truth. Perhaps we are all left with this impression now.

Hawking is survived by his children.

Stephen William Hawking, physicist, born 8 January 1942; died 14 March 2018, aged 76.

Stephen Hawking’s 5 Predictions About the Future


Visionary physicist Stephen Hawking died early Wednesday at the age of 76. An intellectual leader in the study of black holes, quantum mechanics, and physical cosmology, Hawking also found a degree of beloved celebrity that eludes most scientists. The best-selling author was a mainstay in the public eye, using his computer-based communication system to explain the wonders of the universe.

In turn, his numerous appearances on television, radio, and the stage gave us an archive of Hawking’s advice for the future. Not one to shy away from the apocalyptic, Hawking was passionate about protecting humanity, which he predicted would face an onslaught of challenges in the years to come.

Here’s a sampling of his scientific soothsaying.

Hawking Predicted A.I. May Be “The Worst Thing” for Humans

In November, Hawking warned at a technology conference in Lisbon, Portugal, that artificial intelligence could be “the worst thing ever to happen to humanity.” Because there is no limit to what an A.I. can learn, Hawking reasoned, it could eventually catch up with the capacities of the human brain and then surpass us.

“Success in creating effective A.I. could be the biggest event in the history of our civilization or the worst,” Hawking said at Web Summit last year. “We cannot know if we will be infinitely helped by A.I. or ignored by it and sidelined or conceivably destroyed by it.”

Hawking also told Wired in November that he feared A.I. would “replace humans altogether,” a concern he had in common with Elon Musk. Accordingly, in February 2017 the two men endorsed a list of 23 principles they felt should steer A.I. development.

Hawking Predicted Meeting Aliens Will Be Bad News

It was Hawking’s belief that when humans inevitably meet aliens, we should run. That dread came less from an idea that aliens would be inherently bad, and more from his observations of humans. Much as Christopher Columbus triggered chaos with his arrival in the Americas, colonizing aliens would bring turmoil to our proverbial shores.

“We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet,” Hawking told the Times of London in 2010. “I imagine they might exist in massive ships, having used up all of the resources from their home planet. Such advanced aliens should perhaps become nomads, looking to conquer and colonize whatever planets they can reach.”

But He Also Predicted We Probably Won’t Encounter Aliens Soon

Despite his concerns about a hostile alien civilization, Hawking never said this alien invasion would happen anytime soon. In April 2016, he explained at a conference for the space exploration project Breakthrough Starshot that the next 20 years, at least, will likely be alien free.

“The probability [of finding alien life] is low — probably,” Hawking told the crowd. “But the discoveries from the Kepler mission suggest that there are billions of habitable planets in our galaxy alone. There are at least 100 billion galaxies in the visible universe, so it seems likely that there are others out there.”

Hawking Predicted Our Time on Earth Would End

During his work with Breakthrough Starshot, Hawking asserted that within the next 1,000 to 10,000 years, humans living in interstellar colonies would be an absolute certainty. This would be, in Hawking’s opinion, for the best. Earth, he predicted, was in danger from astronomical events such as asteroid strikes and supernovas. To survive as a species, he declared in April 2016, “we must ultimately spread to the stars.”

This wasn’t a one-time prediction from Hawking. At the Starmus Festival in June 2017, he declared that humans needed to prepare for an exodus off this planet sometime within the next 200 to 500 years because of our own damage to Earth.

“We are running out of space, and the only place we can go to are other worlds,” Hawking told a crowd in Trondheim, Norway. “It is time to explore other solar systems. Spreading out may be the only thing that saves us from ourselves.”

Hawking Predicted Climate Change Could Ravage Earth

Hawking joined many scientists in his assertion that climate change could spell the end for our planet, but it’s on this topic that he struck a (relatively) more hopeful tone. Sure, climate change could kill us all, but that doesn’t necessarily mean it will happen.

“We are close to the tipping point where global warming becomes irreversible,” Hawking told BBC News in July. “Climate change is one of the great dangers we face, and it’s one we can prevent if we act now.”

To move away from this tipping point, Hawking argued, world leaders like President Donald Trump (of whom he was no fan) would need to stick to the rules laid out by the Paris Agreement. According to Hawking, we aren’t at doomsday yet — and it’s up to our actions and ingenuity to keep it that way.

Stephen Hawking’s Warning on Robot Automation is More Relevant than Ever


Stephen Hawking was concerned with how the rise in robot automation could drive income inequality. The British physicist, who died Wednesday in his Cambridge home at the age of 76, warned that the combination of artificial intelligence and employment shifts could lead to a dramatic shift in power in both the short and long term.

In a Reddit Ask Me Anything session in October 2015, a user asked about “the possibility of technological unemployment” in the face of robot automation, comparing it to the 19th century Luddite movement, whose members feared — often correctly — that the Industrial Revolution would cost them their jobs. In response, Hawking said:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

Hawking’s Reddit comment, the last one posted via his account “Prof-Stephen-Hawking,” was shared on a number of subreddits Wednesday morning, including the socialism subreddit, where it received nearly 7,000 upvotes.

It wasn’t the only time that Hawking warned about these coming changes. In a December 2016 column in The Guardian, he said “the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”

It’s a sentiment shared by others in the tech world. Brad Wardell, CEO of software developer Stardock, published a blog post in September 2016 that warned the accumulation of wealth through automation could represent a catastrophic shift in power.

Figures like Virgin’s Richard Branson and Y Combinator’s Sam Altman have called for a basic income to alleviate the worst changes from these shifts. A PricewaterhouseCoopers report found in March 2017 that 38 percent of U.S. jobs could be lost to automation over the following 13 years.

Beyond mass unemployment, Hawking repeatedly warned that the A.I. underpinning large-scale robot automation could itself be a catastrophic risk. Much like Tesla CEO Elon Musk’s warnings that A.I. could pose “a fundamental risk to the existence of human civilization,” Hawking expressed fears that “A.I. may replace humans altogether” in a November 2017 interview.

“Artificial intelligence could be a real danger in the not too distant future,” Hawking told John Oliver in an episode of Last Week Tonight aired in June 2014. “It could design improvements to itself and outsmart us all.”

What Did Stephen Hawking Do? The Physicist’s 5 Biggest Achievements


On Wednesday, world-renowned astrophysicist Stephen Hawking died at age 76 in his home in Cambridge, England. He lived for 55 years with the neurological disease amyotrophic lateral sclerosis (ALS), and as a result, he spent most of his life using a wheelchair, which, for the last decade, also included hands-free communication capability that gave him the computerized voice with which so many people now associate him.

As a working physicist and prolific public figure, Hawking helped revolutionize the field of astrophysics. His scholarship helped elucidate our modern understanding of the universe and its origins, and he was quick to share his views on humanity and society. While his achievements are many, there are five in particular worth noting.

5. Stephen Hawking Theorized How Black Holes Emit Information

Black holes are notoriously hungry phenomena, distorting spacetime and sucking in any matter that passes within their event horizon. But Hawking theorized that black holes actually radiate energy as a result of quantum effects near the event horizon. We could only observe this theoretical energy, which is referred to as “Hawking radiation,” in smaller black holes that are about the same mass as our sun. In larger black holes, it would be overwhelmed by the gas falling into the black hole. Hawking’s hypothesized phenomenon hasn’t been directly observed, but as Inverse previously reported, physicists are working on it.

4. Stephen Hawking Proposed That the Singularity Was an Essential Element of the Big Bang Theory

The Big Bang Theory — the physics one, not the television one — proposes the universe began with a powerful expansion that started with one point, the singularity. Before Hawking’s time, physicists tried to reconcile the apparent paradox of the singularity. The idea of a single point of infinite density simply didn’t mesh with the conventional views of physics in the middle of the 20th century. In 1970, though, Hawking co-authored a paper with Roger Penrose that began to reconcile this notion.

This paper, titled “The singularities of gravitational collapse and cosmology,” countered the widely discussed notion that the Big Bang was preceded by the universe contracting. Physicists generally accept this version of the Big Bang Theory, in which there was nothing before the beginning of the universe.

3. Stephen Hawking Proposed There Was No Meaningful Distinction Between Space and Time in the Early Universe

In his 1988 best-selling book, A Brief History of Time, Hawking proposed that at the very beginning of the universe, space existed, but time as we know it did not yet exist. Astrophysicists continue to describe space and time as being intrinsically tied to one another, but Hawking hypothesized that at the very beginning of everything there was no meaningful distinction. The curious public digested this hypothesis in Hawking’s book, but physicists continue to debate his idea.

2. Stephen Hawking Provided Evidence That Time Travel Is Impossible

Back in 2009, Hawking hosted a time traveler party, inviting time travelers to join him for a reception to celebrate their achievements. Here’s the catch, though: He didn’t send out the invitations until the next day. The idea was that anyone who actually showed up would clearly be legit since nobody knew about the party before it happened. On Hawking’s 75th birthday in 2017, he announced that nobody had shown up to his party. While this isn’t definitive proof that time travel doesn’t exist, it’s pretty strong evidence. After all, if you discovered how to travel through time, wouldn’t Hawking’s time travel party be one of your first destinations?

1. Stephen Hawking Played Himself Four Times on The Simpsons

Sure, revolutionizing astrophysics is great, but what about having your cartoon avatar immortalized for posterity? In addition to playing himself on Star Trek, Hawking appeared on The Simpsons four times between 1999 and 2010. Sure, this achievement wasn’t scientific, strictly speaking, but it does embody the character and public image of one of the best-known scientists in modern history. As a physicist, Hawking didn’t create much original work in his later years. But as a science popularizer, he continued to inspire people to learn about the world around them. And as far as monumental achievements go, that one’s hard to overstate.

Theories of Everything, Mapped


“Ever since the dawn of civilization,” Stephen Hawking wrote in his international bestseller A Brief History of Time, “people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying order in the world.”

Explore the deepest mysteries at the frontier of fundamental physics, and the most promising ideas put forth to solve them.

In the quest for a unified, coherent description of all of nature — a “theory of everything” — physicists have unearthed the taproots linking ever more disparate phenomena. With the law of universal gravitation, Isaac Newton wedded the fall of an apple to the orbits of the planets. Albert Einstein, in his theory of relativity, wove space and time into a single fabric, and showed how apples and planets fall along the fabric’s curves. And today, all known elementary particles plug neatly into a mathematical structure called the Standard Model. But our physical theories remain riddled with disunions, holes and inconsistencies. These are the deep questions that must be answered in pursuit of the theory of everything.

Our map of the frontier of fundamental physics, built by the interactive developer Emily Fuhrman, weights questions roughly according to their importance in advancing the field. It seemed natural to give greatest weight to the quest for a theory of quantum gravity, which would encompass general relativity and quantum mechanics in a single framework. In their day-to-day work, though, many physicists focus more on rooting out dark matter, solving the Standard Model’s hierarchy problem, and pondering the goings-on in black holes, those mysterious swallowers of space and time. For each question, the map presents several proposed solutions. Relationships between these proposals form a network of ideas.

The map provides concise descriptions of highly complex theories; learn more by exploring the links to dozens of articles and videos, and vote for the ideas you find most elegant or promising. Finally, the map is extensive, but hardly exhaustive; proposed additions are welcome.