Would Human Extinction Be a Tragedy?


Our species possesses inherent value, but we are devastating the earth and causing unimaginable animal suffering.

An overgrown lot along Highway 13 near the town of Haleyville, Ala. Credit: William Widmer for The New York Times

There are stirrings of discussion these days in philosophical circles about the prospect of human extinction. This should not be surprising, given the increasingly threatening predations of climate change. In reflecting on this prospect, I want to suggest an answer to a single question, one that hardly covers the whole philosophical territory but is an important aspect of it: Would human extinction be a tragedy?

To get a bead on this question, let me distinguish it from a couple of other related questions. I’m not asking whether the experience of humans coming to an end would be a bad thing. (In these pages, Samuel Scheffler has given us an important reason to think that it would be.) I am also not asking whether human beings as a species deserve to die out. That is an important question, but would involve different considerations. Those questions, and others like them, need to be addressed if we are to come to a full moral assessment of the prospect of our demise. Yet what I am asking here is simply whether it would be a tragedy if the planet no longer contained human beings. And the answer I am going to give might seem puzzling at first. I want to suggest, at least tentatively, both that it would be a tragedy and that it might just be a good thing.

To make that claim less puzzling, let me say a word about tragedy. In theater, the tragic character is often someone who commits a wrong, usually a significant one, but with whom we feel sympathy in their descent. Here Sophocles’s Oedipus, Shakespeare’s Lear, and Arthur Miller’s Willy Loman might stand as examples. In this case, the tragic character is humanity. It is humanity that is committing a wrong, a wrong whose elimination would likely require the elimination of the species, but with whom we might be sympathetic nonetheless for reasons I discuss in a moment.

To make that case, let me start with a claim that I think will be at once depressing and, upon reflection, uncontroversial. Human beings are destroying large parts of the inhabitable earth and causing unimaginable suffering to many of the animals that inhabit it. This is happening through at least three means. First, human contribution to climate change is devastating ecosystems, as the recent article on Yellowstone Park in The Times exemplifies. Second, increasing human population is encroaching on ecosystems that would otherwise be intact. Third, factory farming fosters the creation of millions upon millions of animals for whom it offers nothing but suffering and misery before slaughtering them in often barbaric ways. There is no reason to think that those practices are going to diminish any time soon. Quite the opposite.

Humanity, then, is the source of devastation of the lives of conscious animals on a scale that is difficult to comprehend.

To be sure, nature itself is hardly a Valhalla of peace and harmony. Animals kill other animals regularly, often in ways that we (although not they) would consider cruel. But there is no other creature in nature whose predatory behavior is remotely as deep or as widespread as the behavior we display toward what the philosopher Christine Korsgaard aptly calls “our fellow creatures” in a sensitive book of the same name.

If this were all there was to the story, there would be no tragedy. The elimination of the human species would be a good thing, full stop. But there is more to the story. Human beings bring things to the planet that other animals cannot. For example, we bring an advanced level of reason that can experience wonder at the world in a way that is foreign to most if not all other animals. We create art of various kinds: literature, music and painting among them. We engage in sciences that seek to understand the universe and our place in it. Were our species to go extinct, all of that would be lost.

Now there might be those on the more jaded side who would argue that if we went extinct there would be no loss, because there would be no one for whom it would be a loss not to have access to those things. I think this objection misunderstands our relation to these practices. We appreciate and often participate in such practices because we believe they are good to be involved in, because we find them to be worthwhile. It is the goodness of the practices and the experiences that draws us. Therefore, it would be a loss to the world if those practices and experiences ceased to exist.

One could press the objection here by saying that it would only be a loss from a human viewpoint, and that that viewpoint would no longer exist if we went extinct. This is true. But this entire set of reflections is taking place from a human viewpoint. We cannot ask the questions we are asking here without situating them within the human practice of philosophy. Even to ask the question of whether it would be a tragedy if humans were to disappear from the face of the planet requires a normative framework that is restricted to human beings.

Let’s turn, then, and take the question from the other side, the side of those who think that human extinction would be both a tragedy and overall a bad thing. Doesn’t the existence of those practices outweigh the harm we bring to the environment and the animals within it? Don’t they justify the continued existence of our species, even granting the suffering we bring to so many nonhuman lives?

To address that question, let us ask another one. How many human lives would it be worth sacrificing to preserve the existence of Shakespeare’s works? If we were required to engage in human sacrifice in order to save his works from eradication, how many humans would be too many? For my own part, I think the answer is one. One human life would be too many (or, to prevent quibbling, one innocent human life), at least to my mind. Whatever the number, though, it is going to be quite low.

Or suppose a terrorist planted a bomb in the Louvre and the first responders had to choose between saving several people in the museum and saving the art. How many of us would seriously consider saving the art?

So, then, how much suffering and death of nonhuman life would we be willing to countenance to save Shakespeare, our sciences and so forth? Unless we believe there is such a profound moral gap between the status of human and nonhuman animals, whatever reasonable answer we come up with will be well surpassed by the harm and suffering we inflict upon animals. There is just too much torment wreaked upon too many animals and too certain a prospect that this is going to continue and probably increase; it would overwhelm anything we might place on the other side of the ledger. Moreover, those among us who believe that there is such a gap should perhaps become more familiar with the richness of lives of many of our conscious fellow creatures. Our own science is revealing that richness to us, ironically giving us a reason to eliminate it along with our own continued existence.

One might ask here whether, given this view, it would also be a good thing for those of us who are currently here to end our lives in order to prevent further animal suffering. Although I do not have a final answer to this question, we should recognize that the case of future humans is very different from the case of currently existing humans. To demand of currently existing humans that they should end their lives would introduce significant suffering among those who have much to lose by dying. In contrast, preventing future humans from existing does not introduce such suffering, since those human beings will not exist and therefore not have lives to sacrifice. The two situations, then, are not analogous.

It may well be, then, that the extinction of humanity would make the world better off and yet would be a tragedy. I don’t want to say this for sure, since the issue is quite complex. But it certainly seems a live possibility, and that by itself disturbs me.

There is one more tragic aspect to all of this. In many dramatic tragedies, the suffering of the protagonist is brought about through his or her own actions. It is Oedipus’s killing of his father that starts the train of events that leads to his tragic realization; and it is Lear’s highhandedness toward his daughter Cordelia that leads to his demise. It may also turn out that it is through our own actions that we human beings bring about our extinction, or at least something near it, contributing through our practices to our own tragic end.

You’re More Likely to Die in a Human Extinction Event Than a Car Crash


“A typical person is more than five times as likely to die in an extinction event as in a car crash,” says a new report.

An earlier version of this story presented an economic modeling assumption (a 0.1-percent chance of human extinction per year) as a vetted scholarly estimate. Following a correction from the Global Priorities Project, the text below has been updated.

Nuclear war. Climate change. Pandemics that kill tens of millions.

These are the most viable threats to globally organized civilization. They’re the stuff of nightmares and blockbusters—but unlike sea monsters or zombie viruses, they’re real, part of the calculus that political leaders consider every day. A new report from the U.K.-based Global Challenges Foundation urges us to take them seriously.

The nonprofit began its annual report on “global catastrophic risk” with a startling provocation: If figures often used to compute human extinction risk are correct, the average American is more than five times likelier to die during a human-extinction event than in a car crash.

Partly that’s because the average person will probably not die in an automobile accident. Every year, about one in 9,395 people dies in a crash; that translates to about a 0.01 percent chance per year. But that chance compounds over the course of a lifetime: over the scale of a lifespan, about one in 120 Americans dies in a crash.

Yet the risk of human extinction due to climate change—or an accidental nuclear war, or a meteor—could be much higher than that. The Stern Review, the U.K. government’s premier report on the economics of climate change, assumed a 0.1-percent risk of human extinction every year. That may sound low, but it adds up when extrapolated to the scale of a century. Across 100 years, that figure would entail a 9.5 percent chance of human extinction.
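
To see where that 9.5 percent comes from, here is a back-of-the-envelope check, assuming a constant annual risk p that is independent from year to year:

P(at least one occurrence in n years) = 1 − (1 − p)^n
1 − (1 − 0.001)^100 ≈ 1 − 0.905 ≈ 0.095, or about 9.5 percent

The same compounding produces the car-crash figure above: over an assumed 80-year lifespan, 1 − (1 − 1/9,395)^80 ≈ 0.85 percent, on the order of one in 120. And an annual risk of 0.2 percent, a figure that appears below, would compound to roughly 18 percent per century.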

And that number might even underestimate the risk. A 2008 Oxford survey of experts posited an even higher annual extinction risk: 0.2 percent. And the chance of dying from any major global calamity is also likely higher. The Stern Review, which supplies the 9.5-percent number, considered only the danger of species-wide extinction. The Global Challenges Foundation’s report is concerned with all events that would wipe out more than 10 percent of Earth’s human population.

“We don’t expect any of the events that we describe to happen in any 10-year period. They might—but, on balance, they probably won’t,” Sebastian Farquhar, the director of the Global Priorities Project, told me. “But there’s lots of events that we think are unlikely that we still prepare for.”

For instance, most people demand working airbags in their cars, and they buckle their seat belts whenever they go for a drive, he said. We may know that the risk of an accident on any individual car ride is low, but we still believe it makes sense to reduce possible harm.

So what kinds of events could wipe out humanity? The report ranks catastrophic climate change and nuclear war far above the rest, and for good reason. On the latter front, it cites multiple occasions when the world stood on the brink of atomic annihilation. While most of these occurred during the Cold War, another took place during the 1990s, the most peaceful decade in recent memory:

In 1995, Russian systems mistook a Norwegian weather rocket for a potential nuclear attack. Russian President Boris Yeltsin retrieved launch codes and had the nuclear suitcase open in front of him. Thankfully, Russian leaders decided the incident was a false alarm.

Climate change also poses its own risks. As I’ve written about before, serious veterans of climate science now suggest that global warming will spawn continent-sized superstorms by the end of the century. Farquhar said that even more conservative estimates can be alarming: UN-approved climate models estimate that the risk of six to ten degrees Celsius of warming exceeds 3 percent, even if the world tamps down carbon emissions at a fast pace. “On a more plausible emissions scenario, we’re looking at a 10-percent risk,” Farquhar said. Few climate adaptation scenarios account for swings in global temperature this enormous.

Other risks won’t stem from technological hubris. In any given year, there is some chance of a super-volcano erupting or an asteroid careening into the planet. Both would of course devastate the areas around ground zero—but they would also kick up dust into the atmosphere, blocking sunlight and sending global temperatures plunging. (Most climate scientists agree that the same phenomenon would follow any major nuclear exchange.)

Yet natural pandemics may pose the most serious risks of all. In fact, in the past two millennia, the only two events that experts can certify as global catastrophes of this scale were plagues. The Black Death of the 1340s felled more than 10 percent of the world population. Eight centuries prior, another epidemic of the Yersinia pestis bacterium—the “Great Plague of Justinian” in 541 and 542—killed between 25 and 33 million people, or between 13 and 17 percent of the global population at that time.

No event approached these totals in the 20th century. The two world wars did not come close: About 1 percent of the global population perished in the Great War, about 3 percent in World War II. Only the Spanish flu epidemic of the late 1910s, which killed between 2.5 and 5 percent of the world’s people, approached the medieval plagues. Farquhar said there’s some evidence that the First World War and Spanish influenza were the same catastrophic global event—but even then, the death toll only came to about 6 percent of humanity.

The report briefly explores other possible risks: a genetically engineered pandemic, geo-engineering gone awry, an all-seeing artificial intelligence. It clarifies that, unlike nuclear war or global warming, these remain mostly notional threats, even as it cautions:

[N]early all of the most threatening global catastrophic risks were unforeseeable a few decades before they became apparent. Forty years before the discovery of the nuclear bomb, few could have predicted that nuclear weapons would come to be one of the leading global catastrophic risks. Immediately after the Second World War, few could have known that catastrophic climate change, biotechnology, and artificial intelligence would come to pose such a significant threat.

So what’s the societal version of an airbag and seatbelt? Farquhar conceded that many existential risks are best handled by policies tailored to the specific issue, like reducing stockpiles of warheads or cutting greenhouse-gas emissions. But civilization could generally increase its resilience by developing technology to rapidly accelerate food production. If technological society had the power to ramp up food sources that depend less on sunlight, especially, there would be a “lower chance that a particulate winter [from a volcano or nuclear war] would have catastrophic consequences.”

He also thought many problems could be helped if democratic institutions had some kind of ombudsman or committee to represent the interests of future generations. (This strikes me as a distinctly European proposal—in the United States, the national politics of a “representative of future generations” would be thrown off by the abortion debate and unborn personhood, I think.)

AI researchers back Elon Musk’s fears of technology causing human extinction


When entrepreneur Elon Musk took the podium at a recent MIT symposium on technology, he turned heads by speaking openly about the threat of artificial intelligence potentially causing human extinction.

“I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it’s probably that,” said Elon Musk, CEO of electric car maker Tesla Motors. “With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he’s sure he can control the demon. It doesn’t work out.”

Elon Musk is known for Tesla Motors’ dual-motor Model S sedan, which features a revolutionary autopilot system that allows the vehicle to steer itself between lanes. He is also the CEO and co-founder of SpaceX, a company that is now striving to build communities on Mars.

Can AI gain consciousness and evil intent?

It turns out that Elon Musk’s doomsday fears are not far-fetched. In fact, prominent AI researchers are coming out to back his concerns.

“At first I was surprised and then I thought, ‘this is not completely crazy,'” said Andrew Moore, a computer scientist at Carnegie Mellon University. “I actually do think this is a valid concern and it’s really an interesting one. It’s a remote, far future danger but sometime we’re going to have to think about it. If we’re at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we’re doing.”

Moore and Musk agree that advancing artificial intelligence should be met with regulatory oversight at both the national and international levels. In early August, Musk made his concerns public, saying that AI is “potentially more dangerous than nukes.”

Could AI robots defy humans and ultimately turn on them? Could they outgrow their masters, override their instructions, and become hostile? Do the autopilot features developed at Musk’s own automobile company pose a future threat to humans?

How long before AI learns to reason on its own?

Sonia Chernova, the director of the Robot Autonomy and Interactive Learning lab in the Robotics Engineering Program at Worcester Polytechnic Institute, says, “It’s important to understand that the average person doesn’t understand how prevalent AI is.” But Chernova stresses the importance of differentiating between the various levels of artificial intelligence. Some AI research is harmless, she said, like the artificial intelligence built into email to filter out spam. Phone applications that recommend movies and restaurants to users are essentially harmless AI. Google also uses AI for its Maps service.

She said many AI technologies pose no risk: “I think [Musk’s] comments were very broad and I really don’t agree there. His definition of AI is a little more than what we really have working. AI has been around since the 1950s. We’re now getting to the point where we can do image processing pretty well, but we’re so far away from making anything that can reason.”

Researchers generally agree that artificial intelligence crosses a line once it can reason on its own, but Chernova says it could take 100 years or more to build such a system.

Creating a system that keeps humans in control

Still, Musk said that he wants “to keep an eye” on AI researchers. That’s why he helped invest $40 million in Vicarious FPC, a company working on future AI algorithms.

Yaser Abu-Mostafa, a professor of electrical engineering and computer science at the California Institute of Technology, agrees that AI is far from being able to reason intelligently, but says it is only a matter of time. He believes in creating systems that keep humans in control at all costs.

“Having a machine that is evil and takes over… that cannot possibly happen without us allowing it,” said Abu-Mostafa. “There are safeguards… If you go through the scenario of a machine that wants to take over or destroy the world, it’s a nice science-fiction scenario, as long as we don’t allow a system to control itself.”

Is it possible to give robots consciousness? Could evil intent take over machine intelligence and evolve to become an enemy of mankind, a threat to the human race?

Sources:

http://www.computerworld.com

http://www.naturalnews.com