The Slippery Slope: If Facebook bans content that questions vaccine dogma, will it soon ban articles about toxic chemotherapy, fluoride and pesticides, too?



In keeping with the company’s ongoing efforts to censor all truth while promoting only establishment fake news on its platform, social media giant Facebook has decided to launch a full-scale war against online free speech about vaccines.

Pandering to demands from California Democrat Adam Schiff, Mark Zuckerberg and his team recently announced that they are now “exploring additional measures to best combat the problem” of Facebook users discussing and sharing information about how vaccines are harming and killing children via social media.

According to an official statement released by Facebook, the Bay Area-based corporation is planning to implement some changes to the platform in the very near future that may include “reducing or removing this type of content from recommendations, including Groups You Should Join, and demoting it in search results, while also ensuring that higher quality and more authoritative information is available.”

In other words, the only acceptable form of online speech pertaining to vaccines that will be allowed on Facebook is speech that conforms to whatever the U.S. Centers for Disease Control and Prevention (CDC) says is “accurate” and “scientific.” Anything else, even if it comes from scientific authorities with a differing viewpoint, will be classified as false by Facebook, and consequently demoted or removed.

Facebook’s censorship tactics are becoming more nefarious by the day. To keep up with the latest news, be sure to check out Censorship.news.


Facebook is quickly becoming the American government’s ministry of propaganda

Facebook’s rationale, of course, is that it’s simply looking out for the best interests of users who might be “misled” by information shared in Facebook groups suggesting that the MMR vaccine for measles, mumps, and rubella, as one example, isn’t nearly as safe as government health authorities claim.

And that’s just it: There are many things that the government is wrong about, but that have been officially sanctioned as “truth” by government propagandists. If Facebook bows down to these government hacks with regard to vaccines, there’s no telling what the company will try to ban from its platform in the future.

As we saw in the case of Cassandra C. from Connecticut, the government actually forced this young girl to undergo chemotherapy against her will, claiming that the “treatment” was absolutely necessary to “cure” her of Hodgkin’s lymphoma.

Not only did the government deny young Cassandra the right to make her own medical decisions, but it also overrode the will of her parents, who likewise opposed taking the chemotherapy route. In essence, the government forced Cassandra to undergo chemotherapy at gunpoint, and now it’s trying to do the exact same thing with Facebook.

If little Adam Schiff succeeds in forcing Facebook to allow only information on its platform that conforms to the official government position on vaccines, the next step will be to outlaw the sharing of information on the platform about the dangers of chemotherapy, as well as the dangers of fluoride, pesticides, and other deadly chemicals that the government has deemed “safe and effective.”

Soon there won’t be any free speech at all on Facebook, assuming the social media giant actually obeys this latest prompting by the government to steamroll people’s First Amendment rights online. And where will it end?

“The real national emergency is the fact that Democrats have power over our lives,” warns Mike Adams, the Health Ranger.

“These radical Leftists are domestic terrorists and suicidal cultists … they are the Stasi, the SS, the KGB and the Maoists rolled all into one. They absolutely will not stop until America as founded is completely ripped to shreds and replaced with an authoritarian communist-leaning regime run by the very same tyrants who tried to carry out an illegal political coup against President Trump.”


With INFANTICIDE now a core “value” of Democrats, all decent, life-loving human beings must denounce the Democrat party



There’s no two ways about it anymore: the Democrat Party is evil beyond words. With the Democrats’ recent vote against the Born-Alive Abortion Survivors Protection Act, a bill that would have protected newly born children from being killed by abortionists after surviving an abortion, it is now undeniably evident that there is no possible way for decent human beings who support human rights and life in general to, in any way, identify as Democrats.

As if their love for abortion wasn’t already bad enough, today’s Democrats see nothing wrong with delivering the child victims of failed abortions and allowing them to die on the delivery table, all in the name of “reproductive rights” and “choice.” This newfound adoption of infanticide, a.k.a. baby murder, as one of their core “values” proves once and for all that Democrats hate human life, and openly embrace the “progressive” policy of murdering babies after they’ve already left the womb.

We might as well start referring to the Democrat Party as the Death Party – the party that will “cry” over the deaths of children whenever it suits its agenda of trying to scrap the Second Amendment, but that hoots, hollers, cheers, and claps when legislation is passed and signed that allows newborn babies to be chopped into bits and trashed as “medical waste” upon breathing their first breath of air.

There’s certainly no place for real Christians in the Democrat Party, which embraces pretty much every evil thing that the Bible condemns. Whether it’s brainwashing innocent children into believing that there are unlimited genders, or silencing free speech about the dangers of vaccines, the Democrat Party wants to destroy all that is good and wholesome, and replace it with every type of vice and wickedness.


For more related news about the evil agenda of the Democrat Party and its army of “resistance” Leftists, be sure to check out LiberalMob.com and Libtards.news.


Things have taken a major turn for the worse since 2002, when a bipartisan Senate UNANIMOUSLY affirmed that born-alive children are human beings deserving of life

Believe it or not, it wasn’t that long ago that Democrats, or at least some of them, still had some level of conscience within their beings. Back in 2002, in fact, Senate Democrats joined Republicans in voting unanimously to pass the Born-Alive Infants Protection Act. This bill recognized all born children as “human persons,” affording them the same rights and protections as all other humans.

But somehow over the years, the Democrat Party decided that granting human life status to newborn babies infringed upon “women’s rights,” and here we are today.

“In just over a decade and a half, Democrats have gone from ‘safe, legal, and rare abortions’ to ‘kill ’em all and don’t stop when they’re born,’” writes Matt Walsh for The Daily Wire. “Many of us warned that the first slogan would lead eventually to the second. We take no pleasure in our vindication.”

As you may recall, it was the Republican Party that had to step up to the plate in the past to stamp out another evil known as slavery, which was openly embraced by the Democrat Party. And it’s now up to Republicans once again to intervene on behalf of society’s most vulnerable, unborn and newborn babies, to protect them from the Democrat Party death cult.

“It is probably not a coincidence that the Democrat Party, through its long and sordid history, has supported both of those peculiar institutions,” Walsh adds about the Democrats’ support for both slavery and baby murder.

“What a force for evil it has been. But what amazing consistency – to always fall on the wrong side of every human rights issue.”

Even a ‘Limited’ Nuclear War Could Wreck Earth’s Climate And Trigger Global Famine


Deadly tensions between India and Pakistan are boiling over in Kashmir, a disputed territory on the northern border of both countries.

A regional conflict is worrisome enough, but climate scientists warn that if either country launches just a portion of its nuclear weapons, the situation might escalate into a global environmental and humanitarian catastrophe.

On February 14, a suicide bomber killed at least 40 Indian troops in a convoy travelling through Kashmir. A militant group based in Pakistan called Jaish-e-Mohammed claimed responsibility for the attack. India responded by launching airstrikes against its neighbour – the first in roughly 50 years – and Pakistan has said it shot down two Indian fighter jets and captured one of the pilots.

Each country possesses about 140 to 150 nuclear weapons. Though nuclear conflict is unlikely, Pakistani leaders have said their military is preparing for “all eventualities”. The country has also convened the group responsible for making decisions on nuclear strikes.

“This is the premier nuclear flashpoint in the world,” Ben Rhodes, a political commentator, said on Wednesday’s episode of the “Pod Save the World” podcast.

For that reason, climate scientists have modelled how an exchange of nuclear weapons between the two countries – what is technically called a limited regional nuclear war – might affect the world.

Though the explosions would be local, the ramifications would be global, that research concluded. The ozone layer could be crippled and Earth’s climate may cool for years, triggering crop and fishery losses that would result in what the researchers called a “global nuclear famine”.

“The danger of nuclear winter has been under-understood – poorly understood – by both policymakers and the public,” Michael Mills, a researcher at the US National Center for Atmospheric Research, told Business Insider.

“It has reached a point where we found that nuclear weapons are largely unusable because of the global impacts.”

Why a ‘small’ nuclear war could ravage Earth

When a nuclear weapon explodes, its effects extend beyond the structure-toppling blast wave, blinding fireball, and mushroom cloud. Nuclear detonations close to the ground, for example, can spread radioactive debris called fallout for hundreds of miles.

But the most frightening effect is intense heat that can ignite structures for miles around. Those fires, if they occur in industrial areas or densely populated cities, can merge into a phenomenon called a firestorm.

“These firestorms release many times the energy stored in nuclear weapons themselves,” Mills said. “They basically create their own weather and pull things into them, burning all of it.”

Mills helped model the outcome of an India-Pakistan nuclear war in a 2014 study. In that scenario, each country launches 50 weapons, less than half of its arsenal. Each of those weapons is capable of triggering a Hiroshima-size explosion, or about 15 kilotons’ worth of TNT.

The model suggested those explosions would release about 5 million tons of smoke into the air, triggering a decades-long nuclear winter.

The effects of this nuclear conflict would eliminate 20 to 50 percent of the ozone layer over populated areas. Surface temperatures would become colder than they have been for at least 1,000 years.

The bombs in the researchers’ scenario are about as powerful as the Little Boy nuclear weapon dropped on Hiroshima in 1945, enough to devastate a city.

But that’s far weaker than many weapons that exist today. The latest device North Korea tested was estimated to be about 10 times as powerful as Little Boy. The US and Russia each possess weapons 1,000 times as powerful.

Still, the number of weapons used matters more than their individual yield, according to the calculations in this study.

How firestorms would wreck the climate

Most of the smoke in the scenario the researchers considered would come from firestorms that would tear through buildings, vehicles, fuel depots, vegetation, and more.

This smoke would rise through the troposphere (the atmospheric zone closest to the ground), and particles would then be deposited in a higher layer called the stratosphere. From there, tiny black-carbon aerosols could spread around the globe.

“The lifetime of a smoke particle in the stratosphere is about five years. In the troposphere, the lifetime is one week,” Alan Robock, a climate scientist at Rutgers University who worked on the study, told Business Insider.

“So in the stratosphere, the lifetime of smoke particles is much longer, which gives it 250 times the impact.”
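To see where that factor comes from (an editorial back-of-the-envelope check, not a figure from the study itself), compare the two residence times directly, writing τ for the approximate lifetime of a smoke particle in each layer:

\[
\frac{\tau_{\mathrm{stratosphere}}}{\tau_{\mathrm{troposphere}}} \approx \frac{5~\mathrm{years}}{1~\mathrm{week}} = \frac{260~\mathrm{weeks}}{1~\mathrm{week}} \approx 250
\]

A particle that stays aloft roughly 250 times longer can block sunlight and affect atmospheric chemistry for correspondingly longer, which is the sense in which Robock describes its impact as about 250 times greater.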

The fine soot would cause the stratosphere, normally below freezing, to be dozens of degrees warmer than usual for five years. It would take two decades for conditions to return to normal.

This would cause ozone loss “on a scale never observed,” the study said.

That ozone damage would consequently allow harmful amounts of ultraviolet radiation from the sun to reach the ground, hurting crops and humans, harming ocean plankton, and affecting vulnerable species all over the planet.

But it gets worse: Earth’s ecosystems would also be threatened by suddenly colder temperatures.

(Map of projected surface temperature changes after the modelled exchange: Mills et al., Earth’s Future, 2014)

The fine black soot in the stratosphere would prevent some sun from reaching the ground. The researchers calculated that average temperatures around the world would drop by about 1.5 degrees Celsius over the five years following the nuclear blasts.

In populated areas of North America, Europe, Asia, and the Middle East, changes could be more extreme (as illustrated in the graphic above). Winters there would be about 2.5 degrees colder and summers between 1 and 4 degrees colder, reducing critical growing seasons by 10 to 40 days. Expanded sea ice would also prolong the cooling process, since ice reflects sunlight away.

“It’d be cold and dark and dry on the ground, and that would affect plants,” Robock said. “This is something everybody should be concerned about because of the potential global effects.”

The change in ocean temperatures could devastate sea life and fisheries that much of the world relies on for food. Such sudden blows to the food supply and the “ensuing panic” could cause “a global nuclear famine”, according to the study’s authors.

Temperatures wouldn’t return to normal for more than 25 years.

The effects might be much worse than previously thought

Robock is working on new models of nuclear-winter scenarios; his team was awarded a nearly US$3 million grant from the Open Philanthropy Project to do so.

“You’d think the Department of Defence and the Department of Homeland Security and other government agencies would fund this research, but they didn’t and had no interest,” he said.

Since his earlier modelling work, Robock said, the potential effects of a nuclear conflict between India and Pakistan have gotten worse. That’s because India and Pakistan now have more nuclear weapons, and their cities have grown.

“It could be about five times worse than what we’ve previously calculated,” he said.

Because of his intimate knowledge of the potential consequences, Robock advocates the reduction of nuclear arsenals around the world. He said he thinks Russia and the US – which has nearly 7,000 nuclear weapons – are in a unique position to lead the way.

“Why don’t the US and Russia each get down to 200? That’s a first step,” Robock said.

“If President Trump wants the Nobel Peace Prize, he should get rid of land-based missiles, which are on hair-trigger alert, because we don’t need them,” he added.

“That’s how he’ll get a peace prize – not by saying we have more than anyone else.”

The future of work won’t be about college degrees, it will be about job skills


  • According to the survey Freelancing in America 2018, released Wednesday, 93 percent of freelancers with a four-year college degree say skills training was useful versus only 79 percent who say their college education was useful to the work they do now.
  • Sixty-five percent of children entering primary school will end up in jobs that don’t yet exist, reveals the World Economic Forum.
  • The result is a proliferation of new, nontraditional education options.

Students walk across campus at the University of Vermont in Burlington.

Twenty million students started college this fall, and this much is certain: The vast majority of them will be taking on debt — a lot of debt.

What’s less certain is whether their degrees will pay off.

According to the survey Freelancing in America 2018, released Wednesday, freelancers put more value on skills training: 93 percent of freelancers with a four-year college degree say skills training was useful versus only 79 percent who say their college education was useful to the work they do now. In addition, 70 percent of full-time freelancers participated in skills training in the past six months compared to only 49 percent of full-time non-freelancers.

The fifth annual survey, conducted by research firm Edelman Intelligence and co-commissioned by Upwork and Freelancers Union, polled 6,001 U.S. workers.

This new data points to something much larger. Rapid technological change, combined with rising education costs, has made our traditional higher-education system an increasingly anachronistic and risky path. The cost of a college education is so high now that we have reached a tipping point at which the debt incurred often isn’t outweighed by future earnings potential.

Yet too often, degrees are still thought of as lifelong stamps of professional competency. They tend to create a false sense of security, perpetuating the illusion that work — and the knowledge it requires — is static. It’s not.


For example, a 2016 World Economic Forum report found that “in many industries and countries, the most in-demand occupations or specialties did not exist 10 or even five years ago, and the pace of change is set to accelerate.”

And recent data from Upwork confirms that acceleration. Its latest Upwork Quarterly Skills Index, released in July, found that “70 percent of the fastest-growing skills are new to the index.”

Expect the change to keep coming. The WEF cites one estimate finding that 65 percent of children entering primary school will end up in jobs that don’t yet exist.


These trends aren’t just academic to me. They’ve influenced the advice I give my children. While my father had one job throughout his life, I’ve had several. And I tell my children that they can expect to hold not only many jobs throughout their working lives but multiple jobs at the same time.

It is therefore imperative that we encourage more options to thrive without our current overreliance on college degrees as proof of ability. We need new routes to success and hope.

New, nontraditional education options

The future of work won’t be about degrees. More and more, it’ll be about skills. And no one school, whether it be Harvard, General Assembly or Udacity, can ever insulate us from the unpredictability of technological progression and disruption.

As a leader of a technology company and former head of engineering, I’ve hired many programmers during my career. And what matters to me is not whether someone has a computer science degree but how well they can think and how well they can code. In fact, among the top 20 fastest-growing skills on Upwork’s latest Skills Index, none require a degree.

Freelancers, the fastest-growing segment of the workforce, realize more than most that education doesn’t stop. It’s a lifelong process, and they are nearly twice as likely as non-freelancers to reskill.

More and more, companies are catching on. Last year PwC began a pilot program allowing high school graduates to begin working as accountants and risk-management consultants. And this August, jobs website Glassdoor listed “15 more companies that no longer require a degree,” including tech giants such as Apple, IBM and Google. “Increasingly,” Glassdoor reported, “there are many companies offering well-paying jobs to those with nontraditional education or a high-school diploma.”

Google, for example, used to ask applicants for their college GPAs and transcripts; however, as Laszlo Bock — its head of hiring — has explained, those metrics aren’t valuable predictors of an employee’s performance. As a result, Bock told The New York Times a few years ago that the portion of non-college-educated employees at Google has grown over time.

At the same time, new nontraditional education options are proliferating. Would-be college students can now enroll in campus-based, project-focused institutions that are laser-focused on the most in-demand skills, such as the Holberton School (where I’m a trustee), or in online programs like Coursera or Udemy.

To be sure, I’m not saying college is a waste of time and money for everyone. But if there’s one takeaway, it’s this: The future of work won’t be about degrees. More and more, it’ll be about skills. And no one school, whether it be Harvard, General Assembly or Udacity, can ever insulate us from the unpredictability of technological progression and disruption.

But one thing can: The fastest-growing segment of the workforce — freelancers — has realized more than most that education doesn’t stop. It’s a lifelong process. Diploma or not, it’s a mindset worth embracing.

U.S. Isolated at U.N. Over Its Concerns About Abortion, Refugees


The United States found itself isolated in the 193-member United Nations General Assembly on Monday over Washington’s concerns about the promotion of abortion and a voluntary plan to address the global refugee crisis.

Only Hungary backed the United States and voted against an annual resolution on the work of the U.N. refugee agency, while 181 countries voted in favor and three abstained. The resolution has generally been approved by consensus for more than 60 years.

However, this year the resolution included approval of a compact on refugees, which was produced by U.N. refugee chief Filippo Grandi after it was requested by the General Assembly in 2016. The resolution calls on countries to implement the plan.

The United States was the only country to oppose the draft resolution last month when it was first negotiated and agreed by the General Assembly human rights committee. It said elements of the text ran counter to its sovereign interests, citing the global approach to refugees and migrants.

General Assembly resolutions are non-binding but can carry political weight. U.S. President Donald Trump used his annual address to world leaders at the United Nations in September to tout protection of U.S. sovereignty.

The United States also failed in a campaign, which started last month during negotiations on several draft resolutions in the General Assembly human rights committee, against references to “sexual and reproductive health” and “sexual and reproductive health-care services.”

It has said the language has “accumulated connotations that suggest the promotion of abortion or a right to abortion that are unacceptable to our administration.”

On Monday, Washington unsuccessfully tried to remove two paragraphs from a General Assembly resolution on preventing violence and sexual harassment of women and girls. It was the only country to vote against the language, while 131 countries voted to keep it in the resolution and 31 abstained.

The United States also failed in trying to remove similar language in another resolution on child, early and forced marriage on Monday, saying: “We do not recognize abortion as a method of family planning, nor do we support abortion in our reproductive health assistance.”

Only Nauru backed Washington in voting against the language, while 134 countries voted to keep it in the resolution and 32 abstained.

When Trump came to power last year he reinstated the so-called Mexico City Policy that withholds U.S. funding for international organizations that perform abortions or provide information about abortion.

Visions of a Better World


Noam Chomsky, Richard Dawkins, Martin Rees and others answer the question: What’s your utopia?


Unless you are too stoned or enlightened to care, you are probably dissatisfied with the world as it is. In that case, you should have a vision of the world as you would like it to be. This better world is your utopia. That, at any rate, is the premise of a question I’ve been asking scientists and other thinkers lately: What’s your utopia?

I presented students’ responses to this question last year. This final column for 2018 (if aliens land in Central Park or CERN discovers a portal to a parallel universe, I’ll let major media handle it) offers responses from scientists and others I’ve interviewed lately. My hope is that these visions will cheer up readers bummed out by my previous post, “Dark Days.” See the end of the post for my utopia.

Noam Chomsky: I don’t have the talent to do more than to suggest what seem to me reasonable guidelines for a better future.  One might argue that Marx was too cautious in keeping to only a few general words about post-capitalist society, but he was right to recognize that it will have to be envisioned and developed by people who have liberated themselves from the bonds of illegitimate authority.

Richard Dawkins: My utopia is a world in which beliefs are based on evidence and morality is based on intelligent design—design by intelligent humans (or robots!). Neither beliefs nor morals should be based on gut feelings, or on ancient books, private revelations or priestly traditions.

Sheldon Solomon: Staying alive long enough to see that my children are relatively settled and economically secure and knowing that there’s a decent chance that the earth will not be reduced to a festering heap long before the sun explodes!

Sabine Hossenfelder: That we finally use scientific methods to restructure political and economic systems. The representative democracies that we have right now are entirely outdated and unable to cope with the complex problems which we must solve. We need new systems that better incorporate specialized knowledge and widely distributed information, and that better aggregate opinions. (I wrote about this in detail here.) It pains me a lot to think that my children will have to live through a phase of economic regress because we were too stupid and too slow to get our act together.

Scott Aaronson: Since I hang out with Singularity people so much, part of me reflexively responds: “utopia” could only mean an infinite number of sentient beings living in simulated paradises of their own choosing, racking up an infinite amount of utility.  If such a being wants challenge and adventure, then challenge and adventure is what it gets; if nonstop sex, then nonstop sex; if a proof of P≠NP, then a proof of P≠NP.  (Or the being could choose all three: it’s utopia, after all!)

Over a shorter time horizon, though, maybe the best I can do is talk about what I love and what I hate.  I love when the human race gains new knowledge, in math or history or anything else.  I love when important decisions fall into the hands of people who constantly second-guess themselves and worry that their own ‘tribe’ might be mistaken, who are curious about science and have a sense of the ironic and absurd.  I love when society’s outcasts, like Alan Turing or Michael Burry (who predicted the subprime mortgage crisis), force everyone else to pay attention to them by being inconveniently right.  And whenever I read yet another thinkpiece about the problems with “narrow-minded STEM nerds”—how we’re basically narcissistic children, lacking empathy and social skills, etc. etc.—I think to myself, “then let everyone else be as narrow and narcissistic as most of the STEM nerds I know; I have no further wish for the human race.”

On the other side, I hate the irreversible loss of anything—whether that means the deaths of individuals, the burning of the Library of Alexandria, genocides, the flooding of coastal cities as the earth warms, or the extinction of species.  I hate when the people in power are ones who just go with their gut, or their faith, or their tribe, or their dialectical materialism, and who don’t even feel self-conscious about the lack of error-correcting machinery in their methods for learning about the world.  I hate when kids with a passion for some topic have that passion beaten out of them in school, and then when they succeed anyway in pursuing the passion, they’re called stuck-up, privileged elitists.  I hate the “macro” version of the same schoolyard phenomenon, which recurs throughout cultures and history: the one where some minority is spat on and despised, manages to succeed anyway at something the world values, and is then despised even more because of its success.

So, until the Singularity arrives, I suppose my vision of utopia is simply more of what I love and less of what I hate!

David Deutsch: Of course I’m opposed to utopianism. Progress comes only through piecemeal, tentative improvements. I think the world will never be perfected, even when everything we think of as problematic today has been eliminated. We shall always be at the beginning of infinity. Never satisfied.

Stephen Wolfram: If you mean: what do I personally want to do all day?  Well, I’ve been fortunate that I’ve been able to set up my life to let me spend a large fraction of my time doing what I want to be doing, which usually means creating things and figuring things out.  I like building large, elegant, useful, intellectual and practical structures—which is what I hope I’ve done over a long period of time, for example, with Wolfram Language.

If you’re asking what I see as being the best ultimate outcome for our whole species—well, that’s a much more difficult question, though I’ve certainly thought about it.  Yes, there are things we want now—but how what we want will evolve after we’ve got those things is, I think, almost impossible for us to understand.  Look at what people see as goals today, and think how difficult it would be to explain many of them to someone even a few centuries ago.  Human goals will certainly evolve, and the things people will think are the best possible things to do in the future may well be things we don’t even have words for yet.

Peter Woit: Besides the peace, love and understanding thing, in my utopia everyone else would have as few problems and as much to enjoy about life as I currently do.

Martin Rees: A utopian society would, at the very least, require trust between individuals and their institutions. I worry that we are moving further from this ideal. Two trends are reducing interpersonal trust: firstly, the remoteness and globalization of those we routinely have to deal with; and secondly, the vulnerability of modern life to disruption – the realization that “hackers” or dissidents can trigger incidents that cascade globally. Such trends necessitate burgeoning security measures. These are already irritants in our everyday life – security guards, elaborate passwords, airport searches and so forth – but they are likely to become ever more vexatious. Innovations like blockchain could offer protocols that render the entire Internet more secure. But their current applications – allowing an economy based on crypto-currencies to function independently of traditional financial institutions – seem damaging rather than benign. It’s depressing to realize how much of the economy is dedicated to activities that would be superfluous if we felt we could trust each other. (It would be a worthwhile exercise if some economist could quantify this.)

And the world is so interconnected that no utopia could exist on the scale of one nation-state. Harmonious geopolitics would require a global distribution of wealth that’s perceived as fair – with far less inequality between rich and poor nations. And even without being utopian it’s surely a moral imperative (as well as in the self-interest of fortunate nations) to push towards this goal. Sadly, we downplay what’s happening even now in far-away countries and the plight of the “bottom billion.” And we discount too heavily the problems we’ll leave for new generations. Governments need to prioritize projects that are long-term in a political perspective, even if a mere instant in the history of our planet.

Tim Maudlin: In the utopian tradition that goes back to Plato (again!) utopias are not supposed to be real places. In Republic, Socrates says that it does not matter whether the ideal state actually exists: it is a pattern by reference to which one can judge the present situation and how it can be improved. There is a reason why Butler’s Erewhon is about a place called “Erewhon”. But as it happens my present not-yet-in-full-existence utopia is well on its way to full-blown reality. It is called the John Bell Institute for the Foundations of Physics, a non-profit institute formed to promote the study, teaching and investigation of the foundations of physics. So far we have our Faculty and Honorary Fellows and Bell Fellows and regular Fellows, and we have identified where our European campus will be (in Bojanić Bad, Hvar, Croatia) and are seeking an American campus in the Rockies. This is putting my views about utopia to the acid test.

Robin Hanson: My personal utopia would be an intellectual world where we actually lived up to most of the intellectual ideals we espouse. Where work is judged mainly on the long term benefit it gives the world, and arguments are accepted no matter how unpalatable their conclusions, or whose ox is gored. I actually think we know a lot about how to construct such a utopia if we wanted – see my work on futarchy and idea futures. The main problem seems to be that most of us don’t actually want my “utopia.”

Tyler Volk: John, having this opportunity to focus for a spell on your great questions: this is it!

Jim Holt: My utopia is a society that consists in its entirety of Tim and Vishnya Maudlin, David Albert, Jenann Ismael, Shelly Goldstein, Barry Loewer, Carlo Rovelli, Hartry Field, Trevor Teitel, and me, all arguing eternally about gauge theory while beautiful girls and comely boys peel grapes for us.

Nick Herbert: In sociology, I am utterly ignorant. My favorite poet Robinson Jeffers (“Shine, Perishing Republic”) held a dim view of human progress. Perhaps we are now living in the Last Golden Age before the Decline of the West. Whatever the case, Nick offers these words as a guide to rightly living in this odd complexity:

Love this well

ere it perish.

And thank you

for your mystery

which I almost entirely

do not understand.

John Horgan: As I argue in Mind-Body Problems, my free, online book, many of us are already living in pretty good utopias, democracies that give us unprecedented freedom to be who we want to be. But things could be—will be!—a lot better. We will recognize how stupid and wrong war is and end it once and for all. With the money we save from demilitarizing we will end poverty, too, improve education and health care for all, and solve the conundrum of climate change. And we will keep giving ourselves more freedom, more choices. Our children and their children will find new ways to be human, to live good, meaningful lives, ways we can’t even imagine now. This weird, wonderful human adventure will never, ever end. Happy Holidays!

The UN’s latest climate meeting ends positively


But there is a lot more to do if global warming is to be stopped

HOSTING COP24, the latest of the UN’s annual climate summits, in Katowice was meant to symbolise the transition from an old, dirty world to a new, clean one. Spiritually, the city is the home of Poland’s coal miners. Today, it is replete with besuited management consultants and bearded baristas. The venue itself was on top of a disused mine in the city centre.

Ahead of the two-week powwow, which concluded on December 15th, many feared the meeting would instead highlight the unresolved contradictions involved in that transition. So it came as a relief when nearly 14,000 delegates from 195 countries managed—more or less, and a day late—to achieve the gathering’s main objective: a “rule book” for putting into practice the Paris agreement of 2015, which commits the world to keeping global warming “well below” 2°C relative to pre-industrial times, and preferably within 1.5°C.

This outcome was far from assured. Setting an abstract goal, as governments had in Paris, is simpler than agreeing on how to go about reaching it. Technicalities—what counts as a reduction in emissions, who monitors countries’ progress and so on—can be politically thorny. Poland’s right-wing government, which presided over the talks, lacks both friends (alienated by, among other things, its anti-democratic attacks on judicial independence) and green credentials. Observers were braced for a diplomatic debacle.

Implementing the judgment of Paris

The summit got off to an inauspicious start. At the outset Poland’s president, Andrzej Duda, declared that his country cannot reasonably be expected to give up its 200 years’ worth of coal reserves. In France, his opposite number, Emmanuel Macron, caved in to massive protests and suspended a planned fuel-tax rise intended to help curb greenhouse-gas emissions from transport. Days earlier, Brazil had withdrawn its offer to host next year’s summit after Jair Bolsonaro, the president-elect who takes office in January and who would love to follow his American counterpart, Donald Trump, out of the Paris deal, said his government had no interest.

Despite these early setbacks, negotiators resolved most of the 2,800-odd points of contention in the rule book’s pre-summit draft. Michal Kurtyka, the amiable Polish bureaucrat who chaired the proceedings, turned apparent haplessness into a virtue by leaving delegates space to thrash out their differences.

Poor countries won firmer assurances that rich ones would help pay for their efforts to curb their greenhouse-gas emissions and to adapt to rising sea levels and fiercer floods, droughts, storms and other climate-related problems. The rich world, for its part, cajoled China into accepting uniform guidelines for tallying those emissions. Thus stripped of their most powerful voice, other developing countries reluctantly followed suit. If any cannot meet the standards, they must explain why and present a plan to make amends. This concession, long demanded by the Americans, may not persuade Mr Trump to keep the United States in the deal. But it could make things easier for any successor who wished to re-enter it after Mr Trump has left office.

Besides haggling over the rules, a handful of countries—including big polluters such as Ukraine—used the jamboree to announce plans for more ambitious “nationally determined contributions” (or NDCs, as the voluntary pledges countries submit under the Paris deal are known). The city councils of Melbourne and Sydney, in Australia, joined a growing number of national and local governments intent on phasing out coal. So did Israel and Senegal. In the wake of Brazil’s desertion, Chile stepped in to organise next year’s summit, which convention dictates should happen in Latin America. The Paris compact has thus not come apart at the seams.

Predictably, for negotiations that need to balance the interest of nearly 200 parties, no one leaves Katowice entirely happy. Vulnerable countries, such as small island states imperilled by rising seas, worry that the findings of a recent UN-backed scientific report outlining the dire consequences of another half a degree of warming, on top of the 1°C which has happened since the beginning of the Industrial Revolution, have been underplayed. Rich countries grumble that poor ones can still get away with emitting too much carbon dioxide.

Mr Kurtyka was also unable, because of Brazilian objections, to break an impasse on carbon trading. This is an arrangement that allows big belchers of CO2 to offset emissions by paying others to forgo some of theirs. Brazil balked at proposals intended to prevent double-counting in such trading, because it believed they penalised its large stockpile of carbon-trading instruments, such as promises not to chop down patches of the Amazon. As a result, the issue has been kicked into the long cassava.

The direction of travel is, nevertheless, correct. Earlier in the meeting Ottmar Edenhofer, a veteran German climate policymaker who is director of the Potsdam Institute for Climate Impact Research, had feared that Katowice would mark “the beginning of the end of the Paris agreement”. For all its shortcomings, the compromise which emerged is not that.

But after all is said and done, the 2°C goal (let alone the 1.5°C aspiration) still remains a distant prospect. The current set of NDCs puts the world on course for more or less 3°C of warming—and Kiribati and the Marshall Islands at risk of submersion. Campaigners, who spiced up the stodgy talks with a dash of sit-ins and marches, were right to decry the lack of ambition as unequal to the task of sparing future generations from climate catastrophe. The rule book is itself no nostrum for the planet’s man-made fever. The only real medicine would be firmer commitment to decarbonising economies. And, as Mr Macron is finding, that medicine can be bitter.

Suicide Is A Society-Wide Problem That Needs A Society-Wide Solution


People across our communities need the confidence and skills to speak openly about suicide.

The weekend before Greg Hunt got his fellow health ministers from the states and territories to agree to a national plan to reduce suicide, I watched people with paper butterflies in Bendigo trying to heal the sorrowful hurt of our national suicide emergency.

At a community event there, I saw affected family members and friends queue up — young and old, townies with tattoos and country conservatives in Akubras — to pin their homemade personal tributes onto a net that symbolised holding hope.

I counted some 50 butterflies and some 800 participants.

I listened to a local GP who regularly deals with people with suicidality say: “People aren’t dying to die. People are dying from the pain of not being heard.”

Now, as governments and stakeholders consider what a national suicide prevention plan should include, and as we finally join the other 28 countries that currently have one, we would be wise to listen to and learn from the hard-earned and heartfelt lessons of those with ‘lived experience’: those who directly deal with suicidal people, those affected by suicide deaths, and those who have overcome suicidality.


For the 3027 deaths by suicide in the past statistical year — a 10-year high at a time of 25 straight years of economic growth — there were likely more than 100,000 attempts. The vast majority of those who experience suicidality do not die.

Let’s start our listening there, where hope lives. We know from overseas successes that suicide is practically preventable. For many, suicidality is an experience of being overwhelmed by pain at a point in time. This ‘psychache’ is fed by isolating factors such as loss of work, lack of access to services, relationship breakdown, addiction and, in some but certainly not all cases, mental illness.

If we can hear people in that critical period and respectfully support them through what’s happening for them, many go on to live positive and prosperous lives. A national plan must therefore recognise crisis-support infrastructure as vital. We believe we contribute to saving some 1100 lives per week by being unconditionally there for people in intense pain and confusion.

Part of our contribution needs to be about matching our tradition of empathy with greater effectiveness. This year, to complement the nearly 1 million phone and Internet interactions we fielded from around 300,000 Australians in crisis, we will seek to introduce crisis text and messaging.

A large portion of Australian communications activity happens by SMS or some form of messaging, and that’s where we need to be to help. That’s especially true of men (about 75 percent of all suicides) and younger people (where rates are rising again), who may be more likely to use text or messaging in the first instance to seek help. It may also make crisis support more accessible to rural and regional communities with weaker mobile coverage, which typically have the most frightening suicide rates in Australia. We have at least enough money from the federal government and some very dedicated corporate partners to run a trial this year.


Another key message from people with ‘lived experience’, especially those who have sadly seen loved ones die, is the need for greater skills in the community to address suicide among our family, friends, workmates and neighbours. Organisations such as Mates in Construction are currently doing a great job of training people in the high-susceptibility industry that is construction.

But we need to do more to destigmatise suicide and empower more people to have suicide-related conversations. That includes more involvement by the broader business community, especially where suicide risk is higher. Focus should be on male-dominated professions, and ‘gatekeeper’ sectors such as education, social welfare, employment organisations and the judiciary. People across our society need the confidence and skills to speak openly about suicide, to remove the barriers such as shame and blame, and to encourage help-seeking.

On the other hand, ‘spotting the signs’ of suicide is a difficult proposition that often eludes trained professionals, and there’s limited return in training people in this method of prevention. It’s likely to be more effective to empower the community to ask the critical question, “Are you suicidal?”, that Lifeline asks an average of 2500 times per day.

We need to use what we know about speaking about suicide from our 54 years of experience and share it with a community that has come to trust us to a truly humbling extent. We need more support for school and university programs, and businesses are crying out for help for their employees, contractors, suppliers and stakeholders.

Another ‘lived experience’ voice that is vital to hear is the one that consistently says this to Lifeline crisis supporters: “I’ve just left the hospital after a suicide attempt and don’t know what to do.” There is a massive gap in services and support for the group that is much more likely to be suicidal: those who have already made an initial attempt. As overseas evidence suggests, many of the deaths of this group of people are preventable through better ‘post-vention’ and recovery, including improved discharge procedures, after-care facilities, follow-up services, and peer-to-peer support.

This we can do and it’s an area Hunt is very focussed on. It’s a group of people who number in their hundreds and we literally know them by name. They have been to hospital; we can deliver hope directly to them by breaking down the barriers between hospital systems and charities, and by using the best of what modern technology offers us, such as e-health.


Whether it comes from those with ‘lived experience’ or others, a key aspect is co-ownership. A society-wide problem needs a society-wide solution. A national strategy can’t be left to the mental health and emotional wellbeing sectors alone, because it will fail.

As an alternative approach, The Huffington Post Australia, Twitter, Accor, and Lifeline will soon hold a #stopsuicide summit with 50 CEO-level executives and leaders from multiple sectors such as financial services, public administration, media, transport, tourism, agriculture, the law, resources and ICT to discuss their ideas for innovation and problem solving around suicide.

Ultimately, it’s this continuum of compassion and innovation that we need to have a go at: as the World Health Organisation recommends, moving from ‘universal’ strategies to fight stigma, to ‘selective’ strategies to reduce risks in vulnerable communities, to ‘indicated’ strategies for specific people who need immediate support. As a colleague describes it: more of what works and more of what we need to try. And, in that respect, the principle of co-design, the use of evidence, the inclusion of measurement and evaluation, and the identification of accountability structures are simply non-negotiables in good policy and practice.

While a national plan is a good and necessary thing, the truth is that many suicides will be prevented not by changes in public policy but by changes in personal perspective. The disconnectedness and toxic loneliness that drive much suicide are given space to exist when we don’t go out of our way to look after each other and connect.

When the pervasive narcissism of our times negates our niceness to each other. When vanity blocks our values. When our empathy goes without everyday practice. When our compassion is doled out in convenient clicks rather than acts of kindness. When we don’t speak plainly about the very real social disadvantages that at least compound suicidality in many people.

In the months ahead, we have the chance to make a real plan to save Australian lives. But, in this very moment, we have the chance to make a real promise to ourselves to care for and connect with those who most need it. One bereaved mother in Bendigo told me that’s what she now devotes her life to; we should look at our own actions too.

U.S. Appeals Court Narrows Trump Birth Control Ruling


A U.S. appeals court on Thursday narrowed an order that had blocked President Donald Trump’s administration from enforcing new rules that undermine an Obamacare requirement for employers to provide insurance that covers women’s birth control.

Last year two federal judges, one in Philadelphia and one in Oakland, California, had blocked the government from enforcing rules allowing businesses or nonprofits to obtain exemptions from the contraception policy on moral or religious grounds. The Justice Department appealed both rulings.

The 9th U.S. Circuit Court of Appeals said on Thursday the injunction issued in California should not apply nationwide, but only within the five states that sued over the policy. California’s attorney general filed the case, along with AGs in Delaware, Virginia, Maryland and New York.

Despite the 9th Circuit ruling, a nationwide injunction issued by the Philadelphia judge is still in effect while that case is under appeal at the 3rd Circuit, a spokesman for Pennsylvania’s attorney general said on Thursday.

A U.S. Justice Department spokesman could not immediately be reached for comment. At the time the California injunction was issued, a spokeswoman said: “This administration is committed to defending the religious liberty of all Americans.”

One 9th Circuit judge, an appointee of Republican President George H.W. Bush, said he would have revoked the California injunction altogether.

The cases are among several that Democratic state attorneys general filed after the Republican Trump administration revealed the new rules which targeted the contraceptive mandate implemented as part of 2010’s Affordable Care Act, popularly known as Obamacare.

The rules would let businesses or nonprofits lodge religious or moral objections to obtain an exemption from the law’s mandate that employers provide contraceptive coverage in health insurance with no co-payment.

Conservative Christian activists and congressional Republicans praised the move, while reproductive rights advocates and Democrats criticized it.

Inside the Two Years That Shook Facebook—and the World


One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.

“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.

All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”

So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.

A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.

Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.

Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.

The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.

That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.

The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.

The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.

This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)

The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.

In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.

II

By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.

The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.

But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.

Facebook’s Reckoning

Two years that forced the platform to change

by Blanca Myers

March 2016

Facebook fires Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.

May 2016

Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.

July 2016

Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.

August 2016

Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.

November 2016

Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.

December 2016

Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.

September 2017

Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.

October 2017

Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.

November 2017

Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.

January 2018

Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”

In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.

So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”

It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”

This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.

In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.

III

In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”

But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.

Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.

The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.

Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.

According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.

The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.

Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”

Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.

The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.

IV

Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.

Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.

Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)

When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.

But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?

The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”

V

While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.

In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.

Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
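
In broad strokes, that kind of lookalike expansion can be pictured as a similarity search: represent the seed audience as feature vectors and rank the rest of the user base by how close each person sits to the seed's centroid. The sketch below is a toy illustration of the general technique under those assumptions, not Facebook's implementation; the function, the features, and the numbers are all invented.

    # Hypothetical sketch of "lookalike" expansion: score every candidate user
    # by cosine similarity to the centroid of a seed audience (say, newsletter
    # sign-ups), then keep the closest matches. Not Facebook's actual system.
    import numpy as np

    def lookalike_audience(seed_features, candidate_features, top_k):
        """Return the indices of the top_k candidates closest to the seed centroid."""
        centroid = seed_features.mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        norms = np.linalg.norm(candidate_features, axis=1)
        scores = candidate_features @ centroid / np.clip(norms, 1e-9, None)
        return np.argsort(scores)[::-1][:top_k]

    # Toy usage: three invented demographic/behavioral features per user.
    seed = np.array([[1.0, 0.2, 0.9], [0.9, 0.1, 1.0]])      # e.g., hat buyers
    candidates = np.random.default_rng(0).random((1000, 3))   # the wider user base
    audience = lookalike_audience(seed, candidates, top_k=50)

Cosine similarity to a centroid is only one stand-in for "similar traits"; the point is the shape of the pipeline: a seed list goes in, a ranked list of prospects comes out.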

Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.

Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”

VI

It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.

Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.
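
A toy calculation with invented numbers shows why such an aggregate figure says so little: a sliver of the platform's total content can still account for a large share of what one targeted segment actually sees.

    # Invented numbers, purely to illustrate the aggregation problem: a tiny
    # platform-wide percentage can still dominate one segment's feed.
    total_items = 1_000_000_000      # all election-related items shown, platform-wide
    fake_items = 5_000_000           # clearly fake items (0.5% overall)
    segment_items = 20_000_000       # items shown to one targeted segment
    fake_in_segment = 4_000_000      # fake items concentrated in that segment

    print(f"platform-wide fake share: {fake_items / total_items:.1%}")         # 0.5%
    print(f"share within the segment: {fake_in_segment / segment_items:.1%}")  # 20.0%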

Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”

A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”

At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.

Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.

Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.

In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
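
That outsourced fact-checking amounts, in outline, to threshold-based routing: accumulate signals that a story is false and, past a certain point, ship it to a third-party reviewer. The sketch below is a hypothetical illustration of such a flow; the signal weights, the threshold, and the partner-picking logic are invented rather than Facebook's.

    # Hypothetical threshold-based routing to outside fact-checkers. The article
    # says only that "enough signals" sent a story out for review; the weights
    # and threshold here are invented for illustration.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Story:
        url: str
        user_flags: int = 0            # readers who reported the story as false
        classifier_score: float = 0.0  # model-estimated probability of falsity
        sent_for_review: bool = False

    REVIEW_THRESHOLD = 5.0

    def falsity_signal(story: Story) -> float:
        return story.user_flags * 0.5 + story.classifier_score * 10.0

    def route_for_review(story: Story, partners: List[str]) -> Optional[str]:
        if not story.sent_for_review and falsity_signal(story) >= REVIEW_THRESHOLD:
            story.sent_for_review = True
            return partners[hash(story.url) % len(partners)]  # pick a partner
        return None

    partner = route_for_review(
        Story("http://example.com/hoax", user_flags=8, classifier_score=0.7),
        ["Snopes", "AnotherPartner"],
    )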

Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”

Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”

VII

Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.

Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.

“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”

As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”

The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.

The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.

VIII

Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.

And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.

Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.

This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.

As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.

Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-­meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.

IX

One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”

That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.

During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)

On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”

One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.

Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
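
The sorting itself can be pictured as a filter-and-group pass over an ad-transaction table: keep purchases paid in rubles or made from Russian-language browsers, then cluster them by the account that paid. The column names and figures below are invented; the sketch only illustrates the kind of query the article describes.

    # Invented data and column names; illustrates the filtering described above.
    import pandas as pd

    ads = pd.DataFrame({
        "account_id":   ["A1", "A1", "B2", "C3", "C3"],
        "currency":     ["RUB", "RUB", "USD", "RUB", "USD"],
        "browser_lang": ["ru",  "ru",  "en",  "en",  "ru"],
        "spend_usd":    [120.0, 80.0, 500.0, 60.0, 40.0],
    })

    suspicious = ads[(ads["currency"] == "RUB") | (ads["browser_lang"] == "ru")]
    clusters = (suspicious.groupby("account_id")["spend_usd"]
                .agg(["count", "sum"])
                .sort_values("sum", ascending=False))
    print(clusters)   # accounts ranked by suspicious ad spend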

Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.

When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”

Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.

This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.
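
The arithmetic behind a figure like that is easy to reproduce in spirit: pull the recent posts for each page, cap them at 500 apiece, and sum the share counts. The sketch below assumes a hypothetical CSV export per page with an invented "shares" column; it does not reflect CrowdTangle's actual export format or API.

    # Hypothetical tally in the spirit of Albright's analysis: read one exported
    # CSV per page and sum the share counts. File layout and columns are invented.
    import pandas as pd
    from pathlib import Path

    def total_shares(export_dir: str, per_page_limit: int = 500) -> int:
        total = 0
        for csv_path in Path(export_dir).glob("*.csv"):
            posts = pd.read_csv(csv_path).head(per_page_limit)  # most recent first
            total += int(posts["shares"].sum())
        return total

    # e.g., total_shares("exports/") -> combined share count across the pages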

X

To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”

McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.

And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”

After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said. “God only knows what it’s doing to our children’s brains.” Eleven days later, Facebook’s former privacy manager, Sandy Parakilas, published a New York Times op-ed calling for the government to regulate Facebook: “The company won’t protect us by itself, and nothing less than our democracy is at stake.”

XI

The day of the hearings, Zuckerberg had to give Facebook’s Q3 earnings call. The numbers were terrific, as always, but his mood was not. Normally these calls can put someone with 12 cups of coffee in them to sleep; the executive gets on and says everything is going well, even when it isn’t. Zuckerberg took a different approach. “I’ve expressed how upset I am that the Russians tried to use our tools to sow mistrust. We build these tools to help people connect and to bring us closer together. And they used them to try to undermine our values. What they did is wrong, and we are not going to stand for it.” The company would be investing so much in security, he said, that Facebook would make “significantly” less money for a while. “I want to be clear about what our priority is: Protecting our community is more important than maximizing our profits.” What the company really seeks is for users to find their experience to be “time well spent,” Zuckerberg said—using the three words that have become Tristan Harris’ calling card, and the name of his nonprofit.

Other signs emerged, too, that Zuckerberg was beginning to absorb the criticisms of his company. The Facebook Journalism Project, for instance, seemed to be making the company take its obligations as a publisher, and not just a platform, more seriously. In the fall, the company announced that Zuckerberg had decided—after years of resisting the idea—that publishers using Facebook Instant Articles could require readers to subscribe. Paying for serious publications, in the months since the election, had come to seem like both the path forward for journalism and a way of resisting the post-truth political landscape. (WIRED recently instituted its own paywall.) Plus, offering subscriptions arguably helped put in place the kinds of incentives that Zuckerberg professed to want driving the platform. People like Alex Hardiman, the head of Facebook news products and an alum of The New York Times, started to recognize that Facebook had long helped to create an economic system that rewarded publishers for sensationalism, not accuracy or depth. “If we just reward content based on raw clicks and engagement, we might actually see content that is increasingly sensationalist, clickbaity, polarizing, and divisive,” she says. A social network that rewards only clicks, not subscriptions, is like a dating service that encourages one-night stands but not marriages.

XII

A couple of weeks before Thanksgiving 2017, Zuckerberg called one of his quarterly all-hands meetings on the Facebook campus, in an outdoor space known as Hacker Square. He told everyone he hoped they would have a good holiday. Then he said, “This year, with recent news, a lot of us are probably going to get asked: ‘What is going on with Facebook?’ This has been a tough year … but … what I know is that we’re fortunate to play an important role in billions of people’s lives. That’s a privilege, and it puts an enormous responsibility on all of us.” According to one attendee, the remarks came across as blunter and more personal than any they’d ever heard from Zuckerberg. He seemed humble, even a little chastened. “I don’t think he sleeps well at night,” the employee says. “I think he has remorse for what has happened.”

During the late fall, criticism continued to mount: Facebook was accused of becoming a central vector for spreading deadly propaganda against the Rohingya in Myanmar and for propping up the brutal leadership of Rodrigo Duterte in the Philippines. And December brought another haymaker from someone closer by. Early that month, it emerged that Chamath Palihapitiya, who had been Facebook’s vice president for user growth before leaving in 2011, had told an audience at Stanford that he thought social media platforms like Facebook had “created tools that are ripping apart the social fabric” and that he feels “tremendous guilt” for being part of that. He said he tries to use Facebook as little as possible and doesn’t permit his children to use such platforms at all.

The criticism stung in a way that others hadn’t. Palihapitiya is close to many of the top executives at Facebook, and he has deep cachet in Silicon Valley and among Facebook engineers as a part-owner of the Golden State Warriors. Sheryl Sandberg sometimes wears a chain around her neck that’s welded together from one given to her by Zuckerberg and one given to her by Palihapitiya after her husband’s death. The company issued a statement saying it had been a long time since Palihapitiya had worked there. “Facebook was a very different company back then and as we have grown we have realized how our responsibilities have grown too.” Asked why the company had responded to Palihapitiya, and not to others, a senior Facebook executive said, “Chamath is—was—a friend to a lot of people here.”

Roger McNamee, meanwhile, went on a media tour lambasting the company. He published an essay in Washington Monthly and then followed up in The Washington Post and The Guardian. Facebook was less impressed with him. Executives considered him to be overstating his connection to the company and dining out on his criticism. Andrew Bosworth, a VP and member of the management team, tweeted, “I’ve worked at Facebook for 12 years and I have to ask: Who the fuck is Roger McNamee?”

Zuckerberg did seem to be eager to mend one fence, though. Around this time, a team of Facebook executives gathered for dinner with executives from News Corp at the Grill, an upscale restaurant in Manhattan. Right at the start, Zuckerberg raised a toast to Murdoch. He spoke charmingly about reading a biography of the older man and of admiring his accomplishments. Then he described a game of tennis he’d once played against Murdoch. At first he had thought it would be easy to hit the ball with a man more than 50 years his senior. But he quickly realized, he said, that Murdoch was there to compete.

XIII

On January 4, 2018, Zuckerberg announced that he had a new personal challenge for the year. For each of the past nine years, he had committed himself to some kind of self-improvement. His first challenge was farcical—wear ties—and the others had been a little preening and collegiate. He wanted to learn Mandarin, read 25 books, run 365 miles. This year, though, he took a severe tone. “The world feels anxious and divided, and Facebook has a lot of work to do—whether it’s protecting our community from abuse and hate, defending against interference by nation-states, or making sure that time spent on Facebook is time well spent,” Zuckerberg declared. The language wasn’t original—he had borrowed from Tristan Harris again—but it was, by the accounts of many people around him, entirely sincere.

That New Year’s challenge, it turned out, was a bit of carefully considered choreography setting up a series of announcements, starting with a declaration the following week that the News Feed algorithm would be rejiggered to favor “meaningful interactions.” Posts and videos of the sort that make us look or like—but not comment or care—would be deprioritized. The idea, explained Adam Mosseri, is that, online, “interacting with people is positively correlated with a lot of measures of well-being, whereas passively consuming content online is less so.”
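
To make that logic concrete, consider a minimal, purely illustrative sketch of the kind of re-weighting being described, written in Python. The signal names and weights below are invented for the sake of the example and do not describe Facebook's actual ranking system; the point is only that conversational signals (comments, replies, shares among friends) can be made to count for far more than passive views and one-tap likes.

```python
# Hypothetical sketch only: a toy scoring function illustrating the general idea of
# weighting "active" signals (comments, replies, shares between friends) above
# "passive" ones (views, likes). The signal names and weights are invented for
# illustration and do not reflect Facebook's actual ranking code.

def meaningful_interaction_score(post_signals: dict) -> float:
    """Score a post so that conversation outweighs passive consumption."""
    weights = {
        "comments": 4.0,        # back-and-forth discussion counts the most
        "replies": 3.0,         # replies to comments signal real conversation
        "friend_shares": 2.0,   # shares within a friend network
        "likes": 0.5,           # a like is cheap, so it counts far less
        "passive_views": 0.01,  # watching or scrolling past barely registers
    }
    return sum(weights.get(signal, 0.0) * count
               for signal, count in post_signals.items())


# Example: a heavily discussed post outranks a widely viewed but passive one.
discussed = {"comments": 40, "replies": 25, "likes": 10, "passive_views": 500}
viral_clip = {"likes": 200, "passive_views": 5000}
assert meaningful_interaction_score(discussed) > meaningful_interaction_score(viral_clip)
```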

To numerous people at the company, the announcement marked a huge departure. Facebook was putting a car in reverse that had been driving at full speed in one direction for 14 years. Since the beginning, Zuckerberg’s ambition had been to create another internet, or perhaps another world, inside of Facebook, and to get people to use it as much as possible. The business model was based on advertising, and advertising was insatiably hungry for people’s time. But now Zuckerberg said he expected that these new changes to News Feed would make people use Facebook less.

The announcement was hammered by many in the press. During the rollout, Mosseri explained that Facebook would downgrade stories shared by businesses, celebrities, and publishers, and prioritize stories shared by friends and family. Critics surmised that these changes were just a way of finally giving the publishing industry a middle finger. “Facebook has essentially told media to kiss off,” Franklin Foer wrote in The Atlantic. “Facebook will be back primarily in the business of making us feel terrible about the inferiority of our vacations, the relative mediocrity of our children, teasing us into sharing more of our private selves.”

But inside Facebook, executives insist this isn’t remotely the case. According to Anker, who retired from the company in December but worked on these changes, and who has great affection for the management team, “It would be a mistake to see this as a retreat from the news industry. This is a retreat from ‘Anything goes if it works with our algorithm to drive up engagement.’” According to others still at the company, Zuckerberg didn’t want to pull back from actual journalism. He just genuinely wanted there to be less crap on the platform: fewer stories with no substance; fewer videos you can watch without thinking.

And then, a week after telling the world about “meaningful interactions,” Zuckerberg announced another change that seemed to answer these concerns, after a fashion. For the first time in the company’s history, he said in a note posted to his personal page, Facebook will start to boost certain publishers—ones whose content is “trustworthy, informative, and local.” For the past year, Facebook has been developing algorithms to hammer publishers whose content is fake; now it’s trying to elevate what’s good. For starters, he explained, the company would use reader surveys to determine which sources are trustworthy. That system, critics were quick to point out, will surely be gamed, and many people will say they trust sources just because they recognize them. But this announcement, at least, went over a little better in boardrooms and newsrooms. Right after the post went up, the stock price of The New York Times shot up—as did that of News Corp.
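
To see why critics expected the survey system to be gamed, here is a hypothetical sketch of one way such a score could be computed. The two-question design (recognition, then trust) and the scoring rule are assumptions made for illustration, not Facebook's disclosed method; under these assumptions, a widely recognized brand beats an unfamiliar but careful outlet almost by default, which was precisely the objection.

```python
# Hypothetical sketch only: one simple way a reader survey could be turned into a
# per-publisher trust score. The two-question design and the normalization below
# are assumptions for illustration, not Facebook's actual methodology.

from dataclasses import dataclass


@dataclass
class SurveyResponse:
    recognizes_source: bool   # "Have you heard of this publisher?"
    trusts_source: bool       # "Do you trust this publisher?"


def trust_score(responses: list[SurveyResponse]) -> float:
    """Fraction of all respondents who both recognize and trust the source.

    Normalizing over all respondents, not just those who recognize the source,
    is one possible choice; it illustrates the critics' point that reliable but
    little-known outlets are penalized simply for being unfamiliar.
    """
    if not responses:
        return 0.0
    trusted = sum(1 for r in responses if r.recognizes_source and r.trusts_source)
    return trusted / len(responses)


# A widely recognized tabloid can outscore a little-known but careful local paper.
tabloid = [SurveyResponse(True, True)] * 60 + [SurveyResponse(True, False)] * 40
local_paper = [SurveyResponse(True, True)] * 20 + [SurveyResponse(False, False)] * 80
assert trust_score(tabloid) > trust_score(local_paper)   # 0.60 > 0.20
```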

Zuckerberg has hinted—and insiders have confirmed—that we should expect a year of more announcements like this. The company is experimenting with giving publishers more control over paywalls and allowing them to feature their logos more prominently to reestablish the brand identities that Facebook flattened years ago. One somewhat hostile outside suggestion has come from Facebook’s old antagonist Murdoch, who said in late January that if Facebook truly valued “trustworthy” publishers, it should pay them carriage fees.

The fate that Facebook really cares about, however, is its own. It was built on the power of network effects: You joined because everyone else was joining. But network effects can be just as powerful in driving people off a platform. Zuckerberg understands this viscerally. After all, he helped create those problems for MySpace a decade ago and is arguably doing the same to Snap today. Zuckerberg has avoided that fate, in part, because he has proven brilliant at co-opting his biggest threats. When social media started becoming driven by images, he bought Instagram. When messaging took off, he bought WhatsApp. When Snapchat became a threat, he copied it. Now, with all his talk of “time well spent,” it seems as if he’s trying to co-opt Tristan Harris too.

But people who know him say that Zuckerberg has truly been altered in the crucible of the past several months. He has thought deeply; he has reckoned with what happened; and he truly cares that his company fix the problems swirling around it. And he’s also worried. “This whole year has massively changed his personal techno-optimism,” says an executive at the company. “It has made him much more paranoid about the ways that people could abuse the thing that he built.”

The past year has also altered Facebook’s fundamental understanding about whether it’s a publisher or a platform. The company has always answered that question defiantly—platform, platform, platform—for regulatory, financial, and maybe even emotional reasons. But now, gradually, Facebook has evolved. Of course it’s a platform, and always will be. But the company also realizes now that it bears some of the responsibilities that a publisher does: for the care of its readers, and for the care of the truth. You can’t make the world more open and connected if you’re breaking it apart. So what is it: publisher or platform? Facebook seems to have finally recognized that it is quite clearly both.