The Slippery Slope: If Facebook bans content that questions vaccine dogma, will it soon ban articles about toxic chemotherapy, fluoride and pesticides, too?

In accordance with the company’s ongoing efforts to censor all truth while promoting only establishment fake news on its platform, social media giant Facebook has decided to launch a full-scale war against online free speech about vaccines.

Pandering to the demands of California Democrat Adam Schiff, Mark Zuckerberg and his team recently announced that they are now “exploring additional measures to best combat the problem” of Facebook users discussing and sharing information via social media about how vaccines are harming and killing children.

According to an official statement released by Facebook, the Bay Area-based corporation is planning to implement some changes to the platform in the very near future that may include “reducing or removing this type of content from recommendations, including Groups You Should Join, and demoting it in search results, while also ensuring that higher quality and more authoritative information is available.”

In other words, the only acceptable form of online speech pertaining to vaccines that will be allowed on Facebook is speech that conforms to whatever the U.S. Centers for Disease Control and Prevention (CDC) says is “accurate” and “scientific.” Anything else, even if it comes from scientific authorities with a differing viewpoint, will be classified as false by Facebook, and consequently demoted or removed.

Facebook’s censorship tactics are becoming more nefarious by the day.


Facebook is quickly becoming the American government’s ministry of propaganda

Facebook’s rationale, of course, is that it’s simply looking out for the best interests of users who might be “misled” by information shared in Facebook groups suggesting that the MMR vaccine for measles, mumps, and rubella, as one example, isn’t nearly as safe as government health authorities claim.

And that’s just it: There are many things that the government is wrong about, but that have been officially sanctioned as “truth” by government propagandists. If Facebook bows down to these government hacks with regard to vaccines, there’s no telling what the company will try to ban from its platform in the future.

As we saw in the case of Cassandra C. from Connecticut, the government actually forced this young girl to undergo chemotherapy against her will, claiming that the “treatment” was absolutely necessary to “cure” her of non-Hodgkin’s lymphoma.

Not only did the government deny young Cassandra the right to make her own medical decisions, but it also overrode the will of her parents, who also opposed taking the chemotherapy route. In essence, the government forced Cassandra to undergo chemotherapy at gunpoint, and now it’s trying to do the exact same thing with Facebook.

If little Adam Schiff is successful at forcing Facebook to only allow information on its platform that conforms with the official government position on vaccines, the next step will be to outlaw the sharing of information on the platform about the dangers of chemotherapy, as well as the dangers of fluoride, pesticides, and other deadly chemicals that the government has deemed “safe and effective.”

Soon there won’t be any free speech at all on Facebook, assuming the social media giant actually obeys this latest prompting by the government to steamroll people’s First Amendment rights online. And where will it end?

“The real national emergency is the fact that Democrats have power over our lives,” warns Mike Adams, the Health Ranger.

“These radical Leftists are domestic terrorists and suicidal cultists … they are the Stasi, the SS, the KGB and the Maoists rolled all into one. They absolutely will not stop until America as founded is completely ripped to shreds and replaced with an authoritarian communist-leaning regime run by the very same tyrants who tried to carry out an illegal political coup against President Trump.”

Inside the Two Years That Shook Facebook—and the World


One day in late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks. His message pertained to some walls at the company’s Menlo Park headquarters where staffers are encouraged to scribble notes and signatures. On at least a couple of occasions, someone had crossed out the words “Black Lives Matter” and replaced them with “All Lives Matter.” Zuckerberg wanted whoever was responsible to cut it out.

“ ‘Black Lives Matter’ doesn’t mean other lives don’t,” he wrote. “We’ve never had rules around what people can write on our walls,” the memo went on. But “crossing out something means silencing speech, or that one person’s speech is more important than another’s.” The defacement, he said, was being investigated.

All around the country at about this time, debates about race and politics were becoming increasingly raw. Donald Trump had just won the South Carolina primary, lashed out at the Pope over immigration, and earned the enthusiastic support of David Duke. Hillary Clinton had just defeated Bernie Sanders in Nevada, only to have an activist from Black Lives Matter interrupt a speech of hers to protest racially charged statements she’d made two decades before. And on Facebook, a popular group called Blacktivist was gaining traction by blasting out messages like “American economy and power were built on forced migration and torture.”

So when Zuckerberg’s admonition circulated, a young contract employee named Benjamin Fearnow decided it might be newsworthy. He took a screenshot on his personal laptop and sent the image to a friend named Michael Nuñez, who worked at the tech-news site Gizmodo. Nuñez promptly published a brief story about Zuckerberg’s memo.

A week later, Fearnow came across something else he thought Nuñez might like to publish. In another internal communication, Facebook had invited its employees to submit potential questions to ask Zuckerberg at an all-hands meeting. One of the most up-voted questions that week was “What responsibility does Facebook have to help prevent President Trump in 2017?” Fearnow took another screenshot, this time with his phone.

Fearnow, a recent graduate of the Columbia Journalism School, worked in Facebook’s New York office on something called Trending Topics, a feed of popular news subjects that popped up when people opened Facebook. The feed was generated by an algorithm but moderated by a team of about 25 people with backgrounds in journalism. If the word “Trump” was trending, as it often was, they used their news judgment to identify which bit of news about the candidate was most important. If The Onion or a hoax site published a spoof that went viral, they had to keep that out. If something like a mass shooting happened, and Facebook’s algorithm was slow to pick up on it, they would inject a story about it into the feed.

Facebook prides itself on being a place where people love to work. But Fearnow and his team weren’t the happiest lot. They were contract employees hired through a company called BCforward, and every day was full of little reminders that they weren’t really part of Facebook. Plus, the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.

The day after Fearnow took that second screenshot was a Friday. When he woke up after sleeping in, he noticed that he had about 30 meeting notifications from Facebook on his phone. When he replied to say it was his day off, he recalls, he was nonetheless asked to be available in 10 minutes. Soon he was on a videoconference with three Facebook employees, including Sonya Ahuja, the company’s head of investigations. According to his recounting of the meeting, she asked him if he had been in touch with Nuñez. He denied that he had been. Then she told him that she had their messages on Gchat, which Fearnow had assumed weren’t accessible to Facebook. He was fired. “Please shut your laptop and don’t reopen it,” she instructed him.

That same day, Ahuja had another conversation with a second employee at Trending Topics named Ryan Villarreal. Several years before, he and Fearnow had shared an apartment with Nuñez. Villarreal said he hadn’t taken any screenshots, and he certainly hadn’t leaked them. But he had clicked “like” on the story about Black Lives Matter, and he was friends with Nuñez on Facebook. “Do you think leaks are bad?” Ahuja demanded to know, according to Villarreal. He was fired too. The last he heard from his employer was in a letter from BCforward. The company had given him $15 to cover expenses, and it wanted the money back.

The firing of Fearnow and Villarreal set the Trending Topics team on edge—and Nuñez kept digging for dirt. He soon published a story about the internal poll showing Facebookers’ interest in fending off Trump. Then, in early May, he published an article based on conversations with yet a third former Trending Topics employee, under the blaring headline “Former Facebook Workers: We Routinely Suppressed Conservative News.” The piece suggested that Facebook’s Trending team worked like a Fox News fever dream, with a bunch of biased curators “injecting” liberal stories and “blacklisting” conservative ones. Within a few hours the piece popped onto half a dozen highly trafficked tech and politics websites, including Drudge Report and Breitbart News.

The post went viral, but the ensuing battle over Trending Topics did more than just dominate a few news cycles. In ways that are only fully visible now, it set the stage for the most tumultuous two years of Facebook’s existence—triggering a chain of events that would distract and confuse the company while larger disasters began to engulf it.

This is the story of those two years, as they played out inside and around the company. WIRED spoke with 51 current or former Facebook employees for this article, many of whom did not want their names used, for reasons anyone familiar with the story of Fearnow and Villarreal would surely understand. (One current employee asked that a WIRED reporter turn off his phone so the company would have a harder time tracking whether it had been near the phones of anyone from Facebook.)

The stories varied, but most people told the same basic tale: of a company, and a CEO, whose techno-optimism has been crushed as they’ve learned the myriad ways their platform can be used for ill. Of an election that shocked Facebook, even as its fallout put the company under siege. Of a series of external threats, defensive internal calculations, and false starts that delayed Facebook’s reckoning with its impact on global affairs and its users’ minds. And—in the tale’s final chapters—of the company’s earnest attempt to redeem itself.

In that saga, Fearnow plays one of those obscure but crucial roles that history occasionally hands out. He’s the Franz Ferdinand of Facebook—or maybe he’s more like the archduke’s hapless young assassin. Either way, in the rolling disaster that has enveloped Facebook since early 2016, Fearnow’s leaks probably ought to go down as the screenshots heard round the world.

II

By now, the story of Facebook’s all-consuming growth is practically the creation myth of our information era. What began as a way to connect with your friends at Harvard became a way to connect with people at other elite schools, then at all schools, and then everywhere. After that, your Facebook login became a way to log on to other internet sites. Its Messenger app started competing with email and texting. It became the place where you told people you were safe after an earthquake. In some countries like the Philippines, it effectively is the internet.

The furious energy of this big bang emanated, in large part, from a brilliant and simple insight. Humans are social animals. But the internet is a cesspool. That scares people away from identifying themselves and putting personal details online. Solve that problem—make people feel safe to post—and they will share obsessively. Make the resulting database of privately shared information and personal connections available to advertisers, and that platform will become one of the most important media technologies of the early 21st century.

But as powerful as that original insight was, Facebook’s expansion has also been driven by sheer brawn. Zuckerberg has been a determined, even ruthless, steward of the company’s manifest destiny, with an uncanny knack for placing the right bets. In the company’s early days, “move fast and break things” wasn’t just a piece of advice to his developers; it was a philosophy that served to resolve countless delicate trade-offs—many of them involving user privacy—in ways that best favored the platform’s growth. And when it comes to competitors, Zuckerberg has been relentless in either acquiring or sinking any challengers that seem to have the wind at their backs.

Facebook’s Reckoning

Two years that forced the platform to change

by Blanca Myers

March 2016

Facebook suspends Benjamin Fearnow, a journalist-curator for the platform’s Trending Topics feed, after he leaks to Gizmodo.

May 2016

Gizmodo reports that Trending Topics “routinely suppressed conservative news.” The story sends Facebook scrambling.

July 2016

Rupert Murdoch tells Zuckerberg that Facebook is wreaking havoc on the news industry and threatens to cause trouble.

August 2016

Facebook cuts loose all of its Trending Topics journalists, ceding authority over the feed to engineers in Seattle.

November 2016

Donald Trump wins. Zuckerberg says it’s “pretty crazy” to think fake news on Facebook helped tip the election.

December 2016

Facebook declares war on fake news, hires CNN alum Campbell Brown to shepherd relations with the publishing industry.

September 2017

Facebook announces that a Russian group paid $100,000 for roughly 3,000 ads aimed at US voters.

October 2017

Researcher Jonathan Albright reveals that posts from six Russian propaganda accounts were shared 340 million times.

November 2017

Facebook general counsel Colin Stretch gets pummeled during congressional Intelligence Committee hearings.

January 2018

Facebook begins announcing major changes, aimed to ensure that time on the platform will be “time well spent.”

In fact, it was in besting just such a rival that Facebook came to dominate how we discover and consume news. Back in 2012, the most exciting social network for distributing news online wasn’t Facebook, it was Twitter. The latter’s 140-character posts accelerated the speed at which news could spread, allowing its influence in the news industry to grow much faster than Facebook’s. “Twitter was this massive, massive threat,” says a former Facebook executive heavily involved in the decisionmaking at the time.

So Zuckerberg pursued a strategy he has often deployed against competitors he cannot buy: He copied, then crushed. He adjusted Facebook’s News Feed to fully incorporate news (despite its name, the feed was originally tilted toward personal news) and adjusted the product so that it showed author bylines and headlines. Then Facebook’s emissaries fanned out to talk with journalists and explain how to best reach readers through the platform. By the end of 2013, Facebook had doubled its share of traffic to news sites and had started to push Twitter into a decline. By the middle of 2015, it had surpassed Google as the leader in referring readers to publisher sites and was now referring 13 times as many readers to news publishers as Twitter. That year, Facebook launched Instant Articles, offering publishers the chance to publish directly on the platform. Posts would load faster and look sharper if they agreed, but the publishers would give up an element of control over the content. The publishing industry, which had been reeling for years, largely assented. Facebook now effectively owned the news. “If you could reproduce Twitter inside of Facebook, why would you go to Twitter?” says the former executive. “What they are doing to Snapchat now, they did to Twitter back then.”

It appears that Facebook did not, however, carefully think through the implications of becoming the dominant force in the news industry. Everyone in management cared about quality and accuracy, and they had set up rules, for example, to eliminate pornography and protect copyright. But Facebook hired few journalists and spent little time discussing the big questions that bedevil the media industry. What is fair? What is a fact? How do you signal the difference between news, analysis, satire, and opinion? Facebook has long seemed to think it has immunity from those debates because it is just a technology company—one that has built a “platform for all ideas.”

This notion that Facebook is an open, neutral platform is almost like a religious tenet inside the company. When new recruits come in, they are treated to an orientation lecture by Chris Cox, the company’s chief product officer, who tells them Facebook is an entirely new communications platform for the 21st century, as the telephone was for the 20th. But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.

And so, because of the company’s self-image, as well as its fear of regulation, Facebook tried never to favor one kind of news content over another. But neutrality is a choice in itself. For instance, Facebook decided to present every piece of content that appeared on News Feed—whether it was your dog pictures or a news story—in roughly the same way. This meant that all news stories looked roughly the same as each other, too, whether they were investigations in The Washington Post, gossip in the New York Post, or flat-out lies in the Denver Guardian, an entirely bogus newspaper. Facebook argued that this democratized information. You saw what your friends wanted you to see, not what some editor in a Times Square tower chose. But it’s hard to argue that this wasn’t an editorial decision. It may be one of the biggest ever made.

In any case, Facebook’s move into news set off yet another explosion of ways that people could connect. Now Facebook was the place where publications could connect with their readers—and also where Macedonian teenagers could connect with voters in America, and operatives in Saint Petersburg could connect with audiences of their own choosing in a way that no one at the company had ever seen before.

III

In February of 2016, just as the Trending Topics fiasco was building up steam, Roger McNamee became one of the first Facebook insiders to notice strange things happening on the platform. McNamee was an early investor in Facebook who had mentored Zuckerberg through two crucial decisions: to turn down Yahoo’s offer of $1 billion to acquire Facebook in 2006; and to hire a Google executive named Sheryl Sandberg in 2008 to help find a business model. McNamee was no longer in touch with Zuckerberg much, but he was still an investor, and that month he started seeing things related to the Bernie Sanders campaign that worried him. “I’m observing memes ostensibly coming out of a Facebook group associated with the Sanders campaign that couldn’t possibly have been from the Sanders campaign,” he recalls, “and yet they were organized and spreading in such a way that suggested somebody had a budget. And I’m sitting there thinking, ‘That’s really weird. I mean, that’s not good.’ ”

But McNamee didn’t say anything to anyone at Facebook—at least not yet. And the company itself was not picking up on any such worrying signals, save for one blip on its radar: In early 2016, its security team noticed an uptick in Russian actors attempting to steal the credentials of journalists and public figures. Facebook reported this to the FBI. But the company says it never heard back from the government, and that was that.

Instead, Facebook spent the spring of 2016 very busily fending off accusations that it might influence the elections in a completely different way. When Gizmodo published its story about political bias on the Trending Topics team in May, the article went off like a bomb in Menlo Park. It quickly reached millions of readers and, in a delicious irony, appeared in the Trending Topics module itself. But the bad press wasn’t what really rattled Facebook—it was the letter from John Thune, a Republican US senator from South Dakota, that followed the story’s publication. Thune chairs the Senate Commerce Committee, which in turn oversees the Federal Trade Commission, an agency that has been especially active in investigating Facebook. The senator wanted Facebook’s answers to the allegations of bias, and he wanted them promptly.

The Thune letter put Facebook on high alert. The company promptly dispatched senior Washington staffers to meet with Thune’s team. Then it sent him a 12-page single-spaced letter explaining that it had conducted a thorough review of Trending Topics and determined that the allegations in the Gizmodo story were largely false.

Facebook decided, too, that it had to extend an olive branch to the entire American right wing, much of which was raging about the company’s supposed perfidy. And so, just over a week after the story ran, Facebook scrambled to invite a group of 17 prominent Republicans out to Menlo Park. The list included television hosts, radio stars, think tankers, and an adviser to the Trump campaign. The point was partly to get feedback. But more than that, the company wanted to make a show of apologizing for its sins, lifting up the back of its shirt, and asking for the lash.

According to a Facebook employee involved in planning the meeting, part of the goal was to bring in a group of conservatives who were certain to fight with one another. They made sure to have libertarians who wouldn’t want to regulate the platform and partisans who would. Another goal, according to the employee, was to make sure the attendees were “bored to death” by a technical presentation after Zuckerberg and Sandberg had addressed the group.

The power went out, and the room got uncomfortably hot. But otherwise the meeting went according to plan. The guests did indeed fight, and they failed to unify in a way that was either threatening or coherent. Some wanted the company to set hiring quotas for conservative employees; others thought that idea was nuts. As often happens when outsiders meet with Facebook, people used the time to try to figure out how they could get more followers for their own pages.

Afterward, Glenn Beck, one of the invitees, wrote an essay about the meeting, praising Zuckerberg. “I asked him if Facebook, now or in the future, would be an open platform for the sharing of all ideas or a curator of content,” Beck wrote. “Without hesitation, with clarity and boldness, Mark said there is only one Facebook and one path forward: ‘We are an open platform.’”

Inside Facebook itself, the backlash around Trending Topics did inspire some genuine soul-searching. But none of it got very far. A quiet internal project, codenamed Hudson, cropped up around this time to determine, according to someone who worked on it, whether News Feed should be modified to better deal with some of the most complex issues facing the product. Does it favor posts that make people angry? Does it favor simple or even false ideas over complex and true ones? Those are hard questions, and the company didn’t have answers to them yet. Ultimately, in late June, Facebook announced a modest change: The algorithm would be revised to favor posts from friends and family. At the same time, Adam Mosseri, Facebook’s News Feed boss, posted a manifesto titled “Building a Better News Feed for You.” People inside Facebook spoke of it as a document roughly resembling the Magna Carta; the company had never spoken before about how News Feed really worked. To outsiders, though, the document came across as boilerplate. It said roughly what you’d expect: that the company was opposed to clickbait but that it wasn’t in the business of favoring certain kinds of viewpoints.

The most important consequence of the Trending Topics controversy, according to nearly a dozen former and current employees, was that Facebook became wary of doing anything that might look like stifling conservative news. It had burned its fingers once and didn’t want to do it again. And so a summer of deeply partisan rancor and calumny began with Facebook eager to stay out of the fray.

IV

Shortly after Mosseri published his guide to News Feed values, Zuckerberg traveled to Sun Valley, Idaho, for an annual conference hosted by billionaire Herb Allen, where moguls in short sleeves and sunglasses cavort and make plans to buy each other’s companies. But Rupert Murdoch broke the mood in a meeting that took place inside his villa. According to numerous accounts of the conversation, Murdoch and Robert Thomson, the CEO of News Corp, explained to Zuckerberg that they had long been unhappy with Facebook and Google. The two tech giants had taken nearly the entire digital ad market and become an existential threat to serious journalism. According to people familiar with the conversation, the two News Corp leaders accused Facebook of making dramatic changes to its core algorithm without adequately consulting its media partners, wreaking havoc according to Zuckerberg’s whims. If Facebook didn’t start offering a better deal to the publishing industry, Thomson and Murdoch conveyed in stark terms, Zuckerberg could expect News Corp executives to become much more public in their denunciations and much more open in their lobbying. They had helped to make things very hard for Google in Europe. And they could do the same for Facebook in the US.

Facebook thought that News Corp was threatening to push for a government antitrust investigation or maybe an inquiry into whether the company deserved its protection from liability as a neutral platform. Inside Facebook, executives believed Murdoch might use his papers and TV stations to amplify critiques of the company. News Corp says that was not at all the case; the company threatened to deploy executives, but not its journalists.

Zuckerberg had reason to take the meeting especially seriously, according to a former Facebook executive, because he had firsthand knowledge of Murdoch’s skill in the dark arts. Back in 2007, Facebook had come under criticism from 49 state attorneys general for failing to protect young Facebook users from sexual predators and inappropriate content. Concerned parents had written to Connecticut attorney general Richard Blumenthal, who opened an investigation, and to The New York Times, which published a story. But according to a former Facebook executive in a position to know, the company believed that many of the Facebook accounts and the predatory behavior the letters referenced were fakes, traceable to News Corp lawyers or others working for Murdoch, who owned Facebook’s biggest competitor, MySpace. “We traced the creation of the Facebook accounts to IP addresses at the Apple store a block away from the MySpace offices in Santa Monica,” the executive says. “Facebook then traced interactions with those accounts to News Corp lawyers. When it comes to Facebook, Murdoch has been playing every angle he can for a long time.” (Both News Corp and its spinoff 21st Century Fox declined to comment.)

Zuckerberg took Murdoch’s threats seriously—he had firsthand knowledge of the older man’s skill in the dark arts.

When Zuckerberg returned from Sun Valley, he told his employees that things had to change. They still weren’t in the news business, but they had to make sure there would be a news business. And they had to communicate better. One of those who got a new to-do list was Andrew Anker, a product manager who’d arrived at Facebook in 2015 after a career in journalism (including a long stint at WIRED in the ’90s). One of his jobs was to help the company think through how publishers could make money on the platform. Shortly after Sun Valley, Anker met with Zuckerberg and asked to hire 60 new people to work on partnerships with the news industry. Before the meeting ended, the request was approved.

But having more people out talking to publishers just drove home how hard it would be to resolve the financial problems Murdoch wanted fixed. News outfits were spending millions to produce stories that Facebook was benefiting from, and Facebook, they felt, was giving too little back in return. Instant Articles, in particular, struck them as a Trojan horse. Publishers complained that they could make more money from stories that loaded on their own mobile web pages than on Facebook Instant. (They often did so, it turned out, in ways that short-changed advertisers, by sneaking in ads that readers were unlikely to see. Facebook didn’t let them get away with that.) Another seemingly irreconcilable difference: Outlets like Murdoch’s Wall Street Journal depended on paywalls to make money, but Instant Articles banned paywalls; Zuckerberg disapproved of them. After all, he would often ask, how exactly do walls and toll booths make the world more open and connected?

The conversations often ended at an impasse, but Facebook was at least becoming more attentive. This newfound appreciation for the concerns of journalists did not, however, extend to the journalists on Facebook’s own Trending Topics team. In late August, everyone on the team was told that their jobs were being eliminated. Simultaneously, authority over the algorithm shifted to a team of engineers based in Seattle. Very quickly the module started to surface lies and fiction. A headline days later read, “Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”

V

While Facebook grappled internally with what it was becoming—a company that dominated media but didn’t want to be a media company—Donald Trump’s presidential campaign staff faced no such confusion. To them Facebook’s use was obvious. Twitter was a tool for communicating directly with supporters and yelling at the media. Facebook was the way to run the most effective direct-marketing political operation in history.

In the summer of 2016, at the top of the general election campaign, Trump’s digital operation might have seemed to be at a major disadvantage. After all, Hillary Clinton’s team was flush with elite talent and got advice from Eric Schmidt, known for running Google. Trump’s was run by Brad Parscale, known for setting up the Eric Trump Foundation’s web page. Trump’s social media director was his former caddie. But in 2016, it turned out you didn’t need digital experience running a presidential campaign, you just needed a knack for Facebook.

Over the course of the summer, Trump’s team turned the platform into one of its primary vehicles for fund-raising. The campaign uploaded its voter files—the names, addresses, voting history, and any other information it had on potential voters—to Facebook. Then, using a tool called Lookalike Audiences, Facebook identified the broad characteristics of, say, people who had signed up for Trump newsletters or bought Trump hats. That allowed the campaign to send ads to people with similar traits. Trump would post simple messages like “This election is being rigged by the media pushing false and unsubstantiated charges, and outright lies, in order to elect Crooked Hillary!” that got hundreds of thousands of likes, comments, and shares. The money rolled in. Clinton’s wonkier messages, meanwhile, resonated less on the platform. Inside Facebook, almost everyone on the executive team wanted Clinton to win; but they knew that Trump was using the platform better. If he was the candidate for Facebook, she was the candidate for LinkedIn.
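
Facebook has never published the mechanics behind Lookalike Audiences, but the basic idea the passage describes—summarize a “seed” audience and target the users who most resemble it—can be sketched as a similarity ranking. Everything below is hypothetical: the feature vectors, names, and cosine-similarity scoring are illustrative stand-ins, not the real system.

```python
# Sketch of lookalike targeting: represent each user as a feature vector
# (e.g., interest-category weights), average the seed audience into a
# centroid, and rank candidates by cosine similarity to that centroid.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lookalikes(seed, candidates, top_n=2):
    # Centroid of the seed audience's feature vectors.
    dims = len(next(iter(seed.values())))
    centroid = [sum(v[i] for v in seed.values()) / len(seed)
                for i in range(dims)]
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(kv[1], centroid),
                    reverse=True)
    return [user for user, _ in ranked[:top_n]]

# Hypothetical seed (say, people who bought Trump hats) and candidate pool.
seed = {"buyer_1": [1.0, 0.9, 0.1], "buyer_2": [0.9, 1.0, 0.0]}
pool = {"u_a": [0.95, 0.9, 0.05], "u_b": [0.1, 0.0, 1.0], "u_c": [0.8, 0.7, 0.2]}
print(lookalikes(seed, pool))  # ['u_a', 'u_c'] — closest to the seed
```

The campaign would then buy ads against the top-ranked users rather than the whole pool—the “similar traits” step the text describes.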

Trump’s candidacy also proved to be a wonderful tool for a new class of scammers pumping out massively viral and entirely fake stories. Through trial and error, they learned that memes praising the former host of The Apprentice got many more readers than ones praising the former secretary of state. A website called Ending the Fed proclaimed that the Pope had endorsed Trump and got almost a million comments, shares, and reactions on Facebook, according to an analysis by BuzzFeed. Other stories asserted that the former first lady had quietly been selling weapons to ISIS, and that an FBI agent suspected of leaking Clinton’s emails was found dead. Some of the posts came from hyperpartisan Americans. Some came from overseas content mills that were in it purely for the ad dollars. By the end of the campaign, the top fake stories on the platform were generating more engagement than the top real ones.

Even current Facebookers acknowledge now that they missed what should have been obvious signs of people misusing the platform. And looking back, it’s easy to put together a long list of possible explanations for the myopia in Menlo Park about fake news. Management was gun-shy because of the Trending Topics fiasco; taking action against partisan disinformation—or even identifying it as such—might have been seen as another act of political favoritism. Facebook also sold ads against the stories, and sensational garbage was good at pulling people into the platform. Employees’ bonuses can be based largely on whether Facebook hits certain growth and revenue targets, which gives people an extra incentive not to worry too much about things that are otherwise good for engagement. And then there was the ever-present issue of Section 230 of the 1996 Communications Decency Act. If the company started taking responsibility for fake news, it might have to take responsibility for a lot more. Facebook had plenty of reasons to keep its head in the sand.

Roger McNamee, however, watched carefully as the nonsense spread. First there were the fake stories pushing Bernie Sanders, then he saw ones supporting Brexit, and then helping Trump. By the end of the summer, he had resolved to write an op-ed about the problems on the platform. But he never ran it. “The idea was, look, these are my friends. I really want to help them.” And so on a Sunday evening, nine days before the 2016 election, McNamee emailed a 1,000-word letter to Sandberg and Zuckerberg. “I am really sad about Facebook,” it began. “I got involved with the company more than a decade ago and have taken great pride and joy in the company’s success … until the past few months. Now I am disappointed. I am embarrassed. I am ashamed.”

VI

It’s not easy to recognize that the machine you’ve built to bring people together is being used to tear them apart, and Mark Zuckerberg’s initial reaction to Trump’s victory, and Facebook’s possible role in it, was one of peevish dismissal. Executives remember panic the first few days, with the leadership team scurrying back and forth between Zuckerberg’s conference room (called the Aquarium) and Sandberg’s (called Only Good News), trying to figure out what had just happened and whether they would be blamed. Then, at a conference two days after the election, Zuckerberg argued that filter bubbles are worse offline than on Facebook and that social media hardly influences how people vote. “The idea that fake news on Facebook—of which, you know, it’s a very small amount of the content—influenced the election in any way, I think, is a pretty crazy idea,” he said.

Zuckerberg declined to be interviewed for this article, but people who know him well say he likes to form his opinions from data. And in this case he wasn’t without it. Before the interview, his staff had worked up a back-of-the-envelope calculation showing that fake news was a tiny percentage of the total amount of election-related content on the platform. But the analysis was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups. It was a number, but not a particularly meaningful one.

Zuckerberg’s comments did not go over well, even inside Facebook. They seemed clueless and self-absorbed. “What he said was incredibly damaging,” a former executive told WIRED. “We had to really flip him on that. We realized that if we didn’t, the company was going to start heading down this pariah path that Uber was on.”

A week after his “pretty crazy” comment, Zuckerberg flew to Peru to give a talk to world leaders about the ways that connecting more people to the internet, and to Facebook, could reduce global poverty. Right after he landed in Lima, he posted something of a mea culpa. He explained that Facebook did take misinformation seriously, and he presented a vague seven-point plan to tackle it. When a professor at the New School named David Carroll saw Zuckerberg’s post, he took a screenshot. Alongside it on Carroll’s feed ran a headline from a fake CNN with an image of a distressed Donald Trump and the text “DISQUALIFIED; He’s GONE!”

At the conference in Peru, Zuckerberg met with a man who knows a few things about politics: Barack Obama. Media reports portrayed the encounter as one in which the lame-duck president pulled Zuckerberg aside and gave him a “wake-up call” about fake news. But according to someone who was with them in Lima, it was Zuckerberg who called the meeting, and his agenda was merely to convince Obama that, yes, Facebook was serious about dealing with the problem. He truly wanted to thwart misinformation, he said, but it wasn’t an easy issue to solve.

Meanwhile, at Facebook, the gears churned. For the first time, insiders really began to question whether they had too much power. One employee told WIRED that, watching Zuckerberg, he was reminded of Lennie in Of Mice and Men, the farm-worker with no understanding of his own strength.

Very soon after the election, a team of employees started working on something called the News Feed Integrity Task Force, inspired by a sense, one of them told WIRED, that hyperpartisan misinformation was “a disease that’s creeping into the entire platform.” The group, which included Mosseri and Anker, began to meet every day, using whiteboards to outline different ways they could respond to the fake-news crisis. Within a few weeks the company announced it would cut off advertising revenue for ad farms and make it easier for users to flag stories they thought false.

In December the company announced that, for the first time, it would introduce fact-checking onto the platform. Facebook didn’t want to check facts itself; instead it would outsource the problem to professionals. If Facebook received enough signals that a story was false, it would automatically be sent to partners, like Snopes, for review. Then, in early January, Facebook announced that it had hired Campbell Brown, a former anchor at CNN. She immediately became the most prominent journalist hired by the company.
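
The routing described—accumulate enough “false” signals on a story, then hand it off to outside checkers—amounts to simple threshold dispatch. A minimal sketch, with an invented threshold (the real mix of signals has never been disclosed):

```python
# Threshold-based fact-check routing: once flags on a story cross a
# threshold, queue it once for review by an external partner.
FLAG_THRESHOLD = 100  # hypothetical value for illustration

review_queue = []  # stories awaiting partners such as Snopes

def record_flag(story, flag_counts):
    flag_counts[story] = flag_counts.get(story, 0) + 1
    if flag_counts[story] == FLAG_THRESHOLD:
        review_queue.append(story)  # fires exactly once, at the threshold

counts = {}
for _ in range(150):
    record_flag("pope-endorses-trump", counts)
print(review_queue)  # ['pope-endorses-trump']
```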

Soon Brown was put in charge of something called the Facebook Journalism Project. “We spun it up over the holidays, essentially,” says one person involved in discussions about the project. The aim was to demonstrate that Facebook was thinking hard about its role in the future of journalism—essentially, it was a more public and organized version of the efforts the company had begun after Murdoch’s tongue-lashing. But sheer anxiety was also part of the motivation. “After the election, because Trump won, the media put a ton of attention on fake news and just started hammering us. People started panicking and getting afraid that regulation was coming. So the team looked at what Google had been doing for years with News Lab”—a group inside Alphabet that builds tools for journalists—“and we decided to figure out how we could put together our own packaged program that shows how seriously we take the future of news.”

Facebook was reluctant, however, to issue any mea culpas or action plans with regard to the problem of filter bubbles or Facebook’s noted propensity to serve as a tool for amplifying outrage. Members of the leadership team regarded these as issues that couldn’t be solved, and maybe even shouldn’t be solved. Was Facebook really more at fault for amplifying outrage during the election than, say, Fox News or MSNBC? Sure, you could put stories into people’s feeds that contradicted their political viewpoints, but people would turn away from them, just as surely as they’d flip the dial back if their TV quietly switched them from Sean Hannity to Joy Reid. The problem, as Anker puts it, “is not Facebook. It’s humans.”

VII

Zuckerberg’s “pretty crazy” statement about fake news caught the ear of a lot of people, but one of the most influential was a security researcher named Renée DiResta. For years, she’d been studying how misinformation spreads on the platform. If you joined an antivaccine group on Facebook, she observed, the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking. Zuckerberg’s statement struck her as wildly out of touch. “How can this platform say this thing?” she remembers thinking.

Roger McNamee, meanwhile, was getting steamed at Facebook’s response to his letter. Zuckerberg and Sandberg had written him back promptly, but they hadn’t said anything substantial. Instead he ended up having a months-long, ultimately futile set of email exchanges with Dan Rose, Facebook’s VP for partnerships. McNamee says Rose’s message was polite but also very firm: The company was doing a lot of good work that McNamee couldn’t see, and in any event Facebook was a platform, not a media company.

“And I’m sitting there going, ‘Guys, seriously, I don’t think that’s how it works,’” McNamee says. “You can assert till you’re blue in the face that you’re a platform, but if your users take a different point of view, it doesn’t matter what you assert.”

As the saying goes, heaven has no rage like love to hatred turned, and McNamee’s concern soon became a cause—and the beginning of an alliance. In April 2017 he connected with a former Google design ethicist named Tristan Harris when they appeared together on Bloomberg TV. Harris had by then gained a national reputation as the conscience of Silicon Valley. He had been profiled on 60 Minutes and in The Atlantic, and he spoke eloquently about the subtle tricks that social media companies use to foster an addiction to their services. “They can amplify the worst aspects of human nature,” Harris told WIRED this past December. After the TV appearance, McNamee says he called Harris up and asked, “Dude, do you need a wingman?”

The next month, DiResta published an article comparing purveyors of disinformation on social media to manipulative high-frequency traders in financial markets. “Social networks enable malicious actors to operate at platform scale, because they were designed for fast information flows and virality,” she wrote. Bots and sock puppets could cheaply “create the illusion of a mass groundswell of grassroots activity,” in much the same way that early, now-illegal trading algorithms could spoof demand for a stock. Harris read the article, was impressed, and emailed her.

The three were soon out talking to anyone who would listen about Facebook’s poisonous effects on American democracy. And before long they found receptive audiences in the media and Congress—groups with their own mounting grievances against the social media giant.

VIII

Even at the best of times, meetings between Facebook and media executives can feel like unhappy family gatherings. The two sides are inextricably bound together, but they don’t like each other all that much. News executives resent that Facebook and Google have captured roughly three-quarters of the digital ad business, leaving the media industry and other platforms, like Twitter, to fight over scraps. Plus they feel like the preferences of Facebook’s algorithm have pushed the industry to publish ever-dumber stories. For years, The New York Times resented that Facebook helped elevate BuzzFeed; now BuzzFeed is angry about being displaced by clickbait.

And then there’s the simple, deep fear and mistrust that Facebook inspires. Every publisher knows that, at best, they are sharecroppers on Facebook’s massive industrial farm. The social network is roughly 200 times more valuable than the Times. And journalists know that the man who owns the farm has the leverage. If Facebook wanted to, it could quietly turn any number of dials that would harm a publisher—by manipulating its traffic, its ad network, or its readers.

Emissaries from Facebook, for their part, find it tiresome to be lectured by people who can’t tell an algorithm from an API. They also know that Facebook didn’t win the digital ad market through luck: It built a better ad product. And in their darkest moments, they wonder: What’s the point? News makes up only about 5 percent of the total content that people see on Facebook globally. The company could let it all go and its shareholders would scarcely notice. And there’s another, deeper problem: Mark Zuckerberg, according to people who know him, prefers to think about the future. He’s less interested in the news industry’s problems right now; he’s interested in the problems five or 20 years from now. The editors of major media companies, on the other hand, are worried about their next quarter—maybe even their next phone call. When they bring lunch back to their desks, they know not to buy green bananas.

This mutual wariness—sharpened almost to enmity in the wake of the election—did not make life easy for Campbell Brown when she started her new job running the nascent Facebook Journalism Project. The first item on her to-do list was to head out on yet another Facebook listening tour with editors and publishers. One editor describes a fairly typical meeting: Brown and Chris Cox, Facebook’s chief product officer, invited a group of media leaders to gather in late January 2017 at Brown’s apartment in Manhattan. Cox, a quiet, suave man, sometimes referred to as “the Ryan Gosling of Facebook Product,” took the brunt of the ensuing abuse. “Basically, a bunch of us just laid into him about how Facebook was destroying journalism, and he graciously absorbed it,” the editor says. “He didn’t much try to defend them. I think the point was really to show up and seem to be listening.” Other meetings were even more tense, with the occasional comment from journalists noting their interest in digital antitrust issues.

As bruising as all this was, Brown’s team became more confident that their efforts were valued within the company when Zuckerberg published a 5,700-word corporate manifesto in February. He had spent the previous three months, according to people who know him, contemplating whether he had created something that did more harm than good. “Are we building the world we all want?” he asked at the beginning of his post, implying that the answer was an obvious no. Amid sweeping remarks about “building a global community,” he emphasized the need to keep people informed and to knock out false news and clickbait. Brown and others at Facebook saw the manifesto as a sign that Zuckerberg understood the company’s profound civic responsibilities. Others saw the document as blandly grandiose, showcasing Zuckerberg’s tendency to suggest that the answer to nearly any problem is for people to use Facebook more.

Shortly after issuing the manifesto, Zuckerberg set off on a carefully scripted listening tour of the country. He began popping into candy shops and dining rooms in red states, camera crew and personal social media team in tow. He wrote an earnest post about what he was learning, and he deflected questions about whether his real goal was to become president. It seemed like a well-meaning effort to win friends for Facebook. But it soon became clear that Facebook’s biggest problems emanated from places farther away than Ohio.

IX

One of the many things Zuckerberg seemed not to grasp when he wrote his manifesto was that his platform had empowered an enemy far more sophisticated than Macedonian teenagers and assorted low-rent purveyors of bull. As 2017 wore on, however, the company began to realize it had been attacked by a foreign influence operation. “I would draw a real distinction between fake news and the Russia stuff,” says an executive who worked on the company’s response to both. “With the latter there was a moment where everyone said ‘Oh, holy shit, this is like a national security situation.’”

That holy shit moment, though, didn’t come until more than six months after the election. Early in the campaign season, Facebook was aware of familiar attacks emanating from known Russian hackers, such as the group APT28, which is believed to be affiliated with Moscow. They were hacking into accounts outside of Facebook, stealing documents, then creating fake Facebook accounts under the banner of DCLeaks, to get people to discuss what they’d stolen. The company saw no signs of a serious, concerted foreign propaganda campaign, but it also didn’t think to look for one.

During the spring of 2017, the company’s security team began preparing a report about how Russian and other foreign intelligence operations had used the platform. One of its authors was Alex Stamos, head of Facebook’s security team. Stamos was something of an icon in the tech world for having reportedly resigned from his previous job at Yahoo after a conflict over whether to grant a US intelligence agency access to Yahoo servers. According to two people with direct knowledge of the document, he was eager to publish a detailed, specific analysis of what the company had found. But members of the policy and communications team pushed back and cut his report way down. Sources close to the security team suggest the company didn’t want to get caught up in the political whirlwind of the moment. (Sources on the politics and communications teams insist they edited the report down, just because the darn thing was hard to read.)

On April 27, 2017, the day after the Senate announced it was calling then FBI director James Comey to testify about the Russia investigation, Stamos’ report came out. It was titled “Information Operations and Facebook,” and it gave a careful step-by-step explanation of how a foreign adversary could use Facebook to manipulate people. But there were few specific examples or details, and there was no direct mention of Russia. It felt bland and cautious. As Renée DiResta says, “I remember seeing the report come out and thinking, ‘Oh, goodness, is this the best they could do in six months?’”

One month later, a story in Time suggested to Stamos’ team that they might have missed something in their analysis. The article quoted an unnamed senior intelligence official saying that Russian operatives had bought ads on Facebook to target Americans with propaganda. Around the same time, the security team also picked up hints from congressional investigators that made them think an intelligence agency was indeed looking into Russian Facebook ads. Caught off guard, the team members started to dig into the company’s archival ads data themselves.

Eventually, by sorting transactions according to a series of data points—Were ads purchased in rubles? Were they purchased within browsers whose language was set to Russian?—they were able to find a cluster of accounts, funded by a shadowy Russian group called the Internet Research Agency, that had been designed to manipulate political opinion in America. There was, for example, a page called Heart of Texas, which pushed for the secession of the Lone Star State. And there was Blacktivist, which pushed stories about police brutality against black men and women and had more followers than the verified Black Lives Matter page.
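
The security team’s approach—filtering ad purchases on signals like payment currency and browser language—is essentially a conjunction of predicates over transaction records. A toy version, with invented fields and records standing in for Facebook’s internal ads data:

```python
# Toy version of the cluster hunt: flag ad transactions that match
# suspicious signals (purchased in rubles, or from a Russian-language
# browser), then collect the distinct accounts behind them.
transactions = [
    {"account": "heart_of_texas", "currency": "RUB", "browser_lang": "ru"},
    {"account": "local_bakery",   "currency": "USD", "browser_lang": "en"},
    {"account": "blacktivist",    "currency": "RUB", "browser_lang": "ru"},
]

def suspicious(tx):
    return tx["currency"] == "RUB" or tx["browser_lang"] == "ru"

cluster = sorted({tx["account"] for tx in transactions if suspicious(tx)})
print(cluster)  # ['blacktivist', 'heart_of_texas']
```

In practice the team combined many such signals and then traced the flagged accounts back to a common funder, the Internet Research Agency.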

Numerous security researchers express consternation that it took Facebook so long to realize how the Russian troll farm was exploiting the platform. After all, the group was well known to Facebook. Executives at the company say they’re embarrassed by how long it took them to find the fake accounts, but they point out that they were never given help by US intelligence agencies. A staffer on the Senate Intelligence Committee likewise voiced exasperation with the company. “It seemed obvious that it was a tactic the Russians would exploit,” the staffer says.

When Facebook finally did find the Russian propaganda on its platform, the discovery set off a crisis, a scramble, and a great deal of confusion. First, due to a miscalculation, word initially spread through the company that the Russian group had spent millions of dollars on ads, when the actual total was in the low six figures. Once that error was resolved, a disagreement broke out over how much to reveal, and to whom. The company could release the data about the ads to the public, release everything to Congress, or release nothing. Much of the argument hinged on questions of user privacy. Members of the security team worried that the legal process involved in handing over private user data, even if it belonged to a Russian troll farm, would open the door for governments to seize data from other Facebook users later on. “There was a real debate internally,” says one executive. “Should we just say ‘Fuck it’ and not worry?” But eventually the company decided it would be crazy to throw legal caution to the wind “just because Rachel Maddow wanted us to.”

Ultimately, a blog post appeared under Stamos’ name in early September announcing that, as far as the company could tell, the Russians had paid Facebook $100,000 for roughly 3,000 ads aimed at influencing American politics around the time of the 2016 election. Every sentence in the post seemed to downplay the substance of these new revelations: The number of ads was small, the expense was small. And Facebook wasn’t going to release them. The public wouldn’t know what they looked like or what they were really aimed at doing.

This didn’t sit at all well with DiResta. She had long felt that Facebook was insufficiently forthcoming, and now it seemed to be flat-out stonewalling. “That was when it went from incompetence to malice,” she says. A couple of weeks later, while waiting at a Walgreens to pick up a prescription for one of her kids, she got a call from a researcher at the Tow Center for Digital Journalism named Jonathan Albright. He had been mapping ecosystems of misinformation since the election, and he had some excellent news. “I found this thing,” he said. Albright had started digging into CrowdTangle, one of the analytics platforms that Facebook uses. And he had discovered that the data from six of the accounts Facebook had shut down were still there, frozen in a state of suspended animation. There were the posts pushing for Texas secession and playing on racial antipathy. And then there were political posts, like one that referred to Clinton as “that murderous anti-American traitor Killary.” Right before the election, the Blacktivist account urged its supporters to stay away from Clinton and instead vote for Jill Stein. Albright downloaded the most recent 500 posts from each of the six groups. He reported that, in total, their posts had been shared more than 340 million times.

X

To McNamee, the way the Russians used the platform was neither a surprise nor an anomaly. “They find 100 or 1,000 people who are angry and afraid and then use Facebook’s tools to advertise to get people into groups,” he says. “That’s exactly how Facebook was designed to be used.”

McNamee and Harris had first traveled to DC for a day in July to meet with members of Congress. Then, in September, they were joined by DiResta and began spending all their free time counseling senators, representatives, and members of their staffs. The House and Senate Intelligence Committees were about to hold hearings on Russia’s use of social media to interfere in the US election, and McNamee, Harris, and DiResta were helping them prepare. One of the early questions they weighed in on was the matter of who should be summoned to testify. Harris recommended that the CEOs of the big tech companies be called in, to create a dramatic scene in which they all stood in a neat row swearing an oath with their right hands in the air, roughly the way tobacco executives had been forced to do a generation earlier. Ultimately, though, it was determined that the general counsels of the three companies—Facebook, Twitter, and Google—should head into the lion’s den.

And so on November 1, Colin Stretch arrived from Facebook to be pummeled. During the hearings themselves, DiResta was sitting on her bed in San Francisco, watching them with her headphones on, trying not to wake up her small children. She listened to the back-and-forth in Washington while chatting on Slack with other security researchers. She watched as Marco Rubio smartly asked whether Facebook even had a policy forbidding foreign governments from running an influence campaign through the platform. The answer was no. Rhode Island senator Jack Reed then asked whether Facebook felt an obligation to individually notify all the users who had seen Russian ads that they had been deceived. The answer again was no. But maybe the most threatening comment came from Dianne Feinstein, the senior senator from Facebook’s home state. “You’ve created these platforms, and now they’re being misused, and you have to be the ones to do something about it,” she declared. “Or we will.”

After the hearings, yet another dam seemed to break, and former Facebook executives started to go public with their criticisms of the company too. On November 8, billionaire entrepreneur Sean Parker, Facebook’s first president, said he now regretted pushing Facebook so hard on the world. “I don’t know if I really understood the consequences of what I was saying,” he said. “God only knows what it’s doing to our children’s brains.” Eleven days later, Facebook’s former privacy manager, Sandy Parakilas, published a New York Times op-ed calling for the government to regulate Facebook: “The company won’t protect us by itself, and nothing less than our democracy is at stake.”

XI

The day of the hearings, Zuckerberg had to give Facebook’s Q3 earnings call. The numbers were terrific, as always, but his mood was not. Normally these calls can put someone with 12 cups of coffee in them to sleep; the executive gets on and says everything is going well, even when it isn’t. Zuckerberg took a different approach. “I’ve expressed how upset I am that the Russians tried to use our tools to sow mistrust. We build these tools to help people connect and to bring us closer together. And they used them to try to undermine our values. What they did is wrong, and we are not going to stand for it.” The company would be investing so much in security, he said, that Facebook would make “significantly” less money for a while. “I want to be clear about what our priority is: Protecting our community is more important than maximizing our profits.” What the company really seeks is for users to find their experience to be “time well spent,” Zuckerberg said—using the three words that have become Tristan Harris’ calling card, and the name of his nonprofit.

Other signs emerged, too, that Zuckerberg was beginning to absorb the criticisms of his company. The Facebook Journalism Project, for instance, seemed to be making the company take its obligations as a publisher, and not just a platform, more seriously. In the fall, the company announced that Zuckerberg had decided—after years of resisting the idea—that publishers using Facebook Instant Articles could require readers to subscribe. Paying for serious publications, in the months since the election, had come to seem like both the path forward for journalism and a way of resisting the post-truth political landscape. (WIRED recently instituted its own paywall.) Plus, offering subscriptions arguably helped put in place the kinds of incentives that Zuckerberg professed to want driving the platform. People like Alex Hardiman, the head of Facebook news products and an alum of The New York Times, started to recognize that Facebook had long helped to create an economic system that rewarded publishers for sensationalism, not accuracy or depth. “If we just reward content based on raw clicks and engagement, we might actually see content that is increasingly sensationalist, clickbaity, polarizing, and divisive,” she says. A social network that rewards only clicks, not subscriptions, is like a dating service that encourages one-night stands but not marriages.

XII

A couple of weeks before Thanksgiving 2017, Zuckerberg called one of his quarterly all-hands meetings on the Facebook campus, in an outdoor space known as Hacker Square. He told everyone he hoped they would have a good holiday. Then he said, “This year, with recent news, a lot of us are probably going to get asked: ‘What is going on with Facebook?’ This has been a tough year … but … what I know is that we’re fortunate to play an important role in billions of people’s lives. That’s a privilege, and it puts an enormous responsibility on all of us.” According to one attendee, the remarks came across as blunter and more personal than any they’d ever heard from Zuckerberg. He seemed humble, even a little chastened. “I don’t think he sleeps well at night,” the employee says. “I think he has remorse for what has happened.”

During the late fall, criticism continued to mount: Facebook was accused of becoming a central vector for spreading deadly propaganda against the Rohingya in Myanmar and for propping up the brutal leadership of Rodrigo Duterte in the Philippines. And December brought another haymaker from someone closer by. Early that month, it emerged that Chamath Palihapitiya, who had been Facebook’s vice president for user growth before leaving in 2011, had told an audience at Stanford that he thought social media platforms like Facebook had “created tools that are ripping apart the social fabric” and that he feels “tremendous guilt” for being part of that. He said he tries to use Facebook as little as possible and doesn’t permit his children to use such platforms at all.

The criticism stung in a way that others hadn’t. Palihapitiya is close to many of the top executives at Facebook, and he has deep cachet in Silicon Valley and among Facebook engineers as a part-owner of the Golden State Warriors. Sheryl Sandberg sometimes wears a chain around her neck that’s welded together from one given to her by Zuckerberg and one given to her by Palihapitiya after her husband’s death. The company issued a statement saying it had been a long time since Palihapitiya had worked there. “Facebook was a very different company back then and as we have grown we have realized how our responsibilities have grown too.” Asked why the company had responded to Palihapitiya, and not to others, a senior Facebook executive said, “Chamath is—was—a friend to a lot of people here.”

Roger McNamee, meanwhile, went on a media tour lambasting the company. He published an essay in Washington Monthly and then followed up in The Washington Post and The Guardian. Facebook was less impressed with him. Executives considered him to be overstating his connection to the company and dining out on his criticism. Andrew Bosworth, a VP and member of the management team, tweeted, “I’ve worked at Facebook for 12 years and I have to ask: Who the fuck is Roger McNamee?”

Zuckerberg did seem to be eager to mend one fence, though. Around this time, a team of Facebook executives gathered for dinner with executives from News Corp at the Grill, an upscale restaurant in Manhattan. Right at the start, Zuckerberg raised a toast to Murdoch. He spoke charmingly about reading a biography of the older man and of admiring his accomplishments. Then he described a game of tennis he’d once played against Murdoch. At first he had thought it would be easy to get the ball past a man more than 50 years his senior. But he quickly realized, he said, that Murdoch was there to compete.

XIII

On January 4, 2018, Zuckerberg announced that he had a new personal challenge for the year. For each of the past nine years, he had committed himself to some kind of self-improvement. His first challenge was farcical—wear ties—and the others had been a little preening and collegiate. He wanted to learn Mandarin, read 25 books, run 365 miles. This year, though, he took a severe tone. “The world feels anxious and divided, and Facebook has a lot of work to do—whether it’s protecting our community from abuse and hate, defending against interference by nation-states, or making sure that time spent on Facebook is time well spent,” Zuckerberg declared. The language wasn’t original—he had borrowed from Tristan Harris again—but it was, by the accounts of many people around him, entirely sincere.

That New Year’s challenge, it turned out, was a bit of carefully considered choreography setting up a series of announcements, starting with a declaration the following week that the News Feed algorithm would be rejiggered to favor “meaningful interactions.” Posts and videos of the sort that make us look or like—but not comment or care—would be deprioritized. The idea, explained Adam Mosseri, is that, online, “interacting with people is positively correlated with a lot of measures of well-being, whereas passively consuming content online is less so.”

To numerous people at the company, the announcement marked a huge departure. Facebook was putting a car in reverse that had been driving at full speed in one direction for 14 years. Since the beginning, Zuckerberg’s ambition had been to create another internet, or perhaps another world, inside of Facebook, and to get people to use it as much as possible. The business model was based on advertising, and advertising was insatiably hungry for people’s time. But now Zuckerberg said he expected that these new changes to News Feed would make people use Facebook less.

The announcement was hammered by many in the press. During the rollout, Mosseri explained that Facebook would downgrade stories shared by businesses, celebrities, and publishers, and prioritize stories shared by friends and family. Critics surmised that these changes were just a way of finally giving the publishing industry a middle finger. “Facebook has essentially told media to kiss off,” Franklin Foer wrote in The Atlantic. “Facebook will be back primarily in the business of making us feel terrible about the inferiority of our vacations, the relative mediocrity of our children, teasing us into sharing more of our private selves.”

People who know him say Zuckerberg has truly been altered in the crucible of the past several months.

But inside Facebook, executives insist this isn’t remotely the case. According to Anker, who retired from the company in December but worked on these changes, and who has great affection for the management team, “It would be a mistake to see this as a retreat from the news industry. This is a retreat from ‘Anything goes if it works with our algorithm to drive up engagement.’” According to others still at the company, Zuckerberg didn’t want to pull back from actual journalism. He just genuinely wanted there to be less crap on the platform: fewer stories with no substance; fewer videos you can watch without thinking.

And then, a week after telling the world about “meaningful interactions,” Zuckerberg announced another change that seemed to answer these concerns, after a fashion. For the first time in the company’s history, he said in a note posted to his personal page, Facebook will start to boost certain publishers—ones whose content is “trustworthy, informative, and local.” For the past year, Facebook has been developing algorithms to hammer publishers whose content is fake; now it’s trying to elevate what’s good. For starters, he explained, the company would use reader surveys to determine which sources are trustworthy. That system, critics were quick to point out, will surely be gamed, and many people will say they trust sources just because they recognize them. But this announcement, at least, went over a little better in boardrooms and newsrooms. Right after the post went up, the stock price of The New York Times shot up—as did that of News Corp.

Zuckerberg has hinted—and insiders have confirmed—that we should expect a year of more announcements like this. The company is experimenting with giving publishers more control over paywalls and allowing them to feature their logos more prominently to reestablish the brand identities that Facebook flattened years ago. One somewhat hostile outside suggestion has come from Facebook’s old antagonist Murdoch, who said in late January that if Facebook truly valued “trustworthy” publishers, it should pay them carriage fees.

The fate that Facebook really cares about, however, is its own. It was built on the power of network effects: You joined because everyone else was joining. But network effects can be just as powerful in driving people off a platform. Zuckerberg understands this viscerally. After all, he helped create those problems for MySpace a decade ago and is arguably doing the same to Snap today. Zuckerberg has avoided that fate, in part, because he has proven brilliant at co-opting his biggest threats. When social media started becoming driven by images, he bought Instagram. When messaging took off, he bought WhatsApp. When Snapchat became a threat, he copied it. Now, with all his talk of “time well spent,” it seems as if he’s trying to co-opt Tristan Harris too.

But people who know him say that Zuckerberg has truly been altered in the crucible of the past several months. He has thought deeply; he has reckoned with what happened; and he truly cares that his company fix the problems swirling around it. And he’s also worried. “This whole year has massively changed his personal techno-optimism,” says an executive at the company. “It has made him much more paranoid about the ways that people could abuse the thing that he built.”

The past year has also altered Facebook’s fundamental understanding about whether it’s a publisher or a platform. The company has always answered that question defiantly—platform, platform, platform—for regulatory, financial, and maybe even emotional reasons. But now, gradually, Facebook has evolved. Of course it’s a platform, and always will be. But the company also realizes now that it bears some of the responsibilities that a publisher does: for the care of its readers, and for the care of the truth. You can’t make the world more open and connected if you’re breaking it apart. So what is it: publisher or platform? Facebook seems to have finally recognized that it is quite clearly both.

How I Quit Apple, Microsoft, Google, Facebook, and Amazon


A reflection on my month without Apple, Microsoft, Google, Facebook, and Amazon, plus a how-to guide if you want to quit the biggest companies in tech.

SLAUGHTERHOUSE BIG FIVE:
EVERYTHING WAS UGLY AND NOTHING WORKED

It was just before closing time at a Verizon store in Bushwick, New York, last May when I burst through the door, sweaty and exasperated. I had just sprinted—okay I walked, but briskly—from another Verizon outlet a few blocks away in the hopes I’d make it before they closed shop for the night. I was looking for a SIM card that would fit a refurbished 2012 Samsung Galaxy S3 that I had recently purchased on eBay, but the previous three Verizon stores I visited didn’t have any chips that would fit such an old model.

When I explained my predicament to the salesperson, he laughed in my face.

“You want to switch from your current phone to an… S3?” he asked incredulously.

I explained my situation. I was about to embark on a month without intentionally using any services or products produced by the so-called “Big Five” tech companies: Amazon, Apple, Facebook, Google, and Microsoft. At that point I had found adequate, open source replacements for most of the services offered by these companies, but ditching the Android OS, which is developed by Google, was proving difficult.

Most of the tech I use on a day-to-day basis is pretty utilitarian. At the time I was using a cheap ASUS laptop at work and a homebrew PC at my apartment. My phone was a Verizon-specific version of the Samsung Galaxy J3, a 2016 model that cost a little over $100 new. They weren’t fancy, but they’d reliably met most of my needs for years.

For the past week and a half I had spent most of my evenings trying to port an independent mobile OS called Sailfish onto my phone without any luck. As it turned out, Verizon had locked the bootloader on my phone model, which is so obscure that no one in the vibrant Android hacking community had dedicated much time to figuring out a workaround. If I wanted to use Sailfish, I was going to have to get a different phone.

I remembered using a Galaxy S3 while living in India a few years ago and liking it well enough. I ultimately decided to go with that model after finding extensive documentation online from others who had had success porting unofficial operating systems onto their phones. So two days and $20-plus-shipping later, I was in possession of a surprisingly new-looking Verizon Galaxy S3. The only thing that remained to do before loading Sailfish onto the device was to find a SIM card that fit. SIM cards come in three different sizes—standard, micro, and nano—and my nano SIM wouldn’t fit in the S3’s micro SIM port.

By the time I explained all this to the Verizon employee, he had found a SIM card that would work. As he navigated the Android setup menu he asked me if I wanted him to link my Google account to the phone. “Oh that’s right,” he said, looking up from the phone and laughing. “Sorry, it’s just a habit.”

I could hardly blame him for the slipup. I’m probably the only person who has ever come into the store who didn’t want to synchronize the Google services they use with their phone. It’d be senseless to resist that kind of convenience and Google knows this, which is why Android prompts you to enter your Google credentials before you’ve even reached the phone’s dashboard for the first time. But what I wanted to know is whether there was another way.

Want a more in-depth explanation of why you might want to quit the Big Five? Check out my introductory blog post on how this experiment came about.

By now, it’s common knowledge that Google, Facebook, and Amazon are harvesting as much of our personal data as they can get their hands on to feed us targeted ads, train artificial intelligence, and sell us things before we know we need them. The results of this ruthless data-driven hypercapitalism speak for themselves: Today, the Big Five tech companies are worth a combined total of $3 trillion. When I started my month without the Big Five in May, Google’s parent company Alphabet, Amazon, and Apple were racing to be the first company in history with stock worth $1 trillion. In August, Apple became the first to reach this milestone and just a few weeks later Amazon’s market cap also briefly passed $1 trillion.

With the exception of Microsoft and Apple, these fortunes were not built by selling wildly popular products, but by collecting massive amounts of user data in order to more effectively sell us stuff. At the same time, this data has also been abused to swing elections and abet state surveillance. For most of us, giving away our data was seen as the price of convenience—Google and Facebook are “free” to use, after all.

Although Amazon now sells its own products, its rapid growth was fueled by selling other people’s products. This gave the company unprecedented access to consumer habits and data, which it used to spin out its own consumer goods brands and gain invaluable experience in logistics and web hosting. Both its in-house consumer brands and Amazon Web Services are now core parts of Amazon.

The widespread adoption of Microsoft and Apple products over the past 40 years, meanwhile, was no accident, but the result of monopoly-focused business tactics. The end result is that their products appear to be a natural default. You’re either a Mac person or a Windows person and you stick to your brand because that’s the way it’s always been.

 

As the open internet was swallowed whole by the megacorporations of Silicon Valley, however, a revolution was occurring in free, open source software (FOSS). Although FOSS can trace its roots back to the crew working at MIT’s artificial intelligence laboratory in the early 1980s, it broke into the mainstream in a big way largely due to the creation of Linux, an open operating system developed in the early 90s. These days there’s a galaxy of free and open source software that offers adequate alternatives to most Big Five services, and much of it is powered by Linux. In fact, a lot of the Big Five services you use on a daily basis are probably also based on Linux or open source software that has had some proprietary code grafted on top of it before it was repackaged and sold back to you.

My goal with going a month without the Big Five was to see if I could rely solely on open source or independent software without compromising what I was able to accomplish with proprietary code. Basically, could I live my normal life with open source alternatives?

Going into the experiment, I realized that there was a good chance I’d come crawling back to some of the Big Five services when it was over. Yet as I discovered over the four weeks, switching to independent alternatives didn’t negatively affect most parts of my life, but it did take a little getting used to.

Before diving into the nitty gritty of what worked and what didn’t, however, let me explain the limits of the experiment.

LIMITATIONS

After announcing my intention to relinquish Big Five services for a month, People On The Internet pointed out that my experiment would fail because I would almost certainly visit a website hosted by Amazon’s cloud service at some point, thereby indirectly putting money into Jeff Bezos’s pocket. This is, of course, true. Amazon Web Services hosts a number of popular sites that I use on a regular basis, such as Netflix, Reddit, Spotify, SoundCloud, and Yelp, all of which I visited at least once during the month.

Unfortunately, this kind of indirect support of the Big Five through their back-end services will become even more difficult to avoid in the future. For example, Google is beginning to lay its own undersea internet cables, creating the infrastructure for totally networked homes, and developing self-driving car services. Microsoft is aggressively pursuing cloud computing platforms and recently acquired GitHub, a code repository I frequently use while teaching myself how to program. Amazon moved into the space data business and is also working on networking your home with devices like Alexa, and Facebook still controls how much of the world communicates through its website, Instagram, and WhatsApp.

Yet even if I did scrupulously avoid visiting sites hosted on Amazon Web Services, the experiment was designed to be temporary. This meant that rather than shutting down my work Gmail accounts, I had them forward my email to an alternative email provider that I would then use to send and receive emails. There were also inevitably important files that I neglected to transfer from my Google Drive to an alternative hosting service when I was preparing for the experiment, so I had to log in to my Google account to retrieve those files and move them over. Or there were times when I was attempting to change a YouTube link to a HookTube link and accidentally landed on YouTube.

I don’t think the handful of lapses alluded to above undercut the spirit of the experiment, however, since I wasn’t intentionally using any services offered by the Big Five. If I were permanently planning to leave the Big Five I would have transferred all my files from Google Drive, deleted my Gmail accounts, and so on.

So with these experimental limitations in mind, I present the Motherboard Guide to Quitting the Big Five, based on my own experience in May 2018.

THE MOTHERBOARD GUIDE
TO QUITTING THE BIG FIVE


Image: Motherboard

HOW TO QUIT FACEBOOK

My experiment in leaving the Big Five arguably began back in March, when I deleted my Facebook account in the wake of the Cambridge Analytica scandal. Of all the companies I abandoned for this experiment, Facebook and its subsidiaries were by far the easiest. I have tried and failed to start an Instagram account several times over the years. I find Instagram unbelievably boring and I’ve come to terms with the fact that I’ll never understand its already large and still-growing appeal.

Quitting WhatsApp was more difficult since I used it to keep in touch with my friends abroad, many of whom live in countries where WhatsApp is the default communication tool. With friends and family in the US, I switched over to the encrypted chat app Telegram or just stuck to normal SMS and email. As I soon learned, the ideal messaging platform doesn’t exist. If security is your thing, WhatsApp, Messenger, Signal, and Telegram all have their flaws and all offer comparable services. The main advantage of WhatsApp is that nearly a quarter of the world already uses it.

I have been off Facebook for a few months now and my only regret is that I didn’t leave sooner. Although there is admittedly something of a phantom-limb effect right after leaving—pulling out my phone in response to imaginary pings from Messenger or reflexively navigating to the Facebook login page only to realize I no longer had a profile—the feeling that I was always missing something quickly subsided. I go out with friends and attend events just as much as I did before. I have no qualms about missing events that I would’ve received a mass Facebook invite to because now I live in blissful ignorance of their occurrence. Contrary to my expectations, my FOMO is at its lowest point in years.

“Contrary to my expectations, my FOMO is at its lowest point in years”

Admittedly, leaving Facebook is a privilege. In many places, Facebook and Messenger are people’s only links to the outside world, or people may depend on Facebook to run their business. It can also make it challenging for people to contact you if you leave. Although I made a point of collecting contact information from my friends before I deleted my account, there were inevitably some I forgot.

During my month without the Big Five, I received an email from an Argentinian friend I hadn’t seen in years who was passing through New York. When we met for dinner, he mentioned how hard I had been to track down without Facebook. Fortunately, I’ve listed my email publicly on my website and still had a Twitter profile at that point, so he was able to find an alternative method of contacting me. But for people who don’t work in industries where it’s normal to make your email public or to have a personal website, these types of missed connections are bound to happen.

As for the actual process of deleting your Facebook profile, it’s pretty simple. I’ve covered the process in detail in another article, but there are a few points you’ll want to consider before taking the plunge. If you’re the type of person who signs up for other apps such as Tinder or Airbnb with your Facebook account, then deleting your Facebook profile is going to be way more of a pain in the ass because you’re going to have to switch all those accounts over to an email login first. Second, if you have hundreds of photo albums dating back to 2008 that you want to save, be prepared to spend a few hours scraping them off of Facebook. (There are scripts that help with this, but I didn’t find any of them to be that efficient.) Other than that, there’s a button on Facebook that will allow you to download all your data in one fell swoop. It includes every like, comment, and event invite from the past decade so you can cherish these internet minutiae until you grow old and die.
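If you do want to pull your pictures out of that data archive in a more organized way than the scripts I tried, a few lines of Python will do it. The sketch below is a minimal example, assuming the export arrives as a zip file whose images sit under a `photos_and_videos/` folder; that folder name reflects the layout Facebook used at the time, and the `extract_photos` function itself is purely illustrative, not any official tool or API.

```python
import zipfile
from pathlib import Path

def extract_photos(archive_path, dest_dir, photo_dir="photos_and_videos"):
    """Copy every image under `photo_dir` in a Facebook data-export zip
    into `dest_dir`, preserving the album subfolder structure."""
    dest = Path(dest_dir)
    extracted = []
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            # Keep only image files that live inside the photos folder.
            if name.startswith(photo_dir + "/") and name.lower().endswith(
                (".jpg", ".jpeg", ".png", ".gif")
            ):
                # Strip the leading photos folder so albums land directly
                # under the destination directory.
                target = dest / Path(name).relative_to(photo_dir)
                target.parent.mkdir(parents=True, exist_ok=True)
                target.write_bytes(zf.read(name))
                extracted.append(target)
    return extracted
```

Point it at the downloaded zip and a destination folder and it copies each image out, album by album, while leaving the likes, comments, and event invites behind in the archive.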

Read More: Delete All Your Apps

There are a number of legitimate reasons you might want to consider leaving Facebook. In my case, I left due to my discomfort with the idea that I was giving away huge amounts of intensely personal data to a company that had a history of mishandling its users’ information. I was also getting tired of wasting so much time endlessly scrolling through status updates from people whom I hadn’t seen or talked to for years. I had managed to convince myself that clicking “like” on digital simulacra of people’s lives was socializing and, to borrow Mark Zuckerberg’s favorite word, being part of a “community.”

There’s no doubt that humans are social creatures and that human interaction is a critical part of an individual’s wellbeing. How strange, then, that a mounting body of evidence shows that reducing social media use actually decreases loneliness and feelings of unhappiness. To make matters worse, sometimes Facebook makes us unhappy on purpose.

But even if you have more free time than you know what to do with and don’t mind forking over your data to a multibillion-dollar company that just “runs ads,” you might consider ditching Facebook because it is a breeding ground for disinformation. In the past three years, evidence has emerged that Facebook was a primary vector for sowing political discord in the United States and, so far, Zuckerberg hasn’t demonstrated that his company has the faintest idea of how to stop it. Maybe one day it will figure out an effective filter for fake news, but until then, there’s a good chance that meme your racist uncle just posted was generated by a Russian bot.

Read More: The Impossible Job: Inside Facebook’s Struggle to Moderate 2 Billion People

During Zuckerberg’s testimony before the US Congress in April, Senator Lindsey Graham asked him point blank whether Facebook was a monopoly. Zuckerberg danced around the question and was ultimately unable to provide an example of alternative services offering a similar product to Facebook.

Although there are lots of alternative social media platforms out there, none of them are used by half the world’s population, which is exactly what makes Facebook so valuable. Still, if you want to keep social media in your life, you might want to use an alternative platform, such as Mastodon (a decentralized Twitter imitator) or Ello (a privacy-oriented, ad-free Facebook alternative). You won’t find anyone you know on there, probably, but at least your social media fix won’t come at the cost of your privacy.

HOW TO QUIT APPLE

I’ve only owned two Apple products in my life. One was an old 120 gigabyte iPod classic that I still miss dearly. The other was an iPhone 4 that I got in 2010 and had for a year and a half before I switched to Android and never looked back.

Since I didn’t have any Apple products to relinquish for my monthlong experiment, I used the time for a little introspection on why I dislike Apple products. The main reason is that I was raised using Windows, so I was disincentivized to learn the quirks of a new OS. As I grew older, however, I also found Apple’s “walled-garden” approach to its device ecosystem infuriating. (For many people, however, this closed ecosystem and interoperability between Apple devices is exactly what makes its products attractive.)

Apple’s obsession with total control is perhaps best exemplified by the release of the iPhone 7 in 2016, which got rid of the ubiquitous headphone jack that has been used by literally every other digital device since forever and replaced it with a proprietary dongle. This was an affront to Apple’s devout followers, sure, but that didn’t stop the company from selling more than 200 million iPhones last year at around $600 a pop. And yet here we are, years after Apple adopted the dongle, and people are still mourning the loss of the headphone jack.

I know why I don’t use Apple, but even after a month of thinking about it, I still couldn’t rationalize why anyone would spend a night sleeping outside an Apple store to get their hands on one of its overpriced products. People love to justify their purchase of iPhones by appealing to the superior security of iOS compared to Android. But recent updates have significantly closed the security gap between Android phones and iPhones.

After a month of thinking about it, I still couldn’t rationalize why anyone would spend a night sleeping outside an Apple store to get their hands on one of its overpriced products

Unfortunately, there are no independent studies about what motivates most people to buy Apple phones, but I suspect that security probably wouldn’t top the list. Besides, as the fallout between the FBI and Apple over backdoors reminded us, there’s no such thing as an unhackable device. In fact, there’s a relatively cheap hacking tool that can be used by cops to bypass iPhone encryption. Even when Apple tried to fix this with a patch, iPhones got hacked again anyway. C’est la vie!

Okay, but what about Macs? Apple’s laptops and desktop computers are usually adored for their performance specs and native applications that are geared toward creative types (GarageBand, iMovie, etc.). Apple knows this, which is why a recent commercial campaign for the MacBook features artists making art while a Daniel Johnston song called “Story of an Artist” plays in the background. Very subtle. The thing is, you can build a custom PC that matches or surpasses the technical specs of a high-end Mac without spending $5,000.

Despite what you may have heard, building a custom PC is not as hard as it sounds. It’s basically just an expensive and delicate form of electronic Lego. I don’t have any formal experience in computer science and I was able to build a decent PC with 2 GPUs, 16 gigs of memory, two terabytes of storage, and a quad-core CPU for around $1,000 by using handy tools such as PC Part Picker. My PC has way more power than I’ve ever needed and still costs less than a new MacBook and far less than a Mac desktop. As for the Mac’s native applications, most of these have fine Linux equivalents. For example, here’s an extensive list of free sound and MIDI software for Linux; Ubuntu Studio is great for most video editing needs; there are even several open source alternatives to Siri.

HOW TO QUIT AMAZON

Depending on how you look at it, Amazon is either the hardest or the easiest company to quit of the Big Five. On the one hand, its consumer-facing business is mostly predicated on the idea of convenience, as evidenced by products like the Dash button or Alexa. This should, in principle, make it easy to quit since it would only require going back to the old ways of buying things from an actual brick-and-mortar store or visiting websites that sell specific goods.

When I started my experiment, I had an Amazon Prime account, but really only used Amazon to regularly buy three things: books, cat food, and cat litter. As someone who exclusively uses public transportation, these items are a pain to buy at a store and carry to my house because they are large and heavy. Of course I could just order the cat products from another site, but Amazon Prime offers free shipping and the ability to set up recurring automatic orders.

Read More: How To Get Amazon Prime for Free for Life

During my experiment, however, I was determined to patronize my locally owned pet store since this seemed to be the most antithetical to Amazon’s dominance of all things retail. Carrying these items the few blocks to my house sucked (a box of cat litter weighs 40 pounds), but what blew even more was the price difference. The same cat food I always buy on Amazon cost more than twice as much at my local pet store. While this was fine for a month, I couldn’t afford such a large increase in my expenses in the long term. My best bet, then, would be to still buy the pet items I needed online from websites such as Chewy, which still provide most of the convenience offered by Amazon.

It wasn’t convenience alone that made Amazon into the behemoth it is today—there were plenty of online book retailers around when Amazon hit the scene in 1994. What made Amazon successful was that its catalog included books not carried by other (online) bookstores. Over the past two decades, it has expanded this logic to every type of consumer good and this is precisely what makes “the everything store” so difficult to quit.

Whereas a brick-and-mortar store can only carry a finite inventory, Amazon’s inventory is effectively limitless. This combination of infinite selection and total convenience is exactly the type of selling point that appeals to America’s workforce, which is increasingly strapped for both time and money. For people living in rural areas or with disabilities, Amazon’s rapid delivery services can also be a lifeline.

I am able-bodied and live in one of the largest cities in the world, so quitting Amazon is arguably a privilege. I didn’t mind calling my local bookstore to ask it to order a particular title or popping into the local pet store every few weeks if that’s what it took to cut the company from my life.

Then one day I was making a recipe that called for pine nuts, only to discover that none of the three grocery stores in my neighborhood carried them. The only other grocery store remotely close to me was Whole Foods, which was recently acquired by Amazon and definitely carried pine nuts. So I caved, dear reader, and bought some overpriced seeds from an Amazon subsidiary.

Although shopping local or going to other online stores is an option for quitting Amazon, some of its other subsidiaries are far more difficult to replace because they are unique. I don’t game, but if I did it would be hard to find an adequate replacement for Twitch because so many gamers already use it. Likewise, the Internet Movie Database for movie facts and Goodreads for book reviews are basically the go-to sites for their respective domains, with no adequate alternatives. Finally, as mentioned earlier, many major websites such as Netflix and Spotify run on Amazon Web Services, so if you use these services you’re also indirectly supporting Amazon.

Nevertheless, there are plenty of good reasons to limit your patronage of Amazon and its subsidiaries. For starters, Amazon has become notorious for its mistreatment of workers. A 2015 New York Times exposé detailed the grueling expectations placed on Amazon’s white collar workers, and story after story after story keeps bubbling up that details the inhumane conditions faced by Amazon’s warehouse employees.

You may also take issue with Amazon’s development of facial recognition software that is used for predictive policing and the company’s support of similar products made by companies such as Palantir that use its cloud hosting service. Even if Amazon’s Echo and Dot are ostensibly benign, they are also liable to be hacked and turned into spy devices.

Finally, Amazon has developed a reputation for steamrolling local economies and may end up killing over 2 million jobs as it increases its dominance over traditional retail and other market sectors.

HOW TO QUIT MICROSOFT

I have used Microsoft’s operating system for as long as I can remember. My family’s first computer ran Windows 95, but the first experience I can recall with a computer was Windows 98 and the boot theme must’ve imprinted itself on my impressionable, 5-year-old brain because I’ve exclusively used Windows ever since. The Vista and XP years were rough, I’ll admit, but it’s always darkest before dawn. Windows 10 certainly has its flaws (especially when it comes to privacy), but I’d be lying if I said I wasn’t dreading swapping it out for Ubuntu, a popular Linux distribution.

Begun by Linus Torvalds in the early 90s as an open source kernel, Linux has grown from a nerdy curiosity into a defining feature of modern computer systems. Indeed, Google, Microsoft, Amazon, and Facebook are all major donors to the Linux Foundation, which underscores their reliance on the kernel. These days the Linux kernel powers around 75 percent of cloud platforms and is also found at the core of many consumer-facing devices, including every phone running Android, the most popular mobile OS in the world by a huge margin.

Although Linux is prized by system admins everywhere for its versatility, it’s been slow to catch on as an operating system for average PC users who mostly use their computers for web browsing, word processing, and other simple tasks. In the beginning, Linux was still very experimental and didn’t offer equivalents for many of the standard programs found on Windows PCs or Macs. Further, the makers of many popular programs didn’t bother to create versions of their software for machines running Linux.

Today, things are much better in this respect. There are Linux equivalents of everything from Microsoft Office to Adobe’s Photoshop, and popular applications such as Spotify usually offer a Linux version of their software.

Prior to this experiment, my only experience with Linux was setting up a cryptocurrency mining rig that ran a custom operating system called EthOS specifically designed for mining. This familiarized me with some basic terminal commands, but really I was a total Linux noob.

Fortunately, getting Linux up and running on my laptop and home PC was pretty easy. For the laptop, I used a colleague’s 2010 Alienware gaming laptop. Rather than partitioning the hard drive, which is a way to have multiple operating systems on a single computer, I opted to erase Windows and have the laptop only run Ubuntu.

To do this, I downloaded Ubuntu (there are plenty of different Linux distributions to choose from, but Ubuntu is one of the most popular distros for casual users) onto a USB drive. If you want to try Linux before fully committing to replacing your OS, you can run most distributions live from the thumb drive. Since I was going to be doing this experiment for a month and wanted to have access to the computer’s storage space, I opted to wipe the computer and install Linux.
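For readers who want to follow along, the USB-writing step can be done with the venerable `dd` tool. This is a hedged sketch, not my exact commands: the ISO filename and `/dev/sdX` device path are placeholders, and the destructive parts are left as comments. The live portion runs `dd` on scratch files, which behaves identically and is safe to try.

```shell
# Writing an Ubuntu ISO to a USB stick with dd. The filename and /dev/sdX are
# placeholders. dd overwrites the target device entirely, so identify the
# correct device with lsblk before running anything like this:
#
#   lsblk                                              # find the stick, e.g. /dev/sdb
#   sudo dd if=ubuntu-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
#   sync                                               # flush writes before unplugging
#
# dd works the same way on ordinary files, which is a safe way to see what it
# does: here we "flash" a dummy image into a scratch file instead of a device.
workdir=$(mktemp -d)
printf 'pretend this is an ISO image' > "$workdir/ubuntu.iso"
dd if="$workdir/ubuntu.iso" of="$workdir/usb.img" bs=4M 2>/dev/null
cmp -s "$workdir/ubuntu.iso" "$workdir/usb.img" && echo "byte-for-byte copy"
```

Graphical tools like Ubuntu’s Startup Disk Creator or Etcher do the same job if the terminal route feels intimidating.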

On my home PC, I have two terabytes of hard drive space, so I had more than enough room to host two operating systems side by side and still have a decent amount of storage allocated to each OS. When partitioning a disk to run both Windows/MacOS and Linux on the same computer, you can choose how much of your hard drive you want to allocate to each OS. In my case I chose to split it evenly. Now, whenever I reboot that PC, it will automatically boot into Windows, but if I enter the boot menu after restarting the computer, I can also choose to boot into Linux instead.

In spite of the ease of installation and compatibility with most software programs, Ubuntu and other Linux operating systems still haven’t really taken off in the mainstream. The reason for this, I think, is that using Linux actually feels like using a computer—as in, the remarkably complex network of transistors, logic gates, and the other stuff ensconced in whatever device you’re reading this on. Linux violates the first rule of getting people to use a technology, which is that it shouldn’t feel like you’re using technology at all. To paraphrase Arthur C. Clarke, it should feel like magic. Linux does not feel like magic; it feels like a pain in the ass—at least until you’ve figured out how to use a command terminal.

We’ve gotten so accustomed to graphical user interfaces that most of us have forgotten that prior to the mid-80s, most computers didn’t have application icons that could summon advanced programs with a double tap on a mouse. Instead, pulling a document from a file or launching a program required the user to actually enter the desired command as text. The latest version of Ubuntu has a sleek graphical interface that isn’t that much different from what you’d find on Windows or MacOS, but after a few days of learning the command terminal it’s hard to go back.

It’s possible to do basically everything from a Linux terminal, but just because it’s possible doesn’t necessarily mean you want to. Learning to effectively use the terminal was definitely the most gratifying part of my experiment. Although I am still a novice, I really liked that it allowed me to tell the computer exactly what I wanted it to do, without having to navigate endless menus or other superfluous features. It felt like I had real control over my computer, as opposed to being forced to use applications based on what the designers thought their users wanted. I also learned a great deal about how an operating system actually works by having to think through directory structures and follow logical sequences of commands.

Still, the first few days of using Linux were incredibly frustrating. It felt like I had to Google—ahem, query on DuckDuckGo—the answer for the simplest things, such as how to download an application. At this point, Ubuntu has a pretty extensive package repository, so many programs you use on a regular basis are probably one-click downloads. But if you want to run a more obscure program, you’re going to have to compile it yourself from the source, which includes learning how to make a directory and all that good stuff.
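The two install paths mentioned above look roughly like this. The package and program names are examples I chose for illustration, not software from my month; the commands that need root access and a network connection are shown as comments.

```shell
# From Ubuntu's package repository, installing is a one-liner (needs sudo and
# a network connection, so it's shown here as a comment):
#
#   sudo apt update && sudo apt install gimp
#
# For unpackaged software, you typically unpack the source and build it:
#
#   tar xf someprogram-1.0.tar.gz
#   cd someprogram-1.0
#   ./configure && make && sudo make install
#
# The "make a directory and all that good stuff" part is just shell basics:
workdir=$(mktemp -d)        # scratch directory standing in for a source folder
cd "$workdir"
mkdir someprogram-1.0       # where an unpacked source tree would live
cd someprogram-1.0
pwd                         # prints the full path of the build directory
```

Once the `mkdir`/`cd`/`ls` vocabulary clicks, the rest of the terminal starts to feel far less hostile.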

Other than my initial difficulties with the terminal, the Linux experience with Ubuntu was quite pleasant. There are alternative open source programs for pretty much everything you’d find on a Windows system. For example, LibreOffice is a perfect substitute for Word, Excel, and PowerPoint; GIMP is a more than adequate substitute for Adobe Photoshop for amateur photo editing; and Pidgin is a great instant messaging app. If you absolutely need to run Windows programs on a Linux machine, there’s a compatibility layer called Wine that will let you do just that.

There are also a number of other “hidden” advantages that come with Linux. For starters, it is arguably the most secure OS—you probably don’t even need an anti-virus program. Ubuntu, along with other Linux distributions, is generally an ultra-efficient and lean operating system, so if you are using an older computer like I was, you shouldn’t have any trouble running it. Best of all, it’s entirely free. This was a breath of fresh air after using Microsoft, which will charge you an arm and a leg for Windows ($139 for the home edition) and then still more for its defining features, such as Microsoft Office ($70 for a single user home edition).

HOW TO QUIT GOOGLE

Google was without a doubt the hardest company to purge from my life, but for this reason, also the most necessary. I am dependent on Google products for almost everything in my personal and professional life. At work, my editors and I workshop stories in Google Docs; our company email system is hosted on Gmail servers; my contact with people at VICE that don’t directly work with Motherboard is almost exclusively through Hangouts; I organize calls with sources on Google Calendar; all my documents and photos are automatically synced to Google Drive; I frequently write about videos I find on YouTube; Google Maps is the only way I know how to navigate New York City; Google’s Authenticator app secures many of my most important online accounts; Chrome has been my web browser since it was released a decade ago; and most importantly, my phone, and 75 percent of all the other phones on the planet, run Android, which is mainly developed by Google.

In some cases, Google’s products are far better than anything else out there (Google Maps) or are seemingly irreplaceable because that’s what everyone else uses (YouTube). Yet the real attraction to Google is that all of its products are seamlessly integrated across devices. The idea of unlinking all these vital aspects of my professional and personal life was off-putting, and trying to find adequate replacements for all these services seemed nearly impossible. But I am here to tell you that there is life after Google.

GMAIL

The easiest Google product to ditch was Gmail because there are plenty of good alternative email providers out there. I opted to go with Protonmail, a Swiss email provider that encrypts every email sent through its service. The only downside I noticed was that I used up approximately half of my allotted 500 MB of free storage space in the month.

It is, of course, possible to pay for an upgraded subscription to get more storage, but this costs significantly more than Gmail’s storage upgrades, which also cover file hosting through Google Drive. For the sake of comparison, 5 GB of storage on Protonmail costs a little over $5/month, whereas Google charges $2/month for 100 GB. These are economies of scale at work.
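Worked out per gigabyte, the gap is stark. A quick back-of-the-envelope calculation using the prices quoted above (and ignoring that part of what Protonmail charges for is encryption, not raw storage):

```shell
# Per-gigabyte monthly cost of each plan, using the prices quoted above.
proton_per_gb=$(awk 'BEGIN { printf "%.2f", 5.00 / 5 }')     # ~$5 for 5 GB
google_per_gb=$(awk 'BEGIN { printf "%.3f", 2.00 / 100 }')   # $2 for 100 GB
echo "Protonmail:   \$$proton_per_gb per GB per month"       # $1.00
echo "Google Drive: \$$google_per_gb per GB per month"       # $0.020
```

By this crude measure, Google’s storage is roughly fifty times cheaper per gigabyte.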

Although it is possible to set up your own email server, this process is quite complex, though there are a few startups that are trying to streamline the process. If you haven’t set up a web server before (more on this below), try doing that first before making the leap to hosting your own email.

Rather than going through the hassle of deleting all my Gmail accounts for a month, I set up my Gmail accounts to automatically forward incoming mail to my new Protonmail accounts, so technically Google was still processing my email. If you’re looking to permanently ditch Gmail, you’ll still probably want to forward your emails to your new account at first so that you don’t end up missing anything important while your contacts catch up to your new email address. Another option is to send out a mass email informing your contacts of your new address.

GOOGLE DRIVE

My professional and personal life is such that I have amassed a substantial collection of documents, voice recordings, photographs, and other digital flotsam. To help keep tabs on data distributed across several devices and to guard against data loss from hard drive failure, I used a paid subscription to Google Drive. This got me a whopping 100 GB of storage space on Google’s servers for a couple bucks a month, but the real cost was a substantial loss of privacy. Google automatically scans the contents of its user documents stored on Drive to prevent violations of its terms of service and serve up targeted ads. Up until last year it also scanned personal Gmail accounts.

Although I always had the option of moving my personal documents to a different hosting device or to a local hard drive, this always seemed to be more hassle than it was worth since half of my job takes place in Google Docs, which my editors and I use for collaborative editing. Google Drive was convenient because it allows for collaboration on documents and storage in the same spot.

There are several great alternative cloud hosting services available, but far fewer alternative web services for collaborating on documents. One of the best known open source collaborative editors is Etherpad, which launched in 2008…and was almost immediately acquired by Google.

I opted to try Piratepad, a fork of Etherpad that was created by the Swedish Pirate Party. Although I loved the spirit of Piratepad, its barebones format made editing articles difficult because it was harder to leave comments and make suggestions on articles. Instead, you had to make changes directly in the document.

Moreover, whenever I tried to copy an article from Piratepad into VICE’s content management system, the format was totally wonky and reformatting the article added a substantial amount of time to the publishing process.

The solution my editors and I eventually landed on was far from ideal. I would write an article locally using LibreOffice Writer (the Linux equivalent of Word), send the document in Slack to my editors, who would upload it to Google Drive on their own computers, edit it, re-download it as an ODT file—the file format for text documents in LibreOffice—then send it back to me on Slack for rewrites. Despite how wildly inefficient this was, it allowed for all the editing amenities found in Google Docs without messing with the article’s format. Although this worked well enough for the month, it’s hard to imagine that this would be sustainable long term. As far as I could tell, when it comes to collaborative editing software there’s still no good replacement for Google Docs.

As for the hosting platform, I decided to use NextCloud, an open source fork of the file hosting service ownCloud. I was pleasantly surprised at how intuitive NextCloud’s interface was and how easy it was to integrate across my devices, including my rooted phone. NextCloud is run out of Germany, but because it is open source software, anyone can host their own file storage server locally rather than rely on the company. This only requires about $40 in set-up costs for a Raspberry Pi, a storage medium such as an external hard drive, and an ethernet cord. This sounds complicated, but there are plenty of easy-to-follow tutorials to set up your own “cloud” storage system at home.

MAPS

There was a point in my life where I knew how to use a compass and read a topographic map, but whatever part of my brain was reserved for storing this information started to atrophy the day I discovered Google Maps. This app is, without question, the best map app in existence, which makes sense given how much Google has invested in mapping technology. The company has fleets of cars with cameras mounted on them that roam the world’s streets, but its most important data is anonymously submitted by millions of users whose smartphones deliver movement data to Google as they navigate a city.

At this point I couldn’t locate my own ass without consulting Google Maps, so the prospect of trying to navigate New York City—a city I had moved to only a few months prior—without this cartographic crutch was daunting. Last year, a cartographer named Justin O’Beirne published a fascinating deep dive into why Google’s maps are so good and why every competitor, including Apple, has found Google Maps to be basically impossible to replicate, so I knew going in I was going to experience a serious downgrade in navigation capabilities.

Despite this, there are plenty of alternative map apps to choose from. Two of the best alternatives, Apple Maps and Waze, were off-limits because they are owned by Apple and Google, respectively. (I was also under the impression that Here was still owned by Nokia (Microsoft), but have since learned that it was sold to a consortium of German automakers in 2015.) I remembered the days when MapQuest was still considered the go-to for navigation, so I opted to use its service, figuring it probably got better over the years. If it has, it was hard to tell.

One of the most convenient things about Google Maps is that it integrates various forms of transportation into its directions. You’ll get different directions depending on whether you’re biking, taking a car, walking, or taking the subway. MapQuest, however, only offers driving and walking, which is less than ideal in a city where public transit and biking are major modes of transportation.

Throughout the month, I found myself getting frustrated with little things like having to figure out the crossroads of a subway stop, rather than just typing in the name of the stop to get MapQuest to understand where I was. Likewise, I ended up taking a lot of inefficient bike routes because the MapQuest app couldn’t tell me which streets had bike lanes. There’s something really nice about only having to type in “library” in Google Maps to get directed to New York Public Library a few blocks away. Unless you type out the full “New York Public Library” in MapQuest, you’re liable to get directions to a library in another state.

CHROME

Abandoning Chrome was more of an annoyance than anything. I’ve surfed the web using Google’s browser for a while now after years of being a devoted Firefox user. Although I still had Firefox installed on my laptop, it wasn’t nearly as perfectly tuned as my Chrome settings were. I mostly kept it around to use when I had to visit a site that insisted I turn off the various ad blockers and anti-tracking plugins I use on Chrome. The main reason I left Firefox a few years ago was its lackluster security, which is slowly improving.

Although I also briefly used Opera and Brave for this experiment, I ultimately settled on Firefox as my go-to browser. Opera and Brave are both based on Chromium, the underlying engine for Google’s Chrome browser.

Despite being open source, Firefox is not entirely Google-free, either. For the past decade, Mozilla has had an off-and-on agreement with Google to use its search engine by default, which is quite lucrative for Mozilla. Still, it wasn’t running Google’s engine, so I opted to use it for the majority of my experiment. As far as user experience was concerned, switching to Firefox was hardly a noticeable change.

GOOGLE SEARCH

There are plenty of alternative search engines out there, but the two leading candidates—Bing and Google Search—were off limits. For my experiment, I opted for DuckDuckGo, a privacy-oriented search engine. DuckDuckGo doesn’t track your searches or serve you targeted ads. It’s hardly any wonder, then, that it is the default search engine for the Tor network.

DuckDuckGo also replicates a lot of features found in Google Search, such as autocomplete and “bang” commands that let you directly search another website from the search bar. For instance, if I were to type “!imdb the most unknown,” I’d find myself on IMDb’s page for Motherboard’s first documentary, The Most Unknown. Of course I wouldn’t have done that, however, because IMDb is owned by Amazon.
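Under the hood, DuckDuckGo’s site-search commands (known as “bangs”) are just part of the query string, so a search URL can be assembled by hand. This is an illustrative sketch; the `sed` substitution is a deliberately naive URL-encoder that only handles spaces and the bang character.

```shell
# Build a DuckDuckGo search URL for a "bang" query by hand.
query='!imdb the most unknown'
encoded=$(printf '%s' "$query" | sed 's/ /+/g; s/!/%21/g')   # naive encoding
echo "https://duckduckgo.com/?q=$encoded"
# prints: https://duckduckgo.com/?q=%21imdb+the+most+unknown
```

Opening that URL in a browser redirects straight to the target site’s own search results.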

While I appreciated these features, I couldn’t help but notice a remarkable deterioration in the quality of my search results compared to Google. With Google, I can type in a loose collection of keywords and usually find my desired result. With DuckDuckGo, my searches would have to be painstakingly exact. This made things difficult when I didn’t know exactly what I was looking for, and constantly made me wonder if there were better search results that I wasn’t seeing. In any case, DuckDuckGo was still pretty impressive and it felt good to know I wasn’t being tracked every time I put something in the search bar.

Despite its best intentions and willingness to take Google to task for its monopolizing business practices, DuckDuckGo is not entirely free from the grips of the Big Five. According to the company, DuckDuckGo makes money by serving ads from the Yahoo-Microsoft search alliance. While these ads are based on the search query, rather than data about the user, at least a portion of DuckDuckGo’s revenue comes from Microsoft’s pockets. DuckDuckGo is also part of the Amazon affiliate program, so if you purchase Amazon products using the search engine the company earns a small commission.

YOUTUBE

A significant part of my job involves watching YouTube videos, so I had to figure out a way to still get access to them without routing my traffic through the website. In May, there was a really convenient service around called HookTube that could do just that. To use HookTube, you simply replaced the “youtube” portion in any YouTube video link with “hooktube.” That’s it. When you used HookTube, you wouldn’t be routing traffic through Google’s servers, giving views to the videos, or seeing any ads.
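Because the trick was a pure string substitution on the URL, it could even be scripted. The video ID below is just an example, and (as the changelog quoted later makes clear) the rewritten links no longer bypass Google’s servers, so this is purely illustrative.

```shell
# The HookTube trick: swap "youtube" for "hooktube" in the link. sed replaces
# the first occurrence, which is the one in the hostname.
yt_url='https://www.youtube.com/watch?v=dQw4w9WgXcQ'   # example video link
echo "$yt_url" | sed 's/youtube/hooktube/'
# prints: https://www.hooktube.com/watch?v=dQw4w9WgXcQ
```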

Of course, all these videos still exist on Google’s servers and HookTube would be useless without them. This is yet another case where there is really no real replacement for YouTube in terms of the sheer amount of content hosted on the site. There are plenty of other video platforms (Vimeo, for example) but they have different—and vastly smaller—video libraries.

I really fell in love with HookTube, but unfortunately the service is no more. As detailed in HookTube’s changelog, on July 16 the service was ended due to increasing pressure from YouTube’s legal team. Although HookTube still exists, its links are routed through Google’s servers.

“HookTube is now effectively just a lightweight version of YouTube and useless to the 90 percent of you primarily concerned with denying Google data and seeing videos blocked by your governments,” the changelog reads. “Rest in pieces.”

In the meantime, others have attempted to make replacement versions of HookTube. Some of these appear to work well, but as HookTube demonstrated, it’s only a matter of time before they attract the attention of YouTube’s legal department. While it’s certainly possible to create an endless array of mirror sites to evade this kind of takedown pressure, similar to how torrenting sites such as The Pirate Bay continue to operate despite a crackdown on torrenting, no one appears to have done the same with HookTube yet.

AUTHENTICATOR

If you’re thinking of ditching Google and you use two-factor authentication to secure your accounts, make sure you have your recovery code for every account secured using Google Authenticator. If you do not have these, you will be locked out of your accounts. I cannot emphasize enough how important it is to triple-check that you have a backup way to get into accounts secured with two-factor authentication when leaving Google.

While I wouldn’t suggest reverting to SMS-based verification, which can be spoofed by attackers, there is a good alternative two-factor authentication service out there called Authy.

Read More: What Is a Two-Factor Authentication Recovery Code?

Authy can be used on any site that supports Authenticator, but it comes with a few distinct advantages, the most notable being its multiple-device functionality. Authenticator is tied to a single device, so if you want to use it on your phone and tablet at the same time, you’re out of luck, and if you switch phones, you’ll have to transfer all of your accounts to the new device.

Authy allows you to have the service on multiple devices, so if you lose your phone and haven’t backed up your seeds like I told you to, you’ll still be able to get back into your devices. (Importantly, you can also disable Authy on the lost device.) Moreover, Authenticator only works on mobile devices, whereas Authy works on desktops and laptops as well.

ANDROID

When I arrived home from the Verizon store with my Samsung Galaxy S3, I immediately set to work trying to figure out how to get Sailfish OS on it. Sailfish is perhaps the last truly independent mobile operating system available—Firefox OS, Windows Phone, and Ubuntu Mobile have all bitten the dust in the past few years. At this point, only about 0.1 percent of all smartphones aren’t running iOS or Android. If I were going to truly ditch Google, I was going to have to ditch Android as well.

Android is nominally “open source,” but it is far from “free open source software” in any meaningful sense. Google has maintained the Android Open Source Project (AOSP) since it acquired Android in 2005. Google’s software engineers are responsible for new releases of the Android operating system.

Android is based on the Linux kernel, the part of an operating system responsible for interfacing with the device’s hardware and managing the computer’s resources such as CPU and RAM. This source code is released for free through AOSP, so anyone can take the Android code made by Google developers and use it to make their own version of Android.

When you buy a phone, the Android OS that comes with it also has a bunch of services grafted on top. These are the Google Mobile Services (GMS) that many users take to be defining features of Android: Google Search, Maps, Drive, Gmail, and so on. These services are definitely not open source.

So why does this matter if anyone can modify Android code, or “fork” it, any time they want? Even if someone managed to fork Android and clone all its best apps, they’d be hard-pressed to find a manufacturer to build a device for this Android clone. As Ben Edelman, an associate professor at Harvard Business School, explained in a 2016 paper, device manufacturers are free to produce phones running “bare” versions of Android, but this means no Google apps are allowed to be pre-installed on the device.

If the device manufacturer wants to include Google Mobile Services on its Android phones, it must sign a Mobile Application Distribution Agreement that requires it to pre-install certain Google applications in prominent places, such as the phone’s home page. Google search must also be set as the default search provider “for all web access points.” Google also requires that its Network Location Provider service be “preloaded and the default, tracking users’ geographic location at all times and sending that information to Google.”

More troubling is that Google makes all device manufacturers that want to run Google Mobile Services on their devices sign an “Anti-Fragmentation Agreement” (AFA). This is a legal agreement that states the manufacturers won’t fork their own version of Android to run on their devices. As Edelman notes, no copies of this agreement have ever been leaked to the public, even though the existence of the document has been confirmed by Google. This is justified on the grounds that it will ensure that all apps work across all versions of Android, rather than having apps that only work with some Android forks.

Similar limitations bind members of the Open Handset Alliance, a group formed by Google in 2007 to bring together companies committed to developing products that are compatible with Google’s Android. According to Ars Technica, the OHA contractually bars members from building non-Google-approved devices that run competing Android forks. This is acknowledged by Google in a 2012 blog post: “By joining the Open Handset Alliance, each member contributes to and builds one Android platform, not a bunch of incompatible versions.”

As the venture capitalist Bill Gurley wrote in a particularly prescient blog post from 2009, Google’s tactic ensures it dominates the mobile OS market and drives everyone to use its real money maker—search. The reason search is so valuable is that it lets Google gather data on its users and use it to sell them targeted ads. Android, Gurley writes, is not a “product” because Google is not trying to make a profit on it. Instead, “they want to take any layer that lives between themselves and the consumer and make it free (or even less than free). Google is scorching the Earth for 250 miles around the outside of the castle to ensure that no one can approach it. And best I can tell, they are doing a damn good job of it.”

The results of this tactic speak for themselves. Today, approximately 88 percent of all smartphones on the market run Android, and most of them are running Google’s version of the OS. Nevertheless, Google makes it a point to remind people that Android is open source so any company can put the bare AOSP version on their devices. This is technically true, and a few foolhardy companies have tried.

Perhaps the best cautionary tale is Amazon’s Fire Phone, which launched in 2014 on a bare AOSP version of Android. The device was widely panned for lacking Gmail and other basic apps, and Amazon discontinued it the following year after racking up $170 million in losses and a surplus of $83 million worth of unsold devices.

In recent months, Google has moved to tighten its grip on uncertified Android devices. Previously, it was possible to buy a bare AOSP phone and side-load Google Play to download other Google apps so you could use it like a normal Google-certified Android device. In March, however, Google started to block all uncertified Android devices from accessing any Google services or apps. The vibrant Android modification community was shit-out-of-luck if it wanted to use any Google services or log into its Google accounts.

In short, that left people with three options:

  1. If they wanted to use any Google services, they had to use Google-certified Android devices and an unmodified version of Android released by Google.
  2. They could use a bare AOSP or modified version of AOSP Android, but not access any Google services.
  3. They could use Sailfish OS, an open source mobile operating system that is still being actively developed, but they still wouldn’t be able to use any Google services as applications. (They could still visit Google Maps or Gmail through the browser, although the mobile versions of these services are less than stellar.)

I opted to use Sailfish OS, which is why I found myself in a Verizon store in Bushwick downgrading my phone to a Samsung Galaxy S3. The Sailfish OS is developed by Jolla, a small Finnish company that was started in 2012 by a group of former Nokia developers who jumped ship just prior to Nokia’s acquisition by Microsoft.

Initially, Jolla aspired to create an alternative phone that would pair with its open source, alternative operating system. Yet after years of setbacks and failed launches, it scaled back its ambitions to work exclusively on Sailfish.

Jolla has recently changed its focus to enterprise customers, but a small dedicated group of die-hard Sailfish fans have kept the consumer Sailfish OS alive and continue to drive its development.

Read More: Meet Sailfish, the Last Independent Mobile Operating System

Motherboard Editor-in-Chief Jason Koebler had a Nexus 5 that he had flashed with Sailfish. Before the experiment began I messed around with it a bit to familiarize myself with the operating system. I liked Sailfish a lot—its interface was close enough to Android to be familiar, but had enough idiosyncrasies to make it distinct. The most noticeable difference is that Sailfish is far more gesture-oriented.

Although Sailfish is an open source, alternative OS, you’re not limited to open source apps. Sailfish supports Android apps, which can be side-loaded onto the phone by downloading the app’s APK file from the internet and loading it onto the phone manually. Still, Jolla’s documentation for Sailfish says, “We always advise against installing Google Services on SailfishOS, as it is known to potentially cause a multitude of problems ranging from serious to trivial.”

Despite really liking Sailfish, I was ultimately unable to use the operating system for my experiment. I couldn’t use Jason’s phone because, though the Nexus 5 was manufactured by LG, it was developed in partnership with Google.

Although Samsung has recently embraced the Android modification community and there’s plenty of documentation available for how to install Sailfish on a Samsung Galaxy S3, Verizon does everything in its power to make sure its customers can’t get root access to its devices.

Verizon and other carriers, such as AT&T, have emerged as the biggest threat to the modification of mobile operating systems in the US by shipping all their phones with locked bootloaders. A bootloader is low-level software that is the first thing to start up when you turn on your phone. It makes sure all the software is working properly and in certain cases prevents users from installing unauthorized software.

Locked bootloaders prevent users from gaining the kind of deep access to their phones needed to swap out the stock Android OS for a custom operating system. Ironically, Microsoft’s Nokia phones and Google’s Nexus and Pixel phones make it super easy to unlock the bootloader on many carriers and are thus easy to customize. This isn’t the case with any phone on Verizon’s network. (Enterprising Android modders have figured out how to unlock the bootloader for some Verizon Android phones, but these are few and far between.)
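On phones that allow it, unlocking the bootloader is typically a short dance with Google’s `adb` and `fastboot` tools. A rough sketch, assuming the platform-tools are installed and the device actually permits unlocking (the exact unlock subcommand varies by device generation, and the function name is my own):

```shell
# Hedged sketch: the typical bootloader-unlock flow on an unlockable phone.
# WARNING: unlocking wipes the device. Assumes adb/fastboot are installed
# and "OEM unlocking" is enabled under the phone's Developer Options.

unlock_bootloader() {
    adb reboot bootloader     # reboot the phone into fastboot mode
    fastboot flashing unlock  # newer devices; older ones use "fastboot oem unlock"
}
```

On carrier-locked phones like the Verizon Galaxy S3 described here, both unlock commands simply fail, which is the whole problem.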

After days of trying and failing to unlock my bootloader to flash Sailfish OS onto my Samsung Galaxy S3, I admitted defeat. Instead, I opted to run SuperLite, a lightweight Android ROM developed as part of the Android Open Kang Project (AOKP). (“Kang” is developer slang for stolen code.) AOKP is free and open source software based on the official AOSP releases, but it is modified with third-party code contributed by the AOKP community and gives users even more control over how the Android software interacts with their phone’s hardware.

Since I was unable to unlock my bootloader, I couldn’t “flash” a new ROM to my phone, which would have completely removed the stock Android version and replaced it with a custom ROM of my choice. Instead, I had to install the SuperLite AOKP ROM side-by-side with the stock version. Once it was installed, I could choose which version of Android I wanted to boot into—basically the equivalent of partitioning your hard drive on a laptop or desktop.

The first step is to enable developer mode from the Android settings menu. Then, I downloaded and installed the file for a custom recovery system. In my case, I opted for Team Win Recovery Project (TWRP), one of the most popular recovery systems among Android modders. Once I had installed this on my phone (I just plugged my phone into my computer’s USB port and dragged the TWRP file to the SD card in my phone), I booted into the phone’s recovery mode and restarted my phone.

Next it was time to install the SuperLite AOKP ROM. After copying the SuperLite ROM to my phone’s SD card, I rebooted into TWRP. From the TWRP menu, I selected “Boot Options,” then “ROM-Slot-1,” and chose the option to create the new ROM slot. Once the slot was created, I went back to the main TWRP menu, selected “Install,” and then picked the zip file for the AOKP ROM, which installed it into the slot I had just created. When the install finished, I rebooted the phone into the custom AOKP ROM.
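The file-shuffling part of the steps above can be scripted; the ROM-slot creation and install themselves happen in TWRP’s on-screen menus. A sketch assuming `adb` is available and the phone is connected (the function name and zip filename are hypothetical):

```shell
# Hedged sketch: staging a ROM zip on the phone and rebooting into TWRP.
# The actual slot creation and install happen in TWRP's touch menus.

stage_rom() {
    rom_zip="$1"
    adb push "$rom_zip" /sdcard/  # copy the zip to the phone's SD card
    adb reboot recovery           # boot into TWRP to install it
}

# Usage (hypothetical file name):
#   stage_rom superlite-aokp.zip
```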

It’s worth mentioning here, I think, just how much of a pain in the ass this was for someone who was unfamiliar with the process of rooting phones. Although most of my problems ended up being because of my phone’s locked bootloader, it still took several nights of trial and error to figure out what was going wrong and how to fix it. Ultimately, my difficulties with flashing various ROMs would delay the start of the experiment by several days.

So what was life like using a bare-bones AOKP version of Android without Google? Overall, I didn’t notice much of a difference. I could still link my Protonmail to my phone as well as my cloud storage through NextCloud. I side-loaded Spotify and Lyft by downloading their APK files from the internet and moving them to my phone. (I later learned that Lyft uses Google Maps, and so I was limited to using Uber.) The only real difference was when it came to using maps, as I mentioned above.

POST MORTEM: 6 MONTHS LATER

It’s now been six months since I finished my experiment, which was plenty of time to see which Big Five services crept back into my life. I resumed using pretty much every Google product the day after the experiment ended. This was mostly due to the nature of my job, which depends on access to my company Gmail account and collaborative editing in Google Docs.

Yet even in my personal life I continue to use Google Maps, Google Drive, and Google Search, although I try to limit my personal searches to DuckDuckGo as often as possible.

In June I also upgraded my phone to a Samsung Galaxy S7, which is currently running the latest version of Android.

A few months after the experiment ended, I swapped out my crappy laptop at work for a homebrew PC. If there was ever a time to fully make the transition to Linux, this was it, and yet I still found myself paying for Windows 10 and partitioning my drive so I could have access to each OS. Old habits die hard, but I now use the terminal in Windows quite regularly whereas before I didn’t use it at all.

Although I still use Amazon on occasion, I have ended my Prime subscription and make a point of shopping local or buying from alternative websites whenever possible. So far, this change hasn’t made any noticeable difference in my quality of life.

I still think Apple is a ripoff and Facebook continues to get pwned by lawmakers for its mishandling of user data and disinformation. After I left Facebook, however, I found that I liked being off of social media so much that I also deactivated my only other social media account—Twitter. I have often heard that leaving Twitter when you work in media is a recipe for career suicide. For journalists who depend on it as a tool, this may very well be true. In my case, however, I’ve found that now that I have excised social media from my life I am far less stressed and have a lot more free time. I read more books and devote more time to my actual hobbies rather than scrolling endlessly through timelines.

It’s hard to say whether this experiment could scale to the point of becoming a sustainable way of existing. It was a success insofar as it is definitely possible to use open source replacements for pretty much every major service offered by the Big Five. It was a failure in that it was slightly less convenient and often resulted in a burden on others who were still using the Big Five services, such as my editors.

There was also something of a social burden, too, since I wasn’t able to use most major messaging apps. This was mostly a problem when it came to WhatsApp, which I use for international communication. Within the US, however, relying solely on SMS wasn’t an issue. Although it seemed like leaving Facebook would put a dent in my social life, this remained pretty much the same.

Finally, the experiment failed in the sense that I had to make compromises during the experiment, such as visiting websites hosted on Amazon Web Services or using an AOKP version of Android instead of Sailfish.

I’m certainly not the first person to forsake the Big Five and I’m sure I won’t be the last. There are dedicated communities of people who are determined not to use Google at any cost, but they remain the “preppers” of online life. This raises a disturbing question: Is a widespread migration to alternative services possible or, for that matter, even desirable?

It is certainly possible in principle, but a lot would have to change before the mass adoption of alternative services became realistic. Society would have to create the infrastructure for a more sustainable open source ecosystem. As Nadia Eghbal details in the report Roads and Bridges, free and open source software is built on the back of unseen and often unpaid labor. Some of the most popular open source projects in the world are developed and maintained by a few dedicated individuals. If we really care about their projects, we need to find a better way to support their work, other than relying on their goodwill. No one is really incentivized to keep these projects afloat, even if they’re found at the core of many Big Five services.

Whether ditching Big Five services is desirable is a much more difficult question to answer. There is no question that each of the Big Five companies has built incredibly valuable tools that have fundamentally changed the world. The reason most of us would be reluctant to abandon these tools is because they are usually free, useful, and convenient. Yet we are quickly learning the hidden costs of this digital convenience.

Since starting this experiment, #Deletefacebook has grown from a small protest to a sustained and widespread boycott. Google is now facing scrutiny from US and European regulators for mishandling data and monopolization, as well as its work on a censored search engine for China. Amazon continues to be criticized for its treatment of employees, reliance on government tax breaks and handouts, and willingness to sell surveillance tools to law enforcement agencies. Apple is in the middle of a US Supreme Court case about whether it used unlawful business tactics to monopolize its app store.

The social value of the tools developed by the Big Five is what we make of them—they are neither good nor evil by default. As DuckDuckGo demonstrated, it’s possible to create a great search engine that is still supported by ads, but doesn’t harvest user data. Linux has shown that it’s possible to make an incredibly robust operating system by drawing on the talents of thousands of developers. Android hackers have illustrated no lack of creativity when it comes to pushing the boundaries of what is possible with mobile operating systems, only to be thwarted by Google’s insistence on total control.

Perhaps our lawmakers will be able to rein in the worst inclinations of the titans of Silicon Valley. Or maybe people will get so fed up with the overreach of the Big Five that they will seek alternative services on their own, which seems far more unrealistic to me, given the general lack of understanding about how these companies operate and why it matters.

Nevertheless, I think it is a highly instructive experience to try to see how many Big Five services you can cut from your life, even if it’s just for a few days. Not only will you learn a lot about how servers, personal computers, and mobile phones work, but you might find some open source replacements better than what you were using before.

The important thing is to realize that none of these services are necessary. We may have come to develop a deep reliance on them, but that’s not the same thing. Being an “Apple person” or a “Windows person” is a marketing gimmick, not a personality trait. Amazon is just a version of Walmart that collaborates with cops. Your community existed before Facebook. Google wasn’t always a verb. We have the ability to change these companies by the way we interact with them—but only if we want to.

Former Facebook exec says social media is ripping apart society


‘No civil discourse, no cooperation; misinformation, mistruth.’

Chamath Palihapitiya speaks at a Vanity Fair event in October 2016.

Another former Facebook executive has spoken out about the harm the social network is doing to civil society around the world. Chamath Palihapitiya, who joined Facebook in 2007 and became its vice president for user growth, said he feels “tremendous guilt” about the company he helped make. “I think we have created tools that are ripping apart the social fabric of how society works,” he told an audience at Stanford Graduate School of Business, before recommending people take a “hard break” from social media.

Palihapitiya’s criticisms were aimed not only at Facebook, but the wider online ecosystem. “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” he said, referring to online interactions driven by “hearts, likes, thumbs-up.” “No civil discourse, no cooperation; misinformation, mistruth. And it’s not an American problem — this is not about Russian ads. This is a global problem.”

He went on to describe an incident in India where hoax messages about kidnappings shared on WhatsApp led to the lynching of seven innocent people. “That’s what we’re dealing with,” said Palihapitiya. “And imagine taking that to the extreme, where bad actors can now manipulate large swathes of people to do anything you want. It’s just a really, really bad state of affairs.” He said he tries to use Facebook as little as possible, and that his children “aren’t allowed to use that shit.” He later added, though, that he believes the company “overwhelmingly does good in the world.”

Palihapitiya’s remarks follow similar statements of contrition from others who helped build Facebook into the powerful corporation it is today. In November, early investor Sean Parker said he has become a “conscientious objector” to social media, and that Facebook and others had succeeded by “exploiting a vulnerability in human psychology.” A former product manager at the company, Antonio Garcia-Martinez, has said Facebook lies about its ability to influence individuals based on the data it collects on them, and wrote a book, Chaos Monkeys, about his work at the firm.

These former employees have all spoken out at a time when worry about Facebook’s power is reaching fever pitch. In the past year, concerns about the company’s role in the US election and its capacity to amplify fake news have grown, while other reports have focused on how the social media site has been implicated in atrocities like the “ethnic cleansing” of Myanmar’s Rohingya ethnic group.

In his talk, Palihapitiya criticized not only Facebook, but Silicon Valley’s entire system of venture capital funding. He said that investors pump money into “shitty, useless, idiotic companies,” rather than addressing real problems like climate change and disease. Palihapitiya currently runs his own VC firm, Social Capital, which focuses on funding companies in sectors like healthcare and education.

Palihapitiya also notes that although tech investors seem almighty, they’ve achieved their power more through luck than skill. “Everybody’s bullshitting,” he said. “If you’re in a seat, and you have good deal flow, and you have precious capital, and there’s a massive tailwind of technological change … Over time you get one of the 20 and you look like a genius. And nobody wants to admit that but that’s the fucking truth.”

Facebook Isn’t Sorry


On Monday morning Facebook revealed a new gadget — a voice-activated video chat tablet with an always-listening microphone and camera for your living room or kitchen that can detect when you are in your own house. This in-home panopticon is called Facebook Portal, and its debut comes at what might seem like an inopportune time for the company — days after a Gizmodo report revealed it was harvesting two-factor authentication numbers; less than 10 days after it revealed that an attack on its computer network had exposed the personal information of nearly 50 million users (and left 40 million more vulnerable); and barely six months after CEO Mark Zuckerberg appeared before Congress to explain how it let Cambridge Analytica acquire the private information of up to 87 million users without consent to be used for psychographic profiling.

To call Facebook’s newest home surveillance device ill-timed is generous. It’s like Trump announcing a new resort and casino in Moscow or BP announcing a fleet of Deepwater Horizon oil tankers. It’s a flagrant flex of Facebook’s market share muscle and yet another reminder that the company’s data collection ambitions supersede all else.

It’s also further confirmation that Facebook isn’t particularly sorry for its privacy failures — despite a recent apology tour that included an expensive “don’t worry, we got this” mini-documentary, full-page apology ads in major papers, and COO Sheryl Sandberg saying things like, “We have a responsibility to protect your information. If we can’t, we don’t deserve it.” Worse, it belies the idea that Facebook has any real desire to reckon with the structural issues that obviously undergird its continued privacy missteps.

But more troubling still is what a product like Portal says about us, Facebook’s users: We don’t care enough about our privacy to quit it.

Tone-deaf business decisions like Portal are nothing new for Facebook. Eleven years ago, before Facebook was even a full behemoth, it was rolling out invasive features only to issue awkward apologies. The company didn’t appear to have the foresight then, and it doesn’t appear to now.

Weeks after the Cambridge Analytica privacy scandal broke, Facebook announced at its annual conference that it would soon use its trove of user data to roll out a dating app to help pair users together in “long-term” romantic relationships. Later in the year, while Zuckerberg told Congress “I promise to do better for you” and pledged increased transparency in its handling of users’ data, the company admitted to secretly using a private tool to delete the old messages of its founder. This summer, just days after Zuckerberg assured “we have a responsibility to protect people,” reports surfaced that Facebook asked US banks for granular customer financial data (including card transactions and checking account balances) to use for a banking feature. Even the company’s good faith attempts to secure its platform feel ham-handed and oblivious, like last November when Facebook asked users in Australia to upload their nude photos to Facebook for employee review to combat revenge porn.

To observers, these might seem like easily avoidable errors, but to Facebook, whose very identity and foundational mandate is the instinctual drive to amass personal data, they make perfect sense.

Facebook’s unquenchable thirst for personal information is often interpreted as sinister or malicious in nature — a frame that feels a bit too convenient. Facebook is quite obviously interested in profit and power, but its problems seem to stem less from some inherent evil than a broader, foundational failure to see itself outside of this data-gathering, world-connecting prism.

Facebook is a company founded on the principle of collecting data, and virtually every part of its two core missions (“to bring the world closer together” and to deliver profit to shareholders) requires amassing more data and finding creative new ways to parse and connect it. Almost every part of Facebook — from Messenger to News Feed advertisements — improves with every new morsel of personal information collected. For this reason, many of Facebook’s biggest problems are technological problems of scale — of amassing and processing so much data — and yet Facebook argues that amassing more data is the way to improve every experience, which includes fixing its myriad problems. Advertisements intrusive and clumsy? Collect more and more precise information with which to make them more relevant! Too much algorithmically tailored, low-quality content in News Feed? Ask people to rate and rank it! Collect more data! Feed it to the algorithms! Then collect even more data and use the algorithms to police it.

Facebook has seen enormous success with this strategy. Despite all of the bad press and fallout (which includes everything from disrupting the media business to election interference to ethnic cleansing in places like Myanmar), the company is vast, powerful, and profitable. You know what happened after the Cambridge Analytica scandal? After its first president, Sean Parker, expressed regret over its ruthless monetization of attention? After legislators trotted out examples of election interference in front of executives? Facebook reported earnings and monthly average users that exceeded expectations. The stock spiked.

For Facebook employees, there’s often a cognitive dissonance between their work and how they see it described beyond company walls. “If you could see what I see, a lot of this would make more sense,” one current employee told me in October of 2017. Only recently does that answer really begin to make sense: It’s about the data.

A former senior employee described this as part of the “deeply rational engineer’s view” that guides Facebook’s decisions. “They believe that to the extent that something flourishes or goes viral on Facebook — it’s not a reflection of the company’s role, but a reflection of what people want,” they said. Data informs how decisions get made; it also conveniently absolves Facebook of blame.

It is the crystal ball that allows the company to see ahead and do things that might feel reckless to us mere mortals (privacy advocates, the media, regular users). This is why Facebook might feel confident rolling out an always-listening home camera a few weeks after a report revealing the company harvested two-factor authentication phone numbers to target users for advertising purposes. And it might be one reason — perhaps among many — that the founders of both WhatsApp and Instagram have left the company in recent months.

Facebook is intimidatingly large and deeply woven into our cultural fabric, largely because we have allowed it to become so, and we can’t consider a world without Facebook in it. It’s not that we aren’t worried about politics becoming a Facebook data acquisition and targeting game, or outsourcing the public square to a private technology company. It’s that it’s so mind-numbingly hard to imagine how to actually loosen the company’s grip on our discourse, ad ecosystem, and our personal information that we often focus only on superficial or temporary ways to relieve it.

And that’s a great substrate for apathy. We’ve already given it so much, why stop now? No one else is going to delete Facebook, so why should I? Facebook understands this — the data tells them so. It also tells them that slickly produced videos and contrite congressional testimony are small ways to ameliorate lingering public concern.

But the real truth lies in the company’s innovations and ambitions, products like Portal. Facebook doesn’t really care. And maybe we don’t either.

Study suggests a direct link between screen time and ADHD in teens


Image: Study suggests a direct link between screen time and ADHD in teens

Adding to the list of health concerns associated with excessive screen time, one study suggests that there could be a link between the length of time teenagers spend online and attention deficit hyperactivity disorder (ADHD).

The two-year study, which was published in the Journal of the American Medical Association (JAMA), observed more than 2,500 high school students from Los Angeles.

Digital media and the attention span of teenagers

A team of researchers analyzed data from the teenagers and found that their attention spans grew shorter the more digital media platforms they engaged with over the course of the experiment.

The JAMA study observed adolescents aged 15 and 16 years periodically for two years. The researchers asked the teenagers about the frequency of their online activities and if they had experienced any of the known symptoms of ADHD.

As the teenagers’ digital engagement rose, their reported ADHD symptoms also went up by 10 percent. The researchers noted that based on the results of the study, even if digital media usage does not definitively cause ADHD, it could cause symptoms that would result in the diagnosis of ADHD or require pharmaceutical treatment.

Experts believe that ADHD begins in the early stages of childhood development. However, the exact circumstances, regardless if they are biological or environmental, have yet to be determined.

Adam Leventhal, a University of Southern California psychologist and senior author of the study, shared that the research team is now analyzing the occurrence of new symptoms that were not present when the study began.


Other studies about digital engagement have implied that there is an inverse relationship with happiness. The less people used digital media, the more they reported feeling an overall sense of happiness. (Related: The social media paradox: Teens who are always online feel more lonely.)

The researchers concluded that the teenagers might have exhibited ADHD symptoms from the outset due to other factors. However, it is possible that excessive digital media usage can still aggravate these symptoms.

Fast facts about ADHD

ADHD is a neurodevelopmental disorder that is commonly diagnosed in children, though it can also be diagnosed in older individuals. ADHD can be difficult to diagnose: since several of its symptoms resemble normal childhood behaviors, the disorder itself can be hard to detect.

The symptoms of ADHD may include forgetting completed tasks, having difficulty sitting still, having difficulty staying organized, and having trouble concentrating or focusing.

  • Men are at least three times more likely to be diagnosed with ADHD than women.
  • During their lifetimes, at least 13 percent of men will be diagnosed with ADHD, as opposed to only 4.2 percent of women.
  • The average age of ADHD diagnosis is seven years old.
  • The symptoms of the condition will usually manifest when a child is aged three to six years old.
  • ADHD is not solely a childhood disorder. At least four percent of American adults older than 18 may have ADHD.

This disorder does not increase an individual’s risk for other conditions or diseases. However, some people with ADHD, mostly children, have a higher chance of experiencing different coexisting conditions. These can make social situations, like school, more difficult for kids with ADHD.

Some coexisting conditions of ADHD may include:

  • Anxiety disorder
  • Bed-wetting problems
  • Bipolar disorder
  • Conduct disorders and difficulties (e.g., antisocial behavior, fighting, and oppositional defiant disorder)
  • Depression
  • Learning disabilities
  • Sleep disorders
  • Substance abuse
  • Tourette syndrome

Minimize your child’s ADHD risk by reading more articles with tips on how to manage their internet use at Addiction.news.

Sources include:

Lifezette.com

Healthline.com

World Leaders Have Decided: The Next Step in AI is Augmenting Humans


Think that human augmentation is still decades away? Think again.

This week, government leaders met with experts and innovators ahead of the World Government Summit in Dubai. Their goal? To determine the future of artificial intelligence.

It was an event that attracted some of the biggest names in AI. Representatives from IEEE, OECD, the U.N., and AAAI. Managers from IBM Watson, Microsoft, Facebook, OpenAI, Nest, Drive.ai, and Amazon AI. Governing officials from Italy, France, Estonia, Canada, Russia, Singapore, Australia, the UAE. The list goes on and on.


Futurism got exclusive access to the closed-door roundtable, which was organized by the AI Initiative from the Future Society at the Harvard Kennedy School of Government and H.E. Omar bin Sultan Al Olama, the UAE’s Minister of State for Artificial Intelligence.

The whirlwind conversation covered everything from how long it will take to develop a sentient AI to how algorithms invade our privacy. During one of the most intriguing parts of the roundtable, the attendees discussed the most immediate way artificial intelligence should be utilized to benefit humanity.

The group’s answer? Augmenting humans.

Already Augmented

At first, it may sound like a bold claim; however, we have long been using AI to enhance our activity and augment our work. Don’t believe me? Take out your phone. Head to Facebook or any other social media platform. There, you will see AI hard at work, sorting images and news items and ads and bringing you all the things that you want to see the most. When you type entries into search engines, things operate in much the same manner—an AI looks at your words and brings you what you’re looking for.

And of course, AI’s reach extends far beyond the digital world.

Take, for example, the legal technology company LawGeex, which uses AI algorithms to automatically review contracts. Automating paper-pushing has certainly saved clients money, but the real benefit for many attorneys is saving time. Indeed, as one participant in the session noted, “No one went to law school to cut and paste parts of a regulatory document.”

Similarly, AI is quickly becoming an invaluable resource in medicine, whether it is helping with administrative tasks and the drudgery of documentation or assisting with treatments or even surgical procedures. The FDA even recently approved an algorithm for predicting death.

These are all examples of how AIs are already being used to augment our knowledge and our ability to seek and find answers—of how they are transforming how we work and live our best lives.

Time to Accelerate

When we think about AI augmenting humans, we frequently think big, our minds leaping straight to those classic sci-fi scenarios. We think of brain implants that take humans to the next phase of evolution or wearable earpieces that translate language in real time. But in our excitement and eagerness to explore the potential of new technology, we often don’t stop to consider the somewhat meandering, winding path that will ultimately get us there—the path that we’re already on.

While it’s fun to consider all of the fanciful things that advanced AI systems could allow us to do, we can’t ignore the very real value in the seeming mundane systems of the present. These systems, if fully realized, could free us from hours of drudgery and allow us to truly spend our time on tasks we deem worthwhile.

Imagine no lines at the DMV. Imagine filing your taxes in seconds. This vision is possible, and in the coming months and years, the world’s leaders are planning to nudge us down that road ever faster. Throughout the discussions in Dubai, panelists explored the next steps governments need to take in order to accelerate our progress down this path.

The panel noted that, before governments can start augmenting human life—whether it be with smart contact lenses to monitor glucose levels or turning government receptionists into AI—world leaders will need to get a sense of their nation’s current standing. “The main thing governments need to do first is understand where they are on this journey,” one panelist noted.

In the weeks and months to come, nations around the globe will likely be urged to do just that. Once nations understand where they are along the path, ideally, they will share their findings in order to assist those who are behind them and learn from those who are ahead. With a better roadmap in hand, nations will be ready to hit the road — and the gas.

Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking Machine


Four companies dominate our daily lives unlike any other in human history: Amazon, Apple, Facebook, and Google. We love our nifty phones and just-a-click-away services, but these behemoths enjoy unfettered economic domination and hoard riches on a scale not seen since the monopolies of the gilded age. The only logical conclusion? We must bust up big tech.

I’ve benefited enormously from big tech. Prophet, the consulting firm I cofounded in 1992, helped companies navigate a new landscape being reshaped by Google. Red Envelope, the upscale e-commerce company I cofounded in 1997, never would have made it out of the crib if Amazon hadn’t ignited the market’s interest in e-commerce. More recently, L2, which I founded in 2010, was born from the mobile and social waves as companies needed a way to benchmark their performance on new platforms.

Over the past decade, Amazon, Apple, Facebook, and Google—or, as I call them, “the Four”—have aggregated more economic value and influence than nearly any other commercial entity in history. Together, they have a market capitalization of $2.8 trillion (the GDP of France), a staggering 24 percent share of the S&P 500 Top 50, close to the value of every stock traded on the Nasdaq in 2001.

How big are they? Consider that Amazon, with a market cap of $591 billion, is worth more to the stock market than Walmart, Costco, T. J. Maxx, Target, Ross, Best Buy, Ulta, Kohl’s, Nordstrom, Macy’s, Bed Bath & Beyond, Saks/Lord & Taylor, Dillard’s, JCPenney, and Sears combined.


Meanwhile, Facebook and Google (now known as Alphabet) are together worth $1.3 trillion. You could merge the world’s top five advertising agencies (WPP, Omnicom, Publicis, IPG, and Dentsu) with five major media companies (Disney, Time Warner, 21st Century Fox, CBS, and Viacom) and still need to add five major communications companies (AT&T, Verizon, Comcast, Charter, and Dish) to get only 90 percent of what Google and Facebook are worth together.

And what of Apple? With a market cap of nearly $900 billion, Apple is the most valuable public company. Even more remarkable is that the company registers profit margins of 32 percent, closer to luxury brands Hermès (35 percent) and Ferrari (29 percent) than peers in electronics. In 2016, Apple brought in $46 billion in profits, a haul larger than that of any other American company, including JPMorgan Chase, Johnson & Johnson, and Wells Fargo. What’s more, Apple’s profits were greater than the revenues of either Coca-Cola or Facebook. This quarter, it will clock nearly twice the profits that Amazon has produced in its history.

The Four’s wealth and influence are staggering. How did we get here?

As I wrote in my book, The Four, the only way to build a company with the dominance and mass influence of Google, Amazon, Facebook, and Apple is to appeal to a core human organ that makes adoption of the platform instinctive.

GOOGLE: MIND-ALTERING

Our brains are sophisticated enough to ask very complex questions but not sophisticated enough to answer them. Since Homo sapiens emerged from caves, we’ve relied on prayer to address that gap: We lift our gaze to the heavens, send up a question, and wait for a response from a more intelligent being. “Will my kid be all right?” “Who might attack us?”


As Western nations become wealthier, organized religion plays a smaller role in our lives. But the void between questions and answers remains, creating an opportunity. As more and more people become alienated from traditional religion, we look to Google as our immediate, all-knowing oracle of answers from trivial to profound. Google is our modern-day god. Google appeals to the brain, offering knowledge to everyone, regardless of background or education level. If you have a smartphone or an Internet connection, your prayers will always be answered: “Will my kid be all right?” “Symptoms and treatment of croup. . .” “Who might attack us?” “Nations with active nuclear-weapons programs . . .”

Think back on every fear, every hope, every desire you’ve confessed to Google’s search box and then ask yourself: Is there any entity you’ve trusted more with your secrets? Does anybody know you better than Google?

FACEBOOK: THE HEART OF THE MATTER

Facebook appeals to the heart. Feeling loved is the key to well-being. Studies of kids in Romanian orphanages who had stunted physical and mental development found that the delay was due not to poor nutrition, as suspected, but to lack of human affection. Yet one of the traits of our species is that we need to love nearly as much as we need to be loved. Susan Pinker, a developmental psychologist, studied the Italian island of Sardinia, where centenarians are six times as common as they are on mainland Italy and ten times as common as in North America. Pinker discovered that among genetic and lifestyle factors, the Sardinians’ emphasis on close personal relationships and face-to-face interactions is the key to their superlongevity. Other studies have also found that the deciding factor in longevity isn’t genetics but lifestyle, especially the strength of our social bonds.

Facebook gives its 2.1 billion monthly active users tools to fuel our need to love others. It’s satisfying to rediscover someone we went to high school with. It’s good to know we can keep in touch with friends who move away. It takes minutes, with a “like” on a baby pic or a brief comment on a friend’s heartfelt post, to reinforce friendships and family relationships that are important to us.

AMAZON: ALWAYS CONSUMING

What sight is to the eyes and sound is to the ears, the feeling of more, of insatiety, is to the gut. We crave more stuff psychologically just as the stomach craves more sugar, more carbs, after an indulgent meal. Originally this instinct operated in the service of self-preservation: Having too little meant starvation and certain death, whereas too much was rare, a bloat or a hangover. But open your closets or your cupboards right now, and you’ll probably find you have ten to a hundred times as much as you need. Rationally, we know this makes no sense, but society and our higher brain haven’t caught up to the instinct of always feeling like we need more.

Amazon is the large intestine of the consumptive self. It stores nutrients and distributes them to the cardiovascular system of the 64 percent of American households who are Prime members. It has adopted the best strategy in the history of business—“more for less”—and deployed it more effectively and efficiently than any other firm in history.

APPLE: SET TO VIBRATE

The second-most-powerful instinct after survival is procreation. As sexual creatures, we want to signal how elegant, smart, and creative we are. We want to signal power. Sex is irrational, luxury is irrational, and Apple learned very early on that it could appeal to our need to be desirable—and in turn increase its profit margins—by placing print ads in Vogue, having supermodels at product launches, and building physical stores as glass temples to the brand.

A Dell computer may be powerful and fast, but it doesn’t indicate membership in the innovation class as a MacBook Air does. Likewise, the iPhone is something more than a phone, or even a smartphone. Consumers aren’t paying $1,000 for an iPhone X because they’re passionate about facial recognition. They’re signaling they make a good living, appreciate the arts, and have disposable income. It’s a sign to others: If you mate with me, your kids are more likely to survive than if you mate with someone carrying an Android phone. After all, iPhone users on average earn 40 percent more than Android users. Mating with someone who is on the iOS platform is a shorter path to a better life. The brain, the heart, the large intestine, and the groin: By appealing to these four organs, the Four have entrenched their services, products, and operating systems deeply into our psyches. They’ve made us more discerning, more demanding consumers. And what’s good for the consumer is good for society, right?


Well, yes and no. The Four have so much power over our lives that most of us would be rocked to the core if one or more of them were to disappear. Imagine not being able to have an iPhone, or having to use Yahoo or Bing for search, or losing years’ worth of memories you’ve posted on Facebook. What if you could no longer order something with one click on the Amazon app and have it arrive tomorrow?

At the same time, we’ve handed over so much of our lives to a few Silicon Valley executives that we’ve started talking about the downsides of these firms. As the Four have become increasingly dominant, a murmur of concern—and even resentment—has begun to make itself heard. After years of hype, we’ve finally begun to consider the suggestion that the government, or someone, ought to put the brakes on.

Not all of the arguments are equally persuasive, but they’re worth restating before we get to the real reason I believe we ought to break up big tech.


Big tech learned from the sins of the original gangster, Microsoft. The colossus at times appeared to feel it was above trafficking in PR campaigns and lobbyists to soften its image among the public and regulators. In contrast, the Four promote an image of youth and idealism, coupled with evangelizing the world-saving potential of technology.

The sentiment is sincere, but mostly canny. By appealing to something loftier than mere profit, the Four are able to satisfy a growing demand among employees for so-called purpose-driven firms. Big tech’s tinkerer-in-the-garage mythology taps into an old American reverence for science and engineering, one that dates back to the Manhattan Project and the Apollo program. Best of all, the companies’ vague, high-minded pronouncements—“Think Different,” “Don’t Be Evil”—provide the ultimate illusion. Political progressives are generally viewed as well-meaning but weak, an image that offered the perfect cover for companies that were becoming hugely powerful.

Facebook’s Sheryl Sandberg told women to “lean in” because she meant it, but she also had to register the irony of her message of female empowerment coming from a company that emerged from a site originally designed to rank the attractiveness of Harvard undergraduates, and one that is destroying tens of thousands of jobs in media and communications, an industry that employs a relatively high share of women.

These public-relations efforts paid off handsomely but also set the companies up for a major fall. It’s an enormous letdown to discover that the guy who seems like the perfect gentleman is in fact addicted to opioids and a jerk to his mother. It’s even worse to learn that he only hung out with you because of your money (clicks).

In my experience as the founder of several early Internet firms, the people who work for the Four are no more or less evil than people at other successful companies. They’re a bit more educated, a little smarter, and much luckier, but like their parents before them, most are just trying to find their way and make a living. Sure, many of them would be happy to help out humanity. But presented with the choice between the betterment of society or a Tesla, most would opt for the Tesla—and the Tesla dealerships in Palo Alto are doing well, really well. Does this make them evil? Of course not. It simply makes them employees at a for-profit firm operating in a capitalist society.

Our government operates on an annual budget of approximately 21 percent of GDP, money that is used to keep our parks open and our military armed. Does big tech pay its fair share? Most would say no. Between 2007 and 2015, Amazon paid only 13 percent of its profits in taxes, Apple paid 17 percent, Google paid 16 percent, and Facebook paid just 4 percent. In contrast, the average tax rate for the S&P 500 was 27 percent.


So, yes, the Four do avoid taxes . . . and so do you. They’re just better at it. Apple, for example, uses an accounting trick to move its profits to domains such as Ireland, which results in lower taxes for the most profitable firm in the world. As of September 2017, the company was holding $250 billion overseas, a hoard that is barely taxed and should never have been abroad in the first place. That means a U. S. company is holding enough cash overseas to buy Disney and Netflix.

Apple is hardly alone. General Electric also engages in massive tax avoidance, but we’re not as angry about it, as we aren’t in love with GE. The fault here lies with us, and with our democratically elected government. We need to simplify the tax code—complex rules tend to favor those who can afford to take advantage of them—and we need to elect officials who will enforce it.


The destruction of jobs by the Four is significant, even frightening. Facebook and Google likely added $29 billion in revenue in 2017. To execute and service this additional business, they will create twenty thousand new, high-paying jobs.

The other side of the coin is less shiny. Advertising—whether digital or analog—is a low-growth (increasingly flat) business, meaning that the sector is largely zero-sum. Google doesn’t earn an extra dollar by growing the market; it takes a dollar from another firm. If we use the five largest media-services firms (WPP, Omnicom, Publicis, IPG, and Dentsu) as a proxy for their industry, we can estimate that $29 billion in revenue would have required about 219,000 traditional advertising professionals to service. That translates to 199,000 creative directors, copywriters, and agency executives deciding to “spend more time with their families” each year—nearly four Yankee Stadiums filled with people dressed in black holding pink slips.
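The job math in that paragraph is simple subtraction; here is a back-of-the-envelope sketch using only the figures quoted above (the 219,000 headcount estimate is the article's, derived from the revenue-per-head of the five agency holding companies):

```python
# Back-of-the-envelope job displacement, using the article's figures.
new_ad_revenue = 29e9            # $29B of new Google/Facebook revenue in 2017
traditional_headcount = 219_000  # ad professionals needed to service that revenue
big_tech_new_jobs = 20_000       # new high-paying jobs at Google and Facebook

displaced = traditional_headcount - big_tech_new_jobs
print(f"Implied annual job displacement: {displaced:,}")  # 199,000

# Implied revenue serviced per traditional ad professional:
print(f"Revenue per traditional professional: ${new_ad_revenue / traditional_headcount:,.0f}")
```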

The economic success stories of yesterday employed many more people than the firms that dominate the headlines today. Procter & Gamble, after a run-up in its stock price in 2017, has a market capitalization of $233 billion and employs ninety-five thousand people, or $2.4 million per employee. Intel, a new-economy firm that could be more efficient with its capital, enjoys a market cap of $209 billion and employs 102,000 people, or $2.1 million per employee. Meanwhile, Facebook, which was founded fourteen years ago, boasts a $542 billion market cap and employs only twenty-three thousand people, or $23.4 million per employee—ten times that of P&G and Intel.
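The per-employee comparison is a single division; a minimal sketch with the article's rounded inputs (so the decimals differ slightly from the more precise figures in the text):

```python
# Market capitalization per employee, from the figures quoted above.
# (cap in billions of dollars, headcount in thousands of employees)
firms = {
    "P&G":      (233, 95),
    "Intel":    (209, 102),
    "Facebook": (542, 23),
}

for name, (cap_billions, headcount_thousands) in firms.items():
    # $B divided by thousands of people = millions of dollars per person
    per_employee_millions = cap_billions / headcount_thousands
    print(f"{name}: ${per_employee_millions:.1f}M of market cap per employee")
```

With these rounded inputs Facebook works out to roughly $23.6M per employee, about ten times the P&G and Intel figures, matching the article's claim.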


Granted, we’ve seen job destruction before. But we’ve never seen companies quite this good at it. Uber set a new (low) bar with $68 billion spread across only twelve thousand employees, or $5.7 million per employee. It’s hardly obvious that a ride-share company—which requires actual drivers on the actual roads—would be the one to arbitrage the middle class with a Houdini move that would have Henry Ford spinning in his grave.

But Uber managed it by creating a two-class workforce, complete with a new classification: “driver-partners,” in other words, contractors. Keeping them off the payroll means that Uber’s investors and twelve thousand white-collar employees do not share any of the company’s $68 billion in equity with its “partners.” In addition, the firm is not inconvenienced with paying health or unemployment insurance and paid time off for any of its two-million-strong driver workforce.

Big tech’s job destruction makes an even stronger case for getting these firms to pay their fair share of taxes, so that the government can soften the blow with retraining and social services. We should be careful, however, not to let job destruction be the lone catalyst for intervention. Job replacement and productivity improvements—from farmers to factory workers, and factory workers to service workers, and service workers to tech workers—are part of the story of American innovation. It’s important to let our freaks of success fly their flag.


Getting warmer. Having your firm weaponized by foreign adversaries to undermine our democratic election process is bad . . . really bad. During the 2016 election, Russian troll pages on Facebook paid to promote approximately three thousand political ads. Fabricated content reached 126 million users. It doesn’t stop there—the GRU, the Russian military-intelligence agency, has lately taken a more bipartisan approach to sowing chaos. Even after the election, the GRU has used Facebook, Google, and Twitter to foment racially motivated violence. The platforms invested little or no money or effort to prevent it. The GRU purchased Facebook ads in rubles: literally and figuratively a red flag.


If you’re a country club with a beach or a pool, it’s more profitable, in the short run, not to have lifeguards. There are risks to that business model, as there are to Facebook’s dependence on mainly algorithmic moderation, but it saves a lot of money. The notion that we can expect big tech to allocate the requisite resources, of the companies’ own will, for the social good is similar to the idea that Exxon will take a leadership position on global warming. It’s not going to happen.

However, the alarm for trust busting, not just regulation, rang for me in November, when Senate Intelligence Committee chairman Richard Burr pleaded with the general counsels of Facebook, Google, and Twitter, “Don’t let nation-states disrupt our future. You’re the front line of defense for it.” This represented a seminal moment in our history, when our elected officials handed over our national defense to firms whose business model is to nag you about the shoes you almost bought, and remind you of your friends’ birthdays.

They should be our front line against our enemies?

Let’s be clear, our front line of defense has been, and must continue to be, the Army, Navy, Air Force, and Marines. Not the Zuck.


It’s not just federal officials who have folded in the face of big tech. As part of their bid for Amazon’s second headquarters, state and city officials in Chicago proposed to let Amazon keep $1.3 billion in employee payroll taxes and spend that money as the company sees fit. That’s right: Chicago offered to transfer its tax authority to Amazon, trusting the Seattle firm to allocate those taxes in a manner best for Chicago’s residents.

 

The surrender of our government only gets worse from there. If you want to manufacture and sell a Popsicle to children, you must undergo numerous expensive FDA tests and provide thorough labeling that outlines the ingredients, calories, and sugar content of the treat. But what warning labels are included in Instagram’s user agreement? We’ve now seen abundant research indicating that social-media platforms are making teens more depressed. Ask yourself: If ice cream were making teens more prone to suicide, would we shrug and seat the CEO of Dreyer’s next to the president at dinners in Silicon Valley?

Anyone who doesn’t believe these products are the delivery systems for tobacco-like addiction has never separated a seven-year-old from an iPad in exchange for a look that communicates a plot to kill you. If you don’t believe in the addictive aspects of these platforms, ask yourself why American teenagers are spending an average of five hours a day glued to their Internet-connected screens. The variable rewards of social media keep us checking our notifications as though they were slot machines, and research has shown that children and teens are particularly sensitive to the dopamine cravings these platforms foster. It’s no accident that many tech companies’ execs are on the record saying they don’t give their kids access to these devices.

All of these are valid concerns. But none of them alone, or together, is enough to justify breaking up big tech. The following are reasons I believe the Four should be broken up.

The Purpose of an Economy

Ganesh Sitaraman, a professor at Vanderbilt Law School, argues that the U. S. needs the middle class: the Constitution was designed around a balanced distribution of wealth, without which our representative democracy cannot work. If the rich have too much power, it can lead to an oligarchy. If the poor have too much power, it can lead to a revolution. So the middle class needs to be the rudder that steers American democracy on an even keel.

I believe that the primary purpose of the economy, and one of its key agents, the firm, is to create and sustain the middle class. The U. S. middle class from 1941 to 2000 was one of the most ferocious sources of good in world history. The American middle class financed, fought, and won good wars; took care of the aged; funded a cure for polio; put men on the moon; and showed the rest of the world that self-interest, and the consumption and innovation it inspired, could be an engine for social and economic transformation.

The upward spiral of an economy depends on the circular flow between households and companies. Households offer resources and labor, and companies offer goods and jobs. Competition motivates the invention and distribution of better offerings (happy hour, rear-view camera, etc.), and the big wheel spins round and round. Big tech creates enormous stakeholder value. So why are we witnessing, for the first time in decades, other countries grow their middle class while ours is declining? If an economy is meant to sustain a middle class, and the social stability it fosters, then our economy is failing.

Without a doubt, there have been tremendous gains in productivity in the U. S. over the past thirty years. It would be hard to deny that the American consumer, at every level, has become the envy of the free world. Yet the productivity boost and the elevation of the consumer to modern-day nobility have created a dystopia in which we’ve traded well-paying jobs and economic security for powerful phones and coconut water delivered in under an hour.


How did that happen? Since the turn of the millennium, firms and investors have fallen in love with companies whose ability to replace humans with technology has enabled rapid growth and outsize profit margins. Those huge profits attract cheap capital and render the rest of the sector flaccid. Old-economy firms and fledgling start-ups have no shot.

The result is a winner-takes-all economy, both for companies and for people. Society is bifurcating into those who are part of the innovation economy (lords) and those who aren’t (serfs). One great idea will make a twenty-something the darling of venture capital, while those who are average, or even just unlucky (most of us), have to work much harder to save for retirement.

It’s never been easier to be a billionaire or harder to be a millionaire. It’s painfully clear that the invisible hand, for the past three decades, has been screwing the middle class. For the first time since the Great Depression, a thirty-year-old is less well-off than his or her parents at thirty.

Should we care? What if these icons of innovation are the disrupters we need to keep our economy fit? Isn’t there a chance we’ll come through the other end of the tunnel with a stronger economy and higher wages? Already there’s evidence that this isn’t happening. In fact, the bifurcation effect seems to be gaining momentum. It’s likely the biggest threat to our society. Many will argue it’s the world we live in. But isn’t the world what we make of it? And we have consciously shifted the mission of the U. S. from producing millions of millionaires to producing one trillionaire. Alexa, is this a good thing?


Markets Are Failing, Everywhere

Right now we are in the midst of a dramatic market failure, one in which the government has been lulled by the public’s fascination with big tech. Robust markets are efficient and powerful, yet just as football games don’t work without referees who regularly step in, throw flags, and move one team backward or forward, unfettered capitalism gave us climate change, the mortgage crisis, and U. S. health care.

Monopolies themselves aren’t always illegal, or even undesirable. Natural monopolies exist where it makes sense to have one firm achieve the requisite scale to invest and offer services at a reasonable price. But the tradeoff is heavy regulation. Florida Power & Light serves ten million people; its parent company, NextEra Energy, has a market cap of $72 billion. However, pricing and service standards are regulated by people who are fiduciaries for the public.

The Four, by contrast, have managed to preserve their monopoly-like powers without heavy regulation. I describe their power as “monopoly-like,” since, with the possible exception of Apple, they have not used their power to do the one thing that most economists would describe as the whole point of assembling a monopoly, which is to raise prices for consumers.

Nevertheless, the Four’s exploitation of our knee-jerk antipathy to big government has been so effective that it’s led most of us to forget that competition—no less than private property, wage labor, voluntary exchange, and a price system—is one of the indispensable cylinders of the capitalist engine. Their massive size and unchecked power have throttled competitive markets and kept the economy from doing its job—namely, to promote a vibrant middle class.


Air Supply

How do they do it? It’s useful here to remember how Microsoft killed Netscape in the 1990s. The process starts innocently enough, as a firm builds an outstanding product (Windows) that becomes a portal to an entire sector—what we’d now call a platform. To sustain its growth, the company points the portal at its own products (Internet Explorer) and bullies its partners (Dell) to shut out the competition. Even though Netscape had the more popular browser, with over 90 percent market share, it couldn’t compete with Microsoft’s implicit subsidies for Internet Explorer.

It’s happening everywhere across the Four, whether it’s the slow takeover of the entire first page of search results that Google can better monetize, substandard products on your iPhone’s home screen (like Apple Music), coordinating all assets of the firm (Facebook) to arrest and destroy a threat (Snap), or information-age steel dumping via fulfillment build-out and predatory pricing no other firm can access the capital to match (Amazon).


(Un)Natural Monopolies

Maybe the consumer is better off with these “natural” monopolies. The Department of Justice didn’t think so. In 1998, the federal government filed suit against Microsoft, alleging anticompetitive practices. During the trial, one witness reported that Microsoft executives had said they wanted to “cut off Netscape’s air supply” by giving away Internet Explorer for free.

In November 1999, a district court found that Microsoft had violated antitrust laws and subsequently ordered the company to be broken into two. (One company would sell Windows; the other would sell applications for Windows.) The breakup order was overruled by an appeals court, and ultimately Microsoft agreed to a settlement with the government that sought to curb the company’s monopolistic practices by less stringent means.

The settlement was criticized by some for being too lenient, but it’s worth asking whether Google—today worth $770 billion and the object of affection for any free-market evangelist—would exist if the DOJ hadn’t put Microsoft on notice regarding the infanticide of promising upstarts. In the absence of the antitrust case, Microsoft likely would have leveraged its market dominance to favor Bing over Google, just as it had used Windows to euthanize Netscape.

Indeed, the DOJ’s case against Microsoft may have been one of the most market-oxygenating acts in business history, one that unleashed trillions of dollars in shareholder value. The concentration of power achieved by the Four has created a market desperate for oxygen. I’ve sat in dozens of VC pitches by small firms. The narrative has become universal and static: “We don’t compete directly with the Four but would be great acquisition candidates.” Companies thread this needle or are denied the requisite oxygen (capital) to survive infancy. IPOs and the number of VC-funded firms have been in steady decline over the past few years.

Unlike Microsoft, which was typecast early on as the “Evil Empire,” Google, Apple, Facebook, and Amazon have combined savvy public-relations efforts with sophisticated political lobbying operations—think Oprah Winfrey crossed with the Koch brothers—to make themselves nearly immune to the scrutiny endured by Microsoft.


The Four’s unchecked power manifests most often as a restraint of competition. Consider: Amazon has become such a dominant force that it’s now able to perform Jedi mind tricks and inflict pain on potential competitors before it enters the market. Consumer stocks used to trade on two key signals: the underlying performance of the firm (Pottery Barn’s sales per square foot are up 10 percent) and the economic macro-climate (more housing starts). Now, however, private and public investors have added a third key signal: what Amazon may or may not do in the respective sector. Some recent examples:

The day Amazon announced it would enter the dental-supply business, dental-supply companies’ stock fell 4 to 5 percent. When Amazon reported it would sell prescription drugs, pharmacy stocks fell 3 to 5 percent.

Within twenty-four hours of the Amazon–Whole Foods acquisition announcement, large national grocery stocks fell 5 to 9 percent.

When the subject of monopolistic behavior comes up, Amazon’s public-relations team is quick to cite its favorite number: 4 percent—the share of U. S. retail (online and offline) Amazon controls, only half of Walmart’s market share. It’s a powerful defense against the call to break up the behemoth. But there are other numbers. Numbers you typically won’t see in an Amazon press release:

• 34 percent: Amazon’s share of the worldwide cloud business

• 44 percent: Amazon’s share of U. S. online commerce

• 64 percent: U. S. households with Amazon Prime

• 71 percent: Amazon’s share of in-home voice devices

• $1.4 billion: Amount of U. S. corporate taxes paid by Amazon since 2008, versus $64 billion for Walmart. (Amazon has added the entire value of Walmart to its market cap in the past twenty-four months.)

What about Facebook? Eighty-five percent of the time we spend on our phones is spent using an app. Four of the top five apps globally—Facebook, Instagram, WhatsApp, and Messenger—are owned by Facebook. And the top four have allied, under the command of the Zuck, to kill the fifth—Snap Inc. What this means is that our phones are no longer communications vehicles; they’re delivery devices for Facebook, Inc.

Facebook even has an internal database that tells it when a competitive app is gaining traction with its users, so that the social network can either acquire the firm (as it did with Instagram and WhatsApp) or kill it by mimicking its features (as it’s trying to do with Stories and Bonfire, which are aimed at Snapchat and Houseparty).

Google, for its part, now commands a 92 percent share of a market, Internet search, that is worth $92.4 billion worldwide. That’s more than the entire advertising market of any country except the U. S. Search is now a larger market than the following global industries:

paper and forest products: $81 billion

construction and engineering: $79 billion

real estate management and development: $76 billion

gas utilities: $58 billion

How would we feel if one company controlled 92 percent of the global construction and engineering trade? Or 92 percent of the world’s paper and forest products? Would we worry that their power and influence had breached a reasonable threshold, or would we just think they were awesome innovators, as we do with Google?

And then there’s Apple, the most successful firm selling a low-cost product at a premium price. The total material cost for the iPhone 8 Plus is $288, a fraction of the $799 price tag.

Put another way, Apple has the profit margin of Ferrari with the production volume of Toyota. Apple’s users are among the most loyal, too. It has a 92 percent retention rate among consumers, compared with just 77 percent for Samsung users. In February 2017, 79 percent of all active iOS users had updated to the most recent software, versus just 1.2 percent of all active Android devices.
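The hardware margin implied by the component-cost and price figures above is easy to verify; this is an illustrative calculation only, since real gross margin also reflects assembly, software, and service costs the article doesn't break out:

```python
# Implied iPhone 8 Plus hardware margin from the article's figures.
bill_of_materials = 288   # component cost, in dollars
retail_price = 799        # launch price, in dollars

margin = (retail_price - bill_of_materials) / retail_price
print(f"Implied hardware margin: {margin:.0%}")  # about 64%
```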

Apple uses its privileged place in consumers’ lives to instill monopoly-like powers in its approach to competitors like Spotify. In 2016, the firm denied an update to the iOS Spotify app, essentially blocking iPhone users’ access to the latest version of the music-streaming service. While Spotify has double the subscribers of Apple Music, Apple makes up the discrepancy by placing a 30 percent tax on its competition.


Apple is not shy about using its popularity among consumers to its advantage. It was recently discovered that Apple has been purposely slowing down performance on outdated iPhone models, a strategy that is likely to entice users to upgrade sooner than they would have otherwise. This is the confidence of a monopoly.

In the late nineteenth century, the term trust came into use as a way to describe big businesses that controlled the majority of a particular market. Teddy Roosevelt gained a reputation as the original “trust buster” by breaking up the beef and railroad trusts, and filing forty more antitrust suits during his presidency. Fast-forward a hundred years, to 2016, and we find candidate Trump announcing that a Trump administration would not approve the AT&T–Time Warner merger “because it’s too much concentration of power in the hands of too few.” A year later, his Justice Department sued to block it.

So our presidents are still fighting the good fight, right? Well, let’s break this down. AT&T has 139 million wireless subscribers, sixteen million Internet subscribers, and twenty-five million video subscribers, about twenty million of which were acquired from DirecTV. Time Warner owns content-producing brands such as HBO, Warner Bros., TNT, TBS, and CNN. A vertical merger between the two companies could, in theory, create a megacorporation capable of creating and distributing content across its network of millions of wireless-phone, Internet, and video subscribers.

Too much power in the hands of too few? Maybe. But if content-and-distribution heft is what we’re worried about, then Teddy would have been knocking on Jeff’s, Tim’s, Larry’s, and Mark’s doors a decade ago. Already each of the Four has content and distribution that dwarfs a combined AT&T–Time Warner:

• Amazon spent $4.5 billion on original video in 2017, second only to Netflix’s $6 billion. Prime Video has launched in more than two hundred countries and recently struck a $50 million deal with the NFL to stream ten Thursday-night games. Amazon controls a 71 percent share in voice technology and has an installed distribution base of 64 percent of American households through Prime. Name a cable company with a 64 percent market share—I’ll wait. In addition, Amazon controls more of the market in cloud computing than the next five largest competitors combined. Alexa, does this foster innovation?

• Apple is set to spend $1 billion on original content this year. The company controls 2.2 million apps and set a record in 2013 when the number of songs it sold on iTunes hit twenty-five billion. Apple’s library now includes forty million songs, which can be distributed across the company’s one billion active iOS devices, and that’s not even mentioning its television and video offerings. But AT&T needs to sell Cartoon Network?

• Facebook owns a torrent of content created by its 2.1 billion monthly active users. Through its site and its apps, the company reaches 66 percent of U.S. adults. Facebook plans to spend $1 billion on original content. It’s the world’s most prolific content machine, dominating the majority of phones worldwide. Now “what’s on your mind?”

• Four hundred hours of video are uploaded to YouTube every minute, which means that Google has more video content than any other entity on earth. It also controls the operating system on two billion Android devices. But AT&T needs to divest Adult Swim?

Perhaps Trump is right that the merger of AT&T and Time Warner is unreasonable, but if so, then we should have broken up the Four ten years ago. Each of the Four, after all, wields a harmful monopolistic power that leverages market dominance to restrain trade. But where is the Department of Justice? Where are the furious Trump tweets? Convinced that the guys on the other side of the door are Christlike innovators, come to save humanity with technology, we’ve allowed our government to fall asleep at the wheel.

Margrethe Vestager, the EU commissioner for competition, is the only government official in a Western country whose testicles have descended—who is not afraid of, or infatuated with, big tech. Last May, she levied a $122 million fine against Facebook for lying to the EU about its ability to share data between Facebook and WhatsApp, and a month later she penalized Google $2.7 billion for anticompetitive practices.

This was a good start, but it’s worth noting that those fines are mere mosquito bites on the backs of elephants. The Facebook fine represented 0.6 percent of the acquisition price of WhatsApp, and Google’s amounted to just 3 percent of its cash on hand. We are issuing twenty-five-cent parking tickets for not feeding a meter that costs $100 every fifteen minutes. We are telling these companies that the smart, shareholder-friendly thing to do is obvious: Break the law, lie, do whatever it takes, and then pay a (relatively) anemic fine if you happen to get caught.
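The "mosquito bite" claim is easy to verify from the figures in the text. The fine amounts ($122 million and $2.7 billion) are quoted above; the WhatsApp acquisition price and Google's cash on hand are assumed round figures chosen here to reproduce the text's 0.6 percent and 3 percent ratios:

```python
# Sanity-checking the fine-to-company-scale ratios cited in the text.
# The two base figures are assumptions: ~$19 billion for the WhatsApp
# acquisition and ~$90 billion for Google's cash on hand.

facebook_fine = 122e6
whatsapp_price = 19e9   # assumed acquisition price
google_fine = 2.7e9
google_cash = 90e9      # assumed cash on hand

print(f"Facebook fine / WhatsApp price: {facebook_fine / whatsapp_price:.1%}")
print(f"Google fine / cash on hand:     {google_fine / google_cash:.1%}")
```

At well under one percent and about three percent respectively, the fines are rounding errors next to the deals and cash piles they punish.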

The monopolistic power of big tech serves as a macho test for capitalists. The embrace of the innovation class makes us feel powerful. We like success, especially outrageous success, and we’re inspired by billionaires and the incredible companies they founded. We also have a gag reflex when it comes to regulation, one that invites unattractive labels. Since I started suggesting that Amazon should be broken up, Stuart Varney of Fox News, a charming guy, has taken to introducing me on-air as a socialist. Any day now, I suspect he’ll start calling me European.

There’s no question that the markets sent a strong signal in 2017 that our economy is sated on regulation. But there’s a difference between regulation and trust busting. What’s missing from the story we tell ourselves about the economy is that trust busting is meant to protect the health of the market. It’s the antidote to crude, ham-handed regulation. When markets fail, and they do, we need those referees on the field who will throw a yellow flag and restore order. We are so there.

The tremendous success of the Four—which alone accounted for 40 percent of the gains in the S&P 500 for the month of October—wallpapers over the fact that, as a whole, the markets in which they operate are not healthy. Late last year, Refinery29 and BuzzFeed, two promising digital-marketing fledglings, announced layoffs, while Criteo, an ad-tech firm, shed 50 percent of its market capitalization. Why? Because there is Facebook, there is Google, and then there is everyone else. And all of those other firms, including Snap Inc., are dead; they just don’t know it yet.

Are we sure all these companies deserve to die? Or is it the case that our markets are failing and preventing the development of a healthy ecosystem with dozens of digital-marketing firms growing, hiring, and innovating?

Search…Your Feelings

Imagine two markets. One that includes the firms below:

Amazon | Apple | Facebook | Google

And another that includes these independent firms:


As Darth Vader urged his son, I want you to “search your feelings” and answer which market would:

Create more jobs and shareholder value.

While trust busting is typically bad for stocks in the short run, busting up Ma Bell unleashed a torrent of shareholder growth in telecommunications. Similarly, Microsoft, despite its run-in with the DOJ in the 1990s, just hit an all-time high. In addition, it’s reasonable to believe that Amazon and Amazon Web Services may be worth more as separate firms than they are as one.

Inspire more investment.

There are half as many publicly traded U.S. firms as there were twenty-two years ago, and most firms in the innovation economy understand that their most likely—or only—path to exit is to be acquired by big tech. An absence of buyers makes for an economy in which the two options are to go big (become Google) or go home (go out of business). While home runs provide good theater, the doubles and triples of acquisitions by medium-sized firms are likely a stronger engine of growth.

Broaden the tax base.

The aggregation of power has resulted in firms that have so much political clout and resources that they can bring their effective tax rates well below what midsize companies pay, creating a regressive tax system.

Why should we break up big tech? Not because the Four are evil and we’re good. It’s because we understand that the only way to ensure competition is to sometimes cut the tops off trees, just as we did with railroads and Ma Bell. This isn’t an indictment of the Four, or retribution, but recognition that a key part of a healthy economic cycle is pruning firms when they become invasive, cause premature death, and won’t let other firms emerge. The breakup of big tech should and will happen, because we’re capitalists. It’s time.

Former Facebook Executives Warn Social Media is Destroying Society


For those of us who use social media, we’ve all experienced the familiar “I’ll pop onto [insert platform of choice] for a minute, just to see what’s going on,” only to realize, hours later, that we’re still scrolling through our news feed, clicking the like icon or feeling our blood pressure rise at a troll’s diatribe or some other unpleasant post.

Even though a Harvard study has established that social media platforms are highly addictive – as pleasurable to the brain’s reward center as food, money and sex – I still often curse my lack of self-control and the hours I waste on these sites.

Although I’m well-aware of the dark underbelly of social media, it’s surprising to see two former Facebook executives — former President Sean Parker, and former Vice President for User Growth, Chamath Palihapitiya — very publicly announce that Facebook is “ripping apart the social fabric of how society works,” and that it’s specifically designed to exploit human vulnerability and psychology.


Cultivating a Culture of Impatience and ‘Fake Brittle Popularity’

During an Axios event in Philadelphia last year, Parker warned that Facebook was intentionally designed to consume as much of our time and attention as possible. Using manipulative psychology, the platform is structured to give you a little dopamine hit for each like and share, which in turn encourages you to contribute more content and interaction.

“It’s a social validation feedback loop… the creators [of Facebook] understood this consciously, and we did it anyway.”

Palihapitiya agrees. In a recent talk he gave to students of the graduate business school at Stanford University, he said:

“The short-term dopamine-driven feedback loops that we have created are destroying how society works: no civil discourse, no cooperation, misinformation, [and] mistruth. And this is not an American problem; this is not about Russian ads; this is a global problem.”

He says he feels tremendous guilt for the role he played in developing these tools that are ripping society apart.

“So we are in a really bad state of affairs right now, in my opinion. It is eroding the core foundations of how people behave by, and between, each other,” Palihapitiya said. “You know, my solution is I just don’t use these tools anymore. I haven’t for years. It’s created huge tension with my friends. Huge tensions in my social circles.”

In short, he didn’t want to become programmed — and his children “aren’t allowed to use that shit” either. He strongly recommends that everyone take a “hard break” from these platforms.

“You don’t realize it, but you are being programmed … but now you got to decide how much you’re willing to give up, how much of your intellectual independence.”

Moreover, Palihapitiya believes social media platforms have encouraged our society to be extremely impatient, fostering the expectation of instant gratification. They also strengthen our “perceived sense of perfection” with short-term signals: hearts, likes, thumbs up, which we confuse with true value.

“And instead what it really is, is fake, brittle popularity that’s short-term, and that leaves you even more, and admit it, vacant and empty [than] before you did it, because it forces you into this vicious cycle where you’re like, ‘What’s the next thing I need to do now?’ ’cause I need it back,” he said. “Think about that compounded by 2 billion people. And then think about how people react to the perceptions of others. It’s just… really, really bad.”

Not only that, but he points out social media can turn deadly.

He describes an incident in India where seven innocent people were murdered by a violent mob incited by a fake WhatsApp post about alleged kidnappers in the region.

“That’s what we’re dealing with,” said Palihapitiya. “And imagine taking that to the extreme, where bad actors can now manipulate large swathes of people to do anything you want.”

Palihapitiya doesn’t only criticize social media, but Silicon Valley’s culture of venture capital funding. He says that investors drive money into “shitty, useless, idiotic companies,” rather than actively working towards solutions for our most pressing problems — like environmental issues and human disease.

After leaving Facebook, Palihapitiya started his own venture fund, Social Capital, which aims to “advance humanity by solving the world’s hardest problems.”

While he admits Facebook isn’t completely negative, he decided to take the capital they rewarded him with and “focus on the structural changes that I can control.”

Born in Sri Lanka and raised as a poor immigrant in Canada, Palihapitiya recognized early on that money is a powerful instrument for change.

“In the absence of capital, you are irrelevant; with capital, you are powerful, and you decide,” he said. “Get money and don’t lose your moral compass when you do.”

Today, Palihapitiya’s net worth is estimated at $1 billion. His goal is to generate $1 trillion in income through his invested companies, which will be used to positively impact a quarter of the world’s population by the year 2045.



Mark Zuckerberg says Facebook is changing its news feed so it’s actually ‘good for people’


Image: Facebook CEO Mark Zuckerberg
  • Facebook plans to change how its news feed works, playing up status updates from friends and family.
  • On the flip side, it will deemphasize news articles and anything published by brands.
  • Facebook is trying to foster “meaningful interaction” and make Facebook more of a force for good, CEO Mark Zuckerberg said.
  • Facebook is coming off of a tough year, where it had to battle fake news and reports that Russian-linked groups attempted to influence the 2016 presidential election via ads on its service.

In the wake of criticism about how its news feed can be manipulated and is having a negative effect on users, Facebook is making some big changes to its flagship feature.

The company plans to give more prominence to status updates and photos shared by users’ friends and family while at the same time playing down news articles or anything published by brands, company officials said.

“We feel a responsibility to make sure our services aren’t just fun to use, but also good for people’s well-being,” Zuckerberg said in a post Thursday on his Facebook page.

The New York Times reported the changes earlier on Thursday. Facebook confirmed them in Zuckerberg’s post and in a blog post titled “Bringing People Closer Together” by Adam Mosseri, who heads the company’s news feed.

Facebook’s revamping of its news feed is intended to ensure more “meaningful interaction” on the social network, Zuckerberg said in his post. The company wants to encourage users to have more conversations with people they know, rather than passively consuming articles or videos.

The news comes a week after Zuckerberg announced that his New Year’s resolution for 2018 would be to focus on systemic issues with Facebook, including abuse and hacking.

“The world feels anxious and divided, and Facebook has a lot of work to do – whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent,” wrote Zuckerberg in a Facebook post announcing his resolution.

The social networking giant is coming off a rough 2017, amid revelations of fake news and ads placed by Russian-linked actors allegedly intended to influence the 2016 presidential election.
