This new kind of actuator, called a hasel, can oh-so-delicately grasp a raspberry without exploding it.
OH, THE POOR humanoid robots. After decades of development, they’re still less sprinty Terminator and more … octogenarian on sedatives. While these robots may look like us, they aren’t built like us—electric motors in their joints drive their herky-jerky movements, whereas our muscles give us more precise control over our bodies. Well, unless we’re on sedatives.

But a burgeoning field called soft robotics promises to bring more “natural” movements to the machines. And today, a pair of papers in Science and Science Robotics detail a clever new variety of robotic “muscle,” a series of oil-fueled pouches activated with electricity. This actuator (aka the bit that moves a robot) is as strong and efficient as human muscle, but can pull off more contractions per second. Which could make for a prosthesis that moves more naturally, perhaps—or maybe farther down the road, soft yet strong robots that help you around the house without accidentally terminating you.

The new class of robotic muscle is called a “hydraulically amplified self-healing electrostatic” actuator, or “hasel.” Hasels come in several designs, which we’ll get to in a moment. But in general, the actuator makes use of a pouch filled with oil, surrounded on either side by electrodes. Apply voltage to these and you create an electric field.

“We’re just using these electrostatic forces to displace the fluid, to bring these electrodes together to pump the fluid to a different part of the pouch,” says Nicholas Kellaris, mechanical engineer from the Keplinger research group at the University of Colorado Boulder and lead author of the Science Robotics paper. That is, the hasel actuator squeezes inward on itself. “That’s what causes the pressure and the movement and deforms the structure to cause actuation.”
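For a rough sense of scale, the squeezing force per unit area between two parallel-plate electrodes separated by a dielectric fluid is the Maxwell stress, p = ½·ε₀·εᵣ·E². The sketch below is a back-of-the-envelope illustration, not taken from either paper; the voltage, gap, and permittivity values are illustrative assumptions (εᵣ ≈ 2.2 is typical of transformer oil):

```python
# Rough estimate of the electrostatic (Maxwell) pressure that pulls two
# parallel-plate electrodes together across a dielectric-filled gap:
#   p = 1/2 * epsilon_0 * epsilon_r * E^2
# All numbers below are illustrative, not values from the hasel papers.

EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage_v, gap_m, rel_permittivity):
    """Electrostatic pressure (Pa) across a uniform parallel-plate gap."""
    field = voltage_v / gap_m  # electric field, V/m
    return 0.5 * EPSILON_0 * rel_permittivity * field ** 2

# Example: 10 kV applied across a 1 mm oil-filled gap
p = maxwell_pressure(10_000, 1e-3, 2.2)
print(f"{p / 1000:.1f} kPa")  # prints: 1.0 kPa
```

Because the pressure scales with the square of the field, halving the gap (or doubling the voltage) quadruples the squeeze, which is why these actuators are driven at kilovolt-scale voltages across sub-millimeter gaps.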

Now, there are other soft actuators out there that use compressed air or fluid for their movement, but these rely on bulky external reservoirs to hold the stuff. The hasel actuator, in contrast, leverages the oil (transformer oil, specifically) already inside the flexible pouch. This makes the actuator much faster, since its fluid doesn’t have to move through a long tube from a reservoir.

So, for instance, the researchers created circular hasel actuators that form into donuts when activated. “In this case we apply electric field over one part of the actuator and that pushed liquid out to what you would call the inactive region,” says Eric Acome, mechanical engineer at the University of Colorado Boulder and lead author of the Science paper. That thickens the actuator by shoving oil in the center into a ring around the edge—again, a donut. So stack several of these actuators together in two groups, each representing a “finger,” and you can bring the thickening stacks together to oh-so-delicately grasp a raspberry without exploding it. (See photo at the top of this story.)

Or say you want your robotic muscle to lift something. For that you’d want a rectangular actuator made up of three strips of pouches running horizontally. Each of these pouches has electrodes attached to its top half. If you apply voltage to the electrodes, they come together and squeeze the oil into the other half of the pouch, forming it into more of a cylinder. This shortens the length of the whole rectangular actuator, which could in turn lift a weight attached to the bottom. (You can try this at home, actually. Get a Ziploc bag and fill it with a bit of air. Lay it flat and push down on one side and you’ll see the other end pull toward you.) The researchers can stack these actuators as well for even more strength.

So hasel actuators can lift and grip with impressive strength and precision. The challenge, though, is the electricity used to power them. Electricity is not a gentle force, especially when you’re using it to control the relatively delicate materials used in soft robotics. Another variety of soft actuator operates on the same principle as a hasel, using opposing electrodes to squeeze a piece of rubber—but that can lead to electric shorts, essentially little lightning bolts that rip through the rubber and lead to catastrophic failure.

That’s where the oil functions as a sort of shield for the hasel actuators. “It would just pass through the liquid and then moments later that liquid redistributes, becomes insulating again, and the device can continue to operate,” says Tim Morrissey, also of the University of Colorado Boulder. That is, instead of ripping apart.

Really, the beauty of the hasel is its relative simplicity compared to a traditional actuator, which is a highly complicated electric motor combined with a gearbox. “This work will inspire others to explore interesting ways to combine different passive and active soft materials to create new actuator systems,” says Conor Walsh, a bioengineer at Harvard.

Still, because soft robots are far more delicate than their traditional counterparts, their designers have to worry about punctures that would sap their power (and spill transformer oil on your floor). So before they start working among us, soft robots will need self-healing skin. Some soft robots already have it, and the team behind the hasel actuator is working to give their creation the same ability.

Once they’re good and robust, soft robots of all kinds will infiltrate our lives. The rigidity of traditional prosthetic arms could give way to softer, more sensitive limbs. Making robots soft will allow us to work alongside them much more safely, meaning they won’t just steal every human job—they’ll complement human labor. And, inevitably, soft robots will be our companions, platonic or, um, otherwise.


INTERNET GIANTS AMAZON, Facebook, and Google plan to throw their collective weight behind efforts to save net neutrality.

The Internet Association, the industry’s primary lobbying organization, announced Friday that it plans to join lawsuits aimed at halting the Federal Communications Commission’s December action to repeal Obama-era net neutrality rules. Those rules banned internet service providers like Comcast and Verizon from blocking or otherwise discriminating against legal content online. The association represents dozens of smaller companies in addition to titans such as Google and Facebook.

Net neutrality supporters argue that the agency’s plan is illegal under federal laws that prohibit “arbitrary and capricious” changes in regulations, and that the agency didn’t gather sufficient public input on its proposal to overturn its old rules.

“The final version of Chairman Pai’s rule, as expected, dismantles popular net neutrality protections for consumers,” Internet Association President and CEO Michael Beckerman said in a statement, referring to FCC Chair Ajit Pai. “This rule defies the will of a bipartisan majority of Americans and fails to preserve a free and open internet.”


The move is significant because Facebook, Google, and other internet giants faced criticism last year for not doing enough while the FCC was considering repealing net neutrality. Most of the biggest companies participated in a “Day of Action” in July to promote awareness of the issue, but last month The New York Times pointed out that these efforts were relatively small compared to some of the industry’s past actions. For example, in 2006 Google co-founder Sergey Brin traveled to Washington, DC to make the case for net neutrality. By comparison, the internet giants were quiet last year, apart from filing comments with the FCC in support of the Obama-era rules, and placing a few notifications on their websites during the Day of Action. Apple is conspicuously missing from the group, but broke a long silence on the topic of net neutrality last year when it filed its own FCC comment in support of net neutrality.

The Internet Association does not plan to file a lawsuit itself, but rather to join legal action taken by others. The association didn’t specify which lawsuit it plans to join.

Several government officials and advocacy groups have said they plan legal action, but all must wait until the repeal order is published in the Federal Register. (The FCC published the final text of its order on its website Thursday.) New York Attorney General Eric Schneiderman has promised to file suit, and attorneys general in several other states, including Illinois, Massachusetts, and Washington, have promised to join. Internet advocacy groups like Free Press were also quick to promise legal action.

Legal experts told WIRED last month that net neutrality advocates have a case, but it’s too early to tell how the courts will rule on the subject.

Beckerman also said the association and its member companies will push Congress to pass strong net neutrality protections.

The Internet Association advocated strong net neutrality protections in 2014, and filed a comment encouraging the agency to retain the Obama-era rules last year.

What Is the Criterion of Beauty?

Studies have shown that beautiful people, those seen as being attractive, are more successful, so can beauty also be empirical? What is the criterion of beauty, and is the appearance of outer beauty always just a reflection of the inner?

“The outer beauty comes from a different source than the inner. The outer beauty comes from your father and mother: their bodies create your body. But the inner beauty comes from your own growth of consciousness that you are carrying from many lives.

“In your individuality both are joined, the physical heritage from your father and mother and the spiritual heritage of your own past lives, its consciousness, its bliss, its joy.

“So it is not absolutely necessary that the outer will be a reflection of the inner, nor will vice versa be true, that the inner will correspond with the outer.

“But sometimes it happens that your inner beauty is so much, your inner light is so much that it starts radiating from your outer body. Your outer body may not be beautiful, but the light that comes from your sources, your innermost sources of eternal life, will make even a body which is not beautiful in the ordinary sense appear beautiful, radiant.

“But vice versa it is never true. Your outer beauty is only skin-deep. It cannot affect your inner beauty. On the contrary the outer beauty becomes a hindrance in search of the inner: you become too identified with the outer. Who is going to look for the inner sources? Most often it happens that the people who are outwardly very beautiful, are inwardly very ugly. Their outer beauty becomes a cover-up to hide themselves behind, and it is experienced by millions of people every day. You fall in love with a woman or a man, because you can see only the outer. And just within a few days you start discovering his inner state; it doesn’t correspond to his outer beauty. On the contrary it is very ugly.

“For example, Alexander the Great had a very beautiful body but he killed millions of people, just to fulfill his ego that he is the world conqueror. He met one man, Diogenes, when he was on his way to India, who lived naked, the only man in Greece who did, unique in a way. His beauty was tremendous, not just the outer, but the inner radiance was so much and so dazzling that even Alexander had to stop his armies when he was close by in a forest near a river. He stopped the armies and went to see Diogenes alone; alone, because he did not want anybody to know that there exists a man who is far more beautiful than Alexander himself.

“It was early morning and Diogenes was taking a sunbath, naked on the riverbank. Alexander could not believe that a beggar…. He had nothing, no possessions – even Buddha used to have a begging bowl, but that too Diogenes had thrown away. He was absolutely without any possessions, exactly as he was born, naked.

“Alexander could not believe his eyes. He had never seen such a beautiful personality and he could see that this beauty was not just on the outer side. Something infiltrated from the inner; a subtle radiation, a subtle aura surrounded him. All around him there was a fragrance, a silence.

“If the inner becomes beautiful – which is in your hands – the outer will have to mold itself according to the inner. The outer is not essential, it will have to reflect the inner in some way.

“But the converse is not true at all. You can have plastic surgery, you can have a beautiful face, beautiful eyes, a beautiful nose; you can change your skin; you can change your shape. That is not going to change your being. Inside you will still remain greedy, full of lust, violence, anger, rage, jealousy, with a tremendous will to power. All these things the plastic surgeon can do nothing about.

“For that you will need a different kind of surgery. It is happening here: you are on the table. As you become more and more meditative, peaceful, a deep at-onement with existence happens. You fall into the rhythm of the universe. The universe also has its own heartbeat. Your heartbeat, once it starts in rhythm with the universal heartbeat, will have transformed your being from that ugly stage of animality, into authentic humanity.

“And even the human is not the end. You can go on searching deeper and there is a place where you transcend humanity and something of the divine enters in you. Once the divine is there, it is almost like a light in a dark house. The windows will start showing the light; even the cracks in the wall or the roof or the doors will start showing the inner light.

“The inner is tremendously powerful, the outer is very weak. The inner is eternal, the outer is very temporary. How many years do you remain young? And as youth fades away you start feeling that you are becoming ugly, unless your inner being is also growing with your age. Then even in your old age you will have a beauty that the youth may feel jealous of.”


Sat Chit Anand, Talk #27

The biology of male breast cancer


Important differences have begun to emerge in the molecular profiles of female and male breast cancer, which may prove to be of therapeutic value. This review examined all the available data on the genomics of male breast cancer (MBC). Most male cancers are ER+ve, but without a corresponding increase in PR positivity and with only a weaker association with estrogen-controlled markers such as PS2, HSP27 and cathepsin-D. HER2+ve cancers are rare in males, and the role of the androgen receptor is controversial. Although the Luminal A phenotype was the most frequent in both MBC and female breast cancer (FBC), no Luminal B or HER2 phenotypes were found in males, and the basal phenotype was very rare. Using hierarchical clustering in FBC, ERα clustered with PR, whereas in MBC, ERα associated with ERβ and AR. Based on limited data, it appears that Oncotype DX is effective in determining recurrence risk in selected MBC. In future, tailored therapies based on genomics will probably yield the most promising approach for both MBC and FBC.

Study Backs Hepatitis C Treatment in Injection Drug Users

Use of sofosbuvir-velpatasvir to treat hepatitis C virus (HCV) infection in injection drug users usually leads to a sustained virologic response at 12 weeks, according to an industry-supported study in the Lancet Gastroenterology & Hepatology.

Some 103 patients with chronic HCV infection (types 1–4) who reported injection drug use in the prior 6 months received sofosbuvir-velpatasvir once daily for 12 weeks. At baseline, three-quarters had injected drugs in the past month. Roughly 60% of participants were receiving opioid substitution therapy during the study.

Of 100 participants who completed treatment, 99 had an end-of-treatment response. The primary outcome — sustained virologic response 12 weeks later — occurred in 94% of all participants. Of the three participants who completed treatment but didn’t achieve the primary outcome, two were lost to follow-up and one had HCV reinfection.

Commentators note that although guidelines recommend HCV treatment for drug users, “stigma … has resulted in insurance restrictions and reluctance from providers to offer appropriate medical therapy.” They conclude that HCV-infected injection drug users “can and should be treated with direct-acting antivirals.”

‘Most of us are too busy to be better’: the lazy person’s guide to self-improvement

If you can really improve yourself in just 10 minutes a day, as the self-help gurus claim, Tim Dowling is all for it

Tim Dowling

I am lying on a mat, looking up at the bright blue of the skylight above me. I exhale purposefully, then let my lungs reinflate of their own accord. I am trying hard to concentrate on this slightly counterintuitive way of breathing, but the voices in my head are distracting me. They are telling me about business regulation, specifically about the inhibitory effect of hairdresser licensing in Utah.

I do not, as a rule, make New Year resolutions. As an anxious person, the 12 months that lie ahead of New Year’s Eve do not fill me with excitement or anticipation. I just wonder what else could go wrong. I am as susceptible as the next person to notions of promise, to the idea that, with the right effort, I could become fitter, smarter, happier, better. But each new December, as I coast towards the end of the year on squeaky wheels, I find myself feeling the same way: older, wiser, worse.

It’s the time and effort involved that puts me off most kinds of self-improvement. Many years ago, I signed up for an online life-coaching course, and when I complained about the difficulty of one of the exercises I’d been sent – I was meant to make a list of my qualities, keeping to the strict format “I am (quality)” – the instructor immediately replied by email, saying, “Yes, this is REAL WORK, isn’t it?” I thought: I already have a job, thanks.

In recent years, however, a new school of self‑improvement has sprung up, one that seems to recognise that, frankly, most of us are too busy to be better. Books with titles such as The 10-Minute Millionaire, The 5-Minute Healer, 10 Minutes To Better Health and 10 Minutes A Day To A Better Marriage represent, if not a global revolution in self-improvement, at least a reliable publishing trend.

I am ineluctably drawn to the quick fix. Could it be possible to cram a year’s self-improvement into a few minutes of effort a day, to get the whole business out of the way before the end of January? It can’t do any harm to try, can it?

My first self-improvement guide is a new book called 15 Minutes To Happiness by Richard Nicholls. My first thought is that 15 minutes sounds a lot, especially when somebody else is promising to make me a millionaire in 10, but Nicholls’ book is full of quick exercises interspersed with longer explanations of why and how they work. Some of the exercises are designed to fix problems I don’t think I have, so I’m pretty sure I can skip ahead.

 Nicholls posits a model for happiness that I find reassuring. He stresses the value of negative thinking. He says that actively seeking happiness can often end up making people feel less happy. On page 49 he writes: “Be open to the possibility that you bought this book and you don’t actually need it.” This, I think, is my kind of self-help.

Here and there Nicholls inserts a “quick happiness boosting idea”, designed to give you an injection of contentment as and when you need it. In the chapter on gratitude, for example, he suggests you “take a moment or two to send a text message to someone thanking them for being a part of your life”. I embarked on a preliminary challenge: trying to find someone – anyone – in my list of contacts I could send a text like that, without having to send an immediate follow-up apology text: “Sorry about that – I was only following orders.”

Here’s another: “Put your town name into JustGiving.com and see who is raising money for a good cause in your local area. Even if you don’t donate anything to anyone, spending time looking at the good that’s going on in your town will dilute any doom and gloom you’ve picked up from elsewhere.”

I tried this one – it was incredibly easy, and it did make me feel slightly happier. It ended up costing me £30 (donated anonymously, because that’s the kind of person I am now), but the feeling lasted for almost four hours.


A dozen years ago, I had an hour-long session with a yoga instructor, and when I asked what sort of benefits I could expect, he promised that yoga would bring me joy. I hadn’t even considered this possibility, but I liked the sound of it. I will try this yoga, I thought. And when I get my joy, everyone else can go to hell.

 Then I went to one of his classes in a London studio, full of supple people in leggings, and found the whole experience nerve-racking and humiliating. It wasn’t relaxing at all. It was like auditioning for Cats.

So I’m done doing yoga in front of people, but a book called The 10 Minute Yoga Solution raises the possibility that I could get my joy in the privacy of my home, quietly and quickly. The author, Ira Trivedi, makes a lot of bold claims: she says that 10 minutes of yoga a day will not just make me calmer and more physically fit, it will improve my eyesight, control unhealthy eating habits and cure a multitude of hair problems (it’s all about blood flow to the scalp). She also mentions joy, if only in passing.

The book itself has very few words in it. It is simply a collection of illustrated poses – or asanas – with instructions, grouped into workouts tailored to specific requirements. Again, I find myself in a position to skip bits: yoga for women, for kids, for weight loss, for fasting, for binge-eating. I like the sound of “yoga for lazy people” and “yoga for hangovers”, but for the moment I am concentrating on yoga for beginners: eight poses, 10 minutes in all.

I wasn’t sure what to expect from a basic, self-administered yoga programme, but I hadn’t expected it to hurt quite so much. Sitting cross-legged hurts. The seated spinal twist hurts. Even the shavasana, the so-called corpse pose – lying flat on your back, arms and legs spread, palms up, toes pointing out – hurts. I am, I discover, a collection of small aches. As instructed, I contract the muscles in my feet and then relax them. My toes refuse to uncurl. Ten minutes begins to seem like an age.

There are, of course, a lot of self-improvement podcasts available – I found one titled simply You Suck: Be Better. Another, created by a former lawyer, suggested that I think of my time as if it were broken down into billable hours, so I learn to prize it more. I’d rather use my headphone time to acquire some actual information. I’ve got the happy book and the yoga routine already. What I really require is a little knowledge.

I’ve always resisted the idea of learning more about economics. It was a passive resistance – I just wasn’t that interested in the subject – but maybe, armed with the right podcast and a decent set of headphones, I could enter into a new phase of passive learning. By common consent, NPR’s Planet Money is one of the best economics podcasts going. I haven’t listened to many – well, any – but Planet Money is entertaining, informative and aimed squarely at the layman. It’s not a primer, but more of a fun way to engage with what for many remains an off-putting subject. I encounter no mathematics.

But there’s a lot of it: two years’ worth, with a new episode posted every couple of days. Where to begin? What’s more, the average length of each instalment is close to 20 minutes, which, in today’s self-improvement environment, is positively leisurely. There is a solution: it turns out you can just speed a podcast up. At first I thought: who would do this? But lots of people do it. My own children, it transpires, routinely listen to sped-up recordings of their university lectures in order to save time. I had to download a new app to acquire the facility, but I can now listen to Planet Money at three times the original speed. Actually, I can’t – it’s pretty well unintelligible at that clip – but I soon find that if I spend a few minutes trying to keep up with the podcast at double speed, it then sounds perfectly normal at a more relaxed one-and-a-half times. Within a few days, I’ve worked my way up to 1.8x. Over the course of a week, I grow increasingly impatient with the pace of actual human conversation. Spit it out, I want to say.

A week in, I rise (10 minutes) early and run through my yoga positions, beginning with some breathing: inhale the future, exhale the past, as the book says. I move on to the spinal twist and the shoulder stand. The corpse pose no longer hurts; in fact, my impersonation of a corpse is so convincing that I worry about my wife walking in and finding me. He died doing what he loved, she would think. Express yoga.

I listen to a podcast about robots taking over our jobs on my way to and from the shops; about 1.6x makes it the right length for the journey. Back at home, I sit down to settle on my next 15-minute happiness task. Deciding often takes longer than 15 minutes, because I reject a few out of hand. Going through Nicholls’ book, I come across the following passage: “If we’re grateful for life then we can’t be fearful, which means that any anxiety we experience gets processed as excitement instead. If we’re grateful, then we act out of a sense that we have enough rather than out of a sense of scarcity or envy.”

He goes on to suggest spending “15 minutes writing about some positive things that have happened to you”. I am extraordinarily resistant to this idea. I only like writing about bad things that have happened to me, in part because I know I will never run out. At first, I can’t even think of any recent positive experiences, but after a few minutes, I recall a long and mostly tedious drive to Exeter the previous week.

I was thinking about nothing but my destination when I came upon Stonehenge at sunset, the stones glistening in the low, pink light. At that moment, traffic slowed to a crawl, enabling me to get a long look. This is free, I thought. A wondrous thing to marvel at, and I haven’t driven an inch out of my way. After 10 minutes, the traffic cleared and I was off again, feeling strangely moved. And then I forgot all about it.

The exercise takes 20 minutes from start to finish – too long. I recall that email from the life coach – “This is REAL WORK, isn’t it?” I begin to think of my time in terms of billable hours.

Time is becoming an issue. Ten minutes of yoga is one thing, but when you add in a happiness exercise and the 12 minutes it takes me to listen to a 20-minute podcast, you’re talking about nearly a whole hour. It occurs to me that I might double up on some of this improvement.

 There is a certain amount of natural overlap. Both 15 Minutes To Happiness and The 10 Minute Yoga Solution stress the importance of breathing, and the exercises are not dissimilar. But focus is the key to both, and the focal points are different. It’s harder to mix mindfulness and stillness than it sounds. Add in a podcast explaining what GDP is, and the whole thing becomes an exercise in frustration. I am reminded, to my eternal disappointment, that there are no quick fixes.

After a fortnight of this, I would have to say the improvements have been marginal: some extra flexibility here, a little more gratitude there, a lot more to say when the subject of GDP next comes up at a dinner party. The Nicholls book is worth a read even if you do none of the exercises, if only to come away with the knowledge that the successful pursuit of happiness mainly involves not trying too hard. “It’s not unrealistic to think that in stopping trying to be happy, you can find that you’re happy enough already,” he writes. “Paradoxically, it could be that the only reason for you being unhappy is your relentless attempt at trying not to be.”

And I’ve learned the lesson I was always going to learn, only faster: stop making New Year resolutions. Again.

How to Fix Facebook—Before It Fixes Us

An early investor explains why the social media platform’s business model is such a threat—and what to do about it.

In early 2006, I got a call from Chris Kelly, then the chief privacy officer at Facebook, asking if I would be willing to meet with his boss, Mark Zuckerberg. I had been a technology investor for more than two decades, but the meeting was unlike any I had ever had. Mark was only twenty-two. He was facing a difficult decision, Chris said, and wanted advice from an experienced person with no stake in the outcome.

When we met, I began by letting Mark know the perspective I was coming from. Soon, I predicted, he would get a billion-dollar offer to buy Facebook from either Microsoft or Yahoo, and everyone, from the company’s board to the executive staff to Mark’s parents, would advise him to take it. I told Mark that he should turn down any acquisition offer. He had an opportunity to create a uniquely great company if he remained true to his vision. At two years old, Facebook was still years away from its first dollar of profit. It was still mostly limited to students and lacked most of the features we take for granted today. But I was convinced that Mark had created a game-changing platform that would eventually be bigger than Google was at the time. Facebook wasn’t the first social network, but it was the first to combine true identity with scalable technology. I told Mark the market was much bigger than just young people; the real value would come when busy adults, parents and grandparents, joined the network and used it to keep in touch with people they didn’t get to see often.

My little speech only took a few minutes. What ensued was the most painful silence of my professional career. It felt like an hour. Finally, Mark revealed why he had asked to meet with me: Yahoo had made that billion-dollar offer, and everyone was telling him to take it.

It only took a few minutes to help him figure out how to get out of the deal. So began a three-year mentoring relationship. In 2007, Mark offered me a choice between investing or joining the board of Facebook. As a professional investor, I chose the former. We spoke often about a range of issues, culminating in my suggestion that he hire Sheryl Sandberg as chief operating officer, and then my help in recruiting her. (Sheryl had introduced me to Bono in 2000; a few years later, he and I formed Elevation Partners, a private equity firm.) My role as a mentor ended prior to the Facebook IPO, when board members like Marc Andreessen and Peter Thiel took on that role.

In my thirty-five-year career in technology investing, I have never made a bigger contribution to a company’s success than I made at Facebook. It was my proudest accomplishment. I admired Mark and Sheryl enormously. Not surprisingly, Facebook became my favorite app. I checked it constantly, and I became an expert in using the platform by marketing my rock band, Moonalice, through a Facebook page. As the administrator of that page, I learned to maximize the organic reach of my posts and use small amounts of advertising dollars to extend and target that reach. It required an ability to adapt, because Facebook kept changing the rules. By successfully adapting to each change, we made our page among the highest-engagement fan pages on the platform.

My familiarity with building organic engagement put me in a position to notice that something strange was going on in February 2016. The Democratic primary was getting under way in New Hampshire, and I started to notice a flood of viciously misogynistic anti-Clinton memes originating from Facebook groups supporting Bernie Sanders. I knew how to build engagement organically on Facebook. This was not organic. It appeared to be well organized, with an advertising budget. But surely the Sanders campaign wasn’t stupid enough to be pushing the memes themselves. I didn’t know what was going on, but I worried that Facebook was being used in ways that the founders did not intend.

A month later I noticed an unrelated but equally disturbing news item. A consulting firm was revealed to be scraping data about people interested in the Black Lives Matter protest movement and selling it to police departments. Only after that news came out did Facebook announce that it would cut off the company’s access to the information. That got my attention. Here was a bad actor violating Facebook’s terms of service, doing a lot of harm, and then being slapped on the wrist. Facebook wasn’t paying attention until after the damage was done. I made a note to myself to learn more.

Meanwhile, the flood of anti-Clinton memes continued all spring. I still didn’t understand what was driving it, except that the memes were viral to a degree that didn’t seem to be organic. And, as it turned out, something equally strange was happening across the Atlantic.

When citizens of the United Kingdom voted to leave the European Union in June 2016, most observers were stunned. The polls had predicted a victory for the “Remain” campaign. And common sense made it hard to believe that Britons would do something so obviously contrary to their self-interest. But neither common sense nor the polling data fully accounted for a crucial factor: the new power of social platforms to amplify negative messages.

Facebook, Google, and other social media platforms make their money from advertising. As with all ad-supported businesses, that means advertisers are the true customers, while audience members are the product. Until the past decade, media platforms were locked into a one-size-fits-all broadcast model. Success with advertisers depended on producing content that would appeal to the largest possible audience. Compelling content was essential, because audiences could choose from a variety of distribution mediums, none of which could expect to hold any individual consumer’s attention for more than a few hours. TVs weren’t mobile. Computers were mobile, but awkward. Newspapers and books were mobile and not awkward, but relatively cerebral. Movie theaters were fun, but inconvenient.

When their business was limited to personal computers, the internet platforms were at a disadvantage. Their proprietary content couldn’t compete with traditional media, and their delivery medium, the PC, was generally only usable at a desk. Their one advantage—a wealth of personal data—was not enough to overcome the disadvantage in content. As a result, web platforms had to underprice their advertising.

Smartphones changed the advertising game completely. It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible. And it left Facebook and Google with a prohibitive advantage over traditional media: with their vast reservoirs of real-time data on two billion individuals, they could personalize the content seen by every user. That made it much easier to monopolize user attention on smartphones and made the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching the attention of a certain portion of its audience, when you can pay Facebook to reach exactly those people and no one else?

Whenever you log into Facebook, there are millions of posts the platform could show you. The key to its business model is the use of algorithms, driven by individual user data, to show you stuff you’re more likely to react to. Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits. They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.

Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy. The result is that the algorithms favor sensational content over substance. Of course, this has always been true for media; hence the old news adage “If it bleeds, it leads.” But for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members. To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.

It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British politics, but it seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise—there would be savings from leaving the European Union that would fund a big improvement in the National Health System—while also exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.

Meanwhile, the Remain campaign was making an appeal to reason. Leave’s crude, emotional message would have been shared, and thus amplified by the algorithms, far more than Remain’s. I did not see it at the time, but the users most likely to respond to Leave’s messages were probably less wealthy and therefore cheaper for the advertiser to target: the price of Facebook (and Google) ads is determined by auction, and the cost of targeting more upscale consumers gets bid up higher by actual businesses trying to sell them things. As a consequence, Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.

But there was one major element to the story that I was still missing.

Shortly after the Brexit vote, I reached out to journalists to validate my concerns about Facebook. At this point, all I had was a suspicion of two things: bad actors were exploiting an unguarded platform; and Facebook’s algorithms may have had a decisive impact on Brexit by favoring negative messages. My Rolodex was a bit dusty, so I emailed my friends Kara Swisher and Walt Mossberg at Recode, the leading tech industry news blog. Unfortunately, they didn’t reply. I tried again in August, and nothing happened.

Meanwhile, the press revealed that the Russians were behind the server hack at the Democratic National Committee and that Trump’s campaign manager had ties to Russian oligarchs close to Vladimir Putin. This would turn out to be the missing piece of my story. As the summer went on, I began noticing more and more examples of troubling things happening on Facebook that might have been prevented had the company accepted responsibility for the actions of third parties—such as financial institutions using Facebook tools to discriminate based on race and religion. In late September, Walt Mossberg finally responded to my email and suggested I write an op-ed describing my concerns. I focused entirely on nonpolitical examples of harm, such as discrimination in housing advertisements, suggesting that Facebook had an obligation to ensure that its platform not be abused. Like most people, I assumed that Clinton would win the election, and I didn’t want my concerns to be dismissed as inconsequential if she did.

My wife recommended that I send what I wrote to Mark Zuckerberg and Sheryl Sandberg before publishing in Recode. Mark and Sheryl were my friends, and my goal was to make them aware of the problems so they could fix them. I certainly wasn’t trying to take down a company in which I still hold equity. I sent them the op-ed on October 30. They each responded the next day. The gist of their messages was the same: We appreciate you reaching out; we think you’re misinterpreting the news; we’re doing great things that you can’t see. Then they connected me to Dan Rose, a longtime Facebook executive with whom I had an excellent relationship. Dan is a great listener and a patient man, but he was unwilling to accept that there might be a systemic issue. Instead, he asserted that Facebook was not a media company, and therefore was not responsible for the actions of third parties.

In the hope that Facebook would respond to my goodwill with a serious effort to solve the problems, I told Dan that I would not publish the op-ed. Then came the U.S. election. The next day, I lost it. I told Dan there was a flaw in Facebook’s business model. The platform was being exploited by a range of bad actors, including supporters of extremism, yet management claimed the company was not responsible. Facebook’s users, I warned, might not always agree. The brand was at risk of becoming toxic. Over the course of many conversations, I urged Dan to protect the platform and its users.

The last conversation we had was in early February 2017. By then there was increasing evidence that the Russians had used a variety of methods to interfere in our election. I formed a simple hypothesis: the Russians likely orchestrated some of the manipulation on Facebook that I had observed back in 2016. That’s when I started looking for allies.

On April 11, I cohosted a technology-oriented show on Bloomberg TV. One of the guests was Tristan Harris, formerly the design ethicist at Google. Tristan had just appeared on 60 Minutes to discuss the public health threat from social networks like Facebook. An expert in persuasive technology, he described the techniques that tech platforms use to create addiction and the ways they exploit that addiction to increase profits. He called it “brain hacking.”

The most important tool used by Facebook and Google to hold user attention is filter bubbles. The use of algorithms to give consumers “what they want” leads to an unending stream of posts that confirm each user’s existing beliefs. On Facebook, it’s your news feed, while on Google it’s your individually customized search results. The result is that everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them. Continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts. Facebook takes the concept one step further with its “groups” feature, which encourages like-minded users to congregate around shared interests or beliefs. While this ostensibly provides a benefit to users, the larger benefit goes to advertisers, who can target audiences even more effectively.

After talking to Tristan, I realized that the problems I had been seeing couldn’t be solved simply by, say, Facebook hiring staff to monitor the content on the site. The problems were inherent in the attention-based, algorithm-driven business model. And what I suspected was Russia’s meddling in 2016 was only a prelude to what we’d see in 2018 and beyond. The level of political discourse, already in the gutter, was going to get even worse.

I asked Tristan if he needed a wingman. We agreed to work together to try to trigger a national conversation about the role of internet platform monopolies in our society, economy, and politics. We recognized that our effort would likely be quixotic, but the fact that Tristan had been on 60 Minutes gave us hope.

Our journey began with a trip to New York City in May, where we spoke with journalists and had a meeting at the ACLU. Tristan found an ally in Arianna Huffington, who introduced him to people like Bill Maher, who invited Tristan to be on his show. A friend introduced me over email to a congressional staffer who offered to arrange a meeting with his boss, a key member of one of the intelligence committees. We were just starting, but we had already found an audience for Tristan’s message.

In July, we went to Washington, D.C., where we met with two members of Congress. They were interested in Tristan’s public health argument as it applied to two issues: Russia’s election meddling, and the giant platforms’ growing monopoly power. That was an eye-opener. If election manipulation and monopoly were what Congress cared about, we would help them understand how internet platforms related to those issues. My past experience as a congressional aide, my long career in investing, and my personal role at Facebook gave me credibility in those meetings, complementing Tristan’s domain expertise.

With respect to the election meddling, we shared a few hypotheses based on our knowledge of how Facebook works. We started with a question: Why was Congress focused exclusively on collusion between Russia and the Trump campaign in 2016? The Russian interference, we reasoned, probably began long before the presidential election campaign itself. We hypothesized that those early efforts likely involved amplifying polarizing issues, such as immigration, white supremacy, gun rights, and secession. (We already knew that the California secession site had been hosted in Russia.) We suggested that Trump had been nominated because he alone among Republicans based his campaign on the kinds of themes the Russians chose for their interference.

We theorized that the Russians had identified a set of users susceptible to its message, used Facebook’s advertising tools to identify users with similar profiles, and used ads to persuade those people to join groups dedicated to controversial issues. Facebook’s algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy theories that thrilled his supporters, with the likely consequence that Trump and his backers paid less than Clinton for Facebook advertising per person reached. The ads were less important, though, than what came next: once users were in groups, the Russians could have used fake American troll accounts and computerized “bots” to share incendiary messages and organize events. Trolls and bots impersonating Americans would have created the illusion of greater support for radical ideas than actually existed. Real users “like” posts shared by trolls and bots and share them on their own news feeds, so that small investments in advertising and memes posted to Facebook groups would reach tens of millions of people. A similar strategy prevailed on other platforms, including Twitter. Both techniques, bots and trolls, take time and money to develop—but the payoff would have been huge.

Our final hypothesis was that 2016 was just the beginning. Without immediate and aggressive action from Washington, bad actors of all kinds would be able to use Facebook and other platforms to manipulate the American electorate in future elections.

These were just hypotheses, but the people we met in Washington heard us out. Thanks to the hard work of journalists and investigators, virtually all of these hypotheses would be confirmed over the ensuing six weeks. Almost every day brought new revelations of how Facebook, Twitter, Google, and other platforms had been manipulated by the Russians.

We now know, for instance, that the Russians indeed exploited topics like Black Lives Matter and white nativism to promote fear and distrust, and that this had the benefit of laying the groundwork for the most divisive presidential candidate in history, Donald Trump. The Russians appear to have invested heavily in weakening the candidacy of Hillary Clinton during the Democratic primary by promoting emotionally charged content to supporters of Bernie Sanders and Jill Stein, as well as to likely Clinton supporters who might be discouraged from voting. Once the nominations were set, the Russians continued to undermine Clinton with social media targeted at likely Democratic voters. We also have evidence now that Russia used its social media tactics to manipulate the Brexit vote. A team of researchers reported in November, for instance, that more than 150,000 Russian-language Twitter accounts posted pro-Leave messages in the run-up to the referendum.

The week before our return visit to Washington in mid-September, we woke up to some surprising news. The group that had been helping us in Washington, the Open Markets team at the think tank New America, had been advocating forcefully for anti-monopoly regulation of internet platforms, including Google. It turns out that Eric Schmidt, an executive at Alphabet, Google’s parent company, is a major New America donor. The think tank cut Open Markets loose. The story line basically read, “Anti-monopoly group fired by liberal think tank due to pressure from monopolist.” (New America disputes this interpretation, maintaining that the group was let go because of a lack of collegiality on the part of its leader, Barry Lynn, who writes often for this magazine.) Getting fired was the best possible evidence of the need for their work, and funders immediately put the team back in business as the Open Markets Institute. Tristan and I joined their advisory board.

Our second trip to Capitol Hill was surreal. This time, we had three jam-packed days of meetings. Everyone we met was already focused on our issues and looking for guidance about how to proceed. We brought with us a new member of the team, Renee DiResta, an expert in how conspiracy theories spread on the internet. Renee described how bad actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted people on those sites to create buzz, build phony news sites with “press” versions of the rumor, push the story onto Twitter to attract the real media, then blow up the story for the masses on Facebook. It was sophisticated hacker technique, but not expensive. We hypothesized that the Russians were able to manipulate tens of millions of American voters for a sum less than it would take to buy an F-35 fighter jet.

In Washington, we learned we could help policymakers and their staff members understand the inner workings of Facebook, Google, and Twitter. They needed to get up to speed quickly, and our team was happy to help.

Tristan and I had begun in April with very low expectations. By the end of September, a conversation on the dangers of internet platform monopolies was in full swing. We were only a small part of what made the conversation happen, but it felt good.

Facebook and Google are the most powerful companies in the global economy. Part of their appeal to shareholders is that their gigantic advertising businesses operate with almost no human intervention. Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.

Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.

Facebook and Google are now so large that traditional tools of regulation may no longer be effective. The European Union challenged Google’s shopping price comparison engine on antitrust grounds, citing unfair use of Google’s search and AdWords data. The harm was clear: most of Google’s European competitors in the category suffered crippling losses. The most successful survivor lost 80 percent of its market share in one year. The EU won a record $2.7 billion judgment—which Google is appealing. Google investors shrugged at the judgment, and, as far as I can tell, the company has not altered its behavior. The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.

It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election. We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost. Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.

We still don’t know the exact degree of collusion between the Russians and the Trump campaign. But the debate over collusion, while important, risks missing what should be an obvious point: Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.

Awareness of the role of Facebook, Google, and others in Russia’s interference in the 2016 election has increased dramatically in recent months, thanks in large part to congressional hearings on October 31 and November 1. This has led to calls for regulation, starting with the introduction of the Honest Ads Act, sponsored by Senators Mark Warner, Amy Klobuchar, and John McCain, which attempts to extend the current regulation of political ads on television and radio to online platforms. Facebook and Google responded by reiterating their opposition to government regulation, insisting that it would kill innovation and hurt the country’s global competitiveness, and that self-regulation would produce better results.

But we’ve seen where self-regulation leads, and it isn’t pretty. Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.

First, we must address the resistance to facts created by filter bubbles. Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.

This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader. The platforms will claim this is too onerous. Facebook has indicated that up to 126 million Americans were touched by the Russian manipulation on its core platform and another 20 million on Instagram, which it owns. Together those numbers exceed the 137 million Americans who voted in 2016. What Facebook has offered is a portal buried within its Help Center where curious users will be able to find out if they were touched by Russian manipulation through a handful of Facebook groups created by a single troll farm. This falls far short of what is necessary to prevent manipulation in 2018 and beyond. There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.

Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session. As Senator John Kennedy, a Louisiana Republican, demonstrated in the October 31 Senate Judiciary hearing, the general counsel of Facebook in particular did not provide satisfactory answers. This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.

These two remedies would only be a first step, of course. We also need regulatory fixes. Here are a few ideas.

First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed. At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause.

Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition. An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.

This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants. But decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.

Third, the platforms must be transparent about who is behind political and issues-based communication. The Honest Ads Act is a good start, but does not go far enough for two reasons: advertising was a relatively small part of the Russian manipulation; and issues-based advertising played a much larger role than candidate-oriented ads. Transparency with respect to those who sponsor political advertising of all kinds is a step toward rebuilding trust in our political institutions.

Fourth, the platforms must be more transparent about their algorithms. Users deserve to know why they see what they see in their news feeds and search results. If Facebook and Google had to be up-front about the reason you’re seeing conspiracy theories—namely, that it’s good for business—they would be far less likely to stick to that tactic. Allowing third parties to audit the algorithms would go even further toward maintaining transparency. Facebook and Google make millions of editorial choices every hour and must accept responsibility for the consequences of those choices. Consumers should also be able to see what attributes are causing advertisers to target them.

Fifth, the platforms should be required to have a more equitable contractual relationship with users. Facebook, Google, and others have asserted unprecedented rights with respect to end-user license agreements (EULAs), the contracts that specify the relationship between platform and user. When you load a new operating system or PC application, you’re confronted with a contract—the EULA—and the requirement that you accept its terms before completing installation. If you don’t want to upgrade, you can continue to use the old version for some time, often years. Not so with internet platforms like Facebook or Google. There, your use of the product comes with implicit acceptance of the latest EULA, which can change at any time. If there are terms you choose not to accept, your only alternative is to abandon use of the product. For Facebook, where users have contributed 100 percent of the content, this non-option is particularly problematic.

All software platforms should be required to offer a legitimate opt-out, one that enables users to stick with the prior version if they do not like the new EULA. “Forking” platforms between old and new versions would have several benefits: increased consumer choice, greater transparency on the EULA, and more care in the rollout of new functionality, among others. It would limit the risk that platforms would run massive social experiments on millions—or billions—of users without appropriate prior notification. Maintaining more than one version of their services would be expensive for Facebook, Google, and the rest, but in software that has always been one of the costs of success. Why should this generation get a pass?

Sixth, we need a limit on the commercial exploitation of consumer data by internet platforms. Customers understand that their “free” use of platforms like Facebook and Google gives the platforms license to exploit personal data. The problem is that platforms are using that data in ways consumers do not understand, and might not accept if they did. For example, Google bought a huge trove of credit card data earlier this year. Facebook uses image-recognition software and third-party tags to identify users in contexts without their involvement and where they might prefer to be anonymous. Not only do the platforms use your data on their own sites, but they also lease it to third parties to use all over the internet. And they will use that data forever, unless someone tells them to stop.

There should be a statute of limitations on the use of consumer data by a platform and its customers. Perhaps that limit should be ninety days, perhaps a year. But at some point, users must have the right to renegotiate the terms of how their data is used.

Seventh, consumers, not the platforms, should own their own data. In the case of Facebook, this includes posts, friends, and events—in short, the entire social graph. Users created this data, so they should have the right to export it to other social networks. Given inertia and the convenience of Facebook, I wouldn’t expect this reform to trigger a mass flight of users. Instead, the likely outcome would be an explosion of innovation and entrepreneurship. Facebook is so powerful that most new entrants would avoid head-on competition in favor of creating sustainable differentiation. Start-ups and established players would build new products that incorporate people’s existing social graphs, forcing Facebook to compete again. It would be analogous to the regulation of the AT&T monopoly’s long-distance business, which led to lower prices and better service for consumers.

Eighth, and finally, we should consider that the time has come to revive the country’s traditional approach to monopoly. Since the Reagan era, antitrust law has operated under the principle that monopoly is not a problem so long as it doesn’t result in higher prices for consumers. Under that framework, Facebook and Google have been allowed to dominate several industries—not just search and social media but also email, video, photos, and digital ad sales, among others—increasing their monopolies by buying potential rivals like YouTube and Instagram. While superficially appealing, this approach ignores costs that don’t show up in a price tag. Addiction to Facebook, YouTube, and other platforms has a cost. Election manipulation has a cost. Reduced innovation and shrinkage of the entrepreneurial economy have a cost. All of these costs are evident today. We can quantify them well enough to appreciate that the costs to consumers of concentration on the internet are unacceptably high.

Increasing awareness of the threat posed by platform monopolies creates an opportunity to reframe the discussion about concentration of market power. Limiting the power of Facebook and Google not only won’t harm America, it will almost certainly unleash levels of creativity and innovation that have not been seen in the technology industry since the early days of, well, Facebook and Google.

Before you dismiss regulation as impossible in the current economic environment, consider this. Eight months ago, when Tristan Harris and I joined forces, hardly anyone was talking about the issues I described above. Now lots of people are talking, including policymakers. Given all the other issues facing the country, it’s hard to be optimistic that we will solve the problems on the internet, but that’s no excuse for inaction. There’s far too much at stake.

Facebook Can’t Be Fixed

Facebook’s fundamental problem is not foreign interference, spam bots, trolls, or fame mongers. It’s the company’s core business model, and abandoning it is not an option.

Mark Zuckerberg has announced his annual “personal challenge,” which in the past has ranged from eating meat he personally kills to learning Mandarin.

This year, his personal challenge isn’t personal at all. It’s all business: He plans to fix Facebook.

In his short but impactful post, Zuckerberg notes that when he started doing personal challenges in 2009, Facebook did not have “a sustainable business model,” so his first pledge was to wear a tie all year, so as to focus himself on finding that model.

He sure as hell did find that model: data-driven audience-based advertising, but more on that in a minute. In his post, Zuckerberg notes that 2018 feels “a lot like that first year,” adding “Facebook has a lot of work to do — whether it’s protecting our community from abuse and hate, defending against interference by nation states, or making sure that time spent on Facebook is time well spent….My personal challenge for 2018 is to focus on fixing these important issues.”

The post is worthy of a doctoral dissertation. I’ve read it over and over, and would love, at some point, to break it down paragraph by paragraph. Maybe I’ll get to that someday, but first I want to emphatically state something it seems no one else is saying (at least not in mainstream press coverage of the post):

You cannot fix Facebook without completely gutting its advertising-driven business model.

And because he is required by Wall Street to put his shareholders above all else, there’s no way in hell Zuckerberg will do that.

Put another way, Facebook has gotten too big to pivot to a new, more “sustainable” business model. The company is on track to earn at least $16 billion in profits in 2017. Wherever the number lands (earnings for the year come out later this month), it’s at least 50 percent growth on the year before. As a stock, Facebook is breaking out in a massive way — it’s priced at roughly 36 times earnings — a healthy premium to the S&P’s average of around 25. Facebook’s financials from its first year through 2016 are in the opening image above. Here’s the stock price in 2017:

The stock is trading at $186 or so today; it began the year at roughly $120. As we all know, a stock price is the market’s estimate of future earnings. If Zuckerberg decides to really “fix” Facebook, well, that stock price will nosedive. And that will lead angry shareholders to sue the stuffing out of the company, and likely demand the CEO’s head on a pike.

If you’ve read “Lost Context,” you’ve already been exposed to my thinking on why the only way to “fix” Facebook is to utterly rethink its advertising model. It’s this model which has created nearly all the toxic externalities Zuckerberg is worried about: It’s the honeypot which drives the economics of spambots and fake news, it’s the at-scale algorithmic enabler which attracts information warriors from competing nation states, and it’s the reason the platform has become a dopamine-driven engagement trap where time is often not well spent.

To put it in Clintonese: It’s the advertising model, stupid.

We love to think our corporate heroes are somehow super human, capable of understanding what’s otherwise incomprehensible to mere mortals like the rest of us. But Facebook is simply too large an ecosystem for one person to fix. And anyway, his hands are tied from doing so. So instead, he’s doing what people (especially engineers) always do when the problem is so existential they can’t wrap their minds around it: He’s redefining the problem and breaking it into constituent parts.

Here are two scenarios for what might come of Zuckerberg’s 2018 quest:

  1. Facebook identifies a set of issues (Abuse and Hate, Interference by Nation States, Time Well Spent) and convenes working groups with panels of experts and pundits. The press is duly impressed, the lobbyists make sure Congress is kept in the loop, and in the end, they come up with well-intentioned but feckless point solutions which are implemented with little to no effect. The stock keeps climbing.
  2. Zuckerberg does the equivalent of dropping corporate acid and realizes the only way to fix Facebook is to make a massive, systemic change. He orders his team to redesign the entire Facebook product suite around a new True North: No longer will his company be driven by engagement and data collection, but rather by whether or not individual users report that they are happier after using the service. This leads to a massive rethink of the product and advertising platform, which after much debate shift from an audience model (deep data, specific to each individual) to a contextual model (not buying people, but buying the context in which those people are engaging). And given that most of an individual’s context on Facebook has to do with engaging with friends and family, well, ad inventory plunges. Maybe, just maybe, Facebook decides to charge a subscription fee, say, $10 a person per year. That alone could arguably bring in $20+ billion annually, but…let’s remember, I’m describing an acid trip.

Which do you think will happen?

Yeah, me too. If Zuckerberg picks #2, his business would shrink by tens of billions of dollars. It’d still be an awesome business, and he’d probably be a lock for Man of the Year. But he’d get his ass sued into oblivion by angry shareholders.

Then again … Zuckerberg is one of several tech founders who hold supermajority shares that give him absolute control over the future of his company. The stated reason for this structure, popularized by Google founders Sergey Brin and Larry Page, was to ensure that the capitalistic vagaries of Wall St. don’t force visionary companies to hew to corporatist rationale as they mature. (Page and Brin actually name-checked the New York Times as their inspiration.)

Will Zuck drop acid? Let’s just say this: He certainly could.

Awake Under Anesthesia

One day in the nineteen-eighties, a woman went to the hospital for cancer surgery. The procedure was a success, and all of the cancer was removed. In the weeks afterward, though, she felt that something was wrong. She went back to her surgeon, who reassured her that the cancer was gone; she consulted a psychiatrist, who gave her pills for depression. Nothing helped—she grew certain that she was going to die. She met her surgeon a second time. When he told her, once again, that everything was fine, she suddenly blurted out, “The black stuff—you didn’t get the black stuff!” The surgeon’s eyes widened. He remembered that, during the operation, he had idly complained to a colleague about the black mold in his bathroom, which he could not remove no matter what he did. The cancer had been in the woman’s abdomen, and during the operation she had been under general anesthesia; even so, it seemed that the surgeon’s words had lodged in her mind. As soon as she discovered what had happened, her anxiety dissipated.

Henry Bennett, an American psychologist, tells this story to Kate Cole-Adams, an Australian journalist, in her book “Anesthesia: The Gift of Oblivion and the Mystery of Consciousness.” Cole-Adams hears many similar stories from other anesthesiologists and psychologists: apparently, people can hear things while under anesthesia, and can be affected by what they hear even if they can’t remember it. One woman suffers from terrible insomnia after her hysterectomy; later, while hypnotized, she recalls her anesthesiologist joking that she would “sleep the sleep of death.” Another patient becomes suicidal after a minor procedure; later, she remembers that, while she was on the table, her surgeon exclaimed, “She is fat, isn’t she?” In the nineteen-nineties, German scientists put headphones on thirty people undergoing heart surgery, then, during the operation, played them an abridged version of “Robinson Crusoe.” None of the patients recalled this happening, but afterward, when asked what came to mind when they heard the word “Friday,” many mentioned the story. In 1985, Bennett himself asked patients receiving gallbladder or spinal surgeries to wear headphones. A control group heard the sounds of the operating theatre; the others heard Bennett saying, “When I come to talk with you, you will pull on your ear.” When they met with him, those who’d heard the message touched their ears three times more often than those who hadn’t.

As a teen-ager, Cole-Adams was diagnosed with scoliosis. She came to dread the dangerous surgery she might someday need to correct the curvature of her spine; in middle age, she grew increasingly stooped and realized that the surgery was inevitable. She began researching “Anesthesia” in 1999, perhaps as a means of mastering her fear, and, after nearly twenty years’ work, has written an obsessive, mystical, terrifying, and even phantasmagorical exploration of anesthesia’s shadowy terra incognita. In addition to anesthesia, the book describes Cole-Adams’s childhood, her parents, a number of love affairs, and various spiritual experiences and existential crises—a drifting, atemporal assemblage meant to evoke the anesthetized mind. Cataloguing her many forgotten experiences and unfelt feelings, she wonders to what extent we already live in an anesthetized state.

Anesthesiologists speak of patients descending through “the planes of anesthesia”—from the “plane of disorientation” through the “plane of delirium” toward the “surgical plane.” While we go under, they monitor our brain waves, titrating their “anesthetic cocktails” to make sure that we receive neither too little sedation nor too much. (A typical cocktail contains a painkiller, a paralytic, which prevents muscles from flinching at the knife—the early paralytics were based on curare, the drug South American warriors put on the poison-tipped arrows with which they shot Europeans—and a “hypnotic,” which brings unconsciousness.) But even as they operate the machinery of anesthesia with great skill, anesthesiologists remain uncertain about the drugs’ underlying mechanisms. “Obviously we give anesthetics and we’ve got very good control over it,” one doctor tells Cole-Adams, “but in real philosophical and physiological terms we don’t know how anesthesia works.” The root of the problem is that no one understands why we are conscious. If you don’t know why the sun comes up, it’s hard to say why it goes down.

In her attempts to understand what going under anesthesia really entails, Cole-Adams encounters what Kate Leslie, an Australian anesthesiologist, calls “spooky little studies”—odd, suggestive, and often unreplicable experiments. In one such study, from 1993, Ian Russell, a British anesthesiologist, ties a tourniquet around the forearms of thirty-two women undergoing major gynecological surgery. He administers his anesthetic cocktail—the hypnotic drug midazolam, along with a painkiller and a muscle relaxant—then, by tightening the tourniquet, prevents the muscle relaxant from entering each woman’s hands and wrist. During surgery, a recorded message plays through headphones in which Russell addresses each patient by name. “If you can hear me, I would like you to open and close the fingers of your right hand,” he says. If the woman moves her hand, Russell lifts one of the earpieces and asks her to squeeze his fingers; if she squeezes, he asks her to do it again if she is in pain. Of the thirty-two patients Russell tested, twenty-three squeezed to suggest they could hear, and twenty squeezed again to say they were in pain. Although Russell was supposed to test sixty patients, he was so unnerved by these results that he ended the trial early. It’s possible, he suggests, that the women were conscious and suffering on the operating table. If that’s the case, then general anesthesia might be better described as “general amnesia.” (Afterward, none of the women recalled hearing Russell’s voice or squeezing his hand.)

Could Russell have failed to administer enough anesthetic? (He says he used as much as he would in any normal operation.) Could he have been feeling movements that weren’t there or that weren’t significant? (Cole-Adams attends an operation with Russell, during which he again employs his “isolated forearm technique”; this time, when the patient grips his fingers, he deems it a meaningless “reflex movement.”) It’s possible that the patients were aware, but only partially—aware enough to squeeze Russell’s hand, but not enough to know their own names, for instance, or to recall anything about their lives. Daniel Dennett, the philosopher of mind, argues that consciousness is not a binary state but a gradual one; it’s possible to be “sort of” conscious and, during that time, to have a “sort of” self. Every year, thousands of people have colonoscopies under so-called conscious sedation: they are drowsily awake and can communicate with their doctors, but remember little or nothing about the procedure afterward. If you don’t remember the pain, does it still count? Did it happen to “you”? Maybe being “sort of” aware during surgery isn’t so bad.

There are, Cole-Adams finds, no perfect studies of awareness under anesthesia. Studies like Russell’s, which use real patients, tend to be poorly designed; those that use volunteers don’t involve real surgery. Investigating anesthetized awareness without surgery, she writes, “is a bit like testing your windshield wipers without rain.” “A surgical incision has a galvanizing effect even on an anesthetized patient,” she explains. “As the scalpel enters, her heart beats faster, her blood pressure rises, sometimes she jerks. She might edge closer to consciousness.” Another approach, of course, is simply to ask large numbers of people what they remember after they emerge from surgery. A study published in The Lancet in 2000 surveyed twelve thousand patients who had undergone surgery at two Swedish hospitals. The researchers found eighteen people whom they could be confident had been awake. The patients were surveyed at different times—just after the operation and at various intervals thereafter. Some remembered their experiences right away; others had no recollections at first but recalled the surgery after a week or two. One remembered the surgery in detail only twenty-four days afterward.

We tend to think that being anesthetized is like falling asleep. Cole-Adams concludes that the truth is stranger—it’s more like having your mind disassembled, then put together again. An American anesthesiologist named George Mashour tells her that “the unconscious mind is not this black sea of nothingness,” but an “active and dynamic” place; one might imagine the anesthetized mind as a concert hall in which the conductor is missing but the orchestra still performs. The systems of the brain continue to operate, but they don’t synch up. Perhaps because everyone’s mind devolves into cacophony differently, people have a bewildering array of experiences while anesthetized. An anesthesiologist in Melbourne recalls a patient who found himself awake during bypass surgery; although the man experienced his “chest being sawn open and pulled apart,” he didn’t feel pain, and “was amazed by it, not terrified by it.” (“He was a really easygoing sort of bloke,” the anesthesiologist recalls.) Another doctor recalls a patient waking from surgery looking “very pleased with herself”; when asked why she was so happy, she said, “You won’t believe it, but I’ve just had a half-hour orgasm!”

Not everyone is so lucky. At the center of “Anesthesia” is the story of Rachel Benmayor, an Australian woman who, twenty-five years ago, found herself paralyzed but capable of sensation during her Cesarean section. (Benmayor’s doctors had intended for her to go under general anesthesia.)* At first, she didn’t know where she was. Then she felt extraordinary, mounting pain, and a feeling as though a truck were driving back and forth across her midsection. (“When you open up the abdominal cavity, the air rushing onto the unprotected internal organs gives rise to a feeling of great pressure,” Cole-Adams explains.) She felt that she wasn’t breathing. (A ventilator was doing it for her.) Only when she heard the doctors talking to her husband—“Glenn, look, you’ve got a little girl!”—did she realize that she was awake during her operation. Now fully aware, she began to panic. She felt that the pain and paralysis would drive her mad. She decided to try to go “into” the pain. Instead of fleeing from the experience, she tells Cole-Adams, “I consciously turned myself around, and started feeling the pain and going into the pain, and just letting the pain sort of enclose me.” She felt herself descending into the agony—then, suddenly, although she could still feel the surgery, she found herself in a library. “It was like I was in the presence of everything that has been ever known by man and everything that ever will be,” she recalls.

All things that could be known or understood were there, whether man had ever known or understood them. . . . It was actually too big, too immense and I felt that I’d been forced there, and I had to survive it.

While she was in the library, a voice spoke to her, communicating several messages. The first: “Life is breath.” The second: “Everything is important, and nothing is important.” The third: “When people move through pain, they find the truth.” The fourth message had to do with Benmayor’s husband (she won’t tell Cole-Adams what it was); finally, the voice told her “that our life’s purpose as a human being was to procreate. That having children was our primary focus as human beings.” Even during the operation, Benmayor says, she resisted this idea. Then she felt the surgeons stitching her up and returned to her body. When she was able to move again, she summoned her doctor, who wept when he realized what had happened, and her husband, to whom she dictated the messages. For a time, she shook uncontrollably. Later, she held her daughter, Allegra. “Newborns have such a black stillness in their eyes,” she tells Cole-Adams, “and I just sort of held her in my arms and I felt like she’d just come from where I had been.”

Reading “Anesthesia,” you could easily miss the fact that Benmayor’s surgery happened in 1990; since then, Cole-Adams explains, new protocols and monitoring techniques have made her already rare experience even less likely to happen. Because the many interviews, studies, and anecdotes in the book are presented in a thematic, associative order, you must struggle to notice whether they are from the nineteen-sixties (the heyday of weird science) or the nineteen-nineties, when—one imagines—they are more reliable. “Anesthesia” does include a capsule history of anesthetics, starting with the discovery of ether, in the eighteen-forties. But it is not a chronicle of technological progress, and you will not emerge from it with any sense of where the technology is going. One of the ironies of the book is that, if anesthetics were perfected, a window into the unconscious mind would have closed.

One of the central lessons of “Anesthesia” is how much can be accomplished in the midst of ignorance. It may be true that, “in real philosophical and physiological terms,” we don’t know exactly how anesthesia works—but that doesn’t stop anesthesiologists from doing their jobs better every year. Meanwhile, many of the improvements in anesthesia have ripple effects that have nothing to do with the mysteries of the mind. By “deactivating the powerful muscles of the torso,” for example, improved paralytic drugs have given surgeons “safe access to the fortified cities of the chest and the belly,” and this has made new, life-saving surgeries possible.

And yet, even as the craft of anesthesia is being improved in a brightly lit room, another room, just next door, remains dark. In Cole-Adams’s view, existence is like that. We experience, think, do, and feel quite a lot without fully understanding who, what, or where we are. In one of her book’s best moments, she describes a dream she’s had. She’s searching for a lost dog; she finds it “in a pound, on the edge of town.” It’s a beautiful red setter, lying in a cage. “As I enter, the creature raises its head toward me and I see with slow shock that its muzzle has been sewn up with fishing line,” she writes. “The red dog pulls itself off the ground and limps toward me. Rising on its hind legs, it puts its forelegs on my shoulders, and rests its head against the left side of my neck.” She knows the dog wants to be saved, but doesn’t know how to help it; inexplicably, she also knows that its name is Gadget, and at the end of the dream she leaves Gadget behind.

To Cole-Adams, the beautiful dog with its mouth sewn shut is “a visceral evocation of the plight of a person who might be both anesthetized and aware.” Their embrace, meanwhile, signifies “the chasm that exists between the conscious and unconscious minds: the one wordy, knowing, exclusive; the other voiceless, persistent, inclusive.” We all have our inner Gadgets: unconscious, partial, silenced selves that, by design, our minds don’t perceive. They’re always there; sometimes, under anesthesia, they try to speak.

How Actual Smart People Talk About Themselves

I’ve never met or interviewed Donald Trump, though like most of the world I feel amply exposed to his outlooks and styles of expression. So I can’t say whether, in person, he somehow conveys the edge, the sparkle, the ability to connect, the layers of meaning that we usually associate with both emotional and analytical intelligence.

But I have had the chance over the years to meet and interview a large sampling of people whom the world views the way Trump views himself. That is, according to this morning’s dispatches, as “like, really smart,” and “genius.”

In current circumstances it’s relevant to mention what I’ve learned this way.

I once spent weeks on interviews for a magazine profile of a man who had won a Nobel Prize in medicine while in his 40s. Back in my college days, one afternoon our biology professor passed around Dixie cups full of champagne before beginning the day’s classroom lecture, because of news that he had just won the Nobel Prize. In decades of reporting on the tech industry, I’ve interviewed people—Gates, Jobs, Musk, Page—whose names have become shorthands for their respective forms of brilliance, plus several more Nobel winners, plus others who are not famous but deserve to be.

During a brief stint of actually working at a tech company, I learned that some of the engineers and coders were viewed as just operating on a different plane: The code they wrote was better, tighter, and more elegant than other people’s, and they could write it much more quickly.

I’ve had the chance to interview and help select winners of fancy scholarships. Recently, in Shanghai, I interviewed a Chinese woman now in her early 20s who became the women’s world chess champion at age 16—and we were speaking in English.

If you report long enough on politics and public life, even there you will see examples of exceptional strategic, analytic, and bargaining intelligence, along with a lot of clownishness.

In short (as Lloyd Bentsen might once have put it): I’ve known some very smart people. Some very smart people have been friends of mine. And Donald Trump …


Here are three traits I would report from a long trail of meeting and interviewing people who by any reckoning are very intelligent.

  • They all know it. A lifetime of quietly comparing their ease in handling intellectual challenges—at the chess board, in the classroom, in the debating or writing arena—with the efforts of other people gave them the message.
  • Virtually none of them (need to) say it. There are a few prominent exceptions, of talented people who annoyingly go out of their way to announce that fact. Muhammad Ali is the charming extreme exception illustrating the rule: He said he was The Greatest, and was.* Most greats don’t need to say so. It would be like Roger Federer introducing himself with, “You know, I’m quite graceful and gifted.” Or Meryl Streep asking, “Have you seen my awards?”
  • They know what they don’t know. This to me is the most consistent marker of real intelligence. The more acute someone’s ability to perceive and assess, the more likely that person is to recognize his or her limits. These include the unevenness of any one person’s talents; the specific areas of weakness—social awkwardness, musical tin ear, being stronger with numbers than with words, or vice versa; and the incomparable vastness of what any individual person can never know. To read books seriously is to be staggered by the knowledge of how many more books will remain beyond your ken. It’s like looking up at the star-filled sky.
We can think of exceptions—the people who are eminent in one field and try unwisely to stretch that to another. (Celebrated scientists or artists who become ordinary pundits; Michael Jordan the basketball genius becoming Michael Jordan the minor-league baseball player.) But generally the cliché is true: The clearest mark of intelligence, even “genius,” is awareness of one’s limits and ignorance.

* * *

On the other hand, we have something known as the Dunning-Kruger effect: The more limited someone is in reality, the more talented the person imagines himself to be. Or, as David Dunning and Justin Kruger put it in the title of their original scientific-journal article, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.”

Odds are that the world’s most flamboyant illustration of this dangerous misperception, despite his claimed omniscience, would not even recognize the term, nor its ominous implications in his case.