Is there any real distinction between ‘high’ and ‘low’ pleasures?


Ramen heaven. From Juzo Itami’s 1985 noodle-western Tampopo. Courtesy Criterion Collection

 

Parents often say that they don’t mind what their children do in life just as long as they are happy. Happiness and pleasure are almost universally seen as among the most precious human goods; only the most curmudgeonly would question whether benign enjoyment is anything other than a good thing. Disagreement soon creeps in, however, if you ask whether some forms of pleasure are better than others. Does it matter whether our pleasures are spiritual or carnal, intellectual or stupid? Or are all pleasures pretty much the same?

Utilitarianism, as a moral philosophy, puts pleasure at the centre of its concerns, arguing that actions are right to the extent that they increase happiness and decrease suffering, wrong to the extent that they cause the opposite. Yet even the early Utilitarians couldn’t agree about whether pleasures should be ranked. Jeremy Bentham believed that all sources of pleasure are of equal quality. ‘Prejudice apart,’ he wrote in The Rationale of Reward (1825), ‘the game of push-pin is of equal value with the arts and sciences of music and poetry.’ His protégé John Stuart Mill disagreed, arguing in Utilitarianism (1863) that: ‘It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.’

Mill argued for a distinction between ‘higher’ and lower pleasures. His distinction is difficult to pin down, but it more or less tracks the distinction between capacities thought to be unique to humans and those we share with other animals. Higher pleasures depend on distinctively human capacities, which have a more complex cognitive element, requiring abilities such as rational thought, self-awareness or language use. Lower pleasures, in contrast, require mere sentience. Humans and other animals alike enjoy basking in the sun, eating something tasty or having sex. Only humans engage in art, philosophy and so on.

Mill was certainly not the first to make this distinction. Aristotle among others thought that the senses of touch and taste were ‘servile and brutish’; the pleasures of eating were ‘as brutes also share in’ and so less valuable than those that used the more developed human mind. Yet many would continue to side with Bentham, arguing that we are really not so intellectual and high-minded as all that, and we might as well accept ourselves for the brutes that we are, shaped by biochemistry and animal drives.

The difficulty with resolving this disagreement about the kinds of pleasure is not that we struggle to agree on the right answer. It’s that we’re asking the wrong question. The entire debate assumes a clear divide between the intellectual and bodily, the human and the animal, which is no longer tenable. These days, few of us are card-carrying dualists who believe that we are made of immaterial minds and material bodies. We have plenty of scientific evidence for the importance of biochemistry and hormones in all that we do and think. Nonetheless, dualistic assumptions still inform our thinking. So, what happens if we take seriously the idea that the physical and the mental are inseparable, that we are fully embodied beings? What would it mean for our ideas about pleasure?

The dining table is a good place to start. Along with sex, food is usually considered to be the quintessential lower pleasure. All animals eat, using the senses of smell and taste. It doesn’t require any complex cognition to conclude that something is delicious. Philosophers have generally assumed that to take pleasure in eating is simply to sate a primitive desire. So, for instance, Plato believed that cookery could never be a form of art, because it ‘never regards either the nature or reason of that pleasure to which she devotes herself, but goes straight to her end’.

Plato and his successors, however, failed to appreciate something that the French food writer Jean Anthelme Brillat-Savarin captured pithily in The Physiology of Taste (1825): ‘Animals feed; man eats; only the man of intellect knows how to eat.’ Brillat-Savarin drew a distinction between mere animal feeding, which is the ingestion of food as fuel, and human eating, which can and should engage more than just our most basic carnal desires. Eating is a complex act. Simply gathering the ingredients takes thought, since what we buy not only requires planning but affects the wellbeing of growers, producers, animals and the planet. Cooking involves knowledge of ingredients, the application of skills, the balancing of different flavours and textures, considerations of nutrition, care for the ordering of courses or the place of the dish in the rhythm of the day. Eating, at its best, brings all these things together, adding an attentive aesthetic appreciation of the end result.

Eating illustrates how the difference between higher and lower pleasures is not what you enjoy but how you enjoy it. Wolfing down your food like a pig at a trough is a lower kind of pleasure. Preparing and eating it using the powers of reflection and attention that only a human being possesses turns it into a higher pleasure. This form of higher pleasure need not be intellectual in the academic sense. An accomplished chef might be judging the balance of flavours and textures intuitively; a home cook might simply be thinking about what his guests are most likely to enjoy. What makes the pleasure higher is that it engages our more complex human abilities. It expresses more than just the brute desire to satisfy a craving.

For every pleasure, it should not be difficult to see that the how matters more than the what. Furthermore, the highest pleasures do not merely use our distinctively human capacities, they use them for a valuable end. Someone who goes to the opera to be seen in a new dress is not experiencing the higher pleasures of music but indulging the lower pleasures of vanity. Someone who reads Dr Seuss with a careful ear for language gets a higher pleasure than someone who mechanically recites The Waste Land (1922) without any understanding of what T S Eliot was doing.

Even sex, perhaps the most primal human pleasure of all, can be appreciated in higher and lower ways. To adapt Brillat-Savarin, animals copulate, humans make love. In the intensity of sexual arousal and orgasm, it might not seem that our evolved human capacities are doing much work. But sex is highly contextual, and changes its nature depending on whether it is part and parcel of a genuine relationship between two human beings, however brief, or merely the satisfaction of a brute urge.

Mill was therefore right to believe that pleasures come in higher and lower forms but wrong to think that we could distinguish them on the basis of what we take pleasure in. What matters is how we enjoy them, which means that higher and lower pleasures are not two discrete categories but form a continuum. I think the persistence of the bogus form of the higher/lower pleasures distinction is a result of the fact that some things are more obviously amenable to richer appreciation than others. Art is typically enjoyed in mind-engaging ways, food all too often consumed in an animalistic one. This has led us to mistake association for identity.

The mistake also betrays a false view of human nature, which sees our intellectual or spiritual aspects as being what truly makes us human, and our bodies as embarrassing vehicles to carry them. When we learn how to take pleasure in bodily things in ways that engage our hearts and minds as well as our five senses, we give up the illusion that we are souls trapped in mortal coils, and we learn how to be fully human. We are neither angels above bodily pleasures nor crude beasts slavishly following them, but psychosomatic wholes who bring heart, mind, body and soul to everything we do.


Let’s bring back the Sabbath as a radical act against ‘total work’



 

As a boy in late-1940s Memphis, my dad got a nickel every Friday evening to come by the home of a Russian Jewish immigrant named Harry Levenson and turn on his lights, since the Torah forbids lighting a fire in your home on the Sabbath. My father would wonder, however, if he were somehow sinning. The fourth commandment says that on the Sabbath ‘you shall not do any work – you, your son or your daughter, your male or female slave, your livestock, or the alien resident in your towns’. Was my dad Levenson’s slave? If so, how come he could turn on Levenson’s lights? Were they both going to hell?

‘Remember the Sabbath day, and keep it holy.’ The commandment smacks of obsolete puritanism – the shuttered liquor store, the cheque sitting in a darkened post office. We usually encounter the Sabbath as an inconvenience, or at best a nice idea increasingly at odds with reality. But observing this weekly day of rest can actually be a radical act. Indeed, what makes it so obsolete and impractical is precisely what makes it so dangerous.

When taken seriously, the Sabbath has the power to restructure not only the calendar but also the entire political economy. In place of an economy built upon the profit motive – the ever-present need for more, in fact the need for there to never be enough – the Sabbath puts forward an economy built upon the belief that there is enough. But few who observe the Sabbath are willing to consider its full implications, and therefore few who do not observe it have reason to find any value in it.

The Sabbath’s radicalism should be no surprise given the fact that it originated among a community of former slaves. The 10 commandments constituted a manifesto against the regime that they had recently escaped, and rebellion against that regime was at the heart of their god’s identity, as attested to in the first commandment: ‘I am the Lord your God, who brought you out of the land of Egypt, out of the house of slavery.’ When the ancient Israelites swore to worship only one god, they understood this to mean, in part, they owed no fealty to the pharaoh or any other emperor.

It is therefore instructive to read the fourth commandment in light of the pharaoh’s labour practices described earlier in the book of Exodus. He is depicted as a manager never satisfied with his slaves, especially those building the structures for storing surplus grain. The pharaoh orders that the slaves no longer be given straw with which to make bricks; they must now gather their own straw, while the daily quota for bricks would remain the same. When many fail to meet their quota, the pharaoh has them beaten and calls them lazy.

The fourth commandment presents a god who, rather than demanding ever more work, insists on rest. The weekly Sabbath placed a hard limit on how much work could be done and suggested that this was perfectly all right; enough work was done in the other six days. And whereas the pharaoh relaxed while his people toiled, Yahweh insisted that the people rest as Yahweh rested: ‘For in six days the Lord made heaven and earth, the sea, and all that is in them, but rested the seventh day; therefore the Lord blessed the Sabbath day and consecrated it.’

The Sabbath, as described in Exodus and other passages in the Torah, had a democratising effect. Yahweh’s example – not forcing others to labour while Yahweh rested – was one anybody in power was to imitate. It was not enough for you to rest; your children, slaves, livestock and even the ‘aliens’ in your towns were to rest as well. The Sabbath wasn’t just a time for personal reflection and rejuvenation. It wasn’t self-care. It was for everyone.

There was a reason the fourth commandment came where it did, bridging the commandments on how humans should relate to God with the commandments on how humans should relate to one another. As the Old Testament scholar Walter Brueggemann points out in his book Sabbath as Resistance (2014), a pharaonic economy driven by anxiety begets violence, dishonesty, jealousy, theft, the commodification of sex and familial alienation. None of these had a place in the Torahic economy, which was driven not by anxiety but by wholeness, enoughness. In such a society, there was no need to murder, covet, lie, commit adultery or dishonour one’s parents.

The Sabbath’s centrality to the Torahic economy was made clearer in other laws building upon the fourth commandment. Every seventh year, the Israelites were to let their fields ‘rest and lie fallow, so that the poor of your people may eat; and what they leave the wild animals may eat’. And every 50th year, they were to not only let their fields lie fallow, but forgive all debts; all slaves were to be freed and returned to their families, and all land returned to its original inhabitants. This was a far cry from the pharaonic regime where surplus grain was hoarded and parcelled out to the poor only in exchange for work and loyalty. There were no strings attached; the goal wasn’t accumulating power but reconciling the community.

It is unknown if these radical commandments were ever followed to the letter. In any case, they are certainly not now. The Sabbath was desacralised into the weekend, and this desacralisation paved the way for the disappearance of the weekend altogether. The decline of good full-time work and the rise of the gig economy mean that we must relentlessly hustle and never rest. Why haven’t you answered that email? Couldn’t you be doing something more productive with your time? Bring your phone with you to the bathroom so you can at least keep busy.

We are expected to compete with each other for our own labour, so that we each become our own taskmaster, our own pharaoh. Offer your employer more and more work for the same amount of pay, so that you undercut your competition – more and more bricks, and you’ll even bring your own straw.

In our neo-pharaonic economy, we are worth no more than the labour we can perform, and the value of our labour is being ever devalued. We can never work enough. A profit-driven capitalist society depends on the anxious striving for more, and it would break down if there were ever enough.

The Sabbath has no place in such a society and indeed upends its most basic tenets. In a Sabbatarian economy, the right to rest – the right to do nothing of value to capital – is as holy as the right to work. We can give freely to the poor and open our homes to refugees without being worried that there will be nothing left for us. We can erase all debts from our records, because it is necessary for the community to be whole.

It is time for us, whatever our religious beliefs, to see the Sabbatarian laws of old not as backward and pharisaical, but rather as the liberatory statements they were meant to be. It is time to ask what our society would look like if it made room for a new Sabbath – or, to put it a different way, what our society would need to look like for the Sabbath to be possible.

Overlooked Biomarker May Predict Cancer Immunotherapy Response


Hi. I’m David Kerr, professor of cancer medicine from the University of Oxford.

As you know, I’ve believed for some time that ploidy (the number of sets of chromosomes in a cell) measurements have been largely overlooked in terms of the relative importance of prognostic markers for the whole range of different cancer types. One of my friends and colleagues, Prof Håvard Danielsen, from the University of Oslo, has established what I consider one of the best assays in the world for measuring ploidy and DNA content in tumors, and characterizing it from paraffin-embedded tissue. It’s a very available system.

New Data on an Overlooked Biomarker

To this end, a really interesting study was recently reported in Science by Teresa Davoli and colleagues,[1] who looked at ploidy across a whole range of tumor types and tried to relate ploidy measurements to the hallmarks of cancer. Aneuploidy is also known as somatic copy number alteration (SCNA), and [the authors] refined the definition a little, into [SCNAs] that were predominantly focal and [SCNAs] that involved whole chromosome arms, that is, broader changes in DNA content.

This was a fascinating study, in which they found that ploidy was indeed associated with prognosis, but also with signatures associated with proliferation, including increased expression of genes encoding enzymes involved in control of proliferation and the cell cycle—this was mainly for the focal SCNAs. The larger, arm-level and whole-chromosome SCNAs tended to be associated with reduced expression of immune signatures and reduced immune infiltration.

This is an important first step in showing that ploidy does correlate with drivers or hallmarks of cancer. There are some subtle differences that lead you toward increased proliferation or increased immune evasion, both of which work in terms of establishing the cancer.

The researchers then retrospectively analyzed ploidy levels in melanoma tumor specimens saved from two large trials using immune checkpoint inhibitors. What they showed was that aneuploid tumors (ie, those with elevated levels of SCNAs) were indeed more resistant to immune checkpoint blockade.
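
To make the kind of measurement involved a little more concrete, here is a minimal sketch of how an arm-level SCNA burden might be summarized from segmented copy-number calls. It is purely illustrative: the Segment structure, the thresholds, and the toy data are assumptions for this sketch, not the pipeline Davoli and colleagues actually used.

```python
# Hypothetical sketch: summarising arm-level SCNA burden from segmented
# copy-number calls. Thresholds and data are illustrative only, not the
# pipeline used in the Science paper.

from dataclasses import dataclass

@dataclass
class Segment:
    chrom_arm: str      # e.g. "8q"
    frac_of_arm: float  # fraction of the arm covered by this segment
    log2_ratio: float   # tumour/normal copy-number log2 ratio

GAIN, LOSS = 0.2, -0.2          # illustrative log2 cut-offs for gain/loss
ARM_FRACTION_CUTOFF = 0.5       # call an arm-level event if >50% of the arm is altered

def arm_level_scna_count(segments: list[Segment]) -> int:
    """Count chromosome arms where more than half the arm is gained or lost."""
    altered_fraction: dict[str, float] = {}
    for seg in segments:
        if seg.log2_ratio >= GAIN or seg.log2_ratio <= LOSS:
            altered_fraction[seg.chrom_arm] = (
                altered_fraction.get(seg.chrom_arm, 0.0) + seg.frac_of_arm
            )
    return sum(frac > ARM_FRACTION_CUTOFF for frac in altered_fraction.values())

# Toy example: one arm mostly gained, one arm with only a focal loss
tumour = [
    Segment("8q", 0.9, 0.6),    # broad gain -> counts as an arm-level SCNA
    Segment("3p", 0.1, -0.8),   # focal loss -> does not
]
print(arm_level_scna_count(tumour))  # -> 1
```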

It’s a really fascinating new potential use of ploidy, which has been around for decades and has been much overlooked. Not only is it a prognostic marker and therefore gives us important information about the biological behavior of our patients’ tumors, but it may also be a predictive factor in that aneuploid tumors may be relatively resistant to immune checkpoint blockade.

[Because this study was conducted] retrospectively, there’s much work yet to be done. [However, it offers] a really interesting insight for a marker, which I think could be delivered relatively easily in a sophisticated way in every pathology lab in the world. It surprises me that we don’t measure ploidy more. [It is] another interesting insight—some beautiful basic science uncovering the subtle differences in ploidy, how it links to the different hallmarks of cancer, and how it may aid us in selecting patients who may benefit less from immune checkpoint inhibitors.

Thank you for listening. I would be really keen for you to have a look at the Science paper and to post any comments that you may care to share. This is an unfinished story but one that must be followed up prospectively.


Oral Edoxaban May Be an Alternative to Dalteparin for Cancer-Related VTE


Hello. I am David Kerr, professor of cancer medicine from the University of Oxford, in England. I want to talk about a study published in the February 15 edition of the New England Journal of Medicine.[1] This was a beautifully well-designed and conducted randomized trial that compared edoxaban, a novel oral anticoagulant, with dalteparin in patients with cancer who had a venous thromboembolism (VTE).

The trials group calls itself the Hokusai Group. For those of you who don’t know Hokusai, he was an important Japanese painter who created many woodblock prints and was a member of the school of ukiyo-e, painters of passing, or everyday, life. There is something about the ephemerality of their art that has always attracted me. Among the more famous paintings are different views of Mount Fuji. My apologies if I am being a smarty pants, a clever clogs. But some of these paintings, particularly The Great Wave, are absolutely beautiful.

This trials group randomly assigned just over 1000 patients to receive oral edoxaban (after 5 days of low-molecular-weight heparin) or subcutaneous dalteparin. It was a noninferiority trial; treatment was given for a minimum of 6 months and a maximum of 12 months after the initial venous thromboembolic event. The primary endpoint, a composite of the kind being used more and more in these trials of novel oral anticoagulants, was recurrent VTE or major bleeding. The trial showed that edoxaban is not inferior to subcutaneous dalteparin.

Within the composite endpoint, there were fewer further thromboembolic events in the edoxaban arm but more bleeding events in the edoxaban arm. They evened each other out in terms of the noninferiority.
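
For readers less familiar with how such a composite, noninferiority comparison is judged, here is a minimal sketch using a simple normal-approximation confidence interval for the risk difference. The event counts and the noninferiority margin are invented for illustration; they are not the Hokusai VTE results, and the actual trial’s statistical method may well differ.

```python
# Hypothetical sketch of a noninferiority check on a composite endpoint
# (recurrent VTE or major bleeding). Counts and margin are invented, not
# the actual trial results.

from math import sqrt

def composite_risk_difference(events_new, n_new, events_ref, n_ref, margin):
    """Return the risk difference (new - reference), a two-sided 95% CI,
    and whether the upper bound stays below the noninferiority margin."""
    p_new, p_ref = events_new / n_new, events_ref / n_ref
    diff = p_new - p_ref
    se = sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    return diff, (lo, hi), hi < margin

# Invented numbers: ~13% composite event rate in both arms, 5-point margin
diff, ci, noninferior = composite_risk_difference(67, 520, 70, 520, margin=0.05)
print(f"risk difference {diff:.3f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}, noninferior: {noninferior}")
```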

This was quite a useful study. It is an important first step in being able to show that we can substitute useful oral treatment for daily subcutaneous dalteparin. Patients don’t like dalteparin. We’ve all had patients who self-administer dalteparin during chemotherapy, and they are covered in bruises; they are sore. It’s a nuisance and it’s necessary, but if we find that we can substitute edoxaban, which is given orally and is well tolerated, then this is quite an important landmark study.

The major bleeds were predominantly in the upper gastrointestinal (GI) area. Quite a number of the patients who had GI bleeding had previously undergone GI surgery of some sort. Interpret that as you will.

This is an important study—well designed, well conducted, well reported—that tells us that we have a possible alternative to discuss with patients, offering them edoxaban instead of subcutaneous dalteparin.

Thank you for listening. Thank you for allowing me to segue into stories about Hokusai and the painters of ukiyo-e. Think about the passing life, the ephemerality. W.H. Auden called life the gallop to the grave; there’s some truth in that.


Is Aspirin the New (Old) Immunotherapy?


Hello. I’m David Kerr, professor of cancer medicine from the University of Oxford.

For those of you who follow me on Medscape and WebMD, you know that I don’t like aspirin: I love it. I think it’s a wonderful drug. There’s a lot of work going on just now looking at its molecular pharmacology.

There’s a great recent paper published by Dr Tsuyoshi Hamada and colleagues[1] looking at the role of aspirin in relation to immune checkpoint blockade. It’s a lovely study. Using part of a retrospective sample collection, they were able to look at the impact of post-primary treatment use of aspirin in patients with resectable colorectal cancer.

They hypothesized that patients who had tumors with low expression of programmed cell death ligand 1 (PD-L1) would be more sensitive to the beneficial effects of aspirin. They looked at just over 600 patients. [The study used] a beautiful statistical analysis that stratified [findings] and accounted for all of the other contributory factors that might be tied up with aspirin’s use: PIK3CA mutations, [CDX2 expression], and even tumor-infiltrating lymphocytes. It’s what you would expect from a research group of this quality. The analysis was done very carefully indeed.

At the end of the study, they showed that their hypothesis was correct. Patients with tumors with relatively low expression of PD-L1 (also known as CD274) did better than those patients who had tumors that expressed high levels of PD-L1, for which aspirin seemed to have no benefits at all.

This all fits in with the link-up between the prostaglandin E2 pathway and immune suppression. It suggests that aspirin may be yet another potential partner drug that could enhance the activity of the drugs, currently generating huge excitement, that block PD-L1, PD-1 and the whole immune checkpoint pathway.

It was a really nice, very carefully conducted study. The results were quite compelling in terms of the survival benefits accrued to postsurgical use of aspirin in patients with low levels of PD-L1 expression. It again shows the importance of the microenvironment in determining the outcome of tumor behavior. This gives some potential therapeutic insight into why aspirin might be a very useful companion drug to give in combination with these rather more expensive, more complex immune blockade inhibitors.

Aspirin wins again. There’s yet another plausible biological mechanism supporting its use.

 


If We Can See the Outcome, We Can Change the Outcome


Tom Lee: This is Tom Lee, and I’m pleased to be chatting with Roy Rosin, a very interesting and effective Chief Innovation Officer at Penn Medicine. A bunch of organizations have chief innovation officers these days. I think Penn’s program is among the most robust in the country, and they’ve got one of the most interesting leaders. Roy, you came from outside health care to be a chief leader of innovation at Penn. What were you doing before, and why did you make this move?

Roy Rosin: It’s great to talk to you, Tom. It was fun to have an opportunity to do something like it in health care when I did. I had spent many years at Intuit, which a lot of people know as the maker of TurboTax or Quicken or QuickBooks, and it’s just a fantastic company out in Silicon Valley. I had been there quite a long time, had built some of the software businesses like Quicken, but back in 2003 — my wife is from Philadelphia and wanted to move home to be near family, so I made that transition with her. I sometimes say I never really left Intuit, I just wanted to stay married, and I needed to go back and rethink what I wanted to do.

Intuit was fabulous. They gave me an opportunity to stay with the company. In fact, I stayed for another 9 years working remotely, and in those 9 years what I was able to do was think about enterprise-scale innovation. How do you get new things going and turn ideas into actions and outcomes, and how do you get a lot of new things off the ground that actually have some impact? So we did that. We built these programs over the course of 9 years. I could do it remotely because we had new businesses starting all over the place, and we learned a tremendous amount.

Intuit was a learning organization, and after 9 years it was a lot of travel. I had kids, going back and forth, and I had started talking with Kevin Volpp who I know you know well. He’s an old friend of mine from college, and Kevin had started an innovation center at Penn that also was doing just fascinating work in behavior changes applied to health care.

Some of what we had done at Intuit was . . . thinking about, how do you get people to make different and better decisions? A lot of what we did was for profit at Intuit, but some of it worked so well that you start to wonder, well, gee, how can I do this for purely a social mission to provide some meaningful difference in the world? So talking to Kevin got me fascinated. And then seeing about what was happening in health care, [in terms of] moving toward value-based plans and into value-based care, all of a sudden I realized this was a fascinating time to be going after social missions that I could be part of as a nonclinical person. That’s what led to my leaning into it and coming over and starting to work with Penn.

Lee: I’ve been following along and in just a few years, if memory serves me, you guys have launched like 90 or so projects. That’s a lot. It’s an overwhelming number. How is it going? Is 90 too many? Is 90 not enough? How are you thinking about the overall scale of what you’re doing?

Rosin: Yeah. It is a lot of projects. Now, to be fair, some of them are very light-touch. Sometimes we’re just advising or just consulting on a project and we meet with somebody a couple of times a month and it doesn’t take up a tremendous amount of time. [But] some of the projects we’re leading in a much deeper dive, so it’s a little bit hard to get a picture. But we’ve done 90 over about 5 years. And it’s going really well.

One of the things that I absolutely love to see is outcomes, measurable outcomes, where we’re defining what needle we want to move. We want to move 30-day readmissions or an infection rate or something that is important and that we’re able to move. And we find that we can. Those projects were across a lot of different areas. They cross new care models, models about how do you get people with uncontrolled hypertension to be normotensive, new models around how do you treat women who have had a miscarriage so they don’t end up in the ER or the OR, even models like IMPaCT, just a wonderful program that has to do with how do you treat vulnerable populations where the normal care design isn’t working well and they end up being what’s often called superutilizers in the wrong setting of care and the wrong cost and not being treated well. And from those new models to technology interventions to — Kevin and David [Asch] spent a lot of time developing connected health interventions, so seeing and knowing things we never saw or knew before, things that are going on in the home or in different settings that could determine the health outcomes, and as we start to see those, we can change the outcome — to all kinds of technology interventions. It’s a broad, broad range of work that we’ve been able to do, and overall I’m happy with the way it’s been going.

Lee: When I’ve spoken with you, you told me that everywhere you look you see low-hanging fruit and I’m sure it’s true, but with so much opportunity to improve, how do you prioritize, how do you choose what to do and what not to do?

Rosin: It’s a blend. We try to stay aware of what the system’s priorities are. David Asch — he’s my co-conspirator — and I, we’ll often go on listening tours and spend time with the CEOs of the entities and the chief medical officers and chief nursing officers. We will try to plug into some of the operating mechanisms where the senior leadership is talking about system priorities. Of course, we’re aware of big changes in our environment like when an area moves to bundled payments and all of a sudden you’re responsible in a different way than you have been before. [For example,] we just signed a fairly public big deal with Independence Blue Cross where we’re now responsible for all 30-day readmissions and not being paid for 30-day readmissions, so those certainly set some of our priorities, but I think what’s an important insight is that the way innovation works is that you have to find passion. You have to find people who really want to make a change. Innovation doesn’t work well as an assignment where you go and say hey, I want you to work on this and please go do it.

What you’re looking for are clinical champions and care teams who are engaged, who want to work on the problem, and I always say they’re pulling instead of us pushing, so we are a blended tops-down and bottoms-up model. We also will do bottoms-up work that may involve a fascinating idea, or a new idea from a clinical or administrative leader or somebody on the front lines, that doesn’t necessarily seem fully aligned with some of the system priorities just because there’s a lot of energy and passion, and it might be off our radar, and those are exciting, too. We do have a blend, and it’s a portfolio of projects that we pay attention to. We stay mostly aligned with the top priorities in the health system.

Lee: Is the goal making money for Penn, or making money for innovators, or is the top priority changing Penn’s health care?

Rosin: We have a fairly lucky position, I’d say, in that we do get to spend time [doing] what I call “de-risking” more future models. We see the world moving toward more risk-bearing contracts. We see an increasing focus on value-based care. And we have a tremendous number of colleagues across Penn who are innovating, frankly, who are changing the way we work and doing good work. We’re certainly not the only people who are innovating care, and in many cases what we try to do is enable and create infrastructure where every team can go faster and do better work.

But our work has often stayed focused on where things become a little bit more risk-bearing and the future where we expect to be pretty soon. Now, we will certainly do operations projects so we’re looking for economic wins, we’re looking for places that our work can have a measurable economic impact on what Penn is doing, but we’re also in some areas that don’t.

A good example might be [that] when we started off we did work on some benefit redesign. We have 30,000 employees and they’re self-insured, so the cost of health care goes straight to the bottom line. Doing work there that made our own employees better off and healthier saved a tremendous amount of money and it bought us the right to work on things like hypertension, where David had a strong desire to look at our folks who had uncontrolled hypertension to try to get them to a normal blood pressure.

If you’re perfect there, you don’t save or make any money in the near term, but it’s, as you know, a critically important area of health. We try to keep our eye out to a balance of both long and short term as well as things that are system priorities, tops-down and bottoms-up.

Lee: It sounds like you’re amassing political capital and using it as well as financial capital. But now, have you had disappointments that bug you? Things that you think should’ve worked and they fell short, they couldn’t move any needle, at least so far for reasons that you hadn’t anticipated or you haven’t figured out how to surmount yet?

Rosin: The ones I put in that bucket haven’t not worked, they’re maybe what I call stalled. When we do our work, we often will do small pilots with a small sample size because we are trying to get things ready before we scale them. That’s one of the big changes in an innovation approach — that you don’t scale until you figure out what works and you can validate a lot of the hypotheses that you may have about a new intervention or a new care model. We had a long list of successes at the pilot stage, but to the extent that I feel frustrated, it is with how slowly some of those moved into a scaled model. Real wins and real success for us are scale of impact, things that help lots and lots of people, millions or an entire population.

And what we love is when we do work that gets taken even beyond Penn’s walls and applied in other locations. So the pace of getting from a successful pilot to a scaled win is probably the thing that’s been frustrating. It’s a solvable problem.

We have a new CEO at the Hospital of the University of Pennsylvania, a woman named Regina Cunningham who came up as a nurse and a chief nursing officer, and we had a great meeting with Regina the other day where we had a couple of important successful pilots. One was around people who are discharged on IV antibiotics — a  high-risk, high–readmission rate population — and another was the liver population, cirrhosis and liver transplant.

In both cases the teams had done extraordinarily good work cutting that readmission rate, and in the case of liver, dramatically reducing the cost of the intervention from $1,000 a person down to $50 a person, so cutting 95% of the cost out. And even with those kind of results sometimes the pilot would stay sort of in this middle ground of no man’s land.

We were always smart enough not to throw it over the wall. We know that us doing pilots and then going to find a champion doesn’t work. We have certainly done a pretty good job of engaging operations and moving them upstream and trying to stay in tight contact with the operational leaders of the system and have good partnerships, but I love what Regina did. In this meeting she said, “Look, here’s what we have to do. You guys have to do a better job of thinking of the budget cycles and getting in front of my leadership team. Here’s the setting I’m providing to you, here is the timing [for how] we’re going to do it. Here’s the story that you need to sell and the analysis that we need to have.”

And so, making sure we’re absolutely clear on who will operationalize and how good is good enough, what is that economic argument that we need, and making sure we have the audience set up early before it’s time to do a handoff. We’re getting better at that, and that’s what is getting me around some of the things that I might otherwise call disappointments. With hindsight, I think about an intervention very early [on] that turned into a success but wasn’t for a long time. It was the early days of connected health, and you already saw this at Partners with Joe Kvedar, and Kevin and David were already doing a wonderful job talking about automated hovering and talking about we need to stay connected once people leave the hospital. We have done a version of that in the CHF population, and working with one of our physicians we ended up with zero preventable readmissions, which was probably better than anyone expected. Again, a fairly small sample size.

Everyone saw this, and the analysis was done that it was successful and financially important. We decided to scale it. And a whole year went by, I mean a full year passed without forward motion, and what we realized is that organizationally, there wasn’t anybody at that time — this was many years ago now — who owned the job of preventing readmissions, of keeping someone healthy and out of the hospital. The executive team then created a new structure, basically our service lines, so now you were not just focused on inpatient and separately on outpatient, but more focused on patient populations, and it was remarkable what happened after the organizational change.

All of a sudden, now somebody had this job and was accountable for keeping people healthy and out of the hospital, and then they were looking for a tool that essentially did what we had figured out how to do — all of a sudden it was adopted. The problem wasn’t a technology problem or a, say, can you come up with some kind of service model that works — it was actually, gosh, I need an organizational approach that embraces and wants it. That was interesting to see that when the org[anization] changed the innovation became successful, became adopted, and became scaled.

Lee: Let me close by asking a question that may be impossible, because we can’t ask someone which child they love the most, but is there any particular innovation that you would bring up as among those that you love the most?

Rosin: It is hard to name your favorite child. I certainly love Shreya Kangovi’s IMPaCT program because of how completely she rethought the use of community health workers and how they’re hired and identified, trained, and deployed, and how they get at the extraordinarily difficult problems of social determinants. [That] would be one. And she’s seeing a return of a few dollars for every dollar invested, which I think is phenomenal.

If I could tell a single story right now, it might be a program we call Heart Safe Motherhood. Heart Safe Motherhood is a neat program. It grows out of that same connected health approach of seeing and knowing about things we never used to see and know about.

In this case it was postpartum preeclampsia and that was the number one driver of both 7-day readmissions and morbidity in the maternal population. And the team had done a whole bunch of work, good work, and yet it wasn’t moving the needle. There were free walk-in clinics. We called people and tried to follow up. They weren’t answering the phone or returning our calls and [were] not showing up to the free walk-in clinic.

The work was led by two doctors . . . and at first they realized that the preferred modality was texting, because in many cases this population . . . they didn’t want to talk to us maybe ever, certainly not at any particular point in time, so we would send the women home with a blood pressure cuff. One of the interesting insights early on was that it wasn’t high-tech. It wasn’t one of these wireless cuffs that would automatically broadcast. It was actually a low-tech, off-the-shelf cuff from a Walmart or CVS, because the women could just text us the number, and that addressed connectivity and wireless issues. The team started to iterate and play with the texting: could they actually get these blood pressure values?

Around the same time, ACOG, the American College of OB-GYN, created a standard that said, look, to keep this population safe you need two blood pressure values in that first week after discharge, one around day 3 and one around day 7. And in all of the top systems, including Penn, we had that for nobody. We were at 0%, and by sending women home with these blood pressure cuffs and beginning the texting protocol, that team ended up going from 0% to 82% coverage, where we had the blood pressure information we needed to keep the women safe.
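
(An aside for technically minded readers: a protocol like this ultimately comes down to parsing a texted reading and deciding whether to escalate. Here is a minimal, hypothetical sketch of that step. The message format and escalation wording are assumptions, and the cut-offs are the commonly cited hypertensive and severe-range thresholds, not necessarily the ones Heart Safe Motherhood uses.)

```python
# Hypothetical sketch of triaging texted postpartum blood-pressure readings.
# Message format and escalation logic are assumptions; thresholds reflect
# commonly used cut-offs, not necessarily the Heart Safe Motherhood protocol.

import re

SEVERE_SYS, SEVERE_DIA = 160, 110   # severe-range hypertension
HIGH_SYS, HIGH_DIA = 140, 90        # hypertensive range

def triage_bp_text(message: str) -> str:
    """Parse a texted reading like '148/92' and return a triage decision."""
    match = re.search(r"(\d{2,3})\s*/\s*(\d{2,3})", message)
    if not match:
        return "unreadable: ask the patient to resend as systolic/diastolic"
    sys_bp, dia_bp = int(match.group(1)), int(match.group(2))
    if sys_bp >= SEVERE_SYS or dia_bp >= SEVERE_DIA:
        return "severe range: call the patient now and arrange evaluation"
    if sys_bp >= HIGH_SYS or dia_bp >= HIGH_DIA:
        return "elevated: notify the clinician and request a repeat reading"
    return "normal: thank the patient and remind them of the next check-in"

print(triage_bp_text("My reading today is 164/96"))  # -> severe range ...
```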

But why I like the program so much is [that] it wasn’t just about this automated hovering, it wasn’t just about having that information. The real outcome you’re trying to change is could you avoid the morbidity and the readmissions and the bad outcomes, bad outcomes both for the patient and for the health system. And they were able to do that. The numbers are growing; they’ve moved a couple hundred patients through the program now, with zero readmissions.

It went from being the biggest driver of 7-day readmissions to, so far, no readmissions, which [is] the real impact that we’re looking for, and then you get scale, because this program is now being adopted not only in other settings across Penn, but even in other cities and in other places. When we can get that kind of endorsement and support from, for example, the National Preeclampsia Foundation and others, and see the stuff start to scale and go to other places to have a population effect, I get really happy. That’s probably it.

Lee: That is a great story, Roy. It’s a lovely one and it also shows that innovation really has to occur at that disease level. It can’t happen across the board for all readmissions. But you have a lot to be proud of and I know a lot more great work’s going to come out. The approaches you’re using I think will be instructive for all the other folks out there listening. I want to thank you for taking the time to share your insights with the NEJM Catalyst audience today.

Artificial Intelligence Will Show Us Our True Selves


Garry Kasparov doesn’t believe machines are here to replace us. They are going to show us who we really are.

“AI will force us to be more human,” Kasparov says. Automation, by his reckoning, will make us focus on the traits that humanity can do better than artificial intelligence, like creativity and imagination. We’ll leave the rest to machines.

The former chess world champion, who two decades ago traded victories in matches with IBM’s supercomputer Deep Blue, recently told Inverse those impossible-to-automate, uniquely human traits will stay that way.

Computers can be entrusted with just about any mental labor that reduces to calculation and logic, but the initial spark of human inspiration will likely always have to come from a human mind, Kasparov says.



“I think in the next few years we’ll see a dramatic regrouping in the corporate world,” Kasparov says, predicting that new jobs will be built around compensating for A.I.’s creative shortcomings. “There are those who think it’s a diminishing job for humans, because we will be stuck with very small territory. To the contrary, I think it could have much bigger impact.”

Human imagination will be especially important as A.I. moves out of the closed systems that have historically been its territory. To pick an example of a closed system with which Kasparov is intimately familiar, consider chess.

With all due respect to Anatoly Karpov and Vladimir Kramnik, Kasparov’s most famous opponent will always be Deep Blue. That machine’s digital mind was narrowly engineered to accomplish its task of competing with a grandmaster, but that wouldn’t necessarily mean it could transfer that reasoning power to some other problem. That’s where humans come in.

“Many of the white-collar jobs will move into the machines’ domain, and it’s not the end of the world. We should start thinking of ourselves more as humans to make unique contributions to the machine-human collaboration.”


Lynne Parker, an A.I. and robotics researcher and the director of the Distributed Intelligence Laboratory at the University of Tennessee, agrees: the future of work isn’t in removing humans from the equation, but rather zeroing in on our most irreplaceable traits.

“There are some types of human skills that are going to be extraordinarily difficult to do any sort of automation, and creativity is a good example,” she tells Inverse. “Judgment and common sense are things that we do quite well, but A.I. systems? Not so much.”

As A.I. becomes more advanced, it may become less about programming and more about having the imagination to know what questions to ask the machine.

“People worry about their jobs going away, and I think the thing is jobs consist of a lot of sub-tasks, and some of those tasks might get automated,” Parker posits. “That might mean the mix of things we do in our jobs changes, but we’ll still have lots of value as people.”


Machines have only become more formidable opponents in the two decades since Kasparov’s series with Deep Blue. Consider AlphaGo Zero, a creation of Google’s DeepMind A.I. subsidiary, which last year became the greatest Go player in history with zero human input beyond the initial parameters. AlphaGo Zero won 100 straight matches against its predecessor algorithm AlphaGo, which in turn had defeated the world’s top-ranked human player.

But to focus on those vanquished champions is to pay attention to the wrong humans. AlphaGo Zero may have taught itself Go without human assistance, but its ability to play the game in the first place was the gift of its human programmers.

“That’s a phenomenal result that tells you that as long as we can secure the borders, the limits of a system, so to define a framework, machines can do the rest,” says Kasparov. “So we are now reaching a point where the very concept of human-machine collaboration could be a thing.”


Another Russian émigré foresaw all this more than six decades ago. In his 1956 short story “Jokester,” Isaac Asimov describes the omniscient supercomputer Multivac, a machine able to solve every conceivable problem. But it takes a rare breed of people, selected for insight shaped not by logic but by indefinable intuition, to pull those answers out of this computational leviathan.

“Early in the history of Multivac, it had become apparent that the bottleneck was the questioning procedure,” Asimov writes. “Multivac could answer the problem of humanity, all the problems, if — if it were asked meaningful questions. But as knowledge accumulated at an ever faster rate, it became ever more difficult to locate those meaningful questions. Reason alone wouldn’t do. What was needed was a rare type of intuition.”

As we fast approach the future Asimov imagined 62 years ago, Kasparov says not just asking questions but also recognizing the answers when they come is what will make us the most useful collaborators with A.I. — and, with that, it will let us find our most human selves.

“A machine will never recognize the concept of diminishing returns,” he says. “If you let a machine operate, the most powerful machines, the most inventive machines, to operate in the open-ended system, it will never stop, because it’s still about actually recognizing what is the end of the story.”

A.I., for all its brilliance, can only ever think inside its parameters, inside the box. No matter how big that box gets, it falls to us who can think outside it to guide machines as partners.

It makes sense that this would be Garry Kasparov’s vision. After all, consider what Asimov called those rare, uniquely inventive minds he imagined as Multivac’s collective imagination.

He called them grandmasters.

Neil DeGrasse Tyson Gives 3 Reasons Why Humans Are Still So Ignorant About The Universe


How did we even get here?

We have nanobots that swim inside our bodies and monitor our vital organs. We have autonomous robots that work alongside human doctors to perform complex surgeries. There are rovers driving across the surface of Mars and, as you read this, three humans are orbiting high above you, living in the cold vacuum of space.


In many ways, it seems like we’re living in the future. But if you ask Neil deGrasse Tyson, it seems like we’re little more than infants trying to clutch sunbeams in our fists.

At the 2018 World Government Summit in Dubai, Tyson gave a presentation to an enraptured audience. The topic? How humans will – most definitely not – colonize Mars (Tyson, if you aren’t aware, is an eternal skeptic).

It seems fitting then that, following his rather depressing speech, he took the time to discuss how humans are, in many ways, entirely ignorant.

Here are three things that, according to Tyson, show just how far we have to go:

Dark matter

A portion of our Universe is missing. A rather significant portion, in fact.

Scientists estimate that less than 5 percent of our Universe is made up of ordinary matter (protons, neutrons, electrons, and all the things that make our bodies, our planet, and everything we’ve ever seen or touched).

The rest of the matter in our Universe? Well, we have no idea what it is.

“Dark matter is the longest standing unsolved problem in modern astrophysics,” Tyson said.

He continued with a slightly exasperated sigh, “It has been with us for eighty years, and it’s high time we had a solution.”

Yet, we aren’t exactly close.

The problem stems from the fact that dark matter doesn’t interact with electromagnetic radiation (aka light). We can only observe it because of its gravitational influence – say, by a galaxy spinning slower or faster than it should.

However, there are a number of ongoing experiments that seek to detect dark matter, such as SNOLAB and ADMX, so answers may be on the horizon.

Dark energy

Dark energy is, perhaps, one of the most interesting scientific discoveries ever made. This is because it may hold the keys to the ultimate fate of our Universe.

Tyson explains it as “a pressure in the vacuum of space forcing the acceleration of the [expansion of] the Universe.”

Does that sound confusing? That’s probably because it is.

If you weren’t aware, all of space is expanding – the space between the galaxies, the space between the Earth and the Sun, the space between your eyes and your computer screen.

Of course, this expansion is minimal. It’s so minimal that we don’t even notice it when we look at our local Solar System. But on a cosmic scale, its impact is profound.

Because space is so vast, billions of light-years of space are expanding, causing many galaxies to fly away from us at unimaginable speeds.
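
To put a rough number on those ‘unimaginable speeds’, here is a back-of-the-envelope calculation using Hubble’s law, v ≈ H₀ × d, with a Hubble constant of roughly 70 km/s per megaparsec. The figures are illustrative assumptions for this sketch, not numbers from Tyson’s talk, and the simple linear law is only an approximation at the very largest distances.

```python
# Back-of-the-envelope illustration of cosmic expansion using Hubble's law,
# v = H0 * d. H0 is taken as ~70 km/s per megaparsec; values are illustrative only.

H0 = 70.0                      # km/s per megaparsec (approximate)
LY_PER_MPC = 3.26e6            # light-years in one megaparsec (approximate)

def recession_speed_km_s(distance_ly: float) -> float:
    """Approximate recession speed of a galaxy at the given distance."""
    distance_mpc = distance_ly / LY_PER_MPC
    return H0 * distance_mpc

for d in (1e8, 1e9, 1e10):     # 100 million to 10 billion light-years
    print(f"{d:.0e} ly -> about {recession_speed_km_s(d):,.0f} km/s")
# ~2,100 km/s, ~21,000 km/s, ~215,000 km/s -- a sizeable fraction of light speed
```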

And if this flight continues, eventually the cosmos will be nothing more than a cold unendingly dark void. If it reverses, the Universe will collapse in on itself in a Big Crunch.

Unfortunately, we have absolutely no idea which will happen, as we have no clue what dark energy is.

Abiogenesis

We know a lot about how life evolved on Earth. About 3.5 billion years ago, the earliest forms of life emerged. These single-celled creatures dominated our planet for billions and billions of years.

A little over 600 million years ago, the first multicellular organisms took up residence. The Cambrian explosion followed soon after and *boom* the fossil record was born.

Just 500 million years ago, plants started taking to land. Animals soon followed, and here we are today.

However, Tyson is quick to point out that we don’t understand the most vital component of evolution – the beginning.

“We still don’t know how we go from organic molecules to self-replicating life,” Tyson said, and he noted how unfortunate this is because “that is basically the origin of life as we know it.”

The process is called abiogenesis. In non-scientific jargon, it deals with how life arises from nonliving matter. Although we have a number of hypotheses related to this process, we don’t have a comprehensive understanding or any evidence to support one.

There we have it. The biggest mysteries of the cosmos just happen to be some of the most important and fundamental. So, when will we finally figure out these scientific conundrums and move out of our infancy? Tyson refuses to make a prediction.

If there’s one thing he knows, it’s how very little humans actually know: “I’m not very good at predicting the future, and I’ve looked at other people’s predictions and seen how bad those are even among those that say ‘I am good.’ So I can tell you what I want to happen, but that’s different than what I think will happen.”

Confessions of a Renegade Psychiatrist


I felt this sensation in the pit of my stomach – it was a combination of sympathy and anger – listening to Annie tell me, through tears, about her postpartum journey into the world of psychiatry.

“Three separate psychiatrists dismissed me when I expressed concerns about taking an addictive medication like Klonopin. It’s been two years, I can’t get off it, I’m on 4 psych meds and I feel worse than I ever did before I started this treatment.”

Annie was ushered into the promise-filled halls of psychiatry 3 months after the birth of her first baby when she began to experience racing heart, insomnia, vigilance, irritability, and a host of physical complaints including joint pain and hair loss. No one did blood work, asked about her diet, or cared about any of the myriad observations about her body and its changes in functioning.  This was a “head-up” intervention. I believe women deserve better. People deserve better.

Most patients who come to me for treatment of depression and anxiety do so because they want answers. They want to know WHY they are struggling. The closest thing to an answer they will be offered by their prescribing psychiatrist or primary care doc is some reductionist hand-waving about serotonin imbalances. I think it is time to speak to these patients with respect and truthfulness, and to offer them more than a life-long relationship with a pill (or pills, as it will inevitably become over the years).

First, let’s review some basics:

Depression is NOT a Serotonin Deficiency

Thanks to direct-to-consumer advertising and complicit FDA endorsement of evidence-less claims, the public has been sold an insultingly oversimplified tale about the underlying driver of depression. Here’s how we know depression is not a serotonin deficiency corrected by Zoloft:

  • There has never been a single study, in humans, to validate the theory of low serotonin in depression.  Low levels are found in a minority of patients.
  • An antidepressant marketed as Stablon (tianeptine) increases the reuptake of serotonin (reducing serotonin activity), yet it appears to be just as effective as antidepressants that block reuptake or have no effect on serotonin at all.
  • Manipulation of serotonin levels (depletion or enhancement) does not consistently result in a depressive syndrome.
  • These medications are used to treat an impossibly non-specific and broad array of illnesses from obsessive compulsive disorder to anorexia to premenstrual dysphoria to bipolar depression to irritable bowel syndrome.
  • Antidepressants of all categories seem to work about the same regardless of their presumed mechanism of action, with about 73% of the response unrelated to pharmacologic activity.

You might wonder: Well, then how is it that antidepressants are a billion dollar industry and I have all these friends who are so much better on them?  Some pioneering individuals have investigated the data supporting antidepressant efficacy and have made compelling arguments for what is called the “active placebo” effect accounting for “breaking blind” in placebo-controlled trials. In short, the expectation of relief and subsequent change in symptoms experienced by “responders” is related to perception of side effects. This analysis suggests that antidepressants may only have 10% efficacy above and beyond the placebo effect. When you also consider the suppression of negative studies (permission of sedatives in trials, replacement of non-responders, and allowance of placebo washout) by pharmaceutical companies, you may start to worry that you have been sold a bill of goods. When inefficacy, long-term risks, increase in suicidal tendencies and violent behavior are taken into account, it is a marvel to observe the star-power of these medications.

What Is It Then? Inflammation!

Inflammation is a buzzword that produces 41 million+ Google hits for a reason: it appears to underlie just about every chronic disease plaguing Americans today. A contribution of genetic vulnerabilities likely determines who develops heart disease or cancer or obsessive compulsive disorder, but many researchers are convinced that depression may have a significant inflammatory component. A fever is one of your immune system’s mechanisms for eradicating intruders, and suppressing the fever in no way resolves the underlying infection or supports the body’s return to balance. Similarly, suppressing symptoms of depression does not achieve rebalancing, and will likely result in the Whack-a-Mole phenomenon of shifting symptoms and protracted resolution.

There appears to be a specific subset of non-responders to medication who have measurable markers of inflammation, as explored in this study. We know that medications such as interferon, given to patients with hepatitis, result in significant rates of depression and even suicide, and we know that anti-inflammatory agents such as infliximab or even aspirin can result in resolution of symptoms. Investigators like Miller and Raison have discussed, in a series of wonderful papers, the conceptualization of depression as “sickness behavior”, with its accompanying social withdrawal, fatigue, loss of appetite, and decreased mobility. Recent meta-analyses have identified at least 24 studies correlating levels of inflammatory markers such as CRP and cytokines such as IL-6 and TNF-alpha with states of depression.

What Drives Inflammation?

What causes inflammation in the body that can affect the brain? This is the subject of an excellent book, Why Isn’t My Brain Working? A Revolutionary Understanding of Brain Decline and Effective Strategies to Recover Your Brain’s Health — and it turns out the list is long. But these are the contributors that I see most commonly in my practice:

Sugar

It’s in almost every packaged food. Seriously. Look for it and you will find it. It may come with different labels – cane sugar, crystalline fructose, high fructose corn syrup – but it’s all sugar. The body handles fructose and glucose differently, however, which may account for why fructose is seven times more likely to result in glycation end products – sticky protein clumps that cause inflammation. In addition to the above mood and anxiety rollercoaster, sugar causes changes in our cell membranes, in our arteries, our immune systems, our hormones, and our gut, as I discuss here.

Food Intolerances

Gluten, soy, and corn have been identified as allergenic foods, and a leading speculation as to how these foods have become, and are becoming, more allergenic points to their processing, hybridization, and genetic modification, which render them unrecognizable to our immune systems and turn them into vehicles of unwelcome information. Gluten and processed dairy, when incompletely digested, result in peptides which, once through the gut barrier, can stimulate the brain and immune system in inflammatory ways.

Autoimmunity

The epidemic incidence of autoimmune disorders in this country is a direct reflection of environmental assault on our systems. The body’s ability to distinguish self from other starts with the gut and our host defenses there. Unfortunately, it doesn’t end there, because autoimmune disorders typically have psychiatric manifestations. This makes sense – the body’s immune system is misfiring, and the immune cells of the brain (called microglia) are following suit. Beyond rampant inflammation, autoimmune disorders such as Hashimoto’s thyroiditis (more here) also result in symptoms related to damage to tissues. Low or erratic thyroid function can cause anxiety, depression, flattened mood, cloudy thinking, metabolism changes, and fatigue. Sometimes even the presence of immune-system misfiring can predict depression, as was noted in this recent study, in which women with thyroid autoantibodies in pregnancy went on to develop postpartum depression.

Before You See A Psychiatrist

Diet/nutrition

Do a 30-day diet overhaul. If you feel committed to the cure, eliminate these provocative foods: corn, soy, legumes, dairy, and grains. What do you eat? You’ll eat pastured/organic meats, wild fish, eggs, fruit, vegetables, and nuts/seeds. If the results are not revolutionary, then you may be someone for whom nightshade vegetables, nuts, or eggs are also inflammatory. If that seems entirely overwhelming, then start with dairy and gluten. If that is too much, then gluten is my top pick.

Coconut Oil

Introduce 1-2 tablespoons of virgin coconut oil to give your brain an easy source of fuel that does not require significant digestion. When your brain is inflamed and your blood sugar is out of balance, your brain cells end up starving for the nutrients they need to make energy. This can be an effective shortcut.

Turmeric

I use this spice in therapeutic doses, and it has recently been demonstrated to be as effective as Prozac. If you cook with it, use pepper and oil (red palm, coconut, olive oil, ghee) for enhanced absorption. (See: Groundbreaking Study Finds Turmeric Extract Superior to Prozac for Depression.)

Fermented Foods

Naturally fermented foods like sauerkraut, kimchi, and pickles – as well as kefir and yogurt, if you are dairy-tolerant – are a source of beneficial bacteria that can retrain the gut to protect you from unwanted pathogens. A recent study demonstrated that these bacteria can, indeed, affect brain function.

Detox Your Environment

Here’s an important way to call off the dogs of your immune system. Give it less stimulation.

  • Filter air and water
  • Purchase products free of known carcinogens and endocrine disruptors such as parabens, TEA, fragrance (phthalates), sodium lauryl/laureth sulfate, and triclosan
  • Eat organic produce, pastured meat/dairy
  • Make your own cleaning products from household vinegar, baking soda, tea tree oil, or purchase similarly simple products
  • Avoid eating or drinking from heated plastics
  • Avoid cell phone use
  • Avoid processed foods and sugar, consume low-mercury fish
  • Carefully consider risks and benefits of any elective medical interventions

Also see: Toxic Products to Ban From Your Home – Plus Healthier Alternatives to Help You Do It

Promote Healing Messages

I have developed an appreciation of the body’s ability to work towards balance when obstacles are removed. An important obstacle is the stress response, which is activated by many of the above factors as well as by perceptions of busy-ness, lack of downtime and community support, and trauma. Take 10-20 minutes a day (or even 2!) to promote the relaxation response by breathing in to a count of 6 and breathing out to a count of 6. Imagine the air flowing in and out through your heart and cultivate a feeling of gratitude. The benefits of this practice have been well studied by the HeartMath Institute.
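To put rough numbers on this (assuming about one second per count): a 6-in/6-out cycle lasts roughly 12 seconds, or about five breaths per minute, so even a 10-minute session amounts to around 50 slow breaths.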

Psychiatry has long suffered from pseudoscience and propaganda. Given an embarrassing history of pathologizing human behavior, applying crude “treatments”, and imposing beliefs about societal welfare on vulnerable populations, we haven’t come very far in the past century. The incidence of mental illness is rising, partly from changes in diagnostic criteria, the commercializing of mental illness, and collusion between doctors and patients around the “quick fix”, and partly because our bodies and minds are crying out in protest at the toxic world we live in.

Take control of your body to heal your mind – take back your health and bear witness to the power of a lifestyle renaissance.

‘Mind over matter’: Stephen Hawking – obituary by Roger Penrose


Theoretical physicist who made revolutionary contributions to our understanding of the nature of the universe.

 

Stephen Hawking at his office at the department of applied mathematics and theoretical physics at Cambridge University in 2005.

The image of Stephen Hawking – who has died aged 76 – in his motorised wheelchair, with head contorted slightly to one side and hands crossed over to work the controls, caught the public imagination, as a true symbol of the triumph of mind over matter. As with the Delphic oracle of ancient Greece, physical impairment seemed compensated by almost supernatural gifts, which allowed his mind to roam the universe freely, upon occasion enigmatically revealing some of its secrets hidden from ordinary mortal view.

Of course, such a romanticised image can represent but a partial truth. Those who knew Hawking would clearly appreciate the dominating presence of a real human being, with an enormous zest for life, great humour, and tremendous determination, yet with normal human weaknesses, as well as his more obvious strengths. It seems clear that he took great delight in his commonly perceived role as “the No 1 celebrity scientist”; huge audiences would attend his public lectures, perhaps not always just for scientific edification.

The scientific community might well form a more sober assessment. He was extremely highly regarded, in view of his many greatly impressive, sometimes revolutionary, contributions to the understanding of the physics and the geometry of the universe.

Hawking had been diagnosed shortly after his 21st birthday as suffering from an unspecified incurable disease, which was then identified as the fatal degenerative motor neurone disease amyotrophic lateral sclerosis, or ALS. Soon afterwards, rather than succumbing to depression, as others might have done, he began to set his sights on some of the most fundamental questions concerning the physical nature of the universe. In due course, he would achieve extraordinary successes against the severest physical disabilities. Defying established medical opinion, he managed to live another 55 years.

His background was academic, though not directly in mathematics or physics. His father, Frank, was an expert in tropical diseases and his mother, Isobel (nee Walker), was a free-thinking radical who had a great influence on him. He was born in Oxford and moved to St Albans, Hertfordshire, at eight. Educated at St Albans school, he won a scholarship to study physics at University College, Oxford. He was recognised as unusually capable by his tutors, but did not take his work altogether seriously. Although he obtained a first-class degree in 1962, it was not a particularly outstanding one.

He decided to continue his career in physics at Trinity Hall, Cambridge, proposing to study under the distinguished cosmologist Fred Hoyle. He was disappointed to find that Hoyle was unable to take him, the person available in that area being Dennis Sciama, unknown to Hawking at the time. In fact, this proved fortuitous, for Sciama was becoming an outstandingly stimulating figure in British cosmology, and would supervise several students who were to make impressive names for themselves in later years (including the future astronomer royal Lord Rees of Ludlow).

Sciama seemed to know everything that was going on in physics at the time, especially in cosmology, and he conveyed an infectious excitement to all who encountered him. He was also very effective in bringing together people who might have things of significance to communicate with one another.

When Hawking was in his second year of research at Cambridge, I (at Birkbeck College in London) had established a certain mathematical theorem of relevance. This showed, on the basis of a few plausible assumptions (by the use of global/topological techniques largely unfamiliar to physicists at the time) that a collapsing over-massive star would result in a singularity in space-time – a place where it would be expected that densities and space-time curvatures would become infinite – giving us the picture of what we now refer to as a “black hole”. Such a space-time singularity would lie deep within a “horizon”, through which no signal or material body can escape. (This picture had been put forward by J Robert Oppenheimer and Hartland Snyder in 1939, but only in the special circumstance where exact spherical symmetry was assumed. The purpose of this new theorem was to obviate such unrealistic symmetry assumptions.) At this central singularity, Einstein’s classical theory of general relativity would have reached its limits.

Meanwhile, Hawking had also been thinking about this kind of problem with George Ellis, who was working on a PhD at St John’s College, Cambridge. The two men had been working on a more limited type of “singularity theorem” that required an unreasonably restrictive assumption. Sciama made a point of bringing Hawking and me together, and it did not take Hawking long to find a way to use my theorem in an unexpected way, so that it could be applied (in a time-reversed form) in a cosmological setting, to show that the space-time singularity referred to as the “big bang” was also a feature not just of the standard highly symmetrical cosmological models, but also of any qualitatively similar but asymmetrical model.

Some of the assumptions in my original theorem seem less natural in the cosmological setting than they do for collapse to a black hole. In order to generalise the mathematical result so as to remove such assumptions, Hawking embarked on a study of new mathematical techniques that appeared relevant to the problem.

A powerful body of mathematical work known as Morse theory had been part of the machinery of mathematicians active in the global (topological) study of Riemannian spaces. However, the spaces that are used in Einstein’s theory are really pseudo-Riemannian and the relevant Morse theory differs in subtle but important ways. Hawking developed the necessary theory for himself (aided, in certain respects, by Charles Misner, Robert Geroch and Brandon Carter) and was able to use it to produce new theorems of a more powerful nature, in which the assumptions of my theorem could be considerably weakened, showing that a big-bang-type singularity was a necessary implication of Einstein’s general relativity in broad circumstances.

A few years later (in a paper published by the Royal Society in 1970, by which time Hawking had become a fellow “for distinction in science” of Gonville and Caius College, Cambridge), he and I joined forces to publish an even more powerful theorem which subsumed almost all the work in this area that had gone before.

In 1967, Werner Israel published a remarkable paper that had the implication that non-rotating black holes, when they had finally settled down to become stationary, would necessarily become completely spherically symmetrical. Subsequent results by Carter, David Robinson and others generalised this to include rotating black holes, the implication being that the final space-time geometry must necessarily accord with an explicit family of solutions of Einstein’s equations found by Roy Kerr in 1963. A key ingredient to the full argument was that if there is any rotation present, then there must be complete axial symmetry. This ingredient was basically supplied by Hawking in 1972.

The very remarkable conclusion of all this is that the black holes that we expect to find in nature have to conform to this Kerr geometry. As the great theoretical astrophysicist Subrahmanyan Chandrasekhar subsequently commented, black holes are the most perfect macroscopic objects in the universe, being constructed just out of space and time; moreover, they are the simplest as well, since they can be exactly described by an explicitly known geometry (that of Kerr).

Following his work in this area, Hawking established a number of important results about black holes, such as an argument that a black hole’s event horizon (its bounding surface) must have the topology of a sphere. In collaboration with Carter and James Bardeen, in work published in 1973, he established some remarkable analogies between the behaviour of black holes and the basic laws of thermodynamics, where the horizon’s surface area and its surface gravity were shown to be analogous, respectively, to the thermodynamic quantities of entropy and temperature. It would be fair to say that in his highly active period leading up to this work, Hawking’s research in classical general relativity was the best anywhere in the world at that time.

Hawking, Bardeen and Carter took their “thermodynamic” behaviour of black holes to be little more than just an analogy, with no literal physical content. A year or so earlier, Jacob Bekenstein had shown that the demands of physical consistency imply – in the context of quantum mechanics – that a black hole must indeed have an actual physical entropy (“entropy” being a physicist’s measure of “disorder”) that is proportional to its horizon’s surface area, but he was unable to establish the proportionality factor precisely. Yet it had seemed, on the other hand, that the physical temperature of a black hole must be exactly zero, inconsistently with this analogy, since no form of energy could escape from it, which is why Hawking and his colleagues were not prepared to take their analogy completely seriously.

Hawking had then turned his attention to quantum effects in relation to black holes, and he embarked on a calculation to determine whether tiny rotating black holes that might perhaps be created in the big bang would radiate away their rotational energy. He was startled to find that irrespective of any rotation they would radiate away their energy – which, by Einstein’s E=mc2, means their mass. Accordingly, any black hole actually has a non-zero temperature, agreeing precisely with the Bardeen-Carter-Hawking analogy. Moreover, Hawking was able to supply the precise value “one quarter” for the entropy proportionality constant that Bekenstein had been unable to determine.
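For concreteness, and in conventional notation (with $M$ the hole’s mass, $A$ its horizon area, $\kappa$ its surface gravity, and $G$, $c$, $\hbar$, $k_B$ the usual physical constants), these results are standardly written as

$$T_H = \frac{\hbar\,\kappa}{2\pi k_B c} = \frac{\hbar c^3}{8\pi G k_B M}\ \text{(for a non-rotating hole)}, \qquad S_{BH} = \frac{k_B c^3 A}{4 G \hbar},$$

so that in natural units ($G = c = \hbar = k_B = 1$) the entropy is exactly one quarter of the horizon area – the proportionality factor that Bekenstein had been unable to determine.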

This radiation coming from black holes that Hawking predicted is now, very appropriately, referred to as Hawking radiation. For any black hole that is expected to arise in normal astrophysical processes, however, the Hawking radiation would be exceedingly tiny, and certainly unobservable directly by any techniques known today. But he argued that very tiny black holes could have been produced in the big bang itself, and the Hawking radiation from such holes would build up into a final explosion that might be observed. There appears to be no evidence for such explosions, showing that the big bang was not so accommodating as Hawking wished, and this was a great disappointment to him.

These achievements were certainly important on the theoretical side. They established the theory of black-hole thermodynamics: by combining the procedures of quantum (field) theory with those of general relativity, Hawking established that it is necessary also to bring in a third subject, thermodynamics. They are generally regarded as Hawking’s greatest contributions. That they have deep implications for future theories of fundamental physics is undeniable, but the detailed nature of these implications is still a matter of much heated debate.

Hawking himself was able to conclude from all this (though not with universal acceptance by particle physicists) that those fundamental constituents of ordinary matter – the protons – must ultimately disintegrate, although with a decay rate that is beyond present-day techniques for observing it. He also provided reasons for suspecting that the very rules of quantum mechanics might need modification, a viewpoint that he seemed originally to favour. But later (unfortunately, in my own opinion) he came to a different view, and at the Dublin international conference on gravity in July 2004, he publicly announced a change of mind (thereby conceding a bet with the Caltech physicist John Preskill) concerning his originally predicted “information loss” inside black holes.

Following his black-hole work, Hawking turned his attentions to the problem of quantum gravity, developing ingenious ideas for resolving some of the basic issues. Quantum gravity, which involves correctly imposing the quantum procedures of particle physics on to the very structure of space-time, is generally regarded as the most fundamental unsolved foundational issue in physics. One of its stated aims is to find a physical theory that is powerful enough to deal with the space-time singularities of classical general relativity in black holes and the big bang.

Hawking’s work, up to this point, although it had involved the procedures of quantum mechanics in the curved space-time setting of Einstein’s general theory of relativity, did not provide a quantum gravity theory. That would require the “quantisation” procedures to be applied to Einstein’s curved space-time itself, not just to physical fields within curved space-time.

With James Hartle, Hawking developed a quantum procedure for handling the big-bang singularity. This is referred to as the “no-boundary” idea, whereby the singularity is replaced by a smooth “cap”, this being likened to what happens at the north pole of the Earth, where the concept of longitude loses meaning (becomes singular) while the north pole itself has a perfectly good geometry.

To make sense of this idea, Hawking needed to invoke his notion of “imaginary time” (or “Euclideanisation”), which has the effect of converting the “pseudo-Riemannian” geometry of Einstein’s space-time into a more standard Riemannian one. Despite the ingenuity of many of these ideas, grave difficulties remain (one of these being how similar procedures could be applied to the singularities inside black holes, which is fundamentally problematic).

There are many other approaches to quantum gravity being pursued worldwide, and Hawking’s procedures, though greatly respected and still investigated, are not the most popularly followed, although all others have their share of fundamental difficulties also.

To the end of his life, Hawking continued with his research into the quantum-gravity problem, and the related issues of cosmology. But concurrently with his strictly research interests, he became increasingly involved with the popularisation of science, and of his own ideas in particular. This began with the writing of his astoundingly successful book A Brief History of Time (1988), which was translated into some 40 languages and sold over 25m copies worldwide.

Undoubtedly, the brilliant title was a contributing factor to the book’s phenomenal success. Also, the subject matter is something that grips the public imagination. And there is a directness and clarity of style, which Hawking must have developed as a matter of necessity when trying to cope with the limitations imposed by his physical disabilities. Before needing to rely on his computerised speech, he could talk only with great difficulty and expenditure of effort, so he had to do what he could with short sentences that were directly to the point. In addition, it is hard to deny that his physical condition must itself have caught the public’s imagination.

Although the dissemination of science among a broader public was certainly one of Hawking’s aims in writing his book, he also had the serious purpose of making money. His financial needs were considerable, as his entourage of family, nurses, healthcare helpers and increasingly expensive equipment demanded. Some, but not all, of this was covered by grants.

To invite Hawking to a conference always involved the organisers in serious calculations. The travel and accommodation expenses would be enormous, not least because of the sheer number of people who would need to accompany him. But a popular lecture by him would always be a sell-out, and special arrangements would be needed to find a lecture hall that was big enough. An additional factor would be ensuring that all entrances, stairways, lifts, and so on were adequate for disabled people in general, and for his wheelchair in particular.

He clearly enjoyed his fame, taking many opportunities to travel and to have unusual experiences (such as going down a mine shaft, visiting the south pole and undergoing the zero-gravity of free fall), and to meet other distinguished people.

The presentational polish of his public lectures increased with the years. Originally, the visual material would be line drawings on transparencies, presented by a student. But in later years impressive computer-generated visuals were used. He controlled the verbal material, sentence by sentence, as it would be delivered by his computer-generated American-accented voice. High-quality pictures and computer-generated graphics also featured in his later popular books The Illustrated Brief History of Time (1996) and The Universe in a Nutshell (2001). With his daughter Lucy he wrote the expository children’s science book George’s Secret Key to the Universe (2007), and he served as an editor, co-author and commentator for many other works of popular science.

He received many high accolades and honours. In particular, he was elected a fellow of the Royal Society at the remarkably early age of 32 and received its highest honour, the Copley medal, in 2006. In 1979, he became the 17th holder of the Lucasian chair of mathematics in Cambridge, some 310 years after Sir Isaac Newton became its second holder. He became a Companion of Honour in 1989. He made a guest appearance on the television programme Star Trek: The Next Generation, appeared in cartoon form on The Simpsons and was portrayed in the movie The Theory of Everything (2014).

It is clear that he owed a great deal to his first wife, Jane Wilde, whom he married in 1965, and with whom he had three children, Robert, Lucy and Timothy. Jane was exceptionally supportive of him in many ways. One of the most important of these may well have been in allowing him to do things for himself to an unusual extent.

He was an extraordinarily determined person. He would insist that he should do things for himself. This, in turn, perhaps kept his muscles active in a way that delayed their atrophy, thereby slowing the progress of the disease. Nevertheless, his condition continued to deteriorate, until he had almost no movement left, and his speech could barely be made out at all except by a very few who knew him well.

He contracted pneumonia while in Switzerland in 1985, and a tracheotomy was necessary to save his life. Strangely, after this brush with death, the progress of his degenerative disease seemed to slow to a virtual halt. His tracheotomy prevented any form of speech, however, so that acquiring a computerised speech synthesiser came as a necessity at that time.

In the aftermath of his encounter with pneumonia, the Hawkings’ home was almost taken over by nurses and medical attendants, and he and Jane drifted apart. They were divorced in 1995. In the same year, Hawking married Elaine Mason, who had been one of his nurses. Her support took a different form from Jane’s. In his far weaker physical state, the love, care and attention that she provided sustained him in all his activities. Yet this relationship also came to an end, and he and Elaine were divorced in 2007.

Despite his terrible physical circumstance, he almost always remained positive about life. He enjoyed his work, the company of other scientists, the arts, the fruits of his fame, his travels. He took great pleasure in children, sometimes entertaining them by swivelling around in his motorised wheelchair. Social issues concerned him. He promoted scientific understanding. He could be generous and was very often witty. On occasion he could display something of the arrogance that is not uncommon among physicists working at the cutting edge, and he had an autocratic streak. Yet he could also show a true humility that is the mark of greatness.

Hawking had many students, some of whom later made significant names for themselves. Yet being a student of his was not easy. He had been known to run his wheelchair over the foot of a student who caused him irritation. His pronouncements carried great authority, but his physical difficulties often caused them to be enigmatic in their brevity. An able colleague might be able to disentangle the intent behind them, but it would be a different matter for an inexperienced student.

To such a student, a meeting with Hawking could be a daunting experience. Hawking might ask the student to pursue some obscure route, the reason for which could seem deeply mysterious. Clarification was not available, and the student would be presented with what seemed indeed to be like the revelation of an oracle – something whose truth was not to be questioned, but which if correctly interpreted and developed would surely lead onwards to a profound truth. Perhaps we are all left with this impression now.

Hawking is survived by his children.

Stephen William Hawking, physicist, born 8 January 1942; died 14 March 2018, aged 76.