Neanderthal Genes Influence Contemporary Humans’ Skull Shape, Brain Size


Individuals carrying these ancient ancestors’ DNA are more likely to have slightly elongated, rather than rounded, brains

The researchers are quick to point out that their findings don’t suggest a link between brain size or shape and behavior, but instead offer an exploration of the genetic evolution of modern brains (Philipp Gunz)

By Meilan Solly

smithsonian.com
December 14, 2018

Neanderthals may have gone extinct some 40,000 years ago, but thanks to long-ago species interbreeding, their genes live on in modern humans.

The implications of this genetic inheritance remain largely unclear, although previous studies have proposed links with disease immunity, hair color and even sleeping patterns. Now, Carl Zimmer reports for The New York Times, a study recently published in Current Biology offers yet another example of Neanderthals’ influence on Homo sapiens: Compared to individuals lacking Neanderthal DNA, carriers are more likely to have slightly elongated, rather than rounded, brains.

This tendency makes sense given Neanderthals’ distinctive elongated skull shape, which Science magazine’s Ann Gibbons likens to a football, as opposed to modern humans’ more basketball-shaped skulls. It would be logical to assume this stretched-out shape reflects similarly protracted brains, but as lead author Philipp Gunz of Germany’s Max Planck Institute for Evolutionary Anthropology tells Live Science’s Charles Q. Choi, brain tissue doesn’t fossilize, making it difficult to pinpoint the “underlying biology” of Neanderthal skulls.

To overcome this obstacle, Gunz and his colleagues used computed tomography (CT) scanning to generate imprints of seven Neanderthal and 19 modern human skulls’ interior braincases. Based on this data, the team established a “globularity index” capable of measuring how globular (rounded) or elongated the brain is. Next, Dyani Lewis writes for Cosmos, the researchers applied this measure to magnetic resonance imaging (MRI) scans of around 4,500 contemporary humans of European ancestry, and then compared these figures to genomic data cataloguing participants’ share of Neanderthal DNA fragments.
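The paper itself does not publish the index as code, but the idea, reducing a 3D braincase to a single rounded-versus-elongated score, can be sketched. Here is a minimal, purely illustrative Python version that scores a point cloud by comparing its spread along its principal axes; the formula and names are assumptions for illustration, not the authors’ landmark-based method:

```python
import numpy as np

def globularity_index(endocast_points: np.ndarray) -> float:
    """Toy globularity score for a cloud of 3D braincase points.

    Illustrative only: values near 1.0 indicate a sphere-like
    (globular) shape; values below 1.0 indicate elongation. The
    published index is derived from landmark-based shape analysis,
    not from this simple principal-axis ratio.
    """
    centered = endocast_points - endocast_points.mean(axis=0)
    # Singular values measure the spread along each principal axis,
    # sorted in descending order (length >= width >= height).
    _, spread, _ = np.linalg.svd(centered, full_matrices=False)
    length, width, height = spread
    return float(np.sqrt(width * height) / length)

# A sphere scores ~1.0; stretching one axis lowers the score.
rng = np.random.default_rng(0)
sphere = rng.normal(size=(5000, 3))
elongated = sphere * np.array([1.3, 1.0, 1.0])
print(globularity_index(sphere), globularity_index(elongated))
```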

Two specific genes emerged in correlation with slightly less globular heads, according to The New York Times’ Zimmer: UBR4, which is linked to the generation of neurons, and PHLPP1, which controls the production of a neuron-insulating sleeve called myelin. Both UBR4 and PHLPP1 affect significant regions of the brain, including the putamen, a forebrain structure that forms part of the basal ganglia, and the cerebellum. As Sarah Sloat explains for Inverse, the basal ganglia influence cognitive functions such as skill learning, fine motor control and planning, while the cerebellum assists in language processing, motor movement and working memory.

In modern human brains, PHLPP1 likely produces extra myelin in the cerebellum; UBR4 may make neurons grow faster in the putamen. Comparatively, Science’s Gibbons notes, Neanderthal variants may lower UBR4 expression in the basal ganglia and reduce the myelination of axons in the cerebellum—phenomena that could contribute to small differences in neural connectivity and the cerebellum’s regulation of motor skills and speech, study co-author Simon Fisher of the Netherlands’ Max Planck Institute for Psycholinguistics tells Gibbons.

Still, the effects of such gene variations are probably negligible in living humans, merely adding a slight, barely discernible elongation to the skull.

“Brain shape differences are one of the key distinctions between ourselves and Neanderthals,” Darren Curnoe, a paleoanthropologist from Australia’s University of New South Wales who was not involved in the study, tells Cosmos, “and very likely underpins some of the major behavioural differences between our species.”

In an interview with The New York Times, Fisher adds that the evolution of UBR4 and PHLPP1 genes could reflect modern humans’ development of sophisticated language, tool-making and similarly advanced behaviors.

But, Gunz is quick to point out, the researchers are not issuing a decisive statement on the genes controlling brain shape, or on the effects of such genes on modern humans carrying fragments of Neanderthal DNA: “I don’t want to sound like I’m promoting some new kind of phrenology,” he tells Cosmos. “We’re not trying to argue that brain shape is under any direct selection, or that brain shape is directly related to behaviour at all.”


The Inventor of the Genetically Modified Potato Comes Clean


How many years did you spend working on creating GM potatoes? Was this all lab-based work or did you get out to see the farms that were growing the potatoes?

During my 26 years as a genetic engineer, I created hundreds of thousands of different GM potatoes at a direct cost of about $50 million. I started my work at universities in Amsterdam and Berkeley, continued at Monsanto, and then worked for many years at J. R. Simplot Company, which is one of the largest potato processors in the world.

I had my potatoes tested in greenhouses or the field, but I rarely left the laboratory to visit the farms or experimental stations. Indeed, I believed that my theoretical knowledge about potatoes was sufficient to improve potatoes. This was one of my biggest mistakes.

Have the GM potatoes you helped create been approved by the FDA and EPA in the U.S. or indeed elsewhere in the world?

It is amazing that the USDA and FDA approved the GM potatoes by only evaluating our own data. How can the regulatory agencies assume there is no bias? When I was at J.R. Simplot, I truly believed that my GM potatoes were perfect, just like a parent believes his or her children are perfect.

I was biased and all genetic engineers are biased. It is not just an emotional bias. We need the GM crops to be approved. There is a tremendous amount of pressure to succeed, to justify our existence by developing modifications that create hundreds of millions of dollars in value. We test our GM crops to confirm their safety, not to question their safety.

The regulatory petitions for deregulation are full of meaningless data but hardly include any attempts to reveal the unintended effects. For instance, the petitions describe the insertion site of the transgene, but they don’t mention the numerous random mutations that occurred during the tissue culture manipulations. And the petitions provide data on compounds that are safe and don’t matter, such as the regular amino acids and sugars, but hardly give any measurements on the levels of potential toxins or allergens.

The Canadian and Japanese agencies approved our GMO potatoes as well, and approvals are currently under consideration in China, South Korea, Taiwan, Malaysia, Singapore, Mexico, and the Philippines.

What was your role at Monsanto and J.R. Simplot?

I led a small team of 15 scientists at Monsanto, and I directed the entire biotech R&D effort at Simplot (up to 50 scientists). My initial focus was on disease control but I eventually considered all traits with commercial value. I published hundreds of patents and scientific studies on the various aspects of my work.

Why did you leave first Monsanto and then, later, J.R. Simplot?

I left Monsanto to start an independent biotech program at J.R. Simplot, and I left J.R. Simplot when my ‘pro-biotech’ filter was wearing thin and began to shatter; when I discovered the first mistakes. These first mistakes were minor but made me feel uncomfortable. I realized there had to be bigger mistakes still hidden from my view.

Why have you decided to reveal information about the failings of GM potatoes after spending many years creating them?

I dedicated many years of my life to the creation of GMO potatoes, and I initially believed that my potatoes were perfect, but then I began to doubt. It again took me many years to take a step back from my work, reconsider it, and discover the mistakes. Looking back at myself and my colleagues, I believe now that we were all brainwashed; that we all brainwashed ourselves.

We believed that the essence of life was a dead molecule, DNA, and that we could improve life by changing this molecule in the lab. We also assumed that theoretical knowledge was all we needed to succeed, and that a single genetic change would always have one intentional effect only.

We were supposed to understand DNA and to make valuable modifications, but the fact of the matter was that we knew as little about DNA as the average American knows about the Sanskrit version of the Bhagavad Gita. We just knew enough to be dangerous, especially when combined with our bias and narrowmindedness.

We focused on short-term benefits (in the laboratory) without considering the long-term deficits (in the field). It was the same kind of thinking that produced DDT, PCBs, Agent Orange, recombinant bovine growth hormone, and so on. I believe that it is important for people to understand how little genetic engineers know, how biased they are, and how wrong they can be. My story is just an example.

Cancer “Kill Switch”


A Northwestern University team spent about eight years meticulously studying the human genome and the various chemicals and processes it uses to regulate itself, and it has discovered what it bills as a seemingly foolproof “self-destruct pathway” that could be harnessed to destroy any type of cancer cell one can think of.

The mechanism they found involves the creation of siRNAs, small RNA molecules that interfere with a multitude of genes essential to the destructive proliferation of malignant, fast-growing cells. These siRNAs reportedly have little effect on healthy cells. Even so, if this cancer-fighting strategy carries risks, people should take a very close look at them.

Insight was provided by two recent studies, in which research leader Marcus Peter and his colleagues outlined in detail the series of events that the siRNA molecules trigger in our bodies.

They called this process of siRNA-driven cancer cell death DISE, or Death Induced by Survival gene Elimination. The team identified the six-nucleotide-long sequences required for inducing this state of “DISE.”
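To picture what “six-nucleotide-long sequences” means in practice, here is a minimal sketch of scanning an RNA strand for a list of 6-mer seeds. The seed sequences below are placeholders, not the DISE-inducing sequences the team actually identified:

```python
# Illustrative 6-mer scan; the seed list is hypothetical, not the
# experimentally identified DISE sequences.
TOXIC_SEEDS = {"GGGGGC", "GGGAGC"}  # placeholder 6-mers

def find_seed_hits(rna: str, seeds=TOXIC_SEEDS, k: int = 6):
    """Return (position, seed) pairs for every matching 6-mer window."""
    rna = rna.upper().replace("T", "U")  # accept DNA-style input
    return [(i, rna[i:i + k])
            for i in range(len(rna) - k + 1)
            if rna[i:i + k] in seeds]

# Scan a short, made-up transcript fragment
print(find_seed_hits("AUGGGGGCUUACGGGAGCAA"))  # [(2, 'GGGGGC'), (12, 'GGGAGC')]
```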

When the researchers examined the nucleotide sequences of the various noncoding RNA molecules our bodies produce naturally to selectively inhibit the expression of genes, they discovered that DISE-associated sequences are present at one end of several tumor-suppressing RNA strands.

Yet another investigation concluded that those sequences can also be found throughout the genome, embedded in protein-coding sequences.

“We think this is how multicellular organisms eliminated cancer before the development of the adaptive immune system, which is about 500 million years old,” Peter said last year, in a statement. “It could be a fail-safe that forces rogue cells to commit suicide. We believe it is active in every cell protecting us from cancer.”

However, there was one small problem: they still had to determine just how the body produces these free siRNAs that are capable of producing DISE. That’s how complicated the body gets.

A breakthrough came in another new study, published last month in eLife, in which Peter and his team observed the body making these molecules: our cells chop a larger strand of RNA, which codes for a cell-death protein known as CD95L, into multiple siRNAs.

In a series of experiments, they then showed that the exact same cellular machinery could be used to convert other large protein-coding RNAs into these DISE siRNAs.

Even more remarkably, they found that somewhere around three percent of all the coding RNAs in our entire genome could be “processed” to serve the purpose.

“Now that we know the kill code, we can trigger the mechanism without having to use chemotherapy and without messing with the genome,” Peter said last month in a press conference.

Now they want to develop “next-generation medications” and “gene therapy.” While this sounds promising, people would be wise to keep their wits about them and to know just what they are signing up for before proceeding with any new treatment resulting from this research.

Five-MicroRNA Signature: Predicting Outcomes in HPV-Negative Head and Neck Cancer


The prognosis for human papillomavirus (HPV)-negative head and neck squamous cell carcinoma is generally poorer than for those with HPV-positive disease. Dr. Julia Hess, of the German Research Center for Environmental Health GmbH in Neuherberg, and colleagues sought to find prognostic markers to help predict the risk of recurrence in this patient population and thus create personalized treatments with radiation, targeted drugs, and immune checkpoint inhibitors. They may have succeeded: By retrospectively performing microRNA (miRNA) expression profiling, they discovered a “five-miRNA signature [that] is a strong and independent prognostic factor for disease recurrence and survival of patients with HPV-negative head and neck squamous cell carcinoma,” the authors reported. Of note, added Dr. Hess in Clinical Cancer Research, “its prognostic significance is independent from known clinical parameters.”

The five-miRNA signature, when combined with established risk factors, allowed four prognostically distinct groups to be defined. Recursive-partitioning analysis classified 162 patients as being at low (n = 17), low-intermediate (n = 80), high-intermediate (n = 48), or high (n = 17) risk for recurrence (P < .001).
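Recursive partitioning is essentially a decision tree: it repeatedly splits patients on whichever variable best separates outcomes, and each resulting leaf becomes a risk group. Here is a minimal sketch of the general technique with scikit-learn, using synthetic data; the features, cutoffs, and tree settings are invented for illustration and do not reproduce the authors’ analysis:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-ins for the real covariates (n = 162, as in the study)
rng = np.random.default_rng(1)
n = 162
X = np.column_stack([
    rng.normal(size=n),          # hypothetical five-miRNA risk score
    rng.integers(1, 5, size=n),  # hypothetical clinical risk factor
])
recurred = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n) > 1).astype(int)

# A shallow tree yields a small number of interpretable risk groups
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=15)
tree.fit(X, recurred)

# Each leaf is one risk group; its observed event rate is the group risk
leaf = tree.apply(X)
for g in np.unique(leaf):
    mask = leaf == g
    print(f"group {g}: n={mask.sum()}, recurrence rate={recurred[mask].mean():.2f}")
```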

“[The five-miRNA signature] represents the basis for a more focused search for molecular therapeutic targets,” which would potentially improve “therapy success for appropriate patients,” the researchers stated. Currently, even when given state-of-the-art, standard-of-care therapy, patients with HPV-negative head and neck squamous cell carcinoma have an overall survival rate of only about 50%.

BRCA1/2 Mutations Linked with Better Outcome in Triple-negative Breast Cancer


According to the results of a small study, approximately 20% of women with triple-negative breast cancer are carriers of a BRCA1 or BRCA2 gene mutation. Triple-negative breast cancer patients with these mutations appear to have better survival than patients without these mutations. These results were recently presented at the 2010 Breast Cancer Symposium.[1]

Some breast cancers display different characteristics that require different types of treatment. The majority of breast cancers are hormone receptor-positive, meaning that the cancer cells are stimulated to grow by exposure to the female hormones estrogen and/or progesterone. These cancers are typically treated with hormonal therapy that reduces the production of these hormones or blocks their effects. Other cancers are referred to as HER2-positive, which means that they overexpress the human epidermal growth factor receptor 2, part of a biologic pathway that is involved in replication and growth of a cell. HER2-positive breast cancers account for approximately 25% of breast cancers and are treated with agents that target the receptor to slow growth and replication.

Triple-negative breast cancer refers to cancers that are estrogen receptor-negative, progesterone receptor-negative, and HER2-negative. Triple-negative breast cancers tend to be more aggressive than other breast cancers and have fewer treatment options. Research is ongoing to determine prognostic factors such as gene mutations that may impact prognosis and help to individualize care.

In the current study, researchers from the M. D. Anderson Cancer Center evaluated the frequency and effects of BRCA1 and BRCA2 gene mutations among 77 women with triple-negative breast cancer. Inherited mutations in these genes can be passed down through either the mother’s or the father’s side of the family and greatly increase the risk of breast and ovarian cancer.

  • 15 of the 77 patients (20%) had a BRCA1 or BRCA2 mutation.
  • Five-year relapse-free survival was 86% for patients with a BRCA mutation compared with 52% for patients without a BRCA mutation.

The researchers concluded that triple-negative breast cancer patients with BRCA mutations experienced a significantly lower recurrence rate. These findings were unexpected because previous studies had not shown a difference in survival.

Patients with triple-negative breast cancer may wish to speak with their healthcare team regarding the risks and benefits of genetic testing.

Reference:

[1] Gonzalez-Angulo M, Chen H, Timms K, et al. Incidence and outcome of BRCA mutation carriers with triple receptor-negative breast cancer (TNBC). Presented at the 2010 Breast Cancer Symposium, Washington, DC, October 1-3, 2010. Abstract 160.

A new view for protein turnover in the brain


Lysosomes were identified in dendrites and dendritic spines using various techniques. Cultured neurons show lysosomes throughout neurons and in dendritic spines, indicated in yellow (left and upper right).

Keeping the human brain in a healthy state requires a delicate balance between the generation of new cellular material and the destruction of old. Specialized structures known as lysosomes, found in nearly every cell in your body, help carry out this continuous turnover by digesting material that is too old or no longer useful.

Scientists have a strong interest in this degradation process, since it must be tightly regulated to ensure healthy brain functioning for learning and memory. When lysosomes fail to do their job, brain-related disorders such as Parkinson’s and Alzheimer’s can result.

Scientists at the University of California San Diego, led by graduate student Marisa Goo under the guidance of Professor Gentry Patrick, have provided the first evidence that lysosomes can travel to distant parts of neurons, to branch-like areas known as dendrites. Surprisingly, they also found that lysosomes can be recruited to dendritic spines, specific areas where neurons communicate with each other. The researchers further showed that activation of a single dendritic spine can directly recruit lysosomes to these specialized locations. The results are published in the Aug. 7 issue of the Journal of Cell Biology.

“Previously there was no reason to think that lysosomes could travel out to the ends of dendrites at synapses,” said Patrick, a professor of neurobiology in UC San Diego’s Division of Biological Sciences. “We are showing neuronal activity is delivering them to the synapse and they are playing an integral and instructive role in remodeling and plasticity, which are so important for learning and memory.”

The researchers used genetically encoded fluorescent markers to label lysosomes and follow their movements, sometimes tens and even hundreds of microns away from the cell body. Confocal, two-photon, and electron microscopy were used to reveal that lysosomes move in dendrites and are present in spines, something previously unseen.

“We’ve shown that lysosomes can be recruited to a single synapse… until now we had no idea that lysosomes could receive such instructive cues,” said Patrick. “For many neurodegenerative diseases, dysfunction seems to play a role. So now we can look at the distribution and trafficking of lysosomes—which we now know are controlled by neurons—and ask: Is that altered in disease?”

Genetic Testing Misses Half of Women at Risk for Breast Cancer


Nearly half of patients with breast cancer who, on multigene panel testing, are found to have a pathogenic or likely pathogenic variant for breast cancer do not meet current National Comprehensive Cancer Network (NCCN) guidelines for genetic testing, new research shows.

In a cohort of 959 women who were either currently undergoing treatment or had previously been treated for breast cancer, 49.9% met established 2017 NCCN germline genetic testing guidelines, and 50% did not, lead author Peter Beitsch, MD, Dallas Surgical Group–TME/Breast Care Network, Texas, and colleagues report.

Of those patients who met NCCN guidelines for germline testing, 9.39% had either a pathogenic or a likely pathogenic variant; 7.9% of those who did not meet the guidelines also had a pathogenic or likely pathogenic variant. The difference between the two groups was not statistically significant, the investigators add.
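The non-significance of that comparison can be sanity-checked from the published percentages. Here is a quick sketch with scipy, using carrier counts back-calculated from the reported rates (roughly 45 of 479 in-guideline patients and 38 of 480 out-of-guideline patients; these reconstructed counts are approximate):

```python
from scipy.stats import chi2_contingency

# Rows: met guidelines vs did not; columns: P/LP carriers vs non-carriers
met_guidelines = [45, 479 - 45]   # ~9.39% carriers
did_not_meet   = [38, 480 - 38]   # ~7.9% carriers
chi2, p, _, _ = chi2_contingency([met_guidelines, did_not_meet])
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p is well above .05
```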

“Our results indicate that nearly half of patients with breast cancer with a P/LP [pathogenic/likely pathogenic] variant with clinically actionable and/or management guidelines in development are missed by current testing guidelines,” the investigators observe.

“We recommend that all patients with a diagnosis of breast cancer undergo expanded panel testing,” they conclude.

The study was published online December 7 in the Journal of Clinical Oncology.

However, in a related editorial, breast cancer experts argue that widespread testing would detect genetic variants of unknown significance for which there are currently no established clinical courses of action.

Study Details

For their study, Beitsch and colleagues set up a multicenter, prospective registry with the help of 20 community and academic sites, all of which were experienced in cancer genetic testing and counseling.

They focused on 959 patients who had a history of breast cancer but had not undergone prior single-gene or multigene testing.

“All patients underwent germ line genetic testing with a multicancer panel of 80 genes,” the authors explain.

“Overall, 83 (8.65%) of 959 patients had a P/LP variant,” they write.

The investigators then considered findings from only BRCA1 and BRCA2 genetic testing.

In this subgroup analysis, positive BRCA1/2 rates were fourfold higher among those who met current NCCN germline testing guidelines, at 2.51%, compared with those who did not, at 0.63% (P = .020).

However, the authors point out that patients with a clearly identifiable personal and family history consistent with NCCN testing guidelines were likely to have already undergone genetic testing and therefore would have been excluded from the study.

In contrast, rates of variants of “uncertain significance” were virtually identical between those who met current NCCN guidelines for genetic testing and those who did not.

“Carriers of clinically actionable variants in genes other than BRCA1/2 are likely to fall outside of the current guidelines,” Beitsch and colleagues point out.

“Results of our study suggest that a strategy that simply tests all patients with a personal history of breast cancer would almost double the number of patients identified as having a clinically actionable genetic test result,” they reason.

Variants of Unknown Significance

In a related editorial, Kara Milliron, MS, and Jennifer J. Griggs, MD, MPH, both from the University of Michigan Cancer Center in Ann Arbor, argue that widespread uptake of genetic testing would increase the likelihood of identifying pathogenic variants of genes for which there are no established guidelines for reducing cancer risk.

For example, pathogenic variants in ATM are associated with an increased breast cancer risk, “but there is insufficient evidence to support risk-reducing breast surgery or bilateral salpingo-oophorectomy,” they point out.

Furthermore, more widespread testing is likely to increase detection rates of variants of unknown significance, which were common in the study by Beitsch and colleagues.

“These variants present challenges for both patients and medical providers in the management of ambiguity that arises in a patient with a malignancy and family members,” Milliron and Griggs write. This issue is particularly problematic in certain racial and ethnic groups in which such variants are both more common and more poorly characterized, they add.

Of greater concern are barriers to high-quality counseling following genetic testing.

“The shortage of genetic counselors has been well documented,” the editorialists note, and currently, “many patients receive genetic testing without seeing a genetic counselor,” they state.

Lastly, the cost of widespread testing and counseling cannot be overlooked, especially when considering expanding that testing to all patients with breast cancer.

Medicare does cover BRCA1/2 testing, and some states cover genetic testing.

“Thus, costs to patients may be prohibitive in the most vulnerable populations,” the editorialists write.

NCCN guidelines for genetic testing were published about 20 years ago and were designed to identify patients who were most likely to carry BRCA1/2 variants.

This was done to reduce the number needed to test; at that time, the cost of genetic testing was $2000 to $5000 per test.

Germline testing now costs approximately one-tenth of what it did when the guidelines were originally established.

Still, Milliron and Griggs calculate that the call to have all women with breast cancer undergo genetic testing would amount to about $400 million in total costs to insurance companies. They arrive at this estimate by considering the current cost of $1500 per test multiplied by the number of new breast cancer cases diagnosed each year in the United States.
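That figure is straightforward arithmetic. As a rough check, assuming about 270,000 new U.S. breast cancer diagnoses per year (an approximate incidence figure, not stated in the article):

```python
# Rough reconstruction of the editorialists' cost estimate
cost_per_test = 1_500             # dollars, as stated in the editorial
new_cases_per_year = 270_000      # approximate U.S. incidence (assumption)
total = cost_per_test * new_cases_per_year
print(f"${total / 1e6:.0f} million per year")  # ~$405 million
```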

Debate ethics of embryo models from stem cells


Mouse stem cell colony compared with a natural embryo

The cells of a 4-day-old artificial embryo (left) resemble those of a 5.5-day-old mouse embryo (right).

Over the past five years, various studies have shown that mouse and human stem cells can spontaneously organize in a dish into 3D structures that are increasingly similar to mouse1-5 or human6-8 embryos. All that is needed is the right number and combination of cells, growth factors and, sometimes, a means of physically confining the cells, such as in microwells8.

In the past 18 months, researchers have taken a significant step forward, using mouse models. They have incorporated tissues into the models that resemble those that become the yolk sac and placenta. In mammals, these ‘extra-embryonic organs’1,4,5 grow in synergy with the embryo, mediate its implantation and form the interface with the mother.

In short, it now seems feasible that stem cells can be developed into models that are almost indistinguishable from embryos in the lab. Such models can also be transferred into the womb of a mouse1, where they begin to implant.

These models open up all sorts of possibilities in research. Studying mouse and human embryogenesis in the lab could lead to better infertility treatments or contraceptives, more-effective and safer in vitro fertilization (IVF) procedures, the prevention and treatment of developmental disorders and even the creation of organs for people who need a transplant (see ‘Why model embryos?’).

Why model embryos?

Five ways in which embryo models could improve health.

Treating infertility. Embryo models could give researchers a better understanding of implantation and gastrulation, and lead to better infertility treatments. (It is thought that at least 40% of pregnancies fail by 20 weeks, and that 70% of those that fail do so at implantation15.)

Improving IVF. Only around 20% of IVF procedures result in a birth16. Using stem-cell models, researchers could optimize implantation and minimize cellular abnormalities, such as an aberrant number of chromosomes. As well as safeguarding the health of children conceived in vitro, this could reduce the number of procedures.

Designing new contraceptives. Embryo-model work could improve drugs that prevent implantation (as the oral contraceptive pill or intrauterine devices do, in part). Women and health professionals need drugs and devices that are easier to use and that have fewer side effects. Family planning is central to sustainable, global development.

Preventing disease. Subtle cell abnormalities during the first weeks of pregnancy, such as those caused by the use of alcohol or medications, can do damage throughout pregnancy and beyond17. They can alter development of the placenta and restrict embryo growth, affecting the baby’s birth weight and propensity for chronic diseases (such as those of the heart) decades later18. Entities based on stem cells could help researchers to pinpoint the genetic and epigenetic changes involved18, and assess the effects of diets or drugs10,16.

Creating organs. Mini brains, livers, kidneys and other organoids made from stem cells are highly simplified. Initiating organ development in an environment as similar as possible to the developing embryo might enable researchers to reliably generate structures that more closely resemble mature, functional organs, for drug screens or even for transplantation. N.R., M.P. et al.

These models also raise profound ethical questions. What should their legal and ethical status be now, and in the future as they are refined? Do the probable insights these embryo models provide outweigh possible ethical concerns? Because of the potential benefits, is there now a moral imperative to develop this research?

In 2015, various commentators, including four of us (M.P., M.M., G.deW. and W.D.) flagged the potential ethical implications of developing embryo models from stem cells9. At the time, investigators had modelled only a short span of development. No precursors of the extra-embryonic tissues had been generated.

Given the pace of progress, we now think that a major international discussion is needed to help guide this research.

New avenues

So far, biologists have produced four different types of ‘embryo model’: three in mice1,3,5 and one using human cells7 (see ‘What’s been modelled?’ and ‘Model systems’). All of the models stop developing after a few days, and the extent to which their gene-expression patterns match those of natural embryos has yet to be rigorously assessed1,3,5. Even with these limitations — which are likely to be overcome in the future — stem-cell models open up new avenues for exploring human development and disease.

The first few weeks of development are crucial to the success of a pregnancy and the health of a child10. But little is known about how the human embryo forms, implants and develops in the days that follow. Embryos can be observed using ultrasound only after about five weeks. And there are strict regulatory constraints on researchers’ ability to manipulate human embryos experimentally.

What is known about this period in human development comes mainly from three lines of research. These are: studies of embryos formed through IVF, including of blastocysts cultured in the laboratory for up to 13 days11,12; a small number of archival specimens of human embryos obtained decades ago through surgery and other procedures that would now be considered unethical in most countries; and a few comparative studies on closely related primate species, such as cynomolgus monkeys (Macaca fascicularis)13.

Unlike embryos formed through the fusion of a sperm and an egg, model embryos can be generated in large numbers and tweaked, for instance by using gene editing. This means that they can be used in high-throughput genetic tests and drug screens — procedures that generally form the basis of therapeutic discoveries.

Model systems

How stem cells are used to study embryo development.

Mouse stem cells can form 3D structures that resemble the 3.5-day-old mouse embryo (the blastocyst) before it implants in the uterus. These ‘blastoids’ contain analogues of the three cell lineages thought to form the embryo, placenta and yolk sac. Blastoids implanted into female mice trigger a uterine response. Currently, development stops shortly after implantation1.

Mouse stem cells can also form entities that are similar to specific regions of the 6.5–8-day-old mouse embryo after it has implanted in the uterus2–5. A process called gastrulation, during which the body plan is established, occurs in these regions. These models are termed ETS/X embryo-like structures4,5 and gastruloids2,3. In the first type, some interactions between the embryonic and extra-embryonic tissues are repeated4,5, the anterior–posterior body axis is laid down and analogues of gastrulating cells are generated5. In gastruloids, the three basic germ layers are laid down2,3 and the precursors of organs develop3.

Work with human stem cells is less advanced, but is on a similar trajectory. Currently, human stem cells can model aspects of gastrulation19 and the formation of the beginnings of the amniotic cavity20. They can also form 3D asymmetric cysts that model the development of the epiblast–amniotic ectoderm axis6,7. As far as we know, this structure arises during the second week, soon after implantation. N.R., M.P. et al.

Biologists can also use model embryos to uncover basic principles. For instance, it is well known that the placenta supports and instructs the embryo’s development. Yet a study this year1 showed that, in early embryos (blastocysts), the embryo guides the formation and implantation of the future placenta. (That work was led by one of us (N.R.), building on previous observations14.)

We think that stem-cell-based models could transform medicine in at least five ways (see ‘Why model embryos?’). Done properly, studies on embryo models could even obviate some of the ethical conflicts surrounding research on human development: researchers would have less need to study embryos from people or other primates.

Four questions

Future progress depends on addressing now the ethical and policy issues that could arise.

Ultimately, individual jurisdictions will need to formulate their own policies and regulations, reflecting their values and priorities. However, we urge funding bodies, along with scientific and medical societies, to start an international discussion as a first step. Bioethicists, scientists, clinicians, legal and regulatory specialists, patient advocates and other citizens could offer at least some consensus on an appropriate trajectory for the field.

Two outputs are needed. First, guidelines for researchers; second, a reliable source of information about the current state of the research, its possible trajectory, its potential medical benefits and the key ethical and policy issues it raises. Both guidelines and information should be disseminated to journalists, ethics committees, regulatory bodies and policymakers.

Four questions in particular need attention.

Should embryo models be treated legally and ethically as human embryos, now or in the future?

If the majority view is ‘no’, biologists could use stem-cell-based models both in basic research and in preclinical applications, unfettered by current legislation or guidelines on human-embryo research. If most stakeholders lean towards ‘yes’, work involving these models would be permitted in countries that allow the creation of human embryos for research, such as the United Kingdom — subject to the usual ethical and legal restrictions.

Answering this question could require testing whether these entities are capable of developing to term, but such experiments would themselves raise ethical questions. Moreover, the worldwide ban on human reproductive cloning would prevent such a test from being conducted on models formed from induced pluripotent stem cells.

In practice, different models might need to be treated in different ways. For example, it is unlikely that current post-implantation models could ever develop fully into an organism. They mirror only some regions of the embryo, and skip over the developmental stage that normally occurs when it implants in the uterus. Complicating matters, researchers might be able to constrain or enhance the developmental capacity of a particular model using gene editing — such as by incorporating suicide genes that destroy the tissue at a certain point. In other words, what might be considered an embryo could be flipped by genetic means into a non-embryo, and vice versa.

A human pluripotent stem cell colony

A human pluripotent stem-cell colony that mimics the initiation of gastrulation in the embryo, when three different cell layers appear.

Which research applications involving human embryo models are ethically acceptable?

Most would agree that research into the origin of infertility and genetic diseases, for example, is a worthy goal and probably achievable within current ethical boundaries. Conversely, the use of human embryo models for reproduction is much harder to justify. Such applications are a long way off, but one day it might be feasible to transfer an embryo created from (genetically edited) stem cells to a woman’s uterus to treat infertility or circumvent genetic diseases. Most — including the International Society for Stem Cell Research (ISSCR) — rightly argue that it is not morally acceptable to create humans in this way, even setting aside the considerable uncertainty regarding the healthy outcome of a stem-cell-derived pregnancy.

How far should attempts to develop an intact human embryo in a dish be allowed to proceed?

The response to this will depend on the answer to our first question. If human-embryo models are deemed equivalent to human embryos, they will become part of an ongoing debate on the time limits on culturing embryos. In more than 20 countries, it is against the law for researchers to maintain intact human embryos in the laboratory past 14 days of development or beyond the initiation of gastrulation (when three different cell layers appear) — whichever comes first12.

Does a modelled part of a human embryo have an ethical and legal status similar to that of a complete embryo?

At the moment, the following are not deemed biologically equivalent to a whole embryo: tissues sampled from embryos for diagnostic purposes; embryonic stem cells; and extra-embryonic stem cells. But it is unclear at which point a partial model contains enough material to ethically represent the whole, so this must also be discussed by regulators.

Four recommendations

These are complex questions, and discussions about all these issues and others will need to be regularly revisited as the field evolves. The pace of progress, however, prompts us to recommend the following.

First, we think that the intention of the research should be considered the key ethical criterion by regulators, rather than surrogate measures of the equivalence between the human embryo and a model. This was the approach taken with cloning. In the late 1990s and early 2000s, many nations prohibited human reproductive cloning, but did not ban the transfer of nuclear material from a somatic cell to an egg to produce a blastocyst and generate lines of stem cells. Here, the key consideration was the intention of the study rather than whether the clone was equivalent to a natural embryo.

Second, we urge regulators to ban the use of stem-cell-based entities for reproductive purposes.

Third, in our view, current stem-cell models that are designed to replicate only a restricted part of development, or that form just a few anatomical structures, should not have the ethical status of embryos.

Finally, we urge any scientist using human stem cells for research to abide by existing guidelines, such as those of the ISSCR. They should send their research proposals to a stem-cell oversight committee or a local independent ethical review board before undertaking any studies, submit their results to peer review and publicize their findings.

As part of ensuring good practice, stem-cell researchers, developmental biologists, human embryologists and others need to reach consensus on what terminology accurately captures the properties of the different models. (Currently, several terms are used interchangeably to describe the various types.) Ideally, terms should reflect the cellular composition and tissue organization of each, and indicate their developmental stage and potential.

Such provisions will help to ensure that this research is conducted ethically. Crucially, the recommendations will also help citizens to understand what researchers are doing, and why. Transparency and effective engagement with the public is essential to ensure that promising avenues for research proceed with due caution, especially given the complexity of the science.

Cell-jacking proteins could be the key to cracking Zika and dengue


Targeting common host proteins used by different viruses to manipulate human cells could lead to new treatments.

People with dengue fever receive treatment at a hospital in India.

Dengue and Zika viruses replicate inside people by hijacking some of the same proteins, according to a study1 published on 13 December in Cell.

This finding comes from a suite of techniques that exposes how viruses manipulate the cells they infect, an approach that marks a shift in how researchers are thinking about drug development. The idea is to target human proteins exploited by viruses, rather than targeting the pathogens themselves. The medicines developed with such an approach might treat multiple illnesses, rather than a single disease. They could also sidestep the drug resistance that results from rapid viral evolution.

In the new study, investigators demonstrate that dengue and Zika viruses replicate and spread by exploiting some of the same proteins in humans and mosquitoes — the insects that transmit both viruses to people. The study authors also identified a protein related to brain development that is hijacked by the Zika virus.

“This has the potential to change the paradigm of antiviral drug development,” says John Young, global head of infectious-diseases discovery at Roche, a pharmaceutical company in Basel, Switzerland.

Nevan Krogan, a geneticist at the University of California, San Francisco, led the project and is using this host-centered approach to also investigate how Ebola, HIV, chlamydia and four other infectious microbes hack human cells. He’s also started to apply the approach to look at how human proteins are altered in non-communicable conditions — such as Alzheimer’s and cancer.

Go fish

Viruses are too tiny to fend off inflammatory attacks from their hosts or to multiply on their own, so they manipulate the host’s proteins to do their bidding. Each virus exploits different weaknesses in the cells it infiltrates. Yet Krogan wondered whether there might be some overlap in how viruses rewire the proteins in the cells they infect. “We want to find commonalities so that you can come up with one drug to hit multiple diseases,” he says.

To fish for hijacked proteins, Krogan and his colleagues attached a molecular ‘hook’ to viral proteins that would stick to any other protein the virus interacted with. The team then introduced the tagged viral proteins into human and mosquito cells. Next, they isolated the captured proteins and identified them using a technique that classifies compounds according to mass. The researchers then used machine learning and other computational methods to search for patterns in the data that indicated which proteins to explore further.

With this method, Krogan’s team identified 28 proteins in both people and mosquitoes that interact with both Zika and dengue viruses. One of these proteins, SEC61, normally shuttles other proteins around inside of cells.
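Conceptually, this step reduces to intersecting the interaction lists from each screen. Here is a toy illustration (all protein names except SEC61 are placeholders, and the real analysis involved statistical scoring, not a bare set intersection):

```python
# Toy version of finding host proteins shared across virus screens.
zika_hits = {"SEC61", "PROT_A", "PROT_B", "PROT_C"}
dengue_hits = {"SEC61", "PROT_B", "PROT_D"}
mosquito_hits = {"SEC61", "PROT_B", "PROT_E"}

# Proteins pulled down by both viruses in both host species
shared = zika_hits & dengue_hits & mosquito_hits
print(shared)  # {'SEC61', 'PROT_B'}
```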

Krogan suspected that the viruses might usurp SEC61 for their own transportation needs. To test this idea, the team treated cells infected with dengue or Zika with a chemical that inhibits SEC61, and found that neither virus could replicate.

That chemical is currently being tested as a cancer treatment, says Krogan. He suggests that it could one day be developed into a therapy for dengue and Zika — infections that result in fevers and, occasionally, death. Development of such a therapy could be hindered by the possible side effects of targeting proteins that are vital to cellular functions, because that could cause as much damage as the diseases themselves.

The team also discovered that a protein in humans and mosquitoes, ANKLE2, seemed integral to microcephaly—a brain abnormality seen in babies infected with Zika in utero. ANKLE2 is involved in brain development, and when the researchers injected excess ANKLE2 into fruit flies infected with Zika, the flies’ brains developed normally, unlike those of infected flies that didn’t receive the injections. It’s still unclear exactly how Zika influences ANKLE2, and how that leads to microcephaly.

Finding common ground

“I am blown away by this paper,” says Nikos Vasilakis, a virologist at the University of Texas Medical Branch at Galveston. Researchers, including Vasilakis, had highlighted other proteins that might contribute to microcephaly. But Vasilakis says that this is the first time he’s read about an approach that reveals several, testable protein interactions.

Krogan hopes that this host-centered approach will help drug developers to find treatments for a range of maladies. In a study2 published alongside the dengue–Zika paper, Krogan’s team reveals human proteins that the Ebola virus manipulates. His group is also analysing the functions of 435 proteins that are potentially reprogrammed by HIV.

Furthermore, Krogan says that focusing on the host side of a condition, rather than on the pathogen, can help to bridge research gaps. For example, if the same network of proteins is altered in someone with dengue and cancer, then researchers could pool their knowledge to hunt for a treatment that targets those proteins. “Science is so siloed,” he says. “The data we are generating makes connections between proteins, and also between scientists.”

Effect of Early Sustained Prophylactic Hypothermia on Neurologic Outcomes Among Patients With Severe Traumatic Brain Injury


The POLAR Randomized Clinical Trial

Key Points

Question  Does early prophylactic hypothermia improve long-term neurologic outcomes in patients with severe traumatic brain injury?

Findings  In this randomized clinical trial that included 511 adults, the proportion of patients with favorable neurologic outcomes at 6 months was 48.8% after hypothermia vs 49.1% after normothermia, a difference that was not statistically significant.

Meaning  These findings do not support the use of early prophylactic hypothermia in patients with severe traumatic brain injury.

Abstract

Importance  After severe traumatic brain injury, induction of prophylactic hypothermia has been suggested to be neuroprotective and improve long-term neurologic outcomes.

Objective  To determine the effectiveness of early prophylactic hypothermia compared with normothermic management of patients after severe traumatic brain injury.

Design, Setting, and Participants  The Prophylactic Hypothermia Trial to Lessen Traumatic Brain Injury–Randomized Clinical Trial (POLAR-RCT) was a multicenter randomized trial in 6 countries that recruited 511 patients both out-of-hospital and in emergency departments after severe traumatic brain injury. The first patient was enrolled on December 5, 2010, and the last on November 10, 2017. The final date of follow-up was May 15, 2018.

Interventions  There were 266 patients randomized to the prophylactic hypothermia group and 245 to normothermic management. Prophylactic hypothermia targeted the early induction of hypothermia (33°C-35°C) for at least 72 hours and up to 7 days if intracranial pressures were elevated, followed by gradual rewarming. Normothermia targeted 37°C, using surface-cooling wraps when required. Temperature was managed in both groups for 7 days. All other care was at the discretion of the treating physician.

Main Outcomes and Measures  The primary outcome was favorable neurologic outcomes or independent living (Glasgow Outcome Scale–Extended score, 5-8 [scale range, 1-8]) obtained by blinded assessors 6 months after injury.

Results  Among 511 patients who were randomized, 500 provided ongoing consent (mean age, 34.5 years [SD, 13.4]; 402 men [80.2%]) and 466 completed the primary outcome evaluation. Hypothermia was initiated rapidly after injury (median, 1.8 hours [IQR, 1.0-2.7 hours]) and rewarming occurred slowly (median, 22.5 hours [IQR, 16-27 hours]). Favorable outcomes (Glasgow Outcome Scale–Extended score, 5-8) at 6 months occurred in 117 patients (48.8%) in the hypothermia group and 111 (49.1%) in the normothermia group (risk difference, –0.4% [95% CI, –9.4% to 8.7%]; relative risk with hypothermia, 0.99 [95% CI, 0.82-1.19]; P = .94). In the hypothermia and normothermia groups, the rates of pneumonia were 55.0% vs 51.3%, respectively, and rates of increased intracranial bleeding were 18.1% vs 15.4%, respectively.

Conclusions and Relevance  Among patients with severe traumatic brain injury, early prophylactic hypothermia compared with normothermia did not improve neurologic outcomes at 6 months. These findings do not support the use of early prophylactic hypothermia for patients with severe traumatic brain injury.

Trial Registration  clinicaltrials.gov Identifier: NCT00987688; Anzctr.org.au Identifier: ACTRN12609000764235

Introduction

Severe traumatic brain injury is a leading cause of neurologic disability, and approximately 50% of patients have long-term outcomes of death or severe disability.1-3 The economic and social costs of severe traumatic brain injury are high.4

Acute management of patients after traumatic brain injury targets physiologic parameters to minimize secondary brain injury.5,6 Rapidly decreasing body temperature as early as possible after injury, or prophylactic hypothermia, may improve outcomes compared with normothermic traumatic brain injury management.7-9 Prophylactic hypothermia can attenuate cerebral inflammatory and biochemical cascades, which are activated early after traumatic brain injury,6 thereby limiting secondary brain injury.9,10 This is distinct from late-rescue hypothermia for elevated intracranial pressures, which in the Eurotherm3235 trial11 was associated with harm. However, prophylactic hypothermia may contribute to coagulopathy, immunosuppression, bleeding, infection, and dysrhythmias after trauma.9,12

A 2007 meta-analysis suggested that decreased mortality and long-term neurologic benefit were associated with prophylactic hypothermia after severe traumatic brain injury and provided a low-grade recommendation for clinical use.7 The only large randomized trial (n = 392) included showed no benefit with prophylactic hypothermia13 but had methodological limitations, including delayed induction and limited duration of hypothermia, as well as rewarming triggered at a fixed time, irrespective of an individual’s intracranial pressure. Two subsequent trials stopped prematurely (≤50% planned recruitment) and reported no effect.14,15 A 2018 meta-analysis reported decreased risk of death with prophylactic hypothermia.8 These authors found that hypothermia between 33°C and 35°C, cooling in excess of 48 hours, and slow rewarming (<0.25°C/h) were most strongly associated with improved survival.8 Substantial clinical uncertainty in regard to early prophylactic hypothermia remains.

A multinational randomized trial of early prophylactic hypothermia (33°C-35°C) sustained for at least 72 hours, followed by slow rewarming (in the absence of elevated intracranial pressure), compared with normothermia after severe traumatic brain injury was conducted.

Methods
Trial Design and Oversight

The Prophylactic Hypothermia Trial to Lessen Traumatic Brain Injury–Randomized Clinical Trial (POLAR-RCT) was a multicenter randomized trial in Australia, New Zealand, France, Switzerland, Saudi Arabia, and Qatar, which planned to recruit 510 patients after severe traumatic brain injury. The first patient was enrolled on December 5, 2010, and the last on November 10, 2017. The last patient’s outcome was completed on May 15, 2018.

Ethical approval was obtained from Monash University and local ethics committees for participating sites and ambulance services. Approval was given for a deferred model of consent, and written informed consent was then sought from each enrolled patient’s nearest relative or designated person as soon as possible, and subsequently from the patient if he or she regained capacity. The trial protocol and statistical analysis plan (Supplement 1) were developed by the management committee and published.16 Data were collected by investigators and research coordinators at the trial sites (collaborators). The management committee and the independent data and safety monitoring committee conducted planned, blinded interim analyses assessing conduct, progress, and safety after 125 and then 250 participants had been recruited (Supplement 1). After publication of the Eurotherm3235 trial,11 the data and safety monitoring committee recommended the conduct of additional interim analyses for safety at recruitment of 300, 350, 400, and 450 participants.

Participants

Five out-of-hospital or paramedic agencies and 14 emergency departments (EDs) screened for patients with traumatic brain injury. Eligible patients with head injuries were estimated to be aged 18 to 60 years, had a Glasgow Coma Scale score of less than 9, and had actual or imminent endotracheal intubation. Out-of-hospital exclusion criteria included significant bleeding suggested by systolic hypotension (<90 mm Hg) or sustained tachycardia (>120/min), suspected pregnancy, possible uncontrolled bleeding, Glasgow Coma Scale score of 3 and unreactive pupils, or destination hospital not a study site. Patients not enrolled out-of-hospital who fulfilled entry criteria remained eligible for enrollment in the ED (for additional ED exclusion criteria, see eTable 1 in Supplement 2) for up to 3 hours after injury.

Data Collection

Randomized patients were followed up to death or to 6 months after randomization. Online case report forms were used. These included baseline demographic and processes-of-care data, including temperature and intracranial pressure measurements hourly for the first 96 hours.

Randomization and Study Treatment

Participants were randomly assigned 1:1 to prophylactic hypothermia (hypothermia group) or to controlled normothermia (normothermia group) through the use of sealed opaque envelopes and permuted variable block sizes (2 and 4). Randomization was stratified by out-of-hospital vs ED enrollment and by ambulance service and geographic regions. Treating clinicians were not blinded to trial group assignment. Scoring of the primary outcome was performed by blinded independent assessors using structured telephone questionnaires.

Induction of Hypothermia

In the hypothermia group, in both the out-of-hospital and ED settings, hypothermia was induced by patient exposure, a bolus of up to 2000 mL of intravenous ice-cold (4°C) 0.9% saline, and, once the patient was in the ED, surface-cooling wraps, targeting an initial core temperature of 35°C. Patients were then assessed in the ED for significant clinical risk of bleeding (positive abdominal ultrasonographic or computed tomographic result, persistent hypotension, or life-threatening injury requiring immediate surgery in any body area except the head). Once these significant risk factors for bleeding were excluded, a core temperature of 33°C was targeted.

Maintenance of Hypothermia

Hypothermia was maintained at 33°C (or 35°C if bleeding concerns persisted) with a Gaymar Meditherm 3 console with surface-cooling wraps for at least 72 hours after randomization. Patients who were randomized to the hypothermia group and subsequently developed hemodynamic instability presumed to be caused by bleeding could be rewarmed to 35°C or to normothermia if their condition was considered life threatening. Target temperature for all other hypothermia patients was 33°C ± 0.5°C.

Rewarming

Intracranial pressure monitors were inserted according to usual site practice. Seventy-two hours after randomization, intracranial pressure was assessed in the hypothermia group. If the intracranial pressure was less than 20 mm Hg, gradual controlled rewarming was commenced at a target rate up to 0.25°C/h. If there was a sustained increase in intracranial pressure greater than 20 mm Hg during rewarming, the patient was recooled and then reassessed regularly for suitability for rewarming. The maximum period of hypothermia was 7 days postrandomization. Once rewarming had reached 37°C, patients were maintained normothermic with automated surface-cooling wraps, if required, for up to 7 days postrandomization.
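Taken together, the rewarming rules form a simple hourly decision loop: hold 33°C for at least the first 72 hours, rewarm at up to 0.25°C/h only while intracranial pressure stays below 20 mm Hg, recool otherwise, and end temperature management at day 7. Here is a schematic sketch of that logic (illustrative only, not clinical software; it also omits the 35°C bleeding-risk pathway):

```python
# Schematic of the POLAR rewarming decision logic (illustrative only)
MAX_REWARM_RATE_C_PER_H = 0.25
ICP_THRESHOLD_MM_HG = 20.0
HYPOTHERMIA_TARGET_C = 33.0
NORMOTHERMIA_C = 37.0
PROTOCOL_END_H = 7 * 24  # temperature managed for 7 days

def next_target_temp(current_temp: float, icp: float, hours: float) -> float:
    """Temperature target for the next hour, given ICP and elapsed time."""
    if hours < 72:
        return HYPOTHERMIA_TARGET_C           # maintenance phase
    if hours >= PROTOCOL_END_H:
        return NORMOTHERMIA_C                 # protocol ends at day 7
    if icp >= ICP_THRESHOLD_MM_HG:
        return HYPOTHERMIA_TARGET_C           # sustained high ICP: recool
    # Gradual rewarming toward 37 °C, capped at 0.25 °C per hour
    return min(current_temp + MAX_REWARM_RATE_C_PER_H, NORMOTHERMIA_C)
```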

Normothermia

Patients in the normothermia group were transported to the hospital without exposure or cold fluids and warmed if required to normothermia according to usual practice. In the intensive care unit, the temperature target was 37°C ± 0.5°C. Surface-cooling wraps could be used to manage pyrexia or refractory intracranial hypertension.

Patients in both groups could receive other treatments for elevated intracranial pressure as clinically indicated, and in both study groups care was recommended to be managed according to international traumatic brain injury guidelines.5,7

Outcomes

The primary outcome measure was based on the Glasgow Outcome Scale–Extended (GOS-E) score17 at 6 months after injury. A GOS-E score of 1 indicates death, 2 indicates vegetative state, 3 to 4 indicates severe disability, 5 to 6 indicates moderate disability, and 7 to 8 indicates good recovery. The primary outcome was the percentage of favorable outcomes (GOS-E score, 5 to 8).18 Secondary outcomes were GOS-E score as an ordinal variable, mortality at hospital discharge and at 6 months, and the proportion of patients with adverse events (including intracranial bleeding, extracranial bleeding, pneumonia, bloodstream infections, and other infections) within 10 days of randomization. Duration of mechanical ventilation and intensive care unit and hospital lengths of stay were also reported. Secondary outcomes of neurologic function assessed by the sliding dichotomy method, complier average causal effect of hypothermia, quality of life, and cost-effectiveness are not reported here.
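The dichotomization of the primary outcome is mechanical and can be stated compactly; the sketch below follows the scale definitions above, with hypothetical scores for illustration.

```python
# GOS-E categories and the favorable/unfavorable split defined above.
GOSE_CATEGORY = {
    1: "death", 2: "vegetative state",
    3: "severe disability", 4: "severe disability",
    5: "moderate disability", 6: "moderate disability",
    7: "good recovery", 8: "good recovery",
}

def favorable(gose: int) -> bool:
    """Favorable outcome = GOS-E score of 5 to 8."""
    if gose not in GOSE_CATEGORY:
        raise ValueError("GOS-E score must be an integer from 1 to 8")
    return gose >= 5

scores = [3, 7, 5, 1, 8]   # hypothetical 6-month scores
print(f"favorable: {sum(map(favorable, scores)) / len(scores):.0%}")  # 60%
```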

Statistical Analysis

We published a statistical analysis plan before completion of the study19 and an update (Supplement 1) before data lock and unblinding. The planned sample size of 500 patients allowed for withdrawals because of dropouts, loss of consent, and crossover from hypothermia therapy to normothermia (ie, significant bleeding or clinician decision that traumatic brain injury was likely not severe), and also allowed for interim analyses. A total of 364 evaluable patients enabled detection of an absolute difference of 15% in favorable outcome from an estimated baseline rate of 50%,1,16,19 with 82% power at a 2-sided significance level of .05. This hypothesized absolute 15% increase in favorable neurologic outcomes was based on a 46% improvement in favorable outcomes (relative risk, 1.46; 95% CI, 1.12-1.92; P = .006) with hypothermia in a 2007 meta-analysis7 and on a 50% increase (P = .02) in favorable outcomes in a subgroup of patients with severe traumatic brain injury who were younger than 45 years, were hypothermic on arrival in the hospital, and were subsequently randomized to hypothermia vs normothermia (ie, received early hypothermia).13 The final trial size was marginally increased to 510 during 2017 after blinded review of the combined proportion of patients with consent withdrawn or lost to follow-up.
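The sample-size reasoning can be approximately reproduced with a standard two-proportion power calculation (50% vs 65% favorable outcome, 82% power, 2-sided α = .05). The Python sketch below uses an arcsine effect size with a normal approximation, so it lands near, but not exactly on, the published figure; the investigators' exact method may have differed.

```python
# Approximate reproduction of the sample-size calculation (illustrative).
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

effect = proportion_effectsize(0.65, 0.50)   # Cohen's h for a 15-point lift
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.82,
    ratio=1.0, alternative="two-sided",
)
print(round(n_per_group))   # ~178 per group, ~356 evaluable in total
```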

All a priori–defined analyses were performed with patients according to randomized group, excluding those who withdrew consent unless otherwise indicated, with no imputation of missing data. The primary outcome of favorable GOS-E score at 6 months and secondary outcomes (mortality and adverse events) were compared with an unadjusted χ2 test for equal proportions, with results reported as frequency (percentage) per treatment group with a relative risk and risk difference, both accompanied by 95% CIs. We conducted sensitivity analyses with hierarchic multivariable log-binomial regression, adjusting for the extended International Mission for Prognosis and Analysis of Clinical Trials in Traumatic Brain Injury (IMPACT-TBI) score20 and treating randomization strata (location and site) as random effects, with results reported as relative risks (95% CI). The extended IMPACT-TBI score estimates the probability of an unfavorable patient outcome, using the key risk factors of age, motor component of the Glasgow Coma Scale, pupil reactivity, brain computed tomography Marshall score, and the secondary insults hypotension and hypoxia. We analyzed GOS-E score as an ordinal variable, using ordinal logistic regression with the proportional odds assumption justified with a score test and results reported as odds ratios (95% CI). Patient survival was assessed with Cox proportional hazards regression censored at 6 months or last known point of contact, with results presented as Kaplan-Meier survival curves with a corresponding log-rank test. We visually assessed the proportional hazards assumption across treatment groups, using log-cumulative hazard plots.

Prespecified subgroup analyses were performed for patients with surgically evacuated hematomas and those with any significant intracranial hematoma, with heterogeneity between subgroups determined by fitting an interaction between treatment and subgroup with logistic regression. The effect on favorable outcome of the time taken for cooled patients to achieve the target temperature of 33°C was compared with an unadjusted χ2 test for equal proportions, with results reported as frequency (percentage).

Planned analyses were conducted in prespecified per-protocol and as-treated populations19 (Supplement 1), with both analyses excluding all patients who did not satisfy study inclusion and exclusion criteria. Evaluable patients were then examined for cooling compliance (defined as ≤35°C for >48 hours within 96 hours of randomization) and either excluded from the analysis (per protocol) or transferred to the opposite treatment group (as treated). Per-protocol and as-treated sensitivity analyses were performed, with cooling compliance defined as patients who were cooled for the majority of their first 72 hours instead of 96 hours. Post hoc analyses of missingness in the primary outcome and comparison of evaluable patients who received an adequate dose of cooling compared with controls were also performed, with detailed description of per-protocol, as-treated, and post hoc analyses shown in Supplement 1.
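The cooling-compliance rule used to build the per-protocol and as-treated populations (core temperature ≤35°C for >48 of the first 96 hours after randomization) is easy to express against the hourly temperature records collected in the trial; the data layout below is hypothetical.

```python
# Classify cooling compliance from hourly core temperatures (hypothetical layout):
# temps[h] = core temperature (°C) at hour h after randomization.
def cooling_compliant(temps, threshold_c=35.0, min_hours=48, window_hours=96):
    """True if temperature was <= threshold for more than min_hours
    within the first window_hours after randomization."""
    window = temps[:window_hours]
    hours_cooled = sum(1 for t in window if t <= threshold_c)
    return hours_cooled > min_hours

# Example: ~6 h of induction, 60 h at 33 °C, then rewarmed -> compliant.
record = [36.5] * 6 + [33.0] * 60 + [37.0] * 30
print(cooling_compliant(record))   # True
```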

All analyses were conducted with SAS version 9.4, and 2-sided P < .05 was used to indicate statistical significance. Because no adjustment was made for multiple comparisons, all secondary outcomes should be interpreted as exploratory.

Results
Patient Characteristics

An initial 8 patients formed a nonrandomized run-in phase and were not included. A total of 511 patients were enrolled, including 231 (45%) enrolled out-of-hospital (Figure 1); 266 patients were randomly assigned to the prophylactic hypothermia group and 245 to the normothermia group. Eleven patients (6 in the hypothermia group and 5 in the normothermia group) were excluded because of withdrawal of consent (Figure 1), leaving 500 evaluable patients. A total of 293 patients, 132 in the hypothermia group and 161 in the normothermia group, received the full trial protocol (eTable 8 in Supplement 2). A total of 240 patients in the prophylactic hypothermia group and 226 in the normothermia group were evaluated for the primary outcome (Figure 1).

Baseline characteristics of the 2 study groups were similar in all respects (Table 1). The patients were predominantly men, with a mean age of 34.5 years (SD, 13.4) and a median Glasgow Coma Scale score of 6 (interquartile range [IQR], 4 to 7). The majority of patients (70.6%) had diffuse brain injury (brain swelling or hemorrhages, without subdural or extradural brain hematomas), and the median time from injury to randomization was 1.9 hours (IQR, 1.0 to 2.7).

Core temperature was significantly lower in the hypothermia group than in the control group during the first 96 hours after randomization (Figure 2A). In the hypothermia group, 233 patients (89.6%) reached the initial temperature target of 35°C, at a median of 2.5 hours (IQR, 0.8 to 5.5) from injury, and 186 patients (71.5%) reached the final temperature target of 33°C, at a median of 10.1 hours (IQR, 6.8 to 15.9) (eTable 2 in Supplement 2). A total of 85 evaluable patients (33%) in the hypothermia group received less than 48 hours of hypothermia (33°C-35°C), and 27% of patients in the hypothermia group never reached the final target temperature of 33°C because of complications or physician decisions (eFigures 3 and 4 and eTable 3 in Supplement 2). The median duration of hypothermia until rewarming commenced was 72.2 hours (IQR, 69.8 to 77.3). The median duration of rewarming to normothermia was 22.5 hours (IQR, 16 to 27); 34 patients had rewarming paused because of increased intracranial pressure (eFigure 1 in Supplement 2). Mean daily intracranial pressure was similar in both groups during induction, maintenance, and rewarming (Figure 2B; eFigure 1 in Supplement 2), as was the intensity of therapy for elevated intracranial pressure (eTable 4 in Supplement 2).

Primary Outcome

Six months after injury, favorable outcomes occurred for 117 patients (48.8%) in the hypothermia group and 111 (49.1%) in the normothermia group (absolute risk difference, –0.4 percentage points [95% CI, –9.4 to 8.7]; unadjusted relative risk with hypothermia, 0.99 [95% CI, 0.82-1.19]; P = .94) (Table 2, Figure 3). This result was similar after adjustment for the IMPACT-TBI extended model prediction20 of unfavorable outcome (Table 2).
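The unadjusted comparison can be reproduced from the counts above (117/240 vs 111/226). A sketch with standard large-sample intervals follows; the trial's own analyses were run in SAS, so this Python version is illustrative, though it matches the published figures to rounding.

```python
# Reproduce the unadjusted primary-outcome comparison from the reported counts.
from math import sqrt, log, exp
from scipy.stats import chi2_contingency, norm

a, n1 = 117, 240            # favorable / evaluated, hypothermia
b, n2 = 111, 226            # favorable / evaluated, normothermia
p1, p2 = a / n1, b / n2
z = norm.ppf(0.975)         # 1.96 for a 95% CI

rd = p1 - p2                                        # risk difference
se_rd = sqrt(p1*(1 - p1)/n1 + p2*(1 - p2)/n2)       # Wald SE
rr = p1 / p2                                        # relative risk
se_log_rr = sqrt(1/a - 1/n1 + 1/b - 1/n2)           # SE of log(RR)

chi2, p_value, _, _ = chi2_contingency([[a, n1 - a], [b, n2 - b]],
                                       correction=False)   # unadjusted chi-square

print(f"RD {rd:+.3f} (95% CI, {rd - z*se_rd:.3f} to {rd + z*se_rd:.3f})")
print(f"RR {rr:.2f} (95% CI, {exp(log(rr) - z*se_log_rr):.2f} "
      f"to {exp(log(rr) + z*se_log_rr):.2f})")
print(f"P = {p_value:.2f}")
# -> RD -0.004 (-0.094 to 0.087); RR 0.99 (0.82 to 1.19); P = .94
```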

Secondary Outcomes

When GOS-E score at 6 months after injury was considered as an ordinal variable, there remained no significant difference between treatments (unadjusted odds ratio for hypothermia vs normothermia, 0.97 [95% CI, 0.71-1.34]; P = .88). By 6 months after injury, 54 of 256 patients (21.1%) in the hypothermia group and 44 of 239 (18.4%) in the normothermia group had died (absolute risk difference, 2.7 percentage points [95% CI, –4.3 to 9.7]; unadjusted relative risk, 1.15 [95% CI, 0.80-1.64]; P = .45) (Table 2). Results were similar for time to death (unadjusted hazard ratio, 1.13 [95% CI, 0.76-1.69]; P = .54) (eFigure 2 in Supplement 2).

Additional Outcomes

Results were not significantly different between groups for time to reach target temperature (eTable 7 in Supplement 2), days of mechanical ventilation, intensive care unit and hospital length of stay, mean GOS-E score at 6 months, and unfavorable GOS-E score for survivors (eTable 5 in Supplement 2).

Adverse Events

The proportions of patients with adverse events within 10 days of randomization for new or increased intracranial bleeding were 18.1% in the hypothermia group and 15.4% in the normothermia group; for pneumonia, 55.0% in the hypothermia group and 51.3% in the normothermia group (Table 2; eTable 6 in Supplement 2). Propofol-related infusion syndrome was diagnosed in 3 patients, 2 in the hypothermia group and 1 in the normothermia group; the latter was receiving nonprotocolized late-rescue hypothermia for refractory increased intracranial pressure. One of these patients died.

Per-Protocol and As-Treated Analyses

Some patients in the hypothermia group were rewarmed prematurely because either the clinicians believed that the brain injury was not as severe as initially thought or the patients developed serious bleeding (eTable 3 and eFigures 3 and 6 in Supplement 2). There were, however, no significant baseline differences between groups in either the per-protocol (eTable 8 in Supplement 2) or as-treated (eTable 10 in Supplement 2) analyses. With respect to the primary outcome, favorable outcomes were not different between groups in either the per-protocol or as-treated analyses (eTables 9 and 11 in Supplement 2). Pneumonia was increased in the hypothermia group in the per-protocol analysis (70.5% in the hypothermia group and 57.1% in the normothermia group; absolute risk difference, 13.3% [95% CI, 2.4%-24.2%]; unadjusted relative risk, 1.23 [95% CI, 1.04-1.47]; P = .02) and the as-treated analysis (70.7% in the hypothermia group and 54.6% in the normothermia group; absolute risk difference, 16.1% [95% CI, 5.7%-26.5%]; unadjusted relative risk, 1.29 [95% CI, 1.09-1.53]; P = .003) (eTables 9 and 11 in Supplement 2). These results remained consistent in per-protocol and as-treated sensitivity analyses (eFigures 5 and 7 in Supplement 2).

Subgroup Analyses

With respect to the primary outcome, there were no significant interactions between treatment group and either of the prespecified subgroups: presence of surgically evacuated cranial hematomas and any intracranial hematoma (surgically evacuated or not) (Table 2).

Post hoc Analyses

There were no significant differences between groups in post hoc analyses of scenarios for missingness in the primary outcome (eTable 12 in Supplement 2). There were also no significant differences in the proportion of patients with a favorable outcome in a comparison of evaluable patients who received an adequate dose of cooling compared with controls (eTable 13 in Supplement 2).

Discussion

In this international randomized trial, prophylactic hypothermia (early sustained hypothermia followed by slow rewarming) compared with normothermia after severe traumatic brain injury did not increase favorable neurologic outcomes. There was no benefit from prophylactic hypothermia in any of the secondary outcomes, including mortality, or in predefined subgroups, per-protocol analyses, or as-treated analyses.

Multiple studies and meta-analyses have reported a benefit of prophylactic hypothermia as a potential neuroprotectant after traumatic brain injury.7,8,21-31 Three higher-quality multicenter randomized trials of prophylactic hypothermia demonstrated no benefit, but these had methodological limitations and 2 stopped prematurely (≤50% of the projected sample size).13-15 The most recent meta-analysis of prophylactic hypothermia after severe traumatic brain injury8 suggested that early prophylactic hypothermia may be most beneficial when it targets a temperature of 33°C to 35°C, longer cooling (>48 hours), and slower rewarming (<0.25°C/h). Although the Eurotherm3235 trial of late-rescue hypothermia for adult patients with traumatic brain injury and intracranial hypertension reported harm,11 it did not address the effect of prophylactic hypothermia after severe traumatic brain injury. A large high-quality trial addressing the limitations of previous prophylactic hypothermia trials was required to inform clinical practice and resolve clinician uncertainty.

To our knowledge, this study is the largest trial of prophylactic hypothermia after traumatic brain injury to date. The study design accounted for the limitations of previous trials of prophylactic hypothermia.7,8,16,32 The protocol included early induction and maintenance of hypothermia for at least 72 hours, followed by individually titrated rewarming. The time from injury to initiating hypothermia was short (median, 1.8 hours). The median time to reach 33°C was greater than 10 hours, reflecting the clinical reality that hypothermia below 35°C in trauma patients requires time to exclude undiagnosed injuries; this delay also implies that laboratory studies of hypothermia may not translate directly to trauma patients. Most patients in the hypothermia group remained hypothermic for more than 48 hours.8 The as-treated analyses demonstrated that crossover between groups of patients who were rewarmed prematurely did not obscure a beneficial effect of hypothermia. Most patients were rewarmed slowly (median, 22.5 hours) without significant elevation in intracranial pressure, whereas 34 patients had rewarming paused because of increased intracranial pressure (eFigure 1 in Supplement 2). Furthermore, there was no effect of hypothermia on intracranial pressure or on the intensity of therapy for elevated intracranial pressure. This trial suggests that prophylactic hypothermia is not neuroprotective after severe traumatic brain injury.

Prolonged hypothermia has been suggested to be immunosuppressive,12 and the per-protocol analyses found increased risk of pneumonia in the hypothermia group. There were also 3 episodes of propofol-related infusion syndrome. This often fatal syndrome may be more likely during hypothermia because of reduced hepatic metabolism of propofol.33

Limitations

This trial has several limitations. First, a substantial number of patients in the hypothermia group never reached the target temperature of 33°C (19% had hypothermia withdrawn early and a further 13% did not reach 33°C). This reflects the enrollment of patients out-of-hospital before full evaluation who proved not to have severe traumatic brain injury, palliation of unsurvivable injuries, and neurosurgical concerns about hypothermia in injuries with significant risk of further intracranial bleeding. Second, clinicians and patients’ families were not blinded to the intervention. Although this may have introduced bias, the use of trained blinded outcome assessors minimized this potential. Third, bedside clinicians had the option not to enroll patients if they believed enrollment was not in the patients’ best interests. Although this may have introduced bias, it is an essential part of the ethical conduct of trials in the critically ill.

Conclusions

Among patients with severe traumatic brain injury, early prophylactic hypothermia compared with normothermia did not improve neurologic outcomes at 6 months. These findings do not support the use of early prophylactic hypothermia for patients with severe traumatic brain injury.