Yale’s Mysterious Fossil Beast

Despite being discovered over a century ago, Synoplotherium is still an enigma.

I can’t help but feel a little bit sorry for Synoplotherium. Every time I’ve gone to visit the old beast, safeguarded behind a pane of glass at Yale’s Peabody Museum of Natural History, I’ve been the only one present to pay my respects to this enigmatic mammal.

As in any museum they inhabit, dinosaurs steal the show at the Peabody. The skeleton of O.C. Marsh’s great Apatosaurus – or is that Brontosaurus now? – leads a parade of tail-dragging saurians who show the osteological frames that artist Rudolph Zallinger used to paint the stunning Age of Reptiles mural that runs the length of the room. Yet, despite their celebrity, dinosaurs didn’t lead the way in introducing Peabody visitors to the past. The all-but-forgotten Synoplotherium has that honor.

The skeleton was found in the 45-million-year-old Eocene rock of Wyoming’s Bridger Basin, dug up by field workers Sam Smith and Jacob Heisey in 1875. It was one of the most complete fossil mammals yet found, and Marsh dubbed the species Dromocyon vorax for its dog-like appearance and the voracious appetite it must have had. Only after decades of osteological scrutiny did other researchers realize that Marsh’s “Dromocyon” was really a more complete representative of Synoplotherium – a name coined by Marsh’s nemesis E.D. Cope.

Marsh would have been furious at the name change, and he would have been aghast to see it on exhibit. Marsh fervently believed that bones were for experts only and any attempt at public display hindered research. Reconstructions should only be done on paper, never with real bones. It’s clear that not everyone agreed with this view, though, as the Peabody’s paleontologists went about mounting the bones of Synoplotherium almost immediately after Marsh’s death in 1899. Synoplotherium was the first creature to ever be mounted at the Peabody.

Synoplotherium foot

The skeleton hasn’t budged a bone since it was put back together in 1899. Each part of the German shepherd-sized carnivore creates the impression of an animal that is like a dog, but not quite. The tail’s too long, the skull’s too big, and the feet end in flat, blunted toes rather than claws. Now you might understand why paleontologists sometimes call Synoplotherium and its relatives – known as mesonychids – “wolves with hooves.”

Were Synoplotherium a dinosaur, there’d probably be an expansive literature on the Cenozoic flesh-ripper. It was the apex predator of a time when mammalian evolution took some decidedly strange turns, or at least what we’d consider odd with our 45 million years of hindsight. Things as they are, however, Synoplotherium is much like Patriofelis and other Eocene oddities – they have been named, categorized, and reconstructed, but not understood.

The first and last major source of information on Synoplotherium is a section of a massive review of Eocene mammals laid out by paleontologist Jacob Wortman in 1902. He covered the whole animal from its oversized skull to what remained of its tail, detecting a few surprises along the way. “The lower jaw of the individual under consideration is of unusual pathological interest, as showing, among these ancient animals, the result of healing of a fracture,” Wortman wrote. This Synoplotherium not only had the left side of its jaw cracked right at the midpoint, but survived the ordeal to old age, as shown by its worn-down teeth.

What broke the carnivore’s jaw? No one knows. Paleontologist G.R. Weiland commented that the break “was doubtless received in some raid on the young of Palaeosyops” – one of the rhino-like brontotheres – but this was just offhand speculation. It’s just one of countless mysteries that still surround the animal. I pulled the tomes Evolution of Tertiary Mammals of North America and The Beginning of the Age of Mammals off my shelf when I got home from my last visit to the Peabody, hoping for more, but the compendiums didn’t offer much detail beyond dental formulae and the impression that Synoplotherium was more of a runner than other mesonychids. Even among experts, this beast has been neglected.

Still, the bones remain. Their display cocoon makes study difficult – Marsh wasn’t totally off-base – but they still stand there, waiting to spill their stories. Clues about how Synoplotherium moved, ate, healed, and more are all patiently waiting, ready to show us a little more about a mammalian dawn when life was simultaneously familiar and strange. Go see for yourself, if you can, and wait a moment in the dim quiet for the beast to speak to you.

Can ADHD Appear for the First Time in Adulthood?

Two studies suggest onset in adults, but symptoms may be different from those in kids.

Attention deficit hyperactivity disorder (ADHD), usually diagnosed in children, may show up for the first time in adulthood, two recent studies suggest.

And not only can ADHD appear for the first time after childhood, but the symptoms for adult-onset ADHD may be different from symptoms experienced by kids, the researchers found.

“Although the nature of symptoms differs somewhat between children and adults, all age groups show impairments in multiple domains – school, family and friendships for kids and school, occupation, marriage and driving for adults,” said Stephen Faraone, a psychiatry researcher at SUNY Upstate Medical University in Syracuse, New York, and author of an editorial accompanying the two studies in JAMA Psychiatry.

Faraone cautions, however, that some newly diagnosed adults might have had undetected ADHD as children. Support from parents and teachers or high intelligence, for example, might prevent ADHD symptoms from emerging earlier in life.

It’s not clear whether study participants “were completely free of psychopathology prior to adulthood,” Faraone said in an email.

One of the studies, from Brazil, tracked more than 5,200 people born in 1993 until they were 18 or 19 years old.

At age 11, 393 kids, or 8.9 percent, had childhood ADHD. By the end of the study, 492 participants, or 12.2 percent, met all the criteria for young adult ADHD except the age of diagnosis.

Childhood ADHD was more prevalent among males, while adult ADHD was more prevalent among females, the study also found.

Just 60 of the nearly 400 kids with ADHD still had symptoms at the end of the study, and only 60 of the nearly 500 adults with ADHD had been diagnosed as children.

“The main take-home message is that adult patients experiencing significant and lasting symptoms of inattention, hyperactivity or impulsivity that cause impairment should seek evaluation, even if they began recently by their perception or if family members deny their existence in childhood,” senior study author Dr. Luis Augusto Rohde, a psychiatry researcher at Federal University of Rio Grande Do Sul in Brazil, said by email.

The second study focused on 2,040 twins born in England and Wales in 1994 and 1995. During childhood, 247 of them met the diagnosis criteria for ADHD. Of those, 54 still met the diagnosis criteria for the disorder at age 18.

Among 166 individuals with adult ADHD, roughly one third didn’t meet the criteria for ADHD at any of four evaluations during childhood, the study also found.

It’s possible some of these adults had undiagnosed ADHD as kids, but symptoms may also look different in older people than they do in children, said senior study author Louise Arseneault of King’s College in London.

People with adult ADHD may have more inattentive symptoms like being forgetful or having difficulty concentrating, whereas children with ADHD may have more hyperactive symptoms, Arseneault said by email. “And if adults do experience hyperactive symptoms, these symptoms may manifest more as feelings of internal restlessness rather than obvious hyperactive behavior like running or climbing around in inappropriate situations,” she said.

Whatever Happened to Advanced Biofuels?

Cellulosic ethanol, made from inedible crop waste, continues to struggle to compete with ethanol from corn—and with fossil fuels.

The Project Liberty plant is a multi–$100-million effort to get ethanol for cars past the obstacles of food-versus-fuel debates, farmer recalcitrance and, ultimately, fossil fuels. It is also the fruition of a 16-year journey for founder and executive chairman Jeff Broin of ethanol-producing company POET—an odyssey that began with a pilot plant in Scotland, S.D., and progressed through a grant of $105 million from taxpayers via the U.S. Department of Energy (DoE).

The government invested because of hopes that such advanced biofuels could reduce global warming pollution from vehicles compared with gasoline. And making ethanol from inedible parts of corn plants is perhaps better than using the edible starch in corn kernels that could find use as food or feed for animals. “We’re processing about 770 tons a day of corn stover—basically the leftovers from the cornfield—into ethanol,” Broin told me during a tour of POET’s new Project Liberty facility, which makes ethanol from cellulose and is located next to a traditional facility that produces the fuel from the starch in corn kernels. “[It’s] one of the first plants in the world to do that, so we’re pretty excited.”

Project Liberty Credit: © David Biello

The ability to make fuel from corn stover is the result of nearly two years of tinkering since the plant officially opened in September 2014. And Project Liberty is one of the culminations of an American effort to break oil’s monopoly on transportation fuels in favor of domestically grown biofuels. Most recently, the U.S. Congress mandated that a certain percentage of the fuels used in U.S. vehicles come from biofuels under the Renewable Fuel Standard, a federal program that came into effect in 2005. The RFS put a particular focus on biofuels that do not come from food, as traditional corn ethanol comes from starch that can serve as food or animal feed. The great hope was for cleaner, greener ethanol to be made from cellulose, the inedible plant fiber in corn leaves, stalks, husks and cobs.

Getting Project Liberty up and running has already required an investment of at least $275 million from POET and its Dutch partner Royal DSM, including grants from the DoE and the state of Iowa. Assuming the kinks are now ironed out, the plant could use some 260,000 metric tons of the nonedible parts of corn plants to produce as much as 95 million liters of cellulosic ethanol each year. Already bales of stover sprawl around the new facility, waiting to be consumed by an industrial process—instead of by mushrooms.

Shredded stover Credit: © David Biello

Project Liberty basically industrializes what fungal growth and other decay processes accomplish naturally to release the solar energy stored up by green plants. The corn stover is first freed from its baling and shredded.

The strips of corn leaves and chunks of corncob are then bathed in sulfuric acid to begin breaking them down into fibers. Enzymes—biological proteins freed from their living hosts to do industrial work—eat into those fibers further, and the resulting sour soup is processed to remove water.

This mash, which looks like mud, then goes into giant fermentation vats where specialty yeast eats the sugars in it to produce ethanol.

Cellulosic ethanol feedstock “mud” Credit: © David Biello

At this point, the mud still contains leftover fibers, particularly lignin—the tough strands that allow cornstalks or trees to stand tall and resist decay while living—which becomes an industrial fuel for the facility’s boiler after being pressed into coal-like cakes.

And the rest of the leftovers are fed into one of the nation’s largest biodigesters—reactors that can hold some four million liters of leftovers for anaerobic digestion—to make methane that the plant can use for power, eliminating its need for natural gas fuel. This helps reduce greenhouse gas pollution, as does the facility’s main product—a fuel fermented from plant material that pulled carbon dioxide out of the sky while growing. Burning cellulosic ethanol as a fuel could result in just 10 percent of the CO2 emissions produced by burning gasoline.

Corn products Credit: © David Biello

One big secret to making it all work is the advanced biofuel refinery’s location right next to a conventional corn ethanol plant, which makes ethanol from the starch in corn kernels. That facility is roughly half the size of its cellulosic fuel neighbor, costs less than half as much to build and run, and produces twice as much ethanol. It can use the leftover lignin and biodigested methane from the cellulosic facility as fuel for distillation and other processes. “We literally can shut off the natural gas valve to the starch plant,” Broin notes.

There are plenty of cellulosic leftovers on farm fields and elsewhere in the U.S., not just corn stover but also other energy crops such as switchgrass, along with wood waste and agricultural remains such as wheat straw. The DoE estimates that roughly 900 million metric tons of such material is available each year—a renewable resource that could make about 300 billion liters of ethanol. And there are plenty of conventional ethanol plants near which to locate—almost 200 according to the most recent data from DoE—and that is just in the U.S. “We see the opportunity to build these plants all over the world,” Broin says.
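The figures above imply a per-ton ethanol yield that can be checked against Project Liberty’s own numbers. A rough back-of-the-envelope comparison (the quantities are the article’s; the arithmetic is purely illustrative):

```python
# Implied cellulosic ethanol yields, using figures cited in the article.

# Project Liberty: ~260,000 metric tons of stover -> up to ~95 million liters/yr
liberty_yield = 95e6 / 260e3   # liters of ethanol per metric ton of stover

# DoE estimate: ~900 million metric tons of residue -> ~300 billion liters
doe_yield = 300e9 / 900e6      # liters per metric ton, nationwide average

print(f"Project Liberty implied yield: {liberty_yield:.0f} L/t")
print(f"DoE estimate implied yield:    {doe_yield:.0f} L/t")
```

The two figures land in the same ballpark (a few hundred liters per ton), which suggests the national estimate assumes plants performing roughly like Project Liberty at full capacity.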

Stacked stover outside Project Liberty Credit: © David Biello

POET is not alone. Agrochemical giant DuPont opened its own cellulosic biorefinery next to a conventional starch-based fuel facility last October in Nevada, Iowa, and it should eventually be able to produce around 115 million liters of cellulosic ethanol per year. “You know, it’s been a long time coming but we’re proud it’s here,” Terry Branstad, governor of Iowa, said on October 30 last year at the opening of the new DuPont facility, which is still not running at capacity.

“Thirty million gallons of biofuel will be produced without consuming a single additional bushel of corn,” added U.S. Sen. Chuck Grassley, a longtime ethanol supporter and one of the architects of the RFS. “You have achieved here what Congress hoped: new biofuels that were cleaner, greener and more efficient.”

DuPont’s cellulosic ethanol biorefinery in Iowa Credit: Courtesy of DuPont

But the opening of DuPont’s new facility also occasioned the shutdown of DuPont’s other cellulosic refinery in Tennessee. “DuPont remains committed to the commercialization of cellulosic biofuel and will focus its resources on its Iowa facility,” Jan Koninckx, DuPont’s global business director for advanced biofuels, told Scientific American in an e-mailed statement.

Cellulosic fuels’ main hurdle seems to be economic. In April 2012 Blue Sugars Corp. of South Dakota produced the first batch of qualifying cellulosic ethanol, a little more than 75,500 liters, then promptly went out of business. In 2013 no cellulosic ethanol was produced, but by last year—after several DoE-supported plants came online—all five of those biorefineries produced a total of 8.3 million liters of cellulosic ethanol, according to the U.S. Environmental Protection Agency, which administers the RFS. Already, Spanish multinational corporation Abengoa’s cellulosic ethanol plant—which opened in 2014 in Hugoton, Kans.—sits unused due to technology troubles as well as Abengoa’s bankruptcy. That plant consumed a $132-million loan guarantee as well as a $97-million grant from the DoE before idling.

Abengoa’s shuttered cellulosic plant in Kansas Credit: © Bill Kubota

Simply put, cellulosic ethanol is more expensive to make than ethanol fermented from cornstarch or from sugarcane, the world’s second-largest source of fuel ethanol. “Everything’s expensive here because it’s first of a kind,” Broin says. “The next plant will be a lot cheaper.”

Cellulosic ethanol: it smells much worse than it looks Credit: © David Biello

On a more basic level, moving fibers and sludge through an industrial facility is tough to do without breakdowns. The corn stover arriving at POET’s cellulosic facility had as much as three or four times more sand and gravel mixed in than engineers had anticipated, and that grit wreaked havoc on pumps, valves and other equipment.

“We have made literally hundreds of small process changes,” including a thorough washing of the corn stover once it comes out of the bale, Broin says. There have also been not-so-small process changes, such as using large cranes to tear open buildings in order to replace equipment.

Corn stover up close Credit: © David Biello

But Broin notes that ethanol from corn faced the same difficulties during its ascent over several decades. And cellulosic ethanol is now definitely being produced in meaningful quantities at Project Liberty. “We’re shipping cellulosic ethanol,” Broin says. “We’re filling railcars.”

That’s good because the EPA now requires nearly 1.2 billion liters of cellulosic biofuels in 2017 under the terms of the RFS. That’s still well below where lawmakers like Grassley thought cellulosic ethanol would be by now—they had established a target of some 11 billion liters per year back in 2007. When that failed to materialize, the EPA mandated at least 465 million liters of cellulosic biofuel in 2015, but only saw 8.3 million liters mixed into the nation’s fuel supply. So the rest of the nearly 57 billion liters of ethanol fuel came from corn kernels, with attendant concerns about industrial farming practices, water pollution and impacts on food prices.
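The gap between mandates and actual output described above is easy to quantify from the figures in this paragraph. A quick illustrative calculation (the numbers come from the article; the script is just a sketch):

```python
# Cellulosic ethanol: mandated vs. delivered volumes cited in the article.
target_2007_law = 11e9    # liters/yr lawmakers envisioned in 2007
mandate_2015 = 465e6      # liters the EPA required for 2015
produced_2015 = 8.3e6     # liters actually blended into fuel in 2015

print(f"2015 production met {produced_2015 / mandate_2015:.1%} "
      f"of the EPA's reduced mandate")
print(f"...and {produced_2015 / target_2007_law:.3%} "
      f"of the original 2007 target")
```

Even against the EPA’s already scaled-back requirement, production came in at under two percent, which is why corn-kernel ethanol filled nearly all of the remaining volume.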

Flex fuel future? Credit: © David Biello

“Everything about ethanol is good, there is nothing bad. It’s good for agriculture, good for the environment, good for economic development in rural communities,” Grassley told Scientific American at the DuPont opening in October. “We will demonstrate that we can produce food and fuel forever. We don’t need to worry about choosing between food and fuel.”

The combination of cellulosic biorefineries with starch-based ethanol plants could prove more potent economically, but it fails if the goal is to reduce the amount of ethanol made from corn. Instead of replacing corn ethanol, cellulosic ethanol may simply supplement it.

Ethanol from corn kernels, lots and lots of corn kernels Credit: © David Biello

Regardless, the ethanol industry’s ambitions are much larger than just Project Liberty and its peers or even the hundreds of conventional ethanol fuel plants: The sector hopes to one day produce around 570 billion liters a year, including a major contribution from cellulose, according to Broin. “There’s a big market out there that we’d like to replace—that’s the entire gasoline market,” Broin says. “We can grow our way easily out of our reliance on fossil fuels and people just don’t understand that.”

How Might Cell Phone Signals Cause Cancer?

An expert answers questions about what could happen at the cellular level after a report links radio-frequency signals to tumors in rats.

The release of a study Friday linking cancer in rats to the type of radiation emitted by cell phones presents some of the strongest implications in more than two decades of research that higher doses of such signals could be linked to tumors in laboratory animals—unsettling news for billions of mobile phone users worldwide. Still missing, however, is a clear understanding of exactly how radiofrequency (RF) radiation used by mobile phones might create cellular-level changes that could lead to cancer.

The study by the U.S. National Toxicology Program (NTP) found that as the thousands of rats studied were exposed to greater intensities of RF radiation, more of them developed rare forms of brain and heart cancer that could not be easily explained away, exhibiting a direct dose-response relationship. NTP acknowledges that the research is not definitive and that more research needs to be done.

This is familiar territory for Jerry Phillips, a biochemist and director of the Excel Science Center at the University of Colorado at Colorado Springs. Phillips conducted Motorola-funded research into the potential health impacts of cell phones during the 1990s while he was with the U.S. Department of Veterans Affairs’ Pettis VA Medical Center in Loma Linda, Calif. Phillips and his colleagues looked at the effects of different RF signals on rats, and on cells in a dish. “The most troublesome finding to Motorola at the time is that these radiofrequency signals could interact with living tissues, which is what we saw in the rats,” he says.

Scientific American spoke with Phillips about the NTP’s announcement, as well as his own experiences trying to understand how RF signals could be causing the DNA damage seen in his lab’s rats.

[An edited transcript of the interview follows.]

How is cell phone radiation different from other forms of radiation?

Cell phone radiation is non-ionizing radiation. X-rays, for example, are ionizing radiation and contain sufficient energy to break chemical bonds. Non-ionizing radiation associated with radiofrequency fields is very, very low-energy, so there’s insufficient energy to break chemical bonds. It was always assumed that because the power being created by the handsets was low enough, there would be insufficient energy for heat production—and without heat production there would be no biological effects [on users] whatsoever.

What happens to living cells when they are exposed to RF radiation?

The signal couples with those cells, although nobody really knows what the nature of that coupling is. Some effects of that reaction can be things like movement of calcium across membranes, the production of free radicals or a change in the expression of genes in the cell. Suddenly important proteins are being expressed at times and places and in amounts that they shouldn’t be, and that has a dramatic effect on the function of the cells. And some of these changes are consistent with what’s seen when cells undergo conversion from normal to malignant. These effects vary depending on the nature of the signal, the length of the exposure and the specifics of the signal itself.

How does the use of rats impact the validity of a study designed to determine whether cell phones are safe for people?

We try to find the best model system available based on physiology, genetics and what we know about biochemistry. Rats are really a pretty good model for humans. Of course, the question you’ve asked is now what the [wireless device] industry is going to hit on. Their primary rebuttal is that these are rats and not people.

NTP studied both Code Division Multiple Access (CDMA) and Global System for Mobile (GSM) modulations, which dictate how signals carry information. Why test more than one modulation in a study like this?

You test those two modulations because both are in wide use today. I don’t know exactly what [the NTP’s] rationale was, but the rationale we used for our study in the 1990s was to find out if signal modulation had an effect on what we were looking at. Part of the problem studying radiofrequency radiation is that we have not a clue what constitutes a dose. If you have a chemical, you can weigh it out and you know what the dose is. But with radiofrequency radiation there are too many parameters—power intensity, carrier frequency, length of exposure, signal intermittency or some combination—and nobody knows what’s most important.

What has been the prevailing argument against non-ionizing radiation causing cancer?

It’s a complicated issue. If you look at something as simple as smoking—for so long people had no clue what was in cigarette smoke that caused cancer. You could see when a smoker died that the lungs were different from those of a non-smoker, but at first it was hard to identify the mechanism causing the change in the lungs. It’s been the same sort of argument here—there’s no plausible explanation that something with such low energy could cause significant biological effects that are adverse to human health and development. Those of us working in the area of gene expression saw those effects, but there had been no way to explain them.

What should people take away from the NTP’s latest study results?

All this really does is provide a couple of answers but raise even more questions. My guess is that the needle won’t move much at all in this country. If you look at all of the research being done on this, it’s all from outside this country. People want to believe their technology is safe. I do. I would love to believe it, but I know better.

How do you reconcile your own cell phone use with the potential health hazards?

I’ll connect the phone to Bluetooth in my car. Or I’ll text. Or if I have to make a phone call I put it on speaker. But you have to realize that this issue opens up a much bigger can of worms than cell phones. If this radiation, this form of energy, can interact with biological tissue then it’s going to reopen a lot of what were supposedly settled issues regarding the safety of wireless communications. If we’re going to be bathed in a whole new electromagnetic environment, how safe is it?

How Zika Spiraled Out of Control

The virus was a tiny, barely known annoyance. Scientists now think environmental changes made Zika explode into a global crisis.

The world’s biggest collection of Zika virus is housed in a tan concrete building, rising up from the flat campus of the University of Texas Medical Branch here in Galveston. Inside, armed guards watch the lobby, and access to certain floors requires special clearance. These safeguards are in place because other viruses, including those that cause Ebola and severe acute respiratory syndrome (SARS), are also on the premises.

Zika is not as easily spread as deadly Ebola, so the laboratories that work with the mosquito-carried virus do not require spacesuit-like protective gear. On a recent visit I held a clear plastic bag containing the complete virus collection in my bare hands. Inside the bag the 17 vials of pure, freeze-dried virus resembled old glue. They did not look like the material of a global health crisis that has panicked entire nations. (The scientist who handed me the bag did keep one hand hovering nearby, presumably ready to catch it if I dropped it.) Despite the modest appearance, experiments with the collection may be researchers’ best hope of understanding how the virus got so out of control—and what to do about it.

This plastic bag holds the world’s largest collection of pure Zika virus, housed at the University of Texas Medical Branch at Galveston. Credit: DINA FINE MARON

The virus was first detected in Uganda, and for more than 60 years it stayed small and scientists took little notice, believing all it did was cause symptoms on par with a mild flu. But since 2015 Zika has ricocheted to more than 40 countries, transmitted by mosquito bite or human sexual contact. Researchers now blame it for terrible birth defects, including microcephaly, when babies are born with abnormally tiny heads, and Guillain–Barré syndrome, an autoimmune disease that can affect patients of any age. “The more we learn the worse it gets,” says Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases at the National Institutes of Health.

Virologists now need to figure out which of two potential causes is responsible for the Zika explosion. One possibility is that genetic changes within the virus supercharged its ability to infect people and cause disease. The second scenario is that the change happened outside the virus: After decades of relative isolation it reached dense population areas, providing more fertile ground for its spread. If researchers identify new mutations within the virus that also allow it to breach the placenta and cause microcephaly in fetuses, then they could be on the lookout for ways to combat that transmission, too.

If the outbreaks are just the result of environmental changes, perhaps Zika was always capable of causing serious effects, but they were rare in small populations and only became obvious when the virus hit a lot of people. “It could just be a numbers game,” says Erin Staples, a U.S. Centers for Disease Control and Prevention medical epidemiologist. (There is a third, less-accepted theory suggesting that previous exposure to another virus called dengue could somehow make people more susceptible to Zika. But there is little support for this compared with the other two ideas.)

Galveston is an ideal place to tackle this inside versus outside debate because the Medical Branch research center is home to the massive World Reference Center for Emerging Viruses and Arboviruses. (The latter are viruses carried by ticks, mosquitoes or other arthropods.) The collection includes more than 7,000 pathogens. Investigators also have a variety of specialized resources for testing virus infectivity in various bugs and animals, sequencing viral genes and imaging pathogen structures.


For choosing among competing theories about what went wrong with Zika, it also helps that arbovirus expert Scott Weaver, the university’s director of the Institute for Human Infections and Immunity, and his colleagues have become good at a rather arcane skill: making mosquitoes spit.

For the past few months Weaver’s team has fed lab-grown mosquitoes, related to the insects spreading disease in Brazil and the Dominican Republic, blood meals spiked with either a 2015 Zika strain from Mexico, a 2010 strain from Cambodia or a 1984 strain from Senegal. The scientists wait a bit, then coax the mosquitoes to spit, and examine that saliva for the amount of Zika virus particles it contains. Comparing mosquitoes that carry different Zika strains lets the researchers test whether newer strains travel more quickly from insects’ guts to their saliva than do older strains. Such increased speed in a virus is often caused by a genetic change. If the virus now found in the Americas and Caribbean moves into the bugs’ spit faster than other strains do, it could fuel a larger, faster-moving outbreak in humans—and favor the inside version in the inside/outside hypotheses duel.

Doctoral student Christopher Roundy has become the main drool wrangler. “Basically I spend my days collecting mosquito spit,” Roundy says. “It doesn’t sound that glamorous.” Roundy begins the tests in Weaver’s lab by mixing a specific strain of Zika with human blood donated by a local hospital. (The blood was getting too old for use in human transfusions.) Then he pours the spiked blood into a metal canister that he covers with skin from a mouse—mosquitoes are more likely to bite through skin. The other side of the canister is hooked to a heater, because mosquitoes also prefer warm meals. The whole setup is then placed, skin-side down, over the opening of a small Styrofoam container holding dozens of female mosquitoes (males do not bite). Frenzied by the smell of blood, the females stab their proboscises through the skin, sometimes two or three times. Once they are full their stomachs become red and engorged—making it clear that they have eaten.

For some, this is their last meal. Every day afterward a small number of those bugs are killed and taken apart. Their bodies, sans legs, are blended into a slurry; the legs are similarly pureed and studied separately. Both batches of bug bits are then scrutinized for signs of the virus—indicators of how far the pathogen traveled in the body.

Scientists stun mosquitoes by putting them on ice during Zika virus experiments at UTMB. Credit: DINA FINE MARON  

After eight days Roundy turns his attention to the bugs’ spit. He stuns a handful of the remaining live mosquitoes by placing them in the freezer for about a minute. Then, Roundy uses tweezers and a magnifying glass to gently guide each of their proboscises into a small tube containing a salt solution. That exposure to the salty mix forces them to drool. He adds that spit to dishes of cells and mixes in chemicals that cause Zika-infected cells to turn a purplish color. The purple cells are visible to the naked eye. Under a microscope, Roundy also examines the spit for tiny purple blotches—viral particles—that he might have missed.

If there are genetic changes in newer Zika strains, Roundy, Weaver and UTMB pathologist Nikos Vasilakis are particularly interested in any that might help the virus infiltrate human cells or aid its journey via mosquito bite. They also want to know if mutations may explain its connections to microcephaly. More virus in any single drop of blood, for example—thanks to boosted replication rates of the virus—could perhaps help it more readily cross the placental barrier and reach a developing fetus. If there are no such changes, that is telling, too. It shifts the balance toward the “outside” environmental idea.

Searching for inside/outside evidence using this method has worked before. The same Galveston team used a similar approach to determine that another mosquito-borne disease, chikungunya, had mutated in the past. By studying spit they found that specific genetic adaptations allowed that virus, normally spread by the Aedes aegypti mosquito, to expand its range by jumping to a different carrier, the Aedes albopictus mosquito. A single amino acid change in one of the virus’s glycoproteins, for example, allowed the virus to replicate about 40 times more efficiently in the bug than it once did. The information prompted health workers to expand warnings about the disease to more areas of the Pacific rather than limit them to those locations at risk from one of the species. (It is unlikely that Zika made a similar species jump because the virus is already thriving in areas with A. aegypti, Weaver says.)


The Zika test results were finalized in late April. After tracking different strains of the virus via thousands of bugs the Galveston team came to a quiet conclusion: The latest Zika strain does not appear to be more active or more easily transmissible than prior strains. In fact, an older Zika strain from 1980s Africa moved through the mosquito body a bit faster. The discovery does not completely eliminate the idea of harmful genetic changes, yet this evidence tips the scales of blame toward the environment.

For scientists such as Weaver who are concerned about public health, the environmental tilt is disquieting. “It’s bad news in a way because that tells us that the strain that is circulating today in Asia and Africa probably has at least the same capacity to initiate urban epidemics,” he says. (Immunity in some of those populations from past outbreaks, however, may be protecting them from large epidemics.) The serious symptoms became more apparent as more people grew ill and public health officials saw enough of an uptick in microcephaly, other birth defects and Guillain–Barré to fuel concern and further investigation.

Sources: Zika Virus Vectors and Reservoirs, by Scott C. Weaver, University of Texas Medical Branch, February 26, 2016; “Zika Virus Outside Africa,” by Edward B. Hayes, in Emerging Infectious Diseases, Vol. 15, No. 9; September 2009; Centers for Disease Control and Prevention. CREDIT: TIFFANY FARRANT-GONZALEZ

Kathryn Hanley, a biology professor at New Mexico State University who investigates viruses such as dengue and Zika, notes the environment and internal mutations actually could be working together. “These are not mutually exclusive hypotheses,” she says.  “It could be that the virus previously had little access to urban populations, but on the way to getting that access it also acquired mutations that increased its transmission.” Researchers at Galveston and elsewhere plan further tests on insects and animals to explore this and other ideas.

There are still many unknowns about the virus. For instance, the way Zika infects human cells and hijacks their machinery to copy itself many times has proved elusive. And because this virus looks so much like dengue, testing for Zika has remained a serious challenge, complicating efforts to accurately track the spread. The first large outbreak—in 2007 on Yap Island in Micronesia—infected 70 percent of the island’s population but was only identified because scientists had been deployed there on the assumption the isle was under siege by dengue. Other Zika cases have undoubtedly been missed altogether, scientists say.

That suggests researchers have underestimated this virus all along.  Perhaps other benign-seeming threats could be more dangerous than previously thought as well, they worry. “You should not necessarily be scared, but you have to be open-minded and quick to respond,” says the CDC’s Staples. If Zika as well as other emerging viruses have unexpected effects as they reach different populations, it is a serious problem. Without better global surveillance of diseases in remote locations, health agencies will find it hard to foresee or prepare for an organism that could be quickly transported to an urban area and spiral out of control.

Such surveillance is currently not done: Many developing nations have little health infrastructure and few available resources to strengthen it. Moreover, most of the World Health Organization’s annual budget comes from donor countries that earmark it for specific projects such as HIV prevention or polio eradication, not surveillance of amorphous threats.

Humanity is not without defenses, however. Some insecticides and larvicides work on the A. aegypti mosquito, Zika’s preferred host, so disease control experts know which mosquitoes to study and target. Still, says Ann Powers, acting chief of the CDC’s Arboviral Diseases Branch, “we need to be more vigilant.” All it takes to change the profile of a disease, she knows, is a few extra mosquito bites.

Maybe Life in the Cosmos Is Rare After All

The conclusion that the universe is teeming with biology is based on an unproved assumption.

A planet like Kepler-22b could plausibly be habitable—but that doesn’t mean it’s inhabited 

When I was a student in the 1960s almost all scientists believed we are alone in the universe. The search for intelligent life beyond Earth was ridiculed; one might as well have professed an interest in looking for fairies. The focus of skepticism concerned the origin of life, which was widely assumed to have been a chemical fluke of such incredibly low probability it would never have happened twice. “The origin of life appears at the moment to be almost a miracle,” was the way Francis Crick described it, “so many are the conditions which would have had to have been satisfied to get it going.” Jacques Monod concurred; in his 1971 book Chance and Necessity he wrote, “Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance.”

Today the pendulum has swung decisively the other way. Many distinguished scientists proclaim that the universe is teeming with life, at least some of it intelligent. The biologist Christian de Duve went so far as to call life “a cosmic imperative.” Yet the science has hardly changed. We are almost as much in the dark today about the pathway from non-life to life as Darwin was when he wrote, “It is mere rubbish thinking at present of the origin of life; one might as well think of the origin of matter.”

There is no doubt that SETI – the search for extraterrestrial intelligence – has received a huge fillip from the recent discovery of hundreds of extra-solar planets. Astronomers think there could be billions of earthlike planets in our galaxy alone. Clearly there is no lack of habitable real estate out there. But habitable implies inhabited only if life actually arises.

I am often asked how likely it is that we will find intelligent life beyond Earth. The question is meaningless. Because we don’t know the process that transformed a mish-mash of chemicals into a living cell, with all its staggering complexity, it is impossible to calculate the probability that it will happen. You can’t estimate the odds of an unknown process. Astrobiologists, however, seem more preoccupied with the chances that microbial life will eventually evolve intelligence. Although biologists can’t do the math on that either, at least they understand the process; it is Darwinian evolution. But this is to put the cart before the horse. The biggest uncertainty surrounds the first step—getting the microbes in the first place.

Carl Sagan once remarked that the origin of life can’t be that hard or it would not have popped up so quickly once Earth became hospitable. It’s true that we can trace the presence of life on Earth back 3.5 billion years. But Sagan’s argument ignores the fact that we are a product of the very terrestrial biology being studied. Unless life on Earth had started quickly, humans would not have evolved before the sun became too hot and fried our planet to a crisp. Because of this unavoidable selection bias, we can’t draw any statistical significance from a sample of one.

Another common argument is that the universe is so vast there just has to be life out there somewhere. But what does that statement mean? If we restrict attention to the observable universe there are probably 10²³ planets. Yes, that’s a big number. But it is dwarfed by the odds against forming even simple organic molecules by random chance alone. If the pathway from chemistry to biology is long and complicated, it may well be that less than one in a trillion trillion planets ever spawns life.
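To make the scale mismatch concrete, here is a back-of-envelope sketch. The per-planet probability used below is purely hypothetical, chosen only to match the "one in a trillion trillion" figure; nobody knows the real value, which is the article's point.

```python
import math

# N_PLANETS is the rough count quoted in the text; P_LIFE is a purely
# HYPOTHETICAL per-planet probability of abiogenesis (one in a trillion
# trillion), used only to show how 10^23 trials can still come up empty.
N_PLANETS = 1e23
P_LIFE = 1e-24

expected = N_PLANETS * P_LIFE        # expected number of life-bearing planets
p_any = 1 - math.exp(-expected)      # Poisson chance of at least one success
print(round(expected, 3), round(p_any, 3))
```

Even with 10²³ planets, the expected number of successes is 0.1, and the chance that life arises anywhere at all is under 10 percent.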

Affirmations that life is widespread are founded on a tacit assumption that biology is not the upshot of random chemical reactions, but the product of some sort of directional self-organization that favors the living state over others—a sort of life principle at work in nature. There may be such a principle, but if so we have found no evidence for it yet.

Maybe we don’t need to look far. If life really does pop up readily, as Sagan suggested, then it should have started many times on our home planet. If there were multiple origins of life on Earth, the microbial descendants of another genesis could be all around us, forming a sort of shadow biosphere. Nobody has seriously looked under our noses for life as we do not know it. It would take the discovery of just a single “alien” microbe to settle the matter.


Major Cell Phone Radiation Study Reignites Cancer Questions

Exposure to radio-frequency radiation linked to tumor formation in rats.

Federal scientists released partial findings Friday from a $25-million animal study that tested the possibility of links between cancer and chronic exposure to the type of radiation emitted from cell phones and wireless devices. The findings, which chronicle an unprecedented number of rodents subjected to a lifetime of electromagnetic radiation starting in utero, present some of the strongest evidence to date that such exposure is associated with the formation of rare cancers in at least two cell types in the brains and hearts of rats. The results, which were posted on a prepublication Web site run by Cold Spring Harbor Laboratory, are poised to reignite controversy about how such everyday exposure might affect human health.

Researchers at the National Toxicology Program (NTP), a federal interagency group under the National Institutes of Health, led the study. They chronically exposed rodents to carefully calibrated radio-frequency (RF) radiation levels designed to roughly emulate what humans with heavy cell phone use or exposure could theoretically experience in their daily lives. The animals were placed in specially built chambers that dosed their whole bodies with varying amounts and types of this radiation for approximately nine hours per day throughout their two-year life spans. “This is by far—far and away—the most carefully done cell phone bioassay, a biological assessment. This is a classic study that is done for trying to understand cancers in humans,” says Christopher Portier, a retired head of the NTP who helped launch the study and still sometimes works for the federal government as a consultant scientist. “There will have to be a lot of work after this to assess if it causes problems in humans, but the fact that you can do it in rats will be a big issue. It actually has me concerned, and I’m an expert.”

More than 90 percent of American adults use cell phones. Relatively little is known about their safety, however, because current exposure guidelines are based largely on knowledge about acute injury from thermal effects, not long-term, low-level exposure. The International Agency for Research on Cancer in 2011 classified RF radiation as a possible human carcinogen. But data from human studies has been “inconsistent,” the NTP has said on its website. Such studies are also hampered by the realities of testing in humans, such as recall bias—meaning cancer patients have to try to remember their cell phone use from years before, and how they held their handsets. Those data gaps prompted the NTP to engage in planning these new animal studies back in 2009.

The researchers found that as the thousands of rats in the new study were exposed to greater intensities of RF radiation, more of them developed rare forms of brain and heart cancer that could not be easily explained away, exhibiting a direct dose–response relationship. Overall, the incidence of these rare tumors was still relatively low, which would be expected with rare tumors in general, but the incidence grew with greater levels of exposure to the radiation. Some of the rats had glioma—a tumor of the glial cells in the brain—or schwannoma of the heart. Furthering concern about the findings: In prior epidemiological studies of humans and cell phone exposure, both types of tumors have also cropped up as associations.

In contrast, none of the control rats—those not exposed to the radiation—developed such tumors. But complicating matters was the fact that the findings were mixed across sexes: More such lesions were found in male rats than in female rats. The tumors in the male rats “are considered likely the result of whole-body exposure” to this radiation, the study authors wrote. And the data suggests the relationship was strongest between the RF exposure and the lesions in the heart, rather than the brain: Cardiac schwannomas were observed in male rats in all exposed groups, the authors note. But no “biologically significant effects were observed in the brain or heart of female rats regardless of modulation.” Based on these findings, Portier said that this is not just an associated finding—but that the relationship between radiation exposure and cancer is clear. “I would call it a causative study, absolutely. They controlled everything in the study. It’s [the cancer] because of the exposure.”

Earlier studies had never found that this type of radiation was associated with the formation of these cancers in animals at all. But none of those studies followed as many animals, for as long, or at such high exposure intensities, says Ron Melnick, a scientist who helped design the study and is now retired from the NTP.

The new results, published on the Web site bioRxiv, involved experiments on multiple groups of 90 rats. The study was designed to give scientists a better sense of the magnitude of exposure that would be associated with cancer in rodents. In the study rats were exposed to RF at 900 megahertz. There were three test groups for each sex, tested at different radiation intensities (1.5, three and six watts per kilogram, or W/kg), and one control group. (The lowest-intensity level roughly approximates the level allowed for U.S. cell phones, which is 1.6 W/kg.)  “There are only 90 animals per group, so because there is a trend—and this is the purpose of these assays where you do multiple doses you extrapolate downward and calculate a risk for humans from those trends—so that information is useful. Probably what caused cancer at the high doses will cause cancer at lower doses but to a lesser degree,” Portier says.
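The downward extrapolation Portier describes can be sketched numerically. The incidence fractions below are invented placeholders, not the study's data; the sketch only shows the mechanics of fitting a dose-response trend and reading off the implied risk at the human exposure limit.

```python
# HYPOTHETICAL illustration of linear downward extrapolation from a
# dose-response trend; the incidence numbers are invented, not the study's.
doses = [0.0, 1.5, 3.0, 6.0]             # W/kg, the study's exposure groups
incidence = [0.00, 0.011, 0.022, 0.033]  # hypothetical tumor fraction per group

# Ordinary least-squares line through the (dose, incidence) points
n = len(doses)
mean_d = sum(doses) / n
mean_i = sum(incidence) / n
slope = (sum((d - mean_d) * (i - mean_i) for d, i in zip(doses, incidence))
         / sum((d - mean_d) ** 2 for d in doses))
intercept = mean_i - slope * mean_d

human_limit = 1.6                        # W/kg, the U.S. limit for phones
risk_at_limit = slope * human_limit + intercept
print(round(risk_at_limit, 4))           # implied incidence at the human limit
```

The assumption doing the work here is linearity: that whatever caused cancer at high doses causes proportionally less at lower doses, which is exactly the premise Portier states.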

Rodents across all the test groups were chronically exposed to RF for approximately nine hours spread out over the course of the day. (Their entire bodies were exposed because people are exposed to such radiation beyond their heads, especially when they carry their phones in their pockets or store them in their bras, says John Bucher, the associate director of the NTP.) During the study the rats were able to run around in their cages, and to eat and sleep as usual. The experiments also included both types of modulations emitted from today’s cell phones: Code Division Multiple Access and Global System for Mobile. (Modulations are the way the information is carried, so although the total radiation levels were roughly the same across both types, there were differences in how radiation is emitted from the antenna—either a higher exposure for a relatively short time or a lower exposure for a longer time.) Overall, there was no statistically significant difference between the number of tumors that developed in the animals exposed to CDMA versus GSM modulations. With both modulations and tumor types, there was also a statistically significant trend upward—meaning the incidence increased with more radiation exposure. Yet, drilling down into the data, in the male rats exposed to GSM-modulated RF radiation the number of brain tumors at all levels of exposure was not statistically different than in control males—those who had no exposure at all. “The trend here is important. The question is, ‘Should one be concerned?’ The answer is clearly ‘Yes.’ But it raises a number of questions that couldn’t be fully answered,” says David Carpenter, a public health clinician and the director of the Institute for Health and the Environment at the University at Albany, S.U.N.Y.

The findings are not definitive, and there were other confusing findings that scientists cannot explain—including that male rats exposed to the radiation seemed to live longer than those in the control group. “Overall we feel that the tumors are likely related to the exposures,” says Bucher, but such unanswered questions “have been the subject of very intense discussions here.”

The NTP released the partial findings on Friday after an online publication called Microwave News reported them earlier this week. The program will still be putting out other results about the work in rats and additional findings about similar testing conducted in mice. The NIH told Scientific American in a statement, “This study in mice and rats is under review by additional experts. It is important to note that previous human, observational data collected in earlier, large-scale population-based studies have found limited evidence of an increased risk for developing cancer from cell phone use.” Still, the NTP was clearly expecting these findings to carry some serious weight: Ahead of Friday’s publication the NTP said on its Web site that the study (and prior work leading to these experiments) would “provide critical information regarding the safety of exposure to radio-frequency radiation and strengthen the science base for determining any potential health effects in humans.”

In response to media queries, cell phone industry group CTIA–The Wireless Association issued a statement Friday saying that it and the wireless industry are still reviewing the study’s findings. “Numerous international and U.S. organizations including the U.S. Food and Drug Administration, World Health Organization and American Cancer Society have determined that the already existing body of peer-reviewed and published studies shows that there are no established health effects from radio frequency signals used in cellphones,” the CTIA statement said.

The Federal Communications Commission, which had been briefed by NIH officials, told Scientific American in a statement, “We are aware that the National Toxicology Program is studying this important issue.  Scientific evidence always informs FCC rules on this matter. We will continue to follow all recommendations from federal health and safety experts including whether the FCC should modify its current policies and RF exposure limits.”

This animal study was designed primarily to answer questions about cancer risks humans might experience when they use phones themselves, as opposed to smaller levels of exposure from wireless devices in the workplace or from living or working near cell phone towers. But it may have implications for those smaller levels as well, Portier says.

The findings shocked some scientists who had been closely tracking the study. “I was surprised because I had thought it was a waste of money to continue to do animal research in this area. There had been so many studies before that had pretty consistently not shown elevations in cancer. In retrospect the reason for that is that nobody maintained a sufficient number of animals for a sufficient period of time to get results like this,” Carpenter says.

Exposing rodents to radiation for this type of experiment is a tricky business. First, scientists need to be able to calculate exactly how much the rats should be exposed to relative to humans. Too much exposure would not be a good proxy for human use. And with finely calculated low-level exposure rates, scientists still need to be sure they are not going to heat the animals enough to kill them or to cause other health problems. (Subsequent work will be published on the animals’ temperatures.)

The finding that animals exposed to nonionizing radiation (like that emitted by cell phones) went on to develop tumors, even though the exposure did not significantly raise their body temperatures, was “important” to release, Bucher says.

There are safety steps individuals can take, Carpenter says. Using the speakerphone, keeping the phone on the desk instead of on the body and using a wired headset whenever possible would help limit RF exposure. “We are certainly not going to go back to a pre-wireless age,” he says. “But there are a number of ways to reduce exposure, particularly among sensitive populations.”

Neandertals Built Cave Structures–and No One Knows Why

Walls of stalagmites in a French cave might have had a domestic or a ceremonial use.

Neanderthals built one of the world’s oldest constructions—176,000-year-old semicircular walls of stalagmites in the bowels of a cave in southwest France. The walls are currently the best evidence that Neanderthals built substantial structures and ventured deep into caves, but researchers are wary of concluding much more.

“The big question is why they made it,” says Jean-Jacques Hublin, a palaeoanthropologist at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, who was not involved in the study, which is published online in Nature on May 25. “Some people will come up with interpretations of ritual or religion or symbolism. Why not? But how to prove it?”

Speleologists first discovered the structures in Bruniquel Cave in the early 1990s. They are located about a third of a kilometre from the cave entrance, through a narrow passage that at one point requires crawling on all fours. Archaeologists later found a burnt bone from an herbivore or cave bear nearby and could detect no radioactive carbon left in it—a sign that the bone was older than 50,000 years, the limit of carbon dating. But when the archaeologist leading the excavation died in 1999, work stopped.

Then a few years ago, Sophie Verheyden, a palaeoclimatologist at the Royal Belgian Institute of Natural Sciences in Brussels and a keen speleologist, became curious about the cave after buying a holiday home nearby. She assembled a team of archaeologists, geochronologists and other experts to take a closer look at the mysterious structures.


The six structures are made of about 400 large, broken-off stalagmites, arranged in semi-circles up to 6.7 metres wide. The researchers think that the pieces were once stacked up to form rudimentary walls. All have signs of burning, suggesting that fires were made within the walls. By analysing calcite accreted on the stalagmites and stumps since they were broken off, the team determined that the structures were made 174,400 to 178,600 years ago.

“It’s obvious when you see it, that it’s not natural,” says Dominique Genty, a geoscientist at the Institut Pierre-Simon Laplace in Gif-sur-Yvette who co-led the study with Verheyden and archaeologist Jacques Jaubert, at the University of Bordeaux, in France. Their team found no signs that cave bears had hibernated near the structures, making it unlikely that bears broke the stalagmites off themselves.

The researchers have so far found no remains of early humans, stone tools or other signs of occupation, but they think that Neanderthals made the structures, because no other hominins are known in western Europe at that time. “So far, it’s difficult to imagine that it’s not human made, and I don’t imagine any natural agent creating something like that,” Hublin agrees.


But Harold Dibble, an archaeologist at the University of Pennsylvania in Philadelphia, isn’t so sure. “When they say there’s no evidence of cave bears in this spot, maybe they’re looking at the evidence for cave bears,” he says. The authors could make a stronger case for Neanderthals if they can show, for instance, that the stalagmite pieces are uniform in size or shape and therefore selected.

If Neanderthals did build the structures, it’s not at all clear why. “It’s a big mystery,” says Genty, whose team speculates that their purpose may have ranged from the spiritual to the more domestic. Evidence for symbolism among Neanderthals is limited, ranging from etchings on a cave wall to eagle talons possibly used as jewelry.

“To me, constructing some sort of structure—things a lot of animals do, including chimps—and equating that with modern cultural behaviour is quite a leap,” says Dibble.

Marie Soressi, an archaeologist at Leiden University in the Netherlands, says that it is no surprise that Neanderthals living 176,000 years ago had the brains to stack stalagmites. They made complex stone tools and even used fire to forge specialized glues.

More surprising is the revelation that some ventured into deep, dark spaces, says Soressi, who wrote a News and Views article for Nature that accompanies the report. “I would not have expected that, and I think it immediately changes the way we are going to investigate the underground in the future,” she says.

Has a Hungarian Physics Lab Found a Fifth Force of Nature?

Some theorists say a radioactive decay anomaly could imply a fundamental new force.

A laboratory experiment in Hungary has spotted an anomaly in radioactive decay that could be the signature of a previously unknown fifth fundamental force of nature, physicists say—if the finding holds up.

Attila Krasznahorkay at the Hungarian Academy of Sciences’ Institute for Nuclear Research in Debrecen, Hungary, and his colleagues reported their surprising result in 2015 on the arXiv preprint server, and this January in the journal Physical Review Letters. But the report – which posited the existence of a new, light boson only 34 times heavier than the electron—was largely overlooked.

Then, on April 25, a group of US theoretical physicists brought the finding to wider attention by publishing its own analysis of the result on arXiv. The theorists showed that the data didn’t conflict with any previous experiments—and concluded that it could be evidence for a fifth fundamental force. “We brought it out from relative obscurity,” says Jonathan Feng, at the University of California, Irvine, the lead author of the arXiv report.

Four days later, two of Feng’s colleagues discussed the finding at a workshop at the SLAC National Accelerator Laboratory in Menlo Park, California. Researchers there were sceptical but excited about the idea, says Bogdan Wojtsekhowski, a physicist at the Thomas Jefferson National Accelerator Facility in Newport News, Virginia. “Many participants in the workshop are thinking about different ways to check it,” he says. Groups in Europe and the United States say that they should be able to confirm or rebut the Hungarian experimental results within about a year.


Gravity, electromagnetism and the strong and weak nuclear forces are the four fundamental forces known to physics—but researchers have made many as-yet unsubstantiated claims of a fifth. Over the past decade, the search for new forces has ramped up because of the inability of the standard model of particle physics to explain dark matter—an invisible substance thought to make up more than 80% of the Universe’s mass. Theorists have proposed various exotic-matter particles and force-carriers, including “dark photons”, by analogy to conventional photons that carry the electromagnetic force.

Krasznahorkay says his group was searching for evidence of just such a dark photon – but Feng’s team think they found something different. The Hungarian team fired protons at thin targets of lithium-7, which created unstable beryllium-8 nuclei that then decayed and spat out pairs of electrons and positrons. According to the standard model, physicists should see that the number of observed pairs drops as the angle separating the trajectory of the electron and positron increases. But the team reported that at about 140°, the number of such emissions jumps—creating a ‘bump’ when the number of pairs are plotted against the angle—before dropping off again at higher angles.


Krasznahorkay says that the bump is strong evidence that a minute fraction of the unstable beryllium-8 nuclei shed their excess energy in the form of a new particle, which then decays into an electron–positron pair. He and his colleagues calculate the particle’s mass to be about 17 megaelectronvolts (MeV).
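The link between a bump near 140° and a roughly 17-MeV mass can be sketched with textbook two-body kinematics. For highly relativistic electron-positron pairs the invariant mass is about sqrt(2·E1·E2·(1 − cos θ)); the symmetric 9-MeV-per-lepton split below is an illustrative assumption, not a number taken from the paper.

```python
import math

# Back-of-envelope kinematics, not from the paper. For highly relativistic
# electron-positron pairs the pair's invariant mass is approximately
#   m = sqrt(2 * E1 * E2 * (1 - cos(theta)))
# where theta is the opening angle between the two leptons.
E1 = E2 = 9.0                   # MeV per lepton (assumed symmetric split)
theta = math.radians(140.0)     # opening angle where the excess appears

m = math.sqrt(2 * E1 * E2 * (1 - math.cos(theta)))
print(round(m, 1))              # roughly 17 MeV, consistent with the inferred mass
```

This is why a new particle of definite mass shows up as a pile-up of pairs at one characteristic opening angle rather than a smooth fall-off.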

“We are very confident about our experimental results,” says Krasznahorkay. He says that the team has repeated its test several times in the past three years, and that it has eliminated every conceivable source of error. Assuming it has, the odds of seeing such an extreme anomaly if there were nothing unusual going on are about 1 in 200 billion, the team says.
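For context, those odds can be translated into the "sigma" significance physicists usually quote. This is a generic statistical conversion, not part of the team's analysis; Python's math module provides erfc but no inverse, so the sketch inverts the one-sided Gaussian tail by bisection.

```python
import math

# One-sided Gaussian tail probability: P(Z > z) = erfc(z / sqrt(2)) / 2.
# math.erfc has no stdlib inverse, so recover z from p by bisection.
def sigma_from_p(p):
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if math.erfc(mid / math.sqrt(2)) / 2 > p:
            lo = mid            # tail still too fat: need a larger z
        else:
            hi = mid
    return (lo + hi) / 2

p = 1 / 200e9                   # "about 1 in 200 billion"
print(round(sigma_from_p(p), 1))  # about 6.8 sigma, past the 5-sigma discovery bar
```

By comparison, the conventional particle-physics discovery threshold of 5 sigma corresponds to odds of roughly 1 in 3.5 million.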

Feng and colleagues say that the 17-MeV particle is not a dark photon. After analysing the anomaly and looking for properties consistent with previous experimental results, they concluded that the particle could instead be a “protophobic X boson”. Such a particle would carry an extremely short-range force that acts over distances only several times the width of an atomic nucleus. And where a dark photon (like a conventional photon) would couple to electrons and protons, the new boson would couple to electrons and neutrons. Feng says that his group is currently investigating other kinds of particles that could explain the anomaly. But the protophobic boson is “the most straightforward possibility”, he says.


Jesse Thaler, a theoretical physicist at the Massachusetts Institute of Technology (MIT) in Cambridge, says that the unconventional coupling proposed by Feng’s team makes him sceptical that the new particle exists. “It certainly isn’t the first thing I would have written down if I were allowed to augment the standard model at will,” he says. But he adds that he is “paying attention” to the proposal. “Perhaps we are seeing our first glimpse into physics beyond the visible Universe,” he says.

Researchers should not have to wait long to find out whether a 17-MeV particle really does exist. The DarkLight experiment at the Jefferson Laboratory is designed to search for dark photons with masses of 10–100 MeV, by firing electrons at a hydrogen gas target. Now, says collaboration spokesperson Richard Milner of MIT, it will target the 17-MeV region as a priority, and within about a year, could either find the proposed particle or set stringent limits on its coupling with normal matter.

Also searching for the proposed boson will be the LHCb experiment at CERN, Europe’s particle-physics lab near Geneva, which will study quark–antiquark decays, and two experiments that will fire positrons at a fixed target—one at the INFN Frascati National Laboratory near Rome, due to switch on in 2018, and the other at the Budker Institute of Nuclear Physics in the Siberian city of Novosibirsk, Russia.

Rouven Essig, a theoretical physicist at Stony Brook University in New York and one of the organizers of the SLAC workshop, thinks that the boson’s “somewhat unexpected” properties make a confirmation unlikely. But he welcomes the tests. “It would be crazy not to do another experiment to check this result,” he says. “Nature has surprised us before!”

Patients with Peritoneal Carcinomatosis from Gastric Cancer Treated with Cytoreductive Surgery and Hyperthermic Intraperitoneal Chemotherapy: Is Cure a Possibility?



Background: Peritoneal carcinomatosis is an increasingly common finding in gastric carcinoma. Such patients were previously considered terminal, and median survival was poor. The role of cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) in this setting remains highly debated.

Objective: The aim of this study was to evaluate the long-term outcomes associated with CRS and HIPEC, and to define prognostic factors for cure, if possible.

Methods: All patients with gastric carcinomatosis from five French institutions who underwent combined complete CRS and HIPEC and had a minimum follow-up of 5 years were included in this study. Cure was defined as a disease-free interval of more than 5 years from the last treatment until the last follow-up.

Results: Of the 81 patients who underwent CRS and HIPEC from 1989 to 2009, 59 had a completeness of cytoreduction score (CCS) of 0 (complete macroscopic resection), and the median Peritoneal Cancer Index (PCI) score was 6. Mitomycin C was the most commonly used drug during HIPEC (88 %). The 5-year overall survival (OS) rate was 18 %, with nine patients still disease-free at 5 years, for a cure rate of 11 %. All ‘cured’ patients had a PCI score below 7 and a CCS of 0. Factors associated with improved OS on multivariate analysis were synchronous resection (p = 0.02), a lower PCI score (p = 0.12), and the CCS (p = 0.09).

Conclusions: The cure rate of 11 % in patients with gastric carcinomatosis, a disease previously deemed terminal, emphasizes that CRS and HIPEC should be considered in highly selected patients (low disease extent and complete CRS).
