Light shed on how genes shape face


Scientists are starting to understand why one person’s face can look so different from another’s.

Working on mice, researchers have identified thousands of small regions of DNA that influence the way facial features develop.

The study also shows that tweaks to genetic material can subtly alter face shape.


The findings, published in Science, could also help researchers to learn how facial birth defects arise.

The researchers said that although the work was carried out on animals, the human face was likely to develop in the same way.

Professor Axel Visel, from the Joint Genome Institute at the Lawrence Berkeley National Laboratory in California, told BBC News: “We’re trying to find out how these instructions for building the human face are embedded in human DNA.

“Somewhere in there there must be that blueprint that defines what our face looks like.”

Switch off

The international team has found more than 4,000 “enhancers” in the mouse genome that appear to play a role in facial appearance.

These short stretches of DNA act like switches, turning genes on and off. And for 200 of these, the researchers have identified how and where they work in developing mice.

Prof Visel said: “In the mouse embryos we can see where exactly, as the face develops, this switch turns on the gene that it controls.”

Transgenic mice revealed how genes affected the face during development

The scientists also looked at what happened when three of these genetic switches were removed from mice.

“These mice looked pretty normal, but it is really hard for humans to see differences in the face of mice,” explained Prof Visel.

“The way we can get around this is to use CT scans to study the shapes of the skulls of these mice. We take them and scan their heads. Then we can measure the shape of the skull of these mice and we can do this in a very precise way.”

By comparing the transgenic mice with unmodified mice, the researchers found that the changes were very subtle. However, some mice developed longer or shorter skulls, while others had wider or narrower faces.

“What this really tells us is that this particular switch also plays a role in development of the skull and can affect what exactly the skull looks like,” he explained.

Designer babies?

Understanding this could also help to reveal why and how things can go wrong as embryos develop in the womb, leading to facial birth defects.

Prof Visel said: “There are many kinds of craniofacial birth defects; cleft of the lip and palate are the most common ones.

“And they have severe implications for the kids that are affected. They affect feeding, speech, breathing, they can require extensive surgery and they have psychological implications.”

While some of these defects are caused by mutations in genes themselves, the researchers want to understand what role the genetic switches play.

Professor Visel added that scientists were just at the beginning of understanding the processes that shape the face, but their early results suggested it was an extremely complex process.

He said it was unlikely in the near future that DNA could be used to predict someone’s exact appearance, or that parents could alter genetic material to change the way a baby looks.

Why do some people prefer bitter drinks?


There’s been a wave of popularity for drinks like the Aperol spritz, the Negroni, and a host of cocktails flavoured with “bitters”. Why are people turning their backs on sweet cocktails in favour of a bitter taste?

The last two decades have seen an extraordinary resurgence in cocktail-making on both sides of the Atlantic, with everything from Cointreau-sweetened Cosmopolitans to sugary Mojitos being drunk in vast quantities.

But there is now a definite trend towards bitter drinks. People are ordering whisky or gin-based drinks paired with vermouths. And there is growing interest in the US, UK and other European nations in Italian amari.

These complex, herbal, bittersweet drinks, with names like Averna, Ramazzotti, Montenegro and Fernet Branca, are usually consumed as aperitivi or digestivi – drinks thought to either encourage the appetite before dinner or help with digestion afterwards.

Bitter-tasting cocktails are having a renaissance

Their bitter mixer cousins, Cynar, Campari and Aperol, are increasingly being used in cocktails.

Aperol – based on bitter orange and rhubarb and containing classic bitter ingredients like gentian and cinchona (a source of quinine) – has rocketed in popularity in recent years following a push by owner Gruppo Campari.

Old fashioned cocktails

Sales rose 156% in the UK in 2012 and 56% in the US. This year’s figures, due to be announced soon, are expected to be even bigger. A poster campaign in the UK encourages people to try an Aperol spritz – prosecco sparkling wine and soda water mixed with Aperol.

A fundamental point of the spritz is its low alcohol content. Aperol’s slogan is “poco alcolico”, roughly meaning a little bit alcoholic.

“I think the Aperol spritz was probably the most asked-for drink in the outdoor areas of most decent bars in London this summer,” says World Duty Free mixologist Charlie McCarthy.

Laura Tallo, from Nonna’s Italian Cucina in Bath, says many British drinkers have returned after holidays in Italy, having seen certain drinks paired with tavola calda – the selection of hot, freshly-baked food.

“People are definitely beginning to embrace the Italian custom of drinking aperitifs. We have seen a definite trend emerging of people choosing classic Italian pre-dinner drinks such as an Aperol spritz, Negroni [equal parts Campari, gin and sweet vermouth], Americano or Martini,” she says.

The expert view

Brooklyn-based author Brad Thomas Parsons wrote Bitters: A Spirited History of a Classic Cure-All: “Bitters, as a term, is more like an umbrella for a variety of liquid seasonings, from citrus to aromatic to floral and beyond. There should be a bittering agent to be a proper bitters, but bitters are concentrated and not meant to be consumed on their own.

“Amaro, on the other hand – potable spirits like Cynar, Fernet, and Averna – are meant to be imbibed on their own (or as an ingredient in a drink). These are definitely bitter to the taste, ranging from the soft end of bittersweet to bracingly medicinal.”

Such drinks are not to everyone’s taste, of course. While many Italians have been brought up around the tradition of amari, they can baffle non-Italian palates at first taste.

“They say of a Negroni the first two or three sips you despise, and after you have had two or three drinks you start to like it,” says Tom Ross, bars manager of the Polpo restaurant group in London.

The taste of Fernet Branca – vaguely minty but with pungent undertones of cough medicine – is so powerful that comedian Bill Cosby constructed a seven-minute anecdote around his initial horror on encountering the drink in Italy. And yet Fernet is loved by many, being drunk with cola in Argentina and – accompanied by a separate shot of ginger beer – known as the “bartender’s handshake” in San Francisco.

One of the first recorded definitions of a cocktail was in a New York journal in 1803, which classified it as a mixture of any “spirituous liquor”, with water, sugar and “bitters”, known at the time as a bittered sling.


You can find the descendant of these traditional bitters (with the term typically referring to both singular and plural) in any decent bar in the UK or US. There’ll be a rather unusual bottle among the others – small with a yellow top and an oversized label covered in small print. It is the world’s most famous cocktail bitters, Angostura.

A matter of taste

  • Humans can detect five basic tastes – sweet, sour, bitter, salty and umami (savoury); there is some evidence that fat may be a sixth taste
  • Bitter tastes include coffee, beer, unsweetened cocoa and citrus peel
  • Sour is indicative of acidity; taste found in citrus fruit, wine, and sour milk

This bitters is the key ingredient in pink gin, the traditional officers’ cocktail in the Royal Navy. It’s also the bedrock of famous cocktails, including the Old Fashioned, beloved of Mad Men’s Don Draper, and the Manhattan. A supply shortage in 2009 caused panic throughout the world’s bartending community, according to McCarthy, and prompted bartenders to start making their own.

The current wave of speakeasy-type bars, inspired by the prohibition years in the US, has prompted interest in traditional and hitherto forgotten cocktails. This in turn has fuelled demand for more unusual bitters.

Bob Petrie, of Bob’s Bitters, started making his own in 2005 when he was approached by the Dorchester Hotel to create a range.

Traditional bitters are very complex, with aromatic flavours brought out from a combination of barks, roots, herbs, and spices by macerating them in alcohol. He looked at pairing them with the “botanicals” in gin, and came up with a range including cardamom, chocolate, coriander, ginger, grapefruit, lavender, liquorice, orange and mandarin, peppermint and vanilla.

“Single flavours are a lot easier for the barman as it gives a lot more scope,” Petrie says.


“What they can do with mine is mix and match.”

Rochdale-born US celebrity mixologist Gary Regan is another who was prompted to make his own to replicate the orange bitters in some old cocktail recipes.

“When I realised it was hard to get good ones I decided to make my own. I stole a recipe from the Gentleman’s Companion of 1939. I had about four different trials and eventually got something I liked.”

Bitter history

Angostura bark
  • Prepared by infusion or distillation, using ingredients such as angostura bark (pictured), cascarilla, cassia, gentian, orange peel, and quinine
  • Angostura bitters first sold in 1824 as cure for sea sickness and stomach maladies, named after Venezuelan town where it was formulated
  • Quinine derived from cinchona tree bark; used as treatment for malaria, lupus and arthritis and in very dilute form in tonic water and bitter lemon

“This shift towards bitter has been going on for eight years,” Regan estimates. “[But] most bitters aren’t that bitter. They taste herbal. In fact, they are a bit on the sweet side sometimes.”

Fee Brothers in Rochester, New York, has been making bitters since 1863, with a short break for prohibition. Joe and Ellen Fee are the fourth generation. Their 92-year-old father still regularly visits the plant.

Joe Fee says there has been a marked increase in sales over the past seven years. “Just about anything can be made into a bitters. They are your spice rack behind the bar.”

“I am the Willy Wonka of the cocktail mixers,” boasts Ellen Fee, who conjures up the new flavours. “I like to think in terms of what flavour is already bitter. Cocoa powder is bitter, cranberry is bitter. You add to that and make it more interesting.”

The plethora of these cocktail tinctures and potions shows the tastes of aficionados have shifted. But there are other flagbearers for a more bitter palate.

Many James Bond fans attempt to recreate the Vesper Martini from Casino Royale. Bond asks for it to be made with strong gin, vodka and the bitter Kina Lillet. Lillet took out much of the bitter quinine in the 1980s, so fans tend to use the Italian aperitif wine Cocchi Americano instead.

And the Queen has followed in her mother’s footsteps by drinking gin and Dubonnet – the quinine-bittered French aperitif. Just a few years ago, her choice was seen as unusual.

Now she’s part of an established trend.

Dino impact ‘also destroyed bees’


Scientists say there was a widespread extinction of bees 66 million years ago, at the same time as the event that killed off the dinosaurs.

Carpenter bee (Image: Sandra Rehan, University of New Hampshire)

The demise of the dinosaurs was almost certainly the result of an asteroid or comet hitting Earth.

But the extinction event was selective, affecting some groups more than others.

Writing in the journal PLoS One, the team used fossils and DNA analysis to show that one bee group suffered a serious decline at the time of this collision.

The researchers chose to study bees within the subfamily known as Xylocopinae – which includes the carpenter bees.

This was because the evolutionary history of this group could be traced back to the Cretaceous Period, when the dinosaurs still walked the Earth.

Previous studies had suggested a widespread extinction among flowering plants during the Cretaceous-Paleogene (K-Pg) extinction event 66 million years ago.

And it had long been assumed that the bees that depended upon these plants would have met the same fate.

Yet, unlike the dinosaurs, “there is a relatively poor fossil record of bees,” said the paper’s lead author Sandra Rehan, a biologist at the University of New Hampshire in Durham, US. This has made the confirmation of such an extinction difficult.

The impact that wiped out the dinosaurs created opportunities for other animals

However, the researchers were able to use an extinct group of Xylocopinae as a calibration point for timing the dispersal of these bees.

They were also able to study fossils of flowers that had evolved traits allowing them to be pollinated by bee relatives of the Xylocopinae.

“The data told us something major was happening in four different groups of bees at the same time,” said Dr Rehan.

“And it happened to be the same time as the dinosaurs went extinct.”

The findings of this study could have implications for today’s concern about the loss in diversity of bees, a group pivotal to agriculture and biodiversity.

“Understanding extinctions and the effects of declines in the past can help us understand the pollinator decline and the global crisis in pollinators today,” Dr Rehan explained.

The Science of Love: How Positivity Resonance Shapes the Way We Connect


The neurobiology of how the warmest emotion blurs the boundaries between you and not-you.

We kick-started the year with some of history’s most beautiful definitions of love. But timeless as their words might be, the poets and the philosophers have a way of escaping into the comfortable detachment of the abstract and the metaphysical, leaving open the question of what love really is on an unglamorously physical, bodily, neurobiological level — and how that might shape our experience of those lofty abstractions. That’s precisely what psychologist Barbara Fredrickson, who has been studying positive emotions for decades, explores in the unfortunately titled but otherwise excellent Love 2.0: How Our Supreme Emotion Affects Everything We Feel, Think, Do, and Become (UK; public library). Using both data from her own lab and ample citations of other studies, Fredrickson dissects the mechanisms of love to reveal both its mythologies and its practical mechanics.

She begins with a definition that parallels Dorion Sagan’s scientific meditation on sex:

First and foremost, love is an emotion, a momentary state that arises to infuse your mind and body alike. Love, like all emotions, surfaces like a distinct and fast-moving weather pattern, a subtle and ever-shifting force. As for all positive emotions, the inner feeling love brings you is inherently and exquisitely pleasant — it feels extraordinarily good, the way a long, cool drink of water feels when you’re parched on a hot day. Yet far beyond feeling good, a micro-moment of love, like other positive emotions, literally changes your mind. It expands your awareness of your surroundings, even your sense of self. The boundaries between you and not-you — what lies beyond your skin — relax and become more permeable. While infused with love you see fewer distinctions between you and others. Indeed, your ability to see others — really see them, wholeheartedly — springs open. Love can even give you a palpable sense of oneness and connection, a transcendence that makes you feel part of something far larger than yourself.

[…]

Perhaps counterintuitively, love is far more ubiquitous than you ever thought possible for the simple fact that love is connection. It’s that poignant stretching of your heart that you feel when you gaze into a newborn’s eyes for the first time or share a farewell hug with a dear friend. It’s even the fondness and sense of shared purpose you might unexpectedly feel with a group of strangers who’ve come together to marvel at a hatching of sea turtles or cheer at a football game. The new take on love that I want to share with you is this: Love blossoms virtually anytime two or more people — even strangers — connect over a shared positive emotion, be it mild or strong.

Fredrickson zooms in on three key neurobiological players in the game of love — your brain, your levels of the hormone oxytocin, and your vagus nerve, which connects your brain to the rest of your body — and examines their interplay as the core mechanism of love, summing up:

Love is a momentary upwelling of three tightly interwoven events: first, a sharing of one or more positive emotions between you and another; second, a synchrony between your and the other person’s biochemistry and behaviors; and third, a reflected motive to invest in each other’s well-being that brings mutual care.

She shorthands this trio “positivity resonance” — a concept similar to limbic revision — and likens the process to a mirror in which your and your partner’s emotions come into sync, reflecting and reinforcing one another:

This is no ordinary moment. Within this mirrored reflection and extension of your own state, you see far more. A powerful back-and-forth union of energy springs up between the two of you, like an electric charge.

What makes “positivity resonance” so compelling a concept and so arguably richer than traditional formulations of “love” is precisely this back-and-forthness and the inclusiveness implicit to it. Fredrickson cautions against our solipsistic view of love, common in the individualistic cultures of the West:

Odds are, if you were raised in a Western culture, you think of emotions as largely private events. You locate them within a person’s boundaries, confined within their mind and skin. When conversing about emotions, your use of singular possessive adjectives betrays this point of view. You refer to ‘my anxiety,’ ‘his anger,’ or ‘her interest.’ Following this logic, love would seem to belong to the person who feels it. Defining love as positivity resonance challenges this view. Love unfolds and reverberates between and among people — within interpersonal transactions — and thereby belongs to all parties involved, and to the metaphorical connective tissue that binds them together, albeit temporarily. … More than any other positive emotion, then, love belongs not to one person, but to pairs or groups of people. It resides within connections.

Citing a range of studies, Fredrickson puts science behind what Anaïs Nin poetically and intuitively cautioned against more than half a century ago:

People who suffer from anxiety, depression, or even loneliness or low self-esteem perceive threats far more often than circumstances warrant. Sadly, this overalert state thwarts both positivity and positivity resonance. Feeling unsafe, then, is the first obstacle to love.

But perhaps the insight hardest to digest in the age of artificial semi-connectedness — something Nin also cautioned against a prescient few decades before the internet — has to do with the necessary physicality of love:

Love’s second precondition is connection, true sensory and temporal connection with another living being. You no doubt try to ‘stay connected’ when physical distance keeps you and your loved ones apart. You use the phone, e-mail, and increasingly texts or Facebook, and it’s important to do so. Yet your body, sculpted by the forces of natural selection over millennia, was not designed for the abstractions of long-distance love, the XOXs and LOLs. Your body hungers for more.

[…]

True connection is one of love’s bedrock prerequisites, a prime reason that love is not unconditional, but instead requires a particular stance. Neither abstract nor mediated, true connection is physical and unfolds in real time. It requires sensory and temporal copresence of bodies. The main mode of sensory connection, scientists contend, is eye contact. Other forms of real-time sensory contact — through touch, voice, or mirrored body postures and gestures — no doubt connect people as well and at times can substitute for eye contact. Nevertheless, eye contact may well be the most potent trigger for connection and oneness.

[…]

Physical presence is key to love, to positivity resonance.

While Fredrickson argues for positivity resonance as a phenomenon that can blossom between any set of people, not just lovers, she takes care to emphasize the essential factor that separates intimate love from other love: time.

Love is a many-splendored thing. This classic saying is apt, not only because love can emerge from the shoots of any other positive emotion you experience, be it amusement, serenity, or gratitude, but also because of your many viable collaborators in love, ranging from your sister to your soul mate, your newborn to your neighbor, even someone you’ve never met before.

[…]

At the level of positivity resonance, micro-moments of love are virtually identical regardless of whether they bloom between you and a stranger or you and a soul mate; between you and an infant or you and your lifelong best friend. The clearest difference between the love you feel with intimates and the love you feel with anyone with whom you share a connection is its sheer frequency. Spending more total moments together increases your chances to feast on micro-moments of positivity resonance. These micro-moments change you.

[…]

Whereas the biological synchrony that emerges between connected brains and bodies may be comparable no matter who the other person may be, the triggers for your micro-moments of love can be wholly different with intimates. The hallmark feature of intimacy is mutual responsiveness, that reassuring sense that you and your soul mate — or you and your best friend — really ‘get’ each other. This means that you come to your interactions with a well-developed understanding of each other’s inner workings, and you use that privileged knowledge thoughtfully, for each other’s benefit. Intimacy is that safe and comforting feeling you get when you can bask in the knowledge that this other person truly understands and appreciates you. You can relax in this person’s presence and let your guard down. Your mutual sense of trust, perhaps reinforced by your commitments of loyalty to each other, allows each of you to be more open with each other than either of you would be elsewhere.

UNC child neurologist finds potential route to better treatments for Fragile X, autism


When you experience something, neurons in the brain send chemical signals called neurotransmitters across synapses to receptors on other neurons. How well that process unfolds determines how you comprehend the experience and what behaviors might follow. In people with Fragile X syndrome, a third of whom are eventually diagnosed with Autism Spectrum Disorder, that process is severely hindered, leading to intellectual impairments and abnormal behaviors.

In a study published in the online journal PLoS One, a team of UNC School of Medicine researchers led by pharmacologist C.J. Malanga, MD, PhD, describes a major reason why current medications only moderately alleviate Fragile X symptoms. Using mouse models, Malanga discovered that three specific drugs affect three different kinds of neurotransmitter receptors that all seem to play roles in Fragile X. As a result, current Fragile X drugs have limited benefit because most of them only affect one receptor.

“There likely won’t be one magic bullet that really helps people with Fragile X,” said Malanga, an associate professor in the Department of Neurology. “It’s going to take therapies acting through different receptors to improve their behavioral symptoms and intellectual outcomes.”

Nearly one million people in the United States have Fragile X Syndrome, which is the result of a single mutated gene called FMR1. In people without Fragile X, the gene produces a protein that helps maintain the proper strength of synaptic communication between neurons. In people with Fragile X, FMR1 doesn’t produce the protein, the synaptic connection weakens, and there’s a decrease in synaptic input, leading to mild to severe learning disabilities and behavioral issues, such as hyperactivity, anxiety, and sensitivity to sensory stimulation, especially touch and noise.

More than two decades ago, researchers discovered that – in people with mental and behavior problems – a receptor called mGluR5 could not properly regulate the effect of the neurotransmitter glutamate. Since then, pharmaceutical companies have been trying to develop drugs that target glutamate receptors. “It’s been a challenging goal,” Malanga said. “No one so far has made it work very well, and kids with Fragile X have been illustrative of this.”

But there are other receptors that regulate other neurotransmitters in similar ways to mGluR5. And there are drugs already available for human use that act on those receptors. So Malanga’s team checked how those drugs might affect mice in which the Fragile X gene has been knocked out.

By electrically stimulating specific brain circuits, Malanga’s team first learned how the mice perceived reward. The mice learned very quickly that if they pressed a lever, they were rewarded via a mild electrical stimulation. Then his team administered drugs that act on the same reward circuitry to see how they affected response patterns and other behaviors in the mice.

His team studied one drug that blocked dopamine receptors, another drug that blocked mGluR5 receptors, and another drug that blocked mAChR1, or M1, receptors. Three different types of neurotransmitters – dopamine, glutamate, and acetylcholine – act on those receptors. And there were big differences in how sensitive the mice were to each drug.

“Turns out, based on our study and a previous study we did with my UNC colleague Ben Philpot, that Fragile X mice and Angelman Syndrome mice are very different,” Malanga said. “And how the same pharmaceuticals act in these mouse models of Autism Spectrum Disorder is very different.”

Malanga’s finding suggests that not all people with Fragile X share the same biological hurdles. The same is likely true, he said, for people with other autism-related disorders, such as Rett syndrome and Angelman syndrome.

“Fragile X kids likely have very different sensitivities to prescribed drugs than do other kids with different biological causes of autism,” Malanga said.

‘We’ve reached the end of antibiotics’


‘We’ve reached the end of antibiotics’: Top CDC expert declares that ‘miracle drugs’ that have saved millions are no match against ‘superbugs’ because people have overmedicated themselves

A high-ranking official with the Centers for Disease Control and Prevention has declared in an interview with PBS that the age of antibiotics has come to an end.

‘For a long time, there have been newspaper stories and covers of magazines that talked about “The end of antibiotics, question mark?”’ said Dr Arjun Srinivasan. ‘Well, now I would say you can change the title to “The end of antibiotics, period.”’

Nightmare superbug: Srinivasan said that about 10 years ago, he began seeing outbreaks of different kinds of MRSA infections, which previously had been limited to hospitals, in schools and gyms

The associate director of the CDC sat down with Frontline over the summer for a lengthy interview about the growing problem of antibacterial resistance.

Srinivasan, who is also featured in a Frontline report called ‘Hunting the Nightmare Bacteria,’ which aired Tuesday, said that both humans and livestock have been overmedicated to such a degree that bacteria are now resistant to antibiotics.

‘We’re in the post-antibiotic era,’ he said. ‘There are patients for whom we have no therapy, and we are literally in a position of having a patient in a bed who has an infection, something that five years ago even we could have treated, but now we can’t.’

Dr Srinivasan offered an example of this notion, citing the recent case of three Tampa Bay Buccaneers players who made headlines after reportedly contracting potentially deadly MRSA infections, which until recently were largely restricted to hospitals.

About 10 years ago, however, the CDC official began seeing outbreaks of different kinds of MRSA infections in schools and gyms.

‘In hospitals, when you see MRSA infections, you oftentimes see that in patients who have a catheter in their blood, and that creates an opportunity for MRSA to get into their bloodstream,’ he said.


‘In the community, it was causing a very different type of infection. It was causing a lot of very, very serious and painful infections of the skin, which was completely different from what we would see in health care.’

With bacteria constantly evolving and developing resistance to conventional antibiotics, doctors have been forced to ‘reach back into the archives’ and ‘dust off’ older, more dangerous cures like colistin.

‘It’s very toxic,’ said Srinivasan. ‘We don’t like to use it. It damages the kidneys. But we’re forced to use it in a lot of instances.’

The expert went on, saying that the discovery of antibiotics in 1928 by Professor Alexander Fleming revolutionized medicine, allowing doctors to treat hundreds of millions of people suffering from illnesses that had been considered terminal for centuries.

 

Antibiotics also paved the way for successful organ transplants, chemotherapy, stem cell and bone marrow transplantations – all the procedures that weaken the immune system and make the body susceptible to infections.

However, the CDC official explained that people have fueled the fire of bacterial resistance through rampant overuse and misuse of antibiotics.

‘These drugs are miracle drugs, these antibiotics that we have, but we haven’t taken good care of them over the 50 years that we’ve had them,’ he told Frontline.

Srinivasan added that pharmaceutical companies are at least partially to blame for this problem, saying that they have neglected the development of new and more sophisticated antibiotics that could keep up with bacterial resistance because ‘there’s not much money to be made’ in this field.
 

FDA Recommending Schedule II Reclassification for Hydrocodone Combination Products


The FDA has announced it will recommend (www.fda.gov) to HHS that hydrocodone combination products be reclassified as Schedule II drugs. These products currently are classified as Schedule III medications. The agency said it will submit a formal recommendation package to HHS by early December.

“We anticipate that the National Institute on Drug Abuse will concur with our recommendation,” said the FDA in its announcement. “This will begin a process that will lead to a final decision by the DEA on the appropriate scheduling of these products.”

According to Alan Schwartzstein, M.D., of Oregon, Wis., chair of the AAFP Commission on Health of the Public and Science, although the change could be viewed as a burden to prescribers and patients — requiring that prescriptions be written for no more than 90 days without an option for refill — the benefits outweigh any potential negatives.

“The AAFP is very concerned about the problem of opiate misuse and pain management, and we are working actively within our organization with the FDA and other organizations to minimize opiate misuse and maximize the treatment of pain relief for our patients,” Schwartzstein told AAFP News Now. “We recognize that the lack of ability to write for refills more than 90 days will require that our members more frequently write these prescriptions, but we understand and accept that this is necessary to achieve our goals.”

Schwartzstein said the FDA move complements the conclusions published in the Academy’s 2012 position paper on the subject of opioid abuse and pain management.

“We want to work with other organizations and with our members to make sure that students and physicians are kept up-to-date on appropriate prescribing of immediate- and long-acting opiates,” he said. “It is also important that (physicians) are using (opiates) in the most effective way to manage pain without contributing to misuse of the substances.”

According to the National Safety Council (NSC; www.nsc.org), 45 U.S. citizens die each day from an unintentional overdose of prescription pain medication, and one in every 20 Americans age 12 or older reported using prescription painkillers recreationally in the past year.

Moreover, in its latest report on prescription drug abuse (www.nsc.org), the NSC says only three states adequately address the issue.

Meanwhile, the Trust for America’s Health (TFAH) reported in its recent publication, Prescription Drug Abuse: Strategies to Stop the Epidemic (healthyamericans.org), that prescription drug-related deaths now outnumber those from heroin and cocaine combined, and drug overdose deaths exceed motor vehicle-related deaths in 29 states and Washington, D.C.

“Misuse and abuse of prescription drugs costs the country an estimated $53.4 billion a year in lost productivity, medical costs and criminal justice costs, and, currently, only one in 10 Americans with a substance abuse disorder receives treatment,” TFAH said in the report.

In an interview with AAFP News Now, NSC medical adviser and family physician Donald Teater, M.D., of Clyde, N.C., said he applauds the FDA move and is pleased with the AAFP’s affirmative stance on the issue.

“(The schedule change) may present some hardships for a few people, but I think that those should be minor,” said Teater. “When you are looking at the overall picture — where greater than 16,000 people are dying every year from prescription pain medications — something has to be done. And, because hydrocodone is the most commonly prescribed opioid, it just makes sense to put more controls on it to prevent those accidental deaths.”

Teater said he believes the FDA’s move will help address the prescription drug misuse issue somewhat, but added that more needs to be done.

“(The proposed schedule change) is certainly not a magic bullet, but it will make some difference,” he said. “In the whole area of prescription drug abuse and overdose, there’s just not going to be one magic solution, but a whole bunch of small steps like this.

“I think we need to be pleased with every positive step and keep going forward.”

Asteroid miners want to turn rocks into spacecraft


Not content with sending the first tourist into space and landing NASA’s Mars rovers between them, Eric Anderson and Chris Lewicki have an outlandish plan to mine asteroids, backed by Google billionaires. But, they tell Paul Marks, that’s just the start.

Eric Anderson (left) and Chris Lewicki (right) aim to launch asteroid-spotting telescopes by 2014 (Image: Brian Smale)

Your asteroid mining company Planetary Resources is backed by the Google executives Larry Page and Eric Schmidt. How tough was it to convince them to invest?
Eric Anderson: The Google guys all like space and see the importance of developing an off-planet economy. So Larry Page and Eric Schmidt became investors. And Google’s Sergey Brin has his name down as a future customer of my space tourism company Space Adventures.

You want to put space telescopes in orbit to seek out asteroids rich in precious metals or water – and then send out robotic spacecraft to study and mine them. Are you serious?
Chris Lewicki: Yes. We’re launching the first telescopes in 18 months – and we’re actually building them ourselves in our own facility in Bellevue, Washington. We have a team of more than 30 engineers with long experience of doing this kind of thing at NASA’s Jet Propulsion Laboratory, myself included. Many of our team worked on designing and building NASA’s Curiosity rover, and I was a system engineer on the Spirit and Opportunity rovers – and flight director when we landed them on Mars.

How many asteroid-spotting telescopes will you need – and are they anything like Hubble?
EA: We’d like to put up at least 10 or 15 of them in orbit in the next five years, some of them on Virgin Galactic rockets. They’re a lot less capable than Hubble, which is a billion-dollar space vehicle the size of a school bus. Our telescopes – which we call the Arkyd 100 spacecraft – are cubes half-a-metre on a side and will cost around $1 million each, though the first one, of course, will cost much more. But when they are developed to a high level of performance, we want to print them en masse on an assembly line. They will have sub-arc-second resolution, which is just a mind-blowing imaging capability.
CL: The smaller we can make them, the less they cost to launch. Making them the size of a mini fridge, with 22-centimetre-diameter optics, hits the sweet spot between capability and launch cost.

How can you tell if an asteroid might have platinum, gold or water deposits?
CL: We’ll characterise them by studying their albedo – the amount of light that comes from them – and then with the appropriate instruments we can start to classify them, as to what type of asteroid they are, whether they are stony, metallic or carbonaceous. We’re starting with optical analyses though we could use swarms of Arkyd 100s with spectroscopic, infrared or ultraviolet sensors, too, if needed.
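To make the first-pass albedo screen concrete, here is a minimal sketch of how such a classification might look in code. The albedo ranges are rough values from the general asteroid literature, assumed for illustration rather than taken from Planetary Resources, and their overlap is exactly why Lewicki says spectroscopy is needed to separate the classes.

```python
# Illustrative first-pass screen: guess an asteroid's broad taxonomic class
# from its geometric albedo alone. Thresholds are rough literature values
# assumed for illustration - not Planetary Resources' actual criteria.

def guess_class(albedo: float) -> str:
    """Crude class guess from geometric albedo; real surveys add spectra."""
    if albedo < 0.10:
        return "C-type? (dark, carbonaceous - a promising water candidate)"
    if albedo < 0.25:
        return "S- or M-type? (stony or metallic - spectra needed to tell)"
    return "bright/other (e.g. E-type)"

for a in (0.04, 0.15, 0.40):
    print(f"albedo {a:.2f} -> {guess_class(a)}")
```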

Once you spot a likely asteroid, what then?
EA: We’ll send other spacecraft out to intercept and study them. They will be rocket-assisted versions of the telescope – the Arkyd 200 for nearer Earth space, and the Arkyd 300 which is the same except that it will have a deep space communications capability. We’ll make sure we understand every cubic inch of that asteroid. We’ll find out where it is, what its inertia is, what its spin rate is, whether it has been burned, impacted, or is carbonaceous or metallic. We’ll know that asteroid inside and out before we go there and mine it.

Will you be able to tell, remotely, if a space rock has lucrative platinum deposits, say?
CL: Probably not. But we would be able to tell metals from water or silicates. There’s an asteroid out in the main belt right now called 24 Themis, and we’ve been able to sense water ice on its surface from way back here on Earth. Identifying metals will require spectrometry and direct analysis of the materials returned. The Arkyd 300 will get right up to the asteroid, land on it and take samples – like NASA’s NEAR and Japan’s Hayabusa missions did – then return pictures, data and grain samples back to Earth for analysis.

Digging up ore on an asteroid 50 to 500 metres wide in zero gravity will be a tough task, even for robots. What technology will you use?
CL: The data the 300-series gathers will allow us to design the mining spacecraft. There are many, many different options for that. They could vary from very small spacecraft that swarm and cooperate on a bunch of tasks, to very large spacecraft that look seriously industrial. Before we can begin the detailed design of a mining spacecraft, we need to actually go there, explore the asteroid and learn where the specific opportunities are.

You’ve suggested an asteroid could be brought closer to the Earth to make it easier to mine. Is that really feasible?
EA: It is. One of the ways that we could do that is simply to turn the water on an asteroid into rocket fuel and burn it in a thruster that nudges its trajectory. Split water into hydrogen and oxygen, and you get the same fuels that launch space shuttles. Some asteroids are 20 per cent water, and that amount would let you move the thing anywhere in the solar system.

Another way is to set up a catapult on the asteroid itself and use the thermal energy of the sun to wind up the catapult. Then you throw stuff off in the opposite direction you want the asteroid to go. Conservation of momentum will eventually move the thing forward – like standing on a skateboard and shooting a gun.
CL: This is not only our view. A Keck Institute “return an asteroid” study, involving people at JPL, NASA Johnson Space Center and Caltech, showed that the technology exists to place small asteroids a few metres wide in orbit around the moon for further study.
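Anderson’s water-as-propellant arithmetic can be sanity-checked with the Tsiolkovsky rocket equation. The sketch below assumes a hydrogen/oxygen specific impulse of about 450 seconds, a textbook figure rather than anything stated in the interview.

```python
import math

# Back-of-envelope check on burning an asteroid's water as propellant.
# Isp ~450 s is a textbook hydrogen/oxygen value, assumed here; it is
# not a figure from the interview.
G0 = 9.81           # standard gravity, m/s^2
ISP_H2_O2 = 450.0   # specific impulse, s (assumed)

def delta_v(propellant_fraction: float, isp: float = ISP_H2_O2) -> float:
    """Delta-v from expelling the given fraction of the body's total mass."""
    return isp * G0 * math.log(1.0 / (1.0 - propellant_fraction))

# An asteroid that is 20 per cent water, all of it split and burned:
print(f"{delta_v(0.20):.0f} m/s")  # roughly 1 km/s
```

On these assumed numbers, the 20 per cent water budget buys roughly a kilometre per second of velocity change – a nudge that, applied patiently and combined with gravity assists, can rework an orbit substantially.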

Can you think of any other uses for asteroid repositioning?
EA: There is one incredible concept: we could place the asteroid in an orbit between the Earth and Mars to allow astronauts who want to get there to hop on and off it like a bus. Think about that. You could make a spacecraft out of the asteroid.

Apart from your commitment to turn a profit for your investors, might there be spin-offs for the rest of us?
EA: Hopefully, we’ll be finding hundreds of new asteroids that would not otherwise have been discovered – including asteroids that are Earth-threatening. We do need to develop the ability to move asteroids: every few hundred years an asteroid strikes that is capable of creating great loss of life and billions of dollars worth of damage. If the 1908 Tunguska meteor had struck London or New York it would have killed millions of people. It is one of the few natural risks we know will happen – the question is when. And we have to be ready for that. So while some might regard moving asteroids as risky, it really is something we need for our planet’s future safety.

What will be your first priority: seeking precious metals or rocket fuel on the asteroids?
EA: One of our first goals is to deploy networks of orbital rocket propellant depots, effectively setting up gas stations throughout the inner solar system to open up highways for spaceflight.

So you are planning filling stations for people like Elon Musk, the SpaceX billionaire planning a crewed mission to Mars?
EA: Elon and I share a common goal, in fact we share many common goals. But nothing would enable Mars settlement faster than a drastic reduction in the cost of getting to and from the planet, which would be directly helped by having fuel depots throughout the inner solar system.

Space fuel crisis: NASA confronts the plutonium pinch


The cold war plutonium reserves that fuel NASA’s deep space probes are running low. How will we power our way to the outer solar system in future?

MORE than 18 billion kilometres from home, Voyager 1 is crossing the very edge of the solar system. If its instruments are correct, the craft is finally about to enter the unknown – the freezing vastness of interstellar space. It is the culmination of a journey that has lasted 35 years.

Voyager has a nuclear tiger in its tank (Image: NASA)

NASA’s most distant probe owes its long life to a warm heart of plutonium-238. A by-product of nuclear weapons production, the material creates heat as it decays and this is converted into electricity to power Voyager’s instruments. Engineers expect the craft will continue to beam back measurements for another decade or so, before disappearing into the void.

Since the 1960s, this plutonium isotope has played a crucial role in long-haul space missions, mainly in craft travelling too far from the sun to make solar panels practical. The Galileo mission to Jupiter, for instance, and the Pioneer and Voyager probes all relied on it, as does the Cassini orbiter, which has revealed the ethane lakes and icy geysers on Saturn’s moons, among other wonders.

Yet despite many successes, this kind of mission may soon be a thing of the past. The production of plutonium-238 halted decades ago and the space agency’s store is running perilously low. Without fresh supplies, our exploration of the outer solar system could soon come to a grinding halt.

The problem is that plutonium-238 is neither simple nor cheap to make, and restarting production lines will take several years and cost about $100 million. Though NASA and the US Department of Energy (DoE) are keen, Congress has so far refused to hand over the necessary funds.

But there could be a better way to make it. At a NASA meeting in March, physicists from the Center for Space Nuclear Research (CSNR) in Idaho Falls proposed a radical approach that they claim should please all parties. It will be quicker, cleaner and cheaper and could offer a production line run in a commercial fashion that not only meets NASA’s needs, but also turns a tidy profit into the bargain.

So what to do? Putting the production of this material on a commercial footing, as CSNR suggests, might prove easier on the public purse, but critics are concerned this could compromise safety. Plutonium is one of the most poisonous substances known – the isotope is a powerful emitter of alpha particles and deadly if inhaled. They argue that the time and money needed to restart production would be better spent developing safer alternatives. So is this the perfect opportunity to say farewell to this cold war technology and devise new, cleaner sources of space power that could benefit us earthlings too?

Plutonium-238 has played a key role in almost all of NASA’s long-duration space missions for good reason: it produces heat through the emission of alpha particles, and with a half-life of about 87 years, the material decays slowly. Sealed into a device called a radioisotope thermoelectric generator, the decaying plutonium heats a thermocouple to create electricity. Each gram of plutonium-238 generates approximately half a watt of power. On average, NASA has used a couple of kilograms of the isotope each year to power its various craft.
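Those two figures – half a watt per gram and an 87-year half-life – are enough for a rough model of how an RTG’s fuel output fades. The sketch below uses only the article’s numbers and ignores the substantial losses in converting heat to electricity through the thermocouples.

```python
# Rough decay model from the figures in the text: ~0.5 W per gram of
# plutonium-238 and an 87-year half-life. Thermocouple conversion losses
# and degradation are ignored in this sketch.
HALF_LIFE_YEARS = 87.0
WATTS_PER_GRAM = 0.5

def fuel_power(grams: float, years_elapsed: float) -> float:
    """Output of a Pu-238 fuel load after the given number of years."""
    return grams * WATTS_PER_GRAM * 0.5 ** (years_elapsed / HALF_LIFE_YEARS)

# A couple of kilograms of fuel, at launch and at Voyager's 35-year mark:
for years in (0, 35):
    print(f"after {years:>2} years: {fuel_power(2000, years):.0f} W")
```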

It does not occur naturally. Like its weapons-grade cousin, plutonium-239, it was originally created in the reactors that made material for nuclear bombs, but US production halted when those facilities were shut down in 1988. To fill the gap, the US purchased plutonium-238 from Russia until 2009, when a contract dispute ended the supply. With Russia now also running short, it is doubtful that any new deal will be struck.

So the US government must decide whether or not to resume production. According to a 2009 report by the US National Research Council, NASA has access to about 5 kilograms of the stuff, which could last it until the end of the decade. Officials at the DoE say that if they get the go-ahead now, 2 kilograms could be made annually by 2018 – just in time to restock NASA’s cupboards. But funding is proving hard to come by. NASA has agreed to share the burden and released about $14 million for studies to work out the costs of restarting the production line – which would most likely be at Oak Ridge National Laboratory in Tennessee. However, costs could eventually spiral to $150 million, suggest some at the research council, and Congress seems loath to provide any funding directly to the DoE.

Clearly, making plutonium-238 is an expensive business. The conventional way to produce it involves placing a batch of neptunium-237 inside a powerful nuclear reactor and irradiating it with neutrons for up to a year. The sample must then be put through a series of purification steps to separate plutonium-238 from the other fission products that also form.

At the NASA Innovative Advanced Concepts (NIAC) symposium in March, however, Steven Howe of CSNR proposed what could be a simpler and cheaper way to make it. The trick is to use a mechanical feed line, a coiled pipe surrounding the reactor core. Small capsules containing just a few grams of neptunium-237 are pushed continuously along this pipe, each one spending just days in the reactor. As they pop out the other end, the plutonium-238 is extracted and the remaining neptunium-237 is sent through the line again. About 0.01 per cent of the neptunium is converted on each pass, so this cycle would need to be repeated thousands of times to create the kilos of material required by NASA.
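The “thousands of times” figure is easy to check: if each pass converts 0.01 per cent of the remaining neptunium, the number of passes needed to convert any target fraction follows directly, as in this minimal sketch.

```python
import math

# Check of the "repeated thousands of times" claim: each trip through the
# feed line converts about 0.01 per cent (1e-4) of the remaining Np-237.
CONVERSION_PER_PASS = 1e-4

def passes_needed(target_fraction: float) -> int:
    """Passes until the cumulative converted fraction reaches the target."""
    return math.ceil(math.log(1.0 - target_fraction)
                     / math.log(1.0 - CONVERSION_PER_PASS))

for f in (0.05, 0.10, 0.20):
    print(f"convert {f:.0%} of the stock: {passes_needed(f):,} passes")
```

Converting a fifth of the feed stock takes over 2,200 passes, consistent with the article’s “thousands”.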

This technique brings some significant advantages. Shorter irradiation times mean far fewer fission products are generated, which simplifies the subsequent chemical separation steps and reduces the amount of radioactive waste. In addition, the process can work in small reactors that are far cheaper to run than the powerful national lab facilities required for the old batch processing. Howe even envisions operating along commercial lines, so NASA and the DoE would just purchase the final product, rather than footing the bill for the entire production process.

The CSNR team working on this concept is already funded by a $100,000 NIAC grant and has submitted a proposal to build a prototype feed line and to demonstrate that they can mechanically push the capsules through it, as well as perform the subsequent separation steps. Howe believes they can have their process up and running in just three years, at a cost of about $50 million – half the proposed cost of restarting conventional production – and could create about 1.5 kilograms of plutonium-238 each year.

Though the team still has to determine the optimum irradiation time, operating the process continuously instead of converting several kilograms in batches twice a year should help keep costs and the facility size down. And if they charge $6 million per kilogram – less than the latest Russian asking price – this process would be cost-effective for private industry, Howe says. “Like commercial space travel, we’re doing commercialised plutonium production,” he says.

Whether or not Howe’s technique saves money, or even makes it, breathing fresh life into plutonium production is not popular with everyone. Plutonium-238 is highly toxic, and an accident during or after launch could release it into the atmosphere. In 1964, for example, a US navy navigation satellite re-entered the atmosphere and broke up, dispersing 1 kilogram of plutonium-238 around the planet, roughly double the amount released into the atmosphere by weapons testing. Though the plutonium’s containers have been redesigned to survive re-entry intact, the Cassini probe’s near-Earth fly-by in 2006 triggered widespread public protests. Restarting plutonium production is “a very frightening possibility”, says Bruce Gagnon of the Global Network Against Weapons and Nuclear Power in Space based in Brunswick, Maine. “It obviously indicates that the nuclear industry views space as a new market,” he says. “It’s like playing Russian roulette.” Gagnon is also worried by the prospects of a commercialised production line. “When you introduce the profit incentive, you start cutting corners,” he says.

Then there are concerns over proliferation and political capital. While plutonium-238 cannot be used to make a nuclear weapon, it is a different story with neptunium-237. This is weapons-grade material: bombarded by fast neutrons, it is capable of sustaining a chain reaction. Edwin Lyman at the Union of Concerned Scientists based in Cambridge, Massachusetts, believes that given these safety and security issues, non-nuclear power generation systems should be a priority for space applications. “Alternatives need to be explored fully,” he says. “If the US proceeds with the restart, it will be more difficult for us to dissuade other countries from doing the same, should they decide they need to produce their own plutonium-238 supply.”

Can sunlight help fill the gap? The intensity of light drops with distance from the sun, following an inverse square law, so sending solar-powered spacecraft to the outer planets looks like a non-starter. In Pluto’s orbit, for example, it would take a solar array of 2000 square metres to generate the same amount of power as a 1-square-metre array in Earth’s orbit. Nevertheless, in August 2011, NASA launched Juno, the first mission to Jupiter using solar energy instead of plutonium. Juno relies on three 10-metre-long solar panels to gather the power it needs to operate. And according to a 2007 NASA report, solar-powered missions beyond Jupiter are not out of the question.
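The scaling quoted for Pluto follows from the inverse square law: an array at a distance of d astronomical units must grow as d squared to match the power collected by one square metre at Earth’s distance. The article’s 2000 square metre figure corresponds to the outer part of Pluto’s eccentric orbit; at Pluto’s mean distance the requirement is somewhat smaller.

```python
# Inverse-square scaling of solar array area: matching 1 m^2 at Earth
# (1 AU) requires d^2 square metres at a distance of d AU.
def area_needed(distance_au: float, baseline_m2: float = 1.0) -> float:
    """Array area at distance_au matching baseline_m2 at 1 AU."""
    return baseline_m2 * distance_au ** 2

for body, d in [("Jupiter", 5.2), ("Uranus", 19.2),
                ("Pluto, mean", 39.5), ("Pluto, aphelion", 49.3)]:
    print(f"{body:>15} ({d:4.1f} AU): {area_needed(d):6.0f} m^2")
```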

What’s needed, says James Fincannon of NASA’s Glenn Research Center, are new solar cells that can cope with the extreme conditions in the outer solar system. Great strides are being made in developing lightweight, high-efficiency solar cells, he says. If the cost and mass of these arrays can be reduced further, and if a spacecraft’s power needs can be reduced to less than 300 watts – about half that of the Galileo probe – Fincannon suggests that a craft powered by a 250-square-metre solar array could operate as far away as Uranus. Gagnon agrees. “For years we’ve said that solar would work even in deep space,” he says.

There are even plutonium-free ways to power craft exploring the darker reaches of the solar system where Fincannon’s arrays would not work. At the NIAC symposium where Howe discussed his plutonium production process, Michael Paul of Pennsylvania State University’s Applied Research Laboratory described a novel engine that could power craft on the surface of cloud-wrapped worlds where little sunlight penetrates.

Take Venus. Paul proposes combining lithium fuel with carbon dioxide from the greenhouse-planet’s atmosphere, and burning it to provide heat for a Stirling engine – a heat pump that uses a temperature difference to drive a piston linked to a generator (see “Cloud power”). The system would not need nuclear launch approval, could operate at very high power levels and could be modified to work on Titan, Mars or even in the permanent dark of the moon’s south pole, he says. With further development Paul believes the technology could be ready to launch by 2020. “I see this power system as a way to enable a whole new set of opportunities that are closed off because we just don’t have enough plutonium,” he says.

Paul admits that his lithium-powered landers would last just a fraction of the decades-long lifetime achievable using plutonium. “Fifty years of work has shown that there are applications where there are no alternatives – period,” says Ralph McNutt of the Johns Hopkins University Applied Physics Laboratory. But, he adds, “to the extent that there are alternatives to radioactive power sources, we should take them”. Fincannon agrees: “It is always a good time to come up with alternative power sources,” he says.

Besides, spending money developing lightweight solar cells or more efficient Stirling engines could offer benefits on, as well as off, Earth. Engineers are already exploring ways to turn metal powder into fuel for vehicle engines, and Paul suggests his technology could help expand underwater exploration missions, too. The same can no longer be said for plutonium-238: once used to power cardiac pacemakers, it is no longer welcome because of security and health concerns.

So which way will NASA jump? Howe remains determined to fight in plutonium’s corner and recently presented his case to the agency. As far as space is concerned, this power struggle isn’t over yet.

When this article was first posted, it contained an incorrect reference to Isaac Newton

Cloud power

Solar power is not an option for landers heading through thick clouds like those surrounding Venus. That’s where the generator conceived by Michael Paul and his team at Pennsylvania State University comes in. Paul’s team suggest powering a Venus lander by burning lithium with carbon dioxide sucked in from the planet’s atmosphere – eliminating the need to carry along an oxidiser with the fuel.

The heat from combustion would drive a small turbine or Stirling engine, which would power the lander’s electronics. But on boiling-hot Venus, the biggest challenge is keeping the lander’s electronics cool. Paul predicts that four-fifths of the engine’s power output will go towards pumping heat away from the craft’s electronics. Previous missions to Venus have lasted no more than 2 hours beyond touchdown before their batteries petered out. Paul calculates that 200 kilograms of lithium will be enough to keep sensors running for a week.

He believes that adding the planet’s carbon dioxide to lithium is the only way to pack enough punch to power a Venus lander. To provide electrical power and cooling for a week-long mission would otherwise require 850 kilograms of batteries, for instance, or 50 plutonium-powered generators. Even NASA’s colossal Cassini probe – “the flagship of all flagship missions” – doesn’t have that many, says Paul.
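The sidebar’s mass comparisons imply a rough power budget, which can be reconstructed with one added assumption: a battery specific energy of around 150 watt-hours per kilogram, a typical value for space-rated cells rather than a figure from the article.

```python
# Reconstructing the implied power budget of the week-long Venus mission.
# The 850 kg battery mass and one-week duration come from the text; the
# battery specific energy (~150 Wh/kg) is an assumption for illustration.
BATTERY_WH_PER_KG = 150.0   # assumed
HOURS = 7 * 24              # one week

energy_wh = 850.0 * BATTERY_WH_PER_KG
print(f"implied mission energy: {energy_wh / 1000:.0f} kWh")
print(f"implied average draw:   {energy_wh / HOURS:.0f} W")

# The same energy from 200 kg of lithium fuel would mean an effective
# delivered specific energy of:
print(f"lithium system delivers ~{energy_wh / 200:.0f} Wh/kg")
```

On that assumption the lander draws a few hundred watts on average, and the lithium system delivers roughly four times the batteries’ effective specific energy – the gap Paul’s design exploits.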

Deep engines of earthquakes and volcanoes


Plate tectonics can’t explain all the earthquakes, volcanoes and landscapes of Earth, so what else is shaping its surface?

“A LOT of people thinks that the devil has come here. Some thinks that this is the beginning of the world coming to a end.”

To George Heinrich Crist, who wrote this on 23 January 1812, the series of earthquakes that had just ripped through the Mississippi river valley was as inexplicable as it was deadly. Two centuries on, we are no closer to an understanding. According to our established theory of Earth’s tectonic activity, the US Midwest is just not the sort of place where such tremors should occur.

Hawaii’s volcanoes pose a problem for traditional theories of plate tectonics

That’s not the only thing we are struggling to explain. Submerged fossil landscapes off the west coast of Scotland, undersea volcanoes in the south Pacific, the bulging dome of land that is the interior of southern Africa: all over the world we see features that plate tectonics alone is hard pressed to describe.

So what can? If a new body of research is to be believed, the full answer lies far deeper in our planet. If so, it could shake up geology as fundamentally as the acceptance of plate tectonics did half a century ago.

The central idea of plate tectonics is that Earth’s uppermost layer – a band of rock between 60 and 250 kilometres thick known as the lithosphere – is divided into a mosaic of rigid pieces that float and move atop the viscous mantle immediately beneath. The idea surfaced in 1912, when the German geophysicist Alfred Wegener argued on the basis of fossil distributions that today’s continents formed from a single supercontinent – later dubbed Pangaea – which broke up and began drifting apart 200 million years ago.

Wegener lacked a mechanism to make his continents move, and the idea was at first ridiculed. But evidence slowly mounted that Earth’s surface was indeed in flux. In the 1960s people finally came to accept that plate tectonics could not only explain many features of Earth’s topography, but also why most of the planet’s seismic and volcanic activity is concentrated along particular strips of its surface: the boundaries between plates. At some of these margins plates move apart, creating rift valleys on land or ridges on ocean floors where hotter material wells up from the mantle, cools and forms new crust. Elsewhere, they press up against each other, forcing up mountain chains such as the Himalayas, or dive down beneath each other at seismically vicious subduction zones such as the Sunda trench, the site of the Sumatra-Andaman earthquake in December 2004.

And so plate tectonics became the new orthodoxy. But is it the whole truth? “Because it was so hugely successful as a theory, everybody became a bit obsessed with horizontal motions and took their eye off an interesting ball,” says geologist Nicky White at the University of Cambridge.

That ball is what is happening deep within Earth, in regions far beyond the reach of standard plate-tectonic theory. The US geophysicist Jason Morgan was a pioneer of plate tectonics, but in the 1970s he was also one of the first to find fault with the theory’s explanation for one particular surface feature, the volcanism of the Hawaiian islands. These islands lie thousands of kilometres away from the boundaries of the Pacific plate on which they sit. The plate-tectonic line is that their volcanism is caused by a weakness in the plate that allows hotter material to well up passively from the mantle. Reviving an earlier idea of the Canadian geophysicist John Tuzo Wilson, Morgan suggested instead that a plume of hot mantle material is actively pushing its way up from thousands of kilometres below and breaking through to the surface.

Mapping the underworld

That went against the flow, and it wasn’t until the mid-1980s that others began to think Morgan might have a point. The turnaround came when seismic waves unleashed by earthquakes began to reveal some of our underworld’s structure as they travelled through Earth’s interior. Seismic waves travel at different velocities through materials of different densities and temperatures. By timing their arrival at sensors positioned on the surface we could begin to construct a 3D view of what sort of material is where.
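
The principle can be sketched in a few lines of code. The toy example below – two mantle “cells” crossed by three rays – is purely illustrative; real tomographic inversions use millions of ray paths in three dimensions and far more sophisticated methods. It shows only how arrival times at surface sensors can be turned into a map of slow (hot) and fast (cold) material.

```python
# Toy travel-time tomography: recover the slowness (1/velocity) of two
# mantle "cells" from the travel times of three rays. Purely
# illustrative; real inversions involve millions of 3D ray paths.
import numpy as np

# Each row gives the path length (km) a ray spends in cell 0 and cell 1.
L = np.array([
    [100.0,   0.0],   # ray 1 crosses only cell 0
    [  0.0, 100.0],   # ray 2 crosses only cell 1
    [ 70.0,  70.0],   # ray 3 crosses both obliquely
])

# "True" slownesses (s/km): cell 1 is hotter, so waves travel slower.
s_true = np.array([1 / 8.0, 1 / 7.5])

t_observed = L @ s_true   # travel times recorded at the sensors

# Least-squares inversion: the slownesses that best explain the times.
s_est, *_ = np.linalg.lstsq(L, t_observed, rcond=None)

print("Estimated velocities (km/s):", 1 / s_est)
# The slow, hot cell stands out -- the basis of the 3D mantle images.
```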

The resulting images are rough and fuzzy, but seem to reveal a complex, dynamic mantle. Most dramatically, successive measurements have exposed two massive piles of very hot, dense thermochemical material sitting at the bottom of the mantle near its boundary with Earth’s molten core. One is under the southern Pacific Ocean, and one beneath Africa. Each is thousands of kilometres across, and above each a superplume of hotter material seems to be rising towards the surface.

That could explain why the ocean floor in the middle of the southern Pacific lies some 1000 metres above the surrounding undersea topography, another thing plate tectonics has difficulty explaining. Something similar goes for the African plume. “If you go south of the Congo all the way down to southern South Africa, including Madagascar, that whole region is propped up by this superplume,” says White.

Seismic imaging reveals smaller plume-like features extending upwards in the upper reaches of the mantle beneath Iceland and Hawaii – perhaps explaining both these islands’ existence and their volcanism. Off the coast of Argentina, meanwhile, the sea floor plunges down almost a kilometre, directly above a mantle region that seismic imaging identifies as cold and downwelling. And although southern Africa is being propped up by its superplume, smaller hot upwellings and cold downwellings at the top of that plume seem to correspond with local surface topography. The Congo basin, for instance, lies on a cold area and is hundreds of metres lower than its surroundings. “Africa has quite an egg-box shape,” says White.

Almost everywhere we look, there is evidence of vertical movements within Earth reshaping its surface. “At the time plate tectonics was formed, the deep interior was unknown, so people drew cartoons,” says Shun-ichiro Karato, a geophysicist at Yale University. “This is beyond cartoons.”

What is less clear is how the mechanisms work. Standard plate-tectonic theory has it that material plunging into the mantle at subduction zones is recycled in the shallow mantle, reappearing through volcanic activity near the subduction zone itself or further afield at boundaries where two plates are being pushed apart. Blurry yet tantalising images, however, show sections of subducted plates at various stages of descent through Earth’s interior towards the lower mantle (see diagram).

That material clearly can’t all stay down. “You need to preserve the mass balance of the mantle,” says Dietmar Müller of the University of Sydney, Australia. “As you are stuffing plates down into the mantle, that initiates a return flow of material going up.”

But how exactly? Simulations performed last year by Bernhard Steinberger at the GFZ German Research Centre for Geosciences in Potsdam and his colleagues show how a subducted slab, once it arrives at the boundary between the mantle and the core, can bulldoze material along that layer. When this material meets a thermochemical pile, plumes begin to form above. “We can see plumes developing at more or less the right places,” says Steinberger. For example, their model shows that slabs being subducted beneath the Aleutian Islands near Alaska could trigger a plume beneath Hawaii, creating a hotspot that fuels the Hawaiian volcanoes (Geochemistry Geophysics Geosystems, vol 13, p Q01W09).

Fossil landscape

Meanwhile, Clint Conrad at the University of Hawaii at Manoa and his colleagues have modelled the effect of a tectonic plate moving one way while the mantle beneath is moving in the other direction. They found that if this “shearing” effect occurs in a region where the mantle varies in density or the overlying plate changes in thickness, it can cause mantle material to melt and rise. This model accurately predicts that volcanic seamounts are present on the west but not the east of the East Pacific Rise, a mid-ocean ridge that runs roughly parallel to the western coast of South America. Seismic measurements indicate that the mantle and the plate to the west are moving in opposite directions; to the east they are not. The model also predicts that the shearing effect is largest under the western US, southern Europe, eastern Australia and Antarctica – all areas of volcanic activity away from plate boundaries.

If the dynamics of the deep Earth can change surface topography today, the same must have been true in the past. But while fossil and geological records tell us how drifting plates remapped the planet’s surface over eons, seismic imaging only works for the here and now. “It’s more difficult to decipher the history of the Earth in deep time, over hundreds of millions of years,” says Müller.

White and his colleagues found some clues to a small part of the story off the west coast of Scotland last year. They set off explosions from a ship and recorded the reflected waves, to get a sense of what lies beneath the sea floor. What they saw buried under more recent layers of rock and sediment were fossil landscapes some 55 million years old, replete with hills, valleys and networks of rivers. “They look just like somewhere you could go for an afternoon walk,” says White – only they are 2 kilometres beneath the seabed (Nature Geoscience, vol 4, p 562).
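
The depth of such buried features follows from simple echo timing: a reflected pulse covers the distance to the reflector twice, so depth is half the two-way travel time multiplied by the wave speed. The velocity below is an assumed, illustrative figure, not one from the Cambridge survey, whose processing is far more involved.

```python
# Minimal sketch of reflection seismology's depth conversion.
# Velocity is an assumed, illustrative figure.

def reflector_depth_km(two_way_time_s: float, velocity_km_s: float) -> float:
    """Depth of a reflector from the two-way travel time of a pulse:
    the wave travels down and back, hence the factor of two."""
    return velocity_km_s * two_way_time_s / 2

# A pulse returning after ~1.6 s through rock at ~2.5 km/s puts the
# reflector about 2 km down -- the depth of the buried landscape.
print(f"{reflector_depth_km(1.6, 2.5):.1f} km")
```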

By analysing the way these rivers had changed course over time, the team showed that the region was once pushed almost a kilometre above sea level before being buried again, all in the space of a million years. That is far too quick for plate tectonics to throw up a mountain range and have erosion wear it down again. Instead, White points his finger at a blob of hot mantle material that he says travelled radially outwards from the mantle plume that is possibly fuelling the volcanoes in nearby Iceland. “If the plate is like a carpet, rats running underneath the carpet would make it go up and down,” he says.

Müller’s team have identified similarly precipitous vertical movements of the land that is now in eastern Australia, during the Cretaceous period between 145 and 65 million years ago. Again, the timescales involved more or less discount simple plate tectonics. “We are pretty sure this has something to do with a convecting mantle,” says Müller.

Even iconic events of Earth’s tectonic past might not be all they seem. The Himalayas had formed by 35 million years ago, after the Indian plate separated from the supercontinent Gondwana, sped north and slammed into the Eurasian plate. That is still the broad picture, but plate tectonics fails to explain why India zoomed towards its target at speeds of up to 18 centimetres per year. Today, plates only reach speeds of about 8 centimetres per year, a limit set by how fast subducting slabs can sink into the mantle.
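
A quick unit conversion, using only the speeds quoted above, shows what that difference means on geological timescales:

```python
# Convert plate speeds from centimetres per year to kilometres per
# million years (1 km = 100,000 cm).
def km_per_million_years(cm_per_year: float) -> float:
    return cm_per_year * 1e6 / 1e5

print(km_per_million_years(18))  # India's sprint: 180 km per million years
print(km_per_million_years(8))   # today's limit:   80 km per million years
```

At its peak, then, India was gaining well over a hundred kilometres on Eurasia every million years – more than double what subduction alone seems able to sustain today.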

Steven Cande and Dave Stegman of the Scripps Institution of Oceanography in La Jolla, California, think they have the answer. Last year they used computer models to argue controversially that the horizontal force exerted by the mushrooming head of the Reunion plume, thought to be the source of the massive outpouring of lava that formed the Deccan Traps in western India about 67 million years ago, sent India on its headlong path (Nature, vol 475, p 47).

The anomalous and periodically devastating seismicity of the US Midwest, meanwhile, might be explained by plate tectonics and the propagation of surface stresses (New Scientist, 14 January, p 34) – or the root causes might go deeper. In 2007, Alessandro Forte of the University of Quebec at Montreal, Canada, and his colleagues implicated the ancient Farallon plate, which started slipping into the mantle along the west coast of North America during the Cretaceous. Their model suggests that the plate has now burrowed deep enough to cause a downwelling below the mid-Mississippi river valley, deforming the overlying lithosphere sufficiently to trigger the disastrous events of two centuries ago (Geophysical Research Letters, vol 34, p L04308).

It all adds up to a picture where more than plate tectonics is at work in shaping our planet’s past, present and future. “It’s just amazing to think that Earth’s surface is rather less stable than plate tectonics in its simplest form would have it,” says White.

Iceland’s anomalies

Not everyone is convinced. Gillian Foulger of the University of Durham, UK, argues that the region around Iceland, for example, is no hotter than the rest of the mid-Atlantic ridge, a diverging plate margin on which the island also sits. Iceland’s topography and volcanic activity can be adequately explained by tectonic activity at such a plate boundary, without invoking a plume-driven hotspot (Science, vol 300, p 921). She and fellow “aplumatics” also point out that, while seismic waves do travel slower in the shallow mantle beneath Iceland, Hawaii and other supposed hotspots, these velocity anomalies have never been reliably traced all the way down to the bottom of the mantle, where the plumes supposedly begin their journey. “That’s never been seen, not one single time, in a reliable way,” she says.

Enthusiasts for a deeper explanation of Earth’s surface activity think it is only a matter of time and better seismic imaging before these objections are also countered. Efforts to improve imaging are already under way in the form of EarthScope, an ongoing project to blanket the US with seismographs, giving geologists a fine-grained look at the mantle underneath (New Scientist, 11 April 2009, p 26). Similar projects are needed, however, to probe crucial regions of the mantle such as those below Africa and the Pacific Ocean. “If you can design a grand whole-Earth experiment, where you have seismometers scattered evenly all over Earth’s surface, at sea and on land, you can do a brilliant job in making better sharp tomographic images,” says White.

If we can do that, will history repeat itself, the doubters be won over, and another hotly disputed model become the new orthodoxy? Müller certainly thinks so: “Geology is on the cusp of another revolution like plate tectonics.”