N-Acetylcysteine Plus Intravenous Fluids Versus Intravenous Fluids Alone to Prevent Contrast-Induced Nephropathy in Emergency Computed Tomography.



We test the hypothesis that N-acetylcysteine plus normal saline solution is more effective than normal saline solution alone in the prevention of contrast-induced nephropathy.


The design was a randomized, double-blind, two-center, placebo-controlled interventional trial. Inclusion criteria were: undergoing chest, abdominal, or pelvic computed tomography (CT) with intravenous contrast; age older than 18 years; and at least one contrast-induced nephropathy risk factor. Exclusion criteria were end-stage renal disease, pregnancy, N-acetylcysteine allergy, or clinical instability. The treatment group received N-acetylcysteine 3 g in 500 mL normal saline solution as an intravenous bolus, then 200 mg/hour (67 mL/hour) for up to 24 hours; the placebo group received a 500 mL normal saline solution bolus, then 67 mL/hour for up to 24 hours. The primary outcome was contrast-induced nephropathy, defined as an increase in creatinine level of 25% or 0.5 mg/dL, measured 48 to 72 hours after CT.


The data safety and monitoring board terminated the study early for futility. Of 399 patients enrolled, 357 (89%) completed follow-up and were included. The contrast-induced nephropathy rate was 14 of 185 (7.6%) in the N-acetylcysteine plus saline solution group versus 12 of 172 (7.0%) in the normal saline solution only group (absolute risk difference 0.6%; 95% confidence interval -4.8% to 6.0%). The contrast-induced nephropathy rate in patients receiving less than 1 L of intravenous fluids in the emergency department (ED) was 19 of 147 (12.9%) versus 7 of 210 (3.3%) in those receiving more than 1 L (difference 9.6%; 95% confidence interval 3.7% to 15.5%), corresponding to a 59% reduction in the odds of contrast-induced nephropathy per liter of intravenous fluids (odds ratio 0.41; 95% confidence interval 0.21 to 0.80).
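The headline comparison can be reproduced from the raw counts. The sketch below is a minimal illustration, not the trial's actual analysis code; it computes the absolute risk difference between the two arms with a standard 95% Wald confidence interval:

```python
import math

def risk_difference(events1, n1, events2, n2, z=1.96):
    """Absolute risk difference between two groups with a Wald confidence interval."""
    p1, p2 = events1 / n1, events2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# 14/185 with N-acetylcysteine plus saline vs 12/172 with saline alone
diff, lo, hi = risk_difference(14, 185, 12, 172)
print(f"{diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")  # matches the reported 0.6% (-4.8% to 6.0%)
```

Because the interval comfortably spans zero, the data are consistent with no benefit from N-acetylcysteine, which is what the monitoring board's futility stop reflects.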


We did not find evidence of a benefit for N-acetylcysteine administration to our ED patients undergoing contrast-enhanced CT. However, we did find a significant association between volume of intravenous fluids administered and reduction in contrast-induced nephropathy.

Source: Pubmed



Air Pollution Linked to Significant Decrease in Life Expectancy.

Research on coal burning in China offers powerful evidence of air pollution’s effect on public health

A study released earlier this week indicates that airborne pollution in China may have shortened the lives of 500 million Chinese by a collective 2.5 billion years. The paper, published in PNAS on Monday, examined pollution data and death records to see whether coal burning, long a source of air pollution, could have damaged public health across northern China in the 1990s. The findings raise concern for developing countries across the globe.


Michael Greenstone, an economist at Massachusetts Institute of Technology and a primary author of the study, says he is driven by the question, “Over the course of a lifetime, what are the costs of exposure to high levels of air pollution?” Yet, he notes, finding answers has been extremely difficult, because simply comparing pollution levels and health in different locales can be misleading. For one thing, people often move from place to place and experience varying levels of pollution, so it is not safe to assume that all have had the same exposure. And access to health care differs across population groups around the world. “I’ve been searching for the right setting to answer this question for more than a decade,” Greenstone says.

A few years ago one of Greenstone’s colleagues saw a possible solution in a decades-old Chinese policy. From 1950 to 1980 the Chinese government was in a period of socialist transformation. During this time the “Huai River policy” provided free coal for heating homes and offices north of the Huai River, which runs west to east across eastern China. Meanwhile budget constraints kept free coal from being provided south of the river. At the same time, government rules restricted a family’s ability to move, so that many lived in one location for decades.

By examining rates of mortality and respiratory-related illnesses on both sides of the river, Greenstone’s team identified a difference: life expectancies are lower and pollution concentrations are higher north of the Huai, where coal burning was widespread. “We’ve imposed a really demanding test on the data: we want to know whether or not there’s a jump in air pollution right at the Huai river border,” Greenstone says. “And we found that. All other factors are identical.”

To make these connections, Greenstone and his team examined pollution data from sites across the country for the years 1981 to 2000. They then collected mortality data from China’s Disease Surveillance Points, 145 sites chosen by the government to accurately represent the wealth and geographic dispersion of the populace.

After collecting data for five years, Greenstone’s team plugged the information into equations that measured the sensitivity of the mortality rate to pollution levels on either side of the river. The results estimate that lifelong exposure to an additional 100 micrograms of “total suspended particulates,” or TSPs (minuscule solid particles floating in the air, such as pollutants), per cubic meter of air will shorten a person’s life by three years, on average. Between 1981 and 2000 northern China’s air contained a daily average of 551.6 micrograms of TSPs per cubic meter, whereas the south averaged 354 micrograms. In the U.S. the legally required standard for air quality from 1971 to 1987 was 75 micrograms.
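The study's headline numbers are roughly self-consistent, as a back-of-envelope check shows (this is my illustrative arithmetic, not a calculation from the paper):

```python
# Collective loss: 2.5 billion life-years spread over 500 million people
avg_loss = 2.5e9 / 500e6
print(avg_loss)  # 5.0 years per person, on average

# Predicted loss from the dose-response estimate:
# 3 years per extra 100 micrograms/m^3, applied to the north-south TSP gap
predicted = (551.6 - 354) / 100 * 3
print(round(predicted, 1))  # about 5.9 years -- the same order of magnitude
```

That the population-level figure and the dose-response extrapolation land within a year of each other is what makes the estimate plausible rather than proven.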

Most TSPs are visible, and not all pose health risks. “Typically, our respiratory system is designed to handle whatever is in the environment,” says Kent Pinkerton, director of the University of California, Davis, Center for Health and the Environment. The nose, mucus in the airways and cells in the lungs all filter foreign substances to facilitate clean breathing. Some pollutant particles can overwhelm the body’s natural defense systems, however, causing inflammation in the lungs. This inflammation can result in breathing difficulties, exacerbate preexisting conditions and, in extreme cases, cause death.

As more countries in Asia and Africa power toward industrialization, air pollution becomes an increasing concern. In China between 1981 and 2001, concentrations of TSPs were more than double the country’s earlier average of 200 micrograms per cubic meter and more than five times the average amount in the U.S., even before the passage of the latter nation’s Clean Air Act in 1970. The pollution problem is not restricted to outdoor air: families in developing countries also encounter pollution inside the home; according to Pinkerton, almost half the world prepares food with wood, charcoal or coal, all common sources of air pollutants.

“Developing countries are really trying to strike the right balance between economic growth to confront poverty and environmental quality and public health,” Greenstone says. “I think this study will help them—it shows a relationship between pollution and health.”


Source: Scientific American

The Neuroscience of Everybody’s Favourite Topic.

Why do people spend so much time talking about themselves?

Human beings are social animals. We spend large portions of our waking hours communicating with others, and the possibilities for conversation are seemingly endless—we can make plans and crack jokes; reminisce about the past and dream about the future; share ideas and spread information. This ability to communicate—with almost anyone, about almost anything—has played a central role in our species’ ability to not just survive, but flourish.


How do you choose to use this immensely powerful tool—communication? Do your conversations serve as doorways to new ideas and experiences? Do they serve as tools for solving the problems of disease and famine?

Or do you mostly just like to talk about yourself?

If you’re like most people, your own thoughts and experiences may be your favorite topic of conversation. On average, people spend 60 percent of conversations talking about themselves—and this figure jumps to 80 percent when communicating via social media platforms such as Twitter or Facebook.

Why, in a world full of ideas to discover, develop, and discuss, do people spend the majority of their time talking about themselves? Recent research suggests a simple explanation: because it feels good.

In order to investigate the possibility that self-disclosure is intrinsically rewarding, researchers from the Harvard University Social Cognitive and Affective Neuroscience Lab utilized functional magnetic resonance imaging (fMRI). This research tool highlights relative levels of activity in various neural regions by tracking changes in blood flow; by pairing fMRI output with behavioral data, researchers can gain insight into the relationships between behavior and neural activity. In this case, they were interested in whether talking about the self would correspond with increased neural activity in areas of the brain associated with motivation and reward.

In an initial fMRI experiment, the researchers asked 195 participants to discuss both their own opinions and personality traits and the opinions and traits of others, then looked for differences in neural activation between self-focused and other-focused answers. Because the same participants discussed the same topics in relation to both themselves and others, researchers were able to use the resulting data to directly compare neural activation during self-disclosure to activation during other-focused communication.

Three neural regions stood out. Unsurprisingly, and in line with previous research, self-disclosure resulted in relatively higher levels of activation in areas of the medial prefrontal cortex (MPFC) generally associated with self-related thought. The two remaining regions identified by this experiment, however, had never before been associated with thinking about the self: the nucleus accumbens (NAcc) and the ventral tegmental area (VTA), both parts of the mesolimbic dopamine system.

These newly implicated areas of the brain are generally associated with reward, and have been linked to the pleasurable feelings and motivational states associated with stimuli such as sex, cocaine, and good food. Activation of this system when discussing the self suggests that self-disclosure, like other more traditionally recognized stimuli, may be inherently pleasurable—and that people may be motivated to talk about themselves more than other topics (no matter how interesting or important these non-self topics may be).

This experiment left at least one question unanswered, however. Although participants were revealing information about themselves, it was unclear whether or not anyone was paying attention; they were essentially talking without knowing who (if anyone) was on the other end of the line. Thus, the reward- and motivation-related neural responses ostensibly produced by self-disclosure could be produced by the act of disclosure—of revealing information about the self to someone else—but they could also be a result of focusing on the self more generally—whether or not anyone was listening.

In order to distinguish between these two possibilities, the researchers conducted a follow-up experiment in which participants were asked to bring a friend or relative of their choosing to the lab; these companions waited in an adjoining room while participants answered questions in an fMRI machine. As in the first study, participants responded to questions about either their own opinions and attitudes or the opinions and attitudes of someone else. Unlike in the first study, these participants were explicitly told whether their responses would be “shared” or “private”: shared responses were relayed in real time to each participant’s companion, and private responses were never seen by anyone, including the researchers.

In this study, answering questions about the self always resulted in greater activation of neural regions associated with motivation and reward (i.e., NAcc, VTA) than answering questions about others, and answering questions publicly always resulted in greater activation of these areas than answering questions privately.  Importantly, these effects were additive; both talking about the self and talking to someone else were associated with reward, and doing both produced greater activation in reward-related neural regions than doing either separately.

These results suggest that self-disclosure—revealing personal information to others—produces the highest level of activation in neural regions associated with motivation and reward, but that introspection—thinking or talking about the self, in the absence of an audience—also produces a noticeable surge of neural activity in these regions. Talking about the self is intrinsically rewarding, even if no one is listening.

Talking about the self is not at odds with the adaptive functions of communication. Disclosing private information to others can increase interpersonal liking and aid in the formation of new social bonds—outcomes that influence everything from physical survival to subjective happiness. Talking about one’s own thoughts and self-perceptions can lead to personal growth via external feedback. And sharing information gained through personal experiences can lead to performance advantages by enabling teamwork and shared responsibility for memory. Self-disclosure can have positive effects on everything from the most basic of needs—physical survival—to personal growth through enhanced self-knowledge; self-disclosure, like other forms of communication, seems to be adaptive.

You may like to talk about yourself simply because it feels good—because self-disclosure produces a burst of activity in neural regions associated with pleasure, motivation, and reward.  But, in this case, feeling good may be no more than a means to an end—it may be the immediate reward that jump-starts a cycle of self-sharing, ultimately leading to wide varieties of long-term benefits.

Source: Scientific American


Sound Waves Levitate and Move Objects.

A new approach to contact-free manipulation could be used to combine lab samples—and prevent contamination

Water droplets, coffee granules, fragments of polystyrene and even a toothpick are among the items that have been flying around in a Swiss laboratory lately — all of them kept in the air by sound waves. The device that achieves this acoustic levitation is the first to be capable of handling several objects simultaneously. It is described today in the Proceedings of the National Academy of Sciences.

Typically, levitation techniques make use of electromagnetism; magnetic forces have even been used to levitate frogs. It has long been known that sound waves could counter gravity, too, but so far the method has lacked practical application because it could do little more than keep an object in place.

To also move and manipulate levitating objects, Dimos Poulikakos, a mechanical engineer at the Swiss Federal Institute of Technology (ETH) in Zurich, and his colleagues built sound-making platforms using piezoelectric crystals, which shrink or stretch depending on the voltage applied to them. Each platform is the size of a pinky nail.

The platforms emit sound waves that move upward until they reach a surface lying above, where they bounce back. When the downward-moving reflected waves overlap with the upward-moving source waves, the two ‘cancel out’ at fixed locations, so-called node points. Objects placed there remain stuck in place because of the pressure of sound waves coming from both directions.
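In a standing wave, adjacent pressure nodes sit half a wavelength apart, which sets how finely a device like this can space levitated objects. A quick sketch of that relationship (the 25 kHz drive frequency here is an illustrative ultrasonic value I chose, not one reported in the article):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def node_spacing(frequency_hz):
    """Distance between adjacent nodes of a standing wave: half a wavelength."""
    wavelength = SPEED_OF_SOUND / frequency_hz
    return wavelength / 2

# At an assumed 25 kHz drive frequency, nodes sit a few millimeters apart
print(round(node_spacing(25e3) * 1000, 2), "mm")  # 6.86 mm
```

Millimeter-scale node spacing is consistent with manipulating droplets and granules on pinky-nail-sized platforms; lower frequencies would space the nodes farther apart but trap objects less tightly.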

By adjusting the positions of the nodes, the researchers can tow the objects between platforms. The platforms can be arranged in different ways to adapt to various experiments. In one demonstration involving a T-shaped array of platforms, the researchers joined two droplets introduced at separate locations, then deposited the combined droplet at a third location.

Hands-free reactions
The system could be used to combine chemical reactants without the contamination that can result from contact with the surface of a container. Sound waves are already used in the pharmaceutical industry to obtain accurate results during drug screening. Yet Poulikakos’s method is the first to offer the possibility of precisely controlling several items simultaneously.

Poulikakos suggests that the system could be used to safely try out hazardous chemical reactions. “We had fun demonstrating the idea by colliding a lump of sodium with some water, which is obviously an aggressive reaction,” he says.

Peter Christianen, a physicist who works on electromagnetic levitation at Radboud University in Nijmegen, the Netherlands, says that he’s impressed with the invention. “I really like it; this is a very versatile platform — almost anything you want to manipulate, you can.”

Source: Scientific American


Is Sugar Really Toxic? Sifting through the Evidence.

Our very first experience of exceptional sweetness—a dollop of buttercream frosting on a parent’s finger; a spoonful of strawberry ice cream instead of the usual puréed carrots—is a gustatory revelation that generally slips into the lacuna of early childhood. Sometimes, however, the moment of original sweetness is preserved. A YouTube video from February 2011 begins with baby Olivia staring at the camera, her face fixed in rapture and a trickle of vanilla ice cream on her cheek. When her brother Daniel brings the ice cream cone near her once more, she flaps her arms and arches her whole body to reach it.


Considering that our cells depend on sugar for energy, it makes sense that we evolved an innate love for sweetness. How much sugar we consume, however—as well as how it enters the body and where we get it from in the first place—has changed dramatically over time. Before agriculture, our ancestors presumably did not have much control over the sugars in their diet, which must have come from whatever plants and animals were available in a given place and season. Around 6,000 BC, people in New Guinea began to grow sugarcane, chewing and sucking on the stalks to drink the sweet juice within. Sugarcane cultivation spread to India, where by 500 BC people had learned to turn bowls of the tropical grass’s juice into crude crystals. From there sugar traveled with migrants and monks to China, Persia, northern Africa and eventually to Europe in the 11th century.

For more than 400 years, sugar remained a luxury in Europe—an exotic spice—until manufacturing became efficient enough to make “white gold” much more affordable. Christopher Columbus brought sugarcane to the New World in 1493, and in the 16th and 17th centuries European powers established sugarcane plantations in the West Indies and South America. Sugar consumption in England increased by 1,500 percent between the 18th and 19th centuries. By the mid-19th century, Europeans and Americans had come to regard refined sugar as a necessity. Today, we add sugar in one form or another to the majority of processed foods we eat—everything from bread, cereals, crunchy snacks and desserts to soft drinks, juices, salad dressings and sauces—and we are not too stingy about using it to sweeten many raw and whole foods as well.

By consuming so much sugar we are not just demonstrating weak willpower and indulging our sweet tooth—we are in fact poisoning ourselves, according to a group of doctors, nutritionists and biologists, one of the most prominent members of which is Robert Lustig of the University of California, San Francisco, famous for his viral YouTube video “Sugar: The Bitter Truth.” A few journalists, such as Gary Taubes and Mark Bittman, have reached similar conclusions. Sugar, they argue, poses far greater dangers than cavities and love handles; it is a toxin that harms our organs and disrupts the body’s usual hormonal cycles. Excessive consumption of sugar, they say, is one of the primary causes of the obesity epidemic and metabolic disorders like diabetes, as well as a culprit of cardiovascular disease. More than one-third of American adults and approximately 12.5 million children and adolescents in the U.S. are obese. In 1980, 5.6 million Americans were diagnosed with diabetes; in 2011 more than 20 million Americans had the illness.


The argument that sugar is a toxin depends on some technical details about the different ways the human body gets energy from different types of sugar. Today, Americans eat most of their sugar in two main forms: table sugar and high-fructose corn syrup. A molecule of table sugar, or sucrose, is a bond between one glucose molecule and one fructose molecule—two simple sugars with the same chemical formula, but slightly different atomic structures. In the 1960s, new technology allowed the U.S. corn industry to cheaply convert corn-derived glucose into fructose and produce high-fructose corn syrup, which—despite its name—is almost equal parts free-floating fructose and glucose: 55 percent fructose, 42 percent glucose and three percent other sugars. Because fructose is about twice as sweet as glucose, an inexpensive syrup mixing the two was an appealing alternative to sucrose from sugarcane and beets.

Regardless of where the sugar we eat comes from, our cells are interested in dealing with fructose and glucose, not the bulkier sucrose. Enzymes in the intestine split sucrose into fructose and glucose within seconds, so as far as the human body is concerned sucrose and high-fructose corn syrup are equivalent. The same is not true for their constituent molecules. Glucose travels through the bloodstream to all of our tissues, because every cell readily converts glucose into energy. In contrast, liver cells are one of the few types of cells that can convert fructose to energy, which puts the onus of metabolizing fructose almost entirely on one organ. The liver accomplishes this primarily by turning fructose into glucose and lactate. Eating exceptionally large amounts of fructose taxes the liver: it spends so much energy turning fructose into other molecules that it may not have much energy left for all its other functions. A consequence of this energy depletion is production of uric acid, which research has linked to gout, kidney stones and high blood pressure.

The human body strictly regulates the amount of glucose in the blood. Glucose stimulates the pancreas to secrete the hormone insulin, which helps remove excess glucose from blood, and bolsters production of the hormone leptin, which suppresses hunger. Fructose does not trigger insulin production and appears to raise levels of the hormone ghrelin, which keeps us hungry. Some researchers have suggested that large amounts of fructose encourage people to eat more than they need. In studies with animals and people by Kimber Stanhope of the University of California, Davis, and other researchers, excess fructose consumption has increased fat production, especially in the liver, and raised levels of circulating triglycerides, which are a risk factor for clogged arteries and cardiovascular disease. Some research has linked a fatty liver to insulin resistance—a condition in which cells become far less responsive to insulin than usual, exhausting the pancreas until it loses the ability to properly regulate blood glucose levels. Richard Johnson of the University of Colorado Denver has proposed that uric acid produced by fructose metabolism also promotes insulin resistance. In turn insulin resistance is thought to be a major contributor to obesity and Type 2 diabetes; the three disorders often occur together.

Because fructose metabolism seems to kick off a chain reaction of potentially harmful chemical changes inside the body, Lustig, Taubes and others have singled out fructose as the rotten apple of the sugar family. When they talk about sugar as a toxin, they mean fructose specifically. In the last few years, however, prominent biochemists and nutrition experts have challenged the idea that fructose is a threat to our health and have argued that replacing fructose with glucose or other sugars would solve nothing. First, as fructose expert John White points out, fructose consumption has been declining for more than a decade, but rates of obesity continued to rise during the same period. Of course, coinciding trends alone do not definitively demonstrate anything. A more compelling criticism is that concern about fructose is based primarily on studies in which rodents and people consumed huge amounts of the molecule—up to 300 grams of fructose each day, which is nearly equivalent to the total sugar in eight cans of Coke—or a diet in which the vast majority of sugars were pure fructose. The reality is that most people consume far less fructose than used in such studies and rarely eat fructose without glucose.


On average, people in America and Europe eat between 100 and 150 grams of sugar each day, about half of which is fructose. It’s difficult to find a regional diet or individual food that contains only glucose or only fructose. Virtually all plants have glucose, fructose and sucrose—not just one or another of these sugars. Although some fruits, such as apples and pears, have three times as much fructose as glucose, most of the fruits and veggies we eat are more balanced. Pineapples, blueberries, peaches, carrots, corn and cabbage, for example, all have about a 1:1 ratio of the two sugars. In his New York Times Magazine article, Taubes claims that “fructose…is what distinguishes sugar from other carbohydrate-rich foods like bread or potatoes that break down upon digestion to glucose alone.” This is not really true. Although potatoes and white bread are full of starch—long chains of glucose molecules—they also have fructose and sucrose. Similarly, Lustig has claimed that the Japanese diet promotes weight loss because it is fructose-free, but the Japanese consume plenty of sugar—about 83 grams a day on average—including fructose in fruit, sweetened beverages and the country’s many meticulously crafted confectioneries. High-fructose corn syrup was developed and patented in part by Japanese researcher Yoshiyuki Takasaki in the 1960s and ’70s.

Not only do many worrying fructose studies use unrealistic doses of the sugar unaccompanied by glucose, it also turns out that the rodents researchers have studied metabolize fructose very differently than people do—much more differently than originally anticipated. Studies that have traced fructose’s fantastic voyage through the human body suggest that the liver converts as much as 50 percent of fructose into glucose, around 30 percent into lactate and less than one percent into fats. In contrast, mice and rats turn more than 50 percent of fructose into fats, so experiments with these animals would exaggerate the significance of fructose’s proposed detriments for humans, especially clogged arteries, fatty livers and insulin resistance.

In a series of meta-analyses examining dozens of human studies, John Sievenpiper of St. Michael’s Hospital in Toronto and his colleagues found no harmful effects of typical fructose consumption on body weight, blood pressure or uric acid production. In a 2011 study, Sam Sun—a nutrition scientist at Archer Daniels Midland, a major food processing corporation—and his colleagues analyzed data about sugar consumption collected from more than 25,000 Americans between 1999 and 2006. Their analysis confirmed that people almost never eat fructose by itself and that for more than 97 percent of people fructose contributes less daily energy than other sugars. They did not find any positive associations between fructose consumption and levels of triglycerides, cholesterol or uric acid, nor any significant link to waist circumference or body mass index (BMI). And in a recent BMC Biology Q&A, renowned sugar expert Luc Tappy of the University of Lausanne writes: “Given the substantial consumption of fructose in our diet, mainly from sweetened beverages, sweet snacks, and cereal products with added sugar, and the fact that fructose is an entirely dispensable nutrient, it appears sound to limit consumption of sugar as part of any weight loss program and in individuals at high risk of developing metabolic diseases. There is no evidence, however, that fructose is the sole, or even the main factor in the development of these diseases, nor that it is deleterious to everybody.”

To properly understand fructose metabolism, we must also consider in what form we consume the sugar, as explained in a recent paper by David Ludwig, Director of the New Balance Foundation Obesity Prevention Center of Boston Children’s Hospital and a professor at Harvard. Drinking a soda or binging on ice cream floods our intestines and liver with large amounts of loose fructose. In contrast, the fructose in an apple does not reach the liver all at once. All the fiber in the fruit—such as cellulose that only our gut bacteria can break down—considerably slows digestion. Our enzymes must first tear apart the apple’s cells to reach the sugars sequestered within. “It’s not just about the fiber in food, but also its very structure,” Ludwig says. “You could add Metamucil to Coca Cola and not get any benefit.” In a small but intriguing study, 17 adults in South Africa ate primarily fruit—about 20 servings with approximately 200 grams of total fructose each day—for 24 weeks and did not gain weight, develop high blood pressure or imbalance their insulin and lipid levels.

To strengthen his argument, Ludwig turns to the glycemic index, a measure of how quickly food raises levels of glucose in the blood. Pure glucose and starchy foods such as Taubes’s example of the potato have a high glycemic index; fructose has a very low one. If fructose were uniquely responsible for obesity and diabetes and glucose were benign, then high glycemic index diets should not be associated with metabolic disorders—yet they are. A small percentage of the world population may in fact consume so much fructose that they endanger their health because of the difficulties the body encounters in converting the molecule to energy. But the available evidence to date suggests that, for most people, typical amounts of dietary fructose are not toxic.


Even if Lustig is wrong to call fructose poisonous and saddle it with all the blame for obesity and diabetes, his most fundamental directive is sound: eat less sugar. Why? Because super sugary, energy-dense foods with little nutritional value are one of the main ways we consume more calories than we need, albeit not the only way. It might be hard to swallow, but the fact is that many of our favorite desserts, snacks, cereals and especially our beloved sweet beverages inundate the body with far more sugar than it can efficiently metabolize. Milkshakes, smoothies, sodas, energy drinks and even unsweetened fruit juices all contain large amounts of free-floating sugars instantly absorbed by our digestive system.

Avoiding sugar is not a panacea, though. A healthy diet is about so much more than refusing that second sugar cube and keeping the cookies out of reach or hidden in the cupboard. What about all the excess fat in our diet, so much of which is paired with sugar and contributes to heart disease? What about bad cholesterol and salt? “If someone is gaining weight, they should look to sugars as a place to cut back,” says Sievenpiper, “but there’s a misguided belief that if we just go after sugars we will fix obesity—obesity is more complex than that. Clinically, there are some people who come in drinking way too much soda and sweet beverages, but most people are just overconsuming in general.” Then there’s all the stuff we really should eat more of: whole grains; fruits and veggies; fish; lean protein. But wait, we can’t stop there: a balanced diet is only one component of a healthy lifestyle. We need to exercise too—to get our hearts pumping, strengthen our muscles and bones and maintain flexibility. Exercising, favoring whole foods over processed ones and eating less overall sounds too obvious, too simplistic, but it is actually a far more nuanced approach to good health than vilifying a single molecule in our diet—an approach that fits the data. Americans have continued to consume more and more total calories each year—average daily intake increased by 530 calories between 1970 and 2000—while simultaneously becoming less and less physically active. Here’s the true bitter truth: Yes, most of us should make an effort to eat less sugar—but if we are really committed to staying healthy, we’ll have to do a lot more than that.


Source: scientificamerican.com

Anxiolytics in patients suffering a suspected acute coronary syndrome: Multi-centre randomised controlled trial in Emergency Medical Service.



The prehospital treatment of pain and discomfort among patients who suffer from acute coronary syndrome (ACS) needs a treatment strategy which combines relief of pain with relief of anxiety.


The aim of the present study was to evaluate the impact on pain and anxiety of the combination of an anxiolytic and an analgesic as compared with an analgesic alone in the prehospital setting of suspected ACS.


A multi-centre randomised controlled trial compared the combination of Midazolam (Mi) + Morphine (Mo) with Mo alone. All measurements took place at three points: prior to randomisation, 15 min thereafter, and on admission to hospital. Inclusion criteria were: 1) pain raising suspicion of ACS and 2) a pain score ≥4.


The primary outcome was pain score 15 min after randomisation.


In all, 890 patients were randomised to Mi+Mo and 873 to Mo alone. Pain was reduced from a median of 6 to 4 and finally to 3 in both groups. The mean dose of Mo was 5.3 mg in Mi+Mo and 6.0 mg in Mo alone (p<0.0001). Anxiety was reported in 66% in Mi+Mo and in 64% in Mo alone at randomisation (NS); 15 min thereafter in 31% and 39% (p=0.002), and finally in 12% and 26%, respectively (p<0.0001). On admission to hospital, nausea or vomiting was reported in 9% in Mi+Mo and in 13% in Mo alone (p=0.003). Drowsiness also differed: at the two later assessments, 15% and 14% were drowsy in Mi+Mo versus 2% and 3% in Mo alone, respectively (p<0.001).


Although the combination of anxiolytics and analgesics reduced anxiety and the requirement for Morphine compared with analgesics alone in the prehospital setting of acute coronary syndrome, this strategy did not reduce patients’ estimation of pain (the primary endpoint). More effective pain relief among these patients is warranted.

Source: Pubmed


Royal dynasties as human inbreeding laboratories: the Habsburgs.

The European royal dynasties of the Early Modern Age provide a useful framework for human inbreeding research. In this article, consanguineous marriage, inbreeding depression and the purging of deleterious alleles within a consanguineous population are investigated in the Habsburgs, a royal dynasty with a long history of consanguinity over generations. Genealogical information from a number of historical sources was used to compute kinship and inbreeding coefficients for the Habsburgs. The marriages contracted by the Habsburgs from 1450 to 1750 presented an extremely high mean kinship (0.0628±0.009), which was the result of the matrimonial policy conducted by the dynasty to establish political alliances through marriage. A strong inbreeding depression for both infant and child survival was detected in the progeny of 71 Habsburg marriages in the period 1450–1800. The inbreeding load for child survival experienced a pronounced decrease from 3.98±0.87 in the period 1450–1600 to 0.93±0.62 in the period 1600–1800, but temporal changes in the inbreeding depression for infant survival were not detected. Such a reduction of inbreeding depression for child survival in a relatively small number of generations could be caused by elimination of deleterious alleles of large effect, in accordance with predictions from purging models. The differential purging of the infant and child inbreeding loads suggests that the genetic basis of inbreeding depression was probably very different for infant and child survival in the Habsburg lineage. Our findings provide empirical support that human inbreeding depression for some fitness components might be purged by selection within consanguineous populations.
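The kinship coefficients the authors compute from genealogies can be obtained by a standard recursion over the pedigree. A minimal sketch (my own illustration on a toy pedigree, not the study's code; founders are assumed unrelated and non-inbred). Note that the reported mean kinship of 0.0628 is essentially that of a first-cousin marriage, for which the kinship coefficient is 1/16 = 0.0625:

```python
# Recursive kinship coefficients on a toy pedigree. `parents` maps an
# individual's id to (father, mother), or is absent for founders.

def is_ancestor(x, y, parents):
    """True if x is an ancestor of y."""
    p = parents.get(y)
    if p is None:
        return False
    return x in p or any(is_ancestor(x, q, parents) for q in p)

def kinship(a, b, parents):
    """Kinship coefficient phi(a, b)."""
    if a == b:
        p = parents.get(a)
        if p is None:
            return 0.5                      # founder: phi(a, a) = 1/2
        f, m = p
        return 0.5 * (1 + kinship(f, m, parents))
    # Recurse on whichever individual is not an ancestor of the other.
    for x, y in ((a, b), (b, a)):
        p = parents.get(x)
        if p is not None and not is_ancestor(x, y, parents):
            f, m = p
            return 0.5 * (kinship(f, y, parents) + kinship(m, y, parents))
    return 0.0                              # unrelated founders

# Toy pedigree: 'child' is the offspring of first cousins C1 and C2,
# whose parents P1 and P2 are full siblings; S1, S2 are unrelated spouses.
parents = {
    "P1": ("GF", "GM"), "P2": ("GF", "GM"),
    "C1": ("P1", "S1"), "C2": ("P2", "S2"),
    "child": ("C1", "C2"),
}
print(kinship("C1", "C2", parents))   # 0.0625, i.e. 1/16
# The child's inbreeding coefficient F equals its parents' kinship:
# F = 2 * kinship(child, child) - 1 = 0.0625 here.
```

The same recursion, run over the full Habsburg genealogy, yields the dynasty-wide kinship and inbreeding coefficients the abstract reports.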


Keywords: royal inbreeding; Habsburg dynasty; consanguineous marriage; inbreeding depression; purging of inbreeding depression

Source: Nature

Neptune’s New Moon May Be Named after One of Sea God’s Monstrous Children.

This past Monday, the planet Neptune officially got a new moon, a relatively tiny chunk of rock and ice about as wide as Manhattan is long. The object is currently dubbed S/2004 N 1, and it’s the fourteenth now known to circle that distant icy world. Mark Showalter, a researcher at the SETI Institute in Mountain View, California, found the moon in early July in archived images that the Hubble Space Telescope had snapped between 2004 and 2009. While using special software that stacks up and manipulates sequential images to reveal the motion of orbiting companions around a planet, Showalter tweaked a single line of code, switching the software’s gaze from close-in to Neptune to hundreds of thousands of miles further out. He walked away for an hour, and came back to see the software had found something curious in the old Hubble images, a small white dot that seemed to circle Neptune once every 22.5 hours. Further analyses confirmed it was a moon, one that had previously gone unseen because of its speedy orbit and small size.


Such discoveries have become old hat for Showalter, who has also discovered moons around Saturn, Uranus and Pluto. After discovering his two Plutonian moons in 2011 and 2012, Showalter held a contest to allow the public to nominate and vote on its favorite names for both new worlds. The results helped inspire the final names for the new moons, Kerberos and Styx, which were announced by the International Astronomical Union (IAU) on July 2.

Shortly after the announcement of Neptune’s newest moon, I called up Showalter to chat about the moon’s environment, its scientific value, and what he plans to name his latest discovery. An edited version of our conversation follows.

Scientific American: Was this a surprise?

Showalter: Not really, no. We went into this more focused on the ring arcs of Neptune, which are peculiar and persistent bright regions in a couple of dust rings around the planet. There were four arcs in the data from Voyager 2’s 1989 flyby, but two of those have now faded away, and we wanted to piece together what’s going on there and make sense of how the arcs are evolving. We always knew there was a possibility for more moons, things that would have been too small for Voyager 2 to see, so we had our eyes peeled the whole time. It’s definitely still a rush to find something like this, though.

Tell us more about the moon — what do we know about it?

We know its orbit pretty well. But we can only see it as an unresolved dot. We don’t even know its color, because to see these things with Hubble you have to use the whitest filters, which don’t give you info about how red, green, or blue something is. Everything else, we infer from context. We can guess how big it is based on how much light it reflects. It sits between two much larger moons, Proteus and Larissa. These and other moons near it all have similar surface reflectivity, or albedo, within 8 to 10 percent of each other. They’re about as dark as asphalt. When we make the educated guess that this moon shares that same albedo, that tells us this thing is probably on the order of 12 miles across.
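The size estimate Showalter describes follows from a simple scaling: at the same distance and phase, a moon's reflected flux is proportional to its albedo times its diameter squared. A minimal sketch (the reference diameter and flux ratio below are purely illustrative, not figures from the interview):

```python
import math

def diameter_from_flux(d_ref_km, flux_ratio, albedo_ratio=1.0):
    """Diameter of a body relative to a reference body, assuming
    reflected flux ~ albedo * diameter**2 at the same distance and phase.
    flux_ratio = F_body / F_ref; albedo_ratio = p_body / p_ref."""
    return d_ref_km * math.sqrt(flux_ratio / albedo_ratio)

# Illustrative only: a moon 1% as bright as a hypothetical 194 km
# reference, sharing the same asphalt-dark albedo, comes out near
# 19 km across, roughly the "12 miles" quoted above.
print(diameter_from_flux(194.0, 0.01))
```

If the albedo guess is wrong, the inferred diameter shifts as the inverse square root of the albedo ratio, which is why the "educated guess" of a shared albedo matters.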

What would it look like on the surface?

We don’t know for sure, but someday in the distant future, if and when we get a closer look at this thing, we’ll probably find it to be a cratered, irregularly shaped rock. One reason I’m in this field of astronomy — planetary astronomy — is that I like to visualize things, but it’s hard for me to picture a cosmological object like a quasar in my mind. It’s bright, it’s very far away, and that’s about all I can see. But when I think about the objects I study — planets, moons, asteroids, comets — they have landscapes, they can have geysers and volcanoes, they have things that are much more relatable. They’re more “Earth-like,” but also very exotic and different from what we see in our everyday lives. That combination of the familiar and the alien is something anyone who reads science fiction or watches Star Wars or Star Trek can appreciate.

I’m glad you mentioned Star Trek, since so many Trekkies unsuccessfully lobbied to name one of Pluto’s new moons “Vulcan” after Spock’s home planet. Might this new moon get the Star Trek treatment?

Let me just say first that I’m not surprised the IAU nomenclature committee rejected Vulcan despite the support of so many Star Trek fans. I was a little disappointed, but that’s a name already associated with hypothetical objects that may orbit interior to Mercury, so I knew it would be a tough sell. If they didn’t buy it, no problem. It’s still an honor just to have the opportunity to name a moon. Since Vulcan was rejected, I’ve been publicly mocked by William Shatner, and that’s an honor in its own way, too. But getting back to this new moon, the name has to somehow relate to Poseidon or Neptune, the Greek or Roman gods of the sea. At first I thought that wasn’t as interesting as naming Pluto’s moons for minions of Hades, but after a bit of reading I’ve found some great stuff, and I’ve gotten good suggestions from in and out of the research group. And we are talking about involving the public in this again, but having done it once, I know it’s a huge amount of work, whereas I could just sit down with my group in a room and decide on potential names in an hour or two.

One of my favorite possible names comes from The Odyssey, where Odysseus and his crew are on an island with a giant cyclops. That cyclops’s name is Polyphemus, and he is actually a son of Poseidon. “Polyphemus” is also good because it hasn’t yet been used for an asteroid — asteroids have already taken a lot of the great names. So that will probably be on the list. Another is a goddess, a daughter of Poseidon named Lamia. Lamia got in trouble with Zeus and was turned into a nightmarish creature that stalks and eats children. Even into the Middle Ages, people would tell their children to behave themselves, or else the Lamia will get you! So that’s another colorful one. You can probably guess that I’d prefer to name it after a hideous monster. I was a 12-year-old boy once, too, you know.

This is the sixth moon you’ve discovered, and you’re also credited with discovering rings around Jupiter and Uranus. What’s next?

Unlike those earlier moons, this new moon wouldn’t have shown up in the analyses I have done for Hubble observations of Uranus and Pluto, so it might be worth revisiting that data. There are also some very long exposures of the Saturn and Jupiter systems in the archive. Having this refined technique now, where we can take something from being undetectable in a single image to being detectable in several images combined by motion-tracking, is very powerful. One limitation of the technique is that you have to assume what you’re looking for is something in a circular, co-planar orbit, which is generally a good assumption. That’s what lets you extrapolate where an object should be in each image. Who knows, maybe we could find something in these other Hubble datasets, or for that matter even old spacecraft data! You never know what might turn up, so all of these archived observations should be reanalyzed at some point. Also, I think anyone would agree that if we sent another spacecraft out to Uranus or Neptune, there would be a huge flood of these sorts of new discoveries coming in, and many of us hope for exactly such a mission from NASA. There’s still so much we haven’t seen around these worlds because they’ve only been visited once, by Voyager 2 as it flew by [in 1989]. The big breakthroughs will always come from actually going to these places and seeing them up close.
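The shift-and-stack technique Showalter describes can be sketched in a few lines: shift each exposure by the offset a body on the assumed circular, co-planar orbit would have at that exposure's time, then co-add, so the moving dot reinforces while the background noise averages down. A minimal sketch (my own illustration, not the actual Hubble pipeline; real pipelines use sub-pixel interpolation rather than integer shifts):

```python
import numpy as np

def shift_and_stack(frames, times, predict_xy):
    """Co-add frames after removing the predicted motion of a body.

    frames     : list of 2-D image arrays
    times      : observation time of each frame
    predict_xy : predict_xy(t) -> (dx, dy), the body's pixel offset at
                 time t relative to its position in the first frame,
                 extrapolated from the assumed orbit
    """
    stacked = np.zeros_like(frames[0], dtype=float)
    for frame, t in zip(frames, times):
        dx, dy = predict_xy(t)
        # Integer-pixel shift for simplicity; the shifted body lands on
        # the same pixel in every frame, so its signal adds coherently.
        stacked += np.roll(np.roll(frame, -int(round(dy)), axis=0),
                           -int(round(dx)), axis=1)
    return stacked / len(frames)
```

With N frames the noise in the stack drops roughly as the square root of N while the correctly tracked body keeps its full brightness, which is how a dot invisible in any single image becomes detectable in the combination. The cost Showalter mentions is that the orbit must be assumed in advance: an object on a very different orbit smears out instead of reinforcing.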

Other than the thrill of finding and naming new objects after hideous monsters, what’s the scientific value of this?

Every one of the moons we’ve found, I think, has an interesting story to tell, and you don’t know what story the universe is trying to tell you until you find it. I think it’s possible for a truly boring moon to exist, one that tells you nothing, but so far in the history of solar system science I don’t think we’ve found one. Every moon so far has an interesting story if you look closely enough. In the case of this new moon, I keep wondering how this tiny little thing ended up wedged between two much bigger moons, Proteus and Larissa. This object has 0.01 percent of the mass between them. It’s minuscule, and yet somehow when they formed together it didn’t just become an extra layer of dust coating Proteus. How did it get left behind? Figuring that out will take some careful study. Neptune’s largest moon, Triton, orbits backwards and was probably captured long ago, and when that happened it must have disrupted any other moons, which means the moons we see today must have somehow re-formed afterward. Maybe this new moon can help us understand more of that early history.

Source: scientificamerican.com

Lasers Boost Space Communications.

Before NASA even existed, science-fiction writer Arthur C. Clarke in 1945 imagined spacecraft that could send messages back to Earth using beams of light. After decades of setbacks and dead ends, the technology to do this is finally coming of age.


Two spacecraft set for launch in the coming weeks will carry lasers that allow data to be transferred faster than ever before. One, scheduled for take-off on 5 September, is NASA’s Lunar Atmosphere and Dust Environment Explorer (LADEE), a mission that will beam video and scientific data from the Moon. The other, a European Space Agency (ESA) project called Alphasat, is due to launch on 25 July, and will be the first optical satellite to collect large amounts of scientific data from other satellites.

“This is a big step forward,” says Hamid Hemmati, a specialist in optical communications at NASA’s Jet Propulsion Laboratory in Pasadena, California. “Europe is going beyond demonstrations for the first time and making operational use of the technology.”

These lasers could provide bigger pipes for a coming flood of space information. New Earth-observation satellites promise to deliver petabytes of data every year. Missions such as the Mars Reconnaissance Orbiter (MRO) already have constraints on the volume of data they can send back because of fluctuations in download rates tied to a spacecraft’s varying distance from Earth. “Right now, we’re really far from Earth, so we can’t fit as many images in our downlink,” says Ingrid Daubar, who works on the MRO’s HiRISE camera at the University of Arizona in Tucson. Laser data highways could ultimately allow space agencies to kit out their spacecraft with more sophisticated equipment, says John Keller, deputy project scientist for NASA’s Lunar Reconnaissance Orbiter (LRO). That is not yet possible, he says. “We’re limited by the rate at which we can download the data.”

Today’s spacecraft send and receive messages using radio waves. The frequencies used are hundreds of times higher than those put out by music stations on Earth and can cram in more information, allowing orbital broadcasts to transmit hundreds of megabits of information per second. Lasers, which operate at higher frequencies still, can reach gigabits per second (see ‘Tuned in’). And unlike the radio portion of the electromagnetic spectrum, which is crowded and carefully apportioned, optical wavelengths are underused and unregulated.

Efforts to develop laser communication systems struggled for much of the twentieth century: weak lasers and problematic detectors derailed project after project. But recent advances in optics have begun to change the situation. “The technology has matured,” says Frank Heine, chief scientist at Tesat-Spacecom, a company based in Backnang, Germany.


In the 1980s, Europe took advantage of improved lasers and optical detectors to begin work on its first laser communication system, the Semiconductor Laser Intersatellite Link Experiment (SILEX). Equipped with the system, the ESA satellite Artemis received 50 megabits of information per second from a French satellite in 2001 and then exchanged messages with a Japanese satellite in 2005. The project taught engineers how to stabilize and point a laser in space. But it was abandoned after its intended application — a constellation of satellites to provide Internet services — was dropped in favor of the network of fiber-optic cables now criss-crossing the globe.

Since then, Heine’s team at Tesat-Spacecom has created a laser terminal for satellite-to-satellite communication, at a cost to the German Aerospace Center of €95 million (US$124 million). The laser, amplified by modern fiber-optic technology, achieves a power measured in watts — compared with the tens of milliwatts reached by SILEX. In 2008, terminals mounted on two satellites transferred information at gigabits per second over a few thousand kilometers.

ESA’s Alphasat will extend the range of this laser terminal to tens of thousands of kilometers once it is positioned high in geostationary orbit. Future satellites that sport laser terminals in lower orbits will be able to beam as much as 1.8 gigabits per second of information up to Alphasat, which will then relay the data to the ground using radio waves. Alphasat’s geostationary orbit means that it can provide a constant flow of data to its ground station — unlike low-Earth-orbit satellites, which can communicate with the ground for only an hour or two each day as they race by overhead. “Other satellites will be able to buy time on our laser terminal,” says Philippe Sivac, Alphasat’s acting project manager.

One client will be another ESA mission due to launch this year: Sentinel-1, the first of several spacecraft to be sent up for Europe’s new global environmental-monitoring program Copernicus. It will beam weather data to Alphasat until the end of 2014. At that point, Europe plans to start deploying a network of dedicated laser-relay satellites that will ultimately handle 6 terabytes of images, surface-temperature measurements and other data collected every day by a fleet of Sentinel spacecraft.

But Europe’s space lasers have a significant drawback. Although they can shuttle information between spacecraft, they have trouble talking to the ground — a task that must still be performed by radio waves. This is because these lasers encode information by slightly varying the frequency of light in a way analogous to modulating an FM radio station. A beam modulated in this way is protected from solar interference but is vulnerable to atmospheric turbulence.

The laser on NASA’s upcoming LADEE mission will communicate directly with Earth using a different approach that is less susceptible to atmospheric interference. It encodes information AM-style, varying the amplitude of the light wave rather than its frequency.

NASA hopes that the LADEE demonstration will extend laser communications beyond Earth’s immediate vicinity, to the Moon and other planets. Deep-space missions currently rely on radio transmissions. But radio waves spread out when they travel long distances, weakening the signal and reducing the data-transfer rate.

Laser beams, by contrast, keep their focus, allowing them to shuttle the already greater quantities of information they encode over longer distances without using the extra power needed by radio transmitters. “Laser communication becomes more advantageous the farther out you go,” says Donald Cornwell, mission manager for the Lunar Laser Communication Demonstration project on LADEE at NASA’s Goddard Space Flight Center in Greenbelt, Maryland.
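The reason laser beams "keep their focus" is diffraction: a transmitter's divergence angle scales with wavelength divided by aperture, so optical wavelengths spread orders of magnitude less than radio from comparably sized apertures. A rough sketch of the scaling (idealized diffraction limit; the apertures and wavelengths below are illustrative, not LADEE's actual hardware):

```python
import math

def spot_diameter_m(wavelength_m, aperture_m, distance_m):
    """Approximate beam footprint at a given range, taking the
    diffraction-limited full divergence angle as ~2.44 * wavelength /
    aperture (the Airy-disk angular diameter)."""
    theta = 2.44 * wavelength_m / aperture_m
    return theta * distance_m

MOON = 3.84e8  # mean Earth-Moon distance, metres

radio = spot_diameter_m(0.036, 1.0, MOON)    # X-band (~8 GHz), 1 m dish
laser = spot_diameter_m(1.55e-6, 0.1, MOON)  # near-IR, 10 cm telescope
print(radio / 1e3, laser / 1e3)  # footprints in km: tens of thousands vs ~15
```

Because received power falls with the square of the footprint, concentrating the same transmitted watts into a kilometers-wide spot instead of a planet-sized one is what buys the higher data rates without extra transmitter power.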

In 1992, the Galileo probe, on its way to Jupiter, spotted laser pulses sent more than 6 million kilometers from Earth. A laser on Earth pinged the Mars Global Surveyor in 2005. Another struck the MESSENGER mission en route to Mercury, which responded with its own laser pulses. In January this year, the Lunar Reconnaissance Orbiter received the first primitive message sent by laser to the Moon — an image of the Mona Lisa that travelled pixel by pixel in a sort of Morse code.

LADEE carries NASA’s first dedicated laser communications system. With a bandwidth of 622 megabits per second, more than six times what is possible with radio from the distance of the Moon, the system can broadcast high-definition television-quality video. But even though its AM optical system is good at penetrating Earth’s turbulent atmosphere, it will still need a backup radio link for cloudy days when the laser is blocked. To minimize this problem, LADEE’s primary ground station is in a largely cloudless desert in New Mexico, with alternative sites in two other sunny spots: California and the Canary Islands.

Source: scientificamerican.com

Gene Patents and Personalized Cancer Care: Impact of the Myriad Case on Clinical Oncology.

Genomic discoveries have transformed the practice of oncology and cancer prevention. Diagnostic and therapeutic advances based on cancer genomics developed during a time when it was possible to patent genes. A case before the Supreme Court, Association for Molecular Pathology v Myriad Genetics, Inc, seeks to overturn patents on isolated genes. Although the outcome is uncertain, it is suggested here that the Supreme Court decision will have few immediate effects on oncology practice or research but may have more significant long-term impact. The Federal Circuit court has already rejected Myriad’s broad diagnostic methods claims, and this is not affected by the Supreme Court decision. Isolated DNA patents were already becoming obsolete on scientific grounds, in an era when the human DNA sequence is public knowledge and because modern methods of next-generation sequencing need not involve isolated DNA. The Association for Molecular Pathology v Myriad Supreme Court decision will have limited impact on new drug development, as new drug patents usually involve cellular methods. A nuanced Supreme Court decision acknowledging the scientific distinction between synthetic cDNA and genomic DNA will further mitigate any adverse impact. A Supreme Court decision to include or exclude all types of DNA from patent eligibility could affect future incentives for genomic discovery as well as the future delivery of medical care. Whatever the outcome of this important case, it is important that judicial and legislative actions in this area maximize genomic discovery while also ensuring patients’ access to personalized cancer care.

Source: JCO