When a Medical “Cure” Makes Things Much, Much Worse


In 1960s Japan, a bizarre outbreak of hairy green tongues failed to set off alarms around the world

Visual by Fine Art Images/Heritage Images/Getty Images

Keiko Yamaguchi’s troubles began with diarrhea. After a few weeks, her toes went numb. The numbness and weakness crept up her legs, to her hips, and her vision began to fail. That was in early 1967. By the end of 1968, Yamaguchi, just 22 years old, was blind and paralyzed from the waist down.

She was one of more than 11,000 people in Japan (with reported cases also occurring in Great Britain, Sweden, Mexico, India, Australia, and several other nations) who were struck by a mysterious epidemic between 1955 and 1970. The outbreak was concentrated in Japan, where an estimated 900 died of the disease, which doctors eventually named SMON, for subacute myelo-optic neuropathy—“myelo” from the Greek word referring to the spinal cord; “optic” referring to vision; and “neuropathy” indicating a disease of the nerves.

The illness usually started with bouts of diarrhea and vomiting. Some patients, like Yamaguchi, became paralyzed and blind. (My efforts to track her down have been unsuccessful.) An uncertain number developed “green hairy tongue”: Their tongues sprouted what looked like tiny green hairs. Some of the afflicted developed green urine. Family members, too, came down with the disease, as did doctors and nurses who treated it. Approximately 5 to 10 percent of SMON patients died.

What was causing the outbreak? During the 1960s, Japan—where SMON was concentrated—launched vigorous research efforts to find out. Doctors thought an answer was at hand when a researcher studying SMON patients announced that he’d isolated an echovirus, a type of virus known to cause intestinal problems. But soon other viruses were found in patients, including a Coxsackie virus and a herpes virus. The herpes finding was compelling, since herpes viruses are known to affect the nervous system. But one by one, each claim was disproved when independent researchers were unable to replicate the earlier laboratory findings.


Other possible causes were considered and shot down. No drinking water pathogen was detected. Pesticides? That hypothesis was discarded when a study found that farmers, who would have the greatest exposure, had lower rates of SMON than non-farmers. There was some excitement when researchers found that many victims had taken two types of antibiotics, but it seemed unlikely that two different antibiotics would both suddenly cause the same highly unusual disease. Besides, experts noted, some patients took the antibiotics only after developing symptoms of SMON.

Then, in late 1970, three years after the drug theory was dismissed, a pharmacologist made a forehead-slapping discovery. The two presumably different antibiotics, it turned out, were simply different brand names for clioquinol, a drug used to treat amoebic dysentery. The green hairy tongue and green urine had been caused by the breakdown of clioquinol in the patients’ systems. One month after the discovery, Japan banned clioquinol, and the SMON epidemic—one of the largest drug disasters in history—came to an abrupt end.

It appeared that the epidemic was concentrated in Japan in part because the drug was routinely used not just for dysentery, but to prevent traveler’s diarrhea and various forms of abdominal upset; and in part because Japanese doctors prescribed the drug at far higher doses and for longer periods than was customary in other countries.

The illusion that SMON was an infectious disease was compelling: When patients with abdominal upset or diarrhea were treated with clioquinol and developed SMON, family members, doctors and nurses often took the drug thinking it would protect them—inadvertently creating the very disease they feared. The resulting cluster outbreaks made SMON look like an infectious disease. In short, what people thought was a cure for SMON was in fact its cause.

Few doctors know the story of SMON, and perhaps even fewer use the catchphrase “cure as cause.” Yet the phenomenon is more relevant today than ever. A study published last year suggests that medical interventions, including problems with prescribed drugs and implanted medical devices—from cardiac stents to artificial hips and birth control devices—are now the third leading cause of death in the U.S.

The green tongue fur of a patient with SMON, rendered as blue in this image. 

Examples abound in virtually every specialty, from cardiology to psychiatry to cancer care. Jerome Hoffman, an emeritus professor of medicine at UCLA, says it isn’t surprising: Because drugs and medical devices target disordered body systems, it’s all too easy to overshoot and make the disorder worse.

In the 1980s and 1990s, for instance, patients were routinely treated with heart rhythm drugs to prevent the abnormal heartbeats called premature ventricular contractions (PVCs) from triggering deadly ventricular fibrillation. The drugs were quite good at reducing the abnormal beats, and doctors prescribed them widely, believing they were saving lives. But in 1989, the Cardiac Arrhythmia Suppression Trial, or CAST, sponsored by the National Institutes of Health, demonstrated that although the drugs effectively suppressed PVCs, the PVCs that did occur were much more likely to trigger deadly rhythms. Treated patients were 3.6 times as likely to die as patients given a placebo.

The drugs could fix the PVCs but kill the patient; as the old joke goes, the operation was a success but the patient died. The problem was invisible for more than a decade because doctors assumed that when a patient died suddenly it was from the underlying heart condition—not the treatment they prescribed.

In another case of cure as cause, a landmark study of Prozac to treat adolescent depression found that it increased overall suicidality—the very outcome it is intended to prevent. In the study, 15 percent of depressed adolescents treated with Prozac became suicidal, versus 6 percent treated with psychotherapy and 11 percent treated with placebo. These numbers were not highlighted by Eli Lilly, the manufacturer, or by the lead researcher, who claimed that Prozac was “the big winner” in the treatment of depressed teens. Doctors, unaware that the drug could increase suicidality, often increased the dosage when teens became more depressed in treatment, thinking the underlying depression — not the drug — was at fault. Studies of other drugs in the same class as Prozac, selective serotonin reuptake inhibitors, or SSRIs, have shown similar problems.

There are many other instances of cure as cause: cardiac stents that caused clots in the coronary arteries; implanted pacemaker-defibrillators that misfired or failed to fire, causing deadly heart rhythms; and vagus nerve stimulators to treat seizures that instead have led to increased seizures.

One of SMON’s lessons is the danger of perverse financial incentives. Japanese doctors were paid for each prescription they wrote, a practice considered unethical in most peer nations. Doctors in some prefectures in Japan can still sell drugs to their patients. No wonder they prescribed such high doses of clioquinol for prolonged periods.

More than half of doctors in the U.S. receive money or other blandishments from Big Pharma and device manufacturers. The amounts can be stupendous: Some doctors have received tens of millions of dollars to implant certain devices or to promote certain drugs. Such influence takes a toll on the humans exposed to harmful treatments. The nonprofit group Institute for Safe Medication Practices conducted a study to quantify drug harms and concluded that prescribed medicines are “one of the most significant perils to human health resulting from human activity.” With the rise of the medical-industrial complex and its extraordinary profits, industry has a vested interest in blaming bad outcomes on a patient’s underlying disease and not on their own products.

Industry claims often mislead doctors and patients alike. Ciba-Geigy, the main manufacturer of clioquinol, said the drug was safe because it couldn’t be absorbed into the bloodstream from the intestines. Yet legal filings from a lawsuit against the company show that Ciba-Geigy was aware of the drug’s harmful effects for years. As early as 1944, clioquinol’s inventors said the drug should be strictly controlled and limited to 10 to 14 days’ use. In 1965, after a Swiss veterinarian published reports that dogs given clioquinol developed seizures and died, Ciba was content to issue a warning that the drug shouldn’t be given to animals.

In the U.S., pharma’s influence over what doctors and the public believe about drugs and devices has increased by orders of magnitude, as virtually all research is now conducted by industry and genuinely independent research has all but vanished. In 1977, industry sponsorship provided 29 percent of funding for clinical and nonclinical research. Estimates today suggest that figure has increased to around 60 percent. Even most “independent” research, such as that conducted by the National Institutes of Health, is now “partnered” with industry, making our reliance on industry claims nearly complete.

Stemming the tide of medical interventions that do more harm than good will require a deep examination of cure as cause and a willingness to stop depending on the industry that perversely promotes it.

How Proteins Helped Scientists Read Between the Lines of a 1630 Plague Death Registry


New tech reveals bacterial contamination, what scribes were eating and how many rats were around

An etching of carts laden with corpses in the Piazza San Babila, Milan during the plague of 1630. 

For centuries, the plague was a vicious harbinger of death across Europe. It brought devastation to cities and rural villages at irregular intervals, and from 1629 to 1630 it alighted upon Milan, Italy. The unimaginable death toll—60,000 people in a city of 130,000—imprinted itself on the Italian imagination, eventually coming to be featured in Alessandro Manzoni’s 19th century novel The Betrothed.

During the long Milan plague season, scribes recorded in meticulous death registries the names and ages of every individual who perished. Now, it turns out those detailed documents held more than names and dates—they were also full of invisible stories hiding among the written records.

Nearly 400 years later, scientists have returned to uncover new details about the environmental conditions around the manuscripts, from what those scribes were eating to the animals kept nearby. The discoveries were all thanks to a game-changing technology: polymer disks that extract centuries-old proteins from the paper. Their findings, recently published in the Journal of Proteomics, detail everything from the prevalence of rodents to the enormous quantity of bacteria all around the manuscripts—and open up a new avenue of inquiry for other crucial historical texts.

“We started this research a few years ago from a basic idea, that papers and manuscripts absorb different proteins from the writer and the environment around the paper,” says physicist Gleb Zilberstein, one of the study’s authors. But they never would have guessed how much those proteins would reveal.

The first clue that uncovering such details might be possible came from an unlikely source: brown, circular polymer disks made from ethylene-vinyl acetate (EVA), originally meant for manuscript preservation, Zilberstein says. His team had tried using them to remove harmful acids from the cellulose-based paper of the 75-year-old notebooks of Mikhail Bulgakov, Russian author of The Master and Margarita.

Upon removing the disks, they discovered that the polymers were also full of proteins, which could provide rich data about the authors’ environmental conditions. In fact, proteins can be a better source of such data than DNA, says Zilberstein. “Most people who work in biochemical characterization of artifacts use genomes,” Zilberstein says. “It’s good, but DNA is less stable than the peptides in proteins.” This type of analysis is called proteomics, and it’s only been refined in the past few years.

With the Milan manuscripts, they went about the process more purposefully, leaving the EVA disks on the pages for 60 to 90 minutes to allow proteins to adhere to the disk without degrading the paper. The extracted peptide chains—amino acids linked like Lego blocks—were then analyzed in a mass spectrometer and identified using protein databases. The researchers retrieved more than 70,000 peptide sequences, comprising 600 different protein families, from the 11 pages of the death registry and a one-page notice kept in the same archive.
A public notice on new quarantine policies that researchers analyzed for the new study. A brown EVA disk, which pulls acids and proteins from the page, is in the bottom right corner. 
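To make the identification step concrete, here is a toy sketch in Python of the general logic: recovered peptide sequences are looked up in a reference set of proteins and hits are tallied per protein family. The sequences, family names and matching rule below are invented placeholders, not data or software from the study, which relies on mass-spectrometry search engines and far larger databases.

```python
# Toy illustration of peptide-to-protein matching: look up each recovered
# peptide in a small reference "database" and count hits per protein family.
# All sequences and family names here are invented placeholders.
from collections import Counter

protein_db = {
    # family name -> peptide sequences attributed to it (hypothetical)
    "keratin (human contact)":    {"LNDLEDALQQAK", "SLDLDSIIAEVK"},
    "casein (milk/food residue)": {"FFVAPFPEVFGK", "YLGYLEQLLR"},
    "Yersinia-like bacterial":    {"AGVITNPDGSLK", "LDSLTQELNAR"},
}

recovered_peptides = [
    "FFVAPFPEVFGK", "LNDLEDALQQAK", "AGVITNPDGSLK", "YLGYLEQLLR", "XXNOTINDBXX",
]

hits = Counter()
for peptide in recovered_peptides:
    for family, known_peptides in protein_db.items():
        if peptide in known_peptides:
            hits[family] += 1

for family, count in hits.most_common():
    print(f"{family}: {count} peptide match(es)")
```

Real searches also have to score partial and chemically modified matches against millions of candidate sequences, which is part of why, as the researchers note below, telling closely related proteins apart is hard.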

While peptides might be more stable than DNA, they come with their own inconveniences: they’re also much harder to identify. This was one challenge with the 1630 documents, the researchers say. As biochemist Kathryn Stone writes in a 2013 report on proteomics technology, “Protein structure can be much more heterogeneous than DNA structure,” which requires researchers to make inferences about where the peptides came from.

“Proteins are indeed more stable in some ways than DNA, but have less discriminatory power at the sequence level. Also, even though you may find traces of proteins, discriminating them from contamination is a lot harder than it is from DNA,” said Hendrik Poinar, an evolutionary biologist at the McMaster Ancient DNA Centre who was not involved in the research, by email. But even with those caveats, Poinar added of the EVA disk analysis, “I say, ‘Great start, onwards!’”

The researchers found 312 peptide sequences that matched known bacteria. Then, they narrowed that number down to 17 that fell in the genus Yersinia—the group that includes Y. pestis, the bacterium responsible for bubonic plague. But the proteins don’t belong exclusively to Y. pestis. They could also belong to other species of Yersinia, including some that aren’t deadly to humans.

As Ann Carmichael, a history professor emeritus at Indiana University at Bloomington who has spent her academic career researching the medical history of the plague, puts it: “The identification of the proteins is only as good as the database they’ve compiled.” But that doesn’t mean she’s not intrigued by the new research. “It’s exciting material and I think there will be a lot of refining in the labs,” says Carmichael, who also wasn’t involved in the new study.

Carmichael’s first reaction to the new study was the disgust of realizing that all these particles were in manuscripts she’d handled. “We’ve all thumbed through the pages of manuscripts and I’ve spent a lot of time with Milanese documents,” she says. One of her colleagues even came across mouse droppings in the pages of the manuscript she was reviewing. Apart from the “ewww” of knowing the ratio of rat proteins to human proteins was nearly one-to-one, Carmichael found the discoveries fascinating.

University of Texas historian Stefano D’Amico agrees that the new technique can offer insights that the text and its production alone couldn’t. Specifically, he pointed to the finding that the scribes were eating mainly maize, potatoes, chickpeas, rice and carrots, and that sheep and goats were somewhere in the lazaretto, which housed the sick. (The authors speculate that those farm animals may have been housed in the quarantined lazaretto to feed infants whose mothers died of the plague.)

“All the information about the diet of these people, what they ate at the time, what kind of animals were in the area of the lazaretto—the environment in which these people were operating—this is all important to historians,” D’Amico says.

A page from the plague death registries, with an EVA disk at the bottom right. 

Of course, the registries themselves have plenty to say about how the plague upended Italian society during the Renaissance. Carmichael, who has reviewed documents from the centuries before 1630, was struck by the consistency of the administrators recording the names and deaths of these individuals. “They show up for work, they do the same thing over and over. It is a tedious, thankless job. And the only time you don’t find these records is when the plague gets so bad that record-keeping collapses. But they still try to do it.”

The fastidious documentation, then, was an effort to impose order on a chaotic situation. The idea was to help officials identify when new outbreaks of the plague were beginning, so that they could quarantine the city from trade with other cities, and begin rounding up afflicted individuals to transport them to encampments or the lazaretto, an enormous structure outside the city that housed as many as 9,000 people in and around its grounds. While some people afflicted with the plague went there willingly, most were forcibly removed from the city along with their families and other contacts.

“Once inside, you were basically a prisoner,” D’Amico says. “There was one entrance and it was guarded by soldiers. You could only get out if you survived the epidemic.”

Being constantly threatened with death took its toll on civilians. “These are the centuries when Europe is colonizing the globe and all sorts of things are happening—the Renaissance, the Reformation, the Scientific Revolution—and the plague is an interruption,” Carmichael says. “Daniel Defoe said plague was an unseen mine: you step on it, and it blows up and changes your life.”

For Zilberstein and the chemists who developed this technology, learning more about what life was like during the plague is only the beginning. The EVA disks could have any number of applications for historians and archivists hoping to uncover more information about their documents. For instance, Zilberstein says they hope to investigate the original papers of writers like Anton Chekhov and Friedrich Nietzsche, to see whether they were using any medicines or suffering from any medical conditions at the time of writing their books.

There are some caveats. Different countries have different climates, and some manuscripts might be contaminated with more modern proteins, depending on how they’re handled. But Zilberstein believes that plucking up peptides is still a fruitful way forward in cultural heritage research. As he says, “We can read the hidden data from old sources of paper-based information.”

How Drugged-Up Shellfish Help Scientists Understand Human Pollution


These involuntary medicine-guzzlers have much to tell us about the consequences of pharmaceutical waste

From developmental problems to reproductive issues, drug waste is affecting marine wildlife. 

From coastal cities around the world, through pipes lurking just beneath the waves, streams of human waste flood into the sea.

Sometimes this water is cleaned—filtered, aerated, and treated with bleach. Sometimes it is not, and the reams of sewage—whatever we wash down the drain or flush down the toilet—flow into the ocean raw. If that grosses you out, consider that human excrement is probably the least crappy component of the flow, at least when it comes to environmental impacts. More troubling are certain invisible substances that easily pass through wastewater treatment plants and end up in the ocean.

Every Advil you pop or antidepressant you swallow is processed in your body and excreted, often as chemical byproducts that can still affect other organisms. Scientists have only tested a fraction of pharmaceuticals for their effects on marine life, and most remain unregulated in wastewater.

In their quest to understand the effects of drugs on marine life, however, scientists have found an involuntary ally: shellfish. Because they live stationary lives, clams and mussels have been accidental test subjects in pharmaceutical pollution research. Now, these shellfish are helping sound the alarm about several common drugs and chemicals.

Off the shore of São Paulo, Brazil, a pipe releases mostly untreated sewage into Santos Bay. And as biologist Fabio Pusceddu of the University of São Paulo reports in a recent study, the animals around this outfall appear to be feeling the effects of our drugs.

Recent studies have raised concerns about substances making it into the environment, including antibiotics in soaps and personal care products, estrogen mimics in birth control, and painkillers, but there’s not much data on the effects of these compounds on wildlife. So, Pusceddu grew shellfish in the lab on sediment contaminated with two drugs, exposing them to the same concentrations they face in Santos Bay.

One was ibuprofen, a common painkiller, and the other was triclosan, an antibacterial compound found in products including toothpastes and body washes. The drug exposure caused a range of negative effects, including malformed membranes and reproductive difficulties. This is a problem, Pusceddu says, because most toxicity assessments done by governments to see if a substance should be regulated only look at acute effects, which usually means whether the compound is lethal. But just because animals are surviving our pharmaceutical pollution doesn’t mean they are unaffected.

Studies of chronic impacts from longer-term exposure are expensive and time-consuming, but it’s exactly these impacts that worry Pusceddu. “We’re not talking about issues in one individual,” Pusceddu says, “but in a population in the long term.”

 Coastal environments vary widely from city to city. São Paulo’s sewage lingers in sheltered Santos Bay, amplifying the effects of drug exposure. But on Canada’s west coast, deep water, dynamic tides and strong currents routinely flush the Juan de Fuca Strait, where the city of Victoria, British Columbia, has been pumping raw sewage through only a coarse screen since the 1960s. City officials, however, are worried about pharmaceuticals and began routinely monitoring the outfalls for drugs in 2004.

In a recent study, Chris Lowe, program manager with the Wastewater and Marine Environment Program for the Victoria region, showed that shellfish, sediment and water in the region immediately around sewage outfalls show traces of drugs, including triclosan and ibuprofen. Lowe’s study only looked in detail at a dozen drugs, but he and his colleagues have detected many more.

So what does this outpouring of pharmaceutical waste mean for ocean life? Unlike heavy metals, most drugs don’t accumulate up the food chain, though some compounds, such as triclosan, can build up in animal fat. But since drugs are designed to be effective at low doses, a little can do a lot of damage.

As yet, there’s no widely used technology for removing drugs from wastewater. The only way these compounds leave sewage is if they bind to particles that are otherwise filtered out by standard treatments or if they break down naturally. Some researchers are developing systems that can be added to treatment plants to filter out pharmaceuticals, such as activated carbon filters or bacteria specifically designed to break down drugs. But these are still in development, and many drugs escape even the most advanced treatment plants currently operating.

Pusceddu says the effects of pharmaceutical waste vary by location and solutions should, too. In Brazil, for example, ibuprofen often comes in large packages, so people may flush a lot of expired medication. In this case, the solution may be to try to get manufacturers to make smaller packages. But ultimately, Pusceddu says we need to learn a lot more about what these compounds do in the environment. Only then can we tell if the drugs that keep us healthy are making the ocean sick.

How to see a memory


Every memory leaves its own imprint in the brain, and researchers are starting to work out what one looks like.

For someone who’s not a Sherlock superfan, cognitive neuroscientist Janice Chen knows the BBC’s hit detective drama better than most. With the help of a brain scanner, she spies on what happens inside viewers’ heads when they watch the first episode of the series and then describe the plot.

Chen, a researcher at Johns Hopkins University in Baltimore, Maryland, has heard all sorts of variations on an early scene, when a woman flirts with the famously aloof detective in a morgue. Some people find Sherlock Holmes rude while others think he is oblivious to the woman’s nervous advances. But Chen and her colleagues found something odd when they scanned viewers’ brains: as different people retold their own versions of the same scene, their brains produced remarkably similar patterns of activity [1].

Chen is among a growing number of researchers using brain imaging to identify the activity patterns involved in creating and recalling a specific memory. Powerful technological innovations in human and animal neuroscience in the past decade are enabling researchers to uncover fundamental rules about how individual memories form, organize and interact with each other. Using techniques for labelling active neurons, for example, teams have located circuits associated with the memory of a painful stimulus in rodents and successfully reactivated those pathways to trigger the memory. And in humans, studies have identified the signatures of particular recollections, which reveal some of the ways that the brain organizes and links memories to aid recollection. Such findings could one day help to reveal why memories fail in old age or disease, or how false memories creep into eyewitness testimony. These insights might also lead to strategies for improved learning and memory.

The work represents a dramatic departure from previous memory research, which identified more general locations and mechanisms. “The results from the rodents and humans are now really coming together,” says neuroscientist Sheena Josselyn at the Hospital for Sick Children in Toronto, Canada. “I can’t imagine wanting to look at anything else.”

In search of the engram

The physical trace of a single memory — also called an engram — has long evaded capture. US psychologist Karl Lashley was one of the first to pursue it and devoted much of his career to the quest. Beginning around 1916, he trained rats to run through a simple maze, and then destroyed a chunk of cortex, the brain’s outer surface. Then he put them in the maze again. Often the damaged brain tissue made little difference. Year after year, the physical location of the rats’ memories remained elusive. Summing up his ambitious mission in 1950, Lashley wrote [2]: “I sometimes feel, in reviewing the evidence on the localization of the memory trace, that the necessary conclusion is that learning is just not possible.”

Memory, it turns out, is a highly distributed process, not relegated to any one region of the brain. And different types of memory involve different sets of areas. Many structures that are important for memory encoding and retrieval, such as the hippocampus, lie outside the cortex — and Lashley largely missed them. Most neuroscientists now believe that a given experience causes a subset of cells across these regions to fire, change their gene expression, form new connections, and alter the strength of existing ones — changes that collectively store a memory. Recollection, according to current theories, occurs when these neurons fire again and replay the activity patterns associated with past experience.

Scientists have worked out some basic principles of this broad framework. But testing higher-level theories about how groups of neurons store and retrieve specific bits of information is still challenging. Only in the past decade have new techniques for labelling, activating and silencing specific neurons in animals allowed researchers to pinpoint which neurons make up a single memory (see ‘Manipulating memory’).

Josselyn helped lead this wave of research with some of the earliest studies to capture engram neurons in mice [3]. In 2009, she and her team boosted the level of a key memory protein called CREB in some cells in the amygdala (an area involved in processing fear), and showed that those neurons were especially likely to fire when mice learnt, and later recalled, a fearful association between an auditory tone and foot shocks. The researchers reasoned that if these CREB-boosted cells were an essential part of the fear engram, then eliminating them would erase the memory associated with the tone and remove the animals’ fear of it. So the team used a toxin to kill the neurons with increased CREB levels, and the animals permanently forgot their fear.

A few months later, Alcino Silva’s group at the University of California, Los Angeles, achieved similar results, suppressing fear memories in mice by biochemically inhibiting CREB-overproducing neurons [4]. In the process, they also discovered that at any given moment, cells with more CREB are more electrically excitable than their neighbours, which could explain their readiness to record incoming experiences. “In parallel, our labs discovered something completely new — that there are specific rules by which cells become part of the engram,” says Silva.

But these types of memory-suppression study sketch out only half of the engram. To prove beyond a doubt that scientists were in fact looking at engrams, they had to produce memories on demand, too. In 2012, Susumu Tonegawa’s group at the Massachusetts Institute of Technology in Cambridge reported creating a system that could do just that.

By genetically manipulating brain cells in mice, the researchers could tag firing neurons with a light-sensitive protein. They targeted neurons in the hippocampus, an essential region for memory processing. With the tagging system switched on, the scientists gave the animals a series of foot shocks. Neurons that responded to the shocks churned out the light-responsive protein, allowing researchers to single out cells that constitute the memory. They could then trigger these neurons to fire using laser light, reviving the unpleasant memory for the mice [5]. In a follow-up study, Tonegawa’s team placed mice in a new cage and delivered foot shocks, while at the same time re-activating neurons that formed the engram of a ‘safe’ cage. When the mice were returned to the safe cage, they froze in fear, showing that the fearful memory was incorrectly associated with a safe place [6]. Work from other groups has shown that a similar technique can be used to tag and then block a given memory [7, 8].

This collection of work from multiple groups has built a strong case that the physiological trace of a memory — or at least key components of this trace — can be pinned down to specific neurons, says Silva. Still, neurons in one part of the hippocampus or the amygdala are only a tiny part of a fearful foot-shock engram, which involves sights, smells, sounds and countless other sensations. “It’s probably in 10–30 different brain regions — that’s just a wild guess,” says Silva.

A broader brush

Advances in brain-imaging technology in humans are giving researchers the ability to zoom out and look at the brain-wide activity that makes up an engram. The most widely used technique, functional magnetic resonance imaging (fMRI), cannot resolve single neurons, but instead shows blobs of activity across different brain areas. Conventionally, fMRI has been used to pick out regions that respond most strongly to various tasks. But in recent years, powerful analyses have revealed the distinctive patterns, or signatures, of brain-wide activity that appear when people recall particular experiences. “It’s one of the most important revolutions in cognitive neuroscience,” says Michael Kahana, a neuroscientist at the University of Pennsylvania in Philadelphia.

The development of a technique called multi-voxel pattern analysis (MVPA) has catalysed this revolution. Sometimes called brain decoding, the statistical method typically feeds fMRI data into a computer algorithm that automatically learns the neural patterns associated with specific thoughts or experiences. As a graduate student in 2005, Sean Polyn — now a neuroscientist at Vanderbilt University in Nashville, Tennessee — helped lead a seminal study applying MVPA to human memory for the first time [9]. In his experiment, volunteers studied pictures of famous people, locations and common objects. Using fMRI data collected during this period, the researchers trained a computer program to identify activity patterns associated with studying each of these categories.

Later, as subjects lay in the scanner and listed all the items that they could remember, the category-specific neural signatures re-appeared a few seconds before each response. Before naming a celebrity, for instance, the ‘celebrity-like’ activity pattern emerged, including activation of an area of the cortex that processes faces. It was some of the first direct evidence that when people retrieve a specific memory, their brain revisits the state it was in when it encoded that information. “It was a very important paper,” says Chen. “I definitely consider my own work a direct descendant.”
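For readers who want a concrete picture of what this kind of decoding looks like in code, here is a minimal sketch in Python using scikit-learn. It trains a classifier on a synthetic trials-by-voxels matrix and reports cross-validated accuracy at predicting the stimulus category; the dimensions, category labels and noise model are assumptions for illustration only, not the pipeline used in the studies described here.

```python
# Minimal sketch of MVPA-style "brain decoding": train a classifier on
# patterns shaped like fMRI data (trials x voxels), then estimate how well
# held-out patterns predict the stimulus category. The data are synthetic
# stand-ins, not real scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 120, 500                 # hypothetical study dimensions
categories = ["celebrity", "location", "object"]

# Each category gets its own mean voxel pattern; trials are noisy samples.
labels = rng.integers(0, len(categories), size=n_trials)
templates = rng.normal(size=(len(categories), n_voxels))
X = templates[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
y = np.array([categories[i] for i in labels])

# Cross-validated decoding accuracy: chance would be about 1/3 here.
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} "
      f"(chance ~ {1 / len(categories):.2f})")
```

In real experiments the same idea is applied to preprocessed fMRI data, and a decoder trained on the study phase can then be tested on scans collected during free recall, as in the work described above.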

Chen and others have since refined their techniques to decode memories with increasing precision. In the case of Chen’s Sherlock studies, her group found that patterns of brain activity across 50 scenes of the opening episode could be clearly distinguished from one another. These patterns were remarkably specific, at times telling apart scenes that did or didn’t include Sherlock, and those that occurred indoors or outdoors.

Near the hippocampus and in several high-level processing centres such as the posterior medial cortex, the researchers saw the same scene-viewing patterns unfold as each person later recounted the episode — even if people described specific scenes differently [1]. They even observed similar brain activity in people who had never seen the show but had heard others’ accounts of it [10].

“It was a surprise that we see that same fingerprint when different people are remembering the same scene, describing it in their own words, remembering it in whatever way they want to remember,” says Chen. The results suggest that brains — even in higher-order regions that process memory, concepts and complex cognition — may be organized more similarly across people than expected.

Melding memories

As new techniques provide a glimpse of the engram, researchers can begin studying not only how individual memories form, but how memories interact with each other and change over time.

At New York University, neuroscientist Lila Davachi is using MVPA to study how the brain sorts memories that share overlapping content. In a 2017 study with Alexa Tompary, then a graduate student in her lab, Davachi showed volunteers pictures of 128 objects, each paired with one of four scenes — a beach scene appeared with a mug, for example, and then a keyboard; a cityscape was paired with an umbrella, and so on. Each object appeared with only one scene, but many different objects appeared with the same scene [11]. At first, when the volunteers matched the objects to their corresponding scenes, each object elicited a different brain-activation pattern. But one week later, neural patterns during this recall task had become more similar for objects paired with the same scene. The brain had reorganized memories according to their shared scene information. “That clustering could represent the beginnings of learning the ‘gist’ of information,” says Davachi.

Clustering related memories could also help people use prior knowledge to learn new things, according to research by neuroscientist Alison Preston at the University of Texas at Austin. In a 2012 study, Preston’s group found that when some people view one pair of images (such as a basketball and a horse), and later see another pair (such as a horse and a lake) that shares a common item, their brains reactivate the pattern associated with the first pair [12]. This reactivation appears to bind together those related image pairs; people who showed this effect during learning were better at recognizing a connection later — implied, but never seen — between the two pictures that did not appear together (in this case, the basketball and the lake). “The brain is making connections, representing information and knowledge that is beyond our direct observation,” explains Preston. This process could help with a number of everyday activities, such as navigating an unfamiliar environment by inferring spatial relationships between a few known landmarks. Being able to connect related bits of information to form new ideas could also be important for creativity, or imagining future scenarios.

In a follow-up study, Preston has started to probe the mechanism behind memory linking, and has found that related memories can merge into a single representation, especially if the memories are acquired in close succession [13]. In a remarkable convergence, Silva’s work has also found that mice tend to link two memories formed closely in time. In 2016, his group observed that when mice learnt to fear foot shocks in one cage, they also began expressing fear towards a harmless cage they had visited a few hours earlier [14]. The researchers showed that neurons encoding one memory remained more excitable for at least five hours after learning, creating a window in which a partially overlapping engram might form. Indeed, when they labelled active neurons, Silva’s team found that many cells participated in both cage memories.

These findings suggest some of the neurobiological mechanisms that link individual memories into more general ideas about the world. “Our memory is not just pockets and islands of information,” says Josselyn. “We actually build concepts, and we link things together that have common threads between them.” The cost of this flexibility, however, could be the formation of false or faulty memories: Silva’s mice became scared of a harmless cage because their memory of it was formed so close in time to a fearful memory of a different cage. Extrapolating single experiences into abstract concepts and new ideas risks losing some detail of the individual memories. And as people retrieve individual memories, these might become linked or muddled. “Memory is not a stable phenomenon,” says Preston.

Researchers now want to explore how specific recollections evolve with time, and how they might be remodelled, distorted or even recreated when they are retrieved. And with the ability to identify and manipulate individual engram neurons in animals, scientists hope to bolster their theories about how cells store and serve up information — theories that have been difficult to test. “These theories are old and really intuitive, but we really didn’t know the mechanisms behind them,” says Preston. In particular, by pinpointing individual neurons that are essential for given memories, scientists can study in greater detail the cellular processes by which key neurons acquire, retrieve and lose information. “We’re sort of in a golden age right now,” says Josselyn. “We have all this technology to ask some very old questions.”

Greek Yogurt Fuels Your Morning…And Your Plane?


Researchers have developed a method for turning yogurt whey into bio-oil, which could potentially be processed into biofuel for planes


Do you, like many Americans, enjoy the tangy taste and thick creaminess of Greek yogurt? Well, one day your yogurt could help fuel airplanes.

Researchers at Cornell University and the University of Tübingen in Germany have developed a method of turning yogurt whey, the liquid left behind after straining out the milk proteins, into bio-oil. This bio-oil could then potentially be processed into biofuel for vehicles, including planes.

Lars Angenent, the microbiologist and environmental engineer who led the research, says he watched the Greek yogurt craze explode in upstate New York while he was working at Cornell. Local Greek yogurt producers used fleets of trucks to haul away liquid whey – for every kilogram of yogurt, there are two to three kilograms of whey left behind, and America produces more than 770,000 metric tons of Greek yogurt annually.
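Those two figures give a sense of scale; a quick back-of-the-envelope calculation in Python, using only the numbers quoted above, makes it explicit:

```python
# Rough scale of the US whey stream, using only the figures quoted above:
# ~770,000 metric tons of Greek yogurt per year, and 2-3 kg of whey left
# behind per kg of yogurt. Illustrative arithmetic only.
yogurt_tons_per_year = 770_000
whey_ratio_low, whey_ratio_high = 2, 3   # kg of whey per kg of yogurt

low = yogurt_tons_per_year * whey_ratio_low
high = yogurt_tons_per_year * whey_ratio_high
print(f"Whey to dispose of: roughly {low:,} to {high:,} metric tons per year")
```

That works out to roughly 1.5 to 2.3 million metric tons of whey a year, which helps explain the fleets of trucks.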

“If we treat the waste on site – that means at the yogurt plant – less trucking is needed, which reduces the carbon footprint,” Angenent says.

His lab had discovered how to convert lactic acid into bio-oil, and Angenent knew whey would be a good source for lactic acid. They tested the process and found that it did indeed work the way they’d hoped. The team recently published their research in the journal Joule.

The bio-oil produced from whey could also potentially be used as animal feed. Its natural antimicrobial capabilities could help replace antibiotics, which are commonly used to treat farm animals but bring risks of antibiotic resistance.

“[If] the bio-oil can be fed to the cows and acts as an antimicrobial, we would close the circle, and the Greek yogurt industry could become more sustainable,” says Angenent.

Angenent has created a company to explore the commercial potential of this technology, and hopes to see the bio-oil in use by 2020. He and his team are also investigating the biofuel potential of other waste liquids.

Joanne Ivancic, executive director of Advanced Biofuels USA, a nonprofit dedicated to promoting biofuels, says Angenent’s research is promising, but that the future of any biofuel depends on numerous political and economic factors.

“The commercial potential of anything that’s going to take the place of petroleum or natural gas fuels depends on the price of oil and the price of natural gas,” Ivancic says. “They have to be competitive because supportive government policy is just not there.”

Since the early 2000s, conservationists and manufacturers alike have hoped that biofuels could help deal with both climate change and issues of fuel security. But growing crops like corn and soybeans to produce ethanol, the most common biofuel, has some major environmental and social downsides. These crops require massive amounts of fertile land, displacing crops that could be used for food and sucking up resources like fertilizer and water.

So researchers have been turning to other potential biofuel sources. Some are looking at plants such as hemp and switchgrass that are less resource-intensive than corn or soybeans. Sugar beets, termed “energy beets” by their supporters, are another crop with fuel potential, and have the added benefit of remediating phosphorus in the soil, helping to keep nearby watersheds healthy. This past summer ExxonMobil announced the creation of a strain of genetically modified algae that it says produces twice as much oil as regular algae. One company is beginning to process household garbage like eggshells and coffee grounds into jet fuel. In late 2016, Alaska Airlines powered a cross-country flight with a new biofuel produced from wood scraps. Like the yogurt whey, the wood has the benefit of being a waste product that would otherwise present a disposal challenge; many of the most promising potential biofuel materials are waste products or “co-products” of other processes.

Ivancic is optimistic that increasing cultural awareness about the perils of climate change will help make these kinds of biofuels economically feasible.

“In the 1970s we recognized the Clean Water Act and the Clean Air Act,” she says. “If we can tap into that same kind of concern for the environment then we may get the policies and the consumer demand that we need.”

Do you work more than 39 hours a week? Your job could be killing you



Long hours, stress and physical inactivity are bad for our wellbeing – yet we’re working harder than ever. Isn’t it time we fought back?

When a new group of interns recently arrived at Barclays in New York, they discovered a memo in their inboxes. It was from their supervisor at the bank, and headed: “Welcome to the jungle.” The message continued: “I recommend bringing a pillow to the office. It makes sleeping under your desk a lot more comfortable … The internship really is a nine-week commitment at the desk … An intern asked our staffer for a weekend off for a family reunion – he was told he could go. He was also asked to hand in his BlackBerry and pack up his desk.”

Although the (unauthorised) memo was meant as a joke, no one laughed when it was leaked to the media. Memories were still fresh of Moritz Erhardt, the 21-year-old London intern who died after working 72 hours in a row at Bank of America. It looked as if Barclays was also taking the “work ethic” to morbid extremes.

Following 30 years of neoliberal deregulation, the nine-to-five feels like a relic of a bygone era. Jobs are endlessly stressful and increasingly precarious. Overwork has become the norm in many companies – something expected and even admired. Everything we do outside the office – no matter how rewarding – is quietly denigrated. Relaxation, hobbies, raising children or reading a book are dismissed as laziness. That’s how powerful the mythology of work is.

Technology was supposed to liberate us from much of the daily slog, but has often made things worse: in 2002, fewer than 10% of employees checked their work email outside of office hours. Today, with the help of tablets and smartphones, it is 50%, often before we get out of bed.


Some observers have suggested that workers today are never “turned off”. Like our mobile phones, we only go on standby at the end of the day, as we crawl into bed exhausted. This unrelenting joylessness is especially evident where holidays are concerned. In the US, one of the richest economies in the world, employees are lucky to get two weeks off a year.

You might almost think this frenetic activity was directly linked to our biological preservation and that we would all starve without it. As if writing stupid emails all day in a cramped office was akin to hunting-and-gathering of a previous age … Thankfully, a sea change is taking place. The costs of overwork can no longer be ignored. Long-term stress, anxiety and prolonged inactivity have been exposed as potential killers.

Researchers at Columbia University Medical Center recently used activity trackers to monitor 8,000 workers over the age of 45. The findings were striking. The average period of inactivity during each waking day was 12.3 hours. Employees who were sedentary for more than 13 hours a day were twice as likely to die prematurely as those who were inactive for 11.5 hours. The authors concluded that sitting in an office for long periods has a similar effect to smoking and ought to come with a health warning.

When researchers at University College London looked at 85,000 workers, mainly middle-aged men and women, they found a correlation between overwork and cardiovascular problems, especially an irregular heartbeat known as atrial fibrillation, which increases the chances of a stroke five-fold.

Labour unions are increasingly raising concerns about excessive work, too, especially its impact on relationships and physical and mental health. Take the case of the IG Metall union in Germany. Last week, 15,000 workers (who manufacture car parts for firms such as Porsche) called a strike, demanding a 28-hour work week with unchanged pay and conditions. It’s not about indolence, they say, but self-protection: they don’t want to die before their time. Science is on their side: research from the Australian National University recently found that working anything over 39 hours a week is a risk to wellbeing.

Is there a healthy and acceptable level of work? According to US researcher Alex Soojung-Kim Pang, most modern employees are productive for about four hours a day: the rest is padding and huge amounts of worry. Pang argues that the workday could easily be scaled back without undermining standards of living or prosperity.


Other studies back up this observation. The Swedish government, for example, funded an experiment where retirement home nurses worked six-hour days and still received an eight-hour salary. The result? Less sick leave, less stress, and a jump in productivity.

All this is encouraging as far as it goes. But almost all of these studies focus on the problem from a numerical point of view – the amount of time spent working each day, year-in and year-out. We need to go further and begin to look at the conditions of paid employment. If a job is wretched and overly stressful, even a few hours of it can be an existential nightmare. Someone who relishes working on their car at the weekend, for example, might find the same thing intolerable in a large factory, even for a short period. All the freedom, creativity and craft are sucked out of the activity. It becomes an externally imposed chore rather than a moment of release.

Why is this important?

Because there is a danger that merely reducing working hours will not change much, when it comes to health, if jobs are intrinsically disenfranchising. In order to make jobs more conducive to our mental and physiological welfare, much less work is definitely essential. So too are jobs of a better kind, where hierarchies are less authoritarian and tasks are more varied and meaningful.

Capitalism doesn’t have a great track record for creating jobs such as these, unfortunately. More than a third of British workers think their jobs are meaningless, according to a survey by YouGov. And if morale is that low, it doesn’t matter how many gym vouchers, mindfulness programmes and baskets of organic fruit employers throw at them. Even the most committed employee will feel that something is fundamentally missing. A life.
