New Study Shows It Does Matter Which Books You Read to Your Baby


Not all stories are equal when it comes to development.

Parents often receive books at pediatric checkups through programs like Reach Out and Read, and they hear from a variety of health professionals and educators that reading to their kids is critical for supporting development.

 

The pro-reading message is getting through to parents, who recognise that it’s an important habit. A summary report by Child Trends, for instance, suggests 55 percent of three- to five-year-old children were read to every day in 2007.

According to the U.S. Department of Education, 83 percent of three- to five-year-old children were read to three or more times per week by a family member in 2012.

What this ever-present advice to read with infants doesn’t necessarily make clear, though, is that what’s on the pages may be just as important as the book-reading experience itself.

Are all books created equal when it comes to early shared-book reading? Does it matter what you pick to read? And are the best books for babies different than the best books for toddlers?

In order to guide parents on how to create a high-quality book-reading experience for their infants, my psychology research lab has conducted a series of baby learning studies.

One of our goals is to better understand the extent to which shared book reading is important for brain and behavioral development.

What’s on baby’s bookshelf

Researchers see clear benefits of shared book reading for child development. Shared book reading with young children is good for language and cognitive development, increasing vocabulary and pre-reading skills and honing conceptual development.

Shared book reading also likely enhances the quality of the parent-infant relationship by encouraging reciprocal interactions – the back-and-forth dance between parents and infants. Certainly not least of all, it gives infants and parents a consistent daily time to cuddle.

Recent research has found that both the quality and quantity of shared book reading in infancy predicted later childhood vocabulary, reading skills and name writing ability.

In other words, the more books parents read, and the more time they spent reading, the greater the developmental benefits in their 4-year-old children.

This important finding is one of the first to measure the benefit of shared book reading starting early in infancy. But there’s still more to figure out about whether some books might naturally lead to higher-quality interactions and increased learning.

Babies and books in the lab

In our investigations, my colleagues and I followed infants across the second six months of life. We’ve found that when parents show babies books with faces or objects that are individually named, the babies learn more, generalise what they learn to new situations and show more specialised brain responses.

This is in contrast to books with no labels or books with the same generic label under each image in the book. Early learning in infancy was also associated with benefits four years later in childhood.

Our most recent addition to this series of studies was funded by the National Science Foundation and just published in the journal Child Development. Here’s what we did.

First, we brought six-month-old infants into our lab, where we could see how much attention they paid to story characters they’d never seen before. We used electroencephalography (EEG) to measure their brain responses.

Infants wear a cap-like net of 128 sensors that lets us record, at the scalp, the electrical activity the brain naturally produces as it works. We measured these neural responses while infants looked at and paid attention to pictures on a computer screen.

EEG caps let researchers record infants’ brain activity.

These brain measurements can tell us about what infants know and whether they can tell the difference between the characters we show them.
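
For readers curious what a “brain response” amounts to in practice, here is a minimal, purely illustrative sketch (not our lab’s actual analysis code) of the standard approach: the continuous EEG is cut into short trial epochs time-locked to each picture, the epochs are averaged to pull the event-related response out of background activity, and the averaged responses to different characters are compared. All array sizes and data here are made up.

```python
import numpy as np

# Hypothetical epoched EEG data: trials x sensors x time samples,
# e.g. 60 presentations of each picture type recorded from a
# 128-sensor net, 250 samples per one-second epoch.
rng = np.random.default_rng(0)
epochs_character_a = rng.normal(size=(60, 128, 250))
epochs_character_b = rng.normal(size=(60, 128, 250))

# Averaging across trials suppresses activity unrelated to the picture
# and leaves the event-related potential (ERP) for each character.
erp_a = epochs_character_a.mean(axis=0)
erp_b = epochs_character_b.mean(axis=0)

# A crude index of discrimination: how far apart the two averaged
# responses are, across all sensors and time points.
print("mean ERP difference:", np.abs(erp_a - erp_b).mean())
```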

We also tracked the infants’ gaze using eye-tracking technology to see what parts of the characters they focused on and how long they paid attention.

The data we collected at this first visit to our lab served as a baseline. We wanted to compare their initial measurements with future measurements we’d take, after we sent them home with storybooks featuring these same characters.

We divided our volunteers into three groups. One group of parents read their infants storybooks that contained six individually named characters the infants had never seen before.

Another group was given the same storybooks, but instead of individually naming the characters, a single made-up label (such as “Hitchel”) was used to refer to all of the characters.

Finally, we had a third comparison group of infants whose parents didn’t read them anything special for the study.

After three months passed, the families returned to our lab so we could again measure the infants’ attention to our storybook characters. It turned out that only those who received books with individually labeled characters showed enhanced attention compared to their earlier visit.

And the brain activity of babies who learned individual labels also showed that they could distinguish between different individual characters. We didn’t see these effects for infants in the comparison group or for infants who received books with generic labels.

These findings suggest that very young infants are able to use labels to learn about the world around them and that shared book reading is an effective tool for supporting development in the first year of life.

Tailoring book picks for maximum effect

So what do our results from the lab mean for parents who want to maximise the benefits of storytime?

Not all books are created equal. The books that parents should read to six- and nine-month-olds will likely be different than those they read to two-year-olds, which will likely be different than those appropriate for four-year-olds who are getting ready to read on their own.

In other words, to reap the benefits of shared book reading during infancy, we need to be reading our little ones the right books at the right time.

For infants, finding books that name different characters may lead to higher-quality shared book reading experiences and result in the learning and brain development benefits we find in our studies. All infants are unique, so parents should try to find books that interest their baby.

My own daughter loved the Pat the Bunny books, as well as stories about animals, like Dear Zoo. If names weren’t in the book, we simply made them up.

It’s possible that books that include named characters simply increase the amount of parent talking. We know that talking to babies is important for their development.

So parents of infants: Add shared book reading to your daily routines and name the characters in the books you read.

Talk to your babies early and often to guide them through their amazing new world – and let storytime help.

New hologram technology created with tiny nanoantennas.


Researchers have created tiny holograms using a “metasurface” capable of the ultra-efficient control of light, representing a potential new technology for advanced sensors, high-resolution displays and information processing.

The metasurface, thousands of V-shaped nanoantennas formed into an ultrathin gold foil, could make possible “planar photonics” devices and optical switches small enough to be integrated into computer chips for information processing, sensing and telecommunications, said Alexander Kildishev, associate research professor of electrical and computer engineering at Purdue University.

Laser light shines through the nanoantennas, creating the hologram 10 microns above the metasurface. To demonstrate the technology, researchers created a hologram of the word PURDUE smaller than 100 microns wide, or roughly the width of a human hair.

“If we can shape characters, we can shape different types of light beams for sensing or recording, or, for example, pixels for 3-D displays. Another potential application is the transmission and processing of data inside chips for information technology,” Kildishev said. “The smallest features – the strokes of the letters – displayed in our experiment are only 1 micron wide. This is a quite remarkable spatial resolution.”

Laser light shines through the metasurface from below, creating a hologram 10 microns above the structure. (Xingjie Ni, Birck Nanotechnology Center)

Findings are detailed in a research paper appearing on Friday (Nov. 15) in the journal Nature Communications.

Metasurfaces could make it possible to use single photons – the particles that make up light – for switching and routing in future computers. While using photons would dramatically speed up computers and telecommunications, conventional photonic devices cannot be miniaturized because the wavelength of light is too large to fit in tiny components needed for integrated circuits.

Nanostructured metamaterials, however, are making it possible to reduce the wavelength of light, allowing the creation of new types of nanophotonic devices, said Vladimir M. Shalaev, scientific director of nanophotonics at Purdue’s Birck Nanotechnology Center and a distinguished professor of electrical and computer engineering.

“The most important thing is that we can do this with a very thin layer, only 30 nanometers, and this is unprecedented,” Shalaev said. “This means you can start to embed it in electronics, to marry it with electronics.”

The layer is about 1/23rd the width of the wavelength of light used to create the holograms.
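
Working backward from those two figures gives a sense of scale (this is a derived estimate, not a wavelength quoted in the article): if a 30-nanometer layer is 1/23rd of the operating wavelength, then

\[
\lambda \approx 23 \times 30\ \text{nm} \approx 690\ \text{nm},
\]

which corresponds to red laser light.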

The Nature Communications article was co-authored by former Purdue doctoral student Xingjie Ni, who is now a postdoctoral researcher at the University of California, Berkeley; Kildishev; and Shalaev.

Under development for about 15 years, metamaterials owe their unusual potential to precision design on the scale of nanometers. Optical nanophotonic circuits might harness clouds of electrons called “surface plasmons” to manipulate and control the routing of light in devices too tiny for conventional lasers.

The researchers have shown how to control the intensity and phase, or timing, of laser light as it passes through the nanoantennas. Each antenna has its own “phase delay” – how much light is slowed as it passes through the structure. Controlling the intensity and phase is essential for creating working devices and can be achieved by altering the V-shaped antennas.
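
To make the phase-delay idea concrete, the sketch below computes the phase profile a flat array of elements would need so that transmitted light converges to a single spot 10 microns above the surface (a focal spot is the simplest “hologram”). This is a generic textbook construction, not the Purdue team’s design procedure, and the grid size, patch dimensions and 690 nm wavelength are illustrative assumptions.

```python
import numpy as np

# Assumed operating wavelength (see the estimate above) and geometry.
wavelength = 690e-9                 # metres
k = 2 * np.pi / wavelength          # free-space wavenumber

n = 200                             # elements per side (illustrative)
extent = 50e-6                      # 50 um x 50 um patch of surface
xs = np.linspace(-extent / 2, extent / 2, n)
X, Y = np.meshgrid(xs, xs)

focus = (0.0, 0.0, 10e-6)           # target spot 10 um above the surface

# For every contribution to arrive at the focal point in phase, each
# element must impose a delay that cancels its extra path length:
# phi(x, y) = -k * distance(element, focus), wrapped to [0, 2*pi).
r = np.sqrt((X - focus[0])**2 + (Y - focus[1])**2 + focus[2]**2)
phase = (-k * r) % (2 * np.pi)

print("phase map shape:", phase.shape)
print("phase range (rad):", phase.min(), "to", phase.max())
```

In a device like the one described above, a continuous phase map of this kind would be approximated by choosing, at each location, a V-shaped antenna whose geometry provides the closest available phase delay.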

The work is partially supported by the U.S. Air Force Office of Scientific Research, the Army Research Office and the National Science Foundation. Purdue has filed a provisional patent application on the concept.

Accidental discovery dramatically improves electrical conductivity.


Quite by accident, Washington State University researchers have achieved a 400-fold increase in the electrical conductivity of a crystal simply by exposing it to light. The effect, which lasted for days after the light was turned off, could dramatically improve the performance of devices like computer chips.

Strontium Titanate

WSU doctoral student Marianne Tarun chanced upon the discovery when she noticed that the conductivity of some strontium titanate shot up after it was left out one day. At first, she and her fellow researchers thought the sample was contaminated, but a series of experiments showed the effect was from light.

“It came by accident,” said Tarun. “It’s not something we expected. That makes it very exciting to share.”

The phenomenon they witnessed—“persistent photoconductivity”—is a far cry from superconductivity, the complete lack of electrical resistance pursued by other physicists, usually at temperatures near absolute zero. But the fact that they’ve achieved this at room temperature makes the phenomenon more immediately practical.

And while other researchers have created persistent photoconductivity in other materials, this is the most dramatic display of the phenomenon.

The research, which was funded by the National Science Foundation, appears this month in the journal Physical Review Letters.

“The discovery of this effect at room temperature opens up new possibilities for practical devices,” said Matthew McCluskey, co-author of the paper and chair of WSU’s physics department. “In standard computer memory, information is stored on the surface of a computer chip or hard drive. A device using persistent photoconductivity, however, could store information throughout the entire volume of a crystal.”

This approach, called holographic memory, “could lead to huge increases in information capacity,” McCluskey said.

Strontium titanate and other oxides, which contain oxygen and two or more other elements, often display a dizzying variety of electronic phenomena, from the high resistance used for insulation to superconductivity’s lack of resistance.

“These diverse properties provide a fascinating playground for scientists but applications so far have been limited,” said McCluskey.

McCluskey, Tarun and physicist Farida Selim, now at Bowling Green State University, exposed a sample of strontium titanate to light for 10 minutes. Its improved conductivity lasted for days. They theorize that the light frees electrons in the material, letting it carry more current.

How can non-scientists influence the course of scientific research? | Cath Ennis


Science communication should be more than the dissemination of results to the public; it should also flow in the other direction, with members of the public able to communicate their priorities to scientists and those who fund them. But how?

Scientists don’t conduct their research in isolation from society – at least, not all scientists, all of the time. Photograph: US Army Medical Research Institute of Infectious Disease

Scientific research has an enormous impact on modern society, with its effects felt in many aspects of our lives. But scientists are also part of that society, and can adapt their research topics and methods to reflect its ever-changing priorities. All too often, though, these priorities are dictated by governments or by the private sector, while the views of members of the public aren’t heard. However, it’s certainly possible for interested individuals to influence the course of scientific research.

Follow the (grant) money

Science is a constant cycle of applying for grants to generate data to publish in manuscripts that form the basis of the next grant application. Success rates vary enormously depending on the country and the field, among other factors, but are generally low – 10-20% wouldn’t be at all unusual. As such, the agencies that allocate research funding have more influence over trends in scientific research than any other entity. They can set aside funds for research in specific fields; they can favour one kind of research over others (eg basic versus applied research); they can favour certain methods, or types of research institution; they can decide to rank grant applications on criteria other than the quality of the scientific question and approach.

On the latter point, there’s a trend toward making funded scientists more accountable to society. For example, Genome Canada and its regional affiliates require all grant applicants to complete a lengthy section describing how their proposed project encompasses research on “genomics and its related ethical, environmental, economic, legal and social aspects” (GE3LS); the reviewers’ scores for this section can make or break an application’s success. In the US, applicants to the National Science Foundation (NSF) have to complete a “Broader Impacts” section that’s judged on criteria that include the investigator’s plan for “improved STEM [Science, Technology, Engineering and Mathematics] education and educator development at any level; increased public scientific literacy and public engagement with science and technology; improved well-being of individuals in society”. Other funding agencies have similar criteria.

(Should scientists be thinking about and doing these things anyway? Yes, of course, and many do – but many don’t, due to lack of time, resources, training, and/or interest, and some won’t ever consider these ideas unless their funding depends on it).

So how can individuals communicate their opinions and priorities to funding agencies?

A lot of support for scientific research (most or even all of it, in some fields) comes from national and local governments and is taxpayer-funded. Major players include the seven Research Councils in the UK; the Tri-Council Agencies in Canada; and the National Institutes of Health and the National Science Foundation in the US. If you have an opinion about the research types, topics, or methods your government should be funding, or about the need for funded scientists to demonstrate commitment to public outreach or any of the factors encompassed by E3LS research, you can direct it (in order of decreasing likelihood of impact) to your science minister or equivalent, local representative, or prime minister/president.

If you think your opinion will be shared by many others, some governments have websites where you can create a petition. The government is obliged to issue some kind of response (even if it’s an official “thanks, but no thanks”) to petitions with a certain number of signatures (currently 100,000 in both the UK and the US).

Non-government sources of research funding include private sector companies and charitable organisations. No one outside the companies in question is likely to have any chance of influencing the former, but the latter – mostly medical research charities that focus on a specific disorder or group of disorders – do listen to donors. Some allow donations to be directed towards specific topics or types of research; others can be persuaded by direct communications from donors.

Some funding agencies also directly involve members of the public in grant review – for example, the government-funded UK NHS National Institute for Health Research and Canadian Institutes of Health Research recruit lay or community reviewers, as do many charities. A lay reviewer’s opinion on the importance of the proposed research is unlikely to make or break an applicant’s success (although this is certainly possible), but their broader feedback to the funders may have more of an effect, especially for smaller organisations.

Crowdfunding

The crowdfunding model exemplified by Kickstarter, in which investors can browse business and creative pitches and contribute money to help develop a new product or service, is starting to gain some traction in the research community (see articles in the journal Nature from January 2012 and May 2013). Sites specific to scientific research, including Petridish and Microryza, have sprung up, and host requests for funding from investigators in a variety of fields, from all over the world. Donors may be offered incentives such as early access to research findings, or direct participation in the research.

I don’t believe crowdfunding will be an eventual replacement for current sources of research monies – government and charitable funding, and (most importantly) peer review, should and will remain an essential component of scientific research. Besides, research in my field (genomics) and many others is far too expensive to be supported by individual small-scale donations. However, a crowdfunded project can be perfect for early-career researchers, pilot studies, research ideas outside the mainstream, and other niches. These projects can provide the crucial preliminary data required by mainstream funding agencies, to demonstrate the validity of the approach and the idea, and thereby have the potential to launch much larger studies. One high-impact paper in a new area can even initiate a whole new sub-field, magnifying the influence of any individual donor’s money.

In conclusion, there are a number of ways in which members of the public can communicate their opinions and priorities to scientists and those who fund them, none of which necessitate pitchforks and flaming torches. Money talks, but so do time, effort, and votes – so get cracking, and good luck!

The ideas in this post originated and evolved from an impromptu session I led at this year’s Vancouver Change Camp about what responsibilities science owes to society, and vice versa. Many thanks to all the participants for a fascinating discussion, and especially to Sara Mimick for her support on the day.

Critical tool for brain research derived from ‘pond scum’.


The poster child for basic research might well be a one-celled green algae found in ordinary lakes and ponds. Amazingly, this unassuming creature—called Chlamydomonas—is helping scientists solve one of the most complex and important mysteries of science: How billions of neurons in the brain interact with one another through electrochemical signals to produce thoughts, memories and behaviors, and how malfunctioning neurons may contribute to incurable brain diseases such as Parkinson’s disease and schizophrenia.

It may seem counterintuitive that a tiny, relatively simple organism that doesn’t even have a brain could help scientists understand how the brain works. But this algae’s value to brain scientists is not based on its intellect. Rather, it is based on its light-sensitivity, i.e., the fact that this organism’s movements are controlled by light.

 

Following the light

 

Chlamydomonas is light sensitive because it must detect and move towards light to feed itself through photosynthesis. You’ve seen this type of light sensitivity in action if you’ve ever noticed algae accumulate in a lake or pond on a sunny day.

 

The secret to the Chlamydomonas’s light-chasing success is a light-sensitive protein, known as a channelrhodopsin, which is located on the boundary of the algae’s eye-like structure, called an eyespot.

When hit by light, this light-sensitive protein—acting much like a solar panel—converts light into an electric current. It does so by changing its shape to form a channel through the boundary of the eyespot. This channel allows positively charged particles to cross the boundary and enter the eyespot region. The resulting flow of charged particles generates an electric current that, through a cascade of events, forces the algae’s two flagella—whip-like swimming structures—to steer the organism towards the light.

The light-sensing proteins of Chlamydomonas and their ability to generate electric currents for light chasing were discovered in 2002 by a research team at the University of Texas Health Science Center at Houston that was led by John Spudich and included Oleg Sineshchekov and Kwang-Hwan Jung; the team was funded by the National Science Foundation (NSF). This team’s discoveries about the algal proteins followed decades of research by Spudich, a biophysical chemist, and his collaborators on how light-sensing receptors control swimming behavior in many types of microorganisms.

 

“My interest in Chlamydomonas was derived from my interest in the basic principles of vision. That is, the molecular mechanisms by which organisms use light to obtain information about their environment,” says Spudich. “I have long been fascinated with how microorganisms ‘see’ the world and started with the simplest—bacteria with light-sensitive movements (phototaxis), followed by phototaxis in more complex algae. Our focus throughout has been on understanding the basic biology of these phenomena.”

 

Identifying the functions of neurons

Nevertheless, Spudich’s discovery of the light-sensitive algal proteins was a game-changer for an NSF-funded team of brain researchers at Stanford University that was comprised of Karl Deisseroth, Edward Boyden, and Feng Zhang. Working together in a uniquely interdisciplinary team during the early 2000s, these researchers collectively offered expertise in neuroscience, electrical engineering, physiology, chemistry, genetics, synthetic biology and psychiatry. (Boyden and Zhang are now at MIT.)

A primary goal of this team was to develop a new technology for selectively turning on and off target neurons and circuits of neurons in the brains of laboratory animals, so that resulting behavioral changes could be observed in real time; this information could be used to help identify the functions of targeted neurons and circuits.

The strategy behind this technology—eventually dubbed optogenetics—is analogous to that used by someone who, one by one, systematically turns on and off the fuses (or circuit breakers) in a house to identify the contribution of each fuse (or circuit breaker) to the house’s power output.

 

An on/off switch for neurons

But unlike household fuses and circuit breakers, neurons don’t have a user-friendly on/off switch. To develop a way to control neurons, the Stanford team had to create a new type of neuronal switch. With funding from NSF, the team developed a light-based switch that could be used to selectively turn on target neurons merely by exposing them to light.

Why did the team opt for a light-based strategy? Because light—an almost omnipresent force in nature—has the power to turn on and off many types of important electrical and chemical reactions that occur in nature including, for example, photosynthesis. The team therefore reasoned that light might, under certain conditions, also have the power to turn on and off electrochemical signaling from brain neurons.

But to create a light-based neuronal on/off switch, the team had to solve a big problem: Neurons are not naturally light sensitive. So the team had to find a way to impart target neurons with light sensitivity, so that they could be selectively activated by a light-based switch without altering non-target neurons. One potential strategy: to implant in target neurons some kind of light sensitive molecule that is not present elsewhere in the brain.

 

The team lacked the right type of light-sensitive molecule for the job until several important studies were announced. These studies included Spudich’s discovery of the light-sensitive algal proteins, as well as research led by microbial biophysicists Peter Hegemann, Georg Nagel and Ernst Bamberg in Germany, which showed that these proteins can generate electrical currents in animal cells, not just in algae.

 

Flicking the switch

These studies inspired the team to insert Spudich’s light-sensitive algal proteins into cultured neurons from rats and mice via a pioneering genetic engineering method that was developed by the team. When exposed to light in laboratory tests in 2004, these inserted proteins generated electric currents—just as they did in the light-sensitive algae from which they originated. But instead of turning on light-chasing behaviors as they did in the algae, these currents—when generated in target neurons—turned on the normal electrochemical signaling of the neurons, as desired.

In other words, the team showed that by selectively inserting light-sensitive proteins into target neurons, they could impart these neurons with light sensitivity so that they would be activated by light. The team thereby developed the basics of optogenetics—which is defined by Deisseroth as “the combination of genetics and optics to control well-defined events within specific cells of living tissue.”

The members of the team (either working together or in other teams) also developed tools to:

·         Turn off target neurons and stop their electrochemical signaling by manipulating light-sensing proteins.

·         Deliver light to target neurons in laboratory animals via a laser attached to a fiber cable implanted in the brain.

·         Insert light-sensitive proteins into various types of neurons so that their functions could be identified.

·         Control the functioning of any gene in the body. Such control supports studies of how gene expression in the brain may influence neurochemical signaling and how changes in key genes in neurons may influence factors such as learning and memory.

“The brain is a mystery, and in order to solve it, we need to develop a great variety of new technologies,” says Boyden. “In the case of optogenetics, we turned to the diversity of the natural world to find tools for activating and silencing neurons—and found, serendipitously, molecules that were ready to use.”

 

The power of optogenetics

 

Thousands of research groups around the world are currently incorporating increasingly advanced techniques in optogenetics into studies of the brains of laboratory animals. Such studies are designed to reveal how healthy brains learn and create memories and to identify the neuronal bases of brain diseases and disorders such as Parkinson’s disease, anxiety, schizophrenia, depression, strokes, pain, post-traumatic stress syndrome, drug addiction, obsessive-compulsive disease, aggression and some forms of blindness.

Deisseroth says, “What excites neuroscientists about optogenetics is control over defined events within defined cell types at defined times—a level of precision that is most crucial to biological understanding even beyond neuroscience. And millisecond-scale timing precision within behaving mammals has been essential for key insights into both normal brain function and into clinical problems, such as parkinsonism.”

Indeed, optogenetics is now so important to brain research that it is considered one of the critical tools for the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, which was announced by President Obama in April 2013.

In addition, optogenetics is being applied to other organs besides the brain. For example, NSF-funded researchers are working to develop optogenetic techniques to treat cardiac arrhythmia.

 

The law of unintended consequences

 

As with many pivotal scientific advances, the development of optogenetics was built upon many basic-research studies that were inspired by the intellectual curiosity of researchers who could not possibly have foreseen the important practical applications of their work.

“The development of optogenetics is yet one more beautiful example of a revolutionary biotechnology growing out of purely basic research,” says Spudich.

What’s more, many of the varied disciplines that contributed to the invention of optogenetics—including electrical engineering, genetic engineering, physics and microbiology—may seem, at first blush, unrelated to one another and to brain science. But perhaps most surprising was the importance of basic research on algal proteins to the development of optogenetics.

Deisseroth said, “The story of optogenetics shows that hidden within the ground we have already traveled over or passed by, there may reside the essential tools, shouldered aside by modernity, that will allow us to map our way forward. Sometimes these neglected or archaic tools are those that are most needed—the old, the rare, the small and the weak.”

Food for thought for anyone tempted to dismiss algae in a murky body of water as worthless pond scum!

If You Know How a Cow Feels, Will You Eat Less Meat?


Inside a lab on the Stanford University campus, students experienced what it might feel like to be a cow.

They donned a virtual reality helmet and walked on hands and feet while in a virtual mirror they saw themselves as bovine. As the animal was jabbed with an electrical prod, a lab worker poked a volunteer’s side with a sticklike device. The ground shook to simulate the prod’s vibrations. The cow at the end was led toward a slaughterhouse.

Participants then recorded what they ate for the next week. The study sought to uncover whether temporarily “becoming” a cow prompted reduced meat consumption.

The motivation wasn’t to make people vegetarians, said Jeremy Bailenson, director of Stanford’s Virtual Human Interaction Lab. But the project hoped to uncover whether virtual reality could alter behaviors that tax the environment and contribute to climate change.

“If somebody becomes an animal, do they gain empathy for that animal and think about its plight?” Bailenson asked. “In this case, empathy toward the animal also coincides with an environmental benefit, which is that [not eating] animals consumes less energy.”

It’s one of several environment-related experiments Bailenson is conducting in the lab, all tailored toward revealing whether there are new ways to encourage environmental preservation. Volunteers also have virtually chopped down a tree, a study aimed at examining attitudes toward paper use. Others took a virtual reality shower while eating lumps of coal — literally consuming it — to gain insight into how much was needed to heat the water.

Virtual reality, along with computer games and other kinds of technology, is being used to approach environmental issues from new angles. The National Science Foundation awarded a $748,000 grant to Stanford and Harvard University to run four experiments. Meanwhile, in Vancouver, British Columbia, that city, smaller townships and professors from the University of British Columbia are running sustainability-related experiments that use visualization techniques.

The work is important because many people have difficulty grasping climate change facts, said Tim Herron, who manages the Decision Theatre lab at the University of British Columbia.

“It’s just a much more compelling way of getting people to understand the effects of their behavior now on the future,” Herron said. “It’s about visualizing the data for people. Once people can see it, it’s amazing how much it changes things. People begin to really understand the necessity to make some changes now to prevent these sort of things.”

Studies have long-term impact
Virtual reality experiences can alter behavior, Bailenson said. The tree experiment in particular, he said, has stuck with those who went through the experience.

The research came out of a news article Bailenson read that said if people did not use recycled toilet paper, over the course of their lives they would each use up two virgin trees.

In the subsequent experiment Bailenson ran, students stood in the virtual reality version of a forest where they heard wind rustling and birds chirping as they flew past. The participants held a device meant to represent a chain saw, and felt resistance as they passed it back and forth through a tall tree.

The wood cracked, then crashed to the ground with a thunderous boom. The forest fell silent, birds no longer singing.

Before the student left the lab, a woman there knocked over a glass of water on a desk and asked the participant to help her clean it up. The people who had gone through virtual reality used 20 percent less paper than those who had watched a video of a tree being cut down, Bailenson said.

Bailenson said he gets emails months after that experiment from people telling him they can’t walk down the toilet paper aisle of a store without thinking about the falling tree.

The results of the cow experiment aren’t yet finalized, so Bailenson doesn’t know whether people ate less meat in the days afterward. But the comments from the study participants show they did empathize with the cows, he said. Stanford does not release names of the volunteers but provided some of their answers to questions presented after the experiment.

“Once I got used to it I began to feel like I was the cow,” one person wrote. “I truly felt like I was going to the slaughter house towards the end and I felt sad that I (as a cow) was going to die. That last prod felt really sad.”

Funding obstacles for climate research
Bailenson hopes to move more into the climate change arena, though so far he hasn’t won funding for that effort. He’s applied for grants with the National Science Foundation, but none has been successful.

In an interview, he answered cautiously when asked whether the subject is too politically dicey. He said that at NSF, “there’s variance among reviewers as to the scientific details of global warming.”

“Even among scientists who are fairly certain that global warming is real, which is most scientists, what the exact effects are going to be depend on the model of what’s going on with warming,” he added. “There’s a lot more variance in what people think the outcome to warming is going to be.”

Debbie Wing, a spokeswoman at NSF, said she could not comment on research proposals that hadn’t been funded. But she said all requests “go through a gold standard, merit-based, peer-reviewed evaluation for selection.”

Bailenson has secured some money to teach about ocean acidification. The cause of that — the seas absorbing excess carbon dioxide — essentially has the same culprit as climate change, he said.

He envisions developing a virtual reality experience in which a person would perform common activities in his or her home, all the while generating black balloons that represent carbon dioxide emissions. Those balloons would then ride up into the atmosphere and subsequently fall to the ocean. Once in the water, the molecules would prompt a change in the water’s pH.

He said he could potentially have the person become a fish trying to find food that’s vanished, or an organism on a reef struggling to find calcium for its shell. The initial results of the cow study, showing that people do empathize with the animal, indicate that the same model could be useful in other experiments, he said.

Playing video games to visualize climate change
In Vancouver, computer games are being used to illustrate the effects of global warming.

High school students from the suburb of Delta go to the Decision Theatre to play a game where they make decisions about land development and power use. “It’s like ‘SimCity‘ with climate change overtones,” the Decision Theatre’s Herron said, referring to the series of city-building computer games.

Students can opt for choices that mitigate the effects of climate change, like putting housing next to transit, while “if you make other choices, you end up with waterfront property” because of flooding, he said.

Delta is funding the experiment as it faces major choices about adaptation. Sea-level rise likely will override existing dikes in the region, Herron said.

The idea is to talk to students and their families about picking options that can benefit people and “not try to sell it as we have to give up” everything, Herron said. If it’s presented as all sacrifice, he said, people won’t buy into it until forced to and it’s too late to limit warming.

At Harvard, the effort focuses on negotiation.

Participants sit in front of a computer screen and take on the role of a park ranger or a golf course owner while discussing uses for a pond and surrounding land. In one version, they then swap roles and debate from the other side.

Those who fill both personas “compromise more and form better relationships” than those playing just one role, said Hunter Gehlbach, associate professor of education at Harvard. The experiment measured negotiation by giving volunteers a pretend commission that increased if they brought the other person closer to their side and decreased for concessions.

The tests are important, Gehlbach said, because it’s one thing to know the correct scientific approaches to an environmental problem but another for disparate sides to agree on a solution.

“We know an awful lot about global warming, and yet there are a lot of personal and emotional, nonscientific barriers to getting better policies out there,” Gehlbach said. “That’s where I think the social science comes into play.”

Source: Scientific American

Video Analytics Could Flag Crimes Before They Happen.


Soon after the investigation into Monday’s Boston Marathon bombings began, law enforcement urged the public to e-mail any video, images or other information that might lead them to the guilty party. “No piece of information or detail is too small,” states the F.B.I.’s Web site. Picking through all of this footage in search of clues has been no small task for investigators, given the size of the camera-carrying crowd that had assembled to watch the race, not to mention the video surveillance already put in place by the city and local merchants.

Law enforcement now say they have found video images of two separate suspects carrying black bags at each explosion site and are planning to release the images Thursday so that the public can help identify the men, the Boston Globe reports.

Whereas software for analyzing such video can identify and flag objects, colors and even patterns of behavior after the fact, the hope is that someday soon intelligent video camera setups will be able to detect suspicious activity and issue immediate warnings in time to prevent future tragedies.

A team of New York University researchers is working toward that goal, having developed software they say can measure the “sentiment” of people in a crowd. So far, the technology has primarily been tested as a marketing tool at sporting events (gauging what advertisements capture an audience’s attention, for example), but the researchers are eyeing homeland security applications as well. The U.S. military, which is funding much of the N.Y.U. research, is interested in knowing whether this software could detect when someone is approaching a checkpoint or base with a weapon or explosives concealed under their clothing.

“So far, we can detect if they’re eating or using their cell phones or clapping,” says N.Y.U. computer science professor Chris Bregler. It’s not an exact science, but monitoring crowd behavior helps marketers understand what creates a positive crowd response—whether they are high-fiving action on the field, responding to a call for “the wave” or laughing at an advertisement on the scoreboard. The software is programmed to detect only positive sentiment at this time. Negative sentiments—booing and impolite gestures–are next on the researchers’ agenda.

The key to analyzing video in real time is programming the accompanying analytical software to look for certain cues–a rigid object under soft, flowing clothing, for example–and issue immediate alerts. First, the software must be “trained,” Bregler says. This is done with the help of Internet services such as Amazon’s Mechanical Turk digital labor marketplace, where participants are paid to analyze and tag video footage based on what’s on the screen. Bregler and his team load these results into a computer neural network—a cluster of microprocessors that essentially analyzes relationships among data—so that the software can eventually identify this activity on its own.
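
As a rough illustration of that training step (a generic sketch, not the N.Y.U. group’s actual software or data), the example below fits a small feed-forward neural network to crowd-labeled examples using scikit-learn. The feature vectors and labels are synthetic stand-ins for the per-clip descriptors and Mechanical Turk tags described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: one 64-dimensional feature vector per short
# video clip, with a crowd-sourced label such as 0 = "neutral",
# 1 = "clapping". Real labels would come from human taggers on a
# service like Mechanical Turk, as described in the article.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

# A small feed-forward network learns to map clip features to labels,
# so new footage can be scored without human taggers.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```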

One challenge for the researchers is developing their analytical software so that it can examine a variety of different types of video footage, whether it’s professional-quality camerawork on the nightly news or someone recording an event with a shaky cell phone camera. “The U.S. military wants us to look at, say, Arab Spring footage and large demonstrations for early signs that they will turn violent,” Bregler says.

Bregler’s earlier research to identify specific movement signatures used the same motion-capture technology used for special effects in the Lord of the Rings and Harry Potter movies. Bregler’s motion-analysis research attracted the attention of the Pentagon’s Defense Advanced Research Projects Agency (DARPA) in 2000 as a possible means of identifying security threats. Following 9/11, his research ramped up thanks to funding from the National Science Foundation and the U.S. Office of Naval Research. Law enforcement and counterterrorism organizations already had facial-recognition technology but were looking for additional ways to better make sense of countless hours of surveillance footage.

Given that people don’t normally walk around in tight-fitting motion-capture suits laden with reflective markers, the N.Y.U. team developed their technology to focus more on scanning a camera’s surroundings and identifying spots that are unique, such as the way light reflects off a shirt’s button differently than it does off the shirt’s fabric. The researchers’ goal is for their software to be able to identify a person’s emotional state and other attributes based on movement.
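
A standard way to find and follow such distinctive spots from frame to frame is corner detection plus sparse optical flow; the OpenCV sketch below shows the general recipe (it is not the N.Y.U. team’s pipeline, and the input filename is hypothetical).

```python
import cv2

cap = cv2.VideoCapture("crowd_clip.mp4")   # hypothetical input clip
ok, prev = cap.read()
assert ok, "could not read the input clip"
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Pick visually distinctive points: corners, buttons, high-contrast spots.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

while pts is not None and len(pts) > 0:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Follow each point from the previous frame into the current one.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = new_pts[status.flatten() == 1].reshape(-1, 1, 2)
    prev_gray = gray

cap.release()
```

The trajectories of those tracked points over time are the kind of raw material a motion-analysis system can then attempt to classify into gestures or emotional states.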

Without such advanced video analytics, investigators must essentially reverse-engineer the action depicted in the video they receive, Bregler says. In the case of the Boston Marathon, the researchers have been analyzing video of the explosions and then working backward to see who was in the area prior to the bombing. “Most likely the data needed to figure out what happened exists,” he adds. “Investigators just need to find it, which is difficult given the volume of the video coming in.”

Source: Scientific American

Gut Bacteria Can Affect Fat Absorption, and Act in Accordance to “Social Structures”.


Much new research is now emerging on the importance of bacteria – intestinal bacteria, to be more exact. These are commonly referred to as probiotics, and are the antithesis to antibiotics, both of which I’ll discuss below.

These microscopic critters are also known as your microbiome.

Around 100 trillion of these beneficial bacterial cells populate your body, particularly your intestines and other parts of your digestive system. In fact, 90 percent of the genetic material in your body is not yours, but rather that of bacteria, fungi, viruses and other microorganisms that compose your microflora.

We’re now discovering that the composition of this microflora has a profound impact on your health. For example, we now know that your intestinal bacteria influence your:

  • Genetic expression
  • Immune system
  • Brain development, mental health, and memory
  • Weight, and
  • Risk of numerous chronic and acute diseases, from diabetes to cancer

Certain Gut Microbes Affect Absorption of Dietary Fats

Most recently, a research team that includes Carnegie’s Steve Farber and Juliana Carten has revealed that certain gut microbes increase the absorption of dietary fats.1 According to the authors:

“Diet-induced alterations in microbiota composition might influence fat absorption, providing mechanistic insight into how microbiota-diet interactions regulate host energy balance.”

Medical News Today2 recently reported on the findings, stating:

“Previous studies showed gut microbes aid in the breakdown of complex carbohydrates, but their role in dietary fat metabolism remained a mystery, until now… ‘This study is the first to demonstrate that microbes can promote the absorption of dietary fats in the intestine and their subsequent metabolism in the body,’ said senior study author John Rawls of the University of North Carolina. ‘The results underscore the complex relationship between microbes, diet and host physiology.'”

The bacteria identified as instrumental in increasing fat absorption are called Firmicutes, which, incidentally, have previously been linked to obesity, as they’re found in greater numbers in the guts of obese subjects. The researchers also found that the abundance of Firmicutes was influenced by diet. This adds weight to previous research postulating that gut bacteria can increase your body’s ability to absorb fat, and therefore extract more calories from your food compared to others who have a different composition of bacteria in their intestines – even when consuming the same amount of food.

Now, more recent research published in the journal Science3 reveals that bacteria may have “social structures similar to plants and animals.” According to the authors:

“In animals and plants, social structure can reduce conflict within populations and bias aggression toward competing populations; however, for bacteria in the wild it remains unknown whether such population-level organization exists. Here, we show that environmental bacteria are organized into socially cohesive units in which antagonism occurs between, rather than within, ecologically defined populations.

By screening approximately 35,000 possible mutual interactions among Vibrionaceae isolates from the ocean, we show that genotypic clusters known to have cohesive habitat association also act as units in terms of antibiotic production and resistance.

Genetic analyses show that within populations, broad-range antibiotics are produced by few genotypes, whereas all others are resistant, suggesting cooperation between conspecifics. Natural antibiotics may thus mediate competition between populations rather than solely increase the success of individuals.”

What this means is that certain bacteria have the ability to produce chemical compounds that inhibit the growth of other bacteria, while not harming their own kind or “close relatives.” These chemical compounds or natural antibiotics act as a type of chemical warfare, allowing the bacteria in question to gain a competitive edge by killing off the competition. Meanwhile, other “allies” are spared, as they are resistant to the antibiotic chemicals produced.

As reported by Medical News Today:4

“‘The research has the potential to bridge gaps in our understanding of the relationships between plants and humans and their non-disease- and disease-causing bacterial flora,’ said Robert Fleischmann, a program director in the Division of Biological Infrastructure for the National Science Foundation.

‘We use antibiotics to kill pathogenic microbes, which cause harm to humans and animals,’ said Polz. ‘As an unfortunate side effect, this has led to the widespread buildup of resistance, particularly in hospitals where pathogens and humans encounter each other often.’

In addition, the results help scientists make sense of why closely related bacteria are so diverse in their gene content. Part of the answer, they say, is that the diversity allows the bacteria to play different social roles. Social differentiation, for example, could mitigate the negative effects of two species competing for the same limiting resource – food or habitat, for instance – and generate population level behavior that emerges from the interaction between close relatives.”

Beware of Fluoridated Antibiotics that Can Ruin Your Gut Flora and Your Health

Your lifestyle can and does influence your gut flora on a daily basis, and a number of common exposures can wreak havoc on the makeup of bacteria in your gut. Researchers are now increasingly looking at the cascading ill effects of antibiotic drugs in particular.

Antibiotics are severely overused – not just in medicine, but also in food production. In fact, about 80 percent of all the antibiotics produced are used in agriculture – not only to fight infection, but to promote unhealthy (though profitable) weight gain in the animals. Hence, if you want to avoid overexposure to antibiotics, it’s also crucial to avoid conventionally-raised meats.

That said, certain antibiotics prescribed in medicine are so harmful they probably shouldn’t be used at all. Medications such as Avelox, Cipro, and Levaquin have been named in over 2,000 drug injury lawsuits.5

These are all fluoroquinolones, a class of fluoridated antibiotics associated with a number of serious side effects, such as potentially blinding retinal detachment, kidney failure, and permanent tendon damage. Fluoroquinolones do carry a black box warning for tendonitis, ruptured tendons, and their potentially detrimental effect on neuromuscular activity, but many patients simply do not read the warning labels before taking the drug. Other serious injuries linked to fluoroquinolones include:

  • Injury to central nervous system
  • Injury to your heart
  • Liver problems
  • Gastrointestinal problems
  • Injury to musculoskeletal system
  • Injury to renal system
  • Injury to visual and/or auditory system
  • Altered blood sugar metabolism
  • Depression
  • Psychotic reactions and hallucinations
  • Phototoxicity
  • Disfiguring rashes
  • Staphylococcus aureus infection
  • C. difficile infection
  • Severe diarrhea

Learn More about the Dangers of Fluoroquinolone Antibiotics

Shockingly, despite all these risks, fluoroquinolones are one of the most commonly prescribed classes of antibiotics in the world. John Fratti, who was hired by the FDA in a part-time position as an FDA Patient Representative for drug safety, is on a quest to raise awareness of the dangers of fluoroquinolone toxicity. He filed a Freedom of Information (FOI) request with the FDA on two of the top fluoroquinolones, Levaquin and Cipro, and learned that they are associated with over 2,500 deaths.

A non-profit organization called Quinolone Vigilance Foundation, established by Mr. David Melvin, was created to spread awareness of the dangers associated with this class of drugs, and the Foundation’s website contains both information and support for those injured by these drugs. Fortunately, fluoroquinolones have started getting some well-deserved media attention as of late.

According to a recent article in The New York Times:6

“A half-dozen fluoroquinolones have been taken off the market because of unjustifiable risks of adverse effects. Those that remain are undeniably important drugs, when used appropriately. But doctors at the Centers for Disease Control and Prevention have expressed concern that too often fluoroquinolones are prescribed unnecessarily as a ‘one size fits all’ remedy without considering their suitability for different patients.

Experts caution against giving these drugs to certain patients who face higher than average risks of bad reactions – children under age 18, adults over 60, and pregnant and nursing women – unless there is no effective alternative. The risk of adverse effects is also higher among people with liver disease and those taking corticosteroids or nonsteroidal anti-inflammatory drugs.

When an antibiotic is prescribed, it is wise to ask what the drug is and whether it is necessary, what side effects to be alert for, whether there are effective alternatives, when to expect the diagnosed condition to resolve, and when to call if something unexpected happens or recovery seems delayed.”

Last year, PBS NewsHour7 aired a segment highlighting the dangers of fluoroquinolones. Fratti, who is himself a victim of fluoroquinolone toxicity, was interviewed. He was prescribed Levaquin a few years ago for a minor bacterial infection. The drug caused nerve damage, tendon damage and damage to his central nervous system.

How to Optimize Your Gut Flora

The good news is that positively influencing the bacteria growing in your body is relatively easy. Aside from reserving antibiotics for serious cases of infection only, one of the most important steps you can take is to stop consuming sugary foods. When you eat a healthy diet that is low in sugars and processed foods, one of the major benefits is that it causes the good bacteria in your gut to flourish and build up a major defense against the bad bacteria getting a foothold in your body in the first place.

This is one of the many reasons I highly recommend reducing, with the plan of eliminating, sugars and most grains from your diet. Following my recently updated nutrition plan will help you optimize your diet in a systematic step-by-step fashion. A healthy diet is the ideal way to maintain a healthy gut, and regularly consuming traditionally fermented or cultured foods is the easiest way to ensure optimal gut flora. Healthy options include:

  • Fermented vegetables of all kinds (cabbage, carrots, kale, collards, celery spiced with herbs like ginger and garlic)
  • Lassi (an Indian yogurt drink, traditionally enjoyed before dinner)
  • Tempeh
  • Fermented raw milk such as kefir or yogurt, but NOT commercial versions, which typically do not have live cultures and are loaded with sugars that feed pathogenic bacteria
  • Natto
  • Kimchee

Just make sure to steer clear of pasteurized versions, as pasteurization will destroy many of the naturally-occurring probiotics. For example, most of the “probiotic” yogurts you find in every grocery store these days are NOT recommended. Since they’re pasteurized, they will be associated with all of the problems of pasteurized milk products instead. They also typically contain added sugars, high fructose corn syrup, dyes, and/or artificial sweeteners; all of which are detrimental to your health.

Consuming traditionally fermented foods will also provide you with the following added benefits:

  • Important nutrients: Some fermented foods are excellent sources of essential nutrients such as vitamin K2, which is important for preventing arterial plaque buildup and heart disease. Cheese curd, for example, is an excellent source of both probiotics and vitamin K2. You can also obtain all the K2 you’ll need (about 200 micrograms) by eating 15 grams, or half an ounce, of natto daily. Fermented foods are also potent producers of many B vitamins
  • Optimizing your immune system: Probiotics have been shown to modulate immune responses via your gut’s mucosal immune system, and have anti-inflammatory potential. Eighty percent of your immune system is located in your digestive system, making a healthy gut a major focal point if you want to maintain optimal health, as a robust immune system is your number one defense system against ALL disease
  • Detoxification: Fermented foods are some of the best chelators available. The beneficial bacteria in these foods are very potent detoxifiers, capable of drawing out a wide range of toxins and heavy metals
  • Cost effective: Fermented foods can contain 100 times more probiotics than a supplement, so just adding a small amount of fermented foods to each meal will give you the biggest bang for your buck
  • Natural variety of microflora: As long as you vary the fermented and cultured foods you eat, you’ll get a much wider variety of beneficial bacteria than you could ever get from a supplement

When you first start out, you’ll want to start small, adding as little as half a tablespoon of fermented vegetables to each meal, and gradually work your way up to about a quarter to half a cup (2 to 4 oz) of fermented vegetables or other cultured food with one to three meals per day. Since cultured foods are efficient detoxifiers, you may experience detox symptoms, or a “healing crisis,” if you introduce too many at once.

Learn to Make Your Own Fermented Vegetables

Fermented vegetables are easy to make on your own. It’s also the most cost-effective way to get high amounts of healthful probiotics in your diet. To learn how, review the following interview with Caroline Barringer, a Nutritional Therapy Practitioner (NTP) and an expert in the preparation of the foods prescribed in Dr. Natasha Campbell-McBride’s Gut and Psychology Syndrome (GAPS) Nutritional Program. In addition to the wealth of information shared in this interview, I highly recommend getting the book Gut and Psychology Syndrome, which provides all the necessary details for Dr. McBride’s GAPS protocol.

Although you can use the native bacteria on cabbage and other vegetables, it is typically easier to get consistent results by using a starter culture. Caroline prepares hundreds of quarts of fermented vegetables a week and has found that she gets great results by using three to four high quality probiotic capsules to jump start the fermentation process.

Source: Dr. Mercola