How reading rewires your brain for greater intelligence and empathy


Get lost in a good book. Time and again, reading has been shown to make us healthier, smarter, and more empathic.

Fitness headlines promise staggering physical results: a firmer butt, ripped abs, bulging biceps. Nutritional breakthroughs are similar clickbait, with attention-grabbing, if often inauthentic—what, really, is a “superfood”?—means of achieving better health. Strangely, one topic that usually escapes discussion has been shown, time and again, to make us healthier, smarter, and more empathic animals: reading.

Reading, of course, requires patience, diligence, and determination. Scanning headlines and retweeting quips is not going to make much cognitive difference. If anything, such sweet nothings are dangerous, the literary equivalent of sugar addiction. Information gathering in under 140 characters is lazy. The benefits of contemplation through narrative offer another story.

The benefits are plenty, which is especially important in a distracted, smartphone age in which one-quarter of American children don’t learn to read. This not only endangers them socially and intellectually, but cognitively handicaps them for life. One 2009 study of 72 children ages eight to ten discovered that reading creates new white matter in the brain, which improves system-wide communication.

White matter carries information between regions of grey matter, where information is processed. Reading not only increases white matter; it also helps the brain process information more efficiently.

Reading in one language has enormous benefits. Add a foreign language and not only do communication skills improve—you can talk to more people in wider circles—but the regions of your brain involved in spatial navigation and learning new information increase in size. Learning a new language also improves your overall memory.

In one of the most fascinating findings in neuroscience, language activates the regions of your brain involved in the actions and sensations you’re reading about. For example, when you read “soap” and “lavender,” the parts of your brain implicated in scent are activated. Those regions remain silent when you read “chair.” What if I wrote “leather chair”? Your sensory cortex just fired.

Return to the opening paragraph and your quest for a firmer butt: picture the biomechanics required for a squat. Your motor cortex has just been activated. Athletes have long envisioned their movements—Serena Williams’s serve; Conor McGregor’s kicks; Usain Bolt’s bursts of speed—to achieve better proficiency while actually moving. That’s because, during visualization, their brains are practicing.

Hard glutes are one thing. Novel reading is a great way to practice being human. Rather than sprints and punches, how about something more primitive and necessary to a society, like empathy? As you dive deeper into Rabbit Angstrom’s follies or Jason Taylor’s coming of age, you not only feel the characters’ pain and joy. You actually experience it.

In one respect novels go beyond simulating reality to give readers an experience unavailable off the page: the opportunity to enter fully into other people’s thoughts and feelings.

This has profound implications for how we interact with others. When encountering a 13-year-old boy misbehaving, you most likely won’t think, “Well, David Mitchell wrote about such a situation, and so I should behave like this,” but you might have integrated some of the lessons about young boys figuring life out and display a more nuanced understanding in how you react.

Perhaps you’ll even reconsider trolling someone online regarding their political opinion, remembering that no matter how crass and inhumane a sentiment appears on screen, an actual human is sitting behind the keyboard pecking out their thoughts. I’m not arguing against engaging, but for the love of anything closely resembling humanity, argue intelligently.

Because reading does in fact make us more intelligent. Research shows that reading not only helps with fluid intelligence, but with reading comprehension and emotional intelligence as well. You make smarter decisions about yourself and those around you.

All of these benefits require actually reading, which leads to the formation of a philosophy rather than the regurgitation of an agenda, so prevalent in reposts and online trolling. Recognizing the intentions of another human also plays a role in constructing an ideology. Novels are especially well-suited for this task. A 2011 study published in the Annual Review of Psychology found overlap in brain regions used to comprehend stories and networks dedicated to interactions with others.

Novels consume time and attention. While the benefits are worthwhile, even shorter bursts of prose exhibit profound neurological effects. Poetry elicits strong emotional responses in readers and, as one study shows, listeners. Heart rates, facial expressions, and “movement of their skin and arm hairs” were measured while participants listened to poetry. Forty percent ended up displaying visible goose bumps, as they would while listening to music or watching movies. As for their craniums:

Their neurological responses, however, seemed to be unique to poetry: Scans taken during the study showed that listening to the poems activated parts of participants’ brains that, as other studies have shown, are not activated when listening to music or watching films.

These responses mostly occurred near the conclusion of a stanza and especially near the end of the poem. This fits in well with our inherent need for narrative: in the absence of a conclusion our brain automatically creates one, which, of course, leads to plenty of heartbreak and suffering when our speculations prove to be false. Instead we should turn to more poetry:

There is something fundamental to the poetic form that implies, creates, and instills pleasure.

Whether an Amiri Baraka verse or a Margaret Atwood trilogy, attention matters. Research at Stanford showed a neurological difference between reading for pleasure and focused reading, as if for a test. Blood flows to different neural areas depending on how reading is conducted. The researchers hope this might offer clues for advancing cognitive training methods.

I have vivid memories of my relationship with reading: trying to write my first book (Scary Monster Stories) at age five; creating a mock newspaper after the Bernard Goetz subway shooting when I was nine, my mother scolding me for “thinking about such things”; sitting in the basement of my home in the Jersey suburbs one weekend morning, determined to read the entirety of Charlie and the Chocolate Factory, which I did.

Reading is like any skill. You have to practice it, regularly and constantly. While I never finished (or really much started) Scary Monster Stories, I have written nine books and read thousands more along the way. Though it’s hard to tell if reading has made me smarter or a better person, I like to imagine that it has.

What I do know is that life would seem a bit less meaningful if we didn’t share stories with one another. While many mediums for transmitting narratives across space and time exist, I’ve found none as pleasurable as cracking open a new book and getting lost in a story. Something profound is always discovered along the way.


Breakthrough discovery: Scientists have found where Alzheimer’s starts – in a set of inflamed immune cells


Over 5.5 million Americans – mostly over the age of 65 – are battling Alzheimer’s disease, a debilitating condition that kills more people than breast cancer and prostate cancer combined. Alzheimer’s progressively destroys memory and other cognitive functions, causing confusion, anxiety and heartache. The number of people fighting this disease has increased by a staggering 89 percent since 2000.

Now, an exciting new study by researchers from the University of Bonn, Germany, claims to have uncovered the cause of this incurable disease, and the team hopes that their discovery will lead to a breakthrough in treatment within the next decade.

Scientists have understood for some time that Alzheimer’s is associated with a build-up of amyloid plaques in the brain. Amyloid plaques are a sticky build-up which accumulates on the outside of neurons, or nerve cells. While amyloid is a protein that naturally occurs throughout the body, in Alzheimer’s patients this protein divides improperly, creating a form of amyloid which is toxic to neurons in the brain.

Most human trials for Alzheimer’s treatments have focused on trying to target these plaques. All have failed.

The new research is exciting in that it has revealed the root cause of this amyloid plaque build-up: Inflammation in immune cells called microglia, which make up between 10 and 15 percent of all brain cells, and which act as the brain’s first line of immune defense.

Inflammation directly fuels the characteristic amyloid plaque build-up which autopsies have revealed in the brains of Alzheimer’s patients.


The Daily Mail recently reported:

For years inflammation has been suspected of having a role but the exact nature of its involvement has been hard to pin down – until now.

The researchers found the microglia release specks of a protein called ASC in response to it. They stick to the amyloid beta protein – boosting its production. …

ASC reside in a vital inflammatory pathway called the NLRP3 inflammasome which damages brain cells.

The researchers found that this process takes place right from the earliest stages of the disease, and that when an antibody was used in laboratory tests to prevent ASC from binding to the amyloid protein, the damaging, sticky amyloid plaque build-up was prevented.

The research team is excited about the possibility of developing a chemical treatment which can directly target inflammasomes, and hope that an Alzheimer’s “cure” might be on the horizon within the next five to 10 years.

Of course, any such chemical cure is likely to carry a slew of side effects, and will more than likely be very costly.

On the other hand, the knowledge that inflammation is the root cause of Alzheimer’s is very powerful, because inflammation can be avoided and even reversed. (Related: Six healthy habits effective for preventing Alzheimer’s disease.)

An article published by Harvard Medical School, for example, noted:

Doctors are learning that one of the best ways to quell inflammation lies not in the medicine cabinet, but in the refrigerator.

The article noted that while inflammation serves the purpose of protecting your body against threatening invaders, inflammation can sometimes persist for long periods of time, even when no such threat exists. It added:

That’s when inflammation can become your enemy. Many major diseases that plague us—including cancer, heart disease, diabetes, arthritis, depression, and Alzheimer’s—have been linked to chronic inflammation.

One of the most powerful tools to combat inflammation comes not from the pharmacy, but from the grocery store.

The foods we eat will either cause or prevent inflammation; it’s as simple as that.

Foods that cause inflammation include refined carbohydrates, fries and other junk food, soda, processed meats, margarine, and conventionally farmed meat that has been subjected to routine antibiotic and hormone treatments.

On the other side of the spectrum, foods that actively fight inflammation include most of the foods that form part of the Mediterranean diet, including tomatoes, olive oil, green leafy veggies, fatty fish like salmon and tuna, and fresh, organic fruit.



Sources for this article include: DailyMail.co.uk, Alz.org, Health.Harvard.edu

Exploring Alzheimer’s Disease Subtypes at the Prodromal Stage


‘Alzheimer’s disease is a disease of the medial temporal lobe’. These are words that one of us remembers vividly from a particularly interesting undergraduate lecture nearly 25 years ago, and similar statements have been heard many times in many lecture theatres since. Countless studies have indeed shown that medial temporal lobe atrophy is predictive of the development of Alzheimer’s dementia, and that this relates to changes in episodic memory. However, the medial temporal lobe is not always the first brain region to diminish in volume in Alzheimer’s disease, and cognitive impairment does not always begin with episodic memory dysfunction. Alzheimer’s disease has a range of presentations, each with well described behavioural characteristics and an associated pattern of atrophy that can be visualized with MRI. As well as the typical episodic memory-medial temporal lobe profile, individuals may present with predominant visuospatial impairment (posterior cortical atrophy), with profound language deficits (logopenic aphasia) or with frontal signs (Benson et al., 1988; Gorno-Tempini et al., 2011; Ossenkoppele et al., 2015). The increasing use of biomarkers such as CSF analysis and amyloid PET (Figure 1) is enabling clinicians to better identify patients with variants of Alzheimer’s disease in memory clinics (Carswell et al., 2018), and allowing greater understanding of how these syndromes relate to each other. Furthermore, the use of such biomarkers, in combination with comprehensive behavioural testing and longitudinal scanning, in large research cohorts has enabled researchers to explore atrophy patterns and cognitive profiles in mild cognitive impairment in addition to Alzheimer’s dementia (Zhang et al., 2016). In this issue of Brain, ten Kate and co-workers have taken this approach further by using a data-driven cluster analysis to identify subtypes in multiple Alzheimer’s disease dementia cohorts, and then attempting to group biomarker-positive individuals with prodromal Alzheimer’s disease into these subtypes (ten Kate et al., 2018).


Figure 1. Amyloid PET scan in Alzheimer’s disease. Colour image of an amyloid PET scan from a patient with increasing episodic memory impairment. There is loss of grey/white matter differentiation in multiple regions, strongly suggestive of widespread amyloid deposition. Courtesy of Dr Zarni Win.

The authors identified four atrophy subtypes in a dataset of patients with dementia imaged with a single scanner in Amsterdam, before validating these subtypes in a separate cohort of patients from Amsterdam as well as individuals with dementia from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. In addition to the ‘classical’ medial-temporal-predominant atrophy group with memory and language dysfunction, they found three other clusters: a posterior atrophy group with poor executive function/attention and poor visuospatial function, a group with mild atrophy with relatively better cognition, and a subtype with diffuse cortical atrophy and intermediate cognitive features (Table 1). Critically, age, biomarker profile and other features formed part of the defining characteristics. For example, the mild atrophy group was relatively young with the highest CSF tau levels, whereas patients in the medial-temporal atrophy group were older, and had the greatest burden of vascular lesions and the lowest CSF tau levels. The authors applied their classification to over 600 individuals with prodromal Alzheimer’s disease (from the Amsterdam Dementia Cohort and from the ADNI) and found that the four subgroups manifested subtle differences in their rate and profile of cognitive decline.
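For readers less familiar with the method, a data-driven cluster analysis of this kind can be sketched in a few lines of Python. The sketch below uses k-means from scikit-learn as a stand-in; the patient count, region count, and random data are illustrative assumptions, not the pipeline ten Kate et al. actually used.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical input: one row per patient, one column per regional
# grey-matter volume from MRI (all numbers are illustrative).
rng = np.random.default_rng(0)
atrophy = rng.random((300, 90))  # 300 patients x 90 brain regions

# Standardize each regional measure, then partition patients into
# four data-driven subtypes.
features = StandardScaler().fit_transform(atrophy)
subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

# Each patient now carries a subtype label (0-3) that can be related
# to age, CSF tau, vascular burden, and cognitive profile.
print(np.bincount(subtypes))
```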

This important study by ten Kate et al. builds on previous work and provides further evidence for an intermediate subtype of Alzheimer’s dementia that does not strongly conform to a specific set of cognitive features. Furthermore, the data suggest that the mapping between CSF biomarkers and cognitive profile is not straightforward, and that particular molecular biomarker characteristics may relate to specific cognitive features rather than there being a one-to-many mapping (Husain, 2017). The study also adds to recent work by Dong and co-workers (2017), suggesting that subtypes exist at the prodromal stage. However, it should be noted that around 21% of prodromal subjects in the current study showed a match to more than one subtype, and that the proportion of subjects in each group was different for the prodromal dataset versus the Alzheimer’s disease dementia subsets. In particular, a larger proportion (55%) of the prodromal patients belonged to the milder atrophy group. Thus, one issue that deserves further evaluation is whether the different subtypes observed in the prodromal Alzheimer’s disease population may represent points on the same trajectory, rather than distinct patient groups. ten Kate et al. suggest that the biomarker profiles from their study indicate that this is not the case, and their account would be in keeping with longitudinal data previously published by Zhang and colleagues (2016). However, it is still possible that a significant proportion of individuals with Alzheimer’s disease show longitudinal crossover between subtypes (i.e. mild to diffuse or mild to medial-temporal), whereas other individuals (e.g. parieto-occipital) remain ‘true-to-subtype’ for a much longer period.

Another aspect of the study that must be considered is that, although the cluster analysis employed by the authors is data-driven and unbiased, it can only work with the cohort data that are available; hence the conflation of attention and executive function into a single neuropsychological domain in this study. It is possible that datasets with more detailed cognitive measures might allow better definition of existing subtypes and the identification of more disease clusters. An additional area that is likely to be of particular interest is the neuropsychiatric symptom profile in prodromal Alzheimer’s disease. Recent research suggests that symptoms such as agitation, anxiety, appetite changes and depression can occur at the earliest pathological stages of the disease (Ehrenberg et al., 2018). Exactly how these psychiatric symptoms map onto each of the four subtypes described by ten Kate and colleagues is of great interest. Moreover, although the classification in the current study included the presence of white matter hyperintensities on MRI, the datasets do not include systematic information regarding comorbidities. Particularly in older individuals, further knowledge concerning co-existing conditions such as diabetes and hypertension is likely to throw light on the pathogenesis and development of Alzheimer’s disease. Specifically, knowing whether certain subtypes are more likely to suffer from one or more of these conditions may further inform our understanding of the variable rate of progression across subtypes (Li et al., 2011). The lectures will keep becoming more complicated, and even more interesting.

Hunger and the Brain



Bradford Lowell, MD, PhD, remembers his astonishment the first time his lab “turned on” hunger-promoting neurons in a mouse. The genetically engineered rodent, which was already full, devoured food pellets as if it hadn’t eaten anything all day, quelling any doubts about the neurons’ importance.

“I recall thinking it was the most amazing thing I had ever seen,” says Lowell, an HMS professor of medicine at Beth Israel Deaconess Medical Center (BIDMC).

That 2011 feeding frenzy was a turning point in Lowell’s decades-long quest to understand how the intense drive of hunger compels us to eat—and makes dieting so difficult. It is one of many “wow” moments he has encountered while decoding the incredibly complex tangle of circuits in the brain that control appetite.

To illuminate how the system works, Lowell and his colleagues are assembling a “parts list” of hunger-related neurons and the genes they express, as well as a “wiring diagram” of how these neurons communicate with each other and with higher structures in the brain. Lowell hopes that finding the missing pieces of both maps will lead to treatments for obesity or eating disorders.

“Once we know what all the key neuronal steps are, and we know all the genes that each neuron expresses, we can say, ‘What’s our drug target in there?’” Lowell says.

Much of his research focuses on agouti-related peptide (AgRP) neurons, nerve cells in the hypothalamus at the base of the brain that are activated by fasting. Lowell’s group, along with others, discovered that these neurons suppress other neurons responsible for feelings of satiety, which in turn triggers a desire to eat. Mice will poke their noses for pellets 30 times in a row to relieve hunger pangs; even a sated mouse will poke 30 times if its hunger neurons are artificially turned on.

“These drive neurons cause a bad feeling, and you eat to get rid of the bad feeling,” he says. “That is why dieting fails, because you have to constantly walk around not feeling well.”

In a surprising finding, Lowell’s lab, collaborating with Mark Andermann, PhD ’06, an HMS associate professor of medicine at BIDMC, found that when a mouse merely looks at food, its hunger neurons change their response. “What that means is that sensory cues from the environment that are being processed by the cortex are coming down to regulate hunger neurons to help guide them in deciding if/when to eat,” Lowell explains. This counters earlier thinking that feedback from the body (e.g., about low fuel reserves) alone controlled these nerve cells. He adds, “The fact that cortically processed information controls AgRP hunger neurons likely also has implications for the cause of feeding disorders like anorexia nervosa.”

Transformative technology

Incredible changes have taken place since Lowell began studying hunger and metabolism in college in the late 1970s. That was well before the advent of genetically encoded tools that have transformed brain science by allowing investigators to manipulate, measure, and map the connectivity of specific neurons and watch how their activity affects behavior.

Lowell’s lab is a leader in producing and sharing lines of genetically engineered mice with researchers worldwide—who, in turn, have published numerous papers. “So the making of these key ‘enabling’ mice has had a significant impact on neuroscience,” he says. With technology helping elucidate neural behavior in ways that were previously impossible, “I think the future for understanding deeply how the brain controls drives like hunger is very promising.”

10 Ways to Work Out Your Brain in 5 Minutes or Less


Sometimes you have five minutes with nothing to do — you’re stuck in traffic, for example, or waiting at the doctor’s office. Why not make good use of that time and give your brain a little stretch?

 

Here are 10 great mini-workouts you can do in just five minutes or less. Give them a try — you might even come up with a few others on your own!

Play a game on your cell phone for five minutes. (Not sure if you have a game on your phone? Borrow a teenager — they’ll either locate those games or they can download them for you for about 99 cents apiece.)

Memorize three essential phone numbers that you don’t know by heart. (Really, by head — what does your heart have to do with it?)

Go through five client or project files you haven’t used in a while and toss any papers you no longer need to hold on to.

Learn a new song.

Sit and just relax — for a whole five minutes!

Call your sister, brother, or best friend for a five-minute catch-up.

See how many different objects you can find in the clouds. (Don’t worry; you’ll remember how to do this once you get started!)

Learn a recipe by heart.

Doodle.

 

Marvel at all the wonderful things your brain can do — take five minutes to list the miraculous achievements of your magnificent brain!

Young Adults With Type 1 Diabetes Show Abnormal Brain Activity


Having diabetes may affect the way our brains work. Research is taking place to find out exactly how this occurs.

In a recent study, researchers describe how tying diabetes to cognitive impairment is tricky because many people with diabetes have other conditions like high blood pressure and obesity, which also affect cognition. That’s why they conducted a study in young adults with and without type 1 diabetes “who were virtually free of such comorbidities,” the study authors wrote in their abstract.


Christine Embury is a graduate research assistant at the Center for Magnetoencephalography (MEG) at the University of Nebraska Medical Center. She worked with Dr. Wilson, the study’s lead author, and was kind enough to answer some questions.

In layman terms, she explains that “neural processing” is brain activity. “In our work, we relate brain activity in specific brain regions to task-specific cognitive processes, like working memory. Widespread brain networks are involved in this kind of complex processing including regions relating to verbal processing and attention, working together to accomplish task goals,” she writes.

Young, Healthy Type 1 Adults Tested

The researchers matched two groups, one with and one without type 1 diabetes, on major health and demographic factors and had them all do a verbal working memory task during magnetoencephalographic (MEG) brain imaging. For the group with type 1 diabetes, the mean diabetes duration was only 12.4 years.

The researchers hypothesized that those with type 1 diabetes would have “altered neural dynamics in verbal working memory processing and that these differences would directly relate to clinical disease measures,” they wrote.

Higher A1c and Diabetes Duration May Alter Brain Activity

They found that those with type 1 diabetes had much stronger neural responses in the superior parietal cortices during memory encoding and much weaker activity in the parietal-occipital regions during maintenance compared to those without type 1 diabetes.

Diabetes duration and glycemic control were both “significantly correlated with neural responses in various brain regions,” the authors wrote.

Embury explained that their findings suggest that “the longer one has the condition, the more the brain has to work to compensate for deficits incurred.” Higher A1c levels were likewise associated with compensatory brain activity.

The harrowing conclusion from the study authors is that even young, healthy adults with type 1 diabetes “already have aberrant neural processing relative to their non-diabetic peers, employing compensatory responses to perform the task, and glucose management and duration may play a central role.”

One might wonder what the findings would be among people with type 1 diabetes who keep their A1c in the non-diabetic range. This study suggests that elevated blood sugar over time is likely what changes brain activity, and that these effects may be compounded in those with comorbidities like obesity and high blood pressure.

What is Verbal Working Memory?

According to this study, verbal working memory processing may be affected by type 1 diabetes. Embury shared an example of this and wrote, “Participants had to memorize a grid of letters and were later asked to identify if a probe letter was in the previous set of letters shown.” She said we have to use working memory any time that we’re trying to hold on to or manipulate a piece of information for a short amount of time, like remembering a person’s phone number.

The verbal part of “verbal working memory processing” just has to do with the way the information is presented, like letters or numbers, and “anything that requires language processing as well,” Embury explains.

More research will help clarify these findings in the future.

Planes don’t flap their wings: does AI work like a brain?


A replica of Jacques de Vaucanson’s digesting duck automaton. Courtesy the Museum of Automata, Grenoble, France

In 1739, Parisians flocked to see an exhibition of automata by the French inventor Jacques de Vaucanson performing feats assumed to be impossible for machines. In addition to human-like flute and drum players, the collection contained a golden duck, standing on a pedestal, quacking and defecating. It was, in fact, a digesting duck. When offered pellets by the exhibitor, it would pick them out of his hand and consume them with a gulp. Later, it would excrete a gritty green waste from its back end, to the amazement of audience members.

Vaucanson died in 1782 with his reputation as a trailblazer in artificial digestion intact. Sixty years later, the French magician Jean-Eugène Robert-Houdin gained possession of the famous duck and set about repairing it. Taking it apart, however, he realised that the duck had no digestive tract. Rather than breaking down the food, the pellets the duck was fed went into one container, and pre-loaded green-dyed breadcrumbs came out of another.

The field of artificial intelligence (AI) is currently exploding, with computers able to perform at near- or above-human level on tasks as diverse as video games, language translation, trivia and facial identification. Like the French exhibit-goers, any observer would be correctly impressed by these results. What might be less clear, however, is how these results are being achieved. Does modern AI reach these feats by functioning the way that biological brains do, and how can we know?

In the realm of replication, definitions are important. An intuitive response to hearing about Vaucanson’s cheat is not to say that the duck is doing digestion differently but rather that it’s not doing digestion at all. But a similar trend appears in AI. Checkers? Chess? Go? All were considered formidable tests of intelligence until they were solved by increasingly complex algorithms. Learning how a magic trick works makes it no longer magic, and discovering how a test of intelligence can be solved makes it no longer a test of intelligence.

So let’s look to a well-defined task: identifying objects in an image. Our ability to recognise, for example, a school bus, feels simple and immediate. But given the infinite combinations of individual school buses, lighting conditions and angles from which they can be viewed, turning the information that enters our retina into an object label is an incredibly complex task – one out of reach for computers for decades. In recent years, however, computers have come to identify certain objects with up to 95 per cent accuracy, higher than the average individual human.

Like many areas of modern AI, the success of computer vision can be attributed to artificial neural networks. As their name suggests, these algorithms are inspired by how the brain works. They use as their base unit a simple formula meant to replicate what a neuron does. This formula takes in a set of numbers as inputs, multiplies them by another set of numbers (the ‘weights’, which determine how much influence a given input has) and sums them all up. That sum determines how active the artificial neuron is, in the same way that a real neuron’s activity is determined by the activity of other neurons that connect to it. Modern artificial neural networks gain abilities by connecting such units together and learning the right weight for each.
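As a concrete illustration of that base unit, here is a minimal sketch of one artificial neuron in Python. The ReLU nonlinearity and the hand-picked numbers are assumptions for illustration; real networks learn the weights from data.

```python
import numpy as np

def artificial_neuron(inputs, weights, bias=0.0):
    """Multiply inputs by weights, sum them up, and pass through a nonlinearity."""
    weighted_sum = np.dot(inputs, weights) + bias
    return max(0.0, weighted_sum)  # ReLU: the unit is active only when the sum is positive

# Example: three inputs and three weights (chosen by hand here)
x = np.array([0.5, 0.2, 0.9])
w = np.array([1.0, -0.5, 0.3])
print(artificial_neuron(x, w))  # 0.67
```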

The networks used for visual object recognition were inspired by the mammalian visual system, a structure whose basic components were discovered in cats nearly 60 years ago. The first important component of the brain’s visual system is its spatial map: neurons are active only when something is in their preferred spatial location, and different neurons have different preferred locations. Different neurons also tend to respond to different types of objects. In brain areas closer to the retina, neurons respond to simple dots and lines. As the signal gets processed through more and more brain areas, neurons start to prefer more complex objects such as clocks, houses and faces.

The first of these properties – the spatial map – is replicated in artificial networks by constraining the inputs that an artificial neuron can get. For example, a neuron in the first layer of a network might receive input only from the top left corner of an image. A neuron in the second layer gets input only from those top-left-corner neurons in the first layer, and so on.

The second property – representing increasingly complex objects – comes from stacking layers in a ‘deep’ network. Neurons in the first layer respond to simple patterns, while those in the second layer – getting input from those in the first – respond to more complex patterns, and so on.
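A minimal sketch, assuming toy kernels and sizes, of how these two properties combine in code: every unit sees only a small patch of its input (the spatial map), and feeding one such layer into another makes the second layer respond to combinations of the first layer’s simple patterns.

```python
import numpy as np

def local_layer(grid, kernel, stride=1):
    """Each output unit receives input only from one small patch of `grid`."""
    k = kernel.shape[0]
    rows = (grid.shape[0] - k) // stride + 1
    cols = (grid.shape[1] - k) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = grid[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = max(0.0, np.sum(patch * kernel))  # weighted sum + ReLU
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])          # layer 1: responds to vertical edges
layer1 = local_layer(image, edge_kernel)
layer2 = local_layer(layer1, np.ones((2, 2)))  # layer 2: combinations of those edges
print(layer1.shape, layer2.shape)  # (7, 7) (6, 6)
```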

These networks clearly aren’t cheating in the way that the digesting duck was. But does all this biological inspiration mean that they work like the brain? One way to approach this question is to look more closely at their performance. To this end, scientists are studying ‘adversarial examples’ – real images that programmers alter so that the machine makes a mistake. Very small tweaks to images can be catastrophic: changing a few pixels on an image of a teapot, for example, can make the network label it an ostrich. It’s a mistake a human would never make, and suggests that something about these networks is functioning differently from the human brain.
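The article doesn’t say how such tweaks are found; one standard recipe is the fast gradient sign method (Goodfellow et al., 2014), sketched here in PyTorch under the assumption that `model` maps a batched image tensor with values in [0, 1] to class logits.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A tiny perturbation, nearly invisible to a human viewer...
    perturbed = image + epsilon * image.grad.sign()
    # ...can be enough to turn a "teapot" into an "ostrich".
    return perturbed.clamp(0.0, 1.0).detach()
```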

Studying networks this way, however, is akin to the early days of psychology. Measuring only environment and behaviour – in other words, input and output – is limited without direct measurements of the brain connecting them. But neural-network algorithms are frequently criticised (especially among watchdog groups concerned about their widespread use in the real world) for being impenetrable black boxes. To overcome the limitations of this techno-behaviourism, we need a way to understand these networks and compare them with the brain.

An ever-growing population of scientists is tackling this problem. In one approach, researchers presented the same images to a monkey and to an artificial network. They found that the activity of the real neurons could be predicted by the activity of the artificial ones, with deeper layers in the network more similar to later areas of the visual system. But, while these predictions are better than those made by other models, they are still not 100 per cent accurate. This is leading researchers to explore what other biological details can be added to the models to make them more similar to the brain.
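A common way to make that comparison quantitative, sketched below with made-up numbers: fit a regularized linear map from a network layer’s activations to a recorded neuron’s responses, and score it on held-out images. The dimensions and data are illustrative assumptions, not the published experiment.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Stand-in data: the monkey and the network both "see" the same 200 images.
rng = np.random.default_rng(0)
net_activity = rng.random((200, 512))   # activations of one network layer
neuron_activity = rng.random(200)       # one recorded neuron's firing rates

# Held-out predictive accuracy measures how "brain-like" the layer is.
scores = cross_val_score(Ridge(alpha=1.0), net_activity, neuron_activity,
                         cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```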

There are limits to this approach. At a recent conference for neuroscientists and AI researchers, Yann LeCun – director of AI research at Facebook, and professor of computer science at New York University – warned the audience not to become ‘hypnotised by the details of the brain’, implying that not all of what the brain does necessarily needs to be replicated for intelligent behaviour to be achieved.

But the question of what counts as a mere detail, like the question of what is needed for true digestion, is an open one. By training artificial networks to be more ‘biological’, researchers have found computational purpose in, for example, the physical anatomy of neurons. Some correspondence between AI and the brain is necessary for these biological insights to be of value. Otherwise, the study of neurons would be only as useful for AI as wing-flapping is for modern airplanes.

In 2000, the Belgian conceptual artist Wim Delvoye unveiled Cloaca at a museum in Belgium. Over eight years of research on human digestion, he created the device – consisting of a blender, tubes, pumps and various jars of acids and enzymes – that successfully turns food into faeces. The machine is a true feat of engineering, a testament to our powers of replicating the natural world. Its purpose might be more questionable. One art critic was left with the thought: ‘There is enough dung as it is. Why make more?’ Intelligence doesn’t face this problem.

New Data Challenge Beliefs About Concussion


Experts respond to recent study implicating repetitive small blows to the head

New research is challenging the notion that the serious brain damage now seen in professional football players is caused mainly by blows to the head leading to overt concussions.

In a study published in Brain, researchers from Boston University examined the brains of eight teenagers and young adults. Four had “recent sports-related closed-head impact injuries sustained 1 day to 4 months prior to death.” Four had no such history. The researchers found evidence of chronic traumatic encephalopathy (CTE) in the four teenagers who had recent head injury. “These results indicate that closed-head impact injuries, independent of concussive signs, can induce traumatic brain injury as well as early pathologies and functional sequelae associated with chronic traumatic encephalopathy,” the researchers concluded. It may be the best evidence yet that it is the routine head impacts that occur on virtually every play, and not concussions per se, that cause CTE.

This research raises questions about the NFL’s efforts to deal with concussions. It also may influence what advice physicians give to parents who ask whether their children should play youth football.

Below, the NFL’s chief medical officer, a professor of emergency medicine and neurosurgery, and a pediatrician discuss these questions.

Allen Sills, MD, NFL chief medical officer, past co-director of the Vanderbilt Sports Concussion Center

“Important research advancements have been made over the last several years around traumatic brain injury (TBI) and chronic traumatic encephalopathy (CTE), which have aided awareness and understanding around this important issue. As highlighted in the most recent study, repetitive hits to the head have been consistently implicated as a cause of CTE by this research group. How and why exactly this manifests, who is at risk, and why — these are questions that we as researchers and clinicians are working to answer.

“As the research community continues to explore these critical questions, the NFL has made significant strides to try to better protect our players and reduce contact to the head including implementing data-driven rules; changes intended to eliminate potentially dangerous tactics and reduce the risk of injuries, especially to the head and neck; enforcing limits on contact practice; and mandating ongoing health and safety education for players and training for club and non-affiliated medical personnel.

“For kids, being active, getting outside, playing sports, particularly team sports, is important. There are also concerns about the risks involved in playing sports, including football, which is why it has been encouraging to see similar developments at the youth level such as the certification of over 130,000 youth and high school coaches through USA Football’s Heads Up program; USA Football’s National Practice Guidelines — including limits on full contact; Pop Warner’s initiatives, from no intentional head-to-head contact to requiring players who suffer a suspected head injury to receive medical clearance from a concussion specialist before returning to play; and 50 states have a Return to Play law, which can help reduce the rates of recurrent concussions. We hope that all youth sports will continue to take measures to reduce head contact through similar rules changes, education, and improved protective equipment.”

Jeffrey Bazarian, MD, MPH, professor of emergency medicine, physical medicine and rehabilitation, neurology, neurosurgery, and public health science, University of Rochester

The National Football League has done an admirable job of supporting research to better identify, prevent and treat sport-related concussions. But more recent research data suggest that the real threat to the long-term neurologic health of contact athletes like football players is not concussion, but the repetitive head hits that do not result in acute symptoms of concussion. These head hits are experienced by all players during nearly every football practice and game. They have been associated with acute changes in brain function and structure, and with short-term cognitive deficits. They have also been reported to have clinically-relevant, long-term adverse consequences on the brain. Repetitive head hits experienced by football players should be reconceived as an occupational exposure that can be assessed, controlled, and managed. The NFL is uniquely positioned to foster research to identify, prevent, and treat the neurologic consequences of these hits, and would be wise to turn its attention in the direction of repetitive head hits as soon as possible.

Steven Hicks, MD, assistant professor of pediatrics, Penn State Health and Milton S. Hershey Medical Center

As a general pediatrician, I believe we can do several things to make sports like football (with high concussion risk) safer for our children: 1) be open to rule changes that may make the game safer by minimizing concussive events; 2) ensure that medical personnel are on the sideline at games, to accurately assess potential concussions and ensure that concussion guidelines are followed; 3) teach children to tackle safely and reduce full-contact scenarios in daily practice; and 4) support research that improves our understanding, prevention, and treatment of concussions. Making decisions about youth football participation will require us to balance risks and benefits. By minimizing concussion risks on the field we can hopefully find ways to allow children to continue to benefit from participation in this team sport.

Scientists Just Identified The Physical Source of Anxiety in The Brain


And they can control it with light.

 

We’re not wired to feel safe all the time, but maybe one day we could be.

A new study investigating the neurological basis of anxiety in the brain has identified ‘anxiety cells’ located in the hippocampus – which not only regulate anxious behaviour but can be controlled by a beam of light.

 The findings, so far demonstrated in experiments with lab mice, could offer a ray of hope for the millions of people worldwide who experience anxiety disorders (including almost one in five adults in the US), by leading to new drugs that silence these anxiety-controlling neurons.

“We wanted to understand where the emotional information that goes into the feeling of anxiety is encoded within the brain,” says one of the researchers, neuroscientist Mazen Kheirbek from the University of California, San Francisco.

To find out, the team used a technique called calcium imaging, inserting miniature microscopes into the brains of lab mice to record the activity of cells in the hippocampus as the animals made their way around their enclosures.

Anxiety cells (Hen Lab/Columbia University)

These weren’t just any ordinary cages, either.

The team had built special mazes where some paths led to open spaces and elevated platforms – exposed environments known to induce anxiety in mice, due to increased vulnerability to predators.

Away from the safety of walls, something went off in the mice’s heads – with the researchers observing cells in a part of the hippocampus called ventral CA1 (vCA1) firing up, and the more anxious the mice behaved, the greater the neuron activity became.

“We call these anxiety cells because they only fire when the animals are in places that are innately frightening to them,” explains senior researcher Rene Hen from Columbia University.

The output of these cells was traced to the hypothalamus, a region of the brain that – among other things – regulates the hormones that control emotions.

Because this same regulation process operates in people, too – not just lab mice exposed to anxiety-inducing labyrinths – the researchers hypothesise that the anxiety neurons themselves could be a part of human biology, too.

“Now that we’ve found these cells in the hippocampus, it opens up new areas for exploring treatment ideas that we didn’t know existed before,” says one of the team, Jessica Jimenez from Columbia University’s Vagelos College of Physicians & Surgeons.

Even more exciting is that we’ve already figured out a way of controlling these anxiety cells – in mice at least – to the extent it actually changes the animals’ observable behaviour.

Using a technique called optogenetics to shine a beam of light onto the cells in the vCA1 region, the researchers were able to effectively silence the anxiety cells and prompt confident, anxiety-free activity in the mice.

“If we turn down this activity, will the animals become less anxious?” Kheirbek told NPR.

“What we found was that they did become less anxious. They actually tended to want to explore the open arms of the maze even more.”

This control switch didn’t just work one way.

By changing the light settings, the researchers were also able to enhance the activity of the anxiety cells, making the animals quiver even when safely ensconced in enclosed, walled surroundings – not that the team necessarily thinks vCA1 is the only brain region involved here.

“These cells are probably just one part of an extended circuit by which the animal learns about anxiety-related information,” Kheirbek told NPR, highlighting that other neural cells warrant additional study too.

In any case, the next steps will be to find out whether the same control switch is what regulates human anxiety – and based on what we know about the brain similarities with mice, it seems plausible.

If that pans out, these results could open a big new research lead into ways to treat various anxiety conditions.

And that’s something we should all be grateful for.

“We have a target,” Kheirbek explained to The Mercury News. “A very early way to think about new drugs.”

The findings are reported in Neuron.

Somatic Activating KRAS Mutations in Arteriovenous Malformations of the Brain


Background

Sporadic arteriovenous malformations of the brain, which are morphologically abnormal connections between arteries and veins in the brain vasculature, are a leading cause of hemorrhagic stroke in young adults and children. The genetic cause of this rare focal disorder is unknown.

Methods

We analyzed tissue and blood samples from patients with arteriovenous malformations of the brain to detect somatic mutations. We performed exome DNA sequencing of tissue samples of arteriovenous malformations of the brain from 26 patients in the main study group and of paired blood samples from 17 of those patients. To confirm our findings, we performed droplet digital polymerase-chain-reaction (PCR) analysis of tissue samples from 39 patients in the main study group (21 with matching blood samples) and from 33 patients in an independent validation group. We interrogated the downstream signaling pathways, changes in gene expression, and cellular phenotype that were induced by activating KRAS mutations, which we had discovered in tissue samples.

Results

We detected somatic activating KRAS mutations in tissue samples from 45 of the 72 patients and in none of the 21 paired blood samples. In endothelial cell–enriched cultures derived from arteriovenous malformations of the brain, we detected KRAS mutations and observed that expression of mutant KRAS (KRAS G12V) in endothelial cells in vitro induced increased ERK (extracellular signal-regulated kinase) activity, increased expression of genes related to angiogenesis and Notch signaling, and enhanced migratory behavior. These processes were reversed by inhibition of MAPK (mitogen-activated protein kinase)–ERK signaling.

Conclusions

We identified activating KRAS mutations in the majority of tissue samples of arteriovenous malformations of the brain that we analyzed.

Source: NEJM