Brain Activity Has Been Recorded as Much as 10 Minutes After Death


Doctors in a Canadian intensive care unit stumbled on a very strange case last year – when life support was turned off for four terminal patients, one of them showed persistent brain activity even after they were declared clinically dead.

For more than 10 minutes after doctors confirmed death through a range of observations, including the absence of a pulse and unreactive pupils, the patient appeared to experience the same kind of brain waves (delta wave bursts) we get during deep sleep.

And it’s an entirely different phenomenon to the sudden ‘death wave’ that’s been observed in rats following decapitation.

“In one patient, single delta wave bursts persisted following the cessation of both the cardiac rhythm and arterial blood pressure (ABP),” the team from the University of Western Ontario in Canada reported in March 2017.

They also found that death could be a unique experience for each individual, noting that across the four patients, the frontal electroencephalographic (EEG) recordings of their brain activity displayed few similarities both before and after they were declared dead.

“There was a significant difference in EEG amplitude between the 30-minute period before and the 5-minute period following ABP cessation for the group,” the researchers explained.
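
For readers curious what that kind of amplitude comparison involves, here is a minimal sketch of the general idea using simulated data rather than the study’s recordings; the sampling rate, filter settings and window lengths are illustrative assumptions.

```python
# Illustrative sketch only: comparing delta-band (0.5-4 Hz) EEG amplitude
# before vs. after arterial blood pressure (ABP) cessation, as described
# above. The signal is simulated; the sampling rate, filter order and
# window lengths are assumptions, not the study's parameters.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                    # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 60 * 40)         # 40 minutes of fake single-channel EEG
abp_stop = fs * 60 * 30                     # sample index where ABP ceased (assumed)

# Isolate delta-band activity
b, a = butter(4, [0.5, 4.0], btype="band", fs=fs)
delta = filtfilt(b, a, eeg)

# Mean absolute delta amplitude: 30 minutes before vs. 5 minutes after
before = np.abs(delta[:abp_stop]).mean()
after = np.abs(delta[abp_stop:abp_stop + fs * 60 * 5]).mean()
print(f"delta amplitude before: {before:.3f}, after: {after:.3f}")
```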

Before we get into the actual findings, it’s worth noting that the researchers are being very cautious about the implications, saying it’s far too early to be talking about what this could mean for our post-death experience, especially considering that only one patient showed the effect.

In the absence of any biological explanation for how brain activity could possibly continue several minutes after the heart has stopped beating, the researchers said the scan could be the result of some kind of error at the time of recording.

But they were at a loss to explain what that error could be, as the medical equipment showed no signs of malfunction, meaning the source of the anomaly cannot be confirmed – biologically or otherwise.

“It is difficult to posit a physiological basis for this EEG activity given that it occurs after a prolonged loss of circulation,” the researchers wrote.

“These waveform bursts could, therefore, be artefactual [a recording error] in nature, although an artefactual source could not be identified.”

You can see the brain scans of the four terminal patients below, showing the moment of clinical death at Time 0, or when the heart had stopped a few minutes after life support had been turned off:

EEG recordings from the four patients. Norton et al. (2017)

The yellow brain activity is what we’re looking for in these scans, and you can see that in three of the four patients this activity faded away before the heart stopped beating – as much as 10 minutes before clinical death, in the case of patient #2.

But for some reason, patient #4 shows evidence of delta wave bursts for 10 minutes and 38 seconds after their heart had stopped.

The researchers also investigated if a phenomenon known as ‘death waves’ occurred in the patients – in 2011, a separate team observed a burst of brain activity in rat brains about 1 minute after decapitation, suggesting that the brain and the heart have different moments of expiration.

“It seems that the massive wave which can be recorded approximately 1 minute after decapitation reflects the ultimate border between life and death,” researchers from Radboud University in the Netherlands reported at the time.

When the Canadian team looked for this phenomenon in their human patients, they came up empty.

“We did not observe a delta wave within 1 minute following cardiac arrest in any of our four patients,” they reported.

If all of this feels frustratingly inconsequential, welcome to the strange and incredibly niche field of necroneuroscience, where no one really knows what’s actually going on.

But what we do know is that very strange things can happen at the moment of death – and afterwards – with a pair of studies from 2016 finding that more than 1,000 genes were still functioning several days after death in human cadavers.

And it wasn’t like they were taking longer than everything else to sputter out – they actually increased their activity following the moment of clinical death.

The big takeaway from studies like these isn’t that we understand more about the post-death experience now than we did before, because the observations remain inconclusive and without biological explanation.

But what they do show is that we’ve got so much to figure out when it comes to the process of death, and how we – and other animals – actually experience it, from our bodies to our brains.

Young Adults With Type 1 Diabetes Show Abnormal Brain Activity


Having diabetes may affect the way our brains work. Research is taking place to find out exactly how this occurs.

In a recent study, researchers describe how tying diabetes to cognitive impairment is tricky because many people with diabetes have other conditions like high blood pressure and obesity, which also affect cognition. That’s why they conducted a study in young adults with and without type 1 diabetes “who were virtually free of such comorbidities,” the study authors wrote in their abstract.

Christine Embury is a graduate research assistant at the Center for Magnetoencephalography (MEG) at the University of Nebraska Medical Center. She worked with Dr. Wilson, the study’s lead author, and was kind enough to answer some questions.

In layman’s terms, she explains that “neural processing” is brain activity. “In our work, we relate brain activity in specific brain regions to task-specific cognitive processes, like working memory. Widespread brain networks are involved in this kind of complex processing including regions relating to verbal processing and attention, working together to accomplish task goals,” she writes.

Young, Healthy Type 1 Adults Tested

The researchers matched two groups, one with and one without type 1 diabetes, on major health and demographic factors and had them all do a verbal working memory task during magnetoencephalographic (MEG) brain imaging. For the group with type 1 diabetes, the mean diabetes duration was only 12.4 years.

The researchers hypothesized that those with type 1 diabetes would have “altered neural dynamics in verbal working memory processing and that these differences would directly relate to clinical disease measures,” they wrote.

Higher A1c and Diabetes Duration May Alter Brain Activity

They found that those with type 1 diabetes had much stronger neural responses in the superior parietal cortices during memory encoding and much weaker activity in the parietal-occipital regions during maintenance compared to those without type 1 diabetes.

Diabetes duration and glycemic control were both “significantly correlated with neural responses in various brain regions.”

Embury explained that their findings suggest that “the longer one has the condition, the more the brain has to work to compensate for deficits incurred.” Higher A1c levels were associated with compensatory brain activity, too.

The harrowing conclusion from the study authors is that even young, healthy adults with type 1 diabetes “already have aberrant neural processing relative to their non-diabetic peers, employing compensatory responses to perform the task, and glucose management and duration may play a central role.”

One might wonder what the findings would be among type 1s who keep their A1c in the non-diabetic range. This study suggests that elevated blood sugar over time is what changes brain activity, and these effects may be compounded in those with comorbidities like obesity and high blood pressure.

What is Verbal Working Memory?

According to this study, verbal working memory processing may be affected by type 1 diabetes. Embury shared an example of this and wrote, “Participants had to memorize a grid of letters and were later asked to identify if a probe letter was in the previous set of letters shown.” She said we have to use working memory any time that we’re trying to hold on to or manipulate a piece of information for a short amount of time, like remembering a person’s phone number.

The verbal part of “verbal working memory processing” just has to do with the way the information is presented, like letters or numbers, and “anything that requires language processing as well,” Embury explains.

More research will help clarify these findings in the future.

Best Friends Really do Share Brain Patterns, Neuroscientists Reveal


Whenever my best friends and I say the same thing in a group chat, we send the wavy dash emoji, 〰️, shorthand for we’re on the same wavelength! The concept gets tossed around in pop culture, though its meaning has always been more symbolic than scientific. Until Wednesday, there wasn’t much proof that friends who think the same thing shared anything but a set of references and some dumb inside jokes.

But in the journal Nature Communications, a team of Dartmouth College scientists provided evidence of what best friends have imagined all along:

“Neural responses to dynamic, naturalistic stimuli, like videos, can give us a window into people’s unconstrained, spontaneous thought processes as they unfold. Our results suggest that friends process the world around them in exceptionally similar ways,” said lead author Carolyn Parkinson in a statement on Wednesday. At the time of the study, Parkinson was at Dartmouth, and she’s currently an assistant professor of psychology and director of the Computational Social Neuroscience Lab at the University of California, Los Angeles.

fMRI machines are used to measure changes in brain activity in real time.

Starting with 280 graduate students whose degrees of friendship were self-reported, Parkinson and her team wondered whether they could predict which individuals were closer friends based solely on their brain activity while watching the same set of videos. Their hypothesis, a slightly more refined version of pop psychology’s 〰️ theory, was that people who had closer social ties would respond to the videos in more similar ways, which in turn would be reflected in their patterns of brain activity. Plotting the self-reported relationships on a map, the researchers then got to work on finding the links between individuals’ brain activity.

In solitude, 42 of the participants watched the same series of politics, science, comedy, and music videos as the researchers observed their brain activity using an fMRI scanner, a device that tracks changes in blood flow in the brain. The idea is that certain regions of the brain surge with blood — that is, become more active — depending on how the individual responds to the video.

Across the participants, the parts of the brain linked to emotional responses, attention, and high-level reasoning became active, in varying degrees. The analysis revealed that, as the researchers predicted, the people with the most similar brain activity patterns were the closest friends. The strength of the correlation was directly related to the social closeness of the individuals, even when the researchers considered variables like handedness, age, gender, ethnicity, and nationality.
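
To make the analysis concrete, here is a hedged sketch of the general approach (not the authors’ actual pipeline): correlate each pair of participants’ response time courses to the same videos and ask whether friend pairs look more alike than non-friend pairs. All of the data and the friendship labels below are simulated.

```python
# Illustrative sketch (not the authors' pipeline): correlate each pair of
# subjects' brain-response time courses to the same videos and compare
# friend pairs with non-friend pairs. Subject count, time courses and the
# friendship labels below are all simulated.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 42, 500
responses = rng.standard_normal((n_subjects, n_timepoints))  # stand-in fMRI responses
friend_pairs = {(0, 1), (2, 3), (4, 5)}                       # hypothetical friendships

friend_r, other_r = [], []
for i, j in combinations(range(n_subjects), 2):
    r = np.corrcoef(responses[i], responses[j])[0, 1]
    (friend_r if (i, j) in friend_pairs else other_r).append(r)

print("mean similarity, friends:    ", np.mean(friend_r))
print("mean similarity, non-friends:", np.mean(other_r))
```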

“We are a social species and live our lives connected to everybody else. If we want to understand how the human brain works, then we need to understand how brains work in combination – how minds shape each other,” explained senior author Thalia Wheatley, Ph.D., a psychologist at Dartmouth, in a statement.

Mapping the experimental data produced a social network that the researchers could use to predict how close individuals were, solely on the basis of their brain activity. As many of us have already intuited, it seems clear that the experiences we share with our closest friends do cause us to think and respond to things in similar ways, but the exact mechanisms that lead to synchronicity — is it a function of time spent together or laughter shared? — remain to be discovered.

Mind Aglow: Scientists Watch Thoughts Form in the Brain


A new technology shows real-time communication among neurons that promises to reveal brain activity in unprecedented detail.

In a mouse brain, cell-based detectors called CNiFERs change their fluorescence when neurons release dopamine.

When a single neuron fires, it is an isolated chemical blip. When many fire together, they form a thought. How the brain bridges the gap between these two tiers of neural activity remains a great mystery, but a new kind of technology is edging us closer to solving it.

The glowing splash of cyan in the photo above comes from a type of biosensor that can detect the release of very small amounts of neurotransmitters, the signaling molecules that brain cells use to communicate. These sensors, called CNiFERs (pronounced “sniffers”), for cell-based neurotransmitter fluorescent engineered reporters, are enabling scientists to examine the brain in action and up close.

This newfound ability, developed as part of the White House BRAIN Initiative, could further our understanding of how brain function arises from the complex interplay of individual neurons, including how complex behaviors like addiction develop. Neuroscientist Paul Slesinger at Icahn School of Medicine at Mount Sinai, one of the senior researchers who spearheaded this research, presented the sensors Monday at the American Chemical Society’s 252nd National Meeting & Exposition.

Current technologies have proved either too broad or too specific to track how tiny amounts of neurotransmitters in and around many cells might contribute to the transmission of a thought. Scientists have used functional magnetic resonance imaging to look at blood flow as a surrogate for brain activity over fairly long periods of time or have employed tracers to follow the release of a particular neurotransmitter from a small set of neurons for a few seconds. But CNiFERs make for a happy medium; they allow researchers to monitor multiple neurotransmitters in many cells over significant periods of time.

When a CNiFER comes in contact with the neurotransmitter it is designed to detect, it fluoresces. Using a tiny sensor implanted in the brain, scientists can then measure how much light the CNiFER emits, and from that infer the amount of neurotransmitter present. Because they comprise several interlocking parts, CNiFERs are highly versatile, forming a “plug-and-play system,” Slesinger says. Different sections of the sensor can be swapped out to detect individual neurotransmitters. Prior technology had trouble distinguishing between similar molecules, such as dopamine and norepinephrine, but CNiFERs do not.
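
To give a feel for that inference step, here is a rough sketch of how a measured fluorescence change might be converted into an estimated neurotransmitter concentration using a generic Hill-type calibration curve; the EC50, Hill coefficient and maximal response are placeholder values, not CNiFER-specific parameters.

```python
# Rough sketch of the inference step: converting a measured fluorescence
# change (dF/F) into an estimated neurotransmitter concentration via a
# generic Hill-type calibration curve. The EC50, Hill coefficient and
# maximal response are placeholder values, not CNiFER-specific numbers.
import numpy as np

def concentration_from_dff(dff, dff_max=1.0, ec50_nM=50.0, hill_n=1.0):
    """Invert dff = dff_max * C**n / (EC50**n + C**n) to recover C."""
    frac = np.clip(dff / dff_max, 1e-6, 1 - 1e-6)   # fraction of saturation
    return ec50_nM * (frac / (1 - frac)) ** (1 / hill_n)

print(concentration_from_dff(0.3))   # estimated concentration in nM (illustrative)
```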

The sensors are being tested in animals to examine particular brain processes. Slesinger and his colleagues have used CNiFERs to look more closely at a classic psychological phenomenon: Pavlovian conditioning. Just as Pavlov trained his dog to salivate at the sound of a dinner bell, Slesinger and his team trained mice to associate an audio cue with a food reward. At the beginning of the experiment, the mice experienced a release of dopamine and norepinephrine when they received a sugar cube. As the animals became conditioned to associate the audio cue with the sugar, however, the neurotransmitter release occurred earlier, eventually coinciding with the audio cue rather than the actual reward.

Mouse studies might be a far cry from the kind of human impact that neuroscience ultimately strives toward—better treatments for Parkinson’s patients or concussion sufferers, for example—but this is where it all begins. Slesinger is especially interested in using CNiFERs to study addiction. A more nuanced understanding of how addiction develops in mouse brains could help identify novel targets to combat addiction in people.

Brain activity is as unique as fingerprints, and correlates to intelligence, study finds


Coming soon: job interviews by brain scan?

A person’s behaviour and way of doing things can often give them away to those who know them best, and now research says it’s not just our outward idiosyncrasies that can identify us – even our brain activity is unique.

Researchers at Yale University in the US have found that images of brain activity taken by functional magnetic resonance imaging (fMRI) can be used as a kind of ‘cognitive fingerprint’ to identify particular individuals – and can even correlate to how intelligent we are.

“In most past studies, fMRI data have been used to draw contrasts between, say, patients and healthy controls,” said Emily Finn, co-first author of the study. “We have learned a lot from these sorts of studies, but they tend to obscure individual differences which may be important.”

And those differences can be telling. So telling, in fact, that when the researchers analysed numerous fMRI scans taken of 126 participants sourced by the Human Connectome Project, they were able to identify individuals by recognising their unique ‘connectivity profile’.

The researchers looked at activity across some 268 different regions in the brain by scanning the participants six times over. Sometimes the researchers engaged the participants with a cognitive task during the fMRI scan, while during other sessions, they simply rested during the procedure.

The researchers were able to identify individual participants with up to 99 percent accuracy when comparing scans of the same person involved in a similar cognitive task, although their strike rate fell to about 80 percent if the scan showed the same person doing a disparate task or being at rest.
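
The identification step itself is conceptually simple. Below is a minimal sketch of how connectome-based identification can work: each person is summarised by a vector of region-to-region connectivity values, and a later scan is matched to whichever earlier profile it correlates with most strongly. The data are simulated, not the study’s.

```python
# Minimal sketch of connectome-based identification in the spirit of the
# study described above (not the authors' code): each subject is a vector
# of region-to-region connectivity values, and a second-session scan is
# matched to whichever first-session profile it correlates with most.
# All data below are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 126
n_edges = 268 * 267 // 2                        # connections among 268 regions
session1 = rng.standard_normal((n_subjects, n_edges))
session2 = session1 + 0.5 * rng.standard_normal((n_subjects, n_edges))  # noisy repeat

correct = 0
for s in range(n_subjects):
    r = [np.corrcoef(session2[s], session1[t])[0, 1] for t in range(n_subjects)]
    correct += int(np.argmax(r) == s)
print(f"identification accuracy: {correct / n_subjects:.0%}")
```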

“Until this, we really didn’t know the extent to which each unique individual has a unique pattern of connectivity,” Russell Poldrack, a cognitive neuroscientist at Stanford University and advisor to the Human Connectome Project, told Rachel Ehrenberg at Nature.

And beyond simply identifying who you are, the study also shows that brain activity can give researchers clues about a person’s level of intelligence. While fMRI data doesn’t simply indicate how smart you are, connectivity patterns in brain activity do correlate to how well people perform in an intelligence test, according to the researchers.

“[T]he uniqueness seems to be tied to cognitive function in some way,” said Poldrack, with stronger connections in participants’ prefrontal and parietal lobes correlating to better intelligence test scores.

Some might worry that these kinds of techniques could one day be used for discriminatory purposes – a mandatory brain scan as part of the interview process for a job or college place, for example. What excites the researchers, though, is the potential for brain activity data to be used for therapeutic purposes, where treatments could be specifically tailored to an individual’s unique brain connectivity profile.

“We have hundreds of drugs for treating neuropsychiatric illness, but there’s still a lot of trial and error and failed treatments,” Finn told Nature. “This might be another tool.”

That Smartphone Is Giving Your Thumbs Superpowers


When people spend time interacting with their smartphones via touchscreen, it actually changes the way their thumbs and brains work together, according to a report in the Cell Press journal Current Biology on December 23. More touchscreen use in the recent past translates directly into greater brain activity when the thumbs and other fingertips are touched, the study shows.

“I was really surprised by the scale of the changes introduced by the use of smartphones,” says Arko Ghosh of the University of Zurich and ETH Zurich in Switzerland. “I was also struck by how much of the inter-individual variations in the fingertip-associated brain signals could be simply explained by evaluating the smartphone logs.”

It all started when Ghosh and his colleagues realized that our newfound obsession with smartphones could be a grand opportunity to explore the everyday plasticity of the human brain. Not only are people suddenly using their fingertips, and especially their thumbs, in a new way, but many of us are also doing it an awful lot, day after day. On top of that, our phones keep track of our digital histories, providing a ready-made source of data on those behaviors.

Ghosh explains it this way: “I think first we must appreciate how common personal digital devices are and how densely people use them. What this means for us neuroscientists is that the digital history we carry in our pockets has an enormous amount of information on how we use our fingertips (and more).”

While neuroscientists have long studied brain plasticity in expert groups – musicians or video gamers, for instance – smartphones present an opportunity to understand how regular life shapes the brains of regular people.

To link digital footprints to brain activity in the new study, Ghosh and his team used electroencephalography (EEG) to record the brain response to mechanical touch on the thumb, index, and middle fingertips of touchscreen phone users in comparison to people who still haven’t given up their old-school mobile phones.

The researchers found that the electrical activity in the brains of smartphone users was enhanced when all three fingertips were touched. In fact, the amount of activity in the cortex of the brain associated with the thumb and index fingertips was directly proportional to the intensity of phone use, as quantified by built-in battery logs. The thumb tip was even sensitive to day-to-day fluctuations: the shorter the time elapsed from an episode of intense phone use, the researchers report, the larger was the cortical potential associated with it.
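
That “directly proportional” relationship is essentially a regression of evoked-potential amplitude on phone-use intensity. A small illustrative sketch of that kind of analysis, with simulated values standing in for the EEG amplitudes and the battery-log measures, might look like this:

```python
# Hypothetical sketch of the relationship described: regressing the
# fingertip-evoked cortical potential on phone-use intensity taken from
# battery logs. Both variables here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
phone_use_hours = rng.uniform(0, 5, size=40)                          # daily touchscreen use
potential_uV = 2.0 + 0.8 * phone_use_hours + rng.normal(0, 0.5, 40)   # evoked amplitude (uV)

slope, intercept = np.polyfit(phone_use_hours, potential_uV, 1)
r = np.corrcoef(phone_use_hours, potential_uV)[0, 1]
print(f"slope: {slope:.2f} uV per hour of use, r = {r:.2f}")
```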

The results suggest to the researchers that repetitive movements over the smooth touchscreen surface reshape sensory processing from the hand, with daily updates in the brain’s representation of the fingertips. And that leads to a pretty remarkable idea: “We propose that cortical sensory processing in the contemporary brain is continuously shaped by personal digital technology,” Ghosh and his colleagues write.

What exactly this influence of digital technology means for us in other areas of our lives is a question for another day. The news might not be so good, Ghosh and colleagues say, noting evidence linking excessive phone use with motor dysfunctions and pain.

Scientists have invented a brain decoder that could read your inner thoughts


Scientists have figured out how to read the words of our inner monologue, a finding that could help people who cannot physically speak to communicate with the world.

Talking to yourself or having idle internal thoughts is something most of us can admit to. But what if someone could eavesdrop on those private thoughts? It sounds very creepy, but it’s exactly what scientists are working towards achieving.

When you hear someone speak, sound waves activate specific neurons that allow the brain to interpret the sounds as words. Now scientists have created an algorithm that does the same thing, but with brain activity instead of sound waves.

To learn how to translate people’s thoughts, researchers from the University of California in the US looked at the brain activity of seven people undergoing epilepsy surgery. The participants were asked to first read aloud a short piece of text, then read it silently in their head.

While they read the text aloud, the team built a personal ‘decoder’ for each patient, by mapping which neurons were reacting to different aspects of speech. They made this map using electrocorticographic (ECoG) readings from electrodes implanted in the patients.

Once they’d worked out which brain patterns related to which words, they then used their decoder to try to read brain activity during silent reading, and found that it was able to translate several words that the volunteers were thinking.
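
Conceptually, the decoder is a supervised mapping from neural features to words, trained on overt speech and then applied to covert speech. The sketch below is a deliberately simplified stand-in (simulated features and a plain logistic-regression classifier), not the published spectro-temporal model:

```python
# Deliberately simplified stand-in for the decoding idea (not the published
# model): learn a mapping from neural features to words using overt-speech
# trials, then apply it to silent-reading trials. Features, labels and the
# logistic-regression classifier are illustrative choices only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
words = ["the", "brain", "reads", "words"]
n_trials, n_features = 200, 64                  # e.g. features from 64 electrodes

X_overt = rng.standard_normal((n_trials, n_features))   # fake overt-speech features
y_overt = rng.integers(0, len(words), n_trials)         # word label per trial

decoder = LogisticRegression(max_iter=1000).fit(X_overt, y_overt)

X_covert = rng.standard_normal((10, n_features))        # fake silent-reading features
print([words[i] for i in decoder.predict(X_covert)])
```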

The researchers also applied the decoder while the participants were listening to Pink Floyd, to see which neurons respond to various musical notes.

“Sound is sound,” Brian Pasley, neuroscientist and lead author of the study, told Helen Thompson from New Scientist. “[The decoder] helps us understand different aspects of how the brain processes it.”

The team are now fine-tuning their algorithms, and though there is a lot more work to be done, this is an important step towards developing a device that could help paralysed patients speak again.

“Ultimately, if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can’t speak,” Pasley told Thompson from New Scientist.

World-first experiment achieves direct brain-to-brain communication in human subjects


For the first time, an international team of neuroscientists has transmitted a message from the brain of one person in India to the brains of three people in France.

The team, which includes researchers from Harvard Medical School’s Beth Israel Deaconess Medical Center, the Starlab Barcelona in Spain, and Axilum Robotics in France, has announced today the successful transmission of a brain-to-brain message over a distance of 8,000 kilometres.

“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” said one of the team, Harvard’s Alvaro Pascual-Leone in a press release. “One such pathway is, of course, the Internet, so our question became, ‘Could we develop an experiment that would bypass the talking or typing part of internet and establish direct brain-to-brain communication between subjects located far away from each other in India and France?'”

The team achieved this world-first feat by fitting out one of their participants – known as the emitter – with a device called an electrode-based brain-computer interface (BCI). This device, which sits over the participant’s head, can interpret the electrical currents in the participant’s brain and translate them into a binary code called Bacon’s cipher. This type of code is similar to what computers use, but more compact.
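
Bacon’s cipher simply maps each letter to five binary symbols, which is what makes it convenient for this kind of slow, symbol-by-symbol transmission. Here is a quick sketch of the encoding, assuming the 26-letter variant of the cipher (whether the experiment used exactly this variant isn’t stated here):

```python
# Quick sketch of Bacon's cipher as described: each letter becomes five
# binary symbols. This uses the 26-letter variant (A = 00000 ... Z = 11001);
# whether the experiment used exactly this variant is an assumption.
def bacon_encode(message: str) -> str:
    return "".join(
        format(ord(ch) - ord("a"), "05b")
        for ch in message.lower()
        if ch.isalpha()
    )

print(bacon_encode("hola"))   # -> 00111011100101100000
```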

“The emitter now has to enter that binary string into the laptop using her thoughts,” says Francie Diep at Popular Science. “She does this by using her thoughts to move the white circle on-screen to different corners of the screen. (Upper right corner for “1,” bottom right corner for “0.”) This part of the process takes advantage of technology that several labs have developed, to allow people with paralysis to control computer cursors or robot arms.”

Once uploaded, this code is then transmitted via the Internet to another participant – called the receiver – who was also fitted with a device, this time a computer-brain interface (CBI). This device emits electrical pulses, directed by a robotic arm, through the receiver’s head, which make them ‘see’ flashes of light called phosphenes that don’t actually exist.

“As soon as the receivers’ machine gets the emitter’s binary message over the Internet, the machine gets to work,” says Diep. “It moves its robotic arm around, sending phosphenes to the receivers at different positions on their skulls. Flashes appearing in one position correspond to 1s in the emitter’s message, while flashes appearing in another position correspond to 0s.”

Exactly how the receivers are recording the flashes so they can translate all those 0s and 1s isn’t clear, but it could be as simple as writing them down with an actual pen and paper.

While it’s not clear at this stage what the applications for this technology could be, it’s a pretty incredible achievement. Oh, and the messages they transmitted? The conveniently brief and friendly “Hola” and “Ciao”.

ADHD: Scientists discover brain’s anti-distraction system


Two Simon Fraser University psychologists have made a brain-related discovery that could revolutionize doctors’ perception and treatment of attention-deficit disorders.

This discovery opens up the possibility that environmental and/or genetic factors may hinder or suppress a specific brain activity that the researchers have identified as helping us prevent distraction.

The Journal of Neuroscience has just published a paper about the discovery by John McDonald, an associate professor of psychology, and his doctoral student John Gaspar, who made the discovery during his master’s thesis research.

This is the first study to reveal that our brains rely on an active suppression mechanism to avoid being distracted by salient irrelevant information when we want to focus on a particular item or task.

McDonald, a Canada Research Chair in Cognitive Neuroscience, and other scientists first discovered the existence of the specific neural index of suppression in his lab in 2009. But until now, little was known about how it helps us ignore visual distractions.

“This is an important discovery for neuroscientists and psychologists because most contemporary ideas of attention highlight brain processes that are involved in picking out relevant objects from the visual field. It’s like finding Waldo in a Where’s Waldo illustration,” says Gaspar, the study’s lead author.

“Our results show clearly that this is only one part of the equation and that active suppression of the irrelevant objects is another important part.”

Given the proliferation of distracting consumer devices in our technology-driven, fast-paced society, the psychologists say their discovery could help scientists and health care professionals better treat individuals with distraction-related attentional deficits.

“Distraction is a leading cause of injury and death in driving and other high-stakes environments,” notes McDonald, the study’s senior author. “There are individual differences in the ability to deal with distraction. New electronic products are designed to grab attention. Suppressing such signals takes effort, and sometimes people can’t seem to do it.

“Moreover, disorders associated with attention deficits, such as ADHD and schizophrenia, may turn out to be due to difficulties in suppressing irrelevant objects rather than difficulty selecting relevant ones.”

The researchers are now turning their attention to understanding how we deal with distraction. They’re looking at when and why we can’t suppress potentially distracting objects, whether some of us are better at doing so, and why that is the case.

“There’s evidence that attentional abilities decline with age and that women are better than men at certain visual attentional tasks,” says Gaspar.

The study was based on three experiments in which 47 students (mean age 21) performed an attention-demanding visual search task. The researchers studied their neural processes related to attention, distraction and suppression by recording electrical brain signals from sensors embedded in a cap the students wore.
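
In EEG studies of visual distraction, suppression is typically indexed by a lateralized difference wave: activity at posterior electrodes contralateral versus ipsilateral to the distractor’s location. The sketch below shows that computation on simulated epochs; the electrodes, time window and data are placeholders, not the study’s recordings.

```python
# Generic sketch of a lateralized suppression index of the kind used in EEG
# studies of visual distraction: average the signal at posterior electrodes
# contralateral vs. ipsilateral to the distractor and take the difference.
# The epochs, electrodes and time window below are simulated placeholders,
# not the study's data.
import numpy as np

fs = 500                                       # assumed samples per second
n_trials, n_samples = 100, fs                  # one-second epochs
rng = np.random.default_rng(4)
contra = rng.standard_normal((n_trials, n_samples))   # contralateral to distractor
ipsi = rng.standard_normal((n_trials, n_samples))     # ipsilateral to distractor

difference_wave = contra.mean(axis=0) - ipsi.mean(axis=0)
window = slice(int(0.2 * fs), int(0.3 * fs))          # e.g. 200-300 ms post-stimulus
print("mean suppression-related amplitude:", difference_wave[window].mean())
```
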
Story Source: Materials provided by Simon Fraser University.

Journal Reference: J. M. Gaspar and J. J. McDonald, “Suppression of Salient Objects Prevents Distraction in Visual Search,” Journal of Neuroscience, 2014; 34(16): 5658. DOI: 10.1523/JNEUROSCI.4161-13.2014
