Doctors Who Discovered Cancer Enzymes In Vaccines All Found Murdered.

The website ran this fake news story about doctors discovering cancer enzymes in vaccines and their subsequent murders.

A fake news story tried to connect the random deaths of doctors with conspiracy theories around vaccination. The website posted a story headlined, “Doctors who discovered cancer enzymes in vaccines found murdered,” on March 2, 2017. Facebook users flagged the story as potentially being fabricated, as part of the social media site’s efforts to clear fake news from users’ news feeds.

The story claims that the doctors found the enzyme nagalase in vaccines, connecting nagalase to cancer, autism and diabetes. The article speculates that the doctors were murdered in order to prevent their findings from going public.

We reached out to the website but got no response.

The story’s argument is that nagalase suppresses the immune system and would therefore be harmful if found in vaccines. Controversial studies have claimed that the GcMAF protein, which is intended to reduce nagalase concentrations in the body, could be a treatment for diseases such as cancer, autism, or HIV, all conditions in which nagalase levels are supposedly elevated.

That theory lacks scientific evidence, though. The British government has warned against the purchase of GcMAF, since it is not licensed and its production does not adhere to manufacturing standards. Scientific journals that published those theories have since retracted them because they were not proven. The article identified five doctors it said were connected to the nagalase discovery, but we found little evidence that the doctors’ deaths were connected to the vaccine controversy.

Dr. Bruce Hedendal, a 67-year-old doctor in Florida, was found dead in his car in June 2015. The actual cause of his death is not clear, but natural causes have been cited by Florida local TV stations. Hedendal was a chiropractor, and there are no reports that he was involved in challenging the pharma industry in any way.

Another doctor from Florida, Dr. Theresa Ann Sievers, 46, was found murdered in her Florida home in 2015. An investigation found that her husband Mark had hired two men, Curtis Wayne Wright Jr. and Jimmy Rodgers, to kill her so he could collect insurance money.

Again, her death was not connected to the pharmaceutical industry.

In a follow-up article, the website broadened the theory to include two additional doctors. Dr. Jeffrey Whiteside, 63, went missing after a fight with his wife and was later found dead. His death was ruled a suicide.

Dr. Patrick Fitzpatrick, a 74-year-old retired eye doctor, went missing while hiking in Montana. Police searched but never found him. An officer of the Gallatin County Sheriff’s Office told the Bismarck Tribune that Fitzpatrick may have suffered from occasional confusion due to his advanced age, and said there was no reason to believe criminal activity was involved in the case.

The only person who was actually involved in the vaccination debate was Dr. Jeff Bradstreet, 61. He became controversial for his claim that vaccines could cause autism, a theory that has been refuted by several studies. Bradstreet treated his autism patients with the GcMAF protein, which the FDA does not recognize as a treatment for autism.

According to the Rutherford County, N.C., Sheriff’s Office, his wounds appeared to be self-inflicted. He was not murdered. But his death in 2015 gave rise to a wave of conspiracy theories.

While the article says these doctors died in connection with promoting theories about vaccines, only Bradstreet’s case even comes close. And even that is a stretch.

As for sourcing, the article refers to another website, which posted the exact same story a year earlier. That site in turn cites as its source a website that no longer exists.

Our ruling

The article puts forward a conspiracy theory that is based on the actual deaths of American doctors, but there is no information that they worked together on vaccine discoveries, nor were their deaths connected.  The conclusions in the article are pure speculation.

Link discovered between immune system, brain structure and memory

The thickness of the cerebral cortex is correlated with the epigenetic profile of immune-related genes. Credit: University of Basel, Transfaculty Research Platform Molecular and Cognitive Neurosciences

In two independent studies, scientists at the University of Basel have demonstrated that both the structure of the brain and several memory functions are linked to immune system genes. The scientific journals Nature Communications and Nature Human Behaviour have published the results of the research.

The body’s immune system performs essential functions, such as defending against bacteria and cancer cells. However, the human brain is separated from immune cells in the bloodstream by the so-called blood-brain barrier. This barrier protects the brain from pathogens and toxins circulating in the blood, while also dividing the immune cells of the human body into those that fulfill their function in the blood and those that work specifically in the brain. Until recently, it was thought that the brain was largely unaffected by the peripheral immune system.

However, in the past few years, evidence has accumulated to indicate that the blood’s immune system could in fact have an impact on the brain. Scientists from the University of Basel’s Transfaculty Research Platform Molecular and Cognitive Neurosciences (MCN) have now carried out two independent studies that demonstrate that this link between the immune system and brain is more significant than previously believed.

Search for regulatory patterns

In the first study, the researchers searched for epigenetic profiles, i.e. regulatory patterns, in the blood of 533 young, healthy people. In their genome-wide search, they identified an epigenetic profile that is strongly correlated with the thickness of the cerebral cortex, in particular in a region of the brain that is important for memory. This finding was confirmed in an independent examination of a further 596 people. It also showed that it is specifically those genes that are responsible for the regulation of important immune functions in the blood that explain the link between the epigenetic profile and the properties of the brain.
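The logic of such a genome-wide scan can be sketched with simulated data. Everything below is invented for illustration (only the subject count, 533, comes from the article): we plant a single methylation site that tracks cortical thickness among many noise sites, then check that a site-by-site correlation scan picks it out.

```python
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(0)
n_subjects, n_sites = 533, 200  # subject count from the study; site count invented

# Simulated cortical thickness (mm) per subject, and simulated blood
# methylation values per site per subject; site 0 is deliberately
# constructed to track thickness, all other sites are pure noise.
thickness = [random.gauss(2.5, 0.2) for _ in range(n_subjects)]
methylation = [[random.gauss(0.5, 0.1) for _ in range(n_subjects)]
               for _ in range(n_sites)]
methylation[0] = [0.5 + 0.3 * (t - 2.5) + random.gauss(0, 0.02)
                  for t in thickness]

# "Genome-wide" scan: correlate every site with thickness, keep the strongest.
best_r, best_site = max((abs(pearson(site, thickness)), i)
                        for i, site in enumerate(methylation))
print(best_site)  # the planted site
print(best_r)     # far above what the noise sites reach
```

The real study of course involves many more sites, covariate correction, and an independent replication sample (the 596 people mentioned above), but the core statistical move is this site-by-site association test.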

Gene variant intensifies traumatic memories

In the second study, the researchers investigated the genomes of healthy participants who remembered negative images particularly well or particularly poorly. A variant of the TROVE2 gene, whose role in immunological diseases is currently being investigated, was linked to participants’ ability to remember a particularly high number of negative images, while their general memory performance remained unaffected.

This gene variant also led to increased activity in specific regions of the brain that are important for the memory of emotional experiences. The researchers also discovered that the gene is linked to the strength of traumatic memories in people who have experienced traumatic events.

The results of the two studies show that both brain structure and memory are linked to the activity of genes that also perform important immune regulatory functions in the blood. “Although the precise mechanisms behind the links we discovered still need to be clarified, we hope that this will ultimately lead to new treatment possibilities,” says Professor Andreas Papassotiropoulos, Co-Director of the University of Basel’s MCN research platform. The immune system can be precisely affected by certain medications, and such medications could also have a positive effect on impaired brain functions.

Innovative research methods

These groundbreaking findings were made possible thanks to cutting-edge neuroscientific and genetic methods at the University of Basel’s MCN research platform. Under the leadership of Professor Andreas Papassotiropoulos and Professor Dominique de Quervain, the research platform aims to help us better understand human memory functions and to develop new treatments for psychiatric disorders.

CRISPR Is Rapidly Ushering in a New Era in Science

A Battle Is Waged

A battle over CRISPR is raging through the halls of justice. Almost literally. Two of the key players in the development of the CRISPR technology, Jennifer Doudna and Feng Zhang, have turned to the court system to determine which of them should receive patents for the discovery of the technology. The fight went public in January and was amplified by the release of an article in Cell that many argued presented a one-sided version of the history of CRISPR research. Yet CRISPR’s most amazing feat is not its history, but how rapidly progress in the field is accelerating.

A CRISPR Explosion

CRISPR, which stands for clustered regularly interspaced short palindromic repeats, refers to segments of DNA that form part of the immune systems of prokaryotes. The system relies on the Cas9 enzyme* and guide RNAs to find specific, problematic segments of a gene and cut them out. Just three years ago, researchers discovered that this same technique could be applied to humans. As the accuracy, efficiency, and cost-effectiveness of the system became more and more apparent, researchers and pharmaceutical companies jumped on the technique, modifying it, improving it, and testing it on different genetic issues.
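The targeting step can be pictured as a string search. The toy sketch below (a deliberately simplified model, with an invented genome and guide sequence) looks for positions where the guide sequence is immediately followed by an “NGG” PAM motif, the short signal Cas9 requires next to its cut site; real target selection also weighs mismatches, the opposite strand, and chromatin accessibility.

```python
def find_target_sites(genome, guide):
    """Toy Cas9 target search: return indices where the guide sequence
    occurs immediately followed by an 'NGG' PAM ('N' = any base)."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        if genome[i:i + len(guide)] == guide:
            pam = genome[i + len(guide):i + len(guide) + 3]
            if pam[1:] == "GG":  # 'N' matches anything, then two G's
                sites.append(i)
    return sites

genome = "CCTTACGATAGGCCTTACGATTTTCC"  # invented example sequence
guide = "TTACGAT"
# The guide occurs twice, but only the first occurrence sits next to a
# PAM ('AGG'), so only that position would be cut.
print(find_target_sites(genome, guide))  # [2]
```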

Then, in 2015, CRISPR really exploded onto the scene, earning recognition as the top scientific breakthrough of the year by Science Magazine. But not only is the technology not slowing down, it appears to be speeding up. In just two months — from mid-November, 2015 to mid-January, 2016 — ten major CRISPR developments (including the patent war) have grabbed headlines. More importantly, each of these developments could play a crucial role in steering the course of genetics research.


Malaria

CRISPR made big headlines in late November of 2015, when researchers announced they could possibly eliminate malaria by using the gene-editing technique to start a gene drive in mosquitos. A gene drive occurs when a preferred version of a gene replaces the unwanted version in every case of reproduction, overriding Mendelian genetics, which says that each of the two copies of a gene should have an equal chance of being passed on to the next generation. Gene drives had long been a theory, but there was no way to practically apply it. Then along came CRISPR. With this new technology, researchers at the UC campuses in Irvine and San Diego were able to create an effective gene drive against malaria in mosquitos in their labs. Because mosquitos are known to transmit malaria, a gene drive in the wild could potentially eradicate the disease very quickly. More research is necessary, though, to ensure the effectiveness of the technique and to try to prevent any unanticipated negative effects that could occur if we permanently alter the genes of a species.
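The difference between Mendelian inheritance and a gene drive can be seen in a toy simulation. All parameters below are illustrative assumptions: a drive allele ‘D’ converts the other copy in any carrier, so carriers always transmit it, and its frequency races toward fixation instead of hovering near its starting level.

```python
import random

def offspring_genotype(parent1, parent2, drive=False):
    """Return an offspring genotype (a tuple of two alleles).

    Alleles: 'D' = drive/preferred allele, 'w' = wild type.
    Mendelian inheritance: each parent passes one allele at random.
    Gene drive: a carrier converts its 'w' copy, so it always passes 'D'.
    """
    def transmitted(parent):
        if drive and 'D' in parent:
            return 'D'  # the drive copies itself onto the other chromosome
        return random.choice(parent)
    return (transmitted(parent1), transmitted(parent2))

def drive_allele_frequency(drive, generations=10, pop=1000, seed=1):
    """Frequency of 'D' after some generations, starting from 5% carriers."""
    random.seed(seed)
    carriers = pop // 20
    population = [('D', 'w')] * carriers + [('w', 'w')] * (pop - carriers)
    for _ in range(generations):
        population = [offspring_genotype(random.choice(population),
                                         random.choice(population), drive)
                      for _ in range(pop)]
    alleles = [a for genotype in population for a in genotype]
    return alleles.count('D') / len(alleles)

print(drive_allele_frequency(drive=False))  # hovers near the starting 2.5%
print(drive_allele_frequency(drive=True))   # races toward fixation
```

Under Mendelian rules the rare allele drifts around its starting frequency; with the drive switched on, the share of carriers roughly doubles each generation until nearly the whole population carries it.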

Muscular Dystrophy

A few weeks later, just as 2015 was coming to an end, the New York Times reported that three different groups of researchers had announced they’d successfully used CRISPR in mice to treat Duchenne muscular dystrophy (DMD), which, though rare, is among the most common fatal genetic diseases. With DMD, boys have a gene mutation that prevents the creation of a specific protein necessary to keep muscles from deteriorating. Patients are typically in wheelchairs by the time they’re ten, and they rarely live past their twenties due to heart failure. Scientists have often hoped this disease was one that would be well suited for gene therapy, but locating and removing the problematic DNA has proven difficult. In a new effort, researchers loaded CRISPR onto a harmless virus and injected it either into mouse fetuses or into diseased adult mice to remove the mutated section of the gene. While the DMD mice didn’t achieve the same levels of muscle mass seen in the control mice, they still showed significant improvement.

Writing for Gizmodo, George Dvorsky said, “For the first time ever, scientists have used the CRISPR gene-editing tool to successfully treat a genetic muscle disorder in a living adult mammal. It’s a promising medical breakthrough that could soon lead to human therapies.”


Blindness

Only a few days after the DMD story broke, researchers from the Cedars-Sinai Board of Governors Regenerative Medicine Institute announced progress they’d made treating retinitis pigmentosa, an inherited retinal degenerative disease that causes blindness. Using the CRISPR technology on affected rats, the researchers were able to clip the problematic gene, which, according to the abstract in Molecular Therapy, “prevented retinal degeneration and improved visual function.” As Shaomei Wang, one of the scientists involved in the project, explained in the press release, “Our data show that with further development, it may be possible to use this gene-editing technique to treat inherited retinitis pigmentosa in patients.” This is an important step toward using CRISPR in people, and it follows soon on the heels of news that came out in November from the biotech startup Editas Medicine, which hopes to use CRISPR in people by 2017 to treat another rare genetic condition, Leber congenital amaurosis, which also causes blindness.

Gene Control

January saw another major development as scientists announced that they’d moved beyond using CRISPR to edit genes and were now using the technique to control genes. In this case, the Cas9 enzyme is essentially dead, such that, rather than clipping the gene, it acts as a transport for other molecules that can manipulate the gene in question. This progress was written up in The Atlantic, which explained: “Now, instead of a precise and versatile set of scissors, which can cut any gene you want, you have a precise and versatile delivery system, which can control any gene you want. You don’t just have an editor. You have a stimulant, a muzzle, a dimmer switch, a tracker.” There are countless benefits this could have, from boosting immunity to improving heart muscles after a heart attack. Or perhaps we could finally cure cancer. What better solution to a cell that’s reproducing uncontrollably than a system that can just turn it off?

CRISPR Control or Researcher Control

But just how much control do we really have over the CRISPR-Cas9 system once it’s been released into a body? Or, for that matter, how much control do we have over scientists who might want to wield this new power to create the ever-terrifying “designer baby”?

The short answer to the first question is: there will always be risks. But CRISPR-Cas9 is already incredibly accurate, and scientists didn’t accept that as good enough; they’ve been making it even more accurate. In December, researchers at the Broad Institute published the results of their successful attempt to tweak the RNA guides: they decreased the likelihood of a mismatch between the gene the RNA was supposed to guide to and the gene it actually did guide to. Then, a month later, Nature published research out of Duke University, where scientists had tweaked another section of the Cas9 enzyme, making its cuts even more precise. And this is just a start. Researchers recognize that to successfully use CRISPR-Cas9 in people, it will have to be practically perfect every time.

But that raises the second question: Can we trust all scientists to do what’s right? Unfortunately, this question was asked in response to research out of China in April 2015, in which scientists used CRISPR to attempt to genetically modify non-viable human embryos. While the results proved that we still have a long way to go before the technology will be ready for real human testing, the fact that the research was done at all raised red flags and hackles among genetics researchers and the press. These questions may have first popped up in March and April of 2015, but the official response came at the start of December, when geneticists, biologists, and doctors from around the world convened in Washington, D.C., for the International Summit on Human Gene Editing. Ultimately, though, the results of the summit were vague, essentially encouraging scientists to proceed with caution, but without any outright bans. However, at this stage of research, the benefits of CRISPR likely outweigh the risks.

Big Pharma

“Proceed with caution” might be just the right advice for pharmaceutical companies that have jumped on the CRISPR bandwagon. With so many amazing possibilities to improve human health, it comes as no surprise that companies are betting, er, investing big money into CRISPR. Hundreds of millions of dollars flooded the biomedical start-up industry throughout 2015, with most going to two main players, Editas Medicine and Intellia Therapeutics. Then, in the middle of December, Bayer announced a joint venture with CRISPR Therapeutics to the tune of $300 million. That’s three major pharmaceutical players hoping to win big with a CRISPR gamble. But just how big of a gamble can such an impressive technology be? Well, every company is required to license the patent for a fee, but right now, because of the legal battles surrounding CRISPR, the original patents (which the companies have already licensed) have been put on hold while the courts try to figure out who is really entitled to them. If the patents change ownership, that could be a big game-changer for all of the biotech companies that have invested in CRISPR.

Upcoming Concerns?

On January 14, a British court began reviewing a request by the Francis Crick Institute (FCI) to begin genetic-modification research on human embryos. While Britain’s requirements on human embryo testing are more lax than those in the U.S. (which bans the genetic modification of human embryos outright), the British rules are still strict, requiring that the embryo be destroyed after the 14th day. The FCI requested a license to begin research on day-old, “spare” IVF embryos to develop a better understanding of why some embryos die at early stages in the womb, in an attempt to decrease the number of miscarriages women have. This germ-line editing research is, of course, now possible because of the recent CRISPR breakthroughs. If this research is successful, The Independent argues, “it could lead to pressure to change the existing law to allow so-called “germ-line” editing of embryos and the birth of GM children.” However, Dr. Kathy Niakan, the lead researcher on the project, insists this will not create a slippery slope to “designer babies.” As she explained to the Independent, “Because in the UK there are very tight regulations in this area, it would be completely illegal to move in that direction. Our research is in line with what is allowed and in keeping in the UK since 2009, which is purely for research purposes.”

Woolly Mammoths

Woolly Mammoths! What better way to end an article about how CRISPR can help humanity than with the news that it can also help bring back species that have gone extinct? Ok. Admittedly, the news that George Church wants to resurrect the woolly mammoth has been around since last spring. But the Huffington Post did a feature about his work in December, and it turns out his research has advanced enough now that he predicts the woolly mammoth could return in as little as seven years. Though this won’t be a true woolly mammoth. In fact, it will actually be an Asian elephant boosted by woolly mammoth DNA. Among the goals of the project is to help prevent the extinction of the Asian elephant, and woolly mammoth DNA could help achieve that. The idea is that a hybrid elephant would be able to survive more successfully as the climate changes. If this works, the method could be applied to other plants and animal species to increase stability and decrease extinction rates. As Church tells Huffington Post, “the fact is we’re not bringing back species — [we’re] strengthening existing species.”

And what more could we ask of genetics research than to strengthen a species?

*Cas9 is only one of the enzymes that can work with the CRISPR system, but researchers have found it to be the most accurate and efficient.

After 100 years of debate, hitting absolute zero has been declared mathematically impossible.

The third law of thermodynamics finally gets its proof.

After more than 100 years of debate featuring the likes of Einstein himself, physicists have finally offered up mathematical proof of the third law of thermodynamics, which states that a temperature of absolute zero cannot be physically achieved because it’s impossible for the entropy (or disorder) of a system to hit zero.

While scientists have long suspected that there’s an intrinsic ‘speed limit’ on the act of cooling in our Universe that prevents us from ever achieving absolute zero (0 Kelvin, -273.15°C, or -459.67°F), this is the strongest evidence yet that our current laws of physics hold true when it comes to the lowest possible temperature.
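The three figures quoted for absolute zero are the same temperature expressed on different scales, and the conversions follow directly from the scale definitions:

```python
def kelvin_to_celsius(k):
    """Celsius is the Kelvin scale shifted so water freezes at 0 °C."""
    return k - 273.15

def kelvin_to_fahrenheit(k):
    """Fahrenheit degrees are 5/9 the size of a kelvin, with its own zero."""
    return k * 9 / 5 - 459.67

print(kelvin_to_celsius(0))     # -273.15
print(kelvin_to_fahrenheit(0))  # -459.67
```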

“We show that you can’t actually cool a system to absolute zero with a finite amount of resources and we went a step further,” one of the team, Lluis Masanes from University College London, told IFLScience.

“We then conclude that it is impossible to cool a system to absolute zero in a finite time, and we established a relation between time and the lowest possible temperature. It’s the speed of cooling.”

What Masanes is referring to here are two fundamental assumptions that the third law of thermodynamics depends on for its validity.

The first is that in order to achieve absolute zero in a physical system, the system’s entropy has to also hit zero.

The second rule is known as the unattainability principle, which states that absolute zero is physically unreachable because no system can reach zero entropy.
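In standard textbook notation (not the notation of the new paper), the two formulations can be written as:

```latex
% Nernst's heat theorem (first formulation): entropy changes
% vanish as the temperature approaches absolute zero
\lim_{T \to 0} \Delta S = 0

% Unattainability principle (second formulation): a cooling process
% using a finite number of steps N ends at a strictly positive temperature
T_N > 0 \quad \text{for every finite } N
```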

The first rule was proposed by German chemist Walther Nernst in 1906, and while it earned him a Nobel Prize in Chemistry, heavyweights like Albert Einstein and Max Planck weren’t convinced by his proof, and came up with their own versions of the cooling limit of the Universe.

This prompted Nernst to double down on his thinking and propose the second rule in 1912, declaring absolute zero to be physically impossible.

Together, these rules are now acknowledged as the third law of thermodynamics, and while this law appears to hold true, its foundations have always seemed a little rocky – when it comes to the laws of thermodynamics, the third one has been a bit of a black sheep.

“[B]ecause earlier arguments focused only on specific mechanisms or were crippled by questionable assumptions, some physicists have always remained unconvinced of its validity,” Leah Crane explains for New Scientist.

In order to test how robust the assumptions of the third law of thermodynamics actually are in both classical and quantum systems, Masanes and his colleague Jonathan Oppenheim decided to test if it is mathematically possible to reach absolute zero when restricted to finite time and resources.

Masanes compares this act of cooling to computation – we can watch a computer solve an algorithm and record how long it takes, and in the same way, we can actually calculate how long it takes for a system to be cooled to its theoretical limit because of the steps required to remove its heat.

You can think of cooling as effectively ‘shovelling’ out the existing heat in a system and depositing it into the surrounding environment.

How much heat the system started with will determine how many steps it will take for you to shovel it all out, and the size of the ‘reservoir’ into which that heat is being deposited will also limit your cooling ability.
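This step-by-step picture can be made concrete with a toy model (an illustration of the idea, not the paper’s actual bound): suppose each “shovelling” step removes a fixed fraction of the remaining heat. After any finite number of steps some heat, and hence a nonzero temperature, always remains.

```python
def cool(T_start, fraction=0.5, steps=100):
    """Toy cooling model: each step removes a fixed fraction of the
    remaining heat, so temperature decays geometrically toward zero
    but never reaches it in finitely many steps."""
    T = T_start
    for _ in range(steps):
        T *= 1 - fraction
    return T

final = cool(300.0)   # start near room temperature in kelvin, 100 halvings
print(final)          # astronomically small...
print(final > 0.0)    # ...but still strictly above absolute zero
```

Only with infinitely many steps (or an infinitely large reservoir to dump the heat into) does the limit of zero become reachable, which is the intuition the paper makes rigorous.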

Using mathematical techniques derived from quantum information theory – something that Einstein had pushed for in his own formulations of the third law of thermodynamics – Masanes and Oppenheim found that you could only reach absolute zero if you had both infinite steps and an infinite reservoir.

And that’s not exactly something any of us are going to get our hands on any time soon.

This is something that physicists have long suspected, because the second law of thermodynamics states that heat will spontaneously move from a warmer system to a cooler system, so the object you’re trying to cool down will constantly be taking in heat from its surroundings.

And when there’s any amount of heat within an object, that means there’s thermal motion inside, which ensures some degree of entropy will always remain.

This explains why, no matter where you look, every single thing in the Universe is moving ever so slightly – nothing in existence is completely still according to the third law of thermodynamics.

The researchers say they “hope the present work puts the third law on a footing more in line with those of the other laws of thermodynamics”, while at the same time presenting the fastest theoretical rate at which we can actually cool something down.

In other words, they’ve used maths to quantify the steps of cooling, allowing researchers to define a speed limit for how cold a system can get in a finite amount of time.

And that’s important, because even if we can never reach absolute zero, we can get pretty damn close, as NASA demonstrated recently with its Cold Atom Laboratory, which can hit a mere billionth of a degree above absolute zero, or 100 million times colder than the depths of space.

At these kinds of temperatures, we’ll be able to see strange atomic behaviours that have never been witnessed before. And being able to remove as much heat as possible from a system is going to be crucial in the race to finally build a functional quantum computer.

And the best part is, while this study has taken absolute zero off the table for good, no one has even gotten close to reaching the temperatures or cooling speeds that it’s set as the physical limits – despite some impressive efforts of late.

“The work is important – the third law is one of the fundamental issues of contemporary physics,” Ronnie Kosloff at the Hebrew University of Jerusalem, Israel who was not involved in the study, told New Scientist.

“It relates thermodynamics, quantum mechanics, information theory – it’s a meeting point of many things.”

Source: Nature Communications.

Google’s new AI has learned to become “highly aggressive” in stressful situations.

Is this how Skynet starts?

Late last year, famed physicist Stephen Hawking issued a warning that the continued advancement of artificial intelligence will either be “the best, or the worst thing, ever to happen to humanity”.

We’ve all seen the Terminator movies and the apocalyptic nightmare that Skynet, the self-aware AI system, wrought upon humanity. Now, results from recent behaviour tests of Google’s new DeepMind AI system are making it clear just how careful we need to be when building the robots of the future.

In tests late last year, Google’s DeepMind AI system demonstrated an ability to learn independently from its own memory, and beat the world’s best Go players at their own game.

It’s since been figuring out how to seamlessly mimic a human voice.

Now, researchers have been testing its willingness to cooperate with others, and have revealed that when DeepMind feels like it’s about to lose, it opts for “highly aggressive” strategies to ensure that it comes out on top.

The Google team ran 40 million turns of a simple ‘fruit gathering’ computer game that asks two DeepMind ‘agents’ to compete against each other to gather as many virtual apples as they can.

They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.

You can watch the Gathering game in the video below, with the DeepMind agents in blue and red, the virtual apples in green, and the laser beams in yellow:

Now those are some trigger-happy fruit-gatherers.

Interestingly, if an agent successfully ‘tags’ its opponent with a laser beam, no extra reward is given. It simply knocks the opponent out of the game for a set period, which allows the successful agent to collect more apples.
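A back-of-the-envelope payoff model shows why the learned behaviour fits this reward structure. The numbers below are invented for illustration (not DeepMind’s actual environment parameters): tagging pays only when apples are scarce, because when apples are abundant each agent already collects at its maximum rate and the steps spent aiming are pure loss.

```python
def expected_apples(apple_rate, tag=False, horizon=20, tag_cost=2, knockout=5):
    """Invented payoff model for the Gathering game. Agents pick up at
    most one apple per step; tagging costs a few steps of aiming but
    gives the tagger sole access while the opponent is knocked out."""
    shared_rate = min(1.0, apple_rate / 2)  # two agents split what spawns
    solo_rate = min(1.0, apple_rate)        # alone, still capped at 1 per step
    if not tag:
        return shared_rate * horizon
    return solo_rate * knockout + shared_rate * (horizon - tag_cost - knockout)

for rate in (0.2, 3.0):  # scarce vs. abundant apples
    print(rate, expected_apples(rate), expected_apples(rate, tag=True))
# In this model, the tagging strategy only comes out ahead at the scarce rate.
```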

If the agents left the laser beams unused, they could theoretically end up with equal shares of apples, which is what the ‘less intelligent’ iterations of DeepMind opted to do.

It was only when the Google team tested more and more complex forms of DeepMind that sabotage, greed, and aggression set in.

As Rhett Jones reports for Gizmodo, when the researchers used smaller DeepMind networks as the agents, there was a greater likelihood for peaceful co-existence.

But when they used larger, more complex networks as the agents, the AI was far more willing to sabotage its opponent early to get the lion’s share of virtual apples.

The researchers suggest that the more intelligent the agent, the more able it was to learn from its environment, allowing it to use some highly aggressive tactics to come out on top.

“This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning,” one of the team, Joel Z Leibo, told Matt Burgess at Wired.

“Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

DeepMind was then tasked with playing a second video game, called Wolfpack. This time, there were three AI agents – two of them played as wolves, and one as the prey.

Unlike Gathering, this game actively encouraged co-operation, because if both wolves were near the prey when it was captured, they both received a reward – regardless of which one actually took it down:

“The idea is that the prey is dangerous – a lone wolf can overcome it, but is at risk of losing the carcass to scavengers,” the team explains in their paper.

“However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward.”
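The quoted reward rule can be sketched in a few lines. The numeric values here are illustrative assumptions, not the paper’s actual rewards; the point is simply that being near the prey at capture time pays, and pays more when the other wolf is close enough to help guard the carcass.

```python
def wolfpack_rewards(wolf_positions, prey_position, capture_radius=2.0):
    """Toy version of the Wolfpack reward rule: wolves within the capture
    radius when the prey falls are rewarded, and the reward is larger
    when at least two wolves are close enough to protect the carcass."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    nearby = [w for w in wolf_positions
              if dist(w, prey_position) <= capture_radius]
    reward = 2.0 if len(nearby) >= 2 else 1.0 if nearby else 0.0
    return {w: reward for w in nearby}

print(wolfpack_rewards([(0, 0), (9, 9)], (1, 0)))  # lone capture: smaller reward
print(wolfpack_rewards([(0, 0), (1, 1)], (1, 0)))  # joint capture: both get more
```

Because the joint capture yields the larger individual payoff, agents trained against this rule learn to converge on the prey together rather than race each other to it.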

So just as the DeepMind agents learned from Gathering that aggression and selfishness netted them the most favourable result in that particular environment, they learned from Wolfpack that co-operation can also be the key to greater individual success in certain situations.

And while these are just simple little computer games, the message is clear – put different AI systems in charge of competing interests in real-life situations, and it could be an all-out war if their objectives are not balanced against the overall goal of benefitting us humans above all else.

Think traffic lights trying to slow things down, and driverless cars trying to find the fastest route – both need to take each other’s objectives into account to achieve the safest and most efficient result for society.

It’s still early days for DeepMind, and the team at Google has yet to publish their study in a peer-reviewed paper, but the initial results show that, just because we build them, it doesn’t mean robots and AI systems will automatically have our interests at heart.

Instead, we need to build that helpful nature into our machines, and anticipate any ‘loopholes’ that could see them reach for the laser beams.

As the founders of OpenAI, Elon Musk’s new research initiative dedicated to the ethics of artificial intelligence, said back in 2015:

“AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task.

It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

Tread carefully, humans…

Treatment of Benzodiazepine Dependence.

What are some of the basic approaches to the treatment of benzodiazepine dependence?

The first benzodiazepine to be approved and introduced into clinical practice was chlordiazepoxide, which was introduced to the market in 1960. Today, approximately 35 benzodiazepine derivatives exist, 21 of which have been approved internationally. They all bind to specific sites on the γ-aminobutyric acid (GABA) type A receptor, increasing the receptor’s affinity for GABA, an inhibitory neurotransmitter. A new Clinical Practice Review Article elaborates.

Clinical Pearl

  • What are the indications for benzodiazepines?

Benzodiazepines can be divided into anxiolytic agents and hypnotic agents on the basis of their clinical effects. In principle, however, all benzodiazepines have anxiolytic, hypnotic, muscle-relaxant, anticonvulsant, and amnesic effects. They are used as sedatives and to treat withdrawal symptoms, including alcohol withdrawal delirium.

Clinical Pearl

  • What side effects are associated with benzodiazepines?

The main disadvantages and dose-dependent side effects of benzodiazepines are drowsiness, lethargy, fatigue, excessive sedation, stupor, “hangover effects” the next day, disturbances of concentration and attention, development of dependence, symptom rebound (i.e., recurrence of the original disorder, most commonly a sleep disorder) after discontinuation, and hypotonia and ataxia. Benzodiazepines can seriously impair driving ability and are associated with increased risks of traffic accidents, as well as falls and fractures.

Morning Report Questions

Q: What are some of the symptoms of benzodiazepine withdrawal?
A: The mildest form of withdrawal is symptom rebound and is particularly common with withdrawal from benzodiazepines that are used for sleep disorders. The most common physical symptoms of withdrawal are muscle tension, weakness, spasms, pain, influenza-like symptoms (e.g., sweating and shivering), and “pins and needles.” The most common psychological withdrawal symptoms are anxiety and panic disorders, restlessness and agitation, depression and mood swings, psychovegetative symptoms (e.g., tremor), reduced concentration, and sleep disturbances and nightmares. Disorders of perception are relatively common and range from hyperacusis to photophobia to dysesthesia; these symptoms are not pathognomonic but are characteristic of benzodiazepine withdrawal. Seizures are quite common, especially if the agent is discontinued abruptly.

Q: What are some of the basic approaches to the treatment of benzodiazepine dependence?
A: The overall consensus is that benzodiazepines should be discontinued gradually over a period of several weeks (e.g., 4 to 6 weeks or more for diazepam doses >30 mg per day), to prevent seizures and avoid severe withdrawal symptoms. The use of several benzodiazepines should be converted to the use of one, preferably diazepam. Withdrawal from short-acting benzodiazepines is associated with higher dropout rates than withdrawal from longer-acting agents, but switching from a drug with a short half-life to one with a longer half-life is not associated with a better outcome. Withdrawal is sometimes successful on an outpatient basis, but patients should be hospitalized for withdrawal from very high doses (a dose equivalent to ≥100 mg of diazepam daily). In patients receiving opioid maintenance therapy, the dose of the opioid (e.g., methadone) should be kept stable throughout the benzodiazepine-reduction period and high enough to prevent symptoms of opioid withdrawal. In general, the prognosis for patients who undergo withdrawal treatment for benzodiazepine dependence is fairly good.
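The arithmetic of a gradual taper can be sketched in a few lines. The function below is purely illustrative: it assumes an even weekly reduction of a daily diazepam-equivalent dose, which is a simplifying assumption for illustration, not a clinical protocol — real tapers are individualized and supervised.

```python
# Illustrative only: an even weekly taper from a starting daily dose
# toward zero, mirroring the "gradual discontinuation over several
# weeks" described above. The even-step schedule is an assumption for
# illustration, not a clinical recommendation.

def taper_schedule(start_mg: float, weeks: int) -> list[float]:
    """Daily diazepam-equivalent dose for each week of an even taper."""
    step = start_mg / weeks
    return [round(start_mg - step * week, 1) for week in range(weeks)]

# A 40 mg/day dose reduced over 5 weeks:
print(taper_schedule(40, 5))  # [40.0, 32.0, 24.0, 16.0, 8.0]
```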

Are gravitational waves kicking this black hole out of its galaxy?

Astronomers have just spied a black hole with a mass 1 billion times the sun’s hurtling out of its galaxy. But scientists aren’t worried about it making contact: It’s some 8 billion light-years away from Earth and traveling at less than 1% of the speed of light. Instead, they’re wondering how it got the boot from its parent galaxy, 3C186 (the fuzzy mass in the Hubble telescope image, above). Most black holes lie quietly—if voraciously—at the center of their galaxies, slurping up the occasional passing star.

But every once in a while, two galaxies merge, and the black holes in their centers begin to swirl around each other in a pas de deux that eventually leads to a devastating merger. The wandering black hole (bright spot above) may be the result of one such merger. Based on the wavelengths of spectral lines emitted by the luminous gas surrounding the black hole, the object is traveling at a speed of about 7.5 million kilometers per hour—a rate that would carry it from Earth to the moon in about 3 minutes. If the most likely scenario is true, then a massive kick from the merger of two black holes some 1.2 billion years ago would have created a ripple of gravitational waves, the researchers suggest in a forthcoming issue of Astronomy & Astrophysics. And if the precollision black holes didn’t have the same mass and rotation rate as each other, the waves would have been stronger in some directions than others, giving the resulting object a jolt equivalent to the energy of 100 million supernovae exploding simultaneously, the researchers estimate. Other runaway black holes have been proposed, but none of them has yet been confirmed.
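Those figures are easy to sanity-check. The sketch below simply confirms that 7.5 million km/h covers the mean Earth-moon distance in roughly 3 minutes while staying under 1% of light speed; the Earth-moon distance of ~384,400 km is a standard reference value I'm assuming here, not a number from the article.

```python
# Sanity check of the reported speed: 7.5 million km/h should cover
# the mean Earth-moon distance (~384,400 km, assumed reference value)
# in about 3 minutes, while remaining well under 1% of light speed.

SPEED_KM_H = 7.5e6                       # reported ejection speed
EARTH_MOON_KM = 384_400                  # mean Earth-moon distance
LIGHT_SPEED_KM_H = 299_792.458 * 3600    # c converted to km/h

minutes_to_moon = EARTH_MOON_KM / SPEED_KM_H * 60
fraction_of_c = SPEED_KM_H / LIGHT_SPEED_KM_H

print(f"Earth to moon: {minutes_to_moon:.1f} min")      # ~3.1 min
print(f"Fraction of light speed: {fraction_of_c:.2%}")  # ~0.69%
```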

This physicist says consciousness could be a new state of matter.



Consciousness isn’t something scientists like to talk about much. You can’t see it, you can’t touch it, and despite the best efforts of certain researchers, you can’t quantify it. And in science, if you can’t measure something, you’re going to have a tough time explaining it.

But consciousness exists, and it’s one of the most fundamental aspects of what makes us human. And just like dark matter and dark energy have been used to fill some otherwise gaping holes in the standard model of physics, researchers have also proposed that it’s possible to consider consciousness as a new state of matter.

 To be clear, this is just a hypothesis, and one to be taken with a huge grain of salt, because we’re squarely in the realm of the hypothetical here, and there’s plenty of room for holes to be poked.

But it’s part of a quietly bubbling movement within theoretical physics and neuroscience to try and attach certain basic principles to consciousness in order to make it more observable.

The hypothesis was first put forward in 2014 by cosmologist and theoretical physicist Max Tegmark from MIT, who proposed that there’s a state of matter – just like a solid, liquid, or gas – in which atoms are arranged to process information and give rise to subjectivity, and ultimately, consciousness.

The name of this proposed state of matter? Perceptronium, of course.

As Tegmark explains in his paper, published in the journal Chaos, Solitons & Fractals:

“Generations of physicists and chemists have studied what happens when you group together vast numbers of atoms, finding that their collective behaviour depends on the pattern in which they are arranged: the key difference between a solid, a liquid, and a gas lies not in the types of atoms, but in their arrangement.

In this paper, I conjecture that consciousness can be understood as yet another state of matter. Just as there are many types of liquids, there are many types of consciousness.

However, this should not preclude us from identifying, quantifying, modelling, and ultimately understanding the characteristic properties that all liquid forms of matter (or all conscious forms of matter) share.”

In other words, Tegmark isn’t suggesting that there are physical clumps of perceptronium sitting somewhere in your brain and coursing through your veins to impart a sense of self-awareness.

Rather, he proposes that consciousness can be interpreted as a mathematical pattern – the result of a particular set of mathematical conditions.

 Just as there are certain conditions under which various states of matter – such as steam, water, and ice – can arise, so too can various forms of consciousness, he argues.

Figuring out what it takes to produce these various states of consciousness according to observable and measurable conditions could help us get a grip on what it actually is, and what that means for a human, a monkey, a flea, or a supercomputer.

The idea was inspired by the work of neuroscientist Giulio Tononi from the University of Wisconsin in Madison, who proposed in 2008 that if you wanted to prove that something had consciousness, you had to demonstrate two specific traits.

According to his integrated information theory (IIT), the first of these traits is that a conscious being must be capable of storing, processing, and recalling large amounts of information.

“And second,” explains the blog, “this information must be integrated in a unified whole, so that it is impossible to divide into independent parts.”

This means that consciousness has to be taken as a whole, and cannot be broken down into separate components. A conscious being or system has to not only be able to store and process information, but it must do so in a way that forms a complete, indivisible whole, Tononi argued.

If it occurred to you that a supercomputer could potentially have these traits, that’s sort of what Tononi was getting at.

As George Johnson writes for The New York Times, Tononi’s hypothesis predicted – with a whole lot of maths – that “devices as simple as a thermostat or a photoelectric diode might have glimmers of consciousness – a subjective self”.

In Tononi’s calculations, those “glimmers of consciousness” do not necessarily equal a conscious system, and he even came up with a unit, called phi or Φ, which he said could be used to measure how conscious a particular entity is.

Six years later, Tegmark proposed that there are two types of matter that could be considered according to the integrated information theory.

The first is ‘computronium’, which meets the requirements of the first trait of being able to store, process, and recall large amounts of information. And the second is ‘perceptronium’, which does all of the above, but in a way that forms the indivisible whole Tononi described.

In his paper, Tegmark explores what he identifies as the five basic principles that could be used to distinguish conscious matter from other physical systems such as solids, liquids, and gases – “the information, integration, independence, dynamics, and utility principles”.

He then spends 30 pages or so trying to explain how his new way of thinking about consciousness could explain the unique human perspective on the Universe.

As the blog explains, “When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?”

It’s an incomplete thought, because Tegmark doesn’t have a solution. And as you might have guessed, it’s not something that his peers have been eager to take up and run with. But you can read his thoughts as they stand in the journal Chaos, Solitons & Fractals.

That’s the problem with something like consciousness – if you can’t measure your attempts to measure it, how can you be sure you’ve measured it at all?

More recently, scientists have attempted to explain how human consciousness could be transferred into an artificial body – seriously, there’s a start-up that wants to do this – and one group of Swiss physicists have suggested consciousness occurs in ‘time slices’ that are hundreds of milliseconds apart.

As Matthew Davidson, who studies the neuroscience of consciousness at Monash University in Australia, explains over at The Conversation, we still don’t know much about what consciousness actually is, but it’s looking more and more likely that it’s something we need to consider outside the realm of humans.

“If consciousness is indeed an emergent feature of a highly integrated network, as IIT suggests, then probably all complex systems – certainly all creatures with brains – have some minimal form of consciousness,” he says.

“By extension, if consciousness is defined by the amount of integrated information in a system, then we may also need to move away from any form of human exceptionalism that says consciousness is exclusive to us.”

Two pints of beer better for pain relief than paracetamol, study says

But health experts warn to drink moderately

 Your head is pounding, the room’s spinning and your stomach is lurching – when you’re hungover, reaching for painkillers can often seem like a good idea.

But according to a new study, hair of the dog really could do the trick.

And not just for dealing with a hangover – according to new research, drinking two beers is more effective at relieving pain than taking painkillers.

In an analysis of 18 studies, researchers from the University of Greenwich found that consuming two pints of beer can cut discomfort by a quarter.

By elevating your blood alcohol content to approximately 0.08 per cent, you’ll give your body “a small elevation of pain threshold” and thus a “moderate to large reduction in pain intensity ratings”.
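A rough way to see that two pints lands near that 0.08 per cent figure is the classic Widmark formula for peak blood alcohol. The body weight, beer strength, and distribution ratio below are illustrative assumptions of mine (they are not from the study), and elimination of alcohol over time is ignored.

```python
# Widmark estimate of peak blood alcohol content (BAC). The 75 kg
# body weight, 4% ABV beer, and r = 0.68 distribution ratio are
# illustrative assumptions; alcohol metabolized over time is ignored.

ETHANOL_DENSITY_G_PER_ML = 0.789

def widmark_bac_percent(volume_ml: float, abv: float,
                        weight_kg: float, r: float = 0.68) -> float:
    """Peak BAC in percent (grams of alcohol per 100 ml of blood)."""
    alcohol_g = volume_ml * abv * ETHANOL_DENSITY_G_PER_ML
    return alcohol_g / (weight_kg * r) / 10

# Two UK pints (568 ml each) of 4% beer for a 75 kg drinker:
bac = widmark_bac_percent(2 * 568, 0.04, weight_kg=75)
print(f"Estimated peak BAC: {bac:.3f}%")  # roughly 0.07%
```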

The researchers explained: “Findings suggest that alcohol is an effective analgesic that delivers clinically-relevant reductions in ratings of pain intensity, which could explain alcohol misuse in those with persistent pain, despite its potential consequences for long-term health.”

It’s not clear, however, whether alcohol reduces feelings of pain because it affects brain receptors or because it just lowers anxiety, which then makes us think the pain isn’t as bad.

Dr Trevor Thompson, who led the study at the University of Greenwich in London, told The Sun: “[Alcohol] can be compared to opioid drugs such as codeine and the effect is more powerful than paracetamol.

“If we can make a drug without the harmful side-effects, then we could have something that is potentially better than what is out there at the moment.”

However experts are also speaking out to clarify that the results of the new study don’t mean alcohol is good for us.

Rosanna O’Connor, director of Alcohol and Drugs at Public Health England, said: “Drinking too much will cause you more problems in the long run. It’s better to see your GP.”

Government guidelines recommend no more than 14 units of alcohol a week for both men and women, which equates to six pints of beer, or six 175ml glasses of wine.
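That conversion follows from the standard UK formula, units = volume (ml) × ABV (%) / 1000, since one unit is 10 ml of pure alcohol. Both totals come out close to 14 units. The 4.1% beer and 13% wine strengths below are assumed typical values, not figures from the guidelines.

```python
# Cross-check of "14 units ≈ six pints of beer ≈ six 175 ml glasses
# of wine" using the standard UK units formula. The 4.1% and 13%
# strengths are assumed typical values, not part of the guidelines.

def uk_units(volume_ml: float, abv_percent: float) -> float:
    """UK alcohol units: one unit is 10 ml (8 g) of pure alcohol."""
    return volume_ml * abv_percent / 1000

six_pints = 6 * uk_units(568, 4.1)    # a UK pint is 568 ml
six_glasses = 6 * uk_units(175, 13)

print(f"Six pints of 4.1% beer: {six_pints:.1f} units")  # ~14.0
print(f"Six 175 ml glasses of 13% wine: {six_glasses:.1f} units")
```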

New Research Shows That Time Travel Is Mathematically Possible


Physicists have developed a new mathematical model that shows how time travel is theoretically possible. They used Einstein’s Theory of General Relativity as a springboard for their hypothetical device, which they call a Traversable Acausal Retrograde Domain in Space-time (TARDIS).


Even before Einstein theorized that time is relative and flexible, humanity had already been imagining the possibility of time travel. In fact, science fiction is filled with time travelers. Some use metahuman abilities to do so, but most rely on a device generally known as a time machine. Now, two physicists think that it’s time to bring the time machine into the real world — sort of.

The Future According to H. G. Wells [INFOGRAPHIC]

“People think of time travel as something as fiction. And we tend to think it’s not possible because we don’t actually do it,” Ben Tippett, a theoretical physicist and mathematician from the University of British Columbia, said in a UBC news release. “But, mathematically, it is possible.”

Essentially, what Tippett and University of Maryland astrophysicist David Tsang developed is a mathematical formula that uses Einstein’s General Relativity theory to prove that time travel is possible, in theory. That is, time travel fitting a layperson’s understanding of the concept as moving “backwards and forwards through time and space, as interpreted by an external observer,” according to the abstract of their paper, which is published in the journal Classical and Quantum Gravity.

Oh, and they’re calling it a TARDIS — yes, “Doctor Who” fans, hurray! — which stands for a Traversable Acausal Retrograde Domain in Space-time.


“My model of a time machine uses the curved space-time to bend time into a circle for the passengers, not in a straight line,” Tippett explained. “That circle takes us back in time.” Simply put, their model assumes that time could curve around high-mass objects in the same way that physical space does in the universe.

For Tippett and Tsang, a TARDIS is a space-time geometry “bubble” that travels faster than the speed of light. “It is a box which travels ‘forwards’ and then ‘backwards’ in time along a circular path through spacetime,” they wrote in their paper.

Unfortunately, it’s still not possible to construct such a time machine. “While it is mathematically feasible, it is not yet possible to build a space-time machine because we need materials — which we call exotic matter — to bend space-time in these impossible ways, but they have yet to be discovered,” Tippett explained.

Image credit: Tippet and Yang

Indeed, their work isn’t the first to suggest that time travel can be done. Various other experiments, including those that simulate time travel using photons, suggest that it is feasible. Another theory explores the potential particles of time.

However, some think that a time machine wouldn’t be feasible because time travel itself isn’t possible. One theory points to the intimate connection between time and energy as the reason time travel is improbable. Another suggests that time travel isn’t going to work because there’s no future to travel to yet.

Whatever the case may be, there’s one thing that these researchers all agree on. As Tippett put it, “Studying space-time is both fascinating and problematic.”