Use of Paraquat in US Increases While 32 Countries Have Banned It


In the U.S., farmers dependent on genetically engineered (GE) Roundup Ready soybean crops are increasingly running into a problem. Weeds are quickly becoming resistant to Roundup herbicide, leaving the farmers desperate for an effective chemical alternative.


Story at-a-glance

  • Paraquat is an herbicide commonly used in suicide attempts, because it can kill a person in a single sip
  • Increasing research suggests exposure to paraquat is associated with an increased risk of Parkinson’s disease
  • More than 100 crops worldwide are sprayed with paraquat, including in the U.S., even though it’s been banned in 32 countries due to its high toxicity

Paraquat is being marketed as one such alternative, but this pesticide is highly toxic to humans, according to the U.S. Environmental Protection Agency (EPA), to the extent that just one sip can be fatal.1

Sadly, it’s become an herbicide commonly used for suicide attempts,2 but in countries that have banned its use, such as South Korea, suicide rates dropped significantly.

The association was so strong that researchers wrote in 2015, “Paraquat prohibition should be considered as a national suicide prevention strategy in developing and developed countries.”3

Europe is ahead of the game, having already prohibited the use of paraquat. In all, 32 countries have banned the chemical.4

In Switzerland, paraquat has been banned since 1989, but the country still allows it to be produced, as long as it’s shipped for use elsewhere, like in the U.S., where use is actually growing dramatically.

Despite Parkinson’s Concerns, Use of Paraquat in US Increases

Paraquat is banned in Europe not only because of its highly lethal nature but also because it’s linked to Parkinson’s disease. The EPA is aware of the issue, and reported “a large body of epidemiology data on paraquat dichloride use and Parkinson’s disease.”5

They’re currently reviewing whether to allow continued use in the U.S., but not with the urgency one would expect; a decision isn’t even expected until 2018. In the meantime, its connection to Parkinson’s disease grows.

“Paraquat causes tissue damage by setting off a redox cycle that generates toxic superoxide free radicals,” researchers wrote in the International Journal of Environmental Research and Public Health.6

In acute exposures, it typically kills by damaging lung tissue, as one biochemist and toxicologist explained in a warning about paraquat:7

“I should warn you that it probably has killed more people than all other pesticides combined. It is taken up by a transport system in the lung where it generates a variety of reactive oxygen species and burns the lungs; people normally drown several weeks later of pneumonia.

There is no antidote. Most deaths now are suicides and murders, with a few accidents thrown in.”

Chronic exposures, however, are known to damage the nervous system. Paraquat is often applied in the same areas as another pesticide, maneb, and it’s believed the two have synergistic effects.

One study published in the American Journal of Epidemiology revealed that exposure to both pesticides within 500 meters of the home increased the risk of Parkinson’s disease by 75 percent.8

Even the EPA noted that paraquat may be a causative agent or contributor to the development of Parkinson’s disease,9 and in 2011 a National Institutes of Health study found people who used paraquat were 2.5 times more likely to develop Parkinson’s than non-users.10

A meta-analysis of more than 100 studies similarly found a two-fold increased risk of Parkinson’s with paraquat exposure,11 while those with a certain genetic variant (individuals lacking glutathione S-transferase T1, or GSTT1) may face a particularly increased Parkinson’s risk — by 11-fold — with paraquat exposure.12

In the U.S., the use of paraquat has increased four-fold in the last 10 years. In 2015 alone, 7 million pounds of the herbicide were sprayed on close to 15 million U.S. acres.13

EPA Proposes Label Changes Due to Paraquat Poisonings

The EPA cited a “large number of human incidents” involving accidental and intentional consumption of paraquat reported to U.S. poison control centers.

“There is a disproportionately high number of deaths resulting from accidental ingestion of paraquat compared to similar pesticides,” they noted, which led to so-called “safening” agents being added to the chemical in the 1980s.14

The pesticide was dyed blue, and an agent was added to give it a terrible smell. It’s also formulated to solidify when it contacts stomach acid, and an emetic has been added to induce vomiting should it be ingested.

Accidental poisonings have occurred nonetheless, particularly when farmers transfer the chemical into beverage containers for storage or sharing, despite warnings against doing so on the label.

The EPA therefore proposed a new set of changes they believe will make paraquat safer to use while again avoiding the burning question, which is why paraquat is still allowed in the U.S. at all.

Paraquat is already a restricted-use product in California, but EPA proposals would restrict its use nationally. Their proposed changes include:15

  • New closed-system packaging that makes it impossible to transfer the pesticide into other containers
  • Special training for applicators, emphasizing the importance of not transferring it to improper containers
  • Warnings added to labels to highlight risks and toxicity
  • Prohibiting application from hand-held and backpack equipment
  • Restricting use to certified pesticide applicators only

A Potential Human Rights Issue

The practice of manufacturing hazardous substances explicitly for use far away from the manufacturing plant is being investigated as a potential human rights issue, paraquat included.

Paraquat manufactured by Syngenta in Britain, where it is banned, was shipped to more than two dozen countries in 2016, including the U.S., The New York Times reported:16

“‘This is one of the quintessential examples of double standards,’ said Baskut Tuncak, a United Nations official who specializes in hazardous substances. ‘Paraquat is banned in the U.K. and the E.U., but it’s still being used, and resulting in serious harms outside the E.U. where it’s being shipped.'”

Syngenta, meanwhile, continues to dispute the paraquat-Parkinson’s link, enlisting the help of Dr. Colin Berry, professor emeritus at Queen Mary University of London and a known consultant for the pesticide industry, including Syngenta and Monsanto, to downplay the scientific evidence.

(Berry gained notoriety in 2000 for being at the center of a misdiagnosis case, in which two of his patients were wrongly diagnosed with breast cancer and received double mastectomies unnecessarily as a result.17)

Paraquat Is Absorbed ‘Systemically’ in Animals

The industry website Paraquat.com, hosted by the Paraquat Information Center on behalf of Syngenta Crop Protection AG, describes the chemical as a “contact herbicide” that is not systemic, “meaning that it does not move inside plants.”

This ability to use paraquat with precision on crops, targeting only problematic weeds or in between rows of vegetables, is one reason why it’s promoted as a perfect alternative to glyphosate (the active ingredient in Roundup), which works systemically.18

“Using paraquat adds to the all-important diversity of mode of action necessary for successful weed control programs,” Paraquat.com advertised in September 2016.19 Never mind the fact that spraying more pesticides onto crops is only setting the stage for increased weed resistance down the road.

In animals, meanwhile, paraquat does not stay contained to one precise area. Rather, according to a U.S. Fish and Wildlife Service report, it’s absorbed systemically.20 The lungs are often the primary target, followed by absorption in the gastrointestinal tract and skin.

“Administration of paraquat by every route of entry tested frequently resulted in irreversible changes, in the lung,” researchers wrote, continuing:

“Delayed toxic effects of paraquat occurring after the excretion of virtually all of the material have caused it to be classified as a ‘hit and run’ compound — that is, a compound causing immediate damage, the consequences of which are not readily apparent.”

The U.S. Centers for Disease Control and Prevention (CDC) also described paraquat’s systemic dispersal:21 “After paraquat enters the body, it is distributed to all areas of the body. Paraquat causes toxic chemical reactions to occur throughout many parts of the body, primarily the lungs, liver and kidneys.”

Industry Site Touts Paraquat’s ‘Perfect Pastures’

If you’re willing to overlook paraquat’s ability to kill with a single sip and concerning link to chronic disease in humans, you might be wowed by paraquat’s penchant for creating perfect pastures, according to Syngenta’s site. Spraying the toxic chemical can lead to more productive pastures, including improved yield and quality of livestock grazing and forage, Syngenta claims.22

The site also touts paraquat’s benefits to sugarcane fields, including higher sugar yields and improved soil structure. Syngenta wants farmers to use paraquat not only for weed control (including in combination with other herbicides that use different modes of action) but also promotes it as a desiccant useful for drying out leaves so that they may be returned to the soil “to increase organic matter levels.”

Plus, when farmers douse their crops with more paraquat just prior to harvest, they can avoid the “adverse environmental impact” from burning, Syngenta gushes. They also suggest spraying the poison toward the end of daylight hours. “Paraquat diffuses through leaf tissue in the water in cell walls, but as cells desiccate this limits how far it can penetrate into weeds,” Syngenta writes.23

The lower light intensity in the waning daylight slows paraquat’s fast action, allowing it to penetrate further and kill weeds more effectively. Further, Syngenta is hoping rice farmers will also embrace the use of paraquat and has touted its supposed environmental benefits in the cultivation of newer varieties of flood-resistant rice. According to Syngenta:24

“Rice farmers in Indonesia are taking advantage of paraquat’s fast action to ensure weeds are controlled in fields that are subjected to frequent tidal flooding. Paraquat is the only herbicide that can act fast enough between tides to kill weeds because it is absorbed by leaves before it can be washed off — in just the same way that it is rainfast in less than half an hour.”

In case you missed the last part, paraquat is “rainfast,” meaning that within 15 to 30 minutes of application, it can no longer be washed off by the rain. It remains on the plant even after rainfall or washing.25

Warning on Paraquat Use in Batteries

A new organic flow battery technology has been developed that promises to cost 60 percent less than standard flow batteries, courtesy of the batteries’ inexpensive synthesized molecules.26 Among them is methyl viologen, aka paraquat. Writing in Chemical & Engineering News, biochemist Bruce Hammock warned that such batteries pose considerable health risks:27

” … [I]ts high toxicity if consumed in even tiny amounts presents a health hazard that requires adequate warnings regarding accidents or end-of-life issues with the proposed batteries. In working some as a toxicologist with the computer industry, I saw highly trained people become very lax over some toxic chemicals. Anytime we have a high-volume use for a highly toxic material, I worry about a problem years downstream.”

Eat Organic to Avoid Paraquat and Other Pesticides

Paraquat is not approved for household use in the U.S., but it is sprayed on more than 100 crops globally, including coffee, almonds and oranges. Its use on food crops may be particularly problematic when it is applied as a pre-harvest desiccant, such as on cereal grains. When used in this way, residue levels of up to 0.2 mg per kg of plant matter have been reported, according to Pesticide Action Network UK, while the recommended Acceptable Daily Intake is just 0.004 mg per kg of body weight per day.28
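To put those two numbers on the same footing, here is a rough, purely illustrative calculation (the 70 kg body weight is my assumption, not a figure from Pesticide Action Network): the ADI scales with body weight, so for such an adult it corresponds to about 0.28 mg per day, a level that would be reached by roughly 1.4 kg of grain carrying the maximum reported residue.

```python
# Rough, illustrative comparison of the reported residue level with the ADI.
# The 70 kg adult body weight is an assumed example, not a figure from the article.
residue_mg_per_kg_grain = 0.2      # maximum reported residue on cereal grains (mg/kg)
adi_mg_per_kg_bw_per_day = 0.004   # Acceptable Daily Intake (mg per kg body weight per day)
body_weight_kg = 70                # assumed adult body weight

daily_limit_mg = adi_mg_per_kg_bw_per_day * body_weight_kg
grain_kg_to_reach_limit = daily_limit_mg / residue_mg_per_kg_grain

print(f"ADI for a {body_weight_kg} kg adult: {daily_limit_mg:.2f} mg/day")
print(f"Grain at the maximum residue needed to reach it: {grain_kg_to_reach_limit:.1f} kg/day")
```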

Until toxic chemicals like paraquat are no longer viewed as an acceptable and necessary tool in agriculture, your best bet, health-wise, is to support those farmers using organic farming methods and other alternative methods of non-chemical weed control. Remember, too, that you can control what’s in the food you eat by growing some of your own fruits and vegetables in your own backyard.

Genes affecting our communication skills relate to genes for psychiatric disorders



By screening thousands of individuals, an international team led by researchers of the Max Planck Institute for Psycholinguistics, the University of Bristol, the Broad Institute and the iPSYCH consortium has provided new insights into the relationship between genes that confer risk for autism or schizophrenia and genes that influence our ability to communicate during the course of development.

The researchers studied the genetic overlap between the risk of having these psychiatric disorders and measures of social communicative competence – the ability to socially engage with other people successfully – during middle childhood to adolescence. They showed that genes influencing social communication problems during childhood overlap with genes conferring risk for autism, but that this relationship wanes during adolescence. In contrast, genes influencing risk for schizophrenia were most strongly interrelated with genes affecting social competence during later adolescence, in line with the natural history of the disorder. The findings were published in Molecular Psychiatry on 3 January 2017.

Timing makes the difference

“The findings suggest that the risk of developing these contrasting disorders is strongly related to distinct sets of genes, both of which influence social communication, but which exert their maximum influence during different periods of development”, explained Beate St Pourcain, senior investigator at the MPI and lead author of the study.

People with autism and people with schizophrenia both have problems interacting and communicating with other people, because they cannot easily initiate social interactions or give appropriate responses in return. On the other hand, the two disorders develop in very different ways. The first signs of autism spectrum disorder (ASD) typically occur during infancy or early childhood, whereas the symptoms of schizophrenia usually do not appear until early adulthood.

Features of autism or schizophrenia are found in many of us

People with autism have serious difficulties in engaging socially with others and understanding social cues, as well as being rigid, concrete thinkers with obsessive interests. In contrast, schizophrenia is characterised by hallucinations, delusions, and seriously disturbed thought processes. Yet recent research has shown that many of these characteristics and experiences can be found, to a mild degree, in typically developing children and adults. In other words, there is an underlying continuum between normal and abnormal behaviour.

Recent advances in genome-wide analyses have helped to draw a more precise picture of the genetic architecture underlying psychiatric disorders and their related symptoms in unaffected people. A large proportion of the risk of disorder, but also variation in milder symptoms, stems from the combined small effects of many thousands of genetic differences across the genome, known as polygenic effects. For social communication behaviour, these genetic factors are not constant, but change during childhood and adolescence. This is because genes exert their effects consistent with their biological programming.

Disentangling psychiatric disorders

“A developmentally sensitive analysis of genetic relationships between traits and disorders may help to disentangle apparent behavioural overlap between psychiatric conditions”, St Pourcain commented.

George Davey Smith, Professor of Clinical Epidemiology at the University of Bristol and senior author of the study, said, “The emergence of associations between genetic predictors for different psychiatric conditions and social communication differences, around the ages the particular conditions reveal themselves, provides a window into the specific causes of these conditions”.

David Skuse, Professor of Behavioural and Brain Sciences at University College London added, “This study has shown convincingly how the measurement of social communicative competence in childhood is a sensitive indicator of genetic risk. Our greatest challenge now is to identify how genetic variation influences the development of the social brain”.

It’s official: A brand-new human organ has been classified.


Researchers have classified a brand-new organ inside our bodies, one that’s been hiding in plain sight in our digestive system this whole time.

Although we now know about the structure of this new organ, its function is still poorly understood, and studying it could be the key to better understanding and treatment of abdominal and digestive disease.

Known as the mesentery, the new organ is found in our digestive systems, and was long thought to be made up of fragmented, separate structures. But recent research has shown that it’s actually one, continuous organ.

The evidence for the organ’s reclassification is now published in The Lancet Gastroenterology & Hepatology.

“In the paper, which has been peer reviewed and assessed, we are now saying we have an organ in the body which hasn’t been acknowledged as such to date,” said J Calvin Coffey, a researcher from the University Hospital Limerick in Ireland, who first discovered that the mesentery was an organ.

“The anatomic description that had been laid down over 100 years of anatomy was incorrect. This organ is far from fragmented and complex. It is simply one continuous structure.”

Thanks to the new research, as of last year, medical students started being taught that the mesentery is a distinct organ.

The world’s best-known series of medical textbooks, Gray’s Anatomy, has even been updated to include the new definition.

So what is the mesentery? It’s a double fold of peritoneum – the lining of the abdominal cavity – that attaches our intestine to the wall of our abdomen, and keeps everything locked in place.

One of the earliest descriptions of the mesentery was made by Leonardo da Vinci, and for centuries it was generally ignored as a type of insignificant attachment. Over the past century, doctors who studied the mesentery assumed it was a fragmented structure made of separate sections, which made it pretty unimportant.

But in 2012, Coffey and his colleagues showed through detailed microscopic examinations that the mesentery is actually a continuous structure.

Over the past four years, they’ve gathered further evidence that the mesentery should actually be classified as its own distinct organ, and the latest paper makes it official.


And while that doesn’t change the structure that’s been inside our bodies all along, with the reclassification comes a whole new field of medical science that could improve our health outcomes.

“When we approach it like every other organ… we can categorise abdominal disease in terms of this organ,” said Coffey.

That means that medical students and researchers will now investigate what role – if any – the mesentery might play in abdominal diseases, and that understanding will hopefully lead to better outcomes for patients.

“Now we have established anatomy and the structure. The next step is the function. If you understand the function you can identify abnormal function, and then you have disease. Put them all together and you have the field of mesenteric science … the basis for a whole new area of science,” said Coffey.

“This is relevant universally as it affects all of us.”

It just goes to show that no matter how advanced science becomes, there’s always more to learn and discover, even within our own bodies.

Gravitational waves can’t solve our black hole problems, physicists warn.


When it comes to black holes, the past couple of years have seen a firestorm of disagreements about event horizons, firewalls, and the very nature of black hole life and death erupting between cosmologists. It’s admittedly been a quiet firestorm, with papers here and there carefully arguing their positions while being respectful of opposing views, but that still counts as a firestorm in science.

Some people thought that the gravitational waves observed earlier this year could put an end to the dispute, but a group of physicists now warns that we shouldn’t be so quick to jump to conclusions.

The arguments centre around two related disagreements over what we’re actually talking about when we call something a ‘black hole’.

The first is over what happens when something falls into a black hole. Traditionally, black holes are thought to be objects with a gravitational pull so strong that not even light travels fast enough to escape their clutches. And if light – the fastest thing in the Universe – can’t escape, then neither can anything else.

A black hole is usually defined by its event horizon – the boundary of the region in space where gravity is strong enough to trap light. You wouldn’t even necessarily notice as you passed over the event horizon of a black hole, since it’s just a place in space like any other. You’d only notice when you tried to escape.

But a few years ago, a couple of papers suggested that this very simplified view leads to some problems that can’t be resolved with our current understanding of the laws of physics.

Instead, they said, there must be something special about the event horizon: just after something passed over the event horizon, it would be scrambled and burnt up beyond recognition by something called a firewall. These firewalls seemed to eliminate the theoretical problems, but they were a pretty weird solution – and not everyone was on board.

One of those not on board was Stephen Hawking, who thought it was ridiculous. Hawking and those who agreed with him maintained that there was nothing special about the edge of a black hole.

But then Hawking went a step further, adding fuel to the second part of the debate. In his quest to disprove the firewall, he ended up with a black hole without an event horizon.

This turned the definition of a black hole upside-down: without the event horizon – the place beyond which nothing can ever escape the black hole – what even defines a black hole? Physicists weren’t exactly scrambling to the table with answers.

At the same time, there were some alternatives to black holes being developed that might still have extreme gravity but wouldn’t have a point of no return. These strange objects have been dubbed ‘black hole mimickers’.

And then there were those who kept black holes with event horizons but still refused to believe the firewall.

All of the different parties converged on the gravitational waves observed by the Laser Interferometer Gravitational-Wave Observatory (LIGO). At first glance, the gravitational waves seemed like a clear victory for the black-holes-with-event-horizons camp. The pattern of the waves seemed to exactly match their predictions of what it should look like when two black holes with event horizons collide to form another black hole with an event horizon.

They were particularly excited about the ‘ringdown’ – when the final black hole sheds some energy and settles down after all the excitement of the collision. The ringdown, they said, precisely matched what they expected and didn’t match other contradictory predictions.

But the authors of a new paper say we can’t be quite so sure. They showed that the gravitational waves LIGO detected could have been made by any of those black hole mimickers – the objects that have gravity like a black hole without having an event horizon. So it seems like we’re back to square one.

But there is hope for distinguishing between the different hypotheses. They still disagree when it comes to the ringdown, but the disagreements are going to take better measurements to resolve. With better measurements of gravitational waves and of the ringdown, we should be able to answer these fundamental questions about the nature of black holes.

Watch the video: https://youtu.be/XE5PNbsUERE

Men should marry young, smart women, say scientists.


Men should marry a woman who is cleverer than they are and at least five years younger, if they want the relationship to stand the best chance of lasting, according to new research.

May to September: George and Amal Clooney

Scientists tracked 1,000 couples who were either married or in serious relationships over five years and then looked for patterns among those who were still together.

They found that neither partner should have been divorced in the past, that the man should be five or more years older, and that the woman should have received more education than the man.

The academics’ report, published in the European Journal of Operational Research, did say that men and women choose partners “on the basis of love, physical attraction, similarity of taste, beliefs and attitudes, and shared values”.

But it added that using “objective factors” such as age, education and cultural origin “may help reduce divorce”.

Their research suggests marital bliss for pop star Beyoncé Knowles, 33, and her husband, the rap mogul Jay-Z, who is 11 years older at 44. She is also better educated as he did not receive a high school diploma.

However, while Michael Douglas, 70, is considerably older than his wife, Catherine Zeta-Jones, 45, the fact that he was previously divorced would count against them, the findings suggest.

The scientists, including Dr Emmanuel Fragniere of the University of Bath, found that a previous divorce lessened the chances of a relationship surviving, but this was less marked when both partners had been divorced before.

Clever Protein Marketing Tricks Gym Rats Into Pooping Expensive Turds


Anyone who’s sucked down a scoop of protein powder knows that the quest for the perfect body can have some pretty smelly consequences. “Protein poo,” a particularly dank-smelling excretion, has long been considered a necessary — and expensive — evil among body builders wanting to enrich their diets with the raw material needed to build muscle. But some nutritional experts suggest that all those agonizing hours spent on the toilet may have been for naught.

As dietitians report in the Guardian, most people have diets rich enough in protein that supplements are unnecessary — and therefore, so are the terrible poops (and weird protein-filled urine) they elicit. The reason that protein powders continue to be so popular, they suggest, is because clever marketing has sold consumers an easily digestible half-truth: muscles are made of protein, so eating more protein will lead to more muscle.

The high cost of protein supplementation really stinks.

It is true that the body needs protein to maintain and grow the muscles in the body. Our muscle fibers are made of long chains of amino acids, which can be derived from the protein we eat but are also produced naturally by the body. But, as with all of the various fuels that the body requires in order to run, there is only so much protein the body can hold. To maintain homeostasis with the levels of other substances in the body, any excess protein has to be dumped. And, according to dietitians’ calculations, most of the protein we consume is excess.

“The majority of people are consuming much more than the recommended daily allowance of protein through their everyday diet,” Dr. Alison Tedstone, the chief nutritionist of Public Health England, told the Guardian. Spending money on protein supplements, she explained further, is “unlikely to bring any additional benefit.”

Harvard Medical School’s health blog notes that the recommended daily allowance — the bare minimum you’re supposed to eat so you don’t get sick — of protein is a scant 0.8 grams of protein per kilogram of body weight (or 0.013 ounces per pound). Nutritionists suggest aiming for twice that amount — that is, obtaining 10 to 35 percent of your recommended caloric intake from protein — to be safe.
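As a quick worked example (the 70 kg, roughly 154 lb, adult below is my own assumption rather than a figure from the article), those guidelines translate into concrete daily amounts:

```python
# Illustrative daily protein targets for an assumed 70 kg (about 154 lb) adult.
rda_g_per_kg = 0.8                 # recommended daily allowance, grams per kg body weight
body_weight_kg = 70                # assumed body weight for the example

rda_g = rda_g_per_kg * body_weight_kg   # bare-minimum intake
suggested_g = 2 * rda_g                 # "twice that amount" guideline

OZ_PER_G, LB_PER_KG = 0.03527, 2.20462
print(f"RDA:       {rda_g:.0f} g/day")                                # ~56 g
print(f"Suggested: {suggested_g:.0f} g/day")                          # ~112 g
print(f"RDA check: {rda_g_per_kg * OZ_PER_G / LB_PER_KG:.3f} oz/lb")  # ~0.013, as quoted
```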

Most people won’t need protein supplements to hit that target, but if they take them anyway, they’d better be willing to accept that they’re literally flushing their money down the toilet. The price they pay is as much a financial burden as an olfactory one: proteins in supplements are often derived from dairy products (take heed, lactose-intolerant folks) and contain much more rotten-egg-scented sulfur than carbs and fat do. Taking them in and breaking them down isn’t just expensive — it downright stinks.

The Brain Tech to Merge Humans and AI Is Already Being Developed


Are you scared of artificial intelligence (AI)?

Do you believe the warnings from folks like Prof. Stephen Hawking, Elon Musk and others?

Is AI the greatest tool humanity will ever create, or are we “summoning the demon”?

To quote the head of AI at Singularity University, Neil Jacobstein, “It’s not artificial intelligence I’m worried about, it’s human stupidity.”

In a recent Abundance 360 webinar, I interviewed Bryan Johnson, the founder of a new company called Kernel which he seeded with $100 million.

To quote Bryan, “It’s not about AI vs. humans. Rather, it’s about creating HI, or ‘Human Intelligence’: the merger of humans and AI.”

Let’s dive in.

Meet Bryan Johnson and His New Company Kernel

Bryan Johnson is an amazing entrepreneur.

In 2007, he founded Braintree, an online and mobile payments provider. In 2013, PayPal acquired Braintree for $800 million.

In 2014, Bryan launched the OS Fund with $100 million of his personal capital to support inventors and scientists who aim to benefit humanity by rewriting the operating systems of life.

His investments include endeavors to cure age-related diseases and radically extend healthy human life to 100+ (Human Longevity Inc.), replicate the human visual cortex using artificial intelligence (Vicarious), expand humanity’s access to resources (Planetary Resources, Inc.), reinvent transportation using autonomous vehicles (Matternet), educate on accelerating technological progress (Singularity University), reimagine food using biology (Hampton Creek), make biology a predictable programming language (Emulate, Gingko Bioworks, Lygos, Pivot Bio, Synthego, Synthetic Genomics), and digitize analog businesses (3Scan, Emerald Cloud Lab, Plethora, Tempo Automation, Viv), among others.

Bryan is a big thinker, and now he is devoting his time, energy and resources to building “HI” through Kernel.

The company is building on 15 years of academic research at USC, funded by the NIH, DARPA and others, and they’ll begin human trials in the coming months.

But what is HI? And neuroprosthetics? And how is AI related?

Keep reading.

BCI, Neural Lace and HI

Your brain is composed of 100 billion cells called neurons, making 100 trillion synaptic connections.

These cells and their connections make you who you are and control everything you do, think and feel.

In combination with your sensory organs (i.e., eyes, ears), these systems shape how you perceive the world.

And sometimes, they can fail.

That’s where neuroprosthetics come into the picture.

The term “neuroprosthetics” describes the use of electronic devices to replace the function of impaired nervous systems or sensory organs.

They’ve been around for a while — the first cochlear implant was implanted in 1957 to help deaf individuals hear — and since then, over 350,000 have been implanted around the world, restoring hearing and dramatically improving quality of life for those individuals.

But cochlear implants only hint at a very exciting field that researchers call the brain-computer interface, or BCI: the direct communication pathway between the brain (the central nervous system, or CNS) and an external computing device.

The vision for BCI involves interfacing the digital world with the CNS for the purpose of augmenting or repairing human cognition.

You might have heard people like Elon Musk and others talking about a “neural lace” (this was actually a concept coined by science fiction writer Iain M. Banks).

Banks described a “neural lace” as essentially a very fine mesh that grows inside your brain and acts as a wireless brain-computer interface, releasing certain chemicals on command.

Well… though the idea might have started as science fiction, companies like Kernel are making it very real.

And once they do, we’ll have robust brain-computer interfaces, and we’ll be able to fix and augment ourselves. Ultimately this will also allow us to merge with AIs and become something more than just human.

Human Intelligence (HI)

Humans have always built tools of intelligence.

We started with rocks and progressively built more intelligent tools such as thermostats, calculators, computers and now AI. These are extensions of ourselves, and so we’ve been increasing our intelligence through our tools.

But now, our tools have become sophisticated enough (thanks to exponential technologies riding atop Moore’s Law) that we are about to incorporate them into our biology and take an exponential leap forward in intelligence.

This is so significant that it will change us as a species — we’re taking evolution into our own hands.

I like to say we’re going from evolution by natural selection — Darwinism — into evolution by intelligent direction.

We can now focus on technologies to augment human intelligence (HI).

This is what Bryan Johnson and Kernel are focused on.

The first step is to answer the basic question: can we mimic the natural function of neurons firing?

If we can mimic that natural functioning and restore circuitry, or even just maintain that circuitry, it raises the question: could we improve that circuitry?

Could we make certain memories stronger? Could we make certain memories weaker? Could we work with neural code in the same way we work with biological code via synthetic biology or genetic code? How do we read and write to neurons? Could we merge with AIs?

In my friend Ray Kurzweil’s mind, the answer is most certainly yes.

A Refresher on Ray Kurzweil’s Prediction

Ray Kurzweil is a brilliant technologist, futurist, and director of engineering at Google focused on AI and language processing.

He has also made more correct (and documented) technology predictions about the future than anyone:

As reported, “of the 147 predictions that Kurzweil has made since the 1990’s, fully 115 of them have turned out to be correct, and another 12 have turned out to be “essentially correct” (off by a year or two), giving his predictions a stunning 86% accuracy rate.”

Not too long ago, I wrote a post about his wildest prediction yet:

“In the early 2030s,” Ray said, “we are going to send nanorobots into the brain (via capillaries) that will provide full immersion virtual reality from within the nervous system and will connect our neocortex to the cloud. Just like how we can wirelessly expand the power of our smartphones 10,000-fold in the cloud today, we’ll be able to expand our neocortex in the cloud.”

A few weeks ago, I asked Bryan about Ray’s prediction that we’d be able to begin having our neocortex in the cloud by the 2030s.

His response: “Oh, I think it will happen before that.”

Exciting times.

Is Virtual Reality the Surprising Solution to the Fermi Paradox?


“If the transcension hypothesis is correct, inner space, not outer space, is the final frontier for universal intelligence. Our destiny is density.”

–John Smart

Only decades into our “age of cosmology” — the moment when we earned the technological rights to peer deep into our cosmic home — we’ve learned that we live in a mega-palace of a universe. And we’ve also found something odd. We seem to be the only ones home! Where are the aliens? Was it something we said?

Within just a single generation, powerful telescopes, satellites, and space probes have given us tools to explore the structure of our universe. And the more we find, the more we discover how fine-tuned it could be for life. At least in our own solar system, organic compound-glazed comets drift everywhere. Some scientists suggest that one of these life-triggering ice balls crashed into Earth billions of years ago, delivering the conditions for chemistry and biology to take root. If the rest of the universe is similarly structured, we should be inside a life-spawning factory of a universe.

Stars scattered like grains of sand in the Andromeda galaxy.

Just consider the mind-boggling numbers.

Even the most conservative NASA estimates say our universe has 500 billion billion stars like our own, and orbiting those suns are another 100 billion billion Earth-like planets. That is 100 habitable planets for every grain of sand on Earth. That’s trillions of opportunities for some other planet to grow life.

For argument’s sake, if even just a tenth of a percent of those planets capable of supporting life harbored some version of it, then there would be one million planets with life in the Milky Way Galaxy alone. A few might even have developed civilizations like our own, and cosmically thinking, if even just a handful of alien civilizations have advanced beyond our current level of technological progress, humanity should be waking up to a universe like the world of Star Trek.
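The back-of-envelope arithmetic behind that one-million figure is easy to reproduce. The sketch below assumes about one billion potentially habitable planets in the Milky Way, which is what the article's numbers imply and sits at the conservative end of published estimates:

```python
# Back-of-envelope, Drake-style estimate implied by the article's figures.
# The one-billion count of habitable Milky Way planets is an assumption
# consistent with the text, not a measurement.
habitable_planets_milky_way = 1e9
fraction_with_life = 0.001                      # "a tenth of a percent"

planets_with_life = habitable_planets_milky_way * fraction_with_life
print(f"Planets with life in the Milky Way: {planets_with_life:,.0f}")  # 1,000,000
```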

But so far, no Ferengi, Klingons, Vulcans, Romulans—nobody.

Enrico Fermi, an Italian physicist, pointed out all of this weirdness in an observation that was later named for him: “the Fermi Paradox.” The paradox highlights the contradiction between the high probability that life would emerge in our universe, and the utter lack of evidence that advanced life exists anywhere else.

And it’s not just SETI signals we’re after — the paradox highlights that any advanced civilization that predates us would have had enough time to fill our galaxy with spacecraft and other forms of blinking lights.

So, it should seem strikingly weird that we haven’t spotted anyone else.

There is no shortage of theories seeking to account for the Fermi Paradox. Entire lists of would-be explanations exist, and if you have the time to spend down that rabbit hole (it’s a fun one), a recommended brain vacation is this excellent summary at Wait But Why.

But let’s focus on another emerging contender of a theory — virtual reality is to blame.

To address the Fermi Paradox, futurist John Smart proposes the fascinating “transcension hypothesis,” theorizing that evolutionary processes in our universe might lead all advanced civilizations towards the same ultimate destination: one in which we transcend out of our current space-time dimension into virtual worlds of our own design.

According to Smart, as a species moves into its technologically advanced stage of progress, it develops virtual environments that exist on computers infinitely smaller than the ones we use today. Advanced species don’t colonize outer space — an idea they’d find archaic — but instead colonize inner space.

Smart proposes that our current efforts to explore parts of our solar system and beyond are just the adolescent stages of a technologically young species. We may continue to send space probes, satellites, and even courageous members of our own species into parts of our galaxy, but eventually those efforts are overwhelmed by the allure of infinite possibilities inside worlds of our own creation.

Our virtual future may lie in “black-hole-like” environments that our computationally advanced descendants will build.

Computers of 40 years ago were the size of buildings, yet today we carry far more powerful ones in our pockets. Our tendency to compress computation into smaller and smaller environments leads Smart to propose that eventually we’ll create near-infinitely small computers far more powerful than today’s.

How these infinitely small computers would function is still a matter of theoretical physics and computer science, but Smart points out that there is a “vast untapped scale” of reality below the level of the atom, far broader than the plane of reality we fleshy pieces of biology inhabit. Inner space engineering, as Smart calls it, may take place at the femtoscales of reality currently out of reach of today’s tools of technology. Eventually, Smart theorizes, an advanced species may even exploit the spooky weirdness of black hole physics, harnessing event horizons for computational density that could process entire universes of virtual realities.

If recent advancements in computing make the transcension hypothesis a little more palatable, the recent and sudden wave of progress in virtual reality adds a touch more plausibility. And combining the two: Perhaps, it’s reasonable to assume that over time, our virtual worlds will become indistinguishable from our current reality.

Soon, we won’t visit the Internet from the glass window of our computer screens, but rather walk around inside it as a physical place. Philip Rosedale, the creator of Second Life, recently announced plans for a bold new virtual universe with a potential physical game map as large as the landmass of Earth. Essentially, he’ll create a virtual world with its own laws of physics, and once he’s pressed play, a newly formed universe will have its own “let there be light” creation moment.

Where we go from there will be stunning to watch.

As we continue our plunge into virtual spaces, the validity of the transcension hypothesis will come into sharper focus. If technology trends toward a world of microscopic computers with infinitely complex realities inside, this might explain why we can’t see any alien neighbors. They’ve left us behind for the digital wormholes of their own design.

Of course, there are holes to be poked in any far-reaching theory about our place in the cosmos, and for now much of this speculating requires generalized conclusions based on a limited understanding of reality. We don’t have enough data yet, and for now we’re settled into the discomfort of “we just don’t know.”

In the meantime, and until science can catch up with our imagination — it’s fascinating to ponder a future living inside virtual realities of our own choosing.

Of course, it’s also possible that we’re already there.

Artificial Spinal Cord Wirelessly Restores Walking in Paralyzed Monkeys


Until a few years ago, reversing paralysis was the stuff of movie miracles.

Yet according to Dr. Andrew Jackson, a neuroscientist at Newcastle University in the UK, as early as the end of this decade, we may witness patients with spinal cord injuries regain control of their own two legs and walk again.

By implanting a wireless neural prosthetic into the spinal cord of paralyzed monkeys, a team led by Dr. Grégoire Courtine at the Swiss Federal Institute of Technology (EPFL) in Lausanne, Switzerland, achieved the seemingly impossible: the monkeys regained use of a paralyzed lower limb a mere six days after their initial injury, without requiring any training.

The closed-loop system directly reads signals from the brain in real time and works on the patients’ own limbs, which means it doesn’t require expensive exoskeletons or external stimulation of the patient’s leg muscles to induce the contractions necessary for walking. That’s huge: it means the system could be readily used by patients in their own homes without doctor supervision.

“When we turned on the brain-spine interface for the very first time … and the animal was showing stepping movement using its paralyzed leg, I remember a lot of screaming in the room; it seemed incredible,” says Dr. Courtine in an interview with Nature.

“The study represents a major step towards restoring lost motor function using neural interfaces,” agrees Jackson, who was not involved in the study.

Bridging the Gap

Every time we decide to move, the brain sends a cascade of signals down the spinal cord to instruct our muscles to contract accordingly. Severing this information relay, as in the case of spinal cord injuries from sports or car accidents, often results in irreversible paralysis.

Rewiring a damaged spinal cord is incredibly tough. The nerves don’t regenerate, even after careful coaxing with different cocktails of regenerative drugs. The injury sites are often hostile to stem cell transplants, making it difficult for foreign cells to grow and integrate.

To get around the issue, Courtine and other brain-machine interface (BMI) experts are turning to neural interfaces to manually reconnect brain and muscle. And Courtine’s system is exceedingly clever.

To start off, his team designed two implants: one to receive incoming signals from the brain and another to replace the damaged spinal cord.

The first, a neural interface, is made up of arrays of 96 microelectrodes that hook into the parts of the brain that control leg movement. Once implanted, it automatically captures signals coming from multiple neurons that usually work together to give out a certain command — for example, flexing your foot, bending your leg or stopping altogether.

The matchbox-sized device then sends the signal to an external computer, which uses an algorithm to figure out what movement the neural signals were encoding.

Fine-tuning the input signal took a lot of effort. Figuring out how different sets of electrical signals represent different aspects of movement was just the first step. The scientists also had to map out how the signals cyclically changed with time as a monkey walked, to ensure they could reproduce the smooth, gliding gait when working with a paralyzed monkey.

Once the neural signals were decoded, the computer used the information to wirelessly operate a second electrode array sitting over the lower part of the spinal cord of a paralyzed monkey, below the level of injury. This second implant, the “pulse generator,” acts as an electrical stimulator that takes in messages from the brain implant and delivers them to undamaged parts of the spinal cord that normally control leg muscle movement with a series of zaps.
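Purely as a conceptual sketch of that closed loop (every name, threshold and the crude two-state decoder below are hypothetical simplifications; the real system decodes much richer gait information), the pipeline amounts to: read spiking activity, decode an intended gait phase, then forward a matching stimulation command to the spinal implant.

```python
import random

# Hypothetical, heavily simplified closed-loop brain-spine interface.
# A real decoder models continuous gait states; this toy version just maps the
# average firing rate across 96 channels onto one of two stimulation patterns.

N_CHANNELS = 96                     # size of the cortical microelectrode array
SWING_THRESHOLD_HZ = 40.0           # made-up decision boundary for the toy decoder

def read_firing_rates() -> list[float]:
    """Stand-in for the wireless neural recording (random numbers here)."""
    return [random.uniform(0.0, 80.0) for _ in range(N_CHANNELS)]

def decode_gait_phase(rates: list[float]) -> str:
    """Toy decoder: high average firing rate -> 'swing', otherwise 'stance'."""
    return "swing" if sum(rates) / len(rates) > SWING_THRESHOLD_HZ else "stance"

def stimulation_command(phase: str) -> dict:
    """Map the decoded phase to a (hypothetical) pulse-generator command."""
    return {"swing":  {"electrodes": "flexor_group",   "amplitude_mA": 2.0},
            "stance": {"electrodes": "extensor_group", "amplitude_mA": 1.5}}[phase]

# One pass through the loop; the real interface repeats this continuously.
phase = decode_gait_phase(read_firing_rates())
print(phase, stimulation_command(phase))
```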

The results were astonishing.

In two monkeys that each lost the use of a hind leg, the wireless brain-spine interface allowed them to walk normally within the first week after their injuries — without any training. As time went on, the quality and quantity of the steps improved, suggesting the system had triggered neuroplasticity in the brain and damaged spinal cord.

Future Cyborgs

The field of brain-machine interfaces is moving so fast that if you blink, you might miss the latest breakthrough. Within the past year or so, BMIs have allowed paralyzed patients to Google on a tablet with brain waves, grasp objects using robotic surrogates and control a variety of prosthetic hands and other devices. And just a few months ago, a surprising study showed that implants that directly stimulate the spinal cord help paraplegic patients recover some voluntary movement of their own legs.

Yet even amongst this slew of incredible advances, Courtine’s study stands out. For one, walking is genuinely hard. So far, most neural interface studies have focused on reanimating the upper body.

“There’s quite a lot to locomotion, so when we’re walking we’re not just moving our legs to step. We’re also controlling balance and coordinating activity across both sides of the legs,” says Jackson, “So restoring movement to the legs brings with it a different set of challenges to restoring grasping movement in the hand.”

Courtine puts it even more bluntly. “Walking is all-or-nothing,” he says.

If a patient can’t walk normally with a neural prosthetic, they may instead prefer to remain in a wheelchair as the more pragmatic solution. There’s much more pressure to deliver something that works right off the bat.

Which leads to the second thing that makes Courtine’s system impressive. With wireless, closed-loop stimulation, patients would no longer be tethered to expensive equipment by dozens of wires. Even the computer may soon be out of the picture.

“The only reason why the computer is there is because it allows some flexibility in how we change different algorithms that we use to control new activity and that we use to control the stimulation,” says study author Tomislav Milekovic to Motherboard. In humans, he envisions direct communication between the brain and spinal cord implants, without having to bring the signal outside the body.

Going from monkeys to humans requires a bit more work. Unlike monkeys that use all four limbs to walk, we’re bipedal. And real-world injuries are often messier than the surgically induced lesions in the study, which only damaged one leg.

That said, in humans with incomplete spinal cord damage, there’s a lot of circuitry that survives and could be commandeered for the system to work on.

“It seems quite feasible that a similar technique could be used to generate walking in an individual who has both legs paralyzed,” says Jackson.

And according to him, that day may come much faster than you’d imagine.

We’re seeing a remarkably fast translation from first demonstrations in monkeys to clinical trials, usually around four to five years, he says. If the trend continues, we may be seeing the first human brain-spinal prosthetic trials as early as 2020.

Courtine and his team have already begun testing the spinal stimulator in eight wheelchair-bound patients. Once that part’s refined, he’s moving on to optimizing the brain implant and hooking the two parts together.

“There are many, many challenges that we are going to face in the coming decade to optimize all these interventions, but we are really committed to making this step forward,” he says.

Quantum Computers Could Crush Today’s Top Encryption in 15 Years


Quantum computers could bring about a quantum leap in processing power, with countless benefits for fields like data science and AI. But there’s also a dark side: this extra power will make it simple to crack the encryption keeping everything from our emails to our online banking secure.

A recent report from the Global Risk Institute predicted that there is a one in seven chance vital cryptography tools will be rendered useless by 2026, rising to a 50% chance by 2031. In the meantime, hackers and spies can hoover up data encrypted using current approaches and simply wait until quantum computers powerful enough to crack the code have been developed.

The threat to encryption from quantum computers stems from the fact that some of the most prevalent approaches rely on solving fiendishly complicated mathematical problems. Unfortunately, this is something quantum computers are expected to be incredibly good at.

While traditional computers use binary systems with bits that can be represented as either 0 or 1, a quantum bit—or “qubit”—can be simultaneously 0 and 1 thanks to a phenomenon known as superposition. As you add qubits to the system, the power of the computer grows exponentially, making quantum computers far more efficient.
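One way to see that exponential growth (a simple state-counting illustration, not a description of any particular machine): fully describing n qubits classically requires tracking 2^n complex amplitudes.

```python
# Number of complex amplitudes needed to describe an n-qubit state classically.
for n in (1, 2, 10, 50):
    print(f"{n:>2} qubits -> {2**n:,} amplitudes")
```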

In 1994 Peter Shor of Bell Laboratories created a quantum algorithm that can solve a problem called integer factorization. As a report from the National Institute of Standards and Technology (NIST) released in April notes, this algorithm can be used to efficiently solve the mathematical problems at the heart of three of the most widely-used encryption approaches: Diffie-Hellman key exchange, RSA, and elliptic curve cryptography.
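The number-theory step at the heart of Shor's algorithm can be shown classically for a small number. The sketch below factors 15 by order finding; the brute-force loop is exactly the part that becomes infeasible for large numbers and that a quantum computer speeds up exponentially.

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (brute force; this is the quantum step in Shor)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n: int, a: int) -> tuple[int, int]:
    """Recover factors of n from the order of a, as in Shor's classical post-processing."""
    r = order(a, n)
    assert r % 2 == 0 and pow(a, r // 2, n) != n - 1, "unlucky choice of a; try another"
    return gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)

print(shor_reduction(15, 7))   # (3, 5)
```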

The threat is not imminent, though; building quantum computers is difficult. Most designs rely on complex and expensive technology like superconductors, lasers and cryogenics and have yet to make it out of the lab. Google, IBM and Microsoft are all working on commercializing the technology. Canadian company D-Wave is already selling quantum computers, but capabilities are still limited.

The very laws of quantum mechanics that make these computers so powerful also provide a way to circumvent the danger. Quantum cryptography uses qubits in the form of photons to transmit information securely by encoding it into the particles’ quantum states. Attempting to measure any property of a quantum state will alter another property, which means attempts to intercept and read the message can be easily detected by the recipient.

The most promising application of this approach is called quantum key distribution, which uses quantum communication to securely share keys that can be used to decrypt messages sent over conventional networks. City-wide networks have already been demonstrated in the US, Europe and Japan, and China’s newest satellite is quantum communication-enabled.
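For a feel of how key distribution works, here is a toy, purely classical simulation of the sifting step in BB84, the best-known QKD protocol (BB84 is not named in the article, and this sketch ignores the actual quantum channel and eavesdropper checks): Alice and Bob keep only the bits where their randomly chosen bases happen to match.

```python
import random

# Toy classical stand-in for BB84 key sifting. With no eavesdropper, bits sent and
# measured in the same basis agree, so those positions form the shared key.
n = 16
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

sifted_key = [bit for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
              if a_basis == b_basis]
print("shared key bits:", sifted_key)
```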

But the systems are held back by low bandwidth and the fact they only work over short distances. China is trying to build a 2,000km-long quantum network between Shanghai and Beijing, but this will require 32 “trusted nodes” to decode the key and retransmit it, introducing complexity and potential weaknesses to the system.

There’s also no guarantee quantum communication will be widely adopted by the time encryption-cracking quantum computers become viable. And importantly, building a single powerful encryption-busting quantum computer would require considerably less resources than restructuring entire communication networks to accommodate quantum cryptography.

Fortunately, there are other approaches to the problem that do not rely on quantum physics. So-called symmetric-key algorithms are likely to be resistant to quantum attacks if their key lengths are doubled, and new approaches like lattice-based, code-based and multivariate cryptography all look likely to be uncrackable by quantum computers.
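The usual reasoning behind doubling symmetric key lengths (Grover's algorithm is not named in the article, but it is the standard justification) is that quantum search cuts a brute-force keyspace of 2^k down to roughly 2^(k/2) steps, so doubling k restores the original security margin:

```python
# Textbook estimate of brute-force cost against a k-bit symmetric key,
# classically versus with Grover's quadratic quantum speedup.
for k in (128, 256):
    print(f"{k}-bit key: classical ~2^{k} steps, quantum (Grover) ~2^{k // 2} steps")
```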

Symmetric-key algorithms only work in a limited number of applications, though, and the other methods are still at the research stage. On the back of its report, NIST announced that it would launch a public competition to help drive development of these new approaches. It also recommends organizations focus on “crypto agility” so they can easily swap out their encryption systems as quantum-hardened ones become available.

But the document also highlighted the fact that it has taken roughly 20 years to deploy our current cryptography infrastructure. Just a month before the release of the report, researchers from MIT and the University of Innsbruck in Austria demonstrated a five-atom quantum computer capable of running Shor’s algorithm to factor the number 15.

Crucially, their approach is readily scalable, which the team says means building a more powerful quantum computer is now an engineering challenge rather than a conceptual one. Needless to say, the race is on.