If Looks Could Kill: Carcinogens in Hairdressers’ Blood from Dyes and Perms


Study finds concerning carcinogens from dyes and perms in hairdressers’ blood

Looking fabulous can come at a price—and sometimes that price is unwittingly outsourced to the people who provide these services for a living.

A new study of 295 female hairdressers, 32 regular users of hair dyes, and 60 people who get regular hair treatments found that permanent hair dyes and the chemicals used to straighten or curl hair can be carcinogenic to humans. Those with the highest exposure are hairdressers: the researchers found that carcinogen levels in hairdressers’ blood tended to rise alongside the number of permanent light hair coloring treatments they performed each week.


It was only a few years ago that researchers discovered that Brazilian hair straightening, a treatment that smooths hair for up to six months, could release the known carcinogen formaldehyde. This is despite marketing suggesting that keratin, a natural protein found in hair, is the ingredient doing the heavy lifting. In 2011, the Occupational Safety and Health Administration (OSHA) and several state OSHA programs issued a Hazard Alert after hearing complaints from salon workers. A subsequent investigation found airborne formaldehyde at levels that exceeded OSHA safety guidelines.

Brazilian blowouts are still a beauty salon option, even though the FDA issued a warning letter to one company that makes Brazilian blowout solution for labeling and safety violations. Yet, due to how the United States regulates salon treatments and cosmetics, the agency had little recourse to pull the products from salon shelves.

While it’s up to the consumer to choose whether to undergo a hair styling that puts them at risk of chemical exposure, hairdressers are the ones really putting themselves at risk. The new study, published in the journal Occupational & Environmental Medicine, says hairdressers should protect themselves by wearing gloves and by completing any steps that cannot be done with gloves on before the dyeing begins.

 

What Sugar Does to Your Body.


The instant something sweet touches your tongue, your taste buds direct-message your brain: deee-lish. Your noggin’s reward system ignites, unleashing dopamine. Meanwhile, the sugar you swallowed lands in your stomach, where it’s diluted by digestive juices and shuttled into your small intestine. Enzymes begin breaking down every bit of it into two types of molecules: glucose and fructose. Most added sugar comes from sugar cane or sugar beets and is equal parts glucose and fructose; lab-concocted high-fructose corn syrup, however, often has more processed fructose than glucose. Eaten repeatedly, these molecules can hit your body…hard.

 

Glucose

  • It seeps through the walls of your small intestine, triggering your pancreas to secrete insulin, a hormone that grabs glucose from your blood and delivers it to your cells to be used as energy.
  • But many sweet treats are loaded with so much glucose that it floods your body, lending you a quick and dirty high. Your brain counters by shooting out serotonin, a sleep-regulating hormone. Cue: sugar crash.
  • Insulin also blocks production of leptin, the satiety hormone that tells your brain that you’re full. The higher your insulin levels, the hungrier you will feel (even if you’ve just eaten a lot). Now in a simulated starvation mode, your brain directs your body to start storing glucose as belly fat.
  • Busy-beaver insulin is also surging in your brain, a phenomenon that could eventually lead to Alzheimer’s disease. Out of whack, your brain produces less dopamine, opening the door for cravings and addiction-like neurochemistry.
  • Still munching? Your pancreas has pumped out so much insulin that your cells have become resistant to the stuff; all that glucose is left floating in your bloodstream, causing prediabetes or, eventually, full-force diabetes.

Fructose

  • It, too, seeps through your small intestine into the bloodstream, which delivers fructose straight to your liver.
  • Your liver works to metabolize fructose—i.e., turn it into something your body can use. But the organ is easily overwhelmed, especially if you have a raging sweet tooth. Over time, excess fructose can prompt globules of fat to grow throughout the liver, a process called lipogenesis, the precursor to nonalcoholic fatty liver disease.
  • Too much fructose also lowers HDL, or “good” cholesterol, and spurs the production of triglycerides, a type of fat that can migrate from the liver to the arteries, raising your risk for heart attack or stroke.
  • Your liver sends an S.O.S. for extra insulin (yep, the multitasker also aids liver function). Overwhelmed, your pancreas is now in overdrive, which can result in total-body inflammation that, in turn, puts you at even higher risk for obesity and diabetes.

Headers linked to memory deficit in soccer players


Soccer players who hit the ball with their head a lot don’t score as well on a memory test as players who head the ball less often, a new study finds. Frequent headers are also associated with abnormalities in the white matter of the brain, researchers report June 11 in Radiology.

“These changes are subtle,” says Inga Koerte, a radiologist at Harvard Medical School and Brigham and Women’s Hospital in Boston. “But you don’t need a concussive trauma to get changes in the microstructure of your brain.”

While soccer players can get concussions from colliding with goal posts, the ground or each other, concussions are uncommon from heading the ball, even though it can move at 80 kilometers per hour, says coauthor Michael Lipton, a neuroradiologist at the Albert Einstein College of Medicine in New York City.

He and his colleagues took magnetic resonance imaging scans of 28 men and nine women who played amateur soccer. The players, with an average age of 31, tallied up their games and practice sessions in the previous year and estimated how many headers they had done in each. Most players headed the ball hundreds of times; some hit thousands of headers.

The MRIs revealed brain abnormalities in some players, mainly in the white matter of three regions of the brain. White matter coats nerve fibers, and bundles of fibers cross and converge in the three regions. But the areas aren’t associated with a single function, Lipton says. Attention, memory, sensory inputs and visual and spatial functions could all be processed there.

Players who headed balls the most showed more abnormalities than those who headed fewer. For one brain region, 850 headers represented a threshold: Players above that mark clearly had more abnormalities than players below it. For the other brain regions, thresholds were about 1,300 and 1,550 headers.

On a standard memory test, the nine players with the highest header count scored worse on average than the nine who had the fewest. The researchers estimated that the threshold for memory loss would be 1,800 headers.

The regions with white matter abnormalities sit toward the back of the head, opposite the typical point of impact of a header. Lipton says brain “recoil” might explain the location. “When there is a head impact, the brain sloshes back and forth inside the head,” he says. On a frontal impact, he says, the brain presses against the front of the skull momentarily and then slams into the back of the skull.

In 2012, Koerte and her colleagues found that soccer players had more white matter abnormalities than swimmers. That report, in JAMA, included only players who had never had a concussion. Lipton’s team found no difference in concussion history when players were grouped by number of headers.

How to Increase Concentration While Studying


1. Get Rid of Distractions

This should be pretty obvious, right? But actually, it’s not. We often get distracted without realizing it. Once distracted, we engage in the new task and stop paying attention to what we were originally supposed to do. Instead, identify what distracts you and then take proactive measures to keep it from distracting you again. For example, if it’s the internet, television, or people, you might want to consider studying at the library. Sometimes, however, the distractions are intrusive thoughts, such as thoughts stemming from anxieties or worries. If this is the case, there is only so much you can do about them: you can try seeking professional help, dealing with the cause of the anxiety or worry, or doing your best to redirect yourself away from those thoughts.

2. Avoid Procrastinating

Sometimes we don’t begin the task because we are already distracted. If this happens, put the distractions away and tell yourself that you will work on the task for 5 minutes straight. Generally, you should find that the 5 minutes is enough to motivate you to continue; if it isn’t, you may want to re-evaluate what’s important to you or what’s really keeping you from starting.

3. Reward Yourself

This helps if you can be diligent about it. For example, you might say, “I will do 30 minutes of homework today, then I will reward myself with 30 minutes of video games or television.” Over time, you build up such a habit of doing the task that you no longer need the reward.

4. Find a Good Time to Study

It helps to pay attention to when you study best or when you’re most productive. For some people, this might be right after waking up, before anything has a chance to distract them. Throughout the day we become more likely to get distracted, so it’s better to study while your mind is still fresh. Studying when you first wake up is also productive because you are less tired and generally more focused.

5. Find a Good Place to Study

I wish we all had a perfect place designed for optimal studying, but almost all locations have problems. For example, even the library can be inconvenient when you have to look or wait for space. However, if you’re able to set up a room that’s perfect for studying, do that. You will find yourself motivated by studying in that room.

6. Set Up a Timer

We are generally poor at estimating how much time we need to get something done. We overestimate the amount of time we have, and we end up procrastinating.


 

If instead we just set a timer to work for, say, an hour, we are more likely to hold true to our word and see what we can accomplish. Part of achieving our goals is setting a deadline for when certain tasks need to be done.

7. Break Larger Tasks Into Smaller Ones

Sometimes we feel like we have so much to do that we procrastinate out of dread. We can avoid this by first doing what we’re able to do and slowly building our way up. It also helps to seek help for the tasks you cannot accomplish alone, so you don’t get stuck in one place and become frustrated.

8. Achieve Flow

When you are in a state of flow, you are so focused on the task that you lose awareness of your surroundings. When you’re studying, flow is when nothing can distract you anymore. When this happens to you, take note of it, and keep doing what you are doing.

9. Listen to Music or Do Something Else That Helps You Focus

Some people find it easier to focus when they listen to music. Find the music that helps you get in the mood, and use it to help you focus. Once you are focused, turn off the music if it begins to get distracting. Also, try to figure out what else helps you focus: for example, nap to recover your mental energy, or take a walk outside to help clear your mind.
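Tip 6 lends itself to automation. Below is a minimal sketch of a countdown study timer in Python, assuming a plain terminal; the function name and the one-hour default are illustrative, not taken from any particular app.

```python
import time

def study_timer(minutes, label="Study block"):
    """Count down a fixed study block, then announce that it is over."""
    end = time.time() + minutes * 60
    while True:
        remaining = end - time.time()
        if remaining <= 0:
            break
        mins, secs = divmod(int(remaining), 60)
        # Overwrite the same terminal line each second.
        print(f"\r{label}: {mins:02d}:{secs:02d} remaining", end="", flush=True)
        time.sleep(1)
    print(f"\r{label} finished. Take a short break.          ")

if __name__ == "__main__":
    study_timer(60)  # the one-hour block suggested in tip 6
```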

Computer simulation could become ‘integral’ in the diagnosis, treatment, or prevention of disease by the end of the century


The life-saving, real-time techniques are close, but each NHS scandal is a setback for virtual medicine

 

A patient is rushed to hospital after suffering a severe stroke. Immediate action is needed to return blood to the affected part of their brain; without this they may suffer brain damage or die. Poised to begin surgery, doctors are warned by real-time computer simulations of the blood flow in the patient’s brain that the consequences of their actions could be dire. Equipped with this information, they change course, and the patient survives.

Such a scenario, where computers are relied upon to make calculations that even the best brain surgeon could not, sounds fantastical. But this is exactly the sort of medical advance the authors of a groundbreaking new book, due to be launched tomorrow, argue is within touching distance.

Computational Biomedicine: Modelling the Human Body is the world’s first textbook dedicated to the direct use of computer simulation in the diagnosis, treatment, or prevention of a disease. It claims that such technology will be “integral” to the way clinical decisions are made in the UK’s operating theatres by the end of this century.

Arguing that medicine is “on the verge of a radical transformation driven by the inexorably increasing power of information technology”, it predicts that drugs will soon be selected on the basis of an individual patient’s “digital profile”, so treatments can be tailor-made to suit them.

One of the book’s authors is Professor Peter Coveney, director of the University College London Centre for Computational Science. He argues that the very fact of its publication proves the discipline has moved beyond theory and should be embraced by doctors, some of whom remain sceptical. “There’s something very significant that’s going on,” he said. “It’s no longer just a research activity, it’s getting to the stage where one can write a textbook and say ‘this is the way things work in this field’ without being overly detailed and niche.”

Recent advances in computer-simulated medicine were discussed last month at a conference at the University of Sheffield, where the Insigneo Institute was founded a year ago. It comprises 123 academics and clinicians working towards a grand European Commission-backed project known as the Virtual Physiological Human: a computerised replica of the human body that will allow the virtual testing of treatments on patients based on their own specific needs.

A supercomputer that can make more than one million billion calculations a second (Rex)

Professor Coveney describes this as “the equivalent for the human body of Google Earth”. “You could have your own personalised body map … and you could be in charge of managing that and interrogating it yourself,” he said.

The creation of an entire virtual human is still many years away. But scientists have already managed to use images of a patient’s heart to build virtual arteries, with which they can accurately predict the effectiveness of an operation, such as the insertion of a stent, used to treat heart disease.

“It’s not all going to happen overnight, it’s going to be incremental. But you should be able to build it up [so] that there are useful components of it along the way,” Professor Coveney said of the virtual human project. “As soon as you start digitising, there are applications – it’s not just all or nothing.”

But there is a problem with computerised medicine: to work properly, it needs patient data, and lots of it. Public trust in this area has been damaged by the troubled history of NHS data projects, the most recent being the botched rollout of the Care.data programme, which has been repeatedly delayed.

Every NHS data scandal that hits the headlines is a major setback for the field of virtual medicine, says Professor Coveney. But he believes that if patients were handed the power to manage their own data, rather than relying on the faceless bosses of their local hospital, they would soon view it as no more sensitive than internet banking.

“You have patients’ groups who argue and lobby against data being made available … [but] they are often much more conservative on behalf of the patients than the patients as individuals are,” he said. “A lot of patients are very willing to provide their data if you ask them, if there’s even a part of a chance that it might help to cure them.”

One of Professor Coveney’s colleagues is Derek Groen, a postdoctoral researcher whose work concerns modelling the blood flow in a patient’s brain. Tapping away on his keyboard at UCL’s central London campus, Dr Groen is able to communicate with Archer, the UK’s most powerful supercomputer. Based at the University of Edinburgh, it is capable of more than a million billion calculations a second, allowing it to complete the modelling of a patient’s blood flow with ease.

Derek Groen, whose work models the flow of blood (Susannah Ireland)

The use of supercomputers is vital in this type of work, but not all of them are in the UK. In the future, says Professor Coveney, this could throw up thorny ethical dilemmas. If a computer based in the US makes calculations that result in the death of a patient in the UK, could it be held liable?

Although his PhD was in astrophysics, Dr Groen said he prefers his current work because it could directly benefit people in the short term. “I feel that it’s very close to the everyday experience of people and society in general,” he said. “I’ve had people in my family who were affected by strokes – my grandmother, for example, had one at a younger age and another at an older age.

“That’s not directly why I’m doing this, but what I do find interesting is… that you get to speak to clinicians and collaborate with them to try and figure things out. For me, that’s very motivating.”

HP’s new technology rethinks computer architecture


The Machine is six times more powerful than existing computational devices.

Image: HP/Engadget

HP has unveiled a new machine that has the potential to revolutionise computing and cope with the massive amounts of data generated by mobile devices and the Internet of things (a network composed of household appliances, cars, vending machines and many other devices).

The Machine is designed to cope with tonnes of information by using clusters of special-purpose cores that are more efficient than generalised cores. It is wired with photonics instead of copper wires, meaning it consumes 80 times less energy and is much faster. According to HP, it can handle 160 petabytes of data in 250 nanoseconds.

The Machine is six times more powerful than existing servers. However, the first models powered by this technology won’t be commercially available until 2018.

Levosimendan: a new inodilatory drug for the treatment of decompensated heart failure.


Levosimendan is a new calcium sensitizer developed for the treatment of congestive heart failure. Experimental studies indicate that levosimendan increases myocardial contractility and dilates both the peripheral and coronary vessels. Its positive inotropic effect is based on calcium-dependent binding of the drug to cardiac troponin C. It also acts as an opener of ATP-dependent potassium channels in vascular smooth muscle, thus inducing vasodilation. Although levosimendan acts preferentially as a calcium sensitizer, it has also demonstrated selective phosphodiesterase III inhibitory effects in vitro. However, this selective inhibition does not seem to contribute to the positive action at pharmacologically relevant concentrations. Levosimendan has an active metabolite, OR-1896. Similarly to levosimendan, the metabolite exerts positive inotropic and vasodilatory effects on the myocardium and vasculature. The elimination half-life of levosimendan is about 1 hour. Thus, with intravenous administration, the parent drug rapidly disappears from the circulation after the infusion is stopped. The active metabolite, however, has a half-life of approximately 80 hours, and can be detected in the circulation up to 2 weeks after stopping a 24-hour infusion of levosimendan. The intravenous formulation of levosimendan has been studied in several randomized comparative studies in patients with decompensated heart failure. Patients with both ischemic and non-ischemic etiologies have participated in the studies. Levosimendan produces significant, dose-dependent increases in cardiac output, stroke volume and heart rate, and decreases in pulmonary capillary wedge pressure (PCWP), mean blood pressure, mean pulmonary artery pressure, mean right atrial pressure and total peripheral resistance. With a loading dose, the effects on PCWP and cardiac output are seen within a few minutes. There is no sign of the development of tolerance, even with a prolonged infusion of up to 48 hours. Cardiac performance is improved with no significant increases in oxygen consumption or potentially malignant rhythm disorders. Due to the formation of an active metabolite, the hemodynamic effects are maintained for up to several days after stopping a levosimendan infusion. Compared to dobutamine, levosimendan produces a similar increase in cardiac output but a profoundly greater decrease in pulmonary capillary wedge pressure. In contrast to dobutamine, the hemodynamic effects are not attenuated by concomitant beta-blocker use. Levosimendan has been shown to have favourable effects on symptoms of heart failure, superior to placebo and at least comparable to dobutamine. Mortality and morbidity in levosimendan-treated patients have been shown to be significantly lower when compared with dobutamine- or placebo-treated patients. The most common adverse events associated with levosimendan treatment are headache and hypotension, likely consequences of the vasodilating properties of the compound. In conclusion, levosimendan offers a new, effective option for the treatment of acutely decompensated heart failure. Unlike traditional inotropes, levosimendan also seems to be safe in terms of morbidity and mortality.
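The half-life figures above line up with standard first-order elimination. As a worked example (assuming simple exponential decay, which the abstract implies but does not state explicitly):

```latex
% Fraction of drug remaining after time t, assuming first-order elimination:
\[
  f(t) = \left(\tfrac{1}{2}\right)^{t / t_{1/2}}
\]
% Parent drug (t_{1/2} \approx 1\,\mathrm{h}):
%   f(6\,\mathrm{h}) = (1/2)^{6} \approx 1.6\%, i.e., essentially gone within
%   hours of stopping the infusion.
% Metabolite OR-1896 (t_{1/2} \approx 80\,\mathrm{h}):
%   f(2\,\text{weeks}) = (1/2)^{336/80} \approx 5\%, consistent with detection
%   up to 2 weeks after a 24-hour infusion.
```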


Why Can’t You Use Blood from Someone Who Has a Different Blood Type Than You?


Story at-a-glance
• Everyone has one of four blood types – A, B, AB, or O – which is inherited from your parents
• Your blood type is determined by the presence or absence of two antigens – A and B – on the surface of red blood cells. A third antigen, called Rh factor, will either be present or absent, making your blood type positive or negative
• If incompatible blood types are given during a transfusion, the donor cells will be attacked by the patient’s immune system, which may cause shock, kidney failure, and death
• Everyone can receive type O blood, the most common type in the US, as it has neither A nor B antigens on red cells

Everyone has one of four blood types – A, B, AB, or O – which is inherited from your parents, like your eye color, dimples, or curly hair. While all blood is similar in its components (such as containing red cells, platelets, and plasma), it also has important characteristics that make it unique.
Your blood type is determined by the presence or absence of two antigens – A and B – on the surface of red blood cells. A third antigen, called Rh factor, will either be present or absent. Antigens are substances that may trigger an immune response, causing your body to launch an attack if it believes they are foreign.
Taken together, these factors determine the right type of blood for your body, should you need a transfusion. Receiving the wrong type can be catastrophic, even resulting in death. According to Blood Transfusions and the Immune System:1
“If incompatible blood is given in a transfusion, the donor cells are treated as if they were foreign invaders, and the patient’s immune system attacks them accordingly.
Not only is the blood transfusion rendered useless, but a potentially massive activation of the immune system and clotting system can cause shock, kidney failure, circulatory collapse, and death.”
What Exactly Is Blood?
Blood is a living tissue made up of red blood cells, white blood cells, platelets, and plasma (which is more than 90 percent water). Your body weight is about seven percent blood. Men have about 12 pints of blood in their body while women have about nine.2

Blood’s main role is to transport oxygen throughout your body, although it also plays a role in fighting off infections and carrying waste out of your cells. Blood also:3

• Regulates your body’s acidity (pH) levels
• Regulates your body temperature (increased blood flow to an area adds warmth)
• Supplies essential nutrients, such as glucose and amino acids, to cells
• Has specialized cells that promote blood clotting if you are bleeding
• Transports hormones
• Has “hydraulic functions,” helping men to maintain an erection, for instance
Which Blood Types Are Compatible?
It’s not entirely true that you can’t use blood from someone who has a different blood type than you. Everyone can receive type O blood, the most common type in the US, as it has neither A nor B antigens on red cells (and both A and B antibody in the plasma).
Beyond that, however, blood types must be carefully matched as follows to avoid potentially deadly consequences. First, a breakdown of the four blood types:4
• Type A: Only the A antigen on red cells (B antibody in the plasma). The second most common blood type.
• Type B: Only the B antigen on red cells (and A antibody in the plasma). Relatively rare, especially among Hispanics and Caucasians.
• Type AB: Both A and B antigens on red cells (both A and B antibody in the plasma). Very uncommon, only seven percent of Asians, four percent of African Americans, four percent of Caucasians, and two percent of Hispanics have this blood type.
• Type O: Neither A nor B antigens on red cells (both A and B antibody in the plasma). The most common blood type, especially among Hispanics.
Your blood type may be either positive or negative, depending on the presence or absence of Rh factor (about 85 percent of people are Rh positive). Generally, Rh negative blood is given to Rh-negative patients while those with Rh positive blood receive Rh positive blood in transfusions.
Rh factor is generally tested during pregnancy, as an incompatibility between mother and fetus may cause the mother’s body to attack the baby’s “foreign” blood. (Rh immune globulin is an effective treatment that can stop this attack if found early on.)
The American Red Cross has created a chart to explain which blood types are compatible with others.

Source: American Red Cross, Blood Types
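The logic behind that chart follows directly from the antigen and antibody rules described above: a recipient’s plasma attacks any ABO antigen missing from their own red cells, and Rh-negative patients should receive only Rh-negative cells. Here is a minimal sketch in Python encoding those standard rules (using O-negative as the universal donor; the dictionary and function names are illustrative, not from the Red Cross):

```python
# Red-cell antigens carried by each ABO group.
ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def red_cells_compatible(donor: str, recipient: str) -> bool:
    """True if the donor's red cells carry no antigen foreign to the recipient.

    Blood types are strings like 'O-', 'A+', 'AB+'. A recipient's plasma holds
    antibodies against every ABO antigen absent from their own cells, and an
    Rh-negative recipient should receive only Rh-negative cells.
    """
    d_abo, d_rh = donor[:-1], donor[-1]
    r_abo, r_rh = recipient[:-1], recipient[-1]
    if not ANTIGENS[d_abo] <= ANTIGENS[r_abo]:  # foreign ABO antigen -> attack
        return False
    if d_rh == "+" and r_rh == "-":             # Rh antigen foreign to Rh- patient
        return False
    return True

# O- carries neither A, B, nor Rh antigens, so it is compatible with everyone:
assert all(red_cells_compatible("O-", r)
           for r in ("O-", "O+", "A-", "A+", "B-", "B+", "AB-", "AB+"))
assert not red_cells_compatible("A+", "B+")     # A antigen meets anti-A plasma
```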
Why Are There Different Blood Types?
It’s thought that different blood types developed as a way to help protect humans from infectious disease. For instance, cells infected with malaria don’t “stick” as well to type O or type B blood cells, which means a person with type O blood may get less sick if they’re infected with malaria than someone with a different blood type.

Perhaps not coincidentally, regions with high burdens of malaria, such as Africa, also have a high rate of type O blood. The fact that certain blood types are incompatible is likely the result of a mutation. As reported by Live Science:5
“Blood type A is the most ancient, and it existed before the human species evolved from its hominid ancestors. Type B is thought to have originated some 3.5 million years ago, from a genetic mutation that modified one of the sugars that sit on the surface of red blood cells. Starting about 2.5 million years ago, mutations occurred that rendered that sugar gene inactive, creating type O, which has neither the A nor B version of the sugar.

And then there is AB, which is covered with both A and B sugars. …But incompatibility is not part of the reason humans have blood types, says Harvey Klein, chief of transfusion medicine at the National Institutes of Health Clinical Center. ‘Blood transfusion is a recent phenomenon (hundreds of years, not millions), and therefore had nothing to do with the evolution of blood groups,’ he said.”
Does Your Blood Type Dictate Your Diet?
You may have heard about diets based on your blood type, which claim that certain foods react in different ways in your body depending on your blood type. I personally do not advocate such diets. I actually attended a small lecture given by Dr. D’Adamo before he published his book Eat Right for Your Type. I believe one of the main reasons the diet has so much support is that type O, the most common blood type, calls for a severe grain restriction. If you are blood type A like me, the diet can lead to severe problems.
I actually developed diabetes after following it for a short time; my fasting blood sugar shot up to 126. The plan not only included eating large amounts of fruit for breakfast but also advocated only mild exercise for blood type A. So I cut down my exercise and increased my fruit intake, which resulted in a 20-pound weight gain and a diagnosis of diabetes. This is one of the reasons I am so passionate about my nutrition plan – it is based on whole foods, nothing too extreme, and follows the guiding principle of listening to your body and letting it be your guide to which foods are best for you.
Facts About Donating Blood
Someone in the US needs blood every two seconds,6 so if you’re up for doing a good deed, donating blood is a phenomenal choice. More than 41,000 blood donations are needed each day, but although about 38 percent of Americans are eligible to donate blood, less than 10 percent actually do so each year.7 The two most common reasons why people don’t donate blood are fear of needles or simply not thinking about it.
On the other hand, those who choose to donate most often do so in order to help others (which it does in spades, as one donation may save the lives of up to three people). So, if you can spare an hour or so of your time, your donated blood may save the life of someone in an emergency (or in the countless other scenarios in which blood transfusions are necessary). Finally, if your iron levels are high, donating your blood is a safe, effective, and inexpensive solution, as one of the best ways to get rid of excess iron is by bleeding.

 

The Revolutionary Quantum Computer That May Not Be Quantum at All


Google owns a lot of computers—perhaps a million servers stitched together into the fastest, most powerful artificial intelligence on the planet. But last August, Google teamed up with NASA to acquire what may be the search giant’s most powerful piece of hardware yet. It’s certainly the strangest.
Located at NASA Ames Research Center in Mountain View, California, a couple of miles from the Googleplex, the machine is literally a black box, 10 feet high. It’s mostly a freezer, and it contains a single, remarkable computer chip—based not on the usual silicon but on tiny loops of niobium wire, cooled to a temperature 150 times colder than deep space. The name of the box, and also the company that built it, is written in big, science-fiction-y letters on one side: D-WAVE. Executives from the company that built it say that the black box is the world’s first practical quantum computer, a device that uses radical new physics to crunch numbers faster than any comparable machine on earth. If they’re right, it’s a profound breakthrough. The question is: Are they?
Hartmut Neven, a computer scientist at Google, persuaded his bosses to go in with NASA on the D-Wave. His lab is now partly dedicated to pounding on the machine, throwing problems at it to see what it can do. An animated, academic-tongued German, Neven founded one of the first successful image-recognition firms; Google bought it in 2006 to do computer-vision work for projects ranging from Picasa to Google Glass. He works on a category of computational problems called optimization—finding the solution to mathematical conundrums with lots of constraints, like the best path among many possible routes to a destination, the right place to drill for oil, and efficient moves for a manufacturing robot. Optimization is a key part of Google’s seemingly magical facility with data, and Neven says the techniques the company uses are starting to peak. “They’re about as fast as they’ll ever be,” he says.
That leaves Google—and all of computer science, really—just two choices: Build ever bigger, more power-hungry silicon-based computers. Or find a new way out, a radical new approach to computation that can do in an instant what all those other million traditional machines, working together, could never pull off, even if they worked for years.

That, Neven hopes, is a quantum computer. A typical laptop and the hangars full of servers that power Google—what quantum scientists charmingly call “classical machines”—do math with “bits” that flip between 1 and 0, each representing a single number in a calculation. But quantum computers use quantum bits, or qubits, which can exist as 1s and 0s at the same time, so they can operate on many numbers simultaneously. It’s a mind-bending, late-night-in-the-dorm-room concept that lets a quantum computer calculate at ridiculously fast speeds.
Unless it’s not a quantum computer at all. Quantum computing is so new and so weird that no one is entirely sure whether the D-Wave is a quantum computer or just a very quirky classical one. Not even the people who build it know exactly how it works and what it can do. That’s what Neven is trying to figure out, sitting in his lab, week in, week out, patiently learning to talk to the D-Wave. If he can figure out the puzzle—what this box can do that nothing else can, and how—then boom. “It’s what we call ‘quantum supremacy,’” he says. “Essentially, something that cannot be matched anymore by classical machines.” It would be, in short, a new computer age.
A former wrestler short-listed for Canada’s Olympic team, D-Wave founder Geordie Rose is barrel-chested and possessed of arms that look ready to pin skeptics to the ground. When I meet him at D-Wave’s headquarters in Burnaby, British Columbia, he wears a persistent, slight frown beneath bushy eyebrows. “We want to be the kind of company that Intel, Microsoft, Google are,” Rose says. “The big flagship $100 billion enterprises that spawn entirely new types of technology and ecosystems. And I think we’re close. What we’re trying to do is build the most kick-ass computers that have ever existed in the history of the world.”
The office is a bustle of activity; in the back rooms technicians peer into microscopes, looking for imperfections in the latest batch of quantum chips to come out of their fab lab. A pair of shoulder-high helium tanks stand next to three massive black metal cases, where more techs attempt to weave together their spilt guts of wires. Jeremy Hilton, D-Wave’s vice president of processor development, gestures to one of the cases. “They look nice, but appropriately for a startup, they’re all just inexpensive custom components. We buy that stuff and snap it together.” The really expensive work was figuring out how to build a quantum computer in the first place.
Like a lot of exciting ideas in physics, this one originates with Richard Feynman. In the 1980s, he suggested that quantum computing would allow for some radical new math. Up here in the macroscale universe, to our macroscale brains, matter looks pretty stable. But that’s because we can’t perceive the subatomic, quantum scale. Way down there, matter is much stranger. Photons—electromagnetic energy such as light and x-rays—can act like waves or like particles, depending on how you look at them, for example. Or, even more weirdly, if you link the quantum properties of two subatomic particles, changing one changes the other in the exact same way. It’s called entanglement, and it works even if they’re miles apart, via an unknown mechanism that seems to move faster than the speed of light.
Knowing all this, Feynman suggested that if you could control the properties of subatomic particles, you could hold them in a state of superposition—being more than one thing at once. This would, he argued, allow for new forms of computation. In a classical computer, bits are actually electrical charge—on or off, 1 or 0. In a quantum computer, they could be both at the same time.
It was just a thought experiment until 1994, when mathematician Peter Shor hit upon a killer app: a quantum algorithm that could find the prime factors of massive numbers. Cryptography, the science of making and breaking codes, relies on a quirk of math, which is that if you multiply two large prime numbers together, it’s devilishly hard to break the answer back down into its constituent parts. You need huge amounts of processing power and lots of time. But if you had a quantum computer and Shor’s algorithm, you could cheat that math—and destroy all existing cryptography. “Suddenly,” says John Smolin, a quantum computer researcher at IBM, “everybody was into it.”
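The asymmetry Shor’s algorithm attacks is easy to see at toy scale: multiplying two primes is a single operation, while undoing it classically means searching. A minimal sketch, with primes far smaller than real cryptographic keys:

```python
def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

p, q = 1_000_003, 1_000_033   # two modest primes
n = p * q                     # the "easy" direction: one multiplication
assert trial_factor(n) == p   # the "hard" direction: ~a million divisions
```

Real RSA moduli run to hundreds of digits, where this kind of search becomes hopeless for classical machines; that is the gap a working quantum computer running Shor’s algorithm would close.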
That includes Geordie Rose. A child of two academics, he grew up in the backwoods of Ontario and became fascinated by physics and artificial intelligence. While pursuing his doctorate at the University of British Columbia in 1999, he read Explorations in Quantum Computing, one of the first books to theorize how a quantum computer might work, written by NASA scientist—and former research assistant to Stephen Hawking—Colin Williams. (Williams now works at D-Wave.)
Reading the book, Rose had two epiphanies. First, he wasn’t going to make it in academia. “I never was able to find a place in science,” he says. But he felt he had the bullheaded tenacity, honed by years of wrestling, to be an entrepreneur. “I was good at putting together things that were really ambitious, without thinking they were impossible.” At a time when lots of smart people argued that quantum computers could never work, he fell in love with the idea of not only making one but selling it.
With about $100,000 in seed funding from an entrepreneurship professor, Rose and a group of university colleagues founded D-Wave. They aimed at an incubator model, setting out to find and invest in whoever was on track to make a practical, working device. The problem: Nobody was close.
At the time, most scientists were pursuing a version of quantum computing called the gate model. In this architecture, you trap individual ions or photons to use as qubits and chain them together in logic gates like the ones in regular computer circuits—the ands, ors, nots, and so on that assemble into how a computer thinks. The difference, of course, is that the qubits could interact in much more complex ways, thanks to superposition, entanglement, and interference.
But qubits really don’t like to stay in a state of superposition, what’s called coherence. A single molecule of air can knock a qubit out of coherence. The simple act of observing the quantum world collapses all of its every-number-at-once quantumness into stochastic, humdrum, nonquantum reality. So you have to shield qubits—from everything. Heat or other “noise,” in physics terms, screws up a quantum computer, rendering it useless.
You’re left with a gorgeous paradox: Even if you successfully run a calculation, you can’t easily find that out, because looking at it collapses your superpositioned quantum calculation to a single state, picked at random from all possible superpositions and thus likely totally wrong. You ask the computer for the answer and get garbage.
Lashed to these unforgiving physics, scientists had built systems with only two or three qubits at best. They were wickedly fast but too underpowered to solve any but the most prosaic, lab-scale problems. But Rose didn’t want just two or three qubits. He wanted 1,000. And he wanted a device he could sell, within 10 years. He needed a way to make qubits that weren’t so fragile.
“WHAT WE’RE TRYING TO DO IS BUILD THE MOST KICK-ASS COMPUTERS THAT HAVE EVER EXISTED IN THE HISTORY OF THE WORLD.”
In 2003, he found one. Rose met Eric Ladizinsky, a tall, sporty scientist at NASA’s Jet Propulsion Lab who was an expert in superconducting quantum interference devices, or Squids. When Ladizinsky supercooled teensy loops of niobium metal to near absolute zero, magnetic fields ran around the loops in two opposite directions at once. To a physicist, electricity and magnetism are the same thing, so Ladizinsky realized he was seeing superpositioning of electrons. He also suspected these loops could become entangled, and that the charges could quantum-tunnel through the chip from one loop to another. In other words, he could use the niobium loops as qubits. (The field running in one direction would be a 1; the opposing field would be a 0.) The best part: The loops themselves were relatively big, a fraction of a millimeter. A regular microchip fab lab could build them.
The two men thought about using the niobium loops to make a gate-model computer, but they worried the gate model would be too susceptible to noise and timing errors. They had an alternative, though—an architecture that seemed easier to build. Called adiabatic annealing, it could perform only one specific computational trick: solving those rule-laden optimization problems. It wouldn’t be a general-purpose computer, but optimization is enormously valuable. Anyone who uses machine learning—Google, Wall Street, medicine—does it all the time. It’s how you train an artificial intelligence to recognize patterns. It’s familiar. It’s hard. And, Rose realized, it would have an immediate market value if they could do it faster.
In a traditional computer, annealing works like this: You mathematically translate your problem into a landscape of peaks and valleys. The goal is to try to find the lowest valley, which represents the optimized state of the system. In this metaphor, the computer rolls a rock around the problem-scape until it settles into the lowest-possible valley, and that’s your answer. But a conventional computer often gets stuck in a valley that isn’t really lowest at all. The algorithm can’t see over the edge of the nearest mountain to know if there’s an even lower vale. A quantum annealer, Rose and Ladizinsky realized, could perform tricks that avoid this limitation. They could take a chip full of qubits and tune each one to a higher or lower energy state, turning the chip into a representation of the rocky landscape. But thanks to superposition and entanglement between the qubits, the chip could computationally tunnel through the landscape. It would be far less likely to get stuck in a valley that wasn’t the lowest, and it would find an answer far more quickly.
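The rock-rolling metaphor describes classical simulated annealing, which is worth seeing concretely to appreciate what quantum tunneling adds. A minimal sketch in Python (the landscape and cooling schedule are illustrative choices, not D-Wave’s):

```python
import math
import random

def simulated_anneal(energy, x0, steps=10_000, t0=2.0):
    """Classical annealing: roll a 'rock' around the landscape, sometimes uphill.

    As the temperature cools, uphill moves become rarer and the state settles
    into a valley -- possibly a local one, which is exactly the limitation
    quantum tunneling is meant to sidestep.
    """
    x, e = x0, energy(x0)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9      # linear cooling schedule
        x_new = x + random.uniform(-0.5, 0.5)   # propose a nearby move
        e_new = energy(x_new)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            x, e = x_new, e_new
    return x, e

def landscape(x):
    """A bumpy one-dimensional 'problem-scape' with many valleys."""
    return 0.1 * x * x + math.sin(5 * x)

x_best, e_best = simulated_anneal(landscape, x0=8.0)
print(f"settled at x = {x_best:.2f}, energy = {e_best:.2f}")
```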
INSIDE THE BLACK BOX
The guts of a D-Wave don’t look like any other computer. Instead of metals etched into silicon, the central processor is made of loops of the metal niobium, surrounded by components designed to protect it from heat, vibration, and electromagnetic noise. Isolate those niobium loops well enough from the outside world and you get a quantum computer, thousands of times faster than the machine on your desk—or so the company claims. —Cameron Bird

Bioelectronic medicines: a research roadmap


Realizing the vision of a new class of medicines based on modulating the electrical signalling patterns of the peripheral nervous system needs a firm research foundation. Here, an interdisciplinary community puts forward a research roadmap for the next 5 years.
With the rapid rise in technology for the precision detection and modulation of electrical signalling patterns in the nervous system, a new class of treatments known as bioelectronic medicines seems within reach1. Specifically, the peripheral nervous system will be at the centre of these advances, as the functions it controls in chronic diseases are extensive and its small number of fibres per nerve renders them more tractable to targeted modulation.
The vision for bioelectronic medicines is one of miniature, implantable devices that can be attached to individual peripheral nerves anywhere in the viscera, extending beyond early clinical examples in hypertension2 and sleep apnoea3. Such devices will be able to decipher and modulate neural signalling patterns, achieving therapeutic effects that are targeted at single functions of specific organs. This precision could be further enhanced through closed-loop control: that is, devices that can record neural electrical activity and physiological parameters, analyse the data in real time and modulate neural signalling accordingly1. For this vision to be realized, a solid research foundation for bioelectronic medicines is needed. This article puts forward a roadmap for the next 5 years towards generating that base.
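The closed-loop idea is, at its core, classic feedback control applied to neural signalling: record a physiological biomarker, compare it with a target, and adjust stimulation accordingly. A minimal sketch of that concept in Python, with an entirely hypothetical biomarker, organ response and gain (no real device interface is implied):

```python
def stimulation_level(biomarker, target, gain=0.5):
    """Proportional control: stimulate harder the farther the biomarker sits above target."""
    return max(0.0, gain * (biomarker - target))

# Toy simulation: a biomarker that drifts upward on its own; stimulation pulls it down.
biomarker, target = 120.0, 100.0
for cycle in range(50):
    stim = stimulation_level(biomarker, target)
    biomarker += 0.5 - 0.8 * stim   # hypothetical organ response to stimulation
print(f"after 50 cycles: biomarker = {biomarker:.1f}")  # settles near the target
```

A real bioelectronic medicine would of course replace the toy plant model with recorded neural and physiological data, and the proportional rule with the optimized ‘treatment codes’ discussed below.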
An emerging community of ‘bioelectricians’
This roadmap has its origins in a meeting of research leaders from academia, industry and government in December 2013, at which neurophysiologists, neural engineers, disease biologists, neurosurgeons, and data and materials scientists came together to define the research path towards bioelectronic medicines. Three principal research areas crystallized in the meeting: the creation of a visceral nerve atlas; the advancement of neural interfacing technology; and the early establishment of therapeutic feasibility. The direction in these areas has been further synthesized and refined by the authors of this roadmap, with the intention of engaging and expanding an emerging research community interested in bioelectronic medicines. Key elements of the plans in these three areas are summarized here, with detailed points and references provided in Supplementary information S1 (box).
Creation of a visceral nerve atlas
As with the large-scale genome and brain projects (see the NIH interim report for further information), a biological map of structure and function — underpinned by data recording standards and central repositories that enable collaborative data mining — will be crucial. The roadmap focuses on the innervation of visceral organs, such as the lungs, heart, liver, pancreas, kidney, bladder, gastrointestinal tract and lymphoid and reproductive organs. Their specific innervation, including sympathetic, parasympathetic, sensory and enteric systems, needs to be mapped, with the goal of achieving resolution at the level of nerve fibres and action potentials.
Structurally, knowledge of the detailed peripheral nerve wiring will guide the selection of organ-specific points of investigation. The key research steps towards establishing such a structural map are to expand the toolkit for high-resolution tracing and fingerprinting of visceral nerve fibres, establish the intra- and interspecies variation of organ innervation, and then build detailed maps in the most appropriate animal model for each organ. Another important early priority is to advance techniques for imaging the anatomical course and targets of visceral nerves in humans, paving the way for precision implantation of bioelectronic medicines in the clinic.
Functionally, the focus should be on decoding the neural signalling patterns that control individual organs. This approach will hinge on simultaneous recordings of both neural signalling and biomarkers of organ function (for example, blood pressure and cytokine release) that should be mined for correlations, and on stimulation and blocking experiments to test causation. The research should be iterative, drilling deeper into the signals as higher-resolution interfacing technology emerges until the functional units of nerve fibres and their signalling patterns are established.
Advancement of interface technology
Neural interfacing technology provides the basis for mapping neural signals and for bioelectronic medicines. Electrode-based interfaces have long been a work horse in electrophysiology and neuromodulation, but they must be adapted and miniaturized to interrogate visceral nerves effectively: cuff and array electrodes need to be scaled to <100 μm nerve diameter, and new materials and architectures should be pursued that can best address largely unmyelinated nerve structure, irregular neuroanatomy and movement in the viscera. Beyond electrodes, biophysical techniques can both help reveal the complex details of action potential patterns in peripheral nerves and pave the way for less invasive precision neuromodulation in the longer term. Such methods include optogenetic and nanoparticle approaches for deciphering, stimulating and blocking action potentials in a large set of nerve fibres in parallel4, and ultrasonic and tomography techniques for non-invasive recording and modulation.
To capitalize on these advances, we also need to develop platform electronics that control the nerve interfaces and integrate high-bandwidth wireless data transfer, power management and signal processing5. Such platforms need to be made both smaller and more reliable to facilitate long-term recording, stimulation and blocking experiments across animal models. A particular need is to make them compatible with experiments in rodents, for which a wealth of disease models exist — something that was recognized in the innovation challenge singled out by the participants in the December 2013 meeting (see the Innovation Challenge for further information). Miniaturization will also be an important requirement to achieve the broad-reaching clinical application of bioelectronic medicines.
Early establishment of therapeutic feasibility
Therapeutic promise is the real impetus for the research described here. Therefore, a range of proof-of-principle experiments should be initiated. Where successful, these should be followed by optimization of the ‘treatment codes’ — the specific signalling patterns to be introduced in nerves to most effectively treat disease.
Proof of principle here means defining which neural circuits exert influence over disease progression in a representative animal model. By focusing on two types of experiments, rapid read-outs could be achieved across visceral organs and functions: the first is to examine the correlation of neural signals and biomarker patterns during disease progression, and the second is to investigate the effect of blocking and stimulating neural activity during established disease.
A longer experimental phase should then be pursued to determine the treatment code. This testing can be broadly split into four types of investigation. First, the best intervention point on the nerve needs to be established: near to the target organ on small nerve branches or farther from the organ on the larger, mixed preganglionic bundles of nerves. Second, the equivalent to dose–response curves should be developed in the multi-dimensional neural signal pattern space. Third, the potential added benefit of ‘closing the loop’ — self-tuning of the modulation in response to neural patterns and disease biomarkers — needs to be evaluated. Last, the long-term safety of disease-modifying neuromodulation needs to be assessed, including potential immune reactions, neural responses and physiological adaptation. Together, these investigations would lay a solid foundation upon which multiple future bioelectronic medicines could be prototyped.
Conclusion
The research outlined here, and detailed further in Supplementary information S1 (box), aims to serve as a guide for the growing community entering the field of bioelectronic medicines. If executed successfully, it will help bring a new class of precision medicines to patients.
Source: nature