Immune System May Play Role in Obesity


Certain immune system cells may play an important role in weight control, an early study suggests.

Scientists had known that the immune cells may help ward off obesity in mice. The new findings are the first to suggest the same is true in humans, researchers report in the Dec. 22 online edition of Nature.

The investigators found that the cells, known as ILC2s, were less common in belly fat from obese adults than from thinner people. What’s more, in experiments with mice, they found that ILC2s seem to spur the development of “beige” fat cells, which boost the body’s calorie burning.

It appears that these (ILC2) cells don’t work properly in obesity, according to senior researcher David Artis, a professor of immunology at Weill Cornell Medical College in New York City.

Exactly why or how that happens is not clear, Artis said, but those are key questions for future research. The ultimate hope, he added, is to develop new approaches to tackling obesity.

It’s only in the past few years that researchers have been gaining an understanding of how the immune system affects metabolism and weight control, according to Artis.

That might sound surprising, since the immune system is best known as the body’s defense against infections. But it makes sense in evolutionary terms, Artis said.

He explained that while the immune system’s immediate job is to fight infection, it’s conceivable that some of its components evolved to have the ability to “communicate” with fat tissue during times of adversity, in order to alter the body’s metabolism.

“You can imagine it basically telling the fat tissue, ‘We’re going to be malnourished for a while. Let’s adapt,'” Artis said.

An obesity researcher who was not involved in the study said the new research adds to evidence that the immune system is a player in weight control.

“It’s really quite intriguing,” said Dr. Charles Billington, an endocrinologist at the University of Minnesota in Minneapolis.

The general idea that immune function and metabolism are connected is not new, according to Billington, who is also a spokesman for the Obesity Society. He noted that when people are injured or have an allergic reaction, the body often goes into “hypermetabolism,” or revved-up calorie burning.

But, Billington said, this study and some other recent work show how the immune system influences metabolism, and possibly longer-term weight control.

He also stressed, however, that there are plenty of unknowns.

“There is some kind of overlap between the immune system and metabolism,” he said, “but we don’t really understand it yet.”

ILC2s are one group of immune cells believed to help fight infections and play a role in allergies. Artis and colleagues wanted to know if these cells might have other jobs, too.

The researchers started with samples of belly fat taken from both obese and normal-weight adults. It turned out that fat from obese people had fewer ILC2s — just like obese lab mice.

Then the researchers tested the effects of injecting lab mice with interleukin-33 — an immune system protein that acts like a “chemical messenger” among cells.

The study authors found that the treatment boosted ILC2s in the animals’ white fat, which in turn increased calorie burning.

White fat, Billington explained, is the kind that stores extra calories and shows up as a beer belly or love handles. But there is another fat, called brown fat, which actually takes up little space in the body and burns calories to generate heat.

Scientists have long been interested in finding a way to turn up the dial on brown fat, according to Artis. But in addition to the white and brown varieties, he said, there’s a third type of body fat — so-called beige fat.

Like brown fat, it burns calories and creates heat. What’s more, Artis said, it may play an important role in preventing obesity.

In his team’s experiments, ILC2 cells seemed to boost calorie burning by enhancing the animals’ stores of beige fat.

And what does that mean for humans?

“Obviously, we’re in the infancy of this research, and there’s a lot more work to do,” Artis stressed. But the goal, he said, is to develop new approaches to treating obesity, by better understanding the communication between the immune system and body fat.

That will be a long road, according to Billington. He pointed to one big question: Since immune system cells have multiple jobs, how do you get them to only boost beige fat, without doing things you don’t want — like spur allergic reactions?

And in the bigger picture, obesity research has made one thing clear: Metabolism and weight control are complex. “There’s unlikely to be any ‘magic bullet’ against obesity,” Billington said.

Study links positive thinking in older adults to increased longevity


Most people have heard about the benefits of walking through life seeing the proverbial glass half full, rather than focusing on worry and self-doubt. Positive thinking has the power to cancel out the negative thoughts that can cause physical and mental stress and, in turn, wreak havoc on health.

According to the Mayo Clinic, taking a more optimistic approach to life comes with significant health benefits, ranging from coping better with stressful situations and fighting off colds more easily to having lower rates of depression and even living longer.(1)

A recent study conducted by Australian researchers reinforces the power of positivity, showing a correlation between a positive attitude and a stronger immune system, which in turn helps lead to a longer lifespan.(2)

Positive thinking leads to longevity

Fifty adults between the ages of 65 and 90 were studied for two years; all of them were asked to view a series of positive and negative images. Blood tests were administered to gauge immune function, and participants who could recall more positive images than negative ones turned out to have enhanced immune functioning, a capacity that is typically compromised among older adults and whose decline leads to a downward health spiral.(2)

“Despite the fact that people often think of late life as a period of doom and gloom, older people are often more positive than younger people,” said lead researcher Dr. Elise Kalokerinos. “Our research suggests that this focus on the positive may help older people protect their declining health.”(2)

Mindset also drives dietary choices

This is not the first time positive thinking has been linked to improvements in overall health. Many health experts and inspirational speakers have homed in on the role the mind plays in shunning junk foods and choosing healthier options, like fresh fruits and vegetables, instead.

Esther Hicks is one such inspirational speaker. She believes that a person’s mindset can make them feel either good or bad about what they eat, and therefore keep eating a particular way on a regular basis.(2) Hicks suggests remaining as mindful as possible of the body and its cravings, saying that people should stop thinking in terms of good and bad foods and instead home in on who they truly are and what’s best for their needs.(3)

In addition to paying attention to individual needs rather than the good/bad options often presented to people throughout life, there are other ways to invite more positivity. For example, writing down, or simply being aware of, the things a person is grateful for has been shown in studies to strengthen overall health, vastly reduce stress and increase longevity.(4)

Sources:

(1) http://www.mayoclinic.org

(2) http://www.scienceworldreport.com

(3) http://www.naturalnews.com

(4) http://www.naturalnews.com

How To Be Alone, But Not Lonely


It wasn’t too long ago that I was telling the world about my failed engagement, which included watching my loved one hit rock bottom, stashing a wedding dress in my parents’ closet that I’ll never wear and selling the home I loved. Tumultuous, indeed.

We had been dating for five-and-a-half years, and lived together for all but three months of that. As a 25-year-old, it’s hard to look back and remember a time in my adult life that I wasn’t in this serious relationship. I had moved into adulthood alongside another person who influenced the woman I grew to be, whether I realized it or not.

I found this fantastic Oscar Wilde quote, and it really hit a nerve for me:

I think it’s very healthy to spend time alone. You need to know how to be alone and not be defined by another person.

Here I am now, single and re-evaluating being just me, not defined by anyone else. How exactly does one make the most of being alone, but avoid being lonely?

1. Embrace the cliches.

Getting dressed up, buying new lipstick, venting about old flames, and generally having a fun night out with your girlfriends isn’t reserved for post-break-up sadness. It can be difficult to get “just the girls” together sometimes, especially as we couple-up and get a little older. Don’t give up. Grab the hot pink nail polish. Find the new cupcake and wine bar in town. Be a “woo girl.” And love the heck out of it.

2. Stay busy. Really busy.

You’ll have a hard time sulking over being alone (and thus, feeling lonely), if you have a jam-packed calendar. I took this theory to a fairly extreme level and decided that I’d go back to school full-time. Yup. That’s a full course load, including two lab classes, along with my current job. But trust me, when you’re go-go-go from 6am until midnight, there’s no time to watch sad movies and sulk with ice cream.

3. Do “couple” things alone.

When you’re newly single, it can be very easy to think you can’t do certain things by yourself. For example, it may seem strange to go out to dinner, see a movie in the theatre, or take a cooking class solo. Challenge yourself to think otherwise—nothing is restricted just because you’re single. Instead of waiting for a girlfriend’s schedule to match up with yours, buy one ticket for that new film you’ve been talking about. And better yet? No one to share the Reese’s Pieces with!

4. Do the things you want to do. Anything.

If I want to stay in on a Friday night and read the book that’s been collecting dust for over a year, then that’s exactly what I’m going to do. And because it’s just me as part of this equation, I don’t have to feel like I’m holding anyone back from having a good time. I can stay out with my friends until bar close and not wake anyone up when I get into bed. I can take up knitting. Spontaneously decide to visit a friend in another state for the weekend. Spend my Saturday mornings volunteering. I now only have my opinion to worry about.

5. Document the process.

I’m a huge advocate for journaling. Whether you’re single or not, I believe that brain dumping at the end of the day is super cathartic and helps manage your thoughts and feelings. Plus, it gives you an opportunity to stop and think about the good things that happen. Appreciate them. Savor them. Some days will have bigger good things, while others may be as simple as sitting outside for a few minutes to enjoy the sunshine. Either way, reminiscing about these moments and putting them on paper will help keep any sad or lonely thoughts at bay.

No matter what, give yourself room to figure out what makes you a unique and awesome person. Take all the time you need, and be a little selfish. Because when you do find yourself in a relationship again, you won’t find success if you’re not happy with who you are first.

Enjoy the journey of self-discovery!

‘Suspended Animation’ Trials: Surgeons To Test New Technique For Saving The Almost-Dead


A science fiction staple screen-grabbed from the Syfy channel may soon be playing in a hospital near you. Surgeons at UPMC Presbyterian Hospital in Pittsburgh will be testing a new technique to save patients’ lives by placing them in a state of suspended animation, hovering within the mists between life and death. Squirm-inducing details include draining all of a patient’s blood and replacing it with a saline solution that stops nearly all cellular activity. This process, which could be equated to inducing hypothermia, would give surgeons enough time to operate on injuries that would otherwise be fatal.

“We are suspending life, but we don’t like to call it suspended animation because it sounds like science fiction,” Dr. Samuel Tisherman, the lead surgeon in the trial, told New Scientist. “So we call it emergency preservation and resuscitation.”

Possibly the wildest thing about this new technique is that clinical trial testing has been approved by the Food and Drug Administration (FDA) under rules that do not require consent from the patient or the family. Since eligible patients are not likely to survive their injuries anyway, the FDA figures it’s OK for doctors to make this unusual, last-ditch effort to save a life. Once you’ve been awakened from near-death with all your blood replaced, you’ll simply be grateful… right?

Animal Testing

Suspended animation was first tested by Dr. Hasan Alam and colleagues at the University of Michigan Hospital in 2002. First, the scientists sedated a group of Yorkshire pigs weighing in at 100 to 125 lbs. Next, the researchers induced a massive hemorrhage as a way to mimic the effects of gunshot wounds. Quickly, they drained the pigs’ blood and replaced it with, in the first run of animals, a cold potassium solution, and in a second run, a saline solution. With either solution, the body temperature of the wounded animals cooled swiftly. Next, the doctors treated the injuries and afterward, drained the solution and restored the animals’ blood. In the first run, seven of nine animals survived. In the second run, all but one survived. These revived animals, no matter which method had been used, “were neurologically intact, and their capacity to learn new skills was no different than for control animals,” the authors wrote in their published research.

The scientists explain that a cool body can be kept technically alive more easily than a warm one. When a body is cooled to this extreme level, less work is required of individual cells, which need less oxygen to carry out their chemical reactions at lower temperatures and can rely on anaerobic glycolysis. Now that the technique has worked in pigs, it’s time for trial runs of the emergency preservation and resuscitation method on humans. Yee-ha!

For the first 10 human experiments, UPMC Presbyterian surgeons will need to identify the right patients. The perfect profile, according to Tisherman, would be someone who has gone into cardiac arrest after a gunshot or some similar injury — someone who isn’t responding to attempts to restart their heart. Then surgeons will pump the saline solution into their heart and brain, and eventually through the entire body. Once this has been accomplished, the surgeons will operate on the patient now considered clinically dead: no blood, no brain activity, and no breathing. Tisherman explained that his team will have about two hours to fix a patient and replace the saline solution with blood. The heart should restart by itself… if not, a patient will receive a complimentary jumpstart.

The surgeons will test their technique on an initial batch of 10 non-consenting patients, compare results, and then continue, making their way forward, 10 patients at a time, until they have accumulated enough data. “Can we go longer than a few hours with no blood flow? I don’t know,” Tisherman told the New Scientist. “We’re trying to save lives, not pack people off to Mars.”

Graphene: Fast, Strong, Cheap, and Impossible to Use.


One atom thick, graphene is the thinnest material known and may be the strongest.

Until Andre Geim, a physics professor at the University of Manchester, discovered an unusual new material called graphene, he was best known for an experiment in which he used electromagnets to levitate a frog. Geim, born in 1958 in the Soviet Union, is a brilliant academic—as a high-school student, he won a competition by memorizing a thousand-page chemistry dictionary—but he also has a streak of unorthodox humor. He published the frog experiment in the European Journal of Physics, under the title “Of Flying Frogs and Levitrons,” and in 2000 it won the Ig Nobel Prize, an annual award for the silliest experiment. Colleagues urged Geim to turn the honor down, but he refused. He saw the frog levitation as an integral part of his style, an acceptance of lateral thinking that could lead to important discoveries. Soon afterward, he began hosting “Friday sessions” for his students: free-form, end-of-the-week experiments, sometimes fuelled by a few beers. “The Friday sessions refer to something that you’re not paid for and not supposed to do during your professional life,” Geim told me recently. “Curiosity-driven research. Something random, simple, maybe a bit weird—even ridiculous.” He added, “Without it, there are no discoveries.”

On one such evening, in the fall of 2002, Geim was thinking about carbon. He specializes in microscopically thin materials, and he wondered how very thin layers of carbon might behave under certain experimental conditions. Graphite, which consists of stacks of atom-thick carbon layers, was an obvious material to work with, but the standard methods for isolating superthin samples would overheat the material, destroying it. So Geim had set one of his new Ph.D. students, Da Jiang, the task of trying to obtain as thin a sample as possible—perhaps a few hundred atomic layers—by polishing a one-inch graphite crystal. Several weeks later, Jiang delivered a speck of carbon in a petri dish. After looking at it under a microscope, Geim recalls, he asked him to try again; Jiang admitted that this was all that was left of the crystal. As Geim teasingly admonished him (“You polished a mountain to get a grain of sand?”), one of his senior fellows glanced at a ball of used Scotch tape in the wastebasket, its sticky side covered with a gray, slightly shiny film of graphite residue.
It would have been a familiar sight in labs around the world, where researchers routinely use tape to test the adhesive properties of experimental samples. The layers of carbon that make up graphite are weakly bonded (hence its adoption, in 1564, for pencils, which shed a visible trace when dragged across paper), so tape removes flakes of it readily. Geim placed a piece of the tape under the microscope and discovered that the graphite layers were thinner than any others he’d seen. By folding the tape, pressing the residue together and pulling it apart, he was able to peel the flakes down to still thinner layers.

Geim had isolated the first two-dimensional material ever discovered: an atom-thick layer of carbon, which appeared, under an atomic microscope, as a flat lattice of hexagons linked in a honeycomb pattern. Theoretical physicists had speculated about such a substance, calling it “graphene,” but had assumed that a single atomic layer could not be obtained at room temperature—that it would pull apart into microscopic balls. Instead, Geim saw, graphene remained in a single plane, developing ripples as the material stabilized.

Geim enlisted the help of a Ph.D. student named Konstantin Novoselov, and they began working fourteen-hour days studying graphene. In the next two years, they designed a series of experiments that uncovered startling properties of the material. Because of its unique structure, electrons could flow across the lattice unimpeded by other layers, moving with extraordinary speed and freedom. It can carry a thousand times more electricity than copper. In what Geim later called “the first eureka moment,” they demonstrated that graphene had a pronounced “field effect,” the response that some materials show when placed near an electric field, which allows scientists to control the conductivity. A field effect is one of the defining characteristics of silicon, used in computer chips, which suggested that graphene could serve as a replacement—something that computer makers had been seeking for years.

Geim and Novoselov wrote a three-page paper describing their discoveries. It was twice rejected by Nature, where one reader stated that isolating a stable, two-dimensional material was “impossible,” and another said that it was not “a sufficient scientific advance.” But, in October, 2004, the paper, “Electric Field Effect in Atomically Thin Carbon Films,” was published in Science, and it astonished scientists. “It was as if science fiction had become reality,” Youngjoon Gil, the executive vice-president of the Samsung Advanced Institute of Technology, told me.

Labs around the world began studies using Geim’s Scotch-tape technique, and researchers identified other properties of graphene. Although it was the thinnest material in the known universe, it was a hundred and fifty times stronger than an equivalent weight of steel—indeed, the strongest material ever measured. It was as pliable as rubber and could stretch to a hundred and twenty per cent of its length. Research by Philip Kim, then at Columbia University, determined that graphene was even more electrically conductive than previously shown. Kim suspended graphene in a vacuum, where no other material could slow the movement of its subatomic particles, and showed that it had a “mobility”—the speed at which an electrical charge flows across a semiconductor—of up to two hundred and fifty times that of silicon.
In 2010, six years after Geim and Novoselov published their paper, they were awarded the Nobel Prize in Physics. By then, the media were calling graphene “a wonder material,” a substance that, as the Guardian put it, “could change the world.” Academic researchers in physics, electrical engineering, medicine, chemistry, and other fields flocked to graphene, as did scientists at top electronics firms. The U.K. Intellectual Property Office recently published a report detailing the worldwide proliferation of graphene-related patents, from 3,018 in 2011 to 8,416 at the beginning of 2013. The patents suggest a wide array of applications: ultra-long-life batteries, bendable computer screens, desalinization of water, improved solar cells, superfast microcomputers. At Geim and Novoselov’s academic home, the University of Manchester, the British government invested sixty million dollars to help create the National Graphene Institute, in an effort to make the U.K. competitive with the world’s top patent holders: Korea, China, and the United States, all of which have entered the race to find the first world-changing use for graphene.

The progress of a technology from the moment of discovery to transformative product is slow and meandering; the consensus among scientists is that it takes decades, even when things go well. Paul Lauterbur and Peter Mansfield shared a Nobel Prize for developing the MRI, in 1973—almost thirty years after scientists first understood the physical reaction that allowed the machine to work. More than a century passed between the moment when the Swedish chemist Jöns Jakob Berzelius purified silicon, in 1824, and the birth of the semiconductor industry.

New discoveries face formidable challenges in the marketplace. They must be conspicuously cheaper or better than products already for sale, and they must be conducive to manufacture on a commercial scale. If a material arrives, like graphene, as a serendipitous discovery, with no targeted application, there is another barrier: the limits of imagination. Now that we’ve got this stuff, what do we do with it?

Aluminum, discovered in minute quantities in a lab in the eighteen-twenties, was hailed as a wonder substance, with qualities never before seen in a metal: it was lightweight, shiny, resistant to rust, and highly conductive. It could be derived from clay (at first, it was called “silver from clay”), and the idea that a valuable substance was produced from a common one lent it a quality of alchemy. In the eighteen-fifties, a French chemist devised a method for making a few grams at a time, and aluminum was quickly adopted for use in expensive jewelry. Three decades later, a new process, using electricity, allowed industrial production, and the price plummeted.

“People said, ‘Wow! We’ve got this silver from clay, and now it’s really cheap and we can use it for anything,’ ” Robert Friedel, a historian of technology at the University of Maryland, told me. But the enthusiasm soon cooled: “They couldn’t figure out what to use it for.” In 1900, the Sears and Roebuck catalogue advertised aluminum pots and pans, Friedel notes, “but you can’t find any of what we’d call ‘technical’ uses.” Not until after the First World War did aluminum find its transformative use. “The killer app is the airplane, which didn’t even exist when they were going all gung ho and gaga over this stuff.”

Some highly touted discoveries fizzle altogether. In 1986, the I.B.M. researchers Georg Bednorz and K. Alex Müller discovered ceramics that acted as radically more practical superconductors. The next year, they won a Nobel, and an enormous wave of optimism followed. “Presidential commissions were thrown together to try to put the U.S. out in the lead,” Cyrus Mody, a history-of-science professor at Rice University, in Houston, says. “People were talking about floating trains and infinite transmission lines within the next couple of years.” But, in three decades of struggle, almost no one has managed to turn the brittle ceramics into a substance that can survive everyday use.
Friedel offered a broad axiom: “The more innovative—the more breaking-the-mold—the innovation is, the less likely we are to figure out what it is really going to be used for.” Thus far, the only consumer products that incorporate graphene are tennis racquets and ink. But many scientists insist that its unusual properties will eventually lead to a breakthrough. According to Geim, the influx of money and researchers has speeded up the usual time line to practical usage. “We started with submicron flakes, barely seen even in an optical microscope,” he says. “I never imagined that by 2009, 2010, people would already be making square metres of this material. It’s extremely rapid progress.” He adds, “Once someone sees that there is a gold mine, then very heavy equipment starts to be applied from many different research areas. When people are thinking, we are quite inventive animals.”
Samsung, the Korea-based electronics giant, holds the greatest number of patents in graphene, but in recent years research institutions, not corporations, have been most active. A Korean university, which works with Samsung, is in first place among academic institutions. Two Chinese universities hold the second and third slots. In fourth place is Rice University, which has filed thirty-three patents in the past two years, almost all from a laboratory run by a professor named James Tour.

Tour, fifty-five, is a synthetic organic chemist, but his expansive personality and entrepreneurial brio make him seem more like an executive overseeing a company’s profitable R. & D. division. A short, dark-eyed man with a gym-pumped body, he greeted me volubly when I visited him recently at his office, in the Dell Butcher building at Rice. “I mean, the stuff is just amazing!” he said, about graphene. “You can’t believe what this stuff can do!” Tour, like most senior scientists, must concern himself with both research and commerce. He has twice appeared before Congress to warn about federal budget cuts to science, and says that his lab has managed to thrive only because he has secured funding through aggressive partnerships with industry. He charges each business he contracts with two hundred and fifty thousand dollars a year; his lab nets a little more than half, with which he can hire two student researchers and pay for their materials for a year. Much of Tour’s work involves spurring the creativity of those researchers (twenty-five of whom are devoted to graphene); they’re the ones who devise the inventions that Tour sells. Graphene has been a boon, he said: “You have a lot of people moving into this area. Not just academics but companies in a big way, from the big electronics firms, like Samsung, to oil companies.”

Tour brings a special energy to the endeavor. Raised in a secular Jewish home in White Plains, he became a born-again Christian as a freshman at Syracuse University. Married, with four grown children, he rises at three-forty every morning for an hour and a half of prayer and Bible study—followed, several times a week, with workouts at the gym—and arrives at the office at six-fifteen. In 2001, he made headlines by signing “A Scientific Dissent from Darwinism,” a petition that promoted intelligent design, but he insists that this reflected only his personal doubts about how random mutation occurs at the molecular level. Although he ends e-mails with “God bless,” he says that, apart from a habit of praying for divine guidance, he feels that religion plays no part in his scientific work.

Tour endorses a scattershot approach for his students’ research. “We work on whatever suits our fancy, as long as it is swinging for the fences,” he said. As chemists, he noted, they are particularly suited to quick experiments, many of which can yield results in a matter of hours—unlike physicists, whose experiments can take months. His lab has published a hundred and thirty-one journal articles on graphene—second only to a lab at the University of Texas at Austin—and his researchers move rapidly to file provisional applications with the U.S. Patent and Trademark Office, which give them legal ownership of an idea for a year before they must file a full claim. “We don’t wait very long before we file,” Tour said; he urges students to write up their work in less than forty-eight hours. “I was just told by a company that has licensed one of our technologies that we beat the Chinese by five days.”

Many of his lab’s recent inventions are designed for immediate exploitation by industry, supplying funds to support more ambitious work. Tour has sold patents for a graphene-infused paint whose conductivity might help remove ice from helicopter blades, fluids to increase the efficiency of oil drills, and graphene-based materials to make the inflatable slides and life rafts used in airplanes. He points out that graphene is the only substance on earth that is completely impermeable to gas, but it weighs almost nothing; lighter rafts and slides could save the airline industry millions of dollars’ worth of fuel a year.
In Tour’s laboratory, a large, high-ceilinged room with tightly configured rows of worktables, a score of young men in white lab coats and safety goggles were working. Tour and I stopped at a bench where Loïc Samuels, a graduate student from Antigua, was making a batch of graphene-based gel, to be used in a scaffold for spinal-cord injuries. “Instead of just having a nonfunctional scaffold material, you have something that’s actually electrically conductive,” Samuels said, as he swirled a test tube in a jeweller’s bath. “That helps the nerve cells, which communicate electrically, connect with each other.” Tour showed me videos of lab rats whose back legs had been paralyzed. In one video, two rats inched themselves along the bottom of a cage, dragging their hind legs. In another video, of rats that had been treated, they walked normally. Tour warned that it takes years before the F.D.A. approves human trials. “But it’s an incredible start,” he said.

In 2010, one of Tour’s researchers, Alexander Slesarev, a Russian who had studied at Moscow State University, suggested that graphene oxide, a form of graphene created when oxygen and hydrogen molecules are bonded to it, might attract radioactive material. Slesarev sent a sample to a former colleague at Moscow State, where students placed the powder in solutions containing nuclear material. They discovered that the graphene oxide binds with the radioactive elements, forming a sludge that could easily be scooped away. Not long afterward, the earthquake and tsunami in Japan created a devastating spill of nuclear material, and Tour flew to Japan to pitch the technology to the Japanese. “We’re deploying it right now in Fukushima,” he told me.

Working at one of the benches was a young man with a round, open face: a twenty-five-year-old Ph.D. student named Ruquan Ye, who last year devised a new way to make quantum dots, highly fluorescent nanoparticles used in medical imaging and plasma television screens. Usually made in tiny amounts from toxic chemicals, such as cadmium selenide and indium arsenide, quantum dots cost a million dollars for a one-kilogram bottle. Ye’s technique uses graphene derived from coal, which is a hundred dollars a ton.

“The method is simple,” Ye told me. He showed me a vial filled with a fine black powder: anthracite coal that he had ground. “I place this in a solution of acids for one day, then heat the solution on a hot plate.” By tweaking the process, he can make the material emit various light frequencies, creating dots of various colors for differentiated tagging of tumors. The coal-based dots are compatible with the human body—coal is carbon, and so are we—which suggests that Ye’s dots could replace the highly toxic ones used in hospitals worldwide. In a darkened room next to the lab, he shone a black light on several small vials of clear liquid. They fluoresced into glowing ingots: red, blue, yellow, violet.

Tour usually declines to take credit for the discoveries in his lab. “It’s all the students,” he said. “They’re at that age, their twenties, when the synapses are just firing. My job is to inspire them and provide a credit card, and direct them away from rabbit holes.” But he acknowledged that the quantum-dot idea originated with him: “One day, I said, ‘We gotta find out what’s in coal. People have been using this for five thousand years. Let’s see what’s really in it. I bet it’s small domains of graphene’—and, sure enough, it was. It was just sitting right there. A twenty-five-per-cent yield. And, remember, it’s a million dollars a kilogram!”
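
Taken at face value, the arithmetic explains the glee. The sketch below is illustrative only, using nothing but the figures quoted here, assuming a metric ton, and ignoring processing, purification, and labor costs entirely:

```python
# Illustrative arithmetic only, using the figures quoted in the article.
# A metric ton is assumed; processing, purification, and labor costs are ignored.
coal_cost_per_ton = 100.0         # dollars per ton of anthracite
dot_price_per_kg = 1_000_000.0    # dollars per kilogram of quantum dots
yield_fraction = 0.25             # the twenty-five-per-cent yield Tour cites
kg_per_ton = 1_000.0

dots_kg = kg_per_ton * yield_fraction        # 250 kg of dots from a ton of coal
gross_value = dots_kg * dot_price_per_kg     # nominally $250,000,000 at the quoted price
print(f"${coal_cost_per_ton:,.0f} of coal -> {dots_kg:.0f} kg of dots, "
      f"nominally worth ${gross_value:,.0f}")
```

Even if the real margin turns out to be a sliver of that nominal figure, the gap between the price of the feedstock and the price of the product is enormous.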

Tour turned to his lab manager, Paul Cherukuri, and said, “We’re going to be rich someday, aren’t we?” As Cherukuri laughed, Tour added, “I’m going to come in here and count money every day.”

Perhaps the most tantalizing property described in Geim and Novoselov’s 2004 paper was the “mobility” with which electronic information can flow across graphene’s surface. “The slow step in our computers is moving information from point A to point B,” Tour told me. “Now you’ve taken the slow step, the biggest hurdle in silicon electronics, and you’ve introduced a new material and—boom! All of a sudden, you’re increasing speed not by a factor of ten but by a factor of a hundred, possibly even more.”

The news galvanized the semiconductor industry, which was struggling to keep up with Moore’s Law, devised in 1965 by Gordon Moore, a co-founder of Intel. Every two years, he predicted, the density—and thus the effectiveness—of computer chips would double. For five decades, engineers have managed to keep pace with Moore’s Law through miniaturization, packing increasing numbers of transistors onto chips—as many as four billion on a silicon wafer the size of a fingernail. Engineers have further speeded computers by “doping” silicon: introducing atoms from other elements to squeeze the lattice tighter. But there’s a limit. Shrink the chip too much, moving its transistors too close together, and silicon stops working. As early as 2017, silicon chips may no longer be able to keep pace with Moore’s Law. Graphene, if it works, offers a solution.
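
The law itself is just a doubling rule, and a minimal sketch shows how quickly the numbers compound; the four-billion transistor count is the figure quoted above, while treating it as a 2014 baseline is an assumption made here purely for illustration:

```python
# Minimal sketch of Moore's Law as a doubling rule: density doubles every two years.
# The four-billion starting count is the figure quoted above; the 2014 baseline year
# is an assumption for illustration.
def projected_transistors(start_count: float, start_year: int, year: int) -> float:
    """Project transistors per chip, assuming an exact doubling every two years."""
    return start_count * 2 ** ((year - start_year) / 2)

for year in (2014, 2016, 2018, 2020):
    count = projected_transistors(4e9, 2014, year)
    print(f"{year}: roughly {count / 1e9:.0f} billion transistors per chip")
```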

There’s a problem, though. Semiconductors, such as silicon, are defined by their ability to turn on and off in the presence of an electric field; in logic chips, that switching process generates the ones and the zeros that are the language of computers. Graphene, a semi-metal, cannot be turned off. At first, engineers believed that they could dope graphene to open up a “band gap,” the electrical property that allows semiconductors to act as switches. But, ten years after Geim and Novoselov’s paper, no one has succeeded in opening a gap wide enough. “You’d have to change it so much that it’s no longer graphene,” Tour said. Indeed, those who have managed to create such a gap learned that it kills the mobility, rendering graphene no better than the materials we use now. The result has been a certain dampening of the mood at semiconductor companies.

I recently visited the Thomas J. Watson Research Center, the main R. & D. lab for I.B.M., a major fabricator of silicon semiconductor chips. A half hour north of New York City, the center is housed in a building designed by Eero Saarinen, in 1961. A vast arc of glass with an upswept front awning, it is a kind of monument to the difficulty of predicting the future. Saarinen imagined that transformative ideas would emerge from groups of scientists working in meeting areas, where recliners and coffee tables still sit beside soaring windows. Instead, the scientists spend much of the day hunched over computer screens in their offices: small, windowless dens, which seem to have been created as an afterthought.

In one cramped office, I met Supratik Guha, who is the director of physical sciences at I.B.M. and who sets the company’s strategy for worldwide research. A thoughtful man, as precisely understated as Tour is effusive, Guha lamented the “excessive hype” that has surrounded graphene as a replacement for silicon, and talked mournfully about how the effort to introduce a band gap is, at best, “one major innovation away.” He hastened to add that I.B.M. has not written off graphene. In early 2014, the company announced that its researchers had built the first graphene-based integrated circuit for wireless devices, which could lead to cheaper, more efficient cell phones. But in the quest to make graphene a replacement for silicon, Guha admits, they hold little hope.

For now, I.B.M.’s focus remains the single-walled carbon nanotube, which was developed at Rice by Tour’s mentor and predecessor, Rick Smalley. In the eighties, Smalley and his colleagues discovered that molecules of carbon atoms arrange themselves in a variety of shapes; some were spheres (which he called “buckyballs,” for their resemblance to Buckminster Fuller’s geodesic domes) and others were tubes. When the researchers found that the tubes can act as semiconductors, the material was immediately suggested as a potential replacement for silicon. Along with his collaborators, Smalley was awarded the Nobel Prize in Chemistry in 1996, and he persuaded Rice to build the multimillion-dollar nanotechnology center that Tour later took over. Yet carbon nanotubes have resisted easy exploitation. They have the necessary band gap, but building a chip with them entails maneuvering billions of minute objects into precise locations—a difficulty that has bedevilled scientists for almost two decades. Without quite admitting that he has lost interest in carbon nanotubes, Tour told me that they “never really commercialized well.”

At I.B.M., which has invested more than a decade of research and tens of millions of dollars in the material, there is great reluctance to admit defeat. Guha introduced me to George Tulevski, who helps lead I.B.M.’s carbon-nanotube research program. When I mentioned graphene, he evinced the defensiveness that might be expected of a scientist who has devoted nearly ten years to one recalcitrant technology only to be told about a glamorous new one. “Devices have to turn on and off,” Tulevski said. “If it doesn’t turn off, it just consumes way too much power. There’s no way to turn graphene off. So those electrons are going superfast, and that’s great—but you can’t turn the device off.”

Cyrus Mody, the historian, is equally cautious. “This idea that there’s a form of microelectronics that is theoretically much, much faster than conventional silicon is not new,” he told me. He points to the precedent of the Josephson-junction circuit. In 1962, the British physicist Brian David Josephson predicted that electricity would flow at unprecedented speeds through a circuit composed of two superconductors separated by a “weak link” material. The insight led to a Nobel Prize in Physics—and to dreams of exponentially faster electronics.

“A lot of people thought we’d be switching over to superconducting Josephson-junction microelectronics soon,” Mody said. “But when you actually get down to manufacturing a complex circuit with lots and lots and lots of logic gates, and making lots and lots of such circuits with very large yields, the manufacturing problems really make it impossible to keep going. And I think that’s going to be the hurdle that people haven’t really considered enough when they talk about graphene.”

But other scientists argue that the obstacle is not graphene’s physical properties. “The semiconductor industry knows how to introduce a band gap,” Amanda Barnard, a theoretical physicist who heads Australia’s Commonwealth Scientific and Industrial Research Organization, told me. The problem is business: “We’ve got a global investment on the order of trillions of dollars in silicon, and we’re not going to walk away from that. Initially, graphene needs to work with silicon—it needs to work in our existing factories and production lines and research capabilities—and then we’ll get some momentum going.”

Tour has little sympathy for the semiconductor industry’s disappointment with graphene. “I.B.M. is all bummed out because they’re single-minded,” he said. “They’ve got to make computers—and they’ve got Moore’s Law. But that’s their own fault! What other industry has challenged itself with doubling its performance every eighteen months? In the chemical industry, if we can get a one-per-cent-higher yield in a year we think we’ve done pretty well.”

Perhaps the most expansive thinker about the material’s potential is Tomas Palacios, a Spanish scientist who runs the Center for Graphene Devices and 2D Systems, at M.I.T. Rather than using graphene to improve existing applications, as Tour’s lab mostly does, Palacios is trying to build devices for a future world.

At thirty-six, Palacios has an undergraduate’s reedy build and a gentle way of speaking that makes wildly ambitious notions seem plausible. As an electrical engineer, he aspires to “ubiquitous electronics,” increasing “by a factor of one hundred” the number of electronic devices in our lives. From the perspective of his lab, the world would be greatly enhanced if every object, from windows to coffee cups, paper currency, and shoes, were embedded with energy harvesters, sensors, and light-emitting diodes, which allowed them to cheaply collect and transmit information. “Basically, everything around us will be able to convert itself into a display on demand,” he told me, when I visited him recently. Palacios says that graphene could make all this possible; first, though, it must be integrated into those coffee cups and shoes.

As Mody pointed out, radical innovation often has to wait for the right environment. “It’s less about a disruptive technology and more about moments when the linkages among a set of technologies reach a point where it’s feasible for them to change lots of practices,” he said. “Steam engines had been around a long time before they became really disruptive. What needed to happen were changes in other parts of the economy, other technologies linking up with the steam engine to make it more efficient and desirable.”

For Palacios, the crucial technological complement is an advance in 3-D printing. In his lab, four students were developing an early prototype of a printer that would allow them to create graphene-based objects with electrical “intelligence” built into them. Along with Marco de Fazio, a scientist from STMicroelectronics, a firm that manufactures ink-jet print heads, they were clustered around a small, half-built device that looked a little like a Tinkertoy contraption on a mirrored base. “We just got the printer a couple of weeks ago,” Maddy Aby, a ponytailed master’s student, said. “It came with a kit. We need to add all the electronics.” She pointed to a nozzle lying on the table. “This just shoots plastic now, but Marco gave us these print heads that will print the graphene and other types of inks.”

The group’s members were pondering how to integrate graphene into the objects they print. They might mix the material into plastic or simply print it onto the surface of existing objects. There were still formidable hurdles. The researchers had figured out how to turn graphene into a liquid—no easy task, since the material is severely hydrophobic, which means that it clumps up and clogs the print heads. They needed to first convert graphene to graphene oxide, adding groups of oxygen and hydrogen molecules, but this process negates its electrical properties. So once they printed the object they would have to heat it with a laser. “When you heat it up,” Aby said, “you burn off those groups and reduce it back to graphene.”
When that might be possible was uncertain; she hoped to have the device working in three months. “The laser needs more approval from the powers that be,” she said, glancing balefully at the printer’s mirrored base—the kind perfect for bouncing laser beams all over a room. De Fazio suggested that they cover it with a silicon wafer.

“That could work,” Aby said.
Palacios recognizes that millennial change comes only after modest, strategic increments. He mentioned Samsung, which, according to industry rumor, is planning to launch the first device with a screen that employs graphene. “Graphene is only a small component, used to deliver the current to the display,” he said. “But that’s an exciting first application—it doesn’t have to be the breakthrough that we are all looking forward to. It’s a good way to get graphene into everyone’s focus and, that way, justify more investment.” In the meantime, one of his students, Lili Yu, has been working on a prototype for a flexible screen.

Palacios, in his office, told me that his most ambitious goal is “graphene origami,” in which sheets of the material are folded to mimic organelles, minuscule structures inside a biological cell. “It’s not that different from what nature does with DNA, a material that is a one-dimensional structure that gets folded many, many, many times to make the chromosomes.” If the method works, it could be used to pack huge amounts of computing power into a tiny space. There might be applications in medicine, he says, and in something he calls smart dust—“things that are just as tiny as dust particles but have a functionality to tell us about the pollution in the atmosphere, or if there is a flu virus nearby. These things will be able to connect to your phone or to the embedded displays everywhere, to tell you about things happening around you.”

For the moment, the challenges are more earthbound: scientists are still trying to devise a cost-effective way to produce graphene at scale. Companies like Samsung use a method pioneered at the University of Texas, in which they heat copper foil to eighteen hundred degrees Fahrenheit in a low vacuum, and introduce methane gas, which causes graphene to “grow” as an atom-thick sheet on both sides of the copper—much as frost crystals “grow” on a windowpane. They then use acids to etch away the copper. The resulting graphene is invisible to the naked eye and too fragile to touch with anything but instruments designed for microelectronics. The process is slow, exacting, and too expensive for all but the largest companies to afford.

At Tour’s lab, a twenty-six-year-old postdoc named Zhiwei Peng was waiting to hear from a final reviewer of a paper he had submitted, in which he detailed a way to create graphene with no superheating, no vacuums, and no gases. (The paper was later approved for publication.) Peng had stumbled on his method a few months before. While heating graphene oxide with a laser, he missed the sample, and accidentally heated the material it was sitting on, a sheet of polyimide plastic. Where the laser touched the plastic, it left a black residue. He discovered that the residue was layers of graphene, loosely bonded with oxygen molecules, which—like the residue on Geim’s tape—could easily be exfoliated to single-atom sheets. He showed me how it worked, the laser tracking back and forth across the surface of a piece of polyimide and leaving with each pass a needle-thin deposit of material. Single layers of graphene absorb 2.5 per cent of available light; as layers pile up, they begin to appear black. After a few minutes, Peng had produced a crisp, matte-black lattice—perhaps an inch wide, and worth tens of thousands of dollars. Cherukuri, Tour’s lab manager, pointed at it and said, “That is the race.”
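
The blackening itself is easy to estimate. As a back-of-the-envelope illustration, assuming each sheet absorbs the same 2.5 per cent of whatever light reaches it, and ignoring reflection and interference, the transmitted fraction falls off geometrically with the number of layers:

```python
# Back-of-the-envelope estimate of light transmitted through stacked graphene layers,
# assuming each layer independently absorbs ~2.5% of the light that reaches it
# (reflection and interference are ignored).
ABSORPTION_PER_LAYER = 0.025  # fraction absorbed by a single atomic sheet

def transmittance(layers: int) -> float:
    """Fraction of incident light that passes through a stack of `layers` sheets."""
    return (1.0 - ABSORPTION_PER_LAYER) ** layers

for n in (1, 10, 50, 100):
    print(f"{n:3d} layers -> {transmittance(n):.1%} transmitted")
# A hundred layers pass only about eight per cent of the light, which is why the
# deposit reads to the eye as matte black.
```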

The tech-research firm Gartner uses an analytic tool that it calls the Hype Cycle to help investors determine which discoveries will make money. A graph of the cycle resembles a cursive lowercase “r,” in which a discovery begins with a Technology Trigger, climbs quickly to a Peak of Inflated Expectations, falls into the Trough of Disillusionment, and, as practical uses are found, gradually ascends to the Plateau of Productivity. The implication is not (or not only) that most discoveries don’t behave as expected; it’s that a new thing typically becomes useful sometime after the publicity fades.
Nearly every scientist I spoke with suggested that graphene lends itself especially well to hype. “It’s an electrically useful material in a time when we love electrical devices,” Amanda Barnard told me. “If it had come along at a time when we were not so interested in electronic devices, the hype might not have been so disproportionate. But then there wouldn’t have been the same appetite for investment.” Indeed, Henry Petroski, a professor at Duke and the author of “To Engineer Is Human,” says that hype is necessary to attract development dollars. But he offers an important proviso: “If there is too much hype at the discovery stage and the product doesn’t live up to the hype, that’s one way of its becoming disappointing and abandoned, eventually.”

Guha, at I.B.M., believes that the field of nanotechnology has been oversold. “Nobody stands to benefit from giving the bad news,” he told me. “The scientist wants to give the good news, the journalist wants to give the good news—there is no feedback control to the system. In order to develop a technology, there is a lot of discipline that needs to go in, a lot of things that need to be done that are perhaps not as sexy.”

Tour concurs, and admits to some complicity. “People put unrealistic time lines on us,” he told me. “We scientists have a tendency to feed that—and I’m guilty of that. A few years ago, we were building molecular electronic devices. The Times called, and the reporter asked, ‘When could these be ready?’ I said, ‘Two years’—and it was nonsense. I just felt so excited about it.”

The impulse to overlook obvious difficulties to commercial development is endemic to scientific research. Geim’s paper, after all, mentioned the band-gap problem. “People knew that graphene is a gapless semiconductor,” Amirhasan Nourbakhsh, an M.I.T. scientist specializing in graphene, told me. “But graphene was showing extremely high mobility—and mobility in semiconductor technology is very important. People just closed their eyes.”

According to Friedel, the historian, scientists rely on the stubborn conviction that an obvious obstacle can be overcome. “There is a degree of suspension of disbelief that a lot of good research has to engage in,” he said. “Part of the art—and it is art—comes from knowing just when it makes sense to entertain that suspension of disbelief, at least momentarily, and when it’s just sheer fantasy.” Lord Kelvin, famous for installing telegraph cables on the Atlantic seabed, was clearly capable of overlooking obstacles. But not always. “Before his death, in 1907, Lord Kelvin carefully, carefully calculated that a heavier-than-air flying machine would never be possible,” Friedel says. “So we always have to have some humility. A couple of bicycle mechanics could come along and prove us wrong.”

Recently, some of the most exciting projects from Tour’s lab have encountered obstacles. An additive to fluids used in oil drilling, developed with a subsidiary of the resource company Schlumberger, promised to make drilling more efficient and to leave less waste in the ground; instead, barrels of the stuff decomposed before they could be used. The company that hired Tour’s group to make inflatable slides and rafts for aircraft found a cheaper lab. (Tour was philosophical about it, in part because he knew he’d still get some money from the contract. “They’ll have to come back and get the patent,” he said.) The technology for the Fukushima-reactor cleanup stalled when scientists in Japan couldn’t get the powder to work, and the postdoc who developed the method was unable to get a visa to go assist them. “You’ve got to teach them how it’s done,” Tour said. “You want the pH right.”

Tour’s optimism for graphene remains undimmed, and his group has been working on further inventions: superfast cell-phone chargers, ultra-clean fuel cells for cars, cheaper photovoltaic cells. “What Geim and Novoselov did was to show the world the amazingness of graphene, that it had these extraordinary electrical properties,” Tour said. “Imagine if one were God. Here, He’s given us pencils, and all these years scientists are trying to figure out some great thing, and you’re just stripping off sheets of graphene as you use your pencil. It has been before our eyes all this time!”

Startup Brain Power Uses Google Glass To Develop Apps For Kids With Autism


One atom thick, graphene is the thinnest material known and may be the strongest.

One atom thick, graphene is the thinnest material known and may be the strongest.
Until Andre Geim, a physics professor at the University of Manchester, discovered an unusual new material called graphene, he was best known for an experiment in which he used electromagnets to levitate a frog. Geim, born in 1958 in the Soviet Union, is a brilliant academic—as a high-school student, he won a competition by memorizing a thousand-page chemistry dictionary—but he also has a streak of unorthodox humor. He published the frog experiment in the European Journal of Physics, under the title “Of Flying Frogs and Levitrons,” and in 2000 it won the Ig Nobel Prize, an annual award for the silliest experiment. Colleagues urged Geim to turn the honor down, but he refused. He saw the frog levitation as an integral part of his style, an acceptance of lateral thinking that could lead to important discoveries. Soon afterward, he began hosting “Friday sessions” for his students: free-form, end-of-the-week experiments, sometimes fuelled by a few beers. “The Friday sessions refer to something that you’re not paid for and not supposed to do during your professional life,” Geim told me recently. “Curiosity-driven research. Something random, simple, maybe a bit weird—even ridiculous.” He added, “Without it, there are no discoveries.”

On one such evening, in the fall of 2002, Geim was thinking about carbon. He specializes in microscopically thin materials, and he wondered how very thin layers of carbon might behave under certain experimental conditions. Graphite, which consists of stacks of atom-thick carbon layers, was an obvious material to work with, but the standard methods for isolating superthin samples would overheat the material, destroying it. So Geim had set one of his new Ph.D. students, Da Jiang, the task of trying to obtain as thin a sample as possible—perhaps a few hundred atomic layers—by polishing a one-inch graphite crystal. Several weeks later, Jiang delivered a speck of carbon in a petri dish. After looking at it under a microscope, Geim recalls, he asked him to try again; Jiang admitted that this was all that was left of the crystal. As Geim teasingly admonished him (“You polished a mountain to get a grain of sand?”), one of his senior fellows glanced at a ball of used Scotch tape in the wastebasket, its sticky side covered with a gray, slightly shiny film of graphite residue.
It would have been a familiar sight in labs around the world, where researchers routinely use tape to test the adhesive properties of experimental samples. The layers of carbon that make up graphite are weakly bonded (hence its adoption, in 1564, for pencils, which shed a visible trace when dragged across paper), so tape removes flakes of it readily. Geim placed a piece of the tape under the microscope and discovered that the graphite layers were thinner than any others he’d seen. By folding the tape, pressing the residue together and pulling it apart, he was able to peel the flakes down to still thinner layers.

Geim had isolated the first two-dimensional material ever discovered: an atom-thick layer of carbon, which appeared, under an atomic microscope, as a flat lattice of hexagons linked in a honeycomb pattern. Theoretical physicists had speculated about such a substance, calling it “graphene,” but had assumed that a single atomic layer could not be obtained at room temperature—that it would pull apart into microscopic balls. Instead, Geim saw, graphene remained in a single plane, developing ripples as the material stabilized.

Geim enlisted the help of a Ph.D. student named Konstantin Novoselov, and they began working fourteen-hour days studying graphene. In the next two years, they designed a series of experiments that uncovered startling properties of the material. Because of its unique structure, electrons could flow across the lattice unimpeded by other layers, moving with extraordinary speed and freedom. It could carry a thousand times more electrical current than copper. In what Geim later called “the first eureka moment,” they demonstrated that graphene had a pronounced “field effect,” the response that some materials show when placed near an electric field, which allows scientists to control the conductivity. A field effect is one of the defining characteristics of silicon, used in computer chips, which suggested that graphene could serve as a replacement—something that computer makers had been seeking for years.

Geim and Novoselov wrote a three-page paper describing their discoveries. It was twice rejected by Nature, where one reader stated that isolating a stable, two-dimensional material was “impossible,” and another said that it was not “a sufficient scientific advance.” But, in October, 2004, the paper, “Electric Field Effect in Atomically Thin Carbon Films,” was published in Science, and it astonished scientists. “It was as if science fiction had become reality,” Youngjoon Gil, the executive vice-president of the Samsung Advanced Institute of Technology, told me.

Labs around the world began studies using Geim’s Scotch-tape technique, and researchers identified other properties of graphene. Although it was the thinnest material in the known universe, it was a hundred and fifty times stronger than an equivalent weight of steel—indeed, the strongest material ever measured. It was as pliable as rubber and could stretch to a hundred and twenty per cent of its length. Research by Philip Kim, then at Columbia University, determined that graphene was even more electrically conductive than previously shown. Kim suspended graphene in a vacuum, where no other material could slow the movement of its subatomic particles, and showed that it had a “mobility”—the speed at which an electrical charge flows across a semiconductor—of up to two hundred and fifty times that of silicon.
In 2010, six years after Geim and Novoselov published their paper, they were awarded the Nobel Prize in Physics. By then, the media were calling graphene “a wonder material,” a substance that, as the Guardian put it, “could change the world.” Academic researchers in physics, electrical engineering, medicine, chemistry, and other fields flocked to graphene, as did scientists at top electronics firms. The U.K. Intellectual Property Office recently published a report detailing the worldwide proliferation of graphene-related patents, from 3,018 in 2011 to 8,416 at the beginning of 2013. The patents suggest a wide array of applications: ultra-long-life batteries, bendable computer screens, desalinization of water, improved solar cells, superfast microcomputers. At Geim and Novoselov’s academic home, the University of Manchester, the British government invested sixty million dollars to help create the National Graphene Institute, in an effort to make the U.K. competitive with the world’s top patent holders: Korea, China, and the United States, all of which have entered the race to find the first world-changing use for graphene.

The progress of a technology from the moment of discovery to transformative product is slow and meandering; the consensus among scientists is that it takes decades, even when things go well. Paul Lauterbur and Peter Mansfield shared a Nobel Prize for developing the MRI; their key work, in 1973, came almost thirty years after scientists first understood the physical reaction that allowed the machine to work. More than a century passed between the moment when the Swedish chemist Jöns Jakob Berzelius purified silicon, in 1824, and the birth of the semiconductor industry.

New discoveries face formidable challenges in the marketplace. They must be conspicuously cheaper or better than products already for sale, and they must be conducive to manufacture on a commercial scale. If a material arrives, like graphene, as a serendipitous discovery, with no targeted application, there is another barrier: the limits of imagination. Now that we’ve got this stuff, what do we do with it?

Aluminum, discovered in minute quantities in a lab in the eighteen-twenties, was hailed as a wonder substance, with qualities never before seen in a metal: it was lightweight, shiny, resistant to rust, and highly conductive. It could be derived from clay (at first, it was called “silver from clay”), and the idea that a valuable substance was produced from a common one lent it a quality of alchemy. In the eighteen-fifties, a French chemist devised a method for making a few grams at a time, and aluminum was quickly adopted for use in expensive jewelry. Three decades later, a new process, using electricity, allowed industrial production, and the price plummeted.

“People said, ‘Wow! We’ve got this silver from clay, and now it’s really cheap and we can use it for anything,’ ” Robert Friedel, a historian of technology at the University of Maryland, told me. But the enthusiasm soon cooled: “They couldn’t figure out what to use it for.” In 1900, the Sears and Roebuck catalogue advertised aluminum pots and pans, Friedel notes, “but you can’t find any of what we’d call ‘technical’ uses.” Not until after the First World War did aluminum find its transformative use. “The killer app is the airplane, which didn’t even exist when they were going all gung ho and gaga over this stuff.”

Some highly touted discoveries fizzle altogether. In 1986, the I.B.M. researchers Georg Bednorz and K. Alex Müller discovered ceramics that acted as radically more practical superconductors. The next year, they won a Nobel, and an enormous wave of optimism followed. “Presidential commissions were thrown together to try to put the U.S. out in the lead,” Cyrus Mody, a history-of-science professor at Rice University, in Houston, says. “People were talking about floating trains and infinite transmission lines within the next couple of years.” But, in three decades of struggle, almost no one has managed to turn the brittle ceramics into a substance that can survive everyday use.
Friedel offered a broad axiom: “The more innovative—the more breaking-the-mold—the innovation is, the less likely we are to figure out what it is really going to be used for.” Thus far, the only consumer products that incorporate graphene are tennis racquets and ink. But many scientists insist that its unusual properties will eventually lead to a breakthrough. According to Geim, the influx of money and researchers has speeded up the usual time line to practical usage. “We started with submicron flakes, barely seen even in an optical microscope,” he says. “I never imagined that by 2009, 2010, people would already be making square metres of this material. It’s extremely rapid progress.” He adds, “Once someone sees that there is a gold mine, then very heavy equipment starts to be applied from many different research areas. When people are thinking, we are quite inventive animals.”
Samsung, the Korea-based electronics giant, holds the greatest number of patents in graphene, but in recent years research institutions, not corporations, have been most active. A Korean university, which works with Samsung, is in first place among academic institutions. Two Chinese universities hold the second and third slots. In fourth place is Rice University, which has filed thirty-three patents in the past two years, almost all from a laboratory run by a professor named James Tour.

Tour, fifty-five, is a synthetic organic chemist, but his expansive personality and entrepreneurial brio make him seem more like an executive overseeing a company’s profitable R. & D. division. A short, dark-eyed man with a gym-pumped body, he greeted me volubly when I visited him recently at his office, in the Dell Butcher building at Rice. “I mean, the stuff is just amazing!” he said, about graphene. “You can’t believe what this stuff can do!” Tour, like most senior scientists, must concern himself with both research and commerce. He has twice appeared before Congress to warn about federal budget cuts to science, and says that his lab has managed to thrive only because he has secured funding through aggressive partnerships with industry. He charges each business he contracts with two hundred and fifty thousand dollars a year; his lab nets a little more than half, with which he can hire two student researchers and pay for their materials for a year. Much of Tour’s work involves spurring the creativity of those researchers (twenty-five of whom are devoted to graphene); they’re the ones who devise the inventions that Tour sells. Graphene has been a boon, he said: “You have a lot of people moving into this area. Not just academics but companies in a big way, from the big electronics firms, like Samsung, to oil companies.”

Tour brings a special energy to the endeavor. Raised in a secular Jewish home in White Plains, he became a born-again Christian as a freshman at Syracuse University. Married, with four grown children, he rises at three-forty every morning for an hour and a half of prayer and Bible study—followed, several times a week, with workouts at the gym—and arrives at the office at six-fifteen. In 2001, he made headlines by signing “A Scientific Dissent from Darwinism,” a petition that promoted intelligent design, but he insists that this reflected only his personal doubts about how random mutation occurs at the molecular level. Although he ends e-mails with “God bless,” he says that, apart from a habit of praying for divine guidance, he feels that religion plays no part in his scientific work.

Tour endorses a scattershot approach for his students’ research. “We work on whatever suits our fancy, as long as it is swinging for the fences,” he said. As chemists, he noted, they are particularly suited to quick experiments, many of which can yield results in a matter of hours—unlike physicists, whose experiments can take months. His lab has published a hundred and thirty-one journal articles on graphene—second only to a lab at the University of Texas at Austin—and his researchers move rapidly to file provisional applications with the U.S. Patent and Trademark Office, which give them legal ownership of an idea for a year before they must file a full claim. “We don’t wait very long before we file,” Tour said; he urges students to write up their work in less than forty-eight hours. “I was just told by a company that has licensed one of our technologies that we beat the Chinese by five days.”

Many of his lab’s recent inventions are designed for immediate exploitation by industry, supplying funds to support more ambitious work. Tour has sold patents for a graphene-infused paint whose conductivity might help remove ice from helicopter blades, fluids to increase the efficiency of oil drills, and graphene-based materials to make the inflatable slides and life rafts used in airplanes. He points out that graphene is the only substance on earth that is completely impermeable to gas, but it weighs almost nothing; lighter rafts and slides could save the airline industry millions of dollars’ worth of fuel a year.
In Tour’s laboratory, a large, high-ceilinged room with tightly configured rows of worktables, a score of young men in white lab coats and safety goggles were working. Tour and I stopped at a bench where Loïc Samuels, a graduate student from Antigua, was making a batch of graphene-based gel, to be used in a scaffold for spinal-cord injuries. “Instead of just having a nonfunctional scaffold material, you have something that’s actually electrically conductive,” Samuels said, as he swirled a test tube in a jeweller’s bath. “That helps the nerve cells, which communicate electrically, connect with each other.” Tour showed me videos of lab rats whose back legs had been paralyzed. In one video, two rats inched themselves along the bottom of a cage, dragging their hind legs. In another video, of rats that had been treated, they walked normally. Tour warned that it takes years before the F.D.A. approves human trials. “But it’s an incredible start,” he said.

In 2010, one of Tour’s researchers, Alexander Slesarev, a Russian who had studied at Moscow State University, suggested that graphene oxide, a form of graphene created when oxygen and hydrogen molecules are bonded to it, might attract radioactive material. Slesarev sent a sample to a former colleague at Moscow State, where students placed the powder in solutions containing nuclear material. They discovered that the graphene oxide binds with the radioactive elements, forming a sludge that could easily be scooped away. Not long afterward, the earthquake and tsunami in Japan created a devastating spill of nuclear material, and Tour flew to Japan to pitch the technology to the Japanese. “We’re deploying it right now in Fukushima,” he told me.

Working at one of the benches was a young man with a round, open face: a twenty-five-year-old Ph.D. student named Ruquan Ye, who last year devised a new way to make quantum dots, highly fluorescent nanoparticles used in medical imaging and in television displays. Usually made in tiny amounts from toxic chemicals, such as cadmium selenide and indium arsenide, quantum dots cost a million dollars for a one-kilogram bottle. Ye’s technique uses graphene derived from coal, which is a hundred dollars a ton.

“The method is simple,” Ye told me. He showed me a vial filled with a fine black powder: anthracite coal that he had ground. “I place this in a solution of acids for one day, then heat the solution on a hot plate.” By tweaking the process, he can make the material emit various light frequencies, creating dots of various colors for differentiated tagging of tumors. The coal-based dots are compatible with the human body—coal is carbon, and so are we—which suggests that Ye’s dots could replace the highly toxic ones used in hospitals worldwide. In a darkened room next to the lab, he shone a black light on several small vials of clear liquid. They fluoresced into glowing ingots: red, blue, yellow, violet.

Tour usually declines to take credit for the discoveries in his lab. “It’s all the students,” he said. “They’re at that age, their twenties, when the synapses are just firing. My job is to inspire them and provide a credit card, and direct them away from rabbit holes.” But he acknowledged that the quantum-dot idea originated with him: “One day, I said, ‘We gotta find out what’s in coal. People have been using this for five thousand years. Let’s see what’s really in it. I bet it’s small domains of graphene’—and, sure enough, it was. It was just sitting right there. A twenty-five-per-cent yield. And, remember, it’s a million dollars a kilogram!”

Tour turned to his lab manager, Paul Cherukuri, and said, “We’re going to be rich someday, aren’t we?” As Cherukuri laughed, Tour added, “I’m going to come in here and count money every day.”

Perhaps the most tantalizing property described in Geim and Novoselov’s 2004 paper was the “mobility” with which electronic information can flow across graphene’s surface. “The slow step in our computers is moving information from point A to point B,” Tour told me. “Now you’ve taken the slow step, the biggest hurdle in silicon electronics, and you’ve introduced a new material and—boom! All of a sudden, you’re increasing speed not by a factor of ten but by a factor of a hundred, possibly even more.”

The news galvanized the semiconductor industry, which was struggling to keep up with Moore’s Law, devised in 1965 by Gordon Moore, a co-founder of Intel. Every two years, he predicted, the density—and thus the effectiveness—of computer chips would double. For five decades, engineers have managed to keep pace with Moore’s Law through miniaturization, packing increasing numbers of transistors onto chips—as many as four billion on a silicon wafer the size of a fingernail. Engineers have further speeded computers by “doping” silicon: introducing atoms from other elements to boost the flow of charge through the lattice. But there’s a limit. Shrink the chip too much, moving its transistors too close together, and silicon stops working. As early as 2017, silicon chips may no longer be able to keep pace with Moore’s Law. Graphene, if it works, offers a solution.
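
A back-of-the-envelope way to see the compounding: the short sketch below projects transistor counts forward on a two-year doubling schedule, starting from the Intel 4004's roughly 2,300 transistors in 1971 (an illustrative assumption, not a figure from this article).

```python
# A minimal sketch of the arithmetic behind Moore's Law: transistor density
# doubling every two years. The 1971 starting point (the Intel 4004's roughly
# 2,300 transistors) is an illustrative assumption, not a figure from the
# article above.
def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count if density doubles every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011, 2017):
    print(y, f"{projected_transistors(y):,.0f}")
```

By 2011 the projection reaches a few billion transistors, consistent with the chip densities described above.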

There’s a problem, though. Semiconductors, such as silicon, are defined by their ability to turn on and off in the presence of an electric field; in logic chips, that switching process generates the ones and the zeros that are the language of computers. Graphene, a semi-metal, cannot be turned off. At first, engineers believed that they could dope graphene to open up a “band gap,” the electrical property that allows semiconductors to act as switches. But, ten years after Geim and Novoselov’s paper, no one has succeeded in opening a gap wide enough. “You’d have to change it so much that it’s no longer graphene,” Tour said. Indeed, those who have managed to create such a gap learned that it kills the mobility, rendering graphene no better than the materials we use now. The result has been a certain dampening of the mood at semiconductor companies.

I recently visited the Thomas J. Watson Research Center, the main R. & D. lab for I.B.M., a major fabricator of silicon semiconductor chips. A half hour north of New York City, the center is housed in a building designed by Eero Saarinen, in 1961. A vast arc of glass with an upswept front awning, it is a kind of monument to the difficulty of predicting the future. Saarinen imagined that transformative ideas would emerge from groups of scientists working in meeting areas, where recliners and coffee tables still sit beside soaring windows. Instead, the scientists spend much of the day hunched over computer screens in their offices: small, windowless dens, which seem to have been created as an afterthought.

In one cramped office, I met Supratik Guha, who is the director of physical sciences at I.B.M. and who sets the company’s strategy for worldwide research. A thoughtful man, as precisely understated as Tour is effusive, Guha lamented the “excessive hype” that has surrounded graphene as a replacement for silicon, and talked mournfully about how the effort to introduce a band gap is, at best, “one major innovation away.” He hastened to add that I.B.M. has not written off graphene. In early 2014, the company announced that its researchers had built the first graphene-based integrated circuit for wireless devices, which could lead to cheaper, more efficient cell phones. But in the quest to make graphene a replacement for silicon, Guha admits, they hold little hope.

For now, I.B.M.’s focus remains the single-walled carbon nanotube, which was developed at Rice by Tour’s mentor and predecessor, Rick Smalley. In the eighties, Smalley and his colleagues discovered that molecules of carbon atoms arrange themselves in a variety of shapes; some were spheres (which he called “buckyballs,” for their resemblance to Buckminster Fuller’s geodesic domes) and others were tubes. When the researchers found that the tubes can act as semiconductors, the material was immediately suggested as a potential replacement for silicon. Along with his collaborators, Smalley was awarded the Nobel Prize in Chemistry in 1996, and he persuaded Rice to build the multimillion-dollar nanotechnology center that Tour later took over. Yet carbon nanotubes have resisted easy exploitation. They have the necessary band gap, but building a chip with them entails maneuvering billions of minute objects into precise locations—a difficulty that has bedevilled scientists for almost two decades. Without quite admitting that he has lost interest in carbon nanotubes, Tour told me that they “never really commercialized well.”

At I.B.M., which has invested more than a decade of research and tens of millions of dollars in the material, there is great reluctance to admit defeat. Guha introduced me to George Tulevski, who helps lead I.B.M.’s carbon-nanotube research program. When I mentioned graphene, he evinced the defensiveness that might be expected of a scientist who has devoted nearly ten years to one recalcitrant technology only to be told about a glamorous new one. “Devices have to turn on and off,” Tulevski said. “If it doesn’t turn off, it just consumes way too much power. There’s no way to turn graphene off. So those electrons are going superfast, and that’s great—but you can’t turn the device off.”

Cyrus Mody, the historian, is equally cautious. “This idea that there’s a form of microelectronics that is theoretically much, much faster than conventional silicon is not new,” he told me. He points to the precedent of the Josephson-junction circuit. In 1962, the British physicist Brian David Josephson predicted that electricity would flow at unprecedented speeds through a circuit composed of two superconductors separated by a “weak link” material. The insight led to a Nobel Prize in Physics—and to dreams of exponentially faster electronics.

“A lot of people thought we’d be switching over to superconducting Josephson-junction microelectronics soon,” Mody said. “But when you actually get down to manufacturing a complex circuit with lots and lots and lots of logic gates, and making lots and lots of such circuits with very large yields, the manufacturing problems really make it impossible to keep going. And I think that’s going to be the hurdle that people haven’t really considered enough when they talk about graphene.”

But other scientists argue that the obstacle is not graphene’s physical properties. “The semiconductor industry knows how to introduce a band gap,” Amanda Barnard, a theoretical physicist who heads Australia’s Commonwealth Scientific and Industrial Research Organization, told me. The problem is business: “We’ve got a global investment on the order of trillions of dollars in silicon, and we’re not going to walk away from that. Initially, graphene needs to work with silicon—it needs to work in our existing factories and production lines and research capabilities—and then we’ll get some momentum going.”

Tour has little sympathy for the semiconductor industry’s disappointment with graphene. “I.B.M. is all bummed out because they’re single-minded,” he said. “They’ve got to make computers—and they’ve got Moore’s Law. But that’s their own fault! What other industry has challenged itself with doubling its performance every eighteen months? In the chemical industry, if we can get a one-per-cent-higher yield in a year we think we’ve done pretty well.”

Perhaps the most expansive thinker about the material’s potential is Tomas Palacios, a Spanish scientist who runs the Center for Graphene Devices and 2D Systems, at M.I.T. Rather than using graphene to improve existing applications, as Tour’s lab mostly does, Palacios is trying to build devices for a future world.

At thirty-six, Palacios has an undergraduate’s reedy build and a gentle way of speaking that makes wildly ambitious notions seem plausible. As an electrical engineer, he aspires to “ubiquitous electronics,” increasing “by a factor of one hundred” the number of electronic devices in our lives. From the perspective of his lab, the world would be greatly enhanced if every object, from windows to coffee cups, paper currency, and shoes, were embedded with energy harvesters, sensors, and light-emitting diodes, which allowed them to cheaply collect and transmit information. “Basically, everything around us will be able to convert itself into a display on demand,” he told me, when I visited him recently. Palacios says that graphene could make all this possible; first, though, it must be integrated into those coffee cups and shoes.

As Mody pointed out, radical innovation often has to wait for the right environment. “It’s less about a disruptive technology and more about moments when the linkages among a set of technologies reach a point where it’s feasible for them to change lots of practices,” he said. “Steam engines had been around a long time before they became really disruptive. What needed to happen were changes in other parts of the economy, other technologies linking up with the steam engine to make it more efficient and desirable.”

For Palacios, the crucial technological complement is an advance in 3-D printing. In his lab, four students were developing an early prototype of a printer that would allow them to create graphene-based objects with electrical “intelligence” built into them. Along with Marco de Fazio, a scientist from STMicroelectronics, a firm that manufactures ink-jet print heads, they were clustered around a small, half-built device that looked a little like a Tinkertoy contraption on a mirrored base. “We just got the printer a couple of weeks ago,” Maddy Aby, a ponytailed master’s student, said. “It came with a kit. We need to add all the electronics.” She pointed to a nozzle lying on the table. “This just shoots plastic now, but Marco gave us these print heads that will print the graphene and other types of inks.”

The group’s members were pondering how to integrate graphene into the objects they print. They might mix the material into plastic or simply print it onto the surface of existing objects. There were still formidable hurdles. The researchers had figured out how to turn graphene into a liquid—no easy task, since the material is severely hydrophobic, which means that it clumps up and clogs the print heads. They needed to first convert graphene to graphene oxide, adding groups of oxygen and hydrogen molecules, but this process negates its electrical properties. So once they printed the object they would have to heat it with a laser. “When you heat it up,” Aby said, “you burn off those groups and reduce it back to graphene.”
When that might be possible was uncertain; she hoped to have the device working in three months. “The laser needs more approval from the powers that be,” she said, glancing balefully at the printer’s mirrored base—the kind perfect for bouncing laser beams all over a room. De Fazio suggested that they cover it with a silicon wafer.

“That could work,” Aby said.
Palacios recognizes that millennial change comes only after modest, strategic increments. He mentioned Samsung, which, according to industry rumor, is planning to launch the first device with a screen that employs graphene. “Graphene is only a small component, used to deliver the current to the display,” he said. “But that’s an exciting first application—it doesn’t have to be the breakthrough that we are all looking forward to. It’s a good way to get graphene into everyone’s focus and, that way, justify more investment.” In the meantime, one of his students, Lili Yu, has been working on a prototype for a flexible screen.

Palacios, in his office, told me that his most ambitious goal is “graphene origami,” in which sheets of the material are folded to mimic organelles, minuscule structures inside a biological cell. “It’s not that different from what nature does with DNA, a material that is a one-dimensional structure that gets folded many, many, many times to make the chromosomes.” If the method works, it could be used to pack huge amounts of computing power into a tiny space. There might be applications in medicine, he says, and in something he calls smart dust—“things that are just as tiny as dust particles but have a functionality to tell us about the pollution in the atmosphere, or if there is a flu virus nearby. These things will be able to connect to your phone or to the embedded displays everywhere, to tell you about things happening around you.”

For the moment, the challenges are more earthbound: scientists are still trying to devise a cost-effective way to produce graphene at scale. Companies like Samsung use a method pioneered at the University of Texas, in which they heat copper foil to eighteen hundred degrees Fahrenheit in a low vacuum, and introduce methane gas, which causes graphene to “grow” as an atom-thick sheet on both sides of the copper—much as frost crystals “grow” on a windowpane. They then use acids to etch away the copper. The resulting graphene is invisible to the naked eye and too fragile to touch with anything but instruments designed for microelectronics. The process is slow, exacting, and too expensive for all but the largest companies to afford.

At Tour’s lab, a twenty-six-year-old postdoc named Zhiwei Peng was waiting to hear from a final reviewer of a paper he had submitted, in which he detailed a way to create graphene with no superheating, no vacuums, and no gases. (The paper was later approved for publication.) Peng had stumbled on his method a few months before. While heating graphene oxide with a laser, he missed the sample, and accidentally heated the material it was sitting on, a sheet of polyimide plastic. Where the laser touched the plastic, it left a black residue. He discovered that the residue was layers of graphene, loosely bonded with oxygen molecules, which—like the residue on Geim’s tape—could easily be exfoliated to single-atom sheets. He showed me how it worked, the laser tracking back and forth across the surface of a piece of polyimide and leaving with each pass a needle-thin deposit of material. Single layers of graphene absorb 2.5 per cent of available light; as layers pile up, they begin to appear black. After a few minutes, Peng had produced a crisp, matte-black lattice—perhaps an inch wide, and worth tens of thousands of dollars. Cherukuri, Tour’s lab manager, pointed at it and said, “That is the race.”
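
The blackening Peng watched for is simple compounding: if each layer absorbs a few per cent of the light, very little gets through a thick enough stack. Here is a rough sketch using the 2.5-per-cent-per-layer figure cited above (an illustration only; the real optics of stacked graphene are more involved).

```python
# A rough illustration of why stacked graphene reads as black: taking the
# article's figure of 2.5 per cent absorption per single layer, the fraction
# of light transmitted through n layers falls off geometrically.
def transmitted_fraction(n_layers, absorption_per_layer=0.025):
    """Fraction of incident light that passes through n stacked layers."""
    return (1 - absorption_per_layer) ** n_layers

for n in (1, 10, 50, 100):
    print(f"{n:>3} layers: {transmitted_fraction(n):.1%} transmitted")
```

By a hundred layers, less than a tenth of the incident light makes it through, which is why the deposit appears matte black.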

The tech-research firm Gartner uses an analytic tool that it calls the Hype Cycle to help investors determine which discoveries will make money. A graph of the cycle resembles a cursive lowercase “r,” in which a discovery begins with a Technology Trigger, climbs quickly to a Peak of Inflated Expectations, falls into the Trough of Disillusionment, and, as practical uses are found, gradually ascends to the Plateau of Productivity. The implication is not (or not only) that most discoveries don’t behave as expected; it’s that a new thing typically becomes useful sometime after the publicity fades.
Nearly every scientist I spoke with suggested that graphene lends itself especially well to hype. “It’s an electrically useful material in a time when we love electrical devices,” Amanda Barnard told me. “If it had come along at a time when we were not so interested in electronic devices, the hype might not have been so disproportionate. But then there wouldn’t have been the same appetite for investment.” Indeed, Henry Petroski, a professor at Duke and the author of “To Engineer Is Human,” says that hype is necessary to attract development dollars. But he offers an important proviso: “If there is too much hype at the discovery stage and the product doesn’t live up to the hype, that’s one way of its becoming disappointing and abandoned, eventually.”

Guha, at I.B.M., believes that the field of nanotechnology has been oversold. “Nobody stands to benefit from giving the bad news,” he told me. “The scientist wants to give the good news, the journalist wants to give the good news—there is no feedback control to the system. In order to develop a technology, there is a lot of discipline that needs to go in, a lot of things that need to be done that are perhaps not as sexy.”

Tour concurs, and admits to some complicity. “People put unrealistic time lines on us,” he told me. “We scientists have a tendency to feed that—and I’m guilty of that. A few years ago, we were building molecular electronic devices. The Times called, and the reporter asked, ‘When could these be ready?’ I said, ‘Two years’—and it was nonsense. I just felt so excited about it.”

The impulse to overlook obvious difficulties to commercial development is endemic to scientific research. Geim’s paper, after all, mentioned the band-gap problem. “People knew that graphene is a gapless semiconductor,” Amirhasan Nourbakhsh, an M.I.T. scientist specializing in graphene, told me. “But graphene was showing extremely high mobility—and mobility in semiconductor technology is very important. People just closed their eyes.”

According to Friedel, the historian, scientists rely on the stubborn conviction that an obvious obstacle can be overcome. “There is a degree of suspension of disbelief that a lot of good research has to engage in,” he said. “Part of the art—and it is art—comes from knowing just when it makes sense to entertain that suspension of disbelief, at least momentarily, and when it’s just sheer fantasy.” Lord Kelvin, famous for installing telegraph cables on the Atlantic seabed, was clearly capable of overlooking obstacles. But not always. “Before his death, in 1907, Lord Kelvin carefully, carefully calculated that a heavier-than-air flying machine would never be possible,” Friedel says. “So we always have to have some humility. A couple of bicycle mechanics could come along and prove us wrong.”

Recently, some of the most exciting projects from Tour’s lab have encountered obstacles. An additive to fluids used in oil drilling, developed with a subsidiary of the resource company Schlumberger, promised to make drilling more efficient and to leave less waste in the ground; instead, barrels of the stuff decomposed before they could be used. The company that hired Tour’s group to make inflatable slides and rafts for aircraft found a cheaper lab. (Tour was philosophical about it, in part because he knew he’d still get some money from the contract. “They’ll have to come back and get the patent,” he said.) The technology for the Fukushima-reactor cleanup stalled when scientists in Japan couldn’t get the powder to work, and the postdoc who developed the method was unable to get a visa to go assist them. “You’ve got to teach them how it’s done,” Tour said. “You want the pH right.”

Tour’s optimism for graphene remains undimmed, and his group has been working on further inventions: superfast cell-phone chargers, ultra-clean fuel cells for cars, cheaper photovoltaic cells. “What Geim and Novoselov did was to show the world the amazingness of graphene, that it had these extraordinary electrical properties,” Tour said. “Imagine if one were God. Here, He’s given us pencils, and all these years scientists are trying to figure out some great thing, and you’re just stripping off sheets of graphene as you use your pencil. It has been before our eyes all this time!”

Scans May Spot People Who’ll Benefit From Surgery for OCD .


Though most patients with obsessive-compulsive disorder (OCD) can be successfully treated with medication and therapy, between 10 percent and 20 percent have a form of the illness that doesn’t respond to standard care, experts say.

However, patients with this so-called “refractory OCD” do have hope in the form of a type of brain surgery that disables certain brain networks believed to contribute to OCD.

The challenge: to distinguish patients most likely to benefit from the surgery, known as “dorsal anterior cingulotomy,” from those who probably won’t.

Now new research, reported in the Dec. 23 online issue of JAMA Psychiatry, suggests that doctors may be able to spot candidates for the surgery by looking at a key structure in the targeted brain region.

In the study, investigators conducted MRI scans of 15 refractory OCD patients, all of whom had undergone cingulotomy surgery.

The team, led by Garrett Banks of Columbia University in New York City, found that only about half (8 patients) had responded positively to the procedure, which involves the short-circuiting of problematic brain networks.

9 Secrets To Lasting Weight Loss


Armed with these simple tips, you can painlessly shed pounds.

Mindful eating may not be a mainstream weight-loss tactic (yet), but that doesn’t mean it’s unsupported by science. In a 2013 Kent State University study, researchers found that mindfulness strategies — for example, paying close attention to the taste and smell of food, and attending to hunger and fullness — significantly increased people’s satiety after a meal. Another study showed that dieters who still practiced mindfulness techniques after completing a weight-loss program continued dropping weight. So how can you bring these skills to the table to drop pounds? Start with these secrets to success, gleaned from the new book 20 Pounds Younger.

Eliminate distractions.

We live in a world where the ability to multitask is considered résumé-worthy. But eating while working, answering e-mails, or doing other tasks can make you consume more than you need. A study in the American Journal of Clinical Nutrition found that people who played solitaire during lunch felt less full than undistracted eaters and ate significantly more when offered cookies just half an hour later. So make your meal strictly about eating: Banish the TV, iPad, smartphone, or book from the table—period.

Pay attention to portions.

People who eat mindlessly often prefer to remain in a state of ignorance, with no knowledge of serving sizes or the number of calories in foods. But in order to give your body what it needs, you need to face the facts. “How many M&Ms is a portion? How many chips?” says Lesley Lutes, PhD, an associate professor of psychology at East Carolina University. “Take it out, put it on your plate.” In her experience, people are often surprised — in a good way. “They thought a portion was just three or four chips,” she says. “They felt so guilty about what they were eating that they’d just stick their hand in the bag and keep eating. But we want you to celebrate food.” The first step? Understanding — and consciously choosing — what you eat.

Put your food on display.

When you eat straight out of the bag, what happens? (1) You don’t stop eating until the bag is empty, and (2) you have no idea how much food you actually shoveled in. “People consume a lot more calories if they’re not focused on the food,” says Lutes. “Seeing the food — and seeing the portion size — actually helps you feel more full.” So regardless of how much or how little you’re eating, use a plate or a bowl. That way, your mind will register that you’re eating — and you’ll expand the sensory experience (and pleasure) of your meal. “We eat first with our eyes,” says Katie Rickel, PhD, a clinical psychologist and weight-loss expert who works at a weight-management facility in Durham, North Carolina. “We have to gain some pleasure from the visual appearance of food — otherwise, watching Food Network shows would be totally boring.” (This is also why we like to post our meals on Instagram.) Another trick that helps some people: Leave a bit of food on your plate. By conditioning yourself to stop eating before the empty plate signals that you’re “full,” you’ll gain the confidence that you can overcome visual cues to keep eating.

Appreciate your food.

I know it seems hokey, but before or during your meal, take a moment to think about where your food came from — for example, “This piece of fruit started as a seed, which was planted by a farmer or blown by the wind. Sunlight gave that seed the energy to grow, then someone tended the plant as it matured, harvested the fruit, and delivered it to me.” “This makes the experience more whole, rather than just stuffing food into your mouth without thinking about it,” says Rickel. Plus, it’s much easier to trace the path of “real” food than it is the heavily processed stuff, which may actually be a little gross to think about in too much detail. “This could probably help you choose cleaner, more whole foods,” she says.

Start off eating slowly.

You probably think eating mindfully means eating at a snail’s pace. But that’s only true in the beginning. “For teaching purposes, we slow it down,” says Jennifer Daubenmier, PhD, an assistant professor at the Osher Center for Integrative Medicine at the University of California at San Francisco. “But with practice, you don’t necessarily have to eat in slow motion.” As Rickel points out, “If you took every single bite of every single meal mindfully, then you wouldn’t get anything else done during the day.” So sure, when you’re learning to be mindful, it’s helpful to slow down your shoveling. But eventually, tuning in to the experience of eating will become so second nature that you won’t have to dine at a grandma pace. One easy way to help you keep a reasonable pace: Put your utensils down and your hands in your lap between bites.

Pretend you’re a food critic.

Your job isn’t just to hoover down the food on your plate — you have to take note of the presentation, the nuances of every flavor, and how satisfying each item is. “When you bite into a grape, all of these juices come out — and there are sensations you’d totally miss if you just stuffed a handful of grapes into your mouth,” says Rickel. “Try to follow the first bite down your esophagus and into your belly, and take a moment to notice whether you feel one grape more energetic.” In mindful eating workshops, people first practice this with just three or four raisins. “That really brings people’s attention down to their sensory experience,” says Daubenmier. “They really notice the texture, the smell, and the thoughts that come up.”

Observe your inner experience.

You can drag out your meal for two hours, but all of that extra time doesn’t mean a thing if you aren’t paying attention to what’s happening inside your body and mind. To truly be mindful, you need to take note of every sensation and urge: How do I know when I’m hungry? What sensations do I experience? What does it feel like when I’m emotionally, but not physically, hungry? How do I know when I’m full?

Eat how much you need — not how much you think you should. 

A lot of factors probably contribute to the size of your meals: how much you put on your plate, what others around you are eating, and — if you’re dieting — guilt about what you think you should do. But the truth is, only your body can tell you how much you need to consume. In mindful eating programs, “people think the idea is to get them to stop after one bite,” says Lutes. “But we want you to eat what you want, but be mindful of it, actually enjoy it, and not feel guilty about it.” In other words, if your body’s signals are telling you to continue eating, then you have no reason to feel bad about doing so.

Try to be mindful every time you eat. 

You can eat mindfully at a buffet, a birthday party, or during Thanksgiving dinner. The key: Let your friends or family members do the talking at the start of the meal, buying you a few moments to take a mindful bite or two. Mini meditations are perhaps the easiest way to put this into practice. Before you eat, analyze your level of hunger and any emotions you’re bringing to the table, and take a few deep breaths to help you focus on the food in front of you. (Some people find it helpful to close their eyes, but you don’t have to.) About halfway through the meal, check in again, noticing the decrease in hunger and increase in fullness you’re experiencing. This is a good time to answer the questions, “Do I really need to keep eating?” and “Am I satisfied?”

Music Education Improves Students’ Academic Performance, But Active Participation Is Required


More evidence links music lessons to improved grades. 

Teachers have long observed the effect that music education can have on students, but recent research is showing just how integral learning a musical instrument is to a child’s development. Using the most advanced brain analysis technology, Dr. Nina Kraus from Northwestern University has been able to show precisely what happens to “the brain on music.”

Music and language have a special relationship that Kraus and her team are only beginning to understand. In the study, which appears online in the open-access journal Frontiers in Psychology, the team showed that exposure to music lessons physically stimulated the brain and changed it for the better. However, simply being exposed to music education doesn’t seem to be sufficient; students have to be actively involved as well.

The Harmony Project, which provides after-school music education programs in underserved communities across Los Angeles, allowed Kraus and her team to study the brains of some of its students to gain data for their study. It was important to use underprivileged students in their research because these children generally have lower language skills. This is because growing up in a disadvantaged environment has been linked to noisier settings, linguistic deprivation, and not hearing as many complex words, sentences, and concepts, Kraus explained in a press release. These factors may cause the areas of the brain related to language to become weaker.

The team looked at the neural differences of the children participating in the Harmony Project by using electrode wires with button sensors to capture the brain’s responses. Although Kraus’s past research had shown that music education led to greater gains in speech processing, and thus reading, in the current project she found that the type of music education offered also influenced how large those gains would be. For example, children who learned how to play a musical instrument showed stronger language skills than children who took music appreciation courses.

“Even in a group of highly motivated students, small variations in music engagement — attendance and class participation — predicted the strength of neural processing after music training,” Kraus said.

The team witnessed how music education helped the children to better distinguish between similar speech sounds.

“Speech processing efficiency is closely linked to reading, since reading requires the ability to segment speech strings into individual sound units,” Kraus said. “A poor reader’s brain often processes speech suboptimally.”

Those involved in the Harmony Project didn’t need Kraus to tell them how much music can improve a child’s academic skills. According to the press release, although the project works in neighborhoods with an average high school dropout rate of 50 percent, around 90 percent of children who participate in the Harmony Project go on to college.

Although the results are exciting, Kraus cautions that music education “isn’t a quick fix.” Her research shows that the child must show dedication to his studies. Fortunately, according to the testimony of the project’s students, studying music is far more of a pleasure than a chore.

Researchers take ‘first baby step’ toward anti-aging drug


Researchers could be closing in on a “fountain of youth” drug that can delay the effects of aging and improve the health of older adults, a new study suggests.

Seniors received a significant boost to their immune systems when given a drug that targets a genetic signaling pathway linked to aging and immune function, researchers with the drug maker Novartis report.

The experimental medication, a version of the drug rapamycin, improved the seniors’ response to a flu vaccine by 20 percent, researchers said in the current issue of Science Translational Medicine.

The study is a “watershed” moment for research into the health effects of aging, said Dr. Nir Barzilai, director of the Institute for Aging Research at Albert Einstein College of Medicine in New York City.

Rapamycin belongs to a class of drugs known as mTOR inhibitors, which have been shown to counteract aging and aging-related diseases in mice and other animals.

Barzilai, who wasn’t involved in the study, said this is one of the first studies to show that these drugs also can delay the effects of aging in humans.

“It sets the stage for using this drug to target aging, to improve everything about aging,” Barzilai said. “That’s really going to be for us a turning point in research, and we are very excited.”

The mTOR genetic pathway promotes healthy growth in the young. But it appears to have a negative effect on mammals as they grow older, said study lead author Dr. Joan Mannick, executive director of the New Indications Discovery Unit at the Novartis Institutes for Biomedical Research.

When drugs like rapamycin are used to inhibit the effects of the mTOR pathway in mice, they “seem to extend lifespan and delay the onset of aging-related illnesses,” Mannick said.

Mannick and her colleagues decided to investigate whether a rapamycin-like drug could reverse the natural decline that elderly people experience in their ability to fight off infections.

In the clinical trial, more than 200 people 65 and older randomly received either the experimental drug or a placebo for several weeks, followed by a dose of flu vaccine.

Flu is particularly hard on seniors, with people 65 and older accounting for nine out of 10 influenza-related deaths in the United States, according to background information provided by the researchers.

Those who received the experimental version of rapamycin developed about 20 percent more antibodies in response to the flu vaccine, researchers found. Even low doses of the medication produced an improved immune response.

The researchers also found that the group given the drug generally had fewer white blood cells associated with age-related immune decline.

Mannick called this study the “first baby step,” and was reluctant to say whether it could lead to immune-boosting medications for the elderly.

“It’s very important to point out that the risk/benefit of mTOR inhibitors should be established in clinical trials before anybody thinks this could be used to treat aging-related conditions,” she said.

Barzilai was more enthusiastic. Research such as this could revolutionize the way age-related illnesses are treated, he said.

“Aging is the major risk factor for the killers we’re afraid of,” he said, noting that people’s risk for heart disease, cancer and other deadly illnesses increases as they grow older. “If the aging is the major risk, the way to extend people’s lives and improve their health is to delay aging.”

Until science focuses on aging itself, “you’re just exchanging one disease for another,” Barzilai said. For example, he said, a person receiving cholesterol-lowering treatment to prevent heart disease likely will instead fall prey to cancer or Alzheimer’s disease.
