Google isn’t the company that we should have handed the Web over to


Analysis: Microsoft adopting Chromium puts the Web in a perilous place.


With Microsoft’s decision to end development of its own Web rendering engine and switch to Chromium, control over the Web has functionally been ceded to Google. That’s a worrying turn of events, given the company’s past behavior.

Chrome itself has about 72 percent of the desktop-browser market share. Edge has about 4 percent. Opera, based on Chromium, has another 2 percent. The abandoned, no-longer-updated Internet Explorer has 5 percent, and Safari—only available on macOS—about 5 percent. When Microsoft’s transition is complete, we’re looking at a world where Chrome and Chrome-derivatives take about 80 percent of the market, with only Firefox, at 9 percent, actively maintained and available cross-platform.

The mobile story has stronger representation from Safari, thanks to the iPhone, but overall tells a similar story. Chrome has 53 percent directly, plus another 6 percent from Samsung Internet, another 5 percent from Opera, and another 2 percent from Android browser. Safari has about 22 percent, with the Chinese UC Browser sitting at about 9 percent. That’s two-thirds of the mobile market going to Chrome and Chrome derivatives.

In terms of raw percentages, Google won’t have quite as big a lock on the browser space as Microsoft did with Internet Explorer—Internet Explorer 6 peaked at around 80 percent, and all versions of Internet Explorer together may have reached as high as 95 percent. But Google’s reach is, in practice, much greater: not only is the Web a substantially more important place today than it was in the early 2000s, but also there’s a whole new mobile Web that operates in addition to the desktop Web.

Embrace and extend, Mountain View style

Google is already a company that exercises considerable influence over the direction of the Web’s development. By owning both the most popular browser, Chrome, and some of the most-visited sites on the Web (in particular the namesake search engine, YouTube, and Gmail), Google has on a number of occasions used its might to deploy proprietary tech and put the rest of the industry in the position of having to catch up.

Back in 2009, Google introduced SPDY, a proprietary replacement for HTTP that addressed what Google saw as performance shortcomings in the existing HTTP/1.1 protocol. Google wasn’t exactly wrong in its assessment, but SPDY was something of a unilateral act, with Google responsible for the design and functionality. SPDY was adopted by other browsers and Web servers over the next few years, and Google’s protocol became widespread.

SPDY was subsequently used as the basis for HTTP/2, a major revision to the HTTP protocol developed by the Internet Engineering Task Force (IETF), the consortium that develops Internet protocols with members from across the industry. While SPDY did initiate the HTTP/2 work, the protocol finally delivered in 2015 was extensively modified from Google’s initial offering.
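
To make that negotiation concrete, here is a minimal sketch, not drawn from the article's reporting, of how a client and server agree on HTTP/2 today or fall back to HTTP/1.1: the ALPN extension in the TLS handshake. SPDY was negotiated the same general way through ALPN's predecessor, NPN. The host name is just an example of an HTTPS site that speaks HTTP/2.

```ts
// Minimal ALPN sketch (Node.js): offer HTTP/2 and HTTP/1.1, see what the
// server picks. Google properties have long supported 'h2'.
import * as tls from 'tls';

const socket = tls.connect(
  {
    host: 'www.google.com',
    port: 443,
    servername: 'www.google.com',
    ALPNProtocols: ['h2', 'http/1.1'], // the protocols this client offers
  },
  () => {
    // Prints 'h2' if the server agreed to HTTP/2, 'http/1.1' otherwise.
    console.log('negotiated protocol:', socket.alpnProtocol);
    socket.end();
  }
);
```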

The same story is repeating with HTTP/3. In 2012, Google announced a new experimental protocol, QUIC, again intended to address performance issues with the existing HTTP/1.1 and HTTP/2. Google deployed QUIC, and Chrome would use it when communicating with Google properties. Again, QUIC became the basis for the IETF’s HTTP development, and HTTP/3 uses a derivative of QUIC that is modified from, and incompatible with, Google’s initial work.

It’s not just HTTP that Google has repeatedly worked to replace. Google AMP (“Accelerated Mobile Pages”) is a cut-down form of HTML combined with Google-supplied JavaScript, designed to make mobile Web content load faster. This year, Google said that it would try to build AMP with Web standards and introduced a new governance model that gave the project much wider industry oversight.

Bad actor?

This is a company that, time and again, has tried to push the Web in a Google-controlled, proprietary direction to improve the performance of Google’s online services when used in conjunction with Google’s browser, consolidating Google’s market position and putting everyone else at a disadvantage. Each time, pushback has come from the wider community, and so far, at least, the result has been industry standards that wrest control from Google’s hands. This pattern alone might provoke doubts about the wisdom of handing effective control of the Web’s direction to Google, but at least a case could be made that, in the end, the right thing was done.

But other situations have had less satisfactory resolutions. YouTube has been a particular source of problems. Google controls a large fraction of the Web’s streaming video, and the company has, on a number of occasions, made changes to YouTube that make it worse in Edge and/or Firefox. Sometimes these changes have improved the site experience in Chrome, but even that isn’t always the case.

A person claiming to be a former Edge developer has today described one such action. For no obvious reason, Google changed YouTube to add a hidden, empty HTML element that overlaid each video. This element disabled Edge’s fastest, most efficient hardware-accelerated video decoding. It hurt Edge’s battery-life performance, taking it below Chrome’s. The change didn’t improve Chrome’s performance and didn’t appear to serve any real purpose; it just hurt Edge, allowing Google to claim that Chrome’s battery life was superior to Edge’s. Microsoft asked Google to remove the element, to no avail.
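
To illustrate the mechanism, here is a hypothetical sketch of the kind of overlay described; the account above doesn't publish YouTube's actual markup, so the element and styles below are assumptions. The underlying idea is that a video left unobstructed can be composited through the browser's dedicated hardware overlay path, while stacking any element on top of it, even an empty and invisible one, can push rendering back onto a slower, more power-hungry path.

```ts
// Hypothetical reconstruction: an empty, invisible element layered over a <video>.
const video = document.querySelector('video');
if (video && video.parentElement) {
  const overlay = document.createElement('div');
  overlay.style.position = 'absolute';
  overlay.style.top = '0';
  overlay.style.left = '0';
  overlay.style.width = '100%';
  overlay.style.height = '100%';
  overlay.style.background = 'transparent'; // nothing for the user to see
  overlay.style.pointerEvents = 'none';     // clicks still reach the video
  video.parentElement.style.position = 'relative';
  video.parentElement.appendChild(overlay); // video is no longer unobstructed
}
```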

The latest version of Edge addresses the YouTube issue and restores Edge’s performance. But when Microsoft talks of having to do extra work to ensure EdgeHTML is compatible with the Web, this is the kind of work it has been forced to do.

As another example, YouTube uses a feature called HTML imports to load scripts. HTML imports haven’t been widely adopted by developers or browsers, and ECMAScript modules are expected to serve the same role. But they are available in Chrome, and YouTube uses them. For Firefox and Edge, YouTube instead sends a JavaScript implementation of HTML imports, which carries significant performance overhead. The result? YouTube pages that load in a second in Chrome take many seconds to load in other browsers.
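
For context, a hedged sketch of the two loading mechanisms; the paths and function names below are illustrative, not YouTube's real resources. An HTML import was declared in markup and, in browsers without native support, emulated by a JavaScript polyfill that fetches and parses the imported document itself, which is where the extra load time came from. ES modules, the standardized alternative, load natively in current engines.

```ts
// An HTML import was declared in markup (Chrome-only at the time):
//   <link rel="import" href="/components/player.html">
// Polyfill loaders typically feature-detected native support like this:
const supportsHtmlImports = 'import' in document.createElement('link');
console.log('native HTML imports:', supportsHtmlImports);

// The standardized alternative: an ES module loaded on demand, no polyfill.
async function loadPlayer(): Promise<void> {
  const modulePath = '/components/player.js'; // hypothetical module path
  const player = await import(modulePath);
  player.initialize();
}
```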

These actions may not be deliberate on the part of Google—it’s possible that the company simply doesn’t care about other browsers, rather than actively trying to hinder them. But even an attitude of “Google first, who cares about the rest?” is not the kind of thing that we should want from a company trusted with so much control over the Web.

The strong get stronger; the weak get weaker

Microsoft’s decision both gives Google an ever-larger slice of the pie and weakens Microsoft’s position as an opposing voice. Even with Edge and Internet Explorer holding a diminished share of the market, Microsoft has retained some sway; its IIS Web server commands a significant share of the Web, and there’s still value in having new protocols built into Windows, since doing so makes them more accessible to software developers.

But now, Microsoft is committed to shipping and supporting whatever proprietary tech Google wants to develop, whether Microsoft likes it or not. Microsoft has been very explicit that its adoption of Chromium is to ensure maximal Chrome compatibility, and the company says that it is developing new engineering processes to ensure that it can rapidly integrate, test, and distribute any changes from upstream—it doesn’t ever want to be in the position of substantially lagging behind Google’s browser.

But this commitment ties Microsoft’s hands: the company can’t ever meaningfully fork Chromium and diverge from its development path, because doing so would jeopardize that compatibility and increase the cost and complexity of incorporating Google’s changes. Even if Google takes Chromium in a direction that Microsoft opposes, Microsoft will have little option but to follow along.

Web developers have historically only bothered with such trivia as standards compliance and testing their pages in multiple browsers when the market landscape has forced them to. This is what made Firefox’s early years so painful: most developers tested in Internet Explorer and nothing else, leaving Firefox compatibility to chance. As Firefox, and later Chrome, rose to challenge Internet Explorer’s dominance, cross-browser testing became essential, and standards adherence became more valuable.

Two costs more than three or four

When developers test and design in only a single browser, adding a second into the mix can be relatively expensive and complicated; that second browser will typically reveal unwitting dependencies on the particular behavior of the first browser, requiring lots of changes to stick more closely to the standards. But adding a third tends to be cheaper, and a fourth cheaper still. Moving from one browser to two already means that the worst of the non-standard code and dependence on implementation quirks must be addressed.

With Chrome, Firefox, and Edge all as going concerns, a fair amount of discipline is imposed on Web developers. But with Edge removed and Chrome taking a large majority of the market, making the effort to support Firefox becomes more expensive.

Mozilla CEO Chris Beard fears that this consolidation could make things harder for Mozilla—an organization that exists to ensure that the Web remains a competitive landscape, one that offers meaningful options and isn’t subject to any one company’s control. Mozilla’s position is already tricky, dependent as it is on Google’s funding. But Mozilla is doing important, desirable work: Firefox has improved by leaps and bounds over the last year, and the development of the Rust language, which aims to pair native-code performance with safe memory handling, continues to show promise.

By relegating Firefox to being the sole secondary browser, Microsoft has just made it that much harder to justify making sites work in Firefox. The company has made designing for Chrome and ignoring everything else a bit more palatable, and Mozilla’s continued existence is now that bit more marginal. Microsoft’s move puts Google in charge of the direction of the Web’s development. Google’s track record shows it shouldn’t be trusted with such a position.

Google Removes Egg from Salad Emoji to Make It ‘More Inclusive’ for Vegans


Hi. Sorry. This is dumb. Bye.

How do vegans survive? Not only must they avoid all meat and dairy, but they cannot so much as glance at a virtual representation of an animal product. No cheese, no honey, no leather shoes, and absolutely no using an emoji with an egg [gasp] in it.

Well, thank God Google has recognised the hugely unethical nature of pixels in the form of an egg. Yesterday, the tech company announced that it would be removing the hard-boiled egg from its salad emoji, thus allowing Android-owning vegans to regain the use of their phones.

According to The Verge, Google plans to introduce 157 new emojis to the Android P Beta 2 when it’s released later this year. It will also be editing the salad emoji as part of a push for, er, “diversity.”

In a tweet, Jennifer Daniel, head of Google’s Expression design team, explained the change. “There’s big talk about inclusion and diversity at Google,” she wrote. “So if you need any evidence of Google is making this priority, may I direct your attention to the emoji—we’ve removed the egg in Android P Beta 2, making this a more inclusive vegan salad.”

This isn’t the first time Google has amended an emoji after great emotional turmoil. Late last year, the company finally responded to months of complaints about the placement of cheese in its burger emoji, redesigning it to go over the burger, instead of over the bun.

What would we do without you, Google?

Google’s new AI algorithm predicts heart disease by looking at your eyes


Experts say it could provide a simpler way to predict cardiovascular risk

The algorithm could allow doctors to predict cardiovascular risk more simply by using scans of the retina.

Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.

The algorithm potentially makes it quicker and easier for doctors to analyze a patient’s cardiovascular risk, as it doesn’t require a blood test. But the method will need to be tested more thoroughly before it can be used in a clinical setting. A paper describing the work was published today in the journal Nature Biomedical Engineering, although the research was also shared before peer review last September.

Luke Oakden-Rayner, a medical researcher at the University of Adelaide who specializes in machine learning analysis, told The Verge that the work was solid, and shows how AI can help improve existing diagnostic tools. “They’re taking data that’s been captured for one clinical reason and getting more out of it than we currently do,” said Oakden-Rayner. “Rather than replacing doctors, it’s trying to extend what we can actually do.”

To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).
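
As a rough illustration of the shape of that approach (and only that; Google's actual model, described in the paper, is far larger and trained on the roughly 300,000-patient dataset), here is a minimal TensorFlow.js sketch of a convolutional network that regresses a single risk factor, assumed here to be blood pressure, directly from fundus images. Layer sizes and names are placeholders.

```ts
import * as tf from '@tensorflow/tfjs';

// Build a small convolutional regressor: fundus image in, risk factor out.
function buildRiskFactorModel(): tf.Sequential {
  const model = tf.sequential();
  // Stacked convolutions learn local visual features of the retina
  // (vessel width, tortuosity, and so on).
  model.add(tf.layers.conv2d({
    inputShape: [256, 256, 3], filters: 16, kernelSize: 3, activation: 'relu',
  }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.conv2d({ filters: 32, kernelSize: 3, activation: 'relu' }));
  model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
  model.add(tf.layers.flatten());
  model.add(tf.layers.dense({ units: 64, activation: 'relu' }));
  // Single output: the predicted risk factor (e.g., blood pressure in mmHg).
  model.add(tf.layers.dense({ units: 1 }));
  model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });
  return model;
}

// Training would then pair eye scans with the known value from each patient's
// medical record, e.g.:
//   await buildRiskFactorModel().fit(fundusImages, bloodPressures,
//                                    { epochs: 10, batchSize: 32 });
```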

Although the idea of looking at your eyes to judge the health of your heart sounds unusual, it draws from a body of established research. The rear interior wall of the eye (the fundus) is chock-full of blood vessels that reflect the body’s overall health. By studying their appearance with a camera and microscope, doctors can infer things like an individual’s blood pressure, age, and whether or not they smoke, which are all important predictors of cardiovascular health.

Two images of the fundus, or interior rear of your eye. The one on the left is a regular image; the one on the right shows how Google’s algorithm picks out blood vessels (in green) to predict blood pressure.

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.

Alun Hughes, professor of Cardiovascular Physiology and Pharmacology at London’s UCL, said Google’s approach sounded credible because of the “long history of looking at the retina to predict cardiovascular risk.” He added that artificial intelligence had the potential to speed up existing forms of medical analysis, but cautioned that the algorithm would need to be tested further before it could be trusted.

For Google, the work represents more than just a new method of judging cardiovascular risk. It points the way toward a new AI-powered paradigm for scientific discovery. While most medical algorithms are built to replicate existing diagnostic tools (like identifying skin cancer, for example), this algorithm found new ways to analyze existing medical data. With enough data, it’s hoped that artificial intelligence can then create entirely new medical insight without human direction. It’s presumably part of the reason Google has created initiatives like its Project Baseline study, which is collecting exhaustive medical records of 10,000 individuals over the course of four years.

For now, the idea of an AI doctor churning out new diagnoses without human oversight is a distant prospect — most likely decades, rather than years, in the future. But Google’s research suggests the idea isn’t completely far-fetched.

THE RACE FOR AR GLASSES STARTS NOW


 

Though the Next Big Thing won’t appear for a while, we know pretty much what it will look like: a lightweight, always-on wearable that obliterates the divide between the stuff we see on screens and the stuff we see when we look up from our screens. “We know what we really want: AR glasses,” said Oculus’s chief scientist Michael Abrash at Facebook’s F8 developers’ conference in April. “They aren’t here yet, but when they arrive they’re going to be the great transformational technologies of the next 50 years.” He predicted that in the near future, “instead of carrying stylish smartphones everywhere, we’ll be wearing stylish glasses.” And he added that “these glasses will offer AR, VR, and everything in between, and we’ll wear them all day and we’ll use them in every aspect of our lives.”

That may seem surprising to those still thinking of mixed-reality wearables as a series of over-promises: Google Glass’s humiliating stumble; Snapchat’s low-selling Spectacles; Magic Leap’s epically late headset; and, um, the disappointing initial sales of Oculus’s own virtual-reality headsets. But you can write those off as baby steps, because all the big companies are going long on augmented reality. In 2018, you’ll see the building blocks on your mobile phones. These are just the earliest attempts at a new technology platform that will eventually have its unveiling as a mainstream, must-have wearable.

Indeed, an augmented reality Manhattan Project has become one of those things—like streaming video entertainment, search engines, and a phalanx of Washington lobbyists—that every self-respecting tech oligarch must have these days. And as far as the future is concerned, tech’s Big Five believe it may be the most important. There’s been an increasing consensus that artificial reality—the technology that tricks the senses into seeing, hearing, and interacting with digital objects and scenarios as if they were as substantial as the furniture we sit on and the people across from us—will become the Fourth Platform in computing. Each of the previous three uber-platforms, arriving roughly every 15 years, has been an epochal event, offering an opportunity to reshuffle the power rankings of tech companies. And each threatened the existence of industry leaders blinded by the false sunshine of the Innovator’s Dilemma, which holds that the winners in one round of tech progress are too locked into their victories to bet on the next wave.

In the early eighties, personal computing destroyed mini-computer companies and launched Apple and Microsoft. The mid-nineties saw the explosion of the internet, bushwhacking endless industries and spawning giants like Google and Amazon. The 2007 iPhone kicked off the mobile era; companies that went all-in thrived, while those that came late to mobile (yes, I mean you, Microsoft) suffered.

In the short run, we’re stuck with a landscape ruled by five behemoths (Microsoft has recovered enough to join the cabal). These companies appear so powerful that it would be easy to miss how fragile their futures may be. A new technology platform always forces a new round of musical chairs as the companies that are the first to recognize it and build the tools to support it dominate a new wave. Augmented reality is that new platform. (Some even call it the final computing platform, but that’s really reserved for the inevitable brain implant, which is easily another 15 years out.)

Not every company working on post-reality glasses shares an identical vision; some have differing views of how immersive it should be. But all have quietly adopted the implicit assumption that a persistent, wearable artificial reality is the next big thing. The pressure of the competition has forced them to begin releasing interim products, now.

  • When something doesn’t work, the companies can’t afford to give up. Look what happened at Google. One of the most humiliating missteps in its history was the botching of Glass, which started as a geek passion project and wound up as an object of ridicule. Instead of burying the incident, the company persisted. As I reported last summer, Glass is getting great reviews from serious businesses, like manufacturing and health care, giving Google’s parent company Alphabet an apparent edge in actually field-testing the glasses concept.

Microsoft might beg to differ. It has already released its own device, a more immersive headset called HoloLens. And newer companies dedicated to augmented reality, such as Magic Leap (powered in part by a $350 million Google investment), are pushing the limits of the current science. But you can bet your Bitcoins that Amazon and Apple are also striving to be the Warby Parker of this new paradigm. Just check out some of Apple’s patents. And earlier this month, Amazon joined the fray, introducing a new AWS service to help developers create applications in augmented and virtual reality. Available now in preview, it lets non-VR experts create “scenes” that run on a variety of devices, including Oculus, Gear and Google’s Daydream. Called Sumerian, after the seminal Mesopotamian civilization, it signals Amazon’s belief that its dominance in commerce will extend to an artificial world.

While we wait for the ultimate augmented reality glasses, 2018’s version of augmented reality involves layering information—from Harry Potter characters to Ikea furniture—onto the live images provided by your mobile phone camera. Apple, Microsoft, Google, and Facebook all are providing deep toolsets for developers to create apps for this approach.

All of those efforts are just a test run for the ultimate vision quest: a set of always-on glasses that will blur the line between the physical world and a digital construct made of pure information. The impact on society will be mind-boggling and, in some respects, troubling and even dangerous. But we’ve got maybe between 5 and 15 years to start arguing about those effects. Meanwhile, in secret labs around the world, tech oligarchs and wannabes are hard at work inventing a wave of computing that will literally be in your face. Like it or not, the next field of battle in tech is for your field of vision.

Firefighter Helmets Now Have Built In Thermal Imaging


IN BRIEF

A fire protection and security company recently launched a new product called the “Scott Sight,” a face mask that incorporates thermal imaging with a display screen.

Tyco’s Scott Safety is bringing a big upgrade to the field of firefighting with their newly released product, the Scott Sight. This hands-free device is the first in the industry that incorporates an in-mask thermal intelligence system, according to an April 18th press release from the fire protection and security company.

The Scott Sight works in a similar way to Google Glass. A screen within the mask itself displays readings and various interfaces from a thermal camera running at nine frames per second, for up to four hours at a time.

Thermal imaging, an essential part of firefighters’ operations, has been done using handheld cameras since the 1990s. Tyco believes the Scott Sight is revolutionary because it’s the first major upgrade to the technology in decades.

The new product removes the need for hand-held thermal imaging devices, and allows firefighters to work more effectively and safely in the hero business.

Someday, Scott Sight could even help save your life.

Google Glass in the ED?


Healthcare moves one step closer to Star Trek …

Imagine walking into an emergency room with an awful rash and waiting hours to see a doctor until, finally, a physician who doesn’t have specific knowledge of your condition gives you an ointment and a referral to a dermatologist.

That could change if a technological device like Google Glass, which is a wearable computer that is smaller than an ink pen and includes a camera function, could be strapped to an emergency room doctor’s head or to his or her eyeglasses and used to beam a specialist in to see patients at the bedside. Not only would a patient get a more specific initial diagnosis and treatment, but a second visit to a dermatologist might not be necessary.

Researchers did just this for a small sample of people at the emergency room of the Rhode Island Hospital in Providence. They found during the course of the study that 93.5 percent of patients who were seen with a skin problem liked the experience, and 96.8 percent were confident in the accuracy of the video equipment and that their privacy was protected.

“There had been a lot of talk about using Glass in healthcare, but at the time that we designed the study, no one had actually tried it. No one knew if it would work,” said Megan Ranney, a study author and assistant professor of emergency medicine and policy at Brown University.

ER doctors normally have to page an on-call specialist — in the study, a dermatologist — to talk through the patient’s condition. With that information, the dermatologist makes a judgment call about the treatment, usually without ever seeing the patient. If there’s no dermatologist available, which can frequently be the situation, doctors do what they can but then refer the patient for follow-up dermatological care. Many rural and community hospitals do not have dermatologists on staff and it’s up to the emergency physician to care for the patient.

In the study, researchers instead had the physicians connect via Google Glass, enabling the specialist to see on his or her office iPad or computer what the ER doctor was seeing in person. The ER doctor was able to communicate with the dermatologist, and both physicians could ask questions of the patient in real time.

“You’ve rolled the first and second visit into this one visit. You have the specialist at the bedside, and if you get better, you don’t need to have follow-up,” said Paul Porter, a physician in the emergency department of Rhode Island Hospital and study author. “There’s nothing more frustrating [for the patient than] to be seen, leave with diagnostic uncertainty, and have to go somewhere else. … People don’t want that answer.”

Emergency rooms across the country may already use telemedicine technology for patients with skin or other visible conditions, but many of those machines can cost as much as $60,000 — not to mention the expense of maintenance and support. Google Glass costs less than $2,000.

In addition, many ERs either don’t have the funds to obtain a telemedicine “cart,” or don’t use it because the size — 4 to 6 square feet — can be too large for that setting, said Edward Boyer, a professor of emergency medicine at the University of Massachusetts Medical School in Worcester, Mass.

“The crowding in emergency rooms means we physically do not have enough room to manage the patients they have in them. A dermatology cart is not a little thing, and a lot of ERs don’t have that much spare room to store and wheel around one of those things,” said Boyer.

The researchers’ next step is to study whether Google Glass or similar headset technology could be used for other ER patients, such as those showing signs of stroke or who may have been exposed to poison.

In the latter instances, poison control center toxicologists are always available, though mainly consulted via the telephone. But these patients commonly have visual symptoms such as seizures, said Peter Chai, a lead author and fellow in medical toxicology at the University of Massachusetts Medical School. And, if a person is severely ill due to poisoning, they are flown via helicopter to the closest major hospital, he added.

 “If we could see them virtually, could we save the money of transport, keep them in the community intensive care unit, and give better patient care?” Chai said, noting that even if ERs in smaller or rural settings don’t have access to telemedicine, they may be able to afford this type of device.

The research surveyed 31 people with skin conditions in the Rhode Island Hospital emergency department for six months and was published as a research letter in JAMA Dermatology April 15. Google Glass is currently not available commercially, but healthcare providers can get the device through healthcare technology companies.



One atom thick, graphene is the thinnest material known and may be the strongest.

Until Andre Geim, a physics professor at the University of Manchester, discovered an unusual new material called graphene, he was best known for an experiment in which he used electromagnets to levitate a frog. Geim, born in 1958 in the Soviet Union, is a brilliant academic—as a high-school student, he won a competition by memorizing a thousand-page chemistry dictionary—but he also has a streak of unorthodox humor. He published the frog experiment in the European Journal of Physics, under the title “Of Flying Frogs and Levitrons,” and in 2000 it won the Ig Nobel Prize, an annual award for the silliest experiment. Colleagues urged Geim to turn the honor down, but he refused. He saw the frog levitation as an integral part of his style, an acceptance of lateral thinking that could lead to important discoveries. Soon afterward, he began hosting “Friday sessions” for his students: free-form, end-of-the-week experiments, sometimes fuelled by a few beers. “The Friday sessions refer to something that you’re not paid for and not supposed to do during your professional life,” Geim told me recently. “Curiosity-driven research. Something random, simple, maybe a bit weird—even ridiculous.” He added, “Without it, there are no discoveries.”

On one such evening, in the fall of 2002, Geim was thinking about carbon. He specializes in microscopically thin materials, and he wondered how very thin layers of carbon might behave under certain experimental conditions. Graphite, which consists of stacks of atom-thick carbon layers, was an obvious material to work with, but the standard methods for isolating superthin samples would overheat the material, destroying it. So Geim had set one of his new Ph.D. students, Da Jiang, the task of trying to obtain as thin a sample as possible—perhaps a few hundred atomic layers—by polishing a one-inch graphite crystal. Several weeks later, Jiang delivered a speck of carbon in a petri dish. After looking at it under a microscope, Geim recalls, he asked him to try again; Jiang admitted that this was all that was left of the crystal. As Geim teasingly admonished him (“You polished a mountain to get a grain of sand?”), one of his senior fellows glanced at a ball of used Scotch tape in the wastebasket, its sticky side covered with a gray, slightly shiny film of graphite residue.
It would have been a familiar sight in labs around the world, where researchers routinely use tape to test the adhesive properties of experimental samples. The layers of carbon that make up graphite are weakly bonded (hence its adoption, in 1564, for pencils, which shed a visible trace when dragged across paper), so tape removes flakes of it readily. Geim placed a piece of the tape under the microscope and discovered that the graphite layers were thinner than any others he’d seen. By folding the tape, pressing the residue together and pulling it apart, he was able to peel the flakes down to still thinner layers.

Geim had isolated the first two-dimensional material ever discovered: an atom-thick layer of carbon, which appeared, under an atomic microscope, as a flat lattice of hexagons linked in a honeycomb pattern. Theoretical physicists had speculated about such a substance, calling it “graphene,” but had assumed that a single atomic layer could not be obtained at room temperature—that it would pull apart into microscopic balls. Instead, Geim saw, graphene remained in a single plane, developing ripples as the material stabilized.

Geim enlisted the help of a Ph.D. student named Konstantin Novoselov, and they began working fourteen-hour days studying graphene. In the next two years, they designed a series of experiments that uncovered startling properties of the material. Because of its unique structure, electrons could flow across the lattice unimpeded by other layers, moving with extraordinary speed and freedom. It can carry a thousand times more electricity than copper. In what Geim later called “the first eureka moment,” they demonstrated that graphene had a pronounced “field effect,” the response that some materials show when placed near an electric field, which allows scientists to control the conductivity. A field effect is one of the defining characteristics of silicon, used in computer chips, which suggested that graphene could serve as a replacement—something that computer makers had been seeking for years.

Geim and Novoselov wrote a three-page paper describing their discoveries. It was twice rejected by Nature, where one reader stated that isolating a stable, two-dimensional material was “impossible,” and another said that it was not “a sufficient scientific advance.” But, in October, 2004, the paper, “Electric Field Effect in Atomically Thin Carbon Films,” was published in Science, and it astonished scientists. “It was as if science fiction had become reality,” Youngjoon Gil, the executive vice-president of the Samsung Advanced Institute of Technology, told me.

Labs around the world began studies using Geim’s Scotch-tape technique, and researchers identified other properties of graphene. Although it was the thinnest material in the known universe, it was a hundred and fifty times stronger than an equivalent weight of steel—indeed, the strongest material ever measured. It was as pliable as rubber and could stretch to a hundred and twenty per cent of its length. Research by Philip Kim, then at Columbia University, determined that graphene was even more electrically conductive than previously shown. Kim suspended graphene in a vacuum, where no other material could slow the movement of its subatomic particles, and showed that it had a “mobility”—the speed at which an electrical charge flows across a semiconductor—of up to two hundred and fifty times that of silicon.
In 2010, six years after Geim and Novoselov published their paper, they were awarded the Nobel Prize in Physics. By then, the media were calling graphene “a wonder material,” a substance that, as the Guardian put it, “could change the world.” Academic researchers in physics, electrical engineering, medicine, chemistry, and other fields flocked to graphene, as did scientists at top electronics firms. The U.K. Intellectual Property Office recently published a report detailing the worldwide proliferation of graphene-related patents, from 3,018 in 2011 to 8,416 at the beginning of 2013. The patents suggest a wide array of applications: ultra-long-life batteries, bendable computer screens, desalinization of water, improved solar cells, superfast microcomputers. At Geim and Novoselov’s academic home, the University of Manchester, the British government invested sixty million dollars to help create the National Graphene Institute, in an effort to make the U.K. competitive with the world’s top patent holders: Korea, China, and the United States, all of which have entered the race to find the first world-changing use for graphene.

The progress of a technology from the moment of discovery to transformative product is slow and meandering; the consensus among scientists is that it takes decades, even when things go well. Paul Lauterbur and Peter Mansfield shared a Nobel Prize for the MRI, which was developed in 1973—almost thirty years after scientists first understood the physical reaction that allowed the machine to work. More than a century passed between the moment when the Swedish chemist Jöns Jakob Berzelius purified silicon, in 1824, and the birth of the semiconductor industry.

New discoveries face formidable challenges in the marketplace. They must be conspicuously cheaper or better than products already for sale, and they must be conducive to manufacture on a commercial scale. If a material arrives, like graphene, as a serendipitous discovery, with no targeted application, there is another barrier: the limits of imagination. Now that we’ve got this stuff, what do we do with it?

Aluminum, discovered in minute quantities in a lab in the eighteen-twenties, was hailed as a wonder substance, with qualities never before seen in a metal: it was lightweight, shiny, resistant to rust, and highly conductive. It could be derived from clay (at first, it was called “silver from clay”), and the idea that a valuable substance was produced from a common one lent it a quality of alchemy. In the eighteen-fifties, a French chemist devised a method for making a few grams at a time, and aluminum was quickly adopted for use in expensive jewelry. Three decades later, a new process, using electricity, allowed industrial production, and the price plummeted.

“People said, ‘Wow! We’ve got this silver from clay, and now it’s really cheap and we can use it for anything,’ ” Robert Friedel, a historian of technology at the University of Maryland, told me. But the enthusiasm soon cooled: “They couldn’t figure out what to use it for.” In 1900, the Sears and Roebuck catalogue advertised aluminum pots and pans, Friedel notes, “but you can’t find any of what we’d call ‘technical’ uses.” Not until after the First World War did aluminum find its transformative use. “The killer app is the airplane, which didn’t even exist when they were going all gung ho and gaga over this stuff.”

Some highly touted discoveries fizzle altogether. In 1986, the I.B.M. researchers Georg Bednorz and K. Alex Müller discovered ceramics that acted as radically more practical superconductors. The next year, they won a Nobel, and an enormous wave of optimism followed. “Presidential commissions were thrown together to try to put the U.S. out in the lead,” Cyrus Mody, a history-of-science professor at Rice University, in Houston, says. “People were talking about floating trains and infinite transmission lines within the next couple of years.” But, in three decades of struggle, almost no one has managed to turn the brittle ceramics into a substance that can survive everyday use.
Friedel offered a broad axiom: “The more innovative—the more breaking-the-mold—the innovation is, the less likely we are to figure out what it is really going to be used for.” Thus far, the only consumer products that incorporate graphene are tennis racquets and ink. But many scientists insist that its unusual properties will eventually lead to a breakthrough. According to Geim, the influx of money and researchers has speeded up the usual time line to practical usage. “We started with submicron flakes, barely seen even in an optical microscope,” he says. “I never imagined that by 2009, 2010, people would already be making square metres of this material. It’s extremely rapid progress.” He adds, “Once someone sees that there is a gold mine, then very heavy equipment starts to be applied from many different research areas. When people are thinking, we are quite inventive animals.”
Samsung, the Korea-based electronics giant, holds the greatest number of patents in graphene, but in recent years research institutions, not corporations, have been most active. A Korean university, which works with Samsung, is in first place among academic institutions. Two Chinese universities hold the second and third slots. In fourth place is Rice University, which has filed thirty-three patents in the past two years, almost all from a laboratory run by a professor named James Tour.

Tour, fifty-five, is a synthetic organic chemist, but his expansive personality and entrepreneurial brio make him seem more like an executive overseeing a company’s profitable R. & D. division. A short, dark-eyed man with a gym-pumped body, he greeted me volubly when I visited him recently at his office, in the Dell Butcher building at Rice. “I mean, the stuff is just amazing!” he said, about graphene. “You can’t believe what this stuff can do!” Tour, like most senior scientists, must concern himself with both research and commerce. He has twice appeared before Congress to warn about federal budget cuts to science, and says that his lab has managed to thrive only because he has secured funding through aggressive partnerships with industry. He charges each business he contracts with two hundred and fifty thousand dollars a year; his lab nets a little more than half, with which he can hire two student researchers and pay for their materials for a year. Much of Tour’s work involves spurring the creativity of those researchers (twenty-five of whom are devoted to graphene); they’re the ones who devise the inventions that Tour sells. Graphene has been a boon, he said: “You have a lot of people moving into this area. Not just academics but companies in a big way, from the big electronics firms, like Samsung, to oil companies.”

Tour brings a special energy to the endeavor. Raised in a secular Jewish home in White Plains, he became a born-again Christian as a freshman at Syracuse University. Married, with four grown children, he rises at three-forty every morning for an hour and a half of prayer and Bible study—followed, several times a week, with workouts at the gym—and arrives at the office at six-fifteen. In 2001, he made headlines by signing “A Scientific Dissent from Darwinism,” a petition that promoted intelligent design, but he insists that this reflected only his personal doubts about how random mutation occurs at the molecular level. Although he ends e-mails with “God bless,” he says that, apart from a habit of praying for divine guidance, he feels that religion plays no part in his scientific work.

Tour endorses a scattershot approach for his students’ research. “We work on whatever suits our fancy, as long as it is swinging for the fences,” he said. As chemists, he noted, they are particularly suited to quick experiments, many of which can yield results in a matter of hours—unlike physicists, whose experiments can take months. His lab has published a hundred and thirty-one journal articles on graphene—second only to a lab at the University of Texas at Austin—and his researchers move rapidly to file provisional applications with the U.S. Patent and Trademark Office, which give them legal ownership of an idea for a year before they must file a full claim. “We don’t wait very long before we file,” Tour said; he urges students to write up their work in less than forty-eight hours. “I was just told by a company that has licensed one of our technologies that we beat the Chinese by five days.”

Many of his lab’s recent inventions are designed for immediate exploitation by industry, supplying funds to support more ambitious work. Tour has sold patents for a graphene-infused paint whose conductivity might help remove ice from helicopter blades, fluids to increase the efficiency of oil drills, and graphene-based materials to make the inflatable slides and life rafts used in airplanes. He points out that graphene is the only substance on earth that is completely impermeable to gas, but it weighs almost nothing; lighter rafts and slides could save the airline industry millions of dollars’ worth of fuel a year.
In Tour’s laboratory, a large, high-ceilinged room with tightly configured rows of worktables, a score of young men in white lab coats and safety goggles were working. Tour and I stopped at a bench where Loïc Samuels, a graduate student from Antigua, was making a batch of graphene-based gel, to be used in a scaffold for spinal-cord injuries. “Instead of just having a nonfunctional scaffold material, you have something that’s actually electrically conductive,” Samuels said, as he swirled a test tube in a jeweller’s bath. “That helps the nerve cells, which communicate electrically, connect with each other.” Tour showed me videos of lab rats whose back legs had been paralyzed. In one video, two rats inched themselves along the bottom of a cage, dragging their hind legs. In another video, of rats that had been treated, they walked normally. Tour warned that it takes years before the F.D.A. approves human trials. “But it’s an incredible start,” he said.

In 2010, one of Tour’s researchers, Alexander Slesarev, a Russian who had studied at Moscow State University, suggested that graphene oxide, a form of graphene created when oxygen and hydrogen molecules are bonded to it, might attract radioactive material. Slesarev sent a sample to a former colleague at Moscow State, where students placed the powder in solutions containing nuclear material. They discovered that the graphene oxide binds with the radioactive elements, forming a sludge that could easily be scooped away. Not long afterward, the earthquake and tsunami in Japan created a devastating spill of nuclear material, and Tour flew to Japan to pitch the technology to the Japanese. “We’re deploying it right now in Fukushima,” he told me.

Working at one of the benches was a young man with a round, open face: a twenty-five-year-old Ph.D. student named Ruquan Ye, who last year devised a new way to make quantum dots, highly fluorescent nanoparticles used in medical imaging and plasma television screens. Usually made in tiny amounts from toxic chemicals, such as cadmium selenide and indium arsenide, quantum dots cost a million dollars for a one-kilogram bottle. Ye’s technique uses graphene derived from coal, which is a hundred dollars a ton.

“The method is simple,” Ye told me. He showed me a vial filled with a fine black powder: anthracite coal that he had ground. “I place this in a solution of acids for one day, then heat the solution on a hot plate.” By tweaking the process, he can make the material emit various light frequencies, creating dots of various colors for differentiated tagging of tumors. The coal-based dots are compatible with the human body—coal is carbon, and so are we—which suggests that Ye’s dots could replace the highly toxic ones used in hospitals worldwide. In a darkened room next to the lab, he shone a black light on several small vials of clear liquid. They fluoresced into glowing ingots: red, blue, yellow, violet.

Tour usually declines to take credit for the discoveries in his lab. “It’s all the students,” he said. “They’re at that age, their twenties, when the synapses are just firing. My job is to inspire them and provide a credit card, and direct them away from rabbit holes.” But he acknowledged that the quantum-dot idea originated with him: “One day, I said, ‘We gotta find out what’s in coal. People have been using this for five thousand years. Let’s see what’s really in it. I bet it’s small domains of graphene’—and, sure enough, it was. It was just sitting right there. A twenty-five-per-cent yield. And, remember, it’s a million dollars a kilogram!”

Tour turned to his lab manager, Paul Cherukuri, and said, “We’re going to be rich someday, aren’t we?” As Cherukuri laughed, Tour added, “I’m going to come in here and count money every day.”

Perhaps the most tantalizing property described in Geim and Novoselov’s 2004 paper was the “mobility” with which electronic information can flow across graphene’s surface. “The slow step in our computers is moving information from point A to point B,” Tour told me. “Now you’ve taken the slow step, the biggest hurdle in silicon electronics, and you’ve introduced a new material and—boom! All of a sudden, you’re increasing speed not by a factor of ten but by a factor of a hundred, possibly even more.”

The news galvanized the semiconductor industry, which was struggling to keep up with Moore’s Law, devised in 1965 by Gordon Moore, a co-founder of Intel. Every two years, he predicted, the density—and thus the effectiveness—of computer chips would double. For five decades, engineers have managed to keep pace with Moore’s Law through miniaturization, packing increasing numbers of transistors onto chips—as many as four billion on a silicon wafer the size of a fingernail. Engineers have further speeded computers by “doping” silicon: introducing atoms from other elements to squeeze the lattice tighter. But there’s a limit. Shrink the chip too much, moving its transistors too close together, and silicon stops working. As early as 2017, silicon chips may no longer be able to keep pace with Moore’s Law. Graphene, if it works, offers a solution.

There’s a problem, though. Semiconductors, such as silicon, are defined by their ability to turn on and off in the presence of an electric field; in logic chips, that switching process generates the ones and the zeros that are the language of computers. Graphene, a semi-metal, cannot be turned off. At first, engineers believed that they could dope graphene to open up a “band gap,” the electrical property that allows semiconductors to act as switches. But, ten years after Geim and Novoselov’s paper, no one has succeeded in opening a gap wide enough. “You’d have to change it so much that it’s no longer graphene,” Tour said. Indeed, those who have managed to create such a gap learned that it kills the mobility, rendering graphene no better than the materials we use now. The result has been a certain dampening of the mood at semiconductor companies.

I recently visited the Thomas J. Watson Research Center, the main R. & D. lab for I.B.M., a major fabricator of silicon semiconductor chips. A half hour north of New York City, the center is housed in a building designed by Eero Saarinen, in 1961. A vast arc of glass with an upswept front awning, it is a kind of monument to the difficulty of predicting the future. Saarinen imagined that transformative ideas would emerge from groups of scientists working in meeting areas, where recliners and coffee tables still sit beside soaring windows. Instead, the scientists spend much of the day hunched over computer screens in their offices: small, windowless dens, which seem to have been created as an afterthought.

In one cramped office, I met Supratik Guha, who is the director of physical sciences at I.B.M. and who sets the company’s strategy for worldwide research. A thoughtful man, as precisely understated as Tour is effusive, Guha lamented the “excessive hype” that has surrounded graphene as a replacement for silicon, and talked mournfully about how the effort to introduce a band gap is, at best, “one major innovation away.” He hastened to add that I.B.M. has not written off graphene. In early 2014, the company announced that its researchers had built the first graphene-based integrated circuit for wireless devices, which could lead to cheaper, more efficient cell phones. But in the quest to make graphene a replacement for silicon, Guha admits, they hold little hope.

For now, I.B.M.’s focus remains the single-walled carbon nanotube, which was developed at Rice by Tour’s mentor and predecessor, Rick Smalley. In the eighties, Smalley and his colleagues discovered that molecules of carbon atoms arrange themselves in a variety of shapes; some were spheres (which he called “buckyballs,” for their resemblance to Buckminster Fuller’s geodesic domes) and others were tubes. When the researchers found that the tubes can act as semiconductors, the material was immediately suggested as a potential replacement for silicon. Along with his collaborators, Smalley was awarded the Nobel Prize in Chemistry in 1996, and he persuaded Rice to build the multimillion-dollar nanotechnology center that Tour later took over. Yet carbon nanotubes have resisted easy exploitation. They have the necessary band gap, but building a chip with them entails maneuvering billions of minute objects into precise locations—a difficulty that has bedevilled scientists for almost two decades. Without quite admitting that he has lost interest in carbon nanotubes, Tour told me that they “never really commercialized well.”

At I.B.M., which has invested more than a decade of research and tens of millions of dollars in the material, there is great reluctance to admit defeat. Guha introduced me to George Tulevski, who helps lead I.B.M.’s carbon-nanotube research program. When I mentioned graphene, he evinced the defensiveness that might be expected of a scientist who has devoted nearly ten years to one recalcitrant technology only to be told about a glamorous new one. “Devices have to turn on and off,” Tulevski said. “If it doesn’t turn off, it just consumes way too much power. There’s no way to turn graphene off. So those electrons are going superfast, and that’s great—but you can’t turn the device off.”

Cyrus Mody, the historian, is equally cautious. “This idea that there’s a form of microelectronics that is theoretically much, much faster than conventional silicon is not new,” he told me. He points to the precedent of the Josephson-junction circuit. In 1962, the British physicist Brian David Josephson predicted that electricity would flow at unprecedented speeds through a circuit composed of two superconductors separated by a “weak link” material. The insight led to a Nobel Prize in Physics—and to dreams of exponentially faster electronics.

“A lot of people thought we’d be switching over to superconducting Josephson-junction microelectronics soon,” Mody said. “But when you actually get down to manufacturing a complex circuit with lots and lots and lots of logic gates, and making lots and lots of such circuits with very large yields, the manufacturing problems really make it impossible to keep going. And I think that’s going to be the hurdle that people haven’t really considered enough when they talk about graphene.”

But other scientists argue that the obstacle is not graphene’s physical properties. “The semiconductor industry knows how to introduce a band gap,” Amanda Barnard, a theoretical physicist who heads Australia’s Commonwealth Scientific and Industrial Research Organization, told me. The problem is business: “We’ve got a global investment on the order of trillions of dollars in silicon, and we’re not going to walk away from that. Initially, graphene needs to work with silicon—it needs to work in our existing factories and production lines and research capabilities—and then we’ll get some momentum going.”

Tour has little sympathy for the semiconductor industry’s disappointment with graphene. “I.B.M. is all bummed out because they’re single-minded,” he said. “They’ve got to make computers—and they’ve got Moore’s Law. But that’s their own fault! What other industry has challenged itself with doubling its performance every eighteen months? In the chemical industry, if we can get a one-per-cent-higher yield in a year we think we’ve done pretty well.”

Perhaps the most expansive thinker about the material’s potential is Tomas Palacios, a Spanish scientist who runs the Center for Graphene Devices and 2D Systems, at M.I.T. Rather than using graphene to improve existing applications, as Tour’s lab mostly does, Palacios is trying to build devices for a future world.

At thirty-six, Palacios has an undergraduate’s reedy build and a gentle way of speaking that makes wildly ambitious notions seem plausible. As an electrical engineer, he aspires to “ubiquitous electronics,” increasing “by a factor of one hundred” the number of electronic devices in our lives. From the perspective of his lab, the world would be greatly enhanced if every object, from windows to coffee cups, paper currency, and shoes, were embedded with energy harvesters, sensors, and light-emitting diodes, which allowed them to cheaply collect and transmit information. “Basically, everything around us will be able to convert itself into a display on demand,” he told me, when I visited him recently. Palacios says that graphene could make all this possible; first, though, it must be integrated into those coffee cups and shoes.

As Mody pointed out, radical innovation often has to wait for the right environment. “It’s less about a disruptive technology and more about moments when the linkages among a set of technologies reach a point where it’s feasible for them to change lots of practices,” he said. “Steam engines had been around a long time before they became really disruptive. What needed to happen were changes in other parts of the economy, other technologies linking up with the steam engine to make it more efficient and desirable.”

For Palacios, the crucial technological complement is an advance in 3-D printing. In his lab, four students were developing an early prototype of a printer that would allow them to create graphene-based objects with electrical “intelligence” built into them. Along with Marco de Fazio, a scientist from STMicroelectronics, a firm that manufactures ink-jet print heads, they were clustered around a small, half-built device that looked a little like a Tinkertoy contraption on a mirrored base. “We just got the printer a couple of weeks ago,” Maddy Aby, a ponytailed master’s student, said. “It came with a kit. We need to add all the electronics.” She pointed to a nozzle lying on the table. “This just shoots plastic now, but Marco gave us these print heads that will print the graphene and other types of inks.”

The group’s members were pondering how to integrate graphene into the objects they print. They might mix the material into plastic or simply print it onto the surface of existing objects. There were still formidable hurdles. The researchers had figured out how to turn graphene into a liquid—no easy task, since the material is severely hydrophobic and tends to clump up and clog the print heads. They first needed to convert the graphene to graphene oxide, adding oxygen- and hydrogen-based chemical groups, but this process negates its electrical properties. So once they printed an object they would have to heat it with a laser. “When you heat it up,” Aby said, “you burn off those groups and reduce it back to graphene.”
When that might be possible was uncertain; she hoped to have the device working in three months. “The laser needs more approval from the powers that be,” she said, glancing balefully at the printer’s mirrored base—the kind perfect for bouncing laser beams all over a room. De Fazio suggested that they cover it with a silicon wafer.

“That could work,” Aby said.

Palacios recognizes that millennial change comes only after modest, strategic increments. He mentioned Samsung, which, according to industry rumor, is planning to launch the first device with a screen that employs graphene. “Graphene is only a small component, used to deliver the current to the display,” he said. “But that’s an exciting first application—it doesn’t have to be the breakthrough that we are all looking forward to. It’s a good way to get graphene into everyone’s focus and, that way, justify more investment.” In the meantime, one of his students, Lili Yu, has been working on a prototype for a flexible screen.

Palacios, in his office, told me that his most ambitious goal is “graphene origami,” in which sheets of the material are folded to mimic organelles, minuscule structures inside a biological cell. “It’s not that different from what nature does with DNA, a material that is a one-dimensional structure that gets folded many, many, many times to make the chromosomes.” If the method works, it could be used to pack huge amounts of computing power into a tiny space. There might be applications in medicine, he says, and in something he calls smart dust—“things that are just as tiny as dust particles but have a functionality to tell us about the pollution in the atmosphere, or if there is a flu virus nearby. These things will be able to connect to your phone or to the embedded displays everywhere, to tell you about things happening around you.”

For the moment, the challenges are more earthbound: scientists are still trying to devise a cost-effective way to produce graphene at scale. Companies like Samsung use a method pioneered at the University of Texas, in which they heat copper foil to eighteen hundred degrees Fahrenheit in a low vacuum, and introduce methane gas, which causes graphene to “grow” as an atom-thick sheet on both sides of the copper—much as frost crystals “grow” on a windowpane. They then use acids to etch away the copper. The resulting graphene is invisible to the naked eye and too fragile to touch with anything but instruments designed for microelectronics. The process is slow, exacting, and too expensive for all but the largest companies to afford.

At Tour’s lab, a twenty-six-year-old postdoc named Zhiwei Peng was waiting to hear from a final reviewer of a paper he had submitted, in which he detailed a way to create graphene with no superheating, no vacuums, and no gases. (The paper was later approved for publication.) Peng had stumbled on his method a few months before. While heating graphene oxide with a laser, he missed the sample, and accidentally heated the material it was sitting on, a sheet of polyimide plastic. Where the laser touched the plastic, it left a black residue. He discovered that the residue was layers of graphene, loosely bonded with oxygen molecules, which—like the residue on Geim’s tape—could easily be exfoliated to single-atom sheets. He showed me how it worked, the laser tracking back and forth across the surface of a piece of polyimide and leaving with each pass a needle-thin deposit of material. Single layers of graphene absorb about 2.3 per cent of available light; as layers pile up, they begin to appear black. After a few minutes, Peng had produced a crisp, matte-black lattice—perhaps an inch wide, and worth tens of thousands of dollars. Cherukuri, Tour’s lab manager, pointed at it and said, “That is the race.”
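A back-of-the-envelope sketch (my own, not the lab’s) shows why the deposit darkens so quickly. Assuming each stacked layer independently absorbs the standard value of roughly 2.3 per cent of the light reaching it, the fraction transmitted through N layers is

T_N \approx (1 - \pi\alpha)^N \approx 0.977^N

so thirty layers pass only about half the incoming light, and a hundred layers about a tenth, at which point the lattice reads as matte black to the eye.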

The tech-research firm Gartner uses an analytic tool that it calls the Hype Cycle to help investors determine which discoveries will make money. A graph of the cycle resembles a cursive lowercase “r,” in which a discovery begins with a Technology Trigger, climbs quickly to a Peak of Inflated Expectations, falls into the Trough of Disillusionment, and, as practical uses are found, gradually ascends to the Plateau of Productivity. The implication is not (or not only) that most discoveries don’t behave as expected; it’s that a new thing typically becomes useful sometime after the publicity fades.
Nearly every scientist I spoke with suggested that graphene lends itself especially well to hype. “It’s an electrically useful material in a time when we love electrical devices,” Amanda Barnard told me. “If it had come along at a time when we were not so interested in electronic devices, the hype might not have been so disproportionate. But then there wouldn’t have been the same appetite for investment.” Indeed, Henry Petroski, a professor at Duke and the author of “To Engineer Is Human,” says that hype is necessary to attract development dollars. But he offers an important proviso: “If there is too much hype at the discovery stage and the product doesn’t live up to the hype, that’s one way of its becoming disappointing and abandoned, eventually.”

Guha, at I.B.M., believes that the field of nanotechnology has been oversold. “Nobody stands to benefit from giving the bad news,” he told me. “The scientist wants to give the good news, the journalist wants to give the good news—there is no feedback control to the system. In order to develop a technology, there is a lot of discipline that needs to go in, a lot of things that need to be done that are perhaps not as sexy.”

Tour concurs, and admits to some complicity. “People put unrealistic time lines on us,” he told me. “We scientists have a tendency to feed that—and I’m guilty of that. A few years ago, we were building molecular electronic devices. The Times called, and the reporter asked, ‘When could these be ready?’ I said, ‘Two years’—and it was nonsense. I just felt so excited about it.”

The impulse to overlook obvious difficulties to commercial development is endemic to scientific research. Geim’s paper, after all, mentioned the band-gap problem. “People knew that graphene is a gapless semiconductor,” Amirhasan Nourbakhsh, an M.I.T. scientist specializing in graphene, told me. “But graphene was showing extremely high mobility—and mobility in semiconductor technology is very important. People just closed their eyes.”

According to Friedel, the historian, scientists rely on the stubborn conviction that an obvious obstacle can be overcome. “There is a degree of suspension of disbelief that a lot of good research has to engage in,” he said. “Part of the art—and it is art—comes from knowing just when it makes sense to entertain that suspension of disbelief, at least momentarily, and when it’s just sheer fantasy.” Lord Kelvin, famous for installing telegraph cables on the Atlantic seabed, was clearly capable of overlooking obstacles. But not always. “Before his death, in 1907, Lord Kelvin carefully, carefully calculated that a heavier-than-air flying machine would never be possible,” Friedel says. “So we always have to have some humility. A couple of bicycle mechanics could come along and prove us wrong.”

Recently, some of the most exciting projects from Tour’s lab have encountered obstacles. An additive to fluids used in oil drilling, developed with a subsidiary of the resource company Schlumberger, promised to make drilling more efficient and to leave less waste in the ground; instead, barrels of the stuff decomposed before they could be used. The company that hired Tour’s group to make inflatable slides and rafts for aircraft found a cheaper lab. (Tour was philosophical about it, in part because he knew he’d still get some money from the contract. “They’ll have to come back and get the patent,” he said.) The technology for the Fukushima-reactor cleanup stalled when scientists in Japan couldn’t get the powder to work, and the postdoc who developed the method was unable to get a visa to go assist them. “You’ve got to teach them how it’s done,” Tour said. “You want the pH right.”

Tour’s optimism for graphene remains undimmed, and his group has been working on further inventions: superfast cell-phone chargers, ultra-clean fuel cells for cars, cheaper photovoltaic cells. “What Geim and Novoselov did was to show the world the amazingness of graphene, that it had these extraordinary electrical properties,” Tour said. “Imagine if one were God. Here, He’s given us pencils, and all these years scientists are trying to figure out some great thing, and you’re just stripping off sheets of graphene as you use your pencil. It has been before our eyes all this time!”

New Google Glass App Lets You Shoot Guns Around Corners


TrackingPoint turns Glass into a remote sight for aiming around corners.

Google Glass has lots of potential in professional fields, and for better or worse, that includes military applications.

TrackingPoint, an Austin-based company that adds aim assistance technology to firearms, is showing off how Google Glass could be used as a remote rifle sight. In a video on YouTube, a shooter uses Glass to effortlessly aim from behind cover, with a view of the target appearing on the screen in front of his right eye.

TrackingPoint hasn’t actually made Glass integration available yet, though the company can already stream its scope views directly to a tablet over Wi-Fi. The Glass version is currently in testing, and it seems like the next logical step now that anyone can buy a Glass prototype.

Watch the video: http://www.youtube.com/watch?feature=player_embedded&v=itdwWvAnNx4

Google Glass advice: how to avoid being a glasshole.


Google’s smartglass guidelines for early adopters: stop being creepy, don’t be rude, and don’t try to read War and Peace

Google explains how not to be a ‘glasshole’ while wearing the company’s pioneering smart glasses. Photograph: Pawel Supernak/EPA

Google has given some official advice on what to do and perhaps more importantly, what not to do, while wearing the company’s Google Glass smartglasses to avoid being a “glasshole”.

Early adopters of Glass, derogatorily called “glassholes”, have come under fire for using the device in socially unacceptable situations where mobile phones aren’t allowed, for creepily filming people without their permission, and for being rude by staring off into the distance for long periods of time.

Glass has gone far beyond the confines of Google employees with its extended “Explorer” early adopter programme. As Google states, it is definitely in the company’s best interest to get its first smartglass customers to behave, as “breaking the rules or being rude will not get businesses excited about Glass and will ruin it for other Explorers”.

To try and help Explorers avoid being glassholes and breaking social codes, Google has compiled a list of solid suggestions pulled from the experiences of early Glass adopters, and some of them are really quite funny.

Stop looking like a tech zombie

Glass was designed to avoid the need to stare down at a smartphone or other device to get information, placing snippets of text just above your normal line of sight, but that can have some pretty creepy consequences.

If you find yourself staring off into the prism for long periods of time you’re probably looking pretty weird to the people around you.

Google helpfully suggests that reading something like the 1,225 pages of Tolstoy’s War and Peace on Glass probably isn’t the best idea, noting that “things like that are better done on bigger screens”.

Use some common sense

Google encourages Explorers to try Glass in all kinds of situations, but it would probably be best to avoid activities that could see wearers land on their faces.

Glass is a piece of technology, so use common sense. Water-skiing, bull-riding or cage-fighting with Glass are probably not good ideas.

At $1,500 (£900) apiece, Glass might be hi-tech, but it is not exactly robust when it comes to high-impact sports.

Glass probably doesn’t contribute to a romantic meal

The idea of smartglasses being worn in public is new, and people are curious. Passersby will stop and stare, ask questions or maybe even react badly if you turn to face them, so Google helpfully suggests that taking Glass off might be the best idea.

If you’re worried about someone interrupting that romantic dinner at a nice restaurant with a question about Glass, just take it off and put it around the back of your neck or in your bag.

Of course, there is also the fact that your date might be creeped out that you have a head-mounted camera pointed at them all night, regardless of whether or not you are recording their every move.

Stop standing in the corner of the room being creepy

Apparently, the temptation to record people’s every move as they go about their day is irresistible for some Glass Explorers. Google suggests that Glass wearers should treat the camera as they would a mobile phone camera – ask permission and stop being creepy.

“Standing alone in the corner of a room staring at people while recording them through Glass is not going to win you any friends.”

Some people are pretty tetchy when it comes to being caught on camera; just ask the paparazzi.

What it would take to get me to wear Google Glass on my glasses.


Glasses with Glass? Google’s latest design tweak to its wearable headsets is a smart move, but not quite enough to get me to leap.

Scott Stein
 

The latest design iteration of Google Glass is here. Gone is the weird headband-visor-with-a-monocle look of the original; there are now prescription versions of Google Glass: real glasses with Glass mounted on top.

I’m a glasses-wearer. I struggled with Glass on my glasses, and eventually even got temporary contacts. And I remember that, a year ago at Google I/O, some prescription glasses with Google Glass attached were floating around the show floor.

So am I satisfied with the latest news about Glass and glasses? No. Even if these new glasses with Glass attached offer a less intrusive-looking, slightly more stylish and possibly more convenient solution, it’s not enough for me yet. I’m talking about wearing them, mind you, not using them; using them is an entirely different debate, and one that’ll keep shifting as the software and apps evolve. But if you’re asking whether I would really wear Glass all the time as anything more than an experiment, I’d need a bit more.

For it to be a comfortable wearable device for me, some other steps need to be taken. Here’s what I want Google Glass to do:

Work with my prescription
The lenses made by VSP for Google’s Glass glasses actually only work for -4 to +4 prescriptions. My -9 prescription is out of the question. That’s a shame, because, really, aren’t these new glasses meant to offer access to Glass for all?

Work with my own glasses
Buying a Google Glass-compatible frame costs $225, and then you have to buy lenses, which may or may not be covered by your vision insurance. These are real glasses that can have Google Glass screwed onto them. I want to use my own glasses. I like my own glasses. Buying a new pair is expensive. There are only four frames to choose from, and I have no idea how they’ll look on me. Google’s prescription Glass glasses are a good first step, but not enough.

Clip on and off
Bluetooth headsets are easy to pop on and off as needed. Screwing Google Glass on and off these new frames, by comparison, doesn’t seem like an easy process. During the day, I don’t want to wear Glass at every moment. I’d want to pop it on and off whenever I feel like it, the way I take out my earbuds or remove a Bluetooth earpiece. I should be able to attach and detach my little Glass screen as I like; wearable tech should be optional, not stapled onto your required eyewear. Make the eyepiece more like a mini monocle. Use magnets, use a clip…be creative.

Google’s Sergey Brin sports the Explorer Edition Google Glass.

Be a lot smaller

Which brings me to this: Google Glass without the titanium headband-visor is small, but it’s long like a pencil or stylus. Most people look at Glass, see that little lens and camera, and think that’s mostly it. I wish I could pop that bit off and tuck it in my pocket. Easier said than done, but I’d love something akin to the Jawbone Era of Google Glass. That, of course, could take years. But the closer Glass gets to a Jawbone-sized minigadget — and the sooner — the better for me.

If those things happen…well, I wouldn’t mind wearing Google Glass around at all. As for buying one…that’s another story.