Intel made a pair of smart glasses you might actually want to wear out – here’s how they work


Intel has developed a prototype for a pair of smart glasses that are designed to look normal – or at least, normal for a pair of smart glasses.

News of Intel’s latest hardware broke in February, when Bloomberg’s Sarah Frier and Ian King published a report saying the company was looking for investors to take a majority stake in its augmented-reality (AR) unit, which had been developing a pair of eyeglasses that lets you see text and other information in your field of view.

Days after Bloomberg’s report went live, The Verge’s Dieter Bohn posted a video showing off a physical prototype of the Vaunt smart glasses. Here’s a look at how they work:

Intel designed these glasses to be worn in public without making the wearer feel like a technophile.

The horn-rimmed camera-less glasses have a very different look from Google Glass. Intel’s Vaunt glasses are designed with minimalistic functionality so the pair only weighs about 50 grams (about a tenth of a pound), and apart from an occasional “red glimmer,” the lens display isn’t visible to anyone on the other side of the glasses.

The lens can display “simple basic information” into the right eye.

The image is called a retinal projection: A red monochrome projector shines an image on a “holographic mirror,” which bounces the image into your eye.

To avoid having text in your line of sight all the time, Intel built the glasses so that the dashboard shows up only when the user glances down at the bottom of the frame.

The second the user looks up, the display disappears. There’s no need for hand gestures.

The display is shown through a “Vertical-Cavity Surface-Emitting Laser,” which is “so low-power, it’s at the very bottom end of a class-one laser,” according to New Devices Group’s industrial design director Mark Eastwood. That apparently means that the laser is safe for your eyes.

The glasses do need to be fitted to your eyes, however, so there’s enough distance between your eyes and the lens to see what’s displayed.

Intel is developing other prototype Vaunt styles, and developers will soon be able to start using the glasses through Intel’s early access program to create use cases for them.

The Vaunt will work with both Android phones and iPhones. And while the executives in the video do name a few use cases (grocery shopping, choosing between restaurants that are right in front of you), it’s clear that the extent of the Vaunt’s capabilities will be entirely up to software developers.

To learn more about the Vaunt smart glasses, check out The Verge’s video below.

What You Need to Know About The Intel Flaw Everyone’s Freaking Out About

Practically every PC, laptop, tablet, and smartphone is affected.

Silicon Valley is abuzz about ‘Meltdown’ and ‘Spectre’ – new ways for hackers to attack Intel, AMD, and ARM processors that were first discovered by Google last year and publicly disclosed on Wednesday.

Meltdown and Spectre, which take advantage of the same basic security vulnerability in those chips, could hypothetically be used by malicious actors to “read sensitive information in [a] system’s memory, such as passwords, encryption keys, or sensitive information open in applications,” as Google puts it in an official FAQ.

The first thing you need to know: Pretty much every PC, laptop, tablet, and smartphone is affected by the security flaw, regardless of which company made the device or what operating system it runs.

The vulnerability isn’t easy to exploit – it requires a specific set of circumstances, including having malware already running on the device – but it’s not just theoretical.

And the problem could affect much more than just personal devices. The flaw potentially could be exploited on servers and in data centres and massive cloud computing platforms such as Amazon Web Services, Microsoft Azure, or Google Cloud.

In fact, given the right conditions, Meltdown or Spectre could be used by customers of those cloud services to actually steal data from one another.

Although fixes are already being rolled out for the vulnerability, they often will come with a price. Some devices, especially older PCs, could be slowed markedly by them.

Here’s what Meltdown and Spectre are. And, just as importantly, here’s what they’re not.

Am I in immediate danger from this?

There’s some good news: Intel and Google say that they have never seen any attacks like Meltdown or Spectre actually being used in the wild. And companies including Intel, Amazon, Google, Apple, and Microsoft are rushing to issue fixes, with the first wave already out.

The most immediate consequence of all of this will come from those fixes. Some devices will see a performance dip of as much as 30 percent after the fixes are installed, according to some reports. Intel, however, disputed that figure, saying the amount by which computers will be slowed will depend on how they’re being used.

The Meltdown attack seems to work only on Intel processors. You can guard against it with software updates, according to Google. Those are already starting to become available for Linux and Windows 10.

Spectre, by contrast, appears to be much more dangerous. Google says it’s been able to successfully execute Spectre attacks on processors from Intel, ARM, and AMD. And, according to the search giant, there’s no single, simple fix.

It’s harder to pull off a Spectre-based attack, which is why nobody’s completely panicking. But the attack takes advantage of an integral part of how processors work, meaning it will take a new generation of hardware to stamp it out for good.

In fact, that’s how Spectre got its name.

“As it is not easy to fix, it will haunt us for quite some time,” says the official Meltdown/Spectre FAQ.

What are Meltdown and Spectre, anyway?

Despite how they have been discussed so far in the press, Meltdown and Spectre aren’t really “bugs”. Instead, they represent methods discovered by Google’s Project Zero cybersecurity lab to take advantage of the normal ways that Intel, ARM, and AMD processors work.

To use a Star Wars analogy, Google inspected the Death Star plans and found an exploitable weakness in a small thermal exhaust port.

In the same way that two precisely-placed proton torpedoes could blow up the Death Star, so too can Meltdown and Spectre take advantage of a very specific design quirk and get around (or “melt down”, hence the name) processors’ normal security precautions.

In this case, the design feature in question is something called speculative execution, which is a processing technique most Intel chips have used since 1995, and one that’s common in ARM and AMD processors, too.

With speculative execution, processors essentially guess what you’re going to do next. If they guess right, then they’re already ahead of the curve, and you have a snappier computing experience. If they guess wrong, they dump the data and start over.

What Project Zero found were two key ways to trick even secure, well-designed apps into leaking data from those discarded processes. The exploits take advantage of a flaw in how the data is dumped that could allow attackers – with the right malware installed – to read data that should be secret.

This vulnerability is potentially particularly dangerous in cloud computing systems, where users essentially rent time from massive supercomputing clusters. The servers in those clusters may be shared among multiple users, meaning customers running unpatched and unprepared systems could fall prey to data thieves sharing their processors.

What can I do about it?

To guard against the security flaw and the exploits, the first and best thing you can do is make sure you’re up to date with your security patches. The major operating systems have already started issuing patches that will guard against the Meltdown and Spectre attacks.

In fact, fixes have already begun to hit Linux, Android, Apple’s MacOS, and Microsoft’s Windows 10. So whether you have an Android phone, or you’re a developer using Linux in the cloud, it’s time to update your operating system.

Meanwhile, Microsoft told Business Insider it’s working on rolling out mitigations for its Azure cloud platform. Google Cloud is urging customers to update their operating systems, too.

It’s just as important to make sure you stay up-to-date. While Spectre may not have an easy fix, Google says that there are ways to guard against related exploits. Expect Microsoft, Apple, and Google to issue a series of updates to their operating systems as new Spectre-related attacks are discovered.

Additionally, because Meltdown and Spectre require malicious code to already be running on your system, let this be a reminder to practice good online safety behaviours.

Don’t download any software from a source you don’t explicitly trust. And don’t click on any links or files claiming you won $US10 million in a contest you never entered.

Why could the fixes also slow down my device?

The Meltdown and Spectre attacks take advantage of how the “kernels”, or cores, of operating systems interact with processors. Theoretically, the two are supposed to be separated to some degree to prevent exactly this kind of attack. However, Google’s report proves the current precautions aren’t enough.

Operating system developers are said to be adopting a new level of virtual isolation, basically making requests between the processor and the kernel take the long way around.

The problem is that enforcing this kind of separation requires at least a little extra processing power, which would no longer be available to the rest of the system.

As The New York Times notes, researchers are concerned that the fixes could slow down computers by as much as 20 percent to 30 percent. Microsoft is reported to believe that PCs with Intel processors older than the two-year-old “Skylake” models could see significant slowdowns.

Intel disputes that the performance hits will be as dramatic as The Times suggests.

Some of the slowdowns, should they come to pass, could be mitigated by future software updates. Because the vulnerability was just made public, it’s possible that workarounds and new techniques for circumventing the performance hit will come to light as more developers work on solving the problem.

What happens next?

Publicly, Intel is confident the Meltdown and Spectre bugs won’t have a material impact on its stock price or market share, given that they’re relatively hard to execute and have never been used (that we know of).

Meanwhile, AMD shares are soaring on word that the easier-to-pull-off Meltdown attack isn’t known to work on its processors.

However, as Google is so eager to remind us, Spectre looms large. Speculative execution has been a cornerstone of processor design for more than two decades. It will require a huge rethinking from the entire processor industry to guard against this kind of attack in the future.

The threat of Spectre means the next generation of processors – from all the major chip designers – are going to be a lot different than they are today.

Even so, the threat of Spectre is likely to linger with us far into the future. Consumers are replacing their PCs less frequently, which means older PCs that are at risk of the Spectre attack could be in use for years to come.

Meanwhile, there’s been a persistent problem with updating Android devices to the latest version of the operating system, so there’s likely to be lots of unpatched smartphones and tablets in use for as far as the eye can see. So would-be Spectre attackers are likely going to have their choice of targets.

It’s not the end of the world. But it might just be the end of an era for Intel, AMD, ARM, and the way processors are built.

Apple responds to Intel, ARM chip flaws: All Macs and iOS devices are vulnerable, but don’t panic

Late on Thursday, Apple issued a new support document highlighting how the recently unearthed chip vulnerabilities involving Intel, ARM, and AMD processors impact nearly the entirety of Apple’s product line. Specifically, Apple notes that all Macs and iOS devices are technically susceptible to Spectre and Meltdown, two vulnerabilities which could allow a malicious actor to access sensitive user data in protected memory. Apple, though, makes a point of emphasizing that no known exploits have been uncovered.



“All Mac systems and iOS devices are affected,” the support document reads, “but there are no known exploits impacting customers at this time. Since exploiting many of these issues requires a malicious app to be loaded on your Mac or iOS device, we recommend downloading software only from trusted sources such as the App Store.”

As for what Apple is doing to combat the vulnerabilities, which, interestingly enough, were discovered by security researchers at Google’s Project Zero, Apple relays that patches for the Meltdown vulnerability were already issued with the following updates: iOS 11.2, macOS 10.13.2, and tvOS 11.2. Incidentally, Apple notes that watchOS did not require a patch. Additionally, Apple maintains that the updates above have no discernible impact on system performance. This point is worth highlighting given that the original report from The Register claimed that the requisite patches could result in systems running as much as 30% slower.

With respect to the Spectre vulnerability, which Apple notes is “extremely difficult to exploit,” Apple says that iOS and Mac users can expect a patch relatively soon.

To this point, Apple notes:

Analysis of these techniques revealed that while they are extremely difficult to exploit, even by an app running locally on a Mac or iOS device, they can be potentially exploited in JavaScript running in a web browser. Apple will release an update for Safari on macOS and iOS in the coming days to mitigate these exploit techniques. Our current testing indicates that the upcoming Safari mitigations will have no measurable impact on the Speedometer and ARES-6 tests and an impact of less than 2.5% on the JetStream benchmark.

The entirety of Apple’s new support document can be read below:

About speculative execution vulnerabilities in ARM-based and Intel CPUs

Security researchers have recently uncovered security issues known by two names, Meltdown and Spectre. These issues apply to all modern processors and affect nearly all computing devices and operating systems. All Mac systems and iOS devices are affected, but there are no known exploits impacting customers at this time. Since exploiting many of these issues requires a malicious app to be loaded on your Mac or iOS device, we recommend downloading software only from trusted sources such as the App Store. Apple has already released mitigations in iOS 11.2, macOS 10.13.2, and tvOS 11.2 to help defend against Meltdown. Apple Watch is not affected by Meltdown. In the coming days we plan to release mitigations in Safari to help defend against Spectre. We continue to develop and test further mitigations for these issues and will release them in upcoming updates of iOS, macOS, tvOS, and watchOS.


The Meltdown and Spectre issues take advantage of a modern CPU performance feature called speculative execution. Speculative execution improves speed by operating on multiple instructions at once—possibly in a different order than when they entered the CPU. To increase performance, the CPU predicts which path of a branch is most likely to be taken, and will speculatively continue execution down that path even before the branch is completed. If the prediction was wrong, this speculative execution is rolled back in a way that is intended to be invisible to software.

The Meltdown and Spectre exploitation techniques abuse speculative execution to access privileged memory—including that of the kernel—from a less-privileged user process such as a malicious app running on a device.


Meltdown is a name given to an exploitation technique known as CVE-2017-5754 or “rogue data cache load.” The Meltdown technique can enable a user process to read kernel memory. Our analysis suggests that it has the most potential to be exploited. Apple released mitigations for Meltdown in iOS 11.2, macOS 10.13.2, and tvOS 11.2. watchOS did not require mitigation. Our testing with public benchmarks has shown that the changes in the December 2017 updates resulted in no measurable reduction in the performance of macOS and iOS as measured by the GeekBench 4 benchmark, or in common Web browsing benchmarks such as Speedometer, JetStream, and ARES-6.


Spectre is a name covering two different exploitation techniques known as CVE-2017-5753 or “bounds check bypass,” and CVE-2017-5715 or “branch target injection.” These techniques potentially make items in kernel memory available to user processes by taking advantage of a delay in the time it may take the CPU to check the validity of a memory access call.

Analysis of these techniques revealed that while they are extremely difficult to exploit, even by an app running locally on a Mac or iOS device, they can be potentially exploited in JavaScript running in a web browser. Apple will release an update for Safari on macOS and iOS in the coming days to mitigate these exploit techniques. Our current testing indicates that the upcoming Safari mitigations will have no measurable impact on the Speedometer and ARES-6 tests and an impact of less than 2.5% on the JetStream benchmark. We continue to develop and test further mitigations within the operating system for the Spectre techniques, and will release them in upcoming updates of iOS, macOS, tvOS, and watchOS.

Becoming Human: Intel is Bringing the Power of Sight to Machines

Intel acquires eight-year-old startup Movidius to position itself as the leader in computer vision and depth-sensing technologies. While the details of the acquisition remain undisclosed, Intel and Movidius both stand to gain from this deal.


No, this is not the beginning of a Terminator-esque world.

But yes, it certainly is the start of major developments in computer vision and machine learning technology. Intel is intent on boosting its RealSense platform by acquiring Dublin-based computer vision startup Movidius.

Intel’s existing framework, coupled with Movidius’ power-efficient system on a chip (SoC), is bound to lead to major developments in consumer and enterprise products.

“As part of Intel, we’ll remain focused on this mission, but with the technology and resources to innovate faster and execute at scale. We will continue to operate with the same eagerness to invent and the same customer-focus attitude that we’re known for,” Movidius CEO Remi El-Ouazzane writes in a statement posted on the company’s site.


With the existing applications of Intel’s RealSense platform, Movidius is even better equipped to realize its dream of giving sight to machines. But Movidius is not the only one that will benefit from this deal.


“We see massive potential for Movidius to accelerate our initiatives in new and emerging technologies. The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low power and high performance SoCs opens up opportunities in areas where heat, battery life and form factors are key,” explains Josh Walden, Senior Vice President and General Manager of Intel’s New Technology Group.

Movidius has existing deals with Lenovo, for its Myriad 2 processors, and with Google, to use its neural computation engine to improve machine learning capabilities of mobile devices.

Is machine learning the next commodity?

It’s not every day you can witness an entire class of software making the transition from specialized, expensive-to-develop code to a general-purpose technology. But that’s exactly what’s happening with machine learning.

Chances are, you’re already hip-deep in machine-learning applications. It’s how Google Photos organizes those pictures from your vacation in Spain. It’s how Facebook suggests tags for the pictures you took at last week’s soccer match. It’s how the cars of nearly every major automaker can help you avoid unsafe lane changes.

And it’s also the start of something even bigger.

Machine learning – which enables a computer to learn without new programming – is exploding in its ability to handle highly complex tasks. It can make houses and buildings not just smart, but actively intelligent. It can take e-commerce from a one-size-fits-all experience to something personalized. It might even find your next date.

Driving this surge of machine-learning development is a wave of data generated by mobile phones, sensors, and video cameras. It’s a wave whose scope, scale, and projected growth are unprecedented.

Every minute of every day, YouTube gains 300 hours of video, Apple users download 51,000 apps, and 347,222 Tweets make their way into the world. Those stats come from the good folks at Domo, who call the time we’re living in “an era where data never sleeps.”

Intel Capital’s Sanjit Dang

Until now, the hot topic of conversation has been how to analyze information and take action based on the results. But the volume of data has become so great, and its trajectory so steep, that we need to automate many of those actions. Now.

As a result, we expect machine learning will become the next great commodity. In the short term, we expect the cost of advanced algorithms to plummet – especially given multiple open-source initiatives – and to spur new areas of specialization. Longer term, we expect these kinds of algorithms to make their way into standard microprocessors.

Marc Andreessen once said software is eating the world. In the case of machine learning, it will have a very large appetite.

Proprietary becomes open

To understand the potential of machine learning as a commodity, Linux is a good place to start. Released as a free, open-source operating system in 1991, it now powers nearly all the world’s supercomputers, most of the servers behind the Internet, and the majority of financial trades worldwide – not to mention tens of millions of Android mobile phones and consumer devices.

Like Linux, machine learning is well down the open-source path. In the last few months, Baidu, Facebook, and Google have released sets of open-source machine-learning algorithms. Another group of high-tech heavyweights, including Sam Altman, Elon Musk, and Peter Thiel, have launched the OpenAI initiative. And universities and tech communities are adding new tools to the mix.

In the short-to-medium term, we see three outcomes from this activity. First, companies that need to integrate machine learning into their products will do so inexpensively – either through their engineering teams or third-party vendors.

Second, a three-tier system of available algorithms will establish itself. At the bottom layer will be open-source code. In the middle will be code with greater capabilities, available under license from Amazon, Google, Microsoft, or one of the other big players. At the top will be the highly prized code that keeps these companies competitive; it will stay closely guarded until they feel it’s time to make it available widely.

Finally, we forecast a flurry of merger, acquisition, and licensing agreements as algorithm providers look to grow and defend their positions. We also expect more specialization as they attempt to lock down various markets.

In fact, that process already is well under way.

Smarter buildings & commerce

For all the talk about smarter homes and buildings, today’s technologies aren’t nearly as intelligent as they could be. Yes, they can collect data and operate within confined parameters. But they can’t adapt to the way you live your life.

If you get a new dog, for example, fixed-intelligence devices can’t tell the difference between the two of you. If your calendar shows you working from home, these devices won’t think to disable your security system without asking.

Fortunately, that’s changing. Startups such as Nuro Technologies, for example, are pairing sophisticated sensors and self-learning networks for in-home applications. Think of the sensors as mini iPhones in and around your house. You can download software into them – fire sensing, irrigation control, security and more – the same way you load apps into a phone.

Commerce is also a big opportunity for machine learning. Maybe the biggest. One of our portfolio companies, Vizury, uses machine learning to help companies display only the online ads you want to see. Awarestack is another great example: it uses data about how and where you park a car to create algorithms that can help you get around more efficiently.

Then there’s Dil Mil, an online dating app very popular in the South Asian community and growing rapidly. Unlike conventional apps that use the data they collect to make a romantic match, it looks at social behaviors – such as posting on Instagram, Facebook, and Twitter – to find the best possible match. All in real time.

Next stop: silicon

If the Linux of the 1990s illustrates the long-term impact of machine learning, the laptop and desktop machines of the 1980s point to its final destination. In a word: silicon.

Just as modems and graphics cards made their way into microprocessors and motherboards, so will machine learning software. There is simply too much data through which companies need to sift, too many actions they’ll need to take, and too many good algorithms already available.

It’s going to be an exciting time.

A director at Intel Capital, Sanjit Dang drives investments in user computing across the consumer and enterprise sectors. He has also driven several investments related to big data, the Internet of Things, and cloud computing.

Computer made from tiny carbon nanotubes

The first computer built entirely with carbon nanotubes has been unveiled, opening the door to a new generation of digital devices.

“Cedric” is only a basic prototype but could be developed into a machine which is smaller, faster and more efficient than today’s silicon models.

Nanotubes have long been touted as the heir to silicon’s throne, but building a working computer has proven awkward.


The breakthrough by Stanford University engineers is published in Nature.

Cedric is the most complex carbon-based electronic system yet realised.

So is it fast? Not at all. It might have been in 1955.

Cedric’s vital statistics

  • 1 bit processor
  • Speed: 1 kHz
  • 178 transistors
  • 10-200 nanotubes per transistor
  • 2 billion carbon atoms
  • Turing complete
  • Multitasking
How small is a carbon computer chip?

  • 100 microns – width of human hair
  • 10 microns – water droplet
  • 8 microns – transistors in Cedric
  • 625 nanometres (nm) – wavelength of red light
  • 20-450 nm – single viruses
  • 22 nm – latest silicon chips
  • 9 nm – smallest carbon nanotube chip
  • 6 nm – cell membrane
  • 1 nm – single carbon nanotube

The computer operates on just one bit of information, and can only count to 32.

“In human terms, Cedric can count on his hands and sort the alphabet. But he is, in the full sense of the word, a computer,” says co-author Max Shulaker.

“There is no limit to the tasks it can perform, given enough memory”.

In computing parlance, Cedric is “Turing complete”. In principle, it could be used to solve any computational problem.

It runs a basic operating system which allows it to swap back and forth between two tasks – for instance, counting and sorting numbers.

And unlike previous carbon-based computers, Cedric gets the answer right every time.



“People have been talking about a new era of carbon nanotube electronics, but there have been few demonstrations. Here is the proof,” said Prof Subhasish Mitra, lead author on the study.

The Stanford team hope their achievement will galvanise efforts to find a commercial successor to silicon chips, which could soon encounter their physical limits.

Carbon nanotubes (CNTs) are hollow cylinders composed of a single sheet of carbon atoms.

They have exceptional properties which make them ideal as a semiconductor material for building transistors, the on-off switches at the heart of electronics.

For starters, CNTs are so thin – thousands could fit side-by-side in a human hair – that it takes very little energy to switch them off.

“Think of it as stepping on a garden hose. The thinner the pipe, the easier it is to shut off the flow,” said HS Philip Wong, co-author on the study.


But while single-nanotube transistors have been around for 15 years, no-one had ever put the jigsaw pieces together to make a useful computing device.

So how did the Stanford team succeed where others failed? By overcoming two common bugbears which have bedevilled carbon computing.

First, CNTs do not grow in neat, parallel lines. “When you try and line them up on a wafer, you get a bowl of noodles,” says Mitra.

The Stanford team built chips with CNTs which are 99.5% aligned – and designed a clever algorithm to bypass the remaining 0.5% which are askew.

They also eliminated a second type of imperfection – “metallic” CNTs – a small fraction of which always conduct electricity, instead of acting like semiconductors that can be switched off.

To expunge these rogue elements, the team switched off all the “good” CNTs, then pumped the remaining “bad” ones full of electricity – until they vaporised. The result is a functioning circuit.

The Stanford team call their two-pronged technique “imperfection-immune design”. Its greatest trick? You don’t even have to know where the imperfections lie – you just “zap” the whole thing.

“These are initial necessary steps in taking carbon nanotubes from the chemistry lab to a real environment,” said Supratik Guha, director of physical sciences for IBM’s Thomas J Watson Research Center.

But hang on – what if, say, Intel or another chip company called up and said: “I want a billion of these”? Could Cedric be scaled up and factory-produced?

In principle, yes: “There is no roadblock”, says Franz Kreupl, of the Technical University of Munich in Germany.

“If research efforts are focused towards a scaled-up (64-bit) and scaled-down (20-nanometre transistor) version of this computer, we might soon be able to type on one.”

Shrinking the transistors is the next challenge for the Stanford team. At a width of eight microns (8,000 nanometres) they are much fatter than today’s most advanced silicon chips.

But while it may take a few years to achieve this gold standard, it is now only a matter of time – there is no technological barrier, says Shulaker.

“In terms of size, IBM has already demonstrated a nine-nanometre CNT transistor.

“And as for manufacturing, our design is compatible with current industry processes. We used the same tools as Intel, Samsung or whoever.

“So the billions of dollars invested into silicon has not been wasted, and can be applied for CNTs.”

For 40 years we have been predicting the end of silicon. Perhaps that end is now in sight.

Source: BBC

A first: Stanford engineers build computer using carbon nanotube technology.

A team of Stanford engineers has built a basic computer using carbon nanotubes, a semiconductor material that has the potential to launch a new generation of electronic devices that run faster, while using less energy, than those made from silicon chips.

This unprecedented feat culminates years of efforts by scientists around the world to harness this promising material.

The achievement is reported today in an article on the cover of the journal Nature, written by Max Shulaker and other doctoral students in electrical engineering. The research was led by Stanford professors Subhasish Mitra and H.S. Philip Wong.

“People have been talking about a new era of carbon nanotube electronics moving beyond silicon,” said Mitra, an electrical engineer and computer scientist, and the Chambers Faculty Scholar of Engineering. “But there have been few demonstrations of complete digital systems using this exciting technology. Here is the proof.”

Experts say the Stanford achievement will galvanize efforts to find successors to silicon chips, which could soon encounter physical limits that might prevent them from delivering smaller, faster, cheaper electronic devices.

“Carbon nanotubes (CNTs) have long been considered as a potential successor to the silicon transistor,” said Professor Jan Rabaey, a world expert on electronic circuits and systems at UC Berkeley.

But until now it hasn’t been clear that CNTs could fulfill those expectations.


“There is no question that this will get the attention of researchers in the semiconductor community and entice them to explore how this technology can lead to smaller, more energy-efficient processors in the next decade,” Rabaey said.

Mihail Roco, senior advisor for Nanotechnology at the National Science Foundation, called the Stanford work “an important, scientific breakthrough.”

It was roughly 15 years ago that carbon nanotubes were first fashioned into transistors, the on-off switches at the heart of digital electronic systems.

But a bedeviling array of imperfections in these carbon nanotubes has long frustrated efforts to build complex circuits using CNTs.

Professor Giovanni De Micheli, director of the Institute of Electrical Engineering at École Polytechnique Fédérale de Lausanne in Switzerland, highlighted two key contributions the Stanford team has made to this worldwide effort.

“First, they put in place a process for fabricating CNT-based circuits,” De Micheli said. “Second, they built a simple but effective circuit that shows that computation is doable using CNTs.”

As Mitra said: “It’s not just about the CNT computer. It’s about a change in directions that shows you can build something real using nanotechnologies that move beyond silicon and its cousins.”

Why worry about a successor to silicon? Such concerns arise from the demands that designers place upon semiconductors and their fundamental workhorse unit, the on-off switches known as transistors.

For decades, progress in electronics has meant shrinking the size of each transistor to pack more transistors on a chip. But as transistors become tinier they waste more power and generate more heat – all in a smaller and smaller space, as evidenced by the warmth emanating from the bottom of a laptop.

Many researchers believe that this power-wasting phenomenon could spell the end of Moore’s Law, named for Intel Corp. co-founder Gordon Moore, who predicted in 1965 that the density of transistors would double roughly every two years, leading to smaller, faster and, as it turned out, cheaper electronics.
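
Moore’s prediction is simple compound doubling, which can be illustrated with a quick back-of-the-envelope calculation (assuming an exact two-year doubling period; the function name and starting figure are illustrative, not from the article):

```python
# Moore's Law as simple compound doubling: density doubles every 2 years.
def projected_density(initial_density, years, doubling_period=2):
    """Project transistor density after a given number of years."""
    return initial_density * 2 ** (years / doubling_period)

# Starting from a notional 1 million transistors per chip,
# a decade of two-year doublings gives a 32x increase.
print(projected_density(1_000_000, 10))  # 32000000.0
```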

But smaller, faster and cheaper has also meant smaller, faster and hotter.

“Energy dissipation of silicon-based systems has been a major concern,” said Anantha Chandrakasan, head of electrical engineering and computer science at MIT and a world leader in chip research. He called the Stanford work “a major benchmark” in moving CNTs toward practical use.

CNTs are long chains of carbon atoms that are extremely efficient at conducting and controlling electricity. They are so thin – thousands of CNTs could fit side by side in a human hair – that it takes very little energy to switch them off, according to Wong, co-author of the paper and the Willard R. and Inez Kerr Bell Professor at Stanford.

“Think of it as stepping on a garden hose,” Wong said. “The thinner the hose, the easier it is to shut off the flow.” In theory, this combination of efficient conductivity and low-power switching makes carbon nanotubes excellent candidates to serve as electronic transistors.
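
The hose intuition can be made roughly quantitative: the energy to switch a transistor scales with its gate capacitance (E = ½CV²), and a thinner channel means a smaller capacitance. A sketch with purely illustrative numbers (the capacitance values below are my assumptions, not figures from the paper):

```python
# Back-of-the-envelope switching energy: E = 1/2 * C * V^2.
# Capacitance values here are illustrative only; real devices vary widely.
def switching_energy(capacitance_farads, voltage):
    """Energy (joules) to charge a transistor gate to the given voltage."""
    return 0.5 * capacitance_farads * voltage ** 2

bulk_gate = switching_energy(1e-15, 1.0)  # a notional ~1 fF silicon gate at 1 V
thin_gate = switching_energy(1e-17, 1.0)  # a 100x smaller, CNT-like capacitance
print(bulk_gate / thin_gate)              # ~100x less energy per switch
```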

“CNTs could take us at least an order of magnitude in performance beyond where you can project silicon could take us,” Wong said. But inherent imperfections have stood in the way of putting this promising material to practical use.

First, CNTs do not necessarily grow in neat parallel lines, as chipmakers would like.

Over time, researchers have devised tricks to grow 99.5 percent of CNTs in straight lines. But with billions of nanotubes on a chip, even a tiny degree of misaligned tubes could cause errors, so that problem remained.

A second type of imperfection has also stymied CNT technology.

Depending on how the CNTs grow, a fraction of these carbon nanotubes can end up behaving like metallic wires that always conduct electricity, instead of acting like semiconductors that can be switched off.

Since mass production is the eventual goal, researchers had to find ways to deal with misaligned and/or metallic CNTs without having to hunt for them like needles in a haystack.

“We needed a way to design circuits without having to look for imperfections or even know where they were,” Mitra said. The Stanford paper describes a two-pronged approach that the authors call an “imperfection-immune design.”

To eliminate the wire-like or metallic nanotubes, the Stanford team switched off all the good CNTs. Then they pumped the semiconductor circuit full of electricity. All of that electricity concentrated in the metallic nanotubes, which grew so hot that they burned up and literally vaporized into tiny puffs of carbon dioxide. This sophisticated technique was able to eliminate virtually all of the metallic CNTs in the circuit at once.
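
The selection logic of the burn-off step can be sketched as a toy model (my own simplification, not the team’s fabrication process): tag each tube as semiconducting or metallic, switch the gate off so only metallic tubes still conduct, and remove everything that carries current.

```python
# Toy model of the metallic-CNT burn-off: with the gate switched off,
# semiconducting ("good") tubes stop conducting and are spared, while
# metallic ("bad") tubes still conduct, overheat, and vaporize.

def burn_off(tubes, gate_off=True):
    """Return the tubes that survive the electrical purge.

    Each tube is a dict with a 'kind' of 'semiconducting' or 'metallic'.
    """
    survivors = []
    for tube in tubes:
        conducts = tube["kind"] == "metallic" or not gate_off
        if not conducts:  # no current flows, so the tube survives
            survivors.append(tube)
    return survivors

wafer = [{"kind": "semiconducting"}] * 2 + [{"kind": "metallic"}]
print(len(burn_off(wafer)))  # 2 -- only the semiconducting tubes remain
```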

Bypassing the misaligned nanotubes required even greater subtlety.

So the Stanford researchers created a powerful algorithm that maps out a circuit layout that is guaranteed to work no matter whether or where CNTs might be askew.
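
One way to see why such a layout can be guaranteed correct without locating any stray tube is a simple geometric argument (this is my illustrative model, not the Stanford algorithm itself): etch away a guard strip between any two regions that must stay isolated. A misaligned tube can only short the regions by spanning the gap, but any tube that spans the gap necessarily crosses, and is cut by, the etched strip.

```python
# Illustrative model of misalignment immunity: an etched guard strip
# between two regions breaks every stray tube that could short them,
# no matter where that tube happens to lie.

def tube_shorts_regions(tube_span, guard_strip):
    """A stray tube shorts the regions only if it spans the gap
    without touching the etched guard strip."""
    start, end = tube_span
    g_start, g_end = guard_strip
    # The etch removes tube material inside the strip, breaking
    # every tube whose span overlaps it.
    crosses_strip = start < g_end and end > g_start
    spans_gap = start < g_start and end > g_end
    return spans_gap and not crosses_strip  # geometrically impossible

guard = (40, 60)  # etched strip between region A (<40) and region B (>60)
strays = [(0, 100), (30, 70), (45, 90)]  # arbitrary misaligned tubes
print(any(tube_shorts_regions(t, guard) for t in strays))  # False
```

The point of the demo is that no choice of stray tube can make the check true: spanning the gap implies crossing the strip, so the designer never needs to know where the imperfections are.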

“This ‘imperfections-immune design’ (technique) makes this discovery truly exemplary,” said Sankar Basu, a program director at the National Science Foundation.

The Stanford team used this imperfection-immune design to assemble a basic computer with 178 transistors, a limit imposed by the fact that they used the university’s chip-making facilities rather than an industrial fabrication process.

Their CNT computer performed tasks such as counting and number sorting. It runs a basic operating system that allows it to swap between these processes. In a demonstration of its potential, the researchers also showed that the CNT computer could run MIPS, a commercial instruction set developed in the early 1980s by then Stanford engineering professor and now university President John Hennessy.
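
The article does not reproduce the MIPS subset the computer runs, but the flavor of an instruction-set-driven counting task can be sketched with a minimal interpreter for two MIPS-style instructions, `addi` and `bne` (the encoding below is my own simplification, not the paper’s):

```python
# Minimal interpreter for a MIPS-style counting loop (illustrative only).
def run(program, registers):
    """Execute a list of (opcode, *args) tuples against a register file."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "addi":             # addi rd, rs, imm: rd = rs + imm
            rd, rs, imm = args
            registers[rd] = registers[rs] + imm
        elif op == "bne":            # bne rs, rt, target: branch if unequal
            rs, rt, target = args
            if registers[rs] != registers[rt]:
                pc = target
                continue
        pc += 1
    return registers

# Count register t0 up from 0 to 5, MIPS-loop style.
regs = run([("addi", "t0", "t0", 1),      # t0 += 1
            ("bne", "t0", "limit", 0)],   # loop back while t0 != limit
           {"t0": 0, "limit": 5})
print(regs["t0"])  # 5
```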

Though it could take years to mature, the Stanford approach points toward the possibility of industrial-scale production of carbon nanotube semiconductors, according to Naresh Shanbhag, a professor at the University of Illinois at Urbana-Champaign and director of SONIC, a consortium for next-generation chip design research.

“The Wong/Mitra paper demonstrates the promise of CNTs in designing complex computing systems,” Shanbhag said, adding that this “will motivate researchers elsewhere” toward greater efforts in chip design beyond silicon.

“These are initial necessary steps in taking carbon nanotubes from the chemistry lab to a real environment,” said Supratik Guha, director of physical sciences for IBM’s Thomas J. Watson Research Center and a world leader in CNT research.

Journal reference: Nature


Russian Roulette — An Excerpt From the Wired E-Book John McAfee’s Last Stand.

Twelve weeks before the murder, John McAfee flicks open the cylinder of his Smith & Wesson revolver and empties the bullets, letting them clatter onto the table between us. A few tumble to the floor. McAfee is 66, lean and fit, with veins bulging out of his forearms. His hair is bleached blond in patches, like a cheetah, and tattoos wrap around his arms and shoulders.

More than 25 years ago, he formed McAfee Associates, a maker of antivirus software that went on to become immensely popular and was acquired by Intel in 2010 for $7.68 billion. Now he’s holed up in a bungalow at his island estate 15 miles off the coast of Belize. The shades are drawn so I can see only a sliver of the white sand beach and turquoise water outside. The table is piled with boxes of ammunition, fake IDs, Frontiersman bear deterrent, and a single blue baby pacifier.

McAfee picks a bullet off the floor and fixes me with a wide-eyed, manic intensity, his light blue eyes sparkling. “This is a bullet, right?” he says in the congenial Southern accent that has stuck with him since his boyhood in Virginia.

“Let’s put the gun down,” I tell him. I’d come here to investigate why the government of Belize was accusing him of assembling a private army and entering the drug trade. It seemed implausible that a wildly successful tech entrepreneur would disappear into the Central American jungle and become a narco-trafficker. Now I’m not so sure.

But he explains that the accusations are a fabrication. “Maybe what happened didn’t actually happen,” he says, staring hard at me. “Can I do a demonstration?”

He loads the bullet into the gleaming silver revolver and spins the cylinder.

“This scares you, right?” he says. Then he puts the gun to his head.

My heart rate kicks up; it takes me a second to respond. “Yeah, I’m scared,” I admit.

“We don’t have to do this.”

“I know we don’t,” he says, the muzzle pressed against his temple. And then he pulls the trigger. Nothing happens. He pulls it five times in rapid succession. There are only six chambers.

“Reholster the gun,” I demand.

He keeps his eyes fixed on me and pulls the trigger a sixth time. Still nothing. With the gun still to his head, he starts pulling the trigger incessantly. “I can do this all day long,” he says to the sound of the hammer clicking. “I can do this a thousand times. Ten thousand times. Nothing will ever happen. Why? Because you have missed something. You are operating on an assumption about reality that is wrong.”

It’s the same thing, he argues, with the government’s accusations. They were a smoke screen—an attempt to distort reality—but there’s one thing everybody agrees on: The trouble really got rolling in the humid predawn murk of April 30, 2012.




Blaine goes for shock-factor with latest stunt.

Daredevil stuntman David Blaine lit up New York’s Pier 54 on Friday for his latest high voltage feat.

The illusionist is scheduled to spend three days and nights standing in the middle of a million volts of electricity streamed from Tesla coils.

The stunt is called “Electrified: One Million Volts Always On.”

“Electrified” also is being streamed on YouTube, thanks to computing company Intel. Viewing stations are located in London, Beijing, Tokyo and Sydney. Viewers at the stations are able to control the coils.

The 39-year-old Blaine is wearing a chainmail bodysuit as a barrier between himself and the electric currents.

Blaine’s past stunts include hanging upside down over Central Park, being buried alive and being encased in a block of ice.

Source: Yahoo News.