How Quantum Technology Is Making Computers Millions Of Times More Powerful

When the first digital computer was built in the 1940s, it revolutionized the world of data calculation. When the first programming language was introduced in the 1950s, it transformed digital computers from impractical behemoths to real-world tools. And when the keyboard and mouse were added in the 1960s, it brought computers out of industry facilities and into people’s homes. There’s a technology that has the potential to change the world in an even bigger way than any of those breakthroughs. Welcome to the future of quantum computing.


1s and 0s (And Everything In Between)

Every computer you’ve ever encountered works on the principles of a Turing machine: they manipulate little particles of information, called bits, that exist as either a 0 or a 1, a system known as binary. The fundamental difference in a quantum computer is that it’s not limited to those two options. Their “quantum bits,” or qubits, can exist as 0, 1, or a superposition of 0 and 1—that is, both 0 and 1 and all points in between. It’s only once you measure them that they “decide” on a value. That’s what’s so groundbreaking about quantum computing: Conventional computers can only work on one computation at a time; the fastest just have ways of making multiple components work on separate tasks simultaneously. But the magic of superposition gives quantum computers the ability to work on a million computations at once. With that kind of power, just imagine what humanity could accomplish!
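
The idea of a qubit "deciding" on a value only when measured can be sketched in a few lines of ordinary, classical Python. This toy simulation is purely illustrative, not a real quantum computation: it just tracks the two amplitudes of a single qubit and shows how measurement turns a superposition into a definite 0 or 1.

```python
import math
import random

def make_superposition(theta):
    """Amplitudes for the state cos(theta)|0> + sin(theta)|1>."""
    return math.cos(theta), math.sin(theta)

def measure(amp0, amp1, rng=random):
    """Measurement forces a choice: 0 with probability amp0**2,
    otherwise 1 (amplitudes squared give probabilities)."""
    return 0 if rng.random() < amp0 ** 2 else 1

# An equal superposition "decides" 0 or 1 with 50/50 odds each time.
a0, a1 = make_superposition(math.pi / 4)
samples = [measure(a0, a1) for _ in range(10_000)]
print(sum(samples) / len(samples))  # hovers around 0.5
```

Run it repeatedly and the individual answers differ, but the average stays near one half: the superposition only resolves at the moment of measurement.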

But that’s not all that makes quantum computing so impressive—there’s also the phenomenon of entanglement. Qubits don’t exist in a vacuum. Generally, systems of multiple qubits are entangled, so that they each take on the properties of the others. Take an entangled system of two qubits, for example. Once you measure one qubit, it “chooses” one value. But because of its relationship, or correlation, to the other qubit, that value instantly tells you the value of the other qubit—you don’t even need to measure it. When you add more qubits to the system, those correlations get more complicated. According to Plus magazine, “As you increase the number of qubits, the number of those correlations grows exponentially: for n qubits there are 2^n correlations. This number quickly explodes: to describe a system of 300 qubits you’d already need more numbers than there are atoms in the visible Universe.” But that’s just the point—because those numbers are beyond what we could ever record with a conventional computer, the hope is that quantum computers could crunch unfathomably large amounts of information that conventional computers could never dream about.
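
The exponential growth in that quote is easy to check for yourself. The short script below is a back-of-the-envelope illustration, using the common order-of-magnitude estimate of about 10^80 atoms in the visible universe:

```python
# A full classical description of n entangled qubits needs one
# amplitude per basis state: 2**n numbers in total.
def amplitudes_needed(n_qubits):
    return 2 ** n_qubits

ATOMS_IN_VISIBLE_UNIVERSE = 10 ** 80  # rough, commonly cited estimate

for n in (1, 10, 50, 300):
    print(f"{n:>3} qubits -> {amplitudes_needed(n):.3e} amplitudes")

# At 300 qubits the count already dwarfs the atom estimate.
print(amplitudes_needed(300) > ATOMS_IN_VISIBLE_UNIVERSE)  # True
```
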

D-Wave 2000Q quantum computer

The First Steps Of Quantum Computing

In the future, quantum computers could revolutionize everything from human genomics to artificial intelligence—and over just a few decades, the technology has already gotten to the point where that’s a very real possibility. In 1998, researchers successfully analyzed information from a single qubit, and in 2000, scientists at Los Alamos National Laboratory unveiled the first 7-qubit quantum computer. Less than two decades later, D-Wave’s 1,000-qubit quantum computers are being used by the likes of Google, NASA, and Lockheed Martin, and a 2,000-qubit quantum computer has been unveiled. Feel like replacing your old laptop? That quantum computer will run you a cool $15 million—a small price to pay for a millionfold improvement.

Watch And Learn: Our Favorite Content About The Future Of Computers

Quantum Computers Explained

  1. Transistors can either block or open the way for bits of information to pass. (00:54)
  2. Four classical bits can be in one of 16 different configurations at a time; qubits can be in all 16 combinations at once. (03:44)
  3. Quantum computers could better simulate the quantum world, possibly leading to insights in medicine and other fields. (06:12)

New quantum computer device takes advantage of a loophole in causality

Researchers in Finland have figured out a way to reliably make quantum computers – technology that’s tipped to revolutionise computing in the coming years – even more powerful. And all they had to do was throw common sense out the window.

You’re almost certainly reading this article on a classical computer – which includes all phones, laptops, and tablets – meaning that your computer can only ever do one thing at a time. It reads one bit, then the next bit, then the next bit, and so on. The reading is lightning fast and combines millions or billions or trillions of bits to give you what you want, but the bits are always read and used in order.

So if your computer searches for the solution to a problem, it tries one answer (a particular batch of ones and zeros), checks how far the result is from the goal, tries another answer (a different batch), and repeats. For complicated problems, that process can take an incredibly long time. Sometimes, that’s good. Very clever multiplication secures your bank account, and faster or more efficient equation-solvers put that in jeopardy.
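
That one-candidate-at-a-time process is easy to picture in code. The sketch below is deliberately naive, with a made-up 16-bit target standing in for a "solution": the classical search checks each batch of ones and zeros in order until it hits the goal.

```python
from itertools import product

TARGET = 0b1011001110001101   # a made-up 16-bit "solution" (value 45965)

def tries_until_found(n_bits):
    """Enumerate every bit pattern in order, as a classical search must."""
    for count, bits in enumerate(product([0, 1], repeat=n_bits), start=1):
        value = int("".join(map(str, bits)), 2)
        if value == TARGET:
            return count
    return None

print(tries_until_found(16))  # 45966 candidates checked, one at a time
```

Each extra bit doubles the worst-case number of candidates, which is why hard search problems swamp classical machines so quickly.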

But there are other times – like when biochemists want to try out 1,000 compounds on a particular cell – where it would be nice to give a computer all of the options at once and have it quickly return which have the best chances of success.

This is where quantum computers come in. Instead of sequentially trying individual sets of ones and zeros, they can try all sets – all solutions to the problem – effectively at once. They do this by taking advantage of entanglement, where pairs or groups of atoms (or photons) are linked together in a special way that makes them act like a single system doing a single action. The pairs of atoms make up qubits, which are the quantum analogs of bits.

During a calculation, as long as the atoms stay entangled, the qubits simultaneously use every possible combination of ones and zeros that an equivalent number of bits could hold. They explore all of these options and settle on the best one. Then the energy (or spin, or whatever you want, but let’s stick to energy) of each qubit is measured.

Atoms have discrete energies, so a qubit with a low measured energy would be called a 0, and one a level up would be a 1. Measurement destroys the entanglement, but it reveals the solution.

But why stop at 0 and 1? If the atoms could each search through more values, the computer could test more options at once. So scientists have started looking into qutrits, where there are three options: 0, 1, and 2, or low energy, middle energy, and high energy. Qutrits are hard to set up, but a stable arrangement would make for an extra powerful computer.
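
A rough sense of why a third level helps comes from simply counting states. The comparison below is plain arithmetic, not a simulation of real qutrit physics: an n-qubit register spans 2^n basis states, while an n-qutrit register spans 3^n.

```python
# A register of n qubits spans 2**n basis states; a register of
# n qutrits (levels 0, 1, 2) spans 3**n.
def basis_states(levels, n):
    return levels ** n

for n in (5, 10, 20):
    ratio = basis_states(3, n) / basis_states(2, n)
    print(f"n={n}: qubits={basis_states(2, n)}, "
          f"qutrits={basis_states(3, n)}, ratio={ratio:.0f}x")
```
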

This is where researchers led by Sorin Paraoanu from Aalto University in Finland come in. Publishing in Nature Communications, the team describes how they made qutrits by shooting two pulses of light at a group of entangled atoms. One pulse took them from the lowest energy (0) to a step above it (1), and another pulse lifted them from there to a higher energy (2). The pulses allowed the atoms to access all three of the energies, making them qutrits.

If the atoms sat at the middle energy for too long, they had a good chance of becoming disentangled. This would have ended the experiment immediately. So Paraoanu’s team did something odd: they sent the pulses in the wrong order. First came the pulse to bring the atoms from 1 to 2, then the one to take them from 0 to 1. It’s like backing out of a parking space by going forward first.

Obviously, you wouldn’t do that because you understand causality. You know that you need to back up before you have room to move forward.

Atoms don’t care. When the first pulse hit them, they started seeking out all of the possible energies they could go to because of it, and then they settled on the best course after the second pulse hit – even though they couldn’t have known the second was on its way when the first one came. They skipped sitting at 1 for any time at all and went right on to 2, where they were much more stable. Once they were at 2, computations could begin.

It seems impossible, but it’s just quantum mechanics. Bring on the future of computing.

Quantum computing will bring immense processing possibilities


The one thing everyone knows about quantum mechanics is its legendary weirdness, in which the basic tenets of the world it describes seem alien to the world we live in. Superposition, where things can be in two states simultaneously: a switch both on and off, a cat both dead and alive. Or entanglement, what Einstein called “spooky action at a distance,” in which objects are invisibly linked even when separated by huge distances.

But weird or not, quantum theory is approaching a century old and has found many applications in daily life. As John von Neumann once said: “You don’t understand quantum mechanics, you just get used to it.” Much of electronics is based on quantum physics, and the application of quantum theory to computing could open up huge possibilities for the complex calculations and data processing we see today.

Imagine a computer processor able to harness superposition, to calculate the result of an arbitrarily large number of permutations of a complex problem simultaneously. Imagine how entanglement could be used to allow systems on different sides of the world to be linked and their efforts combined, despite their physical separation. Quantum computing has immense potential, making light work of some of the most difficult tasks, such as simulating the body’s response to drugs, predicting weather patterns, or analysing big datasets.

Such processing possibilities are needed. The first transistors could just about be held in the hand, while today they measure just 14 nm – 500 times smaller than a red blood cell. This relentless shrinking, predicted by Intel co-founder Gordon Moore and known as Moore’s law, has held true for 50 years, but it cannot hold indefinitely. Silicon can only be shrunk so far, and if we are to continue benefiting from the performance gains we have become used to, we need a different approach.
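
That relentless shrinking translates into a simple doubling rule. As a rough sketch (assuming the textbook doubling period of about two years, which is itself an approximation of Moore's observation):

```python
# Moore's law, roughly: transistor counts double about every two years.
def scaling_factor(years, doubling_period_years=2.0):
    return 2 ** (years / doubling_period_years)

# Over the 50 years mentioned above, the doublings compound dramatically.
print(f"{scaling_factor(50):,.0f}x")  # 33,554,432x, i.e. 2**25
```

It is exactly this compounding that silicon cannot sustain forever, which is the opening quantum approaches hope to fill.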

Quantum fabrication

Advances in nanoscale fabrication have made it possible to mass-produce quantum-scale semiconductors – electronic circuits that exhibit quantum effects such as superposition and entanglement.

Replica of the first ever transistor, manufactured at Bell Labs in 1947. 

The image, captured at the atomic scale, shows a cross-section through one potential candidate for the building blocks of a quantum computer: a semiconductor nano-ring. Electrons trapped in these rings exhibit the strange properties of quantum mechanics, and semiconductor fabrication processes are poised to integrate the elements required to build a quantum computer. While we may be able to construct a quantum computer using structures like these, there are still major challenges involved.

In a classical computer processor, a huge number of transistors interact conditionally and predictably with one another. But quantum behaviour is highly fragile: under quantum physics, even measuring the state of a system, such as checking whether a switch is on or off, actually changes what is being observed. Conducting an orchestra of quantum systems to produce useful output that couldn’t easily be handled by a classical computer is extremely difficult.

But there have been huge investments: the UK government announced £270m funding for quantum technologies in 2014 for example, and the likes of Google, NASA and Lockheed Martin are also working in the field. It’s difficult to predict the pace of progress, but a useful quantum computer could be ten years away.

The basic element of a quantum computer is known as a qubit, the quantum equivalent of the bits used in traditional computers. To date, scientists have represented qubits in many different physical systems, ranging from defects in diamonds to semiconductor nano-structures and tiny superconducting circuits. Each of these has its own advantages and disadvantages, but none has yet met all the requirements for a quantum computer, known as the DiVincenzo criteria.

The most impressive progress has come from D-Wave Systems, a firm that has managed to pack hundreds of qubits on to a small chip similar in appearance to a traditional processor.

Quantum computing will bring immense processing possibilities
Quantum circuitry.

Quantum secrets

The benefits of harnessing quantum effects aren’t limited to computing, however. Whether or not quantum computing will extend or augment digital computing, the same physics can be harnessed for other means. The most mature example is quantum communications.

Quantum physics has been proposed as a means to prevent forgery of valuable objects, such as a banknote or diamond, as illustrated in the image below. Here, the unusual negative rules embedded within quantum mechanics prove useful: perfect copies of unknown states cannot be made, and measurements change the systems they are measuring. These two limitations are combined in this quantum anti-counterfeiting scheme, making it impossible to copy the identity of the object in which they are stored.

The concept of quantum money is, unfortunately, highly impractical, but the same idea has been successfully extended to communications. The idea is straightforward: the act of measuring quantum super-position states alters what you try to measure, so it’s possible to detect the presence of an eavesdropper making such measurements. With the correct protocol, such as BB84, it is possible to communicate privately, with that privacy guaranteed by fundamental laws of physics.
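
The eavesdropper-detection idea behind protocols like BB84 can be illustrated with a classical Monte Carlo toy. This sketch is a heavy simplification (random bases, no real photons, and a naive intercept-and-resend attacker): when nobody intercepts, bits kept after basis comparison always agree; when an eavesdropper measures in random bases, roughly a quarter of the kept bits come out wrong, and that mismatch exposes her.

```python
import random

def bb84_error_rate(n_rounds, eavesdrop, seed=7):
    """Fraction of disagreeing bits among rounds where sender and
    receiver happened to pick the same basis."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n_rounds):
        bit = rng.randint(0, 1)          # sender's secret bit
        basis_a = rng.choice("+x")       # sender's encoding basis
        bit_f, basis_f = bit, basis_a    # what travels down the channel
        if eavesdrop:
            basis_e = rng.choice("+x")   # eavesdropper guesses a basis
            if basis_e != basis_a:       # a wrong guess randomizes the bit...
                bit_f = rng.randint(0, 1)
            basis_f = basis_e            # ...and she re-sends in her basis
        basis_b = rng.choice("+x")       # receiver's measurement basis
        received = bit_f if basis_b == basis_f else rng.randint(0, 1)
        if basis_b == basis_a:           # bases are compared publicly later
            kept += 1
            errors += received != bit
    return errors / kept

print(bb84_error_rate(20000, eavesdrop=False))  # 0.0: no disturbance
print(bb84_error_rate(20000, eavesdrop=True))   # close to 0.25
```

In the real protocol, the two parties sacrifice a random subset of their kept bits to estimate exactly this error rate before trusting the key.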

Adding a quantum secret to a standard barcode prevents tampering or forgery of valuable goods.

Quantum communication systems are commercially available today from firms such as Toshiba and ID Quantique. While the implementation is clunky and expensive now, it will become more streamlined and miniaturised, just as transistors have miniaturised over the last 60 years.

Improvements to nanoscale fabrication techniques will greatly accelerate the development of quantum-based technologies. And while useful quantum computing still appears to be some way off, its future is very exciting indeed.


New chip architecture may provide foundation for quantum computer

A photograph of the completed BGA trap assembly. The trap chip is at the center, sitting atop the larger interposer chip that fans out the wiring. The trap chip surface area is 1mm x 3mm, while the interposer is roughly 1 cm square. Credit: D. Youngner, Honeywell

Quantum computers are in theory capable of simulating the interactions of molecules at a level of detail far beyond the capabilities of even the largest supercomputers today. Such simulations could revolutionize chemistry, biology and material science, but the development of quantum computers has been limited by the ability to increase the number of quantum bits, or qubits, that encode, store and access large amounts of data.

In a paper appearing this week in the Journal of Applied Physics, from AIP Publishing, a team of researchers at Georgia Tech Research Institute and Honeywell International have demonstrated a new device that allows more electrodes to be placed on a chip—an important step that could help increase qubit densities and bring us one step closer to a quantum computer that can simulate molecules or perform other algorithms of interest.

“To write down the quantum state of a system of just 300 qubits, you would need 2^300 numbers, roughly the number of protons in the known universe, so no amount of Moore’s Law scaling will ever make it possible for a classical computer to process that many numbers,” said Nicholas Guise, who led the research. “This is why it’s impossible to fully simulate even a modest sized quantum system, let alone something like chemistry of complex molecules, unless we can build a quantum computer to do it.”

While existing computers use classical bits of information, quantum computers use “quantum bits,” or qubits, to store information. Classical bits are either a 0 or a 1, but a qubit, exploiting a weird quantum property called superposition, can actually be in both 0 and 1 simultaneously, allowing much more information to be encoded. Since qubits can be correlated with each other in a way that classical bits cannot, they allow a new sort of massively parallel computation, but only if many qubits at a time can be produced and controlled. The challenge the field has faced is scaling this technology up, much like moving from the first transistors to the first computers.

Fluorescence images of calcium ions confined in the BGA trap. 

Creating the Building Blocks for Quantum Computing

One leading qubit candidate is individual ions trapped inside a vacuum chamber and manipulated with lasers. The scalability of current trap architectures is limited because the connections for the electrodes needed to generate the trapping fields are made at the edge of the chip, and their number is therefore limited by the chip perimeter.
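
The perimeter-versus-area bottleneck is simple geometry. The figures below are illustrative only; the 0.1 mm connection pitch is a made-up number, not a real trap specification:

```python
# Connections along a chip's edge grow with its side length; backside
# connections, as in a ball grid array, grow with its area.
def edge_connections(side_mm, pitch_mm=0.1):
    return round(4 * side_mm / pitch_mm)      # one row around the perimeter

def backside_connections(side_mm, pitch_mm=0.1):
    return round((side_mm / pitch_mm) ** 2)   # a full grid under the chip

for side in (1, 3, 10):
    print(f"{side} mm chip: edge={edge_connections(side)}, "
          f"backside={backside_connections(side)}")
```

Because area grows as the square of the side length, backside wiring pulls further ahead the larger the chip gets, which is the advantage the BGA-style approach exploits.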

The GTRI/Honeywell approach uses new microfabrication techniques that allow more electrodes to fit onto the chip while preserving the laser access needed.

The team’s design borrows ideas from a type of packaging called a ball grid array (BGA) that is used to mount integrated circuits. The ball grid array’s key feature is that it can bring electrical signals directly from the backside of the mount to the surface, thus increasing the potential density of electrical connections.

The researchers also freed up more chip space by replacing area-intensive surface or edge capacitors with trench capacitors and strategically moving wire connections.

SEMs of the trench capacitor and TSV (through-substrate via) structures fabricated into the trap chip. These make electrical connections to the trap electrodes while filtering out RF pickup.

The space-saving moves allowed tight focusing of an addressing laser beam for fast operations on single qubits. Despite early difficulties bonding the chips, a solution was developed in collaboration with Honeywell, and the device was trapping ions from the very first day.

The team was excited with the results. “Ions are very sensitive to stray electric fields and other noise sources, and a few microns of the wrong material in the wrong place can ruin a trap. But when we ran the BGA trap through a series of benchmarking tests we were pleasantly surprised that it performed at least as well as all our previous traps,” Guise said.

Working with trapped ions currently requires a room full of bulky equipment and several graduate students to make it all run properly, so the researchers say much work remains to be done to shrink the technology. The BGA project demonstrated that it’s possible to fit more and more electrodes on a surface trap chip while wiring them from the back of the chip in a compact and extensible way. However, there are a host of engineering challenges that still need to be addressed to turn this into the miniaturized, robust and nicely packaged system that would enable practical quantum computing, the researchers say.

In the meantime, these advances have applications beyond quantum computing. “We all hope that someday quantum computers will fulfill their vast promise, and this research gets us one step closer to that,” Guise said. “But another reason that we work on such difficult problems is that it forces us to come up with solutions that may be useful elsewhere. For example, techniques like those demonstrated here for ion traps are also very relevant for making miniature atomic devices like sensors, magnetometers and chip-scale atomic clocks.”

The Revolutionary Quantum Computer That May Not Be Quantum at All

Google owns a lot of computers—perhaps a million servers stitched together into the fastest, most powerful artificial intelligence on the planet. But last August, Google teamed up with NASA to acquire what may be the search giant’s most powerful piece of hardware yet. It’s certainly the strangest.
Located at NASA Ames Research Center in Mountain View, California, a couple of miles from the Googleplex, the machine is literally a black box, 10 feet high. It’s mostly a freezer, and it contains a single, remarkable computer chip—based not on the usual silicon but on tiny loops of niobium wire, cooled to a temperature 150 times colder than deep space. The name of the box, and also the company that built it, is written in big, science-fiction-y letters on one side: D-WAVE. Executives from the company that built it say that the black box is the world’s first practical quantum computer, a device that uses radical new physics to crunch numbers faster than any comparable machine on earth. If they’re right, it’s a profound breakthrough. The question is: Are they?
Hartmut Neven, a computer scientist at Google, persuaded his bosses to go in with NASA on the D-Wave. His lab is now partly dedicated to pounding on the machine, throwing problems at it to see what it can do. An animated, academic-tongued German, Neven founded one of the first successful image-recognition firms; Google bought it in 2006 to do computer-vision work for projects ranging from Picasa to Google Glass. He works on a category of computational problems called optimization—finding the solution to mathematical conundrums with lots of constraints, like the best path among many possible routes to a destination, the right place to drill for oil, and efficient moves for a manufacturing robot. Optimization is a key part of Google’s seemingly magical facility with data, and Neven says the techniques the company uses are starting to peak. “They’re about as fast as they’ll ever be,” he says.
That leaves Google—and all of computer science, really—just two choices: Build ever bigger, more power-hungry silicon-based computers. Or find a new way out, a radical new approach to computation that can do in an instant what all those other million traditional machines, working together, could never pull off, even if they worked for years.

That, Neven hopes, is a quantum computer. A typical laptop and the hangars full of servers that power Google—what quantum scientists charmingly call “classical machines”—do math with “bits” that flip between 1 and 0, representing a single number in a calculation. But quantum computers use quantum bits, qubits, which can exist as 1s and 0s at the same time. They can operate as many numbers simultaneously. It’s a mind-bending, late-night-in-the-dorm-room concept that lets a quantum computer calculate at ridiculously fast speeds.
Unless it’s not a quantum computer at all. Quantum computing is so new and so weird that no one is entirely sure whether the D-Wave is a quantum computer or just a very quirky classical one. Not even the people who build it know exactly how it works and what it can do. That’s what Neven is trying to figure out, sitting in his lab, week in, week out, patiently learning to talk to the D-Wave. If he can figure out the puzzle—what this box can do that nothing else can, and how—then boom. “It’s what we call ‘quantum supremacy,’” he says. “Essentially, something that cannot be matched anymore by classical machines.” It would be, in short, a new computer age.
A former wrestler short-listed for Canada’s Olympic team, D-Wave founder Geordie Rose is barrel-chested and possessed of arms that look ready to pin skeptics to the ground. When I meet him at D-Wave’s headquarters in Burnaby, British Columbia, he wears a persistent, slight frown beneath bushy eyebrows. “We want to be the kind of company that Intel, Microsoft, Google are,” Rose says. “The big flagship $100 billion enterprises that spawn entirely new types of technology and ecosystems. And I think we’re close. What we’re trying to do is build the most kick-ass computers that have ever existed in the history of the world.”
The office is a bustle of activity; in the back rooms technicians peer into microscopes, looking for imperfections in the latest batch of quantum chips to come out of their fab lab. A pair of shoulder-high helium tanks stand next to three massive black metal cases, where more techs attempt to weave together their spilt guts of wires. Jeremy Hilton, D-Wave’s vice president of processor development, gestures to one of the cases. “They look nice, but appropriately for a startup, they’re all just inexpensive custom components. We buy that stuff and snap it together.” The really expensive work was figuring out how to build a quantum computer in the first place.
Like a lot of exciting ideas in physics, this one originates with Richard Feynman. In the 1980s, he suggested that quantum computing would allow for some radical new math. Up here in the macroscale universe, to our macroscale brains, matter looks pretty stable. But that’s because we can’t perceive the subatomic, quantum scale. Way down there, matter is much stranger. Photons—electromagnetic energy such as light and x-rays—can act like waves or like particles, depending on how you look at them, for example. Or, even more weirdly, if you link the quantum properties of two subatomic particles, measuring one instantly tells you about the other in a matching way. It’s called entanglement, and it works even if they’re miles apart, seemingly instantaneously, though it can’t be used to send information faster than light.
Knowing all this, Feynman suggested that if you could control the properties of subatomic particles, you could hold them in a state of superposition—being more than one thing at once. This would, he argued, allow for new forms of computation. In a classical computer, bits are actually electrical charge—on or off, 1 or 0. In a quantum computer, they could be both at the same time.
It was just a thought experiment until 1994, when mathematician Peter Shor hit upon a killer app: a quantum algorithm that could find the prime factors of massive numbers. Cryptography, the science of making and breaking codes, relies on a quirk of math, which is that if you multiply two large prime numbers together, it’s devilishly hard to break the answer back down into its constituent parts. You need huge amounts of processing power and lots of time. But if you had a quantum computer and Shor’s algorithm, you could cheat that math—and destroy all existing cryptography. “Suddenly,” says John Smolin, a quantum computer researcher at IBM, “everybody was into it.”
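
The asymmetry Shor's algorithm attacks can be felt even at toy scale. The sketch below uses two small, arbitrary primes (real RSA keys use primes hundreds of digits long, far beyond this): multiplying them is a single instant operation, while naive trial division must grind through candidates to undo it.

```python
# Multiplying primes is easy; factoring the product is the hard direction.
def naive_factor(n):
    """Trial division: the brute-force classical approach."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n itself is prime

p, q = 104729, 1299709        # two small primes, for illustration only
n = p * q                     # the easy direction: one multiplication

# The hard direction: ~100,000 trial divisions even for these tiny
# primes, and the work explodes as the primes get longer.
print(naive_factor(n) == (p, q))  # True
```

Shor's quantum algorithm would, in principle, crack numbers of cryptographic size in polynomial time, which is why his 1994 result set off the stampede described above.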
That includes Geordie Rose. A child of two academics, he grew up in the backwoods of Ontario and became fascinated by physics and artificial intelligence. While pursuing his doctorate at the University of British Columbia in 1999, he read Explorations in Quantum Computing, one of the first books to theorize how a quantum computer might work, written by NASA scientist—and former research assistant to Stephen Hawking—Colin Williams. (Williams now works at D-Wave.)
Reading the book, Rose had two epiphanies. First, he wasn’t going to make it in academia. “I never was able to find a place in science,” he says. But he felt he had the bullheaded tenacity, honed by years of wrestling, to be an entrepreneur. “I was good at putting together things that were really ambitious, without thinking they were impossible.” At a time when lots of smart people argued that quantum computers could never work, he fell in love with the idea of not only making one but selling it.
With about $100,000 in seed funding from an entrepreneurship professor, Rose and a group of university colleagues founded D-Wave. They aimed at an incubator model, setting out to find and invest in whoever was on track to make a practical, working device. The problem: Nobody was close.
At the time, most scientists were pursuing a version of quantum computing called the gate model. In this architecture, you trap individual ions or photons to use as qubits and chain them together in logic gates like the ones in regular computer circuits—the ands, ors, nots, and so on that assemble into how a computer thinks. The difference, of course, is that the qubits could interact in much more complex ways, thanks to superposition, entanglement, and interference.
But qubits really don’t like to stay in a state of superposition, what’s called coherence. A single molecule of air can knock a qubit out of coherence. The simple act of observing the quantum world collapses all of its every-number-at-once quantumness into stochastic, humdrum, nonquantum reality. So you have to shield qubits—from everything. Heat or other “noise,” in physics terms, screws up a quantum computer, rendering it useless.
You’re left with a gorgeous paradox: Even if you successfully run a calculation, you can’t easily find that out, because looking at it collapses your superpositioned quantum calculation to a single state, picked at random from all possible superpositions and thus likely totally wrong. You ask the computer for the answer and get garbage.
Lashed to these unforgiving physics, scientists had built systems with only two or three qubits at best. They were wickedly fast but too underpowered to solve any but the most prosaic, lab-scale problems. But Rose didn’t want just two or three qubits. He wanted 1,000. And he wanted a device he could sell, within 10 years. He needed a way to make qubits that weren’t so fragile.
In 2003, he found one. Rose met Eric Ladizinsky, a tall, sporty scientist at NASA’s Jet Propulsion Lab who was an expert in superconducting quantum interference devices, or Squids. When Ladizinsky supercooled teensy loops of niobium metal to near absolute zero, magnetic fields ran around the loops in two opposite directions at once. To a physicist, electricity and magnetism are the same thing, so Ladizinsky realized he was seeing superpositioning of electrons. He also suspected these loops could become entangled, and that the charges could quantum-tunnel through the chip from one loop to another. In other words, he could use the niobium loops as qubits. (The field running in one direction would be a 1; the opposing field would be a 0.) The best part: The loops themselves were relatively big, a fraction of a millimeter. A regular microchip fab lab could build them.
The two men thought about using the niobium loops to make a gate-model computer, but they worried the gate model would be too susceptible to noise and timing errors. They had an alternative, though—an architecture that seemed easier to build. Called adiabatic annealing, it could perform only one specific computational trick: solving those rule-laden optimization problems. It wouldn’t be a general-purpose computer, but optimization is enormously valuable. Anyone who uses machine learning—Google, Wall Street, medicine—does it all the time. It’s how you train an artificial intelligence to recognize patterns. It’s familiar. It’s hard. And, Rose realized, it would have an immediate market value if they could do it faster.
In a traditional computer, annealing works like this: You mathematically translate your problem into a landscape of peaks and valleys. The goal is to try to find the lowest valley, which represents the optimized state of the system. In this metaphor, the computer rolls a rock around the problem-scape until it settles into the lowest-possible valley, and that’s your answer. But a conventional computer often gets stuck in a valley that isn’t really lowest at all. The algorithm can’t see over the edge of the nearest mountain to know if there’s an even lower vale. A quantum annealer, Rose and Ladizinsky realized, could perform tricks that avoid this limitation. They could take a chip full of qubits and tune each one to a higher or lower energy state, turning the chip into a representation of the rocky landscape. But thanks to superposition and entanglement between the qubits, the chip could computationally tunnel through the landscape. It would be far less likely to get stuck in a valley that wasn’t the lowest, and it would find an answer far more quickly.
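
Classical annealing (the version without quantum tunnelling) fits in a short script. This is a sketch of the rolling-rock metaphor on a made-up one-dimensional landscape, not D-Wave's actual algorithm: the "rock" occasionally accepts uphill moves while the temperature is high, which is what lets it escape valleys that aren't the lowest.

```python
import math
import random

def landscape(x):
    """A bumpy 1-D problem landscape: many valleys, one lowest."""
    return 0.1 * x * x + 2.0 * math.sin(3.0 * x)

def anneal(steps=20_000, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)                 # drop the rock anywhere
    for step in range(steps):
        temp = max(1e-3, 3.0 * (1 - step / steps))   # cooling schedule
        candidate = x + rng.gauss(0, 0.5)            # nudge the rock
        delta = landscape(candidate) - landscape(x)
        # Downhill moves are always taken; uphill moves are taken with
        # probability exp(-delta/temp), so escapes get rarer as it cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
    return x

x_best = anneal()
print(round(x_best, 2), round(landscape(x_best), 2))
```

In the quantum version, tunnelling plays the role of those uphill moves: instead of having to climb over a ridge, the system can pass through it.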
The guts of a D-Wave don’t look like any other computer. Instead of metals etched into silicon, the central processor is made of loops of the metal niobium, surrounded by components designed to protect it from heat, vibration, and electromagnetic noise. Isolate those niobium loops well enough from the outside world and you get a quantum computer, thousands of times faster than the machine on your desk—or so the company claims. —Cameron Bird

Proposed modular quantum computer architecture offers scalability to large numbers of qubits

How do you build a universal quantum computer? Turns out, this question was addressed by theoretical physicists about 15 years ago. The answer was laid out in a research paper and has become known as the DiVincenzo criteria. The prescription is pretty clear at a glance; yet in practice the physical implementation of a full-scale universal quantum computer remains an extraordinary challenge.

To glimpse the difficulty of this task, consider the guts of a would-be quantum computer. The computational heart is composed of multiple quantum bits, or qubits, that can each store 0 and 1 at the same time. The qubits can become “entangled,” or correlated in ways that are impossible in conventional devices. A quantum device must create and maintain these quantum connections in order to have a speed and storage advantage over any conventional computer. That’s the upside. The difficulty arises because harnessing entanglement for computation only works when the qubits are almost completely isolated from the outside world. Isolation and control become much more difficult as more and more qubits are added into the computer. Basically, as quantum systems are made bigger, they generally lose their quantum-ness.

In pursuit of a quantum computer, scientists have gained amazing control over various quantum systems. One leading platform in this broad field of research is trapped atomic ions, where nearly 20 qubits have been juxtaposed in a single quantum register. However, scaling this or any other type of qubit to much larger numbers while still contained in a single register will become increasingly difficult, as the connections will become too numerous to be reliable.

Physicists led by ion-trapper Christopher Monroe at the JQI have now proposed a modular quantum computer architecture that promises scalability to much larger numbers of qubits. This research is described in the journal Physical Review A, a topical journal of the American Physical Society. The components of this architecture have individually been tested and are available, making it a promising approach. In the paper, the authors present expected performance and scaling calculations, demonstrating that their architecture is not only viable, but in some ways, preferable when compared to related schemes.

Individual qubit modules are at the computational center of this design, each one consisting of a small crystal of perhaps 10-100 trapped ions confined with electromagnetic fields. Qubits are stored in each atomic ion’s internal energy levels. Logical gates can be performed locally within a single module, and two or more ions can be entangled using the collective properties of the ions in a module.

One or more qubits from the ion trap modules are then networked through a second layer of optical fiber photonic interconnects. This higher-level layer hybridizes photonic and ion-trap technology, where the quantum state of the ion qubits is linked to that of the photons that the ions themselves emit. Photonics is a natural choice as an information bus as it is proven technology and already used for conventional information flow. In this design, the fibers are directed to a reconfigurable switch, so that any set of modules could be connected. The switch system, which incorporates micro-electro-mechanical systems (MEMS) mirrors to direct light into different fiber ports, would allow for entanglement between arbitrary modules and on-demand distribution of quantum information.

The defining feature of this new architecture is that it is modular, meaning that several identical modules composed of smaller registers are connected in a way that is inherently scalable. Modularity is a common property of complex systems, from social networks to biological function, and will likely be a necessary component of any future large-scale quantum computer. Monroe explains, “This is the only way to imagine scaling to larger systems, by building them in smaller standard units and hooking them together. In this case, we know how to engineer every aspect of the architecture.”

In conventional computers, modularity is routinely exploited to realize the massive interconnects required in semiconductor devices, which themselves have been successfully miniaturized and integrated with other electronics and photonics. The first programmable computers were the size of large rooms and used vacuum tubes, and now people have an incredible computer literally at their fingertips. Today’s processors have billions of semiconductor transistors fabricated on chips that are only about a centimeter across.

Similar fabrication techniques are now used to construct computer chip-style ion-traps, sometimes with integrated optics. The modular quantum architecture proposed in this research would not only allow many ion-trap chips to be tied together, but could also be exploited with alternative qubit modules that couple easily to photons, such as those made from nitrogen vacancy centers in diamond or ultracold atomic gases (the neutral cousin of ion-traps).

Study doubts quantum computer speed.

A D-Wave Two machine has been installed at Nasa Ames Research Center in California

A new academic study has raised doubts about the performance of a commercial quantum computer in certain circumstances.

In some tests devised by a team of researchers, the commercial quantum computer has performed no faster than a standard desktop machine.

The team set random maths problems for the D-Wave Two machine and a regular computer with an optimised algorithm.

Google and Nasa share a D-Wave unit at a space agency facility in California.

The comparison found no evidence D-Wave’s $15m (£9.1m) computer was exploiting quantum mechanics to calculate faster than a regular machine.

But the team only looked at one type of computing problem and the D-Wave Two may perform better in other tasks.

The study has been submitted to a journal, but has not yet completed the peer review process to verify the findings.

And D-Wave told BBC News the tests set by the scientists were not the kinds of problems where quantum computers offered any advantage over classical types.

Quantum computers promise to carry out fast, complex calculations by tapping into the principles of quantum mechanics.

Daunting challenge

In conventional computers, “bits” of data are stored as a string of 1s and 0s.

But in a quantum system, “qubits” can be both 1s and 0s at the same time – enabling multiple calculations to be performed simultaneously.

Small-scale, laboratory-bound quantum computers supporting a limited number of qubits can perform simple calculations.

But building large-scale versions poses a daunting engineering challenge.

Thus, Canada-based D-Wave Systems drew scepticism when, in 2011, they started selling their machines, which appeared to use a non-mainstream method known as adiabatic quantum computing.

But last year, two separate studies showed indirect evidence for a quantum effect known as entanglement in the computers. And in a separate study released in 2013, Catherine McGeoch of Amherst College in Massachusetts, a consultant for D-Wave, found the machine was 3,600 times faster on some tests than a desktop computer.

Last year, it was announced that Google, Nasa and other scientists would share time on a D-Wave Two – which has a liquid helium-cooled processor operating close to the temperature known as absolute zero – at the US space agency’s Ames facility in California.

In the latest research, Prof Matthias Troyer of ETH Zurich and colleagues set random maths problems for a D-Wave machine owned by defence giant Lockheed Martin, pitting it against a desktop machine.

Their results revealed that there were some instances in which D-Wave Two was faster than the “classical” computer, but likewise there were others where it performed more slowly.

Nasa and Google are using the D-Wave Two machine to investigate complex problems

Overall, Prof Troyer’s team found no evidence for what they call “quantum speedup” in the D-Wave machine.

But Jeremy Hilton, D-Wave’s vice-president of processor development, told BBC News: “The 512 qubit processor – used in this recent benchmarking study – was able to meet and match the state-of-the-art classical algorithms and computers even though it has been shown that these particular benchmarking problems will not benefit from a quantum speedup.

“Hence, for this particular benchmark, one does not expect to see a scaling advantage for quantum annealing.”

Indeed, in the latest paper, Matthias Troyer and his colleagues write: “Our results for one particular benchmark do not rule out the possibility of speedup for other classes of problems and illustrate that quantum speedup is elusive and can depend on the question posed.”

Mr Hilton commented: “An important element of D-Wave’s technology is our roadmap and vision. We are laser focused on the performance of the machine, understanding how the technology is working so we can continue to improve it and solve real world problems.

He added: “Our customers are interested in solving real world problems that classical computers are less suited for and are often more complex than what we glean from a straightforward benchmarking test.”

D-Wave says it currently has a 1,000 qubit processor in its lab and plans to release it later in 2014.

“Our goal with the next generation of processors is to enhance quantum annealing performance, such that even benchmarks repeated at the 512 qubit scale would perform and scale better. We haven’t yet seen any fundamental limits to performance that cannot be improved with design changes,” Mr Hilton explained.

Common Blue Pigment Could Help Make A Quantum Computer.

Sometimes you just have to look around. A new analysis of a common blue pigment—it’s used in the British five-pound note—found it has some unusual properties that make it a candidate semiconductor for quantum computers.

Researchers from the U.K. and Canada found molecules of copper phthalocyanine are able to hold the superposition state of a quantum bit for as long as, or longer than, other materials being studied for quantum computers. Unlike ordinary bits, which must take on one of two states—for example, 0 or 1—quantum bits can hold both states at once. If a material can hold quantum states long enough, engineers could use it to store and pass on information.

Researchers are interested in building computers with quantum bits because such machines could work much faster than computers today. Some quantum computers already exist, but they’re still experimental and often aren’t able to solve practical problems.

Copper phthalocyanine has one other property that makes it a good prospect for a quantum semiconductor, the researchers wrote in a paper they published yesterday in the journal Nature. The researchers were able to produce it as a thin film, which is convenient for putting into electronic devices.

Top five physics discoveries chosen by magazine.

Four-qubit quantum device: quantum computing is one of five physics discoveries that could “improve the everyday lives of ordinary people around the world”

Five physics discoveries with the potential to transform the world have been selected by a leading science magazine for its 25th birthday issue.

Quantum computing and science that could enable shoes to charge a mobile phone are among the list compiled by Physics World.

A potential new tumour treatment called hadron therapy and the “wonder-material” graphene also feature.

The magazine also picked its top five breakthroughs of the last 25 years.

In all, the publication compiled five lists of five to examine different aspects of physics.

Eternal riddles

Graphene has been one of the most talked-about discoveries in the last decade.

Its strength, flexibility and conductivity make it a potentially ideal material for bendable smartphones and superior prosthetic limbs.


But graphene has another, less-heralded property which could help it transform the everyday lives of people around the world.

Despite being just one atom thick, it is impervious to almost all liquids and gases.

Generating holes in sheets of graphene could therefore create a selective membrane – “the ultimate water purifier” – which might someday create drinking water from the sea.

“Predicting the future is a mug’s game. Of course, we expect to get a few of them wrong,” Hamish Johnston, editor of the magazine’s website, told BBC News.

“Grandiose, utopian predictions that never materialise always look faintly ridiculous in years to come – have you seen anyone recently flying to work on a nuclear-powered jet-pack?”

Physics World is the monthly magazine of the Institute of Physics and was first published in October 1988.

Selecting the five most important breakthroughs of its lifetime was “harder than choosing Nobel laureates”, according to reporter Tushna Commissariat.

The Cat’s Eye Nebula is one of the “five best images” chosen by Physics World

“There have been so many eye-popping findings that our final choice is, inevitably, open to debate,” she wrote.

“Yet for us, these five discoveries stand out above all others as having done the most to transform our understanding of the world.”

They are, in chronological order:

The magazine’s 25th anniversary issue also highlights five images that have allowed us to “see” a physical phenomenon or effect.

They range from the microscopic – electrons on a copper crystal – to the enormous – the Cat’s Eye Nebula, as photographed by the Hubble Space Telescope.

The list of five “biggest unanswered questions” features some eternal riddles – “is life on Earth unique?” Another is: “what exactly is time?”

The 5 biggest unanswered questions

  • What is the nature of the dark universe?
  • What is time?
  • Is life on Earth unique?
  • Can we unify quantum mechanics and gravity?
  • Can we exploit the weirdness of quantum mechanics?

The final top five is a set of “fiendish physics-themed puzzles” devised by the British signals intelligence agency GCHQ.

The first has already appeared online – a jumbled set of letters on a page which need to be deciphered before arriving at a physics-themed answer.

A similar puzzle was recently used by GCHQ to attract potential employees.

It will be followed by another four problems, one per week throughout October, which will become progressively more challenging.

“We think the puzzles are going to really stretch even the brightest minds,” says Matin Durrani, editor of Physics World.

“You won’t need any physics to solve them, but they are certainly going to make you think and they’re a fun way to celebrate our 25th anniversary.

“I also hope our top fives in the birthday issue will remind everyone just how vital, enjoyable and interesting physics can be.”

Quantum Computers Check Each Other’s Work.


Check it twice. Quantum computers rely on these clusters of entangled qubits—units of data that embody many states at once—to achieve superspeedy processing. New research shows one such computer can verify the solutions of another.

Quantum computers can solve problems far too complex for normal computers, at least in theory. That’s why research teams around the globe have strived to build them for decades. But this extraordinary power raises a troubling question: How will we know whether a quantum computer’s results are true if there is no way to check them? The answer, scientists now reveal, is that a simple quantum computer—whose results humans can verify—can in turn check the results of other dramatically more powerful quantum machines.

Quantum computers rely on the odd behavior of quantum mechanics, in which atoms and other particles can seemingly exist in two or more places at once, or become “entangled” with partners, meaning they can instantaneously influence each other regardless of distance. Whereas classical computers symbolize data as bits—a series of ones and zeroes that they express by flicking switchlike transistors either on or off—quantum computers use quantum bits (qubits) that can essentially be on and off at the same time, or in any on/off combination, such as 32% on and 68% off.
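That “on/off combination” is just a pair of probability amplitudes. A minimal sketch, using plain Python with the numbers from the 32%/68% example (no quantum library needed):

```python
import math

# A single qubit is described by two amplitudes (a, b) for the states
# |0> and |1>, normalized so that |a|^2 + |b|^2 = 1.  Measuring it
# yields 0 with probability |a|^2 and 1 with probability |b|^2 --
# the "32% on and 68% off" combination from the text.
a = math.sqrt(0.68)   # amplitude for |0> ("off")
b = math.sqrt(0.32)   # amplitude for |1> ("on")

p_off, p_on = abs(a) ** 2, abs(b) ** 2
assert abs(p_off + p_on - 1.0) < 1e-12   # probabilities sum to 1
print(round(p_on, 2))   # 0.32
```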

Because each qubit can embody so many different states, quantum computers could compute certain classes of problems dramatically faster than regular computers by running through every combination of possibilities at once. For instance, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the universe.
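The arithmetic behind that claim is easy to check: describing n qubits takes 2**n amplitudes, and 2**300 is roughly 10**90, well beyond the commonly cited order-of-magnitude estimate of 10**80 atoms in the observable universe:

```python
# Each extra qubit doubles the number of amplitudes needed to describe
# the state, so n qubits need 2**n numbers.  For n = 300 that is about
# 10**90 -- far more than the roughly 10**80 atoms commonly estimated
# for the observable universe.
n_amplitudes = 2 ** 300
atoms_in_universe = 10 ** 80   # order-of-magnitude estimate
print(n_amplitudes > atoms_in_universe)   # True
```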

Currently, all quantum computers involve only a few qubits “and thus can be easily verified by a classical computer, or on a piece of paper,” says quantum physicist Philip Walther of the University of Vienna. But their capabilities could outstrip conventional computers “in the not-so-far future,” he warns, which raises the verification problem.

Scientists have suggested a few ways out of this conundrum that would involve computers with large numbers of qubits or two entangled quantum computers. But these still lie outside the reach of present technology.

Now, quantum physicist Stefanie Barz at the University of Vienna, along with Walther and their colleagues, has a new strategy for verification. It relies on a technique known as blind quantum computing, an idea which they first demonstrated in a 2012 Science paper. A quantum computer receives qubits and completes a task with them, but it remains blind to what the input and output were, and even what computation it performed.

To test a machine’s accuracy, the researchers peppered a computing task with “traps”—short intermediate calculations to which the user knows the result in advance. “In case the quantum computer does not do its job properly, the trap delivers a result that differs from the expected one,” Walther explains. These traps allow the user to recognize when the quantum computer is inaccurate, the researchers report online today in Nature Physics. The results show experimentally that one quantum computer can verify the results of another, and that theoretically any size of quantum computer can verify any other, Walther says.
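The trap idea can be illustrated with a deliberately simplified classical toy — not the actual blind-quantum-computing protocol, just the logic of hiding known-answer checks among real tasks. The server functions here are hypothetical stand-ins:

```python
import random

# Simplified classical sketch of the "trap" idea: the user hides
# known-answer traps among real tasks; a server that fails any trap
# is flagged as untrustworthy.

def verify_with_traps(server, real_tasks, traps):
    """traps is a list of (task, expected_answer) pairs the user
    precomputed; real_tasks carry no expected answer."""
    jobs = [(t, None) for t in real_tasks] + list(traps)
    random.shuffle(jobs)              # server can't tell traps apart
    trustworthy, results = True, {}
    for task, expected in jobs:
        answer = server(task)
        if expected is not None and answer != expected:
            trustworthy = False       # a trap came back wrong
        results[task] = answer
    return trustworthy, results

# Hypothetical stand-in servers: the honest one squares its input,
# the faulty one is off by one.
honest = lambda n: n * n
faulty = lambda n: n * n + 1

ok, _ = verify_with_traps(honest, real_tasks=[5, 7], traps=[(3, 9), (4, 16)])
bad, _ = verify_with_traps(faulty, real_tasks=[5, 7], traps=[(3, 9)])
print(ok, bad)   # True False
```

As the article notes, the power of the real protocol is that the traps are quantum computations indistinguishable from the normal workload, so a faulty or dishonest machine cannot selectively answer only the traps correctly.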

The existence of undetectable errors will depend on the particular quantum computer and the computation it carries out. Still, the more traps users build into the tasks, the better they can ensure the quantum computer they test is computing accurately. “The test is designed in such a way that the quantum computer cannot distinguish the trap from its normal tasks,” Walther says.

The researchers used a 4-qubit quantum computer as the verifier, but any size will do, and the more qubits the better, Walther notes. The technique is scalable, so it could be used even on computers with hundreds of qubits, he says, and it can be applied to any of the many existing quantum computing platforms.

“Like almost all current quantum computing experiments, this currently has the status of a fun demonstration proof of concept, rather than anything that’s directly useful yet,” says theoretical computer scientist Scott Aaronson at the Massachusetts Institute of Technology in Cambridge. But that doesn’t detract from the importance of these demonstrations, he adds. “I’m very happy that they’re done, as they’re necessary first steps if we’re ever going to have useful quantum computers.”