Understanding Quantum Mechanics: What is Electromagnetism?

On the face of it, both electricity and magnetism are remarkably similar to gravity. Just as two masses are attracted to each other by an inverse square force, the force between two charged objects or two poles of a magnet is also inverse square. The difference is that gravity is always attractive, whereas electricity and magnetism can be either attractive or repulsive. For example, two positive charges will push away from each other, while a positive and negative charge will pull toward each other.
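To make the inverse-square relationship concrete, here is a minimal sketch of Coulomb's law in Python. The charge and distance values are purely illustrative:

```python
# Coulomb's law: F = k * q1 * q2 / r^2
K = 8.9875517923e9  # Coulomb constant, N·m²/C²

def coulomb_force(q1, q2, r):
    """Force between two point charges (newtons).
    Positive result = repulsive, negative = attractive."""
    return K * q1 * q2 / r**2

# Two positive charges repel (force > 0), and doubling the
# distance cuts the force to a quarter -- the inverse square.
f_near = coulomb_force(1e-6, 1e-6, 1.0)
f_far = coulomb_force(1e-6, 1e-6, 2.0)
print(f_near > 0)             # True: like charges repel
print(round(f_near / f_far))  # 4: twice the distance, a quarter the force

# A positive and a negative charge attract (force < 0).
print(coulomb_force(1e-6, -1e-6, 1.0) < 0)  # True
```

Swap either charge's sign and the force flips from repulsive to attractive, which is exactly the freedom gravity lacks.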

As with gravity, electricity and magnetism raised the question of action-at-a-distance. How does one charge “know” to be pushed or pulled by the other charge? How do they interact across the empty space between them? The answer to that question came from James Clerk Maxwell.

Maxwell’s breakthrough was to change the way we thought about electromagnetic forces. His idea was that charges must reach out to one another with some kind of energy. That is, a charge is surrounded by a field of electricity, a field that other charges can detect. Charges possess electric fields, and charges interact with the electric fields of other charges. The same must be true of magnets. Magnets possess magnetic fields, and interact with magnetic fields. Maxwell’s model was not just a description of the force between charges and magnets, but also a description of the electric and magnetic fields themselves. With that change of view, Maxwell found the connection between electricity and magnetism. They were connected by their fields.

A changing electric field creates a magnetic field, and a changing magnetic field creates an electric field. Not only are the two connected, but one type of field can create the other. Maxwell had created a single, unified description of electricity and magnetism. He had united two different forces into a single unified force, which we now call electromagnetism.
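In modern vector notation, this mutual creation is captured by the two induction laws among Maxwell's equations (written here for fields in a vacuum, SI units):

```latex
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
\qquad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

A magnetic field changing in time sources a circulating electric field, and a changing electric field sources a circulating magnetic field: each equation feeds the other, which is what lets the combined field propagate as a wave.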

Maxwell’s theory not only revolutionized physics, it gave astrophysics the tools to finally understand some of the complex behavior of interstellar space. By the mid-1900s Maxwell’s equations were combined with the Navier-Stokes equations describing fluids to create magnetohydrodynamics (MHD). Using MHD we could finally begin to model the behavior of plasma within magnetic fields, which is central to our understanding of everything from the Sun to the formation of stars and planets. As our computational powers grew, we were able to create simulations of protostars and young planets.

Although there are still many unanswered questions, we now know that the dance of plasma and electromagnetism plays a crucial role in the formation of stars and planets.

While Maxwell’s electromagnetism is an incredibly powerful theory, it is a classical model just like Newton’s gravity and general relativity. But unlike gravity, electromagnetism could be combined with quantum theory to create a fully quantum model known as quantum electrodynamics (QED).

A central idea of quantum theory is a duality between particle-like and wave-like (or field-like) behavior. Just as electrons and protons can interact as fields, the electromagnetic field can interact as particle-like quanta we call photons. In QED, charges and electromagnetic fields are described as interactions of quanta. This is most famously done through Richard Feynman’s figure-based approach now known as Feynman diagrams.

The first Feynman diagram. Credit: Richard Feynman.

Feynman diagrams are often misunderstood as representing what is actually happening when charges interact: two electrons approach each other, exchange a photon, and then move away from each other; or virtual particles pop in and out of existence in real time. While the diagrams are easy to read as particle interactions, the objects they describe are still quanta, and still subject to quantum theory. In QED the diagrams are actually used to calculate all the possible ways that charges could interact through the electromagnetic field, in order to determine the probability of a certain outcome. Treating all these possibilities as happening in real time is like arguing that five apples on a table become real one at a time as you count them.

QED has become the most accurate physical model we’ve devised so far, but this theoretical power comes at the cost of losing the intuitive concept of a force.

Feynman’s interactions can be used to calculate the force between charges, just as Einstein’s spacetime curvature can be used to calculate the force between masses. But QED also allows for interactions that aren’t forces. An electron can emit a photon in order to change its direction, and an electron and positron can interact to produce a pair of photons. In QED, matter can become energy and energy can become matter.

What started as a simple force has become a fairy dance of charge and light. Through this dance we left the classical world and moved forward in search of the strong and the weak.

You thought quantum mechanics was weird: check out entangled time

Photo by Alan Levine/Flickr

In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics. The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly. ‘I know of course how the hocus pocus works mathematically,’ he wrote to Einstein on 13 July 1935. ‘But I do not like such a theory.’ Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.

The problem is that entanglement violates how the world ought to work. Information can’t travel faster than the speed of light, for one. But in a 1935 paper, Einstein and his co-authors showed how entanglement leads to what’s now called quantum nonlocality, the eerie link that appears to exist between entangled particles. If two quantum systems meet and then separate, even across a distance of thousands of lightyears, it becomes impossible to measure the features of one system (such as its position, momentum and polarity) without instantly steering the other into a corresponding state.

Up to today, most experiments have tested entanglement over spatial gaps. The assumption is that the ‘nonlocal’ part of quantum nonlocality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal nonlocality?

The answer, as it turns out, is yes. Just when you thought quantum mechanics couldn’t get any weirder, a team of physicists at the Hebrew University of Jerusalem reported in 2013 that they had successfully entangled photons that never coexisted. Previous experiments involving a technique called ‘entanglement swapping’ had already shown quantum correlations across time, by delaying the measurement of one of the coexisting entangled particles; but Eli Megidish and his collaborators were the first to show entanglement between photons whose lifespans did not overlap at all.

Here’s how they did it. First, they created an entangled pair of photons, ‘1-2’ (step I in the diagram below). Soon after, they measured the polarisation of photon 1 (a property describing the direction of light’s oscillation) – thus ‘killing’ it (step II). Photon 2 was sent on a wild goose chase while a new entangled pair, ‘3-4’, was created (step III). Photon 3 was then measured along with the itinerant photon 2 in such a way that the entanglement relation was ‘swapped’ from the old pairs (‘1-2’ and ‘3-4’) onto the new ‘2-3’ combo (step IV). Some time later (step V), the polarisation of the lone survivor, photon 4, was measured, and the results were compared with those of the long-dead photon 1 (back at step II).

Figure 1. Time line diagram: (I) Birth of photons 1 and 2, (II) detection of photon 1, (III) birth of photons 3 and 4, (IV) Bell projection of photons 2 and 3, (V) detection of photon 4.

The upshot? The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.
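The swap at the heart of the experiment can be sketched as a toy state-vector calculation with numpy. This is a mathematical idealization of step IV, not a model of the actual optics, and it assumes the ideal case where the Bell measurement succeeds:

```python
import numpy as np

# The Bell state |Φ+> = (|00> + |11>)/√2 as a 4-component vector.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Full four-photon state: pair (1,2) entangled, pair (3,4) entangled,
# and as yet no entanglement between the two pairs.
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)  # axes: photons 1, 2, 3, 4

# Bell-project photons 2 and 3 onto |Φ+> -- the 'swap' measurement.
# Contract the bra over axes b (photon 2) and c (photon 3).
reduced = np.einsum('abcd,bc->ad', psi, bell.reshape(2, 2))

# Normalize the leftover joint state of photons 1 and 4.
state14 = reduced.reshape(4)
state14 = state14 / np.linalg.norm(state14)

print(np.allclose(state14, bell))  # True: photons 1 and 4 now share a Bell state
```

The calculation shows why the correlations appear: projecting the middle pair onto a Bell state leaves the two outer photons in a Bell state of their own, even though nothing ever passed directly between them.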

What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter. Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

Lest this scenario strike you as too outlandish, Megidish and his colleagues can’t resist speculating on possible and rather spooky interpretations of their results. Perhaps the measurement of photon 1’s polarisation at step II somehow steers the future polarisation of 4, or the measurement of photon 4’s polarisation at step V somehow rewrites the past polarisation state of photon 1. In both forward and backward directions, quantum correlations span the causal void between the death of one photon and the birth of the other.

Just a spoonful of relativity helps the spookiness go down, though. In developing his theory of special relativity, Einstein deposed the concept of simultaneity from its Newtonian pedestal. As a consequence, simultaneity went from being an absolute property to being a relative one. There is no single timekeeper for the Universe; precisely when something is occurring depends on your precise location relative to what you are observing, known as your frame of reference. So the key to avoiding strange causal behaviour (steering the future or rewriting the past) in instances of temporal separation is to accept that calling events ‘simultaneous’ carries little metaphysical weight. It is only a frame-specific property, a choice among many alternative but equally viable ones – a matter of convention, or record-keeping.

The lesson carries over directly to both spatial and temporal quantum nonlocality. Mysteries regarding entangled pairs of particles amount to disagreements about labelling, brought about by relativity. Einstein showed that no sequence of events can be metaphysically privileged – can be considered more real – than any other. Only by accepting this insight can one make headway on such quantum puzzles.

The various frames of reference in the Hebrew University experiment (the lab’s frame, photon 1’s frame, photon 4’s frame, and so on) have their own ‘historians’, so to speak. While these historians will disagree about how things went down, not one of them can claim a corner on truth. A different sequence of events unfolds within each one, according to that spatiotemporal point of view. Clearly, then, any attempt at assigning frame-specific properties generally, or tying general properties to one particular frame, will cause disputes among the historians. But here’s the thing: while there might be legitimate disagreement about which properties should be assigned to which particles and when, there shouldn’t be disagreement about the very existence of these properties, particles, and events.

These findings drive yet another wedge between our beloved classical intuitions and the empirical realities of quantum mechanics. As was true for Schrödinger and his contemporaries, scientific progress is going to involve investigating the limitations of certain metaphysical views. Schrödinger’s cat, half-alive and half-dead, was created to illustrate how the entanglement of systems leads to macroscopic phenomena that defy our usual understanding of the relations between objects and their properties: an organism such as a cat is either dead or alive. No middle ground there.

Most contemporary philosophical accounts of the relationship between objects and their properties embrace entanglement solely from the perspective of spatial nonlocality. But there’s still significant work to be done on incorporating temporal nonlocality – not only in object-property discussions, but also in debates over material composition (such as the relation between a lump of clay and the statue it forms), and part-whole relations (such as how a hand relates to a limb, or a limb to a person). For example, the ‘puzzle’ of how parts fit with an overall whole presumes clear-cut spatial boundaries among underlying components, yet spatial nonlocality cautions against this view. Temporal nonlocality further complicates this picture: how does one describe an entity whose constituent parts are not even coexistent?

Discerning the nature of entanglement might at times be an uncomfortable project. It’s not clear what substantive metaphysics might emerge from scrutiny of fascinating new research by the likes of Megidish and other physicists. In a letter to Einstein, Schrödinger notes wryly (and deploying an odd metaphor): ‘One has the feeling that it is precisely the most important statements of the new theory that can really be squeezed into these Spanish boots – but only with difficulty.’ We cannot afford to ignore spatial or temporal nonlocality in future metaphysics: whether or not the boots fit, we’ll have to wear ’em.

Scientists Are Rethinking the Very Nature of Space and Time

The Nature of Space and Time

A pair of researchers have uncovered a potential bridge between general relativity and quantum mechanics — the two preeminent physics theories — and it could force physicists to rethink the very nature of space and time.

Albert Einstein’s theory of general relativity describes gravity as a geometric property of space and time. The more massive an object, the greater its distortion of spacetime, and that distortion is felt as gravity.

In the 1970s, physicists Stephen Hawking and Jacob Bekenstein noted a link between the surface area of black holes and their microscopic quantum structure, which determines their entropy. This marked the first realization that a connection existed between Einstein’s theory of general relativity and quantum mechanics.

Less than three decades later, theoretical physicist Juan Maldacena observed another link between gravity and the quantum world. That connection led to the creation of a model that proposes that spacetime can be created or destroyed by changing the amount of entanglement between different surface regions of an object.

In other words, this implies that spacetime itself, at least as it is defined in models, is a product of the entanglement between objects.

To further explore this line of thinking, ChunJun Cao and Sean Carroll of the California Institute of Technology (Caltech) set out to see if they could actually derive the dynamical properties of gravity (as familiar from general relativity) using the framework in which spacetime arises out of quantum entanglement. Their research was recently posted to the preprint server arXiv.

Using an abstract mathematical concept called Hilbert space, Cao and Carroll were able to find similarities between the equations that govern quantum entanglement and Einstein’s equations of general relativity. This supports the idea that spacetime and gravity do emerge from entanglement.

Carroll told Futurism the next step in the research is to determine the accuracy of the assumptions they made for this study.

“One of the most obvious ones is to check whether the symmetries of relativity are recovered in this framework, in particular, the idea that the laws of physics don’t depend on how fast you are moving through space,” he said.

A Theory of Everything

Today, almost everything we know about the physical aspects of our universe can be explained by either general relativity or quantum mechanics. The former does a great job of explaining activity on very large scales, such as planets or galaxies, while the latter helps us understand the very small, such as atoms and sub-atomic particles.

However, the two theories are seemingly not compatible with one another. This has led physicists in pursuit of the elusive “theory of everything” — a single framework that would explain it all, including the nature of space and time.

Because gravity and spacetime are an important part of “everything,” Carroll said he believes the research he and Cao performed could advance the pursuit of a theory that reconciles general relativity and quantum mechanics. Still, he noted that the duo’s paper is speculative and limited in scope.

“Our research doesn’t say much, as yet, about the other forces of nature, so we’re still quite far from fitting ‘everything’ together,” he told Futurism.

Still, if we could find such a theory, it could help us answer some of the biggest questions facing scientists today. We may be able to finally understand the true nature of dark matter, dark energy, black holes, and other mysterious cosmic objects.

Already, researchers are tapping into the ability of the quantum world to radically improve our computing systems, and a theory of everything could potentially speed up the process by revealing new insights into the still largely confusing realm.

While theoretical physicists’ progress in pursuit of a theory of everything has been “spotty,” according to Carroll, each new bit of research — speculative or not — leads us one step closer to uncovering it and ushering in a whole new era in humanity’s understanding of the universe.

According To Quantum Mechanics, Reality Might Not Exist Without An Observer

If a tree falls in the forest and there’s no one around to hear it, does it make a sound? The obvious answer is yes—a tree falling makes a sound whether or not we hear it—but certain experts in quantum mechanics argue that without an observer, all possible realities exist. That means that the tree both falls and doesn’t fall, makes a sound and is silent, and all other possibilities therein. This was the crux of the debate between Niels Bohr and Albert Einstein. Learn more about it in the video below.

Quantum Entanglement And The Bohr-Einstein Debate

Does reality exist when we’re not watching?

The Double Slit Experiment

Learn about one of the most famous experiments in quantum physics.


An Illustrated Lesson In Quantum Entanglement

Delve into this heavy topic with some light animation.


Inside knowledge: Is information the only thing that exists?

Physics suggests information is more fundamental than matter, energy, space and time – the problems start when we try to work out what that means


“IT FROM bit.” This phrase, coined by physicist John Wheeler, encapsulates what a lot of physicists have come to believe: that tangible physical reality, the “it”, is ultimately made from information, or bits.

Concepts such as entropy in thermodynamics, a measure of disorder whose irresistible rise seems to characterise our universe, have long been known to be connected with information. More recently, some efforts to unify general relativity, the theory that describes space and time, with quantum mechanics, the theory that describes particles and matter, have homed in on information as a common language.


But what is this information? Is it “ontological” – a real thing from which space, time and matter emerge, just as an atom emerges from fundamental particles such as electrons and quarks and gluons? Or is it “epistemic” – something that just represents our state of knowledge about reality?

Here opinions are divided. Cosmologist Paul Davies argues in the book Information and the Nature of Reality that information “occupies the ontological basement”. In other words, it is not about something, it is itself something. Sean Carroll at the California Institute of Technology in Pasadena disagrees. Even if all of reality emerges from information, he says, this information is just knowledge about the universe’s basic quantum state.


So we have to drill deeper. In quantum mechanics, an object’s state is encoded.


Prominent Astrophysicist Calls the Big Bang A “Mirage”

Artist conceptualization of the Big Bang.

Science classes the world over teach that the Big Bang is the beginning of our universe, as if it’s established fact. In reality, it’s a theory, and one that’s been challenged periodically. In the last few years, two teams of scientists have revived the debate and offered fascinating alternative models. A recent paper published in the journal Nature even goes so far as to suggest that the Big Bang was a “mirage.”

This paper was written by astrophysicist Niayesh Afshordi and colleagues at the University of Waterloo in Ontario, Canada. They built upon the work of physicist Gia Dvali at Ludwig Maximilian University in Munich, Germany. Physicists have some evidence that the Big Bang took place.

For instance, microwave radiation lurking in the background suggests a primordial explosion some 13.7 billion years ago, when the Big Bang is said to have taken place. The fact that the universe is still expanding also suggests that all things came from a common point, strengthening the accepted theory. But what happened before it took place has always been a mystery.

Today, we’re told that everything began with an unimaginably hot, infinitely dense point in space, which did not adhere to the standard laws of physics. This is known as the singularity. But almost nothing is known about it. Afshordi points out in an interview in Nature, “For all physicists know, dragons could have come flying out of the singularity.” Mathematically, the Big Bang itself holds up. But equations can only show us what happened after, not before.

Background radiation in the universe. 

Since the singularity doesn’t fit into normal, predictable physics models and can’t offer a glimpse into its own origins, some scientists are searching for other answers. Dr. Ahmed Farag Ali of Benha University, in Egypt, calls the singularity, “the most serious problem of general relativity.”

He collaborated with Professor Saurya Das of the University of Lethbridge, in Canada, to investigate. In 2015, they released a series of equations which describe the universe, not as an object with a beginning and an end, but as a constantly flowing river, devoid of all boundaries.

There was no Big Bang in this view, and similarly no “Big Crunch,” or a time when the universe might stop expanding and begin condensing. They published their work in the journal Physics Letters B, and plan to introduce a follow-up study. The paper attempts a Herculean feat: healing the rift between general relativity and quantum mechanics.

In this view, the universe began when it filled with gravitons as a bath fills with water. Gravitons don’t have any mass themselves but pass gravity on to other particles. From there, this “quantum fluid” spread out and the speed of expansion accelerated.

So far, it remains a hypothesis which must undergo a battery of tests, before it can compete with or supersede the present model. This isn’t the only challenge to currently accepted theory.

Currently accepted model. NASA Jet Propulsion Laboratory. Caltech.

To get a better idea of how the universe began, Prof. Afshordi and his team created a 3D model of it, floating inside a 4D model of “bulk space.” Remember, the fourth dimension is space-time. This 3D model resembled a membrane, so the scientists named it the “brane.” Next, they examined stars within the model and realized that over time, some would die off in violent supernovae, turning into 4D black holes.

Black holes have an edge called the event horizon. Reach it and nothing will save you from being pulled in. Nothing escapes its omnipotent pull, not light, not even stars. We think of an event horizon as a corona around a black hole, as it is usually represented in 2D images. Everything in space is 3D (4D actually). So it isn’t a ring, but an outer layer of the black hole’s surface.

Afshordi ran the model to see what would happen when a 4D black hole swallowed a 4D star. A 3D brane fired out, as a result. What’s more, the ejected material began expanding in space. So the universe may be the result of a violent interaction between a star and a black hole.

Afshordi said, “Astronomers measured that expansion and extrapolated back that the Universe must have begun with a Big Bang — but that is just a mirage.”



A Deeper Look into Quantum Mechanics


Winfried Hensinger is the director of the Sussex Centre for Quantum Technologies in England, and he has spent a lifetime devoted to studying the ins and outs of quantum mechanics and just what it can do for us. When Hensinger first started in the field, quantum computing was still very much a theory, but now it is all around us, and various projects are within reach of creating a universal quantum computer. So, now that scientists are taking quantum computing more seriously, it won’t be long before the field begins to explode and applications that we never even imagined possible become available to use.

Quantum computing works with information that is stored in qubits, which have a value of either 1, 0, or any quantum superposition of the two states. The notion behind quantum superposition is that a quantum object can occupy more than one state until it is measured. Because quantum objects are used in this kind of computing, a given set of quantum values can represent much more data than binary ever could, because data is not limited to 1s and 0s.
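As an illustration of these ideas, a qubit state can be written down as a two-component vector. This is a small numpy sketch of the mathematics, not how any of the machines described here actually store their qubits:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a normalized 2-component complex vector.
zero = np.array([1, 0], dtype=complex)  # the |0> state
one = np.array([0, 1], dtype=complex)   # the |1> state

# An equal superposition of |0> and |1>.
psi = (zero + one) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes (the Born rule).
p0, p1 = abs(psi[0])**2, abs(psi[1])**2
print(round(p0, 2), round(p1, 2))  # 0.5 0.5

# n qubits span a 2**n-dimensional state space. That exponential growth
# is why a handful of qubits can hold far more amplitudes than bits.
three_qubits = np.kron(np.kron(psi, psi), psi)
print(len(three_qubits))  # 8
```

Three qubits already require eight amplitudes to describe; fifty would require about a quadrillion, which is the sense in which quantum states "represent much more data" than classical bits.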

Currently, researchers are still battling it out to create a successful quantum computer, but they still have a way to go. Systems have been constructed that have access to a few qubits and are good for testing hardware configurations or running some algorithms, but they are very expensive and still very basic. When Hensinger was asked about the current changes within quantum computing, he simply replied, “It used to be a physics problem. Now, it’s an engineering problem.”

Two candidates that researchers think could form the foundation of quantum computing are superconducting qubits and trapped ions. Superconducting qubits rely on supercooled electrical circuits and could bring many advantages to the manufacturing process when made at mass scale. The trapped-ion method copes well with environmental factors but has trouble controlling large numbers of charged atoms within a vacuum. Hensinger supports both implementations and believes they will each produce a quantum computer. During his research, Hensinger’s results showed that the trapped-ion method was slightly ahead of the competition.

However, Hensinger and his team at Sussex have also created their own method, which focuses on the individually controlled voltages needed within the quantum circuit. He says, “With this concept, it becomes much easier to build a quantum computer. This is one of the reasons why I’m very optimistic about trapped ions.” Hensinger and his colleagues also chose to work with trapped ions because the technique works at room temperature, unlike superconducting qubits.

IBM, on the other hand, has chosen superconducting qubits as the basis for its quantum work. Its quantum computer consists of a five-qubit processor contained within a printed circuit board. The refrigerated system contains control wires that carry microwave signals to the chip and send signals back out through various amplifiers and passive microwave components, where they are interpreted by a classical computer so that the system’s qubit states can be read from outside the refrigerator. All of this takes up more than 100 square feet of IBM’s lab, because of the significant cooling that needs to be done.


Jerry Chow, the manager of the Experimental Quantum Computing team at IBM, says that IBM’s choice of superconducting qubits has more to do with the previous research that had been done using this technique. And, as Chow explains, “I think superconducting qubits are really attractive because they’re micro-fabricated. You can make them on a computer chip, on a silicon wafer, pattern them with the standard lithography techniques using transistor processes, and in that sense have a pretty straightforward route toward scaling.”

Two beryllium ions trapped 40 micrometers apart from the square gold chip in the center form the heart of this ‘trapped ion’ quantum computer. (Photo: Y. Colombe/NIST)
NASA’s 512-qubit Vesuvius processor is cooled to 20 millikelvin, more than 100 times colder than interstellar space. (Photo: NASA Ames/John Hardman)

So, one thing that we know for certain is that when it comes to quantum computing, both superconducting qubits and trapped ions have emerged as the two techniques to take note of. Quantum computing will develop further over the next few years, and it’s in everyone’s best interests if large-scale quantum computers aren’t tied down to just one possible solution. Hensinger, for one, is definitely in support of both ideas and notes, “It’s healthy to have different groups trying different things.” At the moment, it’s still hard to say exactly what quantum computing will be used for in the future, but algorithms are constantly being developed to explore what the hardware could be capable of.

A quantum algorithm is a recipe, usually written in a mathematical format, that solves a particular problem. But because quantum computing does not work in the same way as classical computing, algorithms written for binary machines are useless, so new ones need to be made, and that is what Krysta Svore and her team at Microsoft’s Quantum Architectures and Computation Group focus on. She states, “We have a programming language that we have developed explicitly for quantum computing. Our language and our tools are called LIQUi|>. LIQUi|> allows us to express these quantum algorithms, then it goes through a set of optimizations, compilations, and basically rewrites the language instructions into device-specific instructions.”

Svore and her team at the Quantum Architectures and Computation Group have access to a simulated quantum computer running on a classical system. This allows them to debug existing quantum algorithms, as well as design and test new ones, and it helps the hardware team see how quantum computers could be used in practice. IBM, however, has taken things one step further. As well as running a successful simulation of its own, it has launched the IBM Quantum Experience, an online interface that allows students and enthusiasts to experiment with a five-qubit system and run their own algorithms from the cloud-based platform.

IBM’s five-qubit processor uses a lattice architecture that scales to create larger, more powerful quantum computers. (Photo: IBM)
One of the most famous applications in the world of quantum computing is Shor’s algorithm. Ryan O’Donnell of the Carnegie Mellon School of Computer Science in Pittsburgh said, “In 1997, Shor showed an algorithm for a quantum computer that would be able to factor such numbers very efficiently”, referring to numbers with thousands of digits. Ever since, it has served as a kind of measuring stick for the advancement of the whole field. Quantum hardware is also being applied today to push research forward in other areas of science.
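Shor’s insight was that factoring reduces to finding the period of modular exponentiation – the one step a quantum computer performs exponentially faster than any known classical method. The classical skeleton around that step can be sketched in Python, with a brute-force period finder standing in for the quantum part; the toy number 15 and base 7 are just an illustration.

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a**r % n == 1 (the step a quantum computer speeds up)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_core(n, a):
    """Given a base a coprime to n, try to extract a nontrivial factor of n."""
    r = find_order(a, n)
    if r % 2 == 0:
        f = gcd(pow(a, r // 2) - 1, n)
        if 1 < f < n:
            return f
    return None  # unlucky choice of a; pick another base

print(shor_classical_core(15, 7))  # → 3
```

For a 2,000-digit number, the brute-force `find_order` loop would run longer than the age of the universe; replacing it with the quantum period-finding subroutine is exactly what makes the algorithm a threat to RSA-style cryptography.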

Although quantum computing is going to become more common over the next few years, it isn’t suddenly going to be the next mainstream technology found in every office and home – though the technology, in one form or another, may reach them. Quantum computing will develop over the next ten years, although its progress may not be immediately obvious to the general public, because the promise of quantum computing is still well ahead of where researchers actually are. But eventually, it will revolutionize computing.


Watch the video: https://youtu.be/z1GCnycbMeA

Has LIGO Proved Einstein Wrong & Found Signs of Quantum Gravity?

Three physicists have predicted that finding ‘echoes’ of gravitational waves coming from blackhole mergers might be signs of a theory that finally unifies quantum mechanics and general relativity.

A computer simulation shows two neutron stars, the extremely dense cores of now-dead stars, smashing into each other to form a blackhole. Credit: NASA Goddard Space Flight Centre/Flickr, CC BY 2.0


Your high-school physics teacher would’ve likely taught you to think about the smallest constituents of nature by asking you to start with a large object – like a chair – and then keep breaking it down into smaller bits. For the purposes of making sense of your syllabus, you probably stopped at protons, electrons and neutrons. That’s a pity because, if you’d kept going, you’d have stumbled upon some of the biggest mysteries of the universe. At some point, you’d have hit the Planck scale: the smallest region of space, the shortest span of time. This is the smallest scale that quantum mechanics can make sense of, and this is where many physicists expect to find the fundamental particles that make up space itself.

If this region – or some phenomena that are thought to belong exclusively to this region – are found, then physicists will have made a stunning discovery. Apart from finding the ‘atoms’ of space, they’d have opened the doors to marrying the two biggest theories of physics: quantum mechanics and the theory of general relativity (GR). The former’s demesne is the small and smaller particles you passed along the way to the Planck scale. The latter’s is the largest distances and spans of time in the universe. And the discovery would be stunning because GR, created by Albert Einstein 101 years ago, doesn’t allow space to have any constituent ‘atoms’. For GR, space is smooth. And it is this fundamental conflict that has prevented the theories from being reconciled into a single ‘quantum gravity’ theory.

But the first signs of change might be here.

In 2015, the twin Laser Interferometer Gravitational-wave Observatories (LIGO) in the US made the first direct detection of gravitational waves. These are ripples of energy sent strumming through space at the speed of light when a massive object accelerates. LIGO had in fact detected gravitational waves created by two blackholes that were spinning rapidly around each other before colliding and merging to form a larger blackhole. The discovery was unequivocal proof that Einstein’s GR was valid and realistic. But curiously enough, three physicists recently announced that the discovery may in fact have achieved the opposite: invalidated GR and instead turned up the first signs of quantum gravity.

A blackhole is a particularly interesting thing. One is formed when the core of a dying star of a certain kind becomes so massive that, after the initial supernova explosion, it collapses inwards, tightly curving space around itself such that even light can’t escape its prodigious gravity. A blackhole is a consequence of GR – though Einstein didn’t himself predict its existence first. At the same time, because of its freaky nature, a blackhole also often exhibits quantum mechanical properties that physicists have been interested in for their potential to reveal something about quantum gravity. And many of these properties have to do with the blackhole’s outer shell: the event horizon, behind which nothing can escape the blackhole’s heart no matter how fast it is moving away.

GR can’t perfectly predict what the insides of a blackhole are like – and the theory simply breaks down when it comes to the blackhole’s heart itself. But using LIGO’s data when it tuned in to the mergers of two blackhole-pairs in 2015, three physicists are now saying there’s some reason to believe GR may be breaking down at the event horizon itself. They say they are motivated by having spotted signs (in the data) of a quantum-gravity effect known simply as an echo.

According to GR, the event horizon is a smooth and infinitely thin surface: at any given moment, you’re either behind it, falling into the blackhole, or in front of it, looking into the abyss that is the blackhole. But according to quantum mechanics, the event horizon is actually a ‘firewall’ of particles popping in and out of existence. You step into it and you’re immediately incinerated. But apart from making for an interesting gedanken experiment about suicidal astronauts, the existence of such a firewall can have very real consequences.

When two blackholes collide to form a larger blackhole, a very large amount of energy is released. In LIGO’s first detection of a merger, made on September 14, 2015, two blackholes weighing 29 and 36 solar masses merged to form a blackhole weighing 62 solar masses. The remaining three solar masses – equivalent to roughly 5.4 × 10^47 joules of energy – were expelled as gravitational waves. If GR has its way, with an infinitely thin event horizon, then the waves are immediately expelled into space. However, if quantum mechanics has its way, then some of the waves are first trapped inside the firewall of particles, where they bounce around like echoes depending on the angle at which they were ensnared, and escape in instalments. LIGO would then have detected them the same way: not all at once but with delays.
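That energy figure is a straightforward back-of-envelope application of E = mc². The sketch below uses rounded values for the solar mass and the speed of light:

```python
# Back-of-envelope check of the radiated energy using E = m c^2.
SOLAR_MASS_KG = 1.989e30   # mass of the Sun, kg
SPEED_OF_LIGHT = 2.998e8   # metres per second

radiated_mass = 3 * SOLAR_MASS_KG           # (29 + 36) - 62 solar masses
energy_joules = radiated_mass * SPEED_OF_LIGHT ** 2
print(f"{energy_joules:.2e} J")  # roughly 5.4e47 joules
```

For scale, that is more energy than every star in the observable universe emits as light over the same fraction of a second – released entirely as ripples in space itself.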

LIGO’s original template for GW150914, along with Abedi-Dykaar-Afshordi’s best-fit template for the echoes. Caption and credit: arXiv:1612.00266


The three physicists – Jahed Abedi, Hannah Dykaar and Niayesh Afshordi – simulated firewall-like conditions using mirrors placed close to a computer-simulated blackhole to determine the intervals at which gravitational echoes from each of the three events LIGO has detected so far would arrive. When they had their results, they went looking for similar signals in the LIGO data. In a pre-print paper uploaded to the arXiv server on December 1, the trio writes that it did find them, with a statistical significance of 2.9 sigma. This is a mathematical measure of confidence that is not good enough to technically be considered evidence (3 sigma), let alone proof of any kind (5 sigma). And when tested for each event, the odds are lower: they max out at 2 sigma in the case of the merger known as GW150914, the first one LIGO detected. Finally, even if the signal persists, it might ultimately be due not to quantum gravity but to some other source.

Nonetheless, the significance isn’t zero – and the LIGO team has confirmed that it is looking for more signs of echoes in its data. Luckily for everyone, the detectors also recently restarted with upgrades to make them more sensitive, to more accurately study the gravitational wave signals arising from blackholes of diverse masses. If future experiments can’t detect stronger echoes (or eliminate existing sources of noise that could be clouding observations), then that’s that for this line of verifying quantum gravity. But until then, it wouldn’t be amiss to speculate on its veracity – or on variations of it that might yield better results, results closer to LIGO’s capabilities – if only because the data that LIGO collects for each merger is so complex.

Ultimately, the most heartening takeaway from the Abedi-Dykaar-Afshordi thesis is that there is an experimental way to confirm the predictions of quantum gravity at all. Physicists have long held it to be out of human reach. This is obvious when you realise the Planck length is 100-billion-billion-times smaller than the diameter of a proton and the Planck second is 10-billion-billion-billion-times smaller than the smallest unit of time that some of the most powerful atomic clocks can measure. If quantum gravity is a true theory, then it will be to nature’s unending credit that it spawned blackholes that can magnify the effects of such infinitesimal provenance – and to humankind’s for building machines that can eavesdrop on them.
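Those scales follow directly from the fundamental constants: the Planck length is √(ħG/c³) and the Planck time is √(ħG/c⁵). A quick sketch with rounded SI values:

```python
import math

# Planck units from the fundamental constants (SI values, rounded).
HBAR = 1.0546e-34   # reduced Planck constant, J s
G = 6.6743e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C ** 3)   # ~1.6e-35 m
planck_time = math.sqrt(HBAR * G / C ** 5)     # ~5.4e-44 s

proton_diameter = 1.7e-15  # metres, approximate
print(f"Planck length: {planck_length:.2e} m")
print(f"proton diameter / Planck length: {proton_diameter / planck_length:.1e}")
```

The last ratio comes out near 10^20 – the “100-billion-billion” factor quoted above – which is why no conceivable particle accelerator can probe the Planck scale directly, and why astrophysical amplifiers like blackholes matter so much.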

An Indian LIGO detector is currently under development and is expected to join the twin American ones to study gravitational waves by 2023.

Interstellar was right. Falling into a black hole is not the end, says Stephen Hawking.

“If you feel you are in a black hole, don’t give up, there’s a way out,” Stephen Hawking told the Royal Institute of Technology in Stockholm

An artist's impression of a supermassive black hole at the centre of a distant quasar


Interstellar was right. Falling into a black hole is not the end, professor Stephen Hawking has claimed.

Although physicists had assumed that all matter must be destroyed by the huge gravitational forces of a black hole, Hawking told delegates in Sweden that it could escape and even pop into another dimension.

The theory addresses the ‘information paradox’ that has puzzled scientists for decades: quantum mechanics says that information can never be destroyed, while general relativity says a black hole must destroy it.

However under Hawking’s new theory, anything that is sucked into a black hole is effectively trapped at the event horizon – the sphere surrounding the hole from which it was thought that nothing can escape.

And he claims that anything which fell in could re-emerge into our universe, or a parallel one, through Hawking radiation – photons which manage to escape from the black hole because of quantum fluctuations.

“If you feel you are in a black hole, don’t give up, there’s a way out,” Hawking told an audience at the KTH Royal Institute of Technology in Stockholm.

In the film Interstellar, Cooper, played by Matthew McConaughey, plunges into the black hole Gargantua. As Cooper’s ship breaks apart under the force, he evacuates and ends up in a tesseract – a four-dimensional cube. He eventually makes it out of the black hole.

The black hole Gargantua from the film Interstellar


Black holes are stars that have collapsed under their own gravity, producing such extreme forces that even light can’t escape.

But Hawking claims that information never makes it inside the black hole in the first place and instead is ‘translated’ into a kind of hologram which sits in the event horizon.

“I propose that the information is stored not in the interior of the black hole as one might expect, but on its boundary, the event horizon,” said Prof Hawking.

“The idea is the supertranslations are a hologram of the ingoing particles,” he said. “Thus they contain all the information that would otherwise be lost.”

Hawking also believes that radiation leaving the black hole can pick up some of the information stored at the event horizon and carry it back out. However it is unlikely to be in the same state in which it entered.

“The information about ingoing particles is returned, but in a chaotic and useless form,” he said. “This resolves the information paradox. For all practical purposes, the information is lost.

“The message of this lecture is that black holes ain’t as black as they are painted. They are not the eternal prisons they were once thought. Things can get out of a black hole both on the outside and possibly come out in another universe.”

Hawking and colleagues are expected to publish a paper on the work next month.

“He is saying that the information is there twice already from the very beginning, so it’s never destroyed in the black hole to begin with,” Sabine Hossenfelder of the Nordic Institute for Theoretical Physics in Stockholm told New Scientist. “At least that’s what I understood.”

NASA’s ‘impossible’ EM Drive works: German researcher confirms and it can take us to the moon in just 4 HOURS

Over the past year, there’s been a lot of excitement about the electromagnetic propulsion drive, also known as the EM Drive – a seemingly impossible engine that has defied almost everyone’s expectations by continuing to stand up to experimental scrutiny. The EM Drive is so thrilling because it yields enormous amounts of propulsion that could hypothetically blast us to Mars in only 70 days, without the need for heavy and costly rocket fuel. Instead, it’s propelled forward by microwaves bouncing back and forth inside a sealed chamber, and this is what makes the EM Drive so powerful, and at the same time so debatable.

As effective as this kind of propulsion may sound, it challenges one of the essential concepts of physics – the conservation of momentum, which states that for anything to be propelled forward, some kind of propellant must be pushed out in the opposite direction. For that reason, the drive was generally laughed at and overlooked when it was designed by English scientist Roger Shawyer in the early 2000s. But a few years later, a group of Chinese researchers decided to construct their own version, and to everyone’s amazement, it really worked. Then an American inventor built something similar and convinced NASA’s Eagleworks Laboratories, supervised by Harold ‘Sonny’ White, to give it a try. And they admitted that it actually works. Now Martin Tajmar, a well-known professor and chair for Space Systems at Dresden University of Technology in Germany, has built his own EM Drive, and has once again found that it produces thrust – though for reasons he can’t yet explain.

Tajmar presented his results at the 2015 American Institute of Aeronautics and Astronautics’ Propulsion and Energy Forum and Exposition in Florida on 27 July, and you can read his entire paper here. He has a long history of experimentally testing (and debunking) revolutionary propulsion systems, so his results are a big deal for those looking for outside confirmation of the EM Drive.

Most importantly, his system produced an amount of thrust comparable to what Shawyer initially predicted – several thousand times greater than that of a standard photon rocket.

So where does all of this leave us with the EM Drive? While it’s fun to speculate about just how revolutionary it could be for humanity, what we really need now are results published in a peer-reviewed journal – which is something that Shawyer claims he is just a few months away from doing, as David Hambling reports for Wired.

So it may turn out that we need to modify some of our laws of physics to explain how the drive actually works. But if that opens up the possibility of human travel throughout the entire Solar System – and, more significantly, beyond – then it’s a sacrifice we’re certainly willing to make.
