In recent years scientists have started exploring the concept of anti-lasers – devices that can perfectly absorb a particular wavelength of light, as opposed to emitting it the way a laser does.
Now researchers have published a study that explores the blueprint for building an anti-laser that’s more complex than anything we’ve seen before.
More than an anti-laser, this team’s device is a ‘random anti-laser’: capable of absorbing waves randomly scattered in all directions. This ability could have a variety of potential uses, in everything from phone antennas to medical equipment – anywhere waves are captured.
An anti-laser may sound wild, but it’s actually pretty much what it says on the tin. You can think of such a device as a laser light burst happening in reverse – getting swallowed up rather than beamed out, according to the researchers.
“So far, anti-lasers have only been realised in one-dimensional structures onto which laser light was directed from opposite sides,” says one of the team, Stefan Rotter from the Vienna University of Technology in Austria.
“Our approach is much more general: we were able to show that even arbitrarily complicated structures in two or three dimensions can perfectly absorb a suitably tailored wave. In this way, this novel concept can also be used for a much wider range of applications.”
It’s that versatility and flexibility that sets this new anti-laser apart from previous such devices. The team worked up a set of calculations and computer simulations to theorise how such a perfectly absorbing anti-laser might work, then backed them up with physical lab tests.
Key to the process is calculating the right wave front for the incoming signals, so that they can be perfectly absorbed. That then enables the absorption of waves that aren’t arriving in predictable ways, but rather as scattered signals bouncing in from multiple sources.
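As a toy illustration of what finding the right wave front means (a minimal sketch with made-up numbers, not the team’s actual device or data): a lossy scatterer can be summarised by a scattering matrix S, and a perfectly absorbed input is an incident wavefront that S maps to zero outgoing power – an eigenvector of S with eigenvalue close to zero.

```python
import numpy as np

# Hypothetical sub-unitary (lossy) 2x2 scattering matrix for a two-port
# system. Its structure is chosen so that one eigenvalue is ~0, i.e. a
# coherent-perfect-absorption mode exists.
a = 0.45 + 0.15j
S = np.array([[a, -a],
              [-a, a]])

eigvals, eigvecs = np.linalg.eig(S)
k = np.argmin(np.abs(eigvals))   # mode closest to perfect absorption
v_in = eigvecs[:, k]             # the tailored incident wavefront

absorbed = 1 - np.linalg.norm(S @ v_in) ** 2 / np.linalg.norm(v_in) ** 2
print(f"absorbed fraction, tailored wavefront:   {absorbed:.3f}")

# An untailored wavefront (all power into one port) is absorbed far less.
v_plain = np.array([1.0, 0.0])
absorbed_plain = 1 - np.linalg.norm(S @ v_plain) ** 2
print(f"absorbed fraction, untailored wavefront: {absorbed_plain:.3f}")
```

The point of the sketch is the contrast: only the specially shaped input is swallowed completely, which is why the researchers had to measure the reflections first and tune the antenna to match them.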
“Waves that are being scattered in a complex way are really all around us – think about a mobile phone signal that is reflected several times before it reaches your cell phone,” says Rotter.
“This multiple scattering is made practical use of in so-called random lasers. Such exotic lasers are based on a disordered medium with a random internal structure that can trap light and emit a very complicated, system-specific laser field when supplied with energy.”
The random anti-laser setup. (Vienna University of Technology)
When it came to building their own anti-laser, the scientists set up a series of randomly placed Teflon cylinders and sent microwave signals scattering through them – a little like rocks deflecting water waves in a puddle.
A waveguide placed on top with an antenna in its centre was used to absorb the incoming waves. The researchers managed to get an absorption rate of approximately 99.8 percent of the signals they broadcast.
That high mark is only in tightly controlled conditions, though – the team first measured the wave reflections as they came back in order to finely tune the central antenna to absorb them. Both the frequency of the signal and the absorption strength have to be carefully calibrated.
As a first attempt though, it’s very promising, and the theoretical physics behind the project suggests it can be adapted to a range of other signals and applications. It could work for any scenario “in which waves need to be perfectly focused, routed or absorbed”, write the researchers.
“Imagine, for example, that you could adjust a cell phone signal exactly the right way, so that it is perfectly absorbed by the antenna in your cell phone,” says Rotter.
“Also in medicine, we often deal with the task of delivering wave energy to a very specific point – such as shock waves shattering a kidney stone.”
Being treated with an anti-laser sounds pretty cool to us.
The Nobel Prize-winning physicist made his mark as an original genius.
Richard Phillips Feynman (1918-1988) was one of the most brilliant and original physicists of the 20th century. With an extraordinary intuition, he always sought to address the problems of physics in a different way than others.
In 1965, he received the Nobel Prize in Physics, shared with J. Schwinger and S. Tomonaga, for his work on quantum electrodynamics. Working independently, the three had arrived at the same results: a quantum, relativistic description of electrically charged particles, such as electrons and positrons, interacting with electromagnetic fields.
Feynman’s technique illustrates his character quite well. His colleagues wrote long mathematical formulas, whereas Richard Feynman literally drew the physical processes he wanted to study – diagrams from which the calculations could then be made easily, by following precise rules.
Currently, the use of Feynman diagrams or the variants of these diagrams is the standard procedure for calculations in the field of physics.
Feynman and the classroom space
For Feynman, the classroom was a theatre, and he was an actor who had to sustain an intrigue while talking about physics and writing numbers and formulas on the board. With this in mind, his classes and lectures were prepared with great care, like pieces of classical theatre, complete with a setup and a denouement.
His passionate way of talking about physics is probably why he became such a popular lecturer. Many of his lectures have been translated and published in the form of books, and there were even some recorded for television.
They still make for fresh reading, as befits classics. To explain what physics is, Feynman reflects on general questions such as the principles of conservation, the meaning of the symmetries of physical laws and of temporal evolution, and the distinction between past and future.
With his pragmatic style, Feynman always went straight to the heart of the issue, and the audience could grasp the problem being posed.
A good example is his treatment of quantum physics. The whole mystery of quantum mechanics can be summed up in wave/particle duality, and the double-slit experiment contains all the basic ingredients needed to discuss it.
Feynman does it with a simplicity, and a depth, that have never been surpassed – practically every popular account of quantum mechanics is inspired by his version.
Feynman only taught seniors and Ph.D. students, with one crucial exception. In the academic years 1961-62 and 1962-63, he gave a physics course for first- and second-year students that has since become one of the most famous physics courses ever taught.
The classes were recorded, transcribed and published under the title “The Feynman Lectures on Physics” in three volumes that continue to be edited and translated today.
Although Feynman made a great effort to find simple and clear explanations for the students, those who benefited most were the Ph.D. students, professors, and scientists who attended his course, because it illustrated brilliantly, by example, how to think and reason in physics.
The course does not present physics in the traditional way, but through Feynman’s own vision of it. Feynman thus became, as has been written many times, a great teacher of teachers.
Naturally, the anecdotes are too beautiful to be true in every detail, but they faithfully convey the Feynman style and the image he wanted to project of himself. Feynman liked to tell funny stories, and he usually gave himself the most prominent role in them.
Feynman’s rise in popularity
But true popularity came to him through his participation in the commission charged with investigating the Challenger accident of January 1986. The space shuttle broke apart shortly after liftoff, and the live television broadcast of the disaster amplified its social impact.
A good half of the second book of anecdotes is dedicated to this episode. In contrast to the commission’s chairman, who wanted to control the whole process, Feynman conducted his own investigation, in his own style.
Soon, he was convinced that the problem lay in the rubber O-ring seals in the joints of the shuttle’s rocket boosters. He saw that they could not withstand the low temperatures at the time of launch, and decided to demonstrate it during one of the commission’s public sessions.
He showed that a piece of the seal, compressed with a clamp and cooled in a glass of ice water, took far too long to recover its shape – time enough for hot gas to escape and cause the explosion. The journalists present spread his intervention everywhere, and everyone understood the main cause of the accident.
Setting everything else aside, Feynman’s originality is perhaps his greatest legacy to humanity and to future generations. If in his youth this originality was motivated above all by a desire for competition, in adulthood it found a more interesting and profound outlet.
The laws of physics can often be formulated in many ways that look different at first glance until, with a certain amount of mathematical work, they are shown to be identical. Feynman said that this is a mysterious fact that nobody understands, and he saw in it a reflection of the simplicity of nature.
“If you realize all the time what’s kind of wonderful – that is, if we expand our experience into wilder and wilder regions of experience – every once in a while, we have these integrations when everything’s pulled together into a unification, in which it turns out to be simpler than it looked before,” Feynman aptly said in his book The Pleasure of Finding Things Out.
Astronomers claim in a new paper that star motions should make it easy for civilizations to spread across the galaxy, yet we might still find ourselves alone.
As far as anyone knows, we have always been alone. It’s just us on this pale blue dot, “home to everyone you love, everyone you know, everyone you ever heard of,” as Carl Sagan so memorably put it. No one has called or dropped by. And yet the universe is filled with stars, nearly all those stars have planets, and some of those planets are surely livable. So where is everybody?
The Italian physicist Enrico Fermi was purportedly the first to pose this question, in 1950, and scientists have offered a bounty of solutions for his eponymous paradox since. One of the most famous came from Sagan himself, with William Newman, who postulated in a 1981 paper that we just need patience. Nobody has visited because they’re all too far away; it takes time to evolve a species intelligent enough to invent interstellar travel, and time for that species to spread across so many worlds. Nobody is here yet.
Other researchers have argued that extraterrestrial life might rarely become space-faring (just as only one species on Earth ever has). Some argue that tech-savvy species, when they arise, quickly self-destruct. Still others suggest aliens may have visited in the past, or that they’re avoiding us on purpose, having grown intelligent enough to be suspicious of everyone else. Perhaps the most pessimistic answer is a foundational paper from 1975, in which the astrophysicist Michael Hart declared that the only plausible reason nobody has visited is that there really is nobody out there.
Now comes a paper that rebuts Sagan and Newman, as well as Hart, and offers a new solution to the Fermi paradox that avoids speculation about alien psychology or anthropology.
The research, which is under review by The Astrophysical Journal, suggests it wouldn’t take as long as Sagan and Newman thought for a space-faring civilization to planet-hop across the galaxy, because the movements of stars can help distribute life. “The sun has been around the center of the Milky Way 50 times,” said Jonathan Carroll-Nellenback, an astronomer at the University of Rochester, who led the study. “Stellar motions alone would get you the spread of life on time scales much shorter than the age of the galaxy.” Still, although galaxies can become fully settled fairly quickly, the fact of our loneliness is not necessarily paradoxical: According to simulations by Carroll-Nellenback and his colleagues, natural variability will mean that sometimes galaxies will be settled, but often not — solving Fermi’s quandary.
The question of how easy it would be to settle the galaxy has played a central role in attempts to resolve the Fermi paradox. Hart and others calculated that a single space-faring species could populate the galaxy within a few million years, and maybe even as quickly as 650,000 years. Their absence, given the relative ease with which they should spread, means they must not exist, according to Hart.
Sagan and Newman argued it would take longer, in part because long-lived civilizations are likelier to grow more slowly. Faster-growing, rapacious societies might peter out before they could touch all the stars. So maybe there have been a lot of short-lived, fast-growing societies that wink out, or a few long-lived, slowly expanding societies that just haven’t arrived yet, as Jason Wright of Pennsylvania State University, a coauthor of the new study, summarized Sagan and Newman’s argument. But Wright doesn’t agree with either solution.
“That conflates the expansion of the species as a whole with the sustainability of individual settlements,” he said. “Even if it is true for one species, it is not going to be this iron-clad law of xenosociology where if they are expanding, they are necessarily short-lived.” After all, he noted, life on Earth is robust, “and it expands really fast.”
In their new paper, Carroll-Nellenback, Wright and their collaborators Adam Frank of Rochester and Caleb Scharf of Columbia University sought to examine the paradox without making untestable assumptions. They modeled the spread of a “settlement front” across the galaxy, and found that its speed would be strongly affected by the motions of stars, which previous work — including Sagan and Newman’s — treated as static objects. The settlement front could cross the entire galaxy based just on the motions of stars, regardless of the power of propulsion systems. “There is lots of time for exponential growth basically leading to every system being settled,” Carroll-Nellenback said.
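The flavour of such a model can be captured in a few lines (a deliberately crude sketch with invented parameters – the paper’s actual simulations are far more detailed): settled stars seed any star that drifts within hop range, and stellar motion alone carries the settlement front across the box.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: stars drift through a periodic 2D box; a settled star can
# seed any unsettled star that comes within hop range. All numbers here
# are illustrative, not taken from the paper.
n_stars, box, hop, steps = 400, 100.0, 3.0, 300
pos = rng.uniform(0, box, size=(n_stars, 2))
vel = rng.normal(0, 0.5, size=(n_stars, 2))   # random stellar motions
settled = np.zeros(n_stars, dtype=bool)
settled[0] = True                              # one seed civilization

for _ in range(steps):
    pos = (pos + vel) % box                    # stars drift (periodic box)
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    reachable = (d < hop) & settled[:, None]   # hops from settled stars
    settled |= reachable.any(axis=0)

print(f"settled fraction after {steps} steps: {settled.mean():.2f}")
```

Even with a hop range too short for a static star field to percolate, the drift keeps bringing fresh unsettled stars within reach of settled ones, which is the effect the authors argue previous static treatments missed.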
But the fact that no interstellar visitors are here now — what Hart called “Fact A” — does not mean they do not exist, the authors say. While some civilizations might expand and become interstellar, not all of them last forever. On top of that, not every star is a choice destination, and not every planet is habitable. There’s also what Frank calls “the Aurora effect,” after Kim Stanley Robinson’s novel Aurora, in which settlers arrive at a habitable planet on which they nonetheless cannot survive.
When Carroll-Nellenback and his coauthors included these impediments to settlement in their model and ran many simulations with different star densities, seed civilizations, spacecraft velocities and other variations, they found a vast middle ground between a silent, empty galaxy and one teeming with life. It’s possible that the Milky Way is partially settled, or intermittently so; maybe explorers visited us in the past, but we don’t remember, and they died out. The solar system may well be amid other settled systems; it’s just been unvisited for millions of years.
Anders Sandberg, a futurist at the University of Oxford’s Future of Humanity Institute who has studied the Fermi paradox, said he thinks spacecraft would spread civilizations more effectively than stellar motions. “But the mixing of stars could be important,” he wrote in an email, “since it is likely to spread both life, through local panspermia” — the spread of life’s chemical precursors — “and intelligence, if it really is hard to travel long distances.”
Frank views his and his colleagues’ new paper as SETI-optimistic. He and Wright say that now we need to look harder for alien signals, which will be possible in the coming decades as more sophisticated telescopes open their eyes to the panoply of exoplanets and begin glimpsing their atmospheres.
“We are entering an era when we are going to have actual data relevant to life on other planets,” Frank said. “This couldn’t be more relevant than in the moment we live.”
Seth Shostak, an astronomer at the SETI Institute who has studied the Fermi paradox for decades, thinks it is likely to be explained by something more complex than distance and time — like perception.
Maybe we are not alone and have not been. “The click beetles in my backyard don’t notice that they’re surrounded by intelligent beings — namely my neighbors and me,” Shostak said, “but we’re here, nonetheless.”
The British theoretical physicist Stephen Hawking is perhaps best-known for his landmark work on black holes and, by extension, how they affect our understanding of the Universe. In the years before his death in 2018, he was still immersed in black hole theory, endeavouring to solve a puzzle that his own work had given rise to several decades earlier.
To put it succinctly, in the 1970s, Hawking discovered that black holes appear to be capable of destroying physical information – a characteristic very much at odds with contemporary quantum mechanics. Adapted from a 2016 paper that Hawking co-authored with the US theoretical physicist Andrew Strominger and the UK theoretical physicist Malcolm Perry, this animation offers a sophisticated-but-digestible – and frequently quite clever – visual presentation of Hawking’s final work, which proposes one potential solution to the ‘information paradox’.
A trio of researchers with McMaster, Concordia and Trent Universities has solved the mystery of why pairs of grapes ignite into fireballs when cooked together in a microwave oven. In their paper published in Proceedings of the National Academy of Sciences, Hamza Khattak, Pablo Bianucci and Aaron Slepkov claim that the fireball is not the result of heat from the outside of the grapes making its way in, but instead comes about due to hotspots that form in both grapes.
Back in 2011, impressive videos of grapes igniting in microwaves went viral on YouTube. All a person had to do was cut a grape in half, leaving the two halves connected by a bit of skin at the bottom, and heat them in a microwave oven—within seconds, a tiny fireball would appear between them. Making things even more exciting was that nobody could explain it. Since that time, many armchair scientists have presented possible explanations—one of the more popular was the suggestion that the grapes somehow form an antenna directing the microwaves across the skin bridge. In this new effort, the physicists in Canada ran multiple tests on the grapes and other similar objects to learn the true reason for the formation of the fireball.
The tests consisted mostly of using thermal imaging cameras to capture the action as the grapes were heated and running simulations. They also tested other similarly sized fruit and plastic balls filled with water.
The researchers found that the fireball forms through a simple process. As the microwaves enter the grapes, hot spots form in both pieces at the points where they are closest to one another, because of the electromagnetic coupling between the two halves. As the hot spots grow hotter, the surrounding electrolytes become ionised, resulting in a burst of plasma in the form of a small fireball.
The researchers note that the same effect could be produced using similarly sized fruit or water-filled balls. They also found that it is not necessary to maintain any sort of physical connection between the two pieces—all that is required is that they be no more than three millimeters apart.
Physical systems with discrete energy levels are ubiquitous in nature and form fundamental building blocks of quantum technology. Artificial atom-like and molecule-like systems have previously been demonstrated to regulate light for coherent and dynamic control of the frequency, amplitude and phase of photons. In a recent study, Mian Zhang and colleagues engineered a photonic molecule with two distinct energy levels, using coupled lithium niobate micro-ring resonators that could be controlled via external microwave excitation. The frequency and phase of light could be precisely controlled by programmed microwave signals, reproducing canonical two-level-system phenomena including Autler-Townes splitting, the Stark shift, Rabi oscillation and Ramsey interference. Through such coherent control, the scientists showed on-demand optical storage and retrieval by reconfiguring the photonic molecule into a bright-dark mode pair. The dynamic control of light in a programmable and scalable electro-optic system will open doors for applications in microwave-signal processing and quantum photonic gates in the frequency domain, and to exploring concepts in optical computing and topological physics.
The results are now published in Nature Photonics, where Zhang et al. overcame the existing performance trade-off to realize a programmable photonic two-level system that can be controlled dynamically via gigahertz microwave signals. To accomplish this, the scientists created a microwave-addressable photonic molecule using a pair of integrated lithium niobate micro-ring resonators (radius 80 μm) patterned close to each other. The combination of low optical loss and efficient co-integration of optical waveguides and microwave electrodes allowed the simultaneous realization of a large electrical bandwidth (> 30 GHz), strong modulation efficiency and a long photon lifetime (~2 ns).
A photonic analogue of a two-level system can typically facilitate the investigation of complex physical phenomena in materials, electronics and optics. Such systems convey important functions, including unique on-demand photon storage and retrieval, coherent optical frequency shift and optical quantum information processing at room temperature. For dynamic control of photonic two-level systems, electro-optic methods are ideally suited due to their fast response, programmability and possibility for large-scale integration.
For electro-optic control of a two-level system, the photon lifetime of each energy state must be longer than the time required to drive the system from one state to the other. Conventional integrated photonic platforms have so far not met the requirements of a simultaneously long photon lifetime and fast modulation. Electrically active photonic platforms (based on silicon, graphene and polymers) allow fast electro-optic modulation at gigahertz frequencies but suffer from short photon lifetimes. Purely electrical tuning nevertheless remains highly desirable, since narrowband microwave signals offer precise control with minimal noise and good scalability.
In their work, Zhang et al. showed that optical transmission of the photonic molecule measured using a telecom-wavelength laser, supported a pair of well-defined optical energy levels. The evanescent coupling of light from one resonator to another was enabled through a 500 nm gap between the micro-ring resonators to form the two well-resolved optical energy levels. The scientists explored the analogy between an atomic and photonic two-level system to demonstrate control of the photonic molecule.
In the experiments, light from the tunable telecom wavelength laser was launched into the lithium niobate waveguides and collected from them via a pair of lensed optical fibres. The scientists used an arbitrary waveform generator to operate microwave control signals before sending them to electrical amplifiers. The efficient overlap between microwaves and optical fields observed in the system enabled higher tuning/modulation efficiency than those previously observed with bulk electro-optic systems. Such coherent microwave-to-optical conversion can link electronic quantum processes and memories via low-loss optical telecommunication, for applications in future quantum information networks.
Zhang et al. next used a continuous-wave coherent microwave field to control a photonic two-level system. In this system, the number of photons that could populate each of the two levels was not limited to one. The splitting frequency of the system was precisely controlled up to several gigahertz by controlling the amplitude of the microwave signals. The effect was used to control the effective coupling strength between the energy levels of the photonic molecule. Coherent spectral dynamics in the photonic molecule were investigated for a variety of microwave strengths applied to the photonic two-level system. The scientists also described the controlled amplitude and phase of the system using Rabi oscillation and Ramsey interference, while using Bloch spheres/geometric representations of the photonic two-level energy system to represent the phenomena.
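For readers unfamiliar with the term, Rabi oscillation in a two-level system has a simple closed form. The sketch below (illustrative units, not the paper’s parameters, and ignoring loss) shows the population cycling completely between the two levels when the drive is on resonance.

```python
import numpy as np

# On resonance, a two-level system driven at Rabi frequency omega starts
# in level 1 and transfers population as P2(t) = sin^2(omega * t / 2).
omega = 2 * np.pi                    # Rabi frequency: one cycle per unit time
t = np.linspace(0.0, 2.0, 2001)

amp1 = np.cos(omega * t / 2)         # amplitude remaining in level 1
amp2 = -1j * np.sin(omega * t / 2)   # amplitude driven into level 2
p2 = np.abs(amp2) ** 2               # population of level 2

# Total population is conserved; level 2 is fully populated at t = 0.5,
# i.e. halfway through each Rabi cycle.
print(f"population of level 2 at t = 0.5: {p2[500]:.3f}")
```

In the experiment the two “levels” are the two optical supermodes of the coupled rings, and the microwave field plays the role of the drive; the controllable quantity is the microwave amplitude, which sets omega.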
The work allowed the controlled writing and reading of light into and out of a resonator from an external waveguide, achieving on-demand photon storage and retrieval – a critical task for optical signal processing. To facilitate this experimentally, Zhang et al. applied a large DC bias voltage (15 V) to reconfigure the double-ring system into a pair of bright and dark modes. In the setup, the mode localized in the first ring had access to the optical waveguides and became optically bright (the bright mode). The other mode was localized in the second ring, which was geometrically decoupled from the input optical waveguide, and so became optically dark. In this way, the scientists demonstrated coherent and dynamic control of a two-level photonic molecule with microwave fields, along with on-demand photon storage and retrieval. The work opens a path to a new form of control over photons, and the results are a first step with potentially immediate applications in signal processing and quantum photonics.
The design parameters of the coupled resonators provide space to investigate the dynamic control of two-level and multi-level photonic systems, leading to a new class of photonic technologies. The scientists envision that these findings will lead to advances in topological photonics, advanced photonic computation concepts and on-chip frequency-based optical quantum systems in the near future.
Albert Einstein’s desk can still be found on the second floor of Princeton’s physics department. Positioned in front of a floor-to-ceiling blackboard covered with equations, the desk seems to embody the spirit of the frizzy-haired genius as he asks the department’s current occupants, “So, have you solved it yet?”
Einstein never achieved his goal of a unified theory to explain the natural world in a single, coherent framework. Over the last century, researchers have pieced together links between three of the four known physical forces in a “standard model,” but the fourth force, gravity, has always stood alone.
No longer. Thanks to insights made by Princeton faculty members and others who trained here, gravity is being brought in from the cold—although in a manner not remotely close to how Einstein had imagined it.
Though not yet a “theory of everything,” this framework, laid down over 20 years ago and still being filled in, reveals surprising ways in which Einstein’s theory of gravity relates to other areas of physics, giving researchers new tools with which to tackle elusive questions.
The key insight is that gravity, the force that brings baseballs back to Earth and governs the growth of black holes, is mathematically relatable to the peculiar antics of the subatomic particles that make up all the matter around us.
This revelation allows scientists to use one branch of physics to understand other seemingly unrelated areas of physics. So far, this concept has been applied to topics ranging from why black holes run a temperature to how a butterfly’s beating wings can cause a storm on the other side of the world.
This relatability between gravity and subatomic particles provides a sort of Rosetta stone for physics. Ask a question about gravity, and you’ll get an explanation couched in the terms of subatomic particles. And vice versa.
“This has turned out to be an incredibly rich area,” said Igor Klebanov, Princeton’s Eugene Higgins Professor of Physics, who generated some of the initial inklings in this field in the 1990s. “It lies at the intersection of many fields of physics.”
From tiny bits of string
The seeds of this correspondence were sprinkled in the 1970s, when researchers were exploring tiny subatomic particles called quarks. These entities nest like Russian dolls inside protons, which in turn occupy the atoms that make up all matter. At the time, physicists found it odd that no matter how hard you smash two protons together, you cannot release the quarks—they stay confined inside the protons.
One person working on quark confinement was Alexander Polyakov, Princeton’s Joseph Henry Professor of Physics. It turns out that quarks are “glued together” by other particles, called gluons. For a while, researchers thought gluons could assemble into strings that tie quarks to each other. Polyakov glimpsed a link between the theory of particles and the theory of strings, but the work was, in Polyakov’s words, “hand-wavy” and he didn’t have precise examples.
Meanwhile, the idea that fundamental particles are actually tiny bits of vibrating string was taking off, and by the mid-1980s, “string theory” had lassoed the imaginations of many leading physicists. The idea is simple: just as a vibrating violin string gives rise to different notes, each string’s vibration foretells a particle’s mass and behavior. The mathematical beauty was irresistible and led to a swell of enthusiasm for string theory as a way to explain not only particles but the universe itself.
One of Polyakov’s colleagues was Klebanov, who in 1996 was an associate professor at Princeton, having earned his Ph.D. at Princeton a decade earlier. That year, Klebanov, with graduate student Steven Gubser and postdoctoral research associate Amanda Peet, used string theory to make calculations about gluons, and then compared their findings to a string-theory approach to understanding a black hole. They were surprised to find that both approaches yielded a very similar answer. A year later, Klebanov studied absorption rates by black holes and found that this time they agreed exactly.
That work was limited to the example of gluons and black holes. It took an insight by Juan Maldacena in 1997 to pull the pieces into a more general relationship. At that time, Maldacena, who had earned his Ph.D. at Princeton one year earlier, was an assistant professor at Harvard. He detected a correspondence between a special form of gravity and the theory that describes particles. Seeing the importance of Maldacena’s conjecture, a Princeton team consisting of Gubser, Klebanov and Polyakov followed up with a related paper formulating the idea in more precise terms.
Another physicist who was immediately taken with the idea was Edward Witten of the Institute for Advanced Study (IAS), an independent research center located about a mile from the University campus. He wrote a paper that further formulated the idea, and the combination of the three papers in late 1997 and early 1998 opened the floodgates.
“It was a fundamentally new kind of connection,” said Witten, a leader in the field of string theory who had earned his Ph.D. at Princeton in 1976 and is a visiting lecturer with the rank of professor in physics at Princeton. “Twenty years later, we haven’t fully come to grips with it.”
Two sides of the same coin
This relationship means that gravity and subatomic particle interactions are like two sides of the same coin. On one side is an extended version of gravity derived from Einstein’s 1915 theory of general relativity. On the other side is the theory that roughly describes the behavior of subatomic particles and their interactions.
The latter theory includes the catalogue of particles and forces in the “standard model” (see sidebar), a framework to explain matter and its interactions that has survived rigorous testing in numerous experiments, including at the Large Hadron Collider.
In the standard model, quantum behaviors are baked in. Our world, when we get down to the level of particles, is a quantum world.
Notably absent from the standard model is gravity. Yet quantum behavior is at the basis of the other three forces, so why should gravity be immune?
The new framework brings gravity into the discussion. It is not exactly the gravity we know, but a slightly warped version that includes an extra dimension. The universe we know has four dimensions, the three that pinpoint an object in space—the height, width and depth of Einstein’s desk, for example—plus the fourth dimension of time. The gravitational description adds a fifth dimension that causes spacetime to curve into a universe that includes copies of familiar four-dimensional flat space rescaled according to where they are found in the fifth dimension. This strange, curved spacetime is called anti-de Sitter (AdS) space after Einstein’s collaborator, Dutch astronomer Willem de Sitter.
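The “rescaled copies of flat space” picture can be made concrete with the textbook form of the five-dimensional anti-de Sitter metric in so-called Poincaré coordinates (a standard formula, not one derived in this article):

```latex
% Five-dimensional AdS metric in Poincare coordinates:
% z is the fifth dimension, L is the AdS curvature radius,
% and \eta_{\mu\nu} is the flat four-dimensional Minkowski metric.
ds^2 \;=\; \frac{L^2}{z^2}\left( dz^2 \;+\; \eta_{\mu\nu}\, dx^{\mu}\, dx^{\nu} \right)
```

At each fixed value of z the geometry is ordinary flat four-dimensional spacetime, uniformly rescaled by the factor L²/z²; the boundary where the particle theory lives sits at z → 0.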
The breakthrough in the late 1990s was that mathematical calculations of the edge, or boundary, of this anti-de Sitter space can be applied to problems involving quantum behaviors of subatomic particles described by a mathematical relationship called conformal field theory (CFT). This relationship provides the link, which Polyakov had glimpsed earlier, between the theory of particles in four space-time dimensions and string theory in five dimensions. The relationship now goes by several names that relate gravity to particles, but most researchers call it the AdS/CFT (pronounced A-D-S-C-F-T) correspondence.
Tackling the big questions
This correspondence, it turns out, has many practical uses. Take black holes, for example. The late physicist Stephen Hawking startled the physics community by discovering that black holes have a temperature, which arises because entangled pairs of particles form near the horizon: one particle falls in while its partner escapes as radiation.
Using AdS/CFT, Tadashi Takayanagi and Shinsei Ryu, then at the University of California, Santa Barbara, discovered a new way to study entanglement in terms of geometry, extending Hawking’s insights in a fashion that experts consider quite remarkable.
In another example, researchers are using AdS/CFT to get a handle on chaos theory, which says that a random and insignificant event such as the flapping of a butterfly’s wings could result in massive changes to a large-scale system such as a faraway hurricane. Chaos is difficult to calculate directly, but black holes—which are some of the most chaotic quantum systems possible—could help. Work by Stephen Shenker and Douglas Stanford at Stanford University, along with Maldacena, demonstrates how, through AdS/CFT, black holes can model quantum chaos.
One open question Maldacena hopes the AdS/CFT correspondence will answer is the question of what it is like inside a black hole, where an infinitely dense region called a singularity resides. So far, the relationship gives us a picture of the black hole as seen from the outside, said Maldacena, who is now the Carl P. Feinberg Professor at IAS.
“We hope to understand the singularity inside the black hole,” Maldacena said. “Understanding this would probably lead to interesting lessons for the Big Bang.”
The relationship between gravity and strings has also shed new light on quark confinement, initially through work by Polyakov and Witten, and later by Klebanov and Matt Strassler, who was then at IAS.
Those are just a few examples of how the relationship can be used. “It is a tremendously successful idea,” said Gubser, who today is a professor of physics at Princeton. “It compels one’s attention. It ropes you in, it ropes in other fields, and it gives you a vantage point on theoretical physics that is very compelling.”
The relationship may even unlock the quantum nature of gravity. “It is among our best clues to understand gravity from a quantum perspective,” said Witten. “Since we don’t know what is still missing, I cannot tell you how big a piece of the picture it ultimately will be.”
Still, the AdS/CFT correspondence, while powerful, relies on a simplified version of spacetime that is not exactly like the real universe. Researchers are working to find ways to make the theory more broadly applicable to the everyday world, including Gubser’s research on modeling the collisions of heavy ions, as well as high-temperature superconductors.
Also on the to-do list is developing a proof of this correspondence that draws on underlying physical principles. It is unlikely that Einstein would be satisfied without a proof, said Herman Verlinde, Princeton’s Class of 1909 Professor of Physics, the chair of the Department of Physics and an expert in string theory, who shares office space with Einstein’s desk.
“Sometimes I imagine he is still sitting there,” Verlinde said, “and I wonder what he would think of our progress.”
Scientists in Germany say they have hit a new superconductivity milestone. According to their paper, they achieved resistance-free electrical current at the highest temperature yet: just 250 Kelvin, or -23 degrees Celsius (-9.4 degrees Fahrenheit).
Although the team’s superconducting material has yet to be verified, the claim has merit – the work was led by Mikhail Eremets, a physicist at the Max Planck Institute for Chemistry, who set the previous high temperature record for superconductivity in 2014, at 203 Kelvin (-70 degrees Celsius).
Superconductivity, first discovered in 1911, is a curious phenomenon. Usually, the flow of an electrical current encounters some degree of resistance – a bit like how air resistance pushes back on a moving object, for example.
The higher the conductivity of a material, the lower its electrical resistance, and the more freely current can flow.
But at low temperatures in some materials, something strange happens. Resistance lowers to zero, and the current flows unimpeded. When accompanied by something called the Meissner effect – the expulsion of the material’s magnetic fields as it transitions below that critical temperature – this is called superconductivity.
So-called room-temperature superconductivity, above 0 degrees Celsius, is something of a white whale for scientists. If it could be achieved, it would revolutionise electrical efficiency, vastly improving power grids, high-speed data transfer, and electrical motors, to name a few potential applications.
Eremets and his team achieved the previous high-temperature superconductivity record using hydrogen sulfide – yep, the compound that makes rotten eggs and human flatulence stinky – under 150 gigapascals of pressure (Earth’s core is between 330 and 360 gigapascals).
Scientists who rushed to understand hydrogen sulfide superconductivity believe this result is possible because hydrogen sulfide is such a light material that it can vibrate at very high frequencies, which correspond to higher temperatures – but the pressure is needed to keep it from vibrating itself apart.
This new research used a different material, called lanthanum hydride, under about 170 gigapascals of pressure. Earlier this year, the team reported they had achieved superconductivity using this material at 215 Kelvin (-58.15 °C, -72.67 °F) – and now, just a few months later, they have improved on that result.
“This leap, by 50 Kelvin, from the previous critical temperature record of 203 Kelvin,” the researchers wrote in their paper, “indicates the real possibility of achieving room-temperature superconductivity (that is at 273 Kelvin) in the near future at high pressures, and the perspective of conventional superconductivity at ambient pressure.”
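The Kelvin, Celsius and Fahrenheit figures quoted throughout this story are straightforward to check; a minimal sketch (the function names here are illustrative, not from the paper):

```python
def kelvin_to_celsius(k):
    """Convert a temperature from kelvin to degrees Celsius."""
    return k - 273.15

def kelvin_to_fahrenheit(k):
    """Convert a temperature from kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# The temperatures discussed in the article: the new 250 K record,
# this year's earlier 215 K result, and the previous 203 K record.
for k in (250, 215, 203):
    print(f"{k} K = {kelvin_to_celsius(k):.2f} °C = {kelvin_to_fahrenheit(k):.2f} °F")
```

Note that the researchers’ benchmark of 273 Kelvin for “room temperature” corresponds to 0 °C – the freezing point of water, consistent with the definition used earlier in the article.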
The result has yet to be verified by the scientific community, and the paper is awaiting peer review.
There are three tests, reports MIT Technology Review, that are considered the gold standard for superconductivity, and the team has only achieved two: the drop in resistance below a critical temperature threshold, and replacing elements in the material with heavier isotopes to observe a corresponding drop in superconductivity temperature.
The third is the Meissner effect, which is the name given to one of the signatures of superconductivity. As the material passes below the critical temperature and transitions into superconductivity, it ejects its magnetic field.
The team has yet to observe this phenomenon because their sample is so small – well below the detection capabilities of their magnetometer. However, the transition into superconductivity has an effect on the external magnetic field, too. It’s not a direct detection, but the team was able to observe this effect.
It’s not the Meissner effect, but it does look promising. And you can bet that physicists with the ability to do so will be falling over each other to verify and attempt to replicate the team’s result.
The concept of time travel has always captured the imagination of physicists and laypersons alike. But is it really possible? Of course it is. We’re doing it right now, aren’t we? We are all traveling into the future one second at a time.
But that was not what you were thinking. Can we travel much further into the future? Absolutely. If we could travel close to the speed of light, or in the proximity of a black hole, time would slow down enabling us to travel arbitrarily far into the future. The really interesting question is whether we can travel back into the past.
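Just how much time slows down for a fast-moving traveler follows from the Lorentz factor of special relativity; here is a quick illustrative calculation (standard physics, not drawn from the study discussed below):

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for a traveler moving at speed v = beta * c,
    where 0 <= beta < 1. Clocks on board run slower by this factor."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

# At 99% of the speed of light, roughly 7 years pass for a stationary
# observer for every year experienced by the traveler.
print(lorentz_gamma(0.99))
```

Since the factor grows without bound as the speed approaches c, a traveler could in principle leap arbitrarily far into the future – which is exactly the sense in which forward time travel is uncontroversial.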
I am a physics professor at the University of Massachusetts Dartmouth, and I first heard about the notion of time travel when I was 7, from a 1980 episode of Carl Sagan’s classic TV series, Cosmos. I decided right then that someday I was going to pursue a deep study of the theory that underlies such creative and remarkable ideas: Einstein’s relativity. Twenty years later, I emerged with a Ph.D. in the field and have been an active researcher in the theory ever since.
Now, one of my doctoral students has just published a paper in the journal Classical and Quantum Gravity that describes how to build a time machine using a very simple construction.
Closed Time-Like Curves
Einstein’s general theory of relativity allows for the possibility of warping time to such a high degree that it actually folds upon itself, resulting in a time loop. Imagine you’re traveling along this loop; at some point, you’d end up at a moment in the past and begin experiencing the same moments all over again — a bit like deja vu, except you wouldn’t realize it. Such constructs are often referred to as “closed time-like curves” or CTCs in the research literature, and popularly referred to as “time machines.” Time machines are a byproduct of effective faster-than-light travel schemes, and understanding them can improve our understanding of how the universe works.
Over the past few decades, well-known physicists like Kip Thorne and Stephen Hawking produced seminal work on models related to time machines.
The general conclusion that has emerged from previous research, including Thorne’s and Hawking’s, is that nature forbids time loops. This is perhaps best explained by Hawking’s “Chronology Protection Conjecture,” which essentially says that nature doesn’t allow for changes to its past history, thus sparing us the paradoxes that would emerge if time travel were possible.
Perhaps the most well-known amongst these paradoxes that emerge due to time travel into the past is the so-called “grandfather paradox” in which a traveler goes back into the past and murders his own grandfather. This alters the course of history in a way that a contradiction emerges: The traveler was never born and therefore cannot exist. There have been many movie and novel plots based on the paradoxes that result from time travel — perhaps some of the most popular ones being the Back to the Future movies and Groundhog Day.
Depending on the details, different physical phenomena may intervene to prevent closed time-like curves from developing in physical systems. The most common obstacle is the requirement for a particular type of “exotic” matter that must be present in order for a time loop to exist. Loosely speaking, exotic matter is matter that has negative mass. The problem is that negative mass is not known to exist in nature.
Caroline Mallary, a doctoral student at the University of Massachusetts Dartmouth, has published a new model for a time machine in the journal Classical and Quantum Gravity. This new model does not require any negative-mass exotic material and offers a very simple design.
Mallary’s model consists of two super-long cars — built of non-exotic material with positive mass — parked in parallel. One car moves forward rapidly while the other stays parked. Mallary was able to show that in such a setup, a time loop can be found in the space between the cars.
So Can You Build This in Your Backyard?
If you suspect there is a catch, you are correct. Mallary’s model requires that the center of each car contain a singularity: an object of infinite density, temperature and pressure. Moreover, unlike the singularities in the interiors of black holes, which are totally inaccessible from the outside, the singularities in Mallary’s model are completely bare and observable, and therefore have true physical effects.
Physicists don’t expect such peculiar objects to exist in nature, either. So, unfortunately, a time machine is not going to be available anytime soon. However, this work shows that physicists may have to refine their ideas about why closed time-like curves are forbidden.
An updated 10-year analysis of the RTOG 0214 trial showed that the use of prophylactic cranial irradiation (PCI) improved disease-free survival (DFS) and reduced brain metastases, but failed to improve overall survival (OS) in patients with locally advanced non-small-cell lung carcinoma (LA-NSCLC).
The results of the 10-year follow-up of this phase III study, conducted from September 2001 to August 2007 in 356 patients with stage IIIA/B LA-NSCLC (median age, 60), showed that PCI did not improve OS rate vs observation alone (17.6 percent vs 13.3 percent; hazard ratio [HR], 1.23; 95 percent confidence interval [CI], 0.95 to 1.59; p=0.124). [Sun A, et al, WCLC 2018 abstract OA01.01]
Patients who underwent PCI, however, experienced better DFS (12.6 percent vs 7.5 percent; HR, 1.32; 95 percent CI, 1.03 to 1.69; p=0.0298) and less central nervous system (CNS) metastasis (16.7 percent vs 28.3 percent; HR, 2.33; 95 percent CI, 1.31 to 4.15; p=0.0298) vs those who were just observed.
“There was only 45 percent power to detect the hypothesized difference [HR=1.25], and if we were able to accrue the targeted number, there may have been a benefit in OS,” said study investigator Dr Alex Sun from the University of Toronto, Toronto, Ontario, Canada.
“As compared with previous trials, PCI employing delivery of 30 Gy in 15 fractions as used in the RTOG 0214 study might also be too low a dose to exert its desirable effects,” commented discussant Dr John Armstrong of St Luke’s Radiation Oncology Network, Dublin, Ireland. [Radiat Oncol 2016;11:67]
“However, a subgroup analysis among 225 patients who did not have surgery as primary treatment showed that patients who underwent PCI had better OS [p=0.026] and DFS [p=0.014] and a lower incidence of brain metastases [p=0.003],” said Sun.
Most patients in the study experienced grade 1 (14.6 percent) or grade 2 (35 percent) acute toxicities or grade 1 (12.7 percent) or grade 2 (8.7 percent) late toxicities, with neurocognitive-associated toxicities being the most commonly reported.
“The most probable reason why we do not do much PCI is due to concerns about its reported toxicities such as somnolence, cognitive disturbances, neuropathy, memory impairment and dizziness. It is difficult to convince patients to undergo a procedure which will unlikely alter their survival,” said Armstrong. [J Clin Oncol 2018;36:2366-2377]
A previous study in 113 cancer patients with brain metastases showed that hippocampus-sparing intensity-modulated radiation therapy (IMRT) is associated with a significantly lower decline in Hopkins Verbal Learning Test-Revised Delayed Recall scores vs historical controls (p<0.001). [J Clin Oncol 2014;32:3810-3816]
In the future, PCI in NSCLC is expected to involve identifying ultra-high-risk individuals and conducting research that can be used to profile patients likely to develop brain metastases. For these patients, aggressive surveillance with volumetric MRI and early intervention with stereotactic radiosurgery should be performed.