Scary Physics of ‘Curve 9’ Leads to Terrifying Olympic Luge Crash


On Tuesday, Americans woke up to the news that Emily Sweeney of Team USA crashed during her luge run at the Olympics. Sweeney refused a stretcher and was able to walk away from the frightening accident — a tough-as-nails moment after careening down what’s essentially a roller coaster made of ice. Sweeney crashed after losing control during her final heat at Curve 9 — already notorious among Olympic sliders before the games even began.


The insane physics of the Winter Olympics’ “fastest sport on ice” means that coming out of a curve during the luge feels like, in the words of 2014 Olympian Chris Mazdzer, “launching into space on a rocket.” Sliders can reach speeds of 90 miles per hour after launching onto the ice with a 50-pound sled, propelling themselves forward with spike-equipped gloves, and steering with their calves.

All that speed means that luge races are timed to one-thousandth of a second, and any time lost on a curve can ruin an athlete’s chances of placement. Defending gold medalist Felix Loch of Germany bumped into Curve 9 on Sunday, losing hundredths of a second and his chance at a 2018 medal.

Luge races take place on tracks 1,000 to 1,500 meters long, with an elevation drop of 110 to 130 meters and an average slope of nine to 11 percent. Curve 9 is just one of 16 curves on Pyeongchang’s Alpensia Sliding Center track. As the athletes shoot down the U-shaped groove of the course they have to maneuver through left curves, right curves, hairpin curves, S-shaped curves, and a three-turn combination called a labyrinth.

But it was Curve 9 that everyone had been talking about before the Olympics kicked off. Before her own run, Sweeney described the curve as “driving on a slanted road, but having your car getting pulled in a direction away from the way you’re steering.” It’s the angle of the curve that’s so rough — the turn sends the lugers to the right, but the track is actually designed to go 45 degrees to the left.

And when lugers hit the angle of the curve, their force against the ice can be as high as eight times that of gravity. The aerodynamic position of the body, combined with the tiny amount of contact the steels (the sled’s blades) make with the ice, keeps drag to a minimum, so sliders carry enormous speed into each turn, and the faster they go, the harder that centrifugal effect presses them into the ice. All of that means when a curve shoots you to the right, but the track goes to the left, retaining control is going to be incredibly difficult.
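To get a rough feel for that number, the g-load in a turn is just the centripetal acceleration (speed squared divided by the curve radius) compared with ordinary gravity. Here is a back-of-the-envelope sketch in Python; the speed and curve radius are illustrative guesses, not official track data:

```python
# Rough estimate of the g-load on a luger in a tight curve.
# The speed and radius are illustrative guesses, not official track data.
speed_mph = 80.0                      # slider speed entering the curve (assumed)
speed = speed_mph * 0.44704           # convert to m/s (~35.8 m/s)
radius = 17.0                         # assumed effective curve radius in meters
g = 9.81                              # standard gravity, m/s^2

centripetal_accel = speed ** 2 / radius   # a = v^2 / r
g_load = centripetal_accel / g

print(f"Centripetal acceleration: {centripetal_accel:.0f} m/s^2")
print(f"Approximate g-load: {g_load:.1f} g")   # ~7.7 g with these numbers
```

With those made-up numbers the slider is pulling close to 8 g, right in line with what the athletes describe; a tighter curve or a faster entry pushes the figure even higher.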

It’s ultimately what got Sweeney, who began sliding at severe and alternating angles after losing control coming out of Curve 9. She was thrown from her sled into a tumble — a scary finale for a first-time Olympic run.


Blade Dynamics Explains How Olympic Figure Skaters Can Stand on Each Other


On Sunday night, Team Canada’s figure skating power pair, Tessa Virtue and Scott Moir, skated their way to an Olympic gold with a rapturous performance to Moulin Rouge’s “El Tango De Roxanne.” In their heart-stopping routine, they dazzled audiences with an insane lift in which Virtue literally steps onto Moir’s quads with her blades, then just stands there, arms outstretched, like some glorious figurehead of a Spandex-clad ship. It’s just mind-boggling.


If you, like me, spent at least three hours watching the clip of their routine on repeat today, a serious question may have crossed your mind: How are Moir’s thighs not gushing blood onto the ice?

It’s a legitimate question. The blades on figure skates are sharp enough to cut the skin on a person’s face and can even slice deeper, resulting in some serious injuries. But a closer look at the blade — and its positioning on the lifter’s leg — shows how it can be done safely.

Here’s a version of the lift from a previous performance of the same routine at the Canadian National Skating Championships in Vancouver this January, in which the pair got a perfect score. Even with proper technique, this can’t be comfortable, but look at them beam! How do they do it?

Unreal!

Part of the reason this can be done safely is because the blade of a figure skate is not like the blade of a knife or sword. If you look at a figure skate’s blade up close, you’ll notice that its cross-section is concave like the letter C, with its two arms making up the outer and inner edges that actually touch the ice. In the middle is the groove. Regular knife or sword blades, by contrast, have a single thin edge.

Both edges of a concave figure skate blade are very sharp, but having two edges helps distribute the weight of the wearer over a greater surface area, reducing the pressure at any one point on the leg. If skate blades were like knives, all of the wearer’s weight would be concentrated on a single, localized edge.

When it comes to distributing force, two edges are better than one.

When a force (in this case, Virtue’s weight) gets exerted perpendicularly on a surface (the blade’s twin edges, pressing into Moir’s thigh), the pressure is calculated by dividing the force by the surface area on which it rests. With the greater surface area supplied by two edges, the pressure is smaller than it would be if there were only a single edge. Whatever that pressure is in this situation, it’s not enough to cut through Moir’s uniform and flesh.
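To put rough numbers on that, here is a quick sketch. The mass, contact length, and edge width below are illustrative assumptions rather than measured values; the point is simply that doubling the contact area halves the pressure:

```python
# Back-of-the-envelope pressure estimate for the lift.
# All figures are illustrative assumptions, not measured values.
mass = 48.0          # skater's mass in kg (assumed)
g = 9.81             # m/s^2
force = mass * g     # her weight, ~471 N

contact_length = 0.06    # meters of blade in contact with the thigh (assumed)
edge_width = 0.001       # effective width of one edge, ~1 mm (assumed)

area_one_edge = contact_length * edge_width
area_two_edges = 2 * area_one_edge

pressure_one = force / area_one_edge      # pressure if a single edge carried it all
pressure_two = force / area_two_edges     # pressure spread across both edges

print(f"One edge:  {pressure_one/1000:.0f} kPa")
print(f"Two edges: {pressure_two/1000:.0f} kPa")   # exactly half the single-edge value
```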

That’s not the only factor skaters have to consider to do this safely. Another reason Moir is not bleeding out beneath Virtue’s blades is because they’re not moving on his thighs. Once she’s in place, she stays perfectly still.

Doing so prevents the edges of the blades from sawing into him, the way you might use a knife to slice a tomato. Chefs and physicists alike know that a saw-like slicing action cuts more efficiently than a straight chop, and in 2012 researchers at Harvard University discovered why.

Pressing a wire against a gel and looking closely at the point of contact, they saw that cuts didn’t appear in the gel until the wire pushed down far enough to form the beginnings of a crack. But as the BBC put it, “the key difference between ‘chopping’ and slicing is that, in the former case the wire mostly just compresses the gel, whereas in the latter case the sideways movement of the wire also stretches it.” That stretching weakened the gel further, making slicing easier.

In the paper, they explained that the extra stretching is caused by friction between the wire and the surface. Applying this principle to Moir and Virtue, any friction between her blade edges and his leg would initiate a cut faster than a motionless blade simply pressing downward. Again, their impeccable technique saves the day.


Of course, just because physics can determine a safe way to do this move doesn’t mean most people should try it. Even serious skaters rarely do it. The Goose lift that has become virtually synonymous with Virtue and Moir has rarely been replicated by any other pair, so for the love of Roxanne, leave this power move to these two perfect, prolific professionals.


For The First Time, a Portable Atomic Clock Has Been Used to Measure Gravity


Atomic clocks are capable of the most precise physical measurements humanity can make, but because they’re so complex, they’ve been restricted to laboratory use – until now.

For the first time, scientists have developed a portable version, and used it to take measurements of gravity outside a laboratory setting.

The technology involved in atomic clocks is breathtaking. They track the extremely regular oscillation of atoms trapped by lasers to keep the most accurate time possible, with a precision that reaches the 18th decimal place.

The most accurate atomic clock ever built, which uses strontium atoms contained in a lattice of lasers – what is known as an optical lattice atomic clock – won’t lose or gain a second for 15 billion years. That’s longer than the current age of the Universe.

The strontium atoms are cooled to a temperature just above absolute zero and trapped by the interference pattern of two laser beams. A laser then excites the atoms, causing them to oscillate at an extremely regular rate.

The new portable atomic clock, also a strontium optical lattice clock, developed by researchers at the Physikalisch-Technische Bundesanstalt in Germany, is not quite as accurate as the 2015 record-breaker. It has an uncertainty of 7.4 × 10⁻¹⁷.

But it’s accurate enough to measure gravitational redshift, as the international team of researchers has just discovered.

We know that gravity affects matter. We know that it affects light. And, yes, it also has an effect on time – where gravity is stronger, time moves slower.

You wouldn’t be able to detect this with a regular timepiece on Earth, but atomic clocks are so precise that they can be used to measure this effect.

This field is called relativistic geodesy, because, surprise, it was predicted by Einstein’s theory of general relativity.

Gravitational redshift has also been measured by atomic clocks in a laboratory setting before. Measuring it with the portable atomic clock doesn’t tell us anything new about gravitational redshift – but it does tell us that the portable atomic clock is worth pursuing.

The atomic clock inside its trailer.

The team drove the clock in a temperature-stabilised and vibration-dampened trailer to the French Modane Underground Laboratory, and compared the measurements they took with measurements taken at the Istituto Nazionale di Ricerca Metrologica in Torino, 90 kilometres away and at a height difference of 1,000 metres (3,280 feet).
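The size of the effect they were chasing is easy to ballpark with the standard weak-field formula, in which the fractional frequency shift equals g times the height difference divided by the speed of light squared. This is only an order-of-magnitude sketch, not the team's actual analysis:

```python
# Estimate the fractional frequency shift between two clocks separated in height.
# Uses the weak-field approximation df/f = g * dh / c^2; purely illustrative.
g = 9.81            # local gravitational acceleration, m/s^2
height_diff = 1000  # height difference between the two labs, metres
c = 299_792_458     # speed of light, m/s

fractional_shift = g * height_diff / c**2
print(f"Fractional frequency shift: {fractional_shift:.2e}")   # ~1.1e-13
```

That comes out to roughly one part in 10¹³, comfortably larger than the portable clock's 7.4 × 10⁻¹⁷ uncertainty, which is why an optical clock can resolve a kilometre of elevation at all.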

An optical fibre link and frequency combs allowed the two clocks to be connected and their readings compared accurately.

Meanwhile, measurements were also taken using a cryogenic caesium fountain clock and an ytterbium optical lattice clock. And the researchers then drove the portable clock to Torino to check it against measurements at that location.

The measurements were consistent, but the clock does still need a little bit of work, wrote Andrew Ludlow of the National Institute of Standards and Technology, who did not participate in the research.

“As would be expected for this type of pioneering effort, the measurement campaign was not perfect,” he wrote in a related editorial for Nature Physics.

“There were periods of time when the portable optical clock would not function, and the accuracy of the measurements were limited below the capability of optical clocks.

“And while the relativistic geodetic measurement agreed nicely with conventional geodetic measurements, its accuracy was two orders of magnitude below the conventional techniques.”

Nevertheless, the experiment did prove the principle, representing a significant milestone towards portable atomic clocks.

In the future, these could be used in much more flexible ways than the current laboratory-bound atomic clocks.

For instance, putting an optical lattice clock in space would open up new tests for general relativity, comparison with terrestrial atomic clocks, geophysics, space-based interferometry and, yes, more relativistic geodesy tests from low-Earth orbit altitudes.

It could also help monitor sea-level changes resulting from climate change, and help establish a unified world height reference system, the researchers noted.

“Optical clocks are deemed to be the next generation atomic clocks – operating not only in laboratories but also as mobile precision instruments,” said Christian Lisdat of the Physikalisch-Technische Bundesanstalt.

“This cooperation proves again how disciplines such as physics or metrology, geodesy and climate impact research can mutually benefit each other.”

Scientists Just Made ‘Superionic Ice’ That’s Solid And Liquid at The Same Time


“A really strange state of matter.”

Scientists think they’ve finally discovered a totally new type of water ice called superionic water – water that is simultaneously a solid and a liquid – which could teach us much more about this most versatile of substances and lead to the development of new materials.

The idea of superionic water has actually been around for several decades – it’s believed to exist inside the mantles of planets like Uranus and Neptune – but until now no one had managed to prove its existence in an experiment.

Step forward the team of researchers behind the new study, who were able to produce superionic water from a high-pressure type of ice and a series of powerful laser pulses.

That combination provided the kinds of temperatures and pressures we don’t get naturally here on Earth, giving us our first real glimpse of this mysterious water.

“These are very challenging experiments, so it was really exciting to see that we could learn so much from the data,” says one of the team, physicist Marius Millot from the Lawrence Livermore National Laboratory (LLNL) in California.

“Especially since we spent about two years making the measurements and two more years developing the methods to analyse the data.”

Water molecules are made from two hydrogen atoms connected to one oxygen atom in a V-shape. The relatively weak bonds between the molecules come to dominate as water cools, locking the molecules into an open lattice that pushes them apart when water freezes.

 In superionic water, intense heat breaks the bonds between a water molecule’s atoms, leaving a solid crystal structure of oxygen atoms, and a flow of hydrogen nuclei or ions in between them – creating both a solid and a liquid at the same time.

“That’s a really strange state of matter,” Millot told Kenneth Chang at The New York Times.

To begin with, water was compressed between two layers of diamond to a pressure more than a million times that of Earth’s atmosphere, creating a special kind of ice called ice VII, which remains solid at room temperature.

At a separate laboratory, laser shock waves lasting 10-20 billionths of a second were then sent through the ice, resulting in conditions extreme enough to generate superionic water.

The initial pre-compression of the ice enabled researchers to push the ice to higher temperatures before everything vaporised.

Laser-driven shock compressions completed the process.

By observing the optical appearance of the ice, which looked opaque rather than shiny, the scientists could tell that ions rather than electrons were moving around in the material.

Now that we know superionic ice actually exists, it could help explain the rather off-centre magnetic fields of Uranus and Neptune, a discrepancy that scientists have put down to shells of superionic ice inside their mantles.

It’s also another valuable example of how molecules act under extremes of temperature and pressure, and further down the line, we could even engineer new materials with specific properties by being able to manipulate how the molecules react.

“Driven by the increase in computing resources available, I feel we have reached a turning point,” says one of the researchers, physicist Sebastien Hamel from LLNL.

“We are now at a stage where a large enough number of these simulations can be run to map out large parts of the phase diagram of materials under extreme conditions in sufficient detail to effectively support experimental efforts.”

The research has been published in Nature Physics.

How Do We Know That the Earth is Round?


In Brief

We know that Earth is round, but how? Because of the pictures from space? Well, not quite. We knew that Earth was round long before we ever went to space.

Recently, a lot of celebrities (and other random people) have been asserting that the Earth is flat. Maybe it’s all some attention-grabbing scheme; if so, it is a scheme that (unfortunately) worked pretty well. Or maybe they genuinely believe this.

Case in point, in a series of tweets posted two days ago, B.o.B said that the planet is not round. He went on and on offering “evidence” that Earth is flat.

His ramblings even caught the attention of famed astrophysicist Neil deGrasse Tyson, who tried to correct B.o.B using some facts and hard evidence.

Since this is currently a popular topic, let’s take a moment to really break the science down (just bookmark this in the event that you encounter a “Flat Earther” during your online wanderings…).

There are several things that people often bring up when discussing obvious facts, such as the sum of two and two equaling four, the Earth orbiting the Sun once a year (not everyone understands this, actually), or perhaps the shape of the Earth. However, commonly held scientific facts are not always as self-evident as they appear to be.

Knowledge that we take for granted in the twenty-first century may have been mind-boggling in centuries past… or was it?

For example, we take the fact that the Earth is a “sphere” for granted (or almost a perfect sphere; it bulges slightly at the equator). But how do you know that the shape of the Earth is curved?  What piece of knowledge convinced you of this fact?

The likely answer is that pictures of the Earth from outer space, showing its curvature, did the trick.  But this was a fact that was known long before we had any cameras in space. In fact, without this knowledge, we would have never made it to space.

I feel that I need to start off by saying this: Christopher Columbus did NOT discover the Earth was round.

I know not everyone was taught this, but many were, myself included, and he is still credited by some with this discovery.  This is simply not true.  In fact, as an aside, Columbus did not even fully circumnavigate the Earth; he was simply the first modern European to land in the Americas.

Knowledge of the Earth’s curvature goes back even farther than Columbus. Much farther.

It’s easy to credit an individual with a certain discovery, like how heliocentrism is often credited to Galileo, though such discoveries are rarely the observations of one person.  In fact, they almost never are. The heliocentric model was proposed by Copernicus, and proven by several other scientists, Galileo included.  Atoms were proposed by Democritus and proven much later.  Evolution was known prior to Darwin, but he showed a mechanism through which it could happen (natural selection).

In short, science is a process that takes ideas through a very long period – sometimes centuries – of proposal, study, and questioning.  The idea of a spherical Earth is no different.

So I cannot drop the name of a super-genius who came up with, and proved, a spherical Earth all on his own, but I can give you the names of its contributors.

The initial assertion is often credited to ancient Greek philosophers, such as Pythagoras and Aristotle.  They cited simple observations, such as the changing position of stars as you travel north or south, the sinking of ships below the horizon, and the round shape of Earth’s shadow on the Moon during lunar eclipses.  Most of their observations are ones you can make on your own as well.  But one of the more noteworthy early contributors to our knowledge of the Earth’s shape would be Eratosthenes.

Eratosthenes

Eratosthenes was a librarian at the ancient Library of Alexandria sometime around 240 BCE.  He made many contributions to science and the understanding of the universe, but here I wish to tell you about his measurement of the circumference of the Earth.

Under the assumption that the Sun’s rays would be parallel as they hit Earth, Eratosthenes measured the angle of the shadows given by pillars in Alexandria and Syene (a town a few hundred miles south on the Nile) on the same day of the year at noon.

In Syene, pillars cast no shadow, as the Sun was directly overhead, but in Alexandria, there was a notable shadow.  With the measured angle of the shadow and the known distance between Syene and Alexandria, he could calculate the circumference of the Earth.
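The arithmetic is simple: the distance between the two cities covers the same fraction of Earth's circumference that the shadow's angle covers of a full 360-degree circle. Here is that calculation using the figures usually quoted for the experiment (both of which are approximate):

```python
# Eratosthenes' estimate, using the commonly quoted (approximate) figures.
shadow_angle_deg = 7.2        # shadow angle measured in Alexandria, degrees
distance_stadia = 5000        # assumed Alexandria-Syene distance, in stadia

# The distance spans the same fraction of Earth's circumference
# as the angle spans of a full 360-degree circle.
circumference_stadia = distance_stadia * 360 / shadow_angle_deg
print(f"Estimated circumference: {circumference_stadia:,.0f} stadia")  # 250,000
```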

Eratosthenes’ Experiment

His measurements were fairly accurate, but exactly how accurate is still up for debate.  First of all, his original measurements are lost (you can probably blame that on the destruction of the Library), and secondly, we don’t know the exact length of the unit he used.  He measured the circumference of the Earth to be around 250,000 stadia.

The problem is that the stadion (plural: stadia) did not have a consistent length across the ancient world.  But even assuming the least favorable conversion, it was an impressive estimate for the time period.

So this isn’t exactly recent knowledge that came about in the Renaissance.  We’ve had proof for quite some time that the Earth is round.  What Columbus did was greatly underestimate the distance between Europe and Asia.  Had he not landed in the Americas, he and his crew would have starved.

Also, I think that it is important to understand that, even if people at that time believed the world was flat, they weren’t necessarily unintelligent.  It would have been a perfectly reasonable assumption for the untrained observer to make.  Our immediate experience does not show many signs that the Earth is round.  What did you think before someone taught you the Earth was a sphere? (You probably don’t remember back that far, but you know what I mean.)

Yet there are still those who insist the Earth is a flat disk.

The Flat Earth Society is a group whose website claims the Earth is flat.  It’s debatable whether or not they are serious, but for every seemingly nonsensical theory there seems to be a group of people that follows it.  In order to reconcile a flat Earth, they have to accept some pretty daring hypotheses.  For example: the Sun is relatively close to the Earth (and small), to account for varying shadow lengths at different latitudes; the Earth is the center of the universe; and space agencies are lying to us, among many others.

The Flat Earth Society’s map of the Earth

They have a fairly interesting map of the Earth as well.  In their geography, the North Pole is in the center of the disk, and Antarctica is an “ice wall” at the edge which holds all of the water in.  In 2012, Minutephysics made a video titled “Top 10 Reasons Why We Know the Earth is Round.” It’s worth watching; then see what the FES had to say about it to get a glimpse of their ideas.

 

Though I do have to thank the FES.  Because of them, I was compelled to research many reasons why a spherical Earth is an established scientific fact.  For me the most compelling of these is the existence of two celestial poles.  So the next time you think of an “obvious” fact, ask yourself why it is so obvious, and motivate yourself to research.

For The First Time, Physicists Have Slowed High-Speed Electrons Using a ‘Sheet of Light’


Electrons boosted to near light-speed velocities have been slowed to a virtual crawl after being made to collide with what one physicist describes as ‘a sheet of light’.

Sure, that sheet is an ultra-intense laser briefly brighter than a quadrillion suns … but that just makes it even cooler.

Not to mention the fact these kinds of physics are what we might expect on the fringes of a black hole, opening the way to new models for studying quasars and other astronomical phenomena.

An experiment led by researchers from Imperial College London has pushed ordinary physics to its limits, measuring for the first time a process called the radiation reaction.

In simple terms, this describes the force on a charged particle as it jerks in response to emitting a photon of light.

Old-school ‘classical’ models are usually enough to explain what’s going on with the electrons moving inside an electromagnetic field. For most purposes the force of the radiation reaction can be ignored – it’s too small to matter much in these models.

But boost an electron up to super-high speeds and its radiation reaction can no longer be ignored. Quantum physics needs to take over the mathematics under these extreme circumstances.

That’s all great on paper, but until recently physicists haven’t been able to actually observe this force in action.

Now, that’s been changed for the first time – thanks to advances in laser technology.

“Testing our theoretical predictions is of central importance for us at Chalmers, especially in new regimes where there is much to learn,” says researcher Mattias Marklund of Chalmers University of Technology, Sweden.

“Paired with theory, these experiments are a foundation for high-intensity laser research in the quantum domain.”

The experiment made use of the Gemini laser at the Science and Technology Facilities Council’s Central Laser Facility in the UK, a device capable of delivering an ultra-intense beam of light in a matter of femtoseconds.

On the other side of this awesome collision was a beam of electrons pushed to high speed using laser-pulses in what’s known as laser wakefield acceleration.

When a ridiculously intense beam of photons meets electrons kicked up to speeds approaching that of light, this whole radiation reaction becomes a serious force.

Or as physicist Alec Thomas from Lancaster University and the University of Michigan put it, “One thing I always find so fascinating about this is that the electrons are stopped as effectively by this sheet of light – a fraction of a hair’s breadth thick – as by something like a millimetre of lead.”

This change in speed becomes apparent in the light the particles emit, which is boosted into the gamma range of wavelengths.

“The real result then came when we compared this detection with the energy in the electron beam after the collision,” says senior author Stuart Mangles from Imperial College London.

“We found that these successful collisions had a lower than expected electron energy, which is clear evidence of radiation reaction.”

Getting a good match between theory and experiment is a useful thing if we’re to understand how charged particles interact with light under some of the more extreme conditions in our Universe.

As matter is whipped into a frenzy on the fringes of quasars – discs of dust and gas surrounding black holes – it’s likely to experience forces such as these.

For now, while the results are solid evidence of radiation reaction, more work needs to be done to further refine the details.

Not that we’re complaining. Bring on the bigger lasers.

Missing Neutrons May Lead a Secret Life as Dark Matter


This may be the reason experiments can’t agree on the neutron lifetime, according to a new idea.
Neutrons, the mundane-seeming inhabitants of atoms, could be hiding a secret connection to dark matter, according to a new proposal.

Neutrons shouldn’t be all that mysterious. Found inside nearly every atomic nucleus, they may seem downright mundane—but they have long confounded physicists who try to measure how long these particles can live outside of atoms. For more than 10 years researchers have tried two types of experiments that have yielded conflicting results. Scientists have struggled to explain the discrepancy, but a new proposal suggests the culprit may be one of the biggest mysteries of all: dark matter.

Scientists are pretty sure the universe contains more matter than the stuff we can see, and their best guess is that it takes the form of invisible particles. What if neutrons are decaying into these invisible particles? This idea, put forward by University of California, San Diego, physicists Bartosz Fornal and Benjamin Grinstein in a paper posted this month to the physics preprint site arXiv.org, would explain why one type of neutron experiment consistently measures a different value than the other. If true, it could also provide the first way to access the dark matter particles physicists have long been seeking to no avail.

The idea has already gripped many researchers making neutron lifetime measurements, and some have quickly scrambled to look for evidence of it in their experiments. If neutrons are turning into dark matter, the process could also produce gamma-ray photons, according to Fornal and Grinstein’s calculations. “We have some germanium gamma-ray detectors lying around,” says Christopher Morris, who runs neutron experiments at Los Alamos National Laboratory. By serendipity, he and his team just recently installed a large tank to collect neutrons on their way from the start of the experiment to the point where physicists try to measure their lifetimes. This tank provided a large holding cell where many neutrons might decay into dark particles, if the process in fact occurs, and produce gamma-rays as a by-product. “When we heard about this paper, we took our detector and set it up next to our big tank and started looking for gamma rays.” He and his team are still analyzing the results of this trial, but hope to have a paper out in a few weeks reporting on what they found.

Only one of the two types of neutron decay experiments would be sensitive to neutrons decaying into dark matter. This type, called “bottle experiments,” essentially puts a given number of neutrons into a “bottle” with magnetic walls that holds them inside, then counts how many are left after a certain amount of time. Through many measurements the researchers can calculate how long an average neutron lives.

The other type of experiment looks for the main product of neutron decays. Through a well-known process called beta decay, a neutron outside of an atomic nucleus will break down into a proton, an electron and an antimatter neutrino. So-called “beam” experiments shoot a beam of neutrons into a magnetic trap that catches positively charged protons. Researchers count how many neutrons go in and how many protons come out after a given time, then infer the average time it takes a neutron to decay.

Both classes of experiments find neutrons can last for only about 15 minutes outside of atoms. But bottle experiments measure an average of 879.6 seconds plus or minus 0.6 second, according to the Particle Data Group, an international statistics-keeping collaboration. Beam experiments get a value of 888.0 seconds plus or minus 2.0 seconds. The 8.4-second difference may not seem like much, but it is larger than either of the calculations’ margins of error—which are based on the experimenters’ understanding of all the sources of uncertainty in their measurements. The difference leaves the two figures with a statistically significant “4-sigma” deviation. Experimenters behind both methods have scoured their setups for overlooked problems and sources of uncertainty, with no luck so far.
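You can check the quoted “4-sigma” figure yourself: the two experiments' uncertainties combine in quadrature, and the 8.4-second gap is about four times the result. A quick sketch:

```python
import math

# Neutron lifetime values quoted in the article (seconds).
bottle, bottle_err = 879.6, 0.6
beam, beam_err = 888.0, 2.0

difference = beam - bottle                          # 8.4 s
combined_err = math.hypot(bottle_err, beam_err)     # errors add in quadrature
significance = difference / combined_err

print(f"Difference: {difference:.1f} s")
print(f"Combined uncertainty: {combined_err:.2f} s")
print(f"Significance: {significance:.1f} sigma")    # ~4 sigma
```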

But if neutrons can transform in more ways than just beta decay, it would explain why bottle and beam experiments do not find the same answers. Fornal and Grinstein suggest that occasionally neutrons turn into some type of dark particle, undetectable by traditional means. The bottle experiments would then measure a slightly shorter lifetime for the neutron than beam experiments, because the former would be counting the dark matter decays in addition to the beta decays—and thus detecting a larger number of total decays in any given time period. The beam setup, however, only measures how long it takes neutrons to turn into protons, so their tally would not include dark matter decays and would therefore suggest neutrons can stick around slightly longer. And that is indeed what the two methods show.

“It would be nice to have an explanation,” says Peter Geltenbort, who runs bottle experiments at the Institut Laue–Langevin in France. If the dark particle idea is correct, “it means that we experimentalists are giving the right error for our measurements. People have written that maybe we are just too optimistic estimating our systematic [uncertainties], but it would confirm that we did a good job.” Geltenbort is also collaborating with Morris on the Los Alamos bottle experiments.

Perhaps the larger implication—if neutron experiments show any evidence for the dark particle hypothesis—is that physicists might then have a link to dark matter. The dark particle that Fornal and Grinstein propose could be the same particle that makes up the cosmos’ missing mass. It could also be a different invisible particle, perhaps part of some larger sector of numerous dark particles. “They [Fornal and Grinstein] are building a very specific set of models to explain the neutron lifetime discrepancy,” says dark matter theorist Peter Graham of Stanford University. “It’s not obvious that their models really fit into any other dark matter models that people have built for other reasons.” For the neutron to decay into a dark particle, for instance, that particle must be lighter than the neutron’s mass of around 940 MeV/c² (mega–electron volts divided by the speed of light squared). On the other hand, one of the most popular classes of theorized dark matter particles, so-called weakly interacting massive particles (WIMPs), would weigh somewhere around 100 GeV/c² (giga–electron volts divided by the speed of light squared)—roughly 100 times more than a neutron.

Fornal first started thinking about the neutron enigma about a year ago. “I ran into an article by Peter Geltenbort about this mysterious discrepancy between the neutron lifetime measurements,” and thought, “wow, that’s a really big thing to explain,” he says. The article was an adaptation of an April 2016 Scientific American story Geltenbort had authored with University of Tennessee Knoxville physicist Geoffrey Greene that was published in the Institut Laue–Langevin’s annual report. Fornal says he was reminded of the topic a few months ago, when he and Grinstein came across a reference to it. “We didn’t find any theoretical model explaining this, and thought it might be an interesting thing to do,” he says. The researchers worked on the hypothesis over the holidays and posted their paper online just after the new year. They are surprised—but thrilled—that they might know soon whether or not neutron decay experiments see evidence for their proposal. “[neutron researchers] started looking for this so quickly,” Fornal says. “It’s nice to hear that this theory is not disconnected from experiments.”

Scientists Find Time Travel Is Mathematically Possible


Mathematically, time travel is possible. Scientists have created a new mathematical model showing how time travel is theoretically possible. Experts used Einstein’s theory of general relativity as the basis for a hypothetical device, which they named a Traversable Acausal Retrograde Domain in Space-time (TARDIS).

In other words, they’ve come up with a mathematical model of a theoretical time machine box that has the ability to move back and forth through space and time.

For centuries, humans have imagined traveling in time. The idea has inspired countless movies, series and books, and science fiction seems to have figured out everything there is to know about time travel.  But now scientists have decided to see whether they could learn something more about time travel, and whether it is only possible in science fiction.

“People think of time travel as something as fiction. And we tend to think it’s not possible because we don’t actually do it,” said Ben Tippett, a physicist and mathematician from the University of British Columbia, in a UBC news release, adding, “But, mathematically, it is possible.”

What Tippett and his colleague, University of Maryland astrophysicist David Tsang, created was a mathematical formula based on Einstein’s theory of general relativity to show how time travel is in fact possible, at least in theory.

 According to the abstract of the scientific paper, which was published in the journal Classical and Quantum Gravity, “we present geometry which has been designed to fit a layperson’s description of a ‘time machine.’ It is a theoretical box which allows those within it to travel back and forth through time and space, as seen by an external observer.”

Fittingly, they’ve named it TARDIS—which stands for Traversable Acausal Retrograde Domain in Space-time.

Tippett further explained: “My model of a time machine uses the curved space-time to bend time into a circle for the passengers, not in a straight line. That circle takes us back in time.”

In other words, their newly formulated model assumes that time could curve around high-mass objects just as physical space does in the universe.

Tippett and Tsang refer to their TARDIS as a space-time geometry “bubble” that has the ability to move faster than the speed of light. They explain in their paper: “It is a box which travels ‘forwards’ and then ‘backwards’ in time along a circular path through space-time.”

“Delighted external observers would be able to watch the time travelers within the box evolving backward in time: un-breaking eggs and separating cream from their coffee,” the scientists explain in their paper.

But don’t get all excited, it’s still not possible to build—at least not yet.

“While it is mathematically possible, it is not yet possible to construct a space-time machine because we need materials—which we call exotic matter—to bend space-time in these impossible ways, but they have yet to be discovered,” Tippett explained.

New Theory Cracks Open the Black Box of Deep Learning


A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).

Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Geoffrey Hinton, a pioneer of deep learning who works at Google and the University of Toronto, emailed Tishby after watching his Berlin talk. “It’s extremely interesting,” Hinton wrote. “I have to listen to it another 10,000 times to really understand it, but it’s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.”

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

The Bottleneck

Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet. It was the 1980s, and Tishby was thinking about how good humans are at speech recognition — a major challenge for AI at the time. Tishby realized that the crux of the issue was the question of relevance: What are the most relevant features of a spoken word, and how do we tease these out from the variables that accompany them, such as accents, mumbling and intonation? In general, when we face the sea of data that is reality, which signals do we keep?

“This notion of relevant information was mentioned many times in history but never formulated correctly,” Tishby said in an interview last month. “For many years people thought information theory wasn’t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.”

Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract — as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, “information is not about semantics.” But, Tishby argued, this isn’t true. Using information theory, he realized, “you can define ‘relevant’ in a precise sense.”

Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word “dog.” You can capture all the “relevant” information in X about Y by compressing X as much as you can without losing the ability to predict Y. In their 1999 paper, Tishby and co-authors Fernando Pereira, now at Google, and William Bialek, now at Princeton University, formulated this as a mathematical optimization problem. It was a fundamental idea with no killer application.
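In modern notation, the optimization asks for a compressed representation T of X that minimizes I(X;T) - β·I(T;Y): keep as little of X as possible while preserving what predicts Y. Here is a minimal sketch of that objective for small, discrete variables; the toy distribution and encoder are invented for illustration and are not from the 1999 paper:

```python
import numpy as np

# A minimal sketch of the information bottleneck objective (illustrative only).
# Given a joint distribution p(x, y) and a candidate "compression" encoder p(t|x),
# the bottleneck trade-off is: minimize I(X;T) - beta * I(T;Y).

def mutual_info(p_joint):
    """Mutual information (in bits) from a 2-D joint probability table."""
    px = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / (px @ py)[mask])))

def ib_objective(p_xy, p_t_given_x, beta):
    """Evaluate I(X;T) - beta * I(T;Y) for an encoder p(t|x)."""
    p_x = p_xy.sum(axis=1)                 # marginal over X
    p_xt = p_t_given_x * p_x[:, None]      # joint p(x, t)
    p_ty = p_t_given_x.T @ p_xy            # p(t, y) = sum_x p(t|x) p(x, y)
    return mutual_info(p_xt) - beta * mutual_info(p_ty)

# Toy example: X takes 4 values, Y takes 2, and T compresses X into 2 clusters.
p_xy = np.array([[0.20, 0.05],
                 [0.20, 0.05],
                 [0.05, 0.20],
                 [0.05, 0.20]])
encoder = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)  # hard clustering
print(ib_objective(p_xy, encoder, beta=2.0))
```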

“I’ve been thinking along these lines in various contexts for 30 years,” Tishby said. “My only luck was that deep neural networks became so important.”

Eyeballs on Faces on People on Scenes

Though the concept behind deep neural networks had been kicked around for decades, their performance in tasks like speech and image recognition only took off in the early 2010s, due to improved training regimens and more powerful computer processors. Tishby recognized their potential connection to the information bottleneck principle in 2014 after reading a surprising paper by the physicists David Schwab and Pankaj Mehta.

The duo discovered that a deep-learning algorithm invented by Hinton called the “deep belief net” works, in a particular case, exactly like renormalization, a technique used in physics to zoom out on a physical system by coarse-graining over its details and calculating its overall state. When Schwab and Mehta applied the deep belief net to a model of a magnet at its “critical point,” where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model’s state. It was a stunning indication that, as the biophysicist Ilya Nemenman said at the time, “extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

The only problem is that, in general, the real world isn’t fractal. “The natural world is not ears on ears on ears on ears; it’s eyeballs on faces on people on scenes,” said Kyle Cranmer, a particle physicist at New York University. “So I wouldn’t say [the renormalization procedure] is why deep learning on natural images is working so well.” But Tishby, who at the time was undergoing chemotherapy for pancreatic cancer, realized that both deep learning and the coarse-graining procedure could be encompassed by a broader idea. “Thinking about science and about the role of my old ideas was an important part of my healing and recovery,” he said.

In 2015, he and his student Noga Zaslavsky hypothesized that deep learning is an information bottleneck procedure that compresses noisy data as much as possible while preserving information about what the data represent. Tishby and Shwartz-Ziv’s new experiments with deep neural networks reveal how the bottleneck procedure actually plays out. In one case, the researchers used small networks that could be trained to label input data with a 1 or 0 (think “dog” or “no dog”) and gave their 282 neural connections random initial strengths. They then tracked what happened as the networks engaged in deep learning with 3,000 sample input data sets.

The basic algorithm used in the majority of deep-learning procedures to tweak neural connections in response to data is called “stochastic gradient descent”: Each time the training data are fed into the network, a cascade of firing activity sweeps upward through the layers of artificial neurons. When the signal reaches the top layer, the final firing pattern can be compared to the correct label for the image — 1 or 0, “dog” or “no dog.” Any differences between this firing pattern and the correct pattern are “back-propagated” down the layers, meaning that, like a teacher correcting an exam, the algorithm strengthens or weakens each connection to make the network layer better at producing the correct output signal. Over the course of training, common patterns in the training data become reflected in the strengths of the connections, and the network becomes expert at correctly labeling the data, such as by recognizing a dog or a word.
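For readers who want to see those mechanics spelled out, here is a minimal sketch of one such training loop on a tiny two-layer network, written in plain NumPy rather than a deep-learning framework. The network size, the stand-in "dog" rule and the learning rate are arbitrary choices for illustration:

```python
import numpy as np

# A minimal sketch of stochastic gradient descent on a tiny two-layer network
# (illustrative only; real deep-learning systems use frameworks like PyTorch).
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(12, 4))   # input (12 features) -> hidden (4 units)
W2 = rng.normal(scale=0.1, size=(4, 1))    # hidden -> single "dog / no dog" output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(x, label, lr=0.1):
    """One forward pass, one backward pass, one weight update."""
    global W1, W2
    h = np.tanh(x @ W1)                    # hidden activations
    p = sigmoid(h @ W2)                    # predicted probability of "dog"
    # Back-propagate the error (cross-entropy loss with a sigmoid output).
    d_out = p - label
    grad_W2 = np.outer(h, d_out)
    d_hidden = (W2 @ d_out) * (1 - h**2)   # tanh derivative
    grad_W1 = np.outer(x, d_hidden)
    W2 -= lr * grad_W2                     # strengthen/weaken connections
    W1 -= lr * grad_W1

# Train on random "samples": each pass nudges the weights toward the correct label.
for _ in range(1000):
    x = rng.normal(size=12)
    label = np.array([1.0 if x[0] > 0 else 0.0])   # toy rule standing in for "dog"
    sgd_step(x, label)
```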

In their experiments, Tishby and Shwartz-Ziv tracked how much information each layer of a deep neural network retained about the input data and how much information each one retained about the output label. The scientists found that, layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek’s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label.

Tishby and Shwartz-Ziv also made the intriguing discovery that deep learning proceeds in two phases: a short “fitting” phase, during which the network learns to label its training data, and a much longer “compression” phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.

As a deep neural network tweaks its connections by stochastic gradient descent, at first the number of bits it stores about the input data stays roughly constant or increases slightly, as connections adjust to encode patterns in the input and the network gets good at fitting labels to it. Some experts have compared this phase to memorization.

Then learning switches to the compression phase. The network starts to shed information about the input data, keeping track of only the strongest features — those correlations that are most relevant to the output label. This happens because, in each iteration of stochastic gradient descent, more or less accidental correlations in the training data tell the network to do different things, dialing the strengths of its neural connections up and down in a random walk. This randomization is effectively the same as compressing the system’s representation of the input data. As an example, some photos of dogs might have houses in the background, while others don’t. As a network cycles through these training photos, it might “forget” the correlation between houses and dogs in some photos as other photos counteract it. It’s this forgetting of specifics, Tishby and Shwartz-Ziv argue, that enables the system to form general concepts. Indeed, their experiments revealed that deep neural networks ramp up their generalization performance during the compression phase, becoming better at labeling test data. (A deep neural network trained to recognize dogs in photos might be tested on new photos that may or may not include dogs, for instance.)

It remains to be seen whether the information bottleneck governs all deep-learning regimes, or whether there are other routes to generalization besides compression. Some AI experts see Tishby’s idea as one of many important theoretical insights about deep learning to have emerged recently. Andrew Saxe, an AI researcher and theoretical neuroscientist at Harvard University, noted that certain very large deep neural networks don’t seem to need a drawn-out compression phase in order to generalize well. Instead, researchers program in something called early stopping, which cuts training short to prevent the network from encoding too many correlations in the first place.

Tishby argues that the network models analyzed by Saxe and his colleagues differ from standard deep neural network architectures, but that nonetheless, the information bottleneck theoretical bound defines these networks’ generalization performance better than other methods. Questions about whether the bottleneck holds up for larger neural networks are partly addressed by Tishby and Shwartz-Ziv’s most recent experiments, not included in their preliminary paper, in which they train much larger deep neural networks with 330,000 connections to recognize handwritten digits in the 60,000-image Modified National Institute of Standards and Technology database, a well-known benchmark for gauging the performance of deep-learning algorithms. The scientists saw the same convergence of the networks to the information bottleneck theoretical bound; they also observed the two distinct phases of deep learning, separated by an even sharper transition than in the smaller networks. “I’m completely convinced now that this is a general phenomenon,” Tishby said.

Humans and Machines

The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain’s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility. Still, as their thinking machines achieve ever greater feats — even stoking fears that AI could someday pose an existential threat — many researchers hope these explorations will uncover general insights about learning and intelligence.

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example. Lake and his colleagues’ models suggest the brain may deconstruct the new letter into a series of strokes — previously existing mental constructs — allowing the conception of the letter to be tacked onto an edifice of prior knowledge. “Rather than thinking of an image of a letter as a pattern of pixels and learning the concept as mapping those features” as in standard machine-learning algorithms, Lake explained, “instead I aim to build a simple causal model of the letter,” a shorter path to generalization.

Such brainy ideas might hold lessons for the AI community, furthering the back-and-forth between the two fields. Tishby believes his information bottleneck theory will ultimately prove useful in both disciplines, even if it takes a more general form in human learning than in AI. One immediate insight that can be gleaned from the theory is a better understanding of which kinds of problems can be solved by real and artificial neural networks. “It gives a complete characterization of the problems that can be learned,” Tishby said. These are “problems where I can wipe out noise in the input without hurting my ability to classify. This is natural vision problems, speech recognition. These are also precisely the problems our brain can cope with.”

Meanwhile, both real and artificial neural networks stumble on problems in which every detail matters and minute differences can throw off the whole result. Most people can’t quickly multiply two large numbers in their heads, for instance. “We have a long class of problems like this, logical problems that are very sensitive to changes in one variable,” Tishby said. “Classifiability, discrete problems, cryptographic problems. I don’t think deep learning will ever help me break cryptographic codes.”

Generalizing — traversing the information bottleneck, perhaps — means leaving some details behind. This isn’t so good for doing algebra on the fly, but that’s not a brain’s main business. We’re looking for familiar faces in the crowd, order in chaos, salient signals in a noisy world.

Physicists Just Found a New Way to Bend a Fundamental Rule of Light Waves


One of the more well-known rules in physics is that light can only ever go one speed, so long as nothing stands in its way.

But new research has found there could be an interesting exception to this rule, where the mixing of light waves could bring them to a complete standstill.

 

The discovery hints at new ways of wrangling not just photons but nearly any kind of wave, which could be useful in technology that relies on information sent and stored using light.

Delaying light’s journey isn’t itself all that hard. Put a bunch of atoms in their way, and photons will take their time slipping in and out of the forest of particles.

Chill those particles right down so they lose their individual identities, and light can be set into slow motion and even stopped completely as it passes through the cloud.

More recently, it’s been shown that light’s pathway can be affected by changing its angular momentum, effectively twisting it so that it takes longer than it would at its usual 299,792,458 metres per second to get from A to B.

A small group of physicists from the Israel Institute of Technology and the Institute for Pure and Applied Mathematics (IMPA) in Brazil have now come up with another method, showing it’s theoretically possible to weave waves of light together in such a way that they stop dead in their tracks.

 The trick relies on tuning the light waves so they meet at what’s called an exceptional point – mathematical jargon that describes how the features of different waves match one another at a given coordinate.

Exceptional points were little more than mathematical concepts until fairly recently, when researchers demonstrated experimentally that they could be created by confining microwaves within a narrow grid.

When we talk about light waves, most of us imagine ripples of varying heights and length.

Light is of course defined by qualities such as wavelength and frequency, but it also has numerous other properties that form repeating patterns as photons traverse space.

These patterns can be tweaked by constraining their properties using things called waveguides, so two light waves can coalesce within the same space. The point where those properties match up, the exceptional point, gives rise to some interesting behaviours.
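A standard toy model makes that coalescence concrete: two coupled modes, one with gain and one with equal loss, have eigenvalues that merge into a single value when the gain/loss rate matches the coupling strength. That merging point is the exceptional point. (This is the textbook illustration, not the specific waveguide system the researchers analysed.)

```python
import numpy as np

# Toy two-mode system with balanced gain (+i*gamma) and loss (-i*gamma),
# coupled with strength kappa. The eigenvalues are +/- sqrt(kappa^2 - gamma^2),
# so they coalesce (an exceptional point) when gamma equals kappa.
kappa = 1.0
for gamma in (0.5, 1.0, 1.5):
    H = np.array([[1j * gamma, kappa],
                  [kappa, -1j * gamma]])
    eigvals = np.linalg.eigvals(H)
    print(f"gamma = {gamma}: eigenvalues = {np.round(eigvals, 3)}")
# gamma = 0.5 -> two distinct real eigenvalues (two distinct modes)
# gamma = 1.0 -> both eigenvalues are 0: the exceptional point
# gamma = 1.5 -> purely imaginary pair (one mode grows, the other decays)
```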

Last year researchers applied these points to the development of sensors that could respond to the smallest disruptions.

Now physicists have shown using mathematical models that it’s possible to use a kind of waveguide that balances the wave’s energy to produce an exceptional point where light stands still.

 By toggling the set-up in such a way that the waves can gain or lose energy, the light waves can be made to coalesce and freeze, or speed up and resume their journey out the other side.

If we’re to be particularly pedantic, we shouldn’t imagine it as photons standing still waiting for the go signal. No fundamental laws are being broken.

Just as light passing through a medium or being twisted is still technically moving at light speed, the photons in these waves are caught in a figurative electromagnetic whirlwind based on their interactions at the exceptional point.

For now, this achievement is still just by the numbers – a light-stopping device based on exceptional points hasn’t been built.

But if one is, we’d have another way to manipulate waves of light. The researchers also speculate that the same concepts technically apply to any kind of wave, including sound.

Given photons are quickly becoming the new electrons in information technology, we need all the tools we can find to get a firm grip on these speedy little particles.
