For the past 17 years, it’s been my privilege to lead NASA’s New Horizons space mission and its exploration of Pluto—the farthest planet ever explored. During that time, I was often asked to predict what we would find there, but I knew better, because I’d seen too many such predictions about what would be found by previous first-time explorations of planets completely fail.
So I just said (perhaps to the disappointment of many journalists) that the only prediction I’d make about the results from the exploration of Pluto is that New Horizons would find “something wonderful.”
Undertaking the exploration of Pluto was something deeply personal for me. Why? In part because, as a planetary scientist, I knew that even in our best telescopes that faraway world remained barely more than a fuzzy blob, and thus we would never unravel Pluto’s many mysteries without going there and seeing it in detail.
The exploration of Pluto was also personal to me because, by the time I finished my PhD in 1989, NASA’s Voyager mission to explore the giant planets of our solar system was wrapping up at Neptune, and all the other planets then known, from Mercury to Neptune, had been explored by spacecraft. So Pluto represented the only remaining opportunity back then to be a part of a first mission of exploration to a new planet.
A final part of what made the exploration of Pluto personal to me was my desire to be a part of something larger than life in my career, a legacy for the ages—and the first-time exploration of a whole new planet was certainly that.
But most of all, it was personal to me because so many people told me it couldn’t be done. Many said NASA would never approve another faraway mission again after Voyager. But once the National Academy of Sciences ranked the exploration of Pluto as the highest priority for a new mission in the early 2000s, and NASA approved the project, that theory collapsed. Then some said we could never do it on the budget NASA offered: about one fifth of what the Voyager mission had cost, an impossibly small budget to squeeze into.
But we accomplished that too, thanks to some clever compromises in spacecraft capability and design, and the decision to send only one spacecraft (not two, like Voyager) on the long journey. Then it was said that a single spacecraft mission flying so far (over three billion miles) and needing so long (9.5 years) to reach its target would be too risky and was likely to fail.
But it didn’t fail. In fact, despite a harrowing, near-death experience just days out from arriving at Pluto, New Horizons succeeded in crossing the entire solar system and then exploring Pluto—and did so brilliantly, accomplishing all the objectives set out for it and then some!
In all, the exploration of Pluto took 26 years to accomplish—soup to nuts—from idea to flyby and data return. During that quarter century there were no fewer than five mission studies, then a tough competition between fiercely vying teams to win the project, then millions of hours of effort by the approximately 2,500 Americans who designed, built, tested and flew New Horizons, and ultimately a successful reconnaissance of Pluto and its system of moons that generated public interest surpassing that of every robotic NASA mission before it.
Also along that long road were many battles to keep the mission funded, some adversaries that wanted to see us fail, and more than a few technical problems during spacecraft development. And there was also the sad development of 2006 when a few hundred astronomers—mostly non-experts in the study of planets—declared that Pluto and the then burgeoning list of other small planets that had been discovered beyond Pluto were not planets, largely to prevent schoolchildren from having to memorize their names. (I wonder if those same astronomers—also non-experts in chemistry—believe that there are too many elements in the periodic table for the same reason.)
But in addition to battles and disappointments, there were also soaring moments of unparalleled success during the 26-year-long quest to see Pluto explored. One such was our day of launch, January 19, 2006. Another was the day of our Pluto flyby, on Bastille Day 2015, when New Horizons stormed the gates of Pluto and revealed what had been nothing more than a distant point of light in most telescopes as the truly amazing planet that it is, before our very eyes.
Few have any idea how much sheer effort it took to undertake this project, how much career risk was involved, how dedicated the team of people who carried it out was, and how much reward resulted from upending some scientific paradigms of planetary science. But in addition, there was the reward of how it inspired so many in the public as to what humans can achieve, how it inspired countless schoolkids toward science and engineering careers, and how it showed once again that people still love great expeditions of exploration.
The improbable story of how Pluto came to be explored—including all its good, bad and sometimes even ugly facets—is told in the just-published book Chasing New Horizons (Picador Press, 2018), which David Grinspoon and I wrote over the past two and a half years.
As we wrote that book about what the New Horizons team accomplished, and how we did it, against long odds, against strong competitors, and even against fate, I became convinced that my long-ago prediction about what we would find at Pluto had been correct: in both Pluto and our own species, we truly discovered something wonderful.
Can a competition with cash rewards improve techniques for tracking the Large Hadron Collider’s messy particle trajectories?
Physicists at the world’s leading atom smasher are calling for help. In the next decade, they plan to produce up to 20 times more particle collisions in the Large Hadron Collider (LHC) than they do now, but current detector systems aren’t fit for the coming deluge. So this week, a group of LHC physicists has teamed up with computer scientists to launch a competition to spur the development of artificial-intelligence techniques that can quickly sort through the debris of these collisions. Researchers hope these will help the experiment’s ultimate goal of revealing fundamental insights into the laws of nature.
At the LHC at CERN, Europe’s particle-physics laboratory near Geneva, two bunches of protons collide head-on inside each of the machine’s detectors 40 million times a second. Every proton collision can produce thousands of new particles, which radiate from a collision point at the centre of each cathedral-sized detector. Millions of silicon sensors are arranged in onion-like layers and light up each time a particle crosses them, producing one pixel of information per crossing. Collisions are recorded only when they produce potentially interesting by-products. When they are, the detector takes a snapshot that might include hundreds of thousands of pixels from the piled-up debris of up to 20 different pairs of protons. (Because particles move at or close to the speed of light, a detector cannot record a full movie of their motion.)
From this mess, the LHC’s computers reconstruct tens of thousands of tracks in real time, before moving on to the next snapshot. “The name of the game is connecting the dots,” says Jean-Roch Vlimant, a physicist at the California Institute of Technology in Pasadena who is a member of the collaboration that operates the CMS detector at the LHC.
After future planned upgrades, each snapshot is expected to include particle debris from 200 proton collisions. Physicists currently use pattern-recognition algorithms to reconstruct the particles’ tracks. Although these techniques would be able to work out the paths even after the upgrades, “the problem is, they are too slow”, says Cécile Germain, a computer scientist at the University of Paris South in Orsay. Without major investment in new detector technologies, LHC physicists estimate that the collision rates will exceed the current capabilities by at least a factor of 10.
Researchers suspect that machine-learning algorithms could reconstruct the tracks much more quickly. To help find the best solution, Vlimant and other LHC physicists teamed up with computer scientists including Germain to launch the TrackML challenge. For the next three months, data scientists will be able to download 400 gigabytes of simulated particle-collision data—the pixels produced by an idealized detector—and train their algorithms to reconstruct the tracks.
Participants will be evaluated on the accuracy with which they do this. The top three performers of this phase, hosted by the Google-owned company Kaggle, will receive cash prizes of US$12,000, $8,000 and $5,000. A second competition will then evaluate algorithms on the basis of speed as well as accuracy, Vlimant says.
Such competitions have a long tradition in data science, and many young researchers take part to build up their CVs. “Getting well ranked in challenges is extremely important,” says Germain. Perhaps the most famous of these contests was the 2009 Netflix Prize. The entertainment company offered US$1 million to whoever worked out the best way to predict what films its users would like to watch, going on their previous ratings. TrackML isn’t the first challenge in particle physics, either: in 2014, teams competed to ‘discover’ the Higgs boson in a set of simulated data (the LHC discovered the Higgs, long predicted by theory, in 2012). Other science-themed challenges have involved data on anything from plankton to galaxies.
From the computer-science point of view, the Higgs challenge was an ordinary classification problem, says Tim Salimans, one of the top performers in that race (after the challenge, Salimans went on to get a job at the non-profit effort OpenAI in San Francisco, California). But the fact that it was about LHC physics added to its lustre, he says. That may help to explain the challenge’s popularity: nearly 1,800 teams took part, and many researchers credit the contest for having dramatically increased the interaction between the physics and computer-science communities.
TrackML is “incomparably more difficult”, says Germain. In the Higgs case, the reconstructed tracks were part of the input, and contestants had to do another layer of analysis to ‘find’ the particle. In the new problem, she says, you have to find in the 100,000 points something like 10,000 arcs of ellipse. She thinks the winning technique might end up resembling those used by the program AlphaGo, which made history in 2016 when it beat a human champion at the complex game of Go. In particular, they might use reinforcement learning, in which an algorithm learns by trial and error on the basis of ‘rewards’ that it receives after each attempt.
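The core subproblem Germain describes can be miniaturized: a charged particle moving through a detector’s magnetic field traces a helix whose transverse projection is a circular arc, so track reconstruction ultimately rests on fitting circles to noisy hit points. The following toy sketch is purely illustrative (it uses no TrackML data format or real detector code, and a hypothetical single-track case rather than 100,000 mixed points); it recovers a circle from noisy points on an arc using the simple algebraic Kåsa fit:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit.

    Rewrites (x-cx)^2 + (y-cy)^2 = r^2 as the linear model
    a*x + b*y + c = -(x^2 + y^2), solved in least squares,
    with cx = -a/2, cy = -b/2, r^2 = cx^2 + cy^2 - c.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Simulate 50 noisy hits along a quarter arc of a known circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi / 2.0, 50)
true_cx, true_cy, true_r = 1.0, -2.0, 5.0
x = true_cx + true_r * np.cos(theta) + rng.normal(0.0, 0.01, 50)
y = true_cy + true_r * np.sin(theta) + rng.normal(0.0, 0.01, 50)

cx, cy, r = fit_circle(x, y)
print(cx, cy, r)  # centre and radius close to (1.0, -2.0, 5.0)
```

The hard part of TrackML is not this fit but the combinatorics: deciding which of the 100,000 points belong together before any fit can be applied, which is where machine-learning approaches are expected to help.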
Vlimant and other physicists are also beginning to consider more untested technologies, such as neuromorphic computing and quantum computing. “It’s not clear where we’re going,” says Vlimant, “but it looks like we have a good path.”
The question of whether cell phones can cause cancer has become a popular one since the dramatic increase in cell phone use in the 1990s. Scientists’ main concern is that cell phones might increase the risk of brain tumors or other tumors in the head and neck area, and as of now there doesn’t seem to be a clear answer.
Cell phones give off a form of energy known as radiofrequency (RF) waves. They are at the low-energy end of the electromagnetic spectrum – as opposed to the higher-energy end where X-rays exist – and they emit a type of non-ionizing radiation. In contrast to ionizing radiation, this type does not cause cancer by damaging DNA in cells, but there is still a concern that it could cause biological effects that result in some cancers.
However, the only consistently recognizable biological effect of RF energy is heat. The closer the phone is to the head, the greater the expected exposure is. If RF radiation is absorbed in large enough amounts by materials containing water, such as food, fluids, and body tissues, it produces this heat that can lead to burns and tissue damage. Still, it is unclear whether RF waves could result in cancer in some circumstances.
Many factors affect the amount of RF energy a person is exposed to, such as the amount of time spent on the phone, the model of the phone, and whether a hands-free device or speaker is being used. The distance and path to the nearest cell phone tower also play a role: the farther away a person is from the tower, the more energy the phone needs to get a good signal. The same is true in areas where many people are using their phones at once and extra energy is required to get a good signal.
RF radiation is so common in the environment that there is no way to completely avoid it. Most phone manufacturers post information about the amount of RF energy absorbed from the phone into the user’s body, called the specific absorption rate (SAR), on their website or in the user manual. Different phones have different SARs, so customers can reduce RF energy exposure by comparing models when shopping for a phone. The highest SAR allowed in the U.S. is 1.6 watts/kg, though actual SAR values during use may vary based on certain factors.
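To put the SAR figure in rough perspective, consider a hedged back-of-envelope calculation: power absorbed per kilogram divided by the specific heat of tissue gives a maximum warming rate. The sketch below assumes a sustained worst-case exposure at the 1.6 W/kg U.S. limit and a textbook specific-heat value of about 3,500 J/(kg·K) for soft tissue; real tissue warms far less because blood flow and conduction carry heat away.

```python
# Back-of-envelope: fastest possible tissue warming at the U.S. SAR limit,
# ignoring blood flow and conduction (both remove heat in reality).
sar = 1.6            # W/kg, U.S. limit (assumed sustained, worst case)
c_tissue = 3500.0    # J/(kg*K), approximate specific heat of soft tissue

rate_per_second = sar / c_tissue        # K/s of temperature rise
rate_per_minute = rate_per_second * 60  # K/min
print(f"{rate_per_minute:.3f} K per minute")  # ~0.027 K per minute
```

Even under these pessimistic assumptions the heating is a few hundredths of a degree per minute, which is why heat is considered the main, and modest, established biological effect of RF exposure at these levels.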
Studies have been conducted to find a possible link between cell phone use and the development of tumors. They are fairly limited, however, due to low numbers of study participants and the risk of recall bias. Recall bias can occur when individuals who develop brain tumors are more likely to recall heavy cell phone use than those who do not, even when there is no true difference. Also, tumors can take decades to develop, and because cell phones have only been in widespread use for about 20 years, these studies have been unable to follow people for very long periods of time. Additionally, cell phone use is constantly changing.
Outside of direct studies on cell phone use, brain cancer incidence and death rates have changed little in the past decade, making it even more difficult to pinpoint if cell phone use plays a role in tumor development.