5G looks like it’s the next best thing in tech, but it’s really a Trojan horse for harming humanity



Many so-called “experts” are claiming that it’ll be a huge step forward for innovation in everything from manufacturing and transportation, to medicine and beyond. But in reality, 5G technology represents an existential threat to humanity – a “phony war” on the people who inhabit this planet we call Earth, and all in the name of “progress.”

Writing for GreenMedInfo, Claire Edwards, a former editor and trainer in intercultural writing for the United Nations (U.N.), warns that 5G might end up being the straw that breaks the camel’s back in terms of the state of public health. Electro-hypersensitivity (EHS), she says, could soon become a global pandemic as a result of 5G implementation, with people developing severe health symptoms that inhibit their ability to live normal lives.

This “advanced” technology, Edwards warns, involves the use of special “laser-like beams of electromagnetic radiation,” or EMR, that are basically blasted “from banks of thousands of tiny antennas” installed all over the place, typically on towers and poles located within just a couple hundred feet of one another.

While she still worked for the U.N., Edwards tried to warn her superiors about the dangers of 5G EMR, only to have her warnings fall on deaf ears. This prompted her to contact the U.N. Secretary-General, Antonio Guterres, who then pushed the World Health Organization (WHO) to take a closer look into the matter – though this ended up being a dead end as well.

For more news about 5G and its threat to humanity, be sure to check out Conspiracy.news.


Elon Musk is planning to launch 4,425 5G satellites into Earth’s orbit THIS JUNE

Edwards worries particularly about 5G implementation in space, as existing space law is so woefully inadequate that countries all around the world, including the U.S., will likely blanket the atmosphere in 5G equipment, turning our entire planet into an EMR hell.

Elon Musk of Tesla fame is one such purveyor of 5G technology who’s planning to launch an astounding 4,425 5G satellites into Earth’s orbit by June 2019. This means that, in a matter of just a few months, 5G will be everywhere and completely inescapable.

“There are no legal limits on exposure to EMR,” Edwards writes.

“Conveniently for the telecommunications industry, there are only non-legally enforceable guidelines such as those produced by the grandly named International Commission on Non-Ionising Radiation Protection, which turns out to be like the Wizard of Oz, just a tiny little NGO in Germany that appoints its own members, none of whom is a medical doctor or environmental expert.”

Edwards sees 5G implementation as eventually leading to a “catastrophe for all life on Earth” in the form of “the last great extinction.” She likens it to a “biological experiment” representing the “most heinous manifestation of hubris and greed in human history.”

There’s already evidence to suggest that 5G implementation in a few select cities across the United States, including in Sacramento, California, is causing health problems for people who live near 5G equipment. At firehouses where 5G equipment was installed, for instance, firefighters are reporting things like memory problems and confusion.

Some people are also reporting reproductive issues like miscarriages and stillbirths, as well as nosebleeds and insomnia, all stemming from the presence of 5G transmitters.

Edwards encourages folks to sign The Stop 5G Appeal if they care about protecting people, animals, insects, and the planet from this impending 5G assault.

“Our newspapers are now casually popularizing the meme that human extinction would be a good thing, but when the question becomes not rhetorical but real, when it’s your life, your child, your community, your environment that is under immediate threat, can you really subscribe to such a suggestion?” Edwards asks.

The case for taking AI seriously as a threat to humanity


Why some people fear AI, explained.

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.
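To make that concrete, here is a minimal, purely hypothetical sketch (the class, action names, and numbers are invented for illustration, not drawn from any real system): a “game” whose measured score can be raised either by clearing levels, which is what we want, or by exploiting a scoring bug, which is not. An agent that greedily optimizes only the metric picks the exploit every time.

```python
import copy

class ToyGame:
    def __init__(self):
        self.score = 0           # the metric we told the agent to maximize
        self.levels_cleared = 0  # the thing we actually wanted

    def act(self, action):
        if action == "play_level":
            self.levels_cleared += 1
            self.score += 10
        elif action == "hack_scoreboard":  # an exploit nobody thought to rule out
            self.score += 1_000_000

def greedy_agent(game, actions=("play_level", "hack_scoreboard")):
    """Pick whichever action raises the measured score the most."""
    def gain(action):
        trial = copy.deepcopy(game)   # simulate the action on a copy of the game
        trial.act(action)
        return trial.score - game.score
    game.act(max(actions, key=gain))

game = ToyGame()
for _ in range(5):
    greedy_agent(game)

print(game.score, game.levels_cleared)  # 5000000 0 -- metric satisfied, intent ignored
```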

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We don’t know how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

With all those limitations, one might conclude that even if it’s possible to make a computer as smart as a person, it’s certainly a long way away. But that conclusion doesn’t necessarily follow.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play Atari games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.
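To spell out the arithmetic behind that estimate: a tenfold cost drop per decade compounds to roughly a 21 percent price decline per year, and a hundredfold drop over 20 years. A quick illustrative calculation (the function name and figures are just for this example):

```python
def relative_compute_cost(years, drop_per_decade=10.0):
    """Cost of a unit of computing after `years`, relative to today."""
    return drop_per_decade ** (-years / 10.0)

print(f"{1 - relative_compute_cost(1):.1%} cheaper after one year")    # ~20.6%
print(f"{1 / relative_compute_cost(20):.0f}x cheaper after 20 years")  # 100x
```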

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could it wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So, many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.
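A minimal sketch of that proxy failure (hypothetical body plans and numbers, not DeepMind’s actual simulation): if the reward is “how high did the feet get?” rather than “did it jump?”, a body plan that just grows tall and flips end over end beats an honest jumper.

```python
def peak_foot_height_m(creature):
    """The proxy reward: the highest point the creature's feet reach."""
    if creature["strategy"] == "jump":
        return 0.3                    # an honest jump lifts the feet ~0.3 m
    if creature["strategy"] == "grow tall and flip":
        return creature["height_m"]   # flipping swings the feet up to roughly
                                      # the body's own height, no jumping needed
    return 0.0

candidates = [
    {"name": "short jumper", "height_m": 0.5, "strategy": "jump"},
    {"name": "tall pole",    "height_m": 3.0, "strategy": "grow tall and flip"},
]

winner = max(candidates, key=peak_foot_height_m)
print(winner["name"])  # "tall pole" -- the proxy prefers it to the real jumper
```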

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. … For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and nonprofits (the Elon Musk-founded OpenAI is another major player in the field).

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017 and 2018.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much more scary, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is their models and the reasons for the remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number attached to Google’s stock price. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. A success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

At a major conference in early December, Google’s DeepMind cracked open a longstanding problem in biology: predicting how proteins fold. “Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” its announcement concludes.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. Whether or not humanity should be afraid, we should definitely be doing our homework.

Tesla Electric Plane: How Elon Musk Plans to Bring Batteries to the Skies


In addition to semi-announced long-term initiatives like the Tesla pickup truck, Elon Musk has also floated the idea of making a Tesla plane. And while it sounds far-fetched now, once Tesla finishes developing its range of electric vehicles, bringing renewable energy to commercial aviation seems like a pretty logical next step.

“The exciting thing to do would be a vertical takeoff and landing supersonic jet of some kind,” Tesla CEO Elon Musk said during a September appearance on the Joe Rogan Experience, noting that he’d discussed the idea with “friends and girlfriends.”

It sounds like pie-in-the-sky thinking, but Tesla has built a reputation as a firm that can take existing vehicle categories and electrify them with great success. The company first released the Roadster in 2008, the first all-electric production car with a lithium-ion battery, back when electric cars were a rare oddity. It then launched the Model S sedan in 2012, the Model X sport utility vehicle in 2015, and the Model 3 entry-level car in 2017. It’s now planning the entry-level Model Y, a Semi electric truck and a second-generation Roadster, all while producing around 7,000 cars per week.

Tesla also has a lot of experience building bigger and bigger batteries. It built the world’s largest lithium-ion battery in South Australia, a 100-megawatt installation that uses the Powerpack commercial product to store wind and solar energy. It also sells the Powerwall for home users, while the company’s Solar Roof can blend into an existing property to harvest energy. If anyone knows how to make a huge battery fly through the air with passengers and cargo, it could be Tesla.

Tesla Electric Plane: What’s Elon Musk Said About It?

Musk has discussed his plane idea several times over the years, going back to 2009 when he mentioned the idea to George Zachary at the Charles River Ventures CEO Summit, stating that “an electric plane gets more feasible as battery energy improves,” but “I try not to think about it because I have too much to think about.” His comments were captured on (somewhat grainy) video, which was published by TechCrunch at the time.

There have also been some pop cultural references. In the 2010 film Iron Man 2, Musk makes a brief cameo appearance, telling Robert Downey Jr.’s character Tony Stark that he’s “got an idea for an electric jet.” Stark, a character said to be modeled on Musk in the first place, tells him, “We’ll make it work.”

The idea continued gaining steam. In 2012, he mentioned in a Jalopnik discussion that he had “this airplane design that I’ve had in mind for about four years.” In 2013, he said during a YouTube video chat that “maybe at some point in the future” he would complete the plane if nobody else would. In 2014, Musk said at an MIT Aeronautics and Astronautics Centennial Symposium that he was “toying” with designs.

By 2015, then, Musk was saying pretty consistently that he had “a design in mind.” The following year, he told Tesla investors he was “dying to do that.” But his most recent remarks seemed like a hedge, when in 2017 he said he had “no plans right now” to begin pursuing the project in earnest.

Tesla Electric Plane: When Will It Take Off?

The key holdup with Musk’s idea, in his words, is that he’s waiting for battery technology to improve. A plane would require a high energy density in its battery pack, around 400 watt-hours of energy per kilogram. The battery found in a Tesla car comes in at around 250 watt-hours per kilogram. However, 400 is the bare minimum for making such a plane work, and Musk says 500 would be more ideal.

As for when we might reach that point? Subhash Dhar, CEO of XALT Energy, predicted in August 2017 that energy density is likely to reach that level by 2022. This would bring big benefits for electric cars, enabling a range of 400 miles per charge from a 50 kilowatt-hour battery pack. Dhar was speaking in an industry-wide sense, though, and it’s possible that Tesla’s internal developments envision a different timetable. Musk suggested during a chat with Tesla investors in 2017 that the company was “maybe four years or five years away from having 500 watt-hours per kilogram…maybe half a decade in volume production.”
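The reason energy density is the gating factor is straightforward arithmetic: for a fixed amount of stored energy, pack mass falls in proportion to density. A back-of-envelope sketch using the figures above (the 50 kWh pack is the car-sized example from this section, not a specification for any aircraft):

```python
def pack_mass_kg(capacity_kwh, density_wh_per_kg):
    """Mass of a battery pack of a given capacity at a given energy density."""
    return capacity_kwh * 1000 / density_wh_per_kg

# Today's cars (~250 Wh/kg), Musk's stated minimum (400) and his ideal (500).
for density in (250, 400, 500):
    print(f"{density} Wh/kg -> {pack_mass_kg(50, density):.0f} kg for a 50 kWh pack")
# 250 Wh/kg -> 200 kg, 400 Wh/kg -> 125 kg, 500 Wh/kg -> 100 kg
```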

While Tesla waits for the technology to catch up, Musk told Joe Rogan in September that the plane “isn’t necessary right now…electric cars are important. Solar energy is important. Stationary storage of energy is important. These things are much more important than creating an electric supersonic VTOL.”

Tesla Electric Plane: How Would It Work?

The energy density is critical to the way the plane works. Musk told Joe Rogan in September that with an electric plane, “you want to go as high as possible, so you need a certain energy density in the battery pack, because you have to overcome gravitational potential energy.” While it requires a lot of energy to rise, the energy used in cruising “is very low, and then you can recapture a large amount of your gravitational potential energy on the way down.” That means “you really don’t need any kind of reserve fuel if you will, because you have…the energy of height.”
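A rough way to see the “energy of height” point: each kilogram of aircraft lifted to cruise altitude stores about g × h of potential energy, much of which can in principle be recovered on descent. The 20-kilometer altitude below is an assumption chosen purely for illustration, not a figure Musk has given:

```python
G = 9.81                 # m/s^2, gravitational acceleration
ALTITUDE_M = 20_000      # assumed cruise altitude, for this illustration only

pe_wh_per_kg = G * ALTITUDE_M / 3600   # joules per kg converted to watt-hours

print(f"~{pe_wh_per_kg:.0f} Wh of potential energy per kg of aircraft at 20 km")
# ~55 Wh per kg of aircraft -- modest next to the ~400+ Wh stored in each kg
# of battery, and largely recoverable on the glide back down.
```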

Musk’s plane would probably focus on using electric motors to move a fan, as he explained to Stephen Colbert in 2014, which would also reduce the need for giant runways as seen with traditional jet engines. The main issue is reaching those higher altitudes.

“The higher you go, the faster you’ll go with the same amount of energy,” he told Joe Rogan in September. “At a certain altitude, you can go supersonic with quite a lot less energy per mile than an aircraft at 35,000 feet. Because it’s just a force balance.”

Tesla Electric Plane: Why Not a Flying Car?

Musk has actually spoken out against flying cars. In an April 2017 TED talk, he described the anxiety that the concept of overhead vehicles would create, saying “did they service their hubcap? Or is it going to come off and guillotine me as they’re flying past?”

In February 2017, he told Bloomberg that he was focusing on digging tunnels with The Boring Company as a means of increasing capacity in cities instead of taking to the skies, explaining that “if somebody doesn’t maintain their flying car, it could drop a hubcap and guillotine you. Your anxiety level will not decrease as a result of things that weigh a lot buzzing around your head.”

Tesla Electric Plane: What’s the Competition?

A number of competitors are already trying to beat Musk to the punch. Boeing-backed Zunum Aero aims to release a 12-seat hybrid electric plane by 2022, while a consortium of Airbus, Siemens and Rolls-Royce plans to release the E-Fan X hybrid plane in 2020. Like hybrid cars, these machines would depend on more than one fuel source to complete the trip.

Similar to Uber’s VTOL concept, Audi has worked with Airbus and Italdesign on a modular concept vehicle, capable of running for 31 miles across a city:

Image: The Pop.Up Next concept from Audi, Italdesign, and Airbus.

While these are cool, their short ranges leave the playing field open for a design that can handle international voyages. In June 2016, designers from the Technical University of Munich unveiled a design for the “Lilium Jet,” a two-passenger plane that could travel 300 miles on one charge with speeds of up to 250 mph. Another competitor on this front is budget airline Easyjet, which is working with Wright Electric to build a nine-seater plane set to fly next year with a long-term goal of electrifying its short-haul flights:

Image: Easyjet’s electric plane.

The race is on to electrify the skies.

Why Did NASA Wake Up This Interstellar Spacecraft After 37 Years?


Since it left Earth 40 years ago, NASA’s Voyager 1 spacecraft has had an enviable adventure across the solar system — and beyond. Though it hasn’t made headlines in a while, the spacecraft delighted space nerds Friday night when news broke that it fired up its backup thrusters for the first time in 37 years.

The question is, why? Scientists from NASA and the University of Arkansas tell Inverse it’s actually a great new boost for the ol’ spacecraft, especially since the thrusters it’s been using since 2014 aren’t doing so well.

“The attitude control thrusters on Voyager 1 are showing degradation, meaning they appear to be reaching their end of life,” NASA’s Voyager project manager Suzanne Dodd tells Inverse. “We did a test of the [trajectory correction maneuver (TCM)] thrusters to see if they would operate, and could replace the attitude control thrusters. By using the TCM thrusters we will gain two to three additional years of lifetime for the mission.”

Image: As Voyager 1 flew by Jupiter on March 1, 1979, it captured this photo of the Great Red Spot.

Back in its heyday, Voyager 1 visited Jupiter and Saturn — and took exquisite photos of its journey. In fact, according to NASA, the spacecraft hasn’t needed to use its TCM thrusters since November 8, 1980. But even though Voyager 1 is about 13.1 billion miles from Earth — and was half-asleep for a few decades — its TCM thrusters worked perfectly during this test. It took 19 hours and 35 minutes for the spacecraft’s signal to reach Earth, confirming the experiment worked.
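That delay is simply the one-way light-travel time. A quick sanity check of the article’s numbers (rounding the distance to 13.1 billion miles) lands within a few minutes of the reported 19 hours and 35 minutes:

```python
MILES_TO_KM = 1.609344
C_KM_PER_S = 299_792.458          # speed of light

distance_km = 13.1e9 * MILES_TO_KM        # ~21.1 billion km
one_way_seconds = distance_km / C_KM_PER_S

hours, remainder = divmod(one_way_seconds, 3600)
print(f"{int(hours)} h {int(remainder // 60)} min one way")  # ~19 h 32 min
```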

“Having a signal and firing its thrusters? That’s incredible at 21 billion kilometers from Earth in ultra freezing temperatures in the vacuum of space!” Caitlin Ahrens, an astronomer at the University of Arkansas, tells Inverse. “Voyager 1, in essence, has no limits to its travel!”

Since this recent test went over swimmingly, the space agency says it will switch Voyager 1 to the TCM thrusters sometime in January. It’ll likely perform a similar test on Voyager 2’s TCM thrusters down the line. In a few years, that spacecraft will join its twin, Voyager 1, in interstellar space. We love a happy ending!


Neil deGrasse Tyson Defends Elon Musk, Saying He’s “The Best Thing We’ve Had Since Thomas Edison”



Are you on team Elon?
Neil deGrasse Tyson defended Tesla CEO Elon Musk in an interview with TMZ, calling him “the best thing we’ve had since Thomas Edison.”

Tyson was defending Musk’s conduct during his appearance last week on Joe Rogan’s podcast, in which Musk was filmed smoking marijuana. (Recreational use of marijuana is legal in California, where the interview was filmed.)

“Can they leave him alone? Let the man get high if he wants to get high,” Tyson said.

Before his interview with Rogan, Musk told The New York Times in August that marijuana hurts one’s ability to work.

“Weed is not helpful for productivity. There’s a reason for the word ‘stoned.’ You just sit there like a stone on weed,” Musk said.

Tyson also addressed the process by which Musk explored the possibility of converting Tesla into a private company, saying Musk had to be accountable to the public since Tesla is traded on public markets.

“He’s got to obey the SEC, clearly. But if he doesn’t want to obey the SEC, then he’s got to have a private company, then he can do what he wants,” Tyson said.

Musk attracted controversy in August over his statements about wanting to take Tesla private, which raised questions about the certainty of funding Musk referenced in a tweet and where exactly that funding would come from.

Fox Business and The New York Times reported that the SEC had sent subpoenas to Tesla concerning Tesla’s plans to explore going private and Musk’s statements about the process. Musk ultimately said Tesla will remain a public company.

Tyson later said he’s a fan of Musk, suggesting that he’s among the most innovative people working today.

“Count me as team Elon,” Tyson said. “He’s the only game in town. He’s the best thing we’ve had since Thomas Edison.”

Elon Musk Just Gave The Most Revealing Look Yet at The Rocket That’ll Fly to The Moon And Mars



A stunning insight into the future of space travel.

Elon Musk has provided several new, rare, and telling glimpses into how his rocket company, SpaceX, is building a spacecraft to reach Mars.

On September 17, Musk announced that SpaceX would fly Japanese billionaire Yusaku Maezawa around the moon on the company’s Big Falcon Rocket or BFR. During that event, Musk showed off new renderings of the launch system, along with a few photos of the work going on inside SpaceX’s spaceship-building tent at the Port of Los Angeles.

These were the first new details about SpaceX’s rocket construction we’d gotten since April, when Musk posted a photo that revealed SpaceX was building the spacecraft using a 40-foot-long, 30-foot-wide cylindrical tool.

“SpaceX main body tool for the BFR interplanetary spaceship,” Musk said on Instagram.

Image: SpaceX’s main body tool for the BFR interplanetary spaceship. (Elon Musk/SpaceX; Instagram)

Aerospace industry experts say the newly released pictures reveal new information about how SpaceX is constructing the BFR and how quickly the project is moving.

“It’s unusual for companies and even government agencies that develop rockets to reveal much about the hardware they’re developing. But what Musk wants to do is to bring along the public with him,” Marco Cáceres, a senior space analyst at the Teal Group, told Business Insider. “He lives and breathes this company. So when he has hardware that he’s excited about, he just wants to show it and be as transparent as possible.”

What the new BFR fabrication images reveal

https://gfycat.com/ifr/FirstVioletAntbear

The BFR is designed to be a 39-story launch system made of two parts: a 180-foot-tall spaceship, from tip to tail, and a 230-foot-tall rocket booster (which the ship rides into orbit). Musk has said the spaceship is the “hardest part” of the system to build, so SpaceX is prototyping it first.

Musk’s vision is to launch the spaceship into orbit and refuel it while it circles Earth. Then the ship can fire up its engines, fly through space, land on Mars, and later rocket off of that planet and return to Earth. Because it’s designed to be 100% reusable, the system will supposedly be able to do all of this many times.

Musk said in 2016 that SpaceX is building the system “primarily of an advanced carbon fibre,” which can be stronger than steel at one-fifth of the weight.

One of the new images Musk shared on September 17 shows a ribbed, spoked tube with a worker inside. This is the inside of the cylindrical tool that Musk first revealed in April; it’s called a mandrel. Robots wrap layer upon layer of carbon-fibre tape around the mandrel to form a 30-foot-wide “barrel section” of the spaceship.

Image: Inside a mandrel that SpaceX uses to build carbon-fibre-composite sections of the Big Falcon Rocket. (SpaceX)

The carbon fibres are soaked in a glue-like epoxy, then heated so that the composite cures and hardens.

The photo below, which Musk also revealed on September 17, shows a barrel section that’s been cured and freed of the mandrel. The rounded dome on the left appears to be part of a propellant tank also made of carbon-fibre composites.

Image: A completed carbon-fibre-composite barrel section of SpaceX’s Big Falcon Rocket. (SpaceX)

Many carbon-fibre tapes are woven fabric. But Steve Nutt, a professor of chemical, aerospace, and mechanical engineering at the University of Southern California, told Business Insider that he thinks SpaceX engineers are wrapping the mandrel in an unwoven version of the tape.

Nutt said such unwoven tapes provide the “highest stiffness and strength” because they don’t easily kink or wrinkle (which can weaken a structure). They also maximise the amount of super-strong carbon fibre relative to epoxy, he said.

Nutt said it’s “quite clever what they are doing.”

Carbon fibre must be squeezed while it’s heated and cured, so Nutt thinks SpaceX may be using very large plastic bags and sucking out the air to compress the layers of tape. But he’s unsure how SpaceX is actually heating the parts.

“Structures are getting too big to oven-cure, so they might be using so-called ‘heat blankets,'” he said.

‘He’s shoving this in NASA’s face’

Cáceres, who’s studied the aerospace industry for decades, said the new photos highlight a project of epic proportions.

“This is probably the biggest challenge that I’ve seen since the Saturn V days, in terms of engineering,” Cáceres said, referring to NASA’s Apollo-era moon rocket. “Nothing I’ve seen is remotely this size.”

Even New Glenn, a reusable heavy-lift rocket being built by mega-billionaire Jeff Bezos’ rocket company Blue Origin, doesn’t compare, he said.

Image: Yusaku Maezawa stands inside a completed carbon-fibre-composite barrel section of SpaceX’s Big Falcon Rocket. (Yusaku Maezawa/Twitter)

Revealing these images forces the public – and potential investors – to take Musk seriously, Cáceres said.

Cáceres previously estimated that the BFR development program could cost about $US5 billion, and Musk gave the same estimate when he announced Maezawa’s role in SpaceX’s moon tourism mission.

“He’s looking for investors because he’s not Jeff Bezos, who could probably do this on his own,” Cáceres said. “Musk is not as wealthy. He can look for investors by building stuff and showing it off. If you see how much hardware he has and how big it is, people will say, ‘Yeah, this is a serious program.'”

If the 2023 moon mission aboard BFR – a project Maezawa calls #dearMoon – is successful, that would send a big message to NASA about SpaceX’s capabilities.

“This doesn’t look like a stunt,” Cáceres said. “It looks like a trial run.”

SpaceX has gotten billions of dollars in NASA funding through the agency’s Commercial Crew Program, which aims to partner with private companies to build a system for launching astronauts to the International Space Station. So it would make sense for Musk to try to get NASA’s attention (and money) again for the development of BFR.

Right now, NASA is building a giant, single-use launcher called the Space Launch System, which may cost more than $US20 billion to develop, with each launch priced at about $US1 billion. Meanwhile, SpaceX’s BFR may cost the company tens of millions of dollars to refuel and launch once the spacecraft is operational.

“In a way, he’s shoving it in NASA’s face and saying, ‘You guys are crazy to build this rocket,'” Cáceres said of Musk and SLS, respectively.

“Elon Musk is a very charismatic figure and a showman. He understands that, for many years, NASA has been trying to create public excitement about space exploration, and they always try to recreate the excitement around Apollo. But they’re not successful.”

Musk, on the other hand, may be beating NASA at that goal.

“The thing Musk is building looks like it’s out of a science fiction movie. He wants to get the public excited, and that excitement can attract investors,” Cáceres said.

If SpaceX does not attract NASA funding for BFR development, then the company might rely on space tourism, contracts with government and commercial interests to launch cargo and satellites, and profits from SpaceX’s planned constellation of 12,000 internet-providing satellites, called Starlink, to pay the ambitious program’s bills.

“People can’t say, ‘Musk is all talk.’ He has accomplished so much in a short amount of time,” Cáceres said.

“When I was at trade shows 10 years ago, when I asked Boeing and others about SpaceX, they rolled their eyes and said, ‘They aren’t going to be around very long.’ Now SpaceX is the major player in the industry.”

Elon Musk Says SpaceX’s BFR Design Is Inspired by Tintin Comics



Elon Musk unveiled a new design for SpaceX’s BFR rocket on Thursday, and he’s taking inspiration from a famous series of Belgian comics. The CEO confirmed on Twitter that the new design “intentionally” bears resemblance to the vehicles depicted in The Adventures of Tintin, the whimsical series that depicts Tintin and his friends embarking on far-flung trips to find new stories.

On Thursday, SpaceX announced the BFR rocket will also ferry a private passenger around the moon.

The BFR was first announced at the International Astronautical Congress in Adelaide, Australia, in September 2017. SpaceX plans to send two BFRs to Mars in 2022, followed by four more in 2024. Two of the latter four will fly the first humans to Mars, with the other two providing supplies so they can refuel and return home.

The redesign shared with the moon announcement bears similarities to rockets as featured in Hergé’s comic series. The 1950 comic Destination Moon shows a red-and-white checkered rocket with three giant fins on the base, elevating the rocket above the ground, which Tintin and his friends use to visit the moon and explore a secret government project. The story continued in the 1953 comic Explorers on the Moon.

The comics, published nearly two decades before NASA’s 1969 lunar visit, come surprisingly close to predicting Neil Armstrong’s famous words. Tintin exits the craft in the comic and, making his first steps on the dusty surface, proclaims: “This is it! I’ve walked a few steps! For the first time in the history of mankind there is an explorer on the moon!”

The new BFR design was depicted in a Twitter post.

The new ship looks notably different from the IAC renderings:

Image: The BFR on Mars, as depicted at IAC 2017.

Eagle-eyed followers immediately clocked some similarities between the giant-finned new craft and rockets from the Tintin comics.

Musk confirmed the similarity over Twitter:

It’s not the first time Musk has made reference to Tintin. In February, he dubbed two of SpaceX’s satellites Tintin A and B. The two craft are part of a plan to provide internet service from space, using a staggering 4,425 satellites starting next year. The goal is to bring internet access to remote places that lack the infrastructure to support connectivity.

Musk’s new Tintin ship will play a pivotal role in a historic mission. SpaceX plans to reveal more details of the mission on Monday:

SpaceX has signed the world’s first private passenger to fly around the Moon aboard our BFR launch vehicle – an important step toward enabling access for everyday people who dream of traveling to space. Only 24 humans have been to the Moon in history. No one has visited since the last Apollo mission in 1972. Find out who’s flying and why on Monday, September 17 at 6pm PT.

SpaceX Teases Announcement About Private Moon Trip


Image: The BFR flying around the moon.

Elon Musk’s SpaceX will update the public on Monday on its deal to send a human on a trip around the moon, it announced Thursday night. The lunar recreational mission will fly on the BFR, the in-development rocket designed to carry humans to Mars. The journey will mark the first time a person flies around the moon aboard a private spacecraft.

SpaceX hyped a webcast of the big reveal with the same statement quoted above, promising to reveal who’s flying and why on Monday, September 17 at 6pm PT.

The announcement marks a key development in plans to send private citizens into space. SpaceX announced in February 2017 that two private citizens had paid for a lunar orbit trip, which would take them deeper into space than any human has ever ventured before. The original announcement claimed the passengers would enter space in the Crew Dragon capsule, lifted into orbit on board the Falcon Heavy rocket.

The mission was scheduled for some time after the capsule took the first NASA astronauts to the ISS, a milestone scheduled for around April 2019. That plan seemed in doubt, however, when Musk told reporters the day prior to the Falcon Heavy test launch in February that the rocket probably wouldn’t send people into space.

“It looks like BFR development is moving quickly, and it will not be necessary to qualify Falcon Heavy for crewed spaceflight,” Musk said in February of this year after the Falcon Heavy demonstration launch. “We kind of tabled the Crew Dragon on Falcon Heavy in favor of focusing our energy on BFR.”

The BFR is a gargantuan rocket, first detailed at the International Astronautical Congress in Adelaide, Australia, in September 2017. Where the Falcon Heavy has a liftoff thrust of around 2,500 tons, making it the most powerful rocket in operation, the BFR has a thrust of 5,400 tons. SpaceX plans to use all this power to send two cargo rockets to Mars in 2022, before sending two further cargo rockets alongside two crewed rockets to the red planet in 2024. The reusable design of the rockets means the crews can use the planet’s resources to produce fuel and return home on the same rocket.
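
For a rough sense of scale, the short sketch below converts those quoted liftoff-thrust figures from metric tons-force to meganewtons and takes their ratio. The conversion factor is standard; the thrust values themselves are simply the numbers quoted above.

```python
# Convert the quoted liftoff thrusts from metric tons-force to meganewtons
# and compare them. 1 metric ton-force = 9.80665 kilonewtons.

TONNE_FORCE_KN = 9.80665

falcon_heavy_tons = 2_500   # liftoff thrust quoted above for Falcon Heavy
bfr_tons = 5_400            # liftoff thrust quoted above for BFR

falcon_heavy_mn = falcon_heavy_tons * TONNE_FORCE_KN / 1_000
bfr_mn = bfr_tons * TONNE_FORCE_KN / 1_000

print(f"Falcon Heavy: {falcon_heavy_mn:.1f} MN, BFR: {bfr_mn:.1f} MN, "
      f"ratio: {bfr_mn / falcon_heavy_mn:.2f}x")
```

At those figures, the BFR’s liftoff thrust works out to roughly 53 MN, a bit more than twice the Falcon Heavy’s roughly 24.5 MN.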

While the company’s private spaceflight plans have been relatively quiet since the February announcement, SpaceX officials indicated that the company still planned to move ahead with its historic flight. SpaceX representative James Gleeson said in June that the company is “still planning to fly private individuals around the moon, and there is growing interest from many customers.”

The company is expected to complete the first hop test firing of the BFR next year. A 95,000-gallon liquid oxygen tank arrived at the Boca Chica facility in Texas in July, expected to support propellant-loading operations during vehicle tests. The tank will sit alongside a 600-kilowatt solar array and two ground station antennas. Company president Gwynne Shotwell has previously indicated that Boca Chica will support these initial tests.

Elon Musk Will Put Humans on Mars Much Sooner Than We Think, Astronaut Says


The first space race of the 21st century has Mars as its finish line. While victory is still likely years, even decades away, Elon Musk and SpaceX have made the most tangible progress toward the red planet, and one person with plenty of first-hand expertise in all things space thinks the company could get us to Mars sooner than anybody would have guessed.

British astronaut Tim Peake, who has spent 185 days aboard the International Space Station, offered his thoughts on the future of Martian exploration at a recent event organized by the charity Aerobility. He said the first humans on Mars will likely get there in about two decades, if government agencies remain the main drivers, but there’s a chance private spaceflight could accelerate that timeline.

“Humans on Mars, I think will be the late 2030s,” said Peake. “That’s what the government space agencies and the International Space Exploration Group are working towards. It could be that some of [these people’s] programs bring that date forward. But, the late 2030s would be a realistic time frame. What could throw a big bowling ball through all that is commercial spaceflight.”

Musk and SpaceX aren’t the only players in the commercial field, of course. There are Richard Branson and Virgin Galactic, Jeff Bezos and Blue Origin, and more conventional private contractors for government projects, like Boeing and Lockheed Martin. But Musk’s company has raced ahead of its competitors in demonstrating the actual practical usability of its rocket technology, especially after last month’s Falcon Heavy launch.

“We have seen the ambitions of people like Elon Musk,” said Peake. “There are several other companies that also have ambitions to send people to Mars. I think that we will end up working very closely with these companies in public-private partnerships when we eventually go to Mars.”

That idea of public-private partnerships is an intriguing one. For his part, Musk has spoken exclusively about SpaceX when detailing his plans for Mars. A partnership with NASA on a Mars mission — perhaps one where NASA astronauts and terrestrial support staff conduct a mission using one of SpaceX’s planned BFR craft to get to the red planet — is certainly conceivable, but it’s not the plan right now.

Still, such a partnership could prove the most effective way to combine SpaceX’s cutting-edge rocket tech with NASA’s institutional experience. It’s also possible that NASA or another space agency, such as Peake’s own European Space Agency, could team up with a different private spaceflight company if Musk decides to go it alone.

The point is, though, that all those possibilities suggest an acceleration of government agencies’ current plans for spaceflight. If the set target is the late 2030s, it’s possible market competition could knock several years off that, especially if SpaceX can find the same success with the BFR that it did last month with the Falcon Heavy.

Why Elon Musk’s Tesla Will Last Thousands of Years in Space, Scientifically


After at least seven years of planning and preparation, on Tuesday, SpaceX’s Falcon Heavy will finally make its maiden voyage. Though the aerospace company will attempt to land several of its boosters back on terra firma, its payload will continue along on a one-way ticket toward Mars — but how long will its sojourn around the Red Planet last?

Image: Falcon Heavy’s Tesla heading toward Mars.

Let’s clear something up right off the bat: Elon Musk’s midnight cherry Roadster isn’t actually going to Mars. It’s not even going to be orbiting Mars in the way that the red planet’s two moons do. Instead, the car will be placed in a heliocentric orbit, meaning it will revolve around the sun, just like all the planets in our solar system.

“The Falcon Heavy will achieve this by using three rockets, with the first two separating after stage one of the launch,” Ben Thornber, an associate professor at the University of Sydney, writes in The Conversation. “The final rocket will then lift the Tesla Roadster up into space, where it will enter a highly elliptical orbit between the Earth and Mars. Without external interference, Falcon Heavy will remain in this orbit for thousands of years.”

The roadster will reach this heliocentric orbit via a maneuver called trans-Mars injection, which leaves it on an elliptical path around the sun that brings it close to the orbits of Earth and Mars again and again. The Tesla could end up passing several million miles from Mars, but it will still get to snuggle up to the planet, and to ours, many times.
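
To get a feel for how often those close approaches could recur, here is a minimal sketch applying Kepler’s third law, under which the orbital period in years equals the semi-major axis in astronomical units raised to the power 3/2. The perihelion and aphelion used below are illustrative assumptions, near Earth’s orbit and a bit beyond Mars’s, not SpaceX’s published trajectory.

```python
# Kepler's third law for a heliocentric orbit: T (years) = a ** 1.5, with the
# semi-major axis a in astronomical units (AU). The perihelion and aphelion
# below are illustrative assumptions, not SpaceX's published trajectory.

perihelion_au = 1.0    # assumed closest point to the sun, near Earth's orbit
aphelion_au = 1.66     # assumed farthest point, a bit beyond Mars's ~1.52 AU orbit

semi_major_axis_au = (perihelion_au + aphelion_au) / 2
period_years = semi_major_axis_au ** 1.5

print(f"Semi-major axis: {semi_major_axis_au:.2f} AU, "
      f"orbital period: {period_years:.2f} years")
```

Under those assumed endpoints, the car would loop the sun roughly every year and a half, which is why repeated near-passes of both planets’ orbits are expected over very long timescales.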

Of course, getting the Roadster to its final destination is the hard part. First and foremost, the Falcon Heavy has to not blow up on ascent, which is harder than it sounds. Then, it has to traverse Earth’s Van Allen radiation belts, which are full of high-energy particles waiting to whack the crap out of it. Then, the Tesla has to cruise in deep space for about six hours to reach the finish line.

It’s a lot, but if SpaceX pulls this off, it’ll make history — again.

“We estimate it will be in that orbit for several hundred million years, or maybe in excess of a billion years,” Musk announced in a press conference yesterday.

There’s only one way to find out whether or not Musk’s Tesla will make it to Mars — watch the launch on Tuesday at 1:30 p.m. Eastern.
