Top Scientists, Experts and Philosophers Warn of Dangers of Artificial Intelligence

If it is ever achieved – and the current consensus is that it will be – the creation of a fully-fledged artificial intelligence could be the most significant milestone in the history of humanity.

The technology could operate on a level currently inaccessible to humans and potentially reap major rewards, but it also carries massive dangers.

This is why several notable scientists, industry experts and technicians have banded together to deliver an open letter to the artificial intelligence development community. They are not asking for the research to stop, merely that there be some kind of oversight to mitigate the risks.

The open letter, which was devised by the Future of Life Institute and contains names such as Stephen Hawking, Elon Musk, Skype co-founder Jaan Tallinn, George Church, and Nick Bostrom, states:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The open letter also adds that AI research is currently too preoccupied with simply making AI happen, rather than with arriving at it in the best possible way. It states that researchers need to “focus research not only on making AI more capable, but also on maximizing the societal benefit of AI.”

The document also included a compiled report of research proposals which raised certain issues and outlined important factors that must be taken into account, including:

  • Verification – “Did I build this system right?”
  • Validity – “Did I build the right system?”
  • Security – “Is this system safe from manipulation?”
  • Control – “OK, I built the system wrong; can I fix it?”
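The gap between the first two questions – verification versus validity – can be made concrete with a deliberately trivial sketch. The function and scenario below are invented for illustration and are not drawn from the report itself:

```python
# Toy illustration of verification ("did I build the system right?")
# versus validity ("did I build the right system?").

def rank_candidates(scores):
    """Per our written spec: return scores in ascending order."""
    return sorted(scores)

# Verification passes: the implementation matches its spec exactly.
assert rank_candidates([3, 1, 2]) == [1, 2, 3]

# Validity fails: the stakeholder actually wanted the best candidate
# FIRST, i.e. descending order. The spec itself was wrong, so the
# system is verified but not valid.
wanted = sorted([3, 1, 2], reverse=True)
assert rank_candidates([3, 1, 2]) != wanted
```

A system can pass every test against its own specification and still be the wrong system – which is why the report treats the two questions separately.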

What are the dangers with Artificial Intelligence?


Firstly, as mentioned in the opening, a superintelligence (or artificial intelligence) will be unlike anything else humanity has ever created, and could affect the world in ways no other technology – including the wheel, the internal combustion engine and the internet – ever has. One of the co-signers of the letter, Nick Bostrom, a member of Oxford University’s philosophy faculty, has stated:

A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different.

Bostrom also claims the biggest issue isn’t necessarily a Skynet scenario in which robots attempt to kill off humanity or launch nuclear weapons, but one in which a small elite group has control of a super-intelligence. In this way, a super-intelligence could be pre-programmed with human prejudices towards others.

Similarly, an error in programming could result in unforeseen consequences. He posits that a super-intelligence dedicated to the mundane task of manufacturing paper clips (and nothing else) could break beyond expected limits in order to maximize its output of paper clips. In this sense, capitalistic sensibilities of increasing production, output and profit need to be contained within an ethical framework. This is something humans do innately, but it would have to be carefully programmed into an AI. As The Atlantic puts it, if the robots kill us, it’s because it’s their job and we’ve programmed them that way.
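The paper-clip thought experiment boils down to an optimizer pursuing a single proxy objective with no side constraints. A toy sketch of the difference an explicitly programmed constraint makes – all names and numbers here are invented for illustration, not part of Bostrom’s formulation:

```python
# Toy illustration of reward misspecification: an optimizer that
# maximizes only "paper clips made" consumes every available resource
# unless a constraint is built into the objective itself.

def maximize_clips(resources, clips_per_unit=10, reserve_for_humans=0):
    """Greedily convert resources into paper clips, holding back an
    optional reserve. With reserve_for_humans=0 (the naive objective),
    nothing at all is left over."""
    usable = max(resources - reserve_for_humans, 0)
    clips = usable * clips_per_unit
    remaining = resources - usable
    return clips, remaining

# Naive objective: maximize clips and nothing else.
print(maximize_clips(1000))                          # (10000, 0)

# Objective with an ethical constraint programmed in from the start.
print(maximize_clips(1000, reserve_for_humans=400))  # (6000, 400)
```

The point is not the arithmetic but that the constraint has to be part of the objective before optimization begins; nothing in the naive version “notices” that zero remaining resources is a catastrophe.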

But could robots really wipe out humanity? Stuart Armstrong, a philosopher and Research Fellow at Oxford’s Future of Humanity Institute, thinks it’s possible. In fact, he argues, AI might be the only technology capable of wiping us out.

One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk… First of all forget about the Terminator. The robots are basically just armoured bears and we might have fears from our evolutionary history but the really scary thing would be an intelligence that would be actually smarter than us – more socially adept… When they can become better at politics, at economics, potentially at technological research.

Furthermore, the resources required to develop an AI mean the feat will only be available to states and major corporations – entities with expressed agendas and attitudes towards certain people. Would the US allow its super-intelligence to benefit all the world’s population on an objective basis? What if that meant the AI diverting resources away from the US, perhaps even to unfriendly states? What if that benevolence fell foul of US foreign policy?


The same is true for corporations like Google. What if their AI suggested decreasing profits in exchange for increasing social support? Would Google really allow that? In that sense, can we actually create a truly non-prejudiced AI?

But there is one more terrifying conclusion. An artificial intelligence could rob us of our most important possession: our humanity.

More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.

What are the potential benefits?

But it is not all doom and gloom. AI could also potentially rid the world of many of its current problems.

It has been suggested that a fully-fledged super-intelligence could aid the development of space travel, unlock the secrets of creation, answer our fundamental questions, eliminate ageing and disease, calculate the best possible solutions to our problems and, if coupled with nanotechnology, end environmental destruction and “unnecessary suffering of all kinds”.

These are all lofty and worthwhile goals, but they still rest on one major issue – that a benevolent AI is developed in the first place. Bostrom claims the only solution is to build a super-intelligence that is fundamentally and irreversibly imbued with a sense of respect towards ALL humans (regardless of race, creed or political leanings) and perhaps even all sentient life.

But once again, if the machine is created by inherently flawed humans and by organizations whose expressed agendas are often subtly, if not explicitly, contrary to benevolence for all, is this possible?

Could an AI actually result in a world where social, economic and political divisions are more pronounced? One where those with access to super-intelligence (and its benefits) are divided from those without. This would no longer be the familiar division of the ‘First’ and ‘Third World’ or ‘Developed’ and ‘Developing’ countries, but a divide of much greater magnitude and, potentially, danger.