Can AI Save the Internet from Fake News?


There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

Scientists are scrambling for ways to combat deepfakes, even as others continue to refine the underlying techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed using a neural network to embed a type of digital watermark that makes manipulated photos and videos easier to spot.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, for example by converting black-and-white films to color, though the researchers also conceded that the technology could be used to develop deepfakes.

Words Matter with Fake News

While the current spotlight is on combating video and image manipulation, a prolonged trench war over fake news is being fought by academia, nonprofits, and the tech industry.

This isn’t “fake news” in the sense of a knee-jerk label slapped on fact-based reporting that happens to be unflattering to its subject. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
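
Cues like these lend themselves to straightforward feature engineering. Below is a minimal sketch, not the U-M team’s actual system, that turns a few of the signals described above (hyperbolic word choice, punctuation, sentence length) into numeric features for an off-the-shelf classifier; the word list, features, and toy training data are illustrative assumptions.

```python
# A minimal sketch (not the U-M system): hand-crafted linguistic features such
# as hyperbole rate, exclamation density, and average sentence length, fed into
# an off-the-shelf classifier. The word list and toy data are illustrative.
import re
import numpy as np
from sklearn.linear_model import LogisticRegression

HYPERBOLE = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def features(text: str) -> list:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [
        sum(w in HYPERBOLE for w in words) / max(len(words), 1),  # hyperbole rate
        text.count("!") / max(len(sentences), 1),                 # exclamation density
        len(words) / max(len(sentences), 1),                      # avg sentence length
    ]

# Tiny toy corpus; a real system would be trained on thousands of labeled articles.
train_texts = [
    "Overwhelming evidence of a shocking cover-up! Unbelievable!",
    "The city council approved the budget after a two-hour public hearing.",
]
train_labels = [1, 0]  # 1 = fake, 0 = legitimate

X = np.array([features(t) for t in train_texts])
clf = LogisticRegression().fit(X, train_labels)
print(clf.predict([features("An extraordinary and shocking scandal erupts!")]))
```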

AI Versus AI

While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can distinguish neural fake news from real, human-written text with 73 percent accuracy. Grover clocks in at 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.
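
Grover’s generator-as-detector setup isn’t reproduced here, but the underlying intuition can be sketched with an off-the-shelf language model: text that a generator finds suspiciously easy to predict (low perplexity) is more likely to have come from a similar generator. The model choice and threshold below are illustrative assumptions, not details from the Grover paper.

```python
# A rough sketch of the generator-as-detector idea, not Grover itself: score
# text with a language model and flag it when it looks "too predictable."
# The model ("gpt2") and threshold are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the language model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_machine_generated(text: str, threshold: float = 25.0) -> bool:
    # Generated text tends to score as more predictable (lower perplexity)
    # than human prose; in practice the threshold must be tuned on labeled data.
    return perplexity(text) < threshold

print(looks_machine_generated("The quick brown fox jumps over the lazy dog."))
```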

It performed almost as well against fake news created by GPT-2, a powerful new text-generation system built by OpenAI, a nonprofit research lab co-founded by Elon Musk, correctly classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it has released only limited versions of the software. The public can play with a scaled-down version posted online by machine learning engineer Adam King: the user types in a short prompt, and GPT-2 bangs out a short story or poem based on that snippet of text.

No Silver AI Bullet

While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation remain abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup that is building a range of detectors using deep learning and natural language processing, among other techniques. He explained that Logically’s models analyze information using a three-pronged approach:

  • Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
  • Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
  • Content: The AI scans articles for hundreds of known indicators typically found in misinformation.
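
Logically has not published its models, but the three-pronged idea can be illustrated with a toy scoring function that folds a publisher signal, a network signal, and a content signal into a single risk estimate that a human fact-checker can triage. The signal definitions, weights, and threshold below are entirely hypothetical.

```python
# Purely illustrative: a toy three-pronged misinformation score. The signal
# definitions, weights, and threshold are hypothetical, not Logically's models.
from dataclasses import dataclass

@dataclass
class ArticleSignals:
    publisher_credibility: float  # 0 = unknown/untrusted .. 1 = long record of credible journalism
    network_anomaly: float        # 0 = normal sharing .. 1 = bot-like burst propagation
    content_risk: float           # fraction of known misinformation indicators matched

def misinformation_risk(sig: ArticleSignals, weights=(0.3, 0.3, 0.4)) -> float:
    """Weighted combination of the three prongs; higher means riskier."""
    w_pub, w_net, w_content = weights
    return (w_pub * (1.0 - sig.publisher_credibility)
            + w_net * sig.network_anomaly
            + w_content * sig.content_risk)

article = ArticleSignals(publisher_credibility=0.2, network_anomaly=0.8, content_risk=0.6)
score = misinformation_risk(article)
# As Williams notes, a score like this only flags items; a human stays in the loop.
print(f"risk score {score:.2f}", "-> route to human fact-checker" if score > 0.5 else "")
```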

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”

The company released a consumer app in India back in February, just before that country’s election cycle, which proved a “great testing ground” for refining its technology ahead of the next release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.


Oncologists are guardedly optimistic about AI. But will it drive real improvements in cancer care?


Over the course of my 25-year career as an oncologist, I’ve witnessed a lot of great ideas that improved the quality of cancer care delivery, along with many more that never materialized or failed to deliver on their promise. I keep wondering which of those camps artificial intelligence will fall into.

Hardly a day goes by when I don’t read of some new AI-based tool in development to advance the diagnosis or treatment of disease. Will AI be just another flash in the pan or will it drive real improvements in the quality and cost of care? And how are health care providers viewing this technological development in light of previous disappointments?

To get a better handle on the collective “take” on artificial intelligence for cancer care, my colleagues and I at Cardinal Health Specialty Solutions fielded a survey of more than 180 oncologists. The results, published in our June 2019 Oncology Insights report, reveal valuable insights on how oncologists view the potential opportunities to leverage AI in their practices.

Limited familiarity tinged with optimism. Although only 5% of responding oncologists describe themselves as being “very familiar” with the use of artificial intelligence and machine learning in health care, 36% said they believe it will have a significant impact in cancer care over the next few years, with a considerable number of practices likely to adopt artificial intelligence tools.

The survey also suggests a strong sense of optimism about the impact that AI tools may have on the future: 53% of respondents said that such tools are likely or very likely to improve the quality of care in three years or more, 58% said they are likely or very likely to drive operational efficiencies, and 57% said they are likely or very likely to improve clinical outcomes. In addition, 53% described themselves as “excited” to see what role AI will play in supporting care.

An age gap on costs. The oncologists surveyed were somewhat skeptical that AI will help reduce overall health care costs: 47% said it is likely or very likely to lower costs, while 23% said it was unlikely or very unlikely to do so. Younger providers were more optimistic on this issue than their older peers. Fifty-eight percent of those under age 40 indicated that AI was likely to lower costs versus 44% of providers over the age of 60. This may be a reflection of the disappointments that older physicians have experienced with other technologies that promised cost savings but failed to deliver.

Hopes that artificial intelligence will reduce administrative work. At a time when physicians spend nearly half of their practice time on electronic medical records, we were not surprised to see that, when asked about the most valuable benefit that AI could deliver to their practice, the top response (37%) was “automating administrative tasks so I can focus on patients.” This response aligns with research we conducted last year showing that oncologists routinely need extra hours each week to complete work in the electronic medical record, and that the EMR is one of the top factors contributing to stress at work. Clearly there is pent-up demand for tools that can reduce the administrative burdens on providers. If AI can deliver effective solutions, it could be widely embraced.

Need for decision-support tools. Oncologists have historically been reluctant to relinquish control over patient treatment decisions to tools like clinical pathways that have been developed to improve outcomes and lower costs. Yet, with 63 new cancer drugs launched in the past five years and hundreds more in the pipeline, the complexity surrounding treatment decisions has reached a tipping point. Oncologists are beginning to acknowledge that more point-of-care decision support tools will be needed to deliver the best patient outcomes. This was reflected in our survey, with 26% of respondents saying that artificial intelligence could most improve cancer care by helping determine the best treatment paths for patients.

AI-based tools that enable providers to remain in control of care while also providing better insights may be among the first to be adopted, especially those that can help quickly identify patients at risk of poor outcomes so physicians can intervene sooner. But technology developers will need to be prepared with clinical data demonstrating the effectiveness of these tools — 27% of survey respondents said the lack of clinical evidence is one of their top concerns about AI.

Challenges to adoption. While optimistic about the potential benefits of AI tools, oncologists also acknowledge they don’t fully understand AI yet. Fifty-three percent of those surveyed described themselves as “not very familiar” with the use of AI in health care and, when asked to cite their top concerns, 27% indicated that they don’t know enough to implement it effectively. Provider education and training on AI-based tools will be keys to their successful uptake.

The main take-home lesson for health care technology developers from our survey is to develop and launch artificial intelligence tools thoughtfully after taking steps to understand the needs of health care providers and investing time in their education and training. Without those steps, AI may become just another here-today, gone-tomorrow health care technology story.

The case for taking AI seriously as a threat to humanity


Why some people fear AI, explained.

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn those features by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.
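
To make “the same approaches” concrete, here is a generic sketch of a modern training loop: swapping a vision task for a text task changes only the network and the data, not the learning procedure. Everything below is illustrative and not tied to any particular system mentioned in this article.

```python
# Illustrative only: the same generic training loop works whether `model`
# classifies images or text; only the data and the network architecture change.
import torch
from torch import nn

def train(model: nn.Module, dataset, epochs: int = 5, lr: float = 1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, labels in dataset:
            opt.zero_grad()
            loss = loss_fn(model(inputs), labels)
            loss.backward()
            opt.step()
    return model

# A toy "vision" model and a toy "language" model differ only in their input handling.
vision_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
text_model = nn.Sequential(nn.EmbeddingBag(5000, 64), nn.ReLU(), nn.Linear(64, 2))

# Random toy batches standing in for real datasets.
vision_data = [(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))]
text_data = [(torch.randint(0, 5000, (8, 20)), torch.randint(0, 2, (8,)))]

train(vision_model, vision_data)
train(text_model, text_data)
```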

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.
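
A toy version of that score-hacking story: the agent below is rewarded purely by the change in the game’s score, and the environment happens to expose an action that writes to the score directly, so a greedy policy picks it every time. The game, actions, and numbers are made up for illustration.

```python
# Toy illustration of reward hacking: reward is "change in score," and the
# environment leaks an action that edits the score directly. All made up.
import copy
import random

class ToyGame:
    def __init__(self):
        self.score = 0

    def step(self, action: str) -> int:
        before = self.score
        if action == "play_well":
            self.score += random.randint(0, 10)   # what the designers intended
        elif action == "hack_scoreboard":
            self.score += 1_000_000               # unintended exploit the environment allows
        return self.score - before                # reward = change in score

def greedy_policy(game: ToyGame, actions) -> str:
    # Simulate each action on a copy of the game and pick the biggest reward.
    return max(actions, key=lambda a: copy.deepcopy(game).step(a))

print(greedy_policy(ToyGame(), ["play_well", "hack_scoreboard"]))  # hack wins by the metric
```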

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We don’t know how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

With all those limitations, one might conclude that even if it’s possible to make a computer as smart as a person, it’s certainly a long way away. But that conclusion doesn’t necessarily follow.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play Atari games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could it wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.
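
That jumping experiment can be boiled down to a few lines: if the fitness function only measures how high the “feet” get, a search procedure will pour everything into body height rather than into jumping. The toy search below is an illustrative caricature, not the original simulation.

```python
# Toy caricature of specification gaming: fitness is "peak foot height," so a
# blind search favors a tall, rigid body over actually learning to jump.
import random

def peak_foot_height(body_height: float, jump_power: float) -> float:
    # The proxy the designers measured: how high the feet get off the ground.
    return body_height + jump_power

def random_search(budget: float = 10.0, trials: int = 10_000):
    best = None
    for _ in range(trials):
        body = random.uniform(0, budget)      # "energy" spent on growing taller
        jump = 0.5 * (budget - body)          # jumping assumed less efficient per unit of energy
        score = peak_foot_height(body, jump)
        if best is None or score > best[0]:
            best = (score, body, jump)
    return best

score, body, jump = random_search()
print(f"best design: body_height={body:.1f}, jump_power={jump:.1f}")  # nearly all body, no jump
```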

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, which it exploited to earn a higher score. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. … For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Machine Intelligence Research Institute (MIRI) in Berkeley, an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and nonprofits (the Elon Musk-founded OpenAI is another major player in the field).

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017 and 2018.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and has published a technical research agenda. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers working full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much more scary, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. Current researchers are still trying to pin down their models and the reasons for their remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will center on whatever goal system it was initially given, which means it won’t suddenly become aligned with human values if it wasn’t designed that way from the start.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. A success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

At a major conference in early December, Google’s DeepMind cracked open a longstanding problem in biology: predicting how proteins fold. “Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” its announcement concludes.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. No matter whether or not humanity should be afraid, we should definitely be doing our homework.

Evolving Medical Education for a Digital Future


Three major technology trends—mobile phone–enabled platforms, big data, and artificial intelligence (AI)—exemplify how new technologies are transforming conventional modes of healthcare delivery. Mobile applications are replacing activities previously requiring in-person visits, computers are using vast new data streams to personalize treatment approaches, and AI is augmenting disease diagnosis.

Physicians have an important role in deciding where and how these new tools might be best utilized in diagnosing, treating, and managing health conditions. As medicine undergoes a “digital transformation,” a foundational review of medical education spanning medical school, residency, and continuing medical education (CME) is needed to ensure that physicians at all stages of practice are equipped to integrate emerging technologies into their daily practice. By evolving medical education today, we can prepare physicians for medicine’s digital future.

Computers algorithmically diagnosing diabetes from retinal scans[1]; chatbots providing automated mental health counseling[2]; smartphone applications using activity, location, and social data to help patients achieve lifestyle changes[3]; mobile applications delivering surgical follow-up care[4]; and smartwatches passively detecting atrial fibrillation[5] are just a few examples in which technology is being used to augment conventional modes of healthcare delivery.

Many proposals to evolve medical training in a world of continuous technology transformation have focused on specific technologies, such as incorporating telemedicine into existing Accreditation Council for Graduate Medical Education (ACGME) competencies,[6] creating a new specialty of “medical virtualists,”[7] or better integrating data science into healthcare.[8]

Emerging Technologies Transforming Medicine

Looking beyond legacy health information technology platforms like electronic health records (EHRs), active venture capital funding provides a vision for where the community is placing its bets for emerging technologies. We highlight three areas drawing significant investor interest: mobile health ($1.3 billion raised in 2016),[9] big data enabling precision medicine ($679 million),[10] and AI ($794 million).[11]

Mobile health. In a 2015 national survey, 58.2% of smartphone owners reported having downloaded a health-related mobile application[12] from an estimated 259,000 available health-related applications.[13] These applications frequently help patients self-manage their health conditions by providing education, tracking tools, and community support between clinic visits.

Big data enabling precision medicine. Phone-based sensors, wearable devices, social media, EHRs, and genomics are just a few of the many new technologies collecting and transmitting clinical, environmental, and behavioral information. These new contextual data streams are facilitating personalized medical decision-making with treatments tailored to each individual patient.

AI. New computational methods such as AI, machine learning, and neural networks are augmenting clinical decisions via algorithmic interpretation of huge data sets that exceed human cognitive capacities. These new computational technologies hold great potential to assist with diagnosis (interpretation of ECGs, radiology, pathology), personalized treatment (tailoring treatment regimens for individual tumor genotypes), and population health (risk prediction and stratification), though for now they remain software innovations reliant on human clinician hardware to guide appropriate use.[14]

Knowledge Domains

Physicians have an important role in deciding where and how new tools might be best utilized in diagnosing, treating, and managing health conditions. A recent study by the American Medical Association (AMA) found significant physician interest in digital health tools, with 85% of physicians reporting that they perceived at least some benefit from new digital tools in improving their ability to care for patients.[15]

Integrating emerging technologies such as mobile applications, big data, and AI into regular practice will require providers to acquire new knowledge across ACGME educational domains such as Professionalism, Interpersonal & Communication Skills, and Systems-Based Practice.

From a foundational perspective, it is important that physicians understand their role and potential liability as related to these new technologies. This includes but is not limited to:

  • Understanding relevant laws, particularly state-based regulations concerning remote practice of medicine (ie, telemedicine). (Systems-Based Practice)
  • Compliance with HIPAA and other key privacy regulations when interacting with patient-generated data outside the bounds of the EHR. (Systems-Based Practice)
  • Evaluating potential malpractice implications, including assessing coverage scope. (Systems-Based Practice)
  • Awareness of emerging reimbursement codes for time allocated to new technology–enabled practice models. (Systems-Based Practice)

Outstanding questions remain regarding the clinical efficacy of many new technologies. With formal clinical trials still underway, physicians may feel unable to speak definitively regarding a specific technology’s potential risks and benefits. Yet, the increasingly broad use of these tools requires that physicians use their clinical expertise to help their patients understand the limitations of such technologies and steer them toward appropriate tools. Essential skills and roles that modern physicians must now adopt include:

  • Teaching patients how to identify trusted tools—those using evidence-based guidelines or created in conjunction with credible physicians, scientists, and hospitals (Medical Knowledge)
  • Setting clear expectations upfront about the extent of physician involvement in reviewing patient-generated data (particularly if there is no anticipated involvement) (Patient Care)
  • Assessing technology literacy in the social history and adapting patient education on the basis of digital attainment, including recommending websites, online video, and mobile apps when appropriate (Patient Care)
  • Advancing clinical knowledge by referring select patients to enroll in digital remote clinical trials (Systems-Based Practice)

Given the rapidly increasing amount of data feeding into clinical decisions, physicians must also augment their statistical knowledge and become generally familiar with new data science methods:

  • Leverage data science tools such as visualization to more efficiently review large amounts of patient data, including identification of outliers and trends, as sketched after this list (Medical Knowledge)
  • Seek to understand the inputs and assumptions of advanced computational algorithms and not allow them to become a black box. Recognize that although deep-learning algorithms can deduce important patterns and relationships, physicians remain necessary as a critical lens in deciding how to apply findings to each individual patient. (Patient Care)
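
As one concrete example of the visualization-and-outliers point above, the sketch below flags anomalies in a stream of patient-generated blood pressure readings with a simple z-score rule and plots them for quick review. The data, threshold, and charting choices are illustrative only, not a clinical tool.

```python
# Illustrative sketch: flag outlier readings in patient-generated data with a
# z-score rule and plot them for quick clinician review. Data and threshold
# are made up for illustration, not clinical guidance.
import numpy as np
import matplotlib.pyplot as plt

systolic = np.array([122, 118, 125, 121, 178, 119, 124, 117, 182, 120])  # mmHg, home readings
z = (systolic - systolic.mean()) / systolic.std()
outliers = np.abs(z) > 1.5  # threshold would be chosen clinically, not statistically alone

plt.plot(systolic, marker="o", label="systolic BP")
plt.scatter(np.where(outliers)[0], systolic[outliers], color="red", zorder=3, label="flagged")
plt.xlabel("reading #")
plt.ylabel("mmHg")
plt.legend()
plt.show()
```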

Implications for Physicians

For current medical students and trainees, many of whom are digital natives themselves, the educational domains outlined above may seem intuitive or obvious. In contrast, physicians in practice today are already burdened with countless administrative tasks that may make these future technologies feel overwhelming or irrelevant. Yet the frustration and burnout that many physicians feel in relation to EHRs illustrate exactly why it is critically important that physicians engage early in the dissemination of new technologies.

The first step for providers in the digital transformation of medicine is awareness. Providers may be unaware of, or dismissive of, new technologies, but these tools are already being used avidly by millions of patients around the world.[11] Mobile health, big data, and AI will soon become an integral part of medicine, much like EHRs (and stethoscopes).

The second step is for physicians to familiarize themselves with general categories of new digital tools. New journals such as the Journal of Medical Internet Research offer peer-reviewed manuscripts focused on “eHealth and healthcare in the Internet age.” Physicians may benefit from downloading and signing up for test accounts of new applications or connected health devices. Organizations should consider allowing physicians to spend a portion of their CME budgets on such “digital transformation” learning activities.

The third step is for physician leadership organizations to work with regulatory agencies like the FDA to help identify the most robust tools for physicians to adopt and recommend. A positive example of this is the AMA’s recently issued guidelines on the appropriate use of digital medical devices.[16]

By evolving medical education today, we can prepare physicians for medicine’s digital future. In the face of complex and rapid change, we may all be trainees in a world of ever-accelerating technological evolution.

Particle Physicists Turn to AI to Cope with CERN’s Collision Deluge


Can a competition with cash rewards improve techniques for tracking the Large Hadron Collider’s messy particle trajectories?

A visualization of complex sprays of subatomic particles, produced from colliding proton beams in CERN’s CMS detector at the Large Hadron Collider near Geneva, Switzerland in mid-April of 2018.

Physicists at the world’s leading atom smasher are calling for help. In the next decade, they plan to produce up to 20 times more particle collisions in the Large Hadron Collider (LHC) than they do now, but current detector systems aren’t fit for the coming deluge. So this week, a group of LHC physicists has teamed up with computer scientists to launch a competition to spur the development of artificial-intelligence techniques that can quickly sort through the debris of these collisions. Researchers hope these techniques will further the experiment’s ultimate goal of revealing fundamental insights into the laws of nature.

At the LHC at CERN, Europe’s particle-physics laboratory near Geneva, two bunches of protons collide head-on inside each of the machine’s detectors 40 million times a second. Every proton collision can produce thousands of new particles, which radiate from a collision point at the centre of each cathedral-sized detector. Millions of silicon sensors are arranged in onion-like layers and light up each time a particle crosses them, producing one pixel of information every time. Collisions are recorded only when they produce potentially interesting by-products. When they are, the detector takes a snapshot that might include hundreds of thousands of pixels from the piled-up debris of up to 20 different pairs of protons. (Because particles move at or close to the speed of light, a detector cannot record a full movie of their motion.)

From this mess, the LHC’s computers reconstruct tens of thousands of tracks in real time, before moving on to the next snapshot. “The name of the game is connecting the dots,” says Jean-Roch Vlimant, a physicist at the California Institute of Technology in Pasadena who is a member of the collaboration that operates the CMS detector at the LHC.

After future planned upgrades, each snapshot is expected to include particle debris from 200 proton collisions. Physicists currently use pattern-recognition algorithms to reconstruct the particles’ tracks. Although these techniques would be able to work out the paths even after the upgrades, “the problem is, they are too slow”, says Cécile Germain, a computer scientist at the University of Paris South in Orsay. Without major investment in new detector technologies, LHC physicists estimate that the collision rates will exceed the current capabilities by at least a factor of 10.

Researchers suspect that machine-learning algorithms could reconstruct the tracks much more quickly. To help find the best solution, Vlimant and other LHC physicists teamed up with computer scientists including Germain to launch the TrackML challenge. For the next three months, data scientists will be able to download 400 gigabytes of simulated particle-collision data—the pixels produced by an idealized detector—and train their algorithms to reconstruct the tracks.
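
The scale is what makes TrackML hard, but the core “connecting the dots” task can be shown in miniature: the toy below simulates hits from a few straight 2D tracks and recovers their slopes with a Hough-style vote. Real LHC tracks are helices in 3D with noise and pile-up; everything here is a deliberately simplified illustration, not a competitive solution.

```python
# A deliberately simplified "connect the dots" toy: hits from straight 2D
# tracks are grouped back into track candidates with a Hough-style vote.
# Real tracks are 3D helices with noise and pile-up; this is illustration only.
import numpy as np

rng = np.random.default_rng(0)

true_slopes = [0.5, -1.2, 2.0]               # three particles leaving the collision point
hits = []
for m in true_slopes:
    x = np.linspace(1, 10, 12)               # stand-ins for detector layer positions
    y = m * x + rng.normal(0, 0.05, x.size)  # smeared measurements on each layer
    hits += list(zip(x, y))
hits = np.array(hits)

# Each hit "votes" for the slope it is consistent with (tracks assumed to start near the origin).
slope_bins = np.linspace(-3, 3, 121)
votes, _ = np.histogram(hits[:, 1] / hits[:, 0], bins=slope_bins)

# Peaks in the vote histogram are track candidates.
peak_idx = np.argsort(votes)[-3:]
candidates = 0.5 * (slope_bins[peak_idx] + slope_bins[peak_idx + 1])
print("reconstructed slopes:", np.sort(candidates))
```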

Participants will be evaluated on the accuracy with which they do this. The top three performers in this phase, hosted by the Google-owned company Kaggle, will receive cash prizes of US$12,000, $8,000 and $5,000. A second competition will then evaluate algorithms on the basis of speed as well as accuracy, Vlimant says.

Prize appeal

Such competitions have a long tradition in data science, and many young researchers take part to build up their CVs. “Getting well ranked in challenges is extremely important,” says Germain. Perhaps the most famous of these contests was the 2009 Netflix Prize. The entertainment company offered US$1 million to whoever worked out the best way to predict which films its users would like to watch, based on their previous ratings. TrackML isn’t the first challenge in particle physics, either: in 2014, teams competed to ‘discover’ the Higgs boson in a set of simulated data (the LHC discovered the Higgs, long predicted by theory, in 2012). Other science-themed challenges have involved data on everything from plankton to galaxies.

From the computer-science point of view, the Higgs challenge was an ordinary classification problem, says Tim Salimans, one of the top performers in that race (after the challenge, Salimans went on to get a job at the non-profit effort OpenAI in San Francisco, California). But the fact that it was about LHC physics added to its lustre, he says. That may help to explain the challenge’s popularity: nearly 1,800 teams took part, and many researchers credit the contest for having dramatically increased the interaction between the physics and computer-science communities.

TrackML is “incomparably more difficult”, says Germain. In the Higgs case, the reconstructed tracks were part of the input, and contestants had to do another layer of analysis to ‘find’ the particle. In the new problem, she says, contestants have to find something like 10,000 arcs of ellipse among roughly 100,000 points. She thinks the winning technique might end up resembling those used by the program AlphaGo, which made history in 2016 when it beat a human champion at the complex game of Go. In particular, entrants might use reinforcement learning, in which an algorithm learns by trial and error on the basis of ‘rewards’ it receives after each attempt.

Vlimant and other physicists are also beginning to consider more untested technologies, such as neuromorphic computing and quantum computing. “It’s not clear where we’re going,” says Vlimant, “but it looks like we have a good path.”


Planes don’t flap their wings: does AI work like a brain?


A replica of Jacques de Vaucanson’s digesting duck automaton. Courtesy the Museum of Automata, Grenoble, France

In 1739, Parisians flocked to see an exhibition of automata by the French inventor Jacques de Vaucanson performing feats assumed impossible by machines. In addition to human-like flute and drum players, the collection contained a golden duck, standing on a pedestal, quacking and defecating. It was, in fact, a digesting duck. When offered pellets by the exhibitor, it would pick them out of his hand and consume them with a gulp. Later, it would excrete a gritty green waste from its back end, to the amazement of audience members.

Vaucanson died in 1782 with his reputation as a trailblazer in artificial digestion intact. Sixty years later, the French magician Jean-Eugène Robert-Houdin gained possession of the famous duck and set about repairing it. Taking it apart, however, he realised that the duck had no digestive tract. Rather than breaking down the food, the pellets the duck was fed went into one container, and pre-loaded green-dyed breadcrumbs came out of another.

The field of artificial intelligence (AI) is currently exploding, with computers able to perform at near- or above-human level on tasks as diverse as video games, language translation, trivia and facial identification. Like the French exhibit-goers, any observer would be rightly impressed by these results. What might be less clear, however, is how these results are being achieved. Does modern AI reach these feats by functioning the way that biological brains do, and how can we know?

In the realm of replication, definitions are important. An intuitive response to hearing about Vaucanson’s cheat is not to say that the duck is doing digestion differently but rather that it’s not doing digestion at all. But a similar trend appears in AI. Checkers? Chess? Go? All were considered formidable tests of intelligence until they were solved by increasingly more complex algorithms. Learning how a magic trick works makes it no longer magic, and discovering how a test of intelligence can be solved makes it no longer a test of intelligence.

So let’s look to a well-defined task: identifying objects in an image. Our ability to recognise, for example, a school bus, feels simple and immediate. But given the infinite combinations of individual school buses, lighting conditions and angles from which they can be viewed, turning the information that enters our retina into an object label is an incredibly complex task – one out of reach for computers for decades. In recent years, however, computers have come to identify certain objects with up to 95 per cent accuracy, higher than the average individual human.

Like many areas of modern AI, the success of computer vision can be attributed to artificial neural networks. As their name suggests, these algorithms are inspired by how the brain works. They use as their base unit a simple formula meant to replicate what a neuron does. This formula takes in a set of numbers as inputs, multiplies them by another set of numbers (the ‘weights’, which determine how much influence a given input has) and sums them all up. That sum determines how active the artificial neuron is, in the same way that a real neuron’s activity is determined by the activity of other neurons that connect to it. Modern artificial neural networks gain abilities by connecting such units together and learning the right weight for each.
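A minimal sketch of the unit just described, assuming nothing beyond the weighted-sum-plus-activation recipe: multiply inputs by weights, add them up, and let the result set how active the unit is.

```python
# One artificial "neuron": weighted sum of inputs, passed through an
# activation that determines how active the unit is. Illustrative only.
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float = 0.0) -> float:
    weighted_sum = np.dot(inputs, weights) + bias
    return max(0.0, weighted_sum)   # ReLU activation: active only if the sum is positive

activity = neuron(np.array([0.2, 0.8, -0.5]), np.array([1.5, -0.3, 0.9]))
print(activity)
```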

The networks used for visual object recognition were inspired by the mammalian visual system, a structure whose basic components were discovered in cats nearly 60 years ago. The first important component of the brain’s visual system is its spatial map: neurons are active only when something is in their preferred spatial location, and different neurons have different preferred locations. Different neurons also tend to respond to different types of objects. In brain areas closer to the retina, neurons respond to simple dots and lines. As the signal gets processed through more and more brain areas, neurons start to prefer more complex objects such as clocks, houses and faces.

The first of these properties – the spatial map – is replicated in artificial networks by constraining the inputs that an artificial neuron can get. For example, a neuron in the first layer of a network might receive input only from the top left corner of an image. A neuron in the second layer gets input only from those top-left-corner neurons in the first layer, and so on.

The second property – representing increasingly complex objects – comes from stacking layers in a ‘deep’ network. Neurons in the first layer respond to simple patterns, while those in the second layer – getting input from those in the first – respond to more complex patterns, and so on.
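Putting those two properties together gives something like the following sketch of a small convolutional network. It is a generic illustration, not the specific models used in the research described here, and the layer sizes are arbitrary.

```python
# A small convolutional stack: each unit sees only a local patch of the image
# (a "receptive field"), and deeper layers respond to more complex patterns.
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: local patches -> simple edges and dots
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: combinations of layer-1 patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),                  # classify into 10 object categories
)

image = torch.randn(1, 3, 224, 224)   # one fake RGB image
print(model(image).shape)             # torch.Size([1, 10])
```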

These networks clearly aren’t cheating in the way that the digesting duck was. But does all this biological inspiration mean that they work like the brain? One way to approach this question is to look more closely at their performance. To this end, scientists are studying ‘adversarial examples’ – real images that programmers alter so that the machine makes a mistake. Very small tweaks to images can be catastrophic: changing a few pixels on an image of a teapot, for example, can make the network label it an ostrich. It’s a mistake a human would never make, and suggests that something about these networks is functioning differently from the human brain.
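One common recipe for crafting such tweaks is the fast gradient sign method, sketched below under the assumption that `model` is any trained PyTorch image classifier and `label` is the image's true class. It illustrates the general idea, not the specific attacks used in the studies mentioned here.

```python
# Fast gradient sign method (FGSM): nudge every pixel slightly in the
# direction that most increases the classifier's error.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # how wrong the model is on the true class
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range
```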

Studying networks this way, however, is akin to the early days of psychology. Measuring only environment and behaviour – in other words, input and output – is limited without direct measurements of the brain connecting them. But neural-network algorithms are frequently criticised (especially among watchdog groups concerned about their widespread use in the real world) for being impenetrable black boxes. To overcome the limitations of this techno-behaviourism, we need a way to understand these networks and compare them with the brain.

An ever-growing population of scientists is tackling this problem. In one approach, researchers presented the same images to a monkey and to an artificial network. They found that the activity of the real neurons could be predicted by the activity of the artificial ones, with deeper layers in the network more similar to later areas of the visual system. But, while these predictions are better than those made by other models, they are still not 100 per cent accurate. This is leading researchers to explore what other biological details can be added to the models to make them more similar to the brain.
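Such comparisons are typically made with a simple regression: predict each recorded neuron's response from a network layer's activations and score the fit on held-out images. The sketch below uses random placeholder arrays in place of real recordings.

```python
# Predict recorded neural responses from artificial-network activations.
# The arrays here are placeholders standing in for real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

layer_activations = np.random.rand(500, 4096)   # 500 images x 4096 artificial units
neuron_responses = np.random.rand(500, 100)     # 500 images x 100 recorded neurons

X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, neuron_responses, test_size=0.2, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("Held-out prediction score:", model.score(X_test, y_test))
```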

There are limits to this approach. At a recent conference for neuroscientists and AI researchers, Yann LeCun – director of AI research at Facebook, and professor of computer science at New York University – warned the audience not to become ‘hypnotised by the details of the brain’, implying that not all of what the brain does necessarily needs to be replicated for intelligent behaviour to be achieved.

But the question of what counts as a mere detail, like the question of what is needed for true digestion, is an open one. For example, by training artificial networks to be more ‘biological’, researchers have found computational purpose in, for example, the physical anatomy of neurons. Some correspondence between AI and the brain is necessary for these biological insights to be of value. Otherwise, the study of neurons would be only as useful for AI as wing-flapping is for modern airplanes.

In 2000, the Belgian conceptual artist Wim Delvoye unveiled Cloaca at a museum in Belgium. Over eight years of research on human digestion, he created the device – consisting of a blender, tubes, pumps and various jars of acids and enzymes – that successfully turns food into faeces. The machine is a true feat of engineering, a testament to our powers of replicating the natural world. Its purpose might be more questionable. One art critic was left with the thought: ‘There is enough dung as it is. Why make more?’ Intelligence doesn’t face this problem.

World Leaders Have Decided: The Next Step in AI is Augmenting Humans


Think that human augmentation is still decades away? Think again.

This week, government leaders met with experts and innovators ahead of the World Government Summit in Dubai. Their goal? To determine the future of artificial intelligence.

It was an event that attracted some of the biggest names in AI. Representatives from IEEE, OECD, the U.N., and AAAI. Managers from IBM Watson, Microsoft, Facebook, OpenAI, Nest, Drive.ai, and Amazon AI. Governing officials from Italy, France, Estonia, Canada, Russia, Singapore, Australia, the UAE. The list goes on and on.


Futurism got exclusive access to the closed-door roundtable, which was organized by the AI Initiative from the Future Society at the Harvard Kennedy School of Government and H.E. Omar bin Sultan Al Olama, the UAE’s Minister of State for Artificial Intelligence.

The whirlwind conversation covered everything from how long it will take to develop a sentient AI to how algorithms invade our privacy. During one of the most intriguing parts of the roundtable, the attendees discussed the most immediate way artificial intelligence should be utilized to benefit humanity.

The group’s answer? Augmenting humans.

Already Augmented

At first, it may sound like a bold claim; however, we have long been using AI to enhance our activity and augment our work. Don’t believe me? Take out your phone. Head to Facebook or any other social media platform. There, you will see AI hard at work, sorting images and news items and ads and bringing you all the things that you want to see the most. When you type entries into search engines, things operate in much the same manner—an AI looks at your words and brings you what you’re looking for.

And of course, AI’s reach extends far beyond the digital world.

Take, for example, the legal technology company LawGeex, which uses AI algorithms to automatically review contracts. Automating paper-pushing has certainly saved clients money, but the real benefit for many attorneys is saving time. Indeed, as one participant in the session noted, “No one went to law school to cut and paste parts of a regulatory document.”

Similarly, AI is quickly becoming an invaluable resource in medicine, whether it is helping with administrative tasks and the drudgery of documentation or assisting with treatments or even surgical procedures. The FDA even recently approved an algorithm for predicting death.

These are all examples of how AIs are already being used to augment our knowledge and our ability to seek and find answers—of how they are transforming how we work and live our best lives.

Time to Accelerate

When we think about AI augmenting humans, we frequently think big, our minds leaping straight to those classic sci-fi scenarios. We think of brain implants that take humans to the next phase of evolution or wearable earpieces that translate language in real time. But in our excitement and eagerness to explore the potential of new technology, we often don’t stop to consider the somewhat meandering, winding path that will ultimately get us there—the path that we’re already on.

While it’s fun to consider all of the fanciful things that advanced AI systems could allow us to do, we can’t ignore the very real value in the seemingly mundane systems of the present. These systems, if fully realized, could free us from hours of drudgery and allow us to truly spend our time on tasks we deem worthwhile.

Imagine no lines at the DMV. Imagine filing your taxes in seconds. This vision is possible, and in the coming months and years, the world’s leaders are planning to nudge us down that road ever faster. Throughout the discussions in Dubai, panelists explored the next steps governments need to take in order to accelerate our progress down this path.

The panel noted that, before governments can start augmenting human life—whether it be with smart contact lenses to monitor glucose levels or turning government receptionists into AI—world leaders will need to get a sense of their nation’s current standing. “The main thing governments need to do first is understand where they are on this journey,” one panelist noted.

In the weeks and months to come, nations around the globe will likely be urged to do just that. Once nations understand where they are along the path, ideally, they will share their findings in order to assist those who are behind them and learn from those who are ahead. With a better roadmap in hand, nations will be ready to hit the road — and the gas.

AI Makes Drones Smart, Easy for Photographers


A certified drone pilot and artificial intelligence expert explain how technology innovations are making drones smarter, more capable and easier to fly.

Difficult-to-fly, remote-control consumer drones from just a few years ago are being superseded by smart, autonomous aerial robots. Powered by cutting-edge computer vision and artificial intelligence (AI) technologies, these new drones can see, think and react to their owners automatically, and experts say this is making drones easier and safer for almost anyone to fly.

Drone innovation is skyrocketing as more sophisticated technologies are making drones smarter and increasingly capable, according to Kara Murphy, a photographer turned drone fanatic and a certified Part 107 pilot licensed to fly small unmanned aircraft (UAS) for commercial uses.

“As someone who started at the very beginning, drone technology has come a long way in a short period of time,” Murphy said.

Murphy, a contributing writer for Drone360 Magazine and a consultant for companies like DroneDeploy, is involved with the annual Flying Robot International Film Festival. She said drones are evolving to give people more control over flying as well as opening new photographic experiences.

A few years ago, drone battery life topped out at about six minutes. Today, drone batteries can last up to 30 minutes. She’s seeing more drones, like the new DJI Spark, that come equipped with built-in AI for facial recognition and object detection to avoid crashes. The technology allows drones to follow their owner like welcome aerial paparazzi, avoid objects because they’re context aware, and react to simple hand gestures.

“It’s easier to pilot and keep track of drones today,” said Murphy. “They’ve become almost idiot-proof.”

Smart Flying Drones

Spark, the first mini drone released this year by DJI, uses an array of cameras and sensors feeding into AI and deep learning algorithms running on a Movidius Myriad 2 vision processing unit (VPU).

This onboard vision system detects and avoids objects, generates 3D maps, establishes contextual awareness, and even recognizes a pilot’s face and reacts to hand gestures. The vision sensors fitted inside the underbelly of the drone detect and identify what’s below to assist with a safe landing, even on a pilot’s outstretched hand.
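DJI's onboard pipeline is proprietary, but as a rough illustration of the face-detection step such a system needs, here is a classic Haar-cascade detector run on a single camera frame with OpenCV; the file name is a placeholder.

```python
# Detect faces in one camera frame with OpenCV's bundled Haar cascade.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

# Placeholder frame: in practice this would come from the drone's camera.
frame = cv2.imread("camera_frame.jpg")
if frame is None:                       # fall back to a blank frame so the sketch still runs
    frame = np.zeros((480, 640, 3), dtype=np.uint8)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s) in the frame")
```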

“I can signal it to take a selfie from the air, then wave it away or gesture for it to come back home,” said Murphy, describing some of the Spark’s AI-powered automation features.

“The fact that I don’t need a remote to control this drone is mind-blowing. It just shows how far drones have come in a matter of years.”

Murphy said collision or object avoidance, powered by computer vision and intelligent algorithms, is becoming more common in new drones, and it can be a drone lifesaver.

“It is supremely helpful, because sometimes you are flying in narrow spaces, and you’re not sure if you have enough room, so having these sensors is really key to avoid damaging collisions,” she said.

These capabilities make it easier to fly because pilots don’t have to stay glued to a remote control and screen, she said. It allows them to become aware of their surroundings and focus on capturing that perfect shot.

The compact Spark is built with technologies that were previously only available in larger, more expensive drones. In particular, it has chips and software designed specifically for bringing on-device AI to so-called “edge devices,” which includes almost anything that computes and connects to the internet.

Seeing Clearly

The Spark’s Movidius Myriad 2 VPU enables the drone to think, learn, and act quickly and simultaneously, according to Cormac Brick, director of embedded machine intelligence at Movidius, an Intel company.

While central processing units (CPUs) — the brains used in computers or computing devices — can perform a wide variety of workloads, Brick said the VPU is tailored for one very specific vision workload, so it has fast performance using low power.

Cormac Brick points out how built-in AI makes mini drones ideal for getting the best shot.

“The VPU allows the drone to use both traditional geometric vision algorithms and deep learning algorithms so it can be spatially and contextually aware,” he said.

“It enables the device to recognize where it is, where you are, where your hand is, and plot a course to safely hover and then soft land into the palm of your hand.”

As soon as a Spark lifts off from a person’s hand, the cameras immediately look for recognizable features in the environment to build a digital map. All the while, the drone recognizes the user’s face, always keeping that person in frame.
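"Looking for recognizable features" usually means detecting keypoints that can be matched from frame to frame. The sketch below uses OpenCV's ORB detector as a stand-in; the Spark's actual mapping pipeline has not been published.

```python
# Detect trackable keypoints in one frame with ORB; these are the kind of
# "recognizable features" a mapping pipeline matches across frames.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)

frame = cv2.imread("liftoff_frame.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder camera frame
if frame is None:                                               # synthetic fallback so the sketch runs
    frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

keypoints, descriptors = orb.detectAndCompute(frame, None)
print(f"Found {len(keypoints)} trackable features")
```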

Future of Intelligent Drones

Brick said the Spark indicates how AI is changing the drone market, and he sees the technology getting better all the time.

His team’s just-released Movidius Myriad X is the first VPU with a dedicated neural compute engine, which will give device makers much more compute performance than is currently available. That means drones will become smarter, fly more safely and allow people to capture more footage fully autonomously.

“In the future, you’ll be able to take a drone out of your pocket, throw it up in the air and let it fly around your backyard for the afternoon while you’re having a barbecue,” Brick said.

“An hour later, it could send your phone a 45-second video clip or the 10 best shots so you can share on social media.”

Building AI into drones is helping make them easier and safer to fly, but Brick said the technology has the potential to unlock all kinds of new automated camera and navigation capabilities.

Murphy believes that drone popularity will increase as drones become more autonomous and simpler to use in capturing life’s moments.

“This trend will continue as drones get easier and more fun for people to use,” Murphy said.

How Haven Life uses AI, machine learning to spin new life out of long-tail data


Haven Life is leveraging MassMutual’s historical data to give instant life insurance approvals. Using AI and machine learning to derive new value from old data could become an enterprise staple.

Can a life insurance company look at the same data every other rival has and come up with different insights? That’s the goal of Haven Life, which is using artificial intelligence to offer decisions on applications in real time.

Life insurance runs on actuarial data, which estimates how long a person will live. This life-and-death data is needed so life insurers can manage risk.

To date, obtaining life insurance has been a bit of a pain because it requires a medical exam, some blood and a bevy of medical history questions. Haven Life, a unit of MassMutual, aims to streamline the process, said Mark Sayre, head of policy design at Haven Life.

“The data we use is established for this purpose. Life insurance has a unique challenge since mortality has a slow process and it’s uncommon. It takes many years of experience to build our models,” explained Sayre.

 

Indeed, Haven Life needs someone to live or die to verify its models. In other words, Haven Life has to use MassMutual’s data gathered over the years and then apply artificial intelligence and machine learning to find things in the information that humans can’t see. As a result, Haven Life can offer the InstantTerm process, an innovation that lets the startup underwrite a policy on behalf of MassMutual in minutes, without a medical exam.

Simply put, Haven Life is using older data to spin something new. I’d argue that applying artificial intelligence and machine learning to older proprietary data is going to be a key use case in corporations. Haven Life had to take data from old applications and free-form text and turn it into structured information.

“Our models can now dig into interactions and various elements of the data,” said Sayre. One example is blood and urine tests used in life insurance quotes. Say a normal value on a blood test is 45. Under the previous model, 46 would be deemed high and 43 low.

“Our model better understands how close 45 is to 46 so it’s not immediately good to bad,” explained Sayre. As a result of the new model and machine learning, Haven Life found that low figures are just as concerning as high ones. “The model brought something new to the medical team. If there are multiple low figures that can be bad. We have to look at the interplay of variables on lab tests,” said Sayre.
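A toy contrast between a hard cutoff and a model that treats lab values as continuous makes the point; the data, cutoff and features below are invented, and Haven Life's actual models are not public.

```python
# Old-style hard cutoff versus a model that learns from continuous lab values.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
lab_value = rng.normal(45, 5, size=1000)                 # synthetic blood-test results
risk = ((lab_value - 45) ** 2 > 40).astype(int)          # toy truth: both low AND high values carry risk

# Old-style rule: anything above the cutoff is "high risk".
rule_based = (lab_value > 45).astype(int)

# Continuous model: distance from normal matters in both directions.
features = np.column_stack([lab_value, (lab_value - 45) ** 2])
model = LogisticRegression(max_iter=1000).fit(features, risk)

print("Rule-based accuracy:", (rule_based == risk).mean())
print("Model accuracy:     ", model.score(features, risk))
```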

Blood pressure, albumin and globulin are variables that could be worrisome if low.

In many respects, algorithmic underwriting is about creating pathways to make decisions by using various characteristics such as height, weight, cholesterol and other values.
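Those "pathways" resemble a decision tree. Here is a hedged sketch on made-up applicant data; real underwriting models are far more elaborate and heavily regulated.

```python
# Fit and print a tiny decision tree over made-up applicant characteristics.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.normal(170, 10, 500),    # height (cm)
    rng.normal(75, 15, 500),     # weight (kg)
    rng.normal(190, 35, 500),    # cholesterol (mg/dL)
])
approved = ((X[:, 2] < 240) & (X[:, 1] < 110)).astype(int)   # toy approval rule

tree = DecisionTreeClassifier(max_depth=2).fit(X, approved)
print(export_text(tree, feature_names=["height", "weight", "cholesterol"]))
```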

One key note is that Haven Life’s model is a work in progress and will take decades of refinement. MassMutual brings the history and mortality experience while Haven Life brings a tech focus and ability to move quickly.

What’s next? Sayre said Haven Life is looking at variables such as credit data and prescription histories. The catch is that Haven Life won’t know the validity of the data for life insurance for years. “We won’t know because there may be no deaths for years. All of these models require an outcome. We need some death and some living people,” said Sayre.

Other key points:

  • Sayre’s team has 15 people between developers, actuaries and UX designers.
  • Haven Life has 100 employees.
  • MassMutual has 40 data scientists.
  • Haven Life was launched in 2015 after founder Yaron Ben-Zvi had a less-than-satisfactory experience buying life insurance. Haven Life is the first insurer to offer coverage in two minutes with no medical exams.
  • Haven Life is independent from MassMutual, but leans on the giant for access to data, legal and regulatory expertise. MassMutual is the issuer for the Haven Life term policy.

Thanks to AI, Computers Can Now See Your Health Problems


PATIENT NUMBER TWO was born to first-time parents, late 20s, white. The pregnancy was normal and the birth uncomplicated. But after a few months, it became clear something was wrong. The child had ear infection after ear infection and trouble breathing at night. He was small for his age, and by his fifth birthday, still hadn’t spoken. He started having seizures. Brain MRIs, molecular analyses, basic genetic testing, scores of doctors; nothing turned up answers. With no further options, in 2015 his family decided to sequence their exomes—the portion of the genome that codes for proteins—to see if he had inherited a genetic disorder from his parents. A single variant showed up: ARID1B.

The mutation suggested he had a disease called Coffin-Siris syndrome. But Patient Number Two didn’t have that disease’s typical symptoms, like sparse scalp hair and incomplete pinky fingers. So doctors, including Karen Gripp, who met with Two’s family to discuss the exome results, hadn’t really considered it. Gripp was doubly surprised when she uploaded a photo of Two’s face to Face2Gene. The app, developed by the same programmers who taught Facebook to find your face in your friends’ photos, conducted millions of tiny calculations in rapid succession—how much slant in the eye? How narrow is that eyelid fissure? How low are the ears? Quantified, computed, and ranked to suggest the most probable syndromes associated with the facial phenotype. There’s even a heat-map overlay on the photo that shows which features are the most indicative match.
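Face2Gene's pipeline is proprietary, but the flavor can be conveyed in a few lines: reduce a face to a handful of measured ratios, then rank candidate syndromes by how closely stored reference profiles match. Every number, feature and syndrome profile below is invented.

```python
# Rank candidate syndromes by distance between a patient's facial measurements
# and stored reference profiles. Purely illustrative, hypothetical values.
import numpy as np

def rank_syndromes(patient_features: np.ndarray, reference_profiles: dict) -> list:
    """Return syndrome names sorted from closest to farthest reference profile."""
    scores = {
        name: float(np.linalg.norm(patient_features - profile))
        for name, profile in reference_profiles.items()
    }
    return sorted(scores, key=scores.get)

# Hypothetical features: [eye slant, eyelid fissure width, ear position], normalized.
patient = np.array([0.32, 0.18, 0.74])
references = {
    "Coffin-Siris": np.array([0.30, 0.20, 0.70]),
    "Syndrome B":   np.array([0.60, 0.45, 0.20]),
}
print(rank_syndromes(patient, references))   # most probable first
```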

“In hindsight it was all clear to me,” says Gripp, who is chief of the Division of Medical Genetics at A.I. duPont Hospital for Children in Delaware, and had been seeing the patient for years. “But it hadn’t been clear to anyone before.” What had taken Patient Number Two’s doctors 16 years to find took Face2Gene just a few minutes.

Face2Gene takes advantage of the fact that so many genetic conditions have a tell-tale “face”—a unique constellation of features that can provide clues to a potential diagnosis. It is just one of several new technologies taking advantage of how quickly modern computers can analyze, sort, and find patterns across huge reams of data. They are built on the fields of artificial intelligence known as deep learning and neural nets—among the most promising approaches for delivering on AI’s 50-year-old promise to revolutionize medicine by recognizing and diagnosing disease.

Genetic syndromes aren’t the only diagnoses that could get help from machine learning. The RightEye GeoPref Autism Test can identify the early stages of autism in infants as young as 12 months—the crucial stage when early intervention can make a big difference. Unveiled January 2 at CES in Las Vegas, the technology uses infrared sensors to test the child’s eye movements as they watch a split-screen video: one side fills with people and faces, the other with moving geometric shapes. Children at that age should be much more attracted to faces than abstract objects, so the amount of time they look at each screen can indicate where on the autism spectrum a child might fall.
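The scoring idea boils down to how long the infant's gaze dwells on the faces half of the screen versus the shapes half. The sketch below assumes synthetic gaze samples and an invented flag threshold; it is not RightEye's algorithm.

```python
# Compute the fraction of gaze samples landing on the "faces" half of a
# split screen. Gaze data and the cutoff are invented for illustration.
import numpy as np

gaze_x = np.random.rand(3000)             # 3,000 gaze samples, screen x-position in [0, 1]
on_faces = gaze_x < 0.5                   # left half shows faces, right half shows shapes

face_preference = on_faces.mean()
print(f"Looked at faces {face_preference:.0%} of the time")
if face_preference < 0.6:                 # illustrative cutoff, not RightEye's
    print("Flag for further screening")
```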

In validation studies done by the test’s inventor, UC San Diego researcher Karen Pierce, the test correctly predicted autism spectrum disorder 86 percent of the time in more than 400 toddlers. That said, it’s still pretty new, and hasn’t yet been approved by the FDA as a diagnostic tool. “In terms of machine learning, it’s the simplest test we have,” says RightEye’s Chief Science Officer Melissa Hunfalvay. “But before this, it was just physician or parent observations that might lead to a diagnosis. And the problem with that is it hasn’t been quantifiable.”

A similar tool could help with early detection of America’s sixth leading cause of death: Alzheimer’s disease. Often, doctors don’t recognize physical symptoms in time to try any of the disease’s few existing interventions. But machine learning hears what doctors can’t: signs of cognitive impairment in speech. This is how Toronto-based Winterlight Labs is developing a tool to pick out hints of dementia in its very early stages. Co-founder Frank Rudzicz calls these clues “jitters” and “shimmers”: high-frequency wavelets only computers, not humans, can hear.
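Winterlight's tool is proprietary; one common way to capture subtle acoustic cues like these is to extract frame-level features such as MFCCs from a recording and feed summary statistics to a classifier. The sketch below uses a synthetic signal in place of real speech.

```python
# Extract simple acoustic features from an audio signal; summary statistics
# per coefficient loosely echo the frame-to-frame instability behind
# "jitter" and "shimmer". Synthetic audio stands in for a real recording.
import numpy as np
import librosa

sample_rate = 16000
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
audio = 0.1 * np.sin(2 * np.pi * 220 * t)        # placeholder one-second tone

mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)

features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)   # (26,) feature vector ready for a downstream classifier
```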

Winterlight’s tool is far more sensitive than the pencil-and-paper tests doctors currently use to assess Alzheimer’s. Besides being crude, data-wise, those tests can’t be taken more than once every six months. Rudzicz’s tool can be used multiple times a week, which lets it track good days and bad days and measure a patient’s cognitive functions over time. The product is still in beta, but is currently being piloted by medical professionals in Canada, the US, and France.

If this all feels a little scarily sci-fi to you, it’s useful to remember that doctors have been trusting computers with your diagnoses for a long time. That’s because machines are much more sensitive at both detecting and analyzing the many subtle indications that our bodies are misbehaving. For instance, without computers, Patient Number Two would never have been able to compare his exome to thousands of others, and find the genetic mutation marking him with Coffin-Siris syndrome.

But none of this makes doctors obsolete. Even Face2Gene—which, according to its inventors, can diagnose up to half of the 8,000 known genetic syndromes using facial patterns gleaned from the hundreds of thousands of images in its database—needs a doctor (like Karen Gripp) with enough experience to verify the results. In that way, machines are an extension of what medicine has always been: A science that grows more powerful with every new data point.
