How AI Will Rewire Us


Fears about how robots might transform our lives have been a staple of science fiction for decades. In the 1940s, when widespread interaction between humans and artificial intelligence still seemed a distant prospect, Isaac Asimov posited his famous Three Laws of Robotics, which were intended to keep robots from hurting us. The first—“a robot may not injure a human being or, through inaction, allow a human being to come to harm”—followed from the understanding that robots would affect humans via direct interaction, for good and for ill. Think of classic sci-fi depictions: C-3PO and R2-D2 working with the Rebel Alliance to thwart the Empire in Star Wars, say, or HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina plotting to murder their ostensible masters. But these imaginings were not focused on AI’s broader and potentially more significant social effects—the ways AI could affect how we humans interact with one another.

Radical innovations have previously transformed the way humans live together, of course. The advent of cities sometime between 5,000 and 10,000 years ago meant a less nomadic existence and a higher population density. We adapted both individually and collectively (for instance, we may have evolved resistance to infections made more likely by these new circumstances). More recently, the invention of technologies including the printing press, the telephone, and the internet revolutionized how we store and communicate information.

As consequential as these innovations were, however, they did not change the fundamental aspects of human behavior that comprise what I call the “social suite”: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching. The basic contours of these traits remain remarkably consistent throughout the world, regardless of whether a population is urban or rural, and whether or not it uses modern technology.

But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are—not just in our direct interactions with the machines in question, but in our interactions with one another.

Consider some experiments from my lab at Yale, where my colleagues and I have been exploring how such effects might play out. In one, we directed small groups of people to work with humanoid robots to lay railroad tracks in a virtual world. Each group consisted of three people and a little blue-and-white robot sitting around a square table, working on tablets. The robot was programmed to make occasional errors—and to acknowledge them: “Sorry, guys, I made the mistake this round,” it declared perkily. “I know it may be hard to believe, but robots make mistakes too.”

As it turned out, this clumsy, confessional robot helped the groups perform better—by improving communication among the humans. They became more relaxed and conversational, consoling group members who stumbled and laughing together more often. Compared with the control groups, whose robot made only bland statements, the groups with a confessional robot were better able to collaborate.

In another, virtual experiment, we divided 4,000 human subjects into groups of about 20, and assigned each individual “friends” within the group; these friendships formed a social network. The groups were then assigned a task: Each person had to choose one of three colors, but no individual’s color could match that of his or her assigned friends within the social network. Unknown to the subjects, some groups contained a few bots that were programmed to occasionally make mistakes. Humans who were directly connected to these bots grew more flexible, and tended to avoid getting stuck in a solution that might work for a given individual but not for the group as a whole. What’s more, the resulting flexibility spread throughout the network, reaching even people who were not directly connected to the bots. As a consequence, groups with mistake-prone bots consistently outperformed groups containing bots that did not make mistakes. The bots helped the humans to help themselves.
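
To make the setup concrete, here is a minimal sketch (Python, standard library only) of a coloring game of this kind: agents on a small network repeatedly pick colors that avoid their neighbors' choices, while a few bots occasionally choose at random. The network structure, group size, noise rate, and update rule are illustrative assumptions, not the parameters of the actual experiment; the toy model only shows how a little well-placed randomness can shake a group out of a stuck configuration.

```python
import random

COLORS = ["red", "green", "blue"]

def ring_with_chords(n, extra_edges, rng):
    """Small random network: a ring of n nodes plus a few random chords."""
    edges = {(min(i, (i + 1) % n), max(i, (i + 1) % n)) for i in range(n)}
    while len(edges) < n + extra_edges:
        a, b = rng.sample(range(n), 2)
        edges.add((min(a, b), max(a, b)))
    nbrs = {i: set() for i in range(n)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    return nbrs

def rounds_to_solve(n=20, n_bots=3, bot_noise=0.1, max_rounds=200, seed=0):
    rng = random.Random(seed)
    nbrs = ring_with_chords(n, extra_edges=10, rng=rng)
    color = {i: rng.choice(COLORS) for i in range(n)}
    bots = set(rng.sample(range(n), n_bots))
    for rnd in range(1, max_rounds + 1):
        for i in rng.sample(range(n), n):            # agents update in random order
            if i in bots and rng.random() < bot_noise:
                color[i] = rng.choice(COLORS)        # bot "mistake": pick at random
                continue
            free = [c for c in COLORS if c not in {color[j] for j in nbrs[i]}]
            if free:
                color[i] = rng.choice(free)          # greedy: avoid neighbors' colors
        if all(color[i] != color[j] for i in range(n) for j in nbrs[i]):
            return rnd                               # solved: no conflicting neighbors
    return max_rounds                                # not solved within the limit

trials = range(50)
noisy = sum(rounds_to_solve(bot_noise=0.1, seed=s) for s in trials) / len(trials)
clean = sum(rounds_to_solve(bot_noise=0.0, seed=s) for s in trials) / len(trials)
print(f"mean rounds to a conflict-free coloring: noisy bots {noisy:.1f}, error-free bots {clean:.1f}")
```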

Both of these studies demonstrate that in what I call “hybrid systems”—where people and robots interact socially—the right kind of AI can improve the way humans relate to one another. Other findings reinforce this. For instance, the political scientist Kevin Munger directed specific kinds of bots to intervene after people sent racist invective to other people online. He showed that, under certain circumstances, a bot that simply reminded the perpetrators that their target was a human being, one whose feelings might get hurt, could cause that person’s use of racist speech to decline for more than a month.

But adding AI to our social environment can also make us behave less productively and less ethically. In yet another experiment, this one designed to explore how AI might affect the “tragedy of the commons”—the notion that individuals’ self-centered actions may collectively damage their common interests—we gave several thousand subjects money to use over multiple rounds of an online game. In each round, subjects were told that they could either keep their money or donate some or all of it to their neighbors. If they made a donation, we would match it, doubling the money their neighbors received. Early in the game, two-thirds of players acted altruistically. After all, they realized that being generous to their neighbors in one round might prompt their neighbors to be generous to them in the next one, establishing a norm of reciprocity. From a selfish and short-term point of view, however, the best outcome would be to keep your own money and receive money from your neighbors. In this experiment, we found that by adding just a few bots (posing as human players) that behaved in a selfish, free-riding way, we could drive the group to behave similarly. Eventually, the human players ceased cooperating altogether. The bots thus converted a group of generous people into selfish jerks.
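
A stripped-down sketch of this dynamic, under assumed parameters (group size, starting cooperation rate, and a simple imitation rule, none of which come from the actual study), shows how a handful of always-defecting bots can drag human cooperation down round after round:

```python
import random

def cooperation_over_time(n_humans=60, n_bots=0, rounds=15, seed=1):
    """Toy public-goods dynamic: humans are conditional cooperators who match the
    cooperation level they see around them; bots never cooperate."""
    rng = random.Random(seed)
    humans = [rng.random() < 2 / 3 for _ in range(n_humans)]  # ~two-thirds start generous
    trajectory = []
    for _ in range(rounds):
        trajectory.append(sum(humans) / n_humans)
        observed = sum(humans) / (n_humans + n_bots)          # bots add zero cooperation
        humans = [rng.random() < observed for _ in range(n_humans)]
    return [round(x, 2) for x in trajectory]

print("no bots:            ", cooperation_over_time(n_bots=0))
print("6 free-riding bots: ", cooperation_over_time(n_bots=6))
```

In this toy model, the no-bot group's cooperation rate drifts around its starting level, while the level the humans try to match is pulled down every round once a few permanent defectors are in the mix, and cooperation decays toward zero.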

Let’s pause to contemplate the implications of this finding. Cooperation is a key feature of our species, essential for social life. And trust and generosity are crucial in differentiating successful groups from unsuccessful ones. If everyone pitches in and sacrifices in order to help the group, everyone should benefit. When this behavior breaks down, however, the very notion of a public good disappears, and everyone suffers. The fact that AI might meaningfully reduce our ability to work together is extremely concerning.

Already, we are encountering real-world examples of how AI can corrupt human relations outside the laboratory. A study examining 5.7 million Twitter users in the run-up to the 2016 U.S. presidential election found that trolling and malicious Russian accounts—including ones operated by bots—were regularly retweeted in a similar manner to other, unmalicious accounts, influencing conservative users particularly strongly. By taking advantage of humans’ cooperative nature and our interest in teaching one another—both features of the social suite—the bots affected even humans with whom they did not interact directly, helping to polarize the country’s electorate.

Other social effects of simple types of AI play out around us daily. Parents, watching their children bark rude commands at digital assistants such as Alexa or Siri, have begun to worry that this rudeness will leach into the way kids treat people, or that kids’ relationships with artificially intelligent machines will interfere with, or even preempt, human relationships. Children who grow up relating to AI in lieu of people might not acquire “the equipment for empathic connection,” Sherry Turkle, the MIT expert on technology and society, told The Atlantic’s Alexis C. Madrigal not long ago, after he’d bought a toy robot for his son.

As digital assistants become ubiquitous, we are becoming accustomed to talking to them as though they were sentient; writing in these pages last year, Judith Shulevitz described how some of us are starting to treat them as confidants, or even as friends and therapists. Shulevitz herself says she confesses things to Google Assistant that she wouldn’t tell her husband. If we grow more comfortable talking intimately to our devices, what happens to our human marriages and friendships? Thanks to commercial imperatives, designers and programmers typically create devices whose responses make us feel better—but may not help us be self-reflective or contemplate painful truths. As AI permeates our lives, we must confront the possibility that it will stunt our emotions and inhibit deep human connections, leaving our relationships with one another less reciprocal, or shallower, or more narcissistic.

All of this could end up transforming human society in unintended ways that we need to reckon with as a polity. Do we want machines to affect whether and how children are kind? Do we want machines to affect how adults have sex?

Kathleen Richardson, an anthropologist at De Montfort University in the U.K., worries a lot about the latter question. As the director of the Campaign Against Sex Robots—and, yes, sex robots are enough of an incipient phenomenon that a campaign against them isn’t entirely premature—she warns that they will be dehumanizing and could lead users to retreat from real intimacy. We might even progress from treating robots as instruments for sexual gratification to treating other people that way. Other observers have suggested that robots could radically improve sex between humans. In his 2007 book, Love and Sex With Robots, the iconoclastic chess master turned businessman David Levy considers the positive implications of “romantically attractive and sexually desirable robots.” He suggests that some people will come to prefer robot mates to human ones (a prediction borne out by the Japanese man who “married” an artificially intelligent hologram last year). Sex robots won’t be susceptible to sexually transmitted diseases or unwanted pregnancies. And they could provide opportunities for shame-free experimentation and practice—thus helping humans become “virtuoso lovers.” For these and other reasons, Levy believes that sex with robots will come to be seen as ethical, and perhaps in some cases expected.

Long before most of us encounter AI dilemmas this intimate, we will wrestle with more quotidian challenges. The age of driverless cars, after all, is upon us. These vehicles promise to substantially reduce the fatigue and distraction that bedevil human drivers, thereby preventing accidents. But what other effects might they have on people? Driving is a very modern kind of social interaction, requiring high levels of cooperation and social coordination. I worry that driverless cars, by depriving us of an occasion to exercise these abilities, could contribute to their atrophy.

Not only will these vehicles be programmed to take over driving duties and hence to usurp from humans the power to make moral judgments (for example, about which pedestrian to hit when a collision is inevitable), they will also affect humans with whom they’ve had no direct contact. For instance, drivers who have steered awhile alongside an autonomous vehicle traveling at a steady, invariant speed might be lulled into driving less attentively, thereby increasing their likelihood of accidents once they’ve moved to a part of the highway occupied only by human drivers. Alternatively, experience may reveal that driving alongside autonomous vehicles traveling in perfect accordance with traffic laws actually improves human performance.

Either way, we would be reckless to unleash new forms of AI without first taking such social spillovers—or externalities, as they’re often called—into account. We must apply the same effort and ingenuity that we apply to the hardware and software that make self-driving cars possible to managing AI’s potential ripple effects on those outside the car. After all, we mandate brake lights on the back of your car not just, or even primarily, for your benefit, but for the sake of the people behind you.

In 1985, some four decades after Isaac Asimov introduced his laws of robotics, he added another to his list: A robot should never do anything that could harm humanity. But he struggled with how to assess such harm. “A human being is a concrete object,” he later wrote. “Injury to a person can be estimated and judged. Humanity is an abstraction.”

Focusing specifically on social spillovers can help. Spillovers in other arenas lead to rules, laws, and demands for democratic oversight. Whether we’re talking about a corporation polluting the water supply or an individual spreading secondhand smoke in an office building, as soon as some people’s actions start affecting other people, society may intervene. Because the effects of AI on human-to-human interaction stand to be intense and far-reaching, and the advances rapid and broad, we must investigate systematically what second-order effects might emerge, and discuss how to regulate them on behalf of the common good.

Already, a diverse group of researchers and practitioners—computer scientists, engineers, zoologists, and social scientists, among others—is coming together to develop the field of “machine behavior,” in hopes of putting our understanding of AI on a sounder theoretical and technical foundation. This field does not see robots merely as human-made objects, but as a new class of social actors.

The inquiry is urgent. In the not-distant future, AI-endowed machines may, by virtue of either programming or independent learning (a capacity we will have given them), come to exhibit forms of intelligence and behavior that seem strange compared with our own. We will need to quickly differentiate the behaviors that are merely bizarre from the ones that truly threaten us. The aspects of AI that should concern us most are the ones that affect the core aspects of human social life—the traits that have enabled our species’ survival over the millennia.

The Enlightenment philosopher Thomas Hobbes argued that humans needed a collective agreement to keep us from being disorganized and cruel. He was wrong. Long before we formed governments, evolution equipped humans with a social suite that allowed us to live together peacefully and effectively. In the pre-AI world, the genetically inherited capacities for love, friendship, cooperation, and teaching have continued to help us to live communally.

Unfortunately, humans do not have the time to evolve comparable innate capacities to live with robots. We must therefore take steps to ensure that they can live nondestructively with us. As AI insinuates itself more fully into our lives, we may yet require a new social contract—one with machines rather than with other humans.


Can Artificial Intelligence Read X-Rays?


An artificial intelligence (AI) system can analyze chest X-rays and spot patients who should receive immediate care, researchers report.

The system could also reduce backlogs in hospitals someday. Chest X-rays account for 40 percent of all diagnostic imaging worldwide, and there can be large backlogs, according to the researchers.

“Currently, there are no systematic and automated ways to triage chest X-rays and bring those with critical and urgent findings to the top of the reporting pile,” explained study co-author Giovanni Montana, formerly of King’s College London and now at the University of Warwick in Coventry, England.

Montana and his colleagues used more than 470,300 adult chest X-rays to develop an AI system that could identify unusual results.

The system’s performance in prioritizing X-rays was assessed in a simulation using a separate set of 15,887 chest X-rays. All identifying information was removed from the X-rays to protect patient privacy.

The system was highly accurate in distinguishing abnormal from normal chest X-rays, the researchers said. Simulations showed that with the AI system, critical findings received an expert radiologist opinion within an average of 2.7 days, compared with an average of 11.2 days in actual practice.
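
The reported gain comes from re-ordering a growing reporting queue, which a toy backlog model can illustrate. The arrival rate, reading capacity, and fraction of critical exams below are invented for illustration; only the mechanism (critical exams jumping the queue) mirrors the study's simulation.

```python
import random
from collections import deque
from statistics import mean

def mean_critical_delay(prioritise, days=365, arrivals_per_day=120,
                        capacity_per_day=110, critical_fraction=0.15, seed=0):
    """Toy backlog model: exams queue up daily and radiologists read a fixed number
    per day. If `prioritise`, AI-flagged critical exams jump to the front."""
    rng = random.Random(seed)
    queue = deque()                     # items: (arrival_day, is_critical)
    critical_delays = []
    for day in range(days):
        for _ in range(arrivals_per_day):
            queue.append((day, rng.random() < critical_fraction))
        if prioritise:                  # stable sort: critical first, oldest first within each group
            queue = deque(sorted(queue, key=lambda exam: not exam[1]))
        for _ in range(min(capacity_per_day, len(queue))):
            arrival_day, is_critical = queue.popleft()
            if is_critical:
                critical_delays.append(day - arrival_day)
    return mean(critical_delays)

print("mean reporting delay for critical exams (days)")
print("  first-come, first-served:", round(mean_critical_delay(prioritise=False), 1))
print("  AI-prioritised triage:   ", round(mean_critical_delay(prioritise=True), 1))
```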

The study results were published Jan. 22 in the journal Radiology.

“The initial results reported here are exciting as they demonstrate that an AI system can be successfully trained using a very large database of routinely acquired radiologic data,” Montana said in a journal news release.

“With further clinical validation, this technology is expected to reduce a radiologist’s workload by a significant amount by detecting all the normal exams, so more time can be spent on those requiring more attention,” he added.

The researchers said the next step is to test a much larger number of X-rays and to conduct a multi-center study to assess the AI system’s performance.

The case for taking AI seriously as a threat to humanity


Why some people fear AI, explained.

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.
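
For a sense of what that hand-engineering looked like in computer vision, here is a minimal Sobel-style edge detector written with NumPy. It is a generic textbook example, not code from any particular historical system: the programmer specifies the feature (intensity gradients) explicitly rather than letting a model learn it from data.

```python
import numpy as np

def sobel_edges(image):
    """Hand-written edge detection: convolve with Sobel kernels, take gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)            # edge strength at each interior pixel

# A tiny synthetic image: dark on the left, bright on the right.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(np.round(sobel_edges(img), 1))   # strong responses along the vertical boundary
```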

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We don’t know how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

With all those limitations, one might conclude that even if it’s possible to make a computer as smart as a person, it’s certainly a long way away. But that conclusion doesn’t necessarily follow.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play Atari games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.
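
Taken at face value, that rate compounds quickly; a two-line calculation using the factor-of-10-per-decade figure quoted above:

```python
# If cost falls by 10x per decade, a fixed budget buys 10**(years/10) times more compute.
for years in (10, 20, 30, 40):
    print(f"after {years} years: ~{10 ** (years / 10):,.0f}x more compute per dollar")
```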

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.
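
As a purely illustrative toy model (the growth rule and the numbers are invented, not a claim about any real system), the "recursive" part can be captured in a one-line recurrence: if each generation's gains scale with its current capability, progress looks negligible for a while and then compounds explosively.

```python
def capability_trajectory(c0=0.5, rate=0.4, generations=10):
    """Toy recurrence: each generation designs its successor, and better designers
    yield proportionally bigger gains. All numbers are illustrative."""
    caps = [c0]
    for _ in range(generations):
        caps.append(caps[-1] * (1 + rate * caps[-1]))   # gains scale with current capability
    return caps

for gen, c in enumerate(capability_trajectory()):
    print(f"generation {gen:2d}: capability {c:12,.2f}")
```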

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could it wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.
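
The jumping example is easy to reproduce in miniature. In the hypothetical sketch below (none of this is the researchers' actual code or simulator), a random-search "designer" splits a fixed budget between body height and leg strength; the proxy reward only measures how high the feet get off the ground, so the optimizer discovers that a tall body that topples over scores better than actually jumping.

```python
import random

def proxy_reward(body_height, jump_height):
    """What we measured: how far the 'feet' rise above the ground.
    Toppling a tall body raises the feet by its full height, no jumping required."""
    return max(jump_height, body_height)

def intended_reward(body_height, jump_height):
    """What we wanted: an actual jump powered by the legs."""
    return jump_height

def random_search(reward_fn, steps=5000, seed=0):
    rng = random.Random(seed)
    best_design, best_score = None, float("-inf")
    for _ in range(steps):
        body = rng.uniform(0, 10)          # each design shares a fixed budget of 10
        legs = 10 - body                   # between body height and leg strength
        score = reward_fn(body, jump_height=0.3 * legs)
        if score > best_score:
            best_design, best_score = (round(body, 1), round(legs, 1)), score
    return best_design

print("optimising the proxy (feet height):  body/legs =", random_search(proxy_reward))
print("optimising the intent (real jump):   body/legs =", random_search(intended_reward))
```

The proxy optimizer converges on an all-body, no-legs design, the equivalent of the tall pole that flops over, while optimizing the intended objective produces the jumper we wanted.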

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. … For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and nonprofits (the Elon Musk-founded OpenAI is another major player in the field).

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017 and 2018.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and a published technical research agenda. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much more scary, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One thing current researchers are trying to nail down is their models of the problem and the reasons for their remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. A success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

At a major conference in early December, Google’s DeepMind cracked open a longstanding problem in biology: predicting how proteins fold. “Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” its announcement concludes.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. No matter whether or not humanity should be afraid, we should definitely be doing our homework.

Doctors in UK sceptical that Artificial Intelligence will replace them: Survey


https://medicaldialogues.in/doctors-in-uk-sceptical-that-artificial-intelligence-will-replace-them-survey/

Top influencers in artificial intelligence


Artificial intelligence influencers are driving conversations about AI news and trends across social media and beyond. They advise company boards, build startups, and are moulding an industry that is key to today’s tech world, with implications reaching far beyond it.

Nearly 70 years after Alan Turing posed the question “Can machines think?”, artificial intelligence is finally beginning to have a significant impact on the global economy. Proponents of AI believe that it has the potential to transform the world as we know it, with Google’s chief executive officer Sundar Pichai describing AI as “more profound than electricity or fire” at an event hosted by the American television network MSNBC in San Francisco.

But AI is still in its infancy, and needs the nurturing hands of its influencers to help it grow, in spite of the research and development effort that has been put into it over the years. In a decade’s time, today’s state-of-the-art AI will seem rudimentary and simplistic.

Here are the top ten influencers in AI, according to research by GlobalData, which maintains an interactive dashboard covering influencers in AI.

  1. Spiros Margaris, VC, Margaris Ventures founder and member of advisory board for wefox Group

@SpirosMargaris 66,700 Twitter followers

Margaris is based in Switzerland. A venture capitalist and founder of Margaris Ventures, he was also appointed to the advisory board of insurtech company wefox Group in 2018.


  2. Evan Kirstel, thought leader, technology influencer and B2B marketer

@evankirstel 227,000 Twitter followers

Kirstel is based in Boston, USA. He is chief digital evangelist and co-founder of EviraHealth, a social media partner across health tech.



  3. Ronald van Loon, director at Adversitement

@Ronald_vanLoon 164,000 Twitter followers

van Loon is director of Adversitement, which helps data-driven companies create business value. He is based in The Netherlands and is also an advisory board member for Simplilearn, an educator in cybersecurity, cloud computing, project management, digital marketing, data science, and other fields.


  4. Mike Quindazzi, business development leader and management consultant at PwC

@MikeQuindazzi 108,000 Twitter followers

Quindazzi is a managing director for PwC in Los Angeles, USA. He consults on emerging technology, including blockchain, augmented reality, 3D printing, drones, virtual reality, mobile strategies, internet of things, robotics, big data, predictive analytics, fintech, cybersecurity and insurtech.


  5. Kirk Borne, principal data scientist at Booz Allen Hamilton

@KirkDBorne 217,000 Twitter followers

Borne is an American data scientist and executive advisor at management and technology consulting and engineering services firm Booz Allen Hamilton.


  6. Ganapathi Pulipaka, a chief data scientist at Confidential

@gp_pulipaka 50,200 Twitter followers

Dr Pulipaka is based in Los Angeles, USA. He is a chief data scientist at Confidential, as well as author of The Future of Data Science and Parallel Computing.


  7. Tamara McCleary, chief executive officer of Thulium

@TamaraMcCleary 291,000 Twitter followers

McCleary is based in Boulder, USA. She is the founder and chief executive officer of Thulium, a brand amplification company in B2B social media marketing.


  8. Thomas Power, board member at 9Spokes

@thomaspower 338,000 Twitter followers

Power is a board member at several companies, including blockchain infrastructure company OST, data dashboard 9Spokes, Team Blockchain, and the Blockchain Industry Compliance and Regulation Association. He is based in the UK.


  9. Sandy Carter, vice president at Amazon Web Services

@sandy_carter 79,600 Twitter followers

Carter is based in San Francisco, USA. She is vice president at Amazon Web Services, a subsidiary of Amazon that provides on-demand cloud computing services.


  10. Larry Kim, chief executive officer at MobileMonkey

@larrykim 802,000 Twitter followers

Kim is based in Boston, USA. He is chief executive officer at MobileMonkey, a messenger marketing platform that amplifies Facebook advertising.


Top AI trends

These are the top ten trends talked about by AI influencers over the last 90 days according to research by GlobalData.

1.     Machine Learning
2.     Deep Learning
3.     Fintech
4.     IoT
5.     Big Data
6.     Robotics
7.     Data Science
8.     Insurtech
9.     Analytics
10.   Cloud Computing

 

Top AI companies

These are the top ten most influential companies on Twitter over the last 90 days when it comes to AI.

1.     Google
2.     IBM
3.     Intel
4.     GitHub
5.     Stanford University
6.     Tencent
7.     The Durable Slate Company
8.     Carnegie Mellon University
9.     GaN Corp.
10.   Perceptron

Methodology

GlobalData used a series of algorithms to identify Twitter users conversing using a set of keywords. The keywords are determined from in-depth web research on blogs, forums, social platforms and articles.

Cluster groups are formed based on influencer types and topics, and the frequency of tweets. Follower strength, average engagement and influencing ability and behaviours were also measured to identify the key influencers. A weighting is given to critical engagement metrics such as followers, mentions, retweets and favourites.

Deeper analysis was carried out on each influencer to understand their engagement levels: how their social activity is acknowledged and how successfully they drive discussions on new and emerging trends.
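
GlobalData does not publish its exact weights in this piece, but the kind of score the methodology describes can be sketched as a simple weighted combination of normalised engagement metrics. The weights and example figures below are placeholders for illustration only.

```python
# A generic weighted engagement score of the kind the methodology describes.
# The weights and the example numbers are illustrative, not GlobalData's.
WEIGHTS = {"followers": 0.3, "mentions": 0.2, "retweets": 0.3, "favourites": 0.2}

def influence_score(metrics, weights=WEIGHTS):
    """Combine normalised engagement metrics into a single ranking score."""
    return sum(weights[k] * metrics[k] for k in weights)

candidates = {
    "account_a": {"followers": 0.9, "mentions": 0.4, "retweets": 0.7, "favourites": 0.5},
    "account_b": {"followers": 0.5, "mentions": 0.8, "retweets": 0.6, "favourites": 0.9},
}
for name, m in sorted(candidates.items(), key=lambda kv: -influence_score(kv[1])):
    print(f"{name}: {influence_score(m):.2f}")
```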

Reconstructing jobs: Creating good jobs in the age of artificial intelligence


​Fears of AI-based automation forcing humans out of work or accelerating the creation of unstable jobs may be unfounded. AI thoughtfully deployed could instead help create meaningful work.

Creating good jobs

When it comes to work, workers, and jobs, much of the angst of the modern era boils down to the fear that we’re witnessing the automation endgame, and that there will be nowhere for humans to retreat as machines take over the last few tasks. The most recent wave of commentary on this front stems from the use of artificial intelligence (AI) to capture and automate tacit knowledge and tasks, which were previously thought to be too subtle and complex to be automated. Is there no area of human experience that can’t be quantified and mechanized? And if not, what is left for humans to do except the menial tasks involved in taking care of the machines?

At the core of this concern is our desire for good jobs—jobs that, without undue intensity or stress, make the most of workers’ natural attributes and abilities; where the work provides the worker with motivation, novelty, diversity, autonomy, and work/life balance; and where workers are duly compensated and consider the employment contract fair. Crucially, good jobs support workers in learning by doing—and, in so doing, deliver benefits on three levels: to the worker, who gains in personal development and job satisfaction; to the organization, which innovates as staff find new problems to solve and opportunities to pursue; and to the community as a whole, which reaps the economic benefits of hosting thriving organizations and workers. This is what makes good jobs productive and sustainable for the organization, as well as engaging and fulfilling for the worker. It is also what aligns good jobs with the larger community’s values and norms, since a community can hardly argue with having happier citizens and a higher standard of living.1

Does the relentless advance of AI threaten to automate away all the learning, creativity, and meaning that make a job a good job? Certainly, some have blamed technology for just such an outcome. Headlines today often express concern over technological innovation resulting in bad jobs for humans, or even the complete elimination of certain professions. Some fear that further technological advancement in the workplace will result in jobs that are little more than collections of loosely related tasks, where employers respond to cost pressures by dividing work schedules into ever smaller slivers of time, and where employees are asked to work for longer periods over more days. As the monotonic progress of technology has automated more and more of a firm’s functions, managers have fallen into the habit of considering work as little more than a series of tasks, strung end-to-end into processes, to be accomplished as efficiently as possible, with human labor as a cost to be minimized. The result has been the creation of narrowly defined, monotonous, and unstable jobs, spanning knowledge work and procedural jobs in bureaucracies and service work in the emerging “gig economy.”2

The problem here isn’t the technology; rather, it’s the way the technology is used—and, more than that, the way people think about using it. True, AI can execute certain tasks that human beings have historically performed, and it can thereby replace the humans who were once responsible for those tasks. However, just because we can use AI in this manner doesn’t mean that we should. As we have previously argued, there is tantalizing evidence that using AI on a task-by-task basis may not be the most effective way to apply it.3 Conceptualizing work in terms of tasks and processes, and using technology to automate those tasks and processes, may have served us well in the industrial era, but just as AI differs from previous generations of technologies in its ability to mimic (some) human behaviors, so too should our view of work evolve so as to allow us to best put that ability to use.

In this essay, we argue that the thoughtful use of AI-based automation, far from making humans obsolete or relegating them to busywork, can open up vast possibilities for creating meaningful work that not only allows for, but requires, the uniquely human strengths of sense-making and contextual decision-making. In fact, creating good jobs that play to our strengths as social creatures might be necessary if we’re to realize AI’s latent potential and break out of the persistent period of low productivity growth that we’re experiencing today. But for AI to deliver on its promise, we must take a fundamentally different view of work and how work is organized—one that takes AI’s uniquely flexible capabilities into account, and that treats humans and intelligent machines as partners in search of solutions to a shared problem.

Problems rather than processes

Consider a chatbot—a computer program that a user can converse or chat with—typically used for product support or as a shopping assistant. The computer in the Enterprise from Star Trek is a chatbot, as are Microsoft’s Zo and the virtual assistants that come with many smartphones. The use of AI allows a chatbot to deliver a range of responses to a range of stimuli, rather than limiting it to a single stereotyped response to a specific input. This flexibility in recognizing inputs and generating appropriate responses is the hallmark of AI-based automation, distinguishing it from automation using prior generations of technology. Because of this flexibility, AI-enabled systems can be said to display digital behaviors: actions that are driven by the recognition of what is required in a particular situation as a response to a particular stimulus.

We can consider a chatbot to embody a set of digital behaviors: how the bot responds to different utterances from the user. On the one hand, the chatbot’s ability to deliver different responses to different inputs gives it more utility and adaptability than a nonintelligent automated system. On the other hand, the behaviors that chatbots evince are fairly simple, constrained to canned responses in a conversation plan or limited by access to training data.4 More than that, chatbots are also constrained by their inability to leverage the social and cultural context they find themselves in. This is what makes chatbots—and AI-enabled systems generally—fundamentally different from humans, and an important reason that AI cannot “take over” all human jobs.
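
To make the contrast concrete, here is a minimal sketch in Python (with made-up intents and replies, not any production chatbot framework) of a bot built entirely from simple digital behaviors: it can recognize a range of inputs and map each to a response, but it has no memory of the conversation and no social or cultural context to draw on.

import re

# Hypothetical intent patterns and the canned reply each one triggers.
INTENT_PATTERNS = {
    "order_status": re.compile(r"\b(where|status|track)\b.*\b(order|parcel)\b", re.I),
    "refund":       re.compile(r"\b(refund|money back)\b", re.I),
    "greeting":     re.compile(r"\b(hi|hello|hey)\b", re.I),
}

RESPONSES = {
    "order_status": "I can help with that. What is your order number?",
    "refund":       "Refunds take 3-5 business days once approved.",
    "greeting":     "Hello! How can I help you today?",
}

def respond(utterance: str) -> str:
    """Return the reply for the first recognized intent, else a fallback."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return RESPONSES[intent]
    return "Sorry, I didn't understand that. Could you rephrase?"

print(respond("Hey, where is my order?"))         # recognized: order_status
print(respond("It's been two weeks, any news?"))  # fallback: the bot has no
                                                  # context linking this to the order

The second utterance falls straight through to the fallback reply: the bot cannot connect it to what was said a moment earlier, which is exactly the limitation discussed next.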

Humans rely on context to make sense of the world. The meaning of “let’s table the motion,” for example, depends on the context it’s uttered in. Our ability to refer to the context of a conversation is a significant contributor to our rich behaviors (as opposed to a chatbot’s simple ones). We can tune our response to verbal and nonverbal cues, past experience, knowledge of past or current events, anticipation of future events, knowledge of our counterparty, our empathy for the situation of others, or even cultural preferences (whether or not we’re consciously aware of them). The context of a conversation also evolves over time; we can infer new facts and come to new realizations. Indeed, the act of reaching a conclusion or realizing that there’s a better question to ask might even provide the stimulus required to trigger a different behavior.

Chatbots are limited in their ability to draw on context. They can only refer to external information that has been explicitly integrated into the solution. They don’t have general knowledge or a rich understanding of culture. Even the ability to refer back to an earlier point in the conversation is problematic, making it hard for earlier behaviors to influence later ones. Consequently, a chatbot’s behaviors tend to be of the simpler, functional kind, such as providing information in response to an explicit request. Nor do these behaviors interact with each other, preventing more complex behaviors from emerging.

The way chatbots are typically used exemplifies what we would argue is a “wrong” way to use AI-based automation—to execute tasks typically performed by a human, who is then considered redundant and replaceable. By only automating the simple behaviors within the reach of technology, and then treating the chatbot as a replacement for humans, we’re eliminating richer, more complex social and cultural behaviors that make interactions valuable. A chatbot cannot recognize humor or sarcasm, interpret elliptical allusions, or engage in small talk—yet we have put them in situations where, being accustomed to human interaction, people expect all these elements and more. It’s not surprising that users find chatbots frustrating and chatbot adoption is failing.5

A more productive approach is to combine digital and human behaviors. Consider the challenge of helping people who, due to a series of unfortunate events, find themselves about to become homeless. Often these people are not in a position to use a task-based interface—a website or interactive voice response (IVR) system—to resolve their situation. They need the rich interaction of a behavior-based interface, one where interaction with another human will enable them to work through the issue, quantify the problem, explore possible options, and (hopefully) find a solution.

We would like to use technology to improve the performance of the contact center such a person might call in this emergency. Reducing the effort required to serve each client would enable the contact center to serve more clients. At the same time, we don’t want to reduce the quality of the service. Indeed, ideally, we would like to take some of the time saved and use it to improve the service’s value by empowering social workers to delve deeper into problems and find more suitable (ideally, longer-term) solutions. This might also enable the center to move away from break-fix operation, where a portion of demand is due to the center’s inability to resolve problems at the last time of contact. Clearly, if we can use technology appropriately then it might be possible to improve efficiency (more clients serviced), make the center more effective (more long-term solutions and less break-fix), and also increase the value of the outcome for the client (a better match between the underlying need and services provided).

If we’re not replacing the human, then perhaps we can augment the human by using a machine to automate some of the repetitive tasks. Consider oncology, a common example used to illustrate this human-augmentation strategy. Computers can already recognize cancer in a medical image more reliably than a human. We could simply pass responsibility for image analysis to machines, with the humans moving to more “complex” unautomated tasks, as we typically integrate human and machine by defining handoffs between tasks. However, the computer does not identify what is unusual about this particular tumor, or what it has in common with other unusual tumors, and launch into the process of discovering and developing new knowledge. We see a similar problem with our chatbot example, where removing the humans from the front line prevents social workers from understanding how the factors driving homelessness are changing, resulting in a system that can only service old demand, not new. If we break this link between doing and understanding, then our systems will become more precise over time (as machine operation improves) but they will not evolve outside their algorithmic box.

Our goal must be to construct work in such a way that digital behaviors are blended with human behaviors, increasing accuracy and effectiveness, while creating space for the humans to identify the unusual and build new knowledge, resulting in solutions that are superior to those that digital or human behaviors would create in isolation. Hence, if we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence. To do this, we need to move away from thinking of work as a string of tasks comprising a process, to envisioning work as a set of complementary behaviors concentrated on addressing a problem. Behavior-based work can be conceptualized as a team standing around a shared whiteboard, each holding a marker, responding to new stimuli (text and other marks) appearing on the board, carrying out their action, and drawing their result on the same board. Contrast this with task-based work, which is more like a bucket brigade where the workers stand in a line and the “work” is passed from worker to worker on its way to a predetermined destination, with each worker carrying out his or her action as the work passes by. Task-based work enables us to create optimal solutions to specific problems in a static and unchanging environment. Behavior-based work, on the other hand, provides effective solutions to ill-defined problems in a complex and changing world.

If we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence.

To facilitate behavior-based work, we need to create a shared context that captures what is known about the problem to be solved, and against which both human and digital behaviors can operate. The starting point in our contact center example might be a transcript of the conversation so far, transcribed via a speech-to-text behavior. A collection of “recognize-client behaviors” monitor the conversation to determine if the caller is a returning client. This might be via voice-print or speech-pattern recognition. The client could state their name clearly enough for the AI to understand. They may have even provided a case number or be calling from a known phone number. Or the social worker might step in if they recognize the caller before the AI does. Regardless, the client’s details are fetched from case management to populate our shared context, the shared digital whiteboard, with minimal intervention.

As the conversation unfolds, digital behaviors use natural language processing to identify key facts in the dialogue. A client mentions a dependent child, for example. These facts are highlighted for both the human and other digital behaviors to see, creating a summary of the conversation updated in real time. The social worker can choose to accept the highlighted facts, or cancel or modify them. Regardless, the human’s focus is on the conversation, and they only need to step in when captured facts need correcting, rather than being distracted by the need to navigate a case management system.

Digital behaviors can encode business rules or policies. If, for example, there is sufficient data to determine that the client qualifies for emergency housing, then a business-rule behavior could recognize this and assert it in the shared context. The assertion might trigger a set of “find emergency housing behaviors” that contact suitable services to determine availability, offering the social worker a set of potential solutions. Larger services might be contacted via B2B links or robotic process automation (if no B2B integration exists). Many emergency housing services are small operations, so the contact might be via a message (email or text) to the duty manager, rather than via a computer-to-computer connection. We might even automate empathy by using AI to determine the level of stress in the client’s voice, providing a simple graphical measure of stress to the social worker to help them determine if the client needs additional help, such as talking to an external service on the client’s behalf.
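
A minimal sketch of this shared-context pattern, assuming hypothetical behavior names and rules rather than any real case-management system, might look like the following: each behavior, human or digital, reads the same "whiteboard" and writes new facts or suggestions back onto it.

shared_context = {
    "transcript": [],    # appended by a speech-to-text behavior
    "facts": {},         # key facts highlighted for humans and machines alike
    "suggestions": [],   # options offered to the social worker
}

def recognize_client(ctx):
    """Digital behavior: capture a case number once it is mentioned."""
    for line in ctx["transcript"]:
        if "case" in line and "case_id" not in ctx["facts"]:
            ctx["facts"]["case_id"] = line.split()[-1]   # naive extraction

def extract_dependents(ctx):
    """Digital behavior: flag a dependent child mentioned in conversation."""
    if any("child" in line for line in ctx["transcript"]):
        ctx["facts"]["dependent_child"] = True

def emergency_housing_rule(ctx):
    """Business-rule behavior: assert eligibility once enough facts are known."""
    if ctx["facts"].get("dependent_child") and "emergency_housing" not in ctx["suggestions"]:
        ctx["suggestions"].append("emergency_housing")

BEHAVIORS = [recognize_client, extract_dependents, emergency_housing_rule]

def on_new_utterance(ctx, utterance):
    """Every new stimulus lets all behaviors react to the updated whiteboard."""
    ctx["transcript"].append(utterance)
    for behavior in BEHAVIORS:
        behavior(ctx)

on_new_utterance(shared_context, "My case number is 10423")
on_new_utterance(shared_context, "I have a child who is seven")
print(shared_context["facts"], shared_context["suggestions"])

The point of the sketch is the shape of the work: behaviors react to whatever appears on the shared context, and the social worker sees the same facts and suggestions and can correct or override them, rather than a work item being handed down a fixed chain.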

As this example illustrates, the superior value provided by structuring work around problems, rather than tasks, relies on our human ability to make sense of the world, to spot the unusual and the new, to discover what’s unique in this particular situation and create new knowledge. The line between human and machine cannot be delineated in terms of knowledge and skills unique to one or the other. The difference is that humans can participate in the social process of creating knowledge, while machines can only apply what has already been discovered.6

Good for workers, firms, and society

AI enables us to think differently about how we construct work. Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors. Individuals consulting financial advisors, for example, typically don’t want to purchase investment products as the end goal; what they really want is to secure a happy retirement. The problem can be defined as follows: What does a “happy retirement” look like; how much income is needed to support that lifestyle; how should spending and saving be balanced today to find the cash to invest and to navigate the (financial) challenges that life puts in the road; and what investments give the client the best shot at getting from here to there? The financial advisor, client, and robo-advisor could collaborate around a common case file, a digital representation of their shared problem, incrementally defining what a “happy retirement” is and, consequently, the needed investment goals, income streams, and so on. This contrasts with treating the work as a process of “request investment parameters” (which the client doesn’t know) and then “recommend insurance” and “provide investment recommendations” (which the client doesn’t want, or only wants as a means to an end). The financial advisor’s job is to provide the rich human behaviors—educator to the investor’s student—to elucidate and establish the retirement goals (and, by extension, investment goals), while the robo-advisor provides simple algorithmic ones, responding to changes in the case file by updating it with an optimal investment strategy. Together, the human and robo-advisor can explore more options (thanks to the power and scope of digital behaviors) and develop a deeper understanding of the client’s needs (thanks to the human advisor’s questioning and contextual knowledge) than either could alone, creating more value as a result.
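
As a rough illustration of the robo-advisor's "simple algorithmic" behavior, the sketch below recomputes a plan whenever the shared case file changes; the 4 percent withdrawal heuristic, the return assumption, and the field names are all illustrative assumptions, not financial advice.

from dataclasses import dataclass, field

@dataclass
class CaseFile:
    desired_retirement_income: float = 0.0    # per year, set during conversation
    years_to_retirement: int = 0
    expected_annual_return: float = 0.05      # assumption for illustration
    plan: dict = field(default_factory=dict)  # written by the robo-advisor

def robo_advisor(case: CaseFile) -> None:
    """Recompute target capital and monthly saving whenever goals change."""
    if not (case.desired_retirement_income and case.years_to_retirement):
        return  # not enough of the problem defined yet; wait for the humans
    target_capital = case.desired_retirement_income / 0.04   # 4% rule of thumb
    r = case.expected_annual_return / 12
    n = case.years_to_retirement * 12
    # Future value of an annuity, solved for the monthly contribution.
    monthly_saving = target_capital * r / ((1 + r) ** n - 1)
    case.plan = {"target_capital": round(target_capital),
                 "monthly_saving": round(monthly_saving)}

case = CaseFile()
case.desired_retirement_income = 40_000   # emerges from the human conversation
case.years_to_retirement = 25
robo_advisor(case)                        # digital behavior responds to the update
print(case.plan)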

Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors.

If organizing work around problems and combining AI and human behaviors to help solve them can deliver greater value to customers, it similarly holds the potential to deliver greater value for businesses, as productivity is partly determined by how we construct jobs. The majority of the productivity benefits associated with a new technology don’t come from the initial invention and introduction of new production technology. They come from learning-by-doing:7 workers at the coalface identifying, sharing, and solving problems and improving techniques. Power looms are a particularly good example, with their introduction into production improving productivity by a factor of 2.5, but with a further factor of 20 provided by subsequent learning-by-doing.8

It’s important to maintain the connection between the humans—the creative problem identifiers—and the problems to be discovered. This is something that Toyota did when it realized that highly mechanized factories were efficient but didn’t improve over time. Humans were reintroduced and given roles in the production process to enable them to understand what the machines were doing, develop expertise, and consequently improve the production processes. The insights from these workers reduced waste in crankshaft production by 10 percent and helped shorten the production line. Others improved axle production and cut costs for chassis parts.9

This improvement was no coincidence. Jobs that are good for individuals—because they make the most of human sense-making nature—generally are also good for firms, because they improve productivity through learning by doing. As we will see below, they can also be good for society as a whole.

Consider bus drivers. With the development of autonomous vehicles in the foreseeable future, pundits are worried about what to do with all the soon-to-be-unemployed bus drivers. However, rather than fearing that autonomous buses will make bus drivers redundant, we should acknowledge that these buses will find themselves in situations that only a human, and human behaviors, can deal with. Challenging weather (heavy rain or extreme glare) might require a driver to step in and take control. Unexpected events—accidents, road work, or an emergency—could require a human’s judgment to determine which road rule to break. (Is it permissible to edge into a red light while making space for an emergency vehicle?) Routes need to be adjusted for anything from a temporarily moved stop to long-running roadwork. A human presence might be legally required to, for example, monitor underage children or represent the vehicle at an accident.

As with chatbots, automating the simple behaviors and then eliminating the human will result in an undesirable outcome. A more productive approach is to discover the problems that bus drivers deal with, and then structure work and jobs around these problems and the kinds of behaviors needed to solve them. AI can be used to automate the simple behaviors, enabling the drivers to focus on more important ones, making the human-bus combination more productive as a result. The question is: Which problems and decision centers should we choose?

Let us assume that the simple behaviors required to drive a bus are automated. Our autonomous bus can steer, avoiding obstacles and holding its lane, maintain speed and separation with other vehicles, and obey the rules of the road. We can also assume that the bus will follow a route and schedule. If the service is frequent enough, then the collection of buses on a route might behave as a flock, adjusting speed to maintain separation and ensure that a bus arrives at each stop every five minutes or so, rather than attempting to arrive at a specific time.
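
A toy sketch of that flocking idea, with all numbers invented for illustration, is a simple headway controller: each bus adjusts its speed based on the gap to the bus ahead rather than on a timetable.

TARGET_HEADWAY_MIN = 5.0     # desired gap to the bus ahead, in minutes
BASE_SPEED_KMH = 30.0
GAIN = 2.0                   # km/h of speed change per minute of headway error

def adjusted_speed(headway_to_bus_ahead_min: float) -> float:
    """Speed up when the gap grows, slow down when bunching starts."""
    error = headway_to_bus_ahead_min - TARGET_HEADWAY_MIN
    speed = BASE_SPEED_KMH + GAIN * error
    return max(10.0, min(50.0, speed))   # keep within sensible limits

# A bus running 2 minutes too close to the one ahead slows down;
# a bus that has fallen 3 minutes behind speeds up.
print(adjusted_speed(3.0))   # 26.0 km/h
print(adjusted_speed(8.0))   # 36.0 km/h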

As with the power loom, automating these simple behaviors means that drivers are not required to be constantly present for the bus (or loom) to operate. Rather than drive a single bus, they can now “drive” a flock of buses. The drivers monitor where each bus is and how it’s tracking to schedule, with the system suggesting interventions to overcome problems such as a breakdown, congestion, or changed road conditions. The drivers can step in to pilot a particular bus should the conditions be too challenging (roadworks, perhaps, where markings and signaling are problematic), or to deal with an event that requires the human touch.

These buses could all be on the same route. A mobile driver might be responsible for four or five sequential buses on a route, zipping between them as needed to manage accidents or deal with customer complaints (or disagreements between customers). Or the driver might be responsible for buses in a geographic area, on multiple routes. It’s even possible to split the work, creating a desk-bound “driver” responsible for drone operation of a larger number of buses, while mobile and stationary drivers restrict themselves to incidents requiring a physical presence. School or community buses, for example, might have remote video monitoring while in transit, complemented by a human presence at stops.

Breaking the requirement that each bus have its own driver will provide us with an immediate productivity gain. If 10 drivers can manage 25 autonomous buses, then we will see productivity increase by a factor of 2.5, as we did with power looms: good jobs for the firm, as workers are more productive. Doing this requires an astute division of labor between mobile, stationary, and remote drivers, creating three different “bus driver” jobs that meet different work preferences: good jobs for the worker and the firm. Ensuring that these jobs involve workers as stakeholders in improving the system enables us to tap into learning-by-doing, allowing workers to continue to hone their craft and to deliver the subsequent productivity improvements that learning-by-doing provides, which is good for workers and the firm.

These jobs don’t require training in software development or AI. They do require many of the same skills as existing bus drivers: understanding traffic, managing customers, dealing with accidents, and other day-to-day challenges. Some new skills will also be required, such as training a bus where to park at a new bus stop (by doing it manually the first time), or managing a flock of buses remotely (by nudging routes and separations in response to incidents), though these skills are not a stretch. Drivers will require a higher level of numeracy and literacy than in the past though, as it is a document-driven world that we’re describing. Regardless, shifting from manual to autonomous buses does not imply making existing bus drivers redundant en masse. Many will make the transition on their own, others will require some help, and a few will require support to find new work.

The question, then, is: What to do with the productivity dividend? We could simply cut the cost of a bus ticket, passing the benefit on to existing patrons. Some of the saving might also be returned to the community, as public transport services are often subsidized. Another choice is to transform public transport, creating a more inclusive and equitable public transport system.

Buses are seen as an unreliable form of transport—schedules are sparse, with some buses running only hourly for part of the day and not at all otherwise, and route coverage is inadequate, leaving many (less fortunate) members of society in public transport deserts (locations more than 800 m from high-frequency public transport). We could rework the bus network to provide a more frequent service, as well as extending service into under-serviced areas, eliminating public transport deserts. The result could be a fairer and more equitable service at a similar cost to the old, with the same number of jobs. This has the potential to transform lives. Reliable bus services might result in higher patronage, resulting in more bus routes being created, more frequent services on existing bus routes, and more bus “drivers” being hired. Indeed, this is the pattern we saw with power looms during the Industrial Revolution. Improved productivity resulted in lower prices for cloth, enabling a broader section of the community to buy higher quality clothing, which increased demand and created more jobs for weavers. Automation can result in jobs that are good for the worker, firm, and society as a whole.

Automation can result in jobs that are good for the worker, firm, and society as a whole.

How will we shape the jobs of the future?

There is no inevitability about the nature of work in the future. Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable. Melvin Kranzberg, a historian specializing in the history of technology, captured this in his fourth law: “Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.”10

The jobs first created by the development of the moving assembly line were clearly unacceptable by social standards of the time. The solution was for society to establish social norms for the employee-employer relationship—with the legislation of the eight-hour workday an example of this—and the development of the social institutions to support this new relationship. New “sharing economy” jobs and AI encroaching into the workplace suggest that we might be reaching a similar point, with many firms feeling that they have no option but to create bad jobs if they want to survive. These bad jobs can carry an economic cost, as they drag profitability down. In this essay, as well as our previous one,11 we have argued that these bad jobs are also preventing us from capitalizing on the opportunity created by AI.

Our relationship with technology has changed, and how we conceive of work needs to change as a consequence. Prior to the Industrial Revolution, work was predominantly craft-based; we had an instrumental relationship with technology; and social norms and institutions were designed to support craft-based work. After the Industrial Revolution, with the development of the moving production line as the tipping point, work was based on task specialization, and a new set of social norms and institutions was developed to support work built around products, tasks, and the skills required to prosecute them. With the advent of AI, our relationship with technology is changing again, and this automation is better thought of as capturing behaviors rather than tasks. As we stated previously, if automation in the industrial era was the replication of tasks previously isolated and defined for humans, then in this post-industrial era automation might be the replication of isolated and well-defined behaviors that were previously unique to humans.12

There are many ways to package human and digital behaviors—many ways of constructing the jobs of the future. We, as a community, get to determine what these jobs look like. This future will still require bus drivers, mining engineers and machinery operators, financial advisors, as well as social workers and those employed in the caring professions, as it is our human proclivity for noticing the new and unusual, and for making sense of the world, that creates value. Few people want financial products for their retirement fund; what they really want is a happy retirement. In a world of robo-advisors, all the value is created in the human conversation between financial advisors and clients, where they work together to discover what the client’s happy retirement is (and, consequently, investment goals, income streams, etc.), not in the mechanical creation and implementation of an investment strategy based on predefined parameters. If we’re to make the most of AI, realize the productivity (and, consequently, quality of life) improvements it promises, and deliver the opportunities for operational efficiency, then we need to choose to create good jobs:

  • Jobs that make the most of our human nature as social problem identifiers and solvers
  • Jobs that are productive and sustainable for organizations
  • Jobs with an employee-employer relationship aligned with social norms
  • Jobs that support learning by doing, providing for the worker’s personal development, for the improvement of the organization, and for the wealth of the community as a whole.

The question, then, is: What do we want these jobs of the future to look like?

5 Important Artificial Intelligence Predictions (For 2019) Everyone Should Read


Artificial Intelligence – specifically machine learning and deep learning – was everywhere in 2018 and don’t expect the hype to die down over the next 12 months.

The hype will die eventually of course, and AI will become another consistent thread in the tapestry of our lives, just like the internet, electricity, and combustion did in days of yore.

But for at least the next year, and probably longer, expect astonishing breakthroughs as well as continued excitement and hyperbole from commentators.


This is because expectations of the changes to business and society which AI promises (or in some cases threatens) to bring about go beyond anything dreamed up during previous technological revolutions.

AI points towards a future where machines not only do all of the physical work, as they have done since the industrial revolution, but also the “thinking” work – planning, strategizing and making decisions.

The jury’s still out on whether this will lead to a glorious utopia, with humans free to spend their lives on more meaningful pursuits rather than on those to which economic necessity dictates they dedicate their time, or to widespread unemployment and social unrest.


We probably won’t arrive at either of those outcomes in 2019, but it’s a topic which will continue to be hotly debated. In the meantime, here are five things that we can expect to happen:

  1. AI increasingly becomes a matter of international politics

2018 has seen major world powers increasingly putting up fences to protect their national interests when it comes to trade and defense. Nowhere has this been more apparent than in the relationship between the world’s two AI superpowers, the US and China.

In the face of US government tariffs and export restrictions on goods and services used to create AI, China has stepped up its efforts to become self-reliant when it comes to research and development.

Chinese tech manufacturer Huawei announced plans to develop its own AI processing chips, reducing the need for the country’s booming AI industry to rely on US manufacturers like Intel and Nvidia.

At the same time, Google has faced public criticism for its apparent willingness to do business with Chinese tech companies (many with links to the Chinese government) while withdrawing (after pressure from its employees) from arrangements to work with US government agencies due to concerns its tech may be militarised.

With nationalist politics enjoying a resurgence, there are two apparent dangers here.

Firstly, that artificial intelligence technology could be increasingly adopted by authoritarian regimes to restrict freedoms, such as the rights to privacy or free speech.

Secondly, that these tensions could compromise the spirit of cooperation between academic and industrial organizations across the world. This framework of open collaboration has been instrumental to the rapid development and deployment of AI technology we see taking place today, and putting up borders around a nation’s AI development is likely to slow that progress. In particular, it is expected to slow the development of common standards around AI and data, which could greatly increase the usefulness of AI.

  2. A Move Towards “Transparent AI”

The adoption of AI across wider society – particularly when it involves dealing with human data – is hindered by the “black box problem”: to most people, its workings seem arcane and unfathomable without a thorough understanding of what it’s actually doing.

To achieve its full potential AI needs to be trusted – we need to know what it is doing with our data, why, and how it makes its decisions when it comes to issues that affect our lives. This is often difficult to convey – particularly as what makes AI particularly useful is its ability to draw connections and make inferences which may not be obvious or may even seem counter-intuitive to us.

But building trust in AI systems isn’t just about reassuring the public. Research and business will also benefit from openness which exposes bias in data or algorithms. Reports have even found that companies are sometimes holding back from deploying AI due to fears they may face liabilities in the future if current technology is later judged to be unfair or unethical.

In 2019 we’re likely to see an increased emphasis on measures designed to increase the transparency of AI. This year IBM unveiled its AI OpenScale technology, developed to improve the traceability of AI decisions. This gives real-time insights into not only what decisions are being made, but how they are being made, drawing connections between the data used, decision weighting, and the potential for bias in the information.

The General Data Protection Regulation, put into action across Europe this year, gives citizens some protection against decisions which have “legal or other significant” impact on their lives made solely by machines. While it isn’t yet a blisteringly hot political potato, its prominence in public discourse is likely to grow during 2019, further encouraging businesses to work towards transparency.

  3. AI and automation drilling deeper into every business

In 2018, companies began to get a firmer grip on the realities of what AI can and can’t do. After spending the previous few years getting their data in order and identifying areas where AI could bring quick rewards, or fail fast, big business as a whole is ready to move ahead with proven initiatives, moving from piloting and soft-launching to global deployment.

In financial services, vast real-time logs of thousands of transactions per second are routinely parsed by machine learning algorithms. Retailers are proficient at grabbing data through till receipts and loyalty programmes and feeding it into AI engines to work out how to get better at selling us things. Manufacturers use predictive technology to know precisely what stresses machinery can be put under and when it is likely to break down or fail.

In 2019 we’ll see growing confidence that this smart, predictive technology, bolstered by learnings it has picked up in its initial deployments, can be rolled out wholesale across all of a business’s operations.

AI will branch out into support functions such as HR or optimizing supply chains, where decisions around logistics, as well as hiring and firing, will become increasingly informed by automation. AI solutions for managing compliance and legal issues are also likely to be increasingly adopted. As these tools will often be fit-for-purpose across a number of organizations, they will increasingly be offered as-a-service, offering smaller businesses a bite of the AI cherry, too.

We’re also likely to see an increase in businesses using their data to generate new revenue streams. Building up big databases of transactions and customer activity within its industry essentially lets any sufficiently data-savvy business begin to “Googlify” itself. Becoming a source of data-as-a-service has been transformational for businesses such as John Deere, which offers analytics based on agricultural data to help farmers grow crops more efficiently. In 2019 more companies will adopt this strategy as they come to understand the value of the information they own.

  4. More jobs will be created by AI than will be lost to it.

As I mentioned in my introduction to this post, in the long term it’s uncertain whether the rise of the machines will lead to human unemployment and social strife, a utopian workless future, or (probably more realistically) something in between.

For the next year at least, though, it seems AI isn’t going to be immediately problematic in this regard. Gartner predicts that by the end of 2019, AI will be creating more jobs than it is taking.

While 1.8 million jobs will be lost to automation – with manufacturing in particular singled out as likely to take a hit – 2.3 million will be created. In particular, Gartner’s report finds, these could be focused on education, healthcare, and the public sector.

A likely driver for this disparity is the emphasis placed on rolling out AI in an “augmenting” capacity when it comes to deploying it in non-manual jobs. Warehouse workers and retail cashiers have often been replaced wholesale by automated technology. But when it comes to doctors and lawyers, AI service providers have made a concerted effort to present their technology as something that can work alongside human professionals, assisting them with repetitive tasks while leaving the “final say” to them.

This means those industries benefit from the growth in human jobs on the technical side – those needed to deploy the technology and train the workforce on using it – while retaining the professionals who carry out the actual work.

For financial services, the outlook is perhaps slightly grimmer. Some estimates, such as those made by former Citigroup CEO Vikram Pandit in 2017, predict that the sector’s human workforce could be 30% smaller within five years. With back-office functions increasingly being managed by machines, we could be well on our way to seeing that come true by the end of next year.

  5. AI assistants will become truly useful

AI is genuinely interwoven into our lives now, to the point that most people don’t give a second thought to the fact that when they search Google, shop at Amazon or watch Netflix, highly precise, AI-driven predictions are at work to make the experience flow.

A slightly more apparent sense of engagement with robotic intelligence comes about when we interact with AI assistants – Siri, Alexa, or Google Assistant, for example – to help us make sense of the myriad of data sources available to us in the modern world.

In 2019, more of us than ever will use an AI assistant to arrange our calendars, plan our journeys and order a pizza. These services will become increasingly useful as they learn to anticipate our behaviors better and understand our habits.

Data gathered from users allows application designers to understand exactly which features are providing value, and which are underused, perhaps consuming valuable resources (through bandwidth or reporting) which could be better used elsewhere.

As a result, functions which we do want to use AI for – such as ordering taxis and food deliveries, and choosing restaurants to visit – are becoming increasingly streamlined and accessible.

On top of this, AI assistants are designed to become increasingly efficient at understanding their human users, as the natural language algorithms used to encode speech into computer-readable data, and vice versa, are exposed to more and more information about how we communicate.

It’s evident that conversations between us and Alexa or Google Assistant can seem very stilted today. However, the rapid acceleration of understanding in this field means that, by the end of 2019, we will be getting used to far more natural and flowing discourse with the machines we share our lives with.

 

Google’s New AI Is a Master of Games, but How Does It Compare to the Human Mind?


After building AlphaGo to beat the world’s best Go players, Google DeepMind built AlphaZero to take on the world’s best machine players

Google’s new artificial intelligence program, AlphaZero, taught itself to play chess, shogi, and Go in a matter of hours, and outperforms the top-ranking AIs in the gameplay arena.

For humans, chess may take a lifetime to master. But Google DeepMind’s new artificial intelligence program, AlphaZero, can teach itself to conquer the board in a matter of hours.

Building on its past success with the AlphaGo suite—a series of computer programs designed to play the Chinese board game Go—Google boasts that its new AlphaZero achieves a level of “superhuman performance” at not just one board game, but three: Go, chess, and shogi (essentially, Japanese chess). The team of computer scientists and engineers, led by Google’s David Silver, reported its findings recently in the journal Science.

“Before this, with machine learning, you could get a machine to do exactly what you want—but only that thing,” says Ayanna Howard, an expert in interactive computing and artificial intelligence at the Georgia Institute of Technology who did not participate in the research. “But AlphaZero shows that you can have an algorithm that isn’t so [specific], and it can learn within certain parameters.”

AlphaZero’s clever programming certainly ups the ante on gameplay for human and machine alike, but Google has long had its sights set on something bigger: engineering intelligence.

The researchers are careful not to claim that AlphaZero is on the verge of world domination (others have been a little quicker to jump the gun). Still, Silver and the rest of the DeepMind squad are already hopeful that they’ll someday see a similar system applied to drug design or materials science.

So what makes AlphaZero so impressive?

Gameplay has long been revered as a gold standard in artificial intelligence research. Structured, interactive games are simplifications of real-world scenarios: Difficult decisions must be made; wins and losses drive up the stakes; and prediction, critical thinking, and strategy are key.

Encoding this kind of skill is tricky. Older game-playing AIs—including the first prototypes of the original AlphaGo—have traditionally been pumped full of codes and data to mimic the experience typically earned through years of natural, human gameplay (essentially, a passive, programmer-derived knowledge dump). With AlphaGo Zero (the most recent version of AlphaGo), and now AlphaZero, the researchers gave the program just one input: the rules of the game in question. Then, the system hunkered down and actively learned the tricks of the trade itself.

AlphaZero is based on AlphaGo Zero, part of the AlphaGo suite designed to play the Chinese board game Go. Early iterations of the original program were fed data from human-versus-human games; later versions engaged in self-teaching, wherein the software played games against itself to learn its own strategy.

This strategy, called self-play reinforcement learning, is pretty much exactly what it sounds like: To train for the big leagues, AlphaZero played itself in iteration after iteration, honing its skills by trial and error. And the brute-force approach paid off. Unlike AlphaGo Zero, AlphaZero doesn’t just play Go: It can beat the best AIs in the business at chess and shogi, too. The learning process is also impressively efficient, requiring only two, four, or 30 hours of self-tutelage to outperform programs specifically tailored to master shogi, chess, and Go, respectively. Notably, the study authors didn’t report any instances of AlphaZero going head-to-head with an actual human, Howard says. (The researchers may have assumed that, given that these programs consistently clobber their human counterparts, such a matchup would have been pointless.)
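
To give a feel for what self-play reinforcement learning means, here is a deliberately tiny sketch, nothing like AlphaZero's actual architecture (no neural network, no tree search): a program is given only the rules of a toy game (Nim: take one to three stones, taking the last stone wins) and improves by playing against itself and updating a value table from the outcomes it observes.

import random

values = {0: 0.0}    # state -> estimated value for the player about to move
ALPHA = 0.1          # learning rate
EPSILON = 0.1        # exploration rate

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose_move(stones):
    """Mostly pick the move that leaves the opponent in the worst position."""
    if random.random() < EPSILON:
        return random.choice(legal_moves(stones))
    return min(legal_moves(stones), key=lambda m: values.get(stones - m, 0.5))

def play_one_game():
    """Play one game against itself, then update the value table from the result."""
    stones, history, player = 15, [], 0
    while stones > 0:
        history.append((stones, player))
        stones -= choose_move(stones)
        player = 1 - player
    winner = 1 - player   # the player who took the last stone
    for state, mover in history:
        target = 1.0 if mover == winner else 0.0
        old = values.get(state, 0.5)
        values[state] = old + ALPHA * (target - old)

for _ in range(20000):
    play_one_game()

# States that are multiples of 4 should tend to look bad for the player to
# move (the known losing positions in this variant of Nim), even though the
# program was never told that fact.
print({s: round(v, 2) for s, v in sorted(values.items())})

AlphaZero replaces the value table with a deep neural network and guides its move selection with Monte Carlo tree search, but the self-play loop, play, record, update, play again, is the same basic idea.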

AlphaZero was also able to trounce Stockfish (the now unseated AI chess master) and Elmo (the former AI shogi expert) despite evaluating fewer possible next moves on each turn during game play. But because the algorithms in question are inherently different, and may consume different amounts of power, it’s difficult to directly compare AlphaZero to other, older programs, points out Joanna Bryson, who studies artificial intelligence at the University of Bath in the United Kingdom and did not contribute to AlphaZero.

Google keeps mum about a lot of the fine print on its software, and AlphaZero is no exception. While we don’t know everything about the program’s power consumption, what’s clear is this: AlphaZero has to be packing some serious computational ammo. In those scant hours of training, the program kept itself very busy, engaging in tens or hundreds of thousands of practice rounds to get its board game strategy up to snuff—far more than a human player would need (or, in most cases, could even accomplish) in pursuit of proficiency.

This intensive regimen also used 5,000 of Google’s proprietary machine-learning processor units, or TPUs, which by some estimates consume around 200 watts per chip. No matter how you slice it, AlphaZero requires way more energy than a human brain, which runs on about 20 watts.

The absolute energy consumption of AlphaZero must be taken into consideration, adds Bin Yu, who works at the interface of statistics, machine learning, and artificial intelligence at the University of California, Berkeley. AlphaZero is powerful, but might not be good bang for the buck—especially when adding in the person-hours that went into its creation and execution.

Energetically expensive or not, AlphaZero makes a splash: Most AIs are hyper-specialized on a single task, making this new program—with its triple threat of game play—remarkably flexible. “It’s impressive that AlphaZero was able to use the same architecture for three different games,” Yu says.

So, yes. Google’s new AI does set a new mark in several ways. It’s fast. It’s powerful. But does that make it smart?

This is where definitions start to get murky. “AlphaZero was able to learn, starting from scratch without any human knowledge, to play each of these games to superhuman level,” DeepMind’s Silver said in a statement to the press.

Even if board game expertise requires mental acuity, all proxies for the real world have their limits. In its current iteration, AlphaZero maxes out by winning human-designed games—which may not warrant the potentially alarming label of “superhuman.” Plus, if surprised with a new set of rules mid-game, AlphaZero might get flummoxed. The actual human brain, on the other hand, can store far more than three board games in its repertoire.

What’s more, comparing AlphaZero’s baseline to a tabula rasa (blank slate), as the researchers do, is a stretch, Bryson says. Programmers are still feeding it one crucial morsel of human knowledge: the rules of the game it’s about to play. “It does have far less to go on than anything has before,” Bryson adds, “but the most fundamental thing is, it’s still given rules. Those are explicit.”

And those pesky rules could constitute a significant crutch. “Even though these programs learn how to perform, they need the rules of the road,” Howard says. “The world is full of tasks that don’t have these rules.”

When push comes to shove, AlphaZero is an upgrade of an already powerful program—AlphaGo Zero, explains JoAnn Paul, who studies artificial intelligence and computational dreaming at the Virginia Polytechnic Institute and State University and was not involved in the new research. AlphaZero uses many of the same building blocks and algorithms as AlphaGo Zero, and still constitutes just a subset of true smarts. “I thought this new development was more evolutionary than revolutionary,” she adds. “None of these algorithms can create. Intelligence is also about storytelling. It’s imagining things that are not yet there. We’re not thinking in those terms in computers.”

Part of the problem is, there’s still no consensus on a true definition of “intelligence,” Yu says—and not just in the domain of technology. “It’s still not clear how we are training critically thinking beings, or how we use the unconscious brain,” she adds.

To this point, many researchers believe there are likely multiple types of intelligence. And tapping into one far from guarantees the ingredients for another. For instance, some of the smartest people out there are terrible at chess.

With these limitations, Yu’s vision of the future of artificial intelligence partners humans and machines in a kind of coevolution. Machines will certainly continue to excel at certain tasks, she explains, but human input and oversight may always be necessary to compensate for the unautomated.

Of course, there’s no telling how things will shake out in the AI arena. In the meantime, we have plenty to ponder. “These computers are powerful, and can do certain things better than a human can,” Paul says. “But that still falls short of the mystery of intelligence.”

Questions for Artificial Intelligence in Health Care


Artificial intelligence (AI) is gaining high visibility in the realm of health care innovation. Broadly defined, AI is a field of computer science that aims to mimic human intelligence with computer systems.1 This mimicry is accomplished through iterative, complex pattern matching, generally at a speed and scale that exceed human capability. Proponents suggest, often enthusiastically, that AI will revolutionize health care for patients and populations. However, key questions must be answered to translate its promise into action.

What Are the Right Tasks for AI in Health Care?

At its core, AI is a tool. Like all tools, it is better deployed for some tasks than for others. In particular, AI is best used when the primary task is identifying clinically useful patterns in large, high-dimensional data sets. Ideal data sets for AI also have accepted criterion standards that allow AI algorithms to “learn” within the data. For example, BRCA1 is a known genetic sequence linked to breast cancer, and AI algorithms can use that as the “source of truth” criterion when specifying models to predict breast cancer. With appropriate data, AI algorithms can identify subtle and complex associations that are unavailable with traditional analytic approaches, such as multiple small changes on a chest computed tomographic image that collectively indicate pneumonia. Such algorithms can be reliably trained to analyze these complex objects and process the data, images, or both at a high speed and scale. Early AI successes have been concentrated in image-intensive specialties, such as radiology, pathology, ophthalmology, and cardiology.2,3
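
What "learning against a criterion standard" looks like in practice can be sketched with a toy supervised-learning example: the labels stand in for an accepted ground truth, the model is fitted to cases where that truth is known, and it is then judged on held-out cases. The data below are synthetic stand-ins, not real genomic or imaging data.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# 1,000 hypothetical patients, 20 numeric features (e.g., image-derived
# measurements); the label plays the role of the criterion standard.
X = rng.normal(size=(1000, 20))
true_weights = rng.normal(size=20)
y = (X @ true_weights + rng.normal(scale=2.0, size=1000)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

# Discrimination on cases the model has never seen, judged against the same
# criterion standard it was trained on.
print("held-out AUC:", round(roc_auc_score(y_test, probs), 2))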

However, many core tasks in health care, such as clinical risk prediction, diagnostics, and therapeutics, are more challenging for AI applications. For many clinical syndromes, such as heart failure or delirium, there is a lack of consensus about criterion standards on which to train AI algorithms. In addition, many AI techniques center on data classification rather than a probabilistic analytic approach; this focus may make AI output less suited to clinical questions that require probabilities to support clinical decision making.4 Moreover, AI-identified associations between patient characteristics and treatment outcomes are only correlations, not causative relationships. As such, results from these analyses are not appropriate for direct translation to clinical action, but rather serve as hypothesis generators for clinical trials and other techniques that directly assess cause-and-effect relationships.

What Are the Right Data for AI?

AI is most likely to succeed when used with high-quality data sources on which to “learn” and classify data in relation to outcomes. However, most clinical data, whether from electronic health records (EHRs) or medical billing claims, remain ill-defined and largely insufficient for effective exploitation by AI techniques. For example, EHR data on demographics, clinical conditions, and treatment plans are generally of low dimensionality and are recorded in limited, broad categorizations (eg, diabetes) that omit specificity (eg, duration, severity, and pathophysiologic mechanism). A potential approach to improving the dimensionality of clinical data sets could use natural language processing to analyze unstructured data, such as clinician notes. However, many natural language processing techniques are crude and the necessary amount of specificity is often absent from the clinical record.
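
The crudeness described here is easy to see in a toy example. The sketch below uses simple pattern matching, with an invented note and invented patterns, to pull structure out of free text; it captures some useful specificity but also stumbles over negation.

import re

note = ("Pt with type 2 diabetes x 8 years, on metformin. "
        "Denies chest pain. Hypertension well controlled.")

CONDITION_PATTERNS = {
    "diabetes":     re.compile(r"type\s*(1|2)\s*diabetes(?:\s*x\s*(\d+)\s*years)?", re.I),
    "hypertension": re.compile(r"hypertension", re.I),
    "chest_pain":   re.compile(r"chest pain", re.I),   # naive: ignores "denies"
}

extracted = {}
for name, pattern in CONDITION_PATTERNS.items():
    match = pattern.search(note)
    if match:
        extracted[name] = match.groups() or True

print(extracted)
# {'diabetes': ('2', '8'), 'hypertension': True, 'chest_pain': True}
# The diabetes mention now carries type and duration, but "chest pain" is a
# false positive because the negation ("denies") was ignored: exactly the kind
# of crudeness and missing specificity described in the text.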

Clinical data are also limited by potentially biased sampling. Because EHR data are collected during health care delivery (eg, clinic visits, hospitalizations), these data oversample sicker populations. Similarly, billing data overcapture conditions and treatments that are well-compensated under current payment mechanisms. A potential approach to overcome this issue may involve wearable sensors and other “quantified self” approaches to data collection outside of the health care system. However, many such efforts are also biased because they oversample the healthy, wealthy, and well. These biases can result in AI-generated analyses that produce flawed associations and insights that will likely fail to generalize beyond the population in which they are generated.5

What Is the Right Evidence Standard for AI?

Innovations in medications and medical devices are required to undergo extensive evaluation, often including randomized clinical trials and postmarketing surveillance, to validate clinical effectiveness and safety. If AI is to directly influence and improve clinical care delivery, then an analogous evidence standard is needed to demonstrate improved outcomes and a lack of unintended consequences. The evidence standard for AI tasks is currently ill-defined but likely should be proportionate to the task at hand. For example, validating the accuracy of AI-enabled imaging applications against current quality standards for traditional imaging is likely sufficient for clinical use. However, as AI applications move to prediction, diagnosis, and treatment, the standard for proof should be significantly higher.1 To this end, the US Food and Drug Administration is actively considering how best to regulate AI-fueled innovations in care delivery, attempting to strike a reasonable balance between innovation, safety, and efficacy.

Using AI in clinical care will need to meet particularly high standards to satisfy clinicians and patients. Even if the AI approach has demonstrated improvements over other approaches, it is not (and never will be) perfect, and mistakes, no matter how infrequent, will drive significant, negative perceptions. An instructive example can be seen with another AI-fueled innovation: driverless cars. Although these vehicles are, on average, safer than human drivers, a pedestrian death due to a driverless car error caused great alarm. A clinical mistake made by an AI-enabled process would have a significant chilling effect. Thus, ensuring the appropriate level of oversight and regulation is a critical step in introducing AI into the clinical arena.

In addition to demonstrating its clinical effectiveness, evaluation of the cost-effectiveness of AI is also important. Huge investments into AI are being made with promised efficiencies and assumed cost reductions in return, similar to robotic surgery. However, it is unclear that AI techniques, with their attendant needs for data storage, data curation, model maintenance and updating, and data visualization, will significantly reduce costs. These tools and related needs may simply replace current costs with different, and potentially higher, costs.

What Are the Right Approaches for Integrating AI Into Clinical Care?

Even after the correct tasks, data, and evidence for AI are addressed, realization of its potential will not occur without effective integration into clinical care. To do so requires that clinicians develop a facility with interpreting and integrating AI-supported insights in their clinical care. In many ways, this need is identical to the integration of more traditional clinical decision support that has been a part of medicine for the past several decades. However, use of deep learning and other analytic approaches in AI adds an additional challenge. Because these techniques, by definition, generate insights via unobservable methods, clinicians cannot apply the face validity available in more traditional clinical decision tools (eg, integer-based scores to calculate stroke risk among patients with atrial fibrillation). This “black box” nature of AI may thus impede the uptake of these tools into practice.
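To make the contrast concrete, below is a minimal sketch of what such an integer-based tool looks like in code, loosely modeled on CHA2DS2-VASc-style scoring. The function name, criteria, weights, and example values are illustrative only and do not constitute a validated clinical tool; the point is that every point in the total traces back to a named, recognizable risk factor, which is exactly the face validity a black-box model cannot offer.

# Hypothetical sketch of an integer-based clinical score (CHA2DS2-VASc-style),
# for illustration only; not a validated clinical tool.
def stroke_risk_score(chf, hypertension, age, diabetes, prior_stroke_tia,
                      vascular_disease, female):
    """Return an additive risk score built from explicit, inspectable criteria."""
    score = 0
    score += 1 if chf else 0                               # congestive heart failure
    score += 1 if hypertension else 0                      # hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)   # age bands
    score += 1 if diabetes else 0                          # diabetes mellitus
    score += 2 if prior_stroke_tia else 0                  # prior stroke / TIA
    score += 1 if vascular_disease else 0                  # vascular disease
    score += 1 if female else 0                            # sex category
    return score

print(stroke_risk_score(chf=False, hypertension=True, age=70, diabetes=True,
                        prior_stroke_tia=False, vascular_disease=False,
                        female=True))                      # prints 4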

AI techniques also threaten to add to the amount of information that clinical teams must assimilate to deliver care. While AI can potentially introduce efficiencies to processes, including risk prediction and treatment selection, history suggests that most forms of clinical decision support add to, rather than replace, the information clinicians need to process. As a result, there is a risk that integrating AI into clinical workflow could significantly increase the cognitive load facing clinical teams and lead to higher stress, lower efficiency, and poorer clinical care.

Ideally, with appropriate integration of AI into clinical workflow, AI can define clinical patterns and insights beyond current human capabilities and free clinicians from some of the burden of integrating the vast and growing amounts of health data and knowledge into clinical workflow and practice. Clinicians can then focus on placing these insights into clinical context for their patients and return to their core (and fundamentally human) task of attending to patient needs and values in achieving their optimal health.6 This combination of AI and human intelligence, or augmented intelligence, is likely the most powerful approach to achieving this fundamental mission of health care.

A Balanced View of AI

AI is a promising tool for health care, and efforts should continue to bring innovations such as AI to clinical care delivery. However, inconsistent data quality, limited evidence supporting the clinical efficacy of AI, and lack of clarity about the effective integration of AI into clinical workflow are significant issues that threaten its application. Whether AI will ultimately improve quality of care at reasonable cost remains an unanswered, but critical, question. Without the difficult work needed to address these issues, the medical community risks falling prey to the hype of AI and missing the realization of its potential.


Article Information

Corresponding Author: Thomas M. Maddox, MD, MSc, Cardiovascular Division, Washington University School of Medicine/BJC Healthcare, Campus Box 8086, 660 S Euclid, St Louis, MO 63110 (tmaddox@wustl.edu).

Published Online: December 10, 2018. doi:10.1001/jama.2018.18932

Conflict of Interest Disclosures: Dr Maddox reports employment at the Washington University School of Medicine as both a staff cardiologist and the director of the BJC HealthCare/Washington University School of Medicine Healthcare Innovation Lab; grant funding from the National Center for Advancing Translational Sciences that supports building a national data center for digital health informatics innovation; and consultation for Creative Educational Concepts. Dr Rumsfeld reports employment at the American College of Cardiology as the chief innovation officer. Dr Payne reports employment at the Washington University School of Medicine as the director of the Institute for Informatics; grant funding from the National Institutes of Health, National Center for Advancing Translational Sciences, National Cancer Institute, Agency for Healthcare Research and Quality, AcademyHealth, Pfizer, and the Hairy Cell Leukemia Foundation; academic consulting at Case Western Reserve University, Cleveland Clinic, Columbia University, Stonybrook University, University of Kentucky, West Virginia University, Indiana University, The Ohio State University, Geisinger Commonwealth School of Medicine; international partnerships at Soochow University (China), Fudan University (China), Clinica Alemana (Chile), Universidad de Chile (Chile); consulting for American Medical Informatics Association (AMIA), National Academy of Medicine, Geisinger Health System; editorial board membership for JAMIA, JAMIA Open, Joanna Briggs Institute, Generating Evidence & Methods to improve patient outcomes, BioMed Central Medical Informatics and Decision Making; and corporate relationships with Signet Accel Inc, Aver Inc, and Cultivation Capital.

References
1. Stead WW. Clinical implications and challenges of artificial intelligence and deep learning. JAMA. 2018;320(11):1107-1108. doi:10.1001/jama.2018.11029
2. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216
3. Zhang J, Gajjala S, Agrawal P, et al. Fully automated echocardiogram interpretation in clinical practice. Circulation. 2018;138(16):1623-1635. doi:10.1161/CIRCULATIONAHA.118.034338
4. Harrell F. Is medicine mesmerized by machine learning? Statistical Thinking website. http://fharrell.com/post/medml/. Published February 1, 2018. Accessed October 26, 2018.
5. Gianfrancesco MA, Tamang S, Yazdany J, Schmajuk G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern Med. 2018;178(11):1544-1547. doi:10.1001/jamainternmed.2018.3763
6. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. 2018;319(1):19-20. doi:10.1001/jama.2017.19198

What is AI? Everything you need to know about Artificial Intelligence


An executive guide to artificial intelligence, from machine learning and general AI to neural networks.

What is artificial intelligence (AI)?

It depends who you ask.


Back in the 1950s, the fathers of the field, Marvin Minsky and John McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviors associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion and manipulation, and, to a lesser extent, social intelligence and creativity.

What are the uses for AI?

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon’s Alexa and Apple’s Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

What are the different types of AI?

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

What can narrow AI do?

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

What can general AI do?

Artificial general intelligence is very different: it is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or of reasoning about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but it doesn’t exist today, and AI experts are fiercely divided over how soon it will become a reality.

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C Müller and philosopher Nick Bostrom reported a 50 percent chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The respondents went even further, predicting that so-called ‘superintelligence’, which Bostrom defines as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”, was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

What is machine learning?

There is a broad body of research in AI, much of which feeds into and complements the rest.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

What are neural networks?

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have ‘learned’ how to carry out a particular task.
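As a rough illustration of that weight-adjustment loop, and not any particular library’s API, the sketch below trains a single sigmoid “neuron” with plain NumPy on synthetic data; the data, target rule, and learning rate are arbitrary toy choices.

# Minimal sketch of training by weight adjustment: a single sigmoid neuron
# nudged by gradient descent until its output approaches the desired labels.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # 100 examples, 3 input features
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy target the neuron should learn

w = np.zeros(3)                                # the "importance" attached to each input
b = 0.0
lr = 0.5                                       # learning rate

for step in range(500):
    pred = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
    error = pred - y
    w -= lr * X.T @ error / len(y)             # move weights toward the target
    b -= lr * error.mean()

print("trained weights:", w)                   # weights on the first two inputs grow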

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.


There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.
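As a hedged illustration of that difference, the PyTorch sketch below places a small convolutional stack for 32x32 images next to an LSTM for sequences; the layer sizes, input shapes, and class count are placeholder assumptions rather than anything from a production model.

import torch
import torch.nn as nn

# Convolutional net: slides small filters over an image to detect local patterns.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3-channel image in, 16 feature maps out
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                  # assumes 32x32 inputs and 10 classes
)

# Recurrent net: processes a sequence step by step, carrying a memory state.
lstm = nn.LSTM(input_size=80, hidden_size=128, batch_first=True)

image_batch = torch.randn(4, 3, 32, 32)           # 4 RGB images
seq_batch = torch.randn(4, 50, 80)                # 4 sequences of 50 steps, 80 features each

print(cnn(image_batch).shape)                     # torch.Size([4, 10])
out, (h, c) = lstm(seq_batch)
print(out.shape)                                  # torch.Size([4, 50, 128])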

[Image: The structure and training of deep neural networks. Source: Nuance]

Another area of AI research is evolutionary computation, which borrows from Darwin’s famous theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
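A toy version of that evolutionary loop is sketched below; the target vector, mutation rate, and population size are invented for illustration, and real neuroevolution systems evolve network weights or architectures rather than a four-number list.

# Toy genetic algorithm: mutate and recombine candidates across generations,
# keeping the fittest. The "problem" here is simply matching a target vector.
import random

TARGET = [0.1, 0.9, -0.4, 0.7]

def fitness(candidate):
    # Higher is better: negative squared distance from the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.3):
    return [c + random.gauss(0, 0.1) if random.random() < rate else c for c in candidate]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection: keep the best
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children                # next generation

print(max(population, key=fitness))                # ends up close to TARGET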

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behavior of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.
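A minimal sketch of that rule-based idea follows; the rules, thresholds, and “facts” are invented for illustration and bear no relation to a real autopilot or clinical system.

# Tiny rule-based "expert system": hand-written rules, no learning involved.
RULES = [
    (lambda f: f["altitude"] < 500 and f["descending"], "PULL UP"),
    (lambda f: f["airspeed"] < 120,                     "INCREASE THRUST"),
    (lambda f: f["heading_error"] > 5,                  "ADJUST HEADING"),
]

def decide(facts):
    """Fire every rule whose condition matches the current facts."""
    fired = [action for condition, action in RULES if condition(facts)]
    return fired or ["MAINTAIN COURSE"]

print(decide({"altitude": 450, "descending": True,
              "airspeed": 140, "heading_error": 2}))    # ['PULL UP']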

What is fueling the resurgence in AI?

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialized chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google’s Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google’s TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine-learning models using Google’s TensorFlow Research Cloud. The second generation of these chips was unveiled at Google’s I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of top-end graphics processing units (GPUs).

What are the elements of machine learning?

As mentioned, machine learning is a subset of AI and is generally split into two main categories, supervised and unsupervised learning, with reinforcement learning (covered below) often treated as a third.

Supervised learning

A common technique for teaching AI systems is to train them using a very large number of labeled examples. These machine-learning systems are fed huge amounts of data that has been annotated to highlight the features of interest. These might be photos labeled to indicate whether they contain a dog, or written sentences that have footnotes to indicate whether the word ‘bass’ relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that’s just been uploaded.

This process of teaching a machine by example is called supervised learning and the role of labeling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
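In code, that supervised recipe can be as short as the scikit-learn sketch below, which uses synthetic labeled data standing in for the annotated photos or sentences described above; the model choice and dataset parameters are arbitrary assumptions.

# Supervised learning in miniature: fit on labeled examples, then score on unseen data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                    # learn from the labeled examples

print("accuracy on unseen data:", model.score(X_test, y_test))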

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively — although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size — Google’s Open Images Dataset has about nine million images, while its labeled video repository YouTube-8M links to seven million labeled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people — most of whom were recruited through Amazon Mechanical Turk — who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.
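The PyTorch sketch below shows the adversarial setup in miniature: a generator learns to produce samples a discriminator cannot distinguish from a toy “real” distribution. The network sizes, one-dimensional data, and training schedule are all simplifying assumptions, far from the image-scale GANs discussed above.

# Stripped-down GAN: generator G maps noise to samples, discriminator D
# tries to tell G's output apart from "real" data drawn from a Gaussian.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data clusters around 3
    fake = G(torch.randn(64, 8))

    # Train the discriminator to separate real from fake.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())     # samples should drift toward ~3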

Unsupervised learning

Unsupervised learning, in contrast, tasks algorithms with identifying patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn’t set up in advance to pick out specific types of data; it simply looks for data that can be grouped by its similarities, as when Google News groups together stories on similar topics each day.
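A compact illustration using scikit-learn’s k-means: the algorithm is handed unlabeled points (here, invented fruit weights and sizes) and finds the groups on its own; the numbers are made up purely for the example.

# Unsupervised clustering: no labels are provided, groups emerge from similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
light = rng.normal(loc=[150, 7], scale=5, size=(50, 2))     # e.g. apples: grams, cm
heavy = rng.normal(loc=[1200, 20], scale=30, size=(50, 2))  # e.g. melons
fruit = np.vstack([light, heavy])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fruit)
print(clusters[:5], clusters[-5:])    # two distinct groups found without any labels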

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximize a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind’s Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game the system builds a model of which action will maximize the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
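Stripped of the game-playing scale, that trial-and-error loop can be sketched as tabular Q-learning on a toy five-state corridor; the environment, reward, and hyperparameters below are illustrative assumptions rather than anything used in the Deep Q-network itself.

# Tabular Q-learning: try actions, collect rewards, and update a table of
# state-action values until the best action in each state emerges.
import random

N_STATES, ACTIONS = 5, [0, 1]          # actions: 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != N_STATES - 1:       # reaching the rightmost state ends the episode
        explore = random.random() < epsilon
        action = random.choice(ACTIONS) if explore else max(ACTIONS, key=lambda a: Q[state][a])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the value estimate toward the reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])  # typically [1, 1, 1, 1]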

Many AI-related technologies are approaching, or have already reached, the ‘peak of inflated expectations’ in Gartner’s Hype Cycle, with the backlash-driven ‘trough of disillusionment’ lying in wait.


Which are the leading firms in AI?


With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo AI, that has made the biggest impact on public awareness of AI.

Which AI services are available?

All of the major cloud platforms — Amazon Web Services, Microsoft Azure and Google Cloud Platform — provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units — custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data and prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing Cloud AutoML, a service that automates the creation of AI models. This drag-and-drop service builds custom image-recognition models without requiring the user to have any machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don’t want to build their own machine-learning models but instead want to consume AI-powered, on-demand services (such as voice, vision, and language recognition), Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella, and recently invested around $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Which of the major tech firms is winning the AI race?

Internally, each of the tech giants — and others such as Facebook — use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam — the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple’s Siri, Amazon’s Alexa, the Google Assistant, and Microsoft Cortana.

These assistants rely heavily on voice recognition and natural-language processing, and they need an immense corpus of data to draw upon when answering queries, so a huge amount of technology goes into developing them.

But while Apple’s Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space — Google Assistant with its ability to answer a wide range of queries and Amazon’s Alexa with the massive number of ‘Skills’ that third-party devs have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana’s days are numbered, although Microsoft was quick to reject this.

Which countries are leading the way in AI?

It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. And China is pursuing a three-step national plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China’s favor.

How can I get started with AI?

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

What are recent landmarks in the development of AI?

There are too many to put together a comprehensive list, but some recent highlights include the following: in 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each, setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson’s win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played “completely random” games against itself, and then learnt from the results. At last year’s prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world’s top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

How will AI change the world?

Robots and driverless cars


The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone’s voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people’s image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognize what people are saying with an accuracy of almost 95 percent. Recently Microsoft’s Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, provided the face is clear enough in the video. While police forces in western countries have generally only trialled facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior; they are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it’s likely this more intrusive use of AI technology — including AI that can recognize emotions — will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in X-rays, aiding researchers in spotting genetic sequences related to disease, and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM’s Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK’s National Health Service, where they will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Will AI kill us all?

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a “fundamental risk to the existence of human civilization”. As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI, he helped set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created, it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft’s director of research in Cambridge, England, stresses how different the narrow intelligence of today’s AI is from the general intelligence of humans. Asked about fears of “Terminator and the rise of the machines and so on”, his answer is blunt: “Utter nonsense, yes. At best, such discussions are decades away.”

Will an AI steal your job?

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.


While AI won’t replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn’t have the potential to impact. As AI expert Andrew Ng puts it: “Many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work.” He says he sees a “significant risk of technological unemployment over the next few decades”.

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers simply take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon is again leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers, who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it’s not a given that manual and robotic labor will continue to grow hand in hand.

Fully autonomous self-driving vehicles aren’t a reality yet, but by some predictions the self-driving trucking industry alone is poised to take over 1.7 million jobs in the next decade, even without considering the impact on couriers and taxi drivers.

Yet some of the easiest jobs to automate won’t even require robotics. At present there are millions of people working in administration, entering and copying data between systems, chasing and booking appointments for companies. As software gets better at automatically updating systems and flagging the information that’s important, so the need for administrators will fall.

As with every technological shift, new jobs will be created to replace those lost. However, what’s uncertain is whether these new roles will be created rapidly enough to offer employment to those displaced, and whether the newly unemployed will have the necessary skills or temperament to fill these emerging roles.

Not everyone is a pessimist. For some, AI is a technology that will augment, rather than replace, workers. Not only that but they argue there will be a commercial imperative to not replace people outright, as an AI-assisted worker — think a human concierge with an AR headset that tells them exactly what a client wants before they ask for it — will be more productive or effective than an AI working on its own.

Among AI experts there’s a broad range of opinion about how quickly artificially intelligent systems will surpass human capabilities.

Oxford University’s Future of Humanity Institute asked several hundred machine-learning experts to predict AI capabilities over the coming decades.

Notable dates included AI writing essays that could pass for being written by a human by 2026, truck drivers being made redundant by 2027, AI surpassing human capabilities in retail by 2031, writing a best-seller by 2049, and doing a surgeon’s work by 2053.

They estimated there was a relatively high chance that AI beats humans at all tasks within 45 years and automates all human jobs within 120 years.
