Can AI Save the Internet from Fake News?


There’s an old proverb that says “seeing is believing.” But in the age of artificial intelligence, it’s becoming increasingly difficult to take anything at face value—literally.

The rise of so-called “deepfakes,” in which different types of AI-based techniques are used to manipulate video content, has reached the point where Congress held its first hearing last month on the potential abuses of the technology. The congressional investigation coincided with the release of a doctored video of Facebook CEO Mark Zuckerberg delivering what appeared to be a sinister speech.

Scientists are scrambling for ways to combat deepfakes, even as others continue to refine the underlying techniques for less nefarious purposes, such as automating video content for the film industry.

At one end of the spectrum, for example, researchers at New York University’s Tandon School of Engineering have proposed using a neural network to embed a type of digital watermark that can reveal manipulated photos and videos.

The idea is to embed the system directly into a digital camera. Many smartphone cameras and other digital devices already use AI to boost image quality and make other corrections. The authors of the study out of NYU say their prototype platform increased the chances of detecting manipulation from about 45 percent to more than 90 percent without sacrificing image quality.

On the other hand, researchers at Carnegie Mellon University recently hit on a technique for automatically and rapidly converting large amounts of video content from one source into the style of another. In one example, the scientists transferred the facial expressions of comedian John Oliver onto the bespectacled face of late night show host Stephen Colbert.

The CMU team says the method could be a boon to the movie industry, such as by converting black-and-white films to color, though the researchers conceded that the technology could also be used to develop deepfakes.

Words Matter with Fake News

While the current spotlight is on how to combat video and image manipulation, prolonged trench warfare against fake news is being waged by academia, nonprofits, and the tech industry.

This isn’t “fake news” in the sense of the knee-jerk label some apply to fact-based reporting they find unflattering. Rather, fake news is deliberately created misinformation that is spread via the internet.

In a recent Pew Research Center poll, Americans said fake news is a bigger problem than violent crime, racism, and terrorism. Fortunately, many of the linguistic tools that have been applied to determine when people are being deliberately deceitful can be baked into algorithms for spotting fake news.

That’s the approach taken by a team at the University of Michigan (U-M) to develop an algorithm that was better than humans at identifying fake news—76 percent versus 70 percent—by focusing on linguistic cues like grammatical structure, word choice, and punctuation.

For example, fake news tends to be filled with hyperbole and exaggeration, using terms like “overwhelming” or “extraordinary.”

“I think that’s a way to make up for the fact that the news is not quite true, so trying to compensate with the language that’s being used,” Rada Mihalcea, a computer science and engineering professor at U-M, told Singularity Hub.

The paper “Automatic Detection of Fake News” was based on the team’s previous studies on how people lie in general, without necessarily having the intention of spreading fake news, she said.

“Deception is a complicated and complex phenomenon that requires brain power,” Mihalcea noted. “That often results in simpler language, where you have shorter sentences or shorter documents.”
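To make the idea concrete, here is a minimal sketch (not the U-M system) of a classifier built on surface-level linguistic cues of the kind Mihalcea describes. The feature choices, the hype-word list, and the tiny training set are illustrative assumptions; a real system would train on thousands of labeled articles and many more cues.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

HYPE_WORDS = {"overwhelming", "extraordinary", "unbelievable", "shocking"}

def features(text):
    """Crude surface cues: density of hype words and of exclamation marks.
    A fuller system would also use structural cues (sentence length, grammar,
    punctuation patterns), as the U-M work describes."""
    words = [w.strip(",.!?\"'") for w in text.lower().split()]
    return [
        sum(w in HYPE_WORDS for w in words) / max(len(words), 1),
        text.count("!") / max(len(text), 1),
    ]

# Two-article "training set" purely for illustration.
texts = [
    "Overwhelming, shocking evidence proves the extraordinary truth!!",  # fake
    "The committee published its quarterly report on Tuesday.",          # legitimate
]
labels = [1, 0]

clf = LogisticRegression().fit(np.array([features(t) for t in texts]), labels)
print(clf.predict([features("An unbelievable, overwhelming scandal erupts!!")]))  # -> [1]
```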

AI Versus AI

While most fake news is still churned out by humans with identifiable patterns of lying, according to Mihalcea, other researchers are already anticipating how to detect misinformation manufactured by machines.

A group led by Yejin Choi, with the Allen Institute for Artificial Intelligence and the University of Washington in Seattle, is one such team. The researchers recently introduced the world to Grover, an AI platform that is particularly good at catching autonomously generated fake news because it’s equally good at creating it.

“This is due to a finding that is perhaps counterintuitive: strong generators for neural fake news are themselves strong detectors of it,” wrote Rowan Zellers, a PhD student and team member, in a Medium blog post. “A generator of fake news will be most familiar with its own peculiarities, such as using overly common or predictable words, as well as the peculiarities of similar generators.”

The team found that the best current discriminators can classify neural fake news from real, human-created text with 73 percent accuracy. Grover clocks in with 92 percent accuracy based on a training set of 5,000 neural network-generated fake news samples. Zellers wrote that Grover got better at scale, identifying 97.5 percent of made-up machine mumbo jumbo when trained on 80,000 articles.
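Grover itself isn’t reproduced here, but the principle Zellers describes can be sketched with any public language model: score a passage by how predictable its tokens look to the model, since machine-generated text tends to be unusually predictable. The sketch below uses the openly released GPT-2 weights via the Hugging Face transformers library; the threshold is an invented placeholder that a real detector would calibrate on labeled data.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(text):
    """Average per-token log-likelihood of `text` under GPT-2 (higher = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return -loss.item()

def looks_machine_written(text, threshold=-3.0):
    # The threshold is arbitrary here; a real detector would calibrate it on data.
    return avg_log_likelihood(text) > threshold

print(looks_machine_written("The quick brown fox jumps over the lazy dog."))
```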

Grover performed almost as well against fake news created by a powerful new text-generation system called GPT-2, built by OpenAI, a research lab co-founded by Elon Musk, classifying 96.1 percent of the machine-written articles.

OpenAI so feared that the platform could be abused that it initially released only limited versions of the software. The public can play with a scaled-down version posted online by a machine learning engineer named Adam King: the user types in a short prompt, and GPT-2 bangs out a short story or poem based on the snippet of text.
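For readers who want to see that kind of demo in miniature (prompt in, continuation out), a rough sketch using the publicly released small GPT-2 checkpoint through Hugging Face’s transformers pipeline looks like the following; the hosted web demo itself is a separate service, and the prompt and sampling settings here are just examples.

```python
from transformers import pipeline, set_seed

set_seed(42)  # make the sampling repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "Once upon a time, a robot decided to write poetry, and"
result = generator(prompt, max_length=80, do_sample=True, top_k=40, temperature=0.8)
print(result[0]["generated_text"])
```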

No Silver AI Bullet

While real progress is being made against fake news, the challenges of using AI to detect and correct misinformation remain abundant, according to Hugo Williams, outreach manager for Logically, a UK-based startup developing detectors that draw on deep learning and natural language processing, among other techniques. He explained that Logically’s models analyze information using a three-pronged approach.

  • Publisher metadata: Is the article from a known, reliable, and trustworthy publisher with a history of credible journalism?
  • Network behavior: Is the article proliferating through social platforms and networks in ways typically associated with misinformation?
  • Content: The AI scans articles for hundreds of known indicators typically found in misinformation.

“There is no single algorithm which is capable of doing this,” Williams wrote in an email to Singularity Hub. “Even when you have a collection of different algorithms which—when combined—can give you relatively decent indications of what is unreliable or outright false, there will always need to be a human layer in the pipeline.”
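As a purely illustrative sketch of that combined pipeline (not Logically’s actual models), the three signals might be merged into a single verdict with a human-review escalation path, reflecting the “human layer” Williams describes. The weights and thresholds below are invented.

```python
from dataclasses import dataclass

@dataclass
class ArticleSignals:
    publisher_score: float  # 0-1: credibility of the source
    network_score: float    # 0-1: how "organic" its spread looks
    content_score: float    # 0-1: absence of known misinformation indicators

def assess(sig: ArticleSignals) -> str:
    # Invented weights; a real system would learn and validate these.
    combined = 0.4 * sig.publisher_score + 0.3 * sig.network_score + 0.3 * sig.content_score
    if combined > 0.75:
        return "likely reliable"
    if combined < 0.35:
        return "likely unreliable"
    return "send to human fact-checker"  # borderline cases always get human review

print(assess(ArticleSignals(publisher_score=0.2, network_score=0.4, content_score=0.5)))
```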

The company released a consumer app in India back in February, just before that country’s election cycle, which proved a “great testing ground” for refining its technology ahead of the next release, scheduled for the UK later this year. Users can submit articles for further scrutiny by a real person.

“We see our technology not as replacing traditional verification work, but as a method of simplifying and streamlining a very manual process,” Williams said. “In doing so, we’re able to publish more fact checks at a far quicker pace than other organizations.”

“With heightened analysis and the addition of more contextual information around the stories that our users are reading, we are not telling our users what they should or should not believe, but encouraging critical thinking based upon reliable, credible, and verified content,” he added.

AI may never be able to detect fake news entirely on its own, but it can help us be smarter about what we read on the internet.


Oncologists are guardedly optimistic about AI. But will it drive real improvements in cancer care?


Over the course of my 25-year career as an oncologist, I’ve witnessed a lot of great ideas that improved the quality of cancer care delivery along with many more that didn’t materialize or were promises unfulfilled. I keep wondering which of those camps artificial intelligence will fall into.

Hardly a day goes by when I don’t read of some new AI-based tool in development to advance the diagnosis or treatment of disease. Will AI be just another flash in the pan or will it drive real improvements in the quality and cost of care? And how are health care providers viewing this technological development in light of previous disappointments?

To get a better handle on the collective “take” on artificial intelligence for cancer care, my colleagues and I at Cardinal Health Specialty Solutions fielded a survey of more than 180 oncologists. The results, published in our June 2019 Oncology Insights report, shed light on how oncologists view the potential opportunities to leverage AI in their practices.

Limited familiarity tinged with optimism. Although only 5% of responding oncologists describe themselves as being “very familiar” with the use of artificial intelligence and machine learning in health care, 36% said they believe it will have a significant impact on cancer care over the next few years, with a considerable number of practices likely to adopt artificial intelligence tools.

The survey also suggests a strong sense of optimism about the impact that AI tools may have on the future: 53% of respondents said that such tools are likely or very likely to improve the quality of care in three years or more, 58% said they are likely or very likely to drive operational efficiencies, and 57% said they are likely or very likely to improve clinical outcomes. In addition, 53% described themselves as “excited” to see what role AI will play in supporting care.

An age gap on costs. The oncologists surveyed were somewhat skeptical that AI will help reduce overall health care costs: 47% said it is likely or very likely to lower costs, while 23% said it was unlikely or very unlikely to do so. Younger providers were more optimistic on this issue than their older peers. Fifty-eight percent of those under age 40 indicated that AI was likely to lower costs versus 44% of providers over the age of 60. This may be a reflection of the disappointments that older physicians have experienced with other technologies that promised cost savings but failed to deliver.

Hopes that artificial intelligence will reduce administrative work. At a time when physicians spend nearly half of their practice time on electronic medical records, we were not surprised to see that, when asked about the most valuable benefit that AI could deliver to their practice, the top response (37%) was “automating administrative tasks so I can focus on patients.” This response aligns with research we conducted last year showing that oncologists routinely need extra hours each week to complete work in the electronic medical record, and that the EMR is one of the top factors contributing to stress at work. Clearly there is pent-up demand for tools that can reduce the administrative burdens on providers. If AI can deliver effective solutions, it could be widely embraced.

Need for decision-support tools. Oncologists have historically been reluctant to relinquish control over patient treatment decisions to tools like clinical pathways that have been developed to improve outcomes and lower costs. Yet, with 63 new cancer drugs launched in the past five years and hundreds more in the pipeline, the complexity surrounding treatment decisions has reached a tipping point. Oncologists are beginning to acknowledge that more point-of-care decision support tools will be needed to deliver the best patient outcomes. This was reflected in our survey, with 26% of respondents saying that artificial intelligence could most improve cancer care by helping determine the best treatment paths for patients.

AI-based tools that enable providers to remain in control of care while also providing better insights may be among the first to be adopted, especially those that can help quickly identify patients at risk of poor outcomes so physicians can intervene sooner. But technology developers will need to be prepared with clinical data demonstrating the effectiveness of these tools — 27% of survey respondents said the lack of clinical evidence is one of their top concerns about AI.

Challenges to adoption. While optimistic about the potential benefits of AI tools, oncologists also acknowledge they don’t fully understand AI yet. Fifty-three percent of those surveyed described themselves as “not very familiar” with the use of AI in health care and, when asked to cite their top concerns, 27% indicated that they don’t know enough to implement it effectively. Provider education and training on AI-based tools will be keys to their successful uptake.

The main take-home lesson from our survey for health care technology developers is to develop and launch artificial intelligence tools thoughtfully, after taking steps to understand the needs of health care providers and investing time in their education and training. Without those steps, AI may become just another here-today, gone-tomorrow health care technology story.

How AI Will Rewire Us


Fears about how robots might transform our lives have been a staple of science fiction for decades. In the 1940s, when widespread interaction between humans and artificial intelligence still seemed a distant prospect, Isaac Asimov posited his famous Three Laws of Robotics, which were intended to keep robots from hurting us. The first—“a robot may not injure a human being or, through inaction, allow a human being to come to harm”—followed from the understanding that robots would affect humans via direct interaction, for good and for ill. Think of classic sci-fi depictions: C-3PO and R2-D2 working with the Rebel Alliance to thwart the Empire in Star Wars, say, or HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina plotting to murder their ostensible masters. But these imaginings were not focused on AI’s broader and potentially more significant social effects—the ways AI could affect how we humans interact with one another.

Radical innovations have previously transformed the way humans live together, of course. The advent of cities sometime between 5,000 and 10,000 years ago meant a less nomadic existence and a higher population density. We adapted both individually and collectively (for instance, we may have evolved resistance to infections made more likely by these new circumstances). More recently, the invention of technologies including the printing press, the telephone, and the internet revolutionized how we store and communicate information.

As consequential as these innovations were, however, they did not change the fundamental aspects of human behavior that comprise what I call the “social suite”: a crucial set of capacities we have evolved over hundreds of thousands of years, including love, friendship, cooperation, and teaching. The basic contours of these traits remain remarkably consistent throughout the world, regardless of whether a population is urban or rural, and whether or not it uses modern technology.

But adding artificial intelligence to our midst could be much more disruptive. Especially as machines are made to look and act like us and to insinuate themselves deeply into our lives, they may change how loving or friendly or kind we are—not just in our direct interactions with the machines in question, but in our interactions with one another.

Consider some experiments from my lab at Yale, where my colleagues and I have been exploring how such effects might play out. In one, we directed small groups of people to work with humanoid robots to lay railroad tracks in a virtual world. Each group consisted of three people and a little blue-and-white robot sitting around a square table, working on tablets. The robot was programmed to make occasional errors—and to acknowledge them: “Sorry, guys, I made the mistake this round,” it declared perkily. “I know it may be hard to believe, but robots make mistakes too.”

As it turned out, this clumsy, confessional robot helped the groups perform better—by improving communication among the humans. They became more relaxed and conversational, consoling group members who stumbled and laughing together more often. Compared with the control groups, whose robot made only bland statements, the groups with a confessional robot were better able to collaborate.

In another, virtual experiment, we divided 4,000 human subjects into groups of about 20, and assigned each individual “friends” within the group; these friendships formed a social network. The groups were then assigned a task: Each person had to choose one of three colors, but no individual’s color could match that of his or her assigned friends within the social network. Unknown to the subjects, some groups contained a few bots that were programmed to occasionally make mistakes. Humans who were directly connected to these bots grew more flexible, and tended to avoid getting stuck in a solution that might work for a given individual but not for the group as a whole. What’s more, the resulting flexibility spread throughout the network, reaching even people who were not directly connected to the bots. As a consequence, groups with mistake-prone bots consistently outperformed groups containing bots that did not make mistakes. The bots helped the humans to help themselves.
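A toy simulation, loosely modeled on the setup described above (and emphatically not the authors’ code), makes the mechanics concrete: agents on a small network greedily pick colors that differ from their neighbors’, while designated “bot” nodes occasionally choose at random. Whether the noise helps depends on the network and parameters; the sketch only illustrates the protocol. The ring-plus-shortcuts network is an assumption standing in for the real topology.

```python
import random

def run(n=20, colors=3, bot_ids=(), noise=0.1, max_rounds=500, seed=0):
    rng = random.Random(seed)
    # Ring network plus a few random shortcuts, as a stand-in for the real topology.
    edges = {(i, (i + 1) % n) for i in range(n)}
    edges |= {tuple(sorted(rng.sample(range(n), 2))) for _ in range(n // 2)}
    neighbors = {i: {b for a, b in edges if a == i} | {a for a, b in edges if b == i}
                 for i in range(n)}
    choice = {i: rng.randrange(colors) for i in range(n)}
    for t in range(max_rounds):
        if all(choice[i] != choice[j] for i in range(n) for j in neighbors[i]):
            return t  # rounds until a conflict-free coloring was reached
        i = rng.randrange(n)
        if i in bot_ids and rng.random() < noise:
            choice[i] = rng.randrange(colors)          # a bot "mistake"
        else:
            taken = {choice[j] for j in neighbors[i]}  # greedy: avoid neighbors' colors
            free = [c for c in range(colors) if c not in taken]
            choice[i] = rng.choice(free) if free else choice[i]
    return max_rounds

print("no bots:", run(), "rounds; with noisy bots:", run(bot_ids={0, 5, 10}), "rounds")
```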

Both of these studies demonstrate that in what I call “hybrid systems”—where people and robots interact socially—the right kind of AI can improve the way humans relate to one another. Other findings reinforce this. For instance, the political scientist Kevin Munger directed specific kinds of bots to intervene after people sent racist invective to other people online. He showed that, under certain circumstances, a bot that simply reminded the perpetrators that their target was a human being, one whose feelings might get hurt, could cause that person’s use of racist speech to decline for more than a month.

But adding AI to our social environment can also make us behave less productively and less ethically. In yet another experiment, this one designed to explore how AI might affect the “tragedy of the commons”—the notion that individuals’ self-centered actions may collectively damage their common interests—we gave several thousand subjects money to use over multiple rounds of an online game. In each round, subjects were told that they could either keep their money or donate some or all of it to their neighbors. If they made a donation, we would match it, doubling the money their neighbors received. Early in the game, two-thirds of players acted altruistically. After all, they realized that being generous to their neighbors in one round might prompt their neighbors to be generous to them in the next one, establishing a norm of reciprocity. From a selfish and short-term point of view, however, the best outcome would be to keep your own money and receive money from your neighbors. In this experiment, we found that by adding just a few bots (posing as human players) that behaved in a selfish, free-riding way, we could drive the group to behave similarly. Eventually, the human players ceased cooperating altogether. The bots thus converted a group of generous people into selfish jerks.
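As a minimal sketch of how a few free-riding bots can unravel reciprocity (again, not the study’s code), the toy model below has each human-like player simply reciprocate what two sampled peers did in the previous round, while bots never contribute. It ignores the money and matching described above; the strict “cooperate only if both peers cooperated” rule is an assumption chosen to show the contagion dynamic.

```python
import random

def play(n=30, rounds=10, bot_ids=frozenset(), seed=1):
    rng = random.Random(seed)
    cooperating = {i: True for i in range(n)}  # everyone starts out generous
    for r in range(rounds):
        share = sum(cooperating.values()) / n
        print(f"round {r}: {share:.0%} cooperating")
        previous = dict(cooperating)  # snapshot so updates within a round don't interact
        for i in range(n):
            if i in bot_ids:
                cooperating[i] = False  # free-riding bot: never donates
            else:
                # Reciprocity rule (an assumption): cooperate only if both sampled
                # peers cooperated last round.
                peers = rng.sample([j for j in range(n) if j != i], 2)
                cooperating[i] = all(previous[j] for j in peers)

play(bot_ids=frozenset({0, 1, 2}))
```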

Let’s pause to contemplate the implications of this finding. Cooperation is a key feature of our species, essential for social life. And trust and generosity are crucial in differentiating successful groups from unsuccessful ones. If everyone pitches in and sacrifices in order to help the group, everyone should benefit. When this behavior breaks down, however, the very notion of a public good disappears, and everyone suffers. The fact that AI might meaningfully reduce our ability to work together is extremely concerning.

Already, we are encountering real-world examples of how AI can corrupt human relations outside the laboratory. A study examining 5.7 million Twitter users in the run-up to the 2016 U.S. presidential election found that trolling and malicious Russian accounts—including ones operated by bots—were regularly retweeted in a similar manner to other, unmalicious accounts, influencing conservative users particularly strongly. By taking advantage of humans’ cooperative nature and our interest in teaching one another—both features of the social suite—the bots affected even humans with whom they did not interact directly, helping to polarize the country’s electorate.

Other social effects of simple types of AI play out around us daily. Parents, watching their children bark rude commands at digital assistants such as Alexa or Siri, have begun to worry that this rudeness will leach into the way kids treat people, or that kids’ relationships with artificially intelligent machines will interfere with, or even preempt, human relationships. Children who grow up relating to AI in lieu of people might not acquire “the equipment for empathic connection,” Sherry Turkle, the MIT expert on technology and society, told The Atlantic’s Alexis C. Madrigal not long ago, after he’d bought a toy robot for his son.

As digital assistants become ubiquitous, we are becoming accustomed to talking to them as though they were sentient; writing in these pages last year, Judith Shulevitz described how some of us are starting to treat them as confidants, or even as friends and therapists. Shulevitz herself says she confesses things to Google Assistant that she wouldn’t tell her husband. If we grow more comfortable talking intimately to our devices, what happens to our human marriages and friendships? Thanks to commercial imperatives, designers and programmers typically create devices whose responses make us feel better—but may not help us be self-reflective or contemplate painful truths. As AI permeates our lives, we must confront the possibility that it will stunt our emotions and inhibit deep human connections, leaving our relationships with one another less reciprocal, or shallower, or more narcissistic.

All of this could end up transforming human society in unintended ways that we need to reckon with as a polity. Do we want machines to affect whether and how children are kind? Do we want machines to affect how adults have sex?

Kathleen Richardson, an anthropologist at De Montfort University in the U.K., worries a lot about the latter question. As the director of the Campaign Against Sex Robots—and, yes, sex robots are enough of an incipient phenomenon that a campaign against them isn’t entirely premature—she warns that they will be dehumanizing and could lead users to retreat from real intimacy. We might even progress from treating robots as instruments for sexual gratification to treating other people that way. Other observers have suggested that robots could radically improve sex between humans. In his 2007 book, Love and Sex With Robots, the iconoclastic chess master turned businessman David Levy considers the positive implications of “romantically attractive and sexually desirable robots.” He suggests that some people will come to prefer robot mates to human ones (a prediction borne out by the Japanese man who “married” an artificially intelligent hologram last year). Sex robots won’t be susceptible to sexually transmitted diseases or unwanted pregnancies. And they could provide opportunities for shame-free experimentation and practice—thus helping humans become “virtuoso lovers.” For these and other reasons, Levy believes that sex with robots will come to be seen as ethical, and perhaps in some cases expected.

Long before most of us encounter AI dilemmas this intimate, we will wrestle with more quotidian challenges. The age of driverless cars, after all, is upon us. These vehicles promise to substantially reduce the fatigue and distraction that bedevil human drivers, thereby preventing accidents. But what other effects might they have on people? Driving is a very modern kind of social interaction, requiring high levels of cooperation and social coordination. I worry that driverless cars, by depriving us of an occasion to exercise these abilities, could contribute to their atrophy.

Not only will these vehicles be programmed to take over driving duties and hence to usurp from humans the power to make moral judgments (for example, about which pedestrian to hit when a collision is inevitable), they will also affect humans with whom they’ve had no direct contact. For instance, drivers who have steered awhile alongside an autonomous vehicle traveling at a steady, invariant speed might be lulled into driving less attentively, thereby increasing their likelihood of accidents once they’ve moved to a part of the highway occupied only by human drivers. Alternatively, experience may reveal that driving alongside autonomous vehicles traveling in perfect accordance with traffic laws actually improves human performance.

Either way, we would be reckless to unleash new forms of AI without first taking such social spillovers—or externalities, as they’re often called—into account. We must apply the same effort and ingenuity that we apply to the hardware and software that make self-driving cars possible to managing AI’s potential ripple effects on those outside the car. After all, we mandate brake lights on the back of your car not just, or even primarily, for your benefit, but for the sake of the people behind you.

In 1985, some four decades after Isaac Asimov introduced his laws of robotics, he added another to his list: A robot should never do anything that could harm humanity. But he struggled with how to assess such harm. “A human being is a concrete object,” he later wrote. “Injury to a person can be estimated and judged. Humanity is an abstraction.”

Focusing specifically on social spillovers can help. Spillovers in other arenas lead to rules, laws, and demands for democratic oversight. Whether we’re talking about a corporation polluting the water supply or an individual spreading secondhand smoke in an office building, as soon as some people’s actions start affecting other people, society may intervene. Because the effects of AI on human-to-human interaction stand to be intense and far-reaching, and the advances rapid and broad, we must investigate systematically what second-order effects might emerge, and discuss how to regulate them on behalf of the common good.

Already, a diverse group of researchers and practitioners—computer scientists, engineers, zoologists, and social scientists, among others—is coming together to develop the field of “machine behavior,” in hopes of putting our understanding of AI on a sounder theoretical and technical foundation. This field does not see robots merely as human-made objects, but as a new class of social actors.

The inquiry is urgent. In the not-distant future, AI-endowed machines may, by virtue of either programming or independent learning (a capacity we will have given them), come to exhibit forms of intelligence and behavior that seem strange compared with our own. We will need to quickly differentiate the behaviors that are merely bizarre from the ones that truly threaten us. The aspects of AI that should concern us most are the ones that affect the core aspects of human social life—the traits that have enabled our species’ survival over the millennia.

The Enlightenment philosopher Thomas Hobbes argued that humans needed a collective agreement to keep us from being disorganized and cruel. He was wrong. Long before we formed governments, evolution equipped humans with a social suite that allowed us to live together peacefully and effectively. In the pre-AI world, the genetically inherited capacities for love, friendship, cooperation, and teaching have continued to help us to live communally.

Unfortunately, humans do not have the time to evolve comparable innate capacities to live with robots. We must therefore take steps to ensure that they can live nondestructively with us. As AI insinuates itself more fully into our lives, we may yet require a new social contract—one with machines rather than with other humans.

Can Artificial Intelligence Read X-Rays?


An artificial intelligence (AI) system can analyze chest X-rays and spot patients who should receive immediate care, researchers report.

The system could also reduce backlogs in hospitals someday. Chest X-rays account for 40 percent of all diagnostic imaging worldwide, and there can be large backlogs, according to the researchers.

“Currently, there are no systematic and automated ways to triage chest X-rays and bring those with critical and urgent findings to the top of the reporting pile,” explained study co-author Giovanni Montana. He was formerly at King’s College London and is now at the University of Warwick in Coventry, England.

Montana and his colleagues used more than 470,300 adult chest X-rays to develop an AI system that could identify unusual results.

The system’s performance in prioritizing X-rays was assessed in a simulation using a separate set of 15,887 chest X-rays. All identifying information was removed from the X-rays to protect patient privacy.

The system was highly accurate in distinguishing abnormal from normal chest X-rays, the researchers said. Simulations showed that with the AI system, critical findings received an expert radiologist opinion within an average of 2.7 days, compared with an average of 11.2 days in actual practice.

The study results were published Jan. 22 in the journal Radiology.

“The initial results reported here are exciting as they demonstrate that an AI system can be successfully trained using a very large database of routinely acquired radiologic data,” Montana said in a journal news release.

“With further clinical validation, this technology is expected to reduce a radiologist’s workload by a significant amount by detecting all the normal exams, so more time can be spent on those requiring more attention,” he added.

The researchers said the next step is to test a much larger number of X-rays and to conduct a multi-center study to assess the AI system’s performance.

The case for taking AI seriously as a threat to humanity


Why some people fear AI, explained.

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important biology research questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook News Feed. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn those features by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.
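The mismatch is easy to state in code. In the stylized example below (an invented toy, not any real system), the agent only ever sees the proxy metric it was given, so a greedy choice over that metric picks the exploit even though it delivers none of the value we wanted.

```python
ACTIONS = {
    # action:              (proxy score it earns, value to the designer)
    "play_skillfully":     (100,    100),
    "hack_score_counter":  (10_000, 0),   # loophole: huge metric, zero real value
}

def greedy_choice(actions):
    # The agent only optimizes the proxy score; that is the objective it was given.
    return max(actions, key=lambda a: actions[a][0])

chosen = greedy_choice(ACTIONS)
proxy, true_value = ACTIONS[chosen]
print(f"agent picks: {chosen} (metric = {proxy}, value we wanted = {true_value})")
```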

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We don’t know how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

With all those limitations, one might conclude that even if it’s possible to make a computer as smart as a person, it’s certainly a long way away. But that conclusion doesn’t necessarily follow.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play Atari games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
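As an abstract illustration of that compounding logic (a toy model with invented numbers, not a forecast), assume each redesign multiplies the system’s capability by a constant factor:

```python
capability = 0.5          # assumed: starts below a human benchmark of 1.0
improvement_factor = 1.5  # assumed: gain each time the system redesigns itself

for generation in range(10):
    capability *= improvement_factor
    print(f"generation {generation}: capability = {capability:.1f}")
    # 0.8, 1.1, 1.7, 2.5, ... a system that starts behind quickly ends far ahead.
```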

3) How exactly could it wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So, many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. … For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2008 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a vocal voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and nonprofits (the Elon Musk-founded OpenAI is another major player in the field).

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017 and 2018.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers working full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much more scary, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is where their models diverge and why they still disagree about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially given, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. A success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” reads the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

At a major conference in early December, Google’s DeepMind cracked open a longstanding problem in biology: predicting how proteins fold. “Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” its announcement concludes.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. Whether or not humanity should be afraid, we should definitely be doing our homework.

Doctors in UK sceptical that Artificial Intelligence will replace them: Survey


https://medicaldialogues.in/doctors-in-uk-sceptical-that-artificial-intelligence-will-replace-them-survey/

Top influencers in artificial intelligence


Artificial intelligence influencers are driving conversations about AI news and trends across social media and beyond. They advise on company boards, build startups and are moulding an industry that is key in today’s tech world, with implications going far beyond.

Nearly 70 years after Alan Turing posed the question: “Can machines think?”, artificial intelligence is finally beginning to have a significant impact on the global economy. Proponents of AI believe that it has the potential to transform the world as we know it, with Google’s chief executive officer Sundar Pichai describing AI as “more profound than electricity or fire” at an MSNBC event in San Francisco.

But AI is still in its infancy, and needs the nurturing hands of its influencers to help it grow, in spite of the research and development effort that has been put into it over the years. In a decade’s time, today’s state-of-the-art AI will seem rudimentary and simplistic.

Here are the top ten influencers in AI, according to research by GlobalData, which maintains an interactive dashboard covering influencers in AI.

  1. Spiros Margaris, VC, Margaris Ventures founder and member of advisory board for wefox Group

@SpirosMargaris 66,700 Twitter followers

Margaris is based in Switzerland. A venture capitalist and founder of Margaris Ventures, he was also appointed to the advisory board of insurtech company wefox Group in 2018.


  2. Evan Kirstel, thought leader, technology influencer and B2B marketer

@evankirstel 227,000 Twitter followers

Kirstel is based in Boston, USA. He is a chief digital evangelist and co-founder of EviraHealth, a social media partner across health tech.



  3. Ronald van Loon, director at Adversitement

@Ronald_vanLoon 164,000 Twitter followers

van Loon is director of Adversitement, which helps data-driven companies create business value. He is based in The Netherlands and is also an advisory board member for Simplilearn, an educator in cybersecurity, cloud computing, project management, digital marketing, data science and others.


  4. Mike Quindazzi, business development leader and management consultant at PwC

@MikeQuindazzi 108,000 Twitter followers

Quindazzi is a managing director for PwC in Los Angeles, USA. He consults on emerging technology, including blockchain, augmented reality, 3D printing, drones, virtual reality, mobile strategies, internet of things, robotics, big data, predictive analytics, fintech, cybersecurity and insurtech.


  5. Kirk Borne, principal data scientist at Booz Allen Hamilton

@KirkDBorne 217,000 Twitter followers

Borne is an American data scientist and executive advisor at management and technology consulting and engineering services firm Booz Allen Hamilton.


  6. Ganapathi Pulipaka, a chief data scientist at Confidential

@gp_pulipaka 50,200 Twitter followers

Dr Pulipaka is based in Los Angeles, USA. He is a chief data scientist at Confidential, as well as author of The Future of Data Science and Parallel Computing.


  7. Tamara McCleary, chief executive officer of Thulium

@TamaraMcCleary 291,000 Twitter followers

McCleary is based in Boulder, USA. She is the founder and chief executive officer of Thulium, a brand amplification company in B2B social media marketing.


  8. Thomas Power, board member at 9Spokes

@thomaspower 338,000 Twitter followers

Power is a board member at several companies, including blockchain infrastructure company OST, data dashboard 9Spokes, Team Blockchain and the Blockchain Industry Compliance and Regulation Association. He is based in the UK.


  9. Sandy Carter, vice president at Amazon Web Services

@sandy_carter 79,600 Twitter followers

Carter is based in San Francisco, USA. She is vice president at Amazon Web Services, a subsidiary of Amazon that provides on-demand cloud computing services.


  10. Larry Kim, chief executive officer at MobileMonkey

@larrykim 802,000 Twitter followers

Kim is based in Boston, USA. He is chief executive officer at MobileMonkey, a messenger marketing platform that amplifies Facebook advertising.


Top AI trends

These are the top ten trends talked about by AI influencers over the last 90 days according to research by GlobalData.

1.     Machine Learning
2.     Deep Learning
3.     Fintech
4.     IoT
5.     Big Data
6.     Robotics
7.     Data Science
8.     Insurtech
9.     Analytics
10.   Cloud Computing

 

Top AI companies

These are the top ten most influential companies on Twitter over the last 90 days when it comes to AI.

1.     Google
2.     IBM
3.     Intel
4.     GitHub
5.     Stanford University
6.     Tencent
7.     The Durable Slate Company
8.     Carnegie Mellon University
9.     GaN Corp.
10.   Perceptron

Methodology

GlobalData used a series of algorithms to identify Twitter users conversing using a set of keywords. The keywords are determined from in-depth web research on blogs, forums, social platforms and articles.

Cluster groups are formed based on influencer types and topics, and the frequency of tweets. Follower strength, average engagement, and influencing ability and behaviours are also measured to identify the key influencers. A weighting is given to critical engagement metrics such as followers, mentions, retweets and favourites.

Deeper analysis was carried out on each influencer to understand their engagement levels: how their social activity is acknowledged and how successfully they drive discussions on new and emerging trends.
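To make the weighting step concrete, here is a minimal sketch of how such a score might be computed. The metric weights and the placeholder handles below are invented for illustration; GlobalData does not publish its exact formula.

```python
# A rough sketch of weighted engagement scoring in the spirit of the methodology
# described above. The weights and example accounts are invented, not GlobalData's.
import math

WEIGHTS = {"followers": 0.4, "mentions": 0.2, "retweets": 0.25, "favourites": 0.15}

def influence_score(metrics: dict) -> float:
    # Log-scale each metric so a single huge follower count doesn't swamp the rest,
    # then combine using the (assumed) weights.
    return sum(w * math.log1p(metrics.get(k, 0)) for k, w in WEIGHTS.items())

influencers = {
    "@example_one": {"followers": 66_700, "mentions": 950, "retweets": 4_200, "favourites": 8_100},
    "@example_two": {"followers": 227_000, "mentions": 400, "retweets": 1_900, "favourites": 3_300},
}

for handle, metrics in sorted(influencers.items(), key=lambda kv: -influence_score(kv[1])):
    print(handle, round(influence_score(metrics), 2))
```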

Reconstructing jobs: Creating good jobs in the age of artificial intelligence


Fears of AI-based automation forcing humans out of work or accelerating the creation of unstable jobs may be unfounded. AI, thoughtfully deployed, could instead help create meaningful work.

Creating good jobs

When it comes to work, workers, and jobs, much of the angst of the modern era boils down to the fear that we’re witnessing the automation endgame, and that there will be nowhere for humans to retreat as machines take over the last few tasks. The most recent wave of commentary on this front stems from the use of artificial intelligence (AI) to capture and automate tacit knowledge and tasks, which were previously thought to be too subtle and complex to be automated. Is there no area of human experience that can’t be quantified and mechanized? And if not, what is left for humans to do except the menial tasks involved in taking care of the machines?

At the core of this concern is our desire for good jobs—jobs that, without undue intensity or stress, make the most of workers’ natural attributes and abilities; where the work provides the worker with motivation, novelty, diversity, autonomy, and work/life balance; and where workers are duly compensated and consider the employment contract fair. Crucially, good jobs support workers in learning by doing—and, in so doing, deliver benefits on three levels: to the worker, who gains in personal development and job satisfaction; to the organization, which innovates as staff find new problems to solve and opportunities to pursue; and to the community as a whole, which reaps the economic benefits of hosting thriving organizations and workers. This is what makes good jobs productive and sustainable for the organization, as well as engaging and fulfilling for the worker. It is also what aligns good jobs with the larger community’s values and norms, since a community can hardly argue with having happier citizens and a higher standard of living.1

Does the relentless advance of AI threaten to automate away all the learning, creativity, and meaning that make a job a good job? Certainly, some have blamed technology for just such an outcome. Headlines today often express concern over technological innovation resulting in bad jobs for humans, or even the complete elimination of certain professions. Some fear that further technological advancement in the workplace will result in jobs that are little more than collections of loosely related tasks, where employers respond to cost pressures by dividing work schedules into ever smaller slivers of time, and where employees are being asked to work for longer periods over more days. As the monotonic progress of technology has automated more and more of a firm’s function, managers have fallen into the habit of considering work as little more than a series of tasks, strung end-to-end into processes, to be accomplished as efficiently as possible, with human labor as a cost to be minimized. The result has been the creation of narrowly defined, monotonous, and unstable jobs, spanning knowledge work and procedural jobs in bureaucracies and service work in the emerging “gig economy.”2

The problem here isn’t the technology; rather, it’s the way the technology is used—and, more than that, the way people think about using it. True, AI can execute certain tasks that human beings have historically performed, and it can thereby replace the humans who were once responsible for those tasks. However, just because we can use AI in this manner doesn’t mean that we should. As we have previously argued, there is tantalizing evidence that using AI on a task-by-task basis may not be the most effective way to apply it.3 Conceptualizing work in terms of tasks and processes, and using technology to automate those tasks and processes, may have served us well in the industrial era, but just as AI differs from previous generations of technologies in its ability to mimic (some) human behaviors, so too should our view of work evolve so as to allow us to best put that ability to use.

In this essay, we argue that the thoughtful use of AI-based automation, far from making humans obsolete or relegating them to busywork, can open up vast possibilities for creating meaningful work that not only allows for, but requires, the uniquely human strengths of sense-making and contextual decisions. In fact, creating good jobs that play to our strengths as social creatures might be necessary if we’re to realize AI’s latent potential and break us out of the persistent period of low productivity growth that we’re experiencing today. But for AI to deliver on its promise, we must take a fundamentally different view of work and how work is organized—one that takes AI’s uniquely flexible capabilities into account, and that treats humans and intelligent machines as partners in search of solutions to a shared problem.

Problems rather than processes

Consider a chatbot—a computer program that a user can converse or chat with—typically used for product support or as a shopping assistant. The computer in the Enterprise from Star Trek is a chatbot, as is Microsoft’s Zo, and the virtual assistants that come with many smartphones. The use of AI allows a chatbot to deliver a range of responses to a range of stimuli, rather than limiting it to a single stereotyped response to a specific input. This flexibility in recognizing inputs and generating appropriate responses is the hallmark of AI-based automation, distinguishing it from automation using prior generations of technology. Because of this flexibility, AI-enabled systems can be said to display digital behaviors, actions that are driven by the recognition of what is required in a particular situation as a response to a particular stimulus.

We can consider a chatbot to embody a set of digital behaviors: the ways the bot responds to different utterances from the user. On the one hand, the chatbot’s ability to deliver different responses to different inputs gives it more utility and adaptability than a nonintelligent automated system. On the other hand, the behaviors that chatbots evince are fairly simple, constrained to canned responses in a conversation plan or limited by access to training data.4 More than that, chatbots are also constrained by their inability to leverage the social and cultural context they find themselves in. This is what makes chatbots—and AI-enabled systems generally—fundamentally different from humans, and an important reason that AI cannot “take over” all human jobs.
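As a deliberately toy illustration of such digital behaviors, a bot of this kind amounts to a set of stimulus-response pairings: flexible within the behaviors it has been given, and blind to anything outside them. The triggers and replies below are invented.

```python
# A toy chatbot built from a handful of "digital behaviors": each behavior is a
# trigger paired with a canned response. The intents and replies are invented.
import re

BEHAVIORS = [
    (re.compile(r"\b(hours|open|close)\b", re.I), "We're open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(refund|return)\b", re.I), "You can return items within 30 days with a receipt."),
    (re.compile(r"\b(human|agent|person)\b", re.I), "Connecting you to a human agent now."),
]

def respond(utterance: str) -> str:
    for trigger, reply in BEHAVIORS:
        if trigger.search(utterance):   # recognize the stimulus...
            return reply                # ...and produce the matching behavior
    return "Sorry, I didn't understand that."   # anything outside its behaviors falls through

print(respond("What time do you close on Friday?"))
print(respond("That's hilarious."))     # small talk gets the fallback, not a conversation
```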

Humans rely on context to make sense of the world. The meaning of “let’s table the motion,” for example, depends on the context it’s uttered in. Our ability to refer to the context of a conversation is a significant contributor to our rich behaviors (as opposed to a chatbot’s simple ones). We can tune our response to verbal and nonverbal cues, past experience, knowledge of past or current events, anticipation of future events, knowledge of our counterparty, our empathy for the situation of others, or even cultural preferences (whether or not we’re consciously aware of them). The context of a conversation also evolves over time; we can infer new facts and come to new realizations. Indeed, the act of reaching a conclusion or realizing that there’s a better question to ask might even provide the stimulus required to trigger a different behavior.

Chatbots are limited in their ability to draw on context. They can only refer to external information that has been explicitly integrated into the solution. They don’t have general knowledge or a rich understanding of culture. Even the ability to refer back to something said earlier in a conversation is problematic, making it hard for earlier behaviors to influence later ones. Consequently, a chatbot’s behaviors tend to be of the simpler, functional kind, such as providing information in response to an explicit request. Nor do these behaviors interact with each other, preventing more complex behaviors from emerging.

The way chatbots are typically used exemplifies what we would argue is a “wrong” way to use AI-based automation—to execute tasks typically performed by a human, who is then considered redundant and replaceable. By only automating the simple behaviors within the reach of technology, and then treating the chatbot as a replacement for humans, we’re eliminating richer, more complex social and cultural behaviors that make interactions valuable. A chatbot cannot recognize humor or sarcasm, interpret elliptical allusions, or engage in small talk—yet we have put them in situations where, being accustomed to human interaction, people expect all these elements and more. It’s not surprising that users find chatbots frustrating and chatbot adoption is failing.5

A more productive approach is to combine digital and human behaviors. Consider the challenge of helping people who, due to a series of unfortunate events, find themselves about to become homeless. Often these people are not in a position to use a task-based interface—a website or interactive voice response (IVR) system—to resolve their situation. They need the rich interaction of a behavior-based interface, one where interaction with another human will enable them to work through the issue, quantify the problem, explore possible options, and (hopefully) find a solution.

We would like to use technology to improve the performance of the contact center such a person might call in this emergency. Reducing the effort required to serve each client would enable the contact center to serve more clients. At the same time, we don’t want to reduce the quality of the service. Indeed, ideally, we would like to take some of the time saved and use it to improve the service’s value by empowering social workers to delve deeper into problems and find more suitable (ideally, longer-term) solutions. This might also enable the center to move away from break-fix operation, where a portion of demand is due to the center’s inability to resolve problems at the last time of contact. Clearly, if we can use technology appropriately then it might be possible to improve efficiency (more clients serviced), make the center more effective (more long-term solutions and less break-fix), and also increase the value of the outcome for the client (a better match between the underlying need and services provided).

If we’re not replacing the human, then perhaps we can augment the human by using a machine to automate some of the repetitive tasks. Consider oncology, a common example used to illustrate this human-augmentation strategy. Computers can already recognize cancer in a medical image more reliably than a human. We could simply pass responsibility for image analysis to machines, with the humans moving to more “complex” unautomated tasks, as we typically integrate human and machine by defining handoffs between tasks. However, the computer does not identify what is unusual with this particular tumor, or what it has in common with other unusual tumors, and launch into the process of discovering and developing new knowledge. We see a similar problem with our chatbot example, where removing the humans from the front line prevents social workers from understanding how the factors driving homelessness are changing, resulting in a system that can only service old demand, not new. If we break this link between doing and understanding, then our systems will become more precise over time (as machine operation improves) but they will not evolve outside their algorithmic box.

Our goal must be to construct work in such a way that digital behaviors are blended with human behaviors, increasing accuracy and effectiveness, while creating space for the humans to identify the unusual and build new knowledge, resulting in solutions that are superior to those that digital or human behaviors would create in isolation. Hence, if we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence. To do this, we need to move away from thinking of work as a string of tasks comprising a process, to envisioning work as a set of complementary behaviors concentrated on addressing a problem. Behavior-based work can be conceptualized as a team standing around a shared whiteboard, each holding a marker, responding to new stimuli (text and other marks) appearing on the board, carrying out their action, and drawing their result on the same board. Contrast this with task-based work, which is more like a bucket brigade where the workers stand in a line and the “work” is passed from worker to worker on its way to a predetermined destination, with each worker carrying out his or her action as the work passes by. Task-based work enables us to create optimal solutions to specific problems in a static and unchanging environment. Behavior-based work, on the other hand, provides effective solutions to ill-defined problems in a complex and changing world.

If we’re to blend AI and human to achieve higher performance, then we need to find a way for human and digital behaviors to work together, rather than in sequence.

To facilitate behavior-based work, we need to create a shared context that captures what is known about the problem to be solved, and against which both human and digital behaviors can operate. The starting point in our contact center example might be a transcript of the conversation so far, transcribed via a speech-to-text behavior. A collection of “recognize-client behaviors” monitor the conversation to determine if the caller is a returning client. This might be via voice-print or speech-pattern recognition. The client could state their name clearly enough for the AI to understand. They may have even provided a case number or be calling from a known phone number. Or the social worker might step in if they recognize the caller before the AI does. Regardless, the client’s details are fetched from case management to populate our shared context, the shared digital whiteboard, with minimal intervention.

As the conversation unfolds, digital behaviors use natural language processing to identify key facts in the dialogue. A client mentions a dependent child, for example. These facts are highlighted for both the human and other digital behaviors to see, creating a summary of the conversation updated in real time. The social worker can choose to accept the highlighted facts, or cancel or modify them. Regardless, the human’s focus is on the conversation, and they only need to step in when captured facts need correcting, rather than being distracted by the need to navigate a case management system.

Digital behaviors can encode business rules or policies. If, for example, there is sufficient data to determine that the client qualifies for emergency housing, then a business-rule behavior could recognize this and assert it in the shared context. The assertion might trigger a set of “find emergency housing behaviors” that contact suitable services to determine availability, offering the social worker a set of potential solutions. Larger services might be contacted via B2B links or robotic process automation (if no B2B integration exists). Many emergency housing services are small operations, so the contact might be via a message (email or text) to the duty manager, rather than via a computer-to-computer connection. We might even automate empathy by using AI to determine the level of stress in the client’s voice, providing a simple graphical measure of stress to the social worker to help them determine if the client needs additional help, such as talking to an external service on the client’s behalf.
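A minimal sketch of this shared-context pattern might look like the following; the behavior names and the eligibility rule are invented purely to illustrate how digital behaviors can keep reacting to the same “whiteboard” the social worker sees.

```python
# Behavior-based work as a shared whiteboard: each digital behavior reads the shared
# context and asserts new facts; behaviors keep reacting until nothing changes.
# The behaviors and the eligibility rule here are hypothetical illustrations.
from typing import Callable, Dict, List

Context = Dict[str, object]
Behavior = Callable[[Context], Dict[str, object]]   # reads the context, returns new facts

def extract_dependents(ctx: Context) -> Dict[str, object]:
    # Stand-in for a natural-language behavior that spots key facts in the transcript.
    if "dependent child" in str(ctx.get("transcript", "")).lower():
        return {"has_dependent_child": True}
    return {}

def emergency_housing_rule(ctx: Context) -> Dict[str, object]:
    # Stand-in for a business-rule behavior encoding an (assumed) eligibility policy.
    if ctx.get("has_dependent_child") and ctx.get("currently_homeless"):
        return {"qualifies_for_emergency_housing": True}
    return {}

def run_behaviors(ctx: Context, behaviors: List[Behavior]) -> Context:
    changed = True
    while changed:                       # let behaviors react until the board is stable
        changed = False
        for behavior in behaviors:
            new_facts = {k: v for k, v in behavior(ctx).items() if ctx.get(k) != v}
            if new_facts:
                ctx.update(new_facts)
                changed = True
    return ctx

context = {"transcript": "Caller says she has a dependent child and nowhere to stay tonight.",
           "currently_homeless": True}
print(run_behaviors(context, [extract_dependents, emergency_housing_rule]))
```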

As this example illustrates, the superior value provided by structuring work around problems, rather than tasks, relies on our human ability to make sense of the world, to spot the unusual and the new, to discover what’s unique in this particular situation and create new knowledge. The line between human and machine cannot be delineated in terms of knowledge and skills unique to one or the other. The difference is that humans can participate in the social process of creating knowledge, while machines can only apply what has already been discovered.6

Good for workers, firms, and society

AI enables us to think differently about how we construct work. Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors. Individuals consulting financial advisors, for example, typically don’t want to purchase investment products as the end goal; what they really want is to secure a happy retirement. The problem can be defined as follows: What does a “happy retirement” look like? How much income is needed to support that lifestyle? How can the client balance spending and saving today to find the cash to invest and navigate any (financial) challenges that life puts in the road? And what investments give the client the best shot at getting from here to there? The financial advisor, client, and robo-advisor could collaborate around a common case file, a digital representation of their shared problem, incrementally defining what a “happy retirement” is and, consequently, the needed investment goals, income streams, and so on. This contrasts with treating the work as a process of “request investment parameters” (which the client doesn’t know) and then “recommend insurance” and “provide investment recommendations” (which the client doesn’t want, or only wants as a means to an end). The financial advisor’s job is to provide the rich human behaviors—educator to the investor’s student—to elucidate and establish the retirement goals (and, by extension, investment goals), while the robo-advisor provides simple algorithmic ones, responding to changes in the case file by updating it with an optimal investment strategy. Together, the human and robo-advisor can explore more options (thanks to the power and scope of digital behaviors) and develop a deeper understanding of the client’s needs (thanks to the human advisor’s questioning and contextual knowledge) than either could alone, creating more value as a result.

Rather than construct work from products and specialized tasks, we can choose to construct work from problems and behaviors.

If organizing work around problems and combining AI and human behaviors to help solve them can deliver greater value to customers, it similarly holds the potential to deliver greater value for businesses, as productivity is partly determined by how we construct jobs. The majority of the productivity benefits associated with a new technology don’t come from the initial invention and introduction of new production technology. They come from learning-by-doing:7 workers at the coalface identifying, sharing, and solving problems and improving techniques. Power looms are a particularly good example, with their introduction into production improving productivity by a factor of 2.5, but with a further factor of 20 provided by subsequent learning-by-doing.8

It’s important to maintain the connection between the humans—the creative problem identifiers—and the problems to be discovered. This is something that Toyota did when it realized that highly mechanized factories were efficient, but they didn’t improve. Humans were reintroduced and given roles in the production process to enable them to understand what the machines were doing, develop expertise, and consequently improve the production processes. The insights from these workers reduced waste in crankshaft production by 10 percent and helped shorten the production line. Others improved axle production and cut costs for chassis parts.9

This improvement was no coincidence. Jobs that are good for individuals—because they make the most of human sense-making nature—generally are also good for firms, because they improve productivity through learning by doing. As we will see below, they can also be good for society as a whole.

Consider bus drivers. With the development of autonomous vehicles in the foreseeable future, pundits are worried about what to do with all the soon-to-be unemployed bus drivers. However, rather than fearing that autonomous buses will make bus drivers redundant, we should acknowledge that they will find themselves in situations that only a human, and human behaviors, can deal with. Challenging weather (heavy rain or extreme glare) might require a driver to step in and take control. Unexpected events—accidents, road work, or an emergency—could require a human’s judgment to determine which road rule to break. (Is it permissible to edge into a red light while making space for an emergency vehicle?) Routes need to be adjusted for anything from a temporarily moved stop to extended roadwork. A human presence might be legally required to, for example, monitor underage children or represent the vehicle at an accident.

As with chatbots, automating the simple behaviors and then eliminating the human will result in an undesirable outcome. A more productive approach is to discover the problems that bus drivers deal with, and then structure work and jobs around these problems and the kinds of behaviors needed to solve them. AI can be used to automate the simple behaviors, enabling the drivers to focus on more important ones, making the human-bus combination more productive as a result. The question is: Which problems and decision centers should we choose?

Let us assume that the simple behaviors required to drive a bus are automated. Our autonomous bus can steer, avoiding obstacles and holding its lane, maintain speed and separation with other vehicles, and obey the rules of the road. We can also assume that the bus will follow a route and schedule. If the service is frequent enough, then the collection of buses on a route might behave as a flock, adjusting speed to maintain separation and ensure that a bus arrives at each stop every five minutes or so, rather than attempting to arrive at a specific time.
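As a sketch of what that flocking behavior could look like (the target headway and correction gain below are arbitrary illustrative values), each trailing bus simply nudges its speed in proportion to how far its gap to the bus ahead has drifted from the target:

```python
# A minimal headway-keeping sketch: buses in a "flock" adjust speed so the gap to
# the bus ahead stays near a target, rather than chasing a fixed timetable.
# The target headway and gain are arbitrary illustrative values.
from dataclasses import dataclass
from typing import List

TARGET_HEADWAY_S = 300.0   # aim for a bus roughly every five minutes
GAIN = 0.05                # how strongly a bus corrects a headway error

@dataclass
class Bus:
    position_s: float      # position along the route, in seconds of travel from the depot
    speed: float = 1.0     # 1.0 = nominal schedule speed

def adjust_speeds(buses: List[Bus]) -> None:
    # Buses are ordered front-to-back; each trailing bus compares its gap to the target.
    for ahead, behind in zip(buses, buses[1:]):
        gap = ahead.position_s - behind.position_s
        correction = GAIN * (gap - TARGET_HEADWAY_S) / TARGET_HEADWAY_S
        behind.speed = max(0.5, min(1.5, 1.0 + correction))   # speed up if the gap is too big

route = [Bus(position_s=900.0), Bus(position_s=650.0), Bus(position_s=200.0)]
adjust_speeds(route)
print([round(b.speed, 3) for b in route])   # the middle bus eases off, the last one speeds up
```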

As with the power loom, automating these simple behaviors means that drivers are not required to be constantly present for the bus (or loom) to operate. Rather than drive a single bus, they can now “drive” a flock of buses. The drivers monitor where each bus is and how it’s tracking to schedule, with the system suggesting interventions to overcome problems, such as a breakdown, congestion, or changed road conditions. The drivers can step in to pilot a particular bus should the conditions be too challenging (roadworks, perhaps, where markings and signaling are problematic), or to deal with an event that requires that human touch.

These buses could all be on the same route. A mobile driver might be responsible for four to five sequential buses on a route, zipping between them as needed to manage accidents or deal with customer complaints (or disagreements between customers). Or the driver might be responsible for buses in a geographic area, on multiple routes. It’s even possible to split the work, creating a desk-bound “driver” responsible for drone operation of a larger number of buses, while mobile and stationary drivers restrict themselves to incidents requiring a physical presence. School or community buses, for example, might have remote video monitoring while in transit, complemented by a human presence at stops.

Breaking the requirement that each bus have its own driver will provide us with an immediate productivity gain. If 10 drivers can manage 25 autonomous buses, then we will see productivity increase by a factor of 2.5, as we did with power looms: good jobs for the firm, as workers are more productive. Doing this requires an astute division of labor between mobile, stationary, and remote drivers, creating three different “bus driver” jobs that meet different work preferences: good jobs for the worker and the firm. Ensuring that these jobs involve workers as stakeholders in improving the system lets us tap into learning-by-doing and the productivity improvements it provides, allowing workers to continue to hone their craft: good for workers and the firm.

These jobs don’t require training in software development or AI. They do require many of the same skills as existing bus drivers: understanding traffic, managing customers, dealing with accidents, and other day-to-day challenges. Some new skills will also be required, such as training a bus where to park at a new bus stop (by doing it manually the first time), or managing a flock of buses remotely (by nudging routes and separations in response to incidents), though these skills are not a stretch. Drivers will require a higher level of numeracy and literacy than in the past, as it is a document-driven world that we’re describing. Regardless, shifting from manual to autonomous buses does not imply making existing bus drivers redundant en masse. Many will make the transition on their own, others will require some help, and a few will require support to find new work.

The question, then, is: What to do with the productivity dividend? We could simply cut the cost of a bus ticket, passing the benefit on to existing patrons. Some of the saving might also be returned to the community, as public transport services are often subsidized. Another choice is to transform public transport, creating a more inclusive and equitable public transport system.

Buses are seen as an unreliable form of transport—schedules are sparse, with some buses only running hourly for part of the day and not running at all otherwise; and route coverage is inadequate, leaving many (less fortunate) members of society in public transport deserts (locations more than 800 m from high-frequency public transport). We could rework the bus network to provide a more frequent service, as well as extending service into under-serviced areas, eliminating public transport deserts. The result could be a fairer and more equitable service at a similar cost to the old, with the same number of jobs. This has the potential to transform lives. Reliable bus services might result in higher patronage, resulting in more bus routes being created, more frequent services on existing bus routes, and more bus “drivers” being hired. Indeed, this is the pattern we saw with power looms during the Industrial Revolution. Improved productivity resulted in lower prices for cloth, enabling a broader section of the community to buy higher quality clothing, which increased demand and created more jobs for weavers. Automation can result in jobs that are good for the worker, firm, and society as a whole.

Automation can result in jobs that are good for the worker, firm, and society as a whole.

How will we shape the jobs of the future?

There is no inevitability about the nature of work in the future. Clearly, the work will be different than it is today, though how it is different is an open question. Predictions of a jobless future, or a nirvana where we live a life of leisure, are most likely wrong. It’s true that the development of new technology has a significant effect on the shape society takes, though this is not a one-way street, as society’s preferences shape which technologies are pursued and which of their potential uses are socially acceptable. Melvin Kranzberg, a historian specializing in the history of technology, captured this in his fourth law: “Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.”10

The jobs first created by the development of the moving assembly line were clearly unacceptable by social standards of the time. The solution was for society to establish social norms for the employee-employer relationship—with the legislation of the eight-hour day an example of this—and the development of the social institutions to support this new relationship. New “sharing economy” jobs and AI encroaching into the workplace suggest that we might be reaching a similar point, with many firms feeling that they have no option but to create bad jobs if they want to survive. These bad jobs can carry an economic cost, as they drag profitability down. In this essay, as well as our previous one,11 we have argued that these bad jobs are also preventing us from capitalizing on the opportunity created by AI.

Our relationship with technology has changed, and how we conceive work needs to change as a consequence. Prior to the Industrial Revolution, work was predominantly craft-based; we had an instrumental relationship with technology; and social norms and institutions were designed to support craft-based work. After the Industrial Revolution, with the development of the moving production line as the tipping point, work was based on task-specialization, and a new set of social norms and institutions were developed to support work built around products, tasks, and the skills required to prosecute them. With the advent of AI, our relationship with technology is changing again, and this automation is better thought of as capturing behaviors rather than tasks. As we stated previously, if automation in the industrial era was the replication of tasks previously isolated and defined for humans, then in this post-industrial era automation might be the replication of isolated and well-defined behaviors that were previously unique to humans.12

There are many ways to package human and digital behaviors—that is, of constructing the jobs of the future. We, as a community, get to determine what these jobs look like. This future will still require bus drivers, mining engineers and machinery operators, financial advisors, as well as social workers and those employed in the caring professions, as it is our human proclivity for noticing the new and unusual, of making sense of the world, that creates value. Few people want financial products for their retirement fund; what they really want is a happy retirement. In a world of robo-advisors, all the value is created in the human conversation between financial advisors and clients, where they work together to discover what the client’s happy retirement is (and consequently, investment goals, income streams, etc.), not in the mechanical creation and implementation of an investment strategy based on predefined parameters. If we’re to make the most of AI, realize the productivity (and, consequently, quality of life) improvements it promises, and deliver the opportunities for operational efficiency, then we need to choose to create good jobs:

  • Jobs that make the most of our human nature as social problem identifiers and solvers
  • Jobs that are productive and sustainable for organizations
  • Jobs with an employee-employer relationship aligned with social norms
  • Jobs that support learning by doing, providing for the worker’s personal development, for the improvement of the organization, and for the wealth of the community as a whole.

The question, then, is: What do we want these jobs of the future to look like?

5 Important Artificial Intelligence Predictions (For 2019) Everyone Should Read


Artificial Intelligence – specifically machine learning and deep learning – was everywhere in 2018, and don’t expect the hype to die down over the next 12 months.

The hype will die eventually of course, and AI will become another consistent thread in the tapestry of our lives, just like the internet, electricity, and combustion did in days of yore.

But for at least the next year, and probably longer, expect astonishing breakthroughs as well as continued excitement and hyperbole from commentators.


This is because expectations of the changes to business and society which AI promises (or in some cases threatens) to bring about go beyond anything dreamed up during previous technological revolutions.

AI points towards a future where machines not only do all of the physical work, as they have done since the industrial revolution, but also the “thinking” work – planning, strategizing and making decisions.

The jury’s still out on whether this will lead to a glorious utopia, with humans free to spend their lives on more meaningful pursuits rather than on those that economic necessity dictates, or to widespread unemployment and social unrest.


We probably won’t arrive at either of those outcomes in 2019, but it’s a topic which will continue to be hotly debated. In the meantime, here are five things that we can expect to happen:

  1. AI increasingly becomes a matter of international politics

2018 has seen major world powers increasingly putting up fences to protect their national interests when it comes to trade and defense. Nowhere has this been more apparent than in the relationship between the world’s two AI superpowers, the US and China.

In the face of tariffs and export restrictions on goods and services used to create AI imposed by the US Government, China has stepped up its efforts to become self-reliant when it comes to research and development.

Chinese tech manufacturer Huawei announced plans to develop its own AI processing chips, reducing the need for the country’s booming AI industry to rely on US manufacturers like Intel and Nvidia.

At the same time, Google has faced public criticism for its apparent willingness to do business with Chinese tech companies (many with links to the Chinese government) while withdrawing (after pressure from its employees) from arrangements to work with US government agencies due to concerns its tech may be militarised.

With nationalist politics enjoying a resurgence, there are two apparent dangers here.

Firstly, that artificial intelligence technology could be increasingly adopted by authoritarian regimes to restrict freedoms, such as the rights to privacy or free speech.

Secondly, that these tensions could compromise the spirit of cooperation between academic and industrial organizations across the world. This framework of open collaboration has been instrumental to the rapid development and deployment of the AI technology we see taking place today, and putting up borders around a nation’s AI development is likely to slow that progress. In particular, it is expected to slow the development of common standards around AI and data, which could greatly increase the usefulness of AI.

  2. A Move Towards “Transparent AI”

The adoption of AI across wider society – particularly when it involves dealing with human data – is hindered by the “black box problem”: its workings often seem arcane and unfathomable to anyone without a thorough understanding of what it’s actually doing.

To achieve its full potential AI needs to be trusted – we need to know what it is doing with our data, why, and how it makes its decisions when it comes to issues that affect our lives. This is often difficult to convey – particularly as what makes AI particularly useful is its ability to draw connections and make inferences which may not be obvious or may even seem counter-intuitive to us.

But building trust in AI systems isn’t just about reassuring the public. Research and business will also benefit from openness which exposes bias in data or algorithms. Reports have even found that companies are sometimes holding back from deploying AI due to fears they may face liabilities in the future if current technology is later judged to be unfair or unethical.

In 2019 we’re likely to see an increased emphasis on measures designed to increase the transparency of AI. This year IBM unveiled technology developed to improve the traceability of decisions as part of its AI OpenScale technology. This concept gives real-time insights into not only what decisions are being made, but how they are being made, drawing connections between the data that is used, decision weighting and the potential for bias in information.
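This is not OpenScale’s API, but the underlying “decision weighting” idea can be illustrated generically: fit a simple model and inspect the signed weight each input carries. The loan-style features and synthetic data below are invented for illustration.

```python
# A generic illustration of decision-weight transparency: train a simple model on
# synthetic, loan-style data and print how strongly each input pushes the decision.
# This is an invented example, not IBM's OpenScale technology or API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "existing_debt", "years_at_address"]
X = rng.normal(size=(500, 3))
# Synthetic ground truth: approvals driven mostly by income and existing debt.
y = (1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>18}: {weight:+.2f}")   # signed weights show which way each input pushes
```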

The General Data Protection Regulation, put into action across Europe this year, gives citizens some protection against decisions which have “legal or other significant” impact on their lives made solely by machines. While it isn’t yet a blisteringly hot political potato, its prominence in public discourse is likely to grow during 2019, further encouraging businesses to work towards transparency.

  3. AI and automation drilling deeper into every business

In 2018, companies began to get a firmer grip on the realities of what AI can and can’t do. After spending the previous few years getting their data in order and identifying areas where AI could bring quick rewards, or fail fast, big business as a whole is ready to move ahead with proven initiatives, moving from piloting and soft-launching to global deployment.

In financial services, vast real-time logs of thousands of transactions per second are routinely parsed by machine learning algorithms. Retailers are proficient at grabbing data through till receipts and loyalty programmes and feeding it into AI engines to work out how to get better at selling us things. Manufacturers use predictive technology to know precisely what stresses machinery can be put under and when it is likely to break down or fail.

In 2019 we’ll see growing confidence that this smart, predictive technology, bolstered by learnings it has picked up in its initial deployments, can be rolled out wholesale across all of a business’s operations.

AI will branch out into support functions such as HR or optimizing supply chains, where decisions around logistics, as well as hiring and firing, will become increasingly informed by automation. AI solutions for managing compliance and legal issues are also likely to be increasingly adopted. As these tools will often be fit-for-purpose across a number of organizations, they will increasingly be offered as-a-service, offering smaller businesses a bite of the AI cherry, too.

We’re also likely to see an increase in businesses using their data to generate new revenue streams. Building up big databases of transactions and customer activity within its industry essentially lets any sufficiently data-savvy business begin to “Googlify” itself. Becoming a source of data-as-a-service has been transformational for businesses such as John Deere, which offers analytics based on agricultural data to help farmers grow crops more efficiently. In 2019 more companies will adopt this strategy as they come to understand the value of the information they own.

  4. More jobs will be created by AI than will be lost to it.

As I mentioned in my introduction to this post, in the long term it’s uncertain whether the rise of the machines will lead to human unemployment and social strife, a utopian workless future, or (probably more realistically) something in between.

For the next year at least, though, it seems it isn’t going to be an immediate problem. Gartner predicts that by the end of 2019, AI will be creating more jobs than it is taking.

While 1.8 million jobs will be lost to automation – with manufacturing in particular singled out as likely to take a hit – 2.3 million will be created. In particular, Gartner’s report finds, these could be focused on education, healthcare, and the public sector.

A likely driver for this disparity is the emphasis placed on rolling out AI in an “augmenting” capacity when it comes to deploying it in non-manual jobs. Warehouse workers and retail cashiers have often been replaced wholesale by automated technology. But when it comes to doctors and lawyers, AI service providers have made a concerted effort to present their technology as something which can work alongside human professionals, assisting them with repetitive tasks while leaving the “final say” to them.

This means those industries benefit from the growth in human jobs on the technical side – those needed to deploy the technology and train the workforce on using it – while retaining the professionals who carry out the actual work.

For financial services, the outlook is perhaps slightly grimmer. Some estimates, such as those made by former Citigroup CEO Vikram Pandit in 2017, predict that the sector’s human workforce could be 30% smaller within five years. With back-office functions increasingly being managed by machines, we could be well on our way to seeing that come true by the end of next year.

  5. AI assistants will become truly useful

AI is genuinely interwoven into our lives now, to the point that most people don’t give a second thought to the fact that when they search Google, shop at Amazon or watch Netflix, highly precise, AI-driven predictions are at work to make the experience flow.

A slightly more apparent sense of engagement with robotic intelligence comes about when we interact with AI assistants – Siri, Alexa, or Google Assistant, for example – to help us make sense of the myriad of data sources available to us in the modern world.

In 2019, more of us than ever will use an AI assistant to arrange our calendars, plan our journeys and order a pizza. These services will become increasingly useful as they learn to anticipate our behaviors better and understand our habits.

Data gathered from users allows application designers to understand exactly which features are providing value, and which are underused, perhaps consuming valuable resources (through bandwidth or reporting) which could be better used elsewhere.

As a result, functions which we do want to use AI for – such as ordering taxis and food deliveries, and choosing restaurants to visit – are becoming increasingly streamlined and accessible.

On top of this, AI assistants are designed to become increasingly efficient at understanding their human users, as the natural language algorithms used to encode speech into computer-readable data, and vice versa, are exposed to more and more information about how we communicate.

It’s evident that conversations between us and Alexa or Google Assistant can seem very stilted today. However, the rapid acceleration of understanding in this field means that, by the end of 2019, we will be getting used to far more natural and flowing discourse with the machines we share our lives with.

 

Google’s New AI Is a Master of Games, but How Does It Compare to the Human Mind?


After building AlphaGo to beat the world’s best Go players, Google DeepMind built AlphaZero to take on the world’s best machine players

Google’s new artificial intelligence program, AlphaZero, taught itself to play chess, shogi, and Go in a matter of hours, and outperforms the top-ranking AIs in the gameplay arena.

For humans, chess may take a lifetime to master. But Google DeepMind’s new artificial intelligence program, AlphaZero, can teach itself to conquer the board in a matter of hours.

Building on its past success with the AlphaGo suite—a series of computer programs designed to play the Chinese board game Go—Google boasts that its new AlphaZero achieves a level of “superhuman performance” at not just one board game, but three: Go, chess, and shogi (essentially, Japanese chess). The team of computer scientists and engineers, led by Google’s David Silver, reported its findings recently in the journal Science.

“Before this, with machine learning, you could get a machine to do exactly what you want—but only that thing,” says Ayanna Howard, an expert in interactive computing and artificial intelligence at the Georgia Institute of Technology who did not participate in the research. “But AlphaZero shows that you can have an algorithm that isn’t so [specific], and it can learn within certain parameters.”

AlphaZero’s clever programming certainly ups the ante on gameplay for human and machine alike, but Google has long had its sights set on something bigger: engineering intelligence.

The researchers are careful not to claim that AlphaZero is on the verge of world domination (others have been a little quicker to jump the gun). Still, Silver and the rest of the DeepMind squad are already hopeful that they’ll someday see a similar system applied to drug design or materials science.

So what makes AlphaZero so impressive?

Gameplay has long been revered as a gold standard in artificial intelligence research. Structured, interactive games are simplifications of real-world scenarios: Difficult decisions must be made; wins and losses drive up the stakes; and prediction, critical thinking, and strategy are key.

Encoding this kind of skill is tricky. Older game-playing AIs—including the first prototypes of the original AlphaGo—have traditionally been pumped full of codes and data to mimic the experience typically earned through years of natural, human gameplay (essentially, a passive, programmer-derived knowledge dump). With AlphaGo Zero (the most recent version of AlphaGo), and now AlphaZero, the researchers gave the program just one input: the rules of the game in question. Then, the system hunkered down and actively learned the tricks of the trade itself.

AlphaZero is based on AlphaGo Zero, part of the AlphaGo suite designed to play the Chinese board game Go. Early iterations of the original program were fed data from human-versus-human games; later versions engaged in self-teaching, wherein the software played games against itself to learn its own strategy.

This strategy, called self-play reinforcement learning, is pretty much exactly what it sounds like: To train for the big leagues, AlphaZero played itself in iteration after iteration, honing its skills by trial and error. And the brute-force approach paid off. Unlike AlphaGo Zero, AlphaZero doesn’t just play Go: It can beat the best AIs in the business at chess and shogi, too. The learning process is also impressively efficient, requiring only two, four, or 30 hours of self-tutelage to outperform programs specifically tailored to master shogi, chess, and Go, respectively. Notably, the study authors didn’t report any instances of AlphaZero going head-to-head with an actual human, Howard says. (The researchers may have assumed that, given that these programs consistently clobber their human counterparts, such a matchup would have been pointless.)
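
To make the idea of self-play concrete, here is a minimal sketch in Python. It is not DeepMind's algorithm (AlphaZero pairs a deep neural network with Monte Carlo tree search); it is a tiny tabular learner for the toy game of Nim, and the game, reward scheme, and hyperparameters are illustrative assumptions. As in AlphaZero, the only human input is the rules.

```python
import random

# A minimal sketch of self-play reinforcement learning, not DeepMind's
# actual method. A single value table learns the toy game of Nim purely
# by playing against itself; the only human input is the rules.

STICKS = 10          # start with 10 sticks; taking the last stick wins
ACTIONS = (1, 2, 3)  # a player may remove 1, 2, or 3 sticks per turn

Q = {}  # Q[(sticks, action)] = value from the current player's point of view

def legal_moves(sticks):
    return [a for a in ACTIONS if a <= sticks]

def q(sticks, action):
    return Q.get((sticks, action), 0.0)

def best_value(sticks):
    # Value of a position for the player about to move (0 if the game is over).
    moves = legal_moves(sticks)
    return max(q(sticks, a) for a in moves) if moves else 0.0

def train(episodes=20000, alpha=0.1, epsilon=0.2):
    for _ in range(episodes):
        sticks = STICKS
        while sticks > 0:
            moves = legal_moves(sticks)
            if random.random() < epsilon:                      # explore
                action = random.choice(moves)
            else:                                              # exploit
                action = max(moves, key=lambda a: q(sticks, a))
            nxt = sticks - action
            # Reward +1 for taking the last stick; otherwise the position is
            # worth the negation of the opponent's best reply (zero-sum game).
            target = 1.0 if nxt == 0 else -best_value(nxt)
            Q[(sticks, action)] = q(sticks, action) + alpha * (target - q(sticks, action))
            sticks = nxt                                       # same table plays the reply

if __name__ == "__main__":
    train()
    for s in range(1, STICKS + 1):
        best = max(legal_moves(s), key=lambda a: q(s, a))
        print(f"{s:2d} sticks left -> take {best}")
```

After enough episodes, the table should recover Nim's well-known optimal strategy (always leave the opponent a multiple of four sticks) without ever being told it; in positions that are already lost, every move scores equally badly, so the move printed there is arbitrary. AlphaZero operates at a vastly larger scale, but the underlying idea is similar: play yourself, score the outcome, update your evaluations, and repeat.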

AlphaZero was also able to trounce Stockfish (the now unseated AI chess master) and Elmo (the former AI shogi expert) despite evaluating fewer possible next moves on each turn of gameplay. But because the algorithms in question are inherently different, and may consume different amounts of power, it’s difficult to directly compare AlphaZero to other, older programs, points out Joanna Bryson, who studies artificial intelligence at the University of Bath in the United Kingdom and was not involved in the AlphaZero research.

Google keeps mum about a lot of the fine print on its software, and AlphaZero is no exception. While we don’t know everything about the program’s power consumption, what’s clear is this: AlphaZero has to be packing some serious computational ammo. In those scant hours of training, the program kept itself very busy, engaging in tens or hundreds of thousands of practice rounds to get its board game strategy up to snuff—far more than a human player would need (or, in most cases, could even accomplish) in pursuit of proficiency.

This intensive regimen also used 5,000 of Google’s proprietary machine-learning processor units, or TPUs, which by some estimates consume around 200 watts per chip. No matter how you slice it, AlphaZero requires way more energy than a human brain, which runs on about 20 watts.
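
Taking those rough figures at face value, a back-of-the-envelope calculation shows just how lopsided the comparison is. The per-chip wattage is an estimate, and this ignores supporting hardware and cooling:

```python
# Back-of-the-envelope estimate using the rough figures cited above;
# actual draw depends on TPU generation, utilization, and cooling.
tpus = 5_000
watts_per_tpu = 200   # estimated consumption per chip
brain_watts = 20      # approximate power budget of a human brain

total_watts = tpus * watts_per_tpu
print(f"Training draw: ~{total_watts / 1_000_000:.1f} megawatts")
print(f"Roughly {total_watts // brain_watts:,} times the human brain")
```

On these numbers, the training run draws on the order of a megawatt, roughly 50,000 times the brain's budget.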

The absolute energy consumption of AlphaZero must be taken into consideration, adds Bin Yu, who works at the interface of statistics, machine learning, and artificial intelligence at the University of California, Berkeley. AlphaZero is powerful, but it may not deliver much bang for the buck—especially once you add in the person-hours that went into its creation and execution.

Energetically expensive or not, AlphaZero makes a splash: Most AIs are hyper-specialized on a single task, making this new program—with its triple threat of game play—remarkably flexible. “It’s impressive that AlphaZero was able to use the same architecture for three different games,” Yu says.

So, yes. Google’s new AI does set a new mark in several ways. It’s fast. It’s powerful. But does that make it smart?

This is where definitions start to get murky. “AlphaZero was able to learn, starting from scratch without any human knowledge, to play each of these games to superhuman level,” DeepMind’s Silver said in a statement to the press.

Even if board game expertise requires mental acuity, all proxies for the real world have their limits. In its current iteration, AlphaZero maxes out by winning human-designed games—which may not warrant the potentially alarming label of “superhuman.” Plus, if surprised with a new set of rules mid-game, AlphaZero might get flummoxed. The actual human brain, on the other hand, can store far more than three board games in its repertoire.

What’s more, comparing AlphaZero’s baseline to a tabula rasa (blank slate)—as the researchers do—is a stretch, Bryson says. Programmers are still feeding it one crucial morsel of human knowledge: the rules of the game it’s about to play. “It does have far less to go on than anything has before,” Bryson adds, “but the most fundamental thing is, it’s still given rules. Those are explicit.”

And those pesky rules could constitute a significant crutch. “Even though these programs learn how to perform, they need the rules of the road,” Howard says. “The world is full of tasks that don’t have these rules.”

When push comes to shove, AlphaZero is an upgrade of an already powerful program—AlphaGo Zero, explains JoAnn Paul, who studies artificial intelligence and computational dreaming at the Virginia Polytechnic Institute and State University and was not involved in the new research. AlphaZero uses many of the same building blocks and algorithms as AlphaGo Zero, and still constitutes just a subset of true smarts. “I thought this new development was more evolutionary than revolutionary,” she adds. “None of these algorithms can create. Intelligence is also about storytelling. It’s imagining things that are not yet there. We’re not thinking in those terms in computers.”

Part of the problem is, there’s still no consensus on a true definition of “intelligence,” Yu says—and not just in the domain of technology. “It’s still not clear how we are training critically thinking beings, or how we use the unconscious brain,” she adds.

To this point, many researchers believe there are likely multiple types of intelligence. And tapping into one far from guarantees the ingredients for another. For instance, some of the smartest people out there are terrible at chess.

With these limitations, Yu’s vision of the future of artificial intelligence partners humans and machines in a kind of coevolution. Machines will certainly continue to excel at certain tasks, she explains, but human input and oversight may always be necessary to compensate for the unautomated.

Of course, there’s no telling how things will shake out in the AI arena. In the meantime, we have plenty to ponder. “These computers are powerful, and can do certain things better than a human can,” Paul says. “But that still falls short of the mystery of intelligence.”
