Sex robot inventor says having baby with his android lover will be ‘extremely simple’


Sergi Santos in relationship with android as well as human wife of 16 years


Sergi Santos with his sex robot creation ‘Samantha’

A sex robot creator has claimed that he will soon be able to have a baby with his own robot lover.

Sergi Santos, an electronic engineer and expert in AI, also believes it is just a matter of time before machines are doing human jobs and marrying into human families.

The Spaniard told The Sun he would “love” to have a child with his robotic partner, and that it would be “extremely simple”.

“Using the brain I have already created, I would program it with a genome so he or she could have moral values, plus concepts of beauty, justice and the values that humans have,” he said.

“Then to create a child with this robot it would be extremely simple. I would make an algorithm of what I personally believe about these concepts and then shuffle it with what she thinks and then 3D print it.”

The designer says that having regular intercourse with his robot, called “Samantha”, has improved his sex life with his wife, Ms Kissamitaky.

It is also claimed his android has the ability to create emotional ties, can progress through different emotional modes, and can make “realistic” orgasm sounds.


Guess Which Single Word Will Convince Other Humans You’re Not a Robot


That’s… not what we expected.


If you were trying to convince another human that you yourself are also human, what would you say? Probably something about emotions, right? That might work – but other humans are more likely to believe your humanity if you talk about bodily functions.

Specifically, the word ‘poop’.

At least, that’s the finding from a study that sought to determine a “minimal Turing test”, narrowing the human-like intelligence assessment down to a single word.

A Turing test – named after mathematician Alan Turing – is a conceptual method for determining whether machines can think like a human. In its simplest form, it involves having a computerised chat with an AI – if a human can’t tell if they’re talking to a computer or a living being, the AI “passes” the test.

In a new paper, cognitive scientists John McCoy and Tomer Ullman of MIT’s Department of Brain and Cognitive Sciences (McCoy is now at the University of Pennsylvania) have described a twist on this classic concept.

They asked 1,089 study participants what single word they would choose for this purpose: not to help distinguish humans from machines, but to try to understand what we humans think makes us human.

The largest proportion – 47 percent – of the participants chose something related to emotions or thinking, what the researchers call “mind perception”.

By far the most popular option was love, clocking in at a rather large 14 percent of all responses. This was followed by compassion at 3.5 percent, human at 3.2 percent and please at 2.7 percent.

Word cloud of the words participants chose (McCoy & Ullman/Journal of Experimental Social Psychology)

In all, there were 10 categories, such as food, including words like banana and pizza; non-humans, including words like dog or robot; life and death, including words like pain and alive; and body functions and profanity, which included words like poop and penis.

The next part of the study involved figuring out which of those words would be likely to convince other humans of humanity.

The researchers randomly put pairs of the top words from each of the 10 categories together – such as banana and empathy – and told a new group of 2,405 participants that one of the words was chosen by a machine and the other by a human (even though both words were chosen by humans).
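To make the setup concrete, here is a small illustrative sketch, in Python, of how one such pairing trial could be assembled. The category names and words below are stand-ins rather than the study’s actual lists.

```python
import random

# Hypothetical top words per category, loosely modelled on the categories the
# article describes; the study's real word lists are not reproduced here.
top_words = {
    "mind_perception": "love",
    "non_human": "robot",
    "food": "banana",
    "bodily_function": "poop",
    "life_death": "alive",
}

def make_trial(categories):
    """Pair the top words of two randomly chosen categories, as in Study 2."""
    a, b = random.sample(categories, 2)
    pair = [top_words[a], top_words[b]]
    random.shuffle(pair)  # hide which category each word came from
    return pair

pair = make_trial(list(top_words))
print(f"One of these was (supposedly) chosen by a machine: {pair[0]} / {pair[1]}")
# A participant's task is to pick the word they believe the human chose.
```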

This group’s task was to say which was which. Unsurprisingly, the least successful word was robot. But the most successful – poop – was a surprise.

Word-pair comparison results (McCoy & Ullman/Journal of Experimental Social Psychology)

This could, the researchers said, be because ‘taboo’ words generate an emotional response, rather than simply describing one.

“The high average relative strengths of the words ‘love’, ‘mercy’, and ‘compassion’ is consistent with the importance of the experience dimension when distinguishing the minds of robots and people. However, the taboo category word (‘poop’) has the highest average relative strength, referring to bodily function and evoking an amused emotional response,” they wrote in their paper.

“This suggests that highly charged words, such as the colourful profanities appearing in Study 1, might be judged as given by a human over all words used in Study 2.”

Now that this information is on the internet where any old AI with WiFi could get access to it, the study may not really help us tell person from machine; but it does provide some fascinating insight into our self-perception, and what we feel it means to be human.

It’s also, the researchers said, a methodology that could help us explore our perceptions of different kinds of humans – what would we say, for instance, to convince someone else we were a man or a woman, a child or a grandparent, Chinese or Chilean?

“Recall the word that you initially chose to prove that you are human. Perhaps it was a common choice, or perhaps it appeared but one other time, your thoughts in secret affinity with the machinations that produced words such as caterpillar, ethereal, or shenanigans. You may have delighted that your word was judged highly human, or wondered how it would have fared,” the researchers wrote.

“Whatever your word, it rested on the ability to rapidly navigate a web of shared meanings, and to make nuanced predictions about how others would do the same. As much as love and compassion, this is part of what it is to be human.”

Also: poop, apparently.

The research has been published in the Journal of Experimental Social Psychology.

Should we be worried about artificial intelligence?


Not really, but we do need to think carefully about how to harness, and regulate, machine intelligence.

By now, most of us are used to the idea of rapid, even accelerating, technological change, particularly where information technologies are concerned. Indeed, as consumers, we helped the process along considerably. We love the convenience of mobile phones, and the lure of social-media platforms such as Facebook, even if, as we access these services, we find that bits and pieces of our digital selves become strewn all over the internet.

More and more tasks are being automated. Computers (under human supervision) already fly planes and sail ships. They are rapidly learning how to drive cars. Automated factories make many of our consumer goods. If you enter (or return to) Australia with an eligible e-passport, a computer will scan your face, compare it with your passport photo and, if the two match up, let you in. The “internet of things” beckons; there seems to be an “app” for everything. We are invited to make our homes smarter and our lives more convenient by using programs that interface with our home-based systems and appliances to switch the lights on and off, defrost the fridge and vacuum the carpet.

Robots taking over more intimate jobs

With the demise of the local car industry and the decline of manufacturing, the services sector is expected to pick up the slack for job seekers. But robots are taking over certain roles once deemed human-only.

Clever though they are, these programs represent more-or-less familiar applications of computer-based processing power. With artificial intelligence, though, computers are poised to conquer skills that we like to think of as uniquely human: the ability to extract patterns and solve problems by analysing data, to plan and undertake tasks, to learn from our own experience and that of others, and to deploy complex forms of reasoning.

The quest for AI has engaged computer scientists for decades. Until very recently, though, AI’s initial promise had failed to materialise. The recent revival of the field came as a result of breakthrough advances in machine intelligence and, specifically, machine learning. It was found that, by using neural networks (interlinked processing points) to implement mathematically specified procedures or algorithms, machines could, through many iterations, progressively improve on their performance – in other words, they could learn. Machine intelligence in general and machine learning in particular are now the fastest-growing components of AI.
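To make that “learning through many iterations” idea concrete, here is a deliberately tiny, purely illustrative sketch: a single-parameter model nudged step by step to reduce its error on a handful of data points. Real neural networks do essentially the same thing with millions of interlinked parameters.

```python
# A minimal sketch of iterative learning: one weight, repeatedly adjusted
# to shrink the squared error on a few (input, target) pairs.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs
w = 0.0             # the single weight the "network" learns
learning_rate = 0.05

for step in range(200):                        # many iterations
    grad = 0.0
    for x, y in data:
        prediction = w * x
        grad += 2 * (prediction - y) * x       # derivative of squared error
    w -= learning_rate * grad / len(data)      # adjust to reduce the error

print(f"learned weight: {w:.2f}")  # settles near 2, the slope of the data
```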

The achievements have been impressive. It is now 20 years since IBM’s Deep Blue program, using traditional computational approaches, beat Garry Kasparov, the world’s best chess player. With machine-learning techniques, computers have conquered even more complex games such as Go, a strategy-based game with an enormous range of possible moves. In 2016, Google DeepMind’s AlphaGo program beat Lee Sedol, one of the world’s best Go players, four games to one in a five-game match.

Allan Dafoe, of Oxford University’s Future of Humanity Institute, says AI is already at the point where it can transform almost every industry, from agriculture to health and medicine, from energy systems to security and the military. With sufficient data, computing power and an appropriate algorithm, machines can be used to come up with solutions that are not only commercially useful but, in some cases, novel and even innovative.

Should we be worried? Commentators as diverse as the late Stephen Hawking and development economist Muhammad Yunus have issued dire warnings about machine intelligence. Unless we learn how to control AI, they argue, we risk finding ourselves replaced by machines far more intelligent than we are. The fear is that not only will humans be redundant in this brave new world, but the machines will find us completely useless and eliminate us.

The University of Canberra’s robot Ardie teaches tai chi to primary school pupils.

If these fears are realistic, then governments clearly need to impose some sort of ethical and values-based framework around this work. But are our regulatory and governance techniques up to the task? When, in Australia, we have struggled to regulate our financial services industry, how on earth will governments anywhere manage a field as rapidly changing and complex as machine intelligence?

Governments often seem to play catch-up when it comes to new technologies. Privacy legislation is enormously difficult to enforce when technologies effortlessly span national boundaries. It is difficult for legislators even to know what is going on in relation to new applications developed inside large companies such as Facebook. On the other hand, governments are hardly IT ingenues. The public sector provided the demand-pull that underwrote the success of many high-tech firms. The US government, in particular, has facilitated the growth of many companies in cybersecurity and other fields.

Governments have been in the information business for a very long time. As William the Conqueror knew when he ordered his Domesday Book to be compiled in 1085, you can’t tax people successfully unless you know something about them. Spending of tax-generated funds is impossible without good IT. In Australia, governments have developed and successfully managed very large databases in health and human services.

The governance of all this data is subject to privacy considerations, sometimes even at the expense of information-sharing between agencies. The evidence we have is that, while some people worry a lot about privacy, most of us are prepared to trust government with our information. In 2016, the Australian Bureau of Statistics announced that, for the first time, it would retain the names and addresses it collected during the course of the 2016 population census. It was widely expected (at least by the media) that many citizens would withhold their names and addresses when they returned their forms. In the end, very few did.

But these are government agencies operating outside the security field. The so-called “deep state” holds information about citizens that could readily be misused. Moreover, private-sector profit is driving much of the current AI surge (although, in many cases, it is the thrill of new knowledge and understanding, too). We must assume that criminals are working out ways to exploit these possibilities, too.

If we want values such as equity, transparency, privacy and safety to govern what happens, old-fashioned regulation will not do the job. We need the developers of these technologies to co-produce the values we require, which implies some sort of effective partnership between the state and the private sector.

Could policy development be the basis for this kind of partnership? At the moment, machine intelligence works best on problems for which relevant data is available, and the objective is relatively easy to specify. As it develops, and particularly if governments are prepared to share their own data sets, machine intelligence could become important in addressing problems such as climate change, where we have data and an overall objective, but not much idea as to how to get there.

Machine intelligence might even help with problems where objectives are much harder to specify. What, for example, does good urban planning look like? We can crunch data from many different cities, and come up with an answer that could, in theory, go well beyond even the most advanced human-based modelling. When we don’t know what we don’t know, machines could be very useful indeed. Nor do we know, until we try, how useful the vast troves of information held by governments might be.

Perhaps, too, the jobs threat is not as extreme as we fear. Experience shows that humans are very good at finding things to do. And there might not be as many existing jobs at risk as we suppose. I am convinced, for example, that no robot could ever replace road workers – just think of the fantastical patterns of dug-up gravel and dirt they produce, the machines artfully arranged by the roadside or being driven, very slowly, up and down, even when all the signs are there, and there is absolutely no one around. How do we get a robot, even one capable of learning by itself, to do all that?


Cometh the cyborg: Improved integration of living muscles into robots


Researchers have developed a novel method of growing whole muscles from hydrogel sheets impregnated with myoblasts. They then incorporated these muscles as antagonistic pairs into a biohybrid robot, which successfully performed manipulations of objects. This approach overcame earlier limitations of a short functional life of the muscles and their ability to exert only a weak force, paving the way for more advanced biohybrid robots.

Object manipulations performed by the biohybrid robots.
 

The new field of biohybrid robotics involves the use of living tissue within robots, rather than just metal and plastic. Muscle is one potential key component of such robots, providing the driving force for movement and function. However, in efforts to integrate living muscle into these machines, there have been problems with the force these muscles can exert and the amount of time before they start to shrink and lose their function.

Now, in a study reported in the journal Science Robotics, researchers at The University of Tokyo Institute of Industrial Science have overcome these problems by developing a new method that progresses from individual muscle precursor cells, to muscle-cell-filled sheets, and then to fully functioning skeletal muscle tissues. They incorporated these muscles into a biohybrid robot as antagonistic pairs mimicking those in the body to achieve remarkable robot movement and continued muscle function for over a week.

The team first constructed a robot skeleton on which to install the pair of functioning muscles. This included a rotatable joint, anchors where the muscles could attach, and electrodes to provide the stimulus to induce muscle contraction. For the living muscle part of the robot, rather than extract and use a muscle that had fully formed in the body, the team built one from scratch. For this, they used hydrogel sheets containing muscle precursor cells called myoblasts, holes to attach these sheets to the robot skeleton anchors, and stripes to encourage the muscle fibers to form in an aligned manner.

“Once we had built the muscles, we successfully used them as antagonistic pairs in the robot, with one contracting and the other expanding, just like in the body,” study corresponding author Shoji Takeuchi says. “The fact that they were exerting opposing forces on each other stopped them shrinking and deteriorating, like in previous studies.”

The team also tested the robots in different applications, including having one pick up and place a ring, and having two robots work in unison to pick up a square frame. The results showed that the robots could perform these tasks well, with activation of the muscles leading to flexing of a finger-like protuberance at the end of the robot by around 90°.

“Our findings show that, using this antagonistic arrangement of muscles, these robots can mimic the actions of a human finger,” lead author Yuya Morimoto says. “If we can combine more of these muscles into a single device, we should be able to reproduce the complex muscular interplay that allows hands, arms, and other parts of the body to function.”


Healthcare AI & Machine Learning: 4.5 Things To Know


Has Alexa changed your life yet? Vacuum the floor, start the dishwasher, feed the cat, clean the litter box, order more paper towels and lower the thermostat. Those are just a few items from the to-do list I handed over to Artificial Intelligence today – and all without one manual click or keystroke. Whew, no more straining my index finger to push all those buttons.

While all of this has a certain “cool” factor, it’s completely transforming our lives. Gone are the days when Artificial Intelligence (AI) meant a robot such as C-3PO or Rosie (the robot maid from The Jetsons). AI is no longer a sci-fi futuristic hope; it’s a set of algorithms and technologies known as Machine Learning (ML).

We’re quickly moving toward this technology powering many tasks in everyday life. But what about healthcare? How might it impact our industry and even our day-to-day jobs? I’m sharing 4.5 things you should know about Artificial Intelligence and Machine Learning in the healthcare industry and how it will impact our future.

1.   There’s a difference between Machine Learning and Artificial Intelligence.

Jeff Bezos, visionary founder and leader of Amazon, made this declaration last May: “Machine Learning and Artificial Intelligence will be used to empower and improve every business, every government organization, every philanthropy – basically there’s no institution in the world that cannot be improved with Machine Learning and Artificial Intelligence.”

Machine Learning and Artificial Intelligence have emerged as key buzzwords, and are often used interchangeably. So, what’s the difference?

  • Machine Learning uses Artificial Intelligence to process large amounts of data and allows the machine to learn for itself. You’ve probably benefitted from Machine Learning in your inbox: spam filters are continuously learning from the words used in an email, where it’s sent from, who sent it, and so on (a toy sketch of this idea appears after this list). In healthcare, a practical application is in the imaging industry, where machines learn how to read CT scans and MRIs, allowing providers to quickly diagnose and optimize treatment options.
  • Artificial Intelligence is the ability of machines to behave in a way that we would consider “smart” – the capability of a machine to imitate intelligent human behavior. It is the ability to be abstract, creative and deductive – to learn and apply what is learned. Siri and Alexa are good examples of what you might be using today. In healthcare, an artificial virtual assistant might have conversations with patients and providers about lab results and clinical next steps.
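To make the spam-filter bullet above concrete, here is a toy sketch of a filter that “learns” word frequencies from labelled mail and scores new messages accordingly. It is purely illustrative; real filters use far richer features and statistical models.

```python
from collections import Counter

# Toy spam filter: count word frequencies in labelled mail, then score new
# messages by how spammy vs. normal their words look.
spam_mail = ["win cash now", "cheap meds now"]
ham_mail  = ["meeting moved to noon", "lunch tomorrow?"]

spam_words = Counter(w for msg in spam_mail for w in msg.split())
ham_words  = Counter(w for msg in ham_mail for w in msg.split())

def spam_score(message):
    """Crude score: spammy word hits minus normal word hits."""
    words = message.lower().split()
    spam_hits = sum(spam_words[w] for w in words)
    ham_hits = sum(ham_words[w] for w in words)
    return spam_hits - ham_hits

print(spam_score("win cheap cash"))    # positive -> looks like spam
print(spam_score("meeting at lunch"))  # negative -> looks normal
```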

2.   The healthcare industry is primed for disruptors.

It’s no secret that interacting with the healthcare system is complex and frustrating. As consumers are paying more out of pocket, they’re expecting and demanding innovation to simplify their lives, get educated, and save money. At the same time, we’re starting to get a taste of what Machine Learning and AI can do for their daily lives. These technologies WILL dramatically change the way we work in healthcare. Don’t take my word for it, just review a few of the headlines over the past year.

  • A conversational robot explains lab results.
  • A healthcare virtual assistant enables conversational dialogues and pre-built capabilities that automate clinical workflows.
  • Google powers up Artificial Intelligence, Machine Learning accelerator for healthcare.
  • New AI technology uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health.
  • A machine learning algorithm helps identify cancerous tumors on mammograms.

Soon, will a machine learning algorithm serve up a choice of 3 benefit plans that are best for my situation? Maybe Siri or Alexa will even have a conversation with me about it.

3.   Be smart about customer data

Being in healthcare and seeing the recent breaches, most of us have learned to be very careful about our security policies and customer data. As the use of Machine Learning grows in healthcare, continue to obsess over the privacy of your customer data. There are two reasons for this…

First, data is what makes Machine Learning and AI work. Without data, there’s nothing to mine and that means there’s no info from which to learn. Due to the sheer amount of data that Machine Learning technology collects and consumes, privacy will be more important than ever. Also, there’s potential for a lot more personally identifiable data to be collected. It will be imperative that companies pay attention to masking that data so the specific user is not revealed. These steps are critical to ensure you and your customer are protected as laws continue to catch up with this emerging technology.
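As one illustration of that masking step, here is a minimal sketch that swaps a directly identifying field for a one-way pseudonym before a record reaches the analytics pipeline. The field names and salt are hypothetical, and real healthcare de-identification involves far more than hashing a single column.

```python
import hashlib

# Hypothetical secret kept out of the analytics store; rotate/manage it properly.
SALT = "rotate-me-regularly"

def pseudonymize(record):
    """Replace the identifying member_id with a one-way pseudonym."""
    masked = dict(record)
    member_id = masked.pop("member_id")
    masked["member_pseudonym"] = hashlib.sha256(
        (SALT + member_id).encode()
    ).hexdigest()[:16]
    return masked

claim = {"member_id": "A12345", "diagnosis_code": "J06.9", "paid_amount": 182.50}
print(pseudonymize(claim))  # analytics keeps the clinical fields, not the identity
```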

Second, you’re already collecting A LOT of data on your clients of which you may not be taking full advantage. As more data is collected, it’s important to keep it organized so that you can use it effectively to gain insights and help your clients. Data architecture is a set of models, policies, rules or standards that govern which data is collected, and how it is stored, arranged, integrated, and put to use in data systems and in organizations. If you’re not already thinking about this, it might be a good idea to consult with a developer to get help.

4.  Recognize WHERE the customer is today and plan for tomorrow.

What’s the consumer appetite for these types of technologies? This year, 56.3 million personal digital assistants will ship to homes, including Alexa, Google Assistant and Apple’s new HomePod – up from the already impressive 33 million sold in 2017. Consumers are quickly changing where they shop, organize and get information. It’s important that we move quickly to offer the experience customers want and need.

At first, I was happy to ask my Alexa to play music and set timers, now it’s the hub for my home. Alexa shops for my groceries, provides news stories, organizes my day, and much more. Plus, the cat really appreciates when Alexa feeds her – I’m totally serious that my cat feeder is hooked up to Alexa who releases a ration of kibble at pre-set feeding times.

There are so many applications for healthcare. Here’s a few that are happening or just around the corner…

“Alexa, what’s left on my health plan deductible?”

“Alexa, find a provider in my network within 5 miles.”

“Alexa, search for Protonix pricing at my local pharmacies.”

“Alexa, how do I know if I have an ear infection?”

“Alexa, what’s the difference between an HSA, FSA and HRA?”

“Alexa, find a 5-star rated health insurance broker within 20 miles.”


4.5   The customer experience must be TRULY helpful

Making “cool” innovations in Artificial Intelligence or Machine Learning won’t work if not coupled with a relentless pursuit to serve the customer. These endeavors are expensive, so spend your IT budget wisely, ensuring new innovation creates true value and is easy for the end user.

Are you ready to learn more? I will be doing a breakout session with Reid Rasmussen, CEO of freshbenies at BenefitsPro on 4/18 at 2:30pm.

Raising Countless Eyebrows, Walmart Files Patent for Autonomous Bee Drones


Experts Answer: Who Is Actually Going to Suffer From Automation?


Educated Guesses

Thanks to rapid advances in the fields of artificial intelligence (AI) and robotics, smart machines that would have once been relegated to works of science-fiction are now a part of our reality.

Today, we have AIs that can pick apples, manage hotels, and diagnose cancer. Researchers at MIT have even developed an algorithm that can predict the immediate future. If only they could train it to predict how automation is going to impact the human workforce…

Will Automation Steal My Job?

Currently, opinions on the subject are as varied as the types of AIs in development. In January, MIT Technology Review compiled a list of 19 studies focused on automation and the future of work. No two reached identical conclusions.

In 2017, research and advisory company Gartner released a study predicting automation would destroy 1.8 million jobs worldwide by 2020. That same year, another research and advisory company, Forrester, released their own report on automation and the workforce. According to their calculations, the U.S. alone will lose 13.8 million jobs to automation in 2018.

The numbers vary even more wildly the farther out you look. By 2030, futurist Thomas Frey predicts humans will lose 2 billion jobs to robots, while researchers from consulting firm McKinsey predict a comparatively paltry 400 to 800 million in losses.

Beyond the numbers, experts also disagree on which professions will become automated, as well as which parts of the world will bear the brunt of the job losses.

Are teachers and writers safe or should they start thinking about a career change? What about lawyers and doctors? Will the U.S. be the nation to lose the highest percentage of jobs, as PricewaterhouseCoopers predicts? Or will Japan be hit the hardest, as McKinsey’s report concludes?

In an attempt to get to the bottom of the automation mystery, Futurism asked several experts to tell us who they believe will be most likely to suffer as a result of automation. Here’s what they had to say.


Edward D. Hess, professor of business administration and Batten Executive-in-Residence at the University of Virginia:

Automation is going to dramatically impact service and professional workers. To find work, one must be good at doing what the technology won’t be able to do well.

For the near term, those skills are: (1) higher order thinking (critical, innovative, imaginative) that is not linear; (2) the delivery of customized services that require high emotional intelligence to other humans; and (3) trade skills that call for real-time iterative problem diagnosis and solving and/or complex human dexterity.

Jobs that have a high risk of being automated are jobs that involve repetitive tasks and linear tasks that are easy to code: “if this, then do this.”

High-risk fields are retail, fast food, agriculture, customer service, accounting, marketing, management consulting, investment management, finance, higher education, insurance, and architecture. Specific jobs include security guards, long-haul truck drivers, manual laborers, construction workers, paralegals, CPAs, radiologists, and administrative workers.

Technology is going to continue to advance, and in reality, all of us are going to have to become life-long learners, constantly upgrading our skills. The most important skills to have will be knowing how to be highly efficient at iterative learning — “unlearning and relearning” — and developing high emotional and social intelligence.

Jobs requiring high emotional engagement in the customization and delivery of services to other human beings will be the most safe. Those include psychological counselors, social workers, elementary school teachers, physical therapists, personal trainers, trial lawyers, and estate planners. Other jobs that will be in high demand are in computer and data science.

What will become human beings’ unique skill? Emotional and social intelligence.

Joel Mokyr, an economic historian at Northwestern University and author of A Culture of Growth: Origins of the Modern Economy:

The short answer is people who have boring, routine, repetitive, and physically arduous jobs.

The long answer is that labor-saving process innovation and “classical” productivity increase may make some workers redundant as they are replaced by robots and machines that can do their jobs better and cheaper.

This could get a lot worse if AI will also replace workers who are trained and skilled in medium human-capital intensity jobs, such as drivers, legal assistants, bank tellers, etc. So far, the evidence for that is very weak, but it could change, depending on what happens to demand and output as prices fall and quality improves. What counts is demand elasticities with regards to price and with regards to product quality (including user-friendliness).

However, product innovation (unlike process innovation) is likely to create new jobs that were never imagined. Who in 1914 would have suspected that their great-grandchildren would be video game designers or cybersecurity specialists or GPS programmers or veterinary psychiatrists?

The dynamic is likely to be that machines pick up more and more routine jobs (including mental ones) that humans used to do. At the same time, new tasks and functions will be preserved and created that only humans can perform because they require instinct, intuition, human contact, tacit knowledge, fingerspitzengefühl, or some kind of je ne sais quoi that cannot be mechanized.

Bob Doyle, director of communications for the Association for Advancing Automation:

I would argue that the question should be phrased as the following: “Who is actually going to thrive because of automation?” And the answer is everyone who embraces automation.

Automation is the competitive advantage used by companies around the world, and for good reason. Companies automate heavy-lifting, repetitive, low-value processes in order to achieve higher output and product quality so that they can be more competitive in global markets.

That gives them the resources to innovate, to improve business processes, and to continue to meet consumer demands. That lets those companies continue to hire human workers for the jobs they’re best-suited for: insight-driven, decision-based, and creative processes. You can say that another word for “automation” is “progress.”

The inability to compete is the real threat to jobs, not automation.

Between 2010 and 2016, there were almost 137,000 robots deployed in manufacturing facilities in the U.S. During that time, manufacturing jobs increased by 894,000 (U.S. Bureau of Labor Statistics) and the unemployment rate declined by 5.1 percentage points.

These companies (along with their employees) are competing and thriving today because of automation. We should remember that technological advances have always changed the nature of jobs. We believe this time is no different. We must be sure that we’re preparing the workforce to fill the jobs that are being created, especially in advanced manufacturing. The future of automation is bright!

Chatbot: The Therapist in Your Pocket


AI chatbots offer a modern twist to mental health services, using texting to digitally counsel today’s smartphone generation.

Artificial intelligence (AI)-powered chatbots provide a new form of mental health support for a tech-savvy generation already comfortable using texting as its dominant form of communication.

As the demand for mental health services grows nationwide, there’s a shortage of available psychiatric professionals, according to the National Institute of Mental Health. And college campuses are seeing unprecedented rates of anxiety and depression.

Chatbots designed to spot indicators of mental health distress may provide emotional support when traditional therapy is out of reach.

While these chatbot counselors don’t replace the critical human touch essential during a personal crisis, AI and machine learning can enable the mental health community to reach more people and ensure consistent follow up, according to Michiel Rauws, co-founder and CEO of the mental health tech startup X2AI.

“There are millions of people globally who struggle with anxiety and depression, and simply not enough psychologists to take care of everyone,” Rauws said.


Chatbots may offer greater access to mental health services for today’s tech-savvy generation.

Based on his own experience with depression and work with immigrants affected by the war in Syria, he realized that much of cognitive behavioral therapy (CBT) is about coaching people to reframe how they think of themselves and their lives. He developed Tess, an AI chatbot that helps psychologists monitor patients, and remotely deliver personalized psychotherapy and mental health coaching.

“I had learned the psychological techniques and innovations, and realized that I might help others,” he said.

Now 27, Rauws says his generation often feels more comfortable chatting about sensitive issues via text rather than in person at a therapist’s office.

Available through text, web browsers and Facebook Messenger, Tess is offered through healthcare organizations to provide care whenever and wherever a patient needs it.

In addition to helping patients work through their issues, the AI chatbot also recognizes signals that indicate an acute crisis, such as suicidal thoughts. It alerts a human therapist when emergency intervention is essential.
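In code, that escalation logic could be as simple as the sketch below. The phrase list and alert function are hypothetical stand-ins; a production system such as Tess relies on trained models rather than a hard-coded list.

```python
# Toy escalation logic: if a message contains crisis signals, alert a human
# clinician instead of letting the bot reply on its own.
CRISIS_PHRASES = ["want to die", "kill myself", "end it all", "hurt myself"]

def notify_on_call_therapist(message: str) -> None:
    print(f"[ALERT] escalating to human clinician: {message!r}")

def handle_message(message: str) -> str:
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        notify_on_call_therapist(message)
        return "I'm connecting you with a person right now. You are not alone."
    return "Thanks for sharing. What's going on in your world right now?"

print(handle_message("I've been stressed about exams"))
print(handle_message("Sometimes I just want to end it all"))
```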

The Digital Doctor is In

Several companies are developing mental health-related chatbots for direct consumer use, including the Woebot, a digital mental health coach that uses CBT to help users deal with anxiety or depression.

“Woebot is a robot you can tell anything to,” Woebot CEO Alison Darcy told Wired. “It’s not an AI that’s going to tell you stuff you don’t know about yourself by detecting some magic you’re not even aware of.”

Using Facebook Messenger, Woebot uses talk therapy prompts such as “What’s going on in your world right now?” This helps people talk about their emotional responses to life events, and identify the traps that cause stress and depression.

There’s no Freudian psychoanalysis of childhood wounds, just solutions for changing behavior. Woebot notes CBT is based on the idea that it’s not events themselves that affect people, it’s how they think about those events. What people think is often revealed in what they say.

This is just the beginning of a new era of tech-enabled mental health care. Machine learning has become so sophisticated that it can read between the lines of conversations and look for warning signs, according to researchers at IBM.

IBM used cognitive systems and machine learning to analyze written transcripts and audio recordings from psychiatric interviews to identify patterns that can indicate, and maybe even predict, mental illness.

It takes only 300 words to help clinicians predict the probability of psychosis, according to Guillermo Cecchi, a neuroscientist and researcher at IBM Research.


AI technology can help in identifying patterns of thinking.

In five years, the IBM team predicts that advanced analytics, machine learning and computational biology will provide a real-time overview of a patient’s mental health.

Bringing Care to Rural Areas

These CBT technologies could meet a growing need for mental health care among the younger generation. Rates of depression and anxiety among young people are rising, according to the American College Health Association (ACHA).

Within the past year, 50 percent of college students reported feeling that things were hopeless, 58 percent felt overwhelming anxiety and 37 percent felt so depressed it was difficult to function. While universities encourage students to seek help, the social stigma still associated with mental illness can keep them from looking for a traditional therapist, according to ACHA research.

Chatbots may also help address the shortage of mental health services in rural areas where patients drive long distances to see a therapist face-to-face, said Gloria Zaionz, tech guru at the Innovation Learning Network, a think tank in Silicon Valley that studies how technology can improve healthcare.

More than 106 million people live in areas that are federally designated as having a shortage of mental health care professionals, according to the Kaiser Family Foundation.

“Mental health professionals are often limited in their capacity to provide treatment, and there are other barriers like wait times, cost and social stigma that can prevent people from getting the support they need,” Rauws said.

Companies are careful to note that AI chatbots are not intended to replace in-person treatment, but rather to expand limited access to mental health services. The tech provides more ways for patients to check in between visits and receive consistent follow-up care.

Data on the effectiveness of AI therapy is limited, but early results look encouraging, according to Rauws at X2AI. He said a trial of Tess across several U.S. universities showed a decrease in the standard depression scale and anxiety scale scores. A pilot study of Woebot also reported reduced levels of depression and anxiety.

Through a simple text conversation, AI may help in overcoming the stigma of seeking mental health care, ultimately expanding the reach of services. And a seemingly nonjudgmental chatbot may encourage people to honestly answer the question, “Are you OK?”

“There’s something about the screen that makes people feel a little bit more anonymous,” said Zaionz, “so they open up more.”

HOW TO DESIGN A DROID-OPTIMIZED HOME


If you want robots at your beck and call someday, start thinking about robo-fitting your digs now.

ROBOTS CAN WALK, talk, run a hotel … and are entirely stumped by a doorknob. Or a mailbox. Or a dirty bathtub—zzzzt, dead. Sure, the SpotMini, a doglike domestic helper from Boston Dynamics, can climb stairs, but it struggles to reliably hand over a can of soda. That’s why some roboticists think the field needs to flip its perspective. “There are two approaches to building robots,” says Maya Cakmak, a researcher at the University of Washington. “Make the robot more humanlike to handle the environment, or design the environment to make it a better fit for the robot.” Cakmak pursues the latter, and to do that, she studies so-called universal design—the ways in which buildings and products are constructed for older people or those with disabilities. Robot can’t handle the twisting staircase? Put in a ramp. As for that pesky doorknob? Make entryways motion-activated. If you want droids at your beck and call someday, start thinking about robo-fitting your digs now.

1. Wide-Open Floor Plan
Any serious sans-human housekeeping needs a wheeled robotic butler with arms, Cakmak says. That means fewer steps, plus hallways wide enough for U-turns. Oh, and hardwood floors. Thick carpeting slows a bot’s roll.

2. Visual Waypoints
Factory robots work so fast in part because their world is highly structured—conveyor belt here, truck over there. So for your robo-home, create landmarks that anchor the bots in space—a prominent light fixture, say, that tells them, “You’re in the dining room.” (RFID tags will help bots locate smaller objects, like cleaning supplies.)

3. Right Angles
Imagine holding a ball between two textbooks. Because each surface touches the sphere at only a single point, it’s easy to lose your grip. Robots have the same problem and do better holding flat, boxy surfaces. Swap out rounded dishware for rectangular coffee mugs and square bowls. And use more plastic—there will be drops.

4. Button-Free Zone
Machines struggle to “see” buttons—to say nothing of pushing them. They’re much happier interfacing digitally with Wi-Fi-enabled (and buttonless) coffee makers, stoves, and dishwashers.

5. Upsized Bathroom
Roomba-type bathroom cleaners can’t navigate spaces behind toilets or the graduated curves around sinks and tubs. And that step at the entrance to the shower (already a hazard to older people) is a barrier. Flatten the room and boxify toilet and tub.

6. Matte Materials
Depth-perception sensors in robots wig out in the face of shiny or transparent objects, meaning your stainless-steel refrigerator and glass tabletops may have to go. Lock away fancy stemware in a humans-only cabinet.

7. Indoor Power Station
Just like architects design a nook for the refrigerator or stove, your robo-home will need space for a power-up station. Wireless recharging when the robot rolls up to the zone will make it more unobtrusive. Maybe right next to your Tesla Powerwall?

8. Doors 2.0
Since robots hate turning spherical knobs, install flat handles. Better yet, buy automatic doors that can be digitally triggered by sensors in the bot—and do the same with dressers and anything else that opens.

9. Trackable Humans
If you’d rather your dinner party not be interrupted, give bots permission to track your location via your phone or fitness wearable (“at table,” “by stove”). They’ll leave you alone through dessert.

10. Like by Like
With droids capable of feeding fresh clothes into a folding machine, there’s no need to monitor wash cycles anymore. Just locate the laundry room near your walk-in closet for maximum efficiency.

11. Gastrobotics
Bots won’t be cooking too many meals from scratch (and you can’t be bothered), so get a smart fridge they can stock with meal kits and a smart oven they can control remotely. Mmm—bot-made meals whenever you want.

12. Raised Garden
Solar-powered horticultural bots need plenty of sunlight and like their plants arranged in easy-to-navigate rows and columns. Put your garden on the roof, with a nearby shed to store your autonomous lawnmower.

Digital Transformation: What and Why?


ML has more than just a learning curve to overcome before it transforms business.

Machine learning (ML) based data analytics is rewriting the rules for how enterprises handle data. Research into machine learning and analytics is already yielding success in turning vast amounts of data—shaped with the help of data scientists—into analytical rules that can spot things human analysis would have missed, whether in pushing forward genome research or in predicting problems with complex machinery.

Now machine learning is beginning to move into the business world. But most organizations haven’t truly grasped how machine learning will change the way they do business—or how it will change the shape of their organizations in the process. Companies are looking to ML to automate processes or to augment humans by assisting them in data-driven tasks. And it’s possible that ML could turn enterprises into vendors—turning lessons learned from their own vast stores of data into algorithms they can license to software and service providers.

But getting there will depend on how machine learning capabilities evolve over the next five years and what implications that evolution has for today’s long-time hiring/recruitment strategies. And nowhere is this more crucial than in unsupervised machine learning, where systems are given vast datasets and told to find the patterns without humans having first figured out what the software needs to look for. With minimal pre-task human efforts needed, the scalability of unsupervised machine learning is much higher.

David Dittman, director of business intelligence and analytics services at Procter & Gamble, explained that the biggest analytics problem he sees today with other large US companies is that “they are becoming enamored by [machine learning and analytics] technology, while not understanding that they have to build the foundation [for it], because it can be hard, expensive and requires vision.” Instead, Dittman said, companies mistakenly believe that machine learning will reveal the vision for them: “‘Can’t I have artificial intelligence just tell me the answer?'”

The problem is that “artificial intelligence” doesn’t really work that way. ML currently falls into two broad categories: supervised and unsupervised. And neither of these works without having a solid data foundation.

Breaking training

Yisong Yue, assistant professor of computing and mathematics at the California Institute of Technology, sees potential in unsupervised machine learning for applications such as diagnosing cancer from radiology images.

Supervised ML requires humans to create sets of training data and validate the results of the training. Speech recognition is a prime example of this, explained Yisong Yue, assistant professor of computing and mathematics at Caltech. “Speech recognition is trained in a highly supervised way,” said Yue. “You start with gigantic data—asking people to say certain sentences.”

But collecting and classifying enough data for supervised training can be challenging, Yue said. “Imagine how expensive that is, to say all these sentences in a range of ways. [Data scientists] are annotating this stuff left and right. That simply isn’t scalable to every task that you want to solve. There is a fundamental limit to supervised ML.”

Unsupervised machine learning reduces that interaction. The data scientist chooses a presumably massive dataset and essentially tells the software to find the patterns within it, all without humans having to first figure out what the software needs to look for. With minimal pre-task human efforts needed, the scalability of unsupervised ML (particularly in terms of the human workload upfront) is much higher. But the term “unsupervised” can be misleading. A data scientist needs to choose the data to be examined.
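The practical difference is easy to see in code. Below is a minimal sketch on made-up data, assuming the widely used scikit-learn library: the supervised model is handed labels a human had to supply, while the clustering algorithm receives only the raw data and must find structure on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Made-up, two-dimensional data drawn from two separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)          # labels a human had to provide

clf = LogisticRegression().fit(X, y)        # supervised: learns from labels
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # unsupervised: no labels

print("supervised prediction:", clf.predict([[4.2, 3.8]]))
print("cluster sizes found without labels:", np.bincount(clusters))
```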

Unsupervised ML software is asked to “find clusters of data that may be interesting, and a human analyzes [those groupings] and decides what to do next,” said Mike Gualtieri, Forrester Research’s vice president and principal analyst for advanced analytics and machine learning. Human analysis is still required to make sense of the groupings of data the software creates.

But the payoffs of unsupervised ML could be much broader. For example, Yue said, unsupervised learning may have applications in medical tasks such as cancer diagnosis. Today, he explained, standard diagnostic efforts involve taking a biopsy and sending it to a lab. The problem is that biopsies—themselves a human-intensive analytics effort—are time-consuming and expensive. And when a doctor and patient need to know right away if it’s cancer, waiting for the biopsy result can be medically hazardous. Today, a radiologist typically will look at the tissue, explained Yue, “and the radiologist makes a prediction—the probability of it containing cancerous tissue.”

With a big enough training data set, this could be an application for supervised machine learning, Yue said. “Suppose we took that dataset—the images of the tissue and those biopsy results—and ran supervised ML analysis.” That would be labor-intensive up front, but it could detect similarities in the images of those that had positive biopsies.

But, Yue asked, what if instead the process was done as an unsupervised learning effort?

“Suppose we had a dataset of images and we had no biopsy results? We can use this to figure out what we can predict with clustering.” Assume that the number of samples was 1,000. The software would group the images and look for all of the similarities and differences, which is basic pattern recognition. “Let’s say it finds 10 such clusters, and suppose I can only afford to run 10 biopsies. We could choose to test just one from each cluster,” Yue said. “This is just step one in a long sequence of steps, of course, as it looks at multiple types of cancer.”
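A rough sketch of that triage idea, again assuming scikit-learn and pretending the images have already been reduced to numeric feature vectors, could look like this: cluster the 1,000 unlabelled samples into 10 groups, then spend the 10-biopsy budget on one representative per cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in features: 1,000 unlabelled "images", each as a 64-value vector.
# A real pipeline would first extract such features from the tissue images.
rng = np.random.default_rng(42)
image_features = rng.random((1000, 64))

kmeans = KMeans(n_clusters=10, n_init=10).fit(image_features)

# For each of the 10 clusters, biopsy the image closest to the cluster centre.
to_biopsy = []
for cluster_id in range(10):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    distances = np.linalg.norm(
        image_features[members] - kmeans.cluster_centers_[cluster_id], axis=1
    )
    to_biopsy.append(members[np.argmin(distances)])

print("image indices selected for the 10 biopsies:", to_biopsy)
```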

Guider versus decider

Unsupervised learning still needs a human being to assign a value to the clusters or patterns of data it finds, so it’s not necessarily ready for totally hands-off tasks. Rather, it is currently better suited to enhance the performance of humans by highlighting patterns of data that might be of interest. But there are places where that may soon change—driven largely by the quality and quantity of data.

“I think, right now, that people are jumping to automation when they should be focused on augmenting their existing decision process,” said Dittman. “Five years from now, we’ll have the proper data assets and then you’ll want more automation and less augmentation. But not yet. Today, there is a lack of usable data for machine learning. It’s not granular enough, not broad enough.”

Even as ML data analytics become more sophisticated, it’s not yet clear how that will change the shape of companies’ IT organizations. Forrester’s Gualtieri anticipates a reduction in the need for data scientists five years from now, in much the same way that Web developers who built webpages from scratch were far more in demand in 1995 than in 2000, once so many webpage functions had been automated and sold as modular scripts. A similar shift is likely in machine learning, he suggested, as software and service providers begin to offer application programming interfaces to commercial machine learning platforms.

Gualtieri foresees a simple change in the enterprise IT build-or-buy model. “Today, you’re going to make a build decision and hire more data scientists,” he explained. “As these APIs enter the market, it will move to ‘buy’ as opposed to ‘build.’” He added that “we are seeing the beginnings of this right now.” A couple of examples are Clarifai—which can search video for a particular moment, such as looking at thousands of wedding videos and learning to recognize the ring ceremony or the “you may kiss the bride” moment—and Affectiva, which tries to determine someone’s mood from an image.

Dittman agrees with Gualtieri that companies will likely create many specialized scripts automating many ML tasks. But he disagrees that this will result in fewer computer science jobs in five years.

“If you look at the number of practicing data scientists, that will sharply increase, but it will increase far slower than the digitization of technology, as [ML] moves into more and more whitespaces,” Dittman explained. “Consider the open source trend and the fact that data-scientist tools are going to start to get easier and easier to use, moving from code generation to code reuse.”

Caltech’s Yue argued that demand for data scientists will continue to rise, as ML successes will beget more ML attempts. And as the technology improves, he explained, more and more units in a business will be able to take advantage of ML, which means far more data scientists will be needed to write those programs in the first place.

From consumer to provider

Part of what may drive a continuing demand for data scientists is the hunger for data to make ML more effective. Gualtieri sees some enterprises—roughly five years from now—also playing the role of vendor. “Boeing may decide to be that provider of domain-specific machine learning and sell [those modules] to suppliers who could then become customers,” he said.

P&G’s Dittman sees both ends of the analytics equation—data and the ML code—being highly sellable, potentially as a new major revenue source for enterprises. “Companies are going to start monetizing their data,” he explained. “The data industry is going to explode. Data is absolutely exploding, but there is a lack of a data strategy. Getting the right data that you need for your business case, that tends to be the challenge.”

But Yue has a different concern. “Five years from now, [ML] will naturally come into conflict with legal issues. We have strong laws about discrimination, protected classes,” he said. “What if you use data algorithms to decide who to make loans to? How do you know that it’s not discriminatory? It’s a question for policy makers.”

Yue offered the example of software finding a correlation between consumers defaulting on their loans and those who have blue eyes. The software could decide to scan every customer’s eye color and use that information to decide whether or not to approve a loan. “If a human made that decision, it would be considered discriminatory,” Yue said.

That legal issue speaks to the core role a data analyst plays in unsupervised ML. The software’s job is to find the links, but it’s ostensibly the human who decides what to do about those links. One way or the other, HR is going to need to recruit a lot more data scientists for quite some time.
