Caution When Using Robotically-Assisted Surgical Devices in Women’s Health including Mastectomy and Other Cancer-Related Surgeries: FDA Safety Communication


  • People with breast cancer or those at high risk for breast cancer who are considering the surgical removal of one or both breasts (mastectomy) using robotically-assisted surgery
  • People considering robotically-assisted surgery for the prevention or treatment of other cancers
  • Health care providers who perform robotically-assisted procedures as part of cancer prevention or treatment
  • Health care providers who advise patients on the need for mastectomy

Medical Specialties

Breast Surgery, Obstetrics and Gynecology, Gynecological Oncology, General Surgery, Surgical Oncology, Endocrine Surgery, Hepatobiliary Surgery, Thoracic Surgery, Urology, Colorectal Surgery, Medical Oncology, Radiation Oncology, Oncology Nurses, Primary Care.


Robotically-assisted surgical devices enable surgeons to perform a variety of surgical procedures through small cuts (incisions) in a patient’s body. This type of surgery may help reduce pain, blood loss, scarring, infection, and recovery time after surgery in comparison to traditional surgical procedures.

Computer and software technology allow a surgeon to precisely control surgical instruments attached to mechanical arms through small incisions while viewing the surgical site in three-dimensional high definition.


The FDA takes women’s health issues very seriously. The FDA is issuing this safety communication because it is important for health care providers and patients to understand that the safety and effectiveness of using robotically-assisted surgical devices in mastectomy procedures or in the prevention or treatment of cancer has not been established. There is limited, preliminary evidence that the use of robotically-assisted surgical devices for treatment or prevention of cancers that primarily (breast) or exclusively (cervical) affect women may be associated with diminished long-term survival. Health care providers and patients should weigh the benefits, risks, and alternatives to robotically-assisted surgical procedures and use this information to make informed treatment decisions.

Summary of Problem and Scope

Since robotically-assisted surgical devices became available in the US, robotically-assisted surgical procedures have been widely adopted because they may allow for quicker recovery and could improve surgical precision. However, the FDA is concerned that health care providers and patients may not be aware that the safety and effectiveness of these devices has not been established for use in mastectomy procedures or the prevention or treatment of cancer. Patients and health care providers should also be aware that the FDA encourages health care providers who use robotically-assisted surgical devices to have specialized training and practice in their use.

Current evidence on use of robotically-assisted surgical devices

The safety and effectiveness of robotically-assisted surgical devices for use in mastectomy procedures or prevention or treatment of cancer has not been established. However, the FDA is aware of scientific literature and media publications describing surgeons and hospital systems that use robotically-assisted surgical devices for mastectomy.

To date, the FDA’s evaluation of robotically-assisted surgical devices has generally focused on determining whether the complication rate at 30 days is clinically comparable to other surgical techniques. To evaluate robotically-assisted surgical devices for use in the prevention or treatment of cancer, including breast cancer, the FDA anticipates these uses would be supported by specific clinical outcomes, such as local cancer recurrence, disease-free survival, or overall survival at time periods much longer than 30 days.

The relative benefits and risks of surgery using robotically-assisted surgical devices compared to conventional surgical approaches in cancer treatment have not been established. The FDA is aware of peer-reviewed literature reporting clinical outcomes for the use of robotically-assisted surgical devices in cancer treatment, including one limited report that compared long-term survival after radical hysterectomy for cervical cancer performed either by open abdominal surgery or by minimally invasive surgery (which included laparoscopic or robotically-assisted surgery). In this report, minimally invasive surgery appeared to be associated with a lower rate of long-term survival than open abdominal surgery; however, other researchers have reported no statistically significant difference in long-term survival when these types of surgical procedures are compared (New England Journal of Medicine, November 2018).
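For context on why endpoints such as disease-free or overall survival require follow-up far beyond 30 days, comparisons of this kind are typically built on estimators such as Kaplan-Meier, which accounts for patients who leave a study early (censoring). A minimal Python sketch, using invented follow-up data rather than figures from any study cited here:

```python
# Minimal Kaplan-Meier survival estimate (illustrative only; data are invented).
# Each subject is (time_in_months, event), where event=1 means the outcome
# occurred (e.g. recurrence or death) and event=0 means the subject was
# censored (lost to follow-up or still event-free at last contact).

def kaplan_meier(subjects):
    """Return [(time, survival_probability)] at each observed event time."""
    subjects = sorted(subjects)
    n_at_risk = len(subjects)
    survival = 1.0
    curve = []
    for time, event in subjects:
        if event:  # outcome observed at this time
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((time, survival))
        n_at_risk -= 1  # subject leaves the risk set either way
    return curve

# Hypothetical follow-up data for one surgical arm (months, event flag).
arm = [(6, 1), (12, 0), (18, 1), (24, 0), (30, 1), (36, 0)]
for t, s in kaplan_meier(arm):
    print(f"month {t}: estimated survival {s:.2f}")
```

Comparing two arms (for example, open versus minimally invasive surgery) would mean computing a curve per arm and testing whether the difference is larger than chance would allow, which is why short-term complication rates alone cannot answer the survival question.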

To date, the FDA has not granted marketing authorization for any robotically-assisted surgical device for use in the United States for the prevention or treatment of cancer, including breast cancer. The labeling for robotically-assisted surgical devices that are legally marketed in the United States includes statements that cancer treatment outcomes using the device have not been evaluated by the FDA.

Recommendations for Patients

Before you have robotically assisted surgery to prevent or treat cancer:

  • Be aware that the safety and effectiveness of using robotically-assisted surgical devices in mastectomy procedures or in the prevention or treatment of cancer has not been established.
  • Discuss the benefits, risks, and alternatives of all available treatment options with your health care provider to make the most informed treatment decisions.
  • Before choosing your surgeon, we recommend asking the following questions:
    • Ask your surgeon about his or her training, experience, and patient outcomes with robotically-assisted surgical device procedures.
    • Ask how many robotically-assisted surgical procedures like yours he or she has performed.
    • Ask your surgeon about possible complications and how often they happen.

If you had treatment with a robotically-assisted surgical device for any cancerous condition and experienced a complication, we encourage you to file a report through MedWatch, the FDA Safety Information and Adverse Event Reporting program.

Recommendations for Health Care Providers

  • Understand that the FDA has not cleared or approved any robotically-assisted surgical device based on cancer-related outcomes such as overall survival, recurrence, and disease-free survival.
  • Be aware that robotically-assisted surgical devices have been evaluated by the FDA and cleared for use in certain types of surgical procedures, but not for mastectomy.
  • The FDA recommends that you take training for the specific robotically-assisted surgical device procedures you perform.
  • Talk to your patients about your experience and training, and the clinical outcomes expected with the use of robotically-assisted surgical devices.
  • Discuss the benefits, risks, and alternatives of all available treatment options with your patients to help them make informed treatment decisions.
  • Be aware that clinical studies conducted in the United States involving a legally marketed device investigating a new intended use are subject to FDA oversight. For further information, please refer to the FDA’s Investigational Device Exemption website.
  • If any of your patients experience adverse effects or complications with a robotically-assisted surgical device, we encourage you to file a report through MedWatch, the FDA Safety Information and Adverse Event Reporting program.

FDA Actions

  • Robotically-assisted surgical devices are novel and complex systems and the FDA reviews each robotically-assisted surgical device system individually, based on the complexity of the technology and its intended use.
  • The FDA is monitoring adverse events described in the literature and reported to the FDA to inform our understanding of the benefits and risks of robotically-assisted surgical devices when used for specific indications.
  • The FDA encourages academic and research institutions, professional societies, robotically-assisted surgical device experts, and manufacturers to establish patient registries to gather data on the use of robotically-assisted surgical devices for all uses, including the prevention and treatment of cancer. Patient registries may help characterize surgeons’ learning curves, assess long-term clinical outcomes, and identify problems early to help enhance patient safety.
  • The FDA will update this communication if significant new information becomes available.

Reporting Problems to the FDA

Prompt reporting of adverse events can help the FDA identify and better understand the risks associated with robotically-assisted surgical devices. If you experience adverse events associated with the use of these devices for treatment of cancerous conditions, we encourage you to file a voluntary report through MedWatch, the FDA Safety Information and Adverse Event Reporting program. Health care personnel employed by facilities that are subject to the FDA’s user facility reporting requirements should follow the reporting procedures established by their facilities.

Sex robot inventor says having baby with his android lover will be ‘extremely simple’

Sergi Santos in relationship with android as well as human wife of 16 years

Sergi Santos with his sex robot creation 'Samantha'


A sex robot creator has claimed that he will soon be able to have a baby with his own robot lover.

Sergi Santos, an electronic engineer and expert in AI, also believes it is just a matter of time before machines are doing human jobs and marrying into human families.

The Spaniard told The Sun he would “love” to have a child with his robotic partner, and that it would be “extremely simple”.

“Using the brain I have already created, I would program it with a genome so he or she could have moral values, plus concepts of beauty, justice and the values that humans have,” he said.

“Then to create a child with this robot it would be extremely simple. I would make an algorithm of what I personally believe about these concepts and then shuffle it with what she thinks and then 3D print it.”

The designer says that having regular intercourse with his robot, called “Samantha”, has improved his sex life with his wife, Ms Kissamitaky.

It is also claimed that his android can create emotional ties, progress through different emotional modes, and make “realistic” orgasm sounds.

Guess Which Single Word Will Convince Other Humans You’re Not a Robot

That’s… not what we expected.


If you were trying to convince another human that you yourself are also human, what would you say? Probably something about emotions, right? That might work – but other humans are more likely to believe your humanity if you talk about bodily functions.

Specifically, the word ‘poop’.

At least, that’s the finding from a study that sought to determine a “minimal Turing test”, narrowing the human-like intelligence assessment down to a single word.

A Turing test – named after mathematician Alan Turing – is a conceptual method for determining whether machines can think like a human. In its simplest form, it involves having a computerised chat with an AI – if a human can’t tell if they’re talking to a computer or a living being, the AI “passes” the test.

In a new paper, cognitive scientists John McCoy and Tomer Ullman of MIT’s Department of Brain and Cognitive Sciences (McCoy is now at the University of Pennsylvania) have described a twist on this classic concept.

They asked 1,089 study participants what single word they would choose for this purpose: not to help distinguish humans from machines, but to try to understand what we humans think makes us human.

The largest proportion – 47 percent – of the participants chose something related to emotions or thinking, what the researchers call “mind perception”.

By far the most popular option was love, clocking in at a rather large 14 percent of all responses. This was followed by compassion at 3.5 percent, human at 3.2 percent and please at 2.7 percent.

Word cloud of the words chosen by participants (McCoy & Ullman/Journal of Experimental Social Psychology)

In all, there were 10 categories, such as food, including words like banana and pizza; non-humans, including words like dog or robot; life and death, including words like pain and alive; and body functions and profanity, which included words like poop and penis.

The next part of the study involved figuring out which of those words would be likely to convince other humans of humanity.

The researchers randomly put pairs of the top words from each of the 10 categories together – such as banana and empathy – and told a new group of 2,405 participants that one of the words was chosen by a machine and the other by a human (even though both words were chosen by humans).

This group’s task was to say which was which. Unsurprisingly, the least successful word was robot. But the most successful – poop – was a surprise.

Comparison of word pairs in the minimal Turing test (McCoy & Ullman/Journal of Experimental Social Psychology)

This could, the researchers said, be because ‘taboo’ words generate an emotional response, rather than simply describing one.

“The high average relative strengths of the words ‘love’, ‘mercy’, and ‘compassion’ is consistent with the importance of the experience dimension when distinguishing the minds of robots and people. However, the taboo category word (‘poop’) has the highest average relative strength, referring to bodily function and evoking an amused emotional response,” they wrote in their paper.
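The “relative strength” measure the researchers describe can be approximated in a few lines: tally, for each word, how often participants judged it to be the human’s choice across all the pairings it appeared in. The trial data below are invented for illustration, not taken from the study:

```python
# Sketch of the pairwise-judgment analysis described above (invented data).
# Each trial pairs two words; the participant picks the one they believe a
# human chose. A word's "relative strength" is its share of wins across
# all pairings it appeared in.

from collections import defaultdict

trials = [  # (word_a, word_b, judged_human) - hypothetical judgments
    ("poop", "banana", "poop"),
    ("poop", "robot", "poop"),
    ("love", "robot", "love"),
    ("banana", "robot", "banana"),
    ("love", "poop", "poop"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for a, b, winner in trials:
    appearances[a] += 1
    appearances[b] += 1
    wins[winner] += 1

strength = {w: wins[w] / appearances[w] for w in appearances}
for word, s in sorted(strength.items(), key=lambda kv: -kv[1]):
    print(f"{word}: {s:.2f}")
```

With real data the counts run over thousands of participants, but the ranking logic is the same: ‘poop’ winning most of its matchups is what puts it at the top.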

“This suggests that highly charged words, such as the colourful profanities appearing in Study 1, might be judged as given by a human over all words used in Study 2.”

Now that this information is on the internet where any old AI with WiFi could get access to it, the study may not really help us tell person from machine; but it does provide some fascinating insight into our self-perception, and what we feel it means to be human.

It’s also, the researchers said, a methodology that could help us explore our perceptions of different kinds of humans – what would we say, for instance, to convince someone else we were a man or a woman, a child or a grandparent, Chinese or Chilean?

“Recall the word that you initially chose to prove that you are human. Perhaps it was a common choice, or perhaps it appeared but one other time, your thoughts in secret affinity with the machinations that produced words such as caterpillar, ethereal, or shenanigans. You may have delighted that your word was judged highly human, or wondered how it would have fared,” the researchers wrote.

“Whatever your word, it rested on the ability to rapidly navigate a web of shared meanings, and to make nuanced predictions about how others would do the same. As much as love and compassion, this is part of what it is to be human.”

Also: poop, apparently.

The research has been published in the Journal of Experimental Social Psychology.

Should we be worried about artificial intelligence?

Not really, but we do need to think carefully about how to harness, and regulate, machine intelligence.

By now, most of us are used to the idea of rapid, even accelerating, technological change, particularly where information technologies are concerned. Indeed, as consumers, we helped the process along considerably. We love the convenience of mobile phones, and the lure of social-media platforms such as Facebook, even if, as we access these services, we find that bits and pieces of our digital selves become strewn all over the internet.

More and more tasks are being automated. Computers (under human supervision) already fly planes and sail ships. They are rapidly learning how to drive cars. Automated factories make many of our consumer goods. If you enter (or return to) Australia with an eligible e-passport, a computer will scan your face, compare it with your passport photo and, if the two match up, let you in. The “internet of things” beckons; there seems to be an “app” for everything. We are invited to make our homes smarter and our lives more convenient by using programs that interface with our home-based systems and appliances to switch the lights on and off, defrost the fridge and vacuum the carpet.

Robots taking over more intimate jobs

With the demise of the local car industry and the decline of manufacturing, the services sector is expected to pick up the slack for job seekers. But robots are taking over certain roles once deemed human-only.

Clever though they are, these programs represent more-or-less familiar applications of computer-based processing power. With artificial intelligence, though, computers are poised to conquer skills that we like to think of as uniquely human: the ability to extract patterns and solve problems by analysing data, to plan and undertake tasks, to learn from our own experience and that of others, and to deploy complex forms of reasoning.

The quest for AI has engaged computer scientists for decades. Until very recently, though, AI’s initial promise had failed to materialise. The recent revival of the field came as a result of breakthrough advances in machine intelligence and, specifically, machine learning. It was found that, by using neural networks (interlinked processing points) to implement mathematically specified procedures or algorithms, machines could, through many iterations, progressively improve on their performance – in other words, they could learn. Machine intelligence in general and machine learning in particular are now the fastest-growing components of AI.
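The idea of a machine “progressively improving through many iterations” can be shown with a toy example. The sketch below is a deliberately minimal, hypothetical illustration (a one-parameter model tuned by gradient descent, not a neural network or any specific system mentioned here):

```python
# Toy illustration of machine learning: a one-parameter model improves its
# fit to data over many iterations. Each step nudges the parameter in the
# direction that reduces the average squared error. (Invented data.)

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0    # the model: y_hat = w * x, starting from an uninformed guess
lr = 0.05  # learning rate: how far each iteration moves the parameter

for step in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the "learning": adjust w to shrink the error

print(f"learned w = {w:.2f}")  # converges near 2.0 for this data
```

Real neural networks apply the same loop to millions of parameters at once, which is why advances in data and computing power mattered as much as the algorithms themselves.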

The achievements have been impressive. It is now 20 years since IBM’s Deep Blue program, using traditional computational approaches, beat Garry Kasparov, the world’s best chess player. With machine-learning techniques, computers have conquered even more complex games such as Go, a strategy-based game with an enormous range of possible moves. In 2016, Google’s AlphaGo program beat Lee Sedol, the world’s best Go player, in a four-game match.

Allan Dafoe, of Oxford University’s Future of Humanity Institute, says AI is already at the point where it can transform almost every industry, from agriculture to health and medicine, from energy systems to security and the military. With sufficient data, computing power and an appropriate algorithm, machines can be used to come up with solutions that are not only commercially useful but, in some cases, novel and even innovative.

Should we be worried? Commentators as diverse as the late Stephen Hawking and development economist Muhammad Yunus have issued dire warnings about machine intelligence. Unless we learn how to control AI, they argue, we risk finding ourselves replaced by machines far more intelligent than we are. The fear is that not only will humans be redundant in this brave new world, but the machines will find us completely useless and eliminate us.

The University of Canberra’s robot Ardie teaches tai chi to primary school pupils.

If these fears are realistic, then governments clearly need to impose some sort of ethical and values-based framework around this work. But are our regulatory and governance techniques up to the task? When, in Australia, we have struggled to regulate our financial services industry, how on earth will governments anywhere manage a field as rapidly changing and complex as machine intelligence?

Governments often seem to play catch-up when it comes to new technologies. Privacy legislation is enormously difficult to enforce when technologies effortlessly span national boundaries. It is difficult for legislators even to know what is going on in relation to new applications developed inside large companies such as Facebook. On the other hand, governments are hardly IT ingenues. The public sector provided the demand-pull that underwrote the success of many high-tech firms. The US government, in particular, has facilitated the growth of many companies in cybersecurity and other fields.

Governments have been in the information business for a very long time. As William the Conqueror knew when he ordered his Domesday Book to be compiled in 1085, you can’t tax people successfully unless you know something about them. Spending of tax-generated funds is impossible without good IT. In Australia, governments have developed and successfully managed very large databases in health and human services.

The governance of all this data is subject to privacy considerations, sometimes even at the expense of information-sharing between agencies. The evidence we have is that, while some people worry a lot about privacy, most of us are prepared to trust government with our information. In 2016, the Australian Bureau of Statistics announced that, for the first time, it would retain the names and addresses it collected during the course of the 2016 population census. It was widely expected (at least by the media) that many citizens would withhold their names and addresses when they returned their forms. In the end, very few did.

But these are government agencies operating outside the security field. The so-called “deep state” holds information about citizens that could readily be misused. Moreover, private-sector profit is driving much of the current AI surge (although, in many cases, it is the thrill of new knowledge and understanding, too). We must assume that criminals are working out ways to exploit these possibilities, too.

If we want values such as equity, transparency, privacy and safety to govern what happens, old-fashioned regulation will not do the job. We need the developers of these technologies to co-produce the values we require, which implies some sort of effective partnership between the state and the private sector.

Could policy development be the basis for this kind of partnership? At the moment, machine intelligence works best on problems for which relevant data is available, and the objective is relatively easy to specify. As it develops, and particularly if governments are prepared to share their own data sets, machine intelligence could become important in addressing problems such as climate change, where we have data and an overall objective, but not much idea as to how to get there.

Machine intelligence might even help with problems where objectives are much harder to specify. What, for example, does good urban planning look like? We can crunch data from many different cities, and come up with an answer that could, in theory, go well beyond even the most advanced human-based modelling. When we don’t know what we don’t know, machines could be very useful indeed. Nor do we know, until we try, how useful the vast troves of information held by governments might be.

Perhaps, too, the jobs threat is not as extreme as we fear. Experience shows that humans are very good at finding things to do. And there might not be as many existing jobs at risk as we suppose. I am convinced, for example, that no robot could ever replace road workers – just think of the fantastical patterns of dug-up gravel and dirt they produce, the machines artfully arranged by the roadside or being driven, very slowly, up and down, even when all the signs are there, and there is absolutely no one around. How do we get a robot, even one capable of learning by itself, to do all that?


Cometh the cyborg: Improved integration of living muscles into robots

Researchers have developed a novel method of growing whole muscles from hydrogel sheets impregnated with myoblasts. They then incorporated these muscles as antagonistic pairs into a biohybrid robot, which successfully performed manipulations of objects. This approach overcame earlier limitations of a short functional life of the muscles and their ability to exert only a weak force, paving the way for more advanced biohybrid robots.

Object manipulations performed by the biohybrid robots.

The new field of biohybrid robotics involves the use of living tissue within robots, rather than just metal and plastic. Muscle is one potential key component of such robots, providing the driving force for movement and function. However, in efforts to integrate living muscle into these machines, there have been problems with the force these muscles can exert and the amount of time before they start to shrink and lose their function.

Now, in a study reported in the journal Science Robotics, researchers at The University of Tokyo Institute of Industrial Science have overcome these problems by developing a new method that progresses from individual muscle precursor cells, to muscle-cell-filled sheets, and then to fully functioning skeletal muscle tissues. They incorporated these muscles into a biohybrid robot as antagonistic pairs mimicking those in the body to achieve remarkable robot movement and continued muscle function for over a week.

The team first constructed a robot skeleton on which to install the pair of functioning muscles. This included a rotatable joint, anchors where the muscles could attach, and electrodes to provide the stimulus to induce muscle contraction. For the living muscle part of the robot, rather than extract and use a muscle that had fully formed in the body, the team built one from scratch. For this, they used hydrogel sheets containing muscle precursor cells called myoblasts, holes to attach these sheets to the robot skeleton anchors, and stripes to encourage the muscle fibers to form in an aligned manner.

“Once we had built the muscles, we successfully used them as antagonistic pairs in the robot, with one contracting and the other expanding, just like in the body,” study corresponding author Shoji Takeuchi says. “The fact that they were exerting opposing forces on each other stopped them shrinking and deteriorating, like in previous studies.”

The team also tested the robots in different applications, including having one pick up and place a ring, and having two robots work in unison to pick up a square frame. The results showed that the robots could perform these tasks well, with activation of the muscles leading to flexing of a finger-like protuberance at the end of the robot by around 90°.

“Our findings show that, using this antagonistic arrangement of muscles, these robots can mimic the actions of a human finger,” lead author Yuya Morimoto says. “If we can combine more of these muscles into a single device, we should be able to reproduce the complex muscular interplay that allows hands, arms, and other parts of the body to function.”


Healthcare AI & Machine Learning: 4.5 Things To Know

Has Alexa changed your life yet? Vacuum the floor, start the dishwasher, feed the cat, clean the litter box, order more paper towels and lower the thermostat. That’s just a few items from the to-do list I handed over to Artificial Intelligence today – and all without one manual click or keystroke. Whew, no more straining my index finger to push all those buttons.

While all of this has a certain “cool” factor, it’s completely transforming our lives. Gone are the days when Artificial Intelligence (AI) meant a robot such as C-3PO or Rosie (the robot maid from the Jetsons). AI is no longer a sci-fi futuristic hope; it’s a set of algorithms and technologies known as Machine Learning (ML).

We’re quickly moving toward this technology powering many tasks in everyday life. But what about healthcare? How might it impact our industry and even our day-to-day jobs? I’m sharing 4.5 things you should know about Artificial Intelligence and Machine Learning in the healthcare industry and how it will impact our future.

1.   There’s a difference between Machine Learning and Artificial Intelligence.

Jeff Bezos, visionary founder and leader of Amazon, made this declaration last May: “Machine Learning and Artificial Intelligence will be used to empower and improve every business, every government organization, every philanthropy – basically there’s no institution in the world that cannot be improved with Machine Learning and Artificial Intelligence.”

Machine Learning and Artificial Intelligence have emerged as key buzzwords and are often used interchangeably. So, what’s the difference?

  • Machine Learning uses Artificial Intelligence to process large amounts of data and allows the machine to learn for itself. You’ve probably benefitted from Machine Learning in your inbox. Spam filters are continuously learning from words used in an email, where it’s sent from, who sent it, etc. In healthcare, a practical application is in the imaging industry, where machines learn how to read CT scans and MRIs allowing providers to quickly diagnose and optimize treatment options.
  • Artificial Intelligence is the ability of machines to behave in a way that we would consider “smart.” The capability of a machine to imitate intelligent human behavior. Artificial Intelligence is the ability to be abstract, creative, deductive – to learn and apply learnings. Siri and Alexa are good examples of what you might be using today. In healthcare, an artificial virtual assistant might have conversations with patients and providers about lab results and clinical next steps.
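The spam-filter example above can be sketched with a toy naive-Bayes classifier, a common approach to learning from the words in a message. The training messages and scores below are invented for illustration and are far smaller than anything a real filter would use:

```python
# Toy naive-Bayes spam filter, sketching how a filter "learns from words
# used in an email". Training data is invented and tiny; a real filter
# trains on millions of labeled messages.

from collections import Counter
import math

spam = ["win money now", "free money offer"]
ham = ["meeting notes attached", "lunch tomorrow"]

def word_counts(messages):
    c = Counter()
    for m in messages:
        c.update(m.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def score(message, counts, n_msgs):
    # Log-probability of the message under one class, with add-one
    # smoothing so unseen words don't zero out the whole score.
    total = sum(counts.values())
    s = math.log(n_msgs / (len(spam) + len(ham)))  # class prior
    for w in message.split():
        s += math.log((counts[w] + 1) / (total + len(vocab)))
    return s

def is_spam(message):
    return score(message, spam_counts, len(spam)) > score(message, ham_counts, len(ham))

print(is_spam("free money"))       # -> True  (looks like spam)
print(is_spam("notes for lunch"))  # -> False (looks like ham)
```

The imaging example works the same way in spirit: a model scores a scan under “tumor” and “no tumor” hypotheses learned from labeled examples, just with far richer features than word counts.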

2.   The healthcare industry is primed for disruptors.

It’s no secret that interacting with the healthcare system is complex and frustrating. As consumers are paying more out of pocket, they’re expecting and demanding innovation to simplify their lives, get educated, and save money. At the same time, we’re starting to get a taste of what Machine Learning and AI can do for our daily lives. These technologies WILL dramatically change the way we work in healthcare. Don’t take my word for it; just review a few of the headlines from the past year.

  • A conversational robot explains lab results. Learn more.
  • A healthcare virtual assistant enables conversational dialogues and pre-built capabilities that automate clinical workflows. Learn more.
  • Google powers up Artificial Intelligence, Machine Learning accelerator for healthcare. Learn more.
  • New AI technology uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health. Learn more.
  • A machine learning algorithm helps identify cancerous tumors on mammograms. Learn more.

Soon, will a machine learning algorithm serve up a choice of 3 benefit plans that are best for my situation? Maybe Siri or Alexa will even have a conversation with me about it.

3.   Be smart about customer data

Being in healthcare and seeing the recent breaches, most of us have learned to be very careful about our security policies and customer data. As the use of Machine Learning grows in healthcare, continue to obsess over the privacy of your customer data. There are two reasons for this…

First, data is what makes Machine Learning and AI work. Without data, there’s nothing to mine and that means there’s no info from which to learn. Due to the sheer amount of data that Machine Learning technology collects and consumes, privacy will be more important than ever. Also, there’s potential for a lot more personally identifiable data to be collected. It will be imperative that companies pay attention to masking that data so the specific user is not revealed. These steps are critical to ensure you and your customer are protected as laws continue to catch up with this emerging technology.
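As one illustration of the masking idea, a record’s identifying fields can be replaced with keyed hashes (pseudonymization) before the data reaches analysts, while clinical fields stay usable. The field names and key below are hypothetical, and a real deployment must follow applicable law (e.g. HIPAA de-identification rules) rather than this sketch:

```python
# Sketch of pseudonymizing a record before analysis: identifying fields are
# replaced with a keyed hash so downstream users never see the raw identity,
# yet the same person still maps to the same token across records.
# Field names and the key are illustrative, not a standard.

import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # stored outside the analytics system

def pseudonymize(record, pii_fields=("name", "email", "member_id")):
    masked = dict(record)
    for field in pii_fields:
        if field in masked:
            digest = hmac.new(SECRET_KEY, str(masked[field]).encode(), hashlib.sha256)
            masked[field] = digest.hexdigest()[:12]  # short stable token
    return masked

claim = {"name": "Jane Doe", "email": "jane@example.com", "diagnosis_code": "J06.9"}
print(pseudonymize(claim))  # identity fields hashed, clinical fields intact
```

Because the hash is keyed, someone holding only the masked data cannot reverse the tokens by guessing names, which is the property plain unsalted hashing lacks.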

Second, you’re already collecting A LOT of data on your clients of which you may not be taking full advantage. As more data is collected, it’s important to keep it organized so that you can use it effectively to gain insights and help your clients. Data architecture is a set of models, policies, rules or standards that govern which data is collected, and how it is stored, arranged, integrated, and put to use in data systems and in organizations. If you’re not already thinking about this, it might be a good idea to consult with a developer to get help.

4.  Recognize WHERE the customer is today and plan for tomorrow.

What’s the consumer appetite for these types of technologies? This year, 56.3 million personal digital assistants will ship to homes, including Alexa, Google Assistant and Apple’s new HomePod. That’s up sharply from the 33 million sold in 2017. Consumers are quickly changing where they shop, organize and get information. It’s important that we move quickly to offer the experience customers want and need.

At first, I was happy to ask my Alexa to play music and set timers; now it’s the hub for my home. Alexa shops for my groceries, provides news stories, organizes my day, and much more. Plus, the cat really appreciates when Alexa feeds her – I’m totally serious: my cat feeder is hooked up to Alexa, which releases a ration of kibble at pre-set feeding times.

There are so many applications for healthcare. Here are a few that are happening or just around the corner…

“Alexa, what’s left on my health plan deductible?”

“Alexa, find a provider in my network within 5 miles.”

“Alexa, search for Protonix pricing at my local pharmacies.”

“Alexa, how do I know if I have an ear infection?”

“Alexa, what’s the difference between an HSA, FSA and HRA?”

“Alexa, find a 5-star rated health insurance broker within 20 miles.”
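Under the hood, a voice-assistant skill like the ones imagined above is mostly intent routing: the platform recognizes which question was asked and hands it to a handler. The sketch below shows that pattern in plain Python; the intent names, handler logic, and the hard-coded deductible value are hypothetical stand-ins for the example, not the real Alexa Skills Kit API.

```python
# Hypothetical handlers -- a real skill would call the health plan's APIs.
def deductible_handler(slots):
    remaining = 1250.00  # placeholder; would come from the carrier's system
    return f"You have ${remaining:.2f} left on your deductible."

def provider_handler(slots):
    return f"Here are in-network providers within {slots['radius']} miles."

# Map each recognized intent to the function that answers it.
INTENT_HANDLERS = {
    "GetDeductibleIntent": deductible_handler,
    "FindProviderIntent": provider_handler,
}

def route(intent_name, slots=None):
    """Dispatch a recognized intent to its handler, with a safe fallback."""
    handler = INTENT_HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(slots or {})

print(route("GetDeductibleIntent"))
print(route("FindProviderIntent", {"radius": 5}))
```

The dictionary-dispatch shape is what makes skills easy to grow: adding a new question ("What's the difference between an HSA, FSA and HRA?") means adding one handler and one mapping entry.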


4.5       The customer experience must be TRULY helpful

Making “cool” innovations in Artificial Intelligence or Machine Learning won’t work unless they’re coupled with a relentless pursuit to serve the customer. These endeavors are expensive, so spend your IT budget wisely, ensuring new innovation creates true value and is easy for the end user.

Are you ready to learn more? I will be doing a breakout session with Reid Rasmussen, CEO of freshbenies at BenefitsPro on 4/18 at 2:30pm.


Army of Nanorobots Successfully Strangles Cancerous Tumors

Nearly 1.7 million new cases of cancer are detected in the United States each year, and each year cancer claims almost 600,000 lives in the U.S. alone, making it the second-leading cause of death nationally. Treatment is sometimes worse than the illness, as invasive surgeries can be traumatic, and chemotherapy can cause off-target effects that wreak havoc on the entire body. But a new technique described in Nature Biotechnology, which uses nanorobots — literally microscopic robots — to specifically target tumors and cut off their blood supply has the potential to change treatment forever.

In the paper, published in February, an international team of scientists demonstrated the effectiveness of using DNA nanorobots to attack tumors in mice and pigs with cancer. These nanometer-sized robots are made of DNA that unfolds itself at precisely the right time and place to deliver a drug to only the exact target in the body. The DNA, folded up like an origami package, held molecules of thrombin, an enzyme that makes blood clot.

When this DNA origami nanorobot detects blood vessels associated with tumors, it opens up to deliver thrombin, a clotting factor that chokes off the blood supply to the tumor.

To test whether this novel drug delivery system works, the team of scientists from Arizona State University and the National Center for Nanoscience and Technology of the Chinese Academy of Sciences injected the nanorobots into the bloodstreams of mice with tumors. They found that the treatment effectively targeted tumors, stopping their growth and even initiating tumor death.

Stopping tumor growth isn’t enough to prove the drug works, though, as it must also prove itself safe. So, the researchers also injected the nanorobots into the bloodstreams of Bama miniature pigs, which have been shown to be good models for testing preliminary drug safety for humans. One major concern with nanorobots is that they could get into the brain and cause strokes, but this did not happen with the test subject animals.

When they detect proteins associated with cancer cells, the nanorobots open up to deliver thrombin to the blood vessels that feed the tumor.

The precision of the nanorobots, which is what makes their potential for safe cancer treatment so great, is due to their meticulously crafted structure. The drug-holding “package” is made up of DNA sheets, measuring 60 by 90 nanometers, that wrap around thrombin molecules. On the outside of the folded sheets are molecules that zero in on nucleolin, a protein that’s present in the lining of blood vessels associated with growing tumors.

These molecules, called aptamers, both guide the nanorobot to the proper delivery site and open the DNA sheet to expose the thrombin once it arrives. In theory, when the thrombin is released, it clots the blood entering the tumor, thereby starving it of the oxygen it needs to grow. This method, which essentially strangles the tumor, is reminiscent of the class of cancer drugs known as angiogenesis inhibitors, which help inhibit the growth of blood vessels that feed tumors.

These nanorobots show great promise, but they aren’t ready for humans yet. To get there, the researchers are seeking out clinical partners to further develop this treatment pathway. Still, the fact that it seems to work in mice and pigs makes it likely that nanorobots like these will be available as cancer treatments within our lifetimes.

Abstract: Nanoscale robots have potential as intelligent drug delivery systems that respond to molecular triggers. Using DNA origami we constructed an autonomous DNA robot programmed to transport payloads and present them specifically in tumors. Our nanorobot is functionalized on the outside with a DNA aptamer that binds nucleolin, a protein specifically expressed on tumor-associated endothelial cells, and the blood coagulation protease thrombin within its inner cavity. The nucleolin-targeting aptamer serves both as a targeting domain and as a molecular trigger for the mechanical opening of the DNA nanorobot. The thrombin inside is thus exposed and activates coagulation at the tumor site. Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth. The nanorobot proved safe and immunologically inert in mice and Bama miniature pigs. Our data show that DNA nanorobots represent a promising strategy for precise drug delivery in cancer therapy.

Experts Answer: Who Is Actually Going to Suffer From Automation?

Educated Guesses

Thanks to rapid advances in the fields of artificial intelligence (AI) and robotics, smart machines that would have once been relegated to works of science-fiction are now a part of our reality.

Today, we have AIs that can pick apples, manage hotels, and diagnose cancer. Researchers at MIT have even developed an algorithm that can predict the immediate future. If only they could train it to predict how automation is going to impact the human workforce…

Will Automation Steal My Job?

Currently, opinions on the subject are as varied as the types of AIs in development. In January, MIT Technology Review compiled a list of 19 studies focused on automation and the future of work. No two reached identical conclusions.

In 2017, research and advisory company Gartner released a study predicting automation would destroy 1.8 million jobs worldwide by 2020. That same year, another research and advisory company, Forrester, released their own report on automation and the workforce. According to their calculations, the U.S. alone will lose 13.8 million jobs to automation in 2018.

The numbers vary even more wildly the farther out you look. By 2030, futurist Thomas Frey predicts humans will lose 2 billion jobs to robots, while researchers from consulting firm McKinsey predict a comparatively paltry 400 to 800 million in losses.

Beyond the numbers, experts also disagree on which professions will become automated and which parts of the world will bear the brunt of the job losses.

Are teachers and writers safe or should they start thinking about a career change? What about lawyers and doctors? Will the U.S. be the nation to lose the highest percentage of jobs, as PricewaterhouseCoopers predicts? Or will Japan be hit the hardest, as McKinsey’s report concludes?

In an attempt to get to the bottom of the automation mystery, Futurism asked several experts to tell us who they believe will be most likely to suffer as a result of automation. Here’s what they had to say.

Edward D. Hess, professor of business administration and Batten Executive-in-Residence at the University of Virginia:

Automation is going to dramatically impact service and professional workers. To find work, one must be good at doing what the technology won’t be able to do well.

For the near term, those skills are: (1) higher order thinking (critical, innovative, imaginative) that is not linear; (2) the delivery of customized services that require high emotional intelligence to other humans; and (3) trade skills that call for real-time iterative problem diagnosis and solving and/or complex human dexterity.

Jobs that have a high risk of being automated are jobs that involve repetitive tasks and linear tasks that are easy to code: “if this, then do this.”

High-risk fields are retail, fast food, agriculture, customer service, accounting, marketing, management consulting, investment management, finance, higher education, insurance, and architecture. Specific jobs include security guards, long-haul truck drivers, manual laborers, construction workers, paralegals, CPAs, radiologists, and administrative workers.

Technology is going to continue to advance, and in reality, all of us are going to have to become lifelong learners, constantly upgrading our skills. The most important skills to have will be knowing how to be highly efficient at iterative learning — “unlearning and relearning” — and developing high emotional and social intelligence.

Jobs requiring high emotional engagement in the customization and delivery of services to other human beings will be the safest. Those include psychological counselors, social workers, elementary school teachers, physical therapists, personal trainers, trial lawyers, and estate planners. Other jobs that will be in high demand are in computer and data science.

What will become human beings’ unique skill? Emotional and social intelligence.

Joel Mokyr, an economic historian at Northwestern University and author of A Culture of Growth: Origins of the Modern Economy:

The short answer is people who have boring, routine, repetitive, and physically arduous jobs.

The long answer is that labor-saving process innovation and “classical” productivity increase may make some workers redundant as they are replaced by robots and machines that can do their jobs better and cheaper.

This could get a lot worse if AI also replaces workers who are trained and skilled in medium human-capital-intensity jobs, such as drivers, legal assistants, bank tellers, etc. So far, the evidence for that is very weak, but it could change, depending on what happens to demand and output as prices fall and quality improves. What counts is demand elasticity with regard to price and product quality (including user-friendliness).

However, product innovation (unlike process innovation) is likely to create new jobs that were never imagined. Who in 1914 would have suspected that their great-grandchildren would be video game designers or cybersecurity specialists or GPS programmers or veterinary psychiatrists?

The dynamic is likely to be that machines pick up more and more routine jobs (including mental ones) that humans used to do. At the same time, new tasks and functions will be preserved and created that only humans can perform because they require instinct, intuition, human contact, tacit knowledge, fingerspitzengefühl, or some kind of je ne sais quoi that cannot be mechanized.

Bob Doyle, director of communications for the Association for Advancing Automation:

I would argue that the question should be phrased as the following: “Who is actually going to thrive because of automation?” And the answer is everyone who embraces automation.

Automation is the competitive advantage used by companies around the world, and for good reason. Companies automate heavy-lifting, repetitive, low-value processes in order to achieve higher output and product quality so that they can be more competitive in global markets.

That gives them the resources to innovate, to improve business processes, and to continue to meet consumer demands. That lets those companies continue to hire human workers for the jobs they’re best-suited for: insight-driven, decision-based, and creative processes. You can say that another word for “automation” is “progress.”

The inability to compete is the real threat to jobs, not automation.

Between 2010 and 2016, there were almost 137,000 robots deployed in manufacturing facilities in the U.S. During that time, manufacturing jobs increased by 894,000 (U.S. Bureau of Labor Statistics) and the unemployment rate declined by 5.1 percentage points.

These companies (along with their employees) are competing and thriving today because of automation. We should remember that technological advances have always changed the nature of jobs. We believe this time is no different. We must be sure that we’re preparing the workforce to fill the jobs that are being created, especially in advanced manufacturing. The future of automation is bright!

Chatbot: The Therapist in Your Pocket

AI chatbots offer a modern twist to mental health services, using texting to digitally counsel today’s smartphone generation.

Artificial intelligence (AI)-powered chatbots provide a new form of mental health support for a tech-savvy generation already comfortable using texting as its dominant form of communication.

As the demand for mental health services grows nationwide, there’s a shortage of available psychiatric professionals, according to the National Institute of Mental Health. And college campuses are seeing unprecedented rates of anxiety and depression.

Chatbots designed to spot indicators of mental health distress may provide emotional support when traditional therapy is out of reach.

While these chatbot counselors don’t replace the critical human touch essential during a personal crisis, AI and machine learning can enable the mental health community to reach more people and ensure consistent follow-up, according to Michiel Rauws, co-founder and CEO of the mental health tech startup X2AI.

“There are millions of people globally who struggle with anxiety and depression, and simply not enough psychologists to take care of everyone,” Rauws said.


Chatbots may offer greater access to mental health services for today’s tech-savvy generation.

Based on his own experience with depression and work with immigrants affected by the war in Syria, he realized that much of cognitive behavioral therapy (CBT) is about coaching people to reframe how they think of themselves and their lives. He developed Tess, an AI chatbot that helps psychologists monitor patients, and remotely deliver personalized psychotherapy and mental health coaching.

“I had learned the psychological techniques and innovations, and realized that I might help others,” he said.

Now 27, Rauws says his generation often feels more comfortable chatting about sensitive issues via text rather than in person at a therapist’s office.

Available through text, web browsers and Facebook Messenger, Tess is offered through healthcare organizations to provide care whenever and wherever a patient needs it.

In addition to helping patients work through their issues, the AI chatbot also recognizes signals that indicate an acute crisis, such as suicidal thoughts. It alerts a human therapist when emergency intervention is essential.

The Digital Doctor is In

Several companies are developing mental health-related chatbots for direct consumer use, including the Woebot, a digital mental health coach that uses CBT to help users deal with anxiety or depression.

“Woebot is a robot you can tell anything to,” Woebot CEO Alison Darcy told Wired. “It’s not an AI that’s going to tell you stuff you don’t know about yourself by detecting some magic you’re not even aware of.”

Through Facebook Messenger, Woebot offers talk-therapy prompts such as “What’s going on in your world right now?” These prompts help people talk about their emotional responses to life events and identify the traps that cause stress and depression.

There’s no Freudian psychoanalysis of childhood wounds, just solutions for changing behavior. Woebot notes CBT is based on the idea that it’s not events themselves that affect people, it’s how they think about those events. What people think is often revealed in what they say.

This is just the beginning of a new era of tech-enabled mental health care. Machine learning has become so sophisticated that it can read between the lines of conversations and look for warning signs, according to researchers at IBM.

IBM used cognitive systems and machine learning to analyze written transcripts and audio recordings from psychiatric interviews to identify patterns that can indicate, and maybe even predict, mental illness.

It takes only 300 words to help clinicians predict the probability of psychosis, according to Guillermo Cecchi, a neuroscientist and researcher at IBM Research.
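IBM has not published its model here, but one family of features described in this line of research is semantic coherence: how closely each sentence of a transcript relates to the one before it. The toy sketch below uses simple word overlap as a stand-in for the real semantic measures; the scoring rule and the example transcripts are illustrative assumptions only, not a clinical tool.

```python
def jaccard(a: set, b: set) -> float:
    """Word-overlap similarity between two sets of words (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def coherence_score(transcript: str) -> float:
    """Average similarity between consecutive sentences.

    Lower scores mean the speaker jumps between topics more abruptly --
    one crude proxy for the disorganized speech clinicians listen for.
    """
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    words = [set(s.lower().split()) for s in sentences]
    if len(words) < 2:
        return 1.0  # a single sentence is trivially coherent
    pairs = zip(words, words[1:])
    return sum(jaccard(a, b) for a, b in pairs) / (len(words) - 1)

organized = "I took the bus to work. The bus to work was late. Work was busy."
scattered = "I took the bus. Purple elephants sing. The stock market rose."
print(coherence_score(organized), coherence_score(scattered))
```

Even this crude measure separates the two toy transcripts; the production systems replace word overlap with learned semantic embeddings and combine many such linguistic signals.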


AI technology can help in identifying patterns of thinking.

In five years, the IBM team predicts that advanced analytics, machine learning and computational biology will provide a real-time overview of a patient’s mental health.

Bringing Care to Rural Areas

These CBT technologies could meet a growing need for mental health care among the younger generation. Rates of depression and anxiety among young people are rising, according to the American College Health Association (ACHA).

Within the past year, 50 percent of college students reported feeling that things were hopeless, 58 percent felt overwhelming anxiety and 37 percent felt so depressed it was difficult to function. While universities encourage students to seek help, the social stigma still associated with mental illness can keep them from looking for a traditional therapist, according to ACHA research.

Chatbots may also help address the shortage of mental health services in rural areas where patients drive long distances to see a therapist face-to-face, said Gloria Zaionz, tech guru at the Innovation Learning Network, a think tank in Silicon Valley that studies how technology can improve healthcare.

More than 106 million people live in areas that are federally designated as having a shortage of mental health care professionals, according to the Kaiser Family Foundation.

“Mental health professionals are often limited in their capacity to provide treatment, and there are other barriers like wait times, cost and social stigma that can prevent people from getting the support they need,” Rauws said.

Companies are careful to note that AI chatbots are not intended to replace in-person treatment, but rather to expand limited access to mental health services. The tech provides more ways for patients to check in between visits and receive consistent follow-up care.

Data on the effectiveness of AI therapy is limited, but early results look encouraging, according to Rauws at X2AI. He said a trial of Tess across several U.S. universities showed a decrease in the standard depression scale and anxiety scale scores. A pilot study of Woebot also reported reduced levels of depression and anxiety.

Through a simple text conversation, AI may help in overcoming the stigma of seeking mental health care, ultimately expanding the reach of services. And a seemingly nonjudgmental chatbot may encourage people to honestly answer the question, “Are you OK?”

“There’s something about the screen that makes people feel a little bit more anonymous,” said Zaionz, “so they open up more.”
