Questions for Artificial Intelligence in Health Care

Artificial intelligence (AI) is gaining high visibility in the realm of health care innovation. Broadly defined, AI is a field of computer science that aims to mimic human intelligence with computer systems.1 This mimicry is accomplished through iterative, complex pattern matching, generally at a speed and scale that exceed human capability. Proponents suggest, often enthusiastically, that AI will revolutionize health care for patients and populations. However, key questions must be answered to translate its promise into action.

What Are the Right Tasks for AI in Health Care?

At its core, AI is a tool. Like all tools, it is better deployed for some tasks than for others. In particular, AI is best used when the primary task is identifying clinically useful patterns in large, high-dimensional data sets. Ideal data sets for AI also have accepted criterion standards that allow AI algorithms to “learn” within the data. For example, pathogenic variants in the BRCA1 gene are an established risk factor for breast cancer, and AI algorithms can use them as a “source of truth” criterion when specifying models to predict breast cancer. With appropriate data, AI algorithms can identify subtle and complex associations that are unavailable with traditional analytic approaches, such as multiple small changes on a chest computed tomographic image that collectively indicate pneumonia. Such algorithms can be reliably trained to analyze these complex objects and process the data, images, or both at high speed and scale. Early AI successes have been concentrated in image-intensive specialties, such as radiology, pathology, ophthalmology, and cardiology.2,3
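As a deliberately minimal sketch of this kind of “learning” (the features, labels, and values below are invented for illustration), an algorithm can derive a decision rule directly from cases labeled against a criterion standard, rather than from hand-written clinical logic:

```python
# Minimal illustration of supervised "learning" from a labeled criterion
# standard. All features and labels are invented for illustration only.
def fit_centroids(examples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy "training set": (feature vector, criterion-standard label)
train = [([0.9, 0.8], "disease"), ([0.8, 0.9], "disease"),
         ([0.1, 0.2], "healthy"), ([0.2, 0.1], "healthy")]
model = fit_centroids(train)
print(classify(model, [0.85, 0.75]))  # a case resembling the "disease" group
```

The decision rule here was never written by hand; it emerged from the labeled examples, which is why the quality of the criterion standard bounds the quality of the algorithm.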

However, many core tasks in health care, such as clinical risk prediction, diagnostics, and therapeutics, are more challenging for AI applications. For many clinical syndromes, such as heart failure or delirium, there is a lack of consensus about criterion standards on which to train AI algorithms. In addition, many AI techniques center on data classification rather than a probabilistic analytic approach; this focus may make AI output less suited to clinical questions that require probabilities to support clinical decision making.4 Moreover, AI-identified associations between patient characteristics and treatment outcomes are only correlations, not causative relationships. As such, results from these analyses are not appropriate for direct translation to clinical action, but rather serve as hypothesis generators for clinical trials and other techniques that directly assess cause-and-effect relationships.
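The distinction between classification and probabilistic output can be made concrete with a hypothetical sketch (the model and its coefficients are invented, not a validated clinical tool): the same underlying risk estimate supports graded clinical decisions only if the probability, rather than the hard label, is surfaced.

```python
import math

# Hypothetical risk model: the coefficients are invented for illustration
# and do not come from any validated clinical model.
def risk_probability(age, systolic_bp):
    """Logistic model mapping two features to an event probability."""
    z = -8.0 + 0.05 * age + 0.03 * systolic_bp
    return 1.0 / (1.0 + math.exp(-z))

def hard_label(p, threshold=0.5):
    """A classifier collapses the probability to a yes/no label."""
    return "high risk" if p >= threshold else "low risk"

p = risk_probability(age=60, systolic_bp=140)
# The probability itself supports graded decisions;
# the hard label discards that information.
print(round(p, 2), hard_label(p))  # 0.31 low risk
```

Here an event probability of roughly 31% is collapsed to “low risk” by a 50% threshold; it is the probability, not the label, that a clinician would weigh against the harms and benefits of treatment.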

What Are the Right Data for AI?

AI is most likely to succeed when used with high-quality data sources on which to “learn” and classify data in relation to outcomes. However, most clinical data, whether from electronic health records (EHRs) or medical billing claims, remain ill-defined and largely insufficient for effective exploitation by AI techniques. For example, EHR data on demographics, clinical conditions, and treatment plans are generally of low dimensionality and are recorded in limited, broad categorizations (eg, diabetes) that omit specificity (eg, duration, severity, and pathophysiologic mechanism). A potential approach to improving the dimensionality of clinical data sets could use natural language processing to analyze unstructured data, such as clinician notes. However, many natural language processing techniques are crude and the necessary amount of specificity is often absent from the clinical record.
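A minimal sketch (the note text and patterns below are invented) illustrates both the idea and the crudeness: simple pattern matching can sometimes recover specificity from free text, but it fails as soon as the wording varies or the detail was never recorded.

```python
import re

# Invented clinician note; real notes are far messier and less consistent.
note = "Pt with type 2 diabetes x 12 years, poorly controlled, A1c 9.4."

# Crude pattern matching: brittle, but it illustrates the approach.
duration = re.search(r"diabetes x (\d+) years", note)
a1c = re.search(r"A1c (\d+\.?\d*)", note)
severity = "poorly controlled" if "poorly controlled" in note else "unspecified"

print(duration.group(1) if duration else None)  # "12"
print(a1c.group(1) if a1c else None)            # "9.4"
print(severity)                                  # "poorly controlled"
```

A note reading “longstanding DM2, suboptimal control” would defeat every pattern above, which is the sense in which many natural language processing techniques remain crude.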

Clinical data are also limited by potentially biased sampling. Because EHR data are collected during health care delivery (eg, clinic visits, hospitalizations), these data oversample sicker populations. Similarly, billing data overcapture conditions and treatments that are well-compensated under current payment mechanisms. A potential approach to overcome this issue may involve wearable sensors and other “quantified self” approaches to data collection outside of the health care system. However, many such efforts are also biased because they oversample the healthy, wealthy, and well. These biases can result in AI-generated analyses that produce flawed associations and insights that will likely fail to generalize beyond the population in which they are generated.5
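A toy simulation (all numbers invented) shows the consequence: a model that estimates disease prevalence from an EHR-like sample, which oversamples the sick, badly overestimates risk in the general population.

```python
import random

random.seed(0)

# Invented numbers: true disease prevalence in the general population...
TRUE_PREVALENCE = 0.05
# ...but sick people are far more likely to appear in EHR data.
P_VISIT_SICK, P_VISIT_HEALTHY = 0.9, 0.2

population = [random.random() < TRUE_PREVALENCE for _ in range(100_000)]
ehr_sample = [sick for sick in population
              if random.random() < (P_VISIT_SICK if sick else P_VISIT_HEALTHY)]

learned = sum(ehr_sample) / len(ehr_sample)   # prevalence the model "sees"
actual = sum(population) / len(population)    # prevalence that is real
print(f"learned ~{learned:.2f} vs actual ~{actual:.2f}")
```

Under these invented visit rates the sampled prevalence is roughly four times the true prevalence, so any risk estimate learned from the sample fails to generalize to the population it was meant to describe.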

What Is the Right Evidence Standard for AI?

Innovations in medications and medical devices are required to undergo extensive evaluation, often including randomized clinical trials and postmarketing surveillance, to validate clinical effectiveness and safety. If AI is to directly influence and improve clinical care delivery, then an analogous evidence standard is needed to demonstrate improved outcomes and a lack of unintended consequences. The evidence standard for AI tasks is currently ill-defined but likely should be proportionate to the task at hand. For example, validating the accuracy of AI-enabled imaging applications against current quality standards for traditional imaging is likely sufficient for clinical use. However, as AI applications move to prediction, diagnosis, and treatment, the standard for proof should be significantly higher.1 To this end, the US Food and Drug Administration is actively considering how best to regulate AI-fueled innovations in care delivery, attempting to strike a reasonable balance between innovation, safety, and efficacy.

Using AI in clinical care will need to meet particularly high standards to satisfy clinicians and patients. Even if the AI approach has demonstrated improvements over other approaches, it is not (and never will be) perfect, and mistakes, no matter how infrequent, will drive significant, negative perceptions. An instructive example can be seen with another AI-fueled innovation: driverless cars. Although these vehicles are, on average, safer than human drivers, a pedestrian death due to a driverless car error caused great alarm. A clinical mistake made by an AI-enabled process would have a significant chilling effect. Thus, ensuring the appropriate level of oversight and regulation is a critical step in introducing AI into the clinical arena.

In addition to demonstrating its clinical effectiveness, evaluation of the cost-effectiveness of AI is also important. Huge investments in AI are being made with promised efficiencies and assumed cost reductions in return, much as they were for robotic surgery. However, it is unclear that AI techniques, with their attendant needs for data storage, data curation, model maintenance and updating, and data visualization, will significantly reduce costs. These tools and related needs may simply replace current costs with different, and potentially higher, costs.

What Are the Right Approaches for Integrating AI Into Clinical Care?

Even after the correct tasks, data, and evidence for AI are addressed, realization of its potential will not occur without effective integration into clinical care. To do so requires that clinicians develop a facility with interpreting and integrating AI-supported insights in their clinical care. In many ways, this need is identical to the integration of more traditional clinical decision support that has been a part of medicine for the past several decades. However, use of deep learning and other analytic approaches in AI adds an additional challenge. Because these techniques, by definition, generate insights via unobservable methods, clinicians cannot apply the face validity available in more traditional clinical decision tools (eg, integer-based scores to calculate stroke risk among patients with atrial fibrillation). This “black box” nature of AI may thus impede the uptake of these tools into practice.
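The contrast is clearest when set against such an integer-based score. The widely used CHA2DS2-VASc score for stroke risk in atrial fibrillation, sketched below in simplified form, is fully inspectable: a clinician can trace every point to a named clinical factor.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 prior_stroke_tia, vascular_disease):
    """Simplified CHA2DS2-VASc stroke-risk score for atrial fibrillation.

    Every point is traceable to a named clinical factor, which is
    what gives integer-based scores their face validity.
    """
    score = 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2 / A
    score += 1 if female else 0                            # Sc
    score += 1 if chf else 0                               # C
    score += 1 if hypertension else 0                      # H
    score += 1 if diabetes else 0                          # D
    score += 2 if prior_stroke_tia else 0                  # S2
    score += 1 if vascular_disease else 0                  # V
    return score

# An 80-year-old woman with hypertension: 2 (age) + 1 (sex) + 1 (HTN) = 4
print(cha2ds2_vasc(age=80, female=True, chf=False, hypertension=True,
                   diabetes=False, prior_stroke_tia=False,
                   vascular_disease=False))  # 4
```

A deep learning model may predict stroke risk more accurately, but it cannot show a clinician an itemized tally like this, and that opacity is the “black box” barrier to uptake.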

AI techniques also threaten to add to the amount of information that clinical teams must assimilate to deliver care. While AI can potentially introduce efficiencies to processes, including risk prediction and treatment selection, history suggests that most forms of clinical decision support add to, rather than replace, the information clinicians need to process. As a result, there is a risk that integrating AI into clinical workflow could significantly increase the cognitive load facing clinical teams and lead to higher stress, lower efficiency, and poorer clinical care.

Ideally, with appropriate integration of AI into clinical workflow, AI can define clinical patterns and insights beyond current human capabilities and free clinicians from some of the burden of integrating the vast and growing amounts of health data and knowledge into clinical workflow and practice. Clinicians can then focus on placing these insights into clinical context for their patients and return to their core (and fundamentally human) task of attending to patient needs and values in achieving their optimal health.6 This combination of AI and human intelligence, or augmented intelligence, is likely the most powerful approach to achieving this fundamental mission of health care.

A Balanced View of AI

AI is a promising tool for health care, and efforts should continue to bring innovations such as AI to clinical care delivery. However, inconsistent data quality, limited evidence supporting the clinical efficacy of AI, and lack of clarity about the effective integration of AI into clinical workflow are significant issues that threaten its application. Whether AI will ultimately improve quality of care at reasonable cost remains an unanswered, but critical, question. Without the difficult work needed to address these issues, the medical community risks falling prey to the hype of AI and missing the realization of its potential.


Article Information

Corresponding Author: Thomas M. Maddox, MD, MSc, Cardiovascular Division, Washington University School of Medicine/BJC Healthcare, Campus Box 8086, 660 S Euclid, St Louis, MO 63110.

Published Online: December 10, 2018. doi:10.1001/jama.2018.18932

Conflict of Interest Disclosures: Dr Maddox reports employment at the Washington University School of Medicine as both a staff cardiologist and the director of the BJC HealthCare/Washington University School of Medicine Healthcare Innovation Lab; grant funding from the National Center for Advancing Translational Sciences that supports building a national data center for digital health informatics innovation; and consultation for Creative Educational Concepts. Dr Rumsfeld reports employment at the American College of Cardiology as the chief innovation officer. Dr Payne reports employment at the Washington University School of Medicine as the director of the Institute for Informatics; grant funding from the National Institutes of Health, National Center for Advancing Translational Sciences, National Cancer Institute, Agency for Healthcare Research and Quality, AcademyHealth, Pfizer, and the Hairy Cell Leukemia Foundation; academic consulting at Case Western Reserve University, Cleveland Clinic, Columbia University, Stonybrook University, University of Kentucky, West Virginia University, Indiana University, The Ohio State University, Geisinger Commonwealth School of Medicine; international partnerships at Soochow University (China), Fudan University (China), Clinica Alemana (Chile), Universidad de Chile (Chile); consulting for American Medical Informatics Association (AMIA), National Academy of Medicine, Geisinger Health System; editorial board membership for JAMIA, JAMIA Open, Joanna Briggs Institute, Generating Evidence & Methods to improve patient outcomes, BioMed Central Medical Informatics and Decision Making; and corporate relationships with Signet Accel Inc, Aver Inc, and Cultivation Capital.


References

1. Stead WW. Clinical implications and challenges of artificial intelligence and deep learning. JAMA. 2018;320(11):1107-1108. doi:10.1001/jama.2018.11029

2. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316(22):2402-2410. doi:10.1001/jama.2016.17216

3. Zhang J, Gajjala S, Agrawal P, et al. Fully automated echocardiogram interpretation in clinical practice. Circulation. 2018;138(16):1623-1635. doi:10.1161/CIRCULATIONAHA.118.034338

4. Harrell F. Is medicine mesmerized by machine learning? Statistical Thinking website. Published February 1, 2018. Accessed October 26, 2018.

5. Gianfrancesco MA, Tamang S, Yazdany J, Schmajuk G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern Med. 2018;178(11):1544-1547. doi:10.1001/jamainternmed.2018.3763

6. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. 2018;319(1):19-20. doi:10.1001/jama.2017.19198


Artificial intelligence used to mean something. Now, everything has AI. That app that delivers you late-night egg rolls? AI. The chatbot that pops up when you’re buying new kicks? AI. Tweets, stories, posts in your feed, the search results you get, even the people you swipe right or left: artificial intelligence has an invisible hand in what (and who) you see on the internet.

But in the walled-off world of health care, with its HIPAA laws and privacy hot buttons, AI is only just beginning to change the way doctors see, diagnose, treat, and monitor patients. The potential to save lives and money is tremendous; one report estimates big data-crunching algorithms could save medicine and pharma up to $100 billion a year, as a result of AI-assisted efficiencies in clinical trials, research, and decision-making in the doctor’s office. Which is why tech titans like IBM, Microsoft, Google, and Apple are spinning up their own AI health care pet projects. And why every health-focused startup pitching Silicon Valley VCs throws in a “machine learning” or “deep neural net” for good measure.

These algorithms get better the more data they see. And health data is practically hemorrhaging out of mobile devices, wearables, and electronic medical files. But their siloed storage systems don’t make it easy to share that data with each other, let alone with an artificial intelligence. Until that changes, AI won’t be curing the world of, well, probably anything.

Which is not to say AI in health care is all hype. Sure, Watson turned out to be less cancer-crushing computer prodigy and more very expensive electrical bill. But 2017 wasn’t all flops. In fact, this year saw artificial intelligence begin demonstrating real concrete usefulness inside exam rooms and out.

In the doctor’s office, AI is already helping dermatologists tell cancerous growths from harmless spots, diagnose rare genetic conditions using facial recognition algorithms, and lending an assist in reading X-rays and other medical images. Soon, it will be detecting signs of diabetes-related eye disease in India. But image classification isn’t the only thing it’s getting good at; AI can also mine text data. That kind of tech undergirds a platform that gives any primary care doc access to the expertise of specialists from all over the world. No more waiting six months for that referral you can’t really afford anyway. And after you get that diagnosis, you can now take home an AI-equipped robot to help you stick to your treatment plan. It nags, but it looks cute while it’s doing it.

Health care-focused AI has also seeped into virtual care, as medicine experiments with ways to offer preventive care and between-visit support via the omnipresent smartphone. Your phone no longer just tells you how to sleep better, eat healthier, exercise more, and keep a quiet mind. Now, AI can pick up patterns in the way you talk and text, to detect the first signs of depression and suicide risk. And it can help you deal with that stuff too. Amiable chatbots trained on cognitive behavioral therapy concepts are now helping people who can’t find time or money for a proper shrink. For veterans struggling with PTSD, researchers designed a human therapist avatar with a mind built by machine learning. Both approaches take advantage of the fact that people open up more readily to machines than to other humans—the algorithms don’t judge.

And artificial intelligence is smartening up other devices, too. Deep neural software is making it easier to tune things like hearing aids and fancy new ultrasound machines. It’s making exoskeletons more responsive and artificial hands better at gripping (but not breaking) things.

Of course, as machine learning powers more and more medical device software, it has made regulating those devices a whole lot trickier. This year the US Food and Drug Administration even had to create an entirely new digital health task force just to tackle it. How exactly do you regulate software that is always learning and evolving, constantly changing on the fly? What happens in a zero-code world, where AI writes and rewrites its own instructions? Instead of trying to keep up with that radically different pace, the agency is piloting a new course: certifying trusted companies with good track records, as opposed to individual software packages.

Still, those regulations will only control AI-informed devices, diagnostics, and treatments. The technology is seeping into the practice of medicine at every level, not just at the stage of final device approval. It’s now baked into the way biomedical researchers sift through tsunamis of genetic data and pharma firms discover new drugs. It’s how public health officials predict the next epidemic, and keep track of opioid hot spots. And it’s increasingly how doctors and scientists try to make sense of their data-drenched realities. As AI opens those new avenues of understanding and treating human disease, it’s important to remember that algorithms, like people, are imperfect. They’re only as good as the data they see and the biases they carry.

No matter how many black box neural networks start finding their way into the health care system, medicine is still fundamentally a human endeavor. And people don’t always do what’s best for them, even on a doctor’s orders. Which means the biggest challenge in health care isn’t about changing people’s bodies, but about changing people’s minds. And that’s not the kind of intelligence computers are good at. AI won’t be replacing MDs anytime soon. But it is coming for their fax machines.

Why Value in Health Care Is the Target

The words “You have cancer” strike fear in the mind of nearly every patient who hears them. When it happens, we want the best doctors in the best health care organization to give us the best chance of achieving the outcomes of care personally important to us as individuals, at a price we can afford. As consumers, we traditionally seek value in many spheres. We compare prices and features on TVs, computers, cars — almost everything we bring into our lives. Consumers understand that value is the best quality at the lowest price. But when we become patients, we sometimes stop behaving like consumers. It is unusual to choose our doctor based on how well he or she treats a particular condition, and we rarely have a full picture of what the costs of care will be.

Lately, there has been a lot of talk about value in health care. A value proposition for health care arose from the work of Harvard Business School Professor Michael Porter, who advocates that value for patients should be the overarching principle for fixing our broken system. So what do doctors and patients need to know about value in health care? It is the balance between the health outcomes patients want and the cost to achieve them. Over the past ten years, doctors and hospitals in increasing numbers have been trying to assess their value in treating specific problems. They are beginning to measure outcomes of care more regularly and are taking a hard look at health care costs.

High value in health care is a great outcome that matters to the patient at the lowest possible cost. It is neither a lower-cost treatment that has a worse outcome nor a great outcome that is not affordable. It is critically important that doctors and patients understand that outcomes of care are about more than basic questions such as, Will the patient survive an operation? or How long will a patient live with cancer? Outcomes also involve the patient experience — questions such as, Will there be pain with treatment? How long will the patient miss work? and What will the impact on the family be? In a value-based system, both doctors and patients should know what outcomes to expect, and demand that outcomes be measured and publicly reported. When outcomes of care are reported, doctors can improve their results, and patients can make meaningful choices about where to receive care.

Costs of care delivery — the other part of the value equation — also need to be measured and made transparent to doctors and patients. Cancer is the leading cause of personal bankruptcy, yet most doctors do not know how much the care they outline will cost the patient, and most patients do not know how much their care will cost. It is time that patients know this information and that providers be transparent about their costs.

In addition, in a value-based system how doctors and hospitals get paid will change. Reimbursements will likely evolve to replace fee-for-service payment largely with bundled payments, which will require doctors to be sure they do only what is necessary to get a good health outcome — but not more. Bundled pricing does not mean that doctors will do less because they are getting a fixed amount of money; just the opposite, since bundled payments will be tied to getting the best outcomes. Doctors will be paid more for good outcomes and less for worse outcomes — something that happens in most industries, but has never been true in health care.
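A hypothetical sketch of the arithmetic behind an outcome-adjusted bundled payment (all amounts and adjustment rates below are invented, not drawn from any real contract) shows how a fixed bundle can still pay more for good outcomes and less for worse ones:

```python
# Hypothetical sketch of an outcome-adjusted bundled payment; every
# number here is invented for illustration, not from any real contract.
def bundled_payment(base_bundle, outcome_score, adjustment_rate=0.10):
    """Pay a fixed bundle, scaled up or down by measured outcomes.

    outcome_score: 0.0 (worst) to 1.0 (best), relative to a benchmark
    of 0.5; better-than-benchmark outcomes earn a bonus and worse
    ones a penalty, instead of paying per service delivered.
    """
    adjustment = adjustment_rate * (outcome_score - 0.5) * 2
    return round(base_bundle * (1 + adjustment), 2)

print(bundled_payment(20_000, outcome_score=0.9))  # bonus for good outcomes
print(bundled_payment(20_000, outcome_score=0.2))  # penalty for poor outcomes
```

Because the payment rises with the outcome rather than with the volume of services, the incentive is to do exactly what is needed for a good result, and no more.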

In a true value-based system, volume matters. Doctors will see a lot of patients with the same problem and become expert in treating those problems. Right now, too many clinicians try to treat too many different problems in which they may not be expert. As value-based care progresses, not all hospitals will treat everything. In Europe, there is a movement to certify breast cancer centers, where only those that see a lot of patients are certified.

A true value-based health care system controls costs through efficiency, eliminating things that do not need to be done and doing only those things that improve outcomes. Less health care does not mean worse care; often it means better care. Good health outcomes cost less. Lower cost in health care is good for patient health and for our economy. Fixing our health care system cannot be done simply by government fiat, by health care providers, by administrators, by patients, or by any party acting alone — but only with all parties focusing in tandem on value for the patient.

‘Countless’ Patients Harmed By Wrong or Delayed Diagnoses

Evidence is incomplete, but it still shows that most patients will be affected by the problem at some point in their lives.


Most people experience at least one diagnostic error during their life, according to a new report investigating wrong or delayed diagnoses.

The Institute of Medicine on Tuesday released a ground-breaking report calling wrong or delayed diagnoses a vast “blind-spot” in U.S. healthcare and blaming them for harming countless patients each year.

The report, called “Improving Diagnosis in Health Care,” asserts that diagnostic errors occur daily in every health care setting nationwide, yet they have never been adequately studied. No one knows how many people suffer from misdiagnoses or delays that affect their care.

Despite the sketchy evidence, the authors conclude that “most people will experience at least one diagnostic error in their lifetime, sometimes with devastating consequences.”

“This problem is significant and serious [yet] we don’t know for sure how often it occurs, how serious it is or how much it costs,” says Dr. John Ball, of the American College of Physicians, who chaired the committee that carried out the analysis. He called the lack of evidence one of the committee’s most “surprising” and distressing findings.

Advocates hailed the report for calling attention to a problem that has been neglected for decades despite its importance to doctors and patients alike.

“It’s huge that diagnosis is finally getting the attention it deserves,” says Helen Haskell, co-chair of the patient committee at the Society to Improve Diagnosis in Medicine, who was invited by the committee to review a draft of the report. “There are lots of people who think our failure to tackle this is one reason why patient safety hasn’t progressed farther.”

“Improving Diagnosis in Health Care” is the latest installment in a series that began with “To Err is Human: Building a Safer Health System,” which made national headlines 16 years ago by estimating that 44,000 to 98,000 people die from preventable medical errors each year. Each report in the series has focused on lapses responsible for poor quality health care and how to correct them.


Despite the committee’s inability to offer even a rough estimate of the pervasiveness of faulty diagnoses (a limitation likely to disappoint patient advocates and others who were anticipating the committee’s answer to that question), the report does offer some indications of the problem’s seriousness.

Studies show:

  • About 5 percent of adults who seek outpatient care annually suffer a delayed or wrong diagnosis.
  • Postmortem research suggests that diagnostic errors are implicated in one of every 10 patient deaths. Not every death is scrutinized, however, so the findings can’t be generalized to all hospital patients.
  • Chart reviews indicate that diagnostic errors account for up to 17 percent of hospital adverse events.
  • Diagnostic errors are the principal cause of paid malpractice claims and are almost twice as likely to end in a patient’s death as claims for other medical mishaps. They also represent the biggest share of total payments.

Getting the right diagnosis is critical, because it is the starting point for every other health care decision. Sometimes diagnostic errors or delays stem from poor judgment, including “shortcuts that people take,” such as a physician who makes superficial assumptions based on past experience rather than current information, Ball says.

Often diagnostic errors result from poor coordination of care. “Not all errors are individual human errors,” he says. “They occur in a system that leads you into [certain] kinds of errors.” He cited the emergency room, a chaotic setting with a constant stream of patients and information, where doctors, nurses, technicians and laboratory personnel must multi-task amid countless distractions.

One vital check on the accuracy of a diagnosis is following up with the patient, a cycle that promotes better care and reinforces learning, says Dr. Donald Berwick, president emeritus and senior fellow at the Institute for Healthcare Improvement. “The diagnosis is the hypothesis, the treatment is a test. If we don’t know what happened to the patient it’s difficult to improve either our diagnosis or treatment.”

The glut of tests, some ordered by doctors who are practicing defensive medicine to protect against malpractice lawsuits, compounds the problem. “There’s a tremendous reliance on tests,” says Haskell, of the Society to Improve Diagnosis in Medicine. “You have to know to order the right test, and the test has to be interpreted correctly all along the line. It’s a complicated system with a lot of opportunities for error.”

Clumsy health information technology, including electronic medical records, also represents a “barrier to good health care,” Ball says, because information isn’t easily accessible and is often presented in a confusing manner.

Berwick, who also reviewed the report for the institute, cited one crucial omission: the committee decided not to address overdiagnosis, in which a diagnosis, though made, is not helpful to patients. “They might not define that as an error,” he says, “but I think the task of addressing overdiagnosis is critical.”

Finally, Berwick says, it’s important to factor into any assessment of medical errors the heavy administrative demands placed on doctors. “Physicians today spend so much time filling out forms, seeking approvals and ordering things — you can’t increase work pressure so much without expecting errors to increase.”

There is no easy fix, the report concludes. What’s required is a major reassessment of the diagnostic process and a commitment to change. It must begin with a common definition of what constitutes a diagnostic error, and the data to figure out possible remedies and measure progress.

“What I like is that the report emphasizes that teamwork is necessary to have a system that works,” says Haskell. “You have to have coordinated care, patient involvement and the involvement of non-physician personnel.”

Absent a better solution, Haskell says, “you need to do your own research to find out what tests are needed and be sure they’re being done. You need to get the results.”

“Patients bear the financial burden of all this,” she adds. “Patients or their insurers. The medical system profits from it.”

Telemedicine Is The Future Of Health Care: On-Call Docs To Examine, Diagnose, And Treat Patients Remotely

Smartphones and tablets are used for just about everything, from monitoring your bank account to ordering a cab, so it would only make sense that health care become part of this technological advancement. Telemedicine is the union between technology and health, and many believe it is the future of health care in the U.S.

On Monday, at the American Telemedicine Association’s trade show in Los Angeles, American Well, a telemedicine provider, announced “Telehealth 2.0” — a sweeping list of telemedicine products and services. These include live “video visits” on your phone and the web, real-time patient data, and the ability for doctors to review and accept or decline visits on their mobile phone.

“We [want to] take telehealth that was used as a convenience measure for patients and put it in the hands of physicians,” said American Well CEO Roy Schoenberg, as reported by Forbes.

American Well’s move has further strengthened the prediction that telemedicine is not a passing phase but here to stay. The technology needed for telemedicine, as well as the demand for its services, has been around for decades, but it wasn’t until fairly recently that this idea of “virtual check-ups” began to be taken seriously by health professionals.

Telemedicine is highly convenient, a factor that is helping its rise in popularity. American Well’s new app for physicians will include integrations with Apple’s biometrics to make patients’ health records available at the touch of a finger. American Well also has an app that matches patients with doctors within two minutes. According to Forbes, American Well foresees doctors eventually shifting easily between their virtual and physical waiting room patients, a skill that will allow them to see more patients than ever before. Allowing doctors to deal with minor health concerns, such as colds and flu, in a virtual setting would theoretically make more space in actual waiting rooms for more seriously ill patients.

Beyond the cold and flu, Schoenberg explains, telemedicine has potential to treat more complex conditions, such as cancer and heart disease. Large hospital systems like the Cleveland Clinic and Massachusetts General are currently using American Well technology to treat patients, he said.

Wired reported that UnitedHealthcare, Oscar, WellPoint, and some BlueCross BlueShield plans have adopted telemedicine programs in recent years.

While telemedicine does sound exciting, it’s not without its hurdles. For example, making access to a doctor that easy may lead to patient overuse, a problem that could overwhelm an already inundated health care system. There’s also the fact that old habits die hard: although a virtual doctor’s appointment may be possible, for now doctors and patients alike may prefer the old-fashioned face-to-face check-up.

Regardless of these hurdles, it’s clear that telemedicine is here to stay and bound to only become more popular. And while there is a long way to go before we’re all able to have 24/7 medical help at the touch of our fingers, this latest announcement from American Well is certainly a step in that direction.

Specialized Care Didn’t Affect Healthcare Use Among Confused Hospitalized Elders. But patients were happier, and their families were satisfied with their care.


Some hospitals have specialized units to care for older, cognitively impaired patients, but whether such units improve outcomes is unclear. In this randomized trial, investigators compared care in a specialized unit versus standard care (geriatric or general medical wards) in 600 patients (median age, 85) identified as “confused” on admission to a large U.K. hospital. Specialized unit staff were skilled in managing patients with delirium and dementia, and specialized care included regular psychiatrist visits, organized activities, a physical environment tailored to patients with cognitive impairment, and proactive involvement of family caregivers.

After adjusting for multiple variables, investigators found no significant differences between patients randomized to specialized care and those randomized to standard care in days spent at home during 90 days after randomization (51 and 45 days) or in median length of hospital stay (11 days in both groups). Rates of return home from the hospital, in-hospital mortality, 90-day survival, hospital readmission, and nursing home placement also were similar. However, specialized-unit patients were significantly more likely than standard-care patients to be in a positive mood (79% vs. 68%), and their family caregivers were significantly more likely to be satisfied with their care (91% vs. 83%).


In this trial, confused elders admitted to a specialized unit did not have superior healthcare-use outcomes or longer survival than those admitted to geriatric or general medical wards. Although patient mood and family caregivers’ satisfaction favored specialized care over standard care, the absolute differences were small. Based on these findings, justifying the costs associated with such specialized units would be difficult.

Source: NEJM


Yes, You Can Hack a Pacemaker (and Other Medical Devices Too).

On Sunday’s episode of the Emmy Award-winning show Homeland, the Vice President of the United States is assassinated by a group of terrorists who have hacked into the pacemaker controlling his heart. In an elaborate plot, they obtain the device’s unique identification number. They are then able to remotely take control and administer large electrical shocks, bringing on a fatal heart attack.

Viewers were shocked — many questioned whether something like this was possible in real life. In short: yes (though the part about the attacker being halfway across the world is questionable). For years, researchers have been exposing enormous vulnerabilities in Internet-connected implanted medical devices.

There are millions of people who rely on these brilliant technologies to stay alive. But as we put more electronic devices into our bodies, there are serious security challenges that must be addressed. We are familiar with the threat that cyber-crime poses to the computers around us — however, we have not yet prepared for the threat it may pose to the computers inside of us.

Implanted devices have been around for decades, but only in the last decade have these devices become virtually accessible. While they allow for doctors to collect valuable data, many of these devices were distributed without any type of encryption or defensive mechanisms in place. Unlike a regular electronic device that can be loaded with new firmware, medical devices are embedded inside the body and require surgery for “full” updates. One of the greatest constraints to adding additional security features is the very limited amount of battery power available.

Thankfully, there have been no recorded cases of a death or injury resulting from a cyber attack on the body. All demonstrations so far have been conducted for research purposes only. But if somebody decides to use these methods for nefarious purposes, it may go undetected.

Marc Goodman, a global security expert and the track chair for Policy, Law and Ethics at Singularity University, explains just how difficult it is to detect these types of attacks. “Even if a case were to go to the coroner’s office for review,” he asks, “how many public medical examiners would be capable of conducting a complex computer forensics investigation?” Even more troubling, he notes, “the evidence of medical device tampering might not even be located on the body, where the coroner is accustomed to finding it, but rather might be thousands of kilometers away, across an ocean on a foreign computer server.”

Since knowledge of these vulnerabilities became public in 2008, there have been rapid advancements in the types of hacking successfully attempted.

The equipment needed to hack a transmitter used to cost tens of thousands of dollars; last year, a researcher hacked his own insulin pump using an Arduino module that cost less than $20. In April, Barnaby Jack, a security researcher at McAfee, demonstrated a system that could scan for and compromise insulin pumps that communicate wirelessly. With the push of a button on his laptop, he could make any pump within 300 feet dump its entire contents, without even needing to know the devices’ identification numbers. At a different conference, Jack showed how he reverse engineered a pacemaker and could deliver an 830-volt shock to a person’s device from 50 feet away, a capability he likened to an “anonymous assassination.”

There have also been some fascinating advancements in the emerging field of security for medical devices. Researchers have created a “noise” shield that can block out certain attacks — but have strangely run into problems with telecommunication companies looking to protect their frequencies. There have also been discussions of using ultrasound waves to determine the distance between a transmitter and a medical device to prevent far-away attacks. Another team has developed biometric heartbeat sensors to allow devices within a body to communicate with each other, keeping out intruding devices and signals.

But these developments pale in comparison to the enormous difficulty of protecting against “medical cybercrime,” and the rest of the industry is falling badly behind.

In hospitals around the country there has been a dangerous rise of malware infections in computerized equipment. Many of these systems are running very old versions of Windows that are susceptible to viruses from years ago, and some manufacturers will not allow their equipment to be modified, even with security updates, partially due to regulatory restrictions. A solution to this problem requires a rethinking of the legal protections, the loosening of equipment guidelines, as well as increased disclosure to patients.

Government regulators have studied this issue and recommended that the FDA take these concerns into account when approving devices. This may be a helpful first step, but the government will not be able to keep up with the fast developments of cyber-crime. As the digital and physical world continue to come together, we are going to need an aggressive system of testing and updating these systems. The devices of yesterday were not created to protect against the threats of tomorrow.


Online Access to Personal Health Records Increases Use of Services.

Patients with online access to personal health records unexpectedly increased their use of most clinical services, according to a JAMA study. Previous studies found the opposite effect.

The retrospective cohort study involved some 44,000 users of Kaiser Permanente Colorado’s MyHealthManager who were matched to members who did not establish accounts. Matching was based on members’ history of office visits.

Compared with nonusers, users had an increased rate of office visits in the year following activation of their MyHealthManager account, a difference of 0.7 visits per member per year. Similarly, telephone encounters, after-hours clinic visits, emergency department visits, and hospitalizations all rose significantly. Among patients with coronary artery disease, however, use of services did not increase.

Editorialists call the findings “sobering for patient portal enthusiasts.” They speculate that the reason for the discrepancy between this and earlier studies may have to do with regional differences in healthcare delivery.

Source: JAMA

Understanding the Effect of Healthcare Workers’ Hand Hygiene.

Using a novel method, investigators revealed marked heterogeneity in healthcare worker interactions and in the potential consequences of their hand hygiene.

Attempts to understand disease transmission in healthcare settings have generally assumed that healthcare workers (HCWs) move and interact uniformly. However, observational studies have suggested the possibility of peripatetic “superspreaders” who have greater-than-average mobility and interactivity — and thus more opportunity to spread infection. In a recent study conducted in the medical intensive care unit of a university hospital, researchers assessed this possibility.

The researchers used small electronic badges worn by HCWs, together with fixed-position beacons, to determine patterns of HCW movement and interactions within this 20-bed unit. They then used these data to mathematically model the effect of HCW hand hygiene on pathogen transmission.

During the 48-hour period of analysis, the average number of contacts (HCW–HCW and HCW–patient) per HCW was 80.1 for day shifts and 76.1 for night shifts. However, a few HCWs were responsible for a disproportionately large share of the contacts. Modeling the effect of hand-hygiene activity on disease transmission showed that spread of a pathogen would be significantly greater with noncompliance of a few high-contact staff members than with noncompliance of an equal number of low-contact workers.
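The modeling result can be illustrated with a minimal sketch. All numbers here — the contact counts, the per-contact transmission probabilities, and the function itself — are assumptions for illustration, not the study’s actual parameters: if each contact by a noncompliant worker carries a higher transmission risk, then concentrating noncompliance in a few high-contact workers produces far more expected transmissions than the same number of noncompliant low-contact workers.

```python
# Illustrative sketch (not the study's model): expected transmissions when
# hand-hygiene noncompliance falls on high-contact vs. low-contact workers.

def expected_transmissions(contacts, noncompliant, p_clean=0.01, p_dirty=0.10):
    """Sum assumed per-contact transmission risk over all workers.

    contacts     -- contact count per worker
    noncompliant -- indices of workers who skip hand hygiene
    """
    return sum(
        n * (p_dirty if i in noncompliant else p_clean)
        for i, n in enumerate(contacts)
    )

# Ten hypothetical workers; the first two are "superspreaders" with
# several times the average number of contacts.
contacts = [300, 250, 80, 75, 70, 65, 60, 55, 50, 45]

high = expected_transmissions(contacts, noncompliant={0, 1})  # two busiest
low = expected_transmissions(contacts, noncompliant={8, 9})   # two quietest

print(f"noncompliance in high-contact staff: {high:.2f} expected transmissions")
print(f"noncompliance in low-contact staff:  {low:.2f} expected transmissions")
```

Under these made-up numbers the high-contact scenario yields roughly three times as many expected transmissions, mirroring the study’s qualitative conclusion that a few noncompliant “superspreaders” matter disproportionately.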

Comment: Hand hygiene is a central tenet of infection control, yet since the original work of Semmelweis, there has been relatively little research on the direct effects of hand-hygiene behavior on disease transmission. Hornbeck and colleagues have provided new insights into HCW contacts, which can help us to understand the role of hand hygiene in preventing nosocomial spread of pathogens and thus to develop more-sophisticated approaches for improving its efficacy.

Source: Journal Watch Infectious Diseases


Pregnancy-related cancers on the rise.

The rate of pregnancy-associated cancer is increasing and is only partially explained by the rising number of older mothers, according to research led by the University of Sydney.

The researchers say improved diagnostic techniques, detection and increased interaction with health services during pregnancy may contribute to the higher rates of pregnancy-associated cancer.

The findings, co-authored by Dr Christine Roberts from the Kolling Institute at Sydney Medical School, were recently published in BJOG: An International Journal of Obstetrics and Gynaecology. Cathy Lee, a Masters student in Biostatistics at the University, is lead author of the study.

“The genetic and environmental origins of pregnancy-associated cancers are likely to pre-date the pregnancy but the hormones and growth factors necessary for a baby to develop may accelerate the growth of a tumour,” Dr Christine Roberts said.

The Australian study looked at 1.3 million births between 1994 and 2008. The rate of pregnancy-associated cancer, in which the initial diagnosis of cancer is made during pregnancy or within 12 months of delivery, was compared with the rate among pregnant women without cancer over the same period.

It found that over a 14-year period the incidence rate of pregnancy-associated cancer increased from 112.3 to 191.5 per 100,000.

“Although this represents a 70 percent increase in cancers diagnosed during or soon after pregnancy, it is important to note that cancer remains rare, affecting about two in every 1,000 pregnancies,” Dr Roberts said.
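The reported figures hang together on a line of arithmetic. As a quick, illustrative check (the rates are taken straight from the article; the script is not part of the study), a rise from 112.3 to 191.5 per 100,000 is about a 70 percent relative increase, and 191.5 per 100,000 is roughly two per 1,000 pregnancies:

```python
# Sanity check of the article's figures (illustrative only).
before, after = 112.3, 191.5               # cases per 100,000 maternities
relative_increase = (after - before) / before
per_thousand = after / 100                 # per 100,000 -> per 1,000

print(f"relative increase: {relative_increase:.0%}")
print(f"rate per 1,000 pregnancies: {per_thousand:.1f}")
```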

Although the age of the mother is a strong risk factor for cancer, increasing maternal age explained only some of the increase in cancer occurring.

“Pregnancy increases women’s interaction with health services and together with improved techniques for detecting cancer the possibility for diagnosis is therefore increased,” Dr Roberts said.

The most common cancers detected were skin melanomas, breast cancer, thyroid and other endocrine cancers, and gynaecological and lymphohaematopoietic cancers. The high incidence of melanoma may relate to the fact that Australia has the highest incidence of melanoma in the world.

The study also looked at pregnancy outcomes and found that cancer during pregnancy was associated with a significantly increased risk of caesarean section and planned preterm birth, which may be to allow cancer treatment to commence.

Importantly, there was no evidence of harm to the babies of women with cancer: they were not at increased risk of reduced growth or death.

Source: Science Alert


