I recently told a radiology resident who demolished the worklist, “You’re a machine.” He beamed with pride. Imitation is the highest form of flattery. But the machine, not content with being imitated, wants to encroach on our turf.
CT could scarcely have advanced without progress in computing to hoard the glut of thin slices. On two-dimensional projectional images such as abdominal radiographs, arteries reveal themselves only if calcified. With CT, algorithms extract arteries with a click. When I first saw this I was mesmerized, then humbled, and then depressed. With automation, what was my role, aside from clicking the aorta?
The role of computers in radiology was predicted as early as 1959 by Lee Lusted, a radiologist and the founder of the Society for Medical Decision Making. Lusted envisioned “an electronic scanner-computer to look at chest photofluorograms, to separate the clearly normal chest films from the abnormal chest films. The abnormal chest films would be marked for later study by the radiologists.”
AI Is Not Just a Sci-Fi Film
I was skeptical of the machine. Skepticism is ignorance waiting for enlightenment. Nearly 60 years after Lusted’s intuition, artificial intelligence (AI) is not a figment of Isaac Asimov’s imagination but a virtual reality.
Leading the way in AI is IBM. Tanveer Syeda-Mahmood, PhD, IBM’s chief scientist for the Medical Sieve Radiology Grand Challenge project, sees symbiosis between radiologists and AI. At the 2015 RSNA meeting she showed off Watson, IBM’s polymath and the poster child of AI. Watson can find clots in the pulmonary arteries. The quality of the demonstration cases was high; the pulmonary arteries shone brightly. But I’m nitpicking. Evolutionarily, radiologists have reached their apotheosis. Watson, though, will improve. He has a lot of homework—30 billion medical images to review after IBM acquired Merge, according to the Wall Street Journal. A busy radiologist reads approximately 20,000 studies a year.
AI has a comparative advantage over radiologists because of the explosion of images: In a trauma patient, a typical “pan scan”—CT of the head, cervical spine, chest, abdomen, and pelvis with reformats—renders over 4000 images. Aside from the risk for de Quervain’s tenosynovitis from scrolling through the images, radiologists face visual fatigue. Our visual cortex was designed to look for patterns, not needles in haystacks.
The explosion of images has led to an odd safety concern. The disease burden hasn’t risen, and the prevalence of clinically significant pathology on imaging remains the same; however, the presence of potentially significant pathology has increased, and the chances of missing it have increased exponentially—ie, there is an epidemic of possible medical errors. Radiologists will be no match for AI in detecting 1-mm lung nodules.
What’s intriguing about AI is what AI finds easy. While I find the beating heart demanding to stare at, cine imaging is a piece of cake for Watson. Watson learned about cardiac disease by seeing videotapes of echocardiograms from India, according to Syeda-Mahmood. True to the healthcare systems that Watson will serve, he has both American and foreign training.
What’s even more intriguing about AI is what AI finds difficult. Eliot Siegel, MD, professor of radiology at the University of Maryland, has promised me that if anyone develops a program that segments the adrenal glands on CT as reliably as a 7-year-old could, he’d wash their car. So far he hasn’t had to wash any cars. AI may crunch PE studies but struggle with intra-abdominal abscesses. Distinguishing between fluid-filled sigmoid colon and an infected collection isn’t the last frontier of technology, but it may be the last bastion of the radiologist.
Competition for Watson
Watson has a rival. Enlitic was founded by Australian data scientist and entrepreneur Jeremy Howard. Both Watson and Enlitic use deep learning, a process in which the computer, shown many examples, works out for itself why something is what it is. However, their philosophies differ. Watson wants to understand the disease. Enlitic wants to understand the raw data. Enlitic’s philosophy is a scientific truism: Images = f(x). Imaging is data. Find the source data, solve the data, and you’ve solved the diagnosis.
Igor Barani, MD, a radiation oncologist and the CEO of Enlitic, told me that he was once skeptical of computers. He changed his mind when he saw what Enlitic could do. After being fed several hundred musculoskeletal radiographs, which were either normal or had fractures, the machine flagged not only the radiographs with fractures but also the site of the fracture. The machine started off ignorant, was not told what to do, and learned by trial and error. It wasn’t spoon-fed; rather, it feasted. Enlitic is like that vanishingly rare autodidact.
Barani, too, believes that AI and radiologists are symbiotic. According to him, AI will not render radiologists unemployable but will save the radiologist from mundane tasks that can be automated, such as reading portable chest radiographs to confirm line placements and looking at CT scans for lung nodules. As he put it, “You did not go to medical school to measure lung nodules.” Barani’s point is well taken. The tasks that can be automated should be given to the machine—not as surrender but as cession.
Medical Publications vs Business News
Living in San Francisco, Barani is aware of the hot air from Silicon Valley. He doesn’t want Enlitic to be cast in the same mold as some diagnostics that have been buried under their own hype. He wants the technology to prove itself in a randomized controlled trial and says that Enlitic is conducting such a trial in Australia, which will assess AI’s accuracy and efficiency as an adjunct to radiologists. This is just as well. Charles Kahn, MD, an expert in informatics and vice chair of radiology at the University of Pennsylvania, has followed the history of neural networks—AI’s DNA. “I’ve seen optimism before. It’s time that proponents of AI published, not in Forbes, but in peer-reviewed medical journals,” he told me, with slight exasperation.
For AI’s full potential to be harnessed, it must extract as much information as possible from the electronic health record (EHR), images, and radiology reports. The challenges are not computational. The first challenge is the validity of the information. For example, the EHR has signal, but it also has a lot of junk that looks like signal because an ICD-10 code is attached.
The second challenge is consistency. The variability and disclaimers in radiology reports that frustrate clinicians could frustrate AI as well. It needs a consistent diagnostic anchor. Imagine if half of the World Atlases listed the capital of Mongolia as Ulan Bator and the other half listed it as New Delhi. Even Watson might be confused if asked the capital of Mongolia.
Could the hedge “pneumonia not excluded”—the Achilles heel of radiologists, the chink in the radiologist’s armor—save the profession from AI? Gleefully, I asked IBM’s Syeda-Mahmood. She smiled. “Watson doesn’t need to be better than you. Just as good as you.” She has a point. If Watson knows to pick up emboli in the lobar pulmonary arteries in some studies and can report in others “no embolus seen but subsegmental pulmonary embolus is not excluded,” how is that different from what we do?
Digital Mammography and AI Takeover
Like radiologists, AI must choose between sensitivity and specificity—ie, between overcalling and undercalling disease. One imperative for computer assistance in diagnosis is to reduce diagnostic errors. The errors considered more egregious are misses, not overdiagnoses. AI will favor sensitivity over specificity.
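The sensitivity-specificity trade-off above can be made concrete. This is a toy sketch only: the "malignancy scores" and labels below are invented for illustration and belong to no actual product.

```python
# Toy illustration (not any vendor's algorithm): lowering the decision
# threshold on a hypothetical malignancy score buys sensitivity at the
# cost of specificity. All numbers below are invented.

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity when cases scoring >= threshold are
    called positive. labels: 1 = disease present, 0 = disease absent."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]

for t in (0.3, 0.5, 0.7):
    se, sp = sens_spec(scores, labels, t)
    print(f"threshold {t}: sensitivity {se:.2f}, specificity {sp:.2f}")
```

A system tuned to never miss disease sits at the low-threshold end of this sweep, which is exactly where the false positives pile up.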
If AI reminds radiologists that leiomyosarcoma of the pulmonary veins, for example, is in the differential for upper lobe blood diversion on chest radiograph, this rare neoplasm will never be missed. But there’s a fine line between being a helpful Siri and a monkey on the shoulder constantly crying wolf.
The best example of computer-aided detection (CAD) is in breast imaging. Touted as revolutionizing mammography, CAD has had only modest success. CAD chose sensitivity over specificity in a field where false negatives are dreaded, false positives are costly, and images are complex. CAD flags several pseudoabnormalities, which experienced readers summarily dismiss but over which novice readers ruminate. CAD has achieved neither higher sensitivity nor higher specificity.
When I asked Emily Conant, MD, chief of breast imaging at the University of Pennsylvania, about this, she cautioned against early dismissal of CAD for mammography. “With digital platforms, quantification of global measures of tissue density and complexity are being developed to aid detection. CAD will be more reproducible and efficient than human readers in quantifying. This will be a great advance.” Digital mammography follows a pattern seen in other areas of imaging. An explosion of information is followed by an imperative to quantify, leaving radiologists vulnerable to annexation by the machine.
Should radiologists view AI as a friend or a foe? The question is partly moot. If AI has a greater area under the receiver operating characteristic (ROC) curve than radiologists—meaning it calls fewer false negatives and fewer false positives—it hardly matters what radiologists feel. Progress in AI will be geometric. Once all data are integrated, AI can have a greater sensitivity and greater specificity than radiologists.
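The area under the ROC curve mentioned above has a concrete interpretation: the probability that a randomly chosen diseased case outscores a randomly chosen normal one. A minimal sketch, with invented scores and labels:

```python
# Toy ROC AUC (illustrative only). An AUC of 1.0 means every diseased
# case outscores every normal case; 0.5 is coin-flipping.

def auc(scores, labels):
    """Rank-based AUC; labels: 1 = disease, 0 = normal. Ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented data: 4 diseased and 4 normal cases with hypothetical scores.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
print(f"AUC = {auc(scores, labels):.3f}")
```

Unlike sensitivity or specificity alone, the AUC is threshold-free, which is why it is the natural yardstick for comparing a machine against a human reader.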
Do We Need Fewer Radiologists?
Workforce planning for organized radiology is tricky. That AI will do the job radiologists do today is a mathematical certainty. The question is when. If it were within 6 months, radiologists may as well fall on their swords today. A reasonable timeframe is anything between 10 and 40 years, but closer to 10 years. How radiologists and AI could interact might be beyond our imagination. Enlitic’s Barani believes that radiologists can use AI to look after populations. AI, he says, “can scale the locus of a radiologist’s influence.”
AI may increase radiologists’ work in the beginning as it spits out false positives to dodge false negatives. I consulted R. Nick Bryan, MD, PhD, emeritus professor at the University of Pennsylvania, who believes that radiologists will adjudicate normal. The arc of history bends toward irony. Lusted thought that computers would find normal studies, leaving abnormal ones for radiologists. The past belonged to sensitivity; the future is specificity. People are tired of being told that they have “possible disease.” They want to know if they’re normal.
Bryan, a neuroradiologist, founded a technology that uses Bayesian analysis and a library of reference images for diagnosis in neuroimaging. He claims that the technology does better than first-year radiology residents and as well as neuroradiology fellows in correctly describing a range of brain diseases on MRI. He once challenged me to a duel with the technology. I told him that I was washing my hair that evening.
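In spirit, a Bayesian diagnostic engine updates a prior differential with the likelihood of each imaging finding. The sketch below is a toy only: the diseases, the finding, and every number are invented, and this is in no way Bryan's actual system.

```python
# Toy Bayesian diagnosis (illustrative; all priors and likelihoods invented).
# Prior probabilities over a small differential for a brain lesion:
priors = {"glioma": 0.2, "metastasis": 0.5, "abscess": 0.3}

# P(finding = "restricted diffusion" | disease), invented numbers:
likelihood = {"glioma": 0.2, "metastasis": 0.1, "abscess": 0.9}

def posterior(priors, likelihood):
    """Bayes' rule: posterior ∝ prior × likelihood, then normalize."""
    unnorm = {d: priors[d] * likelihood[d] for d in priors}
    z = sum(unnorm.values())
    return {d: v / z for d, v in unnorm.items()}

post = posterior(priors, likelihood)
for disease, p in sorted(post.items(), key=lambda kv: -kv[1]):
    print(f"{disease}: {p:.1%}")
```

One observed finding is enough to flip the ranking: metastasis led the prior, but abscess dominates the posterior, which is the kind of re-ordering a reference library makes possible at scale.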
Running from AI isn’t the solution. Radiologists must work with AI, not to improve AI but to realize their role in the post-AI information world. Radiologists must keep their friends close and AI closer. Automation affects other industries. We have news stories written by bots (standardized William Zinsser–inspired op-eds may be next). Radiologists shouldn’t take automation personally.
Nevertheless, radiologists must know themselves. Emmanuel Botzolakis, MD, a neuroradiology fellow working with Bryan, put it succinctly. “Radiologists should focus on inference, not detection. With detection, we’ll lose to AI. With inference, we might prevail.”
Botzolakis was distinguishing between Radiologist as Clinical Problem Solver and Radiologist as TSA Detector. Like TSA detectors, radiologists spot possible time bombs, which on imaging are mostly irrelevant to the clinical presentation. This role is not likely to diminish, because there will be more anticipatory medicine and more quantification. When AI becomes that TSA detector, we may need fewer radiologists per capita to perform more complex cognitive tasks.
The future belongs to quantification, but it is far from clear that this future will be palatable to its users. AI could attach numerical probabilities to differential diagnoses and churn out reports such as this:
Based on Mr Patel’s demographics and imaging, the mass in the liver has a 66.6% chance of being benign, a 33.3% chance of being malignant, and a 0.1% chance of not being real.
Is precision as useful as we think? What will you do if your CT report says a renal mass has a “0.8% chance of malignancy”? Sleep soundly? Remove the mass? Do follow-up imaging? Numbers are continuous but decision-making is dichotomous, and the final outcome is still singular. Is the human race prepared for all of this information?
The hallmark of intelligence is in reducing information to what is relevant. A dualism may emerge between artificial and real intelligence, where AI spits out information and radiologists contract information. Radiologists could be Sherlock Holmes to the untamed eagerness of Watson.
In the meantime, radiologists should ask which tasks need a medical degree. Surely, placing a caliper from one end of a lung nodule to the other end doesn’t need 4 years of medical school and an internship. Then render unto AI what is AI’s. Of all the people I spoke to for this article, Gregory Mogel, MD, a battle-hardened radiologist and chief of radiology at Central Valley Kaiser Permanente, said it best. “Any radiologist that can be replaced by a computer should be.” Amen.