Computer-aided detection (CAD) has made its mark in the cancer field, most notably in breast cancer detection. In lung cancer, CAD systems can improve radiologists' performance in pinpointing pulmonary nodules. But CAD systems in lung cancer have suffered from shortcomings, such as only slight to modest improvements in sensitivity, an increase in false positives, and issues with the level of automation as well as the ability to detect nodules of different types and shapes.
Another potential pitfall: How can CAD results, which are rendered in the language of radiologists, be matched with the language of oncologists?
Bram Stieltjes, MD, PhD, of the University Hospital Basel in Switzerland, and colleagues sought to answer that question by developing in-house CAD image-processing software for PET/CT lung studies. The goal of a study they presented at the 2017 Radiological Society of North America (RSNA) meeting in Chicago was to decrease tumor, node, metastasis (TNM) misstaging and the erroneous treatment planning that can follow for lung cancer patients.
Stieltjes spoke with ASCO Reading Room about the impetus for this research, and the plans for launching the CAD program at his institution as soon as early 2018.
The group evaluated reports from the radiology information system (RIS) of 145 non-small cell lung cancer (NSCLC) patients who underwent a primary staging FDG-PET/CT exam at the facility. TNM stage (7th edition) was determined from the text information in the reports by a radiologist and a nuclear medicine physician.
The team then downloaded the corresponding PET and CT image data sets from the university's picture archiving and communication system (PACS). These data sets were transferred to 3D-slicer-based prototype software. As the authors explained, the image-processing application allows for manual segmentation of tumors, lymph nodes, and metastases using a set of labels that includes location information and morphological TNM features.
Stieltjes et al reported that in a substantial number of patients, not enough information was provided by the original report to extract a distinct TNM stage:
- T: 18.6% (27/145)
- N: 10.3% (15/145)
- M: 2.1% (3/145)
“Furthermore, in 29 cases, there was a considerable discrepancy between the report and annotation: upstaging due to the annotations: T: n=11, N: n=6, M: n=4; downstaging due to the annotations: T: n=3; N: n=4; M: n=1.”
Applying the image-processing tool and using a segmentation-based approach to the image data sets allowed the team to extract TNM information in all patients, the researchers explained, adding that their approach with tumor labels allows for a clear definition of cancerous lesions in a standardized and reproducible manner.
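To make the idea concrete, the structured annotations the team describes can be pictured as lesion-level records that carry a location and the morphological features needed for staging. The sketch below is purely illustrative: the field names, and the simplified size-only T descriptor, are this article's assumptions, not the authors' actual schema (TNM 7th-edition size cutoffs for the primary tumor are 3 cm and 7 cm).

```python
# Hypothetical sketch of a structured lesion label of the kind described
# in the study: each annotation records the lesion category, an anatomical
# location, and a diameter measurement, so a TNM descriptor can be derived
# directly rather than reconstructed from free text.
from dataclasses import dataclass

@dataclass
class LesionLabel:
    category: str        # "tumor", "node", or "metastasis"
    location: str        # anatomical site, e.g. "right upper lobe"
    diameter_mm: float   # longest axis, needed for T-staging

def t_size_descriptor(diameter_mm: float) -> str:
    """Simplified, size-only T descriptor using TNM 7th-edition cutoffs
    (<=3 cm -> T1, >3-7 cm -> T2, >7 cm -> T3); real staging also
    considers invasion, location, and satellite nodules."""
    if diameter_mm <= 30:
        return "T1"
    if diameter_mm <= 70:
        return "T2"
    return "T3"

labels = [
    LesionLabel("tumor", "right upper lobe", 45.0),
    LesionLabel("node", "ipsilateral hilar", 12.0),
]

# With the diameter stored in the label, the T descriptor is reproducible.
primary = next(l for l in labels if l.category == "tumor")
print(t_size_descriptor(primary.diameter_mm))  # -> T2
```

The point of such a record is the one the researchers make: when every lesion is labeled with its measurements at annotation time, the stage can always be extracted, whereas a free-text report may simply omit the diameter.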
“We could demonstrate that the proper TNM stage could not be derived from unstructured PET/CT reports in roughly 30% of the cases. This most commonly affected the T-stage because of missing diameter measurements, but also the N and M stages.”
The investigators concluded that the labels generated with this image-processing tool could be translated directly into clinical decision-making, such as at tumor boards, and left less room for interpretation.
The following interview with Stieltjes, who is the head of research coordination for radiology and nuclear medicine at the Basel institution, has been edited for length and clarity.
Why did your group decide to undertake this study?
Stieltjes: We were trying to standardize our output, and perhaps implement the much-hyped machine learning (what some might call a type of artificial intelligence) in routine clinical practice. I've been [at Basel] about 3 years, and in our first year we looked for projects that might be suitable to achieve this goal. Lung tumor staging was one such project where the data was there, and there was a substantial number of patients.
The clinical work time spent preparing a lung PET/CT report is very high; we [radiologists] take about an hour per report. So there is much that can be gained in terms of efficiency. My gut feeling was that … even when we put in the work on interpretation and reporting, we don't always deliver the information that is necessary for the oncologists.
There’s an ongoing conversation in radiology about the usefulness of structured reporting versus the free-text reporting described in your study. Do you see structured reporting being of some benefit along with the image-processing tool used in your research?
Stieltjes: Structured reporting does not touch the radiological workflow as it is right now; it only formalizes the way text is inputted. The radiologist is still interpreting the image and generating a report. However, the link between the text and the image is very weak.
What we are presenting is a sort of third way: You first define all the important anatomical features in a structured way with labels. Then the job of the radiologist is not scrolling and talking, but scrolling and clicking: placing a label on an image in the study series. This then goes to a database.
As a recipient of the report, I can learn and understand what the radiologist sees; I’m able to have a direct link between his or her knowledge and the actual place in the image where their skills have seen an area of importance.
Is the system described in the study ready for clinical deployment?
Stieltjes: We’re on the brink of putting the annotation portion for lung tumors into clinical routine — not the decision-making part. What we’re trying to do is change the radiologist workflow, so that they’re labeling as part of their report — so we focused on getting the annotation part done first. We hope to have it in practice in the early part of 2018.
The next part is the capability to detect all lung lesions, which the radiologist will review, and we’re confident that we’ll have that in the next generation of the application.
What feedback have you received from oncologists and other clinical colleagues about this system?
Stieltjes: This project has been conducted in partnership with our oncology department. The oncologists are really looking forward to it for two reasons: One, TNM information was not always included in reports; and two, images were not always included in reports. Now labeled images and the TNM stage will be part of every report.
Of course, oncologists want to have pathology for correlation, so our TNM stage will be provisional until they get a pathological confirmation.