A Fully Automated Machine Learning Approach for Predicting Contrast Phases in CT Imaging

Abstract

Objectives: Accurately acquiring and assigning contrast-enhanced phases in computed tomography (CT) is relevant both for clinicians and for artificial intelligence orchestration, which must select the most appropriate series for analysis. However, this information is commonly extracted from the CT metadata, which is often incorrect. This study aimed to develop an automatic pipeline for classifying intravenous (IV) contrast phases and for identifying contrast media in the gastrointestinal tract (GIT).

Materials and Methods: This retrospective study used 1200 CT scans collected at the investigating institution between January 4, 2016, and September 12, 2022, and 240 multicenter CT scans from The Cancer Imaging Archive for external validation. The open-source segmentation algorithm TotalSegmentator was used to identify regions of interest (pulmonary artery, aorta, stomach, portal/splenic vein, liver, portal vein/hepatic veins, inferior vena cava, duodenum, small bowel, colon, left/right kidney, urinary bladder), and machine learning classifiers were trained with 5-fold cross-validation to classify IV contrast phases (noncontrast, pulmonary arterial, arterial, venous, and urographic) and GIT contrast enhancement. The performance of the ensembles was evaluated using the area under the receiver operating characteristic curve (AUC) with 95% confidence intervals (CIs).

Results: For the IV phase classification task, the following AUC scores were obtained on the internal test set: 99.59% [95% CI, 99.58–99.63] for the noncontrast phase, 99.50% [95% CI, 99.49–99.52] for the pulmonary-arterial phase, 99.13% [95% CI, 99.10–99.15] for the arterial phase, 99.8% [95% CI, 99.79–99.81] for the venous phase, and 99.7% [95% CI, 99.68–99.7] for the urographic phase. On the external dataset, mean AUCs of 97.33% [95% CI, 97.27–97.35] and 97.38% [95% CI, 97.34–97.41] were achieved across all contrast phases for the first and second annotator, respectively. Contrast media in the GIT was identified with an AUC of 99.90% [95% CI, 99.89–99.9] on the internal dataset, whereas on the external dataset, AUCs of 99.73% [95% CI, 99.71–99.73] and 99.31% [95% CI, 99.27–99.33] were achieved with the first and second annotator, respectively.

Conclusions: The integration of open-source segmentation networks and classifiers effectively classified contrast phases and identified GIT contrast enhancement using anatomical landmarks.
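The described pipeline — segmentation-derived features from anatomical regions of interest, fed into a classifier evaluated with 5-fold cross-validation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature set (here, simulated mean Hounsfield units per ROI), the classifier choice (a random forest), and all names are assumptions; in practice the ROI masks would come from TotalSegmentator.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Hypothetical feature extraction: one mean-HU value per segmented ROI.
# In the real pipeline, TotalSegmentator masks would be applied to the CT
# volume to compute these; here the features are simulated for illustration.
ROIS = ["pulmonary_artery", "aorta", "portal_vein", "liver",
        "inferior_vena_cava", "kidney_left", "kidney_right", "urinary_bladder"]
PHASES = ["noncontrast", "pulmonary_arterial", "arterial", "venous", "urographic"]

rng = np.random.default_rng(42)
n_scans = 500
X = rng.normal(loc=60.0, scale=30.0, size=(n_scans, len(ROIS)))  # mean HU per ROI
y = rng.integers(0, len(PHASES), size=n_scans)                    # phase label per scan

# 5-fold cross-validation of a classifier, scored with one-vs-rest ROC AUC,
# mirroring the evaluation scheme described in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc_ovr")
print(f"Mean CV AUC: {scores.mean():.3f}")
```

With real contrast-dependent HU features the cross-validated AUC would be high (as reported in the abstract); on the random features simulated here it hovers near chance, which is the expected behavior of the evaluation scheme itself.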

Publication
Investigative Radiology
Giulia Baldini
Data Science

My research interests include deep learning, algorithms, and software development.

René Hosch
Team Lead

My research interests include distributed computer vision, generative adversarial networks, and image-to-image translation.

Cynthia Schmidt
Medical Research

My research interests include medical digitalization, computer vision, and radiology.

Katarzyna Borys
Data Science

My research interests include Deep Learning, Computer Vision, Radiomics, and Explainable AI.

Obioma Pelka
Postdoc Data Science

Felix Nensa
Lead

My research interests include medical digitalization, computer vision, and radiology.

Johannes Haubold
Medical Lead

My research interests include virtual sequencing, non-invasive tumor decoding and clinical AI integration.