Annotation of enhanced radiographs for medical image retrieval with deep convolutional neural networks.

Abstract

The number of images taken per patient scan has rapidly increased due to advances in software, hardware and digital imaging in the medical domain. Accurate automatic annotation systems are therefore needed, as manual annotation is impractical, time-consuming and error-prone. This paper presents modeling approaches for automatically classifying and annotating radiographs using several classification schemes, which can further be applied for automatic content-based image retrieval (CBIR) and computer-aided diagnosis (CAD). Different image preprocessing and enhancement techniques were applied to augment grayscale radiographs by virtually adding two extra layers. The Image Retrieval in Medical Applications (IRMA) code, a mono-hierarchical multi-axial code, served as the basis for this work. To evaluate the image enhancement techniques extensively, five classification schemes including the complete IRMA code were adopted. The deep convolutional neural network systems Inception-v3 and Inception-ResNet-v2 were trained, along with Random Forest models with 1000 trees using extracted Bag-of-Keypoints visual representations. Classification performance was evaluated on the ImageCLEF 2009 Medical Annotation Task test set. The applied visual enhancement techniques improved annotation accuracy in all classification schemes.
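The abstract does not spell out here which enhancement filters produce the two extra layers, so the sketch below is a minimal illustration of the augmentation idea, assuming CLAHE and global histogram equalization as the two virtual layers; the function name and parameters are hypothetical. The point is simply that stacking the original grayscale radiograph with two enhanced versions yields a three-channel input that pretrained RGB networks such as Inception-v3 or Inception-ResNet-v2 can be fine-tuned on.

```python
# Minimal sketch of the three-channel augmentation described above:
# the original grayscale radiograph plus two virtually added enhancement
# layers. CLAHE and global histogram equalization are illustrative
# assumptions, not necessarily the exact filters used in the paper.
import cv2
import numpy as np

def augment_radiograph(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)             # channel 1: original
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    local_eq = clahe.apply(gray)                               # channel 2: local contrast
    global_eq = cv2.equalizeHist(gray)                         # channel 3: global contrast
    # Stacking gives an HxWx3 array suitable for three-channel classifiers.
    return np.dstack([gray, local_eq, global_eq])
```

The stacked image would then feed the fine-tuned CNN classifiers, while the Random Forest baseline operates instead on Bag-of-Keypoints histograms extracted from the same radiographs.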

Obioma Pelka
Postdoc Data Science
Felix Nensa
Lead

My research interests include medical digitalization, computer vision and radiology.