Robust Application of New Deep Learning Tools: An Experimental Study in Medical Imaging
Medical imaging plays a vital role in diagnosing a wide range of diseases across the healthcare system. Robust and accurate analysis of medical data is crucial for physicians to reach a successful diagnosis. Traditional diagnostic methods are highly time-consuming and prone to human error; adopting computer-aided diagnosis methods reduces cost and improves performance. The performance of traditional machine learning (ML) classification methods depends heavily on feature extraction and selection, which are sensitive to colors, shapes, and sizes, making classification tasks in medical imaging complex. Deep learning (DL) tools have become an alternative that overcomes the drawbacks of traditional methods based on handcrafted features. In this paper, a new DL approach based on a hybrid deep convolutional neural network model is proposed for the automatic classification of several different types of medical images. Specifically, gradient-vanishing and over-fitting issues are addressed to improve the model's robustness through a combination of tested techniques: residual links, global average pooling layers, dropout layers, and data augmentation. Additionally, we employ parallel convolutional layers that apply different filter sizes to the same input and concatenate the resulting feature maps, with the aim of achieving a richer feature representation. The proposed model is trained and tested on the ICIAR 2018 dataset to classify hematoxylin and eosin-stained breast biopsy images into four categories: invasive carcinoma, in situ carcinoma, benign tumors, and normal tissue. The experimental results show that our method outperforms several state-of-the-art methods, achieving accuracies of 93.2% and 89.8% on the image-wise and patch-wise classification tasks, respectively.
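The parallel-convolution idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's actual implementation: the kernel sizes (1, 3, 5), filter count, and random weights are illustrative assumptions; the point is only that several filter sizes are applied to the same input and the resulting feature maps are concatenated along the channel axis.

```python
import numpy as np

def conv2d_same(x, kernels):
    """Naive 2D convolution with zero 'same' padding.
    x: (H, W) input; kernels: (n, k, k) filter bank (k odd). Returns (H, W, n)."""
    n, k, _ = kernels.shape
    p = k // 2
    xp = np.pad(x, p)
    H, W = x.shape
    out = np.empty((H, W, n))
    for f in range(n):
        for i in range(H):
            for j in range(W):
                out[i, j, f] = np.sum(xp[i:i + k, j:j + k] * kernels[f])
    return out

def parallel_conv_block(x, filter_sizes=(1, 3, 5), n_filters=4, seed=0):
    """Apply parallel convolutions with different kernel sizes to the same
    input, then concatenate the branch outputs along the channel axis."""
    rng = np.random.default_rng(seed)  # illustrative random weights
    branches = []
    for k in filter_sizes:
        kernels = rng.standard_normal((n_filters, k, k)) * 0.1
        branches.append(conv2d_same(x, kernels))
    return np.concatenate(branches, axis=-1)

x = np.random.default_rng(1).standard_normal((16, 16))
y = parallel_conv_block(x)
print(y.shape)  # (16, 16, 12): 3 branches x 4 filters, concatenated
```

In a real network each branch would be a learned `Conv2D` layer and the concatenated tensor would feed the next block; the multi-scale receptive fields are what give the richer feature representation.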
Moreover, we fine-tuned our model on a two-class task, distinguishing normal from abnormal diabetic foot ulcer (DFU) images, to test its robustness. In this case the model achieved an F1 score of 94.80% on the public DFU dataset and 97.3% on the private DFU dataset. Lastly, transfer learning (TL) was adopted to validate the proposed model on a multi-class task, classifying six different wound types; TL significantly improved accuracy, from 76.92% when the model was trained from scratch to 87.94%. Our proposed model has proven its suitability and robustness across several medical imaging tasks involving complex and challenging scenarios.
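The transfer-learning setup above (reusing a pretrained backbone and training only a new head for the six wound classes) can be sketched as follows. This is a hypothetical NumPy toy, not the paper's architecture: the "backbone" is a fixed random projection standing in for pretrained convolutional features, and only the softmax head receives a gradient update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen pretrained backbone weights (never updated below)
# and a fresh classification head for 6 classes (the trainable part).
W_backbone = rng.standard_normal((32, 16))
W_head = np.zeros((16, 6))

def features(x):
    """Frozen feature extractor: (batch, 32) -> (batch, 16), ReLU."""
    return np.maximum(x @ W_backbone, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One cross-entropy gradient step on the head only.
x = rng.standard_normal((8, 32))
labels = rng.integers(0, 6, size=8)
f = features(x)                      # backbone output, treated as constant
p = softmax(f @ W_head)
grad = p.copy()
grad[np.arange(8), labels] -= 1.0    # dL/dlogits for softmax cross-entropy
W_head -= 0.1 * (f.T @ grad) / 8     # update the head; backbone untouched
```

Freezing the backbone preserves the general features learned on the source task, which is why fine-tuning converges to much higher accuracy than training the whole network from scratch on a small target dataset.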
Alzubaidi, L., Fadhel, M.A., Al-Shamma, O. et al. Robust application of new deep learning tools: an experimental study in medical imaging. Multimed Tools Appl 81, 13289–13317 (2022). https://doi.org/10.1007/s11042-021-10942-9