- Title
- Alzheimer’s disease detection using deep learning based on multimodal neuroimaging
- Creator
- Ebrahimi, Amir
- Relation
- University of Newcastle Research Higher Degree Thesis
- Resource Type
- thesis
- Date
- 2021
- Description
- Research Doctorate - Doctor of Philosophy (PhD)
- Description
Alzheimer’s disease (AD), a leading cause of death in developed countries, is a fatal, irreversible and progressive neurodegenerative disorder that slowly destroys memory. Although a substantial body of research has reported computer-aided algorithms for detecting the disease, a practically applicable method within a clinical setting is yet to be made available. Owing to their success with images, deep learning models have gained considerable attention in AD detection since 2013. Nevertheless, AD detection is challenging and requires a highly discriminative feature representation because brain patterns across diagnostic groups are highly similar. This research focuses on AD detection via magnetic resonance imaging scans (MRIs) using deep learning. First, a systematic literature review was conducted on more than 100 articles to investigate the current state and future trends of AD detection. The review showed that deep learning methods have demonstrated revolutionary performance in AD detection and were reported to be more accurate than other machine learning methods. Through the review, methodologies were developed to address knowledge gaps and establish the research objectives. The general idea in the reviewed papers was to employ transfer learning and two-dimensional (2D) convolutional neural networks (CNNs). To apply a 2D CNN to three-dimensional (3D) MRI volumes, each MRI scan is split into 2D image slices, and a CNN is trained over the image slices. The CNN can then classify each 2D image slice from a 3D MRI independently. Although 2D CNNs can discover spatial dependencies within an image slice, they cannot capture the dependencies among 2D image slices in a 3D MRI volume. To address this issue, two studies were conducted based on the established research objectives. 
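The slice-based approach described above, in which each 2D slice is classified independently and the results are combined into one subject-level decision, can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the toy `classify_slice` function and majority-vote aggregation are assumptions standing in for a trained 2D CNN and whatever fusion rule a given paper used.

```python
# Hypothetical sketch of slice-based classification: a 2D classifier
# scores each slice of a 3D MRI independently, and the subject-level
# label is obtained by majority vote over the per-slice predictions.

def classify_slice(score):
    """Stand-in for a trained 2D CNN: returns 1 (AD) if score > 0.5."""
    return 1 if score > 0.5 else 0

def classify_subject(slice_scores):
    """Aggregate independent per-slice decisions by majority vote."""
    votes = [classify_slice(s) for s in slice_scores]
    return 1 if sum(votes) > len(votes) / 2 else 0

# Example: five axial slices from one subject (hypothetical scores)
scores = [0.9, 0.8, 0.3, 0.7, 0.6]
print(classify_subject(scores))  # majority of slices vote AD -> 1
```

Note that the vote treats every slice as an independent sample, which is exactly the limitation the abstract points out: no information flows between slices of the same volume.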
The first study proposed to integrate various types and configurations of recurrent neural networks (RNNs) and temporal convolutional networks (TCNs) with pre-trained 2D CNNs to model the relationships among the sequence of image slices for each subject. Transfer learning was used to initialise the CNN model. In this way, the proposed models classified each subject based on all slices of that subject's MRI. The study started with a pre-trained 2D CNN model followed by an RNN, enabling the 2D CNN + RNN to capture the dependencies among the sequence of 2D image slices in an MRI. The results showed a 2% improvement in classification accuracy compared with 2D CNNs. The issue with this model is that the feature extraction step in the 2D CNN is independent of the classification in the RNN. To resolve this issue, a TCN model was designed to extract features from 2D image slices and model their dependencies simultaneously. This model improved the classification accuracy by 3% compared with the 2D CNN + RNN. Another way to merge feature extraction and classification is to employ 3D CNNs. In the second study, 3D CNNs were employed instead of 2D CNNs to make voxel-based decisions. However, the main disadvantage of 3D CNNs is that their structure is highly complex and requires more training parameters, which may cause overfitting. In addition, they cannot benefit from transfer learning using datasets with millions of 2D images, such as ImageNet. The second study expanded the idea of transfer learning from datasets with 2D images to 3D MRI scans. In particular, 3D ResNet-18 improved the classification accuracy by 5% compared with the TCN. The results showed that training an RNN on features extracted from a CNN can improve the accuracy of the whole system. The findings reveal that combining feature extraction and classification in a single entity improves classification accuracy. 
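The claim that 3D CNNs require more training parameters than 2D CNNs can be made concrete with a back-of-the-envelope count. The layer sizes below (3x3 vs 3x3x3 kernels, 64 input and 128 output channels) are hypothetical choices for illustration, not figures from the thesis.

```python
# Back-of-the-envelope comparison of trainable weights in a 2D vs a 3D
# convolutional layer, illustrating why 3D CNNs are more parameter-hungry
# and therefore more prone to overfitting on small medical datasets.

def conv_params(kernel_dims, in_channels, out_channels):
    """Weights in a conv layer: prod(kernel dims) * C_in * C_out + biases."""
    n = in_channels * out_channels
    for d in kernel_dims:
        n *= d
    return n + out_channels  # one bias per output channel

p2d = conv_params((3, 3), 64, 128)      # 2D layer: 3x3 kernel
p3d = conv_params((3, 3, 3), 64, 128)   # 3D layer: 3x3x3 kernel
print(p2d, p3d)  # 73856 221312 -- the 3D layer has roughly 3x the weights
```

The extra depth dimension multiplies the kernel volume, so each 3D layer carries roughly three times the weights of its 2D counterpart at these sizes, compounding across a deep network.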
Further, this research emphasises the importance of transfer learning for achieving higher classification accuracies. The best-performing model for distinguishing AD patients from healthy people was the pre-trained 3D ResNet-18, with 96.88% accuracy, 100% sensitivity and 94.12% specificity. The achieved accuracy opens a way for future research on detecting not only AD but also other diseases. This research deepens the understanding of AD detection using brain scans and of how such scans can be combined with deep learning models to differentiate subjects with AD from healthy people. The main contributions of this research are to provide a comprehensive systematic literature review of AD detection and address the knowledge gaps in the literature; discuss various types of biomarkers, pre-processing steps, data analysis methods, software platforms and datasets; compare deep learning models, methods and configurations through the literature review; propose a sequence-based approach as a better alternative to slice-based and voxel-based approaches; expand the idea of transfer learning from datasets with 2D images to 3D CNNs; and implement and compare several deep learning models and configurations, including 2D and 3D CNNs, various types of RNNs, and TCNs. The models proposed in this thesis can be improved for use in clinical settings to detect AD at early stages.
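The three reported figures for the best model are mutually consistent. As a sanity check, one hypothetical confusion matrix (15 AD subjects all detected, 16 of 17 healthy subjects correctly cleared) reproduces them exactly; the actual test-set counts are not stated in this record, so these numbers are an assumption for illustration only.

```python
# Sanity check of the reported figures (96.88% accuracy, 100% sensitivity,
# 94.12% specificity) using one hypothetical confusion matrix that is
# consistent with them. tp/fn count AD subjects, tn/fp count healthy ones.

def metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)  # true positive rate: AD correctly detected
    specificity = tn / (tn + fp)  # true negative rate: healthy correctly cleared
    return accuracy, sensitivity, specificity

acc, sens, spec = metrics(tp=15, fn=0, tn=16, fp=1)
print(f"{acc:.2%} {sens:.2%} {spec:.2%}")  # 96.88% 100.00% 94.12%
```

Sensitivity of 100% with specificity below 100% means every AD patient was flagged while one healthy subject was misclassified as AD, a trade-off often preferred in screening settings.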
- Subject
- Alzheimer's disease; auto-encoder; convolutional neural network; deep learning; MRI; recurrent neural network; ResNet; temporal convolutional network; transfer learning; thesis by publication
- Identifier
- http://hdl.handle.net/1959.13/1473242
- Identifier
- uon:48984
- Rights
- Copyright 2021 Amir Ebrahimi
- Language
- eng
- Full Text
| File | Description | Size | Format |
|---|---|---|---|
| ATTACHMENT01 | Thesis | 12 MB | Adobe Acrobat PDF |
| ATTACHMENT02 | Abstract | 337 KB | Adobe Acrobat PDF |