Abstract
Alzheimer’s disease is a progressive neurodegenerative disorder that affects millions of people worldwide, leading to cognitive decline and memory loss. Early detection is crucial for effective management and treatment. This study focuses on the early detection of Alzheimer’s disease from MRI scans using deep learning techniques. The research utilizes the Alzheimer’s Dataset, which consists of four classes: Mild Demented, Moderate Demented, Non Demented, and Very Mild Demented. Three deep learning models were employed for classification: InceptionV3, MobileNet, and a proposed Deep CNN model. The performance of these models was evaluated using metrics such as precision, recall, F1 score, and accuracy. InceptionV3 achieved an accuracy of 95%, while MobileNet demonstrated improved performance with an accuracy of 97%. The proposed Deep CNN model outperformed both, achieving a remarkable accuracy of 99%. These results highlight the potential of deep learning models, particularly the proposed Deep CNN model, in accurately classifying different stages of Alzheimer’s disease from MRI scans, thereby facilitating early diagnosis and intervention.
Introduction
Alzheimer’s disease (AD) is a chronic neurodegenerative condition. AD is a progressive neural disorder that affects cognitive abilities, particularly memory and language functions (Abbas et al., 2022). The slight alterations in the hippocampus of affected brains are extremely difficult to identify. Degenerative symptoms, such as memory loss and language impairment, are noticed only as the condition progresses and damages specific brain nerve cells (AlSaeed & Omar, 2022). There is no proven cure for the illness, and its precise cause is unknown. Nonetheless, researchers have found that 10–15% of people with mild cognitive impairment develop Alzheimer’s disease each year (Grundman et al., 2004). The processes underlying Alzheimer’s-related dementia give rise to numerous neurological disorders, and a sizable portion of the global population is afflicted by these illnesses (Goenka & Tiwari, 2022). Depending on the degenerative changes in the brain, AD and related diseases impair brain function and cause memory loss. The underlying reasons for these alterations, which restrict the brain’s capacity for cognition, are unknown (Battineni et al., 2021). Over time, cognitive decline accelerates, leading to the development of AD. Because AD is fatal, incurable, and irreversible, its marked increase places a significant burden on the health care system (Anand et al., 2017). AD can result in more than memory loss or cognitive decline; in the later stages of the illness, a patient may experience eating and walking difficulties, which can lead to death (Lima & Hamerski, 2019). Brain imaging tests are performed on patients with different dementias to observe the loss of brain cells linked to AD (Ottoy et al., 2019). Early detection and treatment are therefore highly important in alleviating symptoms and slowing the disease’s course.
Figure 1.1: Normal Brain vs Alzheimer’s Brain
One popular method for imaging the brain is magnetic resonance imaging (MRI). This method is frequently applied to comprehend the physiological functions in Alzheimer’s patients (Weller & Budson, 2018). Many researchers use artificial intelligence models to help professionals identify and classify Alzheimer’s disease in MRI images and to attain significant classification efficiency values (Acharya et al., 2019). An appropriate treatment plan can be started as soon as computer-aided systems identify Alzheimer’s at an early stage (Jimenez-Mesa et al., 2020).
A popular deep learning technique for image classification and segmentation of medical images is the Convolutional Neural Network (CNN) (Barrera, Merino, et al., 2023). Convolution and pooling layers are used in the feature-extraction portion of CNNs, while fully connected layers are used in the classification phase. During training, CNNs automatically learn everything from scratch using raw images (Barrera, Rodellar, et al., 2023). When labeled data are scarce, overfitting is frequently an issue because deep and wide CNN architectures have many parameters (Kaya, 2024). Moreover, medical image analysis faces a number of problems, including a lack of labeled datasets, noise, uneven class distributions, and high inter-class similarity (Krizhevsky et al., 2017).
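The convolution-then-pooling behaviour described above can be illustrated with a minimal NumPy sketch of one filter followed by max pooling. This is a toy example only: real CNNs learn their kernels during training and stack many such layers, and the vertical-edge kernel and 6×6 image here are purely illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling, discarding any ragged border."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    pooled = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return pooled.max(axis=(1, 3))

# A 6x6 "image" with a vertical dark-to-bright edge, and a kernel that
# responds to exactly that transition.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])

features = conv2d(image, kernel)     # shape (5, 5); strong response at the edge
pooled = max_pool(features, size=2)  # shape (2, 2) after 2x2 pooling
```

The pooled map keeps the strong edge response while discarding its exact position, which is the translation-tolerance that makes pooled CNN features useful for classification.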
The dataset, hyperparameter combinations, and architectural design all have a significant impact on how well CNNs perform (Kaya et al., 2023). Numerous hyperparameters affect a CNN architecture, including the learning rate, optimizer, batch size, dropout rate, number of epochs, number of convolution layers, number of filters in each convolution layer, filter sizes, number of fully connected layers, and the neuron count in each fully connected layer (Shin et al., 2016). The lengthy training process of CNNs makes it challenging to adjust every possible combination of hyperparameters manually. Only a few hyperparameters, including dropout, batch size, loss function, and learning rate, have been optimized in some studies (Baghdadi et al., 2022). A review of the current research reveals an absence of automatic optimization of the number of convolution layers and the number of filters in each layer, which are crucial components of CNN architectures for the accurate classification of Alzheimer’s (Deepa & Chokkalingam, 2022).
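The combinatorial burden of manual tuning is easy to see: even a modest grid over the hyperparameters listed above yields hundreds of full training runs. The values below are illustrative choices, not the grid used in this thesis.

```python
import itertools

# A modest search space over common CNN hyperparameters (illustrative values).
grid = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [16, 32, 64],
    "dropout_rate": [0.3, 0.5],
    "num_conv_layers": [2, 3, 4],
    "filters_per_layer": [32, 64, 128],
    "optimizer": ["adam", "sgd"],
}

# Every combination requires one full CNN training run to evaluate:
# 3 * 3 * 2 * 3 * 3 * 2 = 324 runs for this small grid alone.
combinations = list(itertools.product(*grid.values()))
```

Adding one more hyperparameter, or one more candidate value per hyperparameter, multiplies the count again, which is why exhaustive manual search is impractical and automatic optimization of architectural hyperparameters matters.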
Research Problem
Due to the progressive and irreversible nature of AD, its early detection is an important research topic. Usually starting with mild cognitive impairment, Alzheimer’s disease gradually progresses to severe dementia, which has a significant negative impact on a person’s quality of life. Early diagnosis is essential because it enables prompt interventions that can enhance patient outcomes, reduce the rate at which symptoms worsen, and provide better disease management. The difficulty lies in accurately interpreting complex MRI images in order to find the disease’s early warning indicators (Vrahatis et al., 2023). Modern computational techniques like machine learning and deep learning are necessary to improve the sensitivity and accuracy of MRI-based diagnoses because traditional diagnostic methods frequently fail to recognize these early indicators (Fernandes et al., 2020). A further challenge is the variation in MRI imaging procedures and quality amongst various medical facilities, which results in inconsistent data that may impair the efficacy of diagnostic models (Kerwin et al., 2022). Due to the high dimensionality and complexity of the data, it is difficult to develop computational models that can analyze large-scale MRI datasets (Sheng et al., 2022).
Research Objectives
The primary objectives of this thesis are as follows:
To propose an advanced deep CNN model that can accurately identify early-stage Alzheimer’s disease by analyzing MRI scans.
To implement a deep CNN model on comprehensive MRI datasets, ensuring it can handle diverse and large-scale imaging data effectively.
To evaluate the accuracy, precision, recall and F1 score of the developed deep learning models in detecting early-stage Alzheimer’s disease and compare with existing techniques, including InceptionV3 and MobileNet.
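The evaluation metrics named in the third objective can be computed directly from a model’s true and predicted labels. The sketch below uses plain Python with hypothetical labels, abbreviating the four dataset classes as ND, VMD, MD, and MOD; in practice a library such as scikit-learn would typically be used.

```python
def classification_metrics(y_true, y_pred, classes):
    """Per-class precision, recall, and F1, plus overall accuracy."""
    metrics = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = {"precision": precision, "recall": recall, "f1": f1}
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return metrics, accuracy

# Hypothetical labels for six scans across the four classes.
y_true = ["ND", "ND", "VMD", "MD", "MOD", "ND"]
y_pred = ["ND", "VMD", "VMD", "MD", "MOD", "ND"]
metrics, accuracy = classification_metrics(y_true, y_pred,
                                           ["ND", "VMD", "MD", "MOD"])
```

Reporting precision and recall per class matters here because the dataset classes are imbalanced: a model could reach high overall accuracy while missing the rare Moderate Demented cases entirely.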
Research Questions
This research seeks to answer the following questions:
How can advanced deep learning models be developed to accurately identify early-stage Alzheimer’s disease by analyzing MRI scans?
How can these deep learning models be implemented on comprehensive MRI datasets to ensure they handle diverse and large-scale imaging data effectively?
How accurate, sensitive, and robust are the developed deep learning models in detecting early-stage Alzheimer’s disease?
Research Motivation
Alzheimer’s is a neurodegenerative disease that progresses and cannot be cured. Interventions that can slow the disease’s progression, improve patient care, and improve quality of life must be implemented as soon as possible. The slight brain alterations linked to AD’s early stages are frequently difficult for traditional diagnostic techniques to detect, delaying diagnosis and missing out on chances for early intervention. Though their complexity necessitates sophisticated analytical tools, MRI scans, which offer detailed images of brain structures, hold promise for identifying these early indicators. Deep learning provides a potent answer to this problem because of its ability to evaluate big, complicated datasets and spot patterns that might be invisible to human observers. The goal of this research is to use deep learning to create accurate, early-stage diagnostic instruments that can revolutionize Alzheimer’s care by enabling earlier diagnoses, better treatment outcomes, and ultimately hope for those who may be affected by this terrible illness.
Research Scope
In this work, deep learning models, specifically CNNs, are developed to analyze MRI scans and identify minute anatomical changes in the brain, especially those linked to the onset of Alzheimer’s disease. To ensure that the models can generalize across various populations and imaging conditions, the scope of the work includes applying the models to large and diverse MRI datasets. To show the models’ efficacy in clinical settings, the research also assesses the models’ accuracy, sensitivity, and specificity in comparison to conventional diagnostic techniques. With the potential to significantly enhance the lives of patients and advance the field of neurodegenerative disease evaluation, this research aims to contribute to the early and accurate diagnosis of Alzheimer’s disease by focusing on both the technical development of the models and their practical application in healthcare.
Proposed Contribution of the Dissertation
This dissertation aims to advance the field of neuroimaging by creating robust, accurate, and clinically applicable deep-learning models specifically designed to identify early-stage Alzheimer’s Disease (AD) using MRI scans and deep learning. The goal of this study is to close a significant gap in the current diagnostic techniques, which frequently fail to identify the subtle brain abnormalities linked to the early onset of Alzheimer’s disease. The dissertation aims to develop new algorithms that can analyze complex MRI data more efficiently than existing methods by utilizing deep learning, offering higher sensitivity and specificity in early AD detection. This work aims to improve these models’ generalizability across various populations and imaging conditions, increasing their dependability and practicality in real-world clinical settings. The dissertation advances our knowledge of how cutting-edge computational methods can be combined with medical imaging to enhance patient outcomes. This research may result in earlier and more precise diagnoses, allowing for prompt interventions that could halt the progression of Alzheimer’s disease and greatly enhance patients’ quality of life.
Dissertation Organization
The thesis is organized into five chapters. Chapter 1 presents an overview of the work, including the introduction, research questions, research objectives, and the contribution of the work. Chapter 2 presents the literature review, which examines previous studies in the domains of deep learning, neuroimaging, and Alzheimer’s diagnosis; this review identifies gaps in the field and serves as a basis for the proposed study. Chapter 3 describes the methodology, which includes the architecture of the deep learning models developed, the processes for training and validating the models to guarantee their accuracy and robustness, and the selection and preprocessing of MRI datasets. Chapter 4 presents the results, discussing performance metrics such as specificity, sensitivity, and accuracy, and compares them with current diagnostic techniques. Chapter 5 presents the conclusion and future work.
Chapter Summary
An overview of the fundamental components of the research on the early detection of AD using MRI scans and deep learning is given in the chapter summary of the introduction. It starts by stressing how crucial an early diagnosis is to managing Alzheimer’s disease and how effective early intervention can be in improving patient outcomes. The limitations of current diagnostic techniques are then covered in the introduction, with a focus on their inability to detect the subtle brain changes that characterize the early stages of AD. It is discussed how MRI scans may be able to provide fine-grained pictures of these alterations, as well as the difficulties in interpreting such complicated data. Due to its ability to analyze large MRI datasets with high accuracy, deep learning is introduced as a promising solution. The research goals, scope, and significance are outlined in the chapter’s conclusion.
Background and Literature Review
Background
Over the past few decades, advances in computational methods and neuroimaging technology have led to a significant evolution in the quest for early detection of AD using MRI scans and deep learning (de Souza et al., 2021). Memory loss and a steady decline in cognitive function are the hallmarks of Alzheimer’s disease, which was initially identified by Dr. Alois Alzheimer in 1906. Early studies concentrated on comprehending the disease’s pathological characteristics, such as amyloid plaques and neurofibrillary tangles, but it was still very difficult to identify these alterations before the onset of symptoms.
When MRI technology first became widely used in neuroimaging in the 1980s, it offered a non-invasive way to see high-resolution images of brain structures (Moser et al., 2017). However, interpretative techniques and resolution limited early detection of subtle changes. Advances in imaging techniques and higher resolution scans were made possible by improvements in the 1990s and 2000s (Kabasawa, 2022). The early diagnosis of Alzheimer’s disease was not possible with traditional diagnostic methods (Fernández Montenegro et al., 2020). In the early 2010s, machine learning opened up new avenues for MRI data analysis, but there were still constraints (Sánchez Fernández & Peters, 2023). Convolutional neural networks in particular, which are used in deep learning, enabled automatic feature extraction from MRI scans, increasing early detection sensitivity and accuracy (Musallam et al., 2022).
Since MRI scans can produce detailed images of brain structures, they have become an important tool in early AD detection (Veitch et al., 2019). Critical brain regions, like the hippocampus, which is frequently among the first to exhibit abnormalities in early Alzheimer’s, can exhibit atrophy on structural magnetic resonance imaging (Chandra et al., 2019). Nevertheless, interpreting MRI data to find these early, minute changes presents many difficulties (Dubois et al., 2021). More sophisticated analytical techniques are required because traditional visual inspection and manual analysis methods frequently fall short of capturing the subtle differences linked to early-stage AD (Sharma & Mandal, 2022). By adding more complex architectures, like deep residual networks and attention mechanisms, and fusing MRI data with other biomarkers and clinical data, recent research has concentrated on improving these deep learning techniques (Zhu et al., 2021). This integration addresses the variability in MRI data across various populations and imaging protocols and improves diagnostic precision.
Literature Review
This paper (Arafa et al., 2024) aims to create a comprehensive framework based on CNN and deep learning techniques. Four phases are applied to the AD medical images: (I) data preparation and preprocessing; (II) data augmentation; (III) cross-validation; and (IV) deep learning-based feature extraction and classification. Two strategies are used across these phases. The first technique uses a basic CNN architecture. The second uses the pre-trained VGG16 model, trained on the ImageNet dataset and then applied to other datasets; transfer learning and fine-tuning are employed to adapt the pre-trained model. The two approaches are assessed and contrasted using seven performance metrics. In addition to analyzing Alzheimer’s disease more effectively than recent efforts, the suggested technique requires fewer labeled training samples and less domain prior knowledge. In the experiments, a considerable improvement in classification performance was achieved across all diagnosis groups. The experimental results show that the proposed designs are simple structures with low memory consumption, reduced overfitting, and low computational complexity and training time.
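The cross-validation phase (III) mentioned above partitions the data so that every sample serves once for validation and otherwise for training. A minimal sketch of k-fold index splitting is shown below; it is illustrative only (real pipelines typically use a library routine and stratify by class), and the function name is my own.

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k near-equal folds; each fold serves
    once as the validation set while the rest form the training set."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i
                 for idx in fold]
        splits.append((train, val))
    return splits

# 10 samples, 5 folds: five (train, validation) index pairs.
splits = k_fold_indices(10, 5)
```

Averaging a metric over the k validation folds gives a less optimistic performance estimate than a single train/test split, which matters on small medical datasets.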
In this paper (El-Assy et al., 2024), we present a convolutional neural network (CNN) design that classifies Alzheimer’s disease (AD) using magnetic resonance imaging (MRI) data collected from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. The network uses two different CNN models that are combined in a classification layer; each model has different filter sizes and pooling layers. The multi-class problem is addressed with three, four, and five categories, for which the suggested CNN architecture attains remarkable accuracies of 99.43%, 99.57%, and 99.13%, respectively. These high accuracies show how well the network extracts and identifies pertinent elements from MRI scans, allowing for the accurate classification of AD stages and subtypes.
This study’s (Fathi et al., 2024) primary goal was to develop a deep learning-based ensemble approach for the early detection of AD from MRI images. The dataset was gathered, the data was preprocessed, individual and ensemble models were created, the models were evaluated using ADNI data, and the trained model was validated using a local dataset. The suggested ensemble strategy was chosen after a comparison of different ensemble scenarios. Ultimately, the ensemble model was created by combining the six top individual CNN-based classifiers. The analysis revealed the highest accuracy rates of 96.37%, 94.22%, 99.83%, 93.88%, and 93.92% for the respective classification groups, including four-way, three-way, EMCI/LMCI, and NC/AD. Validation on the local dataset showed a three-way classification accuracy of 88.46%.
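An ensemble such as the six-classifier combination described above needs a rule for merging the individual predictions. A common choice is majority voting, sketched here in plain Python; this is an illustrative sketch of the general technique, not the fusion rule actually used by Fathi et al.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions sample by sample.

    `predictions` is a list of prediction lists, one inner list per model,
    all of equal length (one label per sample)."""
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(model_preds[i] for model_preds in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three hypothetical models voting on three scans.
model_outputs = [
    ["AD", "NC", "AD"],   # model 1
    ["AD", "AD", "NC"],   # model 2
    ["NC", "NC", "NC"],   # model 3
]
ensemble = majority_vote(model_outputs)
```

Voting tends to help when the base classifiers make partly uncorrelated errors; weighted or probability-averaged variants are also common when the models output class scores rather than hard labels.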
In this work (Ravi et al., 2024), we have measured and classified the stage of AD using the Alzheimer’s Disease Neuroimaging Initiative (ADNI2) sMRI image dataset. Convolutional Neural Networks (CNNs) have become a popular tool for medical image analysis in recent years. This work proposes the optimal pre-trained model that is capable of predicting the patient’s stage while concentrating on implementing various Deep Learning algorithms for the multi-class classification of AD MRI images. For the AD class, ResNet-50v2 is observed to provide the best accuracy of 91.84% and f1-score of 0.97. When estimating a model’s class, visualization techniques like Grad-CAM and Saliency Map are applied to the model that offered the highest level of accuracy in identifying the image’s region of focus.
Our research offers a comprehensive approach to early AD detection by utilizing the hippocampus and the VGG16 model with transfer learning. An important early affected region associated with memory, the hippocampus, is essential for categorizing patients into three groups: cognitively normal (CN), which refers to people who do not have cognitive impairment; mild cognitive impairment (MCI), which indicates a mild decline in cognitive abilities; and AD. With training enhanced by sophisticated image preprocessing methods, our model uses the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset and achieves exceptional accuracy (testing 98.17%, validation 97.52%, training 99.62%). Our competitive advantage is strengthened by the strategic application of transfer learning, which includes the hippocampal approach and, most significantly, a progressive data augmentation strategy.
In this study (Matlani, 2024), hybrid deep learning (DL) approaches are used to detect Alzheimer’s disease (AD) automatically. Improved adaptive Wiener filtering (IAWF) is used for image pre-processing to enhance the acquired images. Next, features are extracted using a powerful hybrid technique, PCA-NGIST (Principal Component Analysis with a Normalized GIST global image descriptor), which extracts the important characteristics from images without the need for image segmentation. Subsequently, the Improved Wild Horse Optimization algorithm (IWHO) is employed to select the optimal features. Ultimately, a hybrid Bidirectional Long Short-Term Memory with Artificial Neural Network (BiLSTM-ANN) is used to identify the illness.
This work (Castellano et al., 2024) proposes and assesses classification models in unimodal and multimodal frameworks utilizing amyloid positron emission tomography (PET) scans and 2D and 3D MRI images. The results show that models with volumetric data learn representations more efficiently than those with simply two-dimensional images. Additionally, model performance is much improved by combining multiple modalities compared to single-modality techniques. State-of-the-art performance was achieved on the OASIS-3 cohort. Furthermore, explainability evaluations using Grad-CAM show that the model bases its predictions on important regions connected to Alzheimer’s disease (AD), highlighting its potential to help understand the etiology of the illness.
A middle-fusion multimodal paradigm is suggested in this work (Kim et al., 2024) for the early diagnosis of AD. The suggested multimodal model uses a depth-wise separable convolution block without an activation function to extract features without loss. After that, middle fusion is applied using shared-weight convolution and mixed skip-connection blocks, both of which are intended to learn the intricate interactions between modalities. Compared to previous research, the suggested methodology offers three primary innovations. 1) A multimodal middle-fusion model is suggested for AD early detection. 2) The whole ADNI series is used to examine the proposed model, including tau protein PET and Aβ PET from the ADNI2 and ADNI3 datasets, as well as T1-weighted magnetic resonance imaging (T1w MRI) and 18F-FluoroDeoxyGlucose positron emission tomography (FDG PET) from the ADNI1 dataset. 3) The hippocampus, middle temporal, and inferior temporal regions, all of which are known to be impacted in the early stages of AD, are the subjects of a unique region-of-interest (ROI) extraction technique.
This research (Sorour et al., 2024) proposes a unique DL-based Alzheimer’s disease approach (AD-DL) for early AD detection. The dataset consists of brain magnetic resonance imaging (MRI) images, which are utilized to assess and verify the proposed model. The approach consists of pre-processing, training, and final assessment phases for the DL model. Five DL models with binary classification and self-sufficient feature extraction are presented. The models fall into two categories: CNNs without data augmentation (without-Aug), and models with data augmentation (with-Aug), comprising CNNs-with-Aug, CNNs-LSTM-with-Aug, CNNs-SVM-with-Aug, and transfer learning using VGG16-SVM-with-Aug. The primary objective is creating a model with the optimal detection accuracy, recall, precision, F1 score, training time, and testing time.
This research (Shanmugavadivel et al., 2023) aims to explore deeper spatial contextual structural information in order to create a novel network model that can diagnose or predict Alzheimer’s disease (AD) with more accuracy. In this study, a spatial context network for AD classification is developed by learning the multi-level structural aspects of brain MRI images using a 3D convolutional neural network. The model exhibits good stability, accuracy, and generalization ability, according to the experimental data. The classification accuracy of the experimental method was 76.3% in the MCI/CN comparison, 74.9% in the AD/MCI comparison, and 92.6% in the AD/CN comparison.

This study (Erdogmus & Kabakus, 2023) suggests a brand-new Convolutional Neural Network (CNN) as an affordable, quick, and precise fix. Firstly, the features fed into the proposed novel model were generated using a gold-standard dataset, DARWIN, which was proposed for the detection of AD through handwriting. This dataset was then used to train and assess the newly suggested model. The experimental results showed that the suggested novel model could achieve an accuracy of up to 90.4%, surpassing the accuracies of state-of-the-art baselines comprising a total of 17 commonly used classifiers.
This study (Yao et al., 2023) aims to give a thorough overview of current advancements in deep learning techniques used to classify different stages of Alzheimer’s disease based on brain MRI scans, with an emphasis on early diagnosis. The work also discusses possible obstacles and future research possibilities in this rapidly evolving topic, highlighting the shortcomings of the current research. This advancement has made it possible to create complicated models and algorithms that can analyze complex brain imaging data, improving the efficiency and accuracy of diagnosis. This development gives rise to hope for the revolutionary potential of AI-driven diagnostics in transforming the management of Alzheimer’s disease and opening the door to more successful treatment plans and better patient outcomes.
This work (Vrahatis et al., 2023) aims to examine some of the facts and the current situation of these approaches to AD diagnosis by leveraging the potential of these tools and utilizing the vast amount of non-invasive data. In addition to providing a platform for the development of more precise and dependable biomarkers, the wealth of data produced by non-invasive methods like blood component monitoring, imaging, wearable sensors, and biosensors also greatly lowers patient pain, psychological effects, the risk of complications, and expenses. However, there are difficulties with the computer analysis of the massive amounts of data produced, which can offer vital details for the early detection of AD. Therefore, tackling these issues requires integrating deep learning and artificial intelligence.
In this work (El-Latif et al., 2023), an enhanced lightweight deep learning model is presented for the precise identification of Alzheimer’s disease (AD) from MRI images. The suggested strategy combines feature extraction and classification into a single stage, removing the requirement for deeper layers and achieving strong detection performance without them. In addition, the suggested approach includes only seven layers, which reduces the system’s complexity and processing time compared to earlier deep models. The proposed model is tested on the publicly accessible Kaggle dataset, which is just 36 megabytes in size but contains a huge number of records. The model eclipsed previous ones, achieving an overall accuracy of 99.22% for binary classification and 95.93% for multi-classification tasks. The study is the first to incorporate every technique deployed on the publicly accessible Kaggle dataset for AD detection, allowing academics to work on a dataset with fresh difficulties.
This study (Shukla et al., 2023) offers novel pre-processing techniques that enhance classification algorithms and shorten the training duration of already-existing learning algorithms. The study suggests three learning algorithms for AD classification: random forest, XGBoost, and CNN, using a dataset from the ADNI. With a sensitivity of 97.60% and an accuracy rate of 97.57%, the suggested method proved to be successful in identifying and classifying AD. Promising futures for the diagnosis and treatment of brain diseases are presented by this research.
This work (Rana et al., 2023) proposes a novel Deep Learning (DL) methodology in which different pre-processing techniques are used on MRI images before feeding them into the model. The suggested method for detecting Alzheimer’s disease uses brain MRIs to do multi-class categorization by transfer learning. Four categories are created from the MRI images: non-dementia (ND), very mild dementia (VMD), moderate dementia (MOD), and mild dementia (MD). Both the model’s implementation and a thorough performance study are carried out. The results indicate that the model achieves an accuracy of 97.31%. The model performs better than the most recent models in terms of F-score, recall, accuracy, and precision.
The model employs the ConvNeXt network’s micro design (MD) approach, which investigates the microscopic effects of the activation function and batch normalization layer on the model (Li et al., 2023). MD can significantly increase classification ability while lowering model complexity. The suggested method’s accuracy for the AD/NC, AD/MCI, and MCI/NC binary classification tasks is 93.30%, 92.42%, and 92.03%, respectively. Ultimately, the suggested approach performs better than convolutional neural networks currently in use, including DenseNet, MobileNetV2, MobileNetV3, AlexNet, and GoogleNet.
The broad network-based model for early AD diagnosis (BLADNet) presented in this paper uses brain PET imaging and a unique wide neural network to improve the properties of FDG-PET that are extracted using 2D CNN (Duan et al., 2023). By adding additional BLS blocks without retraining the entire network, BLADNet may search for information over a large area, increasing the accuracy of AD categorization. Tests carried out on a dataset of 2,298 FDG-PET images of 1,045 participants from the ADNI database show that our approaches are better than those applied in earlier research on FDG-PET-based early diagnosis of AD. Specifically, our techniques produced cutting-edge outcomes in the FDG-PET classification of EMCI and LMCI.
The purpose of this research is to create a novel approach to identifying the illness in healthy people (Borkar et al., 2023). The goal of this project is to create a deep learning model that can distinguish between those who are healthy and those who are susceptible to Alzheimer’s disease. MRI scans can be used to extract different brain properties. The collected data is subsequently used to train the model. The results of the study suggest that this model can be used to screen for Alzheimer’s disease in people with normal cognitive functioning. It was also better than the existing diagnostic techniques. Our approach, which combines CNN and LSTM models with Adam optimization, may provide a non-invasive and economical substitute for existing methods while boosting accuracy.
Employing arbitrarily concatenated deep features from two pre-trained models that simultaneously learn deep features from brain functional networks from MRI images, we aim to clarify this issue in this research (Illakiya & Karthik, 2023). ResNet18 and DenseNet201 were the models we tested with to complete the AD multiclass classification challenge. The image’s discriminating region was identified using a gradient class activation map in order to facilitate the suggested model prediction. Recall, accuracy, and precision were employed to evaluate the suggested system’s performance. The results of the experimental research demonstrated that the suggested model could perform multiclass classification with 98.86% accuracy, 98.94% precision, and 98.89% recall.
In this article, a DL algorithm is applied to a neural network classifier with a VGG16 feature extractor to facilitate the early diagnosis of AD (Gupta et al., 2023). The two MRI datasets used for this purpose contain 6400 and 6330 images. For dataset 1, the accuracy, precision, recall, AUC, and F1-score are 90.4%, 0.905, 0.904, 0.969, and 0.904; for dataset 2, they are 71.1%, 0.71, 0.711, 0.85, and 0.71. Additionally, a comparison of the findings with earlier research indicates that the suggested model outperforms the others. Finally, this paper identifies the different machine learning (ML) and deep learning (DL) techniques that can be applied to the investigation of AD stage detection.
This study (Odusami et al., 2023) describes a novel approach to data interpretation that uses perturbation analysis and clustering to find high-performing features that the deep models have learned. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the authors show that deep models outperform shallow models such as support vector machines, decision trees, random forests, and k-nearest neighbors. Furthermore, they show that single-modality models achieve lower mean F1 scores, accuracy, precision, and recall than models that incorporate multi-modality data. In line with the established AD literature, their models identified the hippocampus and amygdala brain regions, and the Rey Auditory Verbal Learning Test (RAVLT), as the most notable features.
In this study (Khan et al., 2023), data for 85 normal control (NC), 70 early MCI (EMCI), 70 late MCI (LMCI), and 75 AD patients were obtained from the AD Neuroimaging Initiative (ADNI) database. Tissue segmentation was used to extract the grey matter (GM) tissue from each individual. The suggested approach is tested using preprocessed data in order to verify its validity. The greatest classification accuracy, 98.73%, is achieved on AD vs. NC, and the accuracy in distinguishing between EMCI and LMCI patients is 83.72%. The accuracy for the remaining classes is higher than 80%. In conclusion, a comparative study with other works demonstrates that the suggested model achieved better testing accuracy than the most advanced models.
This research (Erdogmus & Kabakus, 2023) offers a deep learning-based method for detecting Alzheimer's disease using the ADNI database of individuals with the condition. The dataset includes images of normal people and Alzheimer's patients from PET and fMRI tests. The authors first applied 3D-to-2D conversion and image scaling before using the VGG-16 convolutional neural network architecture to extract features. Lastly, SVM, linear discriminant, K-means clustering, and decision tree classifiers are utilized for classification. According to the experimental results, the fMRI dataset can be classified with an average accuracy of 99.95%, whereas the PET dataset can be classified with an average accuracy of 73.46%.
The use of AI-based CAD systems for AD and its stages is reviewed in this work, with an emphasis on structural MRI because of its low cost and non-ionizing nature (Zhao et al., 2023). The authors review key components of several AI methods relevant to AD, highlight findings from various research teams, analyze the difficulties at hand, and suggest future lines of inquiry. Ultimately, this would ideally support the creation of a diagnostic framework that might one day be applied to many forms of dementia in addition to AD.
In this work (Khan et al., 2022), the authors suggest a deep learning-based multiclass classification technique that differentiates between different stages for the early identification of Alzheimer's disease. The approach achieves an accuracy of 98.9% with an F1 score of 96.3%, is capable of classifying the 2D images acquired after effective pre-processing of the publicly available Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and substantially mitigates data obstacles through data enhancement. Multiple tests are conducted, and the overall results show that the proposed method outperforms the state-of-the-art methods in terms of overall performance.
In this study (Khalid et al., 2023), we have used MRI images from the ADNI 3 class, which has a total of 2480 AD, 2633 normal, and 1512 moderate cases, to develop Convolutional Neural Networks (CNNs) for the early diagnosis and classification of AD. When compared to numerous other relevant papers, the model performed well, with a noteworthy accuracy of 99%. Additionally, we contrasted the outcome with our earlier research, which used the OASIS dataset to apply machine learning algorithms. This revealed that deep learning approaches can be a better choice than traditional machine learning techniques when handling large amounts of data, such as medical data.
In this study (Khalid et al., 2023), we discovered that SGOT, ApoE, BNP, Eot3, RAGE, and A2M together might constitute a critical biomarker profile of early illness. Based on the panels that were found, illness detection models were developed that demonstrated sensitivity (SN) > 80%, specificity (SP) > 70%, and area under the receiver operating curve (AUC) of at least 0.80 at the prodromal stage of the disease (with improved performance at subsequent stages). At this point in the disease, existing machine learning models fared badly in contrast, indicating that the underlying protein panels might not be appropriate for early disease identification. Our findings show that non-amyloid-based biomarkers can be used to detect AD early.
In this paper (Abbas et al., 2023), we design a 3D VGG variant CNN to investigate the classification accuracy based on two publicly available data sets: OASIS and ADNI. In order to prevent information loss during the process of slicing 3D MRI into 2D pictures and analyzing them using 2D Convolutional filters, we employed 3D models. In order to improve the model’s efficacy and classification performance, we also preprocessed the data. Outperforming 2D network models, the suggested model achieved 73.4% classification accuracy on ADNI and 69.9% on the OASIS dataset with 5-fold cross-validation (CV).
This work (Francis & Pandian, 2021) aims to enhance the classification accuracy of the mild cognitive impairment convertible (MCIc) and cognitively normal (CN) classes using an ensemble model. The retrained network models employed with the ensemble method were Xception and MobileNet. The Alzheimer's Disease Neuroimaging Initiative (ADNI) data was used to assess the performance of the pre-trained and ensemble models. The accuracy rates of the MobileNet and Xception models were 89.89% and 89.23%, respectively, whereas the ensemble model attains a higher classification accuracy of 91.3%.
This study (Suganthe et al., 2021) created a comprehensive research model for predicting the diagnosis of Alzheimer's disease (AD), cognitive normalcy (CN), and mild cognitive impairment (MCI) based on input images from magnetic resonance imaging (MRI). Alzheimer's disease is characterized by a progressive loss of memory that leads to problems like a slow decline in behaviour, thinking, and social skills, which makes it challenging for a person to function independently. Four classes (Non-Demented, Very Mildly Demented, Mildly Demented, and Moderately Demented) were used in this paper. The authors combined Inception and ResNet formulations to diagnose Alzheimer's disease. With an accuracy of 79.12 percent, the suggested model outperforms the current one by a wide margin.
This study (Hussain et al., 2020) suggested a CNN-based model for the binary (Alzheimer/healthy) classification of brain MRI data related to the condition. Using the Open Access Series of Imaging Studies (OASIS) dataset, the suggested 12-layer CNN model’s performance was assessed and contrasted with pre-trained InceptionV3, Xception, MobilenetV2, and VGG architectures. Using the suggested 12-layer CNN model, 97.50% f1-score and 97.75% accuracy were attained.
Table 2.1: State of the art of Alzheimer detection
Reference | Methodology | Dataset | Evaluation Measure | Limitations |
(Arafa et al., 2024) | CNN and pre-trained VGG16 transfer learning | Various datasets; pre-trained on ImageNet | Seven performance metrics (not specified); significant improvement across all diagnostic groups | Requires fewer labeled training samples; resource consumption issues |
(El-Assy et al., 2024) | CNN | Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset | Accuracy: 99.43%, 99.57%, and 99.13% for different categories | Focused on extracting and identifying pertinent elements from MRI scans |
(Fathi et al., 2024) | CNN | ADNI dataset | Accuracy: 96.37%, 94.22%, 99.83%, 93.88%, and 93.92%. | Limited Dataset |
(Ravi et al., 2024) | Optimal pre-trained ResNet-50v2 model | ADNI2 MRI image dataset | Accuracy: 91.84% for AD class; F1-score: 0.97 | Focus on visualizing regions of interest in images |
(Matlani, 2024) | VGG16 model with transfer learning | ADNI dataset | Accuracy: 98.17% (testing), 97.52% (validation), 99.62% (training) | Focused on the hippocampal region; may not generalize well to other brain regions. |
(Castellano et al., 2024) | Hybrid DL approach using BiLSTM-ANN | Not specified | 98% | A hybrid approach may increase computational complexity. |
(Kim et al., 2024) | CNN | OASIS-3 cohort | 95% | The model relies heavily on the availability of volumetric data. |
(Sorour et al., 2024) | CNN | ADNI datasets | 95.6% | The model focuses on the hippocampus, middle temporal, and inferior temporal regions, and may not generalize well to other areas |
(Shanmugavadivel et al., 2023) | CNN, VGG-16, SVM | MRI images dataset | 94.8% | Focused on binary classification, it may not be suitable for multi-class scenarios. |
(Erdogmus & Kabakus, 2023) | 3D CNN | Brain MRI images | Accuracy: 76.3% (MCI/CN), 74.9% (AD/MCI), 92.6% | Limited to specific comparison groups, moderate accuracy in some comparisons. |
(Yao et al., 2023) | CNN | DARWIN dataset | Accuracy: 90.4% | Focuses on handwriting data, may not generalize to other forms of AD diagnosis. |
(El-Latif et al., 2023) | Deep learning | Brain MRI scans | 95.6% | Primarily a review; lacks experimental validation of techniques discussed. |
(Shukla et al., 2023) | CNN | Kaggle MRI dataset | Binary Classification Accuracy: 99.22%, Multi-class Classification Accuracy: 95.93% | Limited to a small, publicly available dataset; may not generalize to other datasets. |
(Rana et al., 2023) | Random Forest, XGBoost, and CNN. | ADNI dataset | Accuracy: 97.57%, Sensitivity: 97.60% | Focuses on specific algorithms; may not address all aspects of AD classification. |
(Li et al., 2023) | CNN | Brain MRI images dataset | Accuracy: 97.31% | Model focuses on transfer learning; may not explore other deep learning approaches. |
(Duan et al., 2023) | ConvNeXt network model | MRI images for AD/NC, AD/MCI, MCI/NC classification | Accuracy: 93.30% (AD/NC), 92.42% (AD/MCI), 92.03% | Focuses on specific comparisons, potential limitations in generalization to another dataset |
Research Gap
The variability and heterogeneity of MRI data across populations are a critical challenge that can result in inconsistent model performance and poor generalization. Current models struggle to accurately classify the earlier, subtler stages of the disease; for example, they focus on differentiating between Alzheimer's and mild cognitive impairment (MCI), which are specific stages of AD (Erdogmus & Kabakus, 2023). Furthermore, much of the research places a strong emphasis on optimizing accuracy metrics while ignoring the models' interpretability, which is essential for clinical adoption. To improve early detection capabilities, stronger models that can incorporate multi-modal data, combining MRI with other biomarkers, are also required (Li et al., 2023). Finally, deep learning models present practical challenges due to their high resource demands and computational complexity, especially in real-time or resource-constrained settings.
Chapter Summary
The chapter on literature review analyses the use of deep learning methods and MRI scans for the early diagnosis of Alzheimer’s disease. It goes over the many deep learning models, approaches, datasets, and assessment metrics that are employed. The study emphasizes how these models can help increase diagnostic precision, especially when it comes to differentiating Alzheimer’s from other forms of cognitive impairment. It also highlights areas in which further research is needed, including interpretability of the models, early-stage detection issues, and how to handle the variability of MRI data.
Methodology
Introduction
The methodology chapter describes a framework for early detection of AD from MRI scans using deep learning techniques. The dataset consisted of high-quality brain MRI images that had been preprocessed for image quality and alignment. Several deep learning models, including InceptionV3 and MobileNet, are used for feature extraction and AD classification. A proposed Deep Convolutional Neural Network (Deep CNN) is presented to address specific challenges in early AD detection. The methodology includes training, hyperparameter tuning, and optimization techniques for peak performance. Accuracy, precision, recall, F1-score, and AUC are some of the evaluation measures. The Deep CNN model is compared to EfficientNetB2, MobileNet, and InceptionV3 to demonstrate its superiority in early AD detection.
Dataset
The Alzheimer’s Dataset, which is used to detect Alzheimer’s disease early in MRI scans using deep learning, is a comprehensive collection that has been carefully curated to aid in distinguishing between different stages of cognitive impairment. The dataset is divided into four categories: Non-Demented (ND), Very Mild Demented (VMD), Mild Demented (MD), and Moderate Demented (MoD). These categories represent a progression from normal cognitive functioning to various stages of dementia, allowing for a more nuanced understanding of the disease’s progression, as shown in Figure 3.1. The dataset contains high-resolution brain MRI images that capture the structural changes associated with each stage of Alzheimer’s disease, making it an excellent source of information for deep learning models to learn discriminative features. The images are preprocessed to ensure uniformity, with steps such as skull stripping, intensity normalization, and resizing being critical for improving model training and performance. The Alzheimer’s Dataset allows for the development of robust and accurate deep learning models for early detection, which is critical for timely intervention and improved patient outcomes.
Figure 3.1: Dataset sample
Preprocessing
Preprocessing is an important step in the early detection of Alzheimer’s disease using MRI scans and deep learning. It consists of several steps that improve the quality of MRI images and ensure consistency across the dataset. Skull stripping removes non-brain tissues, whereas intensity normalization standardizes pixel values across all images. Resizing images to a uniform dimension ensures consistent input, particularly for architectures such as InceptionV3, MobileNet, and proposed deep CNN models. Rotation, flipping, and scaling are data augmentation techniques that artificially expand datasets to prevent overfitting. Histogram equalization improves contrast, allowing the deep learning model to detect small structural changes more easily. These preprocessing steps ensure that the data is clean, consistent, and enriched, allowing the model to learn meaningful patterns for accurate early detection.
Image Resizing
The dataset has been divided into training and validation sets for effective model training and evaluation. Originally, the images measured 240×320 pixels. To make them more suitable for image classification tasks, all the images have been uniformly resized to a common size of 224×224 pixels. This resizing is standard practice in image classification, as it facilitates efficient computation and training of Convolutional Neural Network (CNN) models. Resizing the images to a smaller dimension serves several purposes: most importantly, it reduces the computational demands associated with processing the images, since smaller images require fewer computations per layer, which makes training CNN models more manageable and efficient.
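In practice, resizing is done with a library call (e.g. Pillow's `Image.resize` or `tf.image.resize`). Purely as a library-free illustration of the idea, a nearest-neighbour resize of a placeholder 240×320 slice might look like:

```python
def resize_nearest(pixels, new_w, new_h):
    """Nearest-neighbour resize of a 2D list of pixel values."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [
        [pixels[r * old_h // new_h][c * old_w // new_w] for c in range(new_w)]
        for r in range(new_h)
    ]

# A placeholder "MRI slice" of 320 rows x 240 columns resized to 224x224
slice_240x320 = [[0] * 240 for _ in range(320)]
resized = resize_nearest(slice_240x320, 224, 224)
print(len(resized), len(resized[0]))  # 224 224
```

Real pipelines use bilinear or bicubic interpolation rather than nearest-neighbour, but the index-mapping idea is the same.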
Normalization
Normalization for Alzheimer's disease detection with CNNs involves a systematic process to optimize the model's performance in accurately identifying and classifying the disease. Manual adjustments are made to verify that the images match the required target_size = (224, 224) pixels. This consistency in image dimensions creates a uniform framework for the model to operate on, regardless of the original image sizes. By dividing each pixel value by 255, the raw pixel intensities, which span 0 to 255, are transformed into a normalized range from 0 to 1, as shown in Equation 3.1.
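The divide-by-255 step itself is a one-liner; a minimal sketch on a toy pixel row:

```python
def normalize(pixels):
    """Scale 8-bit pixel intensities (0-255) into the [0, 1] range."""
    return [[value / 255.0 for value in row] for row in pixels]

row = normalize([[0, 128, 255]])[0]
print([round(v, 3) for v in row])  # [0.0, 0.502, 1.0]
```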
Data Augmentation
Data augmentation is a pivotal aspect of optimizing the CNN for the detection of Alzheimer's disease. This technique involves generating diverse training samples by applying various transformations to the original Alzheimer's MRI images. Data augmentation techniques, such as random rotations, flips, and zooms, are employed to increase the diversity of the training dataset, reducing the risk of overfitting and enabling the CNN to generalize better to unseen data. Zooming involves resizing an image such that some parts of the image are magnified and others are shrunk; this creates variations of the original images and improves the model's ability to recognize structures at different scales. Horizontal flipping creates a mirrored version of the original image. Some tasks included in data augmentation are as follows:
Rotation: The image is rotated by a certain angle to simulate different orientations of the brain in the scan. This helps the model recognize Alzheimer's-related features independent of their orientation in the input image.
Scaling: The image is resized to be larger or smaller, simulating variations in the apparent size of brain structures. This allows the model to detect and recognize Alzheimer's-related features of different dimensions.
Flipping: The image is flipped horizontally or vertically, creating mirror images. This helps the model become invariant to left-right or up-down symmetries in the brain scans.
Brightness and Contrast Adjustment: The brightness and contrast of the image are modified to simulate changes in lighting conditions. This helps the model adapt to different lighting environments.
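In practice these transformations are usually applied with a library such as Keras's `ImageDataGenerator`. As a minimal, library-free illustration of what flipping and rotation do to a pixel grid:

```python
def flip_horizontal(img):
    """Mirror each row (left-right flip)."""
    return [row[::-1] for row in img]

def flip_vertical(img):
    """Reverse the row order (up-down flip)."""
    return img[::-1]

def rotate_90(img):
    """Rotate the grid 90 degrees clockwise."""
    return [list(col) for col in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
print(flip_horizontal(img))  # [[2, 1], [4, 3]]
print(rotate_90(img))        # [[3, 1], [4, 2]]
```

During training, each original scan can yield several such variants, which is how augmentation multiplies the effective dataset size without collecting new images.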
Data Splitting
Data splitting is a fundamental step in training and evaluating a CNN for Alzheimer’s disease detection. This process involves dividing the dataset into distinct subsets, typically training, validation, and testing sets. The images are split into train and test sets using slicing. Setting the percentage of images to use for testing here, the variable test percentage is set to 0.2, which means that 20% of the images will be used for testing, while the remaining 80% will be used for training. A flowchart of proposed work is shown in Figure 3.2.
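A minimal sketch of the 80/20 split on a list of hypothetical image file names, assuming a fixed seed for reproducibility:

```python
import random

def split_dataset(items, test_fraction=0.2, seed=42):
    """Shuffle and split a list of file paths into train and test lists."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Hypothetical scan file names
paths = [f"scan_{i:04d}.jpg" for i in range(100)]
train, test = split_dataset(paths)
print(len(train), len(test))  # 80 20
```

Shuffling before slicing matters: if the files are stored grouped by class, a plain slice would put whole classes into only one of the two sets.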
Figure 3.2: Flowchart of proposed work
Feature Extraction
Feature extraction is a technique used in image analysis to organize images into smaller sections for subsequent processing. In our research, we identify a significant number of characteristics that aid in the detection of patterns across a vast amount of data. The process involves utilizing the convolutional layers of the CNN to automatically learn and extract hierarchical features from the input images. These features may include intensity variations, texture patterns, and spatial relationships within the brain scans. The ability of CNNs to automatically extract relevant features contributes to their effectiveness in the identification of Alzheimer's disease.
Convolutional Neural Network (CNN)
CNNs have emerged as powerful tools in the domain of Alzheimer's disease detection, demonstrating exceptional capabilities in automated image recognition tasks. CNNs are employed to discern complex patterns and features within brain images, enabling the detection of Alzheimer's disease. The CNN architecture includes an input layer, a convolutional layer, a ReLU activation layer, a pooling layer, and a fully connected layer.
Input Layer
The input layer is the first layer of the CNN, and it takes the image as input. The input layer provides a set of images as input to the CNN. The input image is represented as [height × width × number of color channels]. The color channel denotes the type of image; for example, channels = 3 denotes an RGB image. The same data is run through data augmentation before being fed into the CNN.
Convolutional Layer
The convolutional layer plays a crucial role in automatically extracting relevant features from Alzheimer's disease images, enabling the CNN to discern patterns associated with the disease. This layer utilizes filters or kernels that convolve across the input image, performing local receptive field operations to capture distinct features such as textures, edges, and intensity variations. The convolutional layer takes as input the brain images obtained from the dataset. These images serve as the raw input on which the CNN performs convolution operations to identify relevant patterns and features associated with Alzheimer's disease. The output of the convolution operation is a set of feature maps. Each feature map corresponds to a particular filter and represents the areas of the input image where the filter detects relevant features. For a two-dimensional image, this operation is an element-wise multiplication between a local patch of the raw data and a two-dimensional matrix of weights. The convolutional layer is responsible for the extraction of low-level properties, such as edges, intensities, and the direction of gradients. Additional layers are added to capture the high-level elements that make up brain tissue texture, which are used to swiftly determine the stage of Alzheimer's disease.
RELU
The ReLU layer is also known as the activation layer. Because it introduces non-linearity into the network, a ReLU layer is applied after every convolutional layer. The ReLU activation function operates by replacing any negative input values with zero, as shown in Equation 3.3. By increasing the selective activation of neurons, ReLU improves the model's ability to identify critical properties in Alzheimer's images that are suggestive of specific disease stages.
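Equation 3.3's behaviour is trivial to state in code: ReLU passes positive values through unchanged and clamps negatives to zero.

```python
def relu(x: float) -> float:
    """ReLU activation: max(0, x), as in Equation 3.3."""
    return max(0.0, x)

print([relu(v) for v in [-2.0, -0.5, 0.0, 1.5]])  # [0.0, 0.0, 0.0, 1.5]
```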
Pooling layer
The Pooling Layer helps the CNN become more invariant to variations in scale, orientation, and position of disease-related features. By retaining the most prominent features and discarding redundant information, this layer contributes to the network’s ability to focus on the most critical aspects of the input images, such as distinctive patterns or irregularities associated with different diseases. The pooling layer is used to reduce the spatial dimensions.
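As an illustration of the spatial reduction described above, max pooling (one common pooling variant) keeps only the largest activation in each window. A minimal 2×2, stride-2 sketch:

```python
def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2 on a 2D feature map (even dimensions assumed)."""
    return [
        [max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
         for c in range(0, len(fmap[0]), 2)]
        for r in range(0, len(fmap), 2)
    ]

fmap = [[1, 3, 2, 4],
        [5, 6, 1, 0],
        [7, 2, 9, 8],
        [0, 1, 3, 4]]
print(max_pool_2x2(fmap))  # [[6, 4], [7, 9]]
```

Each 4×4 map becomes 2×2: the strongest response in every window survives, which is what makes the network tolerant to small shifts of a feature.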
Flatten Layer
The Flatten layer is used between the convolutional and fully connected layers to turn the multidimensional feature maps into a single one-dimensional feature vector, which facilitates the training process. During training, the weights associated with the connections between the Flatten layer and the subsequent dense layers are adjusted iteratively, allowing the model to learn discriminative patterns indicative of specific disease stages.
Fully connected Layer
Following the extraction of features by convolutional and pooling layers, the Fully Connected Layer processes this information to make predictions about the presence of Alzheimer’s disease. This layer connects every neuron to every neuron in the preceding layer, effectively flattening the hierarchical representations learned in the convolutional layers. The fully connected layer accepts feature vectors as input and uses them to classify input images using the function of softmax. In Alzheimer’s disease detection, the fully connected layer plays a crucial role within the CNN architecture. Also known as the dense layer, it is typically positioned towards the end of the network, following the convolutional and pooling layers that extract hierarchical features from the input data, which in this context are often medical images.
Output Layer
The output layer serves as the final stage of the network, where the learned features are translated into predictions for specific disease classes. The architecture typically culminates in a dense layer with a softmax activation function, matching the multi-class classification task of identifying the different stages of Alzheimer's disease. Equation 3.4 is used to calculate the feature map's output size, where N is the original input size, F is the kernel size, P is the padding, and S is the stride.
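Equation 3.4, O = (N - F + 2P)/S + 1, can be evaluated for a few typical settings:

```python
def conv_output_size(n: int, f: int, p: int, s: int) -> int:
    """Feature-map output size per Equation 3.4: O = (N - F + 2P) / S + 1."""
    return (n - f + 2 * p) // s + 1

print(conv_output_size(224, 3, 1, 1))  # 224 -> 3x3 kernel, padding 1, stride 1 preserves size
print(conv_output_size(224, 3, 1, 2))  # 112 -> stride 2 halves the map
print(conv_output_size(32, 5, 0, 1))   # 28  -> no padding shrinks the map
```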
InceptionV3 Architecture
The InceptionV3 model is a deep convolutional neural network architecture that is commonly used for the classification of images due to its efficiency and capacity to capture multiple scales of information via its unique inception modules. InceptionV3, developed by Google, is intended to maximize the effectiveness of computation while maintaining high performance by employing techniques such as simplified convolutions and auxiliary classifiers.
The InceptionV3 model serves as the foundation for feature extraction, with pre-trained weights from ImageNet utilizing the rich features learned from a large dataset. The model is initially frozen to avoid updates during the first training phase. To minimize the risk of overfitting, a Dropout Layer (0.25) is added, which randomly sets a fraction of input units to zero throughout training. The Flatten Layer reshapes 2D feature maps into 1D feature vectors, which is required for the transition from convolutional to fully connected layers. Following flattening, a Dense Layer (256 units with ReLU activation) is added to interpret high-level features and learn Alzheimer’s classification patterns, as described in Table 3.1. A Dense Layer (128 units, ReLU activation) is added to refine the previously learned features. A Dropout Layer (0.25) is added to further regularize the model and reduce the likelihood of overfitting. The final Dense Layer (4 units, Softmax activation) produces a probability distribution over four classes: Mild Demented, Moderate Demented, Non-Demented, and Very Mild Demented, with normalized probabilities summing to one.
Table 3.1: Parameters details of InceptionV3
Layer Type | Output Shape | Number of Parameters | Activation Function |
InceptionV3 | (None, 5, 5, 2048) | 21,802,784 | – (Pre-trained) |
Dropout | (None, 5, 5, 2048) | 0 | – |
Flatten | (None, 51200) | 0 | – |
Dense (256 units) | (None, 256) | 13,107,456 | ReLU |
Dense (128 units) | (None, 128) | 32,896 | ReLU |
Dropout | (None, 128) | 0 | – |
Dense (4 units) | (None, 4) | 516 | Softmax |
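The dense-layer parameter counts in Table 3.1 follow directly from the formula inputs × units + units (weights plus biases) and can be checked by hand:

```python
def dense_params(inputs: int, units: int) -> int:
    """Weights plus biases of a fully connected layer."""
    return inputs * units + units

flattened = 5 * 5 * 2048              # Flatten layer output: 51,200 features
print(dense_params(flattened, 256))   # 13107456  (Dense 256)
print(dense_params(256, 128))         # 32896     (Dense 128)
print(dense_params(128, 4))           # 516       (Dense 4)
```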
MobileNet Architecture
The MobileNet model is a deep convolutional neural network architecture optimized for mobile and embedded vision applications. It offers an efficient, lightweight solution for image classification tasks with fewer parameters and lower computational costs, making it ideal for applications that require real-time processing but have limited hardware resources.
Conv_base is a pre-trained MobileNet model for extracting features that uses depth-wise separable convolutions to minimize parameters and computational expenses. To take advantage of rich features, the model is loaded with weights that have been pre-trained on large datasets like ImageNet. To avoid overfitting, a Dropout Layer (0.25) is added, which randomly sets a fraction of input units to zero while training. The Flatten Layer converts 2D feature maps into 1D feature vectors, making it easier to transition from convolutional to fully connected layers. A Dense Layer (256 units, ReLU activation) is used to learn high-level patterns and Alzheimer’s disease-specific features from the features that were extracted. A Dense Layer (128 units, ReLU activation) is added to refine the learning process and improve classification accuracy. A Dropout Layer (0.25) is added to improve regularization. The final Dense Layer (4 units, Softmax activation) generates a probability distribution over the four Alzheimer’s disease classes, enabling the model to determine the most likely class for any given MRI scan. This approach improves the accuracy of classification for Alzheimer’s detection tasks.
Table 3.2: Parameter details of MobileNet
Layer Type | Output Shape | Number of Parameters | Activation Function |
MobileNet (Base) | (None, 7, 7, 1024) | 3,228,864 | – (Pre-trained) |
Dropout | (None, 7, 7, 1024) | 0 | – |
Flatten | (None, 50176) | 0 | – |
Dense (256 units) | (None, 256) | 12,863,872 | ReLU |
Dense (128 units) | (None, 128) | 32,896 | ReLU |
Dropout | (None, 128) | 0 | – |
Dense (4 units) | (None, 4) | 516 | Softmax |
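The savings from MobileNet's depth-wise separable convolutions can be illustrated with a quick parameter count: a standard k×k convolution uses k·k·C_in·C_out weights, whereas a depth-wise separable one uses k·k·C_in (depth-wise stage) plus C_in·C_out (1×1 point-wise stage). A sketch, ignoring biases:

```python
def standard_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Weights of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int = 3) -> int:
    """k x k depth-wise filter per input channel, then a 1 x 1 point-wise mix."""
    return k * k * c_in + c_in * c_out

print(standard_conv_params(256, 256))        # 589824
print(depthwise_separable_params(256, 256))  # 67840 (roughly 8.7x fewer)
```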
Proposed Deep CNN Model
The proposed Deep CNN Model for early Alzheimer’s disease detection from MRI scans is based on a deep CNN architecture that has been fine-tuned using additional dense layers in response to Alzheimer’s disease stage classification. The model starts with a base model (for example, a pre-trained CNN backbone) and then adds custom layers that have been developed to improve its ability to distinguish between Alzheimer’s disease stages: mild dementia, moderate dementia, non-demented, and very mild dementia.
The base model is a pre-trained deep CNN that extracts high-level features from MRI scans using knowledge from a large dataset, such as ImageNet. To prevent overfitting, the base model is initially frozen (not trainable). Its output is fed into a 256-unit dense layer, which learns abstract features through ReLU activation; an L2 kernel regularizer (regularizers.l2(0.001)) is applied to this layer to reduce overfitting, and a dropout layer with a rate of 0.4 follows it. The final dense layer consists of four units, each representing one of the four Alzheimer's disease classes: mild dementia, moderate dementia, non-demented, and very mild dementia, as shown in Figure 3.3. The model employs the Softmax activation function to generate a probability distribution across the four classes, enabling it to predict the most likely class for a given MRI scan. The model is compiled with the Adamax optimizer and a categorical cross-entropy loss function and evaluated using metrics such as accuracy and AUC, which provide information about overall correctness and the ability to distinguish between classes. Table 3.3 describes the parameter details of the proposed deep model.
Table 3.3: Parameter details of proposed Deep CNN model
Layer Type | Output Shape | Number of Parameters | Activation Function | Regularization |
Base Model (Pre-trained) | Varies | Varies (millions) | – | – |
Dense (256 units) | (None, 256) | 65,792 | ReLU | L2 (0.001) |
Dropout (0.4) | (None, 256) | 0 | – | – |
Dense (4 units) | (None, 4) | 1,028 | Softmax | – |
Figure 3.3: Proposed Deep CNN architecture
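As a minimal sketch of the Softmax step performed by the final four-unit layer (the logit values here are purely illustrative):

```python
import math

def softmax(logits):
    """Convert raw class scores into a probability distribution summing to one."""
    shifted = [v - max(logits) for v in logits]  # subtract max for numerical stability
    exps = [math.exp(v) for v in shifted]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for (Mild, Moderate, Non-Demented, Very Mild Demented)
probs = softmax([2.0, 1.0, 0.1, -1.0])
print(round(sum(probs), 6))     # 1.0
print(probs.index(max(probs)))  # 0 -> the first class is predicted
```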
Evaluation Measures
The use of evaluation metrics in Alzheimer’s disease detection is paramount for systematically assessing the performance and reliability of detection models. These metrics provide quantitative measures that offer insights into how well a model is functioning in identifying instances of Alzheimer’s disease from medical imaging data.
As shown in Equation 3.5, various performance indicators, such as accuracy, precision, recall, the confusion matrix, and the F1 score, are used to validate the expected output of Alzheimer's disease detection.
Where:
Accuracy is the fraction of correctly classified instances in the test set.
True positives (TP) are instances that are actually positive in the test set and are correctly labeled as positive by the classifier. True negatives (TN) are instances that are actually negative in the test set and are correctly labeled as negative by the classifier. False positives (FP) are instances that are actually negative in the test set but are incorrectly labeled as positive by the classifier. False negatives (FN) are instances that are actually positive in the test set but are incorrectly labeled as negative by the classifier.
As a result, the healthy (non-demented) samples form the positive class P, whereas the Alzheimer’s class samples form N, as shown in Equations 3.6 and 3.7. The specificity and sensitivity formulas are as follows:
Precision is the fraction of true positive instances among all instances predicted as positive by the classifier, as shown in Equation 3.8. It is computed by dividing the number of true positives by the total number of instances the classifier labeled positive.
Recall is the fraction of true positive instances among all instances that are actually positive in the test set, as shown in Equation 3.9.
Equation 3.10 shows that the F1 score is the harmonic mean of precision and recall, providing a combined measure of precision and recall.
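The measures in Equations 3.5–3.10 can be expressed directly in code. The following Python sketch computes accuracy, precision, recall (sensitivity), specificity, and F1 score from the four confusion-matrix counts; the counts in the example are hypothetical, not taken from the study's results.

```python
def classification_metrics(tp, tn, fp, fn):
    """Evaluation measures from the four confusion-matrix counts of one class."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction classified correctly
    precision = tp / (tp + fp)                   # correctness of positive calls
    recall = tp / (tp + fn)                      # sensitivity: positives found
    specificity = tn / (tn + fp)                 # negatives correctly rejected
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, specificity, f1

# Hypothetical counts: 470 TP, 1400 TN, 10 FP, 40 FN
acc, prec, rec, spec, f1 = classification_metrics(470, 1400, 10, 40)
print(f"acc={acc:.3f} prec={prec:.3f} rec={rec:.3f} spec={spec:.3f} f1={f1:.3f}")
```

The same routine applied per class, then averaged over the four classes, yields the macro-averaged scores reported in the results tables.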
Chapter summary
The third chapter examines the application of deep learning algorithms to Alzheimer’s disease detection from MRI scans. It compares the Deep CNN model against the MobileNet and InceptionV3 architectures, examining their structures, training techniques, and performance measures. The chapter also compares the Deep CNN model’s suitability for Alzheimer’s detection tasks with classic CNN-based methods.
Results and Discussion
Introduction
This section presents the results of evaluating the proposed method for Alzheimer’s disease detection. The efficiency of the proposed detection framework is validated using several evaluation criteria and comparisons with other models. TensorFlow and Python are used to implement the model described in this research; the complete experimental setup was built in Python using Anaconda, and Keras libraries are used to build, compile, and test the model.
Training data
The training data accounts for 80% of the dataset and is used to train the deep learning models: InceptionV3, MobileNet, and the proposed Deep CNN model. These models learn to extract features from MRI images, recognize patterns, and differentiate between the four classes of Alzheimer’s disease: Mild Demented, Moderate Demented, Non-Demented, and Very Mild Demented. The training procedure runs over many epochs with the goal of minimizing the loss function while maintaining accuracy. Dropout, regularization, and data augmentation techniques are used to avoid overfitting and ensure model generalizability. The remaining 20% of the data is used as a testing set to assess the models’ performance after training. The testing set includes MRI scans that the models did not see during training, allowing an unbiased assessment of their ability to generalize to new data. Efficacy is measured with metrics such as accuracy, precision, recall, F1-score, and AUC. By separating training and testing data, the measured performance reflects each model’s predictive potential and dependability in clinical situations for early Alzheimer’s disease detection.
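The 80/20 partition described above can be sketched as follows. This is a minimal pure-Python illustration that splits each class separately so both partitions keep the four-class balance; the study's exact splitting code is not given, so the stratified approach and the seed are assumptions.

```python
import random

def stratified_split(labels, train_frac=0.8, seed=42):
    """Return (train_indices, test_indices) with an 80/20 split per class."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):          # group sample indices by class
        by_class.setdefault(y, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)                     # randomize within each class
        cut = int(len(idxs) * train_frac)     # 80% boundary for this class
        train.extend(idxs[:cut])
        test.extend(idxs[cut:])
    return train, test

# Toy example: 100 scans for each of the four classes
labels = [c for c in range(4) for _ in range(100)]
train_idx, test_idx = stratified_split(labels)
print(len(train_idx), len(test_idx))          # 320 80
```

Stratifying per class matters here because the dataset's classes would otherwise risk being unevenly represented in the 20% test partition.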
InceptionV3 architecture
The InceptionV3 model’s results for the early diagnosis of Alzheimer’s disease from MRI scans show strong performance across all classes, with a high overall accuracy of 95%. The model was trained to categorize MRI images into four categories: Non-Demented (0), Very Mild Demented (1), Mild Demented (2), and Moderate Demented (3). Performance indicators such as precision, recall, and F1-score show how well the model identifies each category.
Recall measures the model’s capacity to identify every relevant instance, whereas precision shows how many of the model’s positive predictions are correct. The F1-score, the harmonic mean of precision and recall, offers a balanced assessment of the model’s performance. For class 0 (Non-Demented), the model produced very accurate predictions with few false positives and false negatives, achieving a precision of 0.99, recall of 0.98, and F1-score of 0.99. The model performed remarkably on class 1 (Very Mild Demented), achieving perfect scores across precision, recall, and F1-score (all 1.00). For class 2 (Mild Demented), the model obtained an F1-score of 0.91, with a precision of 0.84 and a recall of 1.00; although it detects all cases of mild dementia, it produces some false positives. For class 3 (Moderately Demented), the model produced an F1-score of 0.90, with a precision of 1.00 and a recall of 0.82, implying that while every prediction of moderate dementia is correct, some cases are missed, as evidenced by the somewhat reduced recall described in Table 4.1. With an overall accuracy of 95% on the test set, InceptionV3 demonstrates that MRI scans are a highly effective means of detecting Alzheimer’s disease early on.
Table 4.1: Evaluation measures of InceptionV3
Class | Precision | Recall | F1-Score | Support |
0 (Non-Demented) | 0.99 | 0.98 | 0.99 | 480 |
1 (Very Mild Demented) | 1.00 | 1.00 | 1.00 | 480 |
2 (Mild Demented) | 0.84 | 1.00 | 0.91 | 480 |
3 (Moderately Demented) | 1.00 | 0.82 | 0.90 | 480 |
Accuracy | | | 0.95 | 1920 |
The InceptionV3 model’s training accuracy rises steadily as it learns to classify MRI scans of Alzheimer’s patients into the four categories Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented, as shown in Figure 4.1. This illustrates the model’s capacity to identify intricate patterns and characteristics in MRI data. The upward trend in validation accuracy, shown in Figure 4.2, demonstrates the model’s ability to generalize to new data without appreciably overfitting the training set. As the model learns and optimizes its parameters, the training and validation losses gradually decrease. Because the validation loss remains flat or only slightly increasing while accuracy stays stable, the model retains high generalization performance on unseen data.
Figure 4.1: Training and validation accuracy of InceptionV3
Figure 4.2: Training and validation loss of InceptionV3
The InceptionV3 model is very accurate at categorizing Very Mild Demented and Non-Demented cases, according to the confusion matrix. Some confusion between the Mild Demented and Moderate Demented categories, however, suggests that the characteristics separating these Alzheimer’s stages share patterns that make them difficult for the model to discern, as shown in Figure 4.3. This insight can guide further improvement of the model’s performance, whether through additional data augmentation or more sophisticated feature extraction methods that sharpen class separation.
Figure 4.3: Confusion matrix of InceptionV3
MobileNet architecture
For class 0 (Non-Demented), the model obtained an F1-score of 0.98, a recall of 1.00, and a high precision of 0.96, indicating that MobileNet identifies non-demented instances with very few false negatives. For class 1 (Very Mild Demented), the model achieved perfect precision, recall, and F1-score (all 1.00), correctly identifying every sample with no errors. For class 2 (Mild Demented), the model obtained an F1-score of 0.94, a precision of 0.98, and a recall of 0.90; the somewhat reduced recall reflects some false negatives in which mildly demented samples were misclassified, despite the high precision. For class 3 (Moderate Demented), the model achieved a precision of 0.93, a recall of 0.96, and an F1-score of 0.94, indicating that despite some minor misclassifications, it is resilient in identifying cases of moderate dementia. The model’s overall accuracy of 0.97 indicates that it is reliable and effective for early Alzheimer’s disease identification from MRI scans, correctly classifying 97% of the samples across all categories. Table 4.2 describes the evaluation measures of MobileNet.
Table 4.2: Evaluation measures of MobileNet
Class | Precision | Recall | F1-Score | Support |
Non-Demented (0) | 0.96 | 1.00 | 0.98 | 480 |
Very Mild Demented (1) | 1.00 | 1.00 | 1.00 | 480 |
Mild Demented (2) | 0.98 | 0.90 | 0.94 | 480 |
Moderate Demented (3) | 0.93 | 0.96 | 0.94 | 480 |
Accuracy | | | 0.97 | 1920 |
The MobileNet model has demonstrated good training and validation accuracy for the early identification of Alzheimer’s disease from MRI scans. The model was fine-tuned on the Alzheimer’s Dataset, which is divided into four classes: Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented. Training and validation accuracies were monitored across epochs to assess performance and generalization. By the end of training, the model consistently showed high training accuracy, approaching 98%, while validation accuracy averaged nearly 97% in the final epochs, as shown in Figures 4.4 and 4.5. Over the course of the epochs, the training and validation losses decreased, indicating that the model was successfully refining its parameters to reduce training error. There was no discernible overfitting, since the validation loss stayed low and close to the training loss.
Figure 4.4: Training and validation accuracy of MobileNet
Figure 4.5: Training and validation loss of MobileNet
For early Alzheimer’s disease identification from MRI scans, the confusion matrix of the MobileNet model shows the relationship between true and predicted labels for the four classes: Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented. All 480 samples in the Non-Demented class were correctly predicted, with no false negatives. All 480 cases in the Very Mild Demented class were classified flawlessly; none were misclassified. In the Mild Demented class, 432 of the 480 samples were correctly classified and 48 were misclassified, as shown in Figure 4.6. In the Moderate Demented class, 461 samples were correctly classified and 19 were misclassified.
Figure 4.6: Confusion matrix of MobileNet
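The overall accuracy can be checked directly from the per-class counts in the confusion matrix: summing the correctly classified samples and dividing by the total number of test scans reproduces the reported 97% after rounding. A short worked computation:

```python
# Correct predictions per class, as read from the MobileNet confusion matrix
correct = {"Non-Demented": 480, "Very Mild Demented": 480,
           "Mild Demented": 432, "Moderate Demented": 461}
support = 480                                  # each class has 480 test scans

total_correct = sum(correct.values())          # 480+480+432+461 = 1853
total = support * len(correct)                 # 4 classes x 480 = 1920
accuracy = total_correct / total
print(f"{accuracy:.4f}")                       # 0.9651, i.e. 97% after rounding
```

Only 67 of the 1920 test scans are off the diagonal, almost all of them in the Mild and Moderate Demented classes.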
Proposed Deep CNN model
The proposed Deep CNN model for early diagnosis of Alzheimer’s disease from MRI scans performs admirably across all four classes: Non-Demented, Very Mild Demented, Mild Demented, and Moderate Demented. The model achieves a precision, recall, and F1-score of about 0.99 for each class, showing a near-perfect balance between correctly identifying instances and minimizing false positives. For the Non-Demented class (label 0), precision, recall, and F1-score are all 0.99 with a support of 400 samples, indicating that practically all actual Non-Demented cases were correctly predicted. Similarly, for the Very Mild Demented (label 1) and Mild Demented (label 2) classes, precision, recall, and F1-score were consistently high at 0.99, with 400 examples each. The Moderate Demented class (label 3) had a precision of 0.99, a recall of 0.98, and an F1-score of 0.99, with the minor decrease in recall indicating a few misclassifications. The model’s overall accuracy is an excellent 0.99, demonstrating its ability to classify the various stages of Alzheimer’s disease. These findings indicate that the proposed Deep CNN model is highly dependable in discriminating between different stages of dementia, highlighting its potential as a valuable tool for early identification of Alzheimer’s disease from MRI images. Table 4.3 describes the evaluation measures of the proposed Deep CNN.
Table 4.3: Evaluation measures of the proposed Deep CNN model
Class | Precision | Recall | F1-Score | Support |
Non-Demented (0) | 0.99 | 0.99 | 0.99 | 400 |
Very Mild Demented (1) | 0.99 | 0.99 | 0.99 | 400 |
Mild Demented (2) | 0.99 | 0.99 | 0.99 | 400 |
Moderate Demented (3) | 0.99 | 0.98 | 0.99 | 400 |
Accuracy | | | 0.99 | 1600 |
The Deep CNN model for early Alzheimer’s detection using MRI scans demonstrates substantial learning behavior and generalization capabilities. The model’s training accuracy rapidly grew, reaching nearly 99%, showing effective Alzheimer’s disease pattern learning. The training loss decreased, demonstrating the model’s capacity to reduce errors and optimize weights. During the validation phase, the model maintained strong performance, with validation accuracy tracking a trajectory similar to training accuracy. This illustrates the model’s capacity to learn well from training data and apply it efficiently to previously unseen validation data. The validation accuracy remained consistent, avoiding the overfitting problems frequent in deep learning models. After a few epochs, the validation loss dropped and stabilized, eventually matching the training loss, validating the model’s stability and ability to generalize across varied data sets, as shown in Figures 4.7 and 4.8.
Figure 4.7: Training and validation accuracy of the proposed Deep CNN model
Figure 4.8: Training and validation loss of proposed Deep CNN model
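The overfitting check described above, validation loss staying close to training loss, can be expressed as a simple heuristic: compare the two losses averaged over the last few epochs and flag a gap above some tolerance. This is an illustrative sketch, not a criterion used in the study; the window and tolerance values are assumptions.

```python
def overfitting_gap(train_loss, val_loss, window=3, tol=0.05):
    """Flag overfitting when the mean validation loss over the last
    `window` epochs exceeds the mean training loss by more than `tol`."""
    t = sum(train_loss[-window:]) / window
    v = sum(val_loss[-window:]) / window
    return (v - t) > tol

# Stable run: validation loss tracks training loss closely
print(overfitting_gap([0.9, 0.5, 0.3, 0.2], [0.95, 0.55, 0.33, 0.22]))  # False
# Diverging run: validation loss climbs while training loss keeps falling
print(overfitting_gap([0.9, 0.4, 0.2, 0.1], [0.9, 0.6, 0.7, 0.9]))      # True
```

Averaging over a small window rather than comparing single epochs makes the check robust to the epoch-to-epoch noise visible in loss curves such as Figure 4.8.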
The confusion matrix of the proposed Deep CNN model compares true and predicted labels across classes for early identification of Alzheimer’s disease from MRI images. The model’s high accuracy is evidenced by its correct classification of 99% of Mild Demented instances while maintaining 99% precision and recall for the other classes, as shown in Figure 4.9. The model’s capacity to distinguish between Alzheimer’s phases demonstrates its effectiveness in capturing complex patterns within MRI data.
Figure 4.9: Confusion matrix of proposed Deep CNN model
Discussion
In the discussion of the models used for early detection of Alzheimer’s disease from MRI scans, it is clear that the proposed Deep CNN model outperforms the others in terms of accuracy. The InceptionV3 model attained an accuracy of 95%, which is robust but somewhat lower than that of the other models; despite its advanced architecture and strong feature extraction capabilities, InceptionV3 may struggle to recognize the subtle variations between Alzheimer’s stages. The MobileNet model, known for its efficiency and speed, obtained an accuracy of 97%, exhibiting strong classification performance with a solid balance of accuracy and computational economy. The proposed Deep CNN model, however, outperformed both with a remarkable 99% accuracy, demonstrating a superior ability to recognize and classify MRI data into the appropriate Alzheimer’s stages, as described in Table 4.4. This high accuracy suggests that the proposed Deep CNN model successfully captured the complex patterns and features in the MRI scans, making it highly dependable for early detection.
Table 4.4: Comparison of model accuracies
Models | Accuracy |
InceptionV3 | 95% |
MobileNet | 97% |
Proposed model | 99% |
Figure 4.10: Comparison of model accuracies
Limitations
Deep learning has made enormous strides in the early identification of Alzheimer’s disease from MRI images, although it still has some limitations. These include the requirement for high-quality, annotated datasets, which can be difficult to collect and may not accurately represent real-world scenarios. Variability in MRI protocols, image resolution, and patient demographics can all affect model performance, potentially resulting in biases or limited generalization. Furthermore, deep learning models require significant computational resources and lengthy training times, which may impede their widespread deployment in clinical settings. The interpretability of these models is also a concern, and they may struggle to differentiate the minor variations seen in the early stages of Alzheimer’s disease, reducing sensitivity and specificity.
Chapter summary
The fourth chapter presents experiments evaluating deep learning models for detecting Alzheimer’s disease from MRI images. It describes the dataset, training setup, and metrics used. The results report the accuracy, precision, recall, and F1-score for the InceptionV3, MobileNet, and proposed Deep CNN models. The chapter also presents a thorough review of the experimental data for medical image analysis, emphasizing the Deep CNN model’s potential to outperform classic CNN-based techniques.
Conclusion and future work
Conclusion
Alzheimer’s disease (AD) is a chronic neurodegenerative condition that affects cognitive abilities, particularly memory and language functions. It is difficult to identify slight alterations in the hippocampus in the brains of those with AD, and degenerative symptoms such as memory loss and language impairment are only noticed as the condition progresses, owing to damage to specific brain nerve cells. There is no proven cure for AD, and its precise cause is unknown. However, 10–15% of people with mild cognitive impairment develop AD every year. Alzheimer’s disease is a fatal, incurable, and irreversible disease that significantly impacts the global healthcare system. Early detection and treatment are crucial for alleviating the severity of symptoms and slowing the disease’s course.
MRI is a popular method for imaging the brain, and artificial intelligence models can help professionals identify and classify AD in MRI images. CNNs are a popular deep learning technique for image classification and segmentation from medical images. CNNs automatically learn everything from scratch using raw images, but overfitting is often an issue due to the large number of parameters in deep and wide CNN architectures. Medical image analysis faces problems such as a lack of labeled datasets, noise, uneven class distributions, and high inter-class similarity. Advances in computational methods and neuroimaging technology have led to a significant evolution in the quest for early detection of AD using MRI scans and deep learning. Memory loss and a steady decline in cognitive function are hallmarks of Alzheimer’s disease, which was initially identified by Dr. Alois Alzheimer in 1906. Advances in imaging techniques and higher resolution scans made it possible to detect subtle changes in the early stages of AD. Convolutional neural networks, used in deep learning, have enabled automatic feature extraction from MRI scans, increasing early detection sensitivity and accuracy. MRI scans are crucial for early detection of AD, as they produce detailed images of brain structures. However, traditional methods often struggle to interpret these changes, leading to the need for more sophisticated analytical techniques. Recent research has focused on improving deep learning techniques by integrating MRI data with other biomarkers and clinical data.
The Alzheimer’s Dataset is a comprehensive collection of high-resolution brain MRI images divided into four categories: Non-Demented (ND), Very Mild Demented (VMD), Mild Demented (MD), and Moderate Demented (MoD). These categories represent the progression from normal cognitive functioning to various stages of dementia, providing a nuanced understanding of the disease’s course. The dataset’s high-resolution images capture the structural changes associated with each stage of AD. CNNs have emerged as powerful tools in Alzheimer’s disease detection, demonstrating exceptional capabilities in automated image recognition tasks. A CNN architecture includes an input layer, a convolutional layer, a ReLU activation layer, a pooling layer, and a fully connected layer. The InceptionV3 model serves as the foundation for feature extraction, using pre-trained weights from ImageNet. A dropout layer is added to minimize the risk of overfitting. A flatten layer converts the 2D feature maps into 1D feature vectors, easing the transition from convolutional to fully connected layers. A dense layer then interprets these high-level features and learns Alzheimer’s disease-specific representations from them.
Conv_base is a pre-trained MobileNet model that extracts features using depth-wise separable convolutions to minimize parameters and computational expense, improving classification accuracy for Alzheimer’s detection tasks. The Deep CNN model for early Alzheimer’s disease detection from MRI scans is based on a deep CNN architecture fine-tuned with additional dense layers to distinguish between the Alzheimer’s disease stages: Mild Demented, Moderate Demented, Non-Demented, and Very Mild Demented. The model starts with a pre-trained CNN backbone and adds custom layers to improve its ability to distinguish between these stages. The base model is a pre-trained deep CNN that extracts high-level features from MRI scans using knowledge transferred from a large dataset such as ImageNet; to prevent overfitting, it is initially frozen. Its output is fed into a 256-unit dense layer, which learns abstract features through ReLU activation, and a dropout layer with a rate of 0.4 follows to prevent overfitting. The final dense layer consists of four units, each representing one of the four Alzheimer’s disease classes. The model employs the Softmax activation function to generate a probability distribution across the four classes, enabling it to predict the most likely class for a given MRI scan. It is built with the Adamax optimizer and a categorical cross-entropy loss function, and evaluated using metrics such as accuracy and AUC. For class 0 (Non-Demented), the InceptionV3 model produced very accurate predictions with low false positives and false negatives, achieving a precision of 0.99, recall of 0.98, and F1-score of 0.99, demonstrating that MRI scans are a highly effective means of detecting Alzheimer’s disease early on.
MobileNet’s overall accuracy of 0.97 indicates that it is reliable and effective for early Alzheimer’s disease identification from MRI scans, correctly classifying 97% of the samples across all categories, while the proposed Deep CNN model reached 0.99. These findings indicate that the proposed Deep CNN model is highly dependable in discriminating between different stages of dementia, highlighting its potential as a valuable tool for early identification of Alzheimer’s disease.
Future work
Prospective research on the early identification of Alzheimer’s disease from MRI scans using deep learning shows potential for enhancing clinical applicability and diagnostic precision. A more comprehensive model that better represents the complexity of Alzheimer’s disease can be created by integrating multimodal data, such as MRI and PET scans. Reducing biases and improving model generalization can be achieved by extending datasets to diverse populations, improving the interpretability of deep learning models, and integrating longitudinal data for customized treatment plans. Efficient algorithms with low processing-power requirements that can run on mobile devices will further broaden access to and application of these methods in clinical practice.
References
Abbas, Q., Hussain, A., & Baig, A. R. (2022). Automatic detection and classification of cardiovascular disorders using phonocardiogram and convolutional vision transformers. Diagnostics, 12(12), 3109.
Abbas, S. Q., Chi, L., & Chen, Y.-P. P. (2023). Transformed domain convolutional neural network for Alzheimer’s disease diagnosis using structural MRI. Pattern Recognition, 133, 109031.
Acharya, U. R., Fernandes, S. L., WeiKoh, J. E., Ciaccio, E. J., Fabell, M. K. M., Tanik, U. J., . . . Yeong, C. H. (2019). Automated detection of Alzheimer’s disease using brain MRI images–a study with various feature extraction techniques. Journal of Medical Systems, 43, 1-14.
AlSaeed, D., & Omar, S. F. (2022). Brain MRI analysis for Alzheimer’s disease diagnosis using CNN-based feature extraction and machine learning. Sensors, 22(8), 2911.
Anand, A., Patience, A. A., Sharma, N., & Khurana, N. (2017). The present and future of pharmacotherapy of Alzheimer’s disease: A comprehensive review. European journal of pharmacology, 815, 364-375.
Arafa, D. A., Moustafa, H. E.-D., Ali, H. A., Ali-Eldin, A. M., & Saraya, S. F. (2024). A deep learning framework for early diagnosis of Alzheimer’s disease on MRI images. Multimedia Tools and Applications, 83(2), 3767-3799.
Baghdadi, N. A., Malki, A., Balaha, H. M., Badawy, M., & Elhosseini, M. (2022). A3c-tl-gto: Alzheimer automatic accurate classification using transfer learning and artificial gorilla troops optimizer. Sensors, 22(11), 4250.
Barrera, K., Merino, A., Molina, A., & Rodellar, J. (2023). Automatic generation of artificial images of leukocytes and leukemic cells using generative adversarial networks (syntheticcellgan). Computer methods and programs in biomedicine, 229, 107314.
Barrera, K., Rodellar, J., Alférez, S., & Merino, A. (2023). Automatic normalized digital color staining in the recognition of abnormal blood cells using generative adversarial networks. Computer methods and programs in biomedicine, 240, 107629.
Battineni, G., Chintalapudi, N., Amenta, F., & Traini, E. (2021). Deep Learning Type Convolution Neural Network Architecture for Multiclass Classification of Alzheimer’s Disease. Bioimaging,
Borkar, P., Wankhede, V. A., Mane, D. T., Limkar, S., Ramesh, J., & Ajani, S. N. (2023). Deep learning and image processing-based early detection of Alzheimer disease in cognitively normal individuals. Soft Computing, 1-23.
Castellano, G., Esposito, A., Lella, E., Montanaro, G., & Vessio, G. (2024). Automated detection of Alzheimer’s disease: a multi-modal approach with 3D MRI and amyloid PET. Scientific Reports, 14(1), 5210.
Chandra, A., Dervenoulas, G., Politis, M., & Initiative, A. s. D. N. (2019). Magnetic resonance imaging in Alzheimer’s disease and mild cognitive impairment. Journal of Neurology, 266, 1293-1302.
de Souza, R. G., dos Santos Lucas e Silva, G., dos Santos, W. P., de Lima, M. E., & Initiative, A. s. D. N. (2021). Computer-aided diagnosis of Alzheimer’s disease by MRI analysis and evolutionary computing. Research on Biomedical Engineering, 37, 455-483.
Deepa, N., & Chokkalingam, S. (2022). Optimization of VGG16 utilizing the arithmetic optimization algorithm for early detection of Alzheimer’s disease. Biomedical Signal Processing and Control, 74, 103455.
Duan, J., Liu, Y., Wu, H., Wang, J., Chen, L., & Chen, C. P. (2023). Broad learning for early diagnosis of Alzheimer’s disease using FDG-PET of the brain. Frontiers in neuroscience, 17, 1137567.
Dubois, J., Alison, M., Counsell, S. J., Hertz‐Pannier, L., Hüppi, P. S., & Benders, M. J. (2021). MRI of the neonatal brain: a review of methodological challenges and neuroscientific advances. Journal of Magnetic Resonance Imaging, 53(5), 1318-1343.
El-Assy, A., Amer, H. M., Ibrahim, H., & Mohamed, M. (2024). A novel CNN architecture for accurate early detection and classification of Alzheimer’s disease using MRI data. Scientific Reports, 14(1), 3463.
El-Latif, A. A. A., Chelloug, S. A., Alabdulhafith, M., & Hammad, M. (2023). Accurate detection of Alzheimer’s disease using lightweight deep learning model on MRI data. Diagnostics, 13(7), 1216.
Erdogmus, P., & Kabakus, A. T. (2023). The promise of convolutional neural networks for the early diagnosis of the Alzheimer’s disease. Engineering Applications of Artificial Intelligence, 123, 106254.
Fathi, S., Ahmadi, A., Dehnad, A., Almasi-Dooghaee, M., Sadegh, M., & Initiative, A. S. D. N. (2024). A deep learning-based ensemble method for early diagnosis of Alzheimer’s disease using MRI images. Neuroinformatics, 22(1), 89-105.
Fernandes, S. L., Tanik, U. J., Rajinikanth, V., & Karthik, K. A. (2020). A reliable framework for accurate brain image examination and treatment planning based on early diagnosis support for clinicians. Neural Computing and Applications, 32(20), 15897-15908.
Fernández Montenegro, J. M., Villarini, B., Angelopoulou, A., Kapetanios, E., Garcia-Rodriguez, J., & Argyriou, V. (2020). A survey of alzheimer’s disease early diagnosis methods for cognitive assessment. Sensors, 20(24), 7292.
Francis, A., & Pandian, I. A. (2021). Early detection of Alzheimer’s disease using ensemble of pre-trained models. 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS),
Goenka, N., & Tiwari, S. (2022). AlzVNet: A volumetric convolutional neural network for multiclass classification of Alzheimer’s disease through multiple neuroimaging computational approaches. Biomedical Signal Processing and Control, 74, 103500.
Grundman, M., Petersen, R. C., Ferris, S. H., Thomas, R. G., Aisen, P. S., Bennett, D. A., . . . Doody, R. (2004). Mild cognitive impairment can be distinguished from Alzheimer disease and normal aging for clinical trials. Archives of Neurology, 61(1), 59-66.
Gupta, A., Gupta, D., & Gupta, S. S. (2023). Identification of Alzheimer’s disease from MRI images employing a probabilistic deep learning-based approach and the VGG16.
Hussain, E., Hasan, M., Hassan, S. Z., Azmi, T. H., Rahman, M. A., & Parvez, M. Z. (2020). Deep learning based binary classification for alzheimer’s disease detection using brain MRI images. 2020 15th IEEE Conference on Industrial Electronics and Applications (ICIEA),
Illakiya, T., & Karthik, R. (2023). Automatic detection of Alzheimer’s disease using deep learning models and neuro-imaging: current trends and future perspectives. Neuroinformatics, 21(2), 339-364.
Jimenez-Mesa, C., Illán, I. A., Martin-Martin, A., Castillo-Barnes, D., Martinez-Murcia, F. J., Ramirez, J., & Gorriz, J. M. (2020). Optimized one vs one approach in multiclass classification for early Alzheimer’s disease and mild cognitive impairment diagnosis. IEEE Access, 8, 96981-96993.
Kabasawa, H. (2022). MR imaging in the 21st century: technical innovation over the first two decades. Magnetic resonance in medical sciences, 21(1), 71-82.
Kaya, M. (2024). Feature fusion-based ensemble CNN learning optimization for automated detection of pediatric pneumonia. Biomedical Signal Processing and Control, 87, 105472.
Kaya, M., Ulutürk, S., Kaya, Y. Ç., Altıntaş, O., & Turan, B. (2023). Optimization of Several Deep CNN Models for Waste Classification. Sakarya University Journal of Computer and Information Sciences, 6(2), 91-104.
Kerwin, D., Abdelnour, C., Caramelli, P., Ogunniyi, A., Shi, J., Zetterberg, H., & Traber, M. (2022). Alzheimer’s disease diagnosis and management: perspectives from around the world. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring, 14(1), e12334.
Khalid, A., Senan, E. M., Al-Wagih, K., Ali Al-Azzam, M. M., & Alkhraisha, Z. M. (2023). Automatic Analysis of MRI Images for Early Prediction of Alzheimer’s Disease Stages Based on Hybrid Features of CNN and Handcrafted Features. Diagnostics, 13(9), 1654.
Khan, R., Akbar, S., Mehmood, A., Shahid, F., Munir, K., Ilyas, N., . . . Zheng, Z. (2023). A transfer learning approach for multiclass classification of Alzheimer’s disease using MRI images. Frontiers in neuroscience, 16, 1050777.
Khan, R., Qaisar, Z. H., Mehmood, A., Ali, G., Alkhalifah, T., Alturise, F., & Wang, L. (2022). A practical multiclass classification network for the diagnosis of Alzheimer’s disease. Applied Sciences, 12(13), 6507.
Kim, S. K., Duong, Q. A., & Gahm, J. K. (2024). Multimodal 3D Deep Learning for Early Diagnosis of Alzheimer’s Disease. IEEE Access.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), 84-90.
Li, H., Tan, Y., Miao, J., Liang, P., Gong, J., He, H., . . . Wu, D. (2023). Attention-based and micro designed EfficientNetB2 for diagnosis of Alzheimer’s disease. Biomedical Signal Processing and Control, 82, 104571.
Lima, J. A., & Hamerski, L. (2019). Alkaloids as potential multi-target drugs to treat Alzheimer’s disease. Studies in natural products chemistry, 61, 301-334.
Matlani, P. (2024). BiLSTM-ANN: early diagnosis of Alzheimer’s disease using hybrid deep learning algorithms. Multimedia Tools and Applications, 83(21), 60761-60788.
Moser, E., Laistler, E., Schmitt, F., & Kontaxis, G. (2017). Ultra-high field NMR and MRI—the role of magnet technology to increase sensitivity and specificity. Frontiers in Physics, 5, 33.
Musallam, A. S., Sherif, A. S., & Hussein, M. K. (2022). A new convolutional neural network architecture for automatic detection of brain tumors in magnetic resonance imaging images. IEEE Access, 10, 2775-2782.
Odusami, M., Maskeliūnas, R., Damaševičius, R., & Misra, S. (2023). Explainable deep-learning-based diagnosis of Alzheimer’s disease using multimodal input fusion of PET and MRI Images. Journal of Medical and Biological Engineering, 43(3), 291-302.
Ottoy, J., Niemantsverdriet, E., Verhaeghe, J., De Roeck, E., Struyfs, H., Somers, C., . . . Van Broeckhoven, C. (2019). Association of short-term cognitive decline and MCI-to-AD dementia conversion with CSF, MRI, amyloid- and 18F-FDG-PET imaging. NeuroImage: Clinical, 22, 101771.
Rana, M. M., Islam, M. M., Talukder, M. A., Uddin, M. A., Aryal, S., Alotaibi, N., . . . Moni, M. A. (2023). A robust and clinically applicable deep learning model for early detection of Alzheimer’s. IET Image Processing, 17(14), 3959-3975.
Ravi, V., EA, G., & KP, S. (2024). Deep learning-based approach for multi-stage diagnosis of Alzheimer’s disease. Multimedia Tools and Applications, 83(6), 16799-16822.
Sánchez Fernández, I., & Peters, J. M. (2023). Machine learning and deep learning in medicine and neuroimaging. Annals of the Child Neurology Society, 1(2), 102-122.
Shanmugavadivel, K., Sathishkumar, V., Cho, J., & Subramanian, M. (2023). Advancements in computer-assisted diagnosis of Alzheimer’s disease: A comprehensive survey of neuroimaging methods and AI techniques for early detection. Ageing Research Reviews, 91, 102072.
Sharma, S., & Mandal, P. K. (2022). A comprehensive report on machine learning-based early detection of Alzheimer’s disease using multi-modal neuroimaging data. ACM Computing Surveys (CSUR), 55(2), 1-44.
Sheng, J., Xin, Y., Zhang, Q., Wang, L., Yang, Z., & Yin, J. (2022). Predictive classification of Alzheimer’s disease using brain imaging and genetic data. Scientific Reports, 12(1), 2405.
Shin, H.-C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., . . . Summers, R. M. (2016). Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging, 35(5), 1285-1298.
Shukla, G. P., Kumar, S., Pandey, S. K., Agarwal, R., Varshney, N., & Kumar, A. (2023). Diagnosis and detection of Alzheimer’s disease using learning algorithm. Big Data Mining and Analytics, 6(4), 504-512.
Sorour, S. E., Abd El-Mageed, A. A., Albarrak, K. M., Alnaim, A. K., Wafa, A. A., & El-Shafeiy, E. (2024). Classification of Alzheimer’s disease using MRI data based on deep learning techniques. Journal of King Saud University-Computer and Information Sciences, 36(2), 101940.
Suganthe, R., Geetha, M., Sreekanth, G., Gowtham, K., Deepakkumar, S., & Elango, R. (2021). Multiclass classification of Alzheimer’s disease using hybrid deep convolutional neural network. NVEO - Natural Volatiles & Essential Oils Journal, 145-153.
Veitch, D. P., Weiner, M. W., Aisen, P. S., Beckett, L. A., Cairns, N. J., Green, R. C., . . . Morris, J. C. (2019). Understanding disease progression and improving Alzheimer’s disease clinical trials: Recent highlights from the Alzheimer’s Disease Neuroimaging Initiative. Alzheimer’s & Dementia, 15(1), 106-152.
Vrahatis, A. G., Skolariki, K., Krokidis, M. G., Lazaros, K., Exarchos, T. P., & Vlamos, P. (2023). Revolutionizing the early detection of Alzheimer’s disease through non-invasive biomarkers: the role of artificial intelligence and deep learning. Sensors, 23(9), 4184.
Weller, J., & Budson, A. (2018). Current understanding of Alzheimer’s disease diagnosis and treatment. F1000Research, 7.
Yao, Z., Wang, H., Yan, W., Wang, Z., Zhang, W., Wang, Z., & Zhang, G. (2023). Artificial intelligence-based diagnosis of Alzheimer’s disease with brain MRI images. European Journal of Radiology, 165, 110934.
Zhao, Y., Guo, Q., Zhang, Y., Zheng, J., Yang, Y., Du, X., . . . Zhang, S. (2023). Application of deep learning for prediction of Alzheimer’s disease in PET/MR imaging. Bioengineering, 10(10), 1120.
Zhu, W., Sun, L., Huang, J., Han, L., & Zhang, D. (2021). Dual attention multi-instance deep learning for Alzheimer’s disease diagnosis with structural MRI. IEEE Transactions on Medical Imaging, 40(9), 2354-2366.