波谱学杂志, 2023, 40(2): 220-238 doi: 10.11938/cjmr20223013

综述评论

基于深度学习的阿尔兹海默症影像学分类研究进展

钱程一, 王远军,*

上海理工大学 医学影像技术研究所,上海 200093

Research Progress on Imaging Classification of Alzheimer’s Disease Based on Deep Learning

QIAN Chengyi, WANG Yuanjun,*

Institute of Medical Imaging Technology, University of Shanghai for Science and Technology, Shanghai 200093, China

通讯作者: *Tel: 13761603606, E-mail:yjusst@126.com.

收稿日期: 2022-08-12   网络出版日期: 2022-11-07

基金资助: 上海市自然科学基金资助项目(18ZR1426900)

Corresponding authors: *Tel: 13761603606, E-mail:yjusst@126.com.

Received: 2022-08-12   Online: 2022-11-07

摘要

随着全球老龄化的加剧与深度学习的发展,基于深度学习的阿尔兹海默症(AD)影像学分类成为当前的一个研究热点.本文首先阐述了AD影像学分类任务中常用的深度学习模型、评估标准及公开数据集;接着讨论了不同图像模态在AD影像学分类中的应用;然后着重探讨了应用于AD影像学分类的深度学习模型改进方法;进一步引入了对模型可解释性研究的探讨;最后总结并比较了文中提及的分类模型,归纳了与AD影像分类相关的大脑区域,并对该领域未来的研究方向进行了展望.

关键词: 阿尔茨海默症(AD); 深度学习; 医学影像; 分类; 可解释性

Abstract

As global aging worsens and deep learning advances, the imaging classification of Alzheimer’s disease (AD) based on deep learning has become a hot research topic. This paper reviewed the common deep learning models, evaluation criteria and public datasets in AD imaging classification tasks, and discussed the application of different image modalities in AD imaging classification. The review focused on the improvement of deep learning models applied to AD imaging classification, and studies of model interpretability were also introduced. Finally, the paper summarized and compared the classification models mentioned, identified the brain regions related to AD image classification, and outlined the future research directions in this field.

Keywords: Alzheimer’s disease (AD); deep learning; medical imaging; classification; interpretability


本文引用格式

钱程一, 王远军. 基于深度学习的阿尔兹海默症影像学分类研究进展[J]. 波谱学杂志, 2023, 40(2): 220-238 doi:10.11938/cjmr20223013

QIAN Chengyi, WANG Yuanjun. Research Progress on Imaging Classification of Alzheimer’s Disease Based on Deep Learning[J]. Chinese Journal of Magnetic Resonance, 2023, 40(2): 220-238 doi:10.11938/cjmr20223013

引言

阿尔兹海默症(Alzheimer disease,AD)是痴呆症最常见的类型,占所有痴呆症的50%~70%,其症状为认知、功能和行为的退化,通常始于对于最近发生事件记忆的丧失.AD患者的脑组织病理特征是脑脊液中β淀粉样蛋白(Aβ42)水平下降,总Tau蛋白或磷酸化Tau蛋白升高,从而在细胞内聚集成神经原纤维缠结(neurofibrillary tangles,NFT)[1].全球有约5 000万人患有AD,由于人口老龄化,预计到2050年,患者数量将增加两倍,即每85人中就有一人患有AD,这样的趋势无疑给残疾风险、疾病负担和医疗费用带来了巨大的挑战[2].

AD的发生与加重并非突然,患者通常会经历一个很长的被称为轻度认知障碍(mild cognitive impairment,MCI)的发展阶段,MCI的早期诊断对于AD的治疗至关重要[3].临床评估仍然是目前最主要的AD诊断手段,特别是针对患者本人的临床访谈,并记录患者的神经功能衰退情况,辅助方法包括检测海马体积完整性、追踪白质纤维[4]等.这样的评估方法非常耗时耗力,评估单个患者的AD状态就需要耗费大量的医疗资源,如果可以开发一种省时且无创的辅助诊断方法,则可大大降低医疗资源的消耗,减少患者被漏诊的可能.随着机器学习方法被引入到医学影像分类中,许多研究者基于大量医学影像数据与机器学习算法提出了针对AD与MCI的预测方法[5-7],且准确率已经接近人类专家的诊断水平.但是传统的机器学习算法依赖于人工提取的特征,需要复杂的数据预处理操作;而在实际应用场景中,人们更倾向于用较少的数据预处理步骤就得到可靠的结果,这样端到端的学习模式正符合深度学习的特性.

在基于机器学习的AD影像学分类领域,以往的综述论文往往从机器学习和深度学习两大方面进行讨论[8,9],或者着重于特定算法在AD影像学分类中的应用[10,11].而本文针对深度学习的讨论超出特定算法层面,着重于迁移学习、集成学习和多任务学习这些方法对基于深度学习的AD影像学分类效能的提升;针对深度学习的黑箱特性所带来的可解释性难题,本文对目前可解释性的研究进展也做了分类探讨;同时还讨论了脑龄预测任务与AD影像学分类任务的相关性,这一方向未来或许会成为该领域新的研究重点.

本文第一节介绍AD影像学分类常用的深度学习算法模型、评估标准及公开数据集;第二节讨论不同图像模态在AD影像学分类中的应用;第三节讨论迁移学习、集成学习和多任务学习方法应用于深度学习后,对AD影像分类性能的提升效果;第四节对可解释性方法进行分类讨论;第五节对本文提及的分类模型和与AD影像分类相关的大脑区域进行总结,并对未来研究方向进行展望.

1 常用的深度学习模型、评估标准及公开数据集

以往用于AD影像分类的传统机器学习算法,如支持向量机[12]、决策树[13]和随机森林[14]等,均依赖于人工对图像特征进行提取:通常需要先使用脑模板,例如自动解剖标记(automated anatomical labeling,AAL),手动划分感兴趣区域(region of interest,ROI),然后从ROI中提取灰度直方图[14]、灰质体积、皮层表面积[12]、皮层分形结构[5]、海马体体积[7]等作为图像特征.该类特征提取方法依赖于专家的先验知识,同时手动划分的ROI具有不可避免的人为误差,从而影响到机器学习算法的表现.而深度学习端到端学习的特性解决了这样的难题,只需进行简单的预处理,例如将图像进行配准、归一化和平滑后直接作为矩阵输入到模型中,就可以自动进行特征提取,然后使用全连接层或者传统机器学习方法作为分类器.在实际的临床应用场景中,不同的医院以及不同的采集设备都会影响到成像效果,而不依赖于人工图像特征提取的深度学习算法往往会得到更好的效果,而且泛化能力也可以得到保证.
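作为示意,下面给出一个读取与预处理sMRI数据的最小代码草图(Python,基于nibabel与scipy):将图像读入为矩阵、做灰度归一化并重采样到固定尺寸后,即可作为深度学习模型的输入.其中文件路径与目标尺寸均为假设值,且只演示了预处理流程的一部分,并非文中某一研究的具体实现.

```python
# 最小示例(非特定文献的实现):读取 NIfTI 格式的 sMRI,做灰度归一化并重采样到固定尺寸.
# 文件路径与目标尺寸(96×96×96)均为假设值,仅作示意.
import numpy as np
import nibabel as nib
from scipy.ndimage import zoom

def load_volume(path, target_shape=(96, 96, 96)):
    """读取单个 sMRI 体数据,做 z-score 归一化并重采样到 target_shape."""
    vol = nib.load(path).get_fdata().astype(np.float32)    # 读取体素矩阵
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)           # 灰度归一化
    factors = [t / s for t, s in zip(target_shape, vol.shape)]
    return zoom(vol, factors, order=1)                       # 线性插值重采样

# 用法示意(路径为假设值):
# x = load_volume("ADNI_subject_0001_T1.nii.gz")
# x = x[np.newaxis, np.newaxis]   # 增加 batch 与通道维度,形状为 (1, 1, 96, 96, 96)
```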

1.1 AD影像学分类中常用的深度学习模型

1.1.1 卷积神经网络(convolutional neural network,CNN)

随着图形处理器(graphics processing unit,GPU)计算集群对复杂神经网络的支持,CNN在计算机视觉领域被广泛运用.ImageNet项目是一个用于视觉对象识别研究的大型图像数据库,包含超过1 400万张标注图像.在基于该数据库的大规模视觉识别挑战赛(ImageNet large scale visual recognition challenge,ILSVRC)中,参赛算法需要将图像划分为1 000个类别,而CNN多次成为优胜算法.因此,CNN及其改进算法成为基于深度学习的医学影像处理热门算法,是AD影像学分类任务中使用最为广泛的深度学习算法.

图1展示了最为常见的输入二维图像的CNN模型,主要包含输入层、卷积层、池化层、全连接层和输出层,除了输入和输出层,CNN中其他层均会被多次堆叠.卷积层用于特征提取,随着网络的加深,特征图的尺寸也会随之减小.越靠近网络输入端,网络学习到的特征越初级,如纹理信息、面积信息;而越靠近网络输出端,网络越能学习到更高级的语义信息.池化层用于特征选择和信息过滤,极大降低网络计算量.全连接层不具备特征提取能力,而是对高阶特性进行非线性组合得到最终的输出.

图1

图1   CNN模型示意图

Fig. 1   Schematic diagram of CNN model


很多机器学习算法对于问题的目标函数都会做一些必要的先验假设,称为归纳偏置,而CNN就具有两个重要的归纳偏置:局部性和空间不变性.局部性即图像中越相邻的图像特征具有越强的相关性;空间不变性即不论图像中的物体如何平移,同样的卷积核可以提取到同样的特征.这使得CNN在训练之前就拥有了对图像分类极为重要的先验知识,所以CNN在小样本的医学影像处理问题上具有优秀的性能.VGGNet、MobileNet、AlexNet、InceptionNet、残差网络(residual network,ResNet)、密集连接网络(densely connected convolutional network,DenseNet)和全卷积神经网络(fully convolutional network,FCN)都是基于CNN改进后的经典模型.
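为直观起见,下面给出一个与图1结构对应的最小CNN代码草图(Python/PyTorch):由多次堆叠的卷积层、池化层与末端的全连接层组成,可用于AD/HC这类二分类任务.其中输入尺寸、通道数与层数均为假设值,并非文中某一模型的复现.

```python
# 与图1结构对应的最小 2D CNN 草图(PyTorch),用于二分类(如 AD/HC).
# 输入尺寸、通道数与层数均为假设值.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(             # 卷积层+池化层多次堆叠:特征提取
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(            # 全连接层:对高阶特征做非线性组合
            nn.Flatten(), nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# 假设输入为 96×96 的单通道 MRI 切片
logits = SimpleCNN()(torch.randn(4, 1, 96, 96))     # 输出形状 (4, 2)
```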

1.1.2 循环神经网络(recurrent neural network,RNN)

RNN具有CNN不具备的记忆性,适用于对序列非线性特征的学习,所以通常被应用于自然语言处理领域.虽然RNN不常被应用于计算机视觉领域,但由于AD是长期发展的疾病,具有明显的时间序列性,所以依然有少量研究者将RNN应用于AD影像学分类任务中.如Abuhmed等[15]使用RNN学习受试者在过去时间上的影像特征,对该受试者未来的状况进行预测.总体而言,RNN在AD影像学分类中应用不如CNN广泛.

由图2可以直观地看到,RNN在每一个时刻都有一个输入$x_t$,经过网络在$t$时刻的状态$A_t$得到当前时刻的输出$h_t$,而$t$时刻的网络状态由$t-1$时刻的网络状态和当前输入共同决定,这样的设计使得网络在时间序列上具有记忆功能.之后提出的双向循环神经网络(bidirectional recurrent neural network,BRNN)以及双向长短期记忆网络(bidirectional long short-term memory,BiLSTM)都是基于RNN改进后的模型.

图2

图2   RNN模型示意图

Fig. 2   Schematic diagram of RNN model
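下面给出一个与图2对应的最小RNN分类器代码草图(Python/PyTorch):网络状态随时间步传递,最后一个时刻的隐状态经全连接层用于分类.其中特征维度与序列长度均为假设值,仅作示意.

```python
# 与图2对应的最小 RNN 草图(PyTorch):输入为一段时间序列特征(如逐次随访提取的影像特征).
# 特征维度与序列长度均为假设值.
import torch
import torch.nn as nn

class SimpleRNNClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=32, num_classes=2):
        super().__init__()
        self.rnn = nn.RNN(feat_dim, hidden_dim, batch_first=True)  # h_t 由 x_t 与 h_{t-1} 共同决定
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                # x: (batch, 序列长度, 特征维度)
        out, h_n = self.rnn(x)           # out 为各时刻输出,h_n 为最后时刻的网络状态
        return self.fc(h_n.squeeze(0))   # 用最后时刻的状态做分类

# 假设每个受试者有 5 个时间点、每个时间点 64 维影像特征
logits = SimpleRNNClassifier()(torch.randn(8, 5, 64))    # 输出形状 (8, 2)
```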


1.1.3 自动编码机(autoencoder,AE)

AE作为一种无监督学习模型,不需要对训练样本打上标签,降低了获取数据的难度,所以也常被用于医学图像处理.如Ju等[16]使用AE对图像特征进行提取后分类,使MCI与健康对照组(healthy control,HC)的分类准确率达到86.47%,相比传统机器学习方法提高了近20%.对于端到端的学习方式,需要使用基于AE改进的卷积自动编码机(convolutional autoencoder,CAE)对图像特征进行提取.如Baydargil等[17]使用CAE进行特征提取,使AD、MCI和HC的三分类准确率达到98.67%;Oh等[18]使用带有Inception多尺度卷积模块的CAE(inception modal based convolutional autoencoder,ICAE),使AD和HC的分类准确率达到88.6%.

如图3所示,AE包含三种网络层结构,输入层与输出层神经元数量相同,而中间的隐藏层神经元数量少于输入和输出层.编码器将输入的高维信号压缩为低维的特征编码,解码器将特征编码还原成与输入接近的高维信号,从而实现AE的无监督学习.编码器输出的特征编码理论上保留了输入信号中主要的有用特征信息,实现了特征提取的功能.

图3

图3   AE网络示意图

Fig. 3   Schematic diagram of AE network
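下面给出一个与图3对应的最小全连接AE代码草图(Python/PyTorch):编码器将高维输入压缩为低维特征编码,解码器将其还原,训练目标为重建误差.各层维度均为假设值,仅作示意.

```python
# 与图3对应的最小全连接自动编码机草图(PyTorch).各层维度均为假设值.
import torch
import torch.nn as nn

class SimpleAE(nn.Module):
    def __init__(self, in_dim=4096, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_dim))       # 压缩为低维特征编码
        self.decoder = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                     nn.Linear(512, in_dim))         # 还原为高维信号

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

x = torch.randn(16, 4096)                  # 假设输入为展平后的影像特征
recon, code = SimpleAE()(x)
loss = nn.functional.mse_loss(recon, x)    # 重建误差:无监督训练目标
# 训练完成后,code 可作为提取到的特征,再接全连接层或传统机器学习分类器
```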


1.2 常用的评估标准

1.2.1 准确率(accuracy)

对于分类模型而言,存在4种检测结果:预测值和真实值均为阳性的真阳性(true positive,TP);预测值为阳性而真实值为阴性的假阳性(false positive,FP);预测值和真实值均为阴性的真阴性(true negative,TN);预测值为阴性而真实值为阳性的假阴性(false negative,FN).所有分类任务中的指标均由这4种检测结果计算得到.

在AD影像学分类任务中,accuracy是评判深度学习模型最为常用的评价指标,反映了模型在所有分类结果中,正确分类结果的占比,这也是本文使用的用于模型评估的指标,计算如(1)式所示:

$\text{accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}$

1.2.2 曲线下面积(area under curve,AUC)

为了避免分类阈值的设定导致准确率无法准确反映模型性能的问题,有些研究者[19]会使用AUC对模型进行评估.AUC即接受者操作特征曲线(receiver operating characteristic curve,ROC)下方面积.为了介绍ROC,需要引入两个概念:敏感度(sensitivity),即真阳性样本在实际阳性样本中的占比;特异度(specificity),即真阴性样本在实际阴性样本中的占比.计算公式如(2)、(3)式所示:

$\text{sensitivity}=\frac{\text{TP}}{\text{TP}+\text{FN}}$
$\text{specificity}=\frac{\text{TN}}{\text{FP}+\text{TN}}$

ROC曲线是一条以(1-specificity)为横坐标、以sensitivity为纵坐标绘制的曲线,而AUC则是该曲线下方的面积,通常取值在0.5~1之间,AUC越大表明模型性能越好.

图4

图4   ROC与AUC示意图

Fig. 4   Schematic diagram of ROC and AUC


1.2.3 平均绝对误差(mean absolute error,MAE)

对于认知评分回归任务和后文提到的脑龄预测任务,均使用MAE作为评判标准,这也是回归任务最常用的评价指标,它反映了模型预测值与实际值偏差的总体水平,MAE越小表示模型的性能越好.MAE的计算公式如(4)式:

$\text{MAE}=\frac{1}{n}\sum\limits_{i=1}^{n}{|y_i-\hat{y}_i|}$

其中$n$为预测样本总数,$y_i$为实际值,$\hat{y}_i$为预测值.
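为便于对照,下面给出按(1)~(4)式计算上述评价指标的最小代码草图(Python,基于numpy与scikit-learn).其中标签与预测值均为假设数据,仅作示意.

```python
# 依据(1)~(4)式的评价指标计算草图;y_true、y_score 与回归数据均为假设值.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # 真实标签(1 为阳性)
y_score = np.array([0.9, 0.2, 0.6, 0.4, 0.1, 0.7, 0.8, 0.3])   # 模型输出的阳性概率
y_pred  = (y_score >= 0.5).astype(int)                          # 以 0.5 为分类阈值

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)                    # (1)式
sensitivity = tp / (tp + fn)                                     # (2)式
specificity = tn / (fp + tn)                                     # (3)式
auc = roc_auc_score(y_true, y_score)                             # ROC 曲线下面积,与阈值无关

y_reg_true = np.array([24.0, 28.0, 18.0])                        # 假设的认知评分实际值
y_reg_pred = np.array([25.5, 27.0, 20.0])                        # 假设的回归预测值
mae = np.mean(np.abs(y_reg_true - y_reg_pred))                   # (4)式
```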

1.3 常用的公开数据集

常用于AD影像学研究的公开数据集包含阿尔兹海默症神经影像学计划(Alzheimer’s disease neuroimaging initiative,ADNI)、开放存取影像研究(open access series of imaging studies,OASIS);对于脑龄预测的研究,常用的数据集为从图像中提取信息数据集(information eXtraction from images,IXI),但它只包含HC组的影像数据.数据集中常用的图像模态包括结构磁共振成像(structural magnetic resonance imaging,sMRI)、功能磁共振成像(functional magnetic resonance imaging,fMRI)、扩散张量成像(diffusion tensor imaging,DTI)、正电子发射型计算机断层显像(positron emission computed tomography,PET);非图像数据通常包括临床痴呆分级(clinical dementia rating,CDR)和简易智力状态检查量表(mini-mental state examination,MMSE).详细的信息在表1中展现.

表1   常用公开数据集

Table 1  Details of popular public dataset

数据集 | 类别 | 数据集大小 | 图像模态
ADNI | AD / MCI / HC | 192 / 398 / 229 | sMRI、DTI、fMRI、PET
OASIS | AD / HC | 100 / 316 | sMRI、DTI、fMRI、PET
IXI | HC | 约600 | sMRI、DTI、磁共振血管造影



2 基于不同图像模态的AD影像学分类

不同的图像模态反映了不同的大脑信息:T1加权的sMRI图像反映灰质的结构信息;fMRI图像反映大脑网络功能连接的信息;PET反映了特定标志物在大脑中的代谢信息;DTI反映大脑白质信息.随着医学专家对于AD疾病的深入了解,越来越多的图像模态被应用于AD影像学分类任务,本节就目前应用较多的几种模态进行讨论.

在AD影像学分类任务中,对AD、MCI和HC三类临床数据进行二分类或三分类任务一直是研究的重点.由于这三类影像数据的区分度较大,尤其是AD和HC的数据,所以用于区分AD与HC的模型往往容易训练、准确率较高且具有较好的鲁棒性.

2.1 sMRI图像在AD分类中的应用

AD是一种神经退行性疾病,表现为患者大脑结构变化引起的功能变化,而功能变化的累积又会进一步造成结构的变化.这种结构变化会很好地反映在sMRI图像中,尤其是T1加权成像.而且MRI不产生电离辐射,对人体无害.在公开数据集中,sMRI也是数据量最大的模态.因此在基于影像学的AD分类中,sMRI图像的使用最为广泛.

Yiğit等[20]分别使用sMRI图像的轴向面、矢状面和冠状面作为CNN的输入,发现对于AD与HC分类任务,使用轴向面投影数据的分类准确率最高,达到了83%;对于MCI与HC分类任务,使用矢状面投影数据的分类准确率最高,达到了82%.郁松等[21]将3D sMRI作为输入,并使用3D ResNet-101网络,使AD与HC分类准确率达到了97.425%.

基于深度学习的图像分类任务需要大量的数据作为训练集,CNN作为一种监督学习模型,依赖于带有标签的图像,而带有标签的医学图像很难获得.为了解决这样的问题,Bi等[22]提出了一种基于无监督学习的AD、MCI与HC分类模型,训练集使用不带标签的sMRI图像,结合主成分分析法(principal components analysis,PCA)与CNN对特征进行提取,使用k均值聚类(k-means clustering algorithm,k-means)进行分类,最终实现AD与MCI的分类准确率为97.01%,AD、MCI与HC三分类准确率为91.25%.

2.2 fMRI图像在AD分类中的应用

也有很多研究者使用fMRI对AD分类问题进行研究.fMRI通过检测血氧水平依赖(blood oxygen level dependent,BOLD)信号来估计大脑的活动情况,与sMRI相比牺牲了空间分辨率,但具有更高的时间分辨率.fMRI可以用于研究针对特定任务的相关脑区,在心理学和认知科学中经常被使用.其中,静息态fMRI可以用于实现人脑功能区及功能网络的构建,同时也提供了重要的区域时序上的关系[23].因此,对于影响认知功能的疾病,fMRI相比sMRI可以提供脑区功能的信息.

Parmar等[24]从4D fMRI中选取连续时间序列的5个3D MRI作为3D CNN的输入,经过5个卷积层和3个全连接层,最后得到AD与HC分类准确率为94.58%的分类模型.Bi等[19]基于AAL模板构建脑图谱,然后构建脑网络,通过RNN学习相邻位置特征,最后使用极限学习机(extreme learning machine,ELM)作为分类器,使AD与HC分类AUC达到了91.3%.

2.3 DTI图像在AD分类中的应用

很多医学研究表明,AD在脑影像上的特征表现为脑实质的萎缩,主要发生的区域为海马和颞叶.例如,Reginold等[25]发现AD患者的颞叶出现很大程度的扩散异常,颞叶中浅表白质的平均扩散率(mean diffusivity,MD)、径向扩散系数(radial diffusivity,RD)大幅上升.Bigham等[26]发现MCI患者的右顶叶浅层白质的MD显著增加,AD患者的左边缘系统和左颞叶浅层白质的MD显著增加.而探测水分子扩散的DTI可以很好地对白质神经纤维束进行成像,直观地反映大脑白质的结构变化,所以有研究者使用DTI数据进行AD影像分类.

Massalimova等[27]融合了MD扩散像、各向异性分数(fractional anisotropy,FA)扩散像和T1加权sMRI,使用ResNet-18作为网络主干,最终得到AD、MCI与HC的三分类准确率为97%.Deng等[28]通过DTI图像计算得到中心度特征(degree centrality,DC),使用CNN作为分类器,使AD与HC分类准确率达到了90.00%.Marzban等[29]将MD扩散像、FA扩散像和各向异性模式(mode of anisotropy,MO)扩散像作为2D CNN的输入,发现使用MD扩散像的AD与HC分类准确率最高达到88.9%,且融合MD扩散像和sMRI中的灰质图像可以取得更高的准确率93.5%.Kang等[30]将sMRI、FA扩散像、MD扩散像通过三通道合并输入到VGG-16网络中进行特征提取,并使用最小绝对收缩和选择算子(Least absolute shrinkage and selection operator,LASSO)算法进行分类,最终使早期MCI(early mild cognitive impairment,EMCI)与HC分类准确率达到94.2%.
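文献[30]中多模态三通道合并的思路可以用如下最小代码草图示意(Python/PyTorch):将空间上已配准的sMRI、FA与MD图像按通道拼接后输入2D CNN.其中图像尺寸为假设值,且该草图只是一般性示意,并非原文实现.

```python
# 多模态通道合并的最小草图:将已配准的 sMRI、FA、MD 三种图像按通道拼接,
# 作为三通道输入送入 2D CNN(与文中[30]思路类似,仅为一般性示意).
import torch

smri = torch.randn(1, 96, 96)           # 假设的 sMRI 切片(已配准、归一化)
fa   = torch.randn(1, 96, 96)           # 假设的 FA 扩散像切片
md   = torch.randn(1, 96, 96)           # 假设的 MD 扩散像切片

x = torch.cat([smri, fa, md], dim=0)    # 按通道拼接,形状 (3, 96, 96)
x = x.unsqueeze(0)                      # 增加 batch 维度,形状 (1, 3, 96, 96)
# x 即可输入任一以三通道图像为输入的 2D CNN(如 VGG-16)进行特征提取
```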

2.4 PET图像在AD分类中的应用

PET是核医学中比较先进的成像技术,由于AD的一个显著特征是大脑中淀粉样蛋白斑块的累积,通过淀粉样蛋白的PET成像,医生可以检测AD患者的脑斑.有研究[31]表明淀粉样蛋白PET图像的引入对于AD疾病的诊断产生了很大的影响.对于PET脑部扫描显示存在显著淀粉样沉积物的患者,分别有82%的MCI患者与91%的痴呆患者被临床医生建议服用针对AD的药物;而在进行PET扫描之前,仅有40%的MCI患者与63%的痴呆症患者服用针对AD的药物[32].可见淀粉样蛋白阳性与AD之间高度相关,在基于PET数据的深度学习AD影像分类中也可以得到印证.

Punjabi等[33]详细比较了sMRI图像与AV-45淀粉样蛋白PET图像对AD与HC分类效能的影响.文中采用1 299份sMRI数据和585份AV-45淀粉样蛋白PET数据,并分别使用全部的sMRI数据、全部的PET数据、与PET数据量相同的sMRI数据,以及PET与sMRI多模态数据对CNN进行训练.结果显示在AD和HC分类中,仅使用约为sMRI数据量一半的PET数据就达到了85.15%的准确率,和使用全部sMRI达到的准确率87.49%相当,进一步使用PET与sMRI多模态数据则可以将准确率提升至92.34%.基于PET实验中的假阳性样本均伴有淀粉样蛋白水平升高,印证了淀粉样蛋白水平升高与AD疾病高度相关.可见PET数据相比sMRI数据在AD影像学分类上具有更高的效能,而两种数据的结合可以获得更高的分类性能.

3 基于深度学习的AD影像学分类改进方法

在患者发展为AD状态以前,存在一段长时期的MCI阶段.MCI可以被细分为多种类型,如:早期MCI(early mild cognitive impairment,EMCI)和晚期MCI(later mild cognitive impairment,LMCI);在指定时间段内没有发展成AD的稳定型MCI(stable mild cognitive impairment,sMCI)和在指定时间段内发展为AD的渐进型MCI(progressive mild cognitive impairment,pMCI)等.如果可以通过深度学习方法对患者脑影像进行MCI类型的区分,将更有利于了解患者在漫长的MCI期间发展为AD的趋势,具有极高的临床价值.但由于各类样本均属于MCI,患者间脑影像差别较小,难以训练出性能良好的模型.

在深度学习领域,有很多方法可以在具体网络结构之外进一步提升分类模型的性能,本节将会讨论:针对医学数据集稀少及训练速度慢问题的迁移学习方法;针对单个分类器性能不佳以及难以充分利用3D图像问题的集成学习方法;针对多任务并行与模型过拟合问题的多任务学习方法.

3.1 深度迁移学习在AD影像学分类中的应用

为了解决医学数据量少、模型训练速度慢以及计算量大的问题,深度迁移学习被应用于AD影像学分类任务中.基于深度学习的迁移学习通常是将分类模型先在大数据集(例如ImageNet和JFT-300M数据)上进行预训练,保留习得浅层信息的前几层网络的权重,然后在目标数据集上训练剩下的网络层.在深度学习模型上使用迁移学习技术可以显著地降低模型训练时间,并使模型性能得到一定的提升.
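下面给出一个深度迁移学习的最小代码草图(Python,基于PyTorch/torchvision):加载在ImageNet上预训练的ResNet-18,冻结主干权重,仅替换并训练用于AD/HC分类的输出层.所选网络与冻结策略均为一般性假设,并非文中某一研究的具体做法.

```python
# 深度迁移学习的最小草图:加载 ImageNet 预训练权重,冻结主干,仅训练新的分类头.
import torch.nn as nn
from torchvision import models

# torchvision>=0.13 使用 weights 参数;旧版本可改用 pretrained=True
model = models.resnet18(weights="IMAGENET1K_V1")

for param in model.parameters():        # 冻结预训练得到的主干权重
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)   # 替换输出层,用于 AD/HC 二分类
# 之后仅对 model.fc(或再解冻末端若干层)在目标数据集上进行微调
```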

Li等[34]使用由文献[35]提供的3DSeg-8数据集作为预训练的源数据集,将权重迁移学习至ResNet-200,使AD、MCI和HC三分类准确率达到83%,而不使用迁移学习的准确率仅为76%.孔伶旭等[36]使用ImageNet数据集迁移学习到Mobilenet网络的瓶颈层,然后将fMRI数据提取出来的ROI时间序列输入到网络中对网络顶层进行训练,最终网络对于EMCI与HC的分类准确率达到了73.67%.Bin等[37]分别构建了平衡与不平衡数据集,同时在2D CNN、基于ImageNet迁移学习的Xception和Inception-v3网络中进行训练,最终发现带有迁移学习策略的Inception-v3在平衡数据集上性能最高,AD与HC分类准确率达到了99.45%.

有很多研究者将迁移学习进一步应用到分类难度更大的不同类型MCI分类问题上,并且取得了不错的成果.Mehmood等[38]将基于ImageNet数据集训练的权重迁移学习至VGG-19的前14层卷积层,使AD与HC分类准确率达到95.33%,EMCI与LMCI分类准确率达到83.72%.Lian等[39]提出了一个分步提取斑块级和区域级特征的分层全卷积神经网络(hierarchical fully convolutional network,H-FCN)对AD与HC进行分类,并且将网络的权重迁移学习至sMCI与pMCI分类任务中,达到了80.9%的准确率.Basaia等[40]使用FCN对AD与HC进行分类,并且将AD与HC分类网络的权重迁移学习到sMCI与pMCI分类网络中,在ADNI数据集中进行训练,分别在ADNI与独立测试集中进行测试,实验结果表明模型具有良好的泛化能力,在独立测试集中准确率仅从75.1%下降到了74.9%.

3.2 深度集成学习在AD影像学分类中的应用

监督学习算法的目标是学习出一个稳定且在各个方面都表现较好的模型,但实际情况往往不这么理想,有时只能得到多个仅在某些方面表现较好的模型,称为弱分类器.集成学习就是通过组合多个弱分类器以得到一个更好、更全面的强监督模型,其思想是:即使某个弱分类器得到了错误的预测,其他分类器也可以将错误纠正,从而得到正确的预测结果.

集成学习方法在AD影像分类任务中可以起到信息互补的作用.如果直接使用3D sMRI数据,会大幅增加网络参数量,若只使用单投影2D切片,则会损失其他投影方向的信息,所以有研究者使用集成学习方法融合3个轴向投影的互补信息,并取得了不错的结果.Choi等[41]将sMRI图像的矢状面、冠状面和轴面分别作为VGG-16、GoogLeNet和AlexNet 3个CNN的输入,生成9个个体分类器,并构建了一个损失函数训练9个个体分类器的权重值,最终分类器在AD和HC分类任务中的准确率达到了93.84%.曾安等[42]使用sMRI图像的矢状面、冠状面和轴面分别训练40个、50个和33个CNN基分类器,选出测试效果最好的5个基分类器使用投票法集成单轴分类器,最后使用投票法将3个单轴分类器集成为最终分类器,使AD与HC的分类准确率达到81%,pMCI与sMCI的分类准确率达到62%.
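上述多投影面集成的思路可以用如下最小代码草图示意(Python/PyTorch):对矢状面、冠状面和轴面三个单面分类器的softmax概率做软投票.其中各分类器与输入均为假设,仅演示弱分类器组合的基本做法,并非某一文献的复现.

```python
# 集成学习的最小草图:对三个单投影面分类器的 softmax 概率做软投票.
# 各分类器与输入均为假设.
import torch
import torch.nn.functional as F

def soft_vote(models, inputs):
    """models: 三个已训练的单投影面分类器; inputs: 对应的三份输入切片."""
    probs = [F.softmax(m(x), dim=1) for m, x in zip(models, inputs)]   # 各自的类别概率
    return torch.stack(probs).mean(dim=0).argmax(dim=1)                # 平均后取最大概率类别

# 用法示意(假设 sag_net/cor_net/axi_net 已分别在三个投影面上训练完成):
# pred = soft_vote([sag_net, cor_net, axi_net], [x_sag, x_cor, x_axi])
```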

3.3 多任务深度学习方法在AD影像学分类中的应用

多任务深度学习是深度学习的一个子领域,用于同时解决多个不同但又相关的任务,在提高效率的同时还可以相互充当正则化器减少模型的过拟合.对于认知评分的回归预测、对于性别的分类预测和对AD相关脑区的分割任务等都是AD影像分类的相关任务,所以它们通常被用作AD影像分类的辅助任务.

3.3.1 将分类任务作为辅助任务的多任务深度学习

前文中提到,为了解决sMCI与pMCI分类模型难以训练的问题,有研究者使用迁移学习的方法将AD与HC分类模型的权重迁移到sMCI与pMCI的分类模型中,可见两个任务之间存在高度的相似性,因此同样适合进行多任务学习.如Spasov等[43]将MRI图像作为3D CNN的输入,并在最后全连接层的输出向量后拼接上人口学数据、心理认知评估测试和Rey听觉言语学习测试等临床数据,作为多模态数据的融合向量输入到另一个全连接层中,以上部分的网络权重由两个任务共享,最后使用全连接层联合AD与HC、sMCI与pMCI两个分类任务进行训练,该模型在sMCI与pMCI分类任务中的准确率达到86%.

3.3.2 将回归任务作为辅助任务的多任务深度学习

在对AD患者进行诊断时,认知评分量表的分数是重要的参考标准[44],也有很多AD影像分类的论文[45]将MMSE得分作为AD、MCI和HC组的标签,所以对于认知评分的回归预测也是AD影像分类多任务学习的理想辅助任务.如Zeng等[46]将MMSE和AD评定量表-认知分量表(Alzheimer’s disease assessment scale,ADAS-cog)的回归预测作为辅助任务,使用深度信念网络(deep belief network,DBN)作为网络主干,最终得到的网络在AD与HC的分类、pMCI与HC的分类和AD与sMCI的分类中的准确率均超过95%.Abuhmed等[15]使用患者在基线,以及基线后6个月、12个月和18个月的PET、sMRI、神经心理学、神经病理学和认知评分多模态数据,利用BiLSTM对患者基线后48个月的CDR和MMSE等7个认知评分进行回归预测,将预测得到的7个认知评分和患者的年龄、性别等信息作为随机森林分类器的输入,最后对患者基线48个月之后的状态进行AD、MCI与HC三分类,准确率达到了84.95%.
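多任务学习中“共享主干+多个任务头”的基本形式可以用如下最小代码草图示意(Python/PyTorch):共享卷积主干同时输出AD/HC分类结果与MMSE回归值,总损失为两个任务损失的加权和.网络结构与加权系数均为假设值,并非文中某一模型的复现.

```python
# 多任务深度学习的最小草图:共享主干 + 分类头(AD/HC) + 回归头(MMSE).
# 网络结构、标签与损失加权系数均为假设值.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                      # 两个任务共享的特征提取主干
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.cls_head = nn.Linear(32, 2)                    # 分类头:AD/HC
        self.reg_head = nn.Linear(32, 1)                    # 回归头:MMSE 评分

    def forward(self, x):
        feat = self.backbone(x)
        return self.cls_head(feat), self.reg_head(feat).squeeze(1)

net = MultiTaskNet()
x = torch.randn(4, 1, 96, 96)
y_cls = torch.tensor([0, 1, 0, 1])                  # 假设的类别标签
y_mmse = torch.tensor([29.0, 21.0, 28.0, 19.0])     # 假设的 MMSE 评分
logits, mmse_pred = net(x)
loss = nn.functional.cross_entropy(logits, y_cls) + 0.5 * nn.functional.mse_loss(mmse_pred, y_mmse)
```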

3.3.3 将分割任务作为辅助任务的多任务深度学习

有多项医学研究表明,AD是与海马体脑区高度相关的神经退行性疾病,同时也有不少研究者[47]在进行AD影像分类时将海马区域作为ROI.如Liu等[48]采用带有Dense模块的V-Net网络结构对3D MRI图像进行海马区域的分割,在进行海马分割的同时将从上采样层中提取的特征输入到全连接层进行AD与HC的分类,又融合了基于海马体掩膜提取到的特征进行分类,最终使AD与HC分类准确率达到88.9%,MCI与HC分类准确率达到76.2%.

4 网络模型的可解释性探讨

从以往的研究中可以观察到基于深度学习模型的AD影像学分类已经取得了很高的准确率,未来的研究方向将会更注重深度学习模型的可解释性研究.对于医学影像分类,模型的可解释性研究可以让研究者增强对于模型判断的信心;当模型分类准确率高于人类专家时,解释性研究可以帮助医生研究针对AD疾病的关联脑区.目前针对AD影像分类深度学习网络的可解释性方法可大致分为两种:可视化神经网络热力图和输入消融实验.

4.1 神经网络热力图应用于AD影像分类网络的可解释性研究

利用类激活映射(class activation mapping,CAM)生成神经网络热力图是一种用于可视化CNN的工具,通过热力图我们可以观察到网络在达到分类准确的前提下更注重于哪块区域.CAM的具体实现方式是:对CNN最后一个卷积特征图做全局平均池化(global average pooling,GAP)得到各通道的均值,再通过全连接层加权得到分类输出;将分类结果对应的全连接权重与相应的特征图通道相乘并求和,上采样到原图尺寸后叠加到原图上,便可生成热力图.CAM的缺点在于它需要修改原模型的结构,从而导致需要重新训练模型,对于复杂模型来说,大大增加了训练成本.而梯度加权类激活映射(gradient-weighted class activation mapping,Grad-CAM)使用CNN最后一层卷积的梯度信息来衡量每个神经元对目标决策的重要性,避免了重新训练模型的问题.
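Grad-CAM的计算流程可以用如下最小代码草图示意(Python/PyTorch):用前向钩子捕获最后一层卷积的特征图,对目标类别得分求梯度,以梯度的空间均值作为通道权重,加权求和并经ReLU后上采样得到热力图.示例中的模型与输入均为假设,仅说明计算步骤.

```python
# Grad-CAM 的最小草图:捕获最后的卷积特征图,对目标类别得分求梯度并加权求和.
# 模型与输入均为假设(此处用随机初始化的 ResNet-18 演示流程).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(out=o))  # 捕获最后的卷积特征图

x = torch.randn(1, 3, 224, 224)
logits = model(x)
score = logits[0, logits.argmax()]                            # 目标类别得分
grads = torch.autograd.grad(score, feats["out"])[0]           # 得分对特征图的梯度

weights = grads.mean(dim=(2, 3), keepdim=True)                # 梯度的空间均值 → 通道权重
cam = F.relu((weights * feats["out"]).sum(dim=1, keepdim=True))          # 加权求和并经 ReLU
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # 归一化后即可叠加到原图显示
```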

Zhang等[49]使用Grad-CAM对带有注意力机制的ResNet网络(3D ResAttNet)在AD与HC分类任务上进行可视化,热力图中突出显示了海马体、侧脑室和大部分皮质区域.Raju等[50]使用Grad-CAM方法对认知障碍分类网络进行可视化操作,发现海马体、杏仁核和顶叶区域得到了最大程度的激活.Oh等[18]使用ICAE对AD影像进行分类,并且使用类显著映射(class saliency visualization,CSV)手段,发现pMCI与sMCI的分类任务更注重于左侧杏仁核、角回和楔前回,而AD与HC分类任务更注重内侧颞叶周围、左侧海马体.Guan等[51]结合了ResNet与并行注意力增强双线性网络(parallel attention-augmented bilinear network,pABN),使用CAM发现AD与HC分类、pMCI与sMCI分类的重点区域基本相似,集中于海马体、杏仁核、脑室、额叶、颞下回、颞上沟、顶枕沟和外侧裂.

以往基于体素的研究方法可能会因为特征维数过高和影像数据量不足导致过拟合,而基于ROI区域的方法不能精确覆盖到全部的病理部位,所以近年来有研究者基于斑块级区域训练网络模型,如上文中提到的Lian等[39]将大脑随机分割为若干个斑块区域对网络进行训练.这样的方法也可以用于生成神经网络热力图以达到可视化的效果.Qiu等[52]通过FCN生成每个斑块对于AD预测的概率热力图,直观地反映大脑各个斑块级区域对于AD分类的重要性,结果显示网络模型生成的结果与AD病理学尸检结果高度相关,即AD概率的增加与海马、中额叶、杏仁核和颞叶区域中淀粉样蛋白β和tau蛋白的高水平积累相关.在热力图方法中,海马、颞叶和杏仁核被多次提及,与之前的医学研究结果相吻合,增加了研究者对于深度学习网络的信心.

4.2 输入消融实验应用于AD影像分类网络的可解释性研究

另一种对网络进行可解释性研究的方法是输入消融实验,通常通过在输入图像中去除或保留某些大脑区域来研究哪些脑区对AD影像分类更具贡献.如果去除了某些脑区并不会对分类结果产生负作用,则认为这些脑区贡献不大;如果加上某些脑区后对分类结果有正向作用,则认为这些脑区有利于AD分类.
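输入遮挡实验的基本流程可以用如下最小代码草图示意(Python/PyTorch):用滑动的灰色方块依次遮挡输入切片,记录目标类别概率的下降量,下降越明显的区域被认为对分类越重要.其中模型、输入与遮挡块大小均为假设,并非某一文献的具体实现.

```python
# 输入消融(遮挡)实验的最小草图:滑动遮挡块并记录目标类别概率的下降量.
# 模型、输入与遮挡块大小均为假设.
import torch
import torch.nn.functional as F

@torch.no_grad()
def occlusion_map(model, x, target_class, patch=16, stride=16, fill=0.0):
    """x: (1, C, H, W);返回与遮挡位置对应的概率下降图."""
    base = F.softmax(model(x), dim=1)[0, target_class]        # 未遮挡时的目标类别概率
    _, _, H, W = x.shape
    heat = torch.zeros(H // stride, W // stride)
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            occluded = x.clone()
            occluded[:, :, i:i + patch, j:j + patch] = fill    # 用灰/黑色块遮挡该区域
            prob = F.softmax(model(occluded), dim=1)[0, target_class]
            heat[i // stride, j // stride] = base - prob       # 概率下降越大 → 区域越重要
    return heat

# 用法示意(假设 net 为已训练的 AD/HC 分类网络,x 为一张预处理后的切片):
# heat = occlusion_map(net, x, target_class=1)
```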

金祝新等[53]使用遮挡块的方式对网络进行可视化操作,即使用黑色或灰色色块遮挡输入图像,如果目标类别概率因被遮挡了某区域而降低,则认为该区域与AD分类任务相关,结果显示内侧颞叶和海马区域与AD影像分类具有高相关性.Kwak等[54]同样使用遮挡块的方法对基于DenseNet的sMCI与pMCI分类网络进行了可解释性分析,发现海马、梭状回、颞下回和楔前叶区域在sMCI与pMCI分类任务中起到重要作用.Venugopalan等[55]使用屏蔽输入特征的方法对CNN进行可解释性研究,结果显示MRI图像中海马和杏仁核对分类贡献最大.

以上的研究都是通过人工选取保留或剔除的特征,具有一定的随机性和主观性,而使用算法自动选取脑区则可以避免这样的问题.Shahamat等[56]从哈佛牛津大脑图谱中获得96个大脑区域,将它们存储到与输入大小相同的3D矩阵中.基于这些脑区使用遗传算法随机生成掩膜,掩膜中可以包含一个或多个脑区,作为网络的输入.通过不断减小掩膜大小,最终选择了被认为对AD影像分类最具贡献的5个脑区,分别是左侧枕叶、左侧颞梭状皮层、右侧楔皮层、右额中回和右颞中回.输入消融实验法对于脑区的定位可以精确到人为定义的解剖模板,相比神经网络热力图可以定位到更细致的区域.

4.3 有关AD影像学分类网络可解释性研究结果的探讨

综合不同研究者的网络可解释性分析结果可以观察到,大部分的研究者都关注到了一些相同的脑区,例如海马体、杏仁核和颞叶;然而还有一部分脑区只被个别研究者所关注,例如梭状回、角回和顶叶等.对于该现象的解释,有如下合理的推测:

1、对于AD影像学分类问题,使用的数据几乎都来源于ADNI和OASIS两大公开数据集,但是研究者对于数据的选取策略各不相同,最终选取得到的数据集之间的差异会导致网络可解释性结果的不同.

2、大多数研究者对于每一类别仅使用数百例样本,较小的训练数据规模会导致网络模型泛化能力不强,从而得到不稳定的可解释性分析结果.通常我们认为在大规模数据集上得到准确率越高的网络模型越容易得到准确的可解释性分析结果.

3、对于神经网络热力图法,可视化结果会以热力图的形式覆盖于输入图像上,对于重点脑区的划分并无明确边界,依赖于人工观察判断,对于研究者的医学知识有一定的要求且具有主观性,并最终导致了网络可视化结果的差异.

4、对于输入消融实验法,不同的脑模板对于大脑区域划分的策略各不相同;而基于脑模板人工选择的掩膜区域具有不可避免的随机性和主观性,都会导致最终可解释结果的差异.

可见在AD影像学分类领域,深度学习可解释性分析依然是一个研究难点.

5 总结与展望

本文针对AD影像学自动分类问题,调研了近年来被广泛研究与使用的深度学习方法,从不同数据模态以及迁移学习、集成学习、多任务学习不同方法等方面,全面讨论了文献中基于深度学习的AD影像学分类的研究进展.通过调研发现相比于使用最为广泛的sMRI,单独使用fMRI在AD影像分类中没有表现出明显的优势;随着对AD患者脑白质变化的理解加深,DTI图像在AD影像分类中发挥了更加重要的作用;在模型中增加迁移学习方法可以加快模型的训练,提高不同MCI分类任务的准确率;应用集成学习方法可以充分利用不同投影面的互补信息,从而提升模型的可靠性;多任务学习可以同时进行两个相关任务,在提高效率的同时减少模型的过拟合.在此基础上,本文进一步对网络模型的可解释性问题进行了深入探讨,从仅有的一些文献报道可知,探究模型可解释性方法并在此基础上寻找与AD影像学分类具有关联性的脑区将是未来的研究重点.表2总结了本文中提及的各AD影像分类模型,表3总结了文中提及的深度学习网络可解释性方法,以及已发现的与AD影像学分类相关的脑区.

表2   各分类模型综合比较

Table 2  Performance comparison of different models

第一作者 | 分类任务 | 数据模态 | 数据集 | 测试方法 | 分类模型 | 准确率%
郁松[21] | AD/HC | sMRI | 1015 HC,575 AD,1709 MCI | 训练集60%,验证集20%,测试集20% | 3D ResNet-101 | 97.425
Parmar[24] | AD/HC | fMRI | 30 AD,30 HC | 训练集60%,验证集20%,测试集20% | 3D CNN | 94.58
Bi[19] | AD/HC;MCI/HC;AD/MCI;AD/HC/MCI | fMRI | 118 AD,295 HC,335 MCI | 五折交叉验证 | RNN,ELM | 91.3(AUC);80.5(AUC);82.4(AUC);84.7(AUC)
Yiğit[20] | AD/HC;MCI/HC | sMRI | 训练集:30 AD,316 HC,70 MCI;测试集:46 AD+MCI,23 HC | / | 2D CNN | 83;82
Punjabi[33] | AD/HC | PET;sMRI+PET | 共1299 | / | 3D CNN | 85.15;92.34
Zhang[49] | AD/HC;pMCI/sMCI | sMRI | 200 AD,231 HC,172 pMCI,232 sMCI | 五折交叉验证 | 3D ResAttNet | 91.3;82.1
Qiu[52] | AD/HC | sMRI+性别+年龄+MMSE | / | 训练集60%,验证集20%,测试集20% | FCN | 96.8
Deng[28] | AD/HC | DTI+sMRI | 98 AD,100 HC | 训练集60%,验证集20%,测试集20% | CNN | 90.00
Marzban[29] | AD/HC;MCI/HC | DTI+sMRI | 115 AD,185 HC,106 MCI | 十折交叉验证 | CNN | 93.5;79.6
Kang[30] | EMCI/HC | DTI+sMRI | 50 HC,70 EMCI | 训练集80%,测试集20% | VGG-16,LASSO | 94.2
Kwak[54] | AD/HC;sMCI/pMCI | sMRI | 110 AD,109 HC,34 pMCI,81 sMCI | 五折交叉验证 | DenseNet | 93.75;73.90
Ju[16] | MCI/HC | fMRI | 91 MCI,79 HC | 十折交叉验证 | AE | 86.47
Baydargil[17] | AD/MCI/HC | PET | 141 AD,105 MCI,70 HC | 训练集80%,验证集10%,测试集10% | CAE | 98.67
Guan[51] | AD/HC;pMCI/sMCI | sMRI | 384 AD,392 HC,401 sMCI,197 pMCI | 训练集90%,测试集10% | ResNet、pABN | 90.7;79.3
Li[34] | AD/MCI/HC | MRI | 237 AD,288 MCI,262 HC | 训练集65%,测试集35% | ResNet-200(迁移学习) | 83
Basaia[40] | AD/HC;sMCI/pMCI | sMRI | 294 AD,352 HC,253 pMCI,510 sMCI | 训练集90%,测试集10% | 3D FCN(迁移学习) | 99.2;75.1
孔伶旭[36] | EMCI/HC | fMRI | 32 HC,32 EMCI | 五折交叉验证 | Mobilenet(迁移学习) | 73.67
Bin[37] | AD/HC | sMRI | 100 AD,100 HC | 五折交叉验证 | Inception-v3(迁移学习) | 99.45
Massalimova[27] | AD/MCI/HC | DTI | 训练集:59 AD,308 HC,7 MCI;测试集:16 AD,74 HC,1 MCI | / | ResNet-18(迁移学习) | 97
Mehmood[38] | AD/HC;EMCI/LMCI | sMRI | 75 AD,85 HC,70 EMCI,70 LMCI | 训练集64%,验证集16%,测试集20% | VGG-19(迁移学习) | 95.33;83.72
Raju[50] | 非常轻度痴呆/轻度痴呆/中度痴呆/HC(四分类) | sMRI | 训练集:1013非常轻度痴呆,896轻度痴呆,64中度痴呆,3200 HC;测试集:334非常轻度痴呆,139轻度痴呆,10中度痴呆,530 HC | / | VGG-16(迁移学习) | 99
Lian[39] | AD/HC;sMCI/pMCI | sMRI | 数据集1:199 AD,229 HC,167 pMCI,226 sMCI;数据集2:159 AD,200 HC,38 pMCI,239 sMCI | 两个数据集间交叉验证 | H-FCN(迁移学习) | 90.3;80.9
Oh[18] | AD/HC;sMCI/pMCI | sMRI | 198 AD,230 HC,166 pMCI,101 sMCI | 五折交叉验证 | 3D CNN,ICAE(迁移学习) | 88.6;73.95
金祝新[53] | AD/MCI;MCI/HC;AD/HC | sMRI | 267 AD,574 HC,446 MCI | 训练集90%,测试集10% | 3D CNN(迁移学习) | 94.6;92.5;90.9
曾安[42] | AD/HC;pMCI/HC;pMCI/sMCI | sMRI | 137 AD,162 HC,76 pMCI,134 sMCI | 五折交叉验证 | 2D CNN(集成学习) | 81;79;62
Bi[22] | AD/HC;MCI/HC;AD/MCI;AD/MCI/HC | sMRI | 243 AD,307 HC,525 MCI | / | PCANet,k-means(集成学习) | 89.15;92.6;97.01;91.25
Choi[41] | AD/HC | sMRI | 715 AD,335 HC,305 MCI | 训练集60%,验证集20%,测试集20% | VGG-16,GoogLeNet,AlexNet(集成学习) | 93.84
Venugopalan[55] | AD/HC | sMRI+电子病历+基因数据 | 共220 | 十折交叉验证 | AE,CNN,随机森林(集成学习) | 88
Zeng[46] | sMCI/pMCI;AD/HC;sMCI/HC;pMCI/HC;AD/sMCI;AD/pMCI | sMRI | 92 AD,92 HC,92 sMCI,95 pMCI | 训练集70%,测试集30% | PCA,DBN(多任务) | 87.78;98.62;92.31;96.67;99.62;91.89
Spasov[43] | sMCI/pMCI | sMRI | 192 AD,184 HC,409 MCI | 十折交叉验证 | CNN(多任务) | 86
Liu[48] | AD/HC;MCI/HC | sMRI | 97 AD,11 HC,233 MCI | 五折交叉验证 | V-Net,DenseNet(多任务) | 88.9;76.2
Abuhmed[15] | AD/MCI/HC | PET+sMRI+神经心理学数据+神经病理学数据+认知评分 | 共1371 | 十折交叉验证 | BiLSTM,随机森林(多任务) | 84.95



表3   可解释性方法及AD相关脑区

Table 3  Comparison of different interpretation methods and AD-related brain regions

第一作者 | 任务 | 分类模型 | 准确率% | 解释性方法 | 脑区
Guan[51] | AD/HC;sMCI/pMCI | ResNet、pABN | 90.7;79.3 | CAM | 海马体、杏仁核、脑室、额叶、颞下回、颞上沟、顶枕沟、外侧裂
Zhang[49] | AD/HC | 3D-ResAttNet | 91.3 | Grad-CAM | 海马体、侧脑室、大部分皮质
Raju[50] | 轻度痴呆/非常轻度痴呆/中度痴呆/HC(四分类) | VGG-16(迁移学习) | 99 | Grad-CAM | 海马体、杏仁核、顶叶
Oh[18] | AD/HC;sMCI/pMCI | 3D-CNN,ICAE | 86.6;73.95 | CSV | AD/HC:内侧颞叶周围、左侧海马体;sMCI/pMCI:左侧杏仁核、角回、楔前回
Qiu[52] | AD/HC | FCN | 96.8 | 基于斑块生成热力图 | 海马、中额叶、杏仁核、颞叶
金祝新[53] | AD/HC | 3D-CNN(迁移学习) | 90.9 | 输入添加遮挡块 | 内侧颞叶、海马体
Kwak[54] | sMCI/pMCI | DenseNet | 73.90 | 输入添加遮挡块 | 海马、梭状回、颞下回、楔前叶
Venugopalan[55] | AD/HC | AE,CNN,随机森林 | 88 | 特征屏蔽 | 海马体、杏仁核
Shahamat[56] | AD/HC | 3D-CNN | 85 | 遗传算法选取脑模板 | 左侧枕叶、左侧颞梭状皮层、右侧楔皮层、右额中回、右颞中回



对于AD影像学分类及其模型可解释性研究,未来有以下发展方向:(1)探索更多对AD分类有效的数据模态,以及更多基于多模态数据融合分类的方法;(2)针对各个医疗机构成像条件不同带来的数据差异,进行更多跨数据集的模型测试用于优化模型的泛化能力;(3)在AD与HC分类问题的基础上,改进不同类型MCI分类或多分类任务的准确度;(4)针对AD影像学分类相关任务的深度学习模型,进行更多的可解释性研究,并发现更多可靠的与AD疾病相关的脑区.

近几年,大脑年龄的预测逐渐变成了研究的热点,而AD是一种与年龄高度相关的疾病,Beheshti等[57]整合了15种脑龄预测模型,发现在AD组中的测试误差明显高于MCI组,MCI组明显高于HC组,可见患者患病程度越高,越会导致脑龄预测模型高估其大脑年龄.有些研究者针对脑龄预测与AD影像分类之间的关联性进行探索.如Bashyam等[58]使用正常人的sMRI图像训练了一个包含有Inception与ResNet模块的CNN网络,在测试集上MAE为3.701,将脑龄预测模型的权重值迁移学习到AD与HC分类模型中,实验结果表明在小样本情况下,该迁移学习模型的性能远远高于基于ImageNet迁移学习的分类模型,可见脑龄预测与AD影像分类任务有着极高的相关性.探索脑龄预测任务与AD影像学分类任务的关联性或许也可以成为未来的一个研究方向.

随着AD影像分类模型准确性及泛化能力的提高,基于深度学习的AD影像分类模型未来有望进入临床,在AD疾病早期筛查中发挥重要作用.对于脑龄预测及神经疾病的相关性的研究以及深度学习模型可解释性的研究,可以提升研究者对于深度学习分类模型的信心,进一步帮助医生更加深入地了解AD疾病.

利益冲突

附录

表A1   中英文全称及对应缩写表

Table A1  Chinese and English full names for corresponding abbreviations

缩写 | 中文全称 | 英文全称
AD | 阿尔兹海默症 | Alzheimer disease
NFT | 神经原纤维缠结 | neurofibrillary tangles
CDR | 临床痴呆分级 | clinical dementia rating
MMSE | 简易智力状态检查量表 | mini-mental state examination
MCI | 轻度认知障碍 | mild cognitive impairment
EMCI | 早期轻度认知障碍 | early mild cognitive impairment
LMCI | 晚期轻度认知障碍 | later mild cognitive impairment
sMCI | 稳定型轻度认知障碍 | stable mild cognitive impairment
pMCI | 渐进型轻度认知障碍 | progressive mild cognitive impairment
HC | 健康对照组 | healthy control
ADNI | 阿尔兹海默症神经影像学计划 | Alzheimer’s disease neuroimaging initiative
OASIS | 开放存取影像研究 | open access series of imaging studies
IXI | 从图像中提取信息数据集 | information eXtraction from images
sMRI | 结构磁共振成像 | structural magnetic resonance imaging
PET | 正电子发射型计算机断层显像 | positron emission computed tomography
DTI | 扩散张量成像 | diffusion tensor imaging
fMRI | 功能磁共振成像 | functional magnetic resonance imaging
FA | 各向异性分数 | fractional anisotropy
MD | 平均扩散率 | mean diffusivity
RD | 径向扩散系数 | radial diffusivity
DC | 中心度特征 | degree centrality
ROI | 感兴趣区域 | region of interest
AAL | 自动解剖标记 | automated anatomical labeling
TP | 真阳性 | true positive
FP | 假阳性 | false positive
TN | 真阴性 | true negative
FN | 假阴性 | false negative
ROC | 接受者操作特征曲线 | receiver operating characteristic curve
AUC | 曲线下面积 | area under curve
CNN | 卷积神经网络 | convolutional neural networks
ILSVRC | ImageNet大规模视觉识别挑战赛 | ImageNet large scale visual recognition challenge
RNN | 循环神经网络 | recurrent neural network
BRNN | 双向循环神经网络 | bidirectional recurrent neural network
AE | 自动编码机 | autoencoder
CAE | 卷积自动编码机 | convolutional autoencoder
ICAE | 带Inception模块的卷积自动编码机 | inception modal based convolutional autoencoder
ResNet | 残差网络 | residual network
DenseNet | 密集连接网络 | densely connected convolutional network
pABN | 并行注意力增强双线性网络 | parallel attention-augmented bilinear network
k-means | k均值聚类 | k-means clustering algorithm
PCA | 主成分分析法 | principal components analysis
ELM | 极限学习机 | extreme learning machine
H-FCN | 分层全卷积神经网络 | hierarchical fully convolutional network
FCN | 全卷积神经网络 | fully convolutional network
ADAS-cog | 阿尔兹海默症评定量表-认知分量表 | Alzheimer’s disease assessment scale
BiLSTM | 双向长短期记忆网络 | bidirectional long short-term memory
MAE | 平均绝对误差 | mean absolute error
CAM | 类激活映射 | class activation mapping
Grad-CAM | 梯度加权类激活映射 | gradient-weighted class activation mapping
GAP | 全局平均池化 | global average pooling
CSV | 类显著映射 | class saliency visualization



参考文献

[1] JACK Jr C R, BENNETT D A, BLENNOW K, et al. NIA-AA research framework: Toward a biological definition of Alzheimer’s disease[J]. Alzheimers Dement, 2018, 14(4): 535-562.
[2] SCHELTENS P, DE STROOPER B, KIVIPELTO M, et al. Alzheimer’s disease[J]. The Lancet, 2021, 397(10284): 1577-1590.
[3] ANDERSON N D. State of the science on mild cognitive impairment[J]. J Gerontol B Psychol Sci Soc Sci, 2020, 75(7): 1359-1360.
[4] MAGGIPINTO T, BELLOTTI R, AMOROSO N, et al. DTI measurements for Alzheimer’s classification[J]. Phys Med Biol, 2017, 62(6): 2361-2375.
[5] LAHMIRI S, SHMUEL A. Performance of machine learning methods applied to structural MRI and ADAS cognitive scores in diagnosing Alzheimer’s disease[J]. Biomed Signal Proces, 2019, 52: 414-419.
[6] ELSHATOURY H, AVOTS E, ANBARJAFARI G, et al. Volumetric histogram-based Alzheimer’s disease detection using support vector machine[J]. J Alzheimers Dis, 2019, 72(2): 515-524.
[7] UYSAL G, OZTURK M. Hippocampal atrophy based Alzheimer’s disease diagnosis via machine learning methods[J]. J Neurosci Meth, 2020, 337: 108669.
[8] CHU Y, XU W L. Review of early classification of Alzheimer’s disease based on computer-aided diagnosis technology[J]. Computer Engineering & Science, 2022, 44(5): 879-893.
楚阳, 徐文龙. 基于计算机辅助诊断技术的阿尔兹海默症早期分类研究综述[J]. 计算机工程与科学, 2022, 44(5): 879-893.
[9] YAO X F, YUAN Z B, BU X X. The early predication of Alzheimer's disease based on intelligent radiomics technology[J]. Sci Technol Rev, 2021, 39(20): 101-109.
姚旭峰, 袁增贝, 卜溪溪. 基于智能影像基因组学技术的阿尔茨海默病预测进展[J]. 科技导报, 2021, 39(20): 101-109.
[10] ZHU Y X, FENG W, GUO X H. Application of deep learning method in brain image of Alzheimer’s disease[J]. Med Recapitulate, 2019, 25(18): 3562-3566.
朱映璇, 冯巍, 郭秀花. 深度学习方法在阿尔茨海默病脑图像方面的应用进展[J]. 医学综述, 2019, 25(18): 3562-3566.
[11] TANVEER M, RICHHARIYA B, KHAN R U, et al. Machine learning techniques for the diagnosis of Alzheimer’s disease: a review[J]. ACM T Multim Comput, 2020, 16(1s): 30.
[12] FAN Z, XU F, QI X, et al. Classification of Alzheimer’s disease based on brain MRI and machine learning[J]. Neural Comput Appl, 2020, 32(7): 1927-1936.
[13] KARAMI V, NITTARI G, TRAINI E, et al. An optimized decision tree with genetic algorithm rule-based approach to reveal the brain’s changes during Alzheimer’s disease dementia[J]. J Alzheimers Dis, 2021, 84(4): 1577-1584.
[14] ALICKOVIC E, SUBASI A, ALZHEIMERS DIS N. Automatic detection of Alzheimer disease based on histogram and random forest[C]// Banja Luka, Bosnia and Herzegovina: International Conference on Medical and Biological Engineering in Bosnia and Herzegovina (CMBEBIH), 2019, 73: 91-96.
[15] ABUHMED T, EL-SAPPAGH S, ALONSO J M. Robust hybrid deep learning models for Alzheimer’s progression detection[J]. Knowl-Based Syst, 2021, 213: 106688.
[16] JU R H, HU C H, ZHOU P, et al. Early diagnosis of Alzheimer’s disease based on resting-state brain networks and deep learning[J]. IEEE ACM T Comput Bi, 2019, 16(1): 244-257.
[17] BAYDARGIL H B, PARK J S, KANG D Y, et al. Classification of Alzheimer’s disease using stacked sparse convolutional autoencoder[C]// Jeju, South Korea: 19th International Conference on Control, Automation and Systems (ICCAS), 2019: 891-895.
[18] OH K, CHUNG Y C, KIM K W, et al. Classification and visualization of Alzheimer’s disease using volumetric convolutional neural network and transfer learning[J]. Sci Rep, 2019, 9(1): 18150.
[19] BI X, ZHAO X G, HUANG H, et al. Functional brain network classification for Alzheimer’s disease detection with deep features and extreme learning machine[J]. Cogn Comput, 2020, 12(3): 513-527.
[20] YIĞIT A, ISIK Z. Applying deep learning models to structural MRI for stage prediction of Alzheimer’s disease[J]. Turk J Elec Eng & Comp Sci, 2020, 28(1): 196-210.
[21] YU S, LIAO W H. An Alzheimer’s disease classification algorithm based on 3D-ResNet[J]. Computer Engineering & Science, 2020, 42(6): 1068-1075.
郁松, 廖文浩. 基于3D-ResNet的阿尔兹海默症分类算法研究[J]. 计算机工程与科学, 2020, 42(6): 1068-1075.
[22] BI X L, LI S T, XIAO B, et al. Computer aided Alzheimer's disease diagnosis by an unsupervised deep learning technology[J]. Neurocomputing, 2020, 392: 296-304.
[23] GUO H B, ZHANG Y J. Resting state fMRI and improved deep learning algorithm for earlier detection of Alzheimer’s disease[J]. IEEE Access, 2020, 8: 115383-115392.
[24] PARMAR H S, NUTTER B, LONG R, et al. Deep learning of volumetric 3D CNN for fMRI in Alzheimer’s disease classification[C]// Houston, TX: SPIE Medical Imaging Conference - Biomedical Applications in Molecular, Structural, and Functional Imaging, 2020, 11317.
[25] REGINOLD W, LUEDKE A C, ITORRALBA J, et al. Altered superficial white matter on tractography MRI in Alzheimer’s disease[J]. Dementia and Geriatric Cognitive Disorders Extra, 2016, 6(2): 233-241.
[26] BIGHAM B, ZAMANPOUR S A, ZEMORSHIDI F, et al. Identification of superficial white matter abnormalities in Alzheimer’s disease and mild cognitive impairment using diffusion tensor imaging[J]. J Alzheimers Dis Rep, 2020, 4(1): 49-59.
[27] MASSALIMOVA A, VAROL H A. Input agnostic deep learning for Alzheimer’s disease classification using multimodal MRI images[C]// 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE EMBC), 2021: 2875-2878.
[28] DENG L, WANG Y J. Hybrid diffusion tensor imaging feature-based AD classification[J]. J X-Ray Sci Technol, 2021, 29(1): 151-169.
[29] MARZBAN E N, ELDEIB A M, YASSINE I A, et al. Alzheimer's disease diagnosis from diffusion tensor images using convolutional neural networks[J]. PLoS ONE, 2020, 15(3): e0230409.
[30] KANG L, JIANG J W, HUANG J J, et al. Identifying early mild cognitive impairment by multi-modality MRI-based deep learning[J]. Front Aging Neurosci, 2020, 12: 206.
[31] GAO F. Integrated positron emission tomography/magnetic resonance imaging in clinical diagnosis of Alzheimer’s disease[J]. Eur J Radiol, 2021, 145: 1100170.
[32] RABINOVICI G D, GATSONIS C, APGAR C, et al. Association of amyloid positron emission tomography with subsequent change in clinical management among medicare beneficiaries with mild cognitive impairment or dementia[J]. JAMA-J Am Med Assoc, 2019, 321(13): 1286-1294.
[33] PUNJABI A, MARTERSTECK A, WANG Y R, et al. Neuroimaging modality fusion in Alzheimer’s classification using convolutional neural networks[J]. PLoS ONE, 2019, 14(12): e0225759.
[34] LI Y, DING W, WANG X, et al. Alzheimer’s disease classification model based on MED-3D transfer learning[C]// Proceedings of the 2nd International Symposium on Artificial Intelligence for Medicine Sciences, 2021: 394-398.
[35] CHEN S, MA K, ZHENG Y. Med3d: Transfer learning for 3d medical image analysis[J]. arXiv preprint, arXiv:1904.00625, 2019.
[36] KONG L X, WU H F, ZENG Y, et al. Transfer-learning feature extraction from rs-fMRI for classification of early mild cognitive impairment[J]. CAAI Transactions on Intelligent Systems, 2021, 16(4): 662-672.
孔伶旭, 吴海锋, 曾玉, 等. 迁移学习特征提取的rs-fMRI早期轻度认知障碍分类[J]. 智能系统学报, 2021, 16(4): 662-672.
[37] BIN TUFAIL A, MA Y K, ZHANG Q N. Binary classification of Alzheimer’s disease using sMRI imaging modality and deep learning[J]. J Digit Imaging, 2020, 33(5): 1073-1090.
[38] MEHMOOD A, YANG S Y, FENG Z X, et al. A transfer learning approach for early diagnosis of Alzheimer’s disease on MRI images[J]. Neuroscience, 2021, 460: 43-52.
[39] LIAN C F, LIU M X, ZHANG J, et al. Hierarchical fully convolutional network for joint atrophy localization and Alzheimer’s disease diagnosis using structural MRI[J]. IEEE Trans Pattern Anal Mach Intell, 2020, 42(4): 880-893.
[40] BASAIA S, AGOSTA F, WAGNER L, et al. Automated classification of Alzheimer's disease and mild cognitive impairment using a single MRI and deep neural networks[J]. Neuroimage-Clin, 2019, 21: 101645.
[41] CHOI J Y, LEE B H. Combining of multiple deep networks via ensemble generalization loss, based on MRI images, for Alzheimer’s disease classification[J]. IEEE Signal Process Lett, 2020, 27: 206-210.
[42] ZENG A, JIA L F, PAN D, et al. Early prognosis of Alzheimer’s disease based on convolutional neural networks and ensemble learning[J]. J Biomed Eng, 2019, 36(5): 711-719.
曾安, 贾龙飞, 潘丹, 等. 基于卷积神经网络和集成学习的阿尔茨海默症早期诊断[J]. 生物医学工程学杂志, 2019, 36(5): 711-719.
[43] SPASOV S, PASSAMONTI L, DUGGENTO A, et al. A parameter-efficient deep learning approach to predict conversion from mild cognitive impairment to Alzheimer’s disease[J]. Neuroimage, 2019, 189: 276-287.
[44] AMIEVA H, OUVRARD C, GIULIOLI C, et al. Self-reported hearing loss, hearing aids, and cognitive decline in elderly adults: A 25-year study[J]. J Am Geriatr Soc, 2015, 63(10): 2099-2104.
[45] SARATXAGA C L, MOYA I, PICON A, et al. MRI deep learning-based solution for Alzheimer’s disease prediction[J]. J Pers Med, 2021, 11(9): 902.
[46] ZENG N Y, LI H, PENG Y H. A new deep belief network-based multi-task learning for diagnosis of Alzheimer’s disease[J]. Neural Comput Appl, 2021. DOI: 10.1007/s00521-021-06149-6.
[47] KATABATHULA S, WANG Q Y, XU R. Predict Alzheimer’s disease using hippocampus MRI data: a lightweight 3D deep convolutional network model with visual and global shape representations[J]. Alzheimers Res Ther, 2021, 13: 104.
[48] LIU M H, LI F, YAN H, et al. A multi-model deep convolutional neural network for automatic hippocampus segmentation and classification in Alzheimer's disease[J]. Neuroimage, 2020, 208: 116459.
[49] ZHANG X, HAN L, ZHU W, et al. An explainable 3D residual self-attention deep neural network for joint atrophy localization and Alzheimer’s disease diagnosis using structural MRI[J]. IEEE J Biomed Health Inform, 2022, 26(11): 5289-5297.
[50] RAJU M, THIRUPALANI M, VIDHYABHARATHI S, et al. Deep learning based multilevel classification of Alzheimer’s disease using MRI scans[J]. IOP Conf Ser: Mater Sci Eng, 2021, 1084: 012017.
[51] GUAN H, WANG C Y, CHENG J, et al. A parallel attention-augmented bilinear network for early magnetic resonance imaging-based diagnosis of Alzheimer’s disease[J]. Hum Brain Mapp, 2022, 43(2): 760-772.
[52] QIU S R, JOSHI P S, MILLER M I, et al. Development and validation of an interpretable deep learning framework for Alzheimer’s disease classification[J]. Brain, 2020, 143(6): 1920-1933.
[53] 金祝新. 迁移学习与可视化辅助的阿尔茨海默症早期诊断[D]. 杭州: 杭州电子科技大学, 2019.
[54] KWAK K, STANFORD W, DAYAN E, et al. Identifying the regional substrates predictive of Alzheimer’s disease progression through a convolutional neural network model and occlusion[J]. Hum Brain Mapp, 2022, 43(18): 5509-5519.
[55] VENUGOPALAN J, TONG L, HASSANZADEH H R, et al. Multimodal deep learning models for early detection of Alzheimer’s disease stage[J]. Sci Rep, 2021, 11(1): 3254.
[56] SHAHAMAT H, ABADEH M S. Brain MRI analysis using a deep learning based evolutionary approach[J]. Neural Netw, 2020, 126: 218-234.
[57] BEHESHTI I, GANAIE M A, PALIWAL V, et al. Predicting brain age using machine learning algorithms: A comprehensive evaluation[J]. IEEE J Biomed Health Inform, 2022, 26(4): 1432-1440.
[58] BASHYAM V M, ERUS G, DOSHI J, et al. MRI signatures of brain age and disease over the lifespan based on a deep brain network and 14 468 individuals worldwide[J]. Brain, 2020, 143(7): 2312-2324.