
dc.contributor.author: Lebedev, Alexander (en_US)
dc.contributor.author: Westman, Eric (en_US)
dc.contributor.author: Westen, G. J. P. Van (en_US)
dc.contributor.author: Kramberger, M. G. (en_US)
dc.contributor.author: Lundervold, Arvid (en_US)
dc.contributor.author: Aarsland, Dag (en_US)
dc.contributor.author: Soininen, H. (en_US)
dc.contributor.author: Kłoszewska, I. (en_US)
dc.contributor.author: Mecocci, P. (en_US)
dc.contributor.author: Tsolaki, M. (en_US)
dc.contributor.author: Vellas, B. (en_US)
dc.contributor.author: Lovestone, S. (en_US)
dc.contributor.author: Simmons, Andrew (en_US)
dc.date.accessioned: 2014-09-15T11:28:28Z
dc.date.available: 2014-09-15T11:28:28Z
dc.date.issued: 2014 (eng)
dc.identifier.issn: 2213-1582
dc.identifier.uri: https://hdl.handle.net/1956/8470
dc.description.abstract: Computer-aided diagnosis of Alzheimer's disease (AD) is a rapidly developing field of neuroimaging with strong potential to be used in practice. In this context, assessment of models' robustness to noise and imaging protocol differences, together with post-processing and tuning strategies, are key tasks to be addressed in order to move towards successful clinical applications. In this study, we investigated the efficacy of Random Forest classifiers trained using different structural MRI measures, with and without neuroanatomical constraints, in the detection and prediction of AD in terms of accuracy and between-cohort robustness. From the ADNI database, 185 AD patients and 225 healthy controls (HC) were randomly split into training and testing datasets. 165 subjects with mild cognitive impairment (MCI) were distributed according to the month of conversion to dementia (4-year follow-up). Structural 1.5 T MRI scans were processed using FreeSurfer segmentation and cortical reconstruction. Using the resulting output, AD/HC classifiers were trained. Training included model tuning and performance assessment using out-of-bag estimation. Subsequently, the classifiers were validated on the AD/HC test set and for the ability to predict MCI-to-AD conversion. Models' between-cohort robustness was additionally assessed using the AddNeuroMed dataset, acquired with harmonized clinical and imaging protocols. In the ADNI set, the best AD/HC sensitivity/specificity (88.6%/92.0% on the test set) was achieved by combining cortical thickness and volumetric measures. The Random Forest model resulted in significantly higher accuracy compared to the reference classifier (linear Support Vector Machine). The models trained using parcelled and high-dimensional (HD) input demonstrated equivalent performance, but the former was more efficient in terms of computation, memory, and time costs. The sensitivity/specificity for detecting MCI-to-AD conversion (but not AD/HC classification performance) was further improved from 79.5%/75% to 83.3%/81.3% by combining morphometric measurements with ApoE genotype and demographics (age, sex, education). When applied to the independent AddNeuroMed cohort, the best ADNI models produced equivalent performance without a substantial drop in accuracy, suggesting good robustness, sufficient for future clinical implementation. (en_US)
dc.language.iso: eng (eng)
dc.publisher: Elsevier (eng)
dc.relation.ispartof: Cognitive impairment in neurodegenerative diseases: insights from computational neuroimaging (http://hdl.handle.net/1956/8473) (eng)
dc.rights: Attribution-NonCommercial-ShareAlike CC BY-NC-SA (eng)
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/ (eng)
dc.title: Random Forest ensembles for detection and prediction of Alzheimer's disease with a good between-cohort robustness (en_US)
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion (en_US)
dc.rights.holder: Copyright 2014 The Authors
dc.identifier.doi: https://doi.org/10.1016/j.nicl.2014.08.023
dc.identifier.cristin: 1222982
dc.source.journal: NeuroImage: Clinical
dc.source.40: 6
dc.source.pagenumber: 115-125
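
The abstract above summarizes a pipeline in which FreeSurfer-derived morphometric features (optionally combined with ApoE genotype and demographics) are fed into a Random Forest classifier tuned via out-of-bag estimation and compared against a linear SVM reference. The sketch below is a minimal illustration of that kind of workflow using scikit-learn; it is not the authors' code, and the synthetic feature matrix, hyperparameter grid, and variable names are placeholder assumptions.

# Illustrative sketch only (assumed workflow, not the published implementation):
# Random Forest AD/HC classifier tuned by out-of-bag (OOB) error, evaluated on a
# held-out test set, and compared against a linear SVM baseline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# X: subjects x features (e.g., FreeSurfer cortical thickness and volumes,
# optionally ApoE genotype, age, sex, education); y: 1 = AD, 0 = HC.
rng = np.random.default_rng(0)
X = rng.normal(size=(410, 100))           # placeholder for real morphometric data
y = rng.integers(0, 2, size=410)          # placeholder diagnostic labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Tune forest size and features-per-split using the OOB estimate,
# i.e. without holding out a separate validation set.
best_model, best_oob = None, -np.inf
for n_trees in (100, 500, 1000):
    for max_feat in ("sqrt", 0.2):
        rf = RandomForestClassifier(
            n_estimators=n_trees, max_features=max_feat,
            oob_score=True, random_state=0, n_jobs=-1).fit(X_train, y_train)
        if rf.oob_score_ > best_oob:
            best_model, best_oob = rf, rf.oob_score_

# Test-set sensitivity (AD recall) and specificity (HC recall).
pred = best_model.predict(X_test)
sensitivity = recall_score(y_test, pred, pos_label=1)
specificity = recall_score(y_test, pred, pos_label=0)

# Linear SVM as the reference classifier mentioned in the abstract.
svm = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y_train)
print(f"RF  sens={sensitivity:.3f} spec={specificity:.3f} (OOB={best_oob:.3f})")
print(f"SVM acc={svm.score(X_test, y_test):.3f}")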

