Comparison of seven modelling algorithms for γ-aminobutyric acid–edited proton magnetic resonance spectroscopy
Craven, Alexander R.; Bhattacharyya, Pallab K.; Clarke, William T.; Dydak, Ulrike; Edden, Richard A.E.; Ersland, Lars; Mandal, Pravat K.; Mikkelsen, Mark; Murdoch, James B.; Near, Jamie; Rideaux, Reuben; Shukla, Deepika; Wang, Min; Wilson, Martin; Zöllner, Helge J.; Hugdahl, Kenneth Jan; Oeltzschner, Georg
Journal article, Peer reviewed
Published version
Date: 2022
Abstract
Edited MRS sequences are widely used for studying γ-aminobutyric acid (GABA) in the human brain. Several algorithms are available for modelling these data, deriving metabolite concentration estimates through peak fitting or a linear combination of basis spectra. The present study compares seven such algorithms, using data obtained in a large multisite study. GABA-edited (GABA+, TE = 68 ms MEGA-PRESS) data from 222 subjects at 20 sites were processed via a standardised pipeline, before modelling with FSL-MRS, Gannet, AMARES, QUEST, LCModel, Osprey and Tarquin, using standardised vendor-specific basis sets (for GE, Philips and Siemens) where appropriate. After referencing metabolite estimates (to water or creatine), systematic differences in scale were observed between datasets acquired on different vendors' hardware, and these differences presented consistently across algorithms. Scale differences between algorithms were also observed. Using the correlation between metabolite estimates and voxel tissue fraction as a benchmark, most algorithms were found to be similarly effective in detecting differences in GABA+. An intraclass correlation analysis across all algorithms showed single-rater consistency for GABA+ estimates of around 0.38, indicating moderate agreement. Upon inclusion of a basis set component explicitly modelling the macromolecule signal underlying the observed 3.0 ppm GABA peaks, single-rater consistency improved to 0.44. Correlation between discrete pairs of algorithms varied, and was concerningly weak in some cases. Our findings highlight the need for consensus on appropriate modelling parameters across different algorithms, and for detailed reporting of the parameters adopted in individual studies, to ensure reproducibility and meaningful comparison of outcomes between studies.
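The single-rater consistency figure quoted above corresponds to an ICC(3,1)-type intraclass correlation, computed from a two-way subjects × algorithms layout with the algorithm treated as a fixed effect. The sketch below, which is illustrative and not drawn from the study's actual analysis code, shows how that quantity can be derived from the two-way ANOVA mean squares using only NumPy; the toy matrices in the usage note are hypothetical, not study data.

```python
import numpy as np

def icc_consistency(X):
    """ICC(3,1): single-rater consistency for a two-way layout.

    X: array-like of shape (n_subjects, n_raters), e.g. one GABA+
    estimate per subject (rows) per modelling algorithm (columns).
    Consistency ignores fixed offsets between raters, so two
    algorithms that differ only by a constant scale shift can
    still agree perfectly.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Two-way ANOVA sums of squares.
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_tot = ((X - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

For example, `icc_consistency([[1, 2], [3, 4], [5, 6]])` returns 1.0, because the second "rater" differs from the first only by a constant offset, which the consistency formulation discounts; shuffling the second column so the ordering disagrees drives the value toward or below zero.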