Application of fully automatic hippocampal subfield segmentation to standard resolution T1 MR imaging in Alzheimer's disease

Joules R 1, Wolz R 1

1 IXICO plc, London, UK

We present a snapshot of the development of a segmentation workflow for extracting hippocampal subfield volumes from standard resolution (1 mm³) T1-weighted MRI.

Alternative approaches employ high resolution and/or multi-modal data; here we investigate a standard resolution, single-modality framework applied to the ADNI cohort, comparing against the commonly employed ASHS tool [1] and assessing the relation of subfield volumes to diagnostic group and cognitive score.

The proposed method was applied to 947 subjects (CN: 393, EMCI: 285, MCI: 111, LMCI: 158) from the ADNI dataset. Reference hippocampal subfield volumes were computed from a multi-modal segmentation of T1W plus high resolution T2W MRI with the ASHS pipeline. MMSE scores were available for a subset of the tested data.

ASHS and the proposed method employ similar, but subtly different, parcellation schemes; nevertheless, their ICV-corrected volumes are significantly correlated.

Partial correlations (Pearson's), controlling for age and sex, show significant (p << 0.01) correlations between MMSE and ICV-corrected volumes for all subfields; a sketch of this computation follows the table below.

Partial correlation with MMSE (N = 342), r [95% CI]:

| Method   | CA1-3              | DG                 | Subiculum          | Whole Hippo        |
|----------|--------------------|--------------------|--------------------|--------------------|
| Proposed | 0.357 [0.26, 0.45] | 0.522 [0.44, 0.60] | 0.473 [0.39, 0.55] | 0.468 [0.38, 0.55] |
| ASHS     | 0.300 [0.20, 0.39] | 0.403 [0.31, 0.49] | 0.319 [0.22, 0.41] | 0.355 [0.26, 0.44] |
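As an illustration, a partial correlation of this kind can be computed by residualising both MMSE and volume on the covariates before taking Pearson's r. The sketch below is a minimal Python version under assumed column names (sex coded numerically), not the actual analysis code.

```python
import numpy as np
import pandas as pd
from scipy import stats

def partial_corr(df, x, y, covars):
    """Pearson partial correlation of x and y, controlling for covariates."""
    # Design matrix: intercept plus covariate columns (must be numeric)
    X = np.column_stack([np.ones(len(df))] + [df[c].values for c in covars])
    # Residualise x and y on the covariates via least squares
    rx = df[x].values - X @ np.linalg.lstsq(X, df[x].values, rcond=None)[0]
    ry = df[y].values - X @ np.linalg.lstsq(X, df[y].values, rcond=None)[0]
    return stats.pearsonr(rx, ry)

# Hypothetical usage with assumed column names:
# df = pd.read_csv("subfield_volumes_mmse.csv")
# r, p = partial_corr(df, x="dg_volume_icv", y="mmse", covars=["age", "sex"])
```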

Expected trends of volume decline between arms were observed with both methods.

Logistic regressions were performed between ICV-corrected regional subfield volumes and diagnostic labels, adjusting for age and sex, demonstrating that subfield volume is discriminative between arms; a sketch of one pairwise contrast follows the table below.

Pairwise logistic regression, t-statistic (p-value) for the volume term. Group sizes: CN = 393, eMCI = 285, MCI = 111, lMCI = 158.

| Region      | Method   | CN-eMCI       | CN-MCI       | CN-lMCI        |
|-------------|----------|---------------|--------------|----------------|
| Whole Hippo | Proposed | -2.15 (0.032) | 2.28 (0.022) | 2.35 (0.019)   |
| Whole Hippo | ASHS     | 0.02 (0.998)  | 2.56 (0.011) | 4.70 (<<0.001) |
| CA          | Proposed | -1.80 (0.007) | 1.64 (0.101) | 0.60 (0.488)   |
| CA          | ASHS     | 0.49 (0.624)  | 2.50 (0.012) | 4.22 (<<0.001) |
| DG          | Proposed | -0.62 (0.535) | 2.86 (0.004) | 5.31 (<<0.001) |
| DG          | ASHS     | 0.79 (0.429)  | 2.93 (0.003) | 5.63 (<<0.001) |
| SUB         | Proposed | -3.15 (0.002) | 2.65 (0.008) | 2.98 (0.003)   |
| SUB         | ASHS     | -2.92 (0.003) | 1.15 (0.249) | 3.03 (0.002)   |
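A minimal sketch of one such contrast, assuming a statsmodels fit with hypothetical column names ("dx", "age", numeric "sex"); the reported t-statistic corresponds to the Wald statistic on the volume term.

```python
import pandas as pd
import statsmodels.api as sm

def group_separation(df, volume_col, groups=("CN", "LMCI")):
    """Fit label ~ volume + age + sex for one pairwise diagnostic contrast."""
    sub = df[df["dx"].isin(groups)].copy()
    y = (sub["dx"] == groups[1]).astype(int)       # 0 = CN, 1 = impaired arm
    X = sm.add_constant(sub[[volume_col, "age", "sex"]])
    fit = sm.Logit(y, X).fit(disp=0)
    # Wald statistic and p-value for the volume coefficient
    return fit.tvalues[volume_col], fit.pvalues[volume_col]

# Hypothetical usage:
# t, p = group_separation(df, "sub_volume_icv", groups=("CN", "LMCI"))
```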

Preliminary results indicate comparable separation of CN and MCI groups between the proposed method, using T1W MRI alone, and an accepted standard method, ASHS, which requires a high-resolution T2W image. Such images may not be available in legacy data and, these results suggest, may not be required to obtain discriminative volumes. Both methods showed increased sensitivity with subfield volumes compared to whole hippocampal volume. Differences in parcellation schemes may explain the observed discrepancies in regional results.

Continued development will focus on improving segmentation accuracy and assessing the clinical utility of hippocampal subfield volumes derived from T1W data alone, without the clear boundaries provided by high resolution T2W MRI.

This work was supported by funding from Innovate UK.

[1] Yushkevich PA, Pluta J, Wang H, Ding SL, Xie L, Gertje E, Mancuso L, Kliot D, Das SR, Wolk DA. "Automated Volumetry and Regional Thickness Analysis of Hippocampal Subfields and Medial Temporal Cortical Structures in Mild Cognitive Impairment". Human Brain Mapping, 2014, 36(1), 258-287.
[2] MNI-HISUB25 dataset: http://www.nitrc.org/projects/mni-hisub25
[3] Tustison NJ, Avants BB, Cook PA, Zheng Y, Egan A, Yushkevich PA, Gee JC. "N4ITK: Improved N3 Bias Correction". IEEE Trans Med Imaging, 2010, 29(6), 1310-1320.
[4] Isensee F, Petersen J, Klein A, Zimmerer D, Jaeger PF, Kohl S, Wasserthal J, Koehler G, Norajitra T, Wirkert S, Maier-Hein KH. "nnU-Net: Self-adapting Framework for U-Net-based Medical Image Segmentation". arXiv preprint arXiv:1809.10486, 2018.

A publicly available training dataset of 25 subjects with T1W MRI and manually segmented labels, parcellating the hippocampus into CA1-3, CA4-DG, and subiculum, was employed in this work [2].

A template mask was generated for each ROI in MNI space to estimate bounding box placements during segmentation of unseen data.

T1-weighted images are skull stripped, corrected for intensity inhomogeneity with N4 [3], and affinely transformed to MNI space at 1 mm isotropic resolution.
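As an illustration, the N4 step [3] is available in SimpleITK; the sketch below assumes a precomputed brain mask and leaves the product-specific skull stripping and MNI registration as comments.

```python
import SimpleITK as sitk

img = sitk.ReadImage("t1w.nii.gz", sitk.sitkFloat32)
# Brain mask assumed precomputed by an upstream skull-stripping step
brain_mask = sitk.ReadImage("brain_mask.nii.gz", sitk.sitkUInt8)

# Intensity inhomogeneity (bias field) correction with N4
img_n4 = sitk.N4BiasFieldCorrection(img, brain_mask)

# Affine registration to MNI space at 1 mm isotropic would follow here,
# e.g. with SimpleITK's ImageRegistrationMethod or an external tool.
sitk.WriteImage(img_n4, "t1w_n4.nii.gz")
```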

A 3D volume was extracted around each ROI, using the template bounding boxes, to reduce computational expense; a sketch of the bounding box derivation follows.
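A minimal sketch of deriving such a crop from a template ROI mask, assuming nibabel I/O and an illustrative padding margin:

```python
import numpy as np
import nibabel as nib

def roi_bounding_box(mask_path, margin=8):
    """Return (min, max) voxel indices of a binary mask, padded by a margin."""
    mask = nib.load(mask_path).get_fdata() > 0
    idx = np.argwhere(mask)                                  # (k, 3) voxel coords
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
    return lo, hi

# Hypothetical usage on an MNI-aligned image array:
# lo, hi = roi_bounding_box("mni_ca13_left_template.nii.gz")
# crop = img_data[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```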

Cropped right-hemisphere ROIs were reflected along the left-right axis to allow pooling with the left, providing more training data and mitigating the risk of learning a lateralised bias, as compared to a bilateral segmentation model.
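For example, assuming axis 0 is the left-right axis after MNI alignment (this depends on the image orientation convention), the pooling could look like:

```python
import numpy as np

def pool_bilateral(left_crop, right_crop):
    """Return both hemisphere crops in a common (left-like) orientation."""
    # Mirror the right crop so one model can train on both hemispheres
    return [left_crop, np.flip(right_crop, axis=0).copy()]
```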

Augmentation was employed during training, including random intensity bias fields, random noise, random motion, and random affine or elastic deformation.
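The listed augmentations map directly onto TorchIO transforms; the composition below is a sketch with illustrative parameter values, not the trained settings.

```python
import torchio as tio

augment = tio.Compose([
    tio.RandomBiasField(coefficients=0.3),        # random intensity bias field
    tio.RandomNoise(std=(0, 0.05)),               # random Gaussian noise
    tio.RandomMotion(degrees=5, translation=5),   # random motion artefacts
    tio.OneOf({                                   # affine OR elastic deformation
        tio.RandomAffine(scales=0.1, degrees=10): 0.5,
        tio.RandomElasticDeformation(num_control_points=7): 0.5,
    }),
])

# subject = tio.Subject(t1=tio.ScalarImage("roi_crop.nii.gz"))
# augmented = augment(subject)
```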

Per ROI, a dynamic U-Net (nnU-Net [4]) was trained with a generalised Dice focal loss and the AdamW optimiser with an initial learning rate of 1e-3 (reduced at validation loss plateau). Training was undertaken in a 5-fold cross-validated framework with a 0.8 training/validation ratio (16 subjects training, 4 validation, 5 testing per fold).
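A minimal sketch of this setup using MONAI's dynamic U-Net and generalised Dice focal loss with AdamW and a plateau scheduler; the network hyper-parameters shown are assumptions, not the trained configuration.

```python
import torch
from monai.networks.nets import DynUNet
from monai.losses import GeneralizedDiceFocalLoss

# Binary (foreground/background) model for one ROI; depths are illustrative
model = DynUNet(
    spatial_dims=3, in_channels=1, out_channels=2,
    kernel_size=[3, 3, 3, 3],
    strides=[1, 2, 2, 2],
    upsample_kernel_size=[2, 2, 2],
)
loss_fn = GeneralizedDiceFocalLoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
# Learning rate is reduced when the validation loss plateaus
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=10)
```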

Individual ROI segmentations were combined into a multi-label segmentation, with voxel labels assigned based on regional maximal likelihood.
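One plausible reading of this fusion step, sketched below: stack the per-ROI foreground probabilities (resampled to a common grid) with an implicit background channel and take the per-voxel argmax. Variable names and the background threshold are assumptions.

```python
import numpy as np

def merge_roi_probabilities(prob_maps, threshold=0.5):
    """prob_maps: dict of {label_id: 3-D foreground probability array}."""
    labels = sorted(prob_maps)
    stack = np.stack([prob_maps[k] for k in labels], axis=0)
    # Implicit background channel: a voxel stays background unless some ROI
    # probability exceeds the threshold
    background = np.full_like(stack[0], threshold)[None]
    winner = np.argmax(np.concatenate([background, stack], axis=0), axis=0)
    # 0 stays background; otherwise map argmax index back to the ROI label id
    lut = np.array([0] + labels)
    return lut[winner]
```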

[Figure: example segmentation panels showing ground truth, model output, and error.]

We compute the mean (std) Dice across cross-validated folds per ROI, both for the initial regional model masks and for the final merged multi-label output.
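For reference, the Dice coefficient used throughout is 2|A ∩ B| / (|A| + |B|); a minimal implementation:

```python
import numpy as np

def dice(pred, ref):
    """Dice overlap between a predicted and a reference binary mask."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    # Convention: two empty masks count as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0
```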

Mean Dice (std) per region, for the per-model masks and the merged multi-label output:

| Region             | Per Model     | Multi-label   |
|--------------------|---------------|---------------|
| Whole Hippo, Left  | 0.889 (0.019) | 0.904 (0.017) |
| Whole Hippo, Right | 0.891 (0.013) | 0.904 (0.010) |
| Subiculum, Left    | 0.722 (0.027) | 0.790 (0.030) |
| Subiculum, Right   | 0.728 (0.030) | 0.792 (0.033) |
| CA1-3, Left        | 0.804 (0.031) | 0.837 (0.022) |
| CA1-3, Right       | 0.812 (0.018) | 0.847 (0.016) |
| CA4+DG, Left       | 0.800 (0.026) | 0.758 (0.031) |
| CA4+DG, Right      | 0.812 (0.036) | 0.776 (0.054) |
