Attributions

Automated Multimodal Detection and Analysis of Geographic Atrophy

Zhihong Hu, PhD, Doheny Eye Institute, UCLA

Co-Principal Investigators

SriniVas Sadda, MD, Doheny Eye Institute, UCLA

Summary

Geographic atrophy (GA) is an advanced form of age-related macular degeneration (AMD) and is increasingly a leading cause of vision loss in patients. Much of the previous research on GA has focused on individual imaging modalities, utilizing two-dimensional (2D) information alone. However, considering the three-dimensional (3D) topology of the disease, utilizing information from all imaging modalities concomitantly could yield a more precise and comprehensive depiction of GA lesions. The overall goal of this project is to develop an automated multimodal GA segmentation system to more precisely quantify GA progression over time in multimodal 2D and 3D images and to facilitate the understanding of GA’s relationship to vision loss.

Project Details

The overall goal of this project is to develop an automated multimodal segmentation system to more precisely quantify the progression of geographic atrophy (GA), an advanced-stage eye disease, over time in multimodal 2D and 3D images, and thereby to facilitate our understanding of GA’s relationship to vision loss.

GA is the late stage of age-related macular degeneration (AMD) and is increasingly a leading cause of vision loss in patients. Research efforts (including ours) to develop methods for the automated identification and quantitative analysis of GA in various eye images have been reported. However, much of this previous research has focused on individual modalities, utilizing 2D information alone. Considering the 3D topology of AMD, an approach that utilizes information from all imaging modalities concomitantly could yield a more precise and comprehensive depiction of GA lesions.

This project includes two major aims. In Aim 1, we develop and validate an automated segmentation system for detecting GA in different 2D and 3D (optical coherence tomography, or OCT) imaging modalities. To do so, we first align each individual 2D image to the corresponding OCT image using a feature-based image registration algorithm. We then perform multimodal GA segmentation by combining image features from the different 2D modalities and the 3D OCT images.
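To make the registration step concrete, the sketch below aligns a 2D modality image (for example, fundus autofluorescence) to an en-face projection of the OCT volume using ORB features and a RANSAC-estimated homography in OpenCV. The file paths, detector choice, and thresholds are illustrative assumptions, not the project's actual pipeline.

```python
# A minimal sketch of the registration step, assuming the 2D modality image
# and an en-face projection of the OCT volume are available as grayscale
# images on disk. File names, the ORB detector, and the thresholds below are
# illustrative assumptions, not the project's actual pipeline.
import cv2
import numpy as np

def register_2d_to_oct(moving_path, oct_enface_path):
    """Warp a 2D modality image into the OCT en-face coordinate frame."""
    moving = cv2.imread(moving_path, cv2.IMREAD_GRAYSCALE)
    fixed = cv2.imread(oct_enface_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_m, des_m = orb.detectAndCompute(moving, None)
    kp_f, des_f = orb.detectAndCompute(fixed, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)
    good = matches[:200]
    if len(good) < 4:
        raise RuntimeError("Too few matches to estimate a transform")

    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robustly estimate a homography and resample the moving image into the
    # fixed (OCT en-face) coordinate frame.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(moving, H, (fixed.shape[1], fixed.shape[0]))

# Hypothetical usage: warped = register_2d_to_oct("faf.png", "oct_enface.png")
```

Once each 2D modality has been resampled into the OCT en-face frame, pixel-wise features from all modalities can be stacked and combined for the multimodal GA segmentation.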

In Aim 2, we derive optimal multimodal definitions of GA and identify the features most predictive of subsequent growth of GA lesions over time. Various multimodal GA descriptors/features are generated from the different modality images and correlated with microperimetry sensitivity to establish which GA descriptors/features are most predictive of visual function. We also establish which GA descriptors/features are most predictive of subsequent GA growth over time.
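As a rough illustration of this descriptor screening, the sketch below ranks per-eye GA descriptors by their correlation with either microperimetry sensitivity or subsequent lesion growth. The CSV layout, column names, and the choice of Spearman correlation are placeholder assumptions, not the study's actual analysis plan.

```python
# A minimal sketch of the descriptor screening, assuming per-eye GA
# descriptors (e.g., lesion area, perimeter, mean en-face intensity) together
# with microperimetry sensitivity and follow-up growth rate have already been
# tabulated in a CSV file. Column names here are placeholders.
import pandas as pd
from scipy import stats

TARGETS = ("sensitivity_db", "growth_mm2_per_year")  # hypothetical outcome columns

def rank_descriptors(csv_path, target):
    """Rank GA descriptors by Spearman correlation with a target measure."""
    df = pd.read_csv(csv_path)
    descriptors = [c for c in df.columns if c not in TARGETS and c != "eye_id"]
    rows = []
    for col in descriptors:
        rho, p = stats.spearmanr(df[col], df[target], nan_policy="omit")
        rows.append({"descriptor": col, "spearman_rho": rho, "p_value": p})
    return pd.DataFrame(rows).sort_values("p_value")

# Hypothetical usage:
#   rank_descriptors("ga_descriptors.csv", "sensitivity_db")        # function
#   rank_descriptors("ga_descriptors.csv", "growth_mm2_per_year")   # growth
```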

The deliverable from this research program is a fully automated system for the detection of GA lesions and the quantitative analysis of GA progression. We will make the developed system accessible to the broader research community. Furthermore, current advances in multimodal imaging make it practical to translate the multimodal segmentation into routine use in the clinical setting. This proposal is expected to facilitate the understanding of the pathogenesis of GA in research and the diagnosis of GA in routine clinical environments.