Deep learning to distinguish Best vitelliform macular dystrophy (BVMD) from adult-onset vitelliform macular degeneration (AVMD)

Querques G.; Sacconi R.; Ribarich N.; L'Abbate G.; Rizzo S.
2022-01-01

Abstract

Initial stages of Best vitelliform macular dystrophy (BVMD) and adult vitelliform macular dystrophy (AVMD) share similar blue autofluorescence (BAF) and optical coherence tomography (OCT) features. Nevertheless, BVMD is characterized by worse final-stage visual acuity (VA) and an earlier onset of critical VA loss. Currently, differential diagnosis requires an invasive and time-consuming process including genetic testing, electrooculography (EOG), full-field electroretinography (ERG), and visual field testing. The aim of our study was to automatically classify OCT and BAF images from stage II BVMD and AVMD eyes using a deep learning algorithm, and to identify an image processing method that facilitates human-based clinical diagnosis from non-invasive tests such as BAF and OCT, without the use of machine-learning technology. After the application of a customized image processing method, OCT images were characterized by a dark appearance of the vitelliform deposit in BVMD and a lighter, inhomogeneous appearance in AVMD. By contrast, a customized method for processing BAF images revealed that BVMD and AVMD were characterized, respectively, by the presence or absence of a hypo-autofluorescent region of the retina encircling the central hyper-autofluorescent foveal lesion. Human-based evaluation of both BAF and OCT images showed significantly higher correspondence to the ground-truth reference when performed on processed images. The deep learning classifiers based on BAF and OCT images achieved approximately 90% classification accuracy on both processed and unprocessed images, significantly higher than human performance on either. The ability to differentiate between the two entities without resorting to invasive and expensive tests may offer a valuable clinical tool in the management of the two diseases.
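The abstract does not disclose the network architecture, preprocessing pipeline, or training configuration used for the deep learning classifiers, so the following is only a minimal illustrative sketch of how a binary BVMD-vs-AVMD image classifier of this kind could be assembled. The ResNet-18 transfer-learning backbone, the data/{train,val}/{AVMD,BVMD} directory layout, and all hyperparameters below are assumptions for illustration, not the authors' method.

# Illustrative sketch only: architecture and training details are assumed,
# since the abstract does not specify them.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed directory layout: data/{train,val}/{AVMD,BVMD}/*.png
# ImageFolder assigns labels alphabetically: AVMD -> 0, BVMD -> 1.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # BAF/OCT scans are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)
val_set = datasets.ImageFolder("data/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = DataLoader(val_set, batch_size=16)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # binary head: AVMD vs BVMD
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    # Validation accuracy is the metric the abstract quotes (~90%).
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch + 1}: val accuracy {correct / total:.2%}")

Separate classifiers would be trained per modality (one on BAF images, one on OCT B-scans), and the study's image processing step would simply be applied to the inputs before training to compare processed versus unprocessed performance.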

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11768/140419
Citations
  • PubMed Central: n/a
  • Scopus: 10
  • Web of Science: 6