Title: Machine Learning Augmentation & Data Fusion using cm-scale Fluid Lensing for Enhanced Coral Reef Assessment
Presenting Author: Alan Li
Organization: NASA Ames Research Center
Co-Author(s): Ved Chirayath

Abstract:
In recent years, coral reefs, which represent one of the most complex and diverse ecosystems in existence, have been subject to increasing pressures from climate change, ocean acidification, and anthropogenic factors. These concerns require a comprehensive assessment of the world's coastal environments, including a quantitative analysis of the health and extent of coral reefs as a vital Earth Science measurement. Recently, the Laboratory for Advanced Sensing at NASA Ames has developed a novel methodology known as Fluid Lensing, in which the fluid-optical interactions at the water surface boundary are used to enhance imagery of the benthic cover underneath. Utilizing this new technology and the high-resolution data it provides, we have shown that, through data fusion and augmentation, it is possible to improve coral classification accuracies from imagery taken by existing low-resolution airborne and spaceborne assets. Classification of benthic cover is separated into two phases: (1) discrimination by coral cover (organic vs. inorganic), and (2) discrimination by morphology (sand, rock, branching coral, or mounding coral). The method uses Principal Component Analysis (PCA) to remap and rescale existing datasets onto a known Support Vector Machine (SVM) solution within analogous principal spaces. This supervised method autonomously compensates for changing water depth and illumination conditions, with classification errors for coral cover and morphology derived from aerial imagery of approximately 16% and 31%, respectively. Classification errors for data derived from the highest-resolution commercial satellite imagery available are approximately 21% for coral cover and 38% for morphology. Although classification accuracy is improved across both phases, morphology discrimination suffers more acutely from lower resolution and noise effects. Nevertheless, the method shows promise for future work in which UAVs may collect multispectral or hyperspectral data, further increasing the speed and accuracy of classification and enhancing datasets taken at higher altitudes.
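
The following is a minimal sketch of the PCA-remapping idea described above: an SVM is trained in the principal space of high-resolution fluid-lensing data, and a lower-resolution dataset is projected into its own principal space and rescaled to match before the existing SVM solution is applied. The synthetic data, variable names, and the variance-matching alignment step are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Train on high-resolution fluid-lensing imagery.
# X_hi: per-pixel spectral features; y_hi: benthic labels
# (e.g., 0 = inorganic, 1 = coral) -- synthetic stand-ins here.
X_hi = rng.normal(size=(500, 4))
y_hi = (X_hi[:, 0] + 0.5 * X_hi[:, 1] > 0).astype(int)

pca_hi = PCA(n_components=3).fit(X_hi)  # reference principal space
svm = SVC(kernel="rbf").fit(pca_hi.transform(X_hi), y_hi)

# Remap a lower-resolution dataset of the same scene, which carries
# its own depth- and illumination-dependent distortions.
X_lo = 0.8 * X_hi + rng.normal(scale=0.3, size=X_hi.shape)

pca_lo = PCA(n_components=3).fit(X_lo)
Z_lo = pca_lo.transform(X_lo)

# Rescale each low-resolution principal component so its variance
# matches the corresponding reference component (one simple way to
# make the principal spaces "analogous"); sign flips between the
# two sets of principal axes are reconciled as well.
scale = np.sqrt(pca_hi.explained_variance_ / pca_lo.explained_variance_)
sign = np.sign(np.sum(pca_hi.components_ * pca_lo.components_, axis=1))
Z_remapped = Z_lo * scale * sign

# Apply the existing SVM solution to the remapped low-resolution data.
y_pred = svm.predict(Z_remapped)
print("agreement with high-resolution labels:", (y_pred == y_hi).mean())
```

In this sketch the remapping is a per-component rescaling; the abstract does not specify how the principal spaces are aligned, so any affine or more general mapping could stand in for the variance-matching step.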