Title: The Simplified, Parallelized InSAR Scientific Computing Environment (SPISCE)
Presenting Author: Paul Rosen
Organization: Jet Propulsion Laboratory
Co-Author(s): Howard Zebker; Eric M. Gurrola; Michael Aivazis; Piyush Agram; Geoffrey Gunter

Abstract:
The Simplified, Parallelized InSAR Scientific Computing Environment (SPISCE) is a software framework for interferometric synthetic aperture radar (InSAR) processing that aims to accelerate both the computational elements of processing workflows and scientists' ability to understand and use the products. By exploiting back-projection methods on cloud-enabled GPU platforms, SPISCE can directly and efficiently compute focused imagery on a UTM (Landsat) grid, delivering SAR data as analysis-ready products in a form most familiar to users of optical sensors and removing a major obstacle to the adoption of radar data by scientists. Once formed, the data can be accessed on standard GIS platforms. We greatly reduce processing complexity for users so they can concentrate on the science, and bring the products seamlessly into the 21st-century tools that are rapidly evolving to handle the developing data explosion. The SPISCE framework extends the ESTO-AIST-sponsored, Python-based InSAR Scientific Computing Environment to uniformly treat polarimetric and interferometric time series, such as those that will be created by NASA's upcoming radar mission, NISAR, using serialized product-based workflow techniques. In this work, we address several key challenges: 1) speed and efficiency in handling very large, multi-terabyte time-series imagery files, requiring innovations in multi-scale (GPU, node, cluster, cloud) workflow control; 2) framework technologies that can support the varied algorithms these data enable, spanning SAR focusing, interferometry, polarimetry, interferometric polarimetry, and time-series processing; and 3) framework technologies that can support heterogeneous, multi-sensor data types (point clouds and rasters) in time and space. GPU accelerations to date, roughly 100x for back-projection imaging and 1000x for image cross-correlation, show great potential for vastly speeding up end-to-end processing and interaction timelines.
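To illustrate why back-projection maps so naturally onto GPUs and geocoded (e.g. UTM) output grids, the following is a minimal NumPy sketch of time-domain back-projection. It is not SPISCE code; all function and parameter names are hypothetical, and it uses nearest-neighbor range interpolation for brevity. Each output pixel's coherent sum is independent of every other pixel's, which is the property that permits massive GPU parallelism.

```python
import numpy as np

def backproject(pulses, ant_pos, slant_ranges, grid_xyz, wavelength):
    """Time-domain back-projection onto a fixed geocoded grid (sketch).

    pulses       : (n_pulses, n_range) complex range-compressed echoes
    ant_pos      : (n_pulses, 3) antenna phase-center positions
    slant_ranges : (n_range,) uniformly spaced slant range of each bin
    grid_xyz     : (n_pix, 3) output pixel positions (e.g. UTM + height)
    wavelength   : radar wavelength
    Returns a (n_pix,) complex focused image.
    """
    dr = slant_ranges[1] - slant_ranges[0]
    image = np.zeros(grid_xyz.shape[0], dtype=complex)
    for p in range(pulses.shape[0]):
        # Geometric range from this pulse's antenna position to every pixel
        r = np.linalg.norm(grid_xyz - ant_pos[p], axis=1)
        # Nearest-neighbor lookup of the echo sample at that range
        idx = np.clip(np.round((r - slant_ranges[0]) / dr).astype(int),
                      0, slant_ranges.size - 1)
        # Compensate the two-way propagation phase and accumulate coherently
        image += pulses[p, idx] * np.exp(4j * np.pi * r / wavelength)
    return image
```

Because the output grid is chosen up front, the focused image lands directly in map coordinates, with no separate geocoding step.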
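The image cross-correlation cited above (the operation accelerated roughly 1000x on GPUs) can be sketched as FFT-based patch matching for offset estimation. This is a generic illustration, not SPISCE's implementation: the function name is hypothetical, it recovers only integer-pixel offsets, and it ignores the sub-pixel oversampling a production matcher would add. The same FFTs dominate the cost, which is what makes the operation a good GPU target.

```python
import numpy as np

def estimate_offset(ref, sec):
    """Estimate the integer-pixel shift of patch `sec` relative to `ref`
    via FFT-based cross-correlation (sketch; no sub-pixel refinement)."""
    # Zero-mean the patches so the peak reflects image structure
    r = ref - ref.mean()
    s = sec - sec.mean()
    # Cross-correlation via the Fourier convolution theorem
    corr = np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(r))).real
    # The peak location is the cyclic shift; unwrap it to a signed offset
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    dy = iy if iy <= ny // 2 else iy - ny
    dx = ix if ix <= nx // 2 else ix - nx
    return dy, dx
```

In time-series processing this kernel is evaluated over millions of patch pairs per image pair, so even modest per-patch savings compound into large end-to-end gains.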