NASA's Science Mission Directorate Awards Funding for 20 Projects Under
the Advanced Information Systems Technology (AIST) Program of the Earth Science Technology Office
(ROSES 2008 Solicitation NNH08ZDA001N-AIST)

12/16/2008 – NASA's Science Mission Directorate, NASA Headquarters, Washington, DC, has selected proposals for the Advanced Information Systems Technology Program (AIST-08) in support of the Earth Science Division (ESD). AIST-08 will provide technologies to reduce the risk and cost of evolving NASA information systems to support future Earth observation missions and to transform observations into Earth information as envisioned by the National Research Council (NRC) decadal survey.

The ESD is awarding 20 proposals, with a total value of approximately $25 million over a three-year period, through the Earth Science Technology Office located at Goddard Space Flight Center, Greenbelt, Md.

The Advanced Information Systems Technology (AIST) program sought proposals for technology development activities leading to new systems for sensor support, advanced data processing, and management of data services to be developed in support of the Science Mission Directorate’s Earth Science Division.
The objectives of the AIST Program are to identify, develop and (where appropriate) demonstrate advanced information system technologies that reduce the risk, cost, and development time of Earth science information systems, increase the accessibility and utility of science data, and enable new observations and information products.

One hundred AIST-08 proposals were evaluated, of which 20 have been selected for award. The awards are as follows:

Yehuda Bock, Scripps Institution of Oceanography, University of California San Diego
Real-Time In Situ Measurements for Earthquake Early Warning and Spaceborne Deformation Measurement Mission Support

Amy Braverman, Jet Propulsion Laboratory
Geostatistical Data Fusion for Remote Sensing Applications

Andrea Donnellan, Jet Propulsion Laboratory
QuakeSim: Increasing Accessibility and Utility of Spaceborne and Ground-based Earthquake Fault Data

Tom Flatley, NASA Goddard Space Flight Center
Advanced Hybrid On-Board Data Processor - SpaceCube 2.0

Matthew French, University of Southern California / Information Sciences Institute
Autonomous, On-board Processing for Sensor Systems

Michael Goodman, NASA Marshall Space Flight Center
Technology Infusion for the Real Time Mission Monitor

William D. Ivancic, NASA Glenn Research Center
Real-Time and Store-and-Forward Delivery of Unmanned Airborne Vehicle Sensor Data

Gregory G. Leptoukh, NASA Goddard Space Flight Center
Multi-Sensor Data Synergy Advisor

Yunling Lou, Jet Propulsion Laboratory
Onboard Processing and Autonomous Data Acquisition for the DESDynI Mission

Daniel Mandl, NASA Goddard Space Flight Center
Sensor Web 3G to Provide Cost-Effective Customized Data Products for Decadal Missions

Mahta Moghaddam, University of Michigan
Ground Network Design and Dynamic Operation for Near Real-Time Validation of Space-Borne Soil Moisture Measurements

Ramakrishna R. Nemani, NASA Ames Research Center
Anomaly Detection and Analysis Framework for Terrestrial Observation and Prediction System (TOPS)

Charles D. Norton, Jet Propulsion Laboratory
On-Board Processing to Optimize the MSPI Imaging System for ACE

Christa D. Peters-Lidard, NASA Goddard Space Flight Center
Integration of Data Assimilation, Stochastic Optimization and Uncertainty Modeling within NASA Land Information System (LIS)

Paul A. Rosen, Jet Propulsion Laboratory
InSAR Scientific Computing Environment

Markus Schneider, University of Florida
Moving Objects Database Technology for Weather Event Analysis and Tracking 

Michael S. Seablom, NASA Goddard Space Flight Center
End-to-End Design and Objective Evaluation of Sensor Web Modeling and Data Assimilation System Architectures: Phase II

Bo-Wen Shen, University of Maryland, Baltimore County
Coupling NASA Advanced Multi-scale Modeling and Concurrent Visualization Systems for Improving Predictions of Tropical High-impact Weather

Simone Tanelli, Jet Propulsion Laboratory
Instrument Simulator Suite for Atmospheric Remote Sensing

Paul von Allmen, Jet Propulsion Laboratory
OSCAR:  Online Services for Correcting Atmosphere in Radar


Title

Real-Time In Situ Measurements for Earthquake Early Warning and Spaceborne Deformation Measurement Mission Support

Full Name

Yehuda Bock

Institution Name

Scripps Institution of Oceanography, University of California San Diego

Global geological hazards such as earthquakes, volcanoes, tsunamis and landslides continue to wreak havoc on the lives of millions of people worldwide. Our goal is to provide the most accurate and timely information to first responders, as well as scientists, mission planners and policy makers involved with these events. We will use dense in situ space-geodetic and seismic networks to develop advanced data processing technologies that will directly support future solid Earth deformation observation missions. These ground networks have proliferated with significant investments over the last two decades and have been recognized in the NRC Decadal Survey as indispensable tools in the mitigation of natural hazards. However, the full information content and timeliness of their observations have not been fully exploited, in particular at frequencies higher than traditional daily continuous GPS position time series.  Nor have scientists taken full advantage of the complementary nature of space-based and in situ observations of surface deformation.  Our experienced team will address these challenges and, in accordance with the AIST program goal to reduce the risk and cost of evolving NASA information systems in support of future Earth observation missions, will develop:

(1) A publicly available real-time ground deformation data system that will fuse two in situ network data sources: low-latency (1 s), high-rate (1 Hz or greater) continuous GPS and traditional seismic data. Scientists, mission planners, decision makers, and first responders will be able to rapidly access absolute displacement waveforms, replay them, and model significant events related to global geological hazards. Open access to these resources will be through a modern real-time data portal environment using web services developed through funding from REASoN, ACCESS and MEaSUREs projects.

(2) Detection and preliminary modeling of signals of interest by the dense ground networks, which will aid mission planners in fully exploiting the less-frequent but higher-resolution Interferometric Synthetic Aperture Radar (InSAR) observations. A new capability will be demonstrated that will use continuous GPS data products to calibrate InSAR measurements for atmospheric and orbital errors, significantly increasing the accuracy of interferograms.
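
The calibration idea in (2) can be sketched briefly. The following is a minimal illustration, not the proposers' algorithm: it assumes hypothetical GPS station coordinates and zenith wet delays, interpolates them onto the interferogram grid with scipy, and converts the differential delay to two-way slant-range phase. The wavelength and incidence angle are placeholder values.

    # Illustrative sketch only: interpolate GPS-derived zenith delays onto an
    # interferogram grid and form an atmospheric phase screen. Station data,
    # wavelength, and incidence angle are hypothetical placeholders.
    import numpy as np
    from scipy.interpolate import griddata

    WAVELENGTH = 0.24            # L-band wavelength in meters (illustrative)
    INCIDENCE = np.deg2rad(35.0) # nominal incidence angle (illustrative)

    def atmospheric_phase_screen(stations_xy, zenith_delay_m, grid_x, grid_y):
        """Map zenith delays (m) at GPS stations to slant-range phase (rad)."""
        screen = griddata(stations_xy, zenith_delay_m, (grid_x, grid_y),
                          method='cubic')
        slant_delay = screen / np.cos(INCIDENCE)       # zenith -> line of sight
        return 4.0 * np.pi * slant_delay / WAVELENGTH  # two-way path to phase

    # Differential correction between the two acquisition epochs:
    # corrected_phase = interferogram_phase - (aps_epoch2 - aps_epoch1)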

The project will raise these technologies from Technology Readiness Level (TRL) 3 to TRL 6, allowing them to be integrated into the future Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) science data system in the form of in situ data products, as recommended by the National Research Council's Decadal Survey. This will be a significant contribution to the technology readiness of DESDynI's Science Data System (SDS), as it will build on existing investments in in situ space geodetic networks and tie them together to allow rapid response to global geological hazards. It will also improve the timeliness, quality and science value of the data collected.

With extensive funding from NASA, more than 80 continuous GPS stations in southern California have been upgraded to stream high-rate (1 Hz) data with low latency (1 s or less). These include stations from the SCIGN network and EarthScope's PBO network. In collaboration with UNAVCO, plans are underway to extend this capability throughout California. The upgraded stations, comprising the California Real Time Network (CRTN), will be used as a test bed to demonstrate the two stated objectives. Although the demonstrations will be limited to California, they can be extended to the entire PBO region and to other plate boundaries.

With UNAVCO, we will create a Science Advisory Committee to provide community feedback from UNAVCO, WInSAR consortium, SCEC, and IRIS as we develop the advanced data processing technologies.

The ultimate goal of our research is to communicate these advanced observations of natural hazards to policy makers for the benefit of society.


Title

Geostatistical Data Fusion for Remote Sensing Applications

Full Name

Amy Braverman

Institution Name

Jet Propulsion Laboratory

The key objectives of this proposal are to formulate and implement algorithms for fusing measurements from multiple remote sensing instruments (e.g., the Atmospheric Infrared Sounder and the Orbiting Carbon Observatory) to optimally estimate CO2. The methods are derived from a geostatistical model that relates each observed CO2 measurement to the unobserved true value, thereby accounting for measurement errors and biases in the observations. Moreover, the methodology implicitly accounts for differences in the instruments' resolutions, footprint geometries, and other sampling characteristics. Most importantly, these estimates of CO2 are accompanied by formal measures of uncertainty (mean squared prediction errors) so that confidence intervals can be reported. The significance of this work is that it provides a rigorous methodology for creating a single best-estimate NASA carbon dioxide data set, including uncertainties, that can be offered as a definitive, quantitative representation of CO2 for use in verifying and diagnosing models, and for policy-making purposes. This work supports the ASCENDS mission by enabling the ASCENDS data record to be linked to that of OCO and AIRS.
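
As a minimal sketch of the underlying statistical idea (not the proposal's full geostatistical model), the following fuses two bias-corrected retrievals of the same true CO2 value by inverse-variance weighting and reports the mean squared prediction error; the retrieval values, biases, and variances are invented, not AIRS/OCO characteristics.

    # Minimal sketch: best linear unbiased fusion of two noisy, biased CO2
    # retrievals of one true value, with a formal uncertainty. The numbers
    # are illustrative, not instrument specifications.
    import numpy as np

    def fuse(z, bias, var):
        """z: retrievals; bias: known additive biases; var: error variances.
        Returns (estimate, mean squared prediction error)."""
        z = np.asarray(z, float) - np.asarray(bias, float)  # debias first
        w = 1.0 / np.asarray(var, float)                    # precision weights
        mspe = 1.0 / w.sum()
        return mspe * (w * z).sum(), mspe

    est, mspe = fuse(z=[387.1, 385.6], bias=[0.8, -0.3], var=[1.9, 0.6])
    ci95 = 1.96 * np.sqrt(mspe)   # report est +/- ci95 (ppm)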


Title

QuakeSim: Increasing Accessibility and Utility of Spaceborne and Ground-based Earthquake Fault Data

Full Name

Andrea Donnellan

Institution Name

Jet Propulsion Laboratory

(a) Objectives and Benefits

QuakeSim is a project to develop a solid Earth science framework for modeling and understanding earthquake and tectonic processes. The multi-scale nature of earthquakes requires integrating many data types and models to fully simulate and understand the earthquake process. QuakeSim focuses on modeling the interseismic process through various boundary element, finite element, and analytic applications that run on platforms ranging from desktops to high-end computers, which together form a compute cloud. Making these data available to modelers is leading to significant improvements in earthquake forecast quality, thereby mitigating the danger from this natural hazard. This project lays groundwork for handling the large data volumes expected from NASA's upcoming DESDynI InSAR/Lidar mission and develops the tools for maximizing the solid Earth science return for the mission.

(b) Outline of proposed work and methodology

We will continue to develop QuakeSim to optimize the use of spaceborne crustal deformation for studying earthquake fault systems. In order to do so, we will proceed with developing federated databases, high performance computing software, and web and grid services for accessing the data. We will work with spaceborne InSAR and UAVSAR data to enable efficient ingestion into the geophysical models run over the compute grid.  Specifically, we will:

1. Develop a real-time, large-scale, service-oriented data assimilation distributed computing infrastructure.  This work will build upon our existing Web and Grid service infrastructure and will extend it to include cloud computing infrastructure.  In particular, we will investigate the feasibility and applicability of cloud computing approaches to scientific computing problems of interest to NASA. Resources in our infrastructure will include NASA Ames, JPL, and NSF TeraGrid supercomputers.

2. Assimilate distributed data sources and complex models into a parallel high-performance earthquake simulation and forecasting system, with an increasing focus on radar interferograms (InSAR data).

3. Simplify data discovery, access, and usage from the scientific user point of view, by continuing to develop, test, and refine the portal.

4. Provide capabilities for efficient data mining, including pattern recognizers capable of running on workstations and supercomputers for analyzing data and simulations (a minimal illustration follows this list). This also requires establishing the necessary infrastructure and developing optimal techniques to understand the relationship between the observable space-time patterns of earthquakes and the underlying, nonlinear stress-strain dynamics that are inaccessible or unobservable in nature. This will improve earthquake forecasting, prediction and risk estimation.

5. Continue development of our fully three-dimensional finite element code (GeoFEST) with adaptive mesh generator capable of running on workstations and supercomputers for carrying out earthquake simulations.

6. Expand inversion algorithms and assimilation codes to include InSAR data for constraining the models and simulations with data.

7. Establish visualization applications for interpretation of data and models.
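
As a minimal illustration of the kind of space-time pattern recognizer item 4 refers to, the sketch below maps seismicity-rate change between two time windows of an earthquake catalog on a spatial grid; the grid resolution and catalog format are hypothetical, and this is not QuakeSim's actual mining code.

    # Illustrative pattern recognizer: seismicity-rate change on a spatial
    # grid between two time windows of a catalog (lon, lat, time arrays).
    # Grid size and catalog format are hypothetical.
    import numpy as np

    def rate_change(lon, lat, t, t_split, bins=50):
        """Return a 2-D map of (recent - background) event rates per cell."""
        extent = [[lon.min(), lon.max()], [lat.min(), lat.max()]]
        early = t < t_split
        h0, _, _ = np.histogram2d(lon[early], lat[early], bins=bins, range=extent)
        h1, _, _ = np.histogram2d(lon[~early], lat[~early], bins=bins, range=extent)
        # Normalize each count map by its window length so rates are comparable.
        return h1 / (t.max() - t_split) - h0 / (t_split - t.min())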

(c) Period of performance

Our period of performance is April 1, 2009 - March 31, 2012.

(d) Entry and planned exit TRL

The entry TRL of this project is 4.  Through our current AIST work we will have validated the various components of QuakeSim in a laboratory environment, with standalone prototype implementation and test. At present, GPS networks provide a representative environment. With UAVSAR becoming operational in the next year, we will be able to move to TRL 5, conducting prototyping in a representative InSAR environment.  We will exit the project at TRL 6, having prototyped QuakeSim in a relevant environment (UAVSAR) and prepared for handling operational data when DESDynI launches.


Title

Advanced Hybrid On-Board Data Processor - SpaceCube 2.0

Full Name

Tom Flatley

Institution Name

NASA Goddard Space Flight Center

Many of the missions proposed in the Earth Science Decadal Survey will require "next generation" on-board processing capabilities to meet their specified mission goals.  Advanced laser altimeter, radar, lidar and hyper-spectral instruments are proposed for at least ten of the Decadal Survey missions, and all of these instrument systems will require advanced on-board processing capabilities to facilitate the timely conversion of Earth Science data into Earth Science information.  Both an "order of magnitude" increase in processing power and the ability to "reconfigure on the fly" are required to implement algorithms that detect and react to events, to produce data products on-board for applications such as direct downlink, quick look, and "first responder" real-time awareness, to enable "sensor web" multi-platform collaboration, and to perform on-board "lossless" data reduction by migrating typical ground-based processing functions on-board (reducing on-board storage & downlink requirements).  The convergence of technology collaborations between the Goddard Space Flight Center (NASA/GSFC), the Air Force Research Lab (AFRL) and the Naval Research Lab (NRL) may provide a perfect solution.

The GSFC SpaceCube is an experimental high performance on-board processing platform based on Xilinx Virtex-4 Field Programmable Gate Array (FPGA) technology.  We have had great success implementing ground-based SAR data processing algorithms on the SpaceCube in our FY07 and FY08 GSFC Internal Research & Development (IRAD) tasks, and have achieved a "lossless" 6:1 processed data volume reduction by performing these functions "on-board".  We have also had great success implementing advanced hyper-spectral data processing algorithms for wild fire detection and temperature/burn index classification.  The experimental SpaceCube flight unit will fly on the HST servicing mission this year as part of a technology experiment, and we are collaborating with NRL to fly the experimental SpaceCube flight spare unit on their Space Station attached payload experiment (MISSE7) next year.  This experiment will test and validate our "radiation hardened by software" upset mitigation techniques, coupled with key components from our SAR and hyper-spectral data processing applications.

The SpaceCube brings "order of magnitude" improvements in on-board processing performance over current radiation hardened processors, but it is susceptible to single event upsets.  Several upset detection/correction strategies have been developed to mitigate these errors, but there are a number of "critical" upset scenarios that cannot be mitigated, requiring a full system reboot (hourly to daily).  Because of these issues the current experimental SpaceCube cannot meet "operational" mission requirements.  The experimental SpaceCube is, however, an excellent stepping stone toward true "next generation" on-board processing capabilities, and this proposal outlines the tasks required to make "next generation on-board processing" a reality.

AFRL is funding the development of a true radiation hardened Xilinx Virtex-5 FPGA that will be available in late 2010.  These new devices will eliminate 95% of the mitigation issues and 100% of the "critical" upset scenarios associated with the Virtex-4 parts.  We propose to design, build and test a "SpaceCube Version 2.0" system based on these new radiation hardened Virtex-5 parts and couple it with our "radiation hardened by software" techniques (to protect the internal PowerPCs) to deliver a true flight-worthy system that approaches the on-orbit performance of full radiation hardened systems, with 10x to 100x the processing capability, at a fraction of the cost.
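
The abstract does not detail the "radiation hardened by software" techniques, so the sketch below is only a hedged illustration of the general idea: triplicate a computation and majority-vote the results, falling back to a reboot request on persistent disagreement.

    # Hedged illustration of software-based upset mitigation: triplicate a
    # computation and majority-vote the results. This shows the general idea
    # only, not the actual GSFC SpaceCube mitigation techniques.
    from collections import Counter

    def vote(compute, *args, retries=3):
        for _ in range(retries):
            results = [compute(*args) for _ in range(3)]
            winner, votes = Counter(results).most_common(1)[0]
            if votes >= 2:          # at least two of three copies agree
                return winner
        raise RuntimeError("persistent disagreement; request system reboot")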


Title

Autonomous, On-board Processing for Sensor Systems

Full Name

Matthew French

Institution Name

University of Southern California / Information Sciences Institute

This proposal addresses NASA's Earth science missions and their underlying demands for high-performance, adaptable processing for autonomous platforms. The University of Southern California's Information Sciences Institute (USC / ISI) proposes to fuse high performance reconfigurable processors with emerging fault-tolerance and autonomous processing techniques. By enabling fault-tolerant use of high-performance on-board processing, the utility of sensor systems will be greatly enhanced, achieving a 10-100x decrease in processing time, which directly translates into more science experiments conducted per day and a more thorough, timely analysis of captured data. This research also addresses the ability to quickly react and adapt processing or mission objectives in real time, by combining autonomous agents with reconfigurable computing techniques. This technology enables Autonomous On-board Processing for Sensor Systems (A-OPSS), via a tool-suite that generates a run-time system for sensor systems to autonomously detect changes in collected data and tune processing in a controlled manner to adapt to unforeseen events.

Using A-OPSS, satellite, airborne, or ground sensors will be able to perform high-performance, fault-tolerant computation, develop situational awareness about their operating environment, and tune or adapt the application algorithm such that they return the most useful and significant data to human and automated decision support systems. There are a number of tangible benefits unique to remote autonomous reconfiguration of processing that directly impact science products, design cost, and risk across solid earth, water, weather, climate, health, and ecosystems science missions:

-  High Data Rate Instrument Support: High performance, fault-tolerant on-board processors enable pre-processing and data compression which support National Research Council (NRC) mission recommendations for high data rate instruments (SAR, Hyperspectral).
-  Autonomous Processing: Flexible in-situ high performance processing allows transient scientific events to be captured and detected 10-100x quicker than existing human-in-the-loop systems.
-  Portability: The proposed tools-based approach allows A-OPSS to be re-hosted to either new hardware platforms or different application domains or missions with little additional cost.
-  Scalability: By leveraging COTS devices and compilers, A-OPSS can utilize both legacy and future NASA systems.

A-OPSS leverages substantial research investments from DARPA, NRO, and NASA on autonomy and fault tolerant reconfigurable processing and focuses them on NASA AIST missions and applications. USC/ISI is teamed with NASA's Goddard Space Flight Center (GSFC), which will advise on NASA flight hardware and missions, specifically SAR and Hyperspectral imaging applications. This team will use its well-established expertise in reconfigurable computing, scheduling, fault mitigation techniques and tools, and NASA-focused applications to develop, test and evaluate a run-time environment for autonomous reconfigurable hardware and make it readily available to the community, raising the TRL from 3 to 6 in all areas.


Title

Technology Infusion for the Real Time Mission Monitor

Full Name

Michael Goodman

Institution Name

NASA Marshall Space Flight Center

The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources to enable real-time decision-making for airborne and ground validation experiments.  Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets.  The integration and delivery of this information is made possible through data acquisition systems, network communication links, and network server resources.  The RTMM uses the Google Earth application for the user visualization display.

In its existing form, RTMM has proven extremely valuable for optimizing individual field experiments.  However, RTMM was developed and evolved in a piecemeal fashion as a result of being funded by several individual field projects, each with their own mission-specific requirements.  It is apparent to the team that for RTMM to better support both the Suborbital Science and the Earth Science Research and Analysis programs, a more holistic design and integration of RTMM needs to be undertaken.

In its early form, RTMM was utilized by only a few instrument specialists.  However, as its capabilities improved and the visualization application moved from a two dimensional viewer to the 3D virtual globe, flight planners, mission scientists, instrument scientists and program managers alike have come to appreciate the contributions that the RTMM can make to their specific needs.  We have received numerous plaudits from a wide variety of scientists who used RTMM during recent field campaigns including the NASA African Monsoon Multidisciplinary Analyses (NAMMA), Tropical Composition, Cloud, and Climate Coupling (TC4), and Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) missions. In addition, the RTMM captured the interest of the broader Google Earth community following a presentation at the December 2007 American Geophysical Union meeting which eventually led to the publication of an article in the July 2008 issue of Popular Science magazine.

Currently RTMM employs a limited Service Oriented Architecture (SOA) approach to enable discovery of mission specific resources.  We propose to expand the RTMM architecture such that it will more effectively utilize Sensor Web Enablement (SWE) services and other software tools and components.  In order to improve the usefulness and efficiency of the RTMM system, capabilities are needed to allow mission managers and mission scientists to configure RTMM applications based on mission-specific requirements and objectives.  This will be achieved by building a linkage between the data sources and their associated applications, which will streamline the process of RTMM application generation.  We are proposing modifications and extensions that will result in a robust, versatile RTMM system and will greatly reduce man-in-the-loop requirements.  In addition to these operational improvements we propose to formalize the RTMM design, which will facilitate future improvements to the system. Through these improvements the RTMM system will continue to provide mission planners and scientists with valuable tools that can be used to more efficiently plan, prepare, and execute missions.
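
For readers unfamiliar with SWE, the sketch below shows the kind of OGC Sensor Observation Service request an RTMM component might issue; the endpoint host, offering, and observed property are invented for illustration, and only the SOS 1.0.0 key-value parameter names are standard.

    # Hypothetical example of an OGC Sensor Observation Service (SWE) query.
    # The host, offering, and observedProperty values are placeholders; only
    # the SOS 1.0.0 KVP parameter names are standard.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": "AIRCRAFT_NAV",                           # hypothetical
        "observedProperty": "urn:ogc:def:property:altitude",  # hypothetical
        "responseFormat": 'text/xml;subtype="om/1.0.0"',
    }
    url = "https://sos.example.gov/sos?" + urlencode(params)  # placeholder host
    with urlopen(url) as resp:        # would return an O&M XML document
        observations = resp.read()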


Title

Real-Time and Store-and-Forward Delivery of Unmanned Airborne Vehicle Sensor Data

Full Name

William D. Ivancic

Institution Name

NASA Glenn Research Center

This proposal is a joint activity between NASA Glenn Research Center (GRC) and NASA Ames Research Center (ARC). 

ARC has deployed sensors onboard Unmanned Airborne Vehicles (UAVs) in operational and research settings since 2000. The data are being used to better understand and model environmental conditions and climate, and to increase the intelligence of unmanned aircraft so they can perform advanced missions. The UAVs are also being used for disaster monitoring and management.  One such example is the Ikhana UAV, which is being used for detecting and monitoring forest fires in California and the surrounding areas.

The GRC network and architecture branch has a rich history in deploying secure network-centric operations and mobile network technologies, as well as in developing advanced communication protocols.  GRC is also working closely with the international aeronautics and Internet communities to define the problem space and deploy Internet technologies in the Next Generation Air Transportation System.

ARC is looking at advanced technology demonstrations utilizing UAV platforms for longer-duration / higher-altitude missions such as Earth science, environmental monitoring and disaster assessment.  These systems will (1) use multiple satellite service providers and (2) fly to areas where no communication is possible for significant periods of time, i.e., communication disruption.  The former requires network mobility.  The latter requires store-and-forward technology.  ARC is seeking GRC's help in solving the network mobility and store-and-forward problems.   The goal is to be able to command the payloads in real time and get as much data down as efficiently as possible.

There are two central objectives to this proposal.  The first objective is to develop and deploy a mobile communication architecture based on Internet technologies.  This is needed early on by Ames for their first Global Hawk flights to take place in FY09.  The second objective is to improve the data throughput by developing and deploying technologies that enable efficient use of the available communications links.  Such technologies may include: some form of Delay/Disruption Tolerant networking; improvements to the Saratoga transport protocol by implementing a rate-based feature and perhaps congestion control; and development of a protocol that advertises link properties from modem to router.

GRC will work with ARC and their satellite service providers to understand the requirements and limitations of each piece of the network, and then develop and deploy the mobile networking technologies that will meet ARC's needs as well as address the requirements of the various satellite service providers.  This will establish the initial operational network.  GRC will then collaborate with Cisco Systems and appropriate radio manufacturers and work within the Internet Engineering Task Force to develop a rate-based implementation of Saratoga and a modem link-property advertisement protocol.  This modem-to-router protocol will allow the router to intelligently handle the buffering rather than having the radio (modem) simply drop packets.
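
To make "rate-based" concrete, here is a generic rate-paced UDP sender; it is not Saratoga's actual packet format or congestion logic, and the host, port, and 1 Mbit/s rate are placeholders.

    # Generic rate-paced UDP sender, sketching the rate-based transmission
    # behavior described above. Not Saratoga's real packet format; the host,
    # port, and rate are placeholders.
    import socket, time

    def send_paced(data, host="203.0.113.1", port=7542,
                   mtu=1024, rate_bps=1_000_000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        interval = mtu * 8 / rate_bps          # seconds per packet at the rate
        next_send = time.monotonic()
        for off in range(0, len(data), mtu):
            sock.sendto(data[off:off + mtu], (host, port))
            next_send += interval              # pace packets instead of bursting
            time.sleep(max(0.0, next_send - time.monotonic()))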

By increasing the utilization of the communications links to provide real-time and near real-time data delivery, this research will enable new observations and increase the accessibility and utility of science data.  By working with the Internet community and the Internet and modem industry to develop commercial-off-the-shelf (COTS) standards, this research and development will reduce the risk, cost, and development time for Earth science aeronautics-based and ground-based information systems.  Such technologies are even likely to find their way into space-based systems.

The period of performance is three years.  The entrance TRL for the mobile networking research is 4, with an exit TRL of 9.  The entrance and exit TRLs for the store-and-forward technology, rate-based transport, and modem link advertisement work are 2 and 8, respectively.


Title

Multi-Sensor Data Synergy Advisor

Full Name

Gregory G. Leptoukh

Institution Name

NASA Goddard Space Flight Center

Satellite instruments are capable of measuring the sources and transport of industrial and agricultural pollutants and their impact on nearby coastal ecosystems. The NRC Decadal Survey proposed GEOstationary Coastal and Air Pollution Events (GEO-CAPE) mission is designed to make multiple, simultaneous observations of atmospheric and oceanic properties several times a day from geosynchronous orbit. Therefore, there is a need to develop a framework for providing the resulting observations in a format that lets researchers understand the cause-and-effect relationships between these coincident observations. However, similar observations of tropospheric pollution and ocean color made on separate satellite platforms today do not lend themselves to ready comparison: the structure, format and basic assumptions about how the data are processed make the two datasets apples and oranges. NASA must provide an ontological framework for data intercomparison from different sensors or models. Such a framework needs to be automated to account for the combinatorial increase in the number of combinations of parameters, sensors and services across all of NASA's upcoming missions (NPP, NPOESS and the Decadal Survey missions).

We propose to:
* Capture scientist knowledge (rulesets) about the essential science and data quality characteristics
* Encode this knowledge so a computer can retrieve it based on user input or machine inference
* Present only the safe (valid) comparisons, or the caveats regarding speculative comparisons
* Provide user-tunable options for quality screening
* Generate the Giovanni workflow to perform the operation and record the associated provenance

We will use Semantic Web technologies and ontologies to capture essential parameter details, quality and caveats. A rich interlingua (Proof Markup Language) and tools from the Inference Web project will be used to capture inter-relations of the provenance. Reasoners will automatically evaluate potential inter-comparisons as valid, speculative or invalid, providing an explanation of the result.
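
A greatly simplified stand-in for such rulesets (the project itself uses Proof Markup Language and semantic reasoners, not the dictionary-based rules below, which are hypothetical) might classify a requested intercomparison and explain the verdict like this:

    # Greatly simplified stand-in for the proposed rulesets: classify an
    # intercomparison as valid, speculative, or invalid, with an explanation.
    # The rules are hypothetical; the real system uses semantic reasoners.
    RULES = [
        (lambda a, b: a["quantity"] != b["quantity"],
         "invalid", "different physical quantities"),
        (lambda a, b: abs(a["overpass_hr"] - b["overpass_hr"]) > 3,
         "speculative", "overpass times differ by more than 3 hours"),
    ]

    def advise(a, b):
        for test, verdict, why in RULES:
            if test(a, b):
                return verdict, why
        return "valid", "no ruleset objections"

    print(advise({"quantity": "AOD", "overpass_hr": 10.5},
                 {"quantity": "AOD", "overpass_hr": 13.5}))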

The application of the developed technologies will enable researchers to make valid data comparisons, and draw valid quantitative conclusions, from the GEO-CAPE datasets in order, for example, to address the scenario of ocean fertilization by acid rain.

The result is a flexible framework whereby the characteristics of the dataset variables and their related quality can be encoded so these intercomparison rules can be derived.

The entry TRL for the proposed Multi-Sensor Data Synergy Advisor is 2. The exit TRL is 6.


Title

Onboard Processing and Autonomous Data Acquisition for the DESDynI Mission

Full Name

Yunling Lou

Institution Name

Jet Propulsion Laboratory

We will develop information system technology that will streamline data acquisition on-demand and reduce the downlink data volume of the L-band SAR instrument onboard the DESDynI mission.  We will explore the onboard processing approaches for three major science objectives: 1) measure crustal deformation to determine the likelihood of earthquakes, volcanic eruptions, and landslides; 2) characterize vegetation structure for ecosystem health; and 3) measure surface and ice sheet deformation for understanding climate change.  We will determine the types of data processing suitable for implementation onboard the spacecraft to achieve the highest data compression gain without sacrificing the science objectives in each of these areas.  The appropriate data processing algorithms and image compression techniques will be prototyped with the UAVSAR onboard processor that is currently under development with an AIST-05 task.  We will also develop artificial intelligence in the onboard autonomy software to enable us to perform data acquisition on-demand with the satellite radar mission to meet the science objectives.  This technology development directly addresses NASA's strategic objective to "study Earth from space to advance scientific understanding and meet societal needs," (Strategic Subgoal 3A from Table 1 of summary of solicitation).  It also adds important capabilities to airborne observations via UAVSAR, which are complementary observations recommended by the NRC Decadal Survey Report.

The proposed sensor system support (topic area 1) technology development will capitalize on our recent work, funded by two previous AIST awards, on SAR onboard processing and onboard autonomy software for disturbance detection with UAVSAR to maximize the science return and utility of the DESDynI mission.  We will also utilize recent institutional investment in the development of a space-qualifiable integrated FPGA-based common instrument controller and computing platform (ISAAC).  Prototyping this technology with UAVSAR will enable us to reduce the risk, cost, and development time of the space-based information system for the DESDynI mission.


Title

Sensor Web 3G to Provide Cost-Effective Customized Data Products for Decadal Missions

Full Name

Daniel Mandl

Institution Name

NASA Goddard Space Flight Center

The focus of this research effort is to dramatically improve the efficiency of how a wide variety of sensors are integrated together to measure climate change and aid in disaster management.  New decadal survey missions such as HyspIRI will produce an overwhelming amount of raw data.  Effective data reduction and data fusion can be realized through the use of sensor web technology both onboard the satellites and airborne instruments, and on the ground.  Thus survey satellite and pointing satellite data can be integrated into a sensor web over the Internet where virtual data products can be instantiated on the fly, assembled quickly by scientists, neo-geographers, emergency workers or even students and visualized as mashups using Google Earth.

We intend to demonstrate an open toolset, based upon our award-winning SensorWeb 2.0, which will provide non-programmers easy access to decadal mission satellite data via visual tools.  This toolset will be built upon internationally recognized Open Geospatial Consortium (OGC) standards, which will be integrated into a service-oriented architecture and further enhanced using a Resource Oriented Architecture (ROA) approach.  We call the new architecture SensorWeb 3G to indicate 3rd generation maturity.  Our group began to experiment with various aspects of ROA during our SensorWeb 2.0 effort and integrated some initial capability.  However, for SensorWeb 3G, we intend to demonstrate an extensive ROA capability in order to provide a higher level of simplicity and thus enable user mashup capabilities to instantiate custom products on the fly with visual tools.

Mashups are a web technique that combines data and/or functionality from more than one source.  We will apply this technique to sensors and sensor data.  Typically, mashups enable ordinary users to easily customize data to suit their own needs instead of requiring programmers or data centers to assist.  This will dramatically lower the cost of decadal mission sensor data product production.
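
As a concrete, minimal sketch of the KML packaging step described below (the event name and coordinates are invented), a product generator could emit a placemark like this:

    # Minimal sketch: package a detected event as a KML Placemark for a
    # Google Earth mashup. The event name and coordinates are invented.
    import xml.etree.ElementTree as ET

    KML = "http://www.opengis.net/kml/2.2"

    def event_to_kml(name, lon, lat, path="event.kml"):
        ET.register_namespace("", KML)
        root = ET.Element(f"{{{KML}}}kml")
        doc = ET.SubElement(root, f"{{{KML}}}Document")
        pm = ET.SubElement(doc, f"{{{KML}}}Placemark")
        ET.SubElement(pm, f"{{{KML}}}name").text = name
        pt = ET.SubElement(pm, f"{{{KML}}}Point")
        ET.SubElement(pt, f"{{{KML}}}coordinates").text = f"{lon},{lat},0"
        ET.ElementTree(root).write(path, xml_declaration=True, encoding="utf-8")

    event_to_kml("Hypothetical thermal anomaly", -120.08, 36.42)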

Thus these agile methods will provide the cornerstone for event-driven observations and products in the areas of disaster management, field campaigns, model validation, calibration/validation intercomparison, and the generation of ad hoc virtual constellations.  These products generated onboard could be downlinked immediately via a direct broadcast mechanism and disseminated immediately to the end-users.  Packaged as Geospatial Really Simple Syndication (GeoRSS)/Keyhole Markup Language (KML) files, the data from HyspIRI and many other sensors will be published, aggregated and shared by end-users.  This will provide an Internet-like experience to access the world's sensors using standard web tools. This is an essential capability for HyspIRI and all of the other decadal missions in order to fulfill a secondary requirement to enable multi-sensor integration to extend each mission's basic survey capability, thus enabling NASA and its science community to leverage their investment in these missions.  And finally, this capability is essential in order to fulfill the United States' commitments to the Global Earth Observation System of Systems (GEOSS), which are being implemented under the auspices of the Committee on Earth Observation Satellites (CEOS).

SensorWeb 3G will enter the effort at TRL 4 and will progress to TRL 6 over the three-year period of performance.  The start date of this effort will be 1/1/09 and it will continue to 12/31/11. This research proposal is being submitted under the "Data Services Management" portion of the call, but could also fit under the "Advanced Data Processing" element. The team that has been assembled has interacted and collaborated extensively, and provides leadership in the international, DoD, and federal wildfire management communities. We are thus uniquely qualified to develop and infuse this technology.


Title

Ground Network Design and Dynamic Operation for Near Real-Time Validation of Space-Borne Soil Moisture Measurements

Full Name

Mahta Moghaddam

Institution Name

University of Michigan

This proposal responds to the Sensor System Support topical category of the AIST-08 research announcement. Specifically, it addresses 'Sensor calibration/validation strategies for near real-time operation.'  The proposed effort will develop technologies for dynamic and near-real-time validation of space-borne soil moisture measurements, in particular those from the Soil Moisture Active and Passive (SMAP) mission, one of the four first-tier Earth Science missions identified by the National Research Council Decadal Survey.  Soil moisture fields are functions of variables that change over time across scales ranging from a few meters to several kilometers.  Therefore, an optimal spatial and temporal sampling strategy is needed that will not only make the validation task feasible, but will also result in substantial improvement in science quality for soil moisture validation over conventional techniques. Through an on-going AIST task, we have gained significant expertise in the dynamic control of ground sensors based on temporal statistics of soil moisture fields. Here, we leverage that expertise to solve the joint problem of sensor placement and scheduling. We propose to develop optimal sensor placement policies based on nonstationary spatial statistics of soil moisture and, for each location, dynamic scheduling policies based on physical models of soil moisture temporal dynamics and microwave sensor models for heterogeneous landscapes. Tractable computational strategies are proposed. We further propose to relate the ground-based estimates of the true mean to space-based estimates through a physics-based statistical aggregation procedure.  An integrated communication and actuation platform will be used to command the sensors and transmit measurement data to a base station in real time. Full-scale field experiments are proposed in coordination with SMAP cal/val experiments to prototype the validation system. This project has a 3-year performance period, with entry TRLs of 2-4 for the various components and an exit TRL of 6.
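
One simple dynamic-scheduling policy in the spirit of this abstract (not the proposers' actual optimization) is to sample a node more often when its recent soil-moisture variability is high; the intervals and threshold below are illustrative.

    # Hedged sketch of adaptive sampling: shorten a node's sampling interval
    # when recent soil-moisture variability is high. The threshold and
    # intervals are illustrative, not the proposers' policy.
    import numpy as np

    def next_interval_hours(recent_vwc, threshold=0.005, fast=1.0, slow=12.0):
        """recent_vwc: recent volumetric water content readings (m3/m3)."""
        variability = np.std(np.diff(recent_vwc))
        return fast if variability > threshold else slow

    print(next_interval_hours([0.21, 0.22, 0.29, 0.27]))  # wet-up -> sample hourly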


Title

Anomaly Detection and Analysis Framework for Terrestrial Observation and Prediction System (TOPS)

Full Name

Ramakrishna R. Nemani

Institution Name

NASA Ames Research Center

The goal of this project is to enhance data processing capabilities in Earth sciences by providing a framework for automated anomaly detection in large heterogeneous data sets and on-demand data/model analysis. We will demonstrate the proposed framework as an end-to-end capability by integrating it with TOPS [1] and with two existing fast and scalable on-line (streaming data) and off-line (archived data) anomaly detection algorithms. This capability to examine large amounts of data very quickly and automatically can identify regions of interest where we need to focus our analysis efforts. The result will be a data volume reduction, efficient utilization of our computing resources, and improved ability to extract knowledge and information from large datasets while minimizing the involvement of the scientists until the post-analysis phase. TOPS integrates satellite and surface data with simulation models to estimate states/fluxes of the global carbon and water cycles, and as such provides an excellent opportunity to leverage planned observations of water and carbon variables from several decadal survey missions (SMAP, ASCENDS, DESDynI, HyspIRI, SCLP) [2]. In terms of the NRA, this project falls in the category of advanced data processing and more specifically includes technologies for model interoperability, mining of data for information, and dynamic data acquisition from multiple sources. This effort brings together an interdisciplinary team of researchers in the fields of Earth science, data mining, and software engineering.

The project is executed in three phases that address different challenges. First, in order to provide support for automation within the TOPS system, we will develop a knowledge management system that will capture relationships among TOPS data and models. This will enable us to dynamically substitute similar datasets throughout our modeling efforts in an automated way, as well as have the ability to deploy appropriate models into areas identified by discovered anomalies.

Second, we will develop a framework for anomaly detection that will define a plug-in architecture that can be utilized by a number of current and future anomaly detection algorithms. To exercise this component, we will integrate it with both on-line and off-line anomaly detection algorithms developed at NASA's Integrated Vehicle Health Management (IVHM) [3] data-mining lab [12]. However, the main goal of this phase is to clearly separate the low-level details of data manipulation and transformation from the interface, and to provide a testbed for future anomaly detection and data mining algorithms developed at NASA and elsewhere. This framework will be integrated with TOPS, which will provide a unique foundation for future research in the data-mining field, particularly in Earth sciences.
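
Such a plug-in contract might look something like the sketch below, with one small interface that both on-line and off-line detectors implement; the interface and the streaming z-score example are hypothetical illustrations, not the framework's actual API.

    # Hypothetical sketch of a plug-in contract for anomaly detectors, with
    # a simple streaming example. Not the framework's actual interface.
    from abc import ABC, abstractmethod
    import numpy as np

    class AnomalyDetector(ABC):
        @abstractmethod
        def update(self, x: float) -> bool:
            """Consume one observation; return True if it is anomalous."""

    class ZScoreDetector(AnomalyDetector):
        """Flag values far from the running mean of a sliding window."""
        def __init__(self, window=30, threshold=3.0):
            self.buf, self.window, self.threshold = [], window, threshold

        def update(self, x):
            anomalous = (len(self.buf) >= 5 and
                         abs(x - np.mean(self.buf))
                         > self.threshold * (np.std(self.buf) + 1e-9))
            self.buf = (self.buf + [x])[-self.window:]
            return anomalous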

Finally, we will develop a capability to automatically analyze the regions of interest discovered by the anomaly detection algorithms. This will be done by integrating a decision-tree anomaly classification component with the automated workflow execution system [4] developed under NASA's AIST [5] program.

The project will use the spiral model for software development where each phase begins with preliminaries, such as requirements and specifications, and on exit each component must pass a set of unit and integration tests. This is a risk mitigation technique, where at the end of each phase of the project we have a fully working system with a subset of proposed functionality, rather than waiting to the end of the project to perform final integration.

The period of performance of the project is three years; based on information in NASA's Guide for Proposers, we have estimated a possible start date of May 2009. However, the exact start date is not critical for this project and can be readily adjusted.

We have estimated the entry TRL of the efforts at 3 and we will deliver a system with exit TRL of 6. The detailed TRL justification is provided in the proposal.


Title

On-Board Processing to Optimize the MSPI Imaging System for ACE

Full Name

Charles D. Norton

Institution Name

Jet Propulsion Laboratory

The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. As a result of past internal JPL funding from FY2005-2007, as well as on-going Instrument Incubator Program funding, several key technology elements for MSPI have been or will soon be completed. This includes successful proto-flight vibration testing of the photoelastic modulators (PEMs) that are at the heart of our high-accuracy polarimetric imaging technique. The PEMs are a key component of the digital signal processing chain that drives the need for on-board processing to reduce the instrument data rate, which is the focus of this sensor system support proposal.

A key technology development needed for MSPI, directly related to information systems, is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. The proposed project will solve the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec (a reduction of more than 200:1) to produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. We will also show how on-board processing capabilities can simplify and improve the design of the MSPI digital signal processing train as the on-board processor assumes video synchronization/sampling and video processing functions. In this way, we simultaneously address challenges in data processing and systems management using on-board processing techniques.

The following tasks will meet these objectives: (1) Complete design of a heritage polarimetric processing algorithm with migration and testing on the Xilinx Virtex-5(TM) FPGA and development board; (2) integrate the on-board processor into the camera brassboard system; (3) perform FPGA design trades to optimize performance and explore how digital signal processing features can be incorporated into the design; and (4) perform laboratory and airborne validation of on-board processing system with real-time retrieval of polarimetry data. Task 1 will establish TRL 4. Tasks 2-3 will advance the work to TRL 5 and task 4 will conclude this work at a planned exit TRL of 6. The period of performance is 3 years.


Title

Integration of Data Assimilation, Stochastic Optimization and Uncertainty Modeling within NASA Land Information System (LIS)

Full Name

Christa D. Peters-Lidard

Institution Name

NASA Goddard Space Flight Center

This proposal aims to design and implement a suite of stochastic optimization and uncertainty modeling tools to enhance the NASA Land Information System (LIS), a land surface modeling and data assimilation framework. LIS has been developed with the goal of enabling a system that can take advantage of the increasing number of land surface observations becoming available from NASA satellites. LIS provides the modeling tools to integrate these observations with model forecasts to generate improved estimates of environmental conditions. LIS currently includes a comprehensive suite of Data Assimilation (DA) tools that are focused on generating improved estimates of hydrologic model states. The DA techniques, however, are premised on the existence of unbiased model state predictions. In this work, we propose the extension of LIS to integrate the use of both optimization and data assimilation through the development of a generic, interoperable suite of optimization tools. This extension would enable the use of observational data not only in the estimation of model states, but also of the model parameters fundamental to the behavior of the models. The other focus of this proposal is the development of uncertainty modeling tools that provide explicit characterizations of uncertainty in the model predictions. We propose the implementation of several formal uncertainty estimation techniques, such as Bayesian Markov chain Monte Carlo methods. The application of the system has several levels of synergy with the decadal survey missions, as the proposed technologies can be used to aid in mission formulation activities as well as in the improvement of end-use products from these missions. The proposed extensions will thus enable LIS to more fully exploit the available hydrologic information, thereby providing decision and policy makers a better characterization of the risks and vulnerabilities involved in environmental prediction.
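
As a toy sketch of the Bayesian MCMC idea (not the LIS implementation), a random-walk Metropolis sampler can characterize the posterior of a single model parameter; the observations, error scale, and Gaussian likelihood below are placeholders.

    # Toy random-walk Metropolis sampler illustrating the Bayesian MCMC
    # approach the abstract proposes. The model, observations, and error
    # scale are placeholders, not LIS components.
    import numpy as np

    def metropolis(log_post, theta0, steps=5000, scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        chain, lp = [theta0], log_post(theta0)
        for _ in range(steps):
            prop = chain[-1] + scale * rng.standard_normal()
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
                lp = lp_prop
                chain.append(prop)
            else:
                chain.append(chain[-1])
        return np.array(chain)

    obs = np.array([0.24, 0.21, 0.26])                 # toy observations
    log_post = lambda th: -0.5 * np.sum((obs - th) ** 2) / 0.02 ** 2
    samples = metropolis(log_post, theta0=0.3)[1000:]  # drop burn-in
    print(samples.mean(), samples.std())               # posterior mean, spread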


Title

InSAR Scientific Computing Environment

Full Name

Paul A. Rosen

Institution Name

Jet Propulsion Laboratory

We will develop the next generation of geodetic imaging processing technology for InSAR sensors, which is needed to provide flexibility and extensibility in reducing measurements from radar satellites and aircraft to new geophysical products.  The NRC Decadal Survey-recommended DESDynI mission will deliver to the science community data of unprecedented quantity and quality, making possible global-scale studies in climate research, natural hazards, and Earth's ecosystem. DESDynI will provide time series and multi-image measurements that permit 4-D models of Earth surface processes so that, for example, climate-induced changes over time become apparent and quantifiable. Our Advanced Data Processing technology, applied to a global data set such as from DESDynI, enables a new class of analyses at time and spatial scales unavailable using current approaches.

We will implement an accurate, extensible, and modular processing system to realize the full potential of DESDynI and of existing satellite platforms, as well as DESDynI's airborne counterpart, UAVSAR.  We will dramatically rethink the processing approach in order to i) enable multiscene analysis by adding new algorithms and data interfaces, ii) permit user-reconfigurable operation and extensibility, and iii) capitalize on codes already developed by NASA and by the science community.  The framework will incorporate modern programming methods based on recent research, including object-oriented scripts controlling legacy and new codes, abstraction and generalization of the data model for efficient manipulation of objects among modules, and well-designed module interfaces suitable for command-line execution or GUI programming. It will expose users gradually to its levels of sophistication, allowing novices to apply it readily for simple tasks and experienced users to mine the data with great facility.  The framework will be designed to allow user contributions to promote maximum utility and sophistication of the code, creating an open source community that will extend the framework into the indefinite future.
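
The "object-oriented scripts controlling legacy and new codes" pattern can be sketched as a component that wraps a legacy executable behind declared input/output ports; the executable name and port names here are hypothetical, not the project's actual module design.

    # Sketch of a script component wrapping a legacy processing executable
    # behind declared ports. "form_igram" and the port names are invented.
    import subprocess

    class FormInterferogram:
        inputs = ("slc1", "slc2")        # declared data-model ports
        outputs = ("interferogram",)

        def run(self, slc1, slc2, out="igram.int"):
            # The legacy code keeps its command-line contract; scripts or a
            # GUI can drive this module through the same interface.
            subprocess.run(["form_igram", slc1, slc2, "-o", out], check=True)
            return out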

The geodetic imaging system benefits from software developed for past and ongoing programs at NASA.  The geophysical and ecology communities, which will rely on DESDynI data, often use the ROI_PAC InSAR package, developed at NASA/JPL/Caltech. This research code analyzes single-interferogram products in terms of ground deformation, ice flow, and vegetation canopies in a basic and useful way, and is a traditional suite of FORTRAN and C programs tied together with scripts that execute certain prescribed functions, albeit parameterized for some flexibility. ROI_PAC interfaces are somewhat rigid and do not allow incorporation of newer techniques or data sets in a straightforward way.  In addition, the codes were not designed for the ultra-precise geodetic positioning of output products needed for 4-D data reduction.  The NASA UAVSAR program has developed a new processing methodology that is considerably more accurate, powerful and adaptable than current spaceborne codes; but the UAVSAR program has not addressed many of the analysis methods useful in spaceborne applications.  We will extend the UAVSAR approach to spaceborne platforms in a modern framework, with benefits of improved performance and new capability not previously possible.

The period of performance is three years.  We will deliver a software package that can process all major, open spaceborne data sets available in ways useful to the geophysical community, and we will demonstrate the framework's ability to adapt to new sensors, new algorithm modules, new users, and new ways of thinking about end-to-end data reduction. The entry level TRL is 3, as the basic object oriented methods have been demonstrated as a proof of concept on a component basis in the lab.  The planned exit level TRL is 6, where the technology will be demonstrated as a system in a research environment, but not at the production level.


Title

Moving Objects Database Technology for Weather Event Analysis and Tracking 

Full Name

Markus Schneider

Institution Name

University of Florida

Our proposed technology will leverage existing information systems to overcome the problem noted in the NRC decadal survey that "existing observation and information systems [...] only loosely connect [the] three key elements" of (i) raw satellite observations, (ii) analysis, forecasts, and models of information and knowledge, and (iii) the decision processes using such information and knowledge. The overall objective of our proposed technology is to provide the NASA workforce with previously unavailable database management, analysis, and query capabilities that will integrate the above-mentioned three key elements to support both the research and understanding of dynamic weather events, and the decision processes related to the weather events.

The two key components of our solution approach are the design and implementation of a Moving Objects Software Library (MOSL), which provides a representation of moving objects, enables the execution of operations on them, and can be integrated into databases, and the design and implementation of a spatial-temporal query language (STQL), which enables users (i) to conveniently pose ad hoc queries on weather data such as tropical cyclone data, (ii) to obtain immediate responses, and (iii) to retrieve satellite data based on user queries.
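
A toy sketch of the moving-object representation such a library would provide (the track values are invented, and STQL's actual syntax is not shown here): a trajectory type supporting position-at-time interpolation and a proximity predicate.

    # Toy moving-object type: a storm track supporting position-at-time and
    # a proximity predicate. Values are invented; not MOSL's actual design.
    import numpy as np

    class MovingPoint:
        def __init__(self, times, lons, lats):
            self.t, self.lon, self.lat = map(np.asarray, (times, lons, lats))

        def at(self, t):
            """Linearly interpolated (lon, lat) at time t."""
            return (np.interp(t, self.t, self.lon),
                    np.interp(t, self.t, self.lat))

        def ever_within(self, lon, lat, deg):
            """True if the track ever passes within deg degrees of (lon, lat)."""
            return bool(np.any(np.hypot(self.lon - lon, self.lat - lat) < deg))

    track = MovingPoint([0, 6, 12], [-75.0, -76.2, -78.1], [23.5, 24.4, 25.9])
    print(track.at(9), track.ever_within(-77.0, 25.0, 1.5))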

Additional advantages of our proposed technology are that it is mission-independent, weather event-type independent, and system-independent. Hence, our proposed technology does not incur excessive development cost and time when integrating data from future missions, adding new weather event-types, or integrating the proposed technology into future or existing information systems.

In our proposed effort, we will apply our solution to tropical cyclone weather events and the observations from the QuikSCAT and TRMM satellites. This will ensure data continuity and a smooth transition to the wind data and precipitation data from the NRC-proposed XOVWM/3DWind missions and the PATH (and GPM) mission, respectively. In the third year of our effort, our proposed technology will be integrated into the JPL tropical cyclone portal for validation.

This 3-year effort for the proposed technology is expected to advance the TRL from 2 to 5.


Title

End-to-End Design and Objective Evaluation of Sensor Web Modeling and Data Assimilation System Architectures: Phase II

Full Name

Michael S. Seablom

Institution Name

NASA Goddard Space Flight Center

We propose to deliver an end-to-end simulator that will quantitatively assess the scientific value of a fully functional, model-driven sensor web.  The overarching goal of this effort is to provide an objective analysis tool for Decadal Survey mission planning.  The tool would enable systems engineers and Earth scientists to define and model candidate mission designs and operations concepts and accurately assess their impacts.

We will build upon the work of our Sensor Web Simulator (SWS) that was designed and is now being prototyped under our Phase I AIST award. As in Phase I, we will focus on meteorological applications in which information derived from a numerical model is used to intelligently drive data collection for operational weather forecasting.  Simulation is essential: the development costs of an operational sensor web system and the concomitant deployment risk are certain to be very high. Simulation provides the means to: identify types and quantities of sensor assets and their interactions; evaluate alternative observing system implementations; quantify potential development costs; reduce operational deployment risk.

During the proposed three-year performance period we will advance our current Phase I SWS technology from TRL 4 to TRL 6, facilitating its deployment as a mission planning tool. We will expand the simulator's capabilities by working with our partners to develop detailed case studies for a hurricane prediction scenario using simulated data from three of the Decadal Survey missions: the Global Wind Observing Sounder (GWOS), the Extended Ocean Vector Winds Mission (XOVWM), and the Precipitation and All-weather Temperature and Humidity (PATH) mission. All three are recommended for launch in the 2020 time frame. We will also expand on an existing Phase I use case scenario by making use of simulated data from the Geostationary Operational Environmental Satellite series "R" (GOES-R) Advanced Baseline Imager.

To facilitate the design and testing of the SWS we have established key partnerships with two Federal Government and two commercial organizations that serve in the roles of both customers of, and contributors to, the sensor web simulator.

We will conclude the work by delivering the simulator to the NASA Goddard Integrated Design Center (IDC), a facility that provides conceptual end-to-end mission design as well as analyses of instrument systems. We envision the simulator as a necessary and complementary third element of the IDC's two constituent labs: the Mission Design Lab (MDL) and the Instrument Design Lab (IDL). The addition of a third lab, a proposed Sensor Web Lab (SWL), would then enable the IDC to perform a complete set of necessary trade studies: to evaluate the impact of selecting different types and quantities of remote sensing and in situ sensors, to characterize alternative platform vantage points and measurement modes, and to identify, explore, and quantitatively evaluate alternative rules of interaction between sensors and numerical models.  As sensor web concepts are adopted by mission planners and as missions increasingly comprise collections of interacting assets, it will be increasingly important for the IDC to evaluate a mission's ability to achieve overarching science goals and to maximize useful science return.

Return to Top

Title

Coupling NASA Advanced Multi-scale Modeling and Concurrent Visualization Systems for Improving Predictions of Tropical High-impact Weather

Full Name

Bo-Wen Shen

Institution Name

University of Maryland, Baltimore Campus

Objectives and benefits: This proposal addresses support for the following Earth Science missions in the NRC Decadal Survey (2007): the Climate Absolute Radiance and Refractivity Observatory (CLARREO), Aerosol-Cloud-Ecosystems (ACE), Precipitation and All-Weather Temperature and Humidity (PATH), and Three-dimensional Tropospheric Winds from Space-based Lidar (3D-Winds). The primary objective of this research proposal is to seamlessly integrate NASA's advanced supercomputing, concurrent visualization, and multi-scale (global-, meso-, and cloud-scale) modeling systems to improve the understanding of the roles of atmospheric moist thermodynamic processes (i.e., the changes of precipitation, temperature, and humidity) and cloud-radiation-aerosol interactions, with the aim of improving predictions of tropical high-impact weather systems. The potential major outcomes of our proposed tasks, which fall mainly into the category of "advanced data processing" and partially into the category of "data services management", include: (1) progress in improving the understanding of moist processes and their impacts on tropical high-impact weather predictions; (2) progress in advancing NASA large-scale modeling systems so that they can take advantage of next-generation supercomputers; (3) progress in advancing NASA high-end computing and visualization technologies; and (4) progress in fostering interdisciplinary collaboration and experience sharing among the Earth science, computer science, and computational science disciplines.

Outline of proposed work and methodology: The proposed tasks, which will proceed according to well-defined milestones to advance the current TRL 3 technologies to TRL 7, include: (1) improve the parallel scalability of the multi-scale modeling system to take full advantage of next-generation supercomputers (e.g., NASA's Pleiades) with tens of thousands of CPUs; (2) integrate the multi-scale modeling and concurrent visualization systems, which can be used to visualize massive volumes of data from model simulations and satellite observations; (3) significantly streamline the data flow for fast post-processing and visualization; and (4) perform experimental real-time predictions with the final system to improve tropical weather predictions.
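
The concurrent-visualization pattern in task (2) can be pictured as a producer/consumer pipeline in which model output is rendered while the simulation runs rather than after everything has been written to disk. The Python sketch below is our illustration of that pattern under simplified assumptions, not the project's actual system:

    import threading, queue

    frames = queue.Queue(maxsize=4)    # bounded buffer between model and renderer

    def model():
        """Stand-in for the running simulation: emit one field per time step."""
        for step in range(5):
            field = [0.1 * step] * 8            # pretend this is a model field
            frames.put((step, field))           # hand off without pausing the run
        frames.put(None)                        # sentinel: simulation finished

    def visualizer():
        """Consume fields as they arrive and 'render' them concurrently."""
        while (item := frames.get()) is not None:
            step, field = item
            print(f"rendered step {step}: mean = {sum(field) / len(field):.2f}")

    producer = threading.Thread(target=model)
    consumer = threading.Thread(target=visualizer)
    producer.start(); consumer.start()
    producer.join(); consumer.join()

The bounded buffer is the essential design point: it lets visualization keep pace with the simulation without forcing the full output volume through the file system first.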

Return to Top

Title

Instrument Simulator Suite for Atmospheric Remote Sensing

Full Name

Simone Tanelli

Institution Name

Jet Propulsion Laboratory

We will develop an integrated Instrument Simulator Suite for Atmospheric Remote Sensing (ISSARS) capable of simulating active and passive instruments aimed at remote sensing of the atmosphere. The concept of ISSARS is to provide a modular framework enabling the creation of a universal instrument simulator (for real-aperture instruments). We will implement all the modules necessary to simulate the instruments considered for deployment on the Aerosol/Cloud/Ecosystems mission (ACE, NRC 2007, p. 4-5) and those to be deployed on the Global Precipitation Measurement mission (GPM, NRC 2007, p. 11-9).  We will incorporate state-of-the-art forward models from the microwave to the UV range, and integrate them so that a common input from atmospheric models is treated with consistent assumptions across the simulated instruments.  ISSARS will also allow custom instrument configurations (e.g., orbit, scanning strategy, beamwidth, frequency) to facilitate mission design trade studies.

ISSARS will include modules for scattering, emission, and absorption, including Doppler and polarimetric quantities, from atmospheric constituents (gas, hydrometeors, aerosol, and dust) and surface characteristics. Most of the effort will be dedicated to developing a robust modular framework capable of seamlessly integrating further modules yet to be produced by the scientific community. Only selected modules will be improved with respect to currently available algorithms. ISSARS will be developed with optimized parallelization schemes to maximize throughput to the science community.
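
One way to picture such a modular framework (a hypothetical sketch of ours, not ISSARS code; the instruments, fields, and formulas are invented) is a registry in which every instrument's forward model consumes the same atmospheric state, so that microphysical assumptions are made once and shared consistently across instruments:

    FORWARD_MODELS = {}

    def register(instrument):
        """Decorator that adds a forward-model module to the simulator registry."""
        def wrap(fn):
            FORWARD_MODELS[instrument] = fn
            return fn
        return wrap

    @register("radar_94GHz")
    def radar_model(atmos):
        # Toy reflectivity proxy computed from a shared hydrometeor field.
        return sum(atmos["ice_water_content"])

    @register("radiometer_183GHz")
    def radiometer_model(atmos):
        # Toy brightness-temperature proxy from the same shared state.
        return 250.0 + 0.5 * sum(atmos["water_vapor"])

    # One common atmospheric input drives every registered instrument, so
    # assumptions (e.g., particle size distributions) stay consistent.
    atmos = {"ice_water_content": [0.1, 0.3, 0.2], "water_vapor": [1.2, 0.8, 0.5]}
    results = {name: model(atmos) for name, model in FORWARD_MODELS.items()}
    print(results)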

ISSARS will be built with state-of-the-art IT technologies to allow instrument designers, modelers, and atmospheric scientists with limited knowledge of the hardware and software behind it to easily configure the instruments through a user-friendly GUI front-end.

The overarching goal is to fill a specific existing gap: in general, each instrument, or at most each mission, has developed its own set of simulators, some empirical, some based on arbitrary assumptions applied to the output of atmospheric models (e.g., the scattering properties of frozen hydrometeors and particle size distributions).  The end result is that specific 'knobs' are used to match relatively small subsets of observables, and these often perform poorly when applied to other instruments. ISSARS will accept as input fields generated by commonly used models (e.g., NASA-Unified WRF) and will handle all assumptions consistently among all instruments to allow:

A) immediate and accurate verification that planned missions, in various configurations, can achieve their desired scientific requirements; use of ISSARS will greatly benefit mission design and reduce mission costs.

B) integration in OSSEs, made possible by optimizing the parallelization structure of the code, for quantitative assessment of measurement impact and mission design optimization.  Most importantly, ISSARS will enable the efficient implementation of multi-instrument, multi-platform OSSEs. In particular, we will consider the GSFC-developed NASA-Unified WRF as the default (though not the only possible) input to ISSARS.

C) consistent and statistically robust validation of forward model formulations and atmospheric model outputs by reanalysis of data products from existing satellite missions (e.g., TRMM, the A-Train, etc.). It is widely accepted that some of the current forward models need improvement, and significant efforts by the scientific community aim at developing new advanced models or improving existing ones: ISSARS will allow rapid integration of new models and will accelerate their validation and transition to higher TRLs.

D) efficient and accurate performance assessment of on-board or ground processing methods aimed at achieving scientific requirements and goals given the available hardware and financial resources.

The period of performance is 2/1/2009 through 1/31/2012. Entry TRL is 3, exit TRL is 5.

Return to Top

Title

OSCAR:  Online Services for Correcting Atmosphere in Radar

Full Name

Paul von Allmen

Institution Name

Jet Propulsion Laboratory

Interferometric Synthetic Aperture Radar (InSAR) data undergo a series of processing steps before topography or surface deformation can be measured. A major source of error in repeat-pass InSAR measurements is delay in the propagation of the radar signal caused by tropospheric water vapor that changes between the two SAR acquisitions. Over the last few years, the co-investigators and others have developed several water vapor delay correction algorithms based on satellite imagery and Global Positioning System (GPS) data, each of which has some spatial or temporal limitation. A new correction algorithm based on weather data, developed recently by one of the co-investigators, has a significant advantage over other approaches in that it is available globally at all times, although at coarser spatial resolution. We will build an information technology system that produces atmospheric water vapor corrections for existing InSAR missions and for the DESDynI mission recommended by the Decadal Survey, blending the data from different sources so as to use the highest-resolution data where and when available and coarser data elsewhere.
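
The blending strategy might be pictured by the following Python sketch (our simplification; the sources, priorities, and values are invented): for each pixel, take the highest-resolution water-vapor estimate that covers it, falling back to the coarser, globally available weather-model field elsewhere:

    import math

    def blend(sources, n_pixels):
        """sources: list of (priority, values), where values[i] is either a
        float (valid estimate) or NaN (no coverage at that pixel); a lower
        priority number means finer-resolution data."""
        blended = [math.nan] * n_pixels
        for _, values in sorted(sources):            # finest data first
            for i, v in enumerate(values):
                if math.isnan(blended[i]) and not math.isnan(v):
                    blended[i] = v
        return blended

    nan = math.nan
    gps     = (0, [2.1, nan, nan, 2.4])   # sparse but accurate GPS estimates
    imagery = (1, [2.0, 2.2, nan, nan])   # satellite imagery: partial coverage
    weather = (2, [1.9, 2.0, 2.1, 2.2])   # weather model: coarse but global
    print(blend([gps, imagery, weather], 4))   # -> [2.1, 2.2, 2.1, 2.4]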

Currently, atmospheric data are retrieved "by hand" from a selection of sources, such as the NASA, NOAA, and NCAR archive data systems, and used as input to the correction algorithm. Gathering the data is time consuming and requires knowledge of the database search methodology and of the data attributes and formats specific to each resource, presenting a major barrier to adoption by scientists studying Earth deformation. We propose to develop and implement an automated service to access data from a wide selection of resources and integrate them into the correction algorithm. One of the key tasks will be to make accurate estimates of the spatial distribution of water vapor, at high resolution, at the times of the SAR image acquisitions. The automated retrieval of the data will rely on established Web Services technology such as the Simple Object Access Protocol (SOAP) and REST (one-line query URLs), with emphasis on modularity, extensibility, and compliance with standards. A fundamental component of the proposed research is the SciFlo web services orchestration framework, which will provide one of the main thrusts in the conversion of application algorithms into remotely invokable Web Services. The completed package will feature an interface with one or more standard InSAR data processing packages, in particular ROI_pac, which is widely used at JPL and in the InSAR community.
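
The automated retrieval step could look like the sketch below, in which a one-line REST query is keyed to a SAR acquisition time and bounding box. The endpoint URL and parameter names are placeholders of ours, not a real data service:

    import json
    from urllib.request import urlopen
    from urllib.parse import urlencode

    def fetch_water_vapor(acq_time, bbox, base_url="https://example.org/atmo/api"):
        """Query a (hypothetical) REST service for a water-vapor field covering
        one SAR acquisition; bbox = (lon_min, lat_min, lon_max, lat_max)."""
        params = urlencode({
            "product": "water_vapor",
            "time": acq_time,                      # e.g. "2009-06-01T14:32:00Z"
            "bbox": ",".join(map(str, bbox)),
            "format": "json",
        })
        with urlopen(f"{base_url}?{params}") as resp:   # one-line query URL
            return json.load(resp)

    # A correction workflow would call this once per SAR acquisition and pass
    # the returned field to the water vapor delay correction algorithm.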

The primary purpose of our project is to provide users with an InSAR delay correction package that does not require detailed knowledge of the data sources. This automated approach will bring the benefit of newly developed algorithms and new atmospheric data to a wide audience in the Earth science community. It will provide better InSAR images for Earth crust deformation, ice motion, and biomass measurements. We will process data collected by the NASA UAVSAR missions and infuse the technology into the DESDynI Ground Data System. The proposed research addresses the NASA Strategic Objective to "Conduct a program of research and technology development to advance Earth observation from space, improve scientific understanding, and demonstrate new technologies with the potential to improve future operational systems."

The proposed period of performance is three years, with Year 1 devoted to developing the Web Services, Year 2 to integration with the correction algorithm and comparison with ground truth, and Year 3 to data processing for InSAR missions. Since the web services technology that will be used for this work has already been tested in other applications by one of the co-investigators, and other co-investigators have already implemented and tested the correction algorithms, the entry TRL is 3. At the end of Year 3 the planned exit TRL is 6, with infusion of the technology into the Ground Data System for DESDynI.