NASA Autonomous Modular Scanner (AMS) – WILDFIRE Airborne Instrument

The AMS is a complete rebuild of the NASA Thematic Mapper Simulator. It retains the spectral characteristics of the Landsat TM, adds several channels, and modifies the thermal channels to allow improved discrimination across a wide range of temperature conditions. The thermal channels now replicate the spectral bandpasses of two of the proposed NPOESS VIIRS channels, the ones closest to the two MODIS thermal channels currently used to generate the Fire Rapid Response products. In essence, we are using the AMS channels to assess the quality of the VIIRS channels for the USFS, which will have to switch over to them when MODIS dies / is shut down.
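For orientation, the channel correspondence that underpins this proxy argument can be written as a small lookup. The AMS and VIIRS band limits are taken from the spectral configuration listed below; the MODIS active-fire bands and their approximate centers are general MODIS facts added here for reference only, not part of the AMS documentation:

    # Approximate correspondence between the AMS thermal channels, the VIIRS
    # channels they replicate, and the MODIS bands used for active-fire products.
    AMS_VIIRS_MODIS_MAP = {
        "AMS ch 11/15 (3.60-3.79 um)": {
            "viirs": "M12 (3.61-3.79 um)",
            "modis_fire_band": "band 21/22 (center ~3.96 um)",
        },
        "AMS ch 12/16 (10.26-11.26 um)": {
            "viirs": "M15 (10.26-11.26 um)",
            "modis_fire_band": "band 31 (center ~11.03 um)",
        },
    }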

The AMS is the only spectrometer in the NASA instrument fleet that uses Stirling coolers for the thermal channels, which allows continuous operations (such as on a long-duration UAV). Previous versions (and other scanners) use liquid nitrogen dewars to cool the thermal detectors. Those systems lose coolant (and therefore calibration) after about 6-8 hours of operation and must be refilled; for flight-safety reasons, liquid nitrogen cannot be carried on the aircraft to refill the units in-flight.

The AMS also interfaces to a processor where all of the in-flight data manipulation occurs. Previously, sensors had only limited processing built into the sensor processor itself, which is cumbersome: the sensor has to do a lot of "crunching" to operate and process data at the same time. We took many of the fire-related algorithms and ported them to the on-board processor for "hands-free" operations, so we can send Level B (or Level II) products to the ground. On that processor we also autonomously geo- and terrain-correct, in real-time, all the data we want to send to the ground. We do this by hosting the SRTM data for the western US on the processor and using that DEM to geo- and terrain-correct the sensor data. The AMS has an associated Applanix system on the sensor head, which provides extremely precise DGPS position and pointing-vector information for the scan head, allowing a very precise geo-location to be derived. Coupling the Applanix with the on-board nav system allows even finer positional precision. By calculating all the sensor timing codes (to the hundredth of a second), we can ascertain the exact pointing-vector position of the scan mirror at all times. Coupling that with the terrain data allows us to "put a pixel" in an accurate geo-location on the terrain.
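To make the geo-location chain concrete, here is a minimal sketch of how a per-pixel position can be derived from the pieces described above: the Applanix position and attitude, the scan-mirror angle recovered from the timing codes, and a DEM lookup. The function names, frame conventions, and simple ray-march are illustrative assumptions, not the actual AMS processing code; the real system works on SRTM tiles and geodetic coordinates rather than a flat local frame.

    # Minimal, illustrative sketch (not the AMS flight code) of per-pixel
    # geolocation: scan-mirror angle + Applanix attitude give a look vector,
    # which is marched along until it intersects the terrain surface.
    import numpy as np

    def rot_body_to_ned(roll, pitch, yaw):
        """Standard aerospace rotation (radians): body frame -> local NED frame."""
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def geolocate_pixel(pos_ned, roll, pitch, yaw, scan_angle, dem_height, step=5.0):
        """Walk along the look ray until it drops below the terrain surface.

        pos_ned    : aircraft position (north, east, down) in metres, local frame
        scan_angle : mirror angle from nadir, radians, positive cross-track
        dem_height : callable (north, east) -> terrain elevation in metres (+up)
        """
        # Nadir vector deflected cross-track by the scan mirror (body frame:
        # x forward, y right, z down), rotated into NED by the measured attitude.
        look_body = np.array([0.0, np.sin(scan_angle), np.cos(scan_angle)])
        look_ned = rot_body_to_ned(roll, pitch, yaw) @ look_body

        p = np.asarray(pos_ned, dtype=float)
        for _ in range(100_000):
            p = p + step * look_ned
            altitude = -p[2]                      # NED "down" -> height above datum
            if altitude <= dem_height(p[0], p[1]):
                return p[0], p[1], altitude       # first terrain intersection
        raise RuntimeError("look ray never intersected the terrain")

    # Toy usage: 3 km above flat terrain at 500 m elevation, 10 degree scan angle.
    flat_dem = lambda north, east: 500.0
    print(geolocate_pixel((0.0, 0.0, -3500.0), 0.0, 0.0, 0.0,
                          np.radians(10.0), flat_dem))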

The AMS is capable of much more than fire discrimination, as you probably realize from the inclusion of the TM (and other) spectral channels. We have refined the processing for fires, but any algorithm or band combination can be derived that would prove beneficial for other disasters (flood extent, oil slicks, etc.). The key is in the processes and technology we developed to provide what the community wants in real-time; of course, the "real-time" element is the most critical in disaster intelligence gathering. Basically, we can work with any community to build real-time processes that meet their needs and supply real-time data. We are also looking to port this capability to other NASA instruments.
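As an illustration of the kind of band-combination product that can run on the on-board processor, here is a minimal hot-spot test on brightness temperatures from the MWIR (~3.7 um) and LWIR (~11 um) channels. The function name and threshold values are hypothetical placeholders, not the operational AMS fire algorithm.

    # Illustrative band-combination product of the kind that can run on the
    # on-board processor: a simple hot-spot test.  Thresholds are placeholders.
    import numpy as np

    def flag_hot_pixels(t_mwir, t_lwir, t_mwir_min=360.0, delta_min=10.0):
        """Return a boolean mask of candidate fire pixels.

        t_mwir, t_lwir : brightness-temperature arrays (kelvin)
        t_mwir_min     : absolute MWIR threshold (illustrative value)
        delta_min      : minimum MWIR-LWIR difference, separating hot sources
                         from uniformly warm surfaces (illustrative value)
        """
        return (t_mwir > t_mwir_min) & ((t_mwir - t_lwir) > delta_min)

    # Example: a 2x2 scene with one obviously hot pixel.
    t4 = np.array([[300.0, 305.0], [410.0, 302.0]])
    t11 = np.array([[295.0, 300.0], [320.0, 298.0]])
    print(flag_hot_pixels(t4, t11))   # [[False False]
                                      #  [ True False]]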

The AMS also has the capability to change its spectral band structure. The original intent was for the AMS to have three spectrometer heads (the Wildfire head, a MAMS head, and an OCI head), but there has not been support for the Airborne Sensor Facility (which maintains NASA's airborne instruments) to move forward aggressively on that front. From the science perspective, I think the reconfiguration of the thermal channels to match the VIIRS thermal channels was important, because it allows an assessment of the viability of these new spectral regions for discriminating fire. That also helps the "Applied" side of the house: it allows us to share VIIRS assessment capabilities with the fire community, so they can make a smoother transition to operational use of the platform data when it is launched.

Spectral Configuration:

Channel 1:  0.42 - 0.45 um
Channel 2:  0.45 - 0.52 um
Channel 3:  0.52 - 0.60 um
Channel 4:  0.60 - 0.62 um
Channel 5:  0.63 - 0.69 um
Channel 6:  0.69 - 0.75 um
Channel 7:  0.76 - 0.90 um
Channel 8:  0.91 - 1.05 um
Channel 9:  1.55 - 1.75 um (high gain)
Channel 10: 2.08 - 2.35 um (high gain)
Channel 11: 3.60 - 3.79 um (VIIRS M12) (high gain)
Channel 12: 10.26 - 11.26 um (VIIRS M15) (high gain)
Channel 13: 1.55 - 1.75 um (low gain)
Channel 14: 2.08 - 2.35 um (low gain)
Channel 15: 3.60 - 3.79 um (VIIRS M12) (low gain)
Channel 16: 10.26 - 11.26 um (VIIRS M15) (low gain)

FOV: 42.5 or 85.9 degrees (selectable)
IFOV: 1.25 or 2.5 mrad (selectable)
Spatial Resolution: 3 - 50 meters (variable, dependent on altitude)
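The quoted 3 - 50 meter range is consistent with the simple small-angle relation pixel size ≈ IFOV x altitude above ground; the altitudes in this sketch are illustrative examples, not operating specs.

    # Ground pixel size from the small-angle relation: pixel ≈ IFOV x altitude AGL.
    def ground_pixel_size_m(ifov_mrad, altitude_agl_m):
        return ifov_mrad * 1e-3 * altitude_agl_m

    print(ground_pixel_size_m(1.25, 2_400))    # 3.0  m  (fine IFOV, low altitude)
    print(ground_pixel_size_m(2.50, 20_000))   # 50.0 m  (coarse IFOV, high altitude)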

There are 16 active video channels and two housekeeping channels, for a total of 18 channels on the disk.

The "extra channel" is on the vis/nir array, and should be at about 1.1 microns. It may have poor performance, as this is at the extreme limit of the silicon detector sensitivity, so we may delete it after viewing some sphere data.

The maximum scan speeds are 33 rps at the 2.5 mrad IFOV and 16 rps at the 1.25 mrad IFOV. These are dictated by the speed limitations of the A-to-D converters (they have to run twice as fast for 1.25 mrad pixel sampling). These speeds are all at 1X over-sampling. As on MAS, if scan speeds are reduced, over-sampling can be enabled.
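A back-of-the-envelope check on that A-to-D constraint, assuming the converters must keep up with the instantaneous pixel dwell time over a full mirror revolution (the function name and the 1X over-sampling assumption are mine):

    # Per-channel A-to-D sample rate if the converter must keep up with the
    # instantaneous pixel dwell time: rate = scan_rps * 2*pi / IFOV  (1X sampling).
    import math

    def sample_rate_hz(scan_rps, ifov_mrad):
        return scan_rps * 2 * math.pi / (ifov_mrad * 1e-3)

    print(f"{sample_rate_hz(33, 2.50):,.0f} Hz")   # ~82,938 Hz at 2.5 mrad, 33 rps
    print(f"{sample_rate_hz(16, 1.25):,.0f} Hz")   # ~80,425 Hz at 1.25 mrad, 16 rps

Both maximum-rate configurations work out to roughly the same per-channel sample rate, which is why halving the IFOV forces the maximum scan rate to be roughly halved.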