Astro3310 – Planetary Image Processing

Lab 4: Topographic Analysis

Due: October 8, 2015

Reading: Jankowski, D. G. and Squyres, S. W. (1991) Sources of error in planetary photoclinometry, J. Geophys. Res., 96, 20907-22.

Reflectance from Planetary Surfaces (Included in Appendix to this Lab)

Resources: If you have any questions, Prof. Hayes' office is 412 Space Sciences.

Credit: A portion of this lab was produced by Prof. Oded Aharonson for Caltech course Ge151 (Planetary Surfaces), modified by Teaching Assistants Kevin Lewis and Alex Hayes in 2007/2008, and now by Prof. Alex Hayes in 2015 (Note: Kevin Lewis is now an Assistant Professor at Georgia Tech and looking for graduate students).

Instructions: Download the data package from:

Work through the lab instructions below. Comments written in bold text are questions that need to be addressed in your lab writeup. A consolidated list of the questions can be found in the lab writeup, which can be downloaded from:

Purpose:

Over the course of this lab you will learn:

  • How to find, retrieve, and process raw imaging data from MOC
    (and understand the processing in general).
  • How to obtain topographic data using photoclinometry on processed images.
  • How to look at single-track altimetry data from the MOLA instrument.
  • How to use stereo topographic data from MOC.
  • The benefits and limitations of photoclinometry, altimetry, and stereo topography.
  • How to deduce some simple things about geomorphic features on Mars.

I. Image Processing:

We're going to start with pre-selected images to demonstrate basic image processing and some of the software packages that are used to do it. The filename of the image is m0806185.imq. This is an image taken in the Arabia Terra region of Mars in October 1999 by the Mars Orbiter Camera (MOC), part of the Mars Global Surveyor (MGS) mission, which officially ended in January 2007 (R.I.P.).

The image has been prepared for release by the Planetary Data System (PDS). These people are responsible for providing a consistent format for all current and future planetary missions (they're slowly converting the older ones as well) known, unsurprisingly, as PDS format. PDS images typically have either an .img or an .imq suffix (.imq is simply a compressed version of .img). They have a text header with information specific to that image followed by the binary image data itself.

After downloading the data package, move into the LAB 4 directory generated when you extracted the archive.

Now we're ready to look at some images. From the DATA sub-directory, take a look at the image file either in a text editor or by typing 'less m0806185.imq' at a Mac terminal, Windows Cygwin terminal (if you have one), or Linux/Unix prompt (type 'q' to exit). There are other files there but ignore them for now. The information contained in the header usually relates to things that won't change, such as the time the image was taken, the gain state of the camera, or the number of lines and samples in the image. Other information, such as the position of the spacecraft relative to the planet, the longitude and latitude of the image, or the size of the pixels in meters, is subject to change as better solutions for the orbit of the spacecraft are developed. This other information is not part of the image file, as it would be a huge effort to revise the whole dataset each time these new and improved numbers are derived.
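If you would rather peek at the label without leaving Matlab, a rough equivalent of 'less' is to read the first few kilobytes of the file and print only the readable characters (this is just a sketch: the 4000-byte count is arbitrary, and since .imq files are compressed you may see some binary noise mixed in with the label text):

fid   = fopen('m0806185.imq', 'r');            % assumes you are in the DATA sub-directory
chunk = fread(fid, 4000, 'uint8=>char')';      % first ~4 kB of the file as characters
fclose(fid);
disp(chunk(chunk == 10 | (chunk >= 32 & chunk <= 126)));   % keep newlines and printable ASCII only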

MOC images are identified by an 8-character name. The first 3 characters denote the mission phase; m08 in this case means the ninth mapping phase. Each mapping phase lasts roughly one calendar month. Other prefixes exist, such as ab1, sp2, fha, and cal, which stand for aerobraking phase 1, science phasing 2, full high gain antenna, and calibration phase respectively. These other phases took place early in the mission and were pretty short. The remaining 5 characters represent the orbit number (first 3 characters) and the image number (last 2 characters) within that orbit. So in the case of our image m0806185, it is the 85th image taken on the 61st orbit in mapping phase 8.
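As a quick sanity check of this naming convention, you can pull the pieces apart in Matlab (the slicing below simply follows the description above; nothing here is MOC-specific software):

name   = 'm0806185';                 % the image used in this lab
phase  = name(1:3);                  % mission phase, e.g. 'm08'
orbit  = str2double(name(4:6));      % orbit number within that phase -> 61
imgnum = str2double(name(7:8));      % image number within that orbit -> 85
fprintf('phase %s, orbit %d, image %d\n', phase, orbit, imgnum);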

MOC is a line scan camera, i.e., it has one row of detectors that sweeps across the surface of the planet as the spacecraft moves in its orbit. The resolution along this line (known as the cross-track resolution) is set by the height of the orbit, i.e., if the spacecraft is far away from the planet each pixel sees more of the surface. The mapping orbit was planned to be ~400 km high; however, there is ~30 km of relief on the planet (from the summit of Olympus Mons to the bottom of the Hellas impact basin), so the cross-track resolution can vary from place to place on the planet (by almost 10%). The down-track resolution is determined by the groundspeed of the spacecraft (~3 km/s) combined with the length of time the camera exposes each line. Ideally, you want the spacecraft to move only a fraction of a pixel (to prevent smearing) in the down-track direction during an exposure; however, the exposure must be long enough to collect sufficient light for a high signal-to-noise ratio. The camera sensitivity, cross-track resolution, and mapping orbit parameters have been designed so that the pixels have roughly the same resolution in the cross- and down-track directions. Changes in distance to the surface and in surface reflectivity mean that the pixels are not entirely square, so all MOC images have some non-unity aspect ratio. Sometimes this can be quite severe and must be removed before the image makes any sense. In general it is always nicer to correct the aspect ratio when looking at images (there is something fundamentally disturbing about elliptical craters).
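To illustrate what an aspect-ratio correction amounts to (this is not how ISIS does it, and the down-track scale below is a made-up number for the sake of the example), you can simply resample the down-track axis so the pixels come out square:

xt_scale = 2.95;              % cross-track pixel scale in m/pixel (same figure used later in pclin.m)
dt_scale = 3.50;              % down-track pixel scale in m/pixel (made-up value for illustration)
img      = rand(1000, 512);   % stand-in for an image array (lines x samples)
newlines = round(size(img,1) * dt_scale / xt_scale);               % number of lines after squaring the pixels
squared  = interp1(1:size(img,1), double(img), linspace(1, size(img,1), newlines));  % resample down-track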

MOC images are always taken at the full resolution of the camera (~1.4 meters/pixel for the narrow angle) but, to cope with the voluminous amount of data, the spacecraft computer intentionally degrades the resolution by summing and averaging pixels in both the cross- and down-track directions. This summing is not necessarily the same in each direction, so in addition to the inherent aspect ratio there is sometimes an induced aspect ratio due to this differential summing. Note that the HiRISE camera onboard the Mars Reconnaissance Orbiter (MRO), which is currently taking data on Mars, acquires images at roughly 0.3 meters/pixel!
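A cartoon of this onboard summing, assuming a 2x cross-track / 1x down-track mode (the factors here are arbitrary; the real modes are commanded per image): averaging adjacent sample pairs halves the width only, which induces a 2:1 aspect ratio on top of any inherent one.

raw    = rand(400, 600);                           % stand-in raw image (lines x samples)
summed = (raw(:, 1:2:end) + raw(:, 2:2:end)) / 2;  % average adjacent cross-track pixel pairs
size(summed)                                       % 400 x 300: width halved, height unchanged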

ISIS (Integrated Software for Imagers and Spectrometers)

ISIS is a software package developed by the United States Geological Survey (USGS) for use with spacecraft data. ISIS is really a collection of stand-alone programs that perform operations on datasets, such as map-projecting an image onto a reference ellipsoid. In this case we are going to use some of the programs in the ISIS package to calibrate our image.

ISIS has its own environment, known as tae, from which you can call these programs. It's also possible to call them directly from the terminal as stand-alone programs. If you use a Mac or Linux computer, you can install ISIS yourself for free. There are a number of great tutorials and help files available on the ISIS website if you do (not required for the lab).

ISIS works with its own image format called 'cubes', with a .cub extension. An ISIS cube contains header information like the PDS format does. The first step will be to convert this image into the ISIS format. This is known as level 0 processing, i.e., it isn't really processing at all but just data translation. To save time and save you from having to install ISIS on your computer, we have run all of the processing for you. However, the script used to run all of the necessary ISIS programs to do this conversion is in the lab DATA sub-directory and, if you so desire, you can look at it. The filename is prepare_mocmola.sh. To be fair, this script calls perl scripts that are part of the now-outdated ISIS 2 library (ISIS is now up to version 3 and working on version 4). However, it would be relatively straightforward to convert the various functions to their ISIS 3 (and soon 4) equivalents.

OK, now we can take a look at this image. ISIS has created a file called m0806185.lev0.cub; this is the raw data in ISIS 2 format. There are also updated ISIS 3 format files and PNG-formatted 8-bit images for each ISIS 2 data file. You can either view the ISIS 3 cube file (m0806185.lev0.isis3.cub) in Matlab, or the PNG file (m0806185.lev0.png) in your favorite image viewer (like Photoshop, which is now free to Cornell students!). To view the file in Matlab you can use our familiar friend read_isis.m, which is stored in the lab SUBROUTINES sub-directory. From your Matlab window, move into the REPORT sub-directory for the lab (if you are not already there) using the "cd" command or the graphical file toolbar. Once you are in the directory, make the Matlab programs relevant to the lab available by typing "addpath(genpath('../.'))". This adds all of the lab sub-directories to Matlab's search path. To load the image, type "img0 = read_isis('m0806185.lev0.isis3.cub');". Now you can display the image using the "imagesc" command by typing "figure; imagesc(img0'); colormap gray; axis equal;" or using more advanced image display and stretching routines that we went over during the Matlab Image Tutorial. If you want to open the image in an image browser, simply pick your favorite image browser and open the PNG file.
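For convenience, here are the same commands gathered into one block (run from the REPORT sub-directory, with the directory layout from the lab data package):

addpath(genpath('../.'));                          % put the lab routines and data on Matlab's path
img0 = read_isis('m0806185.lev0.isis3.cub');       % read the level 0 ISIS 3 cube
figure; imagesc(img0'); colormap gray; axis equal; % quick-look display of the raw image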

Now that we have the image up, let's discuss it. Well, it's pretty ugly! It's streaky, looks like it has an aspect ratio problem, and looks pretty uninspiring in general. However, you can turn this into a thing of beauty and scientific worth in a few easy steps by putting the image through a calibration pipeline (remember LAB 3?). First, however, take a moment to look over the image, zooming in on the interesting stuff.

Next we will look at the level 1 cube for this image. Level one means that the image has been radiometrically corrected, i.e., the previous DN values have been converted into meaningful I/F values (see the appendix to this lab for the explanation of what I/F means). Open up the file m0806185.lev1.isis3.cub (or m0806185.lev1.png) and take a look. It should be looking a whole lot better now; the streakiness was due to different pixels on the line array having different quantum efficiencies (i.e., the bright ones were very efficient at recording light and the darker ones were not). The calibration process has taken account of that and 'flattened' all the columns. You'll also notice that the pixel values are now between 0 and 1, as opposed to 8-bit values between 0 and 255.
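The real MOC radiometric calibration is more involved than this, but the 'flattening' idea can be illustrated with a toy example: give each detector (column) its own gain, then divide each column by its own mean to cancel the streaks. Everything below is synthetic, purely for illustration.

truth   = repmat(sin(linspace(0, 6*pi, 500))', 1, 200) + 2;  % fake scene, kept positive
gains   = 0.8 + 0.4*rand(1, 200);                            % per-detector gain differences
streaky = truth .* repmat(gains, 500, 1);                    % what an uncalibrated camera would record
flat    = streaky ./ repmat(mean(streaky, 1), 500, 1);       % divide out each column's mean
figure; subplot(1,2,1); imagesc(streaky); title('streaky');
subplot(1,2,2); imagesc(flat); title('flattened'); colormap gray;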

This is still not the ideal situation, however, as the image is still distorted because of its aspect ratio, and because we have no idea what the scale is or which way is north. What we really need to do to answer all of these questions is map-project the image into some reference coordinate system. Level 2 images do just this. Open up m0806185.lev2.cub. The image has now been projected into a 'sinusoidal' projection. This converts lat/lon (in degrees) to x/y coordinates (in meters). The conversion is calculated roughly as y=(lat/360)*2*pi*RMars, x=(lon/360)*2*pi*RMars*cos(lat). As we will discuss in lecture, there are many other projections that all have their own advantages and disadvantages, but they are beyond the scope of this lab.
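Written out in Matlab, the conversion quoted above looks like this (the radius is the usual mean radius of Mars, and the lat/lon point is arbitrary; both are assumptions made just for the example):

RMars = 3389.5e3;              % mean radius of Mars in meters (assumed value)
lat   = 10.3;  lon = 351.8;    % an arbitrary example point, in degrees
y = (lat/360) * 2*pi * RMars;
x = (lon/360) * 2*pi * RMars * cosd(lat);
fprintf('x = %.1f km, y = %.1f km\n', x/1000, y/1000);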

In the lab data directory, there is a MOC Wide-Angle image (M0806186.jpg), which corresponds to the Narrow Angle image we've been looking at. Wide Angle images are helpful for understanding the context of the very small area within a Narrow Angle image. This image can be viewed either in Matlab or Photoshop. To load the image in Matlab, type "wac=imread('M0806186.jpg');" and display it using the same commands as before. Alternatively, simply open the image up using your favorite image browser. Describe the various forms and processes you see in the Narrow Angle image, using both it and the context MOC image. Try to figure out what you're looking at. We recognize that this is not a geology class, but photointerpretation is relevant to image processing, so try to describe the context provided by the images. Label a few of the main features and save a figure in your REPORT sub-directory.

Searching for your own images

Staying with the case of MOC images, several websites now offer various search mechanisms. Each one has its advantages and disadvantages; a quick summary of the main three follows.

This site is run by the USGS. Its main benefit is that if you know the image name then you can find and download it very fast as everything is arranged by orbit number.

This is the website of the company that actually built the MOC camera. They are also interested in the science it returns and have constructed a graphical database of all the publicly released images. It's difficult to find a specific image in here, but this site is great for browsing regions of the planet if you don't know exactly what you're looking for.

This site is provided by the PDS. It's an extension of a search engine from the previous major Mars mission (Viking). This allows searching graphically by zooming in on a map of the planet. However, the thing that makes this site so very useful is the ability to search the MOC dataset using forms. You can specify any number of image parameters (including latitude and longitude ranges) and get a list of all the images that match your search. Preview thumbnails are available, and each image can be viewed online and downloaded in any number of formats.

II. Altimetry and Topographic Analysis:

This section of the lab is going to make extensive use of Matlab. We've tried to write all the more complex software in advance so that we can focus on the image processing and not turn the lab into an exercise in computer programming. Note, however, that this software is not bulletproof, and if it's given funky data you'll get funky answers. Also, the MOLA data is much more difficult to extract from its archived PDS form into useful numbers than is the MOC data you used in the first part, but again, we'll try to make this as painless as possible.

I. Photoclinometry

This section is aimed at being an introduction to the concept of photoclinometry, or, if you're into computer vision, shape-from-shading (deriving topography from surface shading). If you've read the lab appendix by now, you should know how local slopes as well as albedo can affect the brightness of a particular patch of ground (if you haven't read it yet, now would be a good time). Photoclinometry is difficult to do in any quantitative way, but this exercise will illustrate some of the qualitative concepts associated with it.

If you haven't already, you'll need to add the path of the Matlab routines for this lab by typing:

“addpath(genpath(‘../.’))”

from the lab REPORT sub-directory in the Matlab window.

There is a Matlab program called 'pclin.m' in the SUBROUTINES sub-directory. This is a basic photoclinometry program that will allow you to vary some input parameters and see their effects. Please look through 'pclin.m' and make sure you understand what it is doing.

This program uses the I/F values in the image m0806185.lev1.cub. Some of the program lines are reproduced below:

res = 2.95; % Size of a pixel in meters
sunaz = 320.65; % Solar azimuth angle (degrees)
inc_ang = 48.18; % Solar incidence angle (degrees)

These lines set up the variables specific to this image. The solar azimuth is important, as only tracks that cut across the image parallel to the illumination direction can be used for photoclinometry.

Explain briefly why this is the case.

b = b-shadow;

b is the variable that stores the I/F values along the sun's line of sight. Here we remove some estimate of the I/F value that is due solely to the atmosphere (called the shadow brightness, because scattered light from the atmosphere is the only way shadows can be illuminated).

z = acos(b.*pi.*cos(inc_ang*(pi/180.0))./albedo); % incidence angle at each pixel, from its I/F value (radians)
z = z - (inc_ang*(pi/180.0));                     % subtract the flat-surface incidence angle, leaving the local slope
z = res.*tan(z);                                  % convert each slope to a height change across that pixel
for i=2:length(z)
    z(i) = z(i-1)-z(i);                           % accumulate the height changes into a relative elevation profile
end

These few lines are the guts of the program. The first calculates the incidence angle for each pixel based on its I/F value. The second then removes the incidence angle that a flat surface would have, leaving just the local slope for each pixel. The third line converts the slope of each pixel to the change in height across that pixel. The for loop adds up all these height changes to find the relative height of each pixel along the profile.

Notice, for example, that we have to assume some value for the albedo and that the atmosphere can have an important role, due to its ability to scatter light.
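To see how much the albedo assumption matters, here is a single-pixel version of the lines above run for a few candidate albedos (res and inc_ang are the values from pclin.m; the I/F value of 0.06 and the albedos are made up for illustration):

res     = 2.95;                 % pixel size in meters (from pclin.m)
inc_ang = 48.18;                % solar incidence angle in degrees (from pclin.m)
b       = 0.06;                 % an example I/F value (made up)
for albedo = [0.20 0.25 0.30]
    z  = acos(b*pi*cos(inc_ang*pi/180)/albedo);   % local incidence angle (radians)
    z  = z - inc_ang*pi/180;                      % local slope relative to a flat surface
    dh = res*tan(z);                              % height change across this pixel
    fprintf('albedo %.2f -> slope %5.1f deg, dh = %5.2f m\n', albedo, z*180/pi, dh);
end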

You can start the program by typing 'pclin('m0806185.lev1.cub')' at the Matlab prompt. The standard case will come up, i.e., the program selects an interesting part of the image, guesses an albedo, and assumes no atmospheric effects. You can force it to use some particular albedo by calling it like 'pclin('m0806185.lev1.cub', 0.25)' for an albedo of 25%. You can use 'pclin('m0806185.lev1.cub', 0.25, 3000)' to select a starting line for the section of image you want to look at (line 3000, in this case). You can also make some correction for the atmosphere by assuming some amount of I/F is due to scattering, e.g., 'pclin('m0806185.lev1.cub', 0.25, 3000, 0.02)' will assume that 0.02 needs to be subtracted from each I/F value as a first-order atmospheric correction. You can choose each of these three variables, although they must be in the order: "albedo, starting line, shadow brightness". (If you want to specify the starting line but not the albedo, for instance, a nonsensical value like -1 will cause it to choose the default albedo value. 'pclin('m0806185.lev1.cub', -1, -1, -1)' will cause the program to use the defaults for all three.) Note that the elevation here is in arbitrary units. It's easy to get relative elevations with photoclinometry, but this technique (or more specifically, our rudimentary implementation of it) isn't reliable enough to actually measure heights in meters. So in discussing these heights, don't assume any particular unit; it's just an arbitrary scale.
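For reference, the example calls described above are, in order:

pclin('m0806185.lev1.cub')                       % all defaults
pclin('m0806185.lev1.cub', 0.25)                 % force a 25% albedo
pclin('m0806185.lev1.cub', 0.25, 3000)           % ...and start the profile at line 3000
pclin('m0806185.lev1.cub', 0.25, 3000, 0.02)     % ...and subtract 0.02 of I/F as an atmospheric term
pclin('m0806185.lev1.cub', -1, -1, -1)           % -1 in any slot means "use the default" for that slot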