Lecture 1

Introduction

Assumed Concepts & Topics

from Geog205 and Geog300:

  • Scale, map projections and the UTM system
  • Generalization
  • Raster and vector systems
  • GIS themes or layers
  • The electromagnetic spectrum

Remote Sensing

"... is the collection of information about an object without direct contact (from a distance) "

  • The analogue unit of data collection is the photograph (aerial or from space), from a camera.
  • The digital unit is the pixel, created by using a scanner.
  • Scale is a function of distance from the object, system quality and resolution.
  • Analogue or digital systems can generate various images along the electromagnetic spectrum.
  • All photographs are also 'images' but digital images are not photographs.
  • A digital image processing system must be RASTER, but may also have vector capabilities.
  • VECTOR systems may have some raster options, such as image display.
  • Traditional uses of remote sensing are interpretation, location & updating.
  • Digital applications are classification & feature extraction.

Milestones in the History of Remote Sensing

1839 / Invention of photography
1910s / First use of aerial photography (World War I: photo interpretation)
1920s / Development of photogrammetry for mapping
1940s / Military use of radar (World War II)
1950s / Use of colour photography and infra-red
1962 / Term 'remote sensing' first appeared
1960s / Launch of first weather satellites (TIROS, Nimbus)
1972 / Launch of Landsat 1 (originally named ERTS-1) and the Multispectral Scanner (MSS)
1982 / Landsat 4 and 'the next generation sensor': Thematic Mapper (TM)
1985 / Unix workstations and improved PCs enabling widespread use of digital imagery and GIS
1986 / SPOT-1 satellite (France)
1990s / Other satellites: e.g. India, Japan, USSR; airborne spectrometers (e.g. CASI)
1995 / Launch of RADARSAT (Canada)
2000 / (?) High resolution private sector satellites

Lecture 2

ELECTROMAGNETIC SPECTRUM

1. Summary of the Electromagnetic Spectrum

The EM spectrum describes the range of wavelengths of energy that can be recorded using remote sensing. This includes shorter wavelengths that are reflected energy and medium-to-longer wavelengths that are emitted energy.

The units of measurement are the nanometre (nm) and the micrometre.
1 micrometre = 1000 nanometres
1 metre = 1 million micrometres

The major portions of the EM spectrum used in remote sensing for mapping and GIS applications are:

a. Visible wavelengths (0.4-0.7 micrometres or 400-700 nm)

Blue 0.4 to 0.5 (400 to 500 nm)
Green 0.5 to 0.6 (500 to 600 nm)
Red 0.6 to 0.7 (600 to 700 nm)

b. Near infrared (0.7-1.3 micrometres) and Mid IR (1.3-3 micrometres)

Wavelengths up to about 1.1 micrometres can be captured using photography; longer wavelengths REQUIRE a scanner, which can be used for all wavelengths. Energy in the near & mid IR is reflected but not visible.

c. Far IR (Thermal) 3-14 micrometres

In these wavelengths, we record energy emitted from the earth.

d. Microwave (including radar) 1 mm - 1 metre

2. Spectral Resolution

The width of each portion of the EM spectrum captured by a scanner defines the spectral resolution of the system. A smaller width means a finer resolution.

3. Spectral Signatures

Spectral signature graphs show the relative amount of reflection or emission from an object across different wavelengths. Every object varies in the amount of energy it reflects at each wavelength; if an object reflected equally in the red, green and blue (RGB) wavelengths, for example, it would appear black, white or a shade of gray on colour film.

4. Spatial Resolution

Spatial resolution is a measure of the size of the pixels. This determines the precision or scale of the data. Remote sensing data generally vary from 1 metre to 1 km (and in some cases, such as weather satellites, 10-100 km). As with vector GIS, data collected at one scale are not usually suitable for analysis or mapping at another, very different scale.

Remote sensing data and raster GIS data assume, or give the impression, that a pixel has one uniform value across its width. This may be true for a small pixel or a homogeneous cover, such as a large lake or field, but often we need to know the nature of geographic data and understand that what we are seeing is an average value for a variable forest or a mixture of different surface covers.

Lecture 3

DIGITAL DATA FORMATS & SYSTEMS

1. Raster data

  • Scanner input signal
  • Signal quantification
  • Mapping a continuous value into a discrete digital value
  • Digital grid/array arrangement of images
  • Values in the image

2. Statistical summary

  • Image histograms
  • Histogram transforms
  • Linear stretch
  • Histogram Equalization enhancement
    Data are partitioned into DN range classes such that an equal number of pixels falls into each class. Greatest contrast is seen among pixels with the greatest frequency of occurrence in the image.
  • Root/Logarithmic enhancement
    Useful for skewed, Gaussian-distributed DNs; the transfer function is logarithmic in shape.
  • Piecewise linear stretch
  • Density slicing & pseudo-colour enhancements
  • DN thresholding
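
As a rough illustration of the linear stretch and histogram equalization transforms above, a minimal Python sketch (numpy and 8-bit data are assumed; the function names and the 2%/98% clip limits are illustrative, not from any particular image processing package):

    import numpy as np

    def linear_stretch(band, low_pct=2, high_pct=98):
        # Map the chosen percentile range of DNs linearly onto 0-255.
        lo, hi = np.percentile(band, (low_pct, high_pct))
        out = (band.astype(float) - lo) / (hi - lo) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)

    def equalize(band):
        # Partition DNs so a roughly equal number of pixels falls in each class.
        hist, _ = np.histogram(band, bins=256, range=(0, 256))
        cdf = hist.cumsum() / hist.sum()            # cumulative distribution
        lut = np.round(cdf * 255).astype(np.uint8)  # transfer function as a LUT
        return lut[band]                            # band assumed uint8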

3. Data Storage Formats

  • Band Sequential (BSQ)
  • Band Interleaved by Line (BIL)
  • Band Interleaved by Pixel (BIP)
  • Run-length encoding
  • Desktop formats: tiff, gif, jpeg, pbm, pcx, sun raster, tga, xpm (X Window)
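
A hedged sketch of how the three interleaving schemes differ (the helper name and the headerless, 8-bit file layout are assumptions for illustration): the byte offset of one pixel value in the file.

    def pixel_offset(band, row, col, nbands, nrows, ncols, layout="BSQ"):
        # Byte offset of one 8-bit value in a hypothetical headerless file.
        if layout == "BSQ":   # all lines of band 1, then band 2, ...
            return band * nrows * ncols + row * ncols + col
        if layout == "BIL":   # line 1 of every band, then line 2, ...
            return row * nbands * ncols + band * ncols + col
        if layout == "BIP":   # all bands of pixel 1, then pixel 2, ...
            return (row * ncols + col) * nbands + band
        raise ValueError("unknown layout: " + layout)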

Lecture 4

1. Types of Platforms & Sensors

Platform: the satellite carrying the remote sensing device
Sensor: the remote sensing device recording wavelengths of energy

Refer to class handouts for listing of sensor types and their qualities.

Satellite orbits can be one of two kinds:
a. Sun-synchronous: the satellite passes over & captures imagery at the same local time of day.
b. Geostationary: the satellite orbits with the earth & is permanently over the same location.

Satellites & sensors designed for terrestrial mapping & earth resource monitoring are generally sun-synchronous, while weather satellites are geostationary.

The main ones (with date of first launch) to be discussed here are:

1972 Landsat Multispectral Scanner (MSS)

1982 Landsat Thematic Mapper (TM)

1986 SPOT High Resolution Visible (HRV)

1995 IRS (India) - (LISS)

1979 NOAA Advanced Very High Resolution Radiometer (AVHRR)

Some other satellites:
1974 GOES (Geostationary Operational Environmental Satellite)
1978 Nimbus
1995 RADARSAT

2. Orbit & Sensor Characteristics

LANDSAT TM / SPOT HRV
Launch / 1982 / 1986
Altitude / 705 km / 832 km
Inclination (degrees off polar) / 8.2 / 8.7
Equatorial crossing time / 9.45 am / 10.30 am
Swath width / 185 km / 60 km
Repeat coverage / 16 days / 26 days
Sensor / Thematic Mapper (TM) / High Resolution Visible (HRV)
Number of detectors / 100 / 6000/3000
Advantages / # bands, swath size / higher resolution, # 'looks'
Bands / 7 / 1 + 3
Scanner type / Mirror (7 cycles/second) / Pushbroom

3. Satellite Sensor Web Sites

Landsat (NASA)

Spot Image (French Satellite)

IRS (Indian Remote Sensing)

NOAA (Meteorological Satellite)

Russian Remote Sensing Satellites

RADARSAT (Radar Sensor)

New 1 metre Satellites:
IKONOS 1

Quickbird 1

Orbimage

CCRS Data Search Site:

Lecture 5

DATA ACQUISITION & DISPLAY

In Canada, most remote sensing data are ordered from Radarsat, Inc.
Scenes are previewed via the Canada Centre for Remote Sensing web site.

1. Acquiring Data

In multispectral sensing, data are captured at several wavelengths.
Users first decide what scale (resolution) of data is required and, in conjunction,
which bands are suitable, in order to determine which sensor is best.

Data are then ordered based on:

Location (lat/long or path/row)
Data format (BIL, BIP, BSQ)
Extent: whole scene or quadrant or portion
Number of bands: whole or subset

Data may be captured either by a nadir-looking whiskbroom sensor (e.g. Landsat) or by a directable pushbroom sensor (e.g. SPOT)

The table below lists database size and image extent:

Sensor Extent (km) Pixels (x,y) Database size (Mb)
MSS 185 3240 x 2340 30
TM 185 6920 x 5728 300
SPOT Pan 60 6000 x 6000 36
SPOT XS 60 3000 x 3000 27
AVHRR 2800 2500 x 2500 35
(TM and SPOT are about the same amount of data for the same area, but TM has more
bands, while SPOT has higher resolution)
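
The sizes above follow from simple arithmetic: pixels x lines x bands, at one byte per 8-bit DN. A quick check in Python (treating 1 Mb as 10^6 bytes):

    def scene_mb(pixels, lines, bands):
        # Uncompressed size in Mb for 8-bit (one byte per DN) data.
        return pixels * lines * bands / 1e6

    print(scene_mb(6920, 5728, 7))  # TM: ~277 Mb, roughly the 300 above
    print(scene_mb(6000, 6000, 1))  # SPOT Pan: 36 Mb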

2. Bands, Channels, Image Planes & RGB Guns

Bands are captured (scanned) by the sensor.
Channels are bands stored in a database: no limit!
Image planes are where the channels are loaded for display; best limited to 8.
RGB are the three colour guns available for display.

3. Display Modes

A monitor has 3 guns (RGB), so only 3 channels can be displayed at once.

  • Three different channels compose a colour composite.
  • The same one channel in all three guns creates a grayscale image.
  • One channel can also be displayed in pseudocolour (PC).
  • Density slice: certain DNs are classed or thresholded into a colour.

4. Display Considerations

Most data are acquired as 8 bit values (0-255).
The data in each band rarely fill the 0-255 range.
An 8 bit display enables every DN of one band to be displayed in contrast.
A 24 bit display enables every DN of three bands to be displayed.
An 8 bit display of 3 channels requires grouping display values using a Look Up Table (LUT).
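
A minimal sketch of the LUT idea (numpy assumed; the colour ramp is an arbitrary illustration): each 8-bit DN simply indexes a table of display colours.

    import numpy as np

    # Pseudocolour LUT: each of the 256 possible DNs maps to one RGB triple.
    lut = np.zeros((256, 3), dtype=np.uint8)
    lut[:, 0] = np.arange(256)          # red ramps up with DN
    lut[:, 2] = 255 - np.arange(256)    # blue ramps down (arbitrary choice)

    def to_pseudocolour(band):
        # band: 2-D uint8 array of DNs -> 3-D RGB image via the LUT.
        return lut[band]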

5. Histogram Stretching

This involves the manipulation of display colours to fit DN ranges. Stretches include:
None, Linear, Equal, Root, Infrequency, Special

Lecture 6

RADIOMETRIC CORRECTION

Radiometric correction (also referred to as "pre-processing") is used to modify DN values in order to account for noise, that is, contributions to the DN that are a function not of the feature being sensed but of the atmosphere or the sensor itself.

1. Sensor Failure & Calibration

Sensor problems show as striping or missing lines of data.
Missing data due to sensor failure results in a line of 0 values every 16th line for TM data, since there are 16 detectors for each band, scanning 16 lines at a time (or every 6th line for MSS). Miscalibration results in lines of data with significantly higher or lower mean values or standard deviations. These can be corrected by repeating lines above or below (complete failure) or 'normalising' the faulty data lines according to the rest of the data.
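
A hedged sketch of the line-repair idea (numpy assumed; the function name and the choice of averaging the lines above and below are illustrative):

    import numpy as np

    def repair_lines(band, bad_rows):
        # Replace failed scan lines with the mean of the adjacent lines.
        out = band.astype(float).copy()
        for r in bad_rows:                     # assumes interior rows only
            out[r] = (out[r - 1] + out[r + 1]) / 2.0
        return out.astype(band.dtype)

    # e.g. one failed TM detector: every 16th line (illustrative indexing)
    # repaired = repair_lines(tm_band, range(15, tm_band.shape[0], 16))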

Data can also show excessive speckle, a 'salt and pepper' effect of isolated high and low values; this can be corrected using a box filter (3 x 3, 5 x 5): mean, median or modal (see the sketch after this list).
Mean: smooths data, yields 32 bit (decimal) data
Median: smooths data, preserves edges (integer data)
Modal: smooths data, eliminates holes or slivers (no data)
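
A short sketch of the mean and median box filters (scipy is assumed to be available; the modal filter is omitted as it has no equally standard call):

    import numpy as np
    from scipy import ndimage

    def despeckle(band, size=3, method="median"):
        # size x size box filter to suppress the salt-and-pepper effect.
        if method == "mean":
            # Mean smooths but yields decimal (floating point) values.
            return ndimage.uniform_filter(band.astype(float), size=size)
        # Median smooths while preserving edges and keeps integer DNs.
        return ndimage.median_filter(band, size=size)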

2. Atmospheric Interference

Lower wavelength bands are increasingly subject to haze, which falsely increases the DN value.
This needs correction in some cases, for example to mosaic scenes with different amounts of haze, or to generate band ratios, where the resultant values may be affected.
The simplest method is known as dark object subtraction, which assumes that the scene contains a pixel that would have a DN of 0 if there were no haze. An integer value is subtracted from all DNs so that this pixel becomes 0. This results in lower overall DNs and requires stretching the image data.
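
In code, dark object subtraction reduces to finding the scene minimum and shifting all DNs down by it. A minimal sketch (numpy and 8-bit data assumed; taking the global minimum as the dark object is the simplifying assumption named above):

    import numpy as np

    def dark_object_subtraction(band):
        # Subtract the darkest DN in the scene, attributing it to haze.
        haze = int(band.min())
        out = band.astype(int) - haze
        return np.clip(out, 0, 255).astype(np.uint8), haze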
The effect of haze diminishes with increasing wavelength, but clouds affect all visible and IR bands, hiding features twice: once with the cloud and once with its shadow. Only in the microwave can energy penetrate clouds.

3. Reflectance to Radiance Conversion

DN reflectance values can be converted to absolute radiance values. This is useful when comparing the actual reflectance from different sensors e.g. TM and SPOT.

DN = aL + b, where a = gain and b = offset

The radiance value (L) can be calculated as: L = (Lmax - Lmin) * DN / 255 + Lmin,
where Lmax and Lmin are known from the sensor calibration. This will create 32 bit values.
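
The calibration above is a one-line conversion. A sketch (the function name is illustrative; Lmin and Lmax come from the sensor calibration):

    def dn_to_radiance(dn, lmin, lmax):
        # L = (Lmax - Lmin) * DN / 255 + Lmin; returns floating point values.
        return (lmax - lmin) * dn / 255.0 + lmin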

4. Illumination Correction

When comparing scenes from different days or times of day, they are likely illuminated by the sun at different angles. This is corrected simply by dividing all DNs by the sine of the sun elevation. The sun elevation is given when purchasing data; it will usually decline away from the summer solstice for the same sensor.
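
A sketch of that correction (numpy assumed; the sun elevation, in degrees, comes with the purchased data):

    import numpy as np

    def illumination_correct(dn, sun_elevation_deg):
        # Normalize DNs by the sine of the sun elevation angle.
        return dn / np.sin(np.radians(sun_elevation_deg))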

Lecture 7

GEOMETRIC CORRECTION

Also referred to as (geo)rectification, most remote sensing data contain distortions preventing overlay with other GIS layers. While aerial photographs have many sources of error, the main source of geometric error in satellite data is due to satellite path orientation.

In air photos, errors include topographic and radial displacement, and airplane tip, tilt and swing. These are all reduced, and possibly negligible, in satellite data due to higher altitude and greater stability. Uncorrected data can be interpreted, but not input as a layer into a GIS.

Best reference for geometric (and radiometric) correction: Section / module 5

1. Sources of Geometric Error

a. Systematic distortions

Scan skew: the forward motion of the platform during each mirror sweep, resulting in the ground swath not being normal to the polar axis.

Mirror-scan velocity and panoramic distortion: along-scan distortion (pixels at the edge are slightly larger).

Earth rotation: the earth rotates during scanning, offsetting successive rows (122 pixels per Landsat scene).

b. Non-systematic distortions

Altitude and attitude variations in satellite; topographic elevation

Data can be purchased with some of these removed, but at higher cost.

The main two levels of data are:

i. System corrected (bulk): still needs geocorrection = ‘Path’

ii. Precision corrected (geocoded) = ‘Map’

The geocorrection process consists of two steps: rectification and resampling.

The same two steps are required in a vector GIS, where resampling is also known as registration.

2. Rectification

Data pixels must be related to exact ground locations, most commonly measured in UTM coordinates (NAD83). This can be used to correct data by position, to register imagery from different dates, and to register data of different resolutions. It is done by one of the processes below:

Image to image: fitting to an already geocorrected image (strictly speaking, fitting to an uncorrected image would be 'registration', not rectification)

Image to map (on a digitizer) or Image to terminal (keying in coordinates)

Image to 'GIS' vectors

Note: old data and maps may be in NAD27.

General procedure:

a. Identify known Ground Control Points (GCPs), normally easily identified points on ground/map and image, e.g. road intersections (should be stable, definite and well spaced)

b. Compare and tabulate pixel/row position on the image against the known location on the ground; the co-ordinate system is usually UTM

c. These locations are submitted to a Least Squares Regression.

Polynomial equations model the various distortions, and an RMS (root mean square) error is calculated, usually kept under 1 pixel, computed for each GCP as RMS = sqrt[(x - x0)^2 + (y - y0)^2] (a least-squares sketch follows the list below).

Polynomial models for surface fitting may be :

1st order, the simplest, needing at least 3 GCPs

first point provides pixel location translation

second point: x,y translation and scale

third point: translation, scale change and rotation

2nd order: minimum 6 points needed

3rd order: minimum 10 points needed
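
As noted above, a hedged sketch of the least-squares step for the 1st-order (affine) case, including the per-GCP RMS formula (numpy assumed; the names and the averaging of per-GCP RMS values are illustrative choices):

    import numpy as np

    def fit_first_order(image_xy, map_xy):
        # Least-squares 1st-order fit from >= 3 GCPs; returns coefficients and RMS.
        image_xy = np.asarray(image_xy, float)   # (n, 2) pixel/row positions
        map_xy = np.asarray(map_xy, float)       # (n, 2) UTM coordinates
        A = np.column_stack([np.ones(len(image_xy)), image_xy])  # [1, x, y]
        coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)
        residuals = map_xy - A @ coeffs
        rms = np.sqrt((residuals ** 2).sum(axis=1))  # per-GCP sqrt[dx^2 + dy^2]
        return coeffs, rms.mean()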

3. Resampling

Locations are fitted to a new grid based on map coordinates, using round values, which may require a new pixel size (to fit with the UTM system), e.g. MSS 80 m -> 50 m, TM 30 m -> 25 m, IRS 5.8 m -> 5 m.

- in a vector system, all features can be relocated in continuous x,y locations

- in a raster system, pixel values must be re-assigned using resampling

Resampling determines the pixel values to fill in the corrected matrix of cells.

New DN values are determined in one of three ways (these are independent of the three orders of polynomial transforms):

a. Nearest Neighbour

The pixel in the new grid acquires the value of the closest pixel in the old grid: the easiest to compute, and it retains the original DNs.

Disadvantage: the image may look blocky, and features can be up to 0.5 pixels off.

b. Bilinear Interpolation

The new pixel gets a value from the distance-weighted average of the 4 (2 x 2) nearest pixels; takes longer processing time.

Looks smoother, but creates synthetic DNs, different from the original numbers.

c. Cubic Convolution

New pixel values are computed by weighting the 16 (4 x 4) surrounding values; smoothest image, but longest computing time, and the DNs are the most synthetic.
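
A hedged sketch of the first two rules (numpy assumed; names are illustrative). rows and cols hold the fractional positions in the old grid that each corrected cell maps back to:

    import numpy as np

    def resample(band, rows, cols, method="nearest"):
        # Sample band at fractional (rows, cols) positions in the old grid.
        rows, cols = np.asarray(rows, float), np.asarray(cols, float)
        if method == "nearest":
            r = np.clip(np.rint(rows).astype(int), 0, band.shape[0] - 1)
            c = np.clip(np.rint(cols).astype(int), 0, band.shape[1] - 1)
            return band[r, c]                  # original DNs retained
        # Bilinear: distance-weighted average of the 4 surrounding pixels.
        r0 = np.clip(np.floor(rows).astype(int), 0, band.shape[0] - 2)
        c0 = np.clip(np.floor(cols).astype(int), 0, band.shape[1] - 2)
        dr, dc = rows - r0, cols - c0
        top = band[r0, c0] * (1 - dc) + band[r0, c0 + 1] * dc
        bot = band[r0 + 1, c0] * (1 - dc) + band[r0 + 1, c0 + 1] * dc
        return top * (1 - dr) + bot * dr       # synthetic, non-integer DNs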

Where more than one image has been registered together, the fit can be checked by flickering between the two: there should not be any apparent movement of features. Good registration is absolutely critical for any GIS data integration.

Lecture 8

IMAGE ENHANCEMENT & FILTERS

Spatial Filters

A common practice in digital image processing is to enhance imagery with the use of spatial filters. Filters are commonly used for such things as edge enhancement, noise removal, and the smoothing of high frequency data. These filters work by enhancing or suppressing spatial detail to improve visual interpretation in the final image.

Spatial filters are referred to as a "local operation" because they modify the value of each pixel according to a specified number of surrounding pixels.

Spatial frequency

Spatial frequency refers to the number of changes in brightness value, per unit distance, for any area within a scene. An area with low spatial frequency will have gradual transitions in digital values (e.g. a lake or a smooth water surface). An area with high spatial frequency will have rapid changes in digital values (e.g. dense urban areas and street networks).

Low pass filters

Low pass filters are used to emphasize low spatial frequency data and are designed to smooth out an image with high spatial frequency. Since they de-emphasize high spatial frequency, the resulting imagery is slightly blurred.

High pass filters

High pass filters, on the other hand, are used to emphasize high spatial frequency data. Often they are used to enhance and sharpen features such as roads and land-water boundaries while suppressing the slowly varying components of an image. These filters are often referred to as sharpening filters because they generally enhance edges without affecting the low frequency portions of the image.

Edge detection filters

In many remote sensing applications, the most valuable information derived from an image is contained in the edges of features. Edge detection filters are used to emphasize these boundaries and make them easier to analyze. In addition edge detection filters can be manipulated to draw out direction and sun angle characteristics.

How filters work

Spatial filtering works by passing a two-dimensional rectangular array of weighted values over each pixel in a digital image. The pixel in the centre of the array is recalculated as the weighted sum (or weighted average) of itself and the surrounding pixels, using the weights in the filter array. The array then shifts over to the next cell and the same operation is performed on the next centre pixel. This process is called a two-dimensional convolution, and the filter is often called a convolution kernel.
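
A short sketch of the convolution itself (numpy and scipy assumed; the two 3 x 3 kernels are standard textbook examples, not specific course filters):

    import numpy as np
    from scipy import ndimage

    def convolve_band(band, kernel):
        # Pass the kernel over every pixel: each output value is the
        # weighted sum of the centre pixel and its neighbours.
        return ndimage.convolve(band.astype(float), kernel)

    low_pass = np.full((3, 3), 1 / 9.0)   # smoothing (averaging) kernel
    laplacian = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]], dtype=float)  # edge detection kernel

    # smoothed = convolve_band(band, low_pass)
    # edges = convolve_band(band, laplacian)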