Lab #4: Burned Area Classification and NBR

Name:

Lab #4:

Objectives of this laboratory exercise:

Use ENVI to:

  • Classify the image into burned and non-burned areas
  • Characterize the “severity” of the image via the NBR spectral index

Location: RS/GIS Lab

Login: XXXX

Password: XXXX

Before you start:

Double click the ENVI icon on the desktop.

This starts both ENVI (the Environment for Visualizing Images) and the IDL (Interactive Data Language) programming interface.

Ignore IDL, but do not close it, as closing IDL also closes ENVI.


1. View the Imagery

In this lab you will classify a burned area image and assess the variability within the burned area. The data are from the Hayman Fire in Colorado, which burned in June 2002.

To view the image we will use in this lab, select File/Open Image File from the main ENVI menu bar and choose the filename Hayman_reflectance_scaled.

In the Available Bands List, if you make the following selections you will get an image like the one on the right (you may have to edit the histograms to make the colors match).

As you can see, the burned area “pops” right out, so it should be easy to classify.

2. Unsupervised Classification

An unsupervised classification looks at the statistical distributions of the pixels to classify the imagery. In ENVI there are two unsupervised classification options, K-Means and IsoData. Today we will use IsoData, which was first proposed by Ball and Hall (1965).
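To see what IsoData is doing behind the menus, here is a minimal conceptual sketch outside ENVI (not the ENVI procedure itself). Plain k-means clustering of the pixel spectra stands in for IsoData, which additionally splits and merges clusters between iterations; the array and function names are hypothetical.

    # Conceptual sketch of IsoData-style unsupervised classification: cluster
    # the pixel spectra, then reshape the cluster labels back into an image.
    # Plain k-means stands in for IsoData here (IsoData also splits/merges
    # clusters between iterations). Assumes the scaled reflectance bands are
    # already loaded into a NumPy array; names are hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans

    def isodata_like(bands, n_classes=7, max_iter=4):
        """bands: array of shape (n_bands, rows, cols) of scaled reflectance."""
        n_bands, rows, cols = bands.shape
        pixels = bands.reshape(n_bands, -1).T              # one row per pixel
        km = KMeans(n_clusters=n_classes, max_iter=max_iter, n_init=1, random_state=0)
        labels = km.fit_predict(pixels)                    # cluster id per pixel
        return labels.reshape(rows, cols)                  # unlabeled class map

    # e.g. class_map = isodata_like(reflectance_stack, n_classes=7, max_iter=4)

Note that, just like the ENVI result, the output is an unlabeled class map: the clusters still have to be assigned to cover types by interpretation.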

In the ENVI menu system select: Classification, Unsupervised, and then IsoData. Then select the Hayman reflectance filename and press OK.

This will bring up the IsoData options. For this first run, use most of the default values. The only changes are to set “Maximum Iterations” from 1 to 4 and to enter an output filename of your choice. Then press OK to run the IsoData procedure.

Typically you want the method to settle on about 7 land cover classes, so the default setting of between 5 and 10 classes works well.

You also want the method to iterate a few times to reach a stable result. Try different values for these settings to see what happens.

Once you view your result, you should see an image like this. It’s a nice pretty picture, but to be a “useful” classification we have to assign defined class names to the colors. We do this through visual interpretation and expert knowledge.

To help in this process, a useful trick in ENVI is to “link” the displays. To do this follow these steps:

  1. View the original image in one display (Display #1)
  2. View the classified image in Display #2
  3. On one of the images, select Tools, Link, and then Link Displays.

The result is that when you move the zoom box around one image, the zoom box in the other image will also move to the same location.

To visually work out which class each color corresponds to, it is useful to view different band combinations of the original imagery. When you do this you will probably need to adjust the histograms to view the imagery as shown below. Next, on the menu for the classified image, select Tools, then Color Mapping, then Class Color Mapping.

If you right-click on one of the images and select Cursor Location/Value, you can quickly evaluate which class in the unsupervised image corresponds to a given cover type.

The Class Color Mapping tool allows you to tell ENVI which class each color represents. You can give the same name to more than one class and combine them later.

Task: Based on your expert knowledge, assign classes to the colors. After selecting your choices for Burned, Unburned, and Other, make sure to select Save Changes under Options.

3. Accuracy Assessment

Using the Tools/Region of Interest/ROI Tools option from the main Image menu bar, open the ROI Tool box.

Now, using your knowledge of satellite imagery, create ROIs for each of the following cover types:

  • Burned
  • Unburned
  • Other (cloud, shade, water, etc.)

Once you have selected your ROIs, save them.

To evaluate the accuracy of the classification, from the main ENVI menu select Classification, Post Classification, Confusion Matrix, Using Ground Truth ROIs. Select the classification file and press OK. If you gave the ROIs the same names as the corresponding image classes, ENVI will already have paired the ROIs with the image classes (if you used different names, ask for help!). Press OK; make sure both pixels and percentages are reported and press OK.

This report tells us two things:

  1. The overall accuracy of the classification is 99%, i.e. 99% of the ground truth pixels were assigned to the correct class (burned, unburned, or other).
  2. The Kappa statistic measures how much better the classification agrees with the ground truth than would be expected by chance. Here the value of 0.9775 indicates near-perfect agreement, confirming a very accurate classification. A high overall accuracy combined with a Kappa near 0 would warn us that the apparent agreement could simply be due to chance, for example when one class dominates the image. The sketch below shows how both statistics are computed from the confusion matrix.
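To see where these two numbers come from, here is a minimal sketch outside ENVI of how overall accuracy and Kappa are computed from a confusion matrix. The 3x3 matrix below is made up for illustration; it is not the Hayman result.

    # Overall accuracy and Kappa from a confusion matrix
    # (rows = classified image, columns = ground truth ROIs).
    # The numbers are illustrative only.
    import numpy as np

    cm = np.array([[480,   5,   2],    # Burned
                   [  6, 390,   3],    # Unburned
                   [  1,   4, 110]])   # Other

    total = cm.sum()
    observed = np.trace(cm) / total                                 # overall accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (observed - expected) / (1 - expected)                  # agreement beyond chance

    print(f"Overall accuracy: {observed:.4f}")
    print(f"Kappa: {kappa:.4f}")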

4. Explore

Task: Using the ROIs, produce a supervised classification of the image using the different options in the Supervised Classification menu. Then, using a new set of ROIs (i.e. not the same ones used to drive the classification), assess the accuracy.
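Conceptually, a supervised classification uses the ROI pixels as training data for a classifier that is then applied to every pixel. Below is a minimal sketch outside ENVI; QuadraticDiscriminantAnalysis fits one Gaussian per class, which is the same assumption ENVI’s Maximum Likelihood option makes. Array and variable names are hypothetical.

    # Sketch of a supervised classification trained on ROI spectra.
    # One Gaussian is fitted per class (the Maximum Likelihood assumption),
    # then every pixel is assigned to its most likely class.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    def supervised_classify(bands, roi_pixels, roi_labels):
        """bands: (n_bands, rows, cols); roi_pixels: (n_train, n_bands); roi_labels: (n_train,)."""
        n_bands, rows, cols = bands.shape
        clf = QuadraticDiscriminantAnalysis()
        clf.fit(roi_pixels, roi_labels)                # train on the ROI spectra
        all_pixels = bands.reshape(n_bands, -1).T
        return clf.predict(all_pixels).reshape(rows, cols)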

Task: From the main ENVI menu, select Transform and then NDVI. By choosing different bands to represent Red and NIR, explore what NDVI (Red = band 3, NIR = band 4) and NBR (Red = band 7, NIR = band 4) produce for this image.
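For reference, both indices are simple normalized band differences: ENVI’s NDVI transform computes (NIR - Red)/(NIR + Red), so feeding it band 7 in the “Red” slot produces the NBR. A minimal sketch outside ENVI, assuming Landsat-style band numbering (band 3 = red, band 4 = NIR, band 7 = SWIR) and reflectance arrays already loaded (names hypothetical):

    # Band math behind NDVI and NBR (Landsat TM/ETM+ band numbering assumed:
    # band 3 = red, band 4 = NIR, band 7 = SWIR).
    import numpy as np

    def normalized_difference(a, b):
        """Generic (a - b) / (a + b), guarding against division by zero."""
        return (a - b) / np.maximum(a + b, 1e-6)

    # ndvi = normalized_difference(band4, band3)   # sensitive to green vegetation
    # nbr  = normalized_difference(band4, band7)   # sensitive to char, ash, and moisture loss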

Question: Of NDVI and NBR, which is better at discriminating the burned and unburned areas?

HINT: In addition to visual assessment, use the separability methods from the earlier lab to evaluate which does a better job.
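One way to quantify this (a sketch, not ENVI’s ROI separability report) is the Jeffries-Matusita distance between the burned and unburned values of each index, assuming each class is roughly Gaussian. JM ranges from 0 (no separation) to 2 (complete separation); the index with the larger JM separates the two classes better. Array names below are hypothetical.

    # 1-D Jeffries-Matusita separability between two classes of index values.
    import numpy as np

    def jeffries_matusita(x_class1, x_class2):
        m1, m2 = x_class1.mean(), x_class2.mean()
        v1, v2 = x_class1.var(), x_class2.var()
        # Bhattacharyya distance between two 1-D Gaussians
        b = 0.25 * (m1 - m2) ** 2 / (v1 + v2) + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2)))
        return 2.0 * (1.0 - np.exp(-b))

    # Compare, e.g., jeffries_matusita(nbr[burned_roi], nbr[unburned_roi])
    #          with  jeffries_matusita(ndvi[burned_roi], ndvi[unburned_roi])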

Question: If you run IsoData on the better of NDVI or NBR, what do you get? Do you think this gets at burn severity or not? If not, what could you do to improve this approach?
