Problem Set #6

Geog 3000: Advanced Geographic Statistics

Instructor: Dr. Paul C. Sutton

Problem Set Number 6 is another short problem set. We will probe into topics that are beyond the scope of the textbooks. However, I present the problems with a bit of background information, and I believe they are very instructional and relatively easy to solve. I hope these 'exercises' are instructive. I encourage you to search the web for good material on these data reduction topics. Topics we will cover include some common data reduction techniques: Principal Components Analysis (PCA)/Factor Analysis and Clustering. I will also introduce the idea of stochastic modeling via a Monte Carlo simulation of a hockey game. I got this idea from a book on geographic statistics by Peter A. Rogerson. There is a paper in the course documents section of the Blackboard page for the course titled "Judgment under Uncertainty: Heuristics and Biases" by Amos Tversky and Daniel Kahneman. This is a classic paper that likely influenced famous geographers like Michael Goodchild and Reg Golledge. It is very relevant to answering the first question of this problem set about air force pilot training. Good luck. Don't pull your hair out. You will get a chance to punish me back at course evaluation time soon.

#1) Judgment under Uncertainty: Heuristics and Biases

Daniel Kahneman and Amos Tversky were at an Israeli pilot training school conducting some sort of seminar with Israeli Air Force flight instructors. They asked a room full of flight instructors whether 'positive reinforcement' worked better, worse, or no differently than 'negative reinforcement' for pilot training. Most of the instructors had come to the conclusion that negative reinforcement worked (improved student flight performance) and positive reinforcement was actually detrimental (made students fly worse). They based this conclusion on the fact that after really bad landings by students (let's say the worst 10% of all student landings) they yelled at the student and gave them lots of 'negative reinforcement', and their observation was that almost invariably the next landing was better. In the case of really good landings by students (let's say the best 10% of all student landings) they lauded them with praise, smiles, and other kinds of 'positive reinforcement', and in these cases they almost invariably observed that the student's next landing was not as good. The individual and collective conclusion of these flight instructors was that negative reinforcement works and positive reinforcement does not.

Question 1: What statistical phenomenon explains this pattern of observations? Are the conclusions of the Air Force instructors valid? Why or why not?

NOTE: When you think about this, think about the students in this class and their parallel parking skills. Assume everyone in this class is equally skilled at parallel parking, but that none of us performs each individual parallel parking job with the same results each time. Sometimes we nail it on the first pull-in; sometimes we go back and forth five times. Visualize all of us performing a parallel parking job with a 'parallel parking instructor' sitting in the passenger seat. They yell and scream at us if we botch it and have to go back and forth five times. They give us a Hershey's Kiss and a hug if we nail it without having to go back and forth. Will we do better after a botched parking job? Will we do worse after a perfect one? Remember the assumption: we are all equally skilled at parallel parking. However, there is random variation with respect to our individual attempts at parallel parking.

The statistical phenomenon to consider here is "regression to the mean". The conclusions of the Air Force instructors are NOT valid. Under the reasonable assumption that each student pilot's landings vary randomly about that student's mean performance, we would expect a particularly poor landing to be followed by something better (i.e. closer to the mean), and a particularly good landing to be followed by something not as good (again, closer to the mean), regardless of any reinforcement in between.
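You can convince yourself of this with a quick simulation. Here is a minimal Python sketch; the scores (mean 70, spread 10) are made-up numbers, not data from the Israeli study. Every 'landing' is an independent draw from the same distribution, so there is no reinforcement effect at all, yet the instructors' pattern still appears:

import numpy as np

rng = np.random.default_rng(42)

# Every 'student' has identical skill: each landing score is an
# independent draw from one distribution (mean 70, sd 10 -- made up).
scores = rng.normal(loc=70, scale=10, size=10_000)

# Pair each landing with the landing that follows it.
first, second = scores[:-1], scores[1:]

# Flag landings in the worst and best deciles of the first attempt.
worst = first <= np.quantile(first, 0.10)
best = first >= np.quantile(first, 0.90)

print(f"mean after a worst-decile landing: {second[worst].mean():.1f}")
print(f"mean after a best-decile landing:  {second[best].mean():.1f}")
print(f"overall mean landing score:        {scores.mean():.1f}")
# All three numbers come out near 70: the 'improvement' after yelling and
# the 'decline' after praise appear with no reinforcement effect at all.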

Question 2: Consider your knowledge of the military (real, TV-based, or imaginary). What kinds of reinforcement do you typically see being exercised by military personnel? Does this little vignette perhaps explain your observations?

My personal experience, based admittedly primarily on movies and TV, is that the military has bought into the idea that negative reinforcement is effective and positive reinforcement is useless. Thus the stereotypical drill sergeant. Perhaps this 'regression to the mean' story explains that.

#2) Pixels, Remote Sensing, and Cluster Analysis: Separating Sand, Water, and Grass

I have produced two very simple 'images' represented as a matrix or 'raster' of numbers below (if you have taken a remote sensing course this might seem familiar). The numbers in each cell or 'pixel' represent the measurements of the Near-Infrared (VNIR) and Green (visible) sensors at that location (higher numbers mean more VNIR or Green radiation was detected there). The 'scene' is a patch of earth that has Sand, Grass, and Water in it. Type these numbers into a table with two columns titled VNIR and Green (this is a raster version of a 'Spatial Join' in a GIS). Read the table into JMP and answer the questions below:

Visible Near Infrared (VNIR)         Green

 3  4  3  5 31 32     |      0  1  2  2  5  4
 3  6  5  4 33 34     |      1  2  3  3  3  3
 4  5  5 25 31  2     |      0  1  1 20  7  2
34 33 30 31 30 36     |     40 44 48 46  5  6
36 35 29 30 33 35     |     42 39 40 43  3  5
35 34 36 35 37 36     |     45 49 47 38 43  4

w  w  w  w  s  s
w  w  w  w  s  s
w  w  w  M  s  s
g  g  g  g  s  s
g  g  g  g  s  s
g  g  g  g  g  s

Classified Image (Spatial & Spectral)

Legend: Sand (s), Water (w), Grass (g); 'M' marks the spectrally mixed 'outlier' pixel.

Question #1) Create a scatter plot of VNIR and Green bands. Cluster it by eye. Explain how you performed this ‘clustering’ operation.

[JMP output: Bivariate Fit of Green by VNIR. In the scatter plot, grass is red, water is orange, sand is blue, and the spectrally mixed pixel is green.]
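For those who would rather script this than point-and-click, here is a rough Python equivalent of the data entry and scatter plot steps (Python with numpy and matplotlib stands in for JMP here; the arrays are just the two bands above typed in row by row):

import numpy as np
import matplotlib.pyplot as plt

# The two 6 x 6 'bands' from the problem, entered row by row.
vnir = np.array([[ 3,  4,  3,  5, 31, 32],
                 [ 3,  6,  5,  4, 33, 34],
                 [ 4,  5,  5, 25, 31,  2],
                 [34, 33, 30, 31, 30, 36],
                 [36, 35, 29, 30, 33, 35],
                 [35, 34, 36, 35, 37, 36]])
green = np.array([[ 0,  1,  2,  2,  5,  4],
                  [ 1,  2,  3,  3,  3,  3],
                  [ 0,  1,  1, 20,  7,  2],
                  [40, 44, 48, 46,  5,  6],
                  [42, 39, 40, 43,  3,  5],
                  [45, 49, 47, 38, 43,  4]])

# The raster 'Spatial Join': flatten each band into one column of a table.
table = np.column_stack([vnir.ravel(), green.ravel()])

plt.scatter(table[:, 0], table[:, 1])
plt.xlabel("VNIR")
plt.ylabel("Green")
plt.title("Green by VNIR")   # three tight clusters plus stragglers
plt.show()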

Question #2) Use radiometric theory to classify these clusters (assume Water (W) is low VNIR and low Green, Grass (G) is high VNIR and high Green, and Sand (S) is high VNIR and low Green).

See sketch above

Question #3) Use ground truthing (a.k.a. empiricism) to classify these clusters. Assume you paid a graduate student to go out with a GPS and find out that the upper-left corner pixel was definitely water, the lower-left pixel was definitely grass, and the upper-right pixel was definitely sand.

Same answer as above – here theory and empiricism match – very unusual :-)

Question #4) Use JMP to classify these pixels into clusters. Go to 'Analyze' and choose 'Clustering'; add both 'VNIR' and 'Green' to the Y, Columns. Click 'OK'. On the little red upside-down triangle next to 'Hierarchical Clustering' choose 'Number of Clusters' and enter '4' or '3' (big hint from the scatter plot :-)). How does JMP's clustering of these pixels differ from yours? Explain what you think JMP is doing to 'cluster' these pixels.

It is finding the average spectrum of each cluster and measuring the distance in 'spectral space' from each pixel to each cluster's mean spectrum. A pixel is 'classified' as belonging to the cluster whose mean it is closest to in spectral space.
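One caveat and a sketch: JMP's hierarchical clustering actually builds clusters by repeatedly merging the closest groups, but the 'distance to a mean spectrum' idea described above is easy to show directly. In this minimal Python sketch the cluster means are eyeballed from the scatter plot (they are my guesses, not JMP's actual output):

import numpy as np

# Guessed cluster mean spectra in (VNIR, Green) space, eyeballed from
# the scatter plot above -- assumptions, not JMP output.
means = {"water": np.array([4.0, 2.0]),
         "sand":  np.array([33.0, 5.0]),
         "grass": np.array([34.0, 43.0])}

def classify(pixel):
    """Assign a pixel to the cluster whose mean spectrum is nearest."""
    dists = {name: np.linalg.norm(pixel - m) for name, m in means.items()}
    return min(dists, key=dists.get)

print(classify(np.array([5, 3])))    # water
print(classify(np.array([31, 7])))   # sand
print(classify(np.array([25, 20])))  # the mixed 'M' pixel: nearest mean wins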

Question #5) Are these pixels ‘clustered’ based on spectral or ‘spatial’ characteristics? Explain. In the diagram above identify one pixel that is a ‘spatial’ outlier and one pixel that is a ‘spectral’ outlier. Explain your reasoning and real world reasons they might occur.

The pixels are clustered based on spectral characteristics. This example does not have a spatial outlier but it does have a spectral outlier marked with an ‘M’ above. This may be a mixed pixel. (I forgot to put in a spatial outlier – doh!)

#3) Using Attitudes about 'Abortion' and 'Gun Control' to understand Principal Components Analysis

Recall that in Problem Set #4 you wrote a little about Principal Components Analysis (PCA), Factor Analysis, and Clustering. Here is a little tutorial/exercise that I hope helps you understand the ideas behind PCA and Factor Analysis (PCA and Factor Analysis are very similar – I challenge you to find the best explanation of the difference between them on the web). I found the JMP help on Principal Components to be very helpful (check it out).

Imagine 35 people indicating with a number between 0 and 100 how they feel about the following statements (where 100 is 'strongly agree' and 0 is 'strongly disagree'):

Statement #1: Abortion should be outlawed.

Statement #2: A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

Load the table in the file named PSno6_AbortionGunControlPCA.jmp into JMP and look at an X-Y scatter plot of the two columns. I imagine you can see that answers to these questions co-vary significantly (i.e. there is a strong correlation between the responses to these two questions – in fact the R² is 0.69). I made up these results, but I think they might not be far from what we might really measure. This particular little problem reminds me of a bumper sticker that says: "Look Honey, another Pro-Lifer for the War". In any case, Principal Components Analysis attempts to reduce the number of 'columns' in a data table. Can the responses to these two questions/statements be predominantly captured by some other attribute?

To perform PCA on this data do the following: 'Analyze', 'Multivariate Methods', 'Principal Components'. Select both columns as 'Y' variables and click 'OK'. The output suggests that the first principal component captures 91% of the common variation in the responses to these two statements and the second principal component captures 9% (8.517% to be precise). You can actually save the principal component scores by clicking on the tiny little upside-down red triangle and choosing 'Save Principal Components' (do this). Answer the following questions:
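If you would like to see what JMP is doing under the hood, here is a minimal Python sketch of the same analysis. Since this is not the actual PSno6_AbortionGunControlPCA.jmp data, the 35 responses are simulated to be strongly correlated, so the variance split comes out near 90/10 rather than exactly 91/9:

import numpy as np

rng = np.random.default_rng(7)

# Simulated stand-in for the .jmp file: 35 respondents whose answers to
# the two statements share one underlying 'ideology', scaled to 0-100.
latent = rng.normal(size=35)
abortion = np.clip(50 + 20 * latent + rng.normal(0, 8, 35), 0, 100)
guns = np.clip(50 + 20 * latent + rng.normal(0, 8, 35), 0, 100)
X = np.column_stack([abortion, guns])

# PCA is an eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigvals)[::-1]            # biggest component first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("share of variance:", eigvals / eigvals.sum())   # roughly 0.9 / 0.1

# The 'Save Principal Components' step: project the centered data onto
# the eigenvectors to get each person's component scores.
scores = (X - X.mean(axis=0)) @ eigvecs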

#1) Factor analysis involves the art of 'naming' factors (which are closely related to principal components) based on 'factor loadings', which are derived from 'eigenvectors'. This example is REAL simple, so you don't need to get involved with factor loadings. What might you 'name' the first and second principal components in this particular example, based on your experience of people, attitudes, politics, etc.?

Factor #1: __Liberal – Conservative__    Factor #2: __Religious – Not Religious__

Answers will vary and that's OK. This is an art.

#2) When you saved the 'Principal Component Scores', each person got a 'score' which represents their 'Factor #1-ness' and 'Factor #2-ness'. How would you characterize an individual with a high Factor #1 score? How would you characterize someone with a high Factor #2 score? What about low Factor #1 and Factor #2 scores?

High Factor 1: Right Wing conservative – government is a bad thing

Low Factor 1: Left Wing Progressive – government is a good thing

High Factor 2: Religious/Spiritual – Life is precious

Low Factor 2: Secular/Materialist – Very Pro-Choice

#4) Clustering and Principal Components Analysis – Fun with 3-D visualization

Load the file PCAfactorCluster.JMP into JMP. We are going to play with some of the JMP functionality. Choose 'Graph' – 'Scatterplot 3-D'. Add 'ClusterA', 'ClusterB', and 'ClusterC' to the Y, Columns. Click OK. Choose the 'hand' tool and click and drag on the 3-D cube. You should see the points as black dots in a 3-D cube. Let's use the clustering functionality of JMP. From your 'exploratory data analysis' (e.g. diddling around with your 3-D cube visualization) I hope you see that there are three clusters. Go to 'Analyze' – 'Multivariate Methods' – 'Cluster'. Select 'ClusterA', 'ClusterB', and 'ClusterC' for the Y, Columns. Click OK. Now, from the little red upside-down triangle, choose 'Number of Clusters' and enter '3'. Using the same red triangle, choose 'Color Clusters' and 'Mark Clusters'. Diddle with the 3-D cube again. Answer the following questions:

#1) In a broad conceptual way (no hairball math please) – How does JMP classify these points (records in our table) into the three clusters? And, do you think it did a good job?

I think it does a great job. It iteratively 'guesses' mean centers for however many clusters you ask it for. As it categorizes each point into one of these clusters, the means are adjusted. All of this is based on measuring distances in variable space.
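What this answer describes is essentially the k-means algorithm (JMP's hierarchical option actually works a bit differently, merging the closest groups pairwise, but the iterative 'guess and re-average' idea is worth seeing on its own). A bare-bones Python sketch, with three made-up blobs standing in for the ClusterA/B/C columns:

import numpy as np

def kmeans(points, k, n_iter=20, seed=0):
    """Bare-bones k-means: guess k mean centers, then alternate between
    assigning each point to its nearest center and re-averaging."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # distances from every point to every center, in variable space
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels, centers

# Three made-up 3-D blobs standing in for the ClusterA/B/C columns.
rng = np.random.default_rng(1)
blobs = np.vstack([rng.normal(m, 0.5, size=(30, 3)) for m in (0, 5, 10)])
labels, centers = kmeans(blobs, k=3)
print(np.round(centers, 1))   # lands near (0,0,0), (5,5,5), (10,10,10)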

#2) Different ‘Clustering’ techniques are often simply using different ‘distance’ metrics in variable space. Different approaches attempt to minimize the distance between each ‘record’ or ‘point’ in variable space to the ‘mean center’ of each cluster. Google the idea of ‘Mahalanobis Distance’ and give a go at explaining how it is a different way of clustering points in a variable space. I found this one to be fairly useful:

(http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_mahalanobis.htm ).

Mahalanobis distance is measured in 'standard deviations' rather than straight Euclidean distance, taking the spread (and orientation) of each cluster into account. So the same Euclidean distance of 3 can amount to very different Mahalanobis distances within two different clusters. Answers will vary. Grade leniently on this one.
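A small numerical illustration may help (a made-up elongated cluster, not data from the JMP file): two points that are both a Euclidean distance of 3 from the cluster center get very different Mahalanobis distances, because one direction has much more natural spread than the other.

import numpy as np

rng = np.random.default_rng(3)

# A made-up elongated cluster: lots of spread along x, little along y.
cluster = rng.normal(size=(500, 2)) * np.array([10.0, 1.0])
center = cluster.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(cluster, rowvar=False))

def mahalanobis(p):
    """Distance from p to the cluster center, in 'standard deviations'."""
    d = p - center
    return float(np.sqrt(d @ inv_cov @ d))

# Two points, both a Euclidean distance of 3 from the origin...
along = np.array([3.0, 0.0])    # along the long axis of the cluster
across = np.array([0.0, 3.0])   # across the short axis
print(mahalanobis(along))   # about 0.3 'standard deviations': unremarkable
print(mahalanobis(across))  # about 3 'standard deviations': way out there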

#3) Distance is a basic geographic concept. In statistics we take it into the twilight zone. In a 2-D Cartesian space the 'distance' between two points (x1, y1) and (x2, y2) is given by the formula:

d = sqrt( (x2 - x1)^2 + (y2 - y1)^2 )
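As a tiny sanity check (my own example points, the classic 3-4-5 right triangle), and a reminder that the same computation generalizes to however many dimensions your 'variable space' has:

import numpy as np

p1 = np.array([1.0, 2.0])
p2 = np.array([4.0, 6.0])

# The 3-4-5 triangle: sqrt((4-1)^2 + (6-2)^2) = 5
print(np.sqrt(((p2 - p1) ** 2).sum()))   # 5.0
print(np.linalg.norm(p2 - p1))           # same thing, works in any dimension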