Calibrating Criminals
Calibration of an Eye Tracking Device to Non-Cooperative Subjects
Ryan Schuetzler
10/13/2010
This proposal recommends research into the covert calibration of an eye tracking device to uncooperative subjects. Using visual salience theory, the author justifies an approach to calibrate the device without subjects’ knowledge to enhance accuracy of test results.

Introduction

Following the events of September 11, 2001, securing the US border has become a focal issue. As more people attempt to enter the country, manpower and current processes are proving insufficient to handle the ever-increasing volume. To deal with these constraints, more efficient methods of processing potential entrants are being evaluated. One such method is an automated screening kiosk. Users interact with the kiosk, which integrates and analyzes data from various sensors to provide automated assistance in detecting deception (Derrick, Elkins, Burgoon, & Nunamaker, 2010). An eye tracker is one of those sensors, and it was recently shown to be effective in detecting deception using a Guilty Knowledge Test (Derrick, Moffit, & Nunamaker, 2010).

One important issue in the use of sensors, including the eye tracker, is calibration. Calibration is the process of establishing known reference measurements against which future measurements are compared. In the case of the eye tracker, calibration means matching measured eye movement to specific points on the screen. Current calibration procedures ask subjects to orient their gaze toward dots displayed on a screen. While this procedure provides a high level of accuracy, it can also alert the subject to the eye tracking that is to follow.
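For concreteness, the sketch below illustrates what a calibration ultimately produces, assuming the tracker reports raw gaze coordinates and that an affine mapping to screen coordinates is adequate; the function and variable names are placeholders rather than any vendor's API.

    import numpy as np

    def fit_calibration(raw_gaze, screen_targets):
        """Fit an affine map from raw tracker coordinates to screen coordinates.

        raw_gaze       : (n, 2) array of raw gaze samples recorded while the
                         subject fixated each calibration point.
        screen_targets : (n, 2) array of the known screen coordinates of those points.
        Returns a 3x2 matrix A such that [x, y, 1] @ A approximates the screen point.
        """
        n = raw_gaze.shape[0]
        design = np.hstack([raw_gaze, np.ones((n, 1))])   # homogeneous coordinates
        A, *_ = np.linalg.lstsq(design, screen_targets, rcond=None)
        return A

    def map_gaze(A, raw_point):
        """Map a single raw gaze sample to an estimated screen position."""
        x, y = raw_point
        return np.array([x, y, 1.0]) @ A

The covert procedure proposed here would change only how the (raw gaze, screen target) pairs are collected, not the fit itself.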

Drawing attention to the eye tracker may introduce threats to validity. The most important of these is hypothesis guessing, which occurs when subjects in an experiment guess what the experimenters are attempting to study (Trochim, 2000). Because an eye tracker can gather only limited types of information, subjects who know one is in use may consciously monitor their eye movements and thus alter their gaze patterns, invalidating the results.

Therefore, my research question for this study is the following:

·  RQ1: How can an eye tracker be calibrated without the subject's knowledge?

Theoretical background

I propose to use visual salience theory to guide the covert calibration of an eye tracking device. The theory focuses on how attributes of visual elements draw the orientation of attention and gaze.

Visual Salience

A visually salient object is one that differs in some way from the objects around it. For example, a blue square on a screen full of red circles would be visually salient. Prior research has shown that visual salience is effective in reducing the time spent searching for an object on a screen (Cole, Kentridge, & Heywood, 2004; Jonides, 1981; Muller & Rabbitt, 1989; Yantis & Jonides, 1990). However, research has also shown that people are able to filter out cues that are not relevant to their current task (Yantis & Egeth, 1999). This phenomenon can be explained by the theory of attentional guidance: viewers ignore salience cues that are not relevant to their goals (Folk, Remington, & Johnston, 1992). For example, a diagonal red bar on a page of diagonal blue bars is visually salient, yet subjects ignored that cue when their task was to locate the single vertical bar on the page (Yantis & Egeth, 1999).

Additional research has shown that the onset (appearance) of a new object is especially effective in capturing the attention of viewers (Cole et al., 2004; Yantis & Egeth, 1999). Cole et al. (2004) found through seven experiments that object onset was more effective than color, luminance, and object offset (disappearance). Thus, object onset may be the most visually salient attribute.

Visual salience cues such as object onset should therefore be the most effective way to direct subjects' attention and gaze without their conscious awareness. Figure 1 shows the proposed research model.

Figure 1 - Proposed Research Model

Hypotheses

·  H1: Calibration performed using visually salient cues will be more accurate than calibration performed without them

·  H2: Covert calibration performed using visually salient cues will be less accurate than traditional calibration methods

·  H3: Uncooperative subjects will be better calibrated using covert methods than overt methods

·  H4: Uncooperative subjects will be better calibrated using visually salient cues than without them

Methodology

I propose to study the covert calibration of an eye tracking device by devising a task that requires subjects to look at known portions of the screen. Several experimental designs could be used (a sketch of the data-collection loop common to all of them follows the list):

  1. The subject is presented with an image of their driver’s license or passport and asked to review the information on the document for accuracy. Most of the document is darkened, and a box appears around the section currently under review. For example, the subject could be asked whether the photograph is of them, and a box would appear around the photograph to draw their attention. These results could then be compared to a calibration task in which subjects are simply asked to look at certain parts of the document without the relevant portion being highlighted.
  2. The subject is shown an orientation video clip about the automated kiosk screening process. During this video, object onset subtly draws attention to different parts of the screen.
  3. Similar to (1), the subject is asked to review a list of information from a form instead of an image of their driver’s license. Object onset and other salience cues highlight the portion of the screen to be examined. While this may produce a similar effect, I believe the visual nature of the driver’s license will provide a more effective backdrop for the calibration process.
  4. The subject must enter information displayed on the screen into a keypad. The subject would have to look at the screen to read the information before entering it into a form. Several such prompts could be used, forcing the subject to look at the screen at designated points. This approach may prove beneficial because it requires subjects to direct their gaze to specific parts of the screen; however, because it is an artificial task, it may alert subjects to its real purpose.
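Whichever cover task is chosen, the underlying data collection is the same: salient objects appear at known screen positions, and the tracker samples raw gaze shortly after each onset. The sketch below illustrates that loop under stated assumptions; show_onset_cue and sample_raw_gaze are hypothetical stand-ins for whatever stimulus-presentation and eye-tracker interfaces are actually used (e.g., PsychoPy and a vendor SDK).

    import time
    import numpy as np

    def show_onset_cue(x, y):
        """Placeholder: make a salient object appear at screen position (x, y)."""
        print(f"onset cue at ({x}, {y})")

    def sample_raw_gaze():
        """Placeholder: return the tracker's current raw gaze sample (rx, ry)."""
        return (0.0, 0.0)

    def covert_calibration(cue_positions, delay_s=0.3):
        """Collect (raw gaze, screen target) pairs by driving gaze with onset cues.

        cue_positions : list of (x, y) screen points at which salient objects appear
                        during the cover task (license review, orientation video, ...).
        delay_s       : wait after each onset before sampling, drawn from the
                        200-700 ms reaction-time window reported in the literature.
        """
        raw, targets = [], []
        for (x, y) in cue_positions:
            show_onset_cue(x, y)            # object onset draws the subject's gaze
            time.sleep(delay_s)             # allow the saccade to land
            raw.append(sample_raw_gaze())   # where the tracker thinks the subject is looking
            targets.append((x, y))
        return np.asarray(raw), np.asarray(targets)

The resulting pairs would feed the same least-squares fit sketched in the introduction; only the way they are obtained differs from traditional dot-based calibration.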

In each of these experimental designs, the experimental calibration could be compared to a traditional calibration using a validation task performed after the initial calibration. For example, following calibration, subjects could be asked to look at single points on the screen. In this way, the accuracy of the calibration can be assessed.
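Assuming the affine mapping sketched earlier and pixel distance as the accuracy measure, that comparison could be made concrete as follows; the variable names are illustrative only.

    import numpy as np

    def calibration_error(A, raw_validation, validation_targets):
        """Mean Euclidean error (in pixels) of a fitted calibration on held-out points.

        A                  : 3x2 affine map returned by the calibration fit.
        raw_validation     : (m, 2) raw gaze samples taken while the subject looked
                             at single validation dots shown after calibration.
        validation_targets : (m, 2) known screen positions of those dots.
        """
        design = np.hstack([raw_validation, np.ones((len(raw_validation), 1))])
        predicted = design @ A
        return float(np.mean(np.linalg.norm(predicted - validation_targets, axis=1)))

    # Hypothetical comparison of conditions:
    # err_covert      = calibration_error(A_covert, raw_validation, validation_targets)
    # err_traditional = calibration_error(A_traditional, raw_validation, validation_targets)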

Pilot testing will be needed to determine the timing of the calibration measurement. Since calibration uses the eyes’ position at only a single point in time, some basic research will be required to determine how long after object onset the measurement should be taken to ensure that subjects are looking at the correct point on the screen. Reaction times in visual salience research have ranged from roughly 200 to 700 ms (Posner & Cohen, 1984); these values will serve as starting points for evaluation. To keep calibration as accurate as possible, the eye tracking device must take its measurement as soon as possible after object onset, before subjects have time to shift their gaze elsewhere.
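A pilot analysis of this timing question might look like the following sketch, which assumes gaze traces recorded at one sample per millisecond and time-locked to each onset; both the names and the sampling assumption are illustrative.

    import numpy as np

    def best_sampling_delay(gaze_traces, cue_positions, delays_ms=range(200, 701, 50)):
        """Pick the post-onset delay at which gaze is, on average, closest to the cue.

        gaze_traces   : list of (t, 2) arrays of gaze positions (already mapped to
                        screen coordinates), time-locked to object onset, one
                        sample per millisecond.
        cue_positions : sequence of (x, y) screen positions of the onset cues.
        delays_ms     : candidate delays within the 200-700 ms reaction-time window.
        """
        mean_errors = []
        for d in delays_ms:
            errs = [np.linalg.norm(trace[d] - np.asarray(cue))   # gaze error at d ms post-onset
                    for trace, cue in zip(gaze_traces, cue_positions)
                    if d < len(trace)]
            mean_errors.append(np.mean(errs))
        return list(delays_ms)[int(np.argmin(mean_errors))]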

Discussion

Expected Results

I expect the following results: (1) covert calibration will not yield results as accurate as those obtained using traditional calibration, and (2) trials using object onset will calibrate more accurately than those without. Future research will be needed to determine how accurate calibration must be for testing to be effective. The necessary level of accuracy is, of course, highly task dependent. For analyzing gaze patterns across full-screen images, relatively low accuracy may be sufficient; however, when it is necessary to know exactly where subjects are looking at specific points in time, greater calibration accuracy may be required.

Implications

The primary purpose of this research is to determine whether calibration is possible without the subject’s knowledge. This information will be valuable for future eye tracking research as well as for field implementations of eye tracking technology. Researchers will be able to avoid hypothesis guessing by calibrating the eye tracker without the subjects’ knowledge, and field implementations of eye tracking for deception detection will be more accurate when assessing uncooperative subjects. Adding the capability to calibrate the eye tracker covertly will result in more reliable data.

References

Cole, G., Kentridge, R., & Heywood, C. (2004). Visual salience in the change detection paradigm: the special role of object onset. Journal of Experimental Psychology, 30(3), 464-477.

Derrick, D. C., Elkins, A. C., Burgoon, J. K., & Nunamaker, J. F., Jr. (2010). Border Security Credibility Assessments via Heterogeneous Sensor Fusion. IEEE Intelligent Systems, 25(3), 41-49.

Derrick, D. C., Moffit, K., & Nunamaker, J. F., Jr. (2010). Eye Gaze Behavior as a Guilty Knowledge Test: Initial Exploration for Use in Automated, Kiosk-based Screening. Paper presented at the 43rd Annual Hawaii International Conference on System Sciences, Poipu, Hawaii.

Folk, C., Remington, R., & Johnston, J. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030-1030.

Jonides, J. (1981). Voluntary versus automatic control over the mind’s eye’s movement. Attention and performance IX, 9, 187-203.

Muller, H., & Rabbitt, P. (1989). Reflexive and voluntary orienting of visual attention: Time course of activation and resistance to interruption. Journal of Experimental Psychology, 15(2), 315-330.

Posner, M., & Cohen, Y. (1984). Components of visual orienting. Attention and performance X: Control of language processes, 32, 531–556.

Trochim, W. M. (2000). The Research Methods Knowledge Base (2nd ed.). Cincinnati, OH: Atomic Dog Publishing.

Yantis, S., & Egeth, H. (1999). On the distinction between visual salience and stimulus-driven attentional capture. Journal of Experimental Psychology: Human Perception and Performance, 25, 661-676.

Yantis, S., & Jonides, J. (1990). Abrupt visual onsets and selective attention: Voluntary versus automatic allocation. Journal of Experimental Psychology: Human Perception and Performance, 16(1), 121-134.
