The Eye Tracker System – A System to Measure and Record Saccadic Eye Movements

William R. Pruehsner, Christopher M. Liebler, Francisco Rodriguez-Campos, John D. Enderle

Biomedical Engineering

University of Connecticut, Storrs, CT 06269

KEYWORDS

Saccade, LabView, Infrared, Eye Movements, Linear Regression, Transducer

ABSTRACT

The Eye Tracker System, built at the University of Connecticut at Storrs in the Biomedical Instrumentation Lab, consists of three separate and distinct units brought together as a whole system to measure saccades.

A seven-row by eleven-column array of LEDs mounted on five-degree centers along a concave surface provides targeting for the Eye Tracker System, wherein the subject's eye follows a pattern of illuminated LEDs determined by the experimenter. The Target System is digitally driven by serial inputs from the Main Command System. Subject positioning is aided by the concave surface of the Target System.

The System Console employs a multiple regression Operating System to predict eye position. Twenty-four channels employing infrared light reflective differencing measure the location of the eye. These optoelectronics are mounted in a specialized head-mounted transducer, on the interior of a parabolic surface that automatically aims them toward the limbus. The transducer is styled after an ophthalmologist's test frames, is comfortably worn, and is adjustable in size to fit any subject.

The Main Command System, authored in the G programming language (LabView), provides a graphic user interface (GUI) that controls the generation of the target pattern. The Main Command System also coordinates the programs that acquire all data, the regression algorithm for the real-time prediction of eye position, and the initial calibration of the system. In addition, the application is able to save and retrieve data for further analysis.

INTRODUCTION

Accurate recording of eye movements is critical to the study of oculomotor system function. This information is important not only for the understanding of visual function, but also for understanding the consequences of various kinds of individual impairment, for the development of interfaces between human and artificial systems, and perhaps for the development of artificial systems themselves [1].

Eye movement measurement systems have a wide range of medical research applications and could have a significant potential for early detection and treatment of various disorders. Studies of the effects of disease states on eye movements and visual acuity could permit early study of such disorders as Parkinson’s disease and schizophrenia. Research into the effects of psychotropic drugs might be used for detecting illicit drug use, or compliance with prescribed medication treatment. Future medical applications could include AIDS and Alzheimer’s research, head injury evaluation, and evaluation and remediation of dyslexia and other reading dysfunctions [2].

Numerous systems have been devised using a variety of methods (some seemingly near torture) of mechanical implementation, accompanied by a like variety of computational algorithms. Many recording devices use optical components ranging from elaborate camera set-ups to head-mounted transducers containing infrared optoelectronics to measure eye movements. All of these devices use light reflectivity in some form to track either the glint of the eye or the interface between the sclera and the iris/pupil. The device introduced herein, the Eye Tracker, measures real-time, vertical, horizontal, or torsional saccadic eye movements using a method of differencing the intensity of infrared light as reflected from the sclera and the iris/pupil combination. Included in this system are a targeting array, system-controlling software authored mainly in LabView, and a transducer with its signal processing hardware.

DESCRIPTION

The Eye Tracker System introduced in this paper consists of three major components: a Transducer attached to an operating console, which provides the variable voltages used to measure eye position; an Oculomotor Display System along with its operating electronics; and a Main Command System authored in the G programming language (LabView). Each of these is now described in some detail as a stand-alone component, followed by a discussion of how the components are integrated.

The transducer is a head-mounted device fashioned so that it is variable in size, allowing it to fit as large a cross-section of people as possible. Contained in each of the two eyepieces (right and left) of the transducer are twelve photodetectors and two IR emitters. These devices are arrayed in a circle with each component equally spaced along that circle. The emitters are placed at three and nine o'clock, with the photodetectors filling the other positions. These optoelectronics operate at a wavelength of 875 nm.

The transducer is connected to an operating console via a Wire Harness consisting of a bundle of 34 coaxial cables. The Wire Harness is four and a half linear feet long exposed. Approximately three linear feet from the console, the Wire Harness splits into two equal-sized bundles, one serving the right eyepiece and the other the left. The coax wire used for the Wire Harness is Belden 8700-010 28 AWG coax cable. This wire has an outer diameter of 0.035 inch and is very flexible. Prevention or reduction of triboelectric currents caused by the inner conductors rubbing against each other, as well as cable resistance and leakage current, is not addressed in this application, since the overriding design considerations are the actual size of the wire and its "off-the-shelf" availability.

The control console consists mainly of the circuitry that processes the signals received from the photodetectors. The circuit contains twenty-four channels, one for each detector. Each channel reads a modulated signal from a photodetector (5 kHz, 50% duty cycle), then amplifies and filters it. The result is a DC waveform that increases and decreases in amplitude depending on the amount (or lack) of reflected light seen by that channel's particular photodetector. The waveforms are then fed into an A/D circuit and sampled and digitized at one kilohertz.
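As a rough software analogue of this per-channel chain, the modulated signal can be rectified and averaged over whole modulation periods to recover the reflected-light level. This is only an illustrative sketch, not the console's analog circuitry; the simulation rate `FS` and the function name are assumptions.

```python
import numpy as np

FS = 1_000_000        # assumed simulation sample rate, Hz
F_MOD = 5_000         # modulation frequency from the text, Hz

def demodulate(signal):
    """Return the mean rectified level over each whole modulation period.

    The output rises and falls with the amount of reflected light,
    mirroring the DC waveform produced by the analog amplify/filter stage.
    """
    period = FS // F_MOD                       # samples per 5 kHz period
    n = (len(signal) // period) * period       # trim to whole periods
    return np.abs(signal[:n]).reshape(-1, period).mean(axis=1)
```

For a 50%-duty square wave of amplitude A, each averaged period yields A/2, i.e. a level proportional to the received light intensity.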

Operating Software that accompanies the console predicts eye position using a method of multiple regression. Two programs are used. The first utilizes a calibration procedure and a least squares fit to determine the regression coefficients; the calibration procedure is performed for each subject to be tested. The second computes the X and Y coordinates of the eye movement by multiplying the regression coefficients with the sampled voltages derived above.
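The two-step procedure can be sketched in Python as a hypothetical stand-in for the actual Operating Software; the function names and the use of NumPy's least-squares solver are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_regression(V, X, Y):
    """Least-squares calibration step.

    V : (n_targets x 24) matrix of sensor voltages recorded while the
        subject fixates each calibration LED.
    X, Y : known target angles (degrees) for each calibration LED.
    Returns one coefficient vector per axis (with intercept term).
    """
    A = np.hstack([V, np.ones((V.shape[0], 1))])   # append intercept column
    bx, *_ = np.linalg.lstsq(A, X, rcond=None)
    by, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return bx, by

def predict_position(v, bx, by):
    """Predict (x, y) gaze angle from one 24-channel voltage sample."""
    a = np.append(v, 1.0)                          # add intercept term
    return a @ bx, a @ by
```

In use, `fit_regression` would run once per subject after calibration, and `predict_position` would run on every 24-channel sample for real-time prediction.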

The Oculomotor Display System consists of two major components, a display and a control module. The Display System features a total of seventy-seven Light Emitting Diodes (LEDs) arrayed in a seven row by eleven column matrix. The LEDs are mounted on five degree centers, are three millimeter in size, and operate at a wavelength of 565 nm with a viewing angle of forty-five degrees.

During the calibration procedure or actual testing, only single LEDs are required to "turn on." For any given LED to emit, it must receive a digital low (zero volts DC, or ground) on its cathode lead and a digital high (positive five volts DC) on its anode. To accomplish this, the cathodes of all LEDs in a particular row are connected to a common row line, and the anodes of all LEDs in a given column are connected to a common column line. A single LED may then be turned on by providing the row in which that LED is located with a digital low and the column in which it is located with a digital high. In this manner any LED in the display may be switched on or off.
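The row/column addressing scheme can be sketched as follows; this is an illustrative model only, and the 'HIGH'/'LOW' representation and function name are assumptions.

```python
ROWS, COLS = 7, 11   # display matrix dimensions from the text

def drive_lines(row, col):
    """Return (row_levels, col_levels) that light only the LED at (row, col).

    An LED conducts only when its row (cathode) line is low AND its
    column (anode) line is high, so exactly one LED lights.
    """
    row_levels = ['HIGH'] * ROWS   # cathodes default high -> no current
    col_levels = ['LOW'] * COLS    # anodes default low   -> no current
    row_levels[row] = 'LOW'        # pull the target row's cathodes low
    col_levels[col] = 'HIGH'       # drive the target column's anodes high
    return row_levels, col_levels
```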

The Display System is driven by a Control Module that receives input from a data acquisition card installed on a computer. The Control Module provides a method of switching the LEDs in the display on and off.

Switching is controlled by three Philips 74F138 ICs. Generically, the 74F138 is a three-input, eight-output high-speed line decoder and demultiplexer. The decoder receives a binary input and decodes it such that only one of its outputs is low for each input combination. One 74F138 IC controls the seven rows (from its eight available outputs), while two ICs control the eleven columns (likewise from their sixteen available outputs).

Figure 1: Schematic of Display System

When the Eye Tracker System is activated, all of the LED targets on the Display System must be off; therefore the output value of each decoder must be low. However, the default values of the decoder outputs are high. Two 7406 open-collector hex inverter ICs invert all of the outputs of each decoder to low, thereby setting the default value at each LED cathode to digital low. Each row (cathodes) is then pulled up to positive five volts DC by a 680 ohm resistor; each column (anodes) is pulled to ground by a 330 ohm resistor. This arrangement sets the default state of each LED to off. A schematic of the Display System circuit is shown in Fig. 1.

Control Module components are mounted on a double-sided PC board in such a manner that all inputs from the data acquisition board, as well as the decoder outputs to the columns and rows, can be easily identified should repair or troubleshooting be required. The PC board is designed and manufactured "in-house/over the internet" using a CAD software package provided by ExpressPCB.

The Control Module receives a seven-bit binary input from the data acquisition board (described below), wherein the four most significant bits control the columns and the three least significant bits control the rows. The binary input is electrically isolated from the circuit by seven relays.
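The bit packing described above might be sketched as follows; this is a hypothetical illustration, and the function names are assumptions.

```python
def encode_target(row, col):
    """Pack (row, col) into the 7-bit word sent over the output lines.

    The four most significant bits carry the column (0-10); the three
    least significant bits carry the row (0-6).
    """
    assert 0 <= row < 7 and 0 <= col < 11
    return (col << 3) | row

def decode_target(word):
    """Split the 7-bit word back into (row, col), as the decoders would."""
    return word & 0b111, word >> 3
```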

Coordinating the entire system is an Operational Package developed using LabView. Provided by this package is a Graphic User Interface (GUI) that controls the generation of a target pattern (described in more detail below), communications with the Operating Software application used for the eye position regression algorithm, and the acquisition of all the data generated in the system console.

Two main input/output operations (target pattern and acquisition) are performed by a PICDAQ 3220 series data acquisition card from CyberResearch Inc. This card has 32 single-ended acquisition channels and 16 isolated output lines, and offers a maximum sample rate of 110 kHz with simultaneous sampling.

In target pattern generation, the Oculomotor Display System is driven using seven isolated output lines, allowing the system to turn on (or light) any of the seventy-seven LEDs mounted in the Display. Target pattern generation is required during system calibration and during subject testing. During data acquisition, the Main Command System stores the acquired data in such a manner that the Operating Software can calculate and predict eye position. In addition, the Main Command System can retrieve previously saved data for further analysis and display a real-time graph of the eye movement during the acquisition process.

METHOD OF USE

How the three main components are integrated is best described by detailing how the system operates during a test protocol.

The subject to be tested sits upright on a chair facing the Oculomotor Display System. The chair, along with the Display and Transducer, is located in a darkened room dedicated solely to the use of this device. The chair is positioned so that the zero-position LED on the Display is at eye level and the subject's gaze moves five degrees for each incremental row or column of LED illumination. The transducer is then placed on the subject's head; proper placement requires that the eye be centered in each eyepiece. A chin restraint is brought forward and placed under the subject's chin to eliminate, or at least reduce, any respiration artifact or inadvertent head movements.

At this time, the Main Command System generates a target pattern wherein the center LED illuminates. After a brief period of time, a different LED elsewhere on the Display illuminates (the center LED deactivates). The subject moves their eyes accordingly following the LED. This simple


Figure 2. Subject wearing Eye Tracker Transducer. Control Console for the Transducer is on the right.

procedure is executed a few times to familiarize the subject with following the LED illuminations without inadvertently moving the head. No data is retained during this part of the session.

Calibration follows generally the same procedure as the practice runs. First, the center LED is illuminated; after 100 milliseconds, a second LED illuminates. This LED remains illuminated for 400 milliseconds, after which the center LED once again illuminates. This procedure is repeated for all seventy-seven LEDs on the Display. Subject fatigue and possible distraction due to the speed of calibration or loss of concentration are mitigated by breaking the seventy-seven steps required during calibration into several groups. The pattern of LED illumination, as well as the timing between illumination and rest periods, is controlled by the Main Command System.
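The calibration timing can be sketched as a generator of (row, column, duration) steps; this is an illustrative sketch only, and the center coordinates and function name are assumptions.

```python
import itertools

CENTER = (3, 5)  # assumed middle position of the 7 x 11 array

def calibration_sequence():
    """Yield (row, col, duration_ms) steps for one full calibration pass.

    For each of the 77 targets: light the center LED for 100 ms, then
    the target LED for 400 ms, matching the timing described in the text.
    """
    for row, col in itertools.product(range(7), range(11)):
        yield CENTER + (100,)      # fixate center for 100 ms
        yield (row, col, 400)      # target LED for 400 ms
```

In practice the steps would be consumed by the Main Command System's output loop, with rest periods inserted between groups of targets.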

Acquisition of the data is performed at a sampling interval of 1 millisecond, producing five hundred data points per channel for each 500-millisecond period. Since twelve channels are used for each eye, resulting in a total of twenty-four channels, each acquisition (500 milliseconds) generates a total of twelve thousand data points. This operation is performed simultaneously with the target pattern generation in the target system. The LabView application stores this data in a text file using the following format:

[time, sensor1, sensor2, sensor3,.…, sensor 24]
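A minimal sketch of writing one trial in this format follows; the helper name and the numeric formatting precision are assumptions, not the actual LabView implementation.

```python
def write_trial(path, samples):
    """Write one 500 ms trial as text rows of [time, sensor1, ..., sensor24].

    samples : list of 500 tuples (t_ms, v1, ..., v24); at a 1 ms sampling
    interval this yields 500 x 24 = 12,000 sensor values per trial.
    """
    with open(path, 'w') as f:
        for row in samples:
            f.write(', '.join(f'{v:.6f}' for v in row) + '\n')
```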

This file is then accessed by the Operating Software that accompanies the transducer Control Console in order to perform the calibration. The device is calibrated anew for each use.

Once the system is calibrated, testing can begin. Testing follows the calibration procedure closely, except that only one set of data is acquired at a time, over a period of 500 milliseconds. The data, stored in the same manner as above, is sent to the Operating Software, wherein eye position is determined. A graph showing eye position is available in real time.

CONCLUSIONS

This paper has introduced the Eye Tracker System, which measures saccadic eye movements using reflective differencing of infrared light. The Eye Tracker System contains three components: the Transducer with its operating console and accompanying operating software, the Oculomotor Display System along with its operating electronics, and the Main Command System authored under LabView. Also described is a test protocol used for measuring an eye movement, detailing the integration of the three major components.

ACKNOWLEDGMENTS

The authors wish to acknowledge the following persons who built the original Display System and developed the original Main Command System. Sara R. Wieczorek built the original Display System as an undergraduate project at the University of Connecticut and received her BSEE in May of 1999. Kurt Walton developed the original Main Command System also as an undergraduate student at the University of Connecticut and he received his BSEE in May of 2001. We would also like to thank Kathleen Fernald for modeling the transducer setup.

REFERENCES

[1] M. M. Hayhoe, "Keynote Address: Visions in Natural and Virtual Environments," Eye Tracking Research and Applications Symposium, March 2002.

[2] Iota Eye Trace Systems, Improved Research: A Key to Controlling Health Care Costs.