Iris Recognition

Introduction & Basic Methodology

Iris recognition may not be one of the most widely deployed authentication technologies, but it has one of the lowest error rates of any biometric technology. Iris patterns are unique to each individual and do not change with age. Even the iris patterns of the left and right eyes of the same individual differ from each other. The texture of the iris is made up of a complex fibrous and elastic tissue sometimes referred to as the trabecular meshwork. The fine detail of this mesh-like structure is established before birth and remains intact throughout the individual's life.

The iris pattern has a roughly regular polar geometry, which makes it easy to define a coordinate system for feature recognition. One point to take into account is that the surface of the iris is mobile, since the pupil expands and contracts. The visible portion of the iris also differs with ethnic origin and genetic inheritance; in individuals with dark eyes, the important question is how easily the boundary between pupil and iris can be identified.

The first step is to capture an image of the iris with a CCD (Charge-Coupled Device) camera. After the image has been captured, a circular edge detector is used to locate the boundary between the white portion of the eye (the sclera) and the iris, and then the boundary between the iris and the pupil [Figure 1].

[Figure 1: localization of the iris/sclera and iris/pupil boundaries]    [Figure 2: circular zones of analysis]
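As a rough illustration of this localization step (not Daugman's actual method, whose integro-differential operator is given later in the text), the sketch below approximates the two boundaries with OpenCV's Hough circle transform; the file name and every parameter value are illustrative assumptions.

    # Minimal boundary-localization sketch; returns the iris/sclera and iris/pupil circles.
    import cv2

    def locate_boundaries(path="eye.png"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        blur = cv2.medianBlur(gray, 5)  # suppress eyelashes and specular reflections

        # Outer boundary: the larger circle between the sclera and the iris.
        iris = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                                param1=100, param2=50, minRadius=80, maxRadius=150)
        # Inner boundary: the smaller, darker circle between the iris and the pupil.
        pupil = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                                 param1=100, param2=30, minRadius=20, maxRadius=70)

        # Each detection is (x0, y0, r); note that the two centres need not coincide.
        return iris[0][0], pupil[0][0]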

Next, circular contours of increasing radius are defined to obtain zones of analysis [Figure 2] that remain the same regardless of pupil resizing. Parts of the iris that are hidden by the eyelids or eyelashes, or corrupted by reflections from glasses, are detected and masked out so that they do not influence the encoding of the iris. Note that the pupil is not always centred within the iris. Because of the constant movement of the iris, multiple images are captured in rapid succession until a bona fide image is confirmed. The user can observe this process through a reflected image of the eye in the CCD camera, which helps the user focus and stabilize the image.
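The idea behind these fixed zones can be sketched as a "rubber sheet" remapping: each radial position is expressed as a fraction of the distance between the pupil boundary and the iris boundary, so the sampled grid stays the same as the pupil dilates or contracts, even when the two boundary circles are not concentric. The function name, resolutions and nearest-neighbour sampling below are simplifying assumptions.

    import numpy as np

    def normalize(gray, pupil, iris, radial_res=64, angular_res=256):
        """Map the iris ring onto a fixed (radius-fraction, angle) grid."""
        px, py, pr = pupil   # pupil boundary circle (x0, y0, r)
        ix, iy, ir = iris    # iris/sclera boundary circle (x0, y0, r)
        theta = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
        frac = np.linspace(0.0, 1.0, radial_res)[:, None]  # 0 = pupil edge, 1 = iris edge

        # For each angle, interpolate linearly between the two boundary circles.
        xs = (px + pr * np.cos(theta)) * (1.0 - frac) + (ix + ir * np.cos(theta)) * frac
        ys = (py + pr * np.sin(theta)) * (1.0 - frac) + (iy + ir * np.sin(theta)) * frac

        # Nearest-neighbour sampling keeps the sketch short; a real system would
        # interpolate and carry a mask for eyelid, eyelash and reflection pixels.
        rows = np.clip(ys.astype(int), 0, gray.shape[0] - 1)
        cols = np.clip(xs.astype(int), 0, gray.shape[1] - 1)
        return gray[rows, cols]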

Next, the zones of analysis [Figure 2] are examined to distinguish features within them. For this purpose 2-D Gabor filters are used, which provide information about the orientation and spatial frequency of the minutiae within the image sectors. The circular boundary detection described earlier is carried out with integro-differential operators of the form

Max(r, x0, y0) G (r) *  r, x0,y0 I(x,y)

r 2r

where the contour integration, parameterized by the size and location coordinates r, x_0, y_0 at a scale of analysis \sigma set by G_\sigma(r), is performed over the image data I(x, y). A pseudo-polar coordinate system is then defined that maps the tissue and automatically compensates for the stretching of the iris tissue as the pupil dilates. The detailed pattern is encoded into a 256-byte code by demodulating it with 2-D Gabor wavelets, which represent the texture as phasors in the complex plane. For each element of the iris pattern, the phasor angle is mapped to the quadrant in which it lies.
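A minimal sketch of this phase-quadrant encoding is given below, assuming a normalized iris image such as the one produced by the earlier sketch: each patch is demodulated with a complex 2-D Gabor kernel, and only the quadrant of the resulting phasor, i.e. the signs of its real and imaginary parts, is kept as two bits. The kernel size, wavelength and sampling step are illustrative assumptions rather than Daugman's published parameters, so the code produced here will not be exactly 256 bytes.

    import numpy as np

    def gabor_kernel(size=9, wavelength=4.0, sigma=2.5):
        """Complex 2-D Gabor kernel: a plane wave under a Gaussian envelope."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        carrier = np.exp(1j * 2.0 * np.pi * x / wavelength)
        return envelope * carrier

    def iris_code(normalized, step=8):
        """Quantize the Gabor phasor of each patch to its quadrant (two bits)."""
        kernel = gabor_kernel()
        bits = []
        for r in range(0, normalized.shape[0] - kernel.shape[0], step):
            for c in range(0, normalized.shape[1] - kernel.shape[1], step):
                patch = normalized[r:r + kernel.shape[0], c:c + kernel.shape[1]]
                phasor = np.sum(patch * kernel)           # projection onto the wavelet
                bits.append(1 if phasor.real > 0 else 0)  # sign of the real part
                bits.append(1 if phasor.imag > 0 else 0)  # sign of the imaginary part
        return np.packbits(np.array(bits, dtype=np.uint8))

In Daugman's system, two such codes are compared simply by XOR-ing them and counting the fraction of disagreeing bits (a fractional Hamming distance), which is one reason the comparison step is so fast.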

Dr. John Daugman developed the iris-scanning algorithm that is widely used today. Remarkably, the entire process of image capture, zoning, analysis and iris-code creation is typically completed in less than a second. Current implementations of the iris-scanning approach involve some user interaction in order to capture the image properly, but it is essentially a non-contact approach. It has also been found to work well for spectacle and contact lens users.

A decision made by a biometric system is generally a ‘genuine’ or ‘imposter’ decision, which can be represented by two statistical distributions: the genuine distribution and the imposter distribution. Each type of decision can be either true or false, so there are four possible outcomes, listed below:

  1. a genuine person is accepted
  2. a genuine person is rejected
  3. an imposter is rejected
  4. an imposter is accepted

Outcomes 1 and 3 are correct, whereas 2 and 4 are errors. From these we can define the performance criteria for the system: the False Acceptance Rate (FAR), the rate at which imposters are accepted, and the False Rejection Rate (FRR), the rate at which genuine users are rejected. To assess the system more reliably, two further criteria are used: the Receiver Operating Characteristic (ROC) curve and the decidability index d'. The ROC curve gives the performance results (FAR and FRR) of the system at various operating points; d' gives the distance between the genuine distribution and the imposter distribution. In other words, d' measures how well separated the two distributions are, since recognition errors are caused by their overlap.
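As a small illustration of how FAR and FRR follow from the two distributions, the sketch below draws placeholder samples for the genuine and imposter match scores (reusing, as an assumption, the means and standard deviations reported further down, with a lower score meaning a better match) and counts each error type at a few candidate thresholds; sweeping the threshold and recording the (FAR, FRR) pairs traces out the ROC curve.

    import numpy as np

    def far_frr(genuine, imposter, threshold):
        """Error rates at one operating point, assuming lower score = better match."""
        far = float(np.mean(imposter < threshold))   # imposters wrongly accepted
        frr = float(np.mean(genuine >= threshold))   # genuine users wrongly rejected
        return far, frr

    rng = np.random.default_rng(0)
    genuine = rng.normal(0.089, 0.042, 10_000)    # placeholder genuine scores
    imposter = rng.normal(0.456, 0.018, 10_000)   # placeholder imposter scores

    for t in (0.25, 0.30, 0.35):
        print(t, far_frr(genuine, imposter, t))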

If their means are \mu_1 and \mu_2 and their standard deviations are \sigma_1 and \sigma_2, then d' is defined as

    d' = \frac{|\mu_1 - \mu_2|}{\sqrt{(\sigma_1^2 + \sigma_2^2)/2}}

  • 222,743 comparisons of different iris patterns yielded a mean value \mu_1 = 0.456 and \sigma_1 = 0.018
  • 340 comparisons of same-iris pairs yielded a mean value of \mu_2 = 0.089 and \sigma_2 = 0.042

The value of d' is found to be 11.36 for iris recognition, which is much higher than the value reported for any other biometric system.
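Plugging the reported numbers into the definition above confirms this value:

    d' = \frac{|0.456 - 0.089|}{\sqrt{(0.018^2 + 0.042^2)/2}} = \frac{0.367}{\sqrt{0.001044}} \approx 11.36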

So far we have covered the theoretical aspects of the technology behind iris recognition; we now move on to a practical implementation. We have taken as an example the implementation model presented by National Instruments (NIDAYS), Italy.

EXAMPLE OF IMPLEMENTATION

The hardware consists of a standard PC running Microsoft Windows, the NI 1411 acquisition board, and an analog single-chip color CCD camera. The system architecture is structured as shown in Fig. 1.

The software is powered by LabVIEW (LabVIEW Professional Development System 6.1) and LabVIEW RT (LabVIEW Real-Time Module 6.1). “The image analysis is realized with IMAQ (IMAQ Vision 6.01) and the data analysis use the SIGNAL PROCESSING TOOLSET by NI. A private library was developed for the recognition based on wavelet analysis, neural network and genetic algorithms.”

How does this system work?

  • “The image acquisition unit is responsible for the image acquisition and pre-reduction”, such as geometrical calibration and photometric alignment.
  • The console is used to control the system. It also allows the registration of new people and users.
  • The raw images are temporarily stored in the DataBase Unit (Figs. 3 & 4). It is linked to the system by SQL TOOLKIT version 2 by NI.
  • Then, each image is given to the Process and Analysis Unit (P&A Unit) for recognition. This is the most important unit, the algorithm of iris analysis; a rough structural sketch of its four stages follows this list. It consists of four processes provided by four subunits:
  • “The first subunit splits the colored image in the RGB frames and tests the morphology of the eyes in the real space.”
  • “The second subunit transforms the image into a 3D image, where the third dimension corresponds to a different weight of the eye with respect to a fixed coordinate frame and with respect to some characteristic parameters in
    iridology.“ (Fig. 5)
  • “The third subunit transforms the image in the frequency domain, where a wavelet analysis is carried out.”
  • “The last subunit transforms the image into a multidimensional object, where the dimensions of the space are linked with some principal parameters of the iris and pupil; then the algorithm executes the last analysis to recognize a person by using a genetic pipeline.” The result is shown in Fig. 6. This step is the core of the recognition.

  • After the analysis, the data is transported back to the Database, which is linked with a Dispatcher Information system. This unit automatically builds status reports about authorization of the right people, people flow, doors and access to the buildings, and so on. In particular, statistics, plots of data and events are produced and stored by the module. When a special event occurs, such as an alert, this unit can automatically reach supervisors and police by email and SMS (Short Message Service).
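Purely as a structural illustration of the four P&A subunits, and in Python rather than the LabVIEW/IMAQ environment the NI system actually uses, the pipeline can be sketched as below; every function body is a simplified stand-in, not NI's proprietary wavelet, neural-network and genetic-algorithm code.

    import numpy as np

    def split_rgb(image):
        """Subunit 1: split the colour image into its R, G and B frames."""
        return image[..., 0], image[..., 1], image[..., 2]

    def weight_surface(gray):
        """Subunit 2: treat the image as a 3-D object (x, y, weight per pixel)."""
        ys, xs = np.indices(gray.shape)
        return np.stack([xs, ys, gray], axis=-1)

    def wavelet_features(gray):
        """Subunit 3: frequency-domain analysis; a single Haar-style level here."""
        g = gray[:gray.shape[0] // 2 * 2, :gray.shape[1] // 2 * 2].astype(float)
        approx = (g[0::2, 0::2] + g[1::2, 0::2] + g[0::2, 1::2] + g[1::2, 1::2]) / 4.0
        detail = g[0::2, 0::2] - approx
        return approx, detail

    def recognize(features, templates):
        """Subunit 4: stand-in for the genetic pipeline; nearest stored template wins."""
        scores = [np.linalg.norm(features - t) for t in templates]
        return int(np.argmin(scores))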

According to a comparison of biometric technologies, iris recognition ranks ‘high’ in Universality, Uniqueness, Permanence, Performance and Circumvention, but ranks medium in Collectability and low in Acceptability.

REFERENCES:

  1. Biometrics: Personal Identification in Networked Society; Anil Jain, Ruud Bolle, Sharath Pankanti.
  2. Biometrics: Advanced Identity Verification; Julian Ashbourn.


Authors: Mabbu, Sathya Swathi (SJSU# 004243721) & Long Vuong (SJSU# 003739074)