Fingerprint Image Enhancement: Pre and Post Processing

ARUN GAIKWAD, FELLOW, IETE, JUZAR BADAMI, SHAILESH MER,

AVINASH MATTOO

Vishwakarma Institute of Technology, Pune- 37.

University of Pune, India.

NAVNATH PASALKAR, FELLOW, IETE,

Director of Technical Education, Maharashtra state, Mumbai.

Abstract:

Image processing has gained tremendous importance in many applications, including biometrics. In a password-ridden world, where daily life revolves around passwords and code names to maintain privacy and security, biometrics is a refreshing and efficient alternative. There are various identification methods based on the iris, face, retina, odor, hand geometry, fingerprints, etc. Fingerprints are generally regarded as a uniquely identifying characteristic and have traditionally been used for identification by law enforcement agencies. Automated fingerprint analysis, with image processing as its tool, has lately gained importance. Most fingerprint-based personal identification systems use minutiae-based matching, and preprocessing is essential for proper feature extraction and improved matching accuracy. Several image enhancement algorithms must be applied to fingerprint images scanned with commercially available scanners before the features are extracted. In this paper we examine the results of a few conventional preprocessing algorithms and present a new approach to image enhancement. Verification accuracy with the new approach was found to improve and to be quite acceptable for low- and medium-cost applications.

Key-words: Minutiae, Ridges, Segmentation, Verification, Feature Extraction.

  1. INTRODUCTION:

With the advent of electronic banking, e-commerce, and smart cards, and with an increased emphasis on the privacy and security of information stored in various databases, automatic personal identification has become a very important topic. Accurate automatic personal identification is now needed in a wide range of civilian applications involving the use of passports, cellular telephones, automatic teller machines, and driving licenses. Traditional knowledge-based (password or personal identification number (PIN)) and token-based (passport, driver license, and ID card) identifications are prone to fraud, because PINs may be forgotten or guessed by an impostor and tokens may be lost or stolen. As an example, MasterCard credit card fraud alone now amounts to more than 450 million US dollars annually [1]. Therefore, it has become increasingly important that a person be identified on the basis of some biometric feature: a behavioral or physiological characteristic that is personal and unique to the individual.

2. FINGERPRINT:

Among all the biometrics (e.g., face, fingerprints, hand geometry, retina, iris, signature, voice print, face thermogram, hand vein, gait, ear, odor, keystroke dynamics, etc.), fingerprint-based identification is one of the most mature and proven techniques. A fingerprint is a pattern of ridges and valleys on the surface of the finger. The uniqueness of a fingerprint is determined by the overall pattern of ridges and valleys as well as by the local ridge anomalies (ridge bifurcations and ridge endings, called minutiae points).

A digital camera (or capacitive sensor) generates a high-resolution fingerprint (FP) image. The basic pattern of the print (arch, whorl, loop, etc.) is not unique to the user and is therefore not adequate as an identifier, so finer details must be extracted. While curving and forming patterns, the friction ridges of a fingerprint diverge and converge into ending and bifurcation points. The relative locations of these points and their interrelations form a unique profile. In addition, the number of ridges between minutiae, the ridge density, and the locations of small sweat pores may also be recorded. These measurements form the template, ranging in size from around 100 bytes to over 1000 bytes, which is generally one of the largest biometric profiles. As fingerprint sensors become smaller and cheaper, automatic identification based on fingerprints is becoming an attractive complement to traditional methods of identification [6].

The critical factor in the wide use of fingerprints is satisfying the performance requirements (e.g., matching speed and accuracy) of the emerging civilian identification applications. Several civilian applications, such as fingerprint-based smart cards, national ID, and driver's licenses, are in the process of implementing fingerprint-based verification systems. The commonly used minutiae-based fingerprint matching technique depends on the spatial pattern of the features of the fingerprint. However, extracting these features from scanned fingerprint images for matching continues to pose a challenge.

3. FINGERPRINT IMAGE ACQUISITION AND STORAGE:

The schematic of inkless image acquisition and a typical scanner is shown in Fig. 1.

Fig. 1 Fingerprint Image Acquisition.

The finger whose image is to be taken is placed on the scanning area of the fingerprint scanner. Depending on the working principle, scanners fall into two broad categories: optical scanners and capacitive scanners. Most applications use a resolution of 500 dpi and store the images in BMP or TIFF format. However, the resolution and image file format can be varied to suit the needs of the application [6].

3.1 Degradation of image quality:

Most feature extraction algorithms assume that the images provided to them have very good contrast (preferably binary), no noise (neither global, i.e., background, nor local), and a ridge width of one pixel. It is very easy to extract correct features (minutiae, core, delta, etc.) from such images, as there is little deformity. However, images obtained from commercial scanners are gray-scale images (intensity spread over 256 levels), as against the two levels desired by the feature extraction algorithms. Moreover, they contain a large amount of global (background) noise, which appears primarily because, during scanning, the non-white background contributes some gray-level intensity even in the non-signal areas of the image. Besides, local noise (pockets, breaks, etc.), residue from previous finger scans, dirt, sweat, and dryness of the skin also contribute false features. Further, the ridge width in images scanned with a 500 dpi scanner is approximately 10 pixels instead of the desired single pixel. As a result, several image enhancement algorithms must be applied to the scanned fingerprint image before it can serve as input to the feature extraction algorithms. Fig. 2 shows ideal and actual images obtained with the scanners.

Fig. 2: a) Ideal image; b) Excessively pressed; c) Poor contrast.

4. FINGERPRINT IMAGE ENHANCEMENT (Preprocessing):

The conventional approach to fingerprint image enhancement applies high-pass filtering, low-pass filtering, histogram equalization, and directional Laplacian filtering before thinning the image.

Proposed approach:

The proposed approach to fingerprint image enhancement was tested on our database. The details of our approach are presented in the following sections.

4.1 Variance cum mean based segmentation:

This is a spatial processing technique whose primary aim is to clean up the background, i.e., to make the grayish non-signal areas white while retaining the ridges. The outcome is a sharper transition from a ridge to a valley or to the background, which improves the results of thresholding and thinning by making a very clear distinction between ridge and non-ridge areas, thereby improving the contrast. The technique replaces the value of a pixel depending upon the global mean, the local mean, and the local variance, the latter two computed over a W×W mask in the neighborhood of the pixel (the original value is included in the computation).

Principle:

Signal Area = High Local Variance + Low Local Mean.

Non Signal Area = Low Local Variance + High Local Mean.

Algorithm:

1) Calculate the global mean of the entire image:

$\bar{X}_{\mathrm{global}} = \frac{1}{n} \sum_{i=1}^{n} x_i$

where n is the total number of pixels in the image.

2) Consider a W×W mask around each pixel in the image.

3) Calculate the local mean ($\bar{X}_{\mathrm{local}}$) and local variance ($\sigma^2_{\mathrm{local}}$) over the m = W×W pixels of this mask:

$\bar{X}_{\mathrm{local}} = \frac{1}{m} \sum_{i=1}^{m} x_i$

$\sigma_{\mathrm{local}} = \left( \frac{1}{m-1} \sum_{i=1}^{m} \left( x_i - \bar{X}_{\mathrm{local}} \right)^2 \right)^{1/2}$

4) If $\bar{X}_{\mathrm{global}} < \bar{X}_{\mathrm{local}}$ and $\sigma^2_{\mathrm{local}} < k$, set the center pixel to white; otherwise, leave the pixel unchanged. The value of k was determined empirically by testing over a portion of the database. The result of segmentation on an image from the database is shown in fig. 6(b).
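A minimal sketch of this segmentation rule, assuming an 8-bit gray-scale input image; the window size W and threshold k below are illustrative placeholders, not the paper's empirical values:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def segment(img, W=9, k=150.0):
    """Variance-cum-mean segmentation: whiten bright, low-variance
    (background) neighbourhoods; keep high-variance ridge areas."""
    f = img.astype(np.float64)
    global_mean = f.mean()
    local_mean = uniform_filter(f, size=W)
    # var = E[x^2] - (E[x])^2 over each W x W window
    local_var = uniform_filter(f * f, size=W) - local_mean ** 2
    out = img.copy()
    background = (local_mean > global_mean) & (local_var < k)
    out[background] = 255          # make the non-signal area white
    return out
```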

4.2 Hole filling method:

The segmented image contains many pores or holes in the ridges, which must be removed to improve the performance of the thinning algorithm; we therefore implemented a noise-reduction algorithm. In this approach, the gray level of each pixel is replaced by the median of the gray levels in its neighborhood. The method is effective in removing strong spike-like components while preserving edge sharpness in the ridges.

Algorithm:

Consider the gray level of a pixel together with those of its 8 neighbors, X[i], i = 0 to 8. Sort these nine values into ascending order, so that X[i-1] ≤ X[i] for every i; the median is then the middle element, X[4], which replaces the gray level of the center pixel.
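A minimal sketch of this despeckling step; scipy's median filter performs exactly this sort-and-take-middle operation over a 3×3 neighbourhood:

```python
from scipy.ndimage import median_filter

def fill_holes_median(img):
    """Replace each pixel by the median of its 3 x 3 neighbourhood:
    removes spike-like noise while keeping ridge edges sharp."""
    return median_filter(img, size=3)
```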

The results of this algorithm on a few sample images from the database are shown in fig.6(c).

4.3 Thresholding:

The requirement in FP identification is to detect the ridges against the background, so a monochrome representation of the FP is sufficient. Thresholding converts a gray-level image into a binary image. We present a few commonly used thresholding algorithms and the relevant results.

4.3.1 Constant Thresholding:

In this method, a fixed threshold is chosen. If the pixel is darker than the threshold it is made black; otherwise white:

If X[i] > k, then X[i] = 255; else X[i] = 0,

where k is a suitable constant determined empirically from the histogram of the FP database.
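A one-line numpy version of this rule (the value of k is illustrative):

```python
import numpy as np

def constant_threshold(img, k=128):
    # pixels brighter than k become white (255), the rest black (0)
    return np.where(img > k, 255, 0).astype(np.uint8)
```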

4.3.2 Global Median Thresholding:

In the Global Median approach, the median of the gray level intensity distribution of the complete image is found. This median is used as the threshold.

If X[i], i = 0 to m, represents the gray-level intensities of the image pixels sorted in ascending order, then

$X[i] = \begin{cases} 0, & \text{if } X[i] < X[m/2] \\ 255, & \text{otherwise} \end{cases}$

4.3.3 Global Mean Thresholding:

This approach dictates that the mean of the gray-level intensity distribution of the image be taken as the threshold:

$\bar{X}_{\mathrm{global}} = \frac{1}{n} \sum_{i=1}^{n} x_i$

$X[i] = \begin{cases} 0, & \text{if } X[i] < \bar{X}_{\mathrm{global}} \\ 255, & \text{otherwise} \end{cases}$
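A sketch covering both global variants; passing np.median instead of np.mean gives the global-median rule of section 4.3.2:

```python
import numpy as np

def global_threshold(img, stat=np.mean):
    # stat=np.mean   -> global-mean thresholding
    # stat=np.median -> global-median thresholding
    t = stat(img)
    return np.where(img < t, 0, 255).astype(np.uint8)
```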

4.3.4 Local Median Thresholding:

In this method of image binarization, a block of size W×W is considered around each pixel, and the median of the gray-level intensity distribution inside this block is determined. This median value is used as the threshold for the center pixel.

If X[i], i = 0 to m, represents the gray-level intensities of the pixels in the W×W mask sorted in ascending order, then

$X[i] = \begin{cases} 0, & \text{if } X[i] < X[m/2] \\ 255, & \text{otherwise} \end{cases}$

4.3.5 Local Mean Thresholding:

In the local mean thresholding algorithm, a block of size W×W is considered around each pixel, and the mean of the gray-level intensity distribution inside this block is determined. This mean value is used as the threshold for the binarization of the center pixel:

$\bar{X}_{\mathrm{local}} = \frac{1}{m} \sum_{i=1}^{m} x_i$, with m = W×W pixels in the block,

$X[i] = \begin{cases} 0, & \text{if } X[i] < \bar{X}_{\mathrm{local}} \\ 255, & \text{otherwise} \end{cases}$

In our work we determined that the local mean thresholding algorithm worked best for the database under consideration. The effect of thresholding is shown in fig. 6(d).
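A minimal sketch of this preferred local-mean rule; the sliding W×W mean is computed with scipy's uniform_filter, and W is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_threshold(img, W=9):
    # each pixel is compared against the mean of its own W x W block
    local_mean = uniform_filter(img.astype(np.float64), size=W)
    return np.where(img < local_mean, 0, 255).astype(np.uint8)
```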

5. HOLE REMOVAL:

Though many holes are removed by the despeckle algorithm, some remain in the ridges and give rise to small loop artifacts after thinning. We therefore apply a hole-removal technique, achieved simply by applying masks to the image; a 3×3, 2×2, or single-pixel-wide hole can be removed by this method. The algorithm checks the boundary pixels around a white pixel (or a block of up to 9 white pixels); if the boundary is black, all the enclosed white pixels are made black. The result of applying this algorithm to a thresholded image is shown in fig. 6(e). It improves the performance of the thinning algorithm.
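An equivalent sketch using connected-component labelling instead of explicit masks: any white component of at most 9 pixels is, by construction, bounded by black, so it is filled (the size limit mirrors the 3×3 case above):

```python
import numpy as np
from scipy import ndimage

def remove_holes(binary, max_hole_px=9):
    """Fill small white pockets enclosed by the black ridges of a
    thresholded image."""
    labels, _ = ndimage.label(binary == 255)   # label white regions
    sizes = np.bincount(labels.ravel())
    small = sizes <= max_hole_px
    small[0] = False                 # label 0 is the black ridge area
    out = binary.copy()
    out[small[labels]] = 0           # paint the small holes black
    return out
```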

6. THINNING:

The result of a popular thinning (skeletonization) algorithm [5] on the enhanced images is shown in fig. 6(f).
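As a stand-in sketch (reference [5] describes the classical skeletonization approach; this is not necessarily the authors' exact implementation), scikit-image's skeletonize reduces the binarized ridges to the desired one-pixel width:

```python
import numpy as np
from skimage.morphology import skeletonize

def thin(binary):
    # ridges are black (0) in the thresholded image, so invert first
    ridges = binary == 0
    skel = skeletonize(ridges)       # boolean, one-pixel-wide ridges
    return np.where(skel, 0, 255).astype(np.uint8)
```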

7. DESPIKING:

The ridges in a thinned image have many abrupt steps, which calls for smoothing of the ridges. The despiking technique smooths the ridge flow around curves where the ridge takes on a staircase pattern. This is done by applying masks to every pixel and removing the unwanted pixels, as shown in fig. 3; a sketch of one such rule is given after the figure. The result is shown in fig. 6(g).

Fig. 3: Despiking.
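The paper's exact masks are those of fig. 3; as an assumption-level sketch, the rule below deletes any skeleton pixel whose only two ridge neighbours are themselves adjacent, which removes staircase corner pixels without breaking the ridge:

```python
import numpy as np

def despike(skel):
    """skel: boolean array, True = ridge pixel of a one-pixel skeleton.
    Remove corner pixels whose two neighbours stay connected without them."""
    out = skel.copy()
    H, W = out.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            if not out[y, x]:
                continue
            nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and out[y + dy, x + dx]]
            if len(nbrs) == 2:
                (ay, ax), (by, bx) = nbrs
                if max(abs(ay - by), abs(ax - bx)) == 1:
                    out[y, x] = False   # staircase step: safe to remove
    return out
```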

8. POST PROCESSING:

Post-processing operations are applied to the image after minutiae extraction. They are essential for improving matching efficiency by eliminating false minutiae.

8.1 Offshoot Elimination:

The aim of this algorithm is to remove the small offshoots attached to the ridges, which result in false bifurcations. It is applied after marking the bifurcations and ridge endings on the despiked image. A bifurcation is a feature point at which a ridge splits, i.e., a junction of three ridges; a ridge ending is the end point of a ridge. A typical image with bifurcations (red) and ridge endings (green) marked on it is shown in fig. 4. The small offshoots left in the thinned image produce many false bifurcations and ridge endings; with this algorithm applied, these false features are eliminated.

Fig.4 False Minutiae.

Algorithm:

1) Check for a bifurcation.

2) Follow each ridge leaving it and see if a ridge ending resides within a specific distance (about 6-7 pixels).

3) If such a ridge ending is found, the traced portion of the ridge is eliminated.

The result of this algorithm applied to a despiked image is shown in fig. 6(h).
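A sketch of this pruning, under two assumptions: the skeleton is boolean with a one-pixel empty border, and minutiae are detected with the standard crossing-number test (which the paper does not spell out). The trace here runs from each ridge ending back toward a bifurcation, an equivalent inversion of the bifurcation-first description above that follows only one path:

```python
import numpy as np

def crossing_number(skel, y, x):
    """Half the number of 0/1 transitions around the pixel:
    1 = ridge ending, 3 = bifurcation."""
    p = [skel[y-1, x], skel[y-1, x+1], skel[y, x+1], skel[y+1, x+1],
         skel[y+1, x], skel[y+1, x-1], skel[y, x-1], skel[y-1, x-1]]
    return sum(abs(int(p[i]) - int(p[(i + 1) % 8])) for i in range(8)) // 2

def remove_offshoots(skel, max_len=7):
    out = skel.copy()
    H, W = out.shape
    endings = [(y, x) for y in range(1, H - 1) for x in range(1, W - 1)
               if out[y, x] and crossing_number(out, y, x) == 1]
    for start in endings:
        if not out[start]:                     # already erased by an earlier spur
            continue
        path, prev, cur = [start], None, start
        for _ in range(max_len):
            y, x = cur
            nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and out[y + dy, x + dx]
                    and (y + dy, x + dx) != prev]
            if len(nbrs) != 1:                 # a junction (or a break) reached
                break
            prev, cur = cur, nbrs[0]
            path.append(cur)
        if crossing_number(out, *cur) >= 3:    # the spur meets a bifurcation
            for y, x in path[:-1]:             # erase the spur, keep the junction
                out[y, x] = False
    return out
```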

8.2 False Minutiae Removal algorithm:

By false minutiae we mean minutiae that do not form part of the actual fingerprint but have crept into the scanned image through noise added during scanning. Such minutiae are not repeatable and are therefore of no use in matching two fingerprints. The algorithm for removing false minutiae is applied after minutiae extraction has produced a list of all the minutiae in the image.

Fig. 5: Example of false minutiae (source image and the same image with bifurcations marked).

In fig. 5, there is a small pocket in the ridge caused by noisy scanning and the subsequent image preprocessing. This pocket is treated as a pair of bifurcations by the feature extraction algorithm. Since it is not a repeatable feature, we treat it as a false bifurcation. To remove it, we traverse outward from each bifurcation along its ridges. If any of the ridges starting from a bifurcation reaches another bifurcation, and the distance between the two bifurcations under consideration is below a certain threshold, we delete both bifurcations. The result after applying this algorithm is shown in fig. 6(h).
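A simplified sketch of this rule, with straight-line distance standing in for the along-ridge trace described above; min_dist is an illustrative threshold:

```python
import numpy as np

def prune_close_bifurcations(bifurcations, min_dist=8):
    """bifurcations: list of (y, x) points. Delete both members of any
    pair closer than min_dist, as such pairs usually bound a noise pocket."""
    pts = np.asarray(bifurcations, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < min_dist:
                keep[i] = keep[j] = False
    return [b for b, k in zip(bifurcations, keep) if k]
```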

9. EXPERIMENTAL RESULTS:

We have tested our fingerprint image processing algorithms on a database of 200 images (10 images per finger from 20 individuals), captured with a Veridicom 5th Sense capacitive scanner. The image size is 300×300 pixels at a resolution of 500 dpi. Approximately 90% of the fingerprint images are of reasonable quality, similar to that shown in fig. 6(a), while about 10% are of poor quality, mainly due to creases and smudges in the ridges and dryness of the impressed finger. Sample results of the various preprocessing algorithms applied to fingerprint images from the database are shown in fig. 6.

10. CONCLUSION:

We have designed and implemented a fingerprint image enhancement technique comprising pre-processing prior to feature extraction, followed by post-processing, which together improve the performance of the minutiae extraction and matching algorithms. Experimental results show that our method achieves better performance in a realistic environment. The following improvements were observed in our results over those obtained by the conventional approach:

1) Greater retention of signal area.

2) Greater retention of extractable features.

3) Reduction in false inter-ridge connections.

4) Improvement in the FAR and FRR.

We have observed that a number of factors are detrimental to the correct location of minutiae; among them, poor image quality is the most serious. Therefore, our future efforts will focus on further improving global image enhancement schemes.

References:

[1] A. K. Jain, R. M. Bolle, and S. Pankanti, Eds., Biometrics: Personal Identification in Networked Society, Norwell, MA: Kluwer, 1999.

[2] A. K. Jain, S. Prabhakar, L. Hong, and S. Pankanti, "Filterbank-Based Fingerprint Matching," IEEE Transactions on Image Processing, vol. 9, no. 5, May 2000.

[3] H. C. Lee and R. E. Gaensslen, Eds., Advances in Fingerprint Technology, Elsevier, New York, 1991.

[4] D. P. Mittal and E. K. Teoh, "An Automated Matching Technique for Fingerprint Identification," First International Conference on Knowledge-Based Intelligent Electronic Systems, Adelaide, Australia, May 21-23, 1997.

[5] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Addison-Wesley, 1992.

[6] A. N. Gaikwad and N. B. Pasalkar, "Fingerprint Sensors for Biometric Person Identification," Proceedings of the 10th National Seminar on Physics and Technology of Sensors, March 4-6, 2004, pp. 284-289.


Fig. 6: Experimental results. a) Original image; b) Segmented image; c) De-speckled image; d) Thresholded image; e) Hole-filled image; f) Thinned image; g) Despiked image (smooth ridge flow); h) Minutiae marked, with offshoots and false minutiae removed.