

Int. J Comp Sci. Emerging Tech Vol-3 No 2 April, 2012

Iris Image Segmentation and Recognition


Amel Saeed Tuama
Software Engineering Department Technical college Kirkuk-IRAQ
Amal_aljaf70@yahoo.com

Abstract: Biometrics deals with the identification of individuals based on their biological or behavioural characteristics. Iris recognition is one of the newer biometric technologies used for personal identification, and one of the most reliable and widely used biometric techniques available. In general, a typical iris recognition method includes capturing iris images, testing iris liveness, image segmentation, and image recognition using traditional and statistical methods; each method has its own strengths and limitations. In this paper, an iris recognition system is presented in several steps. First, image pre-processing is performed, followed by extraction of the iris portion from the eye image. The extracted iris is then normalized, and an Iris Code is constructed using Daugman's rubber sheet model. Features are extracted by filtering the normalized iris region; this filtering is performed by convolution with a pair of Gabor filters. Finally, two Iris Codes are compared by computing the Hamming distance, a fractional measure of their dissimilarity. Experimental results show that a unique code can be generated for every eye image, that the important features are extracted from the image, and that those features can be matched against the data in an iris database. The approach is simple and effective. The system is implemented using Matlab.

Keywords: Iris recognition, Pupil localization, Iris localization, Identification, Normalization, Gabor filter.

1. Introduction
Biometrics involves recognizing individuals based on features derived from their physiological and behavioural characteristics. Biometric systems provide reliable recognition schemes to determine or confirm an individual's identity, and a higher degree of confidence can be achieved by using unique physical or behavioural characteristics to identify a person [1]. Automated personal identity authentication systems based on iris recognition are reputed to be the most reliable among all biometric methods: the probability of finding two people with identical iris patterns is almost zero. The iris is so distinctive that even the left and right eyes of the same individual are very different, which is why iris recognition technology is becoming an important biometric solution for people identification. Applications of these systems include computer system security, e-banking, credit cards, and secure access to buildings. An iris recognition system first has to identify the approximately concentric circular outer boundaries of the iris and the pupil in a photo of an eye. The set of pixels covering only the iris is then transformed into a bit pattern that preserves the information essential for a statistically meaningful comparison between two iris images. The mathematical methods used resemble those of modern lossy compression algorithms for photographic images. In the case of Daugman's algorithms, a Gabor wavelet transform is used to extract the spatial frequency range that contains a good signal-to-noise ratio given the focus quality of available cameras. The result is a set of complex numbers that carry local amplitude and phase information for the iris image. In Daugman's algorithms [2], all amplitude information is discarded, and the resulting bits that represent an iris consist only of the complex sign bits of the Gabor-domain representation of the iris image. Discarding the amplitude information ensures that the template remains largely unaffected by changes in illumination and virtually unaffected by iris color, which contributes significantly to the long-term stability of the biometric template. To authenticate via identification (one-to-many template matching) or verification (one-to-one template matching), a template created by imaging the iris is compared to a template stored in a database. If the Hamming distance is below the decision threshold, a positive identification has effectively been made.
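The phase-quantization idea described above can be sketched as follows. This is an illustrative reconstruction in Python/NumPy, not the author's Matlab code: each complex Gabor coefficient is reduced to two bits, the signs of its real and imaginary parts, and all amplitude information is dropped.

```python
import numpy as np

def phase_bits(coeffs: np.ndarray) -> np.ndarray:
    """Quantize complex Gabor coefficients to 2 bits each:
    the sign of the real part and the sign of the imaginary part.
    Amplitude is discarded, making the code robust to illumination."""
    real_bits = (coeffs.real >= 0).astype(np.uint8)
    imag_bits = (coeffs.imag >= 0).astype(np.uint8)
    return np.stack([real_bits, imag_bits], axis=-1).ravel()

# Example: two coefficients in different quadrants of the complex plane
code = phase_bits(np.array([1 + 2j, -3 - 0.5j]))
print(code)  # [1 1 0 0]
```

Two codes built this way can then be compared bit-by-bit with the Hamming distance described later in the paper.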
___________________________________________________________________________________ International Journal of Computer Science & Emerging Technologies IJCSET, E-ISSN: 2044-6004 Copyright ExcelingTech, Pub, UK (http://excelingtech.co.uk/)

Figure 1 - The outer structure of the human iris

A practical problem of iris recognition is that the iris is usually partially covered by eyelids and eyelashes (see Figure 1). In order to reduce the false-reject risk in such cases, additional algorithms are needed to identify the locations of eyelids and eyelashes and to exclude the corresponding bits of the resulting code from the comparison operation.

2. Related Work
In this section, techniques that have been used in iris recognition are discussed. Though the theory behind iris recognition was studied as early as the 19th century, most research has been done in the last few decades [5], [6], [7]. Daugman [8] used multiscale quadrature wavelets and Hamming distance for matching. Boles and Boashash [9] used a zero-crossing method, with dissimilarity functions for matching. Wildes et al. [10] used a Laplacian pyramid for analysis of the iris image. Lim et al. [11] used a 2D Haar transform to


extract iris data. Ma et al. [12] used multi-channel Gabor filtering to extract important data. Tisse et al. [13] used a Hilbert transform for extraction. In later research, Ma et al. [4] used a different spatial filter to extract features. The iris has a complex texture, and under infrared light many of its unique properties can be seen. Because the properties used are based on texture, eye color is unimportant, and images can be grey-scale. Typically, a CCD camera is used to obtain iris images; the camera should have a resolution of 512 dpi [3]. The iris is not believed to change drastically over time, so a database of images should remain reliable for a long time [4]. The technique described in Daugman's paper [5] uses wavelets to demodulate the iris image and extract two-dimensional modulations, which are turned into what is referred to as an Iris Code. The extracted Iris Code is compared to an Iris Code in a database, and if it is similar enough, it is considered a match. Most iris recognition used today is based on this method [5]. Ma et al. use a technique which first converts the round image of the iris into a rectangular pattern, essentially by unwrapping the circular image. Filters are then used to obtain the frequency distribution of the image, and this data is used to make a match in the iris database. According to the research of Ma et al., this technique is inferior only to Daugman's method [4].

3. Proposed Methodology
In this section, the proposed methodology for iris image recognition is discussed. Figure 2 shows the system processes that will be used.

Figure 2 - Flowchart of the methodology: Captured image -> Pre-processing -> Pupil localization -> Iris localization -> Normalization -> Feature Extraction -> Matching (against the database) -> Result

3.1 Pre-processing
The CASIA Iris Image Database is probably the largest and most widely used iris image database publicly available to iris recognition researchers; it has been released to more than 2,900 users from 70 countries since 2006. CASIA iris image database ver. 1, collected by the Institute of Automation, Chinese Academy of Sciences, is used in the proposed method. It was captured with a special camera that operates in the infrared spectrum, not visible to the human eye. Images are 320x280-pixel grey-scale, taken by a digital optical sensor designed by NLPR (National Laboratory of Pattern Recognition, Chinese Academy of Sciences). There are 108 classes (irises) in a total of 756 iris images. The iris is surrounded by various non-relevant regions such as the pupil, the sclera, and the eyelids, as well as noise caused by the eyelashes, the eyebrows, reflections, and the surrounding skin [9]. This noise needs to be removed from the iris image to improve recognition accuracy.

3.2 Image Segmentation
The main objective here is to remove non-useful information, namely the pupil segment and the parts outside the iris (sclera, eyelids, skin). The assumptions for the segmentation process are that the pupil is darker than the iris and the iris is darker than the sclera. Another assumption is that the pupil and iris have circular shapes with different centers. In many cases the upper and lower parts of the iris are occluded by eyelids.

3.2.1 Pupil localization
In order to determine the pupil location, the image of the eye is divided into 8x8 regions. The mean intensity of each region is calculated, and the lowest mean intensity value is used as the threshold T. The image of the eye is then transformed into a binary image using this threshold, as in (1) [16]:

g(x, y) = { 1,  f(x, y) <= T
          { 0,  f(x, y) > T        (1)

The resulting image g(x, y) is the binary form of the image f(x, y), and T is the threshold. The result of this process is a noisy pupil mask image. The noise is usually in the form of dark areas which come from eyelashes and/or eyebrows. Since this noise usually covers fewer than 500 pixels, dark areas smaller than 500 pixels are removed to reduce it. The final result is normally one dark area, which is the pupil. In the case that more than one dark object is found after noise removal, the dark object whose center is closest to the center of the image is selected as the pupil area. The diameter of the pupil is determined by finding the largest gradient change to the left and right of the center of the pupil. The result of this process is shown as the inner circle in Figure 3. Next, Freeman's chain code [16] is applied to find regions of 8-connected pixels that are assigned the value 1. Finally, the chain code algorithm is applied one last time in order to retrieve the only remaining region in the image (ideally the pupil). From this region, it is trivial to obtain its central moments [15, 16]. Finding the edges of the pupil involves the creation of two imaginary orthogonal lines passing through the center of the region: the boundaries of the binarized pupil are defined by the first pixel with intensity zero, moving from the center to the extremities.
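The regional-minimum thresholding of equation (1) can be sketched as below. This is a NumPy sketch under our own naming (the paper's Matlab routine is not reproduced here); the grid handling assumes the image dimensions divide evenly for simplicity.

```python
import numpy as np

def pupil_mask(f: np.ndarray, grid: int = 8) -> np.ndarray:
    """Binarize an eye image using the lowest 8x8-region mean as threshold T,
    as in equation (1): 1 where f(x, y) <= T (dark pupil), 0 elsewhere."""
    h, w = f.shape
    # Mean intensity of each region on a grid x grid partition of the image
    region_means = [
        f[i * h // grid:(i + 1) * h // grid,
          j * w // grid:(j + 1) * w // grid].mean()
        for i in range(grid) for j in range(grid)
    ]
    T = min(region_means)            # darkest region sets the threshold
    return (f <= T).astype(np.uint8)

# Toy example: a dark 8x8 "pupil" embedded in a brighter background
img = np.full((64, 64), 200.0)
img[28:36, 28:36] = 10.0
mask = pupil_mask(img)
print(mask.sum())  # 64 -> only the dark block is flagged as pupil
```

The small-blob removal and chain-code steps described above would then be applied to this mask before measuring the pupil diameter.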

Figure 3 - Localized iris between the inner and outer circles

3.2.2 Iris localization
The second process in image segmentation is iris localization. Since the top and bottom of the iris are often covered by eyelashes, the boundary of the iris is sought along the horizontal line starting from the pupil-to-iris boundary. The left and right boundaries of the iris are found by selecting the largest gradient change to the left and right of the pupil. This process yields the outer boundary of the iris, shown as the outer circle in Figure 3. The following steps are taken to detect the edges of the iris in image I(x, y):

1. Find the center (xcp, ycp) of the pupil and the horizontal pupil radius rx using the algorithm of Section 3.2.1.
2. Apply a linear contrast filter to image I(x, y): G(x, y) = a * I(x, y). We obtained satisfactory results with a = 1.4.
3. Create vector V = {v1, v2, ..., vw} holding the pixel intensities of the imaginary row passing through the center of the pupil, with w being the width of the contrasted image G(x, y).
4. Create vector R = {r(xcp+rx), r(xcp+rx+1), ..., rw} from the row that passes through the center of the pupil (ycp) in the contrasted iris image G(x, y). Vector R is formed by the elements of the ycp line that start at the right fringe of the pupil (xcp + rx) and go all the way to the width (w) of the image. Experience has shown that adding a small margin to the fringe of the pupil gives good results, as it compensates for small errors of the pupil-finding algorithm.
5. Similarly, create vector L = {l(xcp-rx), l(xcp-rx-1), ..., l1} from row ycp of G(x, y). This time vector L contains the elements of the pupil center line starting at the left fringe of the pupil and ending at the first element of that line.
6. For each side of the pupil (vector R for the right side and vector L for the left side):
   a. Calculate the average window vector A = {a1, ..., an}, where n = |L| or n = |R|. Vector A is subdivided into i = n / ws windows of size ws; for every window i >= 1, elements a(i*ws-ws) ... a(i*ws) contain the average of that window.
   b. Identify the edge point of the given side of the iris (vector L or R) as the first increase of values in Aj (1 <= j <= n) that exceeds a set threshold t.

Figure 4 - Example of steps taken to find the right edge of the iris. (a) The yellow line passes through ycp (the center of the pupil) of the original image; the red line shows the pixel intensities along that line. (b) The effect of contrast stretching, visible through the red line.
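Step 6 above (window averaging followed by detection of the first large increase) can be sketched as follows; the function name and the default values for `ws` and `t` are our own illustrative choices, not the paper's.

```python
import numpy as np

def iris_edge(profile, ws=8, t=12.0):
    """Find the iris/sclera edge along an intensity profile that starts at
    the pupil fringe: average the profile in windows of size ws, then return
    the pixel offset of the first window-to-window increase exceeding t."""
    profile = np.asarray(profile, dtype=float)
    n_windows = len(profile) // ws
    means = profile[:n_windows * ws].reshape(n_windows, ws).mean(axis=1)
    jumps = np.diff(means)               # change between consecutive windows
    hits = np.nonzero(jumps > t)[0]
    if hits.size == 0:
        return None                      # no edge found on this side
    return (hits[0] + 1) * ws            # start of the first brighter window

# Toy profile: dark iris (~60) followed by bright sclera (~180)
row = np.concatenate([np.full(40, 60.0), np.full(40, 180.0)])
print(iris_edge(row))  # 40 -> edge at the start of the first bright window
```

Running this once on vector R and once on (reversed) vector L gives the right and left iris boundary candidates.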

3.3 Normalization
The iris contains important unique features, such as stripes, freckles, and coronas, collectively referred to as the texture of the iris. In our method, these features are extracted using a variety of algorithms. Daugman suggested a Cartesian-to-polar transformation that maps each pixel in the iris area into a pair of polar coordinates (r, θ), where r and θ are on the intervals [0, 1] and [0, 2π] respectively [17]. This unwrapping can be formulated as

I(x(r, θ), y(r, θ)) -> I(r, θ)        (2)

such that

x(r, θ) = (1 - r) xp(θ) + r xi(θ)
y(r, θ) = (1 - r) yp(θ) + r yi(θ)        (3)

where I(x, y) is the iris region, (x, y) the Cartesian coordinates, (r, θ) the polar coordinates, and (xp, yp) and (xi, yi) the coordinates of the pupil and iris boundaries along direction θ, respectively. This representation is often called the rubber sheet model and is illustrated in Figure 5.
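Equations (2)-(3) can be sketched as the following unwrapping routine. This is an illustrative NumPy sketch under our own naming; for brevity it uses concentric circles and nearest-neighbour sampling, although the model itself allows different pupil and iris centers per angle θ.

```python
import numpy as np

def rubber_sheet(image, pupil, iris, radial_res=64, angular_res=256):
    """Unwrap the annular iris region into a rectangular block
    (Daugman's rubber sheet model, equations (2)-(3)).
    pupil and iris are (xc, yc, radius) circles."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)
    # Boundary points on the pupil and iris circles along each direction theta
    xb_p, yb_p = xp + rp * np.cos(theta), yp + rp * np.sin(theta)
    xb_i, yb_i = xi + ri * np.cos(theta), yi + ri * np.sin(theta)
    # Linear interpolation between the two boundaries: equation (3)
    x = (1 - r[:, None]) * xb_p[None, :] + r[:, None] * xb_i[None, :]
    y = (1 - r[:, None]) * yb_p[None, :] + r[:, None] * yb_i[None, :]
    h, w = image.shape
    xs = np.clip(np.round(x).astype(int), 0, w - 1)
    ys = np.clip(np.round(y).astype(int), 0, h - 1)
    return image[ys, xs]   # shape (radial_res, angular_res)

# Usage: normalized = rubber_sheet(eye, (160, 140, 30), (160, 140, 80))
```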



Figure 5 - Daugman's rubber sheet model

3.4 Feature Extraction
In our system, characteristic information from the iris is extracted by filtering the normalized iris region. This filtering is performed by convolution with a pair of Gabor filters. We also extract and store information about noise positions at this stage. The iris code [9] is thus formed by characteristic information extracted from the normalized iris filtered by convolution (a pair of resulting images) and a Boolean mask representing the positions of noisy pixels. A Gabor filter is a sine (or cosine) wave modulated by a Gaussian (see Figure 6). This kind of filter optimally extracts information in the space as well as the frequency domain. To extract iris features we designed two Gabor filters: the first is a sine wave modulated by a Gaussian; the second is the same but uses a cosine wave. In these filters, the central frequency is specified by the sine (or cosine) wave frequency, and the bandwidth varies with the Gaussian width. At the implementation level, each filter must be a matrix [7].

Figure 6 - Gabor filter generation: (a) A sine wave. (b) A Gaussian. (c) The sine wave modulated by the Gaussian.

Each Gabor wave must be in a discrete form. To get this, we only have to apply the two functions, sine and Gaussian (or the composition of the two), on a discrete space, for example the elements of a matrix. This matrix then serves as the convolution kernel to filter the normalized image in the usual way:

y(r, s) = Σ_{p=0}^{P-1} Σ_{q=0}^{Q-1} h(p, q) x(r - p, s - q)        (4)

where x(r, s) is the discrete input signal (the input image), h(p, q) is the convolution kernel with dimensions P x Q, and y(r, s) is the filtered output signal, with the same size as the input.

The proposed system uses one-dimensional vectors instead of matrices. Given a normalized image, each row of pixels is taken as an input signal and is filtered by the one-dimensional Gabor function. Each of these one-dimensional vectors is a cross-cut of the corresponding two-dimensional matrix, so at the implementation level we only have to calculate the central row of the Gabor matrix. One-dimensional filters notably speed up the filtering process because far fewer operations are needed to perform the convolution. Moreover, we have experimentally concluded that the accuracy of the whole system is unaffected when using one-dimensional kernels instead of two-dimensional ones. The filtered image is not taken as the process output: the output signal's sign is more characteristic than its value. An output pixel is positive or negative due to its own value and the values of its neighbouring pixels, whereas its magnitude depends on other image factors such as brightness or contrast and is thereby more susceptible to change when the input image conditions change. Therefore, after filtering, each image is thresholded in order to get the final iris code. This stage's output consists of a filtered and thresholded image, which can be taken as a bit field, one bit per pixel of the input image: a value of 1 means positive sign and a value of 0 means negative sign.

3.5 Code Comparisons
This phase consists of two steps, namely matching and identification. In the matching process, the extracted features of the iris are compared with the iris images in the database; if enough similarity is found, the subject is identified [7].

HD = (1/N) Σ_{j=1}^{N} CA(j) ⊕ CB(j)        (5)

where CA and CB are the coefficients of the two iris images and N is the size of the feature vector (in our case N = 702). The operator ⊕ is the Boolean XOR, which gives a binary 1 if the bits at position j in CA and CB are different, and 0 if they are the same. John Daugman, the pioneer of iris recognition, conducted his tests on a very large number of iris patterns (up to 3 million iris images) and deduced that the maximum Hamming distance between two irises belonging to the same person is 0.32 [17]. Since we were not able to access any large eye database and were only able to collect 60 images, we adopted this threshold. Thus, when comparing two iris images, their corresponding binary feature vectors are passed to a function responsible for calculating the Hamming distance between the two. The decision of whether these two images belong to the same person depends on the following rule:

If HD <= 0.32, decide that it is the same person.
If HD > 0.32, decide that it is a different person
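The one-dimensional Gabor filtering and sign quantization of Section 3.4 can be sketched as below. The kernel length, frequency, and sigma values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gabor_1d(ws=31, f=0.1, sigma=5.0):
    """Central row of a Gabor matrix: a sine and a cosine wave modulated
    by a Gaussian, returned as a pair of 1-D convolution kernels."""
    t = np.arange(ws) - ws // 2
    gauss = np.exp(-t**2 / (2 * sigma**2))
    return np.sin(2 * np.pi * f * t) * gauss, np.cos(2 * np.pi * f * t) * gauss

def iris_code(norm_iris):
    """Filter each row of the normalized iris with the two 1-D kernels
    and keep only the sign of the response (1 = positive, 0 = negative)."""
    k_sin, k_cos = gabor_1d()
    rows_sin = np.array([np.convolve(row, k_sin, mode='same') for row in norm_iris])
    rows_cos = np.array([np.convolve(row, k_cos, mode='same') for row in norm_iris])
    return np.concatenate([(rows_sin >= 0), (rows_cos >= 0)], axis=0).astype(np.uint8)

# Usage: code = iris_code(normalized)  # bit field, 2 bits per input pixel
```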


(or the left and right eyes of the same person). The Hamming distance is the matching metric employed by Daugman for comparing two bit patterns: it represents the number of bits that differ between the two patterns. Another matching metric that can be used to compare two templates is the weighted Euclidean distance, which involves much more computation and is especially used when the two templates consist of integer values. The normalized correlation matching technique also involves a significant amount of computation. Hence the Hamming distance classifier is chosen, as it is fast and simple compared with the weighted Euclidean distance and normalized correlation classifiers. Since an individual iris region contains features with high degrees of freedom, each iris region produces a bit pattern which is independent of that produced by another iris; on the other hand, two iris codes produced from the same iris will be highly correlated. In the ideal case, if two bit patterns are completely independent, such as iris templates generated from different irises, the Hamming distance between them is high, because independence implies the two bit patterns will be largely different. If two patterns are derived from the same iris, the Hamming distance between them is close to zero, since they are highly correlated and the bits should agree between the two iris codes.
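The comparison of equation (5) and the 0.32 decision rule can be sketched as follows (the function names and example bit vectors are ours, not from the paper):

```python
import numpy as np

def hamming_distance(ca, cb):
    """Fractional Hamming distance of equation (5): the share of
    positions where the two binary codes disagree (XOR)."""
    ca, cb = np.asarray(ca), np.asarray(cb)
    return np.count_nonzero(ca != cb) / ca.size

def same_person(ca, cb, threshold=0.32):
    """Daugman-style decision rule used in Section 3.5."""
    return hamming_distance(ca, cb) <= threshold

a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
b = np.array([1, 0, 0, 1, 0, 0, 1, 1, 1, 1])
print(hamming_distance(a, b))  # 0.2 -> two of ten bits differ
print(same_person(a, b))       # True
```

In the full system, bits flagged by the noise mask would be excluded from both the XOR count and the normalizing size before this ratio is computed.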

The results show that the error rate increases as more subjects are considered.

Table 1: Results for FAR and FRR indications

Number of subjects | FAR% | FRR%
20                 | 0.31 | 12.67
40                 | 1.02 | 6.5
60                 | 2.43 | 3.17
80                 | 5.04 | 1.78
100                | 9.4  | 0.89

5. Conclusion
In this paper, a fast and effective real-time algorithm is described for localizing and segmenting the iris and pupil boundaries of the eye from database images. This approach detects the center and the boundaries quickly and reliably, even in the presence of eyelashes, under very low contrast, and in the presence of excess illumination. The performance of the iris recognition system is enhanced by using a small routine for edge detection and statistical features for recognition, and the comparison of two iris patterns was tested using the Hamming distance. We have successfully developed a new iris recognition system capable of comparing two iris images. This identification system is quite simple, requiring few components, and is effective enough to be integrated within security systems that require an identity check. Results have demonstrated a 96% accuracy rate with a relatively rapid execution time. It is suggested that this algorithm can serve as an essential component for iris recognition applications. The personal identification technique developed by John Daugman was implemented and has been tested only on the CASIA database images.

4. Experimental Results
To evaluate the performance of the proposed system, the CASIA [14] iris image database (version 1) is used, created by the National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, which consists of 108 subjects with 7 samples each. Images of the CASIA iris image database are mainly from Asians. For each iris class, images are captured in two different sessions, with an interval of one month between sessions; there is no overlap between the training and test samples. In our experiments, a three-level Contourlet decomposition is adopted. The experiments are performed in Matlab 7.0. The normalized iris image obtained from the localized iris image is segmented by the Daugman method, and a pair of Gabor filters (a sine or cosine wave modulated by a Gaussian) is used for feature extraction. This software is widely used for comparison purposes as a Daugman-like (not exactly Daugman) algorithm, which produces a 1D feature vector from individual iris images. The dissimilarity between a pair of feature vectors is measured by their Hamming distance. The performance of a biometric system is estimated using the false acceptance rate (FAR) and the false rejection rate (FRR): FAR is the rate at which an impostor is incorrectly accepted as genuine, and FRR is the rate at which a genuine subject is incorrectly rejected as an impostor. Table 1 shows the FAR and FRR for different numbers of subjects; when the number of subjects is 60, the FAR and FRR are 2.43% and 3.17%, respectively.
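The FAR/FRR estimation described above can be sketched as follows; the score lists here are invented purely for illustration, not the paper's experimental data.

```python
def far_frr(genuine, impostor, threshold=0.32):
    """Estimate FAR and FRR (in percent) from lists of Hamming distances:
    genuine  - distances between codes of the SAME iris,
    impostor - distances between codes of DIFFERENT irises.
    FAR: impostors accepted (HD <= threshold); FRR: genuines rejected."""
    far = 100.0 * sum(d <= threshold for d in impostor) / len(impostor)
    frr = 100.0 * sum(d > threshold for d in genuine) / len(genuine)
    return far, frr

# Toy scores: genuine comparisons cluster low, impostor comparisons high
far, frr = far_frr([0.10, 0.25, 0.40, 0.20], [0.45, 0.30, 0.48, 0.50])
print(far, frr)  # 25.0 25.0
```

Sweeping the threshold instead of fixing it at 0.32 would trace out the trade-off between the two error rates.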

Acknowledgment
The author would like to express her thanks to the staff of the Software Engineering Department at the Technical College, Kirkuk-IRAQ, especially Asst. Prof. M. M. Siddeq and Asst. Prof. Abdulrahman Ikram, for their help and advice.

References
[1] A. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in a Networked Society, Kluwer, pp. 276-284, 1999.
[2] J. Daugman, Biometric Personal Identification System Based on Iris Analysis, U.S. Patent No. 5,291,560, 1994.
[3] A. Jain, An Introduction to Biometric Recognition, IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, pp. 4-20, 2004.
[4] L. Ma, T. Tan, Y. Wang, and D. Zhang, Personal Identification Based on Iris Texture Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, 2003.
[5] J. Daugman, Recognizing Persons by Their Iris Patterns, Cambridge University, Cambridge, UK, pp. 103-123, http://www.iris-recognition.org/, 2002.
[6] J. Daugman, High Confidence Visual Recognition of Persons by a Test of Statistical Independence, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov. 1993.
[7] J. Daugman, Demodulation by Complex-Valued Wavelets for Stochastic Pattern Recognition, Int'l J. Wavelets, Multiresolution and Information Processing, vol. 1, no. 1, pp. 1-17, 2003.
[8] J. Daugman, High Confidence Visual Recognition of Persons by a Test of Statistical Independence, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov. 1993.
[9] W. Boles and B. Boashash, A Human Identification Technique Using Images of the Iris and Wavelet Transform, IEEE Trans. Signal Processing, vol. 46, no. 4, pp. 1185-1188, 1998.
[10] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey, and S. McBride, A Machine-Vision System for Iris Recognition, Machine Vision and Applications, vol. 9, pp. 1-8, 1996.
[11] S. Lim, K. Lee, O. Byeon, and T. Kim, Efficient Iris Recognition through Improvement of Feature Vector and Classifier, ETRI J., vol. 23, no. 2, pp. 61-70, 2001.
[12] L. Ma, Y. Wang, and T. Tan, Iris Recognition Based on Multichannel Gabor Filtering, Proc. Fifth Asian Conf. Computer Vision, vol. I, pp. 279-283, 2002.
[13] C. Tisse, L. Martin, L. Torres, and M. Robert, Person Identification Technique Using Human Iris Recognition, Proc. Vision Interface, pp. 294-299, 2002.
[14] CASIA iris image database, Institute of Automation, Chinese Academy of Sciences, http://www.sinobiometrics.com.
[15] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice-Hall, Upper Saddle River, New Jersey, 2002.
[16] H. Freeman, Computer Processing of Line Drawing Images, Computing Surveys, vol. 6, no. 1, pp. 57-97, March 1974.
[17] L. Ma, Y. Wang, and T. Tan, Iris Recognition Using Circular Symmetric Filters, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 2002.

Author Biographies
Amel Saeed Tuama was born in Baghdad-IRAQ in 1970. She received her M.Sc. in computer science from the University of Technology, Baghdad-IRAQ, in 2003 and her B.Sc. in computer science, also from the University of Technology, Baghdad-IRAQ, in 1992. She worked in the Iraqi Ministry of Science and Technology from 2003 to 2008. In 2008 she was appointed as a lecturer in the Software Engineering Department of the Technical College, Kirkuk-IRAQ. Her research interests include neural network applications, image processing, pattern recognition, and image compression.
