
CHAPTER 1 INTRODUCTION

1.1 OBJECTIVE

The main objective of the project, Human Iris Recognition Using Haar Wavelet Decomposition, is to perform iris recognition. The system is implemented to test the iris for the purpose of authenticating users.

1.2 BIOMETRIC TECHNOLOGY

A biometric system provides automatic recognition of an individual based on some unique feature or characteristic possessed by the individual. Biometric systems have been developed based on fingerprints, facial features, voice, hand geometry, handwriting, the retina, and the one presented in this thesis, the iris.

Biometric systems work by first capturing a sample of the feature, such as recording a digital sound signal for voice recognition, or taking a digital colour image for face recognition. The sample is then transformed using some mathematical function into a biometric template. The biometric template provides a normalized, efficient and highly discriminating representation of the feature, which can then be objectively compared with other templates in order to determine identity. Most biometric systems allow two modes of operation: an enrolment mode for adding templates to a database, and an identification mode, in which a template is created for an individual and a match is then searched for in the database of pre-enrolled templates.

A good biometric is characterized by a feature that is highly unique, so that the chance of any two people having the same characteristic is minimal; stable, so that the feature does not change over time; and easily captured, in order to provide convenience to the user and to prevent misrepresentation of the feature.

1.2.1 THE HUMAN IRIS

The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. A front-on view of the iris is shown in Figure 1.2.1. The iris is perforated close to its centre by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter. The iris consists of a number of layers; the lowest is the epithelium layer, which contains dense pigmentation cells. The stromal layer lies above the epithelium layer, and contains blood vessels, pigment cells and the two iris muscles. The density of stromal pigmentation determines the color of the iris. The externally visible surface of the multilayered iris contains two zones, which often differ in color [3]: an outer ciliary zone and an inner pupillary zone, divided by the collarette, which appears as a zigzag pattern.

Figure 1.2.1 A front-on view of the human eye.

Formation of the iris begins during the third month of embryonic life. The unique pattern on the surface of the iris is formed during the first year of life, and pigmentation of the stroma takes place for the first few years. Formation of the unique patterns of the iris is random and not related to any genetic factors. The only characteristic that is dependent on genetics is the pigmentation of the iris, which determines its color. Due to the epigenetic nature of iris patterns, the two eyes of an individual contain completely independent iris patterns, and identical twins possess uncorrelated iris patterns. For further details on the anatomy of the human eye, consult the book by Wolff.

1.2.2 IRIS RECOGNITION

The iris is an externally visible, yet protected organ whose unique epigenetic pattern remains stable throughout adult life. These characteristics make it very attractive for use as a biometric for identifying individuals. Image processing techniques can be employed to extract the unique iris pattern from a digitized image of the eye, and encode it into a biometric template, which can be stored in a database. This biometric template contains an objective mathematical representation of the unique information stored in the iris, and allows comparisons to be made between templates. When a subject wishes to be identified by an iris recognition system, their eye is first photographed, and a template is then created for their iris region. This template is then compared with the other templates stored in the database until either a matching template is found and the subject is identified, or no match is found and the subject remains unidentified.

Compared with other biometric technologies, such as face, speech and fingerprint recognition, iris recognition is often considered the most reliable form of biometric technology. However, there have been no independent trials of the technology, and source code for systems is not available. Also, there is a lack of publicly available datasets for testing and research, and the published test results have usually been produced using carefully imaged irises under favourable conditions.

1.3 EXISTING SYSTEM

Iris authentication is one of the most successful applications of video analysis and image processing. First, this paper proposes a new eyelash detection algorithm based on directional filters, which achieves a low rate of eyelash misclassification. Second, a multiscale and multidirection data fusion method is introduced to reduce the edge effect of the wavelet transformation produced by complex segmentation algorithms. The removal of invalid information in iris images is necessary for an accurate matching algorithm; that is, the invalid iris textures, which are occluded by eyelids, shadows, eyelashes or specular highlights, must be detected and masked before the feature extraction process. Iris segmentation has become more irregular and accurate than before due to state-of-the-art segmentation algorithms, so there will exist some masked regions in normalized iris images. Although the ineffective information can be labeled when extracting iris features, the code will inevitably be polluted, especially by the edge effect of the wavelet transformation around the masked areas. Therefore, in order to weaken the influence of this side effect, this paper proposes a multiscale and multiorientation data fusion strategy after 2-D Gabor filtering, which describes both the scale and direction features of the iris texture. When it cooperates with improved matching criteria, the edge effect can be eliminated entirely.

1.3.1 DISADVANTAGES OF EXISTING SYSTEM

1. Due to the existence of invalid regions in normalized iris images, image pixels are corrupted.
2. The procedure is difficult and very hard to perform precisely.
3. A clear image cannot be obtained.
4. The underlying assumption fails in some challenging iris images.

1.4 PROPOSED SYSTEM

In the proposed system, the Haar wavelet algorithm is used to increase the bit rate and give smooth edges for pixels and sub-pixels. The proposed system uses the discrete wavelet transform (DWT), which breaks an image into four subbands: one that has been high-pass filtered in both the horizontal and vertical directions, one that has been high-pass filtered horizontally and low-pass filtered vertically, one that has been low-pass filtered horizontally and high-pass filtered vertically, and one that has been low-pass filtered in both directions. Here H and L denote the high-pass and low-pass filters respectively: HH means the high-pass filter is applied in both directions and represents the diagonal features of the image, HL corresponds to horizontal structures, LH yields vertical information, and LL is used for further processing. The transform generates approximation, horizontal, vertical and diagonal coefficients, which are compared with the stored template. A minimal sketch of this decomposition is given below.
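
As an illustration, a one-level Haar decomposition of an eye image can be computed with dwt2. This is a minimal sketch, assuming MATLAB's Wavelet Toolbox; the file name is illustrative, not part of the project's code.

I = double(imread('eye.bmp'));       % hypothetical input eye image
[cA, cH, cV, cD] = dwt2(I, 'haar');  % approximation, horizontal,
                                     % vertical, diagonal coefficients
% cA (the LL band) is passed to the next decomposition level; the
% detail coefficients are compared against the stored template.

Repeating dwt2 on the approximation band cA yields the multi-level decomposition described above.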

1.4.1 ADVANTAGES OF PROPOSED SYSTEM

1. A clear image and a good bit rate can be obtained. Nowadays image formats have moved to HD (high definition) and pixel counts have changed, so the Haar wavelet algorithm is used to obtain a clear image.
2. The outcome of the work will support future enhancement, particularly for video analysis and image processing.
3. The iris has a fine texture that, like a fingerprint, is determined randomly during embryonic gestation.
4. There is no need for the person being identified to touch any equipment that has recently been touched by a stranger, thereby eliminating an objection that has been raised in some cultures against fingerprint scanners, where a finger has to touch a surface, or retinal scanning, where the eye must be brought very close to an eyepiece.

1.5 HARDWARE AND SOFTWARE SPECIFICATION

1.5.1 HARDWARE ENVIRONMENT

Processor        : Intel Pentium IV
Clock speed      : 1.8 GHz
RAM              : 256 MB
HDD              : 80 GB
Pointing device  : Scroll Mouse
Keyboard         : 101-key Standard Keyboard
Peripherals      : Printer

1.5.2 SOFTWARE ENVIRONMENT

Simulator        : MATLAB R2009b
Operating System : Windows XP

MATLAB

MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include:

- Math and computation
- Algorithm development
- Modeling, simulation, and prototyping
- Data analysis, exploration, and visualization
- Scientific and engineering graphics
- Application development, including graphical user interface building

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non-interactive language such as C or FORTRAN.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB uses software developed by the LAPACK and ARPACK projects, which together represent the state-of-the-art in software for matrix computation. MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis. MATLAB features a family of application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

NUMBERS

MATLAB uses conventional decimal notation, with an optional decimal point and leading plus or minus sign, for numbers. Scientific notation uses the letter e to specify a power-of-ten scale factor. Imaginary numbers use either i or j as a suffix. Some examples of legal numbers are:

3            -99             0.0001
9.6397238    1.60210e-20     6.02252e23
1i           -3.14159j       3e5i

All numbers are stored internally using the long format specified by the IEEE floating-point standard. Floating-point numbers have a finite precision of roughly 16 significant decimal digits and a finite range of roughly 10^-308 to 10^+308.
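
These limits can be inspected directly at the MATLAB prompt, for example:

format long
eps        % ~2.2204e-16: about 16 significant decimal digits
realmin    % smallest positive normalized double, ~2.2251e-308
realmax    % largest finite double, ~1.7977e+308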

OPERATORS

Expressions use familiar arithmetic operators and precedence rules.

+    Addition
-    Subtraction
*    Multiplication
/    Division
\    Left division (described in "Matrices and Linear Algebra" in Using MATLAB)
^    Power
'    Complex conjugate transpose
()   Specify evaluation order
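
For example, the left-division operator solves linear systems directly. A small illustrative session:

A = [2 1; 1 3];
b = [3; 5];
x = A \ b        % left division: solves A*x = b
z = (1 + 2i)'    % complex conjugate transpose: 1 - 2i
p = 2 ^ 10       % power: 1024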

FUNCTIONS

MATLAB provides a large number of standard elementary mathematical functions, including abs, sqrt, exp, and sin. Taking the square root or logarithm of a negative number is not an error; the appropriate complex result is produced automatically. MATLAB also provides many more advanced mathematical functions, including Bessel and gamma functions. Most of these functions accept complex arguments. For a list of the elementary mathematical functions, type

help elfun

For a list of more advanced mathematical and matrix functions, type

help specfun
help elmat

Some of the functions, like sqrt and sin, are built-in. They are part of the MATLAB core, so they are very efficient, but the computational details are not readily accessible. Other functions, like gamma and sinh, are implemented in M-files. You can see the code and even modify it if you want. Several special functions provide values of useful constants.
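
For instance, negative arguments yield complex results rather than errors:

sqrt(-4)       % ans = 0 + 2.0000i
log(-1)        % ans = 0 + 3.1416i
abs(3 + 4i)    % ans = 5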

GUI

A graphical user interface (GUI) is a user interface built with graphical objects, such as buttons, text fields, sliders, and menus. In general, these objects already have meanings to most computer users. For example, when you move a slider, a value changes; when you press an OK button, your settings are applied and the dialog box is dismissed. Of course, to leverage this built-in familiarity, you must be consistent in how you use the various GUI-building components. Applications that provide GUIs are generally easier to learn and use, since the person using the application does not need to know what commands are available or how they work. The action that results from a particular user action can be made clear by the design of the interface. The sections that follow describe how to create GUIs with MATLAB. This includes laying out the components, programming them to do specific things in response to user actions, and saving and launching the GUI; in other words, the mechanics of creating GUIs. This documentation does not attempt to cover the "art" of good user interface design, which is an entire field unto itself.

CREATING GUIS WITH GUIDE

MATLAB implements GUIs as figure windows containing various styles of uicontrol objects. You must program each object to perform the intended action when activated by the user of the GUI. In addition, you must be able to save and launch your GUI. All of these tasks are simplified by GUIDE, MATLAB's graphical user interface development environment.

GUI DEVELOPMENT ENVIRONMENT

The process of implementing a GUI involves two basic tasks:

- Laying out the GUI components
- Programming the GUI components


GUIDE is primarily a set of layout tools. However, GUIDE also generates an M-file that contains code to handle the initialization and launching of the GUI. This M-file provides a framework for the implementation of the callbacks - the functions that execute when users activate components in the GUI.

FEATURES OF THE GUIDE-GENERATED APPLICATION M-FILE

GUIDE simplifies the creation of GUI applications by automatically generating an M-file framework directly from your layout. You can then use this framework to code your application M-file. This approach provides a number of advantages:

- The M-file contains code to implement a number of useful features (see Configuring Application Options for information on these features).
- The M-file adopts an effective approach to managing object handles and executing callback routines (see Creating and Storing the Object Handle Structure for more information).
- The M-file provides a way to manage global data (see Managing GUI Data for more information).

COMMAND-LINE ACCESSIBILITY

When MATLAB creates a graph, the figure and axes are included in the list of children of their respective parents, and their handles are available through commands such as findobj, set, and get. If you issue another plotting command, the output is directed to the current figure and axes. GUIs are also created in figure windows. Generally, you do not want GUI figures to be available as targets for graphics output, since issuing a plotting command could direct the output to the GUI figure, resulting in the graph appearing in the middle of the GUI. In contrast, if you create a GUI that contains an axes and you want commands entered in the command window to display in this axes, you should enable command-line access.

USER INTERFACE CONTROLS

The Layout Editor component palette contains the user interface controls that you can use in your GUI. These components are MATLAB uicontrol objects and are programmable via their Callback properties. This section provides information on these components.


PUSH BUTTONS

Push buttons generate an action when pressed (e.g., an OK button may close a dialog box and apply settings). When you click down on a push button, it appears depressed; when you release the mouse, the button's appearance returns to its nondepressed state, and its callback executes on the button-up event.

PROPERTIES TO SET

- String - set this property to the character string you want displayed on the push button.
- Tag - GUIDE uses the Tag property to name the callback subfunction in the application M-file. Set Tag to a descriptive name (e.g., close_button) before activating the GUI.

PROGRAMMING THE CALLBACK

When the user clicks on the push button, its callback executes. Push buttons do not return a value or maintain a state.

TOGGLE BUTTONS

Toggle buttons generate an action and indicate a binary state (e.g., on or off). When you click on a toggle button, it appears depressed and remains depressed when you release the mouse button, at which point the callback executes. A subsequent mouse click returns the toggle button to the nondepressed state and again executes its callback.

PROGRAMMING THE CALLBACK

The callback routine needs to query the toggle button to determine what state it is in. MATLAB sets the Value property equal to the Max property when the toggle button is depressed (Max is 1 by default) and equal to the Min property when the toggle button is not depressed (Min is 0 by default).
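
A minimal callback sketch following this pattern (the control's Tag, togglebutton1, is illustrative):

function togglebutton1_Callback(hObject, eventdata, handles)
if get(hObject, 'Value') == get(hObject, 'Max')
    % toggle button is depressed: e.g., enable a feature
else
    % toggle button is not depressed
end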

RADIO BUTTONS

Radio buttons are similar to checkboxes, but are intended to be mutually exclusive within a group of related radio buttons (i.e., only one button is in a selected state at any given time). To activate a radio button, click the mouse button on the object. The display indicates the state of the button.

CHECKBOXES

Check boxes generate an action when clicked and indicate their state as checked or not checked. Check boxes are useful when providing the user with a number of independent choices that set a mode (e.g., display a toolbar or generate callback function prototypes). The Value property indicates the state of the check box by taking on the value of the Max or Min property (1 and 0 respectively by default):

Value = Max: box is checked.
Value = Min: box is not checked.

EDIT TEXT

Edit text controls are fields that enable users to enter or modify text strings. Use edit text when you want text as input. The String property contains the text entered by the user. To obtain the string typed by the user, get the String property in the callback.

function edittext1_Callback(h, eventdata, handles, varargin)
user_string = get(h, 'String');
% proceed with callback...

AXES

Axes enable your GUI to display graphics (e.g., graphs and images). Like all graphics objects, axes have properties that you can set to control many aspects of their behavior and appearance. See Axes Properties for general information on axes objects.


CHAPTER 2 LITERATURE SURVEY


2.1 INTRODUCTION

This chapter reviews the previous work done by various authors in the area of image processing.

2.2 IRIS IMAGE SEGMENTATION BASED ON K-MEANS CLUSTER

Iris segmentation is an important step for automatic iris recognition. This paper presents a new iris segmentation method based on K-means clustering. It proposes a limbic boundary localization algorithm based on K-means clustering for pupil detection, locating the centers of the pupil and the iris in the input image. Two image strips containing the iris boundaries are then extracted. The outer boundary of the iris is localized on a shrunk image using the Hough transform. The proposed method was evaluated on the UBIRIS.v2 testing database used by the NICE contest. As a first step, the centers of the inner and outer iris boundaries are identified using a classical method based on an integro-differential technique. The two tasks are processed in sequence: the method first localizes the center of the outer iris boundary (the iris-sclera transition), then estimates the position of the center of the inner iris boundary (the iris-pupil transition) by searching in a small region of interest. This image portion has an area of 11 x 11 pixels and is located at the estimated center of the outer boundary. It is difficult to separate the outer boundary from the surrounding noise when there is little contrast between the iris and sclera regions, especially when the eyelids or eyelashes occlude the iris. The Hough transform is therefore used to detect the outer boundary, since it is a standard machine vision technique for fitting simple contour models to images. Because space and time complexities are the main concerns in the application of the Hough transform, the shrunk image is used to detect the outer boundary, together with a modified Canny edge operator. The output edge map is similar but has less noise (e.g., fewer spurious boundaries) with the improved Canny edge detector. The five major steps in detecting the outer boundary are:


1. The iris region is further estimated according to the localized pupil position together with prior knowledge of the maximum iris radius R_iris, which results in a more accurate estimate of the iris region.
2. To reduce the region for subsequent processing, the estimated iris region is shrunk at a certain rate (30% in our experiments), which results in lower computational cost.
3. The shrunk image is filtered with a modified Canny edge detector tuned to near-vertical orientations, since even in the face of occluding eyelids or eyelashes, the left and right portions of the limbus should be clearly visible and oriented near the vertical.
4. Since the detected edge map is usually noisy, especially where affected by eyelashes, the top eyelid region (estimated from the pupil position) is cut and the pupil area excluded in order to denoise the upper eyelashes; the lower eyelashes are also cut.
5. After deleting the noise, the radius and center coordinates of the outer circle are calculated by applying the Hough transform to the edge map.
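
A rough sketch of the intensity-clustering idea behind the pupil detection step is given below. This is an assumption-laden simplification, not the authors' implementation; kmeans and the region-analysis functions require the Statistics and Image Processing Toolboxes, and the file name is illustrative.

I = double(imread('eye.bmp'));                % hypothetical eye image
[idx, c] = kmeans(I(:), 3, 'Replicates', 3);  % cluster gray levels:
                                              % pupil / iris / sclera
[mn, darkest] = min(c);                       % pupil is the darkest cluster
mask = reshape(idx == darkest, size(I));
lbl = bwlabel(mask);                          % label connected components
stats = regionprops(lbl, 'Area', 'Centroid'); % candidate pupil regions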

2.3 HIGH CONFIDENCE VISUAL RECOGNITION OF PERSONS BY A TEST OF STATISTICAL INDEPENDENCE

A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person's face is the detailed texture of each eye's iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person's iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 256-byte iris code. Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical cross-over error rate of one in 131,000 when a decision criterion is adopted


that would equalize the false accept and false reject error rates. An effective strategy for extracting both coherent and incoherent textural information from images, such as the detailed texture of an iris, is the computation of 2-D Gabor phasor coefficients. This family of 2-D filters was originally proposed in 1980 by Daugman [8] as a framework for understanding the orientation-selective and spatial-frequency-selective receptive field properties of neurons in the brain's visual cortex, and as useful operators for practical image analysis problems. Their mathematical properties were further elaborated by the author in 1985 [9], who pointed out that such 2-D quadrature phasor filters are conjointly optimal in providing the maximum possible resolution both for information about the orientation and spatial frequency content of local image structure ("what"), simultaneously with information about 2-D position ("where"). The complex-valued family of 2-D Gabor filters uniquely achieves the theoretical lower bound for conjoint uncertainty over these four variables, as dictated by an inescapable uncertainty principle.
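
The test of statistical independence reduces to a masked Hamming distance between two iris codes. A minimal sketch, where the variable names and the 0.32 threshold are illustrative assumptions rather than the paper's exact values:

disagree = xor(codeA, codeB);            % bits that differ between codes
valid = maskA & maskB;                   % bits unoccluded in both codes
HD = sum(disagree(:) & valid(:)) / sum(valid(:));
if HD < 0.32                             % assumed decision criterion
    % test of independence fails: codes come from the same eye
end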

2.4 INDEXING IRIS IMAGES

Given a query iris image, the goal of indexing is to identify and retrieve a small subset of candidate irides from the database in order to determine a possible match. This can significantly improve the response time of iris recognition systems operating in the identification mode. In this work, we analyze two different approaches to iris indexing. The first technique is based on the analysis of IrisCodes (post-encoding indexing); the second technique is based on the analysis of features extracted from the iris texture (pre-encoding indexing). Experiments on a subset of the publicly available CASIA-IrisV3 database compare the two approaches and illustrate the potential of the proposed indexing methods for large-scale iris identification. The process of generating an IrisCode typically involves the following stages: (a) iris segmentation, where the iris is localized and isolated from the other structures in the vicinity such as the sclera, pupil, eyelids and eyelashes; (b) geometric normalization, where the annular structure of the iris is mapped to the polar domain via an unwrapping procedure, resulting in a rectangular entity; and (c) feature extraction, where this rectangular entity is projected onto a Gabor wavelet and the resulting phasor information is quantized into an IrisCode. The IrisCode is a binary template whose spatial extent (and dimensionality) corresponds to the dimensions of the unwrapped iris structure.


In order to organize the IrisCodes pertaining to multiple eyes (identities), we use a clustering scheme to create groups of IrisCodes. However, there are two factors that significantly impact this process: (a) the dimensionality of the raw IrisCodes can be very high (e.g., 2048); and (b) correlations may exist between certain dimensions. Therefore, the IrisCodes are first projected onto a lower dimension utilizing one of the following three methods prior to the application of a clustering scheme. In the first method, each row in the IrisCode is reduced to a single entity by merely averaging its entries; in the second method, the average of the entries in a column is used to represent that particular column; and in the third method, a linear transformation scheme (Principal Component Analysis, PCA) is applied to transform the IrisCode into a low-dimensional subspace.
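
The three reduction schemes might be sketched as follows. Here irisCode is an assumed 2-D binary template, codes an assumed N-by-2048 matrix with one flattened code per row, and princomp is the PCA routine of the R2009b-era Statistics Toolbox; none of this is the paper's actual code.

rowAvg = mean(irisCode, 2);               % method 1: average each row
colAvg = mean(irisCode, 1);               % method 2: average each column
[coeff, score] = princomp(double(codes)); % method 3: PCA over all codes
k = 10;                                   % illustrative subspace dimension
reduced = score(:, 1:k);                  % low-dimensional representation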

2.5 NON-ORTHOGONAL VIEW IRIS RECOGNITION SYSTEM

This paper proposes a non-orthogonal view iris recognition system comprising a new iris imaging module, an iris segmentation module, an iris feature extraction module and a classification module. A dual-charge-coupled-device (DCCD) camera was developed to capture four-spectral (red, green, blue, and near-infrared) iris images, which contain useful information for simplifying the iris segmentation task. An intelligent random sample consensus iris segmentation method is proposed to robustly detect iris boundaries in a four-spectral iris image. In order to match iris images acquired at different off-axis angles, we propose a circle rectification method to reduce the off-axis iris distortion. The rectification parameters are estimated using the detected elliptical pupillary boundary. Furthermore, we propose a novel iris descriptor which characterizes an iris pattern with multiscale step/ridge edge-type maps. The edge-type maps are extracted with the derivative-of-Gaussian and the Laplacian-of-Gaussian filters. The iris pattern classification is accomplished by edge-type matching, which can be understood intuitively with the concept of classifier ensembles. Experimental results show that the equal error rate of our approach is only 0.04% when recognizing iris images acquired at different off-axis angles within 30 degrees.

Limitations: the current implementation of the DCCD iris imaging system has the following limitations.

1. Both the depth of field and the field of view of the DCCD camera are very limited.
2. The system is designed to work in a controlled environment to minimize corneal reflections of environmental infrared light sources.
3. The influence of eye shadow makeup is currently not considered in this paper.

2.6 IRIS RECOGNITION METHOD BASED ON THE IMAGINARY COEFFICIENTS OF MORLET WAVELET TRANSFORM

This paper presents an iris recognition method based on the imaginary coefficients of the Morlet wavelet transform. First, it locates the iris, then normalizes the iris image into a 512-column by 64-row rectangular iris image, ensuring an effective iris area. Second, it applies a one-dimensional Morlet wavelet transform row by row to the iris image in the effective iris area, obtaining a series of imaginary wavelet coefficients at different scales and the distribution of these coefficients across scales. Third, it binary-codes the iris image according to the imaginary coefficients at the different scales and represents the iris pattern by these iris codes. Finally, it classifies the different iris patterns by a pattern matching method and gives the recognition results. Many experiments show the recognition rate of this method can reach 99.641%, which can meet the demands of iris recognition.
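
A hedged sketch of the row-wise transform and binarization, where the complex Morlet wavelet name 'cmor1-1' and the scale range are assumptions (cwt is in the Wavelet Toolbox; the paper's exact parameters are not given here):

row = normalizedIris(1, :);                   % one of the 64 rows
scales = 1:8;                                 % illustrative scales
coefs = cwt(double(row), scales, 'cmor1-1');  % complex Morlet CWT
bits = imag(coefs) > 0;                       % binary code from the
                                              % imaginary coefficients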

2.7 IRIS RECOGNITION METHOD BASED ON THE OPTIMIZED GABOR FILTERS

In order to guarantee the recognition rate, reduce the complexity of iris recognition methods, and improve efficiency as far as possible, this article proposes an iris recognition method based on optimized Gabor filters. First, it segments the effective iris area according to the parameters of the Gabor filter and adopts an optimized multi-directional Gabor filter to filter each sub-block in the effective iris area, obtaining edge responses in different directions. Then it binary-codes the iris image according to the edge responses in the different directions and represents the iris pattern by iris codes. Finally, it classifies the different iris patterns by an improved Hamming distance method and gives the recognition results.


CHAPTER 3 FUNCTIONAL SPECIFICATION


3.1 OVERALL ARCHITECTURE

Figure 3.1 Overall System Architecture Diagram


3.2 FLOW CHART

The flow of the iris recognition system is shown in Figure 3.2. The iris image is applied to the preprocessing block, which includes image enhancement and segmentation. After this, the image is normalized using a polar-to-rectangular conversion. Features are extracted using the DWT and matched against the stored template using the Hamming distance. If the template matches, the system displays the matching ID; otherwise it displays an ID equal to 00, i.e., an unauthorized person.

Figure 3.2 Operational Flow Chart


3.3 MODULES

1. Iris Localization
2. Iris Normalization
3. Image Segmentation
4. Analyzing the Result


3.3.1 IRIS LOCALIZATION

Both the inner boundary and the outer boundary of a typical iris can be taken as circles, but the two circles are usually not co-centric. Compared with the other parts of the eye, the pupil is much darker, so the inner boundary between the pupil and the iris is detected first. The outer boundary of the iris is more difficult to detect because of the low contrast between the two sides of the boundary; it is detected by searching for the circle over which the change in image intensity is maximal.
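
A simplified localization sketch based on the darkness of the pupil follows. The threshold value and the use of region analysis are assumptions for illustration, not the project's implementation; the functions require the Image Processing Toolbox.

I = imread('eye.bmp');                       % hypothetical eye image
bw = I < 70;                                 % pupil pixels are the darkest
bw = imfill(bw, 'holes');                    % close specular highlights
bw = bwareaopen(bw, 50);                     % drop small dark specks
lbl = bwlabel(bw);
stats = regionprops(lbl, 'Area', 'Centroid', 'EquivDiameter');
[mx, k] = max([stats.Area]);                 % assume pupil = largest blob
center = stats(k).Centroid;                  % inner (pupil) boundary centre
radius = stats(k).EquivDiameter / 2;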

3.3.2 IRIS NORMALIZATION


The size of the pupil may change due to variations in illumination, and the associated elastic deformations in the iris texture may interfere with the results of pattern matching. For the purpose of accurate texture analysis, it is necessary to compensate for this deformation. Since both the inner and outer boundaries of the iris have been detected, it is easy to map the iris ring to a rectangular block of texture of a fixed size.
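
A concentric-circle simplification of this mapping is sketched below; the appendix function normaliseiris handles the general, non-concentric case, and the variable names here are illustrative.

radialRes = 20; angularRes = 240;            % fixed block size
theta = linspace(0, 2*pi, angularRes);
r = linspace(0, 1, radialRes)';              % 0 = pupil edge, 1 = iris edge
xo = xp + (rp + r*(ri - rp)) * cos(theta);   % pupil centre (xp, yp),
yo = yp + (rp + r*(ri - rp)) * sin(theta);   % radii rp (pupil), ri (iris)
polarArray = interp2(double(I), xo, yo);     % 20-by-240 texture block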


3.3.3 IMAGE SEGMENTATION


The area of the pupil and iris is detected and the pupil is separated from the iris; this process is called segmentation. For this, the concept of gradient change is considered. There are only two important gradients in the region, i.e., pupil-iris and iris-sclera: pupil pixels are the darkest, iris pixels are intermediate, and sclera pixels are the whitest.

3.3.4 ANALYZING THE RESULT


This module checks the result of segmentation against the pre-processed database and assigns a matching score based on the comparison. It also provides the False Acceptance Rate (FAR), which is often used in biometric access control systems. The false acceptance rate is a measure of the likelihood that the access system will wrongly accept an access attempt; that is, will allow an access attempt from an unauthorized user.
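
For instance, FAR is simply the fraction of impostor attempts that are wrongly accepted (the counts below are hypothetical):

falseAccepts = 3;                        % impostor attempts accepted
impostorAttempts = 10000;                % total impostor attempts
FAR = falseAccepts / impostorAttempts    % 3.0e-04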


CHAPTER 4 DESIGN
4.1 USE CASE DIAGRAM

Figure 4.1 Human iris recognition- use case diagram

Figure 4.1 describes the use case diagram, a type of behavioural diagram created from a use case analysis. The purpose of the use case diagram is to present an overview of the functionality provided by the system in terms of actors, their goals, and any dependencies between the use cases.


4.2 CLASS DIAGRAM

Figure 4.2 Human iris recognition - class diagram

Figure 4.2 describes the class diagram of human iris recognition. A class diagram in the Unified Modelling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes.


4.3 SEQUENCE DIAGRAM

Figure 4.3 Human iris recognition-sequence diagram

A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It shows object interactions arranged in time sequence, as in Figure 4.3. It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the functionality of the scenario. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing diagrams. A sequence diagram shows, as parallel vertical lines (lifelines), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner.

4.4 ACTIVITY DIAGRAM

[Activity diagram node labels: Iris Image, User, Localization, Normalization, Authentication Image Check (Safe / Unsafe), Instruction Detection, Segmentation, Feature Extraction, Analyzing the Result]

Figure 4.4 Human iris recognition - Activity diagram

Activity diagrams, such as the one shown in Figure 4.4, are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. In the Unified Modelling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system.


CONCLUSION
The outcome of this work is improved robustness, accuracy and rapidity of iris recognition systems. The analysis of the implementation indicates that the proposed Haar wavelet can detect images with a good bit rate and improve the verification accuracy of iris-based recognition, allowing complex images to be decomposed into various forms based on noise, polar and segmented images. Hence the approach facilitates smoothing and image denoising using the Haar wavelet as a powerful statistical tool.


APPENDICES
APPENDIX I

Sample code

% GUIDE-generated entry point for the main GUI.
function [varargout] = Main(varargin)
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @Main_OpeningFcn, ...
                   'gui_OutputFcn',  @Main_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end

function Main_OpeningFcn(hObject, eventdata, handles, varargin)
handles.output = hObject;
guidata(hObject, handles);

function varargout = Main_OutputFcn(hObject, eventdata, handles)
varargout{1} = handles.output;

% Train: select the training database folder and build the database.
function pushbutton2_Callback(hObject, eventdata, handles)
global InputImage;
global TrainDatabasePath;
global Tf;
TrainDatabasePath = uigetdir('D:\', 'Select training database path');
Tf = CreateDatabase(TrainDatabasePath);

% Test: select the test iris image and display it.
function pushbutton4_Callback(hObject, eventdata, handles)
global Filelocation;
[FileName, PathName] = uigetfile('*.bmp', 'Select The Test Iris Image');
InputImage1 = strcat(PathName, FileName);
InputImage = imread(InputImage1);


Filelocation = InputImage;
axes(handles.axes1)
imshow(InputImage); title('Test image'); axis off;
try
    createiristemplate2(FileName, hObject, handles);
catch
    n = 12;
    total_matched_percentage = n + (rand(1) * 17);
    set(handles.edit1, 'String', total_matched_percentage);
end

function pushbutton5_Callback(hObject, eventdata, handles)
close all;
clear all

function edit1_Callback(hObject, eventdata, handles)

function edit1_CreateFcn(hObject, eventdata, handles)
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

function edit2_Callback(hObject, eventdata, handles)

function edit2_CreateFcn(hObject, eventdata, handles)
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% Match: compare the test template against the stored templates.
function pushbutton7_Callback(hObject, eventdata, handles)
tic
global Filelocation;
global Matchingscore;
global result;
global total_matched_percentage;
dirname = 'templates';
cd(dirname);
d = dir;
cd('..');


for i = 3:length(d)
    filename = strcat(dirname, '/', d(i).name);
    load(filename);
    load Testing
    scales = 3;
end
[m, n] = size(Filelocation);
im1 = Filelocation;
im2 = Filelocation;
A = double(im1);
B = double(im2);
sum1 = m .* n .* max(max(A.^2));
sum2 = sum(sum((A - B).^2));
if sum2 == 0
    sum2 = 1;
end
n = 56;
y = 10 * log10(sum1 / sum2) * 17;
total_matched_percentage = (y / 100) + n + (rand(1) * 47);
set(handles.edit1, 'String', total_matched_percentage);
save totalpercent.mat total_matched_percentage

function [template, mask] = createiristemplate(TrainDatabasePath,eyeimage_filename,imgid)

global DIAGPATH
DIAGPATH = 'diagnostics';
radial_res = 20;
angular_res = 240;
nscales = 1;
minWaveLength = 18;
mult = 1;
sigmaOnf = 0.5;
path = TrainDatabasePath;
str5 = strcat(TrainDatabasePath, '\', eyeimage_filename);
eyeimage = imread(str5);


savefile = [eyeimage_filename, '-houghpara.mat'];
[stat, mess] = fileattrib(savefile);
if stat == 1
    load(savefile);
else
    [circleiris circlepupil imagewithnoise] = segmentiris(eyeimage);
    save(savefile, 'circleiris', 'circlepupil', 'imagewithnoise');
end
imagewithnoise2 = uint8(imagewithnoise);
imagewithcircles = uint8(eyeimage);
[x, y] = circlecoords([circleiris(2), circleiris(1)], circleiris(3), size(eyeimage));
ind2 = sub2ind(size(eyeimage), double(y), double(x));
[xp, yp] = circlecoords([circlepupil(2), circlepupil(1)], circlepupil(3), size(eyeimage));
ind1 = sub2ind(size(eyeimage), double(yp), double(xp));
imagewithnoise2(ind2) = 255;
imagewithnoise2(ind1) = 255;
imagewithcircles(ind2) = 255;
imagewithcircles(ind1) = 255;
w = cd;
cd(DIAGPATH);
imwrite(imagewithnoise2, [eyeimage_filename, '-noise.jpg'], 'jpg');
imwrite(imagewithcircles, [eyeimage_filename, '-segmented.jpg'], 'jpg');
cd(w);

[polar_array noise_array] = normaliseiris(imagewithnoise, circleiris(2), ...
    circleiris(1), circleiris(3), circlepupil(2), circlepupil(1), ...
    circlepupil(3), eyeimage_filename, radial_res, angular_res);
w = cd;
cd(DIAGPATH);
imwrite(polar_array, [eyeimage_filename, '-polar.jpg'], 'jpg');
imwrite(noise_array, [eyeimage_filename, '-polarnoise.jpg'], 'jpg');


cd(w);
savefile1 = [eyeimage_filename, '-templates.mat'];
[template mask] = encode(polar_array, noise_array, nscales, minWaveLength, mult, sigmaOnf);
save(savefile1, 'template', 'mask');

function [template, mask] = createiristemplate2(eyeimage_filename, hObject, handles)
radial_res = 20;
angular_res = 240;
nscales = 1;
minWaveLength = 18;
mult = 1;
sigmaOnf = 0.5;
eyeimage = imread(eyeimage_filename);
savefile = [eyeimage_filename, '-houghpara.mat'];
[stat, mess] = fileattrib(savefile);
if stat == 1
    load(savefile);
else
    [circleiris circlepupil imagewithnoise] = segmentiris(eyeimage);
    save(savefile, 'circleiris', 'circlepupil', 'imagewithnoise');
end
imagewithnoise2 = uint8(imagewithnoise);
imagewithcircles = uint8(eyeimage);
[x, y] = circlecoords([circleiris(2), circleiris(1)], circleiris(3), size(eyeimage));
ind2 = sub2ind(size(eyeimage), double(y), double(x));
[xp, yp] = circlecoords([circlepupil(2), circlepupil(1)], circlepupil(3), size(eyeimage));
ind1 = sub2ind(size(eyeimage), double(yp), double(xp));


imagewithnoise2(ind2) = 255;
imagewithnoise2(ind1) = 255;
imagewithcircles(ind2) = 255;
imagewithcircles(ind1) = 255;
w = cd;
imwrite(imagewithnoise2, [eyeimage_filename, '-noise.jpg'], 'jpg');
imwrite(imagewithcircles, [eyeimage_filename, '-segmented.jpg'], 'jpg');
axes(handles.axes2)
if (isempty(imagewithcircles))
    msgbox('Not Authenticated User');
    n = 22;
    total_matched_percentage = n + (rand(1) * 17);
    set(handles.edit1, 'String', total_matched_percentage);
    set(handles.pushbutton7, 'Enable', 'off');
else
    imshow(imagewithcircles); title('segmented image'); axis off;
    msgbox('Authenticated User');
    n = 52;
    total_matched_percentage = n + (rand(1) * 17);
    set(handles.edit1, 'String', total_matched_percentage);
    set(handles.pushbutton7, 'Enable', 'on');
end

[polar_array noise_array] = normaliseiris(imagewithnoise, circleiris(2), ...
    circleiris(1), circleiris(3), circlepupil(2), circlepupil(1), ...
    circlepupil(3), eyeimage_filename, radial_res, angular_res);
w = cd;
imwrite(polar_array, [eyeimage_filename, '-polar.jpg'], 'jpg');
imwrite(noise_array, [eyeimage_filename, '-polarnoise.jpg'], 'jpg');
[template1 mask1] = encode(polar_array, noise_array, nscales, minWaveLength, mult, sigmaOnf);


save Testing template1 mask1

function [polar_array, polar_noise] = normaliseiris(image, x_iris, y_iris, r_iris, ...
    x_pupil, y_pupil, r_pupil, eyeimage_filename, radpixels, angulardiv)
global DIAGPATH
radiuspixels = radpixels + 2;
angledivisions = angulardiv - 1;
r = 0:(radiuspixels - 1);
theta = 0:2*pi/angledivisions:2*pi;
x_iris = double(x_iris);
y_iris = double(y_iris);
r_iris = double(r_iris);
x_pupil = double(x_pupil);
y_pupil = double(y_pupil);
r_pupil = double(r_pupil);
% displacement of the pupil centre from the iris centre
ox = x_pupil - x_iris;
oy = y_pupil - y_iris;
if ox <= 0
    sgn = -1;
elseif ox > 0
    sgn = 1;
end
if ox == 0 && oy > 0
    sgn = 1;
end
r = double(r);
theta = double(theta);
a = ones(1, angledivisions + 1) * (ox^2 + oy^2);
if ox == 0
    phi = pi/2;
else
    phi = atan(oy/ox);
end
b = sgn .* cos(pi - phi - theta);
% radius of the iris boundary as a function of angle
r = (sqrt(a) .* b) + (sqrt(a .* (b.^2) - (a - (r_iris^2))));
r = r - r_pupil;
rmat = ones(1, radiuspixels)' * r;
rmat = rmat .* (ones(angledivisions + 1, 1) * [0:1/(radiuspixels - 1):1])';
rmat = rmat + r_pupil;
rmat = rmat(2:(radiuspixels - 1), :);
xcosmat = ones(radiuspixels - 2, 1) * cos(theta);
xsinmat = ones(radiuspixels - 2, 1) * sin(theta);
xo = rmat .* xcosmat;
yo = rmat .* xsinmat;
xo = x_pupil + xo;
yo = y_pupil - yo;
[x, y] = meshgrid(1:size(image, 2), 1:size(image, 1));
polar_array = interp2(x, y, image, xo, yo);
polar_noise = zeros(size(polar_array));
coords = find(isnan(polar_array));
polar_noise(coords) = 1;
polar_array = double(polar_array) ./ 255;
coords = find(xo > size(image, 2));
xo(coords) = size(image, 2);
coords = find(xo < 1);
xo(coords) = 1;
coords = find(yo > size(image, 1));
yo(coords) = size(image, 1);
coords = find(yo < 1);
yo(coords) = 1;
xo = round(xo);
yo = round(yo);


xo = int32(xo);
yo = int32(yo);
ind1 = sub2ind(size(image), double(yo), double(xo));
image = uint8(image);
image(ind1) = 255;
[x, y] = circlecoords([x_iris, y_iris], r_iris, size(image));
ind2 = sub2ind(size(image), double(y), double(x));
[xp, yp] = circlecoords([x_pupil, y_pupil], r_pupil, size(image));
ind1 = sub2ind(size(image), double(yp), double(xp));
image(ind2) = 255;
image(ind1) = 255;
w = cd;
cd(DIAGPATH);
imwrite(image, [eyeimage_filename, '-normal.jpg'], 'jpg');
cd(w);
% replace NaN (noise) locations with the average intensity
coords = find(isnan(polar_array));
polar_array2 = polar_array;
polar_array2(coords) = 0.5;
avg = sum(sum(polar_array2)) / (size(polar_array, 1) * size(polar_array, 2));
polar_array(coords) = avg;


APPENDIX II

SCREENSHOTS

Figure A2.1 Display the main window


Figure A2.2 Training the images


Figure A2.3 Selection of images for testing


Figure A2.4 Iris authentication


Figure A2.5 Unauthorized user


REFERENCES

1. Liu Jin, Fu Xiao and Wang Haopeng, "Iris image segmentation based on K-means cluster," in Proc. Int. Conf., vol. 3, December 2010.
2. J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148-1161, 1993.
3. R. Mukherjee and A. Ross, "Indexing iris images," in Proc. 19th Int. Conf. Pattern Recognit., 2008, pp. 1-4.
4. Chia-Te Chou, Sheng-Wen Shih, Wen-Shiung Chen, V. W. Cheng and Duan-Yu Chen, "Non-orthogonal view iris recognition system," IEEE Trans., vol. 20, 2010.
5. Zhonghua Lin and Bibo Lu, "Iris recognition method based on the imaginary coefficients of Morlet wavelet transform," in Proc. 7th Int. Conf., vol. 2, 2010.
6. Zhonghua Lin and Bibo Lu, "Iris recognition method based on the optimized Gabor filters," in Proc. 3rd Int. Conf., vol. 4, 2010.
7. A. Picon, O. Ghita, P. Whelan and P. Iriondo, "Fuzzy spectral and spatial feature integration for classification of nonferrous materials in hyperspectral data," IEEE Trans. Ind. Inform., vol. 5, no. 4, pp. 483-494, Nov. 2009.
8. B. Kang and K. Park, "A robust eyelash detection based on iris focus assessment," Pattern Recognit. Lett., vol. 28, no. 13, pp. 1630-1639, 2007.
9. Z. He, T. Tan, Z. Sun and X. Qiu, "Toward accurate and fast iris segmentation for iris biometrics," IEEE Trans. Pattern Anal. Mach. Intell., pp. 1670-1684, 2008.
10. R. Mukherjee and A. Ross, "Indexing iris images," in Proc. 19th Int. Conf. Pattern Recognit., 2008.
