Is it possible to determine palm area biometrically?

Identification systems based on hand geometry use geometric differences between human hands. Typical features include the length and width of the fingers, the position of the palm and fingers, the thickness of the hand, etc.

There are no systems that take non-geometric features into account. The pegs that some scanners use are also helpful in determining the axes needed for feature extraction. An example is shown in Figure 6, where the hand is represented as a vector of measurement results and 16 characteristic points are extracted (Figure 6: axes on which hand features are extracted, and the extracted features [9]). In the Parameter Estimation Technique, a peg-based acquisition system was used.

This approach is called the intensity-based approach. The other technique used a fixed window size and determined the points whose intensity changed along the axes. These techniques will be presented later in the chapter. The third technique presented here was described in [16]. In order to offset the effects of background lighting, skin color, and noise, the following approach was devised to compute the various feature values.

A sequence of pixels along a measurement axis has an ideal gray-scale profile, as shown in Figure 7 (the gray-scale profile of pixels along a measurement axis [15]). The total number of pixels considered is referred to as Len, Ps and Pe refer to the end points between which the measured object is located, and A1, A2 and B are the gray-scale values. The actual gray-scale profile tends to be spiky, as shown in the right image of Figure 7. The first step the authors presented was to model the profile.

Let the pixels along a measurement axis be numbered from 1 to Len. The following assumptions about the profile were made. The observed profile (Figure 7, right) is obtained from the ideal profile (Figure 7, left) by adding Gaussian noise to each of the pixels in the latter; thus, for example, the gray level of a pixel lying between Ps and Pe is assumed to be drawn from a Gaussian distribution with mean B.

The gray level of an arbitrary pixel along a particular axis is independent of the gray levels of the other pixels on that line. Operating under these assumptions, the authors could write the joint distribution of all the pixel values along a particular axis as a product of Gaussian densities, and the parameters could then be estimated iteratively [15]. This technique was developed to locate the end points Ps and Pe from the gray-scale profile in Figure 7.
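A sketch of that joint distribution under the stated assumptions (σ, a single common noise standard deviation, is introduced here for illustration; the original formulation may use per-region variances):

```latex
p(g_1,\dots,g_{Len}) = \prod_{i=1}^{Len}
\frac{1}{\sqrt{2\pi\sigma^{2}}}
\exp\!\left(-\frac{(g_i-\mu_i)^{2}}{2\sigma^{2}}\right),
\qquad
\mu_i =
\begin{cases}
A_1, & i < P_s,\\
B,   & P_s \le i \le P_e,\\
A_2, & i > P_e.
\end{cases}
```

Maximizing this likelihood over Ps, Pe, A1, A2, B and σ is what the iterative estimation mentioned above would carry out.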

A heuristic method was adopted to locate these points. A window of length wlen was moved over the profile, one pixel at a time, starting from the left-most pixel; a large variation of the gray level within the window indicated a sharp change in the gray scale of the profile.
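A rough illustration of this heuristic (the window length and the change threshold below are illustrative values, not taken from the chapter):

```python
import numpy as np

def locate_endpoints(profile, wlen=5, threshold=30.0):
    """Slide a window of length wlen over a 1-D gray-scale profile and return
    positions where the gray level changes sharply (candidate end points).
    Both wlen and threshold are illustrative values."""
    profile = np.asarray(profile, dtype=float)
    candidates = []
    for start in range(len(profile) - wlen + 1):
        window = profile[start:start + wlen]
        # A large spread inside the window indicates a sharp transition,
        # e.g. the boundary between background (A1 or A2) and the object (B).
        if window.max() - window.min() > threshold:
            candidates.append(start + wlen // 2)
    return candidates

# Synthetic ideal profile (A1 = 200, B = 80, A2 = 200) plus Gaussian noise.
rng = np.random.default_rng(0)
ideal = np.concatenate([np.full(40, 200.0), np.full(60, 80.0), np.full(40, 200.0)])
noisy = ideal + rng.normal(0, 5, ideal.size)
print(locate_endpoints(noisy))  # indices cluster around Ps (~40) and Pe (~100)
```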

The third technique [16] uses mathematical graphs built on the two-dimensional hand image. The hand image was normalized using basic morphological operators and edge detection, and a binary image was created from the image captured with an ordinary document scanner. On the binary image, the pixel values were analyzed to locate the characteristic points. The authors extracted 31 points, shown in Figure 8 (hand shape and the characteristic hand points defined in [16]). For placing the hand along the y-axis, a referential point on the top of the middle finger was used.

The location of that point was determined using the horizontal line y1. Using that line, the authors defined 6 points that represent the characteristic points of the index, middle and ring fingers, and using lines y2 and y3 they extracted enough characteristic points for four fingers. The thumb had to be processed in a different manner: first, the right-most point of the thumb had to be identified. Using two vertical lines, they found the edges of the thumb.

By analyzing the points on those lines and their midpoints, the top of the thumb could be extracted; an example of extracting the top of the thumb is shown in Figure 9 (extracting characteristic points of the thumb). In order to get enough information for their process, each hand had to be scanned four times. For each characteristic point, the authors constructed a complete graph. An example of the characteristic points from four scans and the corresponding complete graph of one point are shown in Figure 10 and Figure 11, respectively.

Figure 10: characteristic points from the four scans of the hand. Figure 11: the complete graph of one characteristic point. The number of edges in a complete graph is well known (n(n-1)/2 for n vertices). In order to construct a minimum spanning tree, this graph needed to be weighted: the weights are the distances between the two graph vertices connected by each edge, measured using the Euclidean distance.

Finally, Prim's algorithm was used to construct the minimum spanning tree of one characteristic point, and the same procedure was carried out for each of the 31 points.
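A minimal sketch of this step (the four coordinates below are made up for illustration):

```python
import numpy as np

def prim_mst(points):
    """Minimum spanning tree (Prim's algorithm) of the complete graph whose
    vertices are 2-D points and whose edge weights are Euclidean distances.
    Returns the MST edges as (i, j) index pairs."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    in_tree = [0]                      # start from an arbitrary vertex
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = np.linalg.norm(points[i] - points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.append(best[2])
    return edges

# One characteristic point observed in four scans (hypothetical coordinates).
scans = [(102, 310), (104, 308), (101, 313), (105, 311)]
print(prim_mst(scans))  # three edges spanning the four observations
```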

An example of the minimum spanning tree of one characteristic point, and all of the minimum spanning trees, are shown in Figure 12 and Figure 13, respectively (Figure 12: minimum spanning tree of the complete graph of one characteristic point; Figure 13: all minimum spanning trees of one user).

The verification process is performed by comparing the minimum spanning tree of every point with the location of the corresponding point in the currently captured image.

So far we have described the basics of hand geometry biometrics. In this section we will mention some new trends and recent research in this field. Reading this section requires a good understanding of hand geometry biometrics and of the extraction and verification methods mentioned here.

We will not describe everything in detail, but rather mention some achievements produced in the last few years. Hand geometry has been contact-based from its beginnings, and it still is in almost all commercial systems. Since the field has evolved over the last 30 years, one can categorize it as in [17]: constrained contact-based systems, unconstrained contact-based systems, and unconstrained contact-less systems. While the first category requires a flat platform and pegs or pins to restrict the hand's degrees of freedom, the second one is peg- and pin-free, although it still requires a platform on which to place the hand.

The main papers of the first category were described earlier in this chapter. The second category gives users more freedom in the image acquisition process and is considered an evolutionary step forward from constrained contact-based systems. Some newer works in this field are [18] and [19].

In [18] the authors presented a method based on three keys; therefore, neither a fixed hand pose nor a pre-fixed position was required in the registration process. Hand features were obtained through a polar representation of the hand's contour. Their system uses both the right and the left hand, which allowed them to consider distance measures for direct and crossed hands. The authors of the second paper [19] used 15 geometric features to analyze the effect of changing the image resolution on a biometric system based on hand geometry.

The images were downsampled from their initial resolution to 24 dpi. They used two databases: one acquired images of the hand from underneath, whereas the second acquired images from above the hand. Accordingly, they used two classifiers: a multiclass support vector machine (multiclass SVM) and a neural network with error-correcting output codes. There are many different verification approaches in contact-based hand geometry systems; however, due to user acceptability, contact-less biometrics is becoming more important.

In this approach neither pegs nor a platform are required for hand image acquisition. Papers in this field are relatively new compared to those of the contact-based approach, so it is best to present just the new trends in contact-less hand geometry biometrics. These methods are also the most competitive in the existing literature.

In the last few years, the literature on this problem has been growing rapidly. SVM is the most commonly used verification and identification method. The authors of [20] acquired hand images with a static video camera.

Using a decision tree they segmented the hand, and then measured the local feature points extracted along the fingers and wrist. Identification was based on matching the geometry measurements of a query image against a database of recorded measurements using SVM. Another use of SVM can be found in [21]; the authors also presented a biometric identification system based on geometrical features of the human hand.

The right-hand images were acquired using a standard web cam. Depending on the illumination, binary images were constructed and the geometrical features (finger widths) were obtained from them; SVM was used as the verifier. Kumar and Zhang used SVM in their hybrid recognition system, which uses feature-level fusion of hand shape and palm texture [22].

They extracted the features from a single image acquired with a digital camera. Their results showed that only a small subset of hand features is necessary in practice for building an accurate identification model. A hybrid system fusing the palmprint and hand geometry of a human hand based on morphology was presented in [23].

The authors utilized image morphology and the concept of the Voronoi diagram to cut the image of the front of the whole palm into several irregular blocks in accordance with the hand geometry. Statistical characteristics of the gray levels in the blocks were employed as feature values, and SVM was used in the recognition phase. Besides SVM, which is the most competitive method in contact-less hand geometry verification and identification, the literature contains other very promising methods, such as neural networks [24], a new feature called 'SurfaceCode' [25], and template distance matching [17].

The mentioned methods are not the only ones, but they have the smallest Equal Error Rates and are therefore the most promising for the future development of contact-less hand geometry biometric systems. The hand features described earlier in the chapter are used in devices for personal verification and identification. One of the leading commercial companies in this field is Schlage.

In their devices a CCD digital camera is used for acquiring the hand image. One of their devices, the Schlage HandPunch [26], is shown in the figure. The system consists of a light source, a camera, mirrors, and a flat surface with 5 pegs. The user places the hand facing down on the flat plate, on which five pins serve as a control mechanism for the proper placement of the user's right hand.

The device is connected to a computer through an application that shows a live image of the top of the hand as well as a side view of the hand. The GUI helps with image acquisition, while a mirror in the device is used to obtain the side view of the hand; this gives a partially three-dimensional image of the hand.

Different hand features share a similar line-like structure: for example, palm prints are made up of strong principal lines and some thin wrinkles, whilst the palm vein pattern contains a vascular network which also resembles a line-like characteristic.

Therefore, we can deploy a single method to extract the discriminative line information from the different hand features. The aim is to encode the line pattern based on the proximal orientation of the lines. We first apply the Wavelet Transform to decompose the palm print images into a lower-resolution representation.

The bit-string assignment enables a more efficient matching process, as the computation only deals with plain binary bit strings rather than real or floating-point numbers. Besides, another benefit of converting the bit string to a Gray code representation is that Gray code exhibits fewer bit transitions. This is a desired property, since we require the biometric feature to have high similarity within the data of the same subject.
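To illustrate the Gray-code property described above, a minimal sketch (the 3-bit orientation index used here is only an example; the chapter does not specify the code width):

```python
def binary_to_gray(value: int) -> int:
    """Convert an integer's binary representation to its Gray-code equivalent."""
    return value ^ (value >> 1)

# Consecutive indices differ in exactly one bit once Gray-coded, so a small
# change in the quantized orientation flips only a single bit of the code.
for k in range(8):
    print(k, format(k, '03b'), '->', format(binary_to_gray(k), '03b'))
```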

Thus, the Gray code representation provides less bit difference and more similarity in the data pattern. The resulting code image depicts the strongest directional response of the palm print and closely resembles the original palm print pattern; an example of directional coding applied to a palm vein image is illustrated in the corresponding figure.

Example of the Directional Code applied to a palm print image; example of the Directional Code applied to a palm vein image. For the Directional Coding method, the Hamming distance is deployed to count the fraction of bits that differ between two code strings. It is defined as HD = (1/N) Σ_{i=1}^{N} (A_i ⊕ B_i), where A and B are the two N-bit code strings and ⊕ denotes the exclusive-OR operation.
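A minimal sketch of this matching step, assuming the directional codes are stored as equal-length bit strings:

```python
def hamming_distance(code_a: str, code_b: str) -> float:
    """Fraction of positions at which two equal-length bit strings differ."""
    if len(code_a) != len(code_b):
        raise ValueError("codes must have equal length")
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

print(hamming_distance("10110010", "10010110"))  # 0.25
```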

In this research, the sum-based fusion rule is used to consolidate the matching scores produced by the different hand biometric modalities. The sum rule is defined as the sum of the (normalized) matching scores of the individual modalities, s_fused = Σ_j s_j. The reason for applying the sum rule is that studies have shown it provides good results compared with other decision-level fusion techniques, such as likelihood-ratio-based fusion (He et al.). Another reason we do not apply a more sophisticated fusion technique in our work is that our dataset has been reasonably cleansed by the image pre-processing and feature extraction stages, as will be shown in the experiment section.
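A small sketch of this fusion step, assuming the match scores have already been normalized to a common range (the normalization choice and the example values are assumptions, not taken from the text):

```python
def sum_rule(scores):
    """Sum-rule fusion: the fused score is the sum of the per-modality
    match scores (assumed here to be normalized to a common range)."""
    return sum(scores)

# Hypothetical normalized scores for one verification attempt:
# palm print expert = 0.82, palm vein expert = 0.74.
fused = sum_rule([0.82, 0.74])
print(fused)  # 1.56, compared against a decision threshold
```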

The sum rule is a linear fusion method; to conduct a more thorough evaluation, we also wish to examine a non-linear classification tool, the support vector machine (SVM). It has good generalization characteristics, minimizing the decision boundary based on the generalization error, and it has proven to be a successful classifier on several classical pattern recognition problems (Burges). The RBF kernel is defined as K(x, y) = exp(-‖x - y‖² / (2σ²)) (Saunders; Vapnik).
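A minimal sketch of such an RBF-kernel SVM fusing the two match scores (scikit-learn is assumed here, and the training scores below are synthetic):

```python
import numpy as np
from sklearn.svm import SVC

# Each sample is a pair (palm print score, palm vein score); the label says
# whether the pair came from a genuine comparison (1) or an impostor one (0).
rng = np.random.default_rng(1)
genuine = rng.normal([0.80, 0.75], 0.08, size=(100, 2))
impostor = rng.normal([0.30, 0.35], 0.08, size=(100, 2))
X = np.vstack([genuine, impostor])
y = np.array([1] * 100 + [0] * 100)

# RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2).
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)
print(clf.predict([[0.78, 0.70], [0.25, 0.40]]))  # expected: [1 0]
```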

We also propose a novel method to incorporate image quality into our fusion scheme to obtain better performance. We first examine the quality of the images captured by the imaging device. Assigning a larger weight to the better-quality image is useful when we fuse the palm print images, captured under visible light, with the palm vein images. Sometimes the vein images may not appear clear due to the medical condition of the skin (for example, thick fatty tissue obstructing the subcutaneous blood vessels); thus it is not appropriate to assign equal weight to such poor-quality images and to those having clear patterns.

We design an evaluation method to assess the richness of texture in the images. We have identified several GLCM (gray-level co-occurrence matrix) measures which can describe image quality appropriately. These measures are modelled using fuzzy logic to produce the final image quality metric that is used in the fusion scheme. More formally, the (i, j)-th element of the GLCM of an image counts how many pairs of pixels, separated by a fixed offset, have gray level i at the first pixel and gray level j at the second. To obtain the normalized GLCM, we divide each entry by the total number of pixel pairs counted, p(i, j) = C(i, j) / Σ_{m,n} C(m, n), so that the entries sum to one.
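A rough sketch of this step, assuming scikit-image's graycomatrix and graycoprops are available (the offset, angle and quantization settings are illustrative, not taken from the chapter, and palm_roi is a hypothetical 2-D image array):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_quality_features(image, levels=64):
    """Compute the GLCM-based measures used here as image-quality indicators:
    contrast, a variance-like measure, and correlation."""
    # Quantize gray levels so that the co-occurrence matrix stays small.
    img = (image.astype(float) / image.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    correlation = graycoprops(glcm, "correlation")[0, 0]
    # Variance of the normalized GLCM about its mean gray level.
    p = glcm[:, :, 0, 0]
    i = np.arange(levels)
    mu = (i[:, None] * p).sum()
    variance = (((i[:, None] - mu) ** 2) * p).sum()
    return contrast, variance, correlation

# Usage: contrast, variance, correlation = glcm_quality_features(palm_roi)
```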

Based on the GLCM, a number of textural features can be calculated; the most commonly used ones are shown in Table 1. These measures are useful for describing the texture of an image: for example, ASM (angular second moment) tells how orderly an image is, and homogeneity measures how closely the GLCM elements are distributed about its diagonal. Based on the different texture features derived from the GLCM, a fuzzy inference system can be used to aggregate these parameters and derive a final image quality score.

Among the different GLCM metrics, we observe that contrast, variance, and correlation could characterize image quality well.

Contrast is the chief indicator of image quality: an image with high contrast portrays dark, clearly visible line texture. Variance and correlation are also good indicators of image quality; better-quality images tend to have higher values of contrast and variance, and a lower value of correlation. Table 2 shows the values of contrast, variance, and correlation for the palm print and palm vein images. When we observe the images, we find that images containing a similar amount of textural information yield similar measurements for contrast, variance, and correlation.

Both the palm print and palm vein images of the first person, for instance, contain plenty of textural information; thus their GLCM features, especially the contrast value, do not vary much.

However, as the texture is clearly more visible in the palm print image than in the palm vein image of the second person, it is not surprising that the palm print image has a much higher contrast value than the vein image. The three image quality metrics, namely contrast, variance and correlation, are fed as input to the fuzzy inference system. Each of the input sets is modelled by three membership functions, as depicted in the corresponding figure. The membership functions are formed by Gaussian functions, or a combination of Gaussian functions, of the form μ(x) = exp(-(x - c)² / (2σ²)), where c is the centre and σ the width of the function.

The parameters of each membership function are determined by taking the best-performing values on the development set. The principal controller for determining the image-quality output is the contrast value: the image quality is good if the contrast value is large, and vice versa.

Thirteen rules are used to characterize the fuzzy system. The main properties of these rules are as follows. If all the inputs are favourable (high contrast, high variance, and low correlation), the output is set to high.

If all the inputs are unfavourable (low contrast, low variance, and high correlation), the output is set to low. Three membership functions are defined for each input variable: (a) the contrast parameters, (b) the variance parameter and (c) the correlation parameters. We use the Mamdani reasoning method to interpret the fuzzy rules; this technique is adopted because it is intuitive and works well with human input.
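A heavily simplified sketch of such an inference step (the membership centres, widths and the two rules below are placeholders; the chapter uses thirteen tuned rules):

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def image_quality(contrast, variance, correlation):
    """Mamdani-style aggregation of two illustrative rules (inputs scaled to [0, 1]):
    1) high contrast AND high variance AND low correlation  -> quality high
    2) low contrast  AND low variance  AND high correlation -> quality low"""
    fire_high = min(gauss(contrast, 1.0, 0.3),
                    gauss(variance, 1.0, 0.3),
                    gauss(correlation, 0.0, 0.3))
    fire_low = min(gauss(contrast, 0.0, 0.3),
                   gauss(variance, 0.0, 0.3),
                   gauss(correlation, 1.0, 0.3))
    # Clip each output set by its rule's firing strength, take the union,
    # then defuzzify with the centroid of the aggregated set.
    q = np.linspace(0.0, 1.0, 101)
    out = np.maximum(np.minimum(fire_high, gauss(q, 1.0, 0.25)),
                     np.minimum(fire_low, gauss(q, 0.0, 0.25)))
    return float((q * out).sum() / out.sum())

# A high-contrast, high-variance, weakly correlated image gets a high score.
print(round(image_quality(0.9, 0.8, 0.2), 2))
```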

The output membership functions are shown in the corresponding figure, and the defuzzified output scores are recorded in Table 2. The output values adequately reflect the quality of the input images: the higher the value, the better the image quality. The defuzzified output values are used as the weighting scores for the biometric features in the fusion scheme.
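A sketch of how the defuzzified quality scores could weight the sum rule (the function name and the example values are illustrative):

```python
def quality_weighted_fusion(print_score, vein_score, print_quality, vein_quality):
    """Weight each modality's normalized match score by its image-quality score
    before summing, so that clearer images contribute more to the decision."""
    total = print_quality + vein_quality
    return (print_quality * print_score + vein_quality * vein_score) / total

# Hypothetical case: a crisp palm print image but a faint palm vein image.
print(quality_weighted_fusion(0.85, 0.55, print_quality=0.9, vein_quality=0.4))
```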

The weight vector can then be input to the fusion scheme to perform authentication. An experiment was carried out to assess the effectiveness of the proposed Directional Coding method applied to the individual palm print and palm vein modalities.

The results for both the left and right hands were recorded for the sake of a thorough analysis of the hand features. In the experiment, we also examined the performance of the system when the FAR was set to a very low value, because the FAR is considered one of the most significant parameter settings in a biometric system: it measures the likelihood of unauthorized access to the system.

In some security-critical applications, even one failure to detect a fraudulent break-in could have disruptive consequences for the system. Therefore, it is of paramount importance to evaluate the system at a very low FAR.

The performances of the individual hand modalities are presented in Table 3. We find that the modalities need to be combined in order to obtain a promising result. We also discover that the results for the two hands do not vary significantly, which implies that users can use either hand to access the biometric system. This is an advantage in security and flexibility, as the user can choose which hand to use; apart from that, allowing the user to use both hands reduces the chance of being falsely rejected.

This gives the user more chances of presentation and thereby reduces the inconvenience of being denied access. We also included an experiment to verify the usefulness of the proposed local ridge enhancement (LRE) pre-processing technique for enhancing the hand features.

The results obtained with and without the pre-processing procedure are depicted in the corresponding figure. Correlation analysis of the individual experts is important for determining their discriminatory power, data separability, and the complementarity of their information.

A common way to identify the correlation between the experts is to analyze the errors they make. The fusion result can be very effective if the errors made by the classifiers are highly de-correlated; in other words, the lower the correlation value, the more effective the fusion will become. This is because more new information is introduced as the dependency between the errors decreases (Verlinde). One way to visualize this correlation is shown in the figure referenced below.

Improvement gained by applying the proposed LRE pre-processing technique for the left and right hands. The correlation plot shows that the correlation between the individual palm print and palm vein modalities is low. In other words, we find that the two biometrics are independent and therefore suitable for fusion.

Visual representation of the correlation between the palm print and palm vein experts. In this experiment, we combine the palm print and palm vein experts using the sum-based fusion rule. Table 4 records the results when the two hand modalities are fused. We observe that, in general, the fusion approach takes advantage of the strengths of the individual hand modalities. The fusion of palm print and palm vein yielded an overall increase of 3.

In this part of the study, we examine the use of SVM for our fusion approach. In the previous experiment, we used the sum rule, a linear method, to fuse the different experts.

Although the sum rule can yield satisfying results, especially when fusing three or more modalities, the fusion result can be further improved by deploying a non-linear classification tool. The fusion results obtained using SVM are presented in Table 5. On the whole, SVM helped to reduce the error rates of the fusion of the experts.

This improvement is due to the fact that SVM is able to learn a non-linear decision boundary which can separate our datasets more efficiently (the decision boundaries learnt by SVM are shown in the corresponding figure). In order to verify that the proposed fuzzy-weighted (FW) image-quality-based fusion scheme is useful, we carried out an experiment to evaluate the technique.

Improvement gained by the fuzzy-weighted fusion scheme for palm print and palm vein. We observe that the performance of the fusion methods can be improved by incorporating the image quality assessment scheme. The gain in improvement is particularly evident when the fuzzy-weighted quality assessment method is applied to the sum rule.

This result shows that the proposed quality-based fusion scheme offers an attractive way to increase the accuracy of the fusion approach. This chapter has presented a low-resolution, contactless palm print and palm vein recognition system.

The proposed system offers several advantages, such as low cost, accuracy, flexibility, and user-friendliness. We described the design and implementation of the hand acquisition device, which does not require an expensive infrared sensor. We also introduced the LRE method to obtain good-contrast palm print and vein images. To obtain a useful representation of the palm print and vein modalities, a new technique called directional coding was proposed. This method represents the biometric features in a bit-string format, which enables speedy matching and convenient storage.

In addition, we examined the performance of the proposed fuzzy-weighted image quality checking scheme and found that the performance of the system could be improved by incorporating image quality measures when the modalities were fused.

Our approach produced promising results for implementation in a practical biometric application.

Biometrics such as facial recognition have a critical flaw: your face is exposed everywhere you go, making it easy for face scanners to identify you from a distance without your consent. Since the palm vein pattern is internal to the body, it can only be captured by a close-up, high-definition camera in combination with near-infrared light. So unless you deliberately scan your hand on a palm vein device, it is very difficult for your vein pattern to be captured.

Since the palm has a larger surface area than the finger or iris, for example, the palm vein scanner is able to capture a larger number of data points.

This gives it an accuracy advantage over other biometrics. Keyo's palm vein scanner captures over 5 million data points in the palm, giving it unparalleled accuracy. Because of this, palm vein has a lower false acceptance rate (FAR) and false rejection rate (FRR) than any other biometric.

The extremely low false rejection rate of palm vein makes it very unlikely to incorrectly deny access to authorized users (or, worse, to allow access to unauthorized users), so it will work when you need it to. Other biometrics, such as facial recognition, are generally far less accurate.

Each person's palm vein pattern remains relatively stable throughout life, so the chance that a registered user will have to re-enroll in the future is very low. Certain other biometrics are more susceptible to change over time. Fingerprints, for example, are vulnerable to wear and damage because they are exposed: cuts and abrasions on the finger can cause an authorized user to go unrecognized by fingerprint scanners and be incorrectly denied access. With palm vein, however, the chance of damage to the palm's vein structure is generally lower, making it more stable over time.

Additionally, the palm vein scanner is resistant to dirt, dust, dryness, and moisture, making it functional in a variety of different environments.

This means that when you scan your palm, you can be sure it will work every time — making palm vein a very reliable biometric. Palm vein is contactless, meaning that you don't have to touch the sensor to scan your palm. To identify yourself, simply hover your hand over the scanner — no touch required. This makes it much more hygienic than biometrics that require you to touch the scanner, such as fingerprint.


