
The 23rd International Technical Conference on Circuits/Systems, Computers and Communications

Session A2  Image Processing & Video Technology 2
Time: 13:30 - 15:00 Monday, July 7, 2008
Location: 10F International Conference Room
Chairs: Masaaki Fujiyoshi (Tokyo Metropolitan University, Japan), Adisorn Leelasantitham (University of The Thai Chamber of Commerce, Thailand)

A2-1 (Time: 13:30 - 13:48)
Title: Disparity Estimation using Belief Propagation for View Interpolation
Author: Cheon Lee, *Yo-Sung Ho (Gwangju Institute of Science and Technology, Republic of Korea)
Page: pp. 21 - 24
Keywords: disparity estimation, belief propagation, view interpolation
Abstract: In this paper, we propose a modified disparity estimation method using belief propagation (BP) for view interpolation. The view interpolation method generates an arbitrary intermediate-view image using disparity information. To extract disparities, we modified the BP-based stereo matching algorithm, which solves the problem using Bayesian belief propagation. We propose two methods to improve the accuracy of disparity estimation. The first concerns the matching cost computation: instead of using pixel differences between corresponding pixels, we use a block-based cost computation. The second is quad-tree region dividing for disparity refinement: since a narrow and accurate search range improves estimation accuracy, we divide the image into four distinct regions and refine the disparities. Experiments show that the proposed method improves the quality of the interpolated images by 1~4 dB on average.
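The block-based matching cost mentioned in the abstract can be sketched as follows. This is a generic illustration with an assumed sum-of-absolute-differences (SAD) cost and window radius, not the authors' exact formulation:

```python
import numpy as np

def pixel_cost(left, right, x, y, d):
    """Single-pixel absolute difference between corresponding pixels,
    where disparity d maps left (x, y) to right (x - d, y)."""
    return abs(int(left[y, x]) - int(right[y, x - d]))

def block_cost(left, right, x, y, d, r=1):
    """SAD over a (2r+1)x(2r+1) block around the pixel pair.
    Aggregating over a block is less sensitive to noise than
    a single-pixel difference."""
    lb = left[y - r:y + r + 1, x - r:x + r + 1].astype(int)
    rb = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(int)
    return int(np.abs(lb - rb).sum())
```

In a BP-based matcher, costs like these form the per-pixel data term; message passing then propagates them to enforce smoothness.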

A2-2 (Time: 13:48 - 14:06)
Title: A Preprocessing Method for Active Appearance Models
Author: *Sung Joo Lee (Yonsei University, Republic of Korea), Kang Ryoung Park (Dongguk University, Republic of Korea), Jaihie Kim (Yonsei University, Republic of Korea)
Page: pp. 25 - 28
Keywords: AAM, Initialization, Facial feature localization
Abstract: Active Appearance Models (AAMs) are parametric facial models that have been widely used for facial feature localization. To localize facial features, AAMs use an optimization method that converges to local minima. As a result, AAMs cannot always localize facial features exactly when the initial locations of the facial features are far from their ground-truth locations. To solve this problem, we propose a simple preprocessing method that finds initial locations of facial features such as eyes, nostrils, and lip corners. Experimental results showed that the proposed method was robust to moderate pose variation, illumination changes, complex backgrounds, and glasses wearers.

A2-3 (Time: 14:06 - 14:24)
Title: Multi-view Depth Map Estimation Enhancing Temporal Consistency
Author: *Sang-Beom Lee, Yo-Sung Ho (Gwangju Institute of Science and Technology, Republic of Korea)
Page: pp. 29 - 32
Keywords: depth map estimation, multi-view video, temporal postprocessing, temporally weighted matching function
Abstract: In this paper, we propose a scheme for multi-view depth map estimation that enhances temporal consistency. After dividing the center image into several segments, we estimate one depth value for each segment using 3-D warping and segment-based matching techniques. In the refinement process, we apply a segment-based belief propagation algorithm. To enhance the temporal consistency and reliability of the depth map, we define a temporally weighted matching function and apply it in the initial depth estimation step. We also apply a temporal postprocessing operation to the refined depth map. Experimental results show that the final depth sequence has improved temporal consistency with reduced errors.
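The idea of a temporally weighted matching function can be illustrated with a minimal sketch: the per-disparity data cost is augmented with a penalty for deviating from the previous frame's estimate. The weight `lam` and the penalty form are assumptions for illustration, not the authors' exact function:

```python
import numpy as np

def best_disparity(data_costs, d_prev, lam=1.0):
    """Pick the disparity minimizing the data cost plus a temporal
    penalty that discourages deviation from the previous frame's
    disparity d_prev, favoring temporally consistent depth maps."""
    total = [c + lam * abs(d - d_prev) for d, c in enumerate(data_costs)]
    return int(np.argmin(total))
```

With `lam = 0` this reduces to purely spatial matching; increasing `lam` trades matching fidelity for temporal stability.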

A2-4 (Time: 14:24 - 14:42)
Title: Automatic Pixel Boosting for Face Enhancement in Dim Light
Author: *Hataikan Poncharoensil (Mae Fah Luang University, Thailand), Seri Pansang (Chiangmai Rajabhat University, Thailand)
Page: pp. 33 - 36
Keywords: image enhancement, PCA, skin probability, illumination variation
Abstract: Images captured in dim light suffer from many problems, such as illumination variation, including shadow effects. The enhancement of dim faces is difficult and challenging, yet image enhancement can be useful for face detection and recognition. In this paper we contribute a new method called 'Automatic Pixel Boosting' (APB) that accesses and boosts each pixel individually using curve fitting. Eigencurves are used to improve the enhanced images, and a skin probability ratio test is used for APB evaluation. Eigenfaces, the Karhunen-Loeve algorithm, and FaceSDK software are used to evaluate the performance of APB against Histogram Equalization on a small database.

A2-5 (Time: 14:42 - 15:00)
Title: Iris Template Protection Method Robust to Stolen Token Case
Author: *Youn Joo Lee, Jaihie Kim (Yonsei University, Republic of Korea)
Page: pp. 37 - 40
Keywords: Biometric Template Protection, Multiple Random Projection, Eigenfeature Regularization and Extraction, Stolen Token Scenario
Abstract: Biometric authentication has been used in many security systems, such as access control, computer login, and e-commerce. However, biometric authentication systems have a critical problem: once compromised, biometric templates are permanently compromised, because biometric characteristics cannot be changed. To solve this problem, the concept of cancelable biometrics was introduced. The cancelable biometrics method based on random projection is one of several methods for producing revocable biometric templates. It can easily reissue transformed biometric templates repeatedly and makes biometric features more distinguishable, since the original biometric signal is projected onto a random subspace derived from a user's token that embodies a user-specific pseudorandom number (PRN). However, the random projection method has a major limitation: in the stolen-token scenario, the recognition performance falls back to that of single biometrics. This paper presents a biometric template protection method based on random projection that is robust to the stolen-token case. We used multiple random projection (MRP) to produce cancelable biometric templates and extracted regularized eigenfeatures in order to improve the recognition performance in the stolen-token scenario. Experimental results showed that the performance of the proposed method was not greatly degraded in the stolen-token scenario.
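The token-derived random projection underlying cancelable biometrics can be sketched as below. This is a generic single-projection illustration (not the authors' MRP method); the orthonormalization step and dimensions are assumptions:

```python
import numpy as np

def cancelable_template(feature, token_seed, out_dim):
    """Project a biometric feature vector onto a pseudorandom subspace
    derived from the user's token. If the token is compromised, issuing
    a new token yields a new, unlinkable template from the same biometric."""
    rng = np.random.default_rng(token_seed)  # token acts as the PRN seed
    # Orthonormalize a random Gaussian matrix so the projection
    # approximately preserves distances between feature vectors.
    R = rng.standard_normal((len(feature), out_dim))
    Q, _ = np.linalg.qr(R)
    return feature @ Q
```

The stolen-token weakness follows directly: an attacker who knows the seed can reproduce the projection, so security reverts to that of the underlying biometric alone.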