Geometry based lip reading system using Multi Dimension Dynamic Time Warping
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: IEEE, 2012
Subjects:
Online Access: http://umpir.ump.edu.my/id/eprint/26933/ http://umpir.ump.edu.my/id/eprint/26933/1/Geometry%20based%20lip%20reading%20system%20using%20Multi%20Dimension%20Dynamic%20Time%20Warping.pdf
Summary: This paper describes an automatic lip reading system consisting of two main modules: 1) a pre-processing module able to extract lip geometry information from the video sequence, and 2) a classification module to identify the visual speech based on dynamic lip movements. The recognition performance of the proposed system has been assessed on the recognition of the English digits 0 to 9 as spoken by the speakers in the video sequences available in the CUAVE database. Extraction of lip geometry features was carried out using a combination of a skin color filter, a border-following algorithm and a convex hull approach. The proposed method was compared with the popular 'snake' technique and was found to improve lip shape extraction performance for the database studied. Lip geometry features including height, width, ratio, area, perimeter and various combinations of these features were evaluated to determine which performs best when representing speech in the visual domain, under three separate classification methods, namely optical flow, Dynamic Time Warping (DTW) and a new approach termed Multi-Dimensional DTW. Experiments show that the proposed system is capable of a recognition performance of 68% using just lip height, lip width and the ratio of these features, demonstrating that the system has the potential to be incorporated in a multimodal speech recognition system for use in noisy environments.
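The pre-processing stage described in the abstract (skin color filter, border-following contour tracing, convex hull, then geometric measures) can be illustrated with a minimal per-frame sketch. This is not the paper's implementation: the use of OpenCV, the function name `lip_geometry_features` and the HSV threshold values are assumptions for illustration only.

```python
# Illustrative sketch of the described pipeline: colour filter -> border-following
# contour extraction -> convex hull -> geometric features.
# HSV thresholds below are placeholders, not values from the paper.
import cv2
import numpy as np

LIP_HSV_LOWER = np.array([160, 60, 60])    # assumed lower bound for lip colour
LIP_HSV_UPPER = np.array([180, 255, 255])  # assumed upper bound

def lip_geometry_features(frame_bgr):
    """Return (height, width, ratio, area, perimeter) of the largest lip-coloured region."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LIP_HSV_LOWER, LIP_HSV_UPPER)          # skin/lip colour filter
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,        # border-following algorithm
                                   cv2.CHAIN_APPROX_SIMPLE)        # (OpenCV 4.x return signature)
    if not contours:
        return None
    lip = max(contours, key=cv2.contourArea)                       # keep the largest blob
    hull = cv2.convexHull(lip)                                     # convex hull of the lip contour
    x, y, w, h = cv2.boundingRect(hull)
    return (h, w, h / float(w),
            cv2.contourArea(hull),                                 # area
            cv2.arcLength(hull, True))                             # perimeter
```

Applied to every frame of an utterance, this yields a sequence of feature vectors that the classification module can compare across videos.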
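The Multi-Dimensional DTW classifier mentioned in the abstract can likewise be sketched generically: the local cost is a distance between whole per-frame feature vectors (e.g. height, width and their ratio) rather than a single scalar feature. The nearest-neighbour labelling rule shown here is an assumption for illustration; the paper's exact decision rule is not given in the abstract.

```python
# Generic multi-dimensional DTW sketch: sequences are (frames x features) arrays,
# local cost is the Euclidean distance between feature vectors.
import numpy as np

def multidim_dtw(seq_a, seq_b):
    """DTW alignment cost between two (frames x features) sequences."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # vector-valued local cost
            acc[i, j] = cost + min(acc[i - 1, j],        # insertion
                                   acc[i, j - 1],        # deletion
                                   acc[i - 1, j - 1])    # match
    return acc[n, m]

def classify(test_seq, train_seqs, train_labels):
    """Assumed nearest-neighbour rule: pick the digit whose reference sequence aligns most cheaply."""
    costs = [multidim_dtw(test_seq, ref) for ref in train_seqs]
    return train_labels[int(np.argmin(costs))]
```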