To interpret moving faces coherently and robustly, one must resolve the inconsistencies and ambiguities observed among multiple sources of highly noisy visual measurements. In this work, we describe a method for perceptual fusion based on the estimation of observation covariance. We demonstrate the approach with a working system that performs real-time face detection and tracking in live video sequences. Perceptual fusion is used to address the difficult problem of simultaneous 2D face alignment and 3D head pose estimation during tracking.
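The abstract names the fusion mechanism only as estimation of observation covariance. As a minimal sketch, and only under the assumption that fusion amounts to inverse-covariance (precision) weighting of independent Gaussian measurements, the following Python snippet shows how pose estimates from 2D face alignment and a 3D tracker could be combined once each source's observation covariance has been estimated; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def fuse_gaussian_measurements(means, covariances):
    """Fuse independent Gaussian measurements of the same quantity by
    inverse-covariance (precision) weighting: noisier sources get less weight."""
    precisions = [np.linalg.inv(c) for c in covariances]
    fused_cov = np.linalg.inv(sum(precisions))
    fused_mean = fused_cov @ sum(p @ m for p, m in zip(precisions, means))
    return fused_mean, fused_cov

# Hypothetical example: two noisy estimates of head pose (yaw, pitch, roll) in degrees.
pose_from_alignment = np.array([12.0, -3.0, 1.5])   # from 2D face alignment
pose_from_tracker   = np.array([15.0, -1.0, 0.5])   # from a 3D model-based tracker
cov_alignment = np.diag([4.0, 4.0, 9.0])            # estimated observation covariances
cov_tracker   = np.diag([1.0, 9.0, 1.0])

pose, cov = fuse_gaussian_measurements(
    [pose_from_alignment, pose_from_tracker],
    [cov_alignment, cov_tracker],
)
print("fused pose:", pose)
print("fused covariance diagonal:", np.diag(cov))
```

In this weighting scheme a source whose estimated covariance is large for a given pose component contributes little to the fused value of that component, which is what allows the combined estimate to remain stable under highly noisy individual measurements.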
Fusion of 2D face alignment and 3D head pose estimation for robust and real-time performance
01.01.1999
333,732 bytes
Conference paper
Electronic resource
English
A Real-Time Hybrid Method for Head Pose Estimation
ASCE | 2011
Dynamic random regression forests for real-time head pose estimation
British Library Online Contents | 2013
British Library Online Contents | 2018
Face pose analysis and estimation
IEEE | 2002