This paper addresses the problem of extracting depth information of non-rigid dynamic 3D scenes from multiple synchronized video streams. Three main issues are discussed in this context: (i) temporally consistent depth estimation, (ii) sharp depth discontinuity estimation around object boundaries, and (iii) enforcement of the global visibility constraint. We present a framework in which the scene is modeled as a collection of 3D piecewise planar surface patches induced by color-based image segmentation. This representation is continuously estimated using an incremental formulation in which the 3D geometric, motion, and global visibility constraints are enforced over space and time. The proposed algorithm optimizes a cost function that incorporates the spatial color consistency constraint and a smooth scene motion model.
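The abstract does not give the cost function in closed form, so the following is a minimal Python sketch of the idea under stated assumptions: each color segment carries a disparity plane (a, b, c), and a per-segment cost combines spatial color consistency between two rectified views with a smoothness penalty on the plane's change across frames. The function names, the disparity-plane parameterization, and the weight are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical sketch: each color segment is assigned a plane (a, b, c)
# so that disparity at pixel (x, y) is d = a*x + b*y + c. This mirrors
# the abstract's piecewise planar patches, not the paper's exact math.

def plane_disparity(plane, xs, ys):
    """Disparity of pixels (xs, ys) under plane parameters (a, b, c)."""
    a, b, c = plane
    return a * xs + b * ys + c

def color_consistency_cost(ref_img, other_img, xs, ys, disparity):
    """Sum of squared intensity differences between reference pixels and
    their disparity-shifted matches in a second rectified view."""
    xs2 = np.clip(np.round(xs - disparity).astype(int), 0, other_img.shape[1] - 1)
    diff = ref_img[ys, xs] - other_img[ys, xs2]
    return float(np.sum(diff ** 2))

def motion_smoothness_cost(plane, prev_plane, weight=1.0):
    """Penalize abrupt changes of a segment's plane between frames,
    a stand-in for the abstract's smooth scene motion model."""
    return weight * float(np.sum((np.asarray(plane) - np.asarray(prev_plane)) ** 2))

def segment_cost(plane, prev_plane, ref_img, other_img, xs, ys, weight=1.0):
    """Per-segment cost: spatial color consistency plus motion smoothness."""
    d = plane_disparity(plane, xs, ys)
    return (color_consistency_cost(ref_img, other_img, xs, ys, d)
            + motion_smoothness_cost(plane, prev_plane, weight))

if __name__ == "__main__":
    # Toy check: the second view is the reference shifted left by 3 pixels,
    # so a fronto-parallel plane with c = 3 zeroes the color term and only
    # the motion term (0.2**2 = 0.04) remains.
    rng = np.random.default_rng(0)
    ref = rng.random((48, 64))
    other = np.roll(ref, -3, axis=1)
    ys, xs = np.mgrid[10:20, 10:30]
    cost = segment_cost((0.0, 0.0, 3.0), (0.0, 0.0, 2.8),
                        ref, other, xs.ravel(), ys.ravel())
    print(cost)
```

In a full system this per-segment cost would be minimized over the plane parameters of every segment and accumulated across all views, with the previous frame's estimate serving as the incremental starting point; the two-view, grayscale setup here is chosen only to keep the sketch self-contained.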
Dynamic depth recovery from multiple synchronized video streams
2001-01-01
1,236,434 bytes
Conference paper
Electronic Resource
English
Dynamic Depth Recovery from Multiple Synchronized Video Streams | British Library Conference Proceedings | 2001
Dynamic Depth Recovery from Unsynchronized Video Streams | IEEE | 2003
Dynamic Depth Recovery from Unsynchronized Video Streams | British Library Conference Proceedings | 2003
Automatic Tracking of Human Motion in Indoor Scenes Across Multiple Synchronized Video Streams | British Library Conference Proceedings | 1998