This paper addresses the problem of extracting depth information for non-rigid, dynamic 3D scenes from multiple synchronized video streams. Three main issues are discussed in this context: (i) temporally consistent depth estimation, (ii) sharp depth-discontinuity estimation around object boundaries, and (iii) enforcement of the global visibility constraint. We present a framework in which the scene is modeled as a collection of 3D piecewise planar surface patches induced by color-based image segmentation. This representation is continuously estimated using an incremental formulation in which the 3D geometric, motion, and global visibility constraints are enforced over space and time. The proposed algorithm optimizes a cost function that incorporates the spatial color consistency constraint and a smooth scene motion model.
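
The following is a minimal sketch of the kind of per-patch cost the abstract describes: a spatial color-consistency term across the synchronized views combined with a smooth-motion penalty on a patch's plane parameters between consecutive frames. All function names, the plane parameterization, and the weight lam are illustrative assumptions, not the authors' exact formulation.

# Hypothetical sketch (not the paper's exact cost function): combine a
# color-consistency term across synchronized views with a smooth-motion
# penalty on the patch's plane parameters between frames.
import numpy as np

def color_consistency(patch_pixels_per_view):
    """Mean per-pixel color variance across synchronized views.

    patch_pixels_per_view: array of shape (n_views, n_pixels, 3) holding the
    colors sampled in each view at the locations predicted by the current
    plane hypothesis for the patch.
    """
    variance = patch_pixels_per_view.var(axis=0)   # (n_pixels, 3)
    return variance.mean()

def motion_smoothness(plane_t, plane_t_minus_1):
    """Penalize abrupt changes of the patch's plane parameters over time."""
    return np.sum((plane_t - plane_t_minus_1) ** 2)

def patch_cost(patch_pixels_per_view, plane_t, plane_t_minus_1, lam=0.1):
    """Total cost for one planar patch: color consistency + lam * motion term."""
    return color_consistency(patch_pixels_per_view) + lam * motion_smoothness(plane_t, plane_t_minus_1)

# Toy usage: two views, 50 pixels per patch, plane parameterized as
# d(x, y) = a*x + b*y + c (an assumed parameterization).
rng = np.random.default_rng(0)
pixels = rng.random((2, 50, 3))
plane_prev = np.array([0.01, -0.02, 2.5])
plane_curr = np.array([0.012, -0.019, 2.48])
print(patch_cost(pixels, plane_curr, plane_prev))

In such a formulation, the smooth-motion term is what links the depth estimates across frames, which is how temporal consistency would be encouraged rather than estimating each frame independently.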


    Title:
    Dynamic depth recovery from multiple synchronized video streams

    Contributors:
    Hai Tao (author) / Sawhney, H.S. (author) / Kumar, R. (author)

    Publication date:
    2001-01-01

    Size:
    1,236,434 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English



    Dynamic Depth Recovery from Multiple Synchronized Video Streams

    Tao, H. / Sawhney, H. S. / Kumar, R. et al. | British Library Conference Proceedings | 2001


    Dynamic depth recovery from unsynchronized video streams

    Chunxiao Zhou / Hai Tao | IEEE | 2003


    Dynamic Depth Recovery from Unsynchronized Video Streams

    Zhou, C. / Tao, H. / IEEE | British Library Conference Proceedings | 2003



    Automatic Tracking of Human Motion in Indoor Scenes Across Multiple Synchronized Video Streams

    Cai, Q. / Aggarwal, J. K. / IEEE; Computer Society | British Library Conference Proceedings | 1998