Computer analysis of live and dynamic movements is an area that attracts a great deal of interest. One important part of this area is the motion capture process, which can be based on appearance and facial expression estimation. The aim of this study is to reconstruct 3D facial movements from facial expressions estimated in video image sequences and to apply them to a computer-generated 3D face. We propose an algorithm that classifies a given image sequence into one of the motion frames. The contributions of this work lie mainly in two aspects. First, an optical flow algorithm is used for feature extraction; instead of computing the flow between two subsequent images (two subsequent frames of a video), the flow between each image and the neutral state is used. Second, we build a multilayer perceptron network whose inputs are the matrices obtained from the optical flow algorithm, modeling a mapping between a person's movements and the movement categories in the database. A three-dimensional avatar, built from Kinect data, is used to render the face movements in a graphical environment. To evaluate the proposed method, several videos were recorded and the available modes were compared with the discovered modes. The results indicate that the proposed method is effective.
https://www.edusoft.ro/brain/index.php/brain/article/view/814/920
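
The abstract's pipeline (optical flow computed against a fixed neutral-state frame rather than between consecutive frames, then a multilayer perceptron mapping the flow matrices to movement categories) can be sketched as follows. This is a minimal illustration only: the article does not specify which optical flow variant or network implementation it uses, so OpenCV's Farneback dense flow and scikit-learn's MLPClassifier are assumed stand-ins, and the frame size and network width are arbitrary.

    # Sketch of the described pipeline: dense optical flow between each frame
    # and a neutral-expression reference frame, flattened into a feature
    # vector and classified by a multilayer perceptron.
    # Assumptions: Farneback flow and sklearn's MLPClassifier are stand-ins,
    # not the implementations used in the article.
    import cv2
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def flow_features(frame, neutral, size=(64, 64)):
        """Optical flow between a frame and the neutral-state image, flattened."""
        a = cv2.cvtColor(cv2.resize(neutral, size), cv2.COLOR_BGR2GRAY)
        b = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            a, b, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        return flow.reshape(-1)  # (64 * 64 * 2,) feature vector

    def train(frames, labels, neutral):
        """Fit the MLP on labelled frames using a single neutral reference frame."""
        X = np.stack([flow_features(f, neutral) for f in frames])
        clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500)
        clf.fit(X, labels)
        return clf

    def classify(clf, frame, neutral):
        """Assign a new frame to one of the database movement categories."""
        return clf.predict(flow_features(frame, neutral)[None, :])[0]

In this sketch the predicted category would then drive the corresponding expression on the Kinect-derived 3D avatar, which the article handles in its graphical environment.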


    Title :

    BRAIN. Broad Research in Artificial Intelligence and Neuroscience - A Facial Motion Capture System Based on Neural Network Classifier Using RGB-D Data


    Contributors:

    Publication date :

    2018-05-01


    Remarks:

    oai:zenodo.org:1245909
    BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 9(2), 139-154



    Type of media :

    Article (Journal)


    Type of material :

    Electronic Resource


    Language :

    English



    Classification :

    DDC:    629