Sign language is the major mode of communication between hearing-impaired or mute people and others. Previously, most sign language recognition systems were designed simply to recognize hand signs and convey them as text. The proposed model, however, aims to provide speech output for mute users. First, CNN (Convolutional Neural Network) models are trained to recognize hand gestures for sign language and facial emotions, followed by training an emotion-to-speech model. Finally, the recognized hand gestures and facial emotions are combined to produce the corresponding emotion and speech.
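
As an illustrative aside, a minimal TensorFlow/Keras sketch of a CNN classifier of the kind the abstract mentions (for facial emotions or hand gestures) is given below. The input shape, class count, layer sizes, and training setup are assumptions for demonstration only, not the architecture reported in the paper.

# Illustrative sketch only: the paper's exact architecture, dataset, and
# hyperparameters are not stated here, so the input shape, class count,
# and layer sizes below are assumptions for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7            # assumed number of emotion (or gesture) classes
INPUT_SHAPE = (48, 48, 1)  # assumed grayscale face/hand crops

def build_cnn_classifier():
    """Build a small CNN image classifier (illustrative only)."""
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Print the layer summary; training data would be supplied separately.
    build_cnn_classifier().summary()

A second CNN of similar form could be trained for the other modality, with the two predictions combined downstream to drive an emotion-aware speech output, as the abstract describes.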


    Title:
    CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions

    Contributors:
    Avula, Himaja (author) / R, Ranjith (author) / S Pillai, Anju (author)

    Publication date:
    2022-12-01

    Size:
    786797 bytes

    Type of media:
    Conference paper

    Type of material:
    Electronic Resource

    Language:
    English




    Multi-modal emotion analysis from facial expressions and electroencephalogram

    Huang, Xiaohua / Kortelainen, Jukka / Zhao, Guoying et al. | British Library Online Contents | 2016


    Automatic Human Emotion Recognition System using Facial Expressions with Convolution Neural Network

    Madupu, Ram Kumar / Kothapalli, Chiranjeevi / Yarra, Vasanthi et al. | IEEE | 2020



    Enhancing Online Learning with Automated Emotion Identification using Facial Expressions

    Jagadeesh, M. / L, Zubair Ali / S, Vishnu et al. | IEEE | 2023