Sign language is the primary mode of communication between hearing- or speech-impaired people and others. Most earlier sign language recognition systems were designed simply to recognize hand signs and render them as text. The proposed model, in contrast, aims to give speech to mute users: hand gestures for sign language recognition and facial emotions are first learned with a CNN (Convolutional Neural Network), an emotion-to-speech model is then trained, and finally the recognized hand gestures and facial emotions are combined to produce emotion-aware speech.
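The abstract only outlines the pipeline, so a short sketch may help make it concrete. The following is a hypothetical PyTorch example (not the authors' code) of the kind of CNN classifier the abstract describes, mapping fixed-size grayscale images of hand gestures or facial expressions to class scores; the input size, layer widths, and class count are all illustrative assumptions.

    # Hypothetical sketch (not the authors' code): a minimal CNN of the kind
    # the abstract describes, classifying grayscale images of hand gestures or
    # facial expressions. All layer sizes and the class count are assumptions.
    import torch
    import torch.nn as nn

    class EmotionGestureCNN(nn.Module):
        def __init__(self, num_classes: int = 7):  # e.g. 7 basic emotions (assumed)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 48x48 grayscale input (assumed)
                nn.ReLU(),
                nn.MaxPool2d(2),                             # -> 32 x 24 x 24
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # -> 64 x 12 x 12
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 12 * 12, 128),
                nn.ReLU(),
                nn.Linear(128, num_classes),                 # class logits
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    # Quick shape check on a dummy batch of four 48x48 grayscale images.
    if __name__ == "__main__":
        model = EmotionGestureCNN()
        logits = model(torch.randn(4, 1, 48, 48))
        print(logits.shape)  # torch.Size([4, 7])

In the system the abstract describes, two such networks (one for hand gestures, one for facial expressions) would presumably feed their predictions into the separate emotion-to-speech stage.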


    Title:

    CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions


    Contributors:
    Avula, Himaja (author) / R, Ranjith (author) / S Pillai, Anju (author)


    Publication date:

    1 December 2022


    Format / Extent:

    786,797 bytes




    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English




    Similar titles:

    Multi-modal emotion analysis from facial expressions and electroencephalogram

    Huang, Xiaohua / Kortelainen, Jukka / Zhao, Guoying et al. | British Library Online Contents | 2016


    Automatic Human Emotion Recognition System using Facial Expressions with Convolution Neural Network

    Madupu, Ram Kumar / Kothapalli, Chiranjeevi / Yarra, Vasanthi et al. | IEEE | 2020



    Enhancing Online Learning with Automated Emotion Identification using Facial Expressions

    Jagadeesh, M. / L, Zubair Ali / S, Vishnu et al. | IEEE | 2023