Sign language is a visual mode of communication used predominantly by people with hearing and speech disabilities. Unfortunately, sign language users face challenges in their daily and professional interactions because the general population does not understand sign language. Furthermore, the variation in sign languages across nations presents an additional barrier, complicating communication between users of different sign languages. To overcome these communication barriers, this research article proposes a novel methodology: a system that translates one sign language into another by leveraging sentence structure and generates speech output. Several deep learning (DL) models, namely a custom CNN, VGG16, and ResNet, are employed, and their performance is compared using accuracy, precision, recall, and F1-score. We train our models on two separate sign language datasets: the Bangla Sign Language (BdSL) dataset and the American Sign Language (ASL) dataset. To address the need for a comprehensive dataset covering Bangla characters and numbers, we have also developed a dataset of 2300 Bangla characters and numbers. Among the three models, the CNN performs best, achieving an accuracy of 99.98%. The proposed system allows people with hearing and speech disabilities to communicate with one another without expensive equipment or human interpreters, and it promotes seamless, effective communication between users of different sign languages.
NeuralGesture Communication: Translating One Sign Language to Another Sign Language Using Deep Learning Models and gTTS
24.06.2024
4,256,732 bytes
Conference paper
Electronic resource
English
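The abstract above describes a classify-then-speak pipeline: a CNN predicts the sign class from an image, and the predicted label is vocalized with gTTS. The paper's exact architecture, input resolution, and class count are not given in this record, so the following is a minimal sketch under stated assumptions: a small Keras CNN with illustrative layer sizes, a hypothetical 46-class label space (e.g. Bangla characters plus digits), and a random array standing in for a real BdSL/ASL frame. Only the gTTS calls (`gTTS(text=..., lang=...)` and `.save(...)`) reflect the real library named in the title.

```python
# Minimal sketch (not the paper's exact implementation) of the described
# pipeline: a small CNN classifies a sign image and gTTS speaks the label.
import numpy as np
import tensorflow as tf
from gtts import gTTS  # Google Text-to-Speech wrapper named in the title

IMG_SIZE = (64, 64)  # assumption: input resolution is not given in the abstract
NUM_CLASSES = 46     # assumption: e.g. Bangla characters plus digits

def build_cnn(num_classes: int) -> tf.keras.Model:
    """Small CNN baseline; the paper also compares VGG16 and ResNet."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*IMG_SIZE, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def speak(text: str, lang: str = "en", out_path: str = "sign.mp3") -> None:
    """Vocalize the predicted label with gTTS, as the abstract describes."""
    gTTS(text=text, lang=lang).save(out_path)

if __name__ == "__main__":
    model = build_cnn(NUM_CLASSES)
    # Stand-in for a real BdSL/ASL frame: one random 64x64 RGB image.
    frame = np.random.rand(1, *IMG_SIZE, 3).astype("float32")
    class_id = int(np.argmax(model.predict(frame), axis=-1)[0])
    speak(f"predicted sign class {class_id}")
```

In a full evaluation, predictions on a held-out test set would be compared against ground-truth labels, for example with scikit-learn's `precision_recall_fscore_support`, to obtain the precision, recall, and F1-score the abstract reports alongside accuracy.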