Sign language is the major mode of communication between hearing-impaired or mute people and others. Previously, most sign language recognition systems were designed simply to recognize hand signs and convey them as text. The proposed model, however, aims to provide speech for mute users. First, hand gestures for sign language recognition and facial emotions are each trained using a CNN (Convolutional Neural Network), followed by the training of an emotion-to-speech model. Finally, the hand-gesture and facial-emotion outputs are combined to produce emotionally expressive speech.
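A minimal sketch (not the authors' code) of the kind of CNN classifier the abstract describes, usable for either the hand-gesture or the facial-emotion branch of the pipeline; the input size (64x64 grayscale) and class counts are illustrative assumptions only.

```python
# Illustrative sketch, assuming Keras/TensorFlow; layer sizes and class counts are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_classes: int, input_shape=(64, 64, 1)) -> tf.keras.Model:
    """Stacked conv/pool blocks followed by a dense softmax classifier."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Two independent classifiers, mirroring the described pipeline: one for
# sign-language hand gestures, one for facial emotions (class counts assumed).
gesture_model = build_cnn(num_classes=26)   # e.g. one class per letter sign
emotion_model = build_cnn(num_classes=7)    # e.g. seven basic emotions
```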
CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions
01.12.2022
786797 bytes
Conference paper
Electronic resource
English
Multi-modal emotion analysis from facial expressions and electroencephalogram
British Library Online Contents | 2016