This paper presents cloud-based, deep-learning-driven AR glasses for real-time, accurate sign language recognition and communication. The system comprises three parts: information acquisition, algorithm processing, and output. In the acquisition stage, the glasses' camera captures sign language images and transmits the data to a K210 microprocessor for further processing. The algorithm stage combines YOLO (You Only Look Once) detection with an RNN to handle object occlusion and motion blur, improving the accuracy and robustness of target detection. The output stage renders the recognized sign language on the glasses' AR display. Experimental results show that the proposed system recognizes sign language accurately and effectively, opening new possibilities for barrier-free communication between deaf and hearing individuals.
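The abstract describes a detect-then-classify pipeline: per-frame hand regions found by YOLO are aggregated over time by an RNN, which smooths over occlusion and motion blur. The paper itself gives no code, so the following is only a minimal sketch of that pipeline in PyTorch; the small CNN encoder (standing in for features of the YOLO-detected crop), the GRU, the layer sizes, and the class count are all illustrative assumptions, not the authors' implementation.

# Minimal sketch of a detect-then-classify sign recognizer (illustrative only).
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    def __init__(self, num_classes: int = 50, feat_dim: int = 128):
        super().__init__()
        # Frame encoder: stands in for features of the YOLO-detected hand crop.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # RNN aggregates per-frame features, smoothing over occlusion and blur.
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) sequence of detected hand crops.
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, hidden = self.rnn(feats)          # final hidden state summarizes the clip
        return self.head(hidden.squeeze(0))  # (batch, num_classes) sign logits

# Example: two 8-frame clips of 64x64 hand crops.
logits = SignRecognizer()(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 50])

On the actual device, the heavy detection and sequence models would presumably run in the cloud, with the K210 handling capture and the AR display handling output, though the abstract does not specify this split.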
AR glasses for sign language recognition based on deep learning
2023-10-11
1,576,663 bytes
Conference paper
Electronic Resource
English