This project explores action recognition with a deep learning model based on Convolutional Neural Networks, establishing the foundation for human-robot interaction in a scenario where Unmanned Aerial Vehicles (UAVs) are controlled exclusively by visual commands. The model analyzes images captured by an onboard camera and classifies them into nine categories. Each category issues a specific command based on human actions performed by individuals properly equipped with personal protective equipment. The results demonstrate the feasibility of the proposed approach and leave room for improvements aimed at its use in more complex scenarios.
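The abstract does not publish the network architecture or the command set, so the following is only a minimal sketch, assuming a small PyTorch CNN that maps one onboard-camera frame to one of nine action classes; the layer sizes and the COMMANDS names are hypothetical placeholders, not the authors' design.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# camera frame -> CNN -> one of nine action classes -> UAV command.
import torch
import torch.nn as nn

NUM_CLASSES = 9  # one class per human action / UAV command (from the abstract)

# Placeholder command names; the actual command set is not given in the paper.
COMMANDS = ["take_off", "land", "hover", "forward", "backward",
            "left", "right", "up", "down"]

class ActionCNN(nn.Module):
    """Small CNN that classifies an onboard-camera frame into one of nine actions."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head input-size agnostic
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: classify one RGB frame and look up the corresponding UAV command.
model = ActionCNN().eval()
frame = torch.rand(1, 3, 224, 224)  # stand-in for a captured camera frame
with torch.no_grad():
    action_idx = model(frame).argmax(dim=1).item()
print(f"recognized action -> command: {COMMANDS[action_idx]}")
```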
Recognizing Human Actions: A Deep Learning Model for UAV Piloting
2024-11-13
3,576,278 bytes
Conference paper
Electronic Resource
English