This study explores the use of camera and route planner images for autonomous driving in an end-to-mid learning fashion. The overall idea is to clone human driving behavior, in particular the use of vision for ‘driving’ and of a map for ‘navigating’: humans drive with their eyes and occasionally consult a map such as Google or Apple Maps to find directions and navigate. We replicate this behavior through end-to-mid imitation learning: camera and route planner images are used to predict the desired waypoints, and a dedicated controller follows those predicted waypoints. In addition, this work emphasizes the use of minimal, inexpensive sensors such as a camera and a basic map rather than expensive sensors such as Lidar or HD maps, since humans do not need such sophisticated sensing to drive. Moreover, even after decades of research, the appropriate place for the ‘mid’ in the end-to-end approach, as well as the trade-off between data-driven and model-based approaches, is not fully understood. We therefore focus on the end-to-mid learning approach and attempt to identify a reasonable place for the ‘mid’ in the end-to-end pipeline.
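To make the end-to-mid idea concrete, the sketch below shows one plausible shape of such a pipeline: a network that consumes a camera image and a route planner image and regresses a short sequence of temporal waypoints (the ‘mid’ representation), followed by a classical tracking step for actuation. All details here are illustrative assumptions rather than the paper’s actual design: the layer sizes, the number of waypoints, the wheelbase value, and the pure-pursuit-style tracker are not taken from the source.

import math

import torch
import torch.nn as nn


class WaypointPredictor(nn.Module):
    """Regress K temporal (x, y) waypoints from a camera image and a route planner image.

    Illustrative sketch only; the paper's network architecture is not specified here.
    """

    def __init__(self, num_waypoints=5):
        super().__init__()
        self.k = num_waypoints

        def encoder():
            # Small convolutional encoder, applied separately to each input stream.
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )

        self.cam_enc = encoder()
        self.map_enc = encoder()
        # Fused features regress (x, y) offsets for each of the K waypoints.
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2 * num_waypoints),
        )

    def forward(self, camera, route_map):
        feats = torch.cat([self.cam_enc(camera), self.map_enc(route_map)], dim=1)
        return self.head(feats).view(-1, self.k, 2)


def steering_from_waypoint(waypoint, wheelbase=2.5):
    # Hypothetical stand-in for the 'dedicated control': pure-pursuit steering
    # toward one predicted waypoint (x forward, y lateral, vehicle frame, metres).
    x, y = waypoint
    lookahead_sq = x * x + y * y
    return math.atan2(2.0 * wheelbase * y, lookahead_sq)


# Imitation training would minimize the distance between predicted and
# human-driven waypoints, e.g.:
#   model = WaypointPredictor()
#   loss = nn.functional.l1_loss(model(camera_batch, map_batch), human_waypoints)

The design choice this sketch illustrates is that learning stops at the waypoint level: the network imitates where a human would drive, while a separate, interpretable controller decides how to actuate the vehicle to get there.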
Predicting Desired Temporal Waypoints from Camera and Route Planner Images using End-To-Mid Imitation Learning
SAE Technical Papers
SAE WCX Digital Summit, 2021
2021-04-06
Conference paper
English
British Library Conference Proceedings, 2021