Continuous and highly accurate positioning of land vehicles remains a substantial challenge in urban GNSS-denied environments. Although the vehicle motion model (VMM) can be fused to mitigate positioning error, the problem of error accumulation over distance remains. Hence, we introduce a multi-information integrated navigation approach that leverages visual semantics in conjunction with a lightweight high-definition (LHD) map for absolute position refinement. This method enhances the navigation solution by integrating a vehicle-mounted GNSS/INS system with the precise localization capabilities of road semantics, such as lane lines and poles, observed through camera vision. We establish a comprehensive road semantic measurement model in the pixel frame so that raw pixel data can be used directly in a tightly coupled integration. Additionally, we examine the distinct contributions of lane lines and poles to the estimation of navigation error states using a simplified measurement model. Field tests with land vehicles demonstrate the efficacy of the proposed method: the longitudinal and lateral positioning errors decrease to 0.43 m and approximately 0.27 m, respectively, a significant improvement attributable to road semantic cues.
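The abstract describes forming measurements directly in the pixel frame: semantic landmarks from the LHD map (lane-line points, poles) are projected into the camera image and compared with their detected pixel locations inside a tightly coupled error-state filter. The sketch below illustrates one plausible form of such an update, assuming a pinhole camera model, a simplified position-only error state, and illustrative intrinsics and noise values; none of the names or numbers are taken from the paper.

```python
# Minimal sketch (not the authors' code): project an LHD-map semantic point into
# the pixel frame and apply an error-state EKF update on the pixel residual.
# Intrinsics, extrinsics, and noise values below are assumed for illustration.
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) of the forward camera.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project_to_pixel(p_map, R_wc, t_wc):
    """Project a 3-D map point (world frame) into the pixel frame."""
    p_cam = R_wc.T @ (p_map - t_wc)        # world -> camera frame
    uv = K @ (p_cam / p_cam[2])            # perspective division
    return uv[:2], p_cam

def ekf_update(x_err, P, z_px, p_map, R_wc, t_wc, px_sigma=2.0):
    """Error-state EKF update with one pixel observation of a map point.

    x_err : 3-vector position error state (illustrative; a full state would
            also carry velocity, attitude, and sensor-bias errors).
    z_px  : detected pixel coordinate of the semantic feature (e.g. lane point).
    """
    z_pred, p_cam = project_to_pixel(p_map, R_wc, t_wc)
    X, Y, Z = p_cam
    # Standard pinhole Jacobian of pixel coordinates w.r.t. the camera-frame point.
    J_cam = np.array([[K[0, 0] / Z, 0.0, -K[0, 0] * X / Z**2],
                      [0.0, K[1, 1] / Z, -K[1, 1] * Y / Z**2]])
    # Chain rule: camera-frame point w.r.t. vehicle position error is -R_wc^T.
    H = J_cam @ (-R_wc.T)
    R = (px_sigma**2) * np.eye(2)          # pixel measurement noise
    S = H @ P @ H.T + R
    Kk = P @ H.T @ np.linalg.inv(S)
    x_err = x_err + Kk @ (z_px - z_pred)   # correct with the pixel innovation
    P = (np.eye(3) - Kk @ H) @ P
    return x_err, P

# Toy usage: a lane-line point 15 m ahead, detected 3 px right of its prediction.
R_wc, t_wc = np.eye(3), np.zeros(3)
p_map = np.array([0.0, 0.0, 15.0])
x_err, P = ekf_update(np.zeros(3), 0.5 * np.eye(3),
                      np.array([643.0, 360.0]), p_map, R_wc, t_wc)
print(x_err)
```

In this kind of formulation the raw pixel detections constrain the navigation error states directly, rather than being converted first into a standalone position fix, which is what distinguishes a tightly coupled integration from a loosely coupled one.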
Road Semantic-Enhanced Land Vehicle Integrated Navigation in GNSS Denied Environments
IEEE Transactions on Intelligent Transportation Systems; 25, 12; 20889-20899
2024-12-01
3991025 bytes
Article (Journal)
Electronic Resource
English
Ultrasonic Wheel Based Aiding for Land Vehicle Navigation in GNSS Denied Environment
British Library Conference Proceedings | 2019