Methods and systems for training an autonomous driving system using a vision-language planning (VLP) model. Image data containing information about agents in the environment external to a vehicle is obtained from a vehicle-mounted camera. Through image processing, the system identifies these agents within the environment. A Bird's Eye View (BEV) representation of the surroundings is then generated, encoding the spatiotemporal information associated with the vehicle and the identified agents. The VLP machine learning model is executed by extracting vision-based planning features from the BEV representation and by receiving or generating textual information characterizing attributes of the vehicle within the environment, from which text-based planning features are extracted. To improve model performance, a contrastive learning model establishes similarities between the vision-based and text-based planning features, and a predicted trajectory is output based on these similarities.
SYSTEMS AND METHODS FOR VISION-LANGUAGE PLANNING (VLP) FOUNDATION MODELS FOR AUTONOMOUS DRIVING
2025-05-15
Patent
Electronic Resource
English
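A minimal Python sketch of the contrastive alignment step summarized in the abstract appears below. It assumes PyTorch, and every name in it (VLPContrastiveSketch, vision_proj, text_proj, traj_head) and every dimension is a hypothetical stand-in rather than the patent's actual implementation: matched vision/text feature pairs are pulled together with a symmetric InfoNCE objective, and a trajectory is decoded from the language-aligned vision features.

# Hedged sketch of the contrastive vision-text alignment step; all module
# names and dimensions are illustrative assumptions, not from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VLPContrastiveSketch(nn.Module):
    def __init__(self, bev_dim=256, text_dim=768, embed_dim=128, horizon=6):
        super().__init__()
        # Project BEV-derived vision planning features and text-based
        # planning features into a shared embedding space.
        self.vision_proj = nn.Linear(bev_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)
        # Learnable temperature for the contrastive loss (CLIP-style init).
        self.log_temp = nn.Parameter(torch.tensor(0.07).log())
        # Simple trajectory head: predicts (x, y) waypoints over a horizon.
        self.traj_head = nn.Linear(embed_dim, horizon * 2)
        self.horizon = horizon

    def forward(self, bev_feats, text_feats):
        # Normalize projected features so dot products are cosine similarities.
        v = F.normalize(self.vision_proj(bev_feats), dim=-1)   # (B, D)
        t = F.normalize(self.text_proj(text_feats), dim=-1)    # (B, D)
        # Pairwise similarity matrix between vision and text embeddings.
        logits = v @ t.T / self.log_temp.exp()                 # (B, B)
        # Symmetric InfoNCE: matched vision/text pairs lie on the diagonal.
        targets = torch.arange(v.size(0), device=v.device)
        loss = (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.T, targets)) / 2
        # Decode a trajectory from the language-aligned vision features.
        traj = self.traj_head(v).view(-1, self.horizon, 2)     # (B, H, 2)
        return traj, loss

# Usage with random stand-ins for encoder outputs:
model = VLPContrastiveSketch()
bev_feats = torch.randn(4, 256)   # vision-based planning features from a BEV encoder
text_feats = torch.randn(4, 768)  # text-based planning features from a language model
traj, contrastive_loss = model(bev_feats, text_feats)
print(traj.shape, contrastive_loss.item())

The learnable temperature follows the common CLIP-style convention; in a full system, bev_feats and text_feats would come from the BEV encoder and a language-model text encoder rather than random tensors.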