Video captioning aims to generate textual descriptions of video content. Risk assessment of autonomous driving vehicles has become essential for insurance companies to provide adequate coverage, in particular for emerging MaaS businesses. Insurers need to assess the risk of autonomous driving business plans with fixed routes by analyzing large amounts of driving data, including videos recorded by dash cameras and sensor signals. To make this process more efficient, generating captions for driving videos can give insurers concise information for quickly understanding the video content. A natural problem with driving video captioning is that, owing to the absence of the ego-vehicle in these egocentric videos, descriptions of latent driving behaviors are difficult to ground in specific visual cues. To address this issue, we focus on generating driving video captions with accurate behavior descriptions, and propose to incorporate in-vehicle sensors, which encapsulate the driving behavior information, to assist caption generation. We evaluate our method on the Japanese driving video captioning dataset City Traffic. The results demonstrate the effectiveness of in-vehicle sensors in improving the overall quality of generated captions, especially in producing more accurate descriptions of driving behaviors.
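The abstract gives no architectural details, but the core idea, fusing in-vehicle sensor signals with video features in an encoder-decoder captioner, can be illustrated with a minimal sketch. Everything below (module names, dimensions, GRU encoders, late fusion by concatenation) is a hypothetical PyTorch illustration of the general technique under assumed settings, not the paper's actual model.

    # Minimal sketch of sensor-aware captioning: per-frame video features
    # and in-vehicle sensor signals (e.g., speed, steering, brake) are
    # encoded separately, fused, and used to initialize an LSTM caption
    # decoder. All dimensions and the fusion scheme are assumptions.
    import torch
    import torch.nn as nn

    class SensorAwareCaptioner(nn.Module):
        def __init__(self, vocab_size, video_dim=2048, sensor_dim=8,
                     hidden_dim=512, embed_dim=300):
            super().__init__()
            self.video_enc = nn.GRU(video_dim, hidden_dim, batch_first=True)
            self.sensor_enc = nn.GRU(sensor_dim, hidden_dim, batch_first=True)
            self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, video_feats, sensor_feats, captions):
            # Encode each modality over time; keep the final hidden states.
            _, h_v = self.video_enc(video_feats)    # (1, B, H)
            _, h_s = self.sensor_enc(sensor_feats)  # (1, B, H)
            # Fuse by concatenation + projection; use as decoder init state.
            h0 = torch.tanh(self.fuse(torch.cat([h_v, h_s], dim=-1)))
            c0 = torch.zeros_like(h0)
            emb = self.embed(captions)              # (B, T, E)
            dec_out, _ = self.decoder(emb, (h0, c0))
            return self.out(dec_out)                # (B, T, vocab)

    # Toy usage with random tensors: 2 clips, 16 frames, 8 sensor channels.
    model = SensorAwareCaptioner(vocab_size=1000)
    video = torch.randn(2, 16, 2048)
    sensors = torch.randn(2, 16, 8)
    caps = torch.randint(0, 1000, (2, 12))
    logits = model(video, sensors, caps)  # (2, 12, 1000)
    print(logits.shape)

Conditioning the decoder on the fused state is only one plausible choice; the behavior cues from the sensors could equally be injected via attention at each decoding step.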
Driving Behavior Aware Caption Generation for Egocentric Driving Videos Using In-Vehicle Sensors
2021-07-11
3992274 bytes
Conference paper
Electronic Resource
English
Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems
ArXiv | 2018
Driving Behavior-Aware Advanced Driving Assistance System
European Patent Office | 2024
Vehicle–Bicyclist Dynamic Position Extracted From Naturalistic Driving Videos
Online Contents | 2016
Vehicle–Bicyclist Dynamic Position Extracted From Naturalistic Driving Videos
Online Contents | 2017