Voice-based human-machine interfaces are becoming a key feature of next-generation intelligent vehicles. For navigation dialogue systems, it is desirable to understand a driver's spoken language in a natural way. This study proposes a two-stage framework, which first converts the audio stream into text sentences through Automatic Speech Recognition (ASR), followed by Natural Language Processing (NLP) to retrieve the navigation-associated information. The NLP stage is based on a Deep Neural Network (DNN) framework, which contains sentence-level sentiment analysis and word/phrase-level context extraction. Experiments are conducted using the CU-Move in-vehicle speech corpus. Results indicate that the DNN architecture is effective for navigation dialogue language understanding, although NLP performance is affected by ASR errors. Overall, it is expected that the proposed RNN-based NLP approach, with the corresponding reduced vocabulary designed for navigation-oriented tasks, will benefit the development of advanced intelligent vehicle human-machine interfaces.
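By way of illustration, the following is a minimal sketch (not the authors' implementation) of the two-stage idea described in the abstract: an ASR word hypothesis is passed to an RNN that jointly produces a sentence-level label (e.g. a navigation intent) and word/phrase-level context tags. The model name NavSLU, the vocabulary size, the tag sets, and all layer dimensions are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical sketch of a two-stage ASR -> RNN-based NLP pipeline for
# navigation dialogue understanding. All sizes and names are assumptions.
import torch
import torch.nn as nn


class NavSLU(nn.Module):
    """RNN that jointly predicts a sentence-level label and per-word tags."""

    def __init__(self, vocab_size, num_intents, num_word_tags,
                 emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                           bidirectional=True)
        # Sentence-level head: one label per utterance.
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        # Word-level head: one tag per token (context extraction).
        self.tag_head = nn.Linear(2 * hidden_dim, num_word_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)               # (batch, seq, emb_dim)
        outputs, _ = self.rnn(x)                # (batch, seq, 2 * hidden_dim)
        intent_logits = self.intent_head(outputs.mean(dim=1))
        tag_logits = self.tag_head(outputs)
        return intent_logits, tag_logits


# Example usage with a small, navigation-oriented reduced vocabulary.
model = NavSLU(vocab_size=500, num_intents=5, num_word_tags=10)
asr_hypothesis = torch.randint(0, 500, (1, 8))   # stand-in for ASR output ids
intent_logits, tag_logits = model(asr_hypothesis)
print(intent_logits.shape, tag_logits.shape)     # (1, 5) and (1, 8, 10)
```

In such a design, the mean-pooled recurrent states feed the utterance-level classifier while the per-timestep states feed the word-level tagger, so both outputs share the same ASR-derived input representation.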
Navigation-orientated natural spoken language understanding for intelligent vehicle dialogue
01.06.2017
925685 bytes
Article (Conference)
Electronic Resource
English
Navigation-Orientated Natural Spoken Language Understanding for Intelligent Vehicle Dialogue
British Library Conference Proceedings | 2017
Effect of Spoken Dialogue System Using Natural Language Understanding
British Library Online Contents | 2015
Europäisches Patentamt | 2021
Spoken Dialogue Technologies for Drivers
British Library Conference Proceedings | 2002