Systems and methods are provided to select the most typical pronunciation of a location name on a map from a plurality of user pronunciations. A server generates a reference speech model from the user pronunciations, compares each user pronunciation with the speech model, and selects a pronunciation based on the comparison. Alternatively, the server computes the distance between each user pronunciation and every other user pronunciation and selects a pronunciation based on those distances. The server then annotates the map with the selected pronunciation and provides an audio output of the location name to a user device upon a user's request.
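
    By way of illustration, the alternative (pairwise-distance) selection can be sketched as a medoid computation over acoustic feature sequences. The Python sketch below assumes each recording has already been converted to a sequence of feature vectors (e.g. MFCCs); the use of dynamic time warping (DTW) as the distance measure, and the names dtw_distance and select_typical_pronunciation, are illustrative assumptions rather than details disclosed in the patent.

        import numpy as np

        def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
            """DTW alignment cost between feature sequences a (T1, D) and b (T2, D)."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
                    cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                         cost[i, j - 1],      # skip a frame of b
                                         cost[i - 1, j - 1])  # match frames
            return float(cost[n, m])

        def select_typical_pronunciation(recordings):
            """Return the index of the medoid recording: the one whose
            summed DTW distance to every other recording is smallest."""
            n = len(recordings)
            totals = np.zeros(n)
            for i in range(n):
                for j in range(i + 1, n):
                    d = dtw_distance(recordings[i], recordings[j])
                    totals[i] += d  # distance is symmetric, so credit both sides
                    totals[j] += d
            return int(np.argmin(totals))

        # Hypothetical usage with three recordings as (frames x 13) MFCC arrays:
        # best = select_typical_pronunciation([feats_a, feats_b, feats_c])

    The model-based variant described first could be realized analogously by building a reference (for instance, an averaged aligned feature template, or a small statistical model trained on the pooled recordings) and scoring each recording against it.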



    Title :

    Annotating maps with user-contributed pronunciations


    Contributors:
    CHECHIK GAL (author)

    Publication date :

    2017-06-06


    Type of media :

    Patent


    Type of material :

    Electronic Resource


    Language :

    English


    Classification :

    IPC:    G10L SPEECH ANALYSIS OR SYNTHESIS / G08G TRAFFIC CONTROL SYSTEMS / H04M TELEPHONIC COMMUNICATION



    Similar items :

    COLLECTING USER-CONTRIBUTED DATA RELATING TO A NAVIGABLE NETWORK

    VITALE VICENZO / SILSBURY DAVID / SHAIKH ZISHAN AHMED et al. | European Patent Office | 2022

    COLLECTING USER-CONTRIBUTED DATA RELATING TO A NAVIGABLE NETWORK

    VITALE VICENZO / SILSBURY DAVID / SHAIKH ZISHAN et al. | European Patent Office | 2021

    Large-scale, dense city reconstruction from user-contributed photos

    Irschara, A. / Zach, C. / Klopschitz, M. et al. | British Library Online Contents | 2012


    COLLECTING USER-CONTRIBUTED DATA RELATING TO A NAVIGABLE NETWORK

    VITALE VICENZO / SILSBURY DAVID / SHAIKH ZISHAN AHMED et al. | European Patent Office | 2024

    Annotating high definition map data with semantic labels

    YANG LIN / WU XIAQING | European Patent Office | 2025
