Replacing a human driver is an extraordinarily complex task. While machine learning (ML) and its subset, deep learning (DL), are fueling breakthroughs in everything from consumer mobile applications to image and gesture recognition, significant challenges remain. The majority of artificial intelligence (AI) learning applications, particularly with respect to Highly Automated Vehicles (HAVs) and their ecosystem, have remained opaque: genuine "black boxes." Data is loaded into one side of the ML system and results come out the other, yet there is little to no understanding of how the decision was arrived at.

To be accurate, these AI systems require large amounts of data to crunch, and the sheer computational complexity of building DL-based AI models also slows progress in accuracy and the practicality of deploying DL at scale. In addition, training times and forensic decision investigation, often measured in days and sometimes weeks or months, slow implementation and make traditional agile approaches, with their definition of done, almost impossible to follow.

Recent breakthroughs have allowed ML systems in an HAV implementation context to determine reasonable solutions in very fixed scenarios. However, these systems are typically very complex and largely incapable of explaining how or why they came up with a given solution. Without this knowledge and reasoning, intervention and proof of compliance during HAV development, validation, verification, and production applications are near impossible. To cut the development and forensic time it takes to create and understand DL models with high precision, decisions must be understood and reasoning applied.

While significant breakthroughs have been made in Explainable AI (XAI) through DL technologies such as recursive methods, and in Cognitive AI (CAI) through user interfaces (UIs), they all commonly fail at "transparency": the ability to access the logic behind a decision made by an ML system. Transparency is a requirement for establishing trust in high-risk, high-human-cost applications such as an HAV. This paper outlines how a solution based on Knowledge Representation and Reasoning (KRR) creates a "holistic AI" approach that provides both knowledge of how an HAV machine learning system arrives at decisions and the rationale, or reasoning, behind them, offering new insight into what would typically be a blind process. This "Transparent AI" solution is explored through an algorithmic approach and then demonstrated through a software implementation within Baidu's Apollo model framework.
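
The abstract does not reproduce the KRR algorithm itself, but the notion of transparency it defines (access to the logic behind a decision) can be sketched in a few lines. The Python fragment below is purely illustrative: the rule texts, the decide_brake function, and the scene fields are hypothetical and are not drawn from the paper or from Baidu's Apollo code. It only shows the general idea of a symbolic rule layer that records which justifications fired alongside a decision, so the rationale can be inspected after the fact.

    from dataclasses import dataclass, field

    @dataclass
    class Explanation:
        """A decision paired with the human-readable rules that justify it."""
        decision: str
        fired_rules: list = field(default_factory=list)

    # Hypothetical knowledge base: each entry pairs a readable justification
    # with a predicate over the perception/prediction inputs.
    RULES = [
        ("pedestrian detected within stopping distance",
         lambda s: s["pedestrian_distance_m"] < s["stopping_distance_m"]),
        ("traffic signal is red",
         lambda s: s["signal_state"] == "RED"),
    ]

    def decide_brake(scene: dict) -> Explanation:
        # Evaluate every rule; keep the justification text of each rule that fires.
        exp = Explanation(decision="PROCEED")
        for justification, predicate in RULES:
            if predicate(scene):
                exp.fired_rules.append(justification)
        if exp.fired_rules:
            exp.decision = "BRAKE"
        return exp

    if __name__ == "__main__":
        scene = {"pedestrian_distance_m": 8.0,
                 "stopping_distance_m": 12.0,
                 "signal_state": "GREEN"}
        result = decide_brake(scene)
        print(result.decision, "because:",
              "; ".join(result.fired_rules) or "no rule fired")

In a production stack the predicates would be grounded in the outputs of the driving framework's perception and prediction modules rather than a hand-built dictionary, and the rule base would be far richer, but the logged justifications are what give an auditor access to the reasoning behind each decision.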


    Title:

    Building Responsibility in AI: Transparent AI for Highly Automated Vehicle Systems


    Additional title information:

    SAE Technical Papers


    Contributors:

    Minarcin, Monika

    Conference:

    SAE WCX Digital Summit; 2021



    Publication date:

    2021-04-06




    Media type:

    Article (conference)


    Format:

    Print


    Language:

    English




    Building Responsibility in AI: Transparent AI for Highly Automated Vehicle Systems

    Minarcin, Monika | British Library Conference Proceedings | 2021


    Responsibility for Causing Harm as a Result of a Road Accident Involving a Highly Automated Vehicle

    Magizov, R. R. / Mukhametdinov, E. M. / Mavrin, V. G. | TIBKAT | 2020


    METHOD FOR OPERATING A MORE HIGHLY AUTOMATED VEHICLE (HAV), IN PARTICULAR A HIGHLY AUTOMATED VEHICLE

    ALAWIEH ALI / HASBERG CARSTEN / HIENDRIANA DANNY et al. | Europäisches Patentamt | 2021

    Free access

    Method and device for operating vehicle for highly automated driving, and vehicle for highly automated driving

    DOLGOV MAXIM / ZIERACH ROBERT / MICHALKE THOMAS | Europäisches Patentamt | 2024

    Free access

    Method for Operating a Highly Automated or Fully Automated Vehicle

    MIELENZ HOLGER | Europäisches Patentamt | 2020

    Free access