Highlights

- In the absence of federal regulation, this study provides guidance on AV design.
- Nothing can be taken for granted about the interconnected roles AVs must perform.
- AVs must be designed to avoid the errors humans make if they are to be safer.
- Intentional decisions by human drivers (e.g., speeding) lead to many crashes.
- AVs must be designed to prioritize safety over rider preferences when the two conflict.

Abstract

Introduction: The final failure in the causal chain of events leading to 94% of crashes is driver error. It is often assumed that autonomous vehicles (AVs) will prevent most crashes, but AVs will still crash if they make the same mistakes as human drivers. By identifying how crashes are distributed among various contributing factors, this study provides guidance on the roles AVs must perform and the errors they must avoid to realize their safety potential.

Method: Using the NMVCCS database, five categories of driver-related contributing factors were assigned to crashes: (1) sensing/perceiving (i.e., not recognizing hazards); (2) predicting (i.e., misjudging the behavior of other vehicles); (3) planning/deciding (i.e., poor decision-making about traffic law adherence and defensive driving); (4) execution/performance (i.e., inappropriate vehicle control); and (5) incapacitation (i.e., an alcohol-impaired or otherwise incapacitated driver). Assuming AVs would have superior perception and be incapable of incapacitation, we determined how many crashes would persist beyond those involving incapacitation or exclusively sensing/perceiving factors.

Results: Thirty-three percent of crashes involved only sensing/perceiving factors (23%) or incapacitation (10%). If AVs prevented those crashes, the remaining 67% could persist, many involving planning/deciding (41%), execution/performance (23%), and predicting (17%) factors. Crashes with planning/deciding factors often involved speeding (23%) or illegal maneuvers (15%).

Conclusions: Errors in choosing evasive maneuvers, predicting the actions of other road users, and traveling at speeds suitable for conditions will persist if designers program AVs to make errors similar to those of today's human drivers. Planning/deciding factors, such as speeding and disobeying traffic laws, reflect driver preferences, so AV design philosophies will need to favor safety over occupant preferences when the two conflict.

Practical applications: This study illustrates the complex roles AVs will have to perform and the risks arising from occupant preferences that AV designers and regulators must address if AVs are to realize their potential to eliminate most crashes.
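The counting logic described in the Method and Results sections can be summarized in a short sketch: each crash carries one or more of the five contributing-factor categories, and a crash is treated as prevented by an AV only if it involves incapacitation or exclusively sensing/perceiving factors. The Python sketch below is illustrative only; the category names, data layout, and example crashes are hypothetical and do not represent the authors' analysis code or the NMVCCS coding scheme.

```python
from typing import List, Set

# The five driver-related contributing-factor categories named in the abstract
# (identifiers here are hypothetical shorthand, not NMVCCS variable names).
CATEGORIES = {
    "sensing_perceiving",     # not recognizing hazards
    "predicting",             # misjudging the behavior of other vehicles
    "planning_deciding",      # poor decisions about traffic laws / defensive driving
    "execution_performance",  # inappropriate vehicle control
    "incapacitation",         # alcohol-impaired or otherwise incapacitated driver
}

def avoided_by_av(factors: Set[str]) -> bool:
    """Assumption from the abstract: AVs perceive better than humans and cannot
    be incapacitated, so a crash counts as prevented only if it involves
    incapacitation or exclusively sensing/perceiving factors."""
    return "incapacitation" in factors or factors == {"sensing_perceiving"}

def share_remaining(crashes: List[Set[str]]) -> float:
    """Fraction of crashes that would persist under the assumption above."""
    return sum(1 for c in crashes if not avoided_by_av(c)) / len(crashes)

# Hypothetical example: crashes 3 and 4 involve factors beyond perception or
# incapacitation, so 50% of these example crashes would persist.
example = [
    {"sensing_perceiving"},
    {"incapacitation", "planning_deciding"},
    {"planning_deciding", "execution_performance"},
    {"predicting"},
]
assert all(crash <= CATEGORIES for crash in example)  # only known categories used
print(f"{share_remaining(example):.0%} of crashes would remain")
```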


Title:
What humanlike errors do autonomous vehicles need to avoid to maximize safety?

Contributors:

Published in:

Publication date:
2020-10-27

Size:
9 pages

Type of media:
Article (Journal)

Type of material:
Electronic Resource

Language:
English




    Small UAVs with autonomous avoidance using humanlike thoughts

Zhang, Qirui / Wei, Ruixuan / He, Renke et al. | IEEE | 2016



    Mechanical Necks with Humanlike Responses

    Culver, C. C. / Mertz, H. J. / Neathery, R. F. | SAE Technical Papers | 1972



    OBJECT SENSE AND AVOID SYSTEM FOR AUTONOMOUS VEHICLES

    BEER N REGINALD / CHAMBERS DAVID / PAGLIERONI DAVID W | European Patent Office | 2019

    Free access