Deep learning is at the frontier of machine learning and automation. Its main workhorse, the deep neural network, has revolutionized the way we extract information from large amounts of data in computer vision, natural language processing and other domains. Moving from purely academic to real-world scenarios has renewed interest in how these powerful algorithms draw their conclusions and how to quantify the quality of their predictions beyond mere accuracy. From a practitioner's point of view, the most important information to obtain alongside an algorithm's prediction is the uncertainty attached to it, which is the basis for an accurate assessment of confidence. Unfortunately, extracting this information from large models and datasets has proven difficult. A common approach so far has been to devise a method that makes as many approximations as necessary to render the problem tractable while still yielding at least somewhat useful uncertainty estimates. This work looks at the problem from a slightly different angle: by first choosing a tractable and comparatively simple method, the burden shifts to the model design, which must lend itself to the chosen approximation. To this end, the most popular deep neural network architectures are compared based on how amenable they are to uncertainty estimation by Laplace approximation. The method's potential and deficiencies, as well as its applicability to large models and datasets, are assessed empirically, working towards an understanding of how architectural choices correlate with the quality of the obtained uncertainty estimates.
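
The core technique named in the abstract, uncertainty estimation by Laplace approximation, fits a Gaussian N(theta_MAP, H^-1) to the posterior over the model's weights, where theta_MAP is the trained (MAP) estimate and H is the Hessian of the negative log-posterior at that point. The following minimal Python sketch illustrates the three steps on a single logistic-regression layer with a diagonal curvature approximation; the toy data, hyperparameters and variable names are illustrative assumptions, not the thesis's actual setup.

# Minimal sketch of a diagonal Laplace approximation on a single linear
# (logistic-regression) layer. Toy data and all names are illustrative
# assumptions, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1) MAP estimate: gradient descent on the regularized negative log-likelihood.
w = np.zeros(2)
prior_prec = 1.0  # Gaussian prior precision (acts like weight decay).
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) + prior_prec * w
    w -= 0.1 * grad / len(X)

# 2) Laplace step: curvature at the MAP. For logistic regression the Hessian
#    of the negative log-posterior is X^T diag(p(1-p)) X + prior precision;
#    we keep only its diagonal, as diagonal Laplace methods do for large nets.
p = sigmoid(X @ w)
hess_diag = (X**2 * (p * (1 - p))[:, None]).sum(axis=0) + prior_prec
post_var = 1.0 / hess_diag  # diagonal Gaussian posterior covariance

# 3) Predictive uncertainty: Monte Carlo over posterior weight samples.
x_test = np.array([0.1, -0.2])
samples = w + rng.normal(size=(1000, 2)) * np.sqrt(post_var)
preds = sigmoid(samples @ x_test)
print(f"mean prediction {preds.mean():.3f} +/- {preds.std():.3f}")

For deep networks the same recipe is typically kept tractable by restricting it to the last layer or by replacing the Hessian with a diagonal or Kronecker-factored approximation of the Fisher information, which connects directly to the tractability concerns raised in the abstract.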


    Title:

    Laplace Approximation for Uncertainty Estimation of Deep Neural Networks

    Contributors:

    Humt, Matthias (author)

    Publication date:

    2019-07-15

    Media type:

    Other

    Format:

    Electronic resource

    Language:

    English

    Keywords:


    Similar titles:

    Laplace Approximation for Real-Time Uncertainty Estimation in Object Detection

    Gui, Ming / Qiu, Tianming / Bauer, Fridolin et al. | IEEE | 2022



    Uncertainty Estimation for Deep Neural Object Detectors in Safety-Critical Applications

    Le, Michael Truong / Diehl, Frederik / Brunner, Thomas et al. | IEEE | 2018



    Online Black-Box Confidence Estimation of Deep Neural Networks

    Woitschek, Fabian / Schneider, Georg | IEEE | 2022