This paper presents results from two machine learning components within the multi-modal fusion and decision-making framework introduced previously (Garagić et al., 2018 IEEE Aerospace Conference, doi:10.1109/AERO.2018.8396737; Garagić et al., 2019 IEEE Aerospace Conference, doi:10.1109/AERO.2019.8742221). The first component combines Bayesian change-point, motion-based detection, which discovers new targets, with online transfer learning over Convolutional Neural Networks (CNNs), which learns models of previously seen targets and adapts those models as target appearance changes. CNN detection overcomes the problems motion-based detection faces when targets stop or operate in close proximity to one another; motion-based detection in turn solves the CNN problems of initializing target appearance and adapting to appearance changes. This novel combination of motion and CNN detectors obviates the need for prior knowledge about targets to stimulate target detection, and it improves detection performance on challenging data over what is achieved without an appearance-based detector. A closed loop between the tracking and detection modules drives detector model adaptation.

The second component combines multi-sensor, multi-modality features from the upstream detection and tracking components to learn dynamic, compact probabilistic representations of targets as manifolds in a joint space over all inputs. This Multi-Modal Deep Belief Network (MMDBN) approach incorporates supervised deep autoencoders and unsupervised Dirichlet process mixture models. Supervision within the MMDBN arises from consistent associations between sensor and modality inputs for a target of interest, so the network learns and optimally leverages inter-sensor, inter-modality features of targets. This obviates the need for explicit, hand-crafted inter-modal features, because the network implicitly learns both the inter-modal features and a likelihood model with which to evaluate them. This fusion approach improves tracking performance over a traditional fusion approach on challenging data.

Beyond outperforming single-modality and traditional fusion approaches, the overall approach is feature agnostic: it accommodates any data source that provides a feature set of pre-determined dimensionality for each target at each time step. This paper describes novel methods for enhancing motion-based tracking with an appearance-based sidecar that improves detection, tracking, and classification (DTC) performance in difficult conditions. It also shows the benefits of a feature-level fusion approach in performing DTC on several challenging multi-modality, multi-platform AFRL ESCAPE data scenarios.
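To make the first component concrete, here is a minimal, hypothetical sketch of how a motion-based proposal stage, a CNN confirmation stage, and tracker-driven online fine-tuning could be wired together. It is an illustration under assumed interfaces, not the authors' implementation: the names (TinyAppearanceCNN, motion_changepoints, detect_and_adapt) are invented, and the simple z-score test stands in for a proper Bayesian change-point detector.

```python
# Hypothetical sketch of the motion + appearance detection loop described
# above. All class and function names are illustrative assumptions, not the
# authors' implementation.
import numpy as np
import torch
import torch.nn as nn

class TinyAppearanceCNN(nn.Module):
    """Small CNN scoring whether an image chip contains a known target."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

    def forward(self, x):
        return self.net(x)

def motion_changepoints(frames, threshold=3.0):
    """Crude stand-in for Bayesian change-point detection: flag pixels whose
    latest intensity jumps more than `threshold` running standard deviations
    from the history of earlier frames."""
    mean = frames[:-1].mean(axis=0)
    std = frames[:-1].std(axis=0) + 1e-6
    z = np.abs(frames[-1] - mean) / std
    return z > threshold

def detect_and_adapt(frames, chips, labels, cnn, optimizer):
    """One cycle of the closed loop: motion proposes candidate regions,
    the CNN confirms appearance, and tracker-confirmed labels supply the
    supervision for an online fine-tuning step."""
    mask = motion_changepoints(frames)        # motion-based proposals
    logits = cnn(chips)                       # appearance confirmation
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return mask, logits.detach().argmax(dim=1)

# Synthetic usage: a frame stack, candidate chips, and tracker feedback.
frames = np.random.rand(10, 64, 64)           # video history + latest frame
chips = torch.randn(4, 1, 32, 32)             # candidate image chips
labels = torch.tensor([1, 0, 1, 1])           # tracker-confirmed labels
cnn = TinyAppearanceCNN()
opt = torch.optim.Adam(cnn.parameters(), lr=1e-4)
mask, preds = detect_and_adapt(frames, chips, labels, cnn, opt)
```

In this loop, motion proposes candidates without any prior target model, the CNN scores appearance, and each batch of tracker-confirmed labels nudges the CNN weights — the closed-loop, tracker-driven adaptation the abstract describes.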
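The second component can be sketched in the same hedged spirit: a joint autoencoder compresses concatenated per-target features from multiple sensors into a compact embedding, and an unsupervised Dirichlet process mixture supplies a likelihood model over that embedding. The sketch below uses scikit-learn's BayesianGaussianMixture with a Dirichlet-process prior as a stand-in for the paper's mixture model; the feature dimensions, network sizes, and training schedule are assumptions for illustration only.

```python
# Hypothetical sketch of the MMDBN idea: a joint autoencoder embeds
# concatenated multi-sensor features, and a Dirichlet process mixture
# supplies a likelihood model over the embedding. Names are illustrative.
import numpy as np
import torch
import torch.nn as nn
from sklearn.mixture import BayesianGaussianMixture

class JointAutoencoder(nn.Module):
    """Compresses concatenated per-target features from all sensors into a
    compact joint-space embedding (the learned manifold coordinates)."""
    def __init__(self, in_dim, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

# Features from two sensors/modalities for the same tracked targets; the
# consistent association across time steps is the "supervision" signal.
radar = np.random.randn(200, 6).astype(np.float32)   # e.g. kinematic features
eo    = np.random.randn(200, 10).astype(np.float32)  # e.g. appearance features
joint = torch.from_numpy(np.hstack([radar, eo]))

model = JointAutoencoder(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                        # reconstruction training
    recon, _ = model(joint)
    loss = nn.functional.mse_loss(recon, joint)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    _, z = model(joint)

# Unsupervised Dirichlet process mixture over the embedding; score_samples
# gives per-target log-likelihoods usable for downstream association.
dpmm = BayesianGaussianMixture(
    n_components=10, weight_concentration_prior_type="dirichlet_process")
dpmm.fit(z.numpy())
loglik = dpmm.score_samples(z.numpy())
```

The per-sample log-likelihoods from score_samples play the role the abstract assigns to the learned likelihood model: they quantify how well a new multi-sensor observation fits a learned target manifold, without any hand-crafted inter-modal features.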


    Title:

    Machine Learning Multi-Modality Fusion Approaches Outperform Single-Modality & Traditional Approaches


    Contributors:


    Publication date:

    2021-03-06


    Size:

    949604 bytes




    Type of media:

    Conference paper


    Type of material:

    Electronic Resource


    Language:

    English



    Similar titles:

    Multi-modality approaches for complex test requirements

    Fuchs, Theobald / Hassler, Ulf / Stackelberg, Burkhard von et al. | Tema Archive | 2008


    Multi-Modality Approaches for Complex Test Requirements

    Stackelberg, B. von / Bernus, L. von / Fuchs, T. et al. | TIBKAT | 2008


    Fusion of multi-modality volumetric medical imagery

    Aguilar, M. / New, J.R. | IEEE | 2002


    Fusion of Multi-Modality Volumetric Medical Imagery

    Aguilar, M. / International Society of Information Fusion / Institute of Electrical and Electronics Engineers | British Library Conference Proceedings | 2002


    Fuzzy Classification for Multi-Modality Image Fusion

    Bloch, I. / IEEE; Signal Processing Society | British Library Conference Proceedings | 1994