Abstract

Bottom-up segmentation based only on low-level cues is a notoriously difficult problem. This difficulty has led to recent top-down segmentation algorithms that are based on class-specific image information. Despite the success of top-down algorithms, they often give coarse segmentations that can be significantly refined using low-level cues. This raises the question of how to combine both top-down and bottom-up cues in a principled manner. In this paper we approach this problem using supervised learning. Given a training set of ground truth segmentations, we train a fragment-based segmentation algorithm that takes into account both bottom-up and top-down cues simultaneously, in contrast to most existing algorithms, which train top-down and bottom-up modules separately. We formulate the problem in the framework of Conditional Random Fields (CRFs) and derive a novel feature induction algorithm for CRFs, which allows us to efficiently search over thousands of candidate fragments. Whereas pure top-down algorithms often require hundreds of fragments, our simultaneous learning procedure yields algorithms with a handful of fragments that are combined with low-level cues to efficiently compute high quality segmentations.
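
As a rough illustration of the formulation described in the abstract (a hedged sketch; the notation and specific terms below are assumptions rather than the paper's own definitions), a CRF combining both cue types can score a binary segmentation $x$ of an image $I$ with an energy of the form

\[
E(x; I) = \sum_{k=1}^{K} w_k\, f_k(x, I) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} g_{ij}(I)\, \mathbf{1}[x_i \neq x_j],
\]

where each $f_k$ measures how well the labeling agrees with a class-specific image fragment (top-down cue), $g_{ij}$ is a contrast-sensitive affinity between neighboring pixels (bottom-up cue), and the weights $w_k$ and $\lambda$ are learned jointly from the ground-truth segmentations. Feature induction then greedily adds, from a large pool of candidate fragments, the fragment whose inclusion most improves the conditional likelihood of the training labelings.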


    Title:
    Learning to Combine Bottom-Up and Top-Down Segmentation

    Contributors:
    Levin, Anat (author) / Weiss, Yair (author)

    Publication date:
    2006-01-01

    Format / extent:
    14 pages

    Media type:
    Article/chapter (book)

    Format:
    Electronic resource

    Language:
    English




    Similar titles:

    Learning to Combine Bottom-Up and Top-Down Segmentation

    Levin, A. / Weiss, Y. | British Library Online Contents | 2009


    Learning to Combine Bottom-Up and Top-Down Segmentation

    Levin, A. / Weiss, Y. | British Library Conference Proceedings | 2006


    Combine - Combine Operator Communication

    Rickerd, Calvin / Pool, S. D. | SAE Technical Papers | 1968


    COMBINE

    SASAURA HIROYUKI | European Patent Office | 2023

    Free access

    COMBINE

    YONEDA YUTAKA / NAGAOSA KAZUAKI / MITSUHARA MASAKI et al. | European Patent Office | 2021

    Free access