We present our work on determining cues for discourse segmentation in free-form gesticulation accompanying speech in natural conversation. The basis for this integration of gesticulation and speech discourse is the psycholinguistic concept of the co-equal generation of gesture and speech from the same semantic intent. We use the psycholinguistic device known as the 'catchment' as the locus around which this integration proceeds. We videotape gesture and speech elicitation experiments in which a subject describes her living space to an interlocutor. We extract the gestural motion of both hands using the Vector Coherence Mapping algorithm, which combines spatial, momentum, and skin-color constraints in parallel using a fuzzy image processing approach. We extract the voiced units in the discourse as F0 units and correlate these with the transcribed speech. Psycholinguistics researchers perceptually micro-analyze the same videotape to produce a transcript annotated with video timestamps and perceived gesture-speech entities. These serve to direct our high-level analysis of the gesture traces and F0 data. We report the results of our analysis, which show that the feature of 'handedness' and the kind of symmetry in two-handed gestures provide effective cues for discourse segmentation. We also present observations on how the gesture traces provide cues for segmenting hand use, for high-level discourse repair, and for suprasegmental discourse grouping.
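To illustrate the kind of handedness and symmetry cue the abstract describes, the sketch below labels windows of two-hand motion traces as one-handed, two-handed symmetric, or two-handed anti-symmetric. This is not the authors' pipeline: the window size, thresholds, and the function name handedness_symmetry_cues are assumptions chosen only for illustration.

```python
import numpy as np

def handedness_symmetry_cues(left_xy, right_xy, window=30,
                             move_thresh=2.0, sym_thresh=0.5):
    """Label fixed-size windows of two-hand motion traces with coarse
    handedness/symmetry cues. Illustrative only: window size and
    thresholds are arbitrary assumptions, not values from the paper.

    left_xy, right_xy : (N, 2) arrays of per-frame hand positions (pixels).
    Returns a list of (start_frame, label) pairs.
    """
    # Per-frame displacement vectors for each hand.
    dl = np.diff(left_xy, axis=0)
    dr = np.diff(right_xy, axis=0)
    labels = []
    for s in range(0, len(dl) - window + 1, window):
        wl, wr = dl[s:s + window], dr[s:s + window]
        # Mean speed of each hand over the window.
        speed_l = np.linalg.norm(wl, axis=1).mean()
        speed_r = np.linalg.norm(wr, axis=1).mean()
        if speed_l < move_thresh and speed_r < move_thresh:
            label = "rest"
        elif speed_r < move_thresh:
            label = "left-hand only"
        elif speed_l < move_thresh:
            label = "right-hand only"
        else:
            # Both hands moving: compare displacement directions.
            # Positive mean cosine -> parallel motion; negative -> opposed.
            cos = np.sum(wl * wr, axis=1) / (
                np.linalg.norm(wl, axis=1) * np.linalg.norm(wr, axis=1) + 1e-9)
            label = ("two-handed symmetric" if cos.mean() > sym_thresh
                     else "two-handed anti-symmetric" if cos.mean() < -sym_thresh
                     else "two-handed mixed")
        labels.append((s, label))
    return labels
```

In use, such window labels would be computed over the hand traces and aligned with the F0 units and transcript timestamps before comparison with the perceptual discourse segmentation.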


Title: Gesture cues for conversational interaction in monocular video

Contributors: Quek, F. (author) / McNeill, D. (author) / Ansari, R. (author) / Xin-Feng Ma (author) / Bryll, R. (author) / Duncan, S. (author) / McCullough, K.E. (author)

Publication date: 1999-01-01

Size: 195508 bytes

Type of media: Conference paper

Type of material: Electronic Resource

Language: English



    Pedestrian candidates generation using monocular cues

    Cheda, Diego / Ponsa, Daniel / Lopez, Antonio M. | IEEE | 2012


    Pedestrian Candidates Generation Using Monocular Cues

    Cheda, D. / Ponsa, D. / Lopez, A.M. et al. | British Library Conference Proceedings | 2012


    Vehicle-mounted gesture interaction method and system based on monocular camera

    KONG HUIFANG / ZHANG SHUAIJIE | European Patent Office | 2023

    Free access

    Evaluation of Monocular Depth Cues in 3D Aircraft Display

    Alm / Lif / Oberg | British Library Conference Proceedings | 2003


    Obstacle detection based on multiple cues fusion from monocular camera

    Liu, Wei / Zuo, Liyuan / Yu, Hongfei et al. | IEEE | 2013