Conclusions: (1) By using 'context predicates' we track actions occurring during a dialog to determine which goals (event and locative) have been achieved and which have not; (2) by tracking 'context predicates' we can determine which actions need to be acted upon next, i.e., predicates in the stack that have not yet been completed; (3) 'locative' expressions, e.g. 'there,' give us a handle in command-and-control applications for attempting error correction when locative goals are being discussed; (4) by interleaving complex dialog with natural and mechanical gestures, we hope to achieve dynamic autonomy and an integrated multi-modal interface.
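
The report itself gives no implementation, but the context-predicate mechanism summarized above can be illustrated with a minimal Python sketch. All names below (ContextPredicate, DialogContext, pending, last_locative) are hypothetical assumptions for illustration, not the authors' code.

    # Minimal sketch (hypothetical, not from the report) of a stack of context
    # predicates that tracks which event and locative goals have been completed
    # and which still need to be acted upon.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ContextPredicate:
        """One tracked goal, e.g. goto('waypoint-3') or grasp('wrench')."""
        action: str                # e.g. "goto", "grasp"
        argument: str              # e.g. "waypoint-3", "wrench"
        kind: str = "event"        # "event" or "locative"
        completed: bool = False

    @dataclass
    class DialogContext:
        """Context predicates accumulated as the dialog proceeds."""
        stack: List[ContextPredicate] = field(default_factory=list)

        def push(self, pred: ContextPredicate) -> None:
            self.stack.append(pred)

        def mark_completed(self, action: str, argument: str) -> None:
            for pred in self.stack:
                if pred.action == action and pred.argument == argument:
                    pred.completed = True

        def pending(self) -> List[ContextPredicate]:
            # Predicates not yet completed: the actions to be acted upon next.
            return [p for p in self.stack if not p.completed]

        def last_locative(self) -> Optional[ContextPredicate]:
            # Most recent locative predicate; a handle for grounding words
            # such as 'there' and for error correction on locative goals.
            for pred in reversed(self.stack):
                if pred.kind == "locative":
                    return pred
            return None

    if __name__ == "__main__":
        ctx = DialogContext()
        ctx.push(ContextPredicate("goto", "waypoint-3", kind="locative"))
        ctx.push(ContextPredicate("grasp", "wrench"))
        ctx.mark_completed("goto", "waypoint-3")
        print([p.action for p in ctx.pending()])    # -> ['grasp']
        print(ctx.last_locative().argument)         # -> 'waypoint-3'

In such a scheme, pending() would drive the choice of the robot's next action, while last_locative() shows how a locative goal could be re-queried or corrected when the user says 'there'.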


    Title: Multi-modal Interfacing for Human-Robot Interaction

    Contributors: D. Perzanowski (author) / A. Schultz (author) / W. Adams (author) / M. Bugajska (author) / E. March (author)

    Publication date: 2001

    Size: 22 pages

    Type of media: Report

    Type of material: No indication

    Language: English




    Multi-modal interaction management for a robot companion
    Li, Shuyin | TIBKAT | 2007

    A Distributed Tactile Sensor for Intuitive Human-Robot Interfacing
    Cirillo, Andrea / Cirillo, Pasquale / De Maria, Giuseppe et al. | BASE | 2017

    Multi-modal robot
    FAN MENGLONG / LIANG BIN / LIU HOUDE et al. | European Patent Office | 2024

    Synchronized Multi-Modal Robot
    CHUNG SOON-JO | European Patent Office | 2021