Conclusions: (1) By using 'context predicates' we track actions occurring during a dialog to determine which goals (event and locative) have been achieved and which have not; (2) by tracking 'context predicates' we can determine which actions need to be acted upon next, i.e., the predicates in the stack that have not yet been completed; (3) 'locative' expressions, e.g., 'there,' give us a handle in command-and-control applications for attempting error correction when locative goals are being discussed; (4) by interleaving complex dialog with natural and mechanical gestures, we hope to achieve dynamic autonomy and an integrated multi-modal interface.
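A minimal sketch of how such a context-predicate stack might operate, assuming each predicate carries a goal type and a completion flag; all names here (ContextPredicate, DialogTracker, the action strings) are hypothetical illustrations, not the report's actual implementation:

from dataclasses import dataclass

@dataclass
class ContextPredicate:
    # A tracked dialog action with its goal type: 'event' or 'locative'.
    action: str
    goal_type: str
    completed: bool = False

class DialogTracker:
    # Stack of context predicates accumulated during a dialog.
    def __init__(self) -> None:
        self.stack: list[ContextPredicate] = []

    def push(self, action: str, goal_type: str) -> None:
        self.stack.append(ContextPredicate(action, goal_type))

    def complete(self, action: str) -> None:
        # Mark the most recent matching predicate as achieved.
        for pred in reversed(self.stack):
            if pred.action == action and not pred.completed:
                pred.completed = True
                return

    def pending(self) -> list[ContextPredicate]:
        # Predicates not yet completed, i.e., the actions to act upon next.
        return [p for p in self.stack if not p.completed]

# Example: a locative goal is achieved while an event goal remains pending.
tracker = DialogTracker()
tracker.push("go-to(there)", "locative")
tracker.push("pick-up(box)", "event")
tracker.complete("go-to(there)")
print([p.action for p in tracker.pending()])  # ['pick-up(box)']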



    Title:

    Multi-modal Interfacing for Human-Robot Interaction


    Contributors:
    D. Perzanowski (author) / A. Schultz (author) / W. Adams (author) / M. Bugajska (author) / E. March (author)

    Publication date:

    2001


    Format / extent:

    22 pages


    Media type:

    Report


    Format:

    Not specified


    Language:

    English




    Multi-modal interaction management for a robot companion

    Li, Shuyin | TIBKAT | 2007

    Free access


    A Distributed Tactile Sensor for Intuitive Human-Robot Interfacing

    Cirillo, Andrea / Cirillo, Pasquale / De Maria, Giuseppe et al. | BASE | 2017

    Free access

    Multi-modal robot

    FAN MENGLONG / LIANG BIN / LIU HOUDE et al. | Europäisches Patentamt | 2024

    Free access

    Synchronized Multi-Modal Robot

    CHUNG SOON-JO | Europäisches Patentamt | 2021

    Free access