Conclusions: (1) By using 'context predicates' we track the actions occurring during a dialog to determine which goals (event and locative) have been achieved and which have not; (2) by tracking context predicates we can determine which actions need to be addressed next, i.e., the predicates on the stack that have not yet been completed; (3) locative expressions, e.g. 'there,' give us a handle in command-and-control applications for attempting error correction when locative goals are being discussed; (4) by interleaving complex dialog with natural and mechanical gestures, we hope to achieve dynamic autonomy and an integrated multi-modal interface.
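A minimal sketch of the stack-based bookkeeping described in conclusions (1) and (2) might look like the following. This is illustrative only, not the report's actual implementation; all class, method, and predicate names here (ContextPredicate, DialogContext, pending, go(there), pickup(box)) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class GoalType(Enum):
    """Goal categories distinguished in the report: event vs. locative."""
    EVENT = "event"
    LOCATIVE = "locative"


@dataclass
class ContextPredicate:
    """One tracked action/goal raised during the dialog (hypothetical form)."""
    name: str
    goal_type: GoalType
    completed: bool = False


class DialogContext:
    """Stack of context predicates accumulated over the course of a dialog."""

    def __init__(self) -> None:
        self._stack: List[ContextPredicate] = []

    def push(self, pred: ContextPredicate) -> None:
        # Conclusion (1): record each action/goal as it arises in the dialog.
        self._stack.append(pred)

    def mark_completed(self, name: str) -> None:
        # Mark a goal as achieved once the corresponding action is observed.
        for pred in self._stack:
            if pred.name == name:
                pred.completed = True

    def pending(self) -> List[ContextPredicate]:
        # Conclusion (2): the next actions to address are the predicates
        # on the stack that have not yet been completed.
        return [p for p in self._stack if not p.completed]


# Usage: a robot is told "go there, then pick up the box".
ctx = DialogContext()
ctx.push(ContextPredicate("go(there)", GoalType.LOCATIVE))   # locative goal
ctx.push(ContextPredicate("pickup(box)", GoalType.EVENT))    # event goal
ctx.mark_completed("go(there)")
print([p.name for p in ctx.pending()])  # ['pickup(box)'] -> still to be done
```

Under this sketch, tagging predicates with GoalType.LOCATIVE is also what would support conclusion (3): when a pending locative goal such as go(there) fails, the system knows the referent under discussion and can attempt error correction on it.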
Title: Multi-modal Interfacing for Human-Robot Interaction
Year: 2001
Pages: 22
Document Type: Report
Language: English
Keywords: Computers, Control & Information Theory; Bionics & Artificial Intelligence; Verbal; Humans; Interactions; Robots; Man computer interface; Robotics; Social communication; Reasoning; Autonomous navigation; Logic; Natural language; Linguistics; Goal programming; Humanoid robots; Multi-modal interface; Briefing charts; Gestures; Context predicates