Exploring a Model of Gaze for Grounding in Multimodal HRI

Published in: Proceedings of the 16th International Conference on Multimodal Interaction, ICMI '14, pp. 247-254, Istanbul, Turkey, November 12-16, 2014.

Publisher: ACM, New York, NY, USA

DOI: 10.1145/2663204.2663275


Grounding is a fundamental process that underlies all human interaction; it is therefore crucial for building social robots that are expected to collaborate effectively with humans. Gaze behavior plays versatile roles in establishing, maintaining, and repairing common ground. Integrating all of these roles in a computational dialog model is a complex task, since gaze is generally combined with multiple parallel information modalities and is involved in multiple processes for the generation and recognition of behavior. Going beyond related work, we present a modeling approach that focuses on the multimodal, parallel, and bidirectional aspects of gaze relevant to grounding, and on their interleaving with dialog and task management. We illustrate and discuss the different roles of gaze, as well as the advantages and drawbacks of our modeling approach, based on an initial user study with a technically sophisticated shared-workspace application featuring a social humanoid robot.
