Start date: 01.08.2013
End date: 01.08.2014
Funded by: Universität Augsburg
Local head of project: Prof. Dr. Elisabeth André
Local scientists: M.Sc. Gregor Mehlmann
M.Sc. Kathrin Janowski
M.Sc. Markus Häring


The human and the robot use a multimodal combination of speech and gaze to communicate about the placement of several objects.


In this application, the human's task is to place different objects on designated fields. The robot's instructions are kept intentionally ambiguous in order to prompt the human to ask for clarification.

The eye tracking glasses worn by the user, as well as special markers on the objects, enable the system to detect which object the user is currently looking at. This in turn enables the robot to resolve ambiguities in the user's spoken question, so it can answer correctly.
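The project publications describe the full grounding model; as a rough illustration only, the gaze-based disambiguation step could look like the following sketch. All names (`GazeSample`, `resolve_reference`, the 1.5-second window) are hypothetical choices for this example, not part of the actual system: when the user's question is ambiguous between several candidate objects, the object fixated most recently before the utterance is taken as the referent.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    """One fixation event: the marker of the looked-at object and its time."""
    marker_id: str
    timestamp: float

def resolve_reference(utterance_time, candidates, gaze_history, window=1.5):
    """Pick the candidate object the user fixated most recently within a
    short time window before the ambiguous utterance. Returns None when
    there is no gaze evidence, so the robot can ask for clarification."""
    recent = [g for g in gaze_history
              if utterance_time - window <= g.timestamp <= utterance_time
              and g.marker_id in candidates]
    if not recent:
        return None
    return max(recent, key=lambda g: g.timestamp).marker_id
```

In this sketch the window length and the "most recent fixation wins" rule are arbitrary simplifications; a real system would weigh fixation durations and combine gaze with the linguistic content of the question.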

Furthermore, this information allows the robot to show gaze behavior that expresses joint attention. For example, it follows the user's gaze to the respective objects or makes eye contact when the user looks at the robot.
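The two joint-attention behaviors just described amount to a simple mapping from the user's fixation target to the robot's gaze target. The following sketch illustrates that mapping; the labels (`"ROBOT"`, `"USER_FACE"`, `"IDLE"`) are placeholders invented for this example, not identifiers from the actual system.

```python
def select_robot_gaze(user_target):
    """Map the user's current fixation target to the robot's gaze target.

    - user looks at the robot  -> robot returns eye contact
    - user looks at an object  -> robot follows the gaze (joint attention)
    - no fixation detected     -> robot keeps a neutral idle gaze
    """
    if user_target == "ROBOT":
        return "USER_FACE"
    if user_target is not None:
        return user_target
    return "IDLE"
```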

This eye contact also serves as a signal for conversational floor management. When the user asks a question, the robot delays its answer and waits until the user looks at it directly.
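The floor-management rule above can be stated as a small predicate: hold the pending answer until the user makes eye contact. The fallback timeout here is an assumption added for the sketch (so the robot cannot stall indefinitely), not a documented parameter of the system.

```python
def should_answer(user_looks_at_robot, waited_s, max_wait_s=3.0):
    """Decide whether the robot may release its pending answer.

    The answer is held back until the user looks at the robot directly
    (eye contact hands over the conversational floor), or until an
    assumed fallback timeout expires.
    """
    return user_looks_at_robot or waited_s >= max_wait_s
```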

All these behavior patterns serve to establish common ground in order to avoid misunderstandings, or at least to resolve them quickly.


  • Modeling Grounding for Interactive Social Companions 
    Gregor Mehlmann, Kathrin Janowski, Elisabeth André 
    KI - Künstliche Intelligenz, Special Issue on Companion Technologies, 2015
  • Exploring a Model of Gaze for Grounding in Multimodal HRI 
    Gregor Mehlmann, Kathrin Janowski, Markus Häring, Tobias Baur, Patrick Gebhard and Elisabeth André 
    Proceedings of the 16th International Conference on Multimodal Interaction, ICMI '14, pp. 247-254, Istanbul, Turkey, November 12-16, 2014.
  • Modeling Gaze Mechanisms for Grounding in HRI 
    Gregor Mehlmann, Kathrin Janowski, Tobias Baur, Markus Häring, Elisabeth André and Patrick Gebhard 
    Proceedings of the 21st European Conference on Artificial Intelligence, ECAI '14, pp. 1069-1070, Prague, Czech Republic, August 18-22, 2014, Frontiers in Artificial Intelligence and Applications, Volume 263.