Publications


2016

  • Measuring the Impact of Multimodal Behavioural Feedback Loops on Social Interactions
    Ionut Damian, Tobias Baur, Elisabeth André
    Proceedings of the ACM International Conference on Multimodal Interaction (ICMI 2016)
    In this paper we explore the concept of automatic behavioural feedback loops during social interactions. Behavioural feedback loops (BFL) are rapid processes which analyse the behaviour of the user in real time and provide live feedback on how to improve its quality. In this context, we implemented an open-source software framework for designing, creating and executing BFL on Android-powered mobile devices. To get a better understanding of the effects of BFL on face-to-face social interactions, we conducted a user study comparing four BFL types spanning three modalities: tactile, auditory and visual. For the study, the BFL were designed to improve the users' perception of their speaking time in an effort to create more balanced group discussions. The study yielded valuable insights into the impact of BFL on conversations and into how humans react to such systems.
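As a rough illustration of the feedback-loop idea summarised in the entry above (not the authors' framework; the sensing call, feedback channel and thresholds below are hypothetical), such a loop might be sketched as:

    import time
    from dataclasses import dataclass

    @dataclass
    class SpeakingStats:
        # Accumulated speaking time of the user and total elapsed time.
        user_seconds: float = 0.0
        total_seconds: float = 0.0

        @property
        def share(self) -> float:
            return self.user_seconds / self.total_seconds if self.total_seconds else 0.0

    def user_is_speaking() -> bool:
        # Hypothetical placeholder for a real-time voice-activity detector.
        return False

    def give_feedback(message: str) -> None:
        # Hypothetical placeholder for a tactile, auditory or visual feedback channel.
        print(message)

    def feedback_loop(target_share: float = 0.5, tick: float = 1.0) -> None:
        # Sense -> analyse -> feed back, repeated continuously (the "loop" in BFL).
        stats = SpeakingStats()
        while True:
            stats.total_seconds += tick
            if user_is_speaking():
                stats.user_seconds += tick
            # Nudge the user once they clearly dominate the conversation.
            if stats.total_seconds > 30 and stats.share > target_share + 0.15:
                give_feedback("You have been talking a lot -- consider giving others a turn.")
            time.sleep(tick)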

2015

2014

  • Exploring a Model of Gaze for Grounding in Multimodal HRI
    Gregor Mehlmann, Kathrin Janowski, Markus Häring, Tobias Baur, Patrick Gebhard and Elisabeth André
    Proceedings of the 16th International Conference on Multimodal Interaction, ICMI '14, pp. 247-254, Istanbul, Turkey, November 12-16, 2014.
  • Modeling Gaze Mechanisms for Grounding in HRI
    Gregor Mehlmann, Kathrin Janowski, Tobias Baur, Markus Häring, Elisabeth André and Patrick Gebhard
    Proceedings of the 21st European Conference on Artificial Intelligence, ECAI '14, pp. 1069-1070, Prague, Czech Republic, August 18-22, 2014, Frontiers in Artificial Intelligence and Applications, Volume 263.
  • Towards Peripheral Feedback-based Realtime Social Behaviour Coaching
    Ionut Damian, Tobias Baur, Chiew Seng Sean Tan, Johannes Schöning, Kris Luyten, Elisabeth André
    An important part of the information transfer in human-human communication is conducted nonverbally and often even unconsciously. Thus, controlling the information flow in such an interaction proves to be a difficult task, a fact which can cause various social problems. This paper explores the use of wearable computing, augmentation concepts and social signal processing for realtime behaviour coaching. The goal here is to help the user be more aware of her or his nonverbal behaviour during human-human interaction and to provide feedback on how to improve it.
  • Exploring Social Augmentation Concepts for Public Speaking using Peripheral Feedback and Real-Time Behavior Analysis
    Ionut Damian, Chiew Seng Sean Tan, Tobias Baur, Johannes Schöning, Kris Luyten, Elisabeth André
    2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
    Non-verbal and unconscious behavior plays an important role in efficient human-to-human communication but is often undervalued when training people to become better communicators. This is particularly true for public speakers, who must not only behave according to social etiquette but do so while generating enthusiasm and interest among dozens, if not hundreds, of other people. In this paper we propose the concept of social augmentation using wearable computing with the goal of giving users the ability to continuously monitor their performance as communicators. To this end we explore interaction modalities and feedback mechanisms which would lend themselves to this task.
  • Who’s Afraid of Job Interviews? Definitely a Question for User Modelling
    Kaśka Porayska-Pomsta, Paola Rizzo, Ionut Damian, Tobias Baur, Elisabeth André, Nicolas Sabouret, Hazaël Jones, Keith Anderson, Evi Chryssafidou
    LNCS 8538
    We define job interviews as a domain of interaction that can be modelled automatically in a serious game for job interview skills training. We present four types of studies: (1) field-based human-to-human job interviews, (2) field-based computer-mediated human-to-human interviews, (3) lab-based Wizard-of-Oz studies, and (4) field-based human-to-agent studies. Together, these highlight pertinent questions for the user modelling field as it expands its scope to applications for social inclusion. The results of the studies show that interviewees suppress their emotional behaviours and that, although our system automatically recognises a subset of those behaviours, the modelling of complex mental states in real-world contexts poses a challenge for state-of-the-art user modelling technologies. This calls for a re-examination of both the implementation of the models and their usage in the target contexts.
  • Exploring Interaction Strategies for Virtual Characters to Induce Stress in Simulated Job Interviews
    Patrick Gebhard, Tobias Baur, Ionut Damian, Gregor Mehlmann, Johannes Wagner and Elisabeth André
    Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS '14, Paris, France, 2014
  • Interpreting social cues to generate credible affective reactions of virtual job interviewers
    Hazael Jones, Nicolas Sabouret, Ionut Damian, Tobias Baur, Elisabeth André, Kaśka Porayska-Pomsta, Paola Rizzo
    Proceedings of the International Workshop on Intelligent Digital Games for Empowerment and Inclusion (IDGEI 2014)
    In this paper we describe a mechanism for generating credible affective reactions in a virtual recruiter during an interaction with a user. This is done by computing the user's communicative performance from the behaviours detected by a recognition module. The proposed software pipeline is part of the TARDIS system, which aims to aid young job seekers in acquiring job-interview-related social skills. In this context, our system enables the virtual recruiter to realistically adapt and react to the user in real time.
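A toy sketch of the kind of pipeline described in the entry above, assuming the recognition module reports cue intensities in [0, 1]; the cue names, weights and reaction labels are invented for illustration and are not the TARDIS implementation:

    from typing import Dict

    # Hypothetical weights: positive cues raise the score, negative cues lower it.
    CUE_WEIGHTS: Dict[str, float] = {
        "eye_contact": 0.4,
        "open_posture": 0.3,
        "smile": 0.2,
        "long_silence": -0.5,
        "crossed_arms": -0.3,
    }

    def communicative_performance(detected_cues: Dict[str, float]) -> float:
        """Weighted sum of detected cue intensities, clamped to [0, 1]."""
        score = 0.5 + sum(CUE_WEIGHTS.get(cue, 0.0) * value
                          for cue, value in detected_cues.items())
        return max(0.0, min(1.0, score))

    def recruiter_reaction(performance: float) -> str:
        """Pick an affective reaction label for the virtual recruiter."""
        if performance > 0.7:
            return "approving nod"
        if performance > 0.4:
            return "neutral attentiveness"
        return "sceptical frown"

    # Example: the recognition module reports three cue intensities.
    cues = {"eye_contact": 0.8, "smile": 0.3, "long_silence": 0.6}
    print(recruiter_reaction(communicative_performance(cues)))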

2013

  • The TARDIS framework: intelligent virtual agents for social coaching in job interviews
    Keith Anderson, Elisabeth André, Tobias Baur, Sara Bernardini, Mathieu Chollet, Evi Chryssafidou, Ionut Damian, Cathy Ennis, Arjan Egges, Patrick Gebhard, Hazael Jones, Magalie Ochs, Catherine Pelachaud, Kaśka Porayska-Pomsta, Paola Rizzo, Nicolas Sabouret
    Proceedings of the Tenth International Conference on Advances in Computer Entertainment Technology (ACE-13). Enschede, the Netherlands, November 2013. LNCS 8253
  • NovA: Automated Analysis Of Nonverbal Signals In Social Interactions
    Tobias Baur, Ionut Damian, Florian Lingenfelser, Johannes Wagner and Elisabeth André
    Human Behavior Understanding: 4th International Workshop, HBU 2013, Barcelona, Spain, October 22, 2013, In conjunction with ACM Multimedia 2013. Proceedings (Lecture Notes in Computer Science Vol. 8212 / Image Processing, Computer Vision, Pattern Recognition, and Graphics)
    pp. 160-171
  • The Social Signal Interpretation (SSI) Framework - Multimodal Signal Processing and Recognition in Real-Time
    Johannes Wagner, Florian Lingenfelser, Tobias Baur, Ionut Damian, Felix Kistler and Elisabeth André
    Proceedings of the 21st ACM International Conference on Multimedia, 21-25 October 2013, Barcelona, Spain.
  • MMLI: Multimodal Multiperson Corpus of Laughter in Interaction
    Radoslaw Niewiadomski, Maurizio Mancini, Tobias Baur, Giovanna Varni, Harry Griffin and Min S. H. Aung
    Human Behavior Understanding: 4th International Workshop, HBU 2013, Barcelona, Spain, October 22, 2013, Proceedings (Lecture Notes in Computer Science Vol. 8212 / Image Processing, Computer Vision, Pattern Recognition, and Graphics)
    pp. 184-195
  • A Job Interview Simulation: Social Cue-based Interaction with a Virtual Character
    Tobias Baur, Ionut Damian, Patrick Gebhard, Kaśka Porayska-Pomsta and Elisabeth André
    Proceedings of the IEEE/ASE International Conference on Social Computing (SocialCom 2013), Washington, D.C., USA, September 8-14, 2013
    pp. 220-227
    This paper presents an approach that makes use of a virtual character and social signal processing techniques to create an immersive job interview simulation environment. In this environment, the virtual character plays the role of a recruiter who reacts and adapts to the user's behavior thanks to a component for the automatic recognition of social cues (conscious or unconscious behavioral patterns). The social cues pertinent to job interviews have been identified using a knowledge elicitation study with real job seekers. Finally, we present two user studies investigating the feasibility of the proposed approach as well as the impact of such a system on users.
  • A Software Framework for Social Cue-based Interaction with a Virtual Recruiter
    Ionut Damian, Tobias Baur, Patrick Gebhard, Kaśka Porayska-Pomsta, Elisabeth André
    Proceedings of the Intelligent Virtual Agents: 13th International Conference, IVA 2013, Edinburgh, UK, August 29-31, 2013 (Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence, Vol. 8108)
    pp. 444-445
  • Modelling Users’ Affect in Job Interviews: Technological Demo
    Kaśka Porayska-Pomsta, Keith Anderson, Ionut Damian, Tobias Baur, Elisabeth André, Sara Bernardini and Paola Rizzo
    User Modeling, Adaptation, and Personalization, Lecture Notes in Computer Science vol. 7899
    pp. 353-355
    This demo presents an approach to recognising and interpreting social cue-based interactions in computer-enhanced job interview simulations. We show which social cues and complex mental states of the user are relevant in this interaction context, how they can be interpreted using static Bayesian Networks, and how they can be recognised automatically using state-of-the-art sensor technology in real time.
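    (A toy sketch of this kind of Bayesian cue interpretation follows at the end of the 2013 list below.)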
  • Investigating Social Cue-Based Interaction in Digital Learning Games
    Ionut Damian, Tobias Baur and Elisabeth André
    Proceedings of the 1st International Workshop on Intelligent Digital Games for Empowerment and Inclusion (IDGEI 2013), held in conjunction with the 8th International Conference on the Foundations of Digital Games (FDG 2013), ACM, SASDG Digital Library, Chania, Crete, Greece, May 14, 2013
  • Laugh-aware virtual agent and its impact on user amusement
    Radoslaw Niewiadomski, Jennifer Hofmann, Jérôme Urbain, Tracy Platt, Johannes Wagner, Bilal Piot, Hüseyin Cakmak, Satish Pammi, Tobias Baur, Stéphane Dupont, Matthieu Geist, Florian Lingenfelser, Gary McKeown, Olivier Pietquin and Willibald Ruch
    Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013)
  • Laugh Machine
    Jérôme Urbain, Radoslaw Niewiadomski, Jennifer Hofmann, Emeline Bantegnie, Tobias Baur, Nadia Berthouze, Hüseyin Cakmak, Richard Thomas Cruz, Stéphane Dupont, Matthieu Geist, Harry Griffin, Florian Lingenfelser, Maurizio Mancini, Miguel Miranda, Gary McKeown, Sathish Pammi, Olivier Pietquin, Bilal Piot, Tracey Platt, Willibald Ruch, Abhishek Sharma, Gualtiero Volpe, Johannes Wagner
    Proceedings of the 8th International Workshop on Multimodal Interfaces (eNTERFACE'12)
  • Laugh when you're winning
    Maurizio Mancini, Laurent Ach, Emeline Bantegnie, Tobias Baur, Nadia Berthouze, Debajyoti Datta, Yu Ding, Stéphane Dupont, Harry J Griffin, Florian Lingenfelser, Radoslaw Niewiadomski, Catherine Pelachaud, Olivier Pietquin, Bilal Piot, Jérôme Urbain, Gualtiero Volpe, Johannes Wagner
    Innovative and Creative Developments in Multimodal Interaction Systems
    pp. 50-79
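
Relating to the static-Bayesian-Network interpretation mentioned in the "Modelling Users' Affect in Job Interviews" entry above, a toy sketch of inferring a hidden user state from observed social cues might look as follows; the network structure, cue names and probabilities are invented for illustration only:

    # Tiny static network: Stressed -> GazeAversion, Stressed -> SelfTouch,
    # with the cues assumed conditionally independent given the hidden state.
    P_STRESSED = 0.3
    P_CUE_GIVEN_STATE = {
        # cue: (P(cue | stressed), P(cue | not stressed)) -- made-up numbers.
        "gaze_aversion": (0.7, 0.2),
        "self_touch": (0.6, 0.25),
    }

    def posterior_stressed(observed: dict) -> float:
        """P(stressed | observed cues) by direct enumeration over the tiny network."""
        like_s, like_ns = P_STRESSED, 1.0 - P_STRESSED
        for cue, present in observed.items():
            p_s, p_ns = P_CUE_GIVEN_STATE[cue]
            like_s *= p_s if present else (1.0 - p_s)
            like_ns *= p_ns if present else (1.0 - p_ns)
        return like_s / (like_s + like_ns)

    # Example: both cues were detected in the current time window.
    print(round(posterior_stressed({"gaze_aversion": True, "self_touch": True}), 3))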