Publications


2016

  • Measuring the Impact of Multimodal Behavioural Feedback Loops on Social Interactions
    Ionut Damian, Tobias Baur, Elisabeth André
    Proceedings ICMI
    In this paper we explore the concept of automatic behavioural feedback loops during social interactions. Behavioural feedback loops (BFL) are rapid processes which analyse the user's behaviour in realtime and provide live feedback on how to improve its quality. In this context, we implemented an open source software framework for designing, creating and executing BFL on Android-powered mobile devices. To get a better understanding of the effects of BFL on face-to-face social interactions, we conducted a user study and compared four different BFL types spanning three modalities: tactile, auditory and visual. For the study, the BFL were designed to improve the users' perception of their speaking time in an effort to create more balanced group discussions. The study yielded valuable insights into the impact of BFL on conversations and how humans react to such systems. (A minimal sketch of such a loop appears after this list.)
  • Social Signal Processing for Dummies
    Ionut Damian, Michael Dietz, Frank Gaibler, Elisabeth André
    Proceedings ICMI
    We introduce SSJ Creator, a modern Android GUI enabling users to design and execute social signal processing pipelines using nothing but their smartphones and without writing a single line of code. It is based on a modular Java-based social signal processing framework (SSJ), which is able to perform realtime multimodal behaviour analysis on Android devices using both device-internal and external sensors. (An illustrative pipeline sketch in the same spirit appears after this list.)
  • Exploring Eye-Tracking-Based Detection of Visual Search for Elderly People
    Michael Dietz, Daniel Schork, Elisabeth André
    Proceedings of the 12th International Conference on Intelligent Environments (IE) 2016
    Visual search plays an important role in our daily lives and can be very frustrating whenever we cannot remember where we left objects...
  • Investigating Politeness Strategies and their Persuasiveness for a Robotic Elderly Assistant
    Stephan Hammer, Birgit Lugrin, Sergey Bogomolov, Kathrin Janowski, Elisabeth André
    Proceedings of the 11th International Conference on Persuasive Technologies (Persuasive 2016)
  • Exploring Eye-Tracking-Driven Sonification for the Visually Impaired
    Michael Dietz, Maha El Garf, Ionut Damian, Elisabeth André
    Proceedings of the 7th Augmented Human International Conference 2016
    Most existing sonification approaches for the visually impaired restrict the user to the perception of static scenes...
  • Exploring the Potential of Realtime Haptic Feedback during Social Interactions
    Ionut Damian, Elisabeth André
    Proceedings of the 10th ACM International Conference on Tangible, Embedded and Embodied Interaction (TEI)
    We explore the use of haptic feedback to deliver supportive information during social interactions in realtime. In an exploratory user study, we investigated perceptual limitations of vibration patterns during a conversation between peers. The results of this study were then used to develop a system that provides users with realtime information about the quality of their nonverbal behaviour while engaged in a public speech. (A small vibration-pattern sketch appears after this list.)
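
The behavioural feedback loop (BFL) described above boils down to a rapid sense-analyse-feedback cycle. Below is a minimal plain-Java sketch of that cycle, not the paper's actual framework: the SpeechSensor and FeedbackChannel interfaces, the 500 ms analysis interval and the 50% speaking-share threshold are all assumptions made for illustration, and a console message stands in for the tactile, auditory or visual feedback channels used in the study.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Illustrative behavioural feedback loop: sense, analyse, feed back. */
    public class FeedbackLoopSketch {

        /** Hypothetical sensor reading: true while the wearer is speaking. */
        interface SpeechSensor { boolean isSpeaking(); }

        /** Hypothetical feedback channel (tactile, auditory or visual). */
        interface FeedbackChannel { void notifyOverTalking(double share); }

        public static void main(String[] args) {
            // Stand-ins for a real voice-activity detector and a real feedback device.
            SpeechSensor sensor = () -> Math.random() < 0.6;          // dummy signal
            FeedbackChannel channel = share ->
                    System.out.printf("Feedback: speaking share %.0f%%%n", share * 100);

            ScheduledExecutorService loop = Executors.newSingleThreadScheduledExecutor();
            final long[] counts = new long[2];                        // [speaking, total]

            // Analyse the behaviour periodically and trigger feedback when the
            // wearer's speaking share exceeds a fair-share threshold.
            loop.scheduleAtFixedRate(() -> {
                counts[1]++;
                if (sensor.isSpeaking()) counts[0]++;
                double share = (double) counts[0] / counts[1];
                if (counts[1] > 10 && share > 0.5) {                  // assumed threshold
                    channel.notifyOverTalking(share);
                }
            }, 0, 500, TimeUnit.MILLISECONDS);

            loop.schedule(loop::shutdown, 10, TimeUnit.SECONDS);      // stop the demo
        }
    }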
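
SSJ Creator builds on the idea of a modular pipeline assembled from reusable components. The sketch below mirrors that sensor -> transformer -> consumer structure in plain Java; it is illustrative only and does not use the actual SSJ API — the Sensor, Transformer and Consumer interfaces and the toy energy feature are assumptions.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    /** Minimal modular pipeline sketch, loosely inspired by the SSJ idea. */
    public class PipelineSketch {

        interface Sensor      { double[] read(); }                 // delivers a raw frame
        interface Transformer { double[] apply(double[] frame); }  // e.g. feature extraction
        interface Consumer    { void consume(double[] features); } // e.g. classification, logging

        private final Sensor sensor;
        private final List<Transformer> transformers = new ArrayList<>();
        private final List<Consumer> consumers = new ArrayList<>();

        PipelineSketch(Sensor sensor) { this.sensor = sensor; }
        PipelineSketch addTransformer(Transformer t) { transformers.add(t); return this; }
        PipelineSketch addConsumer(Consumer c) { consumers.add(c); return this; }

        /** One pipeline step: read a frame and push it through every stage in order. */
        void step() {
            double[] data = sensor.read();
            for (Transformer t : transformers) data = t.apply(data);
            for (Consumer c : consumers) c.consume(data);
        }

        public static void main(String[] args) {
            PipelineSketch pipeline =
                    new PipelineSketch(() -> new double[]{0.2, 0.8, 0.4})      // dummy audio frame
                            .addTransformer(f -> new double[]{Arrays.stream(f).average().orElse(0)})
                            .addConsumer(features -> System.out.println("energy = " + features[0]));
            for (int i = 0; i < 3; i++) pipeline.step();
        }
    }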
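
The haptic feedback study above relies on distinguishable vibration patterns. On Android, such a pattern can be encoded as alternating wait/vibrate durations and played through the standard Vibrator service; the helper below is a minimal sketch of that, with pattern values chosen arbitrarily for illustration rather than taken from the study.

    import android.content.Context;
    import android.os.Vibrator;

    /** Plays a short vibration pattern as peripheral feedback, e.g. when the
     *  speaker's nonverbal behaviour drops below some quality threshold.
     *  The pattern values are assumptions chosen for illustration. */
    public class HapticFeedback {

        // Alternating wait/vibrate durations in milliseconds: a short double pulse.
        private static final long[] DOUBLE_PULSE = {0, 120, 80, 120};

        public static void pulse(Context context) {
            Vibrator vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
            if (vibrator != null && vibrator.hasVibrator()) {
                vibrator.vibrate(DOUBLE_PULSE, -1);  // -1: play the pattern once
            }
        }
    }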

2014

  • 'What I see is not what you get': why culture-specific behaviours for virtual characters should be user-tested across cultures
    Nick Degens, Birgit Endrass, Gert Jan Hofstede, Adrie Beulens, Elisabeth André
    AI & SOCIETY, October 2014, DOI: 10.1007/s00146-014-0567-2 (http://link.springer.com/article/10.1007/s00146-014-0567-2)
  • A Multi-Display System for Deploying and Controlling Home Automation
    Yucheng Jin, Chi-Tai Dang, Christian Prehofer, Elisabeth André
    Proceedings of the 9th Conference on Interactive Tabletops and Surfaces (ITS 2014)
    Available at: http://dl.acm.org/ft_gateway.cfm?id=2669553
    In this paper, we present a concept that uses a mashup tool on a tabletop display to wire home devices together, combined with web-based UIs on mobile devices for controlling them.
  • Modeling Gaze Mechanisms for Grounding in HRI
    Gregor Mehlmann, Kathrin Janowski, Tobias Baur, Markus Häring, Elisabeth André and Patrick Gebhard
    Proceedings of the 21st European Conference on Artificial Intelligence, ECAI '14, pp. 1069-1070, Prague, Czech Republic, August 18-22, 2014, Frontiers in Artificial Intelligence and Applications, Volume 263.
  • Exploring a Model of Gaze for Grounding in Multimodal HRI
    Gregor Mehlmann, Kathrin Janowski, Markus Häring, Tobias Baur, Patrick Gebhard and Elisabeth André
    Proceedings of the 16th International Conference on Multimodal Interaction, ICMI '14, pp. 247-254, Istanbul, Turkey, November 12-16, 2014.
  • Towards Peripheral Feedback-based Realtime Social Behaviour Coaching
    Ionut Damian, Tobias Baur, Chiew Seng Sean Tan, Johannes Schöning, Kris Luyten, Elisabeth André
    An important part of the information transfer in human-human communication happens nonverbally and often even unconsciously. Controlling the information flow in such an interaction therefore proves to be a difficult task, which can cause various social problems. This paper explores the use of wearable computing, augmentation concepts and social signal processing for realtime behaviour coaching. The goal is to help users be more aware of their nonverbal behaviour during human-human interaction and to provide feedback on how to improve it.
  • Exploring Social Augmentation Concepts for Public Speaking using Peripheral Feedback and Real-Time Behavior Analysis
    Ionut Damian, Chiew Seng Sean Tan, Tobias Baur, Johannes Schöning, Kris Luyten, Elisabeth André
    Mixed and Augmented Reality (ISMAR), 2014 IEEE International Symposium on
    Non-verbal and unconscious behavior plays an important role in efficient human-to-human communication but is often undervalued when training people to become better communicators. This is particularly true for public speakers, who not only need to behave according to social etiquette but must do so while generating enthusiasm and interest in dozens if not hundreds of other people. In this paper we propose the concept of social augmentation using wearable computing, with the goal of giving users the ability to continuously monitor their performance as communicators. To this end, we explore interaction modalities and feedback mechanisms which lend themselves to this task.
  • Full Body Interaction with Virtual Characters in an Interactive Storytelling Scenario
    Felix Kistler, Birgit Endrass, Elisabeth André
    14th Int. Conf. on Intelligent Virtual Agents (IVA 2014), LNAI 8637, pp. 236-239
  • Simulating Deceptive Cues of Joy in Humanoid Robots
    Birgit Endrass, Markus Haering, Gasser Akila, Elisabeth André
    14th Int. Conf. on Intelligent Virtual Agents (IVA 2014), LNAI 8637, pp. 174-177
  • Integration of Cultural Factors into the Behavioural Models of Virtual Characters
    Birgit Endrass, Elisabeth André
    Natural Language Generation in Interactive Systems, A. Stent and S. Bangalore (Eds.), Cambridge University Press, ch.10, pp. 227-251
  • Designing User-Character Dialogue in Interactive Narratives: An Exploratory Experiment
    Birgit Endrass, Christoph Klimmt, Gregor Mehlmann, Elisabeth André, Christian Roth
    IEEE Transactions on Computational Intelligence and AI in Games, (Special Issue on Computational Narrative and Games), Volume 6 (2)
  • Who’s Afraid of Job Interviews? Definitely a Question for User Modelling
    Kaśka Porayska-Pomsta, Paola Rizzo, Ionut Damian, Tobias Baur, Elisabeth André, Nicolas Sabouret, Hazaël Jones, Keith Anderson, Evi Chryssafidou
    LNCS 8538
    We define job interviews as a domain of interaction that can be modelled automatically in a serious game for job interview skills training. We present four types of studies: (1) field-based human-to-human job interviews, (2) field-based computer-mediated human-to-human interviews, (3) lab-based wizard-of-oz studies, and (4) field-based human-to-agent studies. Together, these highlight pertinent questions for the user modelling field as it expands its scope to applications for social inclusion. The results of the studies show that interviewees suppress their emotional behaviours, and although our system automatically recognises a subset of those behaviours, modelling complex mental states in real-world contexts remains a challenge for state-of-the-art user modelling technologies. This calls for a re-examination of both how such models are implemented and how they are used in the target contexts.
  • Trust-based Decision-Making for Energy-Aware Device Management
    Stephan Hammer, Michael Wißner, Elisabeth André
    Proceedings of the 22nd Conference on User Modeling, Adaptation and Personalization (UMAP 2014)
  • A Framework for the Development of Multi-Display Environment Applications Supporting Interactive Real-Time Portals
    Chi-Tai Dang, Elisabeth André
    Proceedings of the 6th Conference on Engineering Interactive Computing Systems (EICS 2014)
    Advances in multi-touch enabled interactive tabletops have led to many commercially available products, which are increasingly deployed in places beyond research labs, for example at exhibitions, retail stores, or showrooms. At the same time, small multi-touch devices, such as tablets or smartphones, have become prevalent in our daily life. Considering both trends, occasions and scenarios where tabletop systems and mobile devices form a coupled interaction space are expected to become increasingly widespread.
  • Evaluating the Effectiveness of Visualizations for Comparing Energy Usage Data
    Elisabeth André, René Bühling, Birgit Endrass, Masood Masoodian
    Workshop Proceedings of FSEA 2014, The AVI 2014 Workshop on Fostering Smart Energy Applications through Advanced Visual Interfaces
    pp. 5-8
  • Effects of language variety on personality perception in embodied conversational agents
    Brigitte Krenn, Birgit Endrass, Felix Kistler, Elisabeth André
    16th Int. Conf. on Human-Computer Interaction. Advanced Interaction Modalities and Techniques (HCII 2014), LNCS 8511 (2), pp. 429-439
  • Engaging with Virtual Characters Using a Pictorial Interaction Language
    Birgit Endrass, Lynne Hall, Colette Hume, Sarah Tazzyman, Elisabeth André, Ruth Aylett
    CHI '14 Extended Abstracts on Human Factors in Computing Systems, pp. 531-534
  • A Pictorial Interaction Language for Children to Communicate with Cultural Virtual Characters
    Birgit Endrass, Lynne Hall, Colette Hume, Sarah Tazzyman, Elisabeth André
    16th Int. Conf. on Human-Computer Interaction. Advanced Interaction Modalities and Techniques (HCII 2014), LNCS 8511 (2), pp. 532-543
  • Werewolves, Cheats, and Cultural Sensitivity
    Ruth Aylett, Mei Yii Lim, Lynne Hall, Birgit Endrass, Sarah Tazzyman, Christopher Ritter, Asad Nazir, Ana Paiva, Gert Jan Hofstede, Elisabeth André, Arvid Kappas
    Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), pp. 1085 - 1092
  • MIXER: Why the Difference? (Demonstration)
    Asad Nazir, Ruth Aylett, Mei Yii Lim, Birgit Endrass, Lynne Hall, Christopher Ritter
    Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), pp. 1687 - 1688
  • A hybrid Approach to model a Bayesian Network of Culture-specific Behavior (Extended Abstract)
    Birgit Endrass, Julian Frommel, Elisabeth André
    Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2014), pp. 1395-1396
  • Trust-based Decision-making for the Adaptation of Public Displays in Changing Social Contexts
    Michael Wißner, Stephan Hammer, Ekaterina Kurdyukova, and Elisabeth André
    Journal of Trust Management 1(1) (2014)
  • Exploring Interaction Strategies for Virtual Characters to Induce Stress in Simulated Job Interviews
    Patrick Gebhard, Tobias Baur, Ionut Damian, Gregor Mehlmann, Johannes Wagner and Elisabeth André
    Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems, AAMAS '14, Paris, France, 2014
  • A Framework for User-Defined Body Gestures to Control a Humanoid Robot
    Mohammad Obaid, Felix Kistler, Markus Häring, René Bühling, Elisabeth André
    International Journal of Social Robotics
  • Interpreting social cues to generate credible affective reactions of virtual job interviewers
    Hazael Jones, Nicolas Sabouret, Ionut Damian, Tobias Baur, Elisabeth André, Kaśka Porayska-Pomsta, Paola Rizzo
    IDGEI
    In this paper we describe a mechanism for generating credible affective reactions in a virtual recruiter during an interaction with a user. This is done by computing the user's communicative performance from the behaviours detected by a recognition module. The proposed software pipeline is part of the TARDIS system, which aims to help young job seekers acquire job-interview-related social skills. In this context, our system enables the virtual recruiter to realistically adapt and react to the user in real-time. (A hypothetical sketch of such a performance computation appears after this list.)
  • DynaLearn – An Intelligent Learning Environment for Learning Conceptual Knowledge
    Bert Bredeweg, Jochem Liem, Wouter Beek, Floris Linnebank, Jorge Gracia, Esther Lozano, Michael Wißner, René Bühling, Paulo Salles, Richard Noble, Andreas Zitek, Petya Borisova, David Mioduser
    AI Magazine, 34(4)
    pp. 46-65
  • A Systematic Discussion of Fusion Techniques for Multi-Modal Affect Recognition Tasks
    Florian Lingenfelser, Johannes Wagner, Elisabeth André
    ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces
  • An Event Driven Fusion Approach for Enjoyment Recognition in Real-time
    Florian Lingenfelser, Johannes Wagner, Elisabeth André, Gary McKeown, Will Curran
    Proceedings of the 22nd ACM International Conference on Multimedia, pp. 377-386
    Social signals and the interpretation of the information they carry are of high importance in Human-Computer Interaction. Often used for affect recognition, the cues within these signals are displayed in various modalities. Fusion of multi-modal signals is a natural and interesting way to improve the automatic classification of emotions transported in social signals. In most existing studies, both for uni-modal affect recognition and for multi-modal fusion, decisions are forced onto fixed annotation segments across all modalities. In this paper, we investigate the less prevalent approach of event-driven fusion, which indirectly accumulates asynchronous events in all modalities for final predictions. We present a fusion approach that handles short-timed events in a vector space, which is of special interest for real-time applications. We compare results of segmentation-based uni-modal classification and fusion schemes to the event-driven fusion approach. The evaluation is carried out via detection of enjoyment episodes within the audiovisual Belfast Story-Telling Corpus. (A minimal sketch of this decayed-accumulation idea appears after this list.)
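
The event-driven fusion entry above replaces fixed annotation segments with asynchronous, short-timed events. One simple way to picture this is as a decayed accumulation of past events, as in the sketch below; the exponential decay constant, the event values and the modalities are assumptions for illustration, not the fusion scheme evaluated in the paper.

    import java.util.ArrayList;
    import java.util.List;

    /** Sketch of event-driven fusion: asynchronous events from several modalities
     *  are accumulated with temporal decay instead of being forced onto fixed
     *  annotation segments. Decay constant and event values are illustrative. */
    public class EventFusionSketch {

        /** One detected event, e.g. a smile or a laughter burst. */
        record Event(String modality, double intensity, double timeSec) {}

        private static final double DECAY_PER_SEC = 0.5;   // assumed exponential decay

        /** Fused enjoyment estimate at time t: decayed sum over all past events. */
        static double fuse(List<Event> events, double t) {
            return events.stream()
                    .filter(e -> e.timeSec() <= t)
                    .mapToDouble(e -> e.intensity() * Math.exp(-DECAY_PER_SEC * (t - e.timeSec())))
                    .sum();
        }

        public static void main(String[] args) {
            List<Event> events = new ArrayList<>();
            events.add(new Event("video", 0.7, 1.0));   // smile detected
            events.add(new Event("audio", 0.9, 2.5));   // laughter burst
            for (double t = 1.0; t <= 4.0; t += 1.0) {
                System.out.printf("t=%.1fs  enjoyment=%.2f%n", t, fuse(events, t));
            }
        }
    }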
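
The virtual-recruiter entry above ('Interpreting social cues to generate credible affective reactions of virtual job interviewers') computes a communicative-performance measure from recognised user behaviours and maps it to an affective reaction. The sketch below illustrates that idea only; the cue names, weights and reaction thresholds are hypothetical and are not the TARDIS feature set.

    import java.util.Map;

    /** Illustrative communicative-performance score: a weighted combination of
     *  behavioural cues reported by a recognition module, mapped to a reaction
     *  of the virtual recruiter. Cues, weights and thresholds are assumptions. */
    public class PerformanceSketch {

        // Assumed cue weights; all cues are normalised to [0, 1].
        private static final Map<String, Double> WEIGHTS = Map.of(
                "gaze_contact", 0.4,
                "voice_energy", 0.3,
                "posture_openness", 0.3);

        static double score(Map<String, Double> cues) {
            return WEIGHTS.entrySet().stream()
                    .mapToDouble(e -> e.getValue() * cues.getOrDefault(e.getKey(), 0.0))
                    .sum();
        }

        static String reaction(double score) {
            if (score > 0.66) return "approving nod";
            if (score > 0.33) return "neutral follow-up question";
            return "impatient frown";        // low performance yields a negative reaction
        }

        public static void main(String[] args) {
            Map<String, Double> cues = Map.of("gaze_contact", 0.8, "voice_energy", 0.5,
                                              "posture_openness", 0.2);
            double s = score(cues);
            System.out.printf("performance = %.2f -> %s%n", s, reaction(s));
        }
    }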
