
VisualSceneMaker³ (VSM)


Project start: 01.01.2009
Host institution: Universität Augsburg
Local project leads: M.Sc. Gregor Ulrich Mehlmann
Prof. Dr. Elisabeth André
Dr. Patrick Gebhard

Summary

VisualSceneMaker³ is an authoring framework for modeling the multi-modal behavior and interaction management of virtual characters and social robots in joint activities with human users.

Description

Modeling believable and plausible behavior of virtual characters and human-like robots in social joint activities with human users can be a daunting task. A natural interaction with a human user becomes possible only if the artificially intelligent agent masters the incremental, reciprocal and parallel processes that contribute to the interpersonal coordination and grounding between the interaction partners. Humans master these behavioral aspects, which underlie all social activities between humans, from birth on. In human-agent interactions, however, they work far less reliably and pose a major challenge for any behavior and interaction modeling approach.

Mastering all aspects of interpersonal coordination and grounding requires the close coordination and fine-grained inter-meshing of a multitude of concurrent processes responsible for input processing and fusion, context and knowledge reasoning, and behavior generation on different behavioral levels. To tackle these challenges, we present the VSM authoring framework for modeling the behavior and interaction of virtual characters and social robots. The authoring software relies mainly on visual and declarative modeling languages and is the first modeling approach to combine hierarchical and concurrent state-charts with logic programming and a template-based script language. A declarative multi-modal event logic allows inputs distributed over multiple modalities to be fused in accordance with temporal and semantic constraints. The visual state-chart language allows for the coordination and tight synchronization of multiple parallel processes and the incremental interleaving of input processing, knowledge reasoning and behavior generation. The template-based behavior specification language supports automatic behavior variations and the easy integration of domain knowledge into the dialog content.
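
As an illustration of the idea behind the declarative event logic, the following minimal Java sketch fuses a speech event with a pointing gesture under a temporal window and a simple semantic compatibility check. All types and names here are hypothetical and chosen for this example only; they are not part of the actual VSM API, which expresses such fusion rules declaratively rather than in Java code.

    import java.util.Optional;

    // Hypothetical event type: modality, semantic content and time span.
    record Event(String modality, String meaning, long startMs, long endMs) {}

    final class FusionEngine {
        // Temporal constraint: events must start within this window of each other.
        private static final long WINDOW_MS = 1500;

        static Optional<String> fuse(Event speech, Event gesture) {
            boolean temporallyAligned =
                Math.abs(speech.startMs() - gesture.startMs()) <= WINDOW_MS;
            // Semantic constraint: the deictic expression in the utterance
            // must be resolvable by the referent of the pointing gesture.
            boolean semanticallyCompatible =
                speech.meaning().contains("that") && gesture.modality().equals("pointing");
            if (temporallyAligned && semanticallyCompatible) {
                return Optional.of(speech.meaning().replace("that", gesture.meaning()));
            }
            return Optional.empty();
        }

        public static void main(String[] args) {
            Event speech  = new Event("speech", "pick up that", 1000, 1800);
            Event gesture = new Event("pointing", "the red block", 1200, 1600);
            System.out.println(fuse(speech, gesture)); // Optional[pick up the red block]
        }
    }

Only when both the temporal and the semantic constraint hold are the two unimodal events merged into a single multi-modal interpretation; otherwise they remain separate and may be fused with later events instead.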

Our implementation relies on an interpreter approach that allows the execution of the behavior models to be visualized in real time within an IDE and the models to be modified during execution, without any compilation or code generation steps. VSM has successfully been used in various research and teaching projects and in highly complex interactive applications with multiple virtual characters and social robots. It has also been evaluated in field tests with pupils and college students, which showed that it significantly facilitates the rapid prototyping of interactive virtual character applications and can be a useful educational tool.
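
The following minimal Java sketch illustrates the principle behind such an interpreter approach; the names are hypothetical and do not reflect the actual VSM internals. Because the interpreter re-reads a plain mutable model on every step, edits to the model take effect immediately, with no compile or code generation step in between.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ModelInterpreter {
        // The behavior model is plain mutable data: each node maps to the id of
        // its successor. Editing this map at runtime changes the executed behavior.
        private final Map<String, String> transitions = new ConcurrentHashMap<>();
        private String current = "start";

        void step() {
            System.out.println("executing node: " + current); // hook for live visualization
            current = transitions.getOrDefault(current, "start");
        }

        public static void main(String[] args) {
            ModelInterpreter vm = new ModelInterpreter();
            vm.transitions.put("start", "greet");
            vm.transitions.put("greet", "start");
            for (int i = 0; i < 3; i++) vm.step();
            // "Live" edit while the interpreter keeps running: no recompilation needed.
            vm.transitions.put("greet", "farewell");
            vm.transitions.put("farewell", "start");
            for (int i = 0; i < 3; i++) vm.step();
        }
    }

The same property is what enables the real-time visualization: since the interpreter walks the model data structure directly, the IDE can highlight the currently executed node on every step.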

For more information about the VSM software, please visit the official VSM project homepage or directly download the latest version of the VSM source code.