
FUBI - Full Body Interaction Framework


Project start: 01.01.2011
Funding body: EU (European Union)
Local project lead: Dr. Felix Kistler
Publications: Link to the publication list

Summary

The Full Body Interaction Framework (FUBI) is a framework for recognizing full body gestures and postures in real time from the data of a depth sensor integrated using OpenNI or the Kinect SDK.

Quicklinks: Download | Documentation | Related Publications | Development Repository

Description


Fubi is a framework for full body interaction using a depth sensor such as the Microsoft Kinect or the Asus Xtion. It can be used with OpenNI/NiTE or the Kinect SDK, and since version 0.9 it also supports the Leap Motion Controller. FUBI is written in C++ and additionally includes a C# wrapper. The download comes with Visual Studio 2010 and 2013 solutions including two sample applications that should be ready to compile; Code::Blocks project files for building under Linux are included as well. You only need to install the dependencies as described in the installation instructions and set the include and lib paths for OpenCV in Visual Studio, or comment out the line "#define FUBI_USE_OPENCV" in FubiConfig.h if you do not want to use OpenCV. Within this file, you can also define which tracking software you want to use: OpenNI 1.x or 2.x, the Kinect SDK 1.x or 2.x, or the Leap SDK 2.x. You can also switch between the different trackers at runtime.
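As a rough illustration of these FubiConfig.h settings, a minimal sketch follows. Only FUBI_USE_OPENCV is quoted verbatim above; the sensor-selection define names are assumptions made for this sketch and should be checked against the actual header in the download:

    // FubiConfig.h (sketch, not the shipped header)

    // Comment this line out if you do not want to use OpenCV:
    #define FUBI_USE_OPENCV

    // Select the tracking software to compile in (define names are
    // assumptions for illustration; check the actual FubiConfig.h):
    #define FUBI_USE_OPENNI2        // OpenNI 2.x
    //#define FUBI_USE_OPENNI1      // OpenNI 1.x
    //#define FUBI_USE_KINECT_SDK   // Kinect SDK 1.x
    //#define FUBI_USE_KINECT_SDK2  // Kinect SDK 2.x
    //#define FUBI_USE_LEAP         // Leap SDK 2.x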
FUBI is freely available under the terms of the Eclipse Public License - v 1.0.
You can get the sources with Visual Studio 2010 and 2013 solutions and Code::Blocks projects here:

Download

The download page also provides a precompiled Unity3D integration of Fubi.
Furthermore, you can access the FUBI repository to get the current development version. More information can be found in the documentation and in the related publications.

If you use Fubi in a scientific project, please cite one of the related publications. If you use FUBI for a game-like application and/or you use the Unity integration, please cite the INTERACT 2013 paper. If you use Fubi with robots, please cite the IJSR 2014 paper. In all other cases, you can cite the ICIDS 2011 paper or the JMUI 2012 paper.

Fubi's main functionality is gesture and posture recognition.
To this end, the framework distinguishes four gesture categories (a sketch of the first two in Fubi's XML definition language follows the figures below):

  1. Static postures: A configuration of several joints (positions or orientations) or the number of displayed fingers (= finger count), without movement (e.g. figure 1: "arms crossed").
  2. Linear/angular movements: A linear movement of several joints with a specific direction and speed (e.g. figure 2: "right hand moves right") or an angular movement of a joint (e.g. "turn head right").
  3. Combinations of postures and movements: Combines recognizers of categories 1 and 2 in a sequence of states with specific time constraints (e.g. figure 3: "waving right hand").
  4. Symbolic gestures: Gestures with a complex shape that are defined by recorded sample data (e.g. figure 4: "right hand circle").
Figure 1: arms crossed
Figure 2: right hand moves right
Figure 3: waving right hand
Figure 4: right hand circle
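To give an impression of how the first two categories look in Fubi's XML definition language, here is a minimal, hedged sketch: the general structure (named recognizers under a common root) follows the documented schema, but the concrete element names, joint identifiers, and threshold values are illustrative assumptions, not recognizers shipped with Fubi:

    <FubiRecognizers>
      <!-- Category 1, static posture (illustrative): a single joint relation
           that would contribute to an "arms crossed" posture -->
      <JointRelationRecognizer name="leftHandRightOfRightElbow">
        <Joints main="leftHand" relative="rightElbow"/>
        <!-- left hand at least at the x position of the right elbow -->
        <MinValues x="0"/>
      </JointRelationRecognizer>

      <!-- Category 2, linear movement (illustrative): "right hand moves right"
           with a minimum speed -->
      <LinearMovementRecognizer name="rightHandMovesRight">
        <Joints main="rightHand"/>
        <Direction x="1"/>
        <Speed min="0.5"/>
      </LinearMovementRecognizer>
    </FubiRecognizers>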

A description of the categories and how they are recognized was published in the Journal on Multimodal User Interfaces; you can find the information here.

Gestures can be defined in C++/C# code or (preferred) using an XML based definition language.
For example, the following XML code defines a head nod gesture for recognition with Fubi:
[Figure: XML definition of the head nod gesture]
The head nod is defined as a combination recognizer with four states, each calling an angular movement recognizer that waits for a specific head pitch movement.
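The original page shows this definition only as an image. As a substitute, here is a hedged reconstruction of what such a definition could look like: the combination recognizer with four states matches the description above, while the concrete recognizer names, durations, angular velocity thresholds, and the pitch sign convention are assumptions for illustration:

    <FubiRecognizers>
      <!-- Angular movement recognizers for the head pitching down and up;
           sign convention and thresholds are illustrative -->
      <AngularMovementRecognizer name="headPitchDown">
        <Joint name="head"/>
        <MinAngularVelocity x="20"/>
      </AngularMovementRecognizer>
      <AngularMovementRecognizer name="headPitchUp">
        <Joint name="head"/>
        <MaxAngularVelocity x="-20"/>
      </AngularMovementRecognizer>

      <!-- Combination recognizer with four states: down, up, down, up -->
      <CombinationRecognizer name="headNod">
        <State minDuration="0.1" maxDuration="0.6" timeForTransition="0.3">
          <Recognizer name="headPitchDown"/>
        </State>
        <State minDuration="0.1" maxDuration="0.6" timeForTransition="0.3">
          <Recognizer name="headPitchUp"/>
        </State>
        <State minDuration="0.1" maxDuration="0.6" timeForTransition="0.3">
          <Recognizer name="headPitchDown"/>
        </State>
        <State minDuration="0.1" maxDuration="0.6">
          <Recognizer name="headPitchUp"/>
        </State>
      </CombinationRecognizer>
    </FubiRecognizers>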

The following image shows a study setup in which 18 participants had to perform gestures during so-called quick time events in a multi-party interactive storytelling scenario. We recognized 97% of their gestures (65 out of 67).

The corresponding paper was published at ICIDS 2011; you can find more information on it here.

[Figure: study setup with quick time events]

More information on the recognizable postures and gestures can be found in the documentation.

Fubi includes a GUI application that is built upon its C# wrapper. In the GUI, you can test your recognizers, inspect all the information Fubi offers about the sensor streams and user tracking, change and test the filter values, start mouse emulation for freehand interaction, and bind gestures to key or mouse events, e.g. for clicking. The latest feature is a tool that records gesture performances and can generate valid Fubi gesture XML from such a performance.
You can find more information in the tutorial.
This is what the Fubi GUI looks like:
[Screenshot: the Fubi GUI]

There is also a Unity integration for Fubi, which again uses the C# wrapper and is integrated in Unity 5 (64-bit). It supports adding gestural interaction via gesture symbols that can be used like default Unity buttons. As soon as such a symbol is shown on screen, Fubi automatically checks the corresponding recognizer and sends a click event once it has finished successfully. Furthermore, Fubi provides buttons and a swiping menu to implement freehand GUI interaction in Unity.
More information can again be found in the corresponding tutorial.
Here is a screenshot (note that it would usually make no sense to use a gesture symbol, a freehand button, and a swiping menu all at the same time; they are combined here for demonstration purposes only):
[Screenshot: the Fubi Unity sample]

The Unity integration is also used in the Traveller application of the eCute project.
For this application, we used a technique for gathering a user-defined gesture set, which is described in the paper presented at INTERACT 2013 along with a poster/demonstration that won the best poster award.
Here is a screenshot:
[Screenshot: the Traveller application with depth image]