FUBI - Full Body Interaction Framework
Funded by: EU (European Union)
Local project leader: Dipl.-Inf. Felix Kistler
The framework distinguishes between four gesture categories:
- Static postures: a configuration of several joints without movement (e.g. figure 1: "arms crossed").
- Gestures with linear movement: linear movement of several joints with a specific direction and speed (e.g. figure 2: "right hand moves right"; see the sketch after the figures).
- Combinations of postures and linear movements: combinations of categories 1 and 2 with specific time constraints (e.g. figure 3: "waving right hand").
- Complex gestures: detailed observation of one (or more) joints over a certain amount of time and recognition of specific patterns/paths (e.g. symbolic gestures like handwriting shapes).
Figure 1: arms crossed
Figure 3: waving right hand
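To make the first two categories more concrete, the following is a minimal conceptual sketch of such checks on raw joint positions. This is not FUBI's actual API; the joint type, axis convention, and thresholds are assumptions for illustration only.

```cpp
#include <iostream>

// Illustrative 3D joint position in meters (not a FUBI type).
struct Vec3 { float x, y, z; };

// Category 1 (static posture), e.g. "arms crossed":
// each hand is on the opposite side of the torso center
// (assuming +x points towards the user's right side).
bool armsCrossed(const Vec3& leftHand, const Vec3& rightHand, const Vec3& torso)
{
    return leftHand.x > torso.x && rightHand.x < torso.x;
}

// Category 2 (linear movement), e.g. "right hand moves right":
// the hand moves along +x with at least a minimum speed.
bool rightHandMovesRight(const Vec3& prev, const Vec3& curr, float deltaTime)
{
    const float minSpeed = 0.5f;   // assumed threshold in m/s
    return (curr.x - prev.x) / deltaTime > minSpeed;
}

int main()
{
    Vec3 leftHand  = {  0.3f, 1.2f, 2.0f };
    Vec3 rightHand = { -0.3f, 1.2f, 2.0f };
    Vec3 torso     = {  0.0f, 1.0f, 2.0f };
    std::cout << "arms crossed: " << armsCrossed(leftHand, rightHand, torso) << std::endl;
    return 0;
}
```

Category 3 recognizers then chain such checks with time constraints, as in the waving example further below.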
The current release version supports categories 1 to 3. However, you can already train and recognize category 4 gestures with SSI, which also integrates the FUBI framework.
A description of the categories and how they are recognized was published in the Journal on Multimodal User Interfaces; you can find more information here.
The following image shows a study setup in which 18 participants had to perform gestures of categories 1-3 during so-called quick-time events in an interactive storytelling scenario. We recognized 97% of their gestures (65 out of 67).
The corresponding paper was published at ICIDS 2011; you can find more information on it here.
More information on the recognizable postures and gestures can be found in the documentation.
In addition, FUBI can detect the number of fingers a user is holding up in front of the sensor (this requires an OpenCV installation).
For example, in the following image, the finger count for the shown hand would be three.
The finger count recognition works quite robustly under the following conditions:
- the user is close to the sensor (< 1 m)
- the fingers are clearly spread
- the hand is facing the Kinect
The above image displays the finger count recognition using the convex hull of the hand shape and its convexity defects.
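The general idea behind this technique can be sketched with a few OpenCV calls. This is a simplified illustration of the approach, not FUBI's actual implementation; the hand segmentation is assumed to be done already, and the thresholds are assumptions.

```cpp
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Estimates the finger count from a binary hand mask (white = hand pixels).
// The hand is assumed to be segmented already, e.g. from the depth image.
int countFingers(const cv::Mat& handMask)
{
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(handMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return 0;

    // Take the largest contour as the hand shape.
    size_t largest = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
            largest = i;
    const std::vector<cv::Point>& hand = contours[largest];

    // Convex hull (as point indices) and its convexity defects.
    std::vector<int> hullIndices;
    cv::convexHull(hand, hullIndices, false, false);
    if (hullIndices.size() < 3)
        return 0;
    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(hand, hullIndices, defects);

    // Each sufficiently deep defect is a gap between two spread fingers,
    // so the finger count is roughly "number of deep defects + 1".
    int deepDefects = 0;
    for (size_t i = 0; i < defects.size(); ++i)
    {
        float depth = defects[i][3] / 256.0f;   // defect depth in pixels
        if (depth > 20.0f)                      // assumed threshold
            ++deepDefects;
    }
    // Note: a fist (0 fingers) and a single finger both produce no deep
    // defects and would need an extra distinction in a real implementation.
    return deepDefects > 0 ? deepDefects + 1 : 0;
}
```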
We have since switched to a different approach that uses the morphological opening operation to separate the fingers from the rest of the hand, as shown in the following image:
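Sketched with OpenCV, this alternative roughly works as follows. Again, this is a simplified illustration of the general idea rather than FUBI's actual code; the kernel size and blob-area threshold are assumptions.

```cpp
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Estimates the finger count by removing the palm with a morphological
// opening and counting the finger blobs that remain.
int countFingersByOpening(const cv::Mat& handMask)
{
    // Opening with a large structuring element removes thin structures
    // (the fingers) and keeps the compact palm region.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(25, 25)); // assumed size
    cv::Mat palm;
    cv::morphologyEx(handMask, palm, cv::MORPH_OPEN, kernel);

    // Subtracting the palm from the full hand mask leaves only the fingers.
    cv::Mat fingers = handMask - palm;

    // Count the remaining blobs that are large enough to be fingers.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(fingers, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    int count = 0;
    for (size_t i = 0; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > 50.0)   // assumed minimum blob area
            ++count;
    return count;
}
```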
FUBI Architecture (not yet including the Kinect SDK integration)
The following image displays the workflow for designing a waving gesture using a combination recognizer in FUBI.
The example uses C++ code for defining the recognizer:
The same recognizer can also be implemented in XML, as explained in the first tutorial of the documentation.
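As a rough illustration of the logic such a combination recognizer encodes (a conceptual sketch only, not FUBI's actual C++ or XML API; the state handling, speed threshold, and time limit are assumptions), waving can be thought of as a small state machine over alternating hand movements:

```cpp
// Conceptual sketch of a "waving right hand" combination recognizer:
// a small state machine that expects alternating "hand moves right" and
// "hand moves left" movements, each completed within a time limit.
class WavingRecognizer
{
public:
    WavingRecognizer() : m_state(0), m_timeInState(0.0f) {}

    // handSpeedX: horizontal hand speed in m/s; deltaTime: frame time in s.
    // Returns true once a full right-left-right sequence has been observed.
    bool update(float handSpeedX, float deltaTime)
    {
        const float minSpeed     = 0.4f;   // assumed minimum movement speed
        const float maxStateTime = 1.0f;   // assumed maximum duration per state

        m_timeInState += deltaTime;
        if (m_timeInState > maxStateTime)
        {
            // The next movement did not follow in time -> restart.
            m_state = 0;
            m_timeInState = 0.0f;
        }

        bool movesRight = handSpeedX >  minSpeed;
        bool movesLeft  = handSpeedX < -minSpeed;

        // Even states expect a movement to the right, odd states to the left.
        if ((m_state % 2 == 0 && movesRight) || (m_state % 2 == 1 && movesLeft))
        {
            ++m_state;
            m_timeInState = 0.0f;
        }
        return m_state >= 3;   // right-left-right completed -> waving detected
    }

private:
    int   m_state;
    float m_timeInState;
};
```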
FUBI is written in C++ and currently offers a C++ API.
In addition, it includes a C#/.NET wrapper around the C++ API.
It has only been tested on Windows 7 so far, but at least the C++ part should contain no platform-dependent code.
The download comes with a Visual Studio 2010 solution including three sample applications that should be ready to compile. You should only need to install the dependencies as described in the installation instructions and set the include and lib paths for OpenCV in Visual Studio, or comment out the line "#define USE_OPENCV" at the top of FubiImageProcessing.cpp if you do not want to use OpenCV.
Within this file you can also define which tracking software you want to use: OpenNI 1.x, OpenNI 2.x, or the Kinect SDK.
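For orientation, the relevant switches at the top of FubiImageProcessing.cpp might look roughly like the excerpt below. Only "#define USE_OPENCV" is quoted from the instructions above; the names of the sensor-selection defines are assumptions and may differ in the actual sources.

```cpp
// FubiImageProcessing.cpp (illustrative excerpt only)
#define USE_OPENCV        // comment this out to build without OpenCV

// Select the tracking software to use (define names are assumptions):
//#define USE_OPENNI1
#define USE_OPENNI2
//#define USE_KINECT_SDK
```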
More information can be found in the documentation.
FUBI is freely available under the terms of the Eclipse Public License - v 1.0.
You can get the sources with a Visual Studio 2010 solution here:
The documentation can be found here: