
Projects


sustAGE

Smart environments for person-centered sustainable work and well-being
Start date: circa 01.01.2019
Project duration: 36 Months
Funded by: EU Horizon 2020 Research & Innovation Action (RIA)
The project provides a paradigm shift in human-machine interaction, building upon seven strategic technology trends: IoT, machine learning, micro-moments, temporal reasoning, recommender systems, data analytics and gamification. It delivers a composite system, integrated with daily activities at work and beyond, that supports employers and ageing employees in jointly increasing well-being, wellness at work and productivity. Its manifold contributions focus on supporting the employment and later retirement of older adults and on optimising workforce management.
The sustAGE platform guides workers through work-related tasks, recommends personalized cognitive and physical training activities with an emphasis on game and social aspects, delivers warnings regarding occupational risks, and ensures appropriate assignment to work tasks so as to maximize team performance.
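Purely for illustration (this does not reflect the actual sustAGE implementation), the following sketch shows how a simple rule-based component might turn wearable readings into an occupational-risk warning or a training recommendation; the field names and thresholds are hypothetical assumptions.

```python
from dataclasses import dataclass

# Hypothetical wearable snapshot for one worker; field names and thresholds
# are illustrative assumptions, not part of the sustAGE specification.
@dataclass
class WorkerState:
    heart_rate: float        # beats per minute
    ambient_temp_c: float    # workplace temperature in degrees Celsius
    hours_since_break: float

def advise(state: WorkerState) -> list[str]:
    """Return simple warnings/recommendations derived from rule thresholds."""
    messages = []
    if state.heart_rate > 120 and state.hours_since_break > 2:
        messages.append("Warning: sustained high heart rate - schedule a rest break.")
    if state.ambient_temp_c > 30:
        messages.append("Warning: heat stress risk - increase hydration reminders.")
    if not messages:
        messages.append("Recommendation: short cognitive training game available.")
    return messages

print(advise(WorkerState(heart_rate=125, ambient_temp_c=24, hours_since_break=3)))
```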
By combining a broad range of innovation-chain activities, namely technology R&D, demonstration, prototyping, pilots and extensive validation, the project aims to explore how health and safety at work, continuous training and proper workforce management can prolong older workers’ competitiveness at work. The deployment of the proposed technologies in two critical industrial sectors and their extensive evaluation will lead to a ground-breaking contribution that improves the performance and quality of life at work and beyond for many ageing adult workers.

HOL-DEEP-SENSE

Holistic Deep Modelling for User Recognition and Affective Social Behaviour Sensing

Start date: 01.10.2018
Project duration: 30 Months
Funded by: EU Horizon 2020 Marie Skłodowska-Curie action Individual Fellowship
The “Holistic Deep Modelling for User Recognition and Affective Social Behaviour Sensing” (HOL-DEEP-SENSE) project aims at augmenting affective machines such as virtual assistants and social robots with human-like acumen based on holistic perception and understanding abilities.
Social competencies comprising context awareness, salience detection and affective sensitivity are a central aspect of human communication, and thus indispensable for enabling natural and spontaneous human-machine interaction. Therefore, with the aim of advancing affective computing and social signal processing, we envision a “Social Intelligent Multi-modal Ontological Net” (SIMON) that builds on technologies at the leading edge of deep learning for pattern recognition. In particular, our approach is driven by multi-modal information fusion using end-to-end deep neural networks trained on large datasets, allowing SIMON to exploit combined auditory, visual and physiological analysis.
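As a minimal sketch of the kind of multi-modal fusion described above, assuming PyTorch and illustrative input dimensions, one might fuse per-modality encoders by concatenation; this is not the SIMON architecture itself.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Toy late-fusion network over audio, video and physiological features.

    The per-modality encoders and feature dimensions are illustrative
    assumptions, not the actual SIMON design.
    """
    def __init__(self, d_audio=40, d_video=512, d_physio=8, d_embed=128):
        super().__init__()
        self.audio = nn.Sequential(nn.Linear(d_audio, d_embed), nn.ReLU())
        self.video = nn.Sequential(nn.Linear(d_video, d_embed), nn.ReLU())
        self.physio = nn.Sequential(nn.Linear(d_physio, d_embed), nn.ReLU())
        self.fusion = nn.Sequential(nn.Linear(3 * d_embed, d_embed), nn.ReLU())

    def forward(self, audio, video, physio):
        # Encode each modality separately, then fuse by concatenation.
        z = torch.cat([self.audio(audio), self.video(video), self.physio(physio)], dim=-1)
        return self.fusion(z)

# Example: a batch of 4 synthetic feature vectors per modality.
net = FusionNet()
embedding = net(torch.randn(4, 40), torch.randn(4, 512), torch.randn(4, 8))
print(embedding.shape)  # torch.Size([4, 128])
```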
In contrast to standard machine learning systems, SIMON makes use of task relatedness to adapt its topology within a novel construct of subdivided neural networks. Through deep affective feature transformation, SIMON is able to perform associative domain adaptation via transfer and multi-task learning, and thus can infer user characteristics and social cues in a holistic context.
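Continuing the sketch above, the multi-task aspect can be illustrated by attaching task-specific heads (e.g., emotion category, age group) to the shared fused representation and summing their losses; the tasks and class counts are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical task heads sharing the fused 128-dimensional embedding
# produced by FusionNet in the sketch above.
heads = nn.ModuleDict({
    "emotion": nn.Linear(128, 6),    # e.g. 6 emotion categories (assumed)
    "age_group": nn.Linear(128, 4),  # e.g. 4 age bands (assumed)
})
criterion = nn.CrossEntropyLoss()

def multi_task_loss(embedding, labels):
    """Sum of per-task losses over the shared embedding (multi-task learning)."""
    return sum(criterion(heads[task](embedding), labels[task]) for task in heads)

labels = {"emotion": torch.randint(0, 6, (4,)), "age_group": torch.randint(0, 4, (4,))}
loss = multi_task_loss(torch.randn(4, 128), labels)
loss.backward()
```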
This new unified sensing architecture will enable affective computers to assimilate ontological human phenomena, leading to a step change in machine perception. This will offer a wide range of applications for health and wellbeing in future IoT-inspired environments, connected to dedicated sensors and consumer electronics.
By verifying the gains through holistic sensing, the project will show the true potential of the much sought-after emotionally and socially intelligent AI, and herald a new generation of machines with hitherto unseen skills to interact with humans via universal communication channels.

Sentiment Analysis

Start date: 01.05.2018
End date: 31.05.2021
Funded by: BMW AG

The project aims at real-time internet-scale sentiment analysis in unstructured multimodal data in the wild.

TAPAS

TAPAS: Training Network on Automatic Processing of PAthological Speech

Start date: 01.11.2017
End date: 31.10.2021
Funded by: EU (European Union)
Project Homepage: www.tapas-etn-eu.org

There is an increasing number of people across Europe with debilitating speech pathologies (e.g., due to stroke, Parkinson’s, etc.). These groups face communication problems that can lead to social exclusion. They are now being further marginalised by a new wave of speech technology that is increasingly woven into everyday life but is not robust to atypical speech. TAPAS proposes a programme of pathological speech research that aims to transform the well-being of these people.

The TAPAS work programme targets three key research problems:

(a) Detection: We will develop speech processing techniques for the early detection of conditions that affect speech production. The outcomes will be cheap, non-invasive diagnostic tools that provide early warning of the onset of progressive conditions such as Alzheimer’s and Parkinson’s (a minimal illustrative detection pipeline is sketched after this list).

(b) Therapy: We will use newly emerging speech processing techniques to produce automated speech therapy tools. These tools will make therapy more accessible and more individually targeted. Better therapy can increase the chances of recovering intelligible speech after traumatic events such as a stroke or oral surgery.

(c) Assisted Living: We will re-design current speech technology so that it works well for people with speech impairments and also helps in making informed clinical choices. People with speech impairments often have other co-occurring conditions that make them reliant on carers. Speech-driven tools for assisted living are a way to allow such people to live more independently.
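As a purely illustrative example of the detection goal in (a), the sketch below trains an off-the-shelf classifier on pre-computed per-speaker acoustic feature vectors (e.g., openSMILE functionals) to separate speakers with and without a given condition. The data here are synthetic and the pipeline is not TAPAS’s actual method.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: one acoustic feature vector per speaker (88 is the
# eGeMAPS functional count, used here only for illustration) and a binary
# label (condition present / absent).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 88))
y = rng.integers(0, 2, size=100)

# Standardise features, then fit a linear SVM; report cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```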

TAPAS adopts an inter-disciplinary and multi-sectorial approach. The consortium includes clinical practitioners, academic researchers and industrial partners, with expertise spanning speech engineering, linguistics and clinical science. All members have expertise in some element of pathological speech. This rich network will train a new generation of 15 researchers, equipping them with the skills and resources necessary for lasting success.


RADAR-CNS                                                                              

Remote Assessment of Disease and Relapse – Central Nervous System

Start date: 01.04.2016
End date: 31.03.2021
Funded by: EU (European Union)
Project Homepage: www.radar-cns.org

RADAR-CNS is a major new research programme which is developing new ways of monitoring major depressive disorder, epilepsy, and multiple sclerosis using wearable devices and smartphone technology. RADAR-CNS aims to improve patients’ quality of life, and potentially to change how these and other chronic disorders are treated.



ZD.B Fellowship                                                         

An Embedded Soundscape System for Personalised Wellness via Multimodal Bio-Signal and Speech Monitoring

Start date: 01.01.2018
End date: 31.12.2020
Funded by: The Bavarian State Ministry of Education, Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B)
Project Homepage: zentrum-digitalisierung.bayern/initiativen-fuer-die-wissenschaft/graduate-program/graduate-fellowships

The soundscape (the audible components of a given environment) is omnipresent in daily life. Yet research has shown that elements of our acoustic soundscapes can negatively affect mental wellbeing.

Taking a dual analysis-synthesis approach, this project will explore, through multimodal feedback analysis, the benefits of synthesised soundscape design and develop a ‘deep-listening’ personalised embedded system to improve human wellness. The project will investigate questions pertaining to auditory perception and develop novel methods for soundscape generation, informed by intelligent signal-state monitoring.
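To make the analysis-synthesis idea concrete, here is a hypothetical sketch in which a monitored stress proxy (heart rate) is mapped to simple synthesis parameters of a generated soundscape; the mapping and parameter names are assumptions for illustration only, not the project’s actual method.

```python
def soundscape_parameters(heart_rate_bpm: float) -> dict:
    """Map a monitored stress proxy to illustrative synthesis parameters.

    The linear mapping and parameter names are hypothetical; a real system
    would be informed by perception studies and multimodal feedback analysis.
    """
    # Normalise heart rate into [0, 1] over an assumed 50-120 bpm range.
    stress = min(max((heart_rate_bpm - 50.0) / 70.0, 0.0), 1.0)
    return {
        "tempo_bpm": 70.0 - 20.0 * stress,       # calmer tempo under higher stress
        "brightness": 0.8 - 0.5 * stress,        # darker timbre under higher stress
        "nature_layer_gain": 0.3 + 0.6 * stress, # more masking of harsh elements
    }

print(soundscape_parameters(95.0))
```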


DE-ENIGMA                                                                                

Start date: 01.02.2016
End date: 31.07.2019
Funded by: EU (European Union)
Project Homepage: de-enigma.eu

The DE-ENIGMA project is developing artificial intelligence for a commercial robot (Robokind’s Zeno). The robot will be used in an emotion-recognition and emotion-expression teaching programme for school-aged autistic children. This approach combines the most common interests of children of school age: technology, cartoon characters (which Zeno resembles) and socializing with peers.

During the project, Zeno will go through several design phases, getting ‘smarter’ every time. It will be able to process children’s motions, vocalizations, and facial expressions in order to adaptively and autonomously present emotion activities and engage in feedback, support, and play. The project, which runs from February 2016 until August 2019, is funded by Horizon 2020 (the European Union’s Framework Programme for Research and Innovation).

EngageME

Assistance system for recognising the emotional state of workshop employees

Start date: 01.06.2015
End date: 30.09.2019
Funded by: EU (European Union)

Engaging children with ASC (Autism Spectrum Conditions) in communication-centred activities during educational therapy is one of the cardinal challenges posed by ASC and contributes to its poor outcome. To this end, therapists have recently started using humanoid robots (e.g., NAO) as assistive tools. However, this technology lacks the ability to autonomously engage with children, which is key to improving the therapy and, thus, learning opportunities. Existing approaches typically use machine learning algorithms to estimate the engagement of children with ASC from their head pose or eye gaze inferred from face videos. These approaches are rather limited for modeling the atypical behavioural displays of engagement of children with ASC, which can vary considerably across children.

The first objective of EngageME is to deliver novel machine learning models that can, for the first time, effectively leverage multi-modal behavioural cues, including facial expressions, head pose, and vocal and physiological cues, to realize fully automated, context-sensitive estimation of the engagement levels of children with ASC. These models build upon dynamic graph models for multi-modal ordinal data, based on state-of-the-art machine learning approaches to sequence classification and domain adaptation, which can adapt to each child while still being able to generalize across children and cultures. To realize this, the second objective of EngageME is to provide the candidate with cutting-edge training aimed at complementing his current expertise in visual processing with expertise in wearable/physiological and audio technologies, from leading experts in these fields.
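A minimal sketch, assuming PyTorch, of the kind of adaptation described above: a shared recurrent engagement estimator over multi-modal feature sequences whose output layer is fine-tuned on a small amount of child-specific data. The dimensions, number of engagement levels and the adaptation scheme are illustrative assumptions, not the EngageME models themselves.

```python
import torch
import torch.nn as nn

class EngagementRNN(nn.Module):
    """Toy sequence model: multi-modal feature sequences -> engagement level."""
    def __init__(self, d_in=64, d_hidden=32, n_levels=4):
        super().__init__()
        self.rnn = nn.GRU(d_in, d_hidden, batch_first=True)
        self.head = nn.Linear(d_hidden, n_levels)

    def forward(self, x):                 # x: (batch, time, d_in)
        _, h = self.rnn(x)
        return self.head(h[-1])           # logits over engagement levels

model = EngagementRNN()
# ... assume the shared model has been trained on data from many children ...

# Child-specific adaptation: freeze the shared encoder, fine-tune only the head.
for p in model.rnn.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)

x_child = torch.randn(8, 50, 64)          # small child-specific feature set
y_child = torch.randint(0, 4, (8,))       # child-specific engagement labels
loss = nn.CrossEntropyLoss()(model(x_child), y_child)
loss.backward()
optimizer.step()
```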

EngageME is expected to deliver novel technology and models for endowing assistive robots with the ability to accurately ‘sense’ the engagement levels of children with ASC during robot-assisted therapy, while providing the candidate with the set of skills needed to become a leading researcher in the emerging field of affect-sensitive assistive technology.


iHEARu                                                                                       

Intelligent systems’ Holistic Evolving Analysis of Real-life Universal speaker characteristics

Start date: 01.01.2014
End date: 31.12.2018
Funded by: EU (European Union)
Project Homepage: www.ihearu.eu

Recently, automatic speech and speaker recognition has matured to the degree that it has entered the daily lives of thousands of Europe’s citizens, e.g., on their smartphones or in call services. Over the next years, speech processing technology will move to a new level of social awareness to make interaction more intuitive, speech retrieval more efficient, and to lend additional competence to computer-mediated communication and speech-analysis services in the commercial, health, security, and further sectors. To reach this goal, rich speaker traits and states such as age, height, personality, and physical and mental state, as carried by the tone of the voice and the spoken words, must be reliably identified by machines.

In the iHEARu project, ground-breaking methodology, including novel techniques for multi-task and semi-supervised learning, will for the first time deliver intelligent, holistic and evolving analysis, in real-life conditions, of universal speaker characteristics, which so far have been considered only in isolation. Today’s sparseness of annotated realistic speech data will be overcome by large-scale speech and meta-data mining from public sources such as social media, crowd-sourcing for labelling and quality control, and shared semi-automatic annotation.
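The semi-supervised aspect can be illustrated with a standard self-training loop (scikit-learn’s SelfTrainingClassifier), in which unlabelled speech feature vectors are marked with -1 and progressively pseudo-labelled. The data here are synthetic and the setup is a generic sketch, not the iHEARu methodology itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in for acoustic feature vectors: a few labelled examples of a
# binary speaker trait plus many unlabelled ones (label -1 marks "unlabelled").
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)
y_semi = y.copy()
y_semi[50:] = -1                      # only the first 50 examples keep labels

# Self-training: the base classifier pseudo-labels confident unlabelled samples
# and is retrained on the growing labelled set.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
model.fit(X, y_semi)
print("labelled after self-training:", (model.transduction_ != -1).sum())
```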

All stages from pre-processing and feature extraction, to the statistical modelling will evolve in “life-long learning” according to new data, by utilising feedback, deep, and evolutionary learning methods. Human-in-the-loop system validation and novel perception studies will analyse the self-organising systems and the relation of automatic signal processing to human interpretation in a previously unseen variety of speaker classification tasks.

The project’s work plan gives the unique opportunity to transfer current world-leading expertise in this field into a new de-facto standard of speaker characterisation methods and open-source tools, ready for tomorrow’s challenge of socially aware speech analysis.