Smart environments for person-centered sustainable work and well-being
Start date: 01.01.2019
Duration: 36 Months
Funding body: EU Horizon 2020 Research & Innovation Action (RIA)
sustAGE will provide a paradigm shift in human-machine interaction, building upon seven strategic technology trends (IoT, machine learning, micro-moments, temporal reasoning, recommender systems, data analytics and gamification) to deliver a composite system, integrated with daily activities at work and beyond, that supports employers and ageing employees in jointly increasing well-being, wellness at work and productivity. The project's manifold contribution focuses on supporting the employment and later retirement of older adults and on optimizing workforce management.
The sustAGE platform guides workers on work-related tasks, recommends personalized cognitive and physical training activities with emphasis on game and social aspects, delivers warnings regarding occupational risks and cares for their proper positioning in work tasks that will maximize team performance.
By combining a broad range of innovation-chain activities, namely technology R&D, demonstration, prototyping, pilots and extensive validation, the project aims to explore how health and safety at work, continuous training and proper workforce management can prolong older workers’ competitiveness at work. The deployment of the proposed technologies in two critical industrial sectors, and their extensive evaluation, will lead to a ground-breaking contribution that improves performance and quality of life at work and beyond for many ageing adult workers.


Holistic Deep Modelling for User Recognition and Affective Social Behaviour Sensing

Start date: 01.10.2018
Duration: 30 Months
Funding body: EU Horizon 2020 Marie Skłodowska-Curie action Individual Fellowship
The “Holistic Deep Modelling for User Recognition and Affective Social Behaviour Sensing” (HOL-DEEP-SENSE) project aims at augmenting affective machines such as virtual assistants and social robots with human-like acumen based on holistic perception and understanding abilities.
Social competencies comprising context awareness, salience detection and affective sensitivity present a central aspect of human communication, and thus are indispensable for enabling natural and spontaneous human-machine interaction. Therefore, with the aim to advance affective computing and social signal processing, we envision a “Social Intelligent Multi-modal Ontological Net” (SIMON) that builds on technologies at the leading edge of deep learning for pattern recognition. In particular, our approach is driven by multi-modal information fusion using end-to-end deep neural networks trained on large datasets, allowing SIMON to exploit combined auditory, visual and physiological analysis.
In contrast to standard machine learning systems, SIMON makes use of task relatedness to adapt its topology within a novel construct of subdivided neural networks. Through deep affective feature transformation, SIMON is able to perform associative domain adaptation via transfer and multi-task learning, and thus can infer user characteristics and social cues in a holistic context.
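The shared-representation idea behind SIMON's multi-task learning can be illustrated with a minimal sketch: a common trunk transforms fused multi-modal features, and lightweight task-specific heads branch off it, so related tasks (e.g. user traits and affective cues) regularise one shared representation. This is an illustrative toy example in NumPy, not the project's actual architecture; all dimensions and head names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical dimensions: fused multi-modal input -> shared trunk -> task heads.
D_IN, D_SHARED = 12, 8

# Shared trunk weights: features reused by every task (the core of
# multi-task learning -- related tasks shape one common representation).
W_shared = rng.normal(size=(D_IN, D_SHARED))

# Task-specific heads, e.g. one for user traits, one for affective cues.
W_traits = rng.normal(size=(D_SHARED, 3))  # 3 hypothetical trait outputs
W_affect = rng.normal(size=(D_SHARED, 4))  # 4 hypothetical emotion classes

def forward(x):
    h = relu(x @ W_shared)           # shared deep feature transformation
    return h @ W_traits, h @ W_affect

x = rng.normal(size=(2, D_IN))       # mini-batch of fused audio/visual features
traits, affect = forward(x)
print(traits.shape, affect.shape)    # (2, 3) (2, 4)
```

In this setup, gradients from both heads would update `W_shared` during training, which is how task relatedness transfers across objectives.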
This new unified sensing architecture will enable affective computers to assimilate ontological human phenomena, leading to a step change in machine perception. This will offer a wide range of applications for health and wellbeing in future IoT-inspired environments, connected to dedicated sensors and consumer electronics.
By verifying the gains through holistic sensing, the project will show the true potential of the much sought-after emotionally and socially intelligent AI, and herald a new generation of machines with hitherto unseen skills to interact with humans via universal communication channels.

Sentiment Analysis

Start date: 01.05.2018
End date: 31.05.2021
Funding body: BMW AG

The project aims at real-time internet-scale sentiment analysis in unstructured multimodal data in the wild.

TAPAS: Training Network on Automatic Processing of PAthological Speech

Start date: 01.11.2017
End date: 31.10.2021
Funding body: EU (European Union)

A growing number of people across Europe live with debilitating speech pathologies (e.g., due to stroke or Parkinson’s disease). These groups face communication problems that can lead to social exclusion, and they are now being further marginalised by a new wave of speech technology that is increasingly woven into everyday life but is not robust to atypical speech. TAPAS proposes a programme of pathological speech research that aims to transform the well-being of these people.

The TAPAS work programme targets three key research problems:

(a) Detection: We will develop speech processing techniques for early detection of conditions that impact speech production. The outcomes will be cheap and non-invasive diagnostic tools that provide early warning of the onset of progressive conditions such as Alzheimer’s and Parkinson’s.

(b) Therapy: We will use newly-emerging speech processing techniques to produce automated speech therapy tools. These tools will make therapy more accessible and more individually targeted. Better therapy can increase the chances of recovering intelligible speech after traumatic events such as a stroke or oral surgery.

(c) Assisted Living: We will re-design current speech technology so that it works well for people with speech impairments and also helps in making informed clinical choices. People with speech impairments often have other co-occurring conditions that make them reliant on carers. Speech-driven tools for assisted living are a way to allow such people to live more independently.

TAPAS adopts an inter-disciplinary and multi-sectorial approach. The consortium includes clinical practitioners, academic researchers and industrial partners, with expertise spanning speech engineering, linguistics and clinical science. All members have expertise in some element of pathological speech. This rich network will train a new generation of 15 researchers, equipping them with the skills and resources necessary for lasting success.


Remote Assessment of Disease and Relapse – Central Nervous System

Start date: 01.04.2016
End date: 31.03.2021
Funding body: EU (European Union)

RADAR-CNS is a major research programme that is developing new ways of monitoring major depressive disorder, epilepsy, and multiple sclerosis using wearable devices and smartphone technology. RADAR-CNS aims to improve patients’ quality of life, and potentially to change how these and other chronic disorders are treated.


An Embedded Soundscape System for Personalised Wellness via Multimodal Bio-Signal and Speech Monitoring

Start date: 01.01.2018
End date: 31.12.2020
Funding body: The Bavarian State Ministry of Education, Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B)

The soundscape (the audible components of a given environment) is omnipresent in daily life. Yet research has shown that elements of our acoustic soundscapes can negatively affect mental wellbeing.

Taking a dual analysis-synthesis approach, this project will use multimodal feedback analysis to explore the benefits of synthesised soundscape design and to develop a ‘deep-listening’ personalised embedded system that improves human wellness. The project will explore questions pertaining to auditory perception and develop novel methods for soundscape generation, informed by intelligent signal-state monitoring.

DE-ENIGMA

Start date: 01.02.2016
End date: 30.11.2019
Funding body: EU (European Union)

The DE-ENIGMA project is developing artificial intelligence for a commercial robot (Robokind’s Zeno). The robot will be used in an emotion-recognition and emotion-expression teaching programme for school-aged autistic children. This approach combines some of the most common interests of children of school age: technology, cartoon characters (which Zeno resembles) and socializing with peers.

During the project, Zeno will go through several design phases, getting ‘smarter’ every time. It will be able to process children’s motions, vocalizations and facial expressions in order to adaptively and autonomously present emotion activities, and to engage in feedback, support and play. The project, which will run from February 2016 until August 2019, is funded by Horizon 2020 (the European Union’s Framework Programme for Research and Innovation).