Feature Selection in Multimodal Continuous Emotion Prediction

Advances in affective computing have been made by combining information from different modalities, such as audio, video, and physiological signals. However, increasing the number of modalities also increases the dimensionality of the associated feature vectors, leading to higher computational cost and possibly lower prediction performance. In this regard, we present a comparative study of feature reduction methodologies for continuous emotion recognition. We compare dimensionality reduction by principal component analysis, filter-based feature selection using canonical correlation analysis and correlation-based feature selection, as well as wrapper-based feature selection with sequential forward selection and competitive swarm optimisation. These approaches are evaluated on the AV+EC-2015 database using support vector regression. Our results demonstrate that the wrapper-based approaches typically outperform the other methodologies, while pruning a large number of irrelevant features.
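As an illustration of the wrapper-based family discussed in the talk, the following is a minimal sketch of sequential forward selection using plain numpy and a least-squares regressor as the wrapped model (the talk itself uses support vector regression on AV+EC-2015; the synthetic data and the `sfs` helper here are purely illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sfs(X, y, k):
    """Greedy sequential forward selection: at each step, add the
    feature that most reduces the training squared error of a
    least-squares fit on the currently selected feature subset."""
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in remaining:
            cols = selected + [j]
            # least-squares fit with an intercept column
            A = np.column_stack([X[:, cols], np.ones(len(X))])
            w, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.mean((A @ w - y) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# the target depends only on features 0 and 3; the other 8 are irrelevant
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
print(sorted(sfs(X, y, 2)))  # → [0, 3]
```

In practice the wrapped model would be the regressor actually used for prediction (here, SVR), and the selection criterion would be cross-validated performance rather than training error; the sketch only conveys the greedy add-one-feature-at-a-time structure that lets wrapper methods prune irrelevant dimensions.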
Title: Feature Selection in Multimodal Continuous Emotion Prediction
Lecturer: Shahin Amiriparian
Date: 14-11-2017
Building/Room: Eichleitnerstraße 30 / 207
Contact: U Augsburg/TUM