Emotion recognition utilising multimodal signals

Description Various modalities have been used for affect recognition, including facial expressions, speech, and physiological signals such as ECG, EEG, and EMG. Moreover, combining these information streams has been shown to improve recognition performance.
Task The aim of this project is to explore multimodal signal representations for emotion recognition using different machine learning techniques, and to carry out a thorough evaluation of the suitability of these techniques.
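As an illustration of two common fusion strategies such a project might compare, here is a minimal NumPy sketch (the modality names, feature dimensions, and probability values are hypothetical, chosen only for demonstration):

```python
import numpy as np

# Hypothetical per-modality feature vectors for one analysis window
rng = np.random.default_rng(0)
audio_feat = rng.standard_normal(64)    # e.g. speech descriptors
physio_feat = rng.standard_normal(16)   # e.g. ECG/EMG statistics

def znorm(x):
    # Standardise a feature vector so modalities are on a comparable scale
    return (x - x.mean()) / (x.std() + 1e-8)

# Feature-level (early) fusion: normalise each modality, then concatenate
fused = np.concatenate([znorm(audio_feat), znorm(physio_feat)])
print(fused.shape)  # (80,)

# Decision-level (late) fusion: average per-modality class probabilities
p_audio = np.array([0.7, 0.2, 0.1])     # hypothetical softmax output
p_physio = np.array([0.4, 0.4, 0.2])    # hypothetical softmax output
p_fused = (p_audio + p_physio) / 2
print(p_fused)  # [0.55 0.3  0.15]
```

In practice the fused representation would feed a classifier (e.g. a Keras model, as suggested by the tools listed below), but this sketch isolates the fusion step itself.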
Utilises  Python, TensorFlow, Keras
Requirements basic knowledge of machine learning, reasonable programming skills (Python)
Languages English
Supervisor  Jing Han (