
Shahin Amiriparian



Research Associate

Email: shahin.amiriparian@informatik.uni-augsburg.de
Phone: +49 (0) 821 598 - 2924
Room: 306, Alte Universität (Eichleitnerstr. 30, F2)


CV

Shahin Amiriparian received his master's degree in Electrical Engineering and Information Technology (M.Sc.) from Technische Universität München (TUM), Germany. He began working towards his doctoral degree as a researcher in the Machine Intelligence and Signal Processing Group at TUM, focusing his research on novel deep learning methods for audio processing. From 2014 to 2017, he was a doctoral researcher at the Chair of Complex and Intelligent Systems at the University of Passau, Germany, and he is currently pursuing his doctoral degree at the Chair of Embedded Intelligence for Health Care and Wellbeing at the University of Augsburg, Germany. His main research focus is deep learning for audio understanding and image processing.

Selected Publications

  • S. Amiriparian, A. Baird, S. Julka, A. Alcorn, S. Ottl, S. Petrovic, E. Ainger, N. Cummins, and B. Schuller, “Recognition of Echolalic Autistic Child Vocalisations Utilising Convolutional Recurrent Neural Networks,” in Proceedings INTERSPEECH 2018, 19th Annual Conference of the International Speech Communication Association, (Hyderabad, India), ISCA, ISCA, September 2018. 5 pages, to appear.

  • S. Amiriparian, M. Freitag, N. Cummins, M. Gerczuk, S. Pugachevskiy, and B. W. Schuller, “A Fusion of Deep Convolutional Generative Adversarial Networks and Sequence to Sequence Autoencoders for Acoustic Scene Classification,” in Proceedings 26th European Signal Processing Conference (EUSIPCO), (Rome, Italy), EURASIP, IEEE, September 2018. 5 pages, to appear.
  • S. Amiriparian, M. Schmitt, N. Cummins, K. Qian, F. Dong, and B. Schuller, “Deep Unsupervised Representation Learning for Abnormal Heart Sound Classification,” in Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC 2018, (Honolulu, HI), IEEE, IEEE, July 2018. 4 pages, to appear.

  • S. Amiriparian, M. Gerczuk, S. Ottl, N. Cummins, S. Pugachevskiy, and B. Schuller, “Bag-of-Deep-Features: Noise-Robust Deep Feature Representations for Audio Analysis,” in Proceedings 31st International Joint Conference on Neural Networks (IJCNN), (Rio de Janeiro, Brazil), IEEE, IEEE, July 2018. 8 pages, to appear.

  • S. Amiriparian, M. Schmitt, S. Hantke, V. Pandit, and B. Schuller, “Humans Inside: Cooperative Big Multimedia Data Mining,” in Innovations in Big Data Mining and Embedded Knowledge: Domestic and Social Context Challenges (A. Esposito, A. M. Esposito, and L. C. Jain, eds.), Intelligent Systems Reference Library (ISRL), Springer, 2018. 25 pages, invited contribution, to appear.

  • S. Amiriparian, S. Julka, N. Cummins, and B. Schuller, “Deep Convolutional Recurrent Neural Networks for Rare Sound Event Detection,” in Proceedings 44. Jahrestagung für Akustik, DAGA 2018, (Munich, Germany), pp. 1522–1525, DEGA, DEGA, March 2018. Invited contribution, Structured Session Deep Learning for Audio.

  • S. Amiriparian, M. Schmitt, and B. Schuller, “Exploiting Deep Learning: die wichtigsten Bits und Pieces,” in Java Magazin, Maschinelles Lernen, pp. 46–53, May 2018.

  • S. Amiriparian, M. Freitag, N. Cummins, and B. Schuller, “Sequence To Sequence Autoencoders for Unsupervised Representation Learning From Audio,” in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 IEEE AASP Challenge Workshop (DCASE 2017), satellite to EUSIPCO 2017, (Munich, Germany), EUSIPCO, IEEE, November 2017. 5 pages.

  • S. Amiriparian, N. Cummins, S. Ottl, M. Gerczuk, and B. Schuller, “Sentiment Analysis Using Image-based Deep Spectrum Features,” in Proc. 2nd International Workshop on Automatic Sentiment Analysis in the Wild (WASA 2017) held in conjunction with the 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2017), (San Antonio, TX), AAAC, IEEE, October 2017. 5 pages.

  • S. Amiriparian, M. Freitag, N. Cummins, and B. Schuller, “Feature Selection in Multimodal Continuous Emotion Prediction,” in Proc. 2nd International Workshop on Automatic Sentiment Analysis in the Wild (WASA 2017) held in conjunction with the 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2017), (San Antonio, TX), AAAC, IEEE, October 2017. 8 pages.

  • S. Amiriparian, S. Pugachevskiy, N. Cummins, S. Hantke, J. Pohjalainen, G. Keren, and B. Schuller, “CAST a database: Rapid targeted large-scale big data acquisition via small-world modelling of social media platforms,” in Proc. 7th biannual Conference on Affective Computing and Intelligent Interaction (ACII 2017), (San Antonio, TX), AAAC, IEEE, October 2017. 6 pages.

  • S. Amiriparian, M. Gerczuk, S. Ottl, N. Cummins, M. Freitag, S. Pugachevskiy, and B. Schuller, “Snore Sound Classification Using Image-based Deep Spectrum Features,” in Proceedings INTERSPEECH 2017, 18th Annual Conference of the International Speech Communication Association, (Stockholm, Sweden), ISCA, ISCA, August 2017. 5 pages.

  • S. Amiriparian, J. Pohjalainen, E. Marchi, S. Pugachevskiy, and B. Schuller, “Is deception emotional? An emotion-driven predictive approach,” in Proceedings INTERSPEECH 2016, 17th Annual Conference of the International Speech Communication Association, (San Francisco, CA), pp. 2011–2015, ISCA, ISCA, September 2016. Nominated for best student paper award.

  • M. Freitag, S. Amiriparian, S. Pugachevskiy, N. Cummins, and B. Schuller, “auDeep: Unsupervised Learning of Representations from Audio with Deep Recurrent Neural Networks,” arXiv.org, December 2017. 5 pages.

  • B. W. Schuller, S. Steidl, A. Batliner, P. B. Marschik, H. Baumeister, F. Dong, S. Hantke, F. Pokorny, E.-M. Rathner, K. D. Bartl-Pokorny, C. Einspieler, D. Zhang, A. Baird, S. Amiriparian, K. Qian, Z. Ren, M. Schmitt, P. Tzirakis, and S. Zafeiriou, “The INTERSPEECH 2018 Computational Paralinguistics Challenge: Atypical & Self-Assessed Affect, Crying & Heart Beats,” in Proceedings INTERSPEECH 2018, 19th Annual Conference of the International Speech Communication Association, (Hyderabad, India), ISCA, ISCA, September 2018. 5 pages, to appear.

  • N. Cummins, S. Amiriparian, S. Ottl, M. Gerczuk, M. Schmitt, and B. Schuller, “Multimodal Bag-of-Words for Cross Domains Sentiment Analysis,” in Proceedings 43rd IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2018, (Calgary, Canada), IEEE, IEEE, April 2018. 5 pages, to appear.

  • N. Cummins, S. Amiriparian, G. Hagerer, A. Batliner, S. Steidl, and B. Schuller, “An Image-based Deep Spectrum Feature Representation for the Recognition of Emotional Speech,” in Proceedings of the 25th ACM International Conference on Multimedia, MM 2017, (Mountain View, CA), ACM, ACM, October 2017. 7 pages. Oral acceptance rate: 7.5 %.

  • S. Amiriparian, N. Cummins, M. Freitag, K. Qian, R. Zhao, V. Pandit and B. Schuller, “The Combined Augsburg / Passau / TUM / ICL System for DCASE 2017,” in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 IEEE AASP Challenge Workshop (DCASE 2017), satellite to EUSIPCO 2017, (Munich, Germany), EUSIPCO, IEEE, November 2017. 1 page. Technical report.

  • M. Freitag, S. Amiriparian, N. Cummins, M. Gerczuk, and B. Schuller, “An ‘End-to-Evolution’ Hybrid Approach for Snore Sound Classification,” in Proceedings INTERSPEECH 2017, 18th Annual Conference of the International Speech Communication Association, (Stockholm, Sweden), ISCA, ISCA, August 2017. 5 pages.

  • V. Pandit, S. Amiriparian, M. Schmitt, A. Mousa, and B. Schuller, “Big Data Multimedia Mining: Feature Extraction facing Volume, Velocity, and Variety,” in Big Data Analytics for Large-Scale Multimedia Search (S. Vrochidis, B. Huet, E. Chang, and I. Kompatsiaris, eds.), Wiley, 2017.

  • A. Baird, S. Amiriparian, N. Cummins, A. M. Alcorn, A. Batliner, S. Pugachevskiy, M. Freitag, M. Gerczuk, and B. Schuller, “Automatic Classification of Autistic Child Vocalisations: A Novel Database and Results,” in Proceedings INTERSPEECH 2017, 18th Annual Conference of the International Speech Communication Association, (Stockholm, Sweden), ISCA, ISCA, August 2017. 5 pages.

  • A. Baird, S. Amiriparian, A. Rynkiewicz, and B. Schuller, “Echolalic Autism Spectrum Condition Vocalisations: Brute-Force and Deep Spectrum Features,” in Proceedings International Paediatric Conference (IPC 2018), (Rzeszów, Poland), Polish Society of Social Medicine and Public Health, May 2018. 2 pages, to appear.

  • F. Ringeval, S. Amiriparian, F. Eyben, K. Scherer, and B. Schuller, “Emotion Recognition in the Wild: Incorporating Voice and Lip Activity in Multimodal Decision-Level Fusion,” in Proceedings of the ICMI 2014 EmotiW – Emotion Recognition In The Wild Challenge and Workshop (EmotiW 2014), Satellite of the 16th ACM International Conference on Multimodal Interaction (ICMI 2014), (Istanbul, Turkey), pp. 473–480, ACM, ACM, November 2014.

  • N. Cummins, M. Schmitt, S. Amiriparian, J. Krajewski, and B. Schuller, “You sound ill, take the day off: Classification of speech affected by Upper Respiratory Tract Infection,” in Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC 2017, (Jeju Island, South Korea), pp. 3806–3809, IEEE, IEEE, July 2017.

  • F. Demir, A. Sengur, N. Cummins, S. Amiriparian, and B. Schuller, “Low Level Texture Features for Snore Sound Discrimination,” in Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC 2018, (Honolulu, HI), IEEE, IEEE, July 2018. 4 pages, to appear.

Research Interests

  • Machine Learning, Deep Learning, Neural Networks, End-to-End Learning, Affective Computing, Emotion Recognition, Human-Robot Interaction.

Deep Learning

  • Neural Network Development: TensorFlow, Caffe, Keras, Theano.
  • Neural Network Approaches: Sequence to Sequence Autoencoders, Conditional Variational Autoencoders, Adaptive Neural Networks, End-to-End Learning, Reinforcement Learning, Zero-Shot Learning, (B)LSTMs and (B)GRUs, Convolutional Recurrent Neural Networks, (DC)GANs, CNNs, Pre-trained CNNs, MLPs (a minimal sketch of one such model follows this list).
  • Neural Network Applications: Audio-based recognition tasks for Speech Emotion, Speaker Traits and States, Deception, Autism, Depression, Stroke, Acoustic Events, Keyword Spotting, Music Emotion.
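
As an illustration of how the listed frameworks and approaches fit together, the sketch below shows a recurrent sequence-to-sequence autoencoder that learns fixed-length representations from mel-spectrograms, in the spirit of the auDeep and DCASE work cited above. It is a minimal sketch only: the input shape, layer sizes, and all names are illustrative assumptions, not the configuration used in the publications.

    # Minimal sketch (assumptions): a recurrent sequence-to-sequence
    # autoencoder that compresses mel-spectrograms into fixed-length
    # feature vectors; sizes and names are illustrative only.
    import numpy as np
    from tensorflow.keras import layers, Model

    TIME_STEPS, N_MELS, LATENT = 128, 64, 256  # assumed spectrogram geometry

    # Encoder: a GRU reads the spectrogram and keeps its final hidden state.
    spec_in = layers.Input(shape=(TIME_STEPS, N_MELS), name="mel_spectrogram")
    latent = layers.GRU(LATENT, name="encoder_gru")(spec_in)

    # Decoder: the latent vector is repeated and unrolled back into a
    # sequence, which a dense layer maps back to mel-band energies.
    repeated = layers.RepeatVector(TIME_STEPS)(latent)
    decoded = layers.GRU(LATENT, return_sequences=True, name="decoder_gru")(repeated)
    reconstruction = layers.TimeDistributed(layers.Dense(N_MELS))(decoded)

    autoencoder = Model(spec_in, reconstruction)
    autoencoder.compile(optimizer="adam", loss="mse")

    # After unsupervised training, the encoder alone yields deep features
    # for downstream classifiers (acoustic scenes, heart sounds, etc.).
    encoder = Model(spec_in, latent)

    if __name__ == "__main__":
        dummy = np.random.rand(8, TIME_STEPS, N_MELS).astype("float32")
        autoencoder.fit(dummy, dummy, epochs=1, verbose=0)
        print(encoder.predict(dummy, verbose=0).shape)  # (8, 256)

The encoder output could then feed any of the recognition tasks listed under Neural Network Applications above.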

Journal Reviewing

  • IEEE Transactions on Cybernetics, since 2014. (IF: 7.384, 2017)
  • IEEE Transactions on Neural Networks and Learning Systems, since 2015. (IF: 6.108, 2017)
  • IEEE Transactions on Affective Computing, since 2014. (IF: 3.149, 2017)
  • IEEE Transactions on Computational Intelligence and AI in Games, since 2016. (IF: 1.113, 2017)

Awards/Scholarships

  • Winner of the Science Slam Challenge 2018 in Augsburg, Germany. Topic: “Don’t be afraid of artificial intelligence”.
  • Performance bonus for TV-L employees for outstanding performance (Leistungsprämie für TV-L-Beschäftigte wegen ausgezeichneter Leistungen). University of Passau, Faculty of Informatics and Mathematics.

  • DAAD scholarship from STIBET III funding – Matching Funds. Technische Universität München and MicroNova AG.

  • Deutschlandstipendium for outstanding academic performance.

  • DAAD scholarship from STIBET program.

 

**********************************************************************
Open research topics for bachelor and master students
**********************************************************************

Early Detection of Stroke Using Artificial Intelligence
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)

An Artificial Intelligence Approach for Early Identification of Depression Characteristics
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)

Deep Learning for Data Augmentation
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)

A Speech-Based Approach for Recognising the Signs of Bipolar Disorder
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)

Rare Acoustic Event Detection
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)

Acoustic Scene Recognition
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)

Big Data Analytics
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)

Physiological Feature Representation Learning
Shahin Amiriparian M. Sc. (shahin.amiriparian@informatik.uni-augsburg.de)