Empirical Comparison of Context and Transformer Word Embeddings on Few-Shot Learning Tasks


Description Google recently introduced a new method for learning word embeddings with transformer networks (BERT – https://github.com/google-research/bert), which achieves state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. Word embeddings are fundamental features for virtually any NLP task and are especially important for avoiding elaborate representation learning. At the same time, deep learning approaches suffer from poor sample efficiency compared with human learning. One- and few-shot learning aims to learn representations from only a handful of samples and is often used in tasks where little data and few labels are available. Recently, researchers have also begun applying these techniques to linguistic data.
Task In this work, the student(s) will bring together two recent lines of work in NLP by empirically comparing different word embeddings (including BERT) in the context of few-shot learning (a minimal illustrative sketch is given after the topic details below).
Utilises NLP, Transformer Word Embeddings, Few-shot learning
Requirements Advanced knowledge of Machine Learning and Natural Language Processing, good programming skills (e.g. Python, C++)
Languages German or English
Supervisor Lukas Stappen, M. Sc. (lukas.stappen@informatik.uni-augsburg.de)
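
To illustrate the kind of comparison this topic involves, below is a minimal sketch (not the project's prescribed pipeline) of how BERT sentence embeddings could be evaluated in a few-shot setting with a simple nearest-prototype classifier. The model name, pooling strategy, and the toy 2-way 2-shot episode are illustrative assumptions only; the actual work would cover several embedding types and proper few-shot benchmarks.

```python
# Illustrative sketch: BERT embeddings + nearest-prototype few-shot classification.
# All data below is a toy example, not a real benchmark.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentences):
    """Mean-pool BERT's last hidden layer into one vector per sentence."""
    enc = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    mask = enc["attention_mask"].unsqueeze(-1)       # ignore padding tokens
    summed = (out.last_hidden_state * mask).sum(dim=1)
    return summed / mask.sum(dim=1)                  # shape: (batch, hidden_size)

# Toy 2-way 2-shot episode: two labelled "support" examples per class.
support = {
    "positive": ["I really enjoyed this film.", "A wonderful, moving story."],
    "negative": ["A complete waste of time.", "The plot was dull and predictable."],
}
query = ["One of the best movies I have seen."]

# Class prototypes = mean embedding of each class's support sentences.
prototypes = {label: embed(texts).mean(dim=0) for label, texts in support.items()}
q = embed(query)[0]

# Predict the class whose prototype is closest in cosine similarity.
pred = max(prototypes,
           key=lambda l: torch.cosine_similarity(q, prototypes[l], dim=0).item())
print(pred)
```

Swapping the `embed` function for static embeddings (e.g. word2vec or GloVe averages) while keeping the episode and classifier fixed would be one straightforward way to compare embedding types under identical few-shot conditions.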