Source: Dialogue
Date of publication: 06/18/2022
Authors: Tatyana Shavrina, Oleg Serikov, Ekaterina Voloshina

Is language acquisition similar in language models and humans? A chronological probing study

Abstract

The probing methodology allows one to obtain a partial picture of the linguistic phenomena stored in the inner layers of a neural network, using external classifiers and statistical analysis.
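To make the setup concrete, below is a minimal sketch of such a probe under assumed settings: it extracts hidden states from one layer of a publicly available BERT model and fits an external logistic-regression classifier on a tiny, made-up acceptability task. The model name, layer index, and data are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoTokenizer, AutoModel

# Probing in a nutshell: freeze the language model, take hidden states from
# one inner layer, and train an external linear classifier to predict a
# linguistic label (here, a made-up binary acceptability task).
MODEL_NAME = "bert-base-uncased"   # assumed model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

def layer_features(sentences, layer=6):
    """Mean-pooled hidden states of a single layer, used as probe inputs."""
    feats = []
    with torch.no_grad():
        for text in sentences:
            enc = tokenizer(text, return_tensors="pt", truncation=True)
            hidden = model(**enc).hidden_states[layer]   # (1, seq_len, dim)
            feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return feats

# Toy, hypothetical data: 1 = acceptable sentence, 0 = agreement violation.
sentences = ["The cats sleep.", "The cats sleeps.", "The dog barks.", "The dog bark."]
labels = [1, 0, 1, 0]

probe = LogisticRegression(max_iter=1000).fit(layer_features(sentences), labels)
# In a real study the probe is evaluated on held-out data; its accuracy is
# read as a measure of how much of the phenomenon the chosen layer encodes.
```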
Pretrained transformer-based language models are widely used for both natural language understanding (NLU) and natural language generation (NLG) tasks, making them the most common choice for downstream applications. However, little analysis has been carried out on whether such models are pretrained enough, or whether the knowledge they contain correlates with linguistic theory.
We present a chronological probing study of transformer English models, MultiBERT and T5. We sequentially compare the information about language that the models learn over the course of training on corpora. The results show that 1) linguistic information is acquired at the early stages of training; 2) both language models demonstrate the ability to capture features from various levels of language, including morphology, syntax, and even discourse, while they can also inconsistently fail on tasks that are perceived as easy.
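As a hedged illustration of the chronological comparison, the sketch below runs the same kind of probe over checkpoints saved at different pretraining steps and tracks how the score changes. The checkpoint identifiers follow the public MultiBERTs naming on the Hugging Face Hub and are assumptions for illustration, not necessarily the exact checkpoints or tasks used in the study.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoTokenizer, AutoModel

# Chronological probing: fit the same external probe on hidden states taken
# from checkpoints saved at different pretraining steps, then see when the
# score saturates. Checkpoint names below are assumed for illustration.
STEPS = ["step_20k", "step_200k", "step_2000k"]
sentences = ["The cats sleep.", "The cats sleeps.", "The dog barks.", "The dog bark."]
labels = [1, 0, 1, 0]   # toy acceptability labels, hypothetical

def pooled(model, tokenizer, text, layer=6):
    """Mean-pooled hidden states of one layer for a single sentence."""
    with torch.no_grad():
        enc = tokenizer(text, return_tensors="pt", truncation=True)
        return model(**enc).hidden_states[layer].mean(dim=1).squeeze(0).numpy()

for step in STEPS:
    name = f"google/multiberts-seed_0-{step}"            # assumed identifier
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()
    X = [pooled(model, tokenizer, s) for s in sentences]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(step, probe.score(X, labels))                  # toy training accuracy
```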
