Source
Dialogue
Publication date
15.02.2022
Authors
Татьяна Шаврина, Алена Феногенова, Александр Кукушкин, Владислав Михайлов, Денис Шевелев, Екатерина Артемова, Мария Тихонова, Антон Емельянов, Валентин Малых

Russian SuperGLUE 1.1: Revising the Lessons not Learned by Russian NLP-models

Abstract

In the last year, new neural architectures and multilingual pre-trained models have been released for Russian, which has made it challenging to evaluate their performance consistently across a range of language understanding tasks.

This paper presents Russian SuperGLUE 1.1, an updated benchmark styled after GLUE for Russian NLP models. The new version includes a number of technical, user-experience, and methodological improvements, including fixes for benchmark vulnerabilities left unresolved in the previous version: new and improved tests for understanding the meaning of a word in context (RUSSE), along with reading comprehension and common-sense reasoning (DaNetQA, RuCoS, MuSeRC). Together with the release of the updated datasets, we improve the benchmark toolkit, based on the jiant framework, for consistent training and evaluation of NLP models of various architectures; it now supports the most recent models for Russian. Finally, we integrate Russian SuperGLUE with MOROCCO (MOdel ResOurCe COmparison), a framework for industrial evaluation of open-source models, in which models are evaluated by a weighted average metric over all tasks, inference speed, and the amount of RAM they occupy.
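To make the MOROCCO-style evaluation criterion concrete, the sketch below shows how a combined report of task quality, inference speed, and memory footprint might be assembled. This is a minimal illustration only, not the MOROCCO implementation: the function names, task weights, and numbers are hypothetical placeholders.

```python
# Illustrative sketch of aggregating per-task scores into a weighted average
# and reporting it alongside inference speed and RAM usage.
# All names, weights, and values are hypothetical, not from the MOROCCO codebase.
from dataclasses import dataclass


@dataclass
class EvalReport:
    quality: float              # weighted average metric over all tasks
    samples_per_second: float   # measured inference throughput
    gpu_ram_gb: float           # peak RAM occupied during inference


def weighted_average(task_scores: dict, task_weights: dict = None) -> float:
    """Aggregate per-task metrics; equal weights unless specified."""
    if task_weights is None:
        task_weights = {task: 1.0 for task in task_scores}
    total_weight = sum(task_weights[t] for t in task_scores)
    return sum(task_scores[t] * task_weights[t] for t in task_scores) / total_weight


# Hypothetical per-task scores for one model on a few Russian SuperGLUE tasks.
scores = {"RUSSE": 0.72, "DaNetQA": 0.61, "RuCoS": 0.58, "MuSeRC": 0.66}
report = EvalReport(
    quality=weighted_average(scores),
    samples_per_second=120.0,   # placeholder measurement
    gpu_ram_gb=3.2,             # placeholder measurement
)
print(report)
```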

Russian SuperGLUE is publicly available at https://russiansuperglue.com/
