Source
ACL
Publication date
11.08.2024
Authors
Алена Феногенова, Артём Червяков, Никита Мартынов, Анастасия Козлова, Мария Тихонова, Альбина Ахметгареева, Антон Емельянов, Денис Шевелев, Павел Лебедев, Леонид Синев, Катерина Коломейцева, Даниил Московский, Елизавета Гончарова, Никита Савушкин, Полина Михайлова, Анастасия Минаева, Денис Димитров, Александр Панченко, Сергей Марков

MERA: A Comprehensive LLM Evaluation in Russian

Abstract

Over the past few years, one of the most notable advancements in AI research has been in foundation models (FMs), headlined by the rise of language models (LMs). However, despite researchers’ attention and the rapid growth in LM applications, their capabilities, limitations, and associated risks are still not well understood. To address these issues, we introduce MERA, a new instruction benchmark oriented towards evaluating the performance of FMs on the Russian language. The benchmark encompasses 21 evaluation tasks for generative models covering 10 skills and is supplied with private answer scoring to prevent data leakage. The paper introduces a methodology to evaluate FMs and LMs in fixed zero- and few-shot instruction settings that can be extended to other modalities. We propose an evaluation methodology, an open-source code base for the MERA assessment, and a leaderboard with a submission system. We evaluate open LMs as baselines and find that they still fall far behind the human level. We publicly release MERA to guide forthcoming research, anticipate groundbreaking model features, standardize the evaluation procedure, and address potential ethical concerns and drawbacks.
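The abstract mentions evaluation in fixed zero- and few-shot instruction settings. The snippet below is a minimal illustrative sketch of how such a prompt could be assembled, where an empty list of demonstrations corresponds to the zero-shot case and a fixed, model-independent list corresponds to the few-shot case. The function name, instruction text, and examples are hypothetical and are not taken from the MERA code base or datasets.

```python
# Illustrative sketch only: a generic zero-/few-shot instruction prompt builder.
# All task instructions and examples below are hypothetical.

from typing import Dict, List


def build_prompt(instruction: str,
                 shots: List[Dict[str, str]],
                 query: str) -> str:
    """Assemble an instruction prompt with an optional fixed set of demonstrations.

    An empty `shots` list yields the zero-shot setting; a non-empty list yields
    the few-shot setting, with the same demonstrations reused for every model
    so that scores remain comparable.
    """
    parts = [instruction.strip()]
    for shot in shots:  # fixed demonstrations, identical across evaluated models
        parts.append(f"Вопрос: {shot['question']}\nОтвет: {shot['answer']}")
    parts.append(f"Вопрос: {query}\nОтвет:")  # the instance the model must complete
    return "\n\n".join(parts)


if __name__ == "__main__":
    instruction = "Ответьте на вопрос одним словом."  # hypothetical task instruction
    shots = [{"question": "Столица Франции?", "answer": "Париж"}]  # hypothetical example
    print(build_prompt(instruction, shots, "Столица Японии?"))
```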
