Source
ICML
Publication date
25.06.2024
Authors
Viacheslav Sinii, Alexander Nikulin, Vladislav Kurenkov, Ilya Zisman, Sergey Kolesnikov

In-Context Reinforcement Learning for Variable Action Spaces

Abstract

Recently, it has been shown that transformers pre-trained on diverse datasets with multi-episode contexts can generalize to new reinforcement learning tasks in-context. A key limitation of previously proposed models is their reliance on a predefined action space size and structure. The introduction of a new action space often requires data re-collection and model re-training, which can be costly for some applications. In our work, we show that it is possible to mitigate this issue by proposing the Headless-AD model that, despite being trained only once, is capable of generalizing to discrete action spaces of variable size, semantic content and order. By experimenting with Bernoulli and contextual bandits, as well as a gridworld environment, we show that Headless-AD exhibits significant capability to generalize to action spaces it has never encountered, even outperforming specialized models trained for a specific set of actions on several environment configurations.
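
The abstract does not spell out the mechanism, but the model's name points at the core idea: dropping the fixed-size action head. Below is a minimal sketch of that idea, not the authors' implementation; the class name, shapes, and the linear stand-in for the transformer backbone are all assumptions. The point is only that scoring a predicted embedding against per-action embeddings decouples the model's parameters from the number, identity, and order of actions.

```python
# Minimal sketch (assumed names/shapes, not the paper's code): instead of a
# linear head with one logit per action in a fixed set, the model emits an
# embedding that is scored against embeddings of the *current* action set,
# so the action count is free to vary between tasks and episodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeadlessPolicy(nn.Module):
    def __init__(self, hidden_dim: int, emb_dim: int):
        super().__init__()
        # A single linear layer stands in for the transformer backbone
        # to keep the sketch self-contained.
        self.backbone = nn.Linear(hidden_dim, emb_dim)

    def forward(self, h: torch.Tensor, action_embs: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden_dim) context features from the sequence model.
        # action_embs: (num_actions, emb_dim) embeddings of the actions
        # currently available; num_actions is not baked into the weights.
        pred = self.backbone(h)          # (batch, emb_dim)
        return pred @ action_embs.T      # (batch, num_actions) similarity logits

# Same weights score a 7-action task now and a 12-action task later.
model = HeadlessPolicy(hidden_dim=64, emb_dim=32)
h = torch.randn(4, 64)
logits_small = model(h, torch.randn(7, 32))
logits_large = model(h, torch.randn(12, 32))
# Cross-entropy over the similarity logits acts as a contrastive-style
# objective: the predicted embedding is pulled toward the target action's
# embedding and away from the others.
loss = F.cross_entropy(logits_small, torch.randint(0, 7, (4,)))
```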
