Source
ICLR WRL
Publication date
24.04.2025
Authors
Albina Klepach, Alexander Nikulin, Ilya Zisman, Denis Tarasov, Alexander Derevyagin, Andrei Polubarov, Nikita Lyubaikin, Vladislav Kurenkov

Object-Centric Latent Action Learning

Abstract

Leveraging vast amounts of internet video data for Embodied AI is currently bottlenecked by the lack of action annotations and the presence of action-correlated distractors. We propose a novel object-centric latent action learning approach, based on VideoSaur and LAPO, that employs self-supervised decomposition of scenes into object representations and annotates video data with proxy-action labels. This method effectively disentangles causal agent-object interactions from irrelevant background noise and reduces the performance degradation of latent action learning approaches caused by distractors. Our preliminary experiments with the Distracting Control Suite show that latent action pretraining based on object decompositions improves the quality of inferred latent actions by 2.7× and the efficiency of downstream fine-tuning with a small set of labeled actions, increasing return by 2.6× on average.
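To make the idea concrete, the pipeline described above can be sketched as a LAPO-style latent action model operating on per-object slot features rather than raw pixels. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the slot encoder is a stand-in for a VideoSaur-style decomposition, and the inverse/forward dynamics models are plain linear maps with hypothetical shapes (`K` slots, `d` features, `z_dim` latent action dimensions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: K object slots per frame, each a d-dim feature vector,
# and a z_dim-dimensional latent (proxy) action.
K, d, z_dim = 4, 8, 3

def slot_features(frame):
    # Stand-in for a VideoSaur-style self-supervised slot encoder:
    # here we just return random slot features for illustration.
    return rng.normal(size=(K, d))

# LAPO-style latent action model, reduced to linear maps for the sketch:
#   inverse dynamics model (IDM): z = W_idm @ [slots_t; slots_t1]
#   forward dynamics model (FDM): slots_t1_hat = slots_t + reshape(W_fdm @ z)
W_idm = rng.normal(size=(z_dim, 2 * K * d)) * 0.1
W_fdm = rng.normal(size=(K * d, z_dim)) * 0.1

def infer_latent_action(slots_t, slots_t1):
    # IDM: infer the latent action that explains the transition between
    # consecutive object-centric frame representations.
    x = np.concatenate([slots_t.ravel(), slots_t1.ravel()])
    return W_idm @ x

def predict_next_slots(slots_t, z):
    # FDM: predict the next frame's slots from current slots and the latent
    # action; training both models jointly forces z to capture the transition.
    return slots_t + (W_fdm @ z).reshape(K, d)

slots_t = slot_features(None)
slots_t1 = slot_features(None)
z = infer_latent_action(slots_t, slots_t1)
slots_t1_hat = predict_next_slots(slots_t, z)
```

In the actual method these components are trained networks, and the inferred `z` serves as a proxy-action label for unlabeled video; restricting the dynamics models to object slots is what filters out action-correlated background distractors.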
