Source
SISY
Publication date
13.02.2023
Authors
Ilya Makarov, Evgenia Mikhailova

Curiosity-driven Exploration in VizDoom

Abstract

Efficient exploration remains one of the most challenging aspects of reinforcement learning. The numerous studies devoted to this topic propose a variety of complex methods for enhancing overall performance. Vanilla reinforcement learning algorithms can take a tremendous amount of time just to discover the right actions, especially in sparse-reward environments. One solution that can improve agent behavior is adding an effective exploration strategy to the baseline methods. This work examines several advanced exploration approaches based on the idea of curiosity bonuses. The Intrinsic Curiosity Module (ICM) and Random Network Distillation (RND) exploration architectures are applied to the Asynchronous Advantage Actor-Critic (A3C) algorithm. The constructed models are validated in the VizDoom environment. This study compares the implemented models with vanilla A3C and Deep Q-Network algorithms and shows state-of-the-art results in the most complicated scenario, Deathmatch.
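To make the curiosity-bonus idea concrete, below is a minimal sketch of Random Network Distillation, assuming PyTorch; the class name, network sizes, and feature dimension are illustrative assumptions, not taken from the paper. A fixed, randomly initialized target network embeds each observation, a predictor network is trained to match that embedding, and the prediction error serves as the intrinsic reward, so rarely visited states yield a larger bonus.

```python
# Minimal RND sketch (illustrative; layer sizes are assumptions, not from the paper).
import torch
import torch.nn as nn

class RND(nn.Module):
    def __init__(self, obs_dim: int, feat_dim: int = 64):
        super().__init__()
        # Fixed, randomly initialized target network: never trained.
        self.target = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        # Predictor network: trained to match the target's output.
        self.predictor = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad = False

    def intrinsic_reward(self, obs: torch.Tensor) -> torch.Tensor:
        # Novel observations are poorly predicted, yielding a larger bonus.
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        return (pred_feat - target_feat).pow(2).mean(dim=-1)
```

During training, this bonus would be added to the extrinsic environment reward before each A3C update, while the predictor is optimized on the same prediction error, so the bonus for a state shrinks as it is revisited.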
