Source: PeerJ Computer Science
Date of publication: 08/18/2022
Authors: Alexander Panov, Alexey Skrynnik, Konstantin Yakovlev, Anton Andreychuk

Pathfinding in Stochastic Environments: Learning vs Planning

Abstract

Among the main challenges associated with navigating a mobile robot in complex environments are partial observability and stochasticity. This work proposes a new stochastic formulation of the pathfinding problem, assuming that obstacles of arbitrary shapes may appear and disappear at random moments. Moreover, we consider the case when the environment is only partially observable for an agent. We study and evaluate two orthogonal approaches to the problem of reaching a goal under such conditions: planning and learning. Within planning, an agent constantly replans and updates its path based on the history of observations using a search-based planner (we used a variant of the renowned D* Lite algorithm). Within learning, an agent asynchronously learns to optimize a policy function using recurrent neural networks (we propose an original, efficient, and scalable approach). We carry out an extensive empirical evaluation of both approaches, which shows that the learning-based approach scales better to higher degrees of stochasticity (i.e., a larger number of appearing/disappearing obstacles), while the planning-based one is preferable when the environment is close to deterministic (i.e., external disturbances are rare).
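To make the planning loop concrete, below is a minimal Python sketch of the replan-on-observation cycle described in the abstract. It uses plain A* as a stand-in for D* Lite (which would reuse search effort between replans rather than searching from scratch), and the `true_grid_at` and `sense` callables are hypothetical hooks for the stochastic environment and the agent's partial-observation sensor; this illustrates the general scheme, not the authors' implementation.

```python
import heapq

def a_star(grid, start, goal):
    """Plain A* on a 4-connected grid. Used here as a simple stand-in
    for D* Lite, which would incrementally repair the previous search."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]
    parents, closed = {}, set()
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in closed:
            continue
        closed.add(cur)
        parents[cur] = parent
        if cur == goal:  # walk parents back to recover the path
            path = [cur]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in closed):
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # no path through the currently known obstacles

def navigate(true_grid_at, sense, start, goal, max_steps=500):
    """Observe -> update the belief map -> replan -> execute one step.
    `true_grid_at(t)` returns the (stochastic) grid at time step t and
    `sense(grid, pos, belief)` copies the visible patch into `belief`;
    both are hypothetical placeholders for the environment."""
    g0 = true_grid_at(0)
    belief = [[0] * len(g0[0]) for _ in g0]  # start optimistic: all free
    pos = start
    for t in range(max_steps):
        sense(true_grid_at(t), pos, belief)  # partial observation
        if pos == goal:
            return True
        path = a_star(belief, pos, goal)     # replan on the belief map
        if path is not None and len(path) > 1:
            pos = path[1]                    # take one step along the plan
        # else: wait in place; a blocking obstacle may disappear later
    return False
```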

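For the learning side, the sketch below shows a generic recurrent actor-critic policy in PyTorch: a GRU cell aggregates the history of partial observations into a hidden state that feeds the policy and value heads, which is what lets the agent act under partial observability. The architecture, layer sizes, observation shape, and action set are illustrative assumptions, not the specific network or asynchronous training scheme proposed in the paper.

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Actor-critic whose GRU memory aggregates the history of partial
    observations, so the action distribution depends on what the agent
    has seen so far, not only on the current local view."""
    def __init__(self, obs_dim, n_actions, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)  # recurrent memory
        self.actor = nn.Linear(hidden_dim, n_actions)  # policy head
        self.critic = nn.Linear(hidden_dim, 1)         # value head

    def forward(self, obs, hidden):
        x = self.encoder(obs)
        hidden = self.gru(x, hidden)
        dist = torch.distributions.Categorical(logits=self.actor(hidden))
        return dist, self.critic(hidden), hidden

# One interaction step: sample an action and thread the hidden state
# through time. The observation size (a flattened 7x7 local view) and
# the action set (4 moves + wait) are illustrative choices.
policy = RecurrentPolicy(obs_dim=49, n_actions=5)
hidden = torch.zeros(1, 128)
obs = torch.zeros(1, 49)            # placeholder partial observation
dist, value, hidden = policy(obs, hidden)
action = dist.sample()              # feed `action` to the environment
```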