Source
AAAI
Publication date
29.02.2024
Authors
Aleksandr Panov, Alexey Skrynnik, Konstantin Yakovlev, Anton Andreychuk

Decentralized Monte Carlo Tree Search for Partially Observable Multi-agent Pathfinding

Abstract

The Multi-Agent Pathfinding (MAPF) problem involves finding a set of conflict-free paths for a group of agents confined to a graph. In typical MAPF scenarios, the graph and the agents' starting and ending vertices are known beforehand, allowing the use of centralized planning algorithms. However, in this study, we focus on the decentralized MAPF setting, where the agents may observe the other agents only locally and are restricted in communication with each other. Specifically, we investigate the lifelong variant of MAPF, where new goals are continually assigned to the agents upon completion of previous ones. Drawing inspiration from the successful AlphaZero approach, we propose a decentralized multi-agent Monte Carlo Tree Search (MCTS) method for MAPF tasks. Our approach utilizes the agent's observations to recreate the intrinsic Markov decision process, which is then used for planning with a version of neural MCTS tailored to multi-agent tasks. The experimental results show that our approach outperforms state-of-the-art learnable MAPF solvers. The source code is available at https://github.com/AIRI-Institute/mats-lp
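To make the core idea more concrete, below is a minimal, hypothetical sketch of the decision loop the abstract describes: each agent independently plans its next move by running MCTS inside a small MDP reconstructed from its local observation. The sketch uses plain UCB1 selection with random rollouts rather than the neural, AlphaZero-style MCTS used in the paper, and it ignores other agents entirely; all class names, parameters, and the toy grid setup are illustrative assumptions, not the MATS-LP implementation.

```python
# Illustrative sketch only: vanilla MCTS over a tiny grid MDP built from one
# agent's local observation. Not the authors' MATS-LP code.
import math
import random

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1), "wait": (0, 0)}


class LocalMDP:
    """Deterministic grid MDP reconstructed from a single agent's local view (hypothetical)."""

    def __init__(self, obstacles, goal, size):
        self.obstacles, self.goal, self.size = obstacles, goal, size

    def step(self, state, action):
        dr, dc = MOVES[action]
        nr, nc = state[0] + dr, state[1] + dc
        if not (0 <= nr < self.size and 0 <= nc < self.size) or (nr, nc) in self.obstacles:
            nr, nc = state  # blocked moves keep the agent in place
        reward = 1.0 if (nr, nc) == self.goal else -0.01  # small step penalty
        return (nr, nc), reward, (nr, nc) == self.goal


class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}  # action -> Node
        self.visits, self.value = 0, 0.0


def ucb1(parent, child, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)


def rollout(mdp, state, horizon, gamma):
    """Random rollout from `state`; returns the discounted return."""
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        state, reward, done = mdp.step(state, random.choice(list(MOVES)))
        ret += discount * reward
        discount *= gamma
        if done:
            break
    return ret


def mcts_action(mdp, root_state, iterations=300, horizon=20, gamma=0.95):
    """Run MCTS from `root_state` and return the action with the most visits."""
    root = Node(root_state)
    for _ in range(iterations):
        node, ret, discount = root, 0.0, 1.0
        # 1. Selection: follow UCB1 through fully expanded nodes.
        while len(node.children) == len(MOVES):
            action, child = max(node.children.items(), key=lambda kv: ucb1(node, kv[1]))
            _, reward, done = mdp.step(node.state, action)
            ret += discount * reward
            discount *= gamma
            node = child
            if done:
                break
        else:
            # 2. Expansion: add one child for an untried action.
            action = random.choice([a for a in MOVES if a not in node.children])
            next_state, reward, done = mdp.step(node.state, action)
            child = Node(next_state, parent=node)
            node.children[action] = child
            ret += discount * reward
            discount *= gamma
            node = child
            # 3. Simulation: random rollout from the new leaf.
            if not done:
                ret += discount * rollout(mdp, next_state, horizon, gamma)
        # 4. Backpropagation: update statistics along the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += ret
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]


# Example: plan one step on a hypothetical 5x5 local window with one obstacle.
local_view = LocalMDP(obstacles={(2, 2)}, goal=(4, 4), size=5)
print(mcts_action(local_view, root_state=(0, 0)))  # e.g. "down" or "right"
```

In the actual method, the random rollouts and UCB1 statistics would be replaced by a learned policy and value network guiding the search, and the reconstructed MDP would account for the locally observed agents, which is what makes the approach suitable for the decentralized, partially observable setting.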
