Source
Optimization Methods and Software
Publication date
28.03.2025
Authors
Aleksandr Beznosikov, Valentin Samokhin, Alexander Gasnikov

Distributed saddle point problems: lower bounds, near-optimal and robust algorithms

Abstract

This paper focuses on distributed optimization of stochastic saddle point problems. The first part of the paper is devoted to lower bounds for centralized and decentralized distributed methods for smooth (strongly) convex-(strongly) concave saddle point problems, together with near-optimal algorithms that achieve these bounds. Next, we present a new federated algorithm for centralized distributed saddle point problems: Extra Step Local SGD. The theoretical analysis of the new method covers strongly convex-strongly concave and non-convex-non-concave problems. In the experimental part of the paper, we demonstrate the effectiveness of our method in practice; in particular, we train GANs in a distributed manner.
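The "extra step" in the algorithm's name refers to the extragradient idea: each iteration first takes a look-ahead step and then the actual update using the gradient evaluated at the look-ahead point, which stabilizes saddle point dynamics where plain simultaneous gradient descent-ascent can cycle or diverge. The sketch below shows only this single-worker extragradient building block on a toy bilinear problem min_x max_y x^T A y; it is an illustrative assumption-laden sketch, not the paper's full method, which additionally runs local SGD steps on each worker with periodic averaging. The function name `extragradient_saddle` and all parameters are hypothetical.

```python
import numpy as np

def extragradient_saddle(A, x0, y0, lr=0.1, steps=200):
    """Extragradient ("extra step") iterations for min_x max_y x^T A y.

    The vector field is F(x, y) = (A @ y, -A.T @ x); each iteration takes
    a look-ahead step along F and then the real step using F evaluated
    at the look-ahead point.
    """
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    for _ in range(steps):
        # extra (look-ahead) step
        gx, gy = A @ y, A.T @ x
        x_half, y_half = x - lr * gx, y + lr * gy
        # main step, using the gradient at the look-ahead point
        gx, gy = A @ y_half, A.T @ x_half
        x, y = x - lr * gx, y + lr * gy
    return x, y
```

On this bilinear problem the unique saddle point is the origin; the extragradient iterates contract toward it, whereas plain gradient descent-ascent with the same step size spirals outward.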
