Source
Optimization Methods and Software
Publication date
28.03.2025
Authors
Aleksandr Beznosikov
Valentin Samokhin
Alexander Gasnikov
Distributed saddle point problems: lower bounds, near-optimal and robust algorithms
Keywords
Distributed optimization, saddle point problems, lower and upper bounds, local methods, convex optimization, stochastic optimization
Abstract
This paper focuses on the distributed optimization of stochastic saddle point problems. The first part of the paper is devoted to lower bounds for centralized and decentralized distributed methods for smooth (strongly) convex-(strongly) concave saddle point problems, together with near-optimal algorithms that achieve these bounds. Next, we present a new federated algorithm for centralized distributed saddle point problems, Extra Step Local SGD. The theoretical analysis of the new method covers both strongly convex-strongly concave and non-convex-non-concave problems. In the experimental part of the paper, we demonstrate the effectiveness of our method in practice. In particular, we train GANs in a distributed manner.
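The "extra step" in the algorithm's name refers to the extragradient update, a standard fix for the cycling behavior of plain gradient descent-ascent on saddle point problems. Below is a minimal single-worker sketch of that update on the toy bilinear problem min_x max_y f(x, y) = x * y; the function names, step size, and iteration count are illustrative assumptions, not the paper's actual distributed method (which additionally runs local steps on each node with periodic averaging).

```python
# Hedged sketch: the extragradient ("extra step") update for
# min_x max_y f(x, y) = x * y on a single worker.
# Plain gradient descent-ascent cycles or diverges on this problem;
# the extrapolation step makes the iterates contract to the saddle (0, 0).

def extragradient_step(x, y, lr):
    # Extrapolation: probe the gradient field half a step ahead.
    gx, gy = y, x                    # grad_x f = y, grad_y f = x
    x_half = x - lr * gx             # descent in x
    y_half = y + lr * gy             # ascent in y
    # Update: apply gradients evaluated at the extrapolated point.
    gx_half, gy_half = y_half, x_half
    return x - lr * gx_half, y + lr * gy_half

def run(steps=2000, lr=0.1):
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = extragradient_step(x, y, lr)
    return x, y                      # approaches the saddle point (0, 0)
```

In the federated variant described in the abstract, each worker would perform such steps locally on its own stochastic gradients, with the server averaging the iterates every few rounds.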