Source
ICOMP
Publication date
10.10.2024
Authors
Aleksandr Beznosikov
Aibek Alanov
Dmitry Kovalev
Martin Takáč
Alexander Gasnikov
On Scaled Methods for Saddle Point Problems
Abstract
Methods with adaptive scaling of different features play a key role in solving saddle point problems (SPPs), primarily due to Adam's popularity for solving adversarial machine learning problems, including GANs training. This paper carries out a theoretical analysis of the following scaling techniques for solving SPPs: the well-known Adam and RMSProp scaling and the newer AdaHessian and OASIS based on the Hutchinson approximation. We use Extra Gradient and its improved version with negative momentum as the basic method. Experimental studies on GANs show good applicability not only of Adam, but also of other, less popular methods.
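The abstract names the building blocks but not the update itself. As a rough, hedged illustration only, the sketch below combines an Extra Gradient (extrapolation) step with RMSProp-style diagonal scaling on a toy bilinear min-max problem; the problem, hyperparameters (`lr`, `beta`, `eps`) and variable names are illustrative assumptions, not the authors' exact algorithm or code.

```python
import numpy as np

# Toy bilinear saddle point problem: min_x max_y  x^T A y
# (illustrative example, not taken from the paper)
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))

def grad_x(x, y):
    return A @ y          # gradient of x^T A y with respect to x

def grad_y(x, y):
    return A.T @ x        # gradient of x^T A y with respect to y

x, y = np.ones(n), np.ones(n)
vx, vy = np.zeros(n), np.zeros(n)   # running second-moment estimates
lr, beta, eps = 0.1, 0.9, 1e-8      # assumed hyperparameters

for t in range(1000):
    gx, gy = grad_x(x, y), grad_y(x, y)

    # RMSProp-style diagonal scaling (one of the schemes analysed in the paper)
    vx = beta * vx + (1 - beta) * gx**2
    vy = beta * vy + (1 - beta) * gy**2
    dx, dy = np.sqrt(vx) + eps, np.sqrt(vy) + eps

    # Extra Gradient: extrapolation (look-ahead) step with scaled gradients
    x_half = x - lr * gx / dx
    y_half = y + lr * gy / dy

    # Update step uses gradients at the extrapolated point, same scaling
    gx_h, gy_h = grad_x(x_half, y_half), grad_y(x_half, y_half)
    x = x - lr * gx_h / dx
    y = y + lr * gy_h / dy

# Final iterate norms for the toy problem
print("||x||, ||y||:", np.linalg.norm(x), np.linalg.norm(y))
```

The same skeleton accommodates the other scalings studied in the paper: replacing the running squared-gradient estimate with a Hutchinson-type diagonal Hessian estimate gives AdaHessian/OASIS-style preconditioning, while adding a negative momentum term modifies the basic Extra Gradient update.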