Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations

Authors

  • Tatjana Chavdarova, Michael I. Jordan, Manolis Zampetakis

Keywords:

Variational inequality, convergence, high-resolution differential equations, saddle-point optimizers, continuous-time methods.

Abstract

Several widely used first-order saddle-point optimization methods yield, when derived naively, a continuous-time ordinary differential equation (ODE) identical to that of the Gradient Descent Ascent (GDA) method. However, the convergence properties of these methods are qualitatively different, even on simple bilinear games. Thus the ODE perspective, which has proved powerful in analyzing single-objective optimization methods, has not played a similar role in saddle-point optimization.
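The qualitative gap the abstract describes can be seen on the simplest bilinear game, min_x max_y xy, whose unique saddle point is the origin. Below is a minimal numerical sketch (not the paper's method; the step size and iteration count are illustrative choices) comparing GDA with the extragradient method, one of the classical first-order saddle-point optimizers: GDA's last iterate spirals away from the saddle point, while extragradient's converges, even though both methods share the same naive continuous-time limit.

```python
import numpy as np

# Bilinear game min_x max_y x*y; unique saddle point at (0, 0).
# Vector field for the dynamics: (dL/dx, -dL/dy) = (y, -x).

def gda_step(x, y, eta):
    # Gradient Descent Ascent: simultaneous gradient updates.
    return x - eta * y, y + eta * x

def eg_step(x, y, eta):
    # Extragradient: extrapolate first, then update using the
    # gradients evaluated at the extrapolated (midpoint) iterate.
    xm, ym = x - eta * y, y + eta * x
    return x - eta * ym, y + eta * xm

eta = 0.1
xg, yg = 1.0, 1.0  # GDA iterate
xe, ye = 1.0, 1.0  # extragradient iterate
for _ in range(200):
    xg, yg = gda_step(xg, yg, eta)
    xe, ye = eg_step(xe, ye, eta)

# GDA's iterate norm grows (divergence); extragradient's shrinks.
print("GDA norm:", np.hypot(xg, yg))
print("EG norm: ", np.hypot(xe, ye))
```

A short calculation explains the split: each GDA step multiplies the iterate norm by sqrt(1 + eta^2) > 1, while each extragradient step multiplies it by sqrt((1 - eta^2)^2 + eta^2) < 1 for small eta, so the discrete methods behave differently despite sharing one naive ODE.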

Published

2023-08-01

How to Cite

Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations. (2023). Minimax Theory and Its Applications, 8(2), 333–380. https://journalmta.com/index.php/jmta/article/view/158