Convergence Acceleration via Chebyshev Step: Plausible Interpretation of Deep-Unfolded Gradient Descent

  • TAKABE Satoshi
    Department of Mathematical and Computing Science, Tokyo Institute of Technology
  • WADAYAMA Tadashi
    Department of Computer Science, Nagoya Institute of Technology

Abstract

Deep unfolding is a promising deep-learning technique whose network architecture is based on expanding the recursive structure of existing iterative algorithms. Although deep unfolding realizes convergence acceleration, its theoretical aspects have not yet been fully revealed. This study presents a theoretical analysis of convergence acceleration in deep-unfolded gradient descent (DUGD), whose trainable parameters are step sizes. We propose a plausible interpretation of the learned step-size parameters in DUGD by introducing the principle of Chebyshev steps derived from Chebyshev polynomials. Using Chebyshev steps in gradient descent (GD) enables us to bound the spectral radius of the matrix governing the convergence speed of GD, leading to a tight upper bound on the convergence rate. Numerical results show that the Chebyshev steps explain the learned step-size parameters in DUGD well.
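
To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of gradient descent on a quadratic objective with Chebyshev steps: the step sizes are taken as the reciprocals of the roots of a Chebyshev polynomial mapped onto the interval spanned by the smallest and largest eigenvalues of the Hessian. The function names chebyshev_steps and gd_chebyshev and the toy problem setup are assumptions made for illustration.

```python
# Sketch: gradient descent with Chebyshev step sizes on a quadratic
# objective f(x) = 0.5 * x^T A x - b^T x, A symmetric positive definite.
# Illustrative only; not the authors' code.
import numpy as np

def chebyshev_steps(lam_min, lam_max, T):
    """Step sizes 1 / c_t, where c_t are Chebyshev roots mapped to [lam_min, lam_max]."""
    t = np.arange(T)
    roots = (lam_max + lam_min) / 2 + (lam_max - lam_min) / 2 * np.cos((2 * t + 1) * np.pi / (2 * T))
    return 1.0 / roots

def gd_chebyshev(A, b, x0, T):
    """Run T gradient-descent iterations using Chebyshev step sizes."""
    lam = np.linalg.eigvalsh(A)                 # eigenvalues in ascending order
    steps = chebyshev_steps(lam[0], lam[-1], T)
    x = x0.copy()
    for gamma in steps:
        x = x - gamma * (A @ x - b)             # gradient of the quadratic objective
    return x

# Toy usage example
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
A = M.T @ M + 0.1 * np.eye(10)                  # SPD Hessian
b = rng.standard_normal(10)
x_star = np.linalg.solve(A, b)
x_hat = gd_chebyshev(A, b, np.zeros(10), T=16)
print("error:", np.linalg.norm(x_hat - x_star))
```

Because the step sizes are spread according to Chebyshev roots rather than held constant, the product of the iteration matrices has a spectral radius that is (up to ordering effects) minimized over the eigenvalue interval, which is the mechanism the paper uses to bound the convergence rate.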
