A NEW NONLINEAR CONJUGATE GRADIENT METHOD FOR UNCONSTRAINED OPTIMIZATION


Abstract

Conjugate gradient methods are widely used for large-scale unconstrained optimization problems. Most conjugate gradient methods do not always generate a descent search direction, so the descent condition is usually assumed in analyses and implementations. Dai and Yuan (1999) proposed a conjugate gradient method that generates a descent search direction at every iteration and converges globally to the solution whenever the line search satisfies the Wolfe conditions. In this paper, we give a new conjugate gradient method based on the study of Dai and Yuan, and show that our method always produces a descent search direction and converges globally if the Wolfe conditions are satisfied. Moreover, our method incorporates second-order curvature information with higher precision by using the modified secant condition proposed by Zhang, Deng and Chen (1999) and Zhang and Xu (2001). Numerical results show that our method is very efficient on standard test problems, provided that a parameter included in the method is chosen well.
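
To make the setting concrete, the following is a minimal sketch of the Dai and Yuan (1999) conjugate gradient iteration that the paper builds on, where beta_k = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k)) and each step length is chosen by a Wolfe-condition line search. It illustrates only the baseline method described in the abstract, not the authors' new variant; the function name dai_yuan_cg, the tolerances, and the fallback step size are our own assumptions.

```python
import numpy as np
from scipy.optimize import line_search

def dai_yuan_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    """Dai-Yuan nonlinear CG: an illustrative sketch, not the paper's new method."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                              # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # SciPy's line_search enforces the (strong) Wolfe conditions.
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:               # line search failed: crude fallback step
            alpha = 1e-4
        x = x + alpha * d
        g_new = grad(x)
        y = g_new - g                   # gradient difference y_k
        # Dai-Yuan beta; under the Wolfe conditions d.y > 0, so this is safe.
        beta = g_new.dot(g_new) / d.dot(y)
        d = -g_new + beta * d           # new search direction
        g = g_new
    return x

if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    print(dai_yuan_cg(rosen, rosen_der, np.array([-1.2, 1.0])))  # near [1., 1.]
```

Running this on the Rosenbrock function drives the iterates toward the minimizer (1, 1). The abstract indicates that the authors' method goes further by building the modified secant condition of Zhang, Deng and Chen (1999) and Zhang and Xu (2001) into the update, so that the search direction carries second-order curvature information with higher precision.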

Journal

  • Journal of the Operations Research Society of Japan 48(4), 284-296, 2005
    The Operations Research Society of Japan

References: 9

Codes

  • NII Article ID (NAID)
    110002558257
  • NII NACSIS-CAT ID (NCID)
    AA00703935
  • Text Lang
    ENG
  • Article Type
    ART
  • ISSN
    0453-4514
  • NDL Article ID
    7744805
  • NDL Source Classification
ZM31 (Science and technology -- Mathematics) // ZD25 (Economics -- Business and management -- Business administration)
  • NDL Call No.
    Z53-M226
  • Data Source
CJP / NDL / NII-ELS / J-STAGE