Conjugate Gradient Methods in Multidimensional Minimization

Discover the efficiency and optimization strategies of conjugate gradient methods for multidimensional minimization: approximating a function near a point by the quadratic form of its Taylor series, why gradient information should reduce the work from order N² to order N line minimizations, the inefficiency of the steepest descent method, and the derivation of the Fletcher-Reeves and Polak-Ribiere algorithms from the conjugate gradient method for solving linear equations.

  • Conjugate Gradient
  • Optimization
  • Multidimensional Minimization
  • Efficiency
  • Linear Equations

Presentation Transcript


  1. CONJUGATE GRADIENT METHODS 10.8 Daniel Wall

  2. CONJUGATE GRADIENT A method for multidimensional minimization. Near a point $P$, any smooth function can be expanded in its Taylor series, which can be approximated by the quadratic form

     $f(\mathbf{x}) \approx c - \mathbf{b}\cdot\mathbf{x} + \tfrac{1}{2}\,\mathbf{x}\cdot A\cdot\mathbf{x}$

     where $c = f(P)$, $\mathbf{b} = -\nabla f\,\big|_P$, and $A$ is the Hessian matrix of second partial derivatives of $f$ at $P$.
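
To make the quadratic model concrete, here is a minimal sketch, assuming NumPy; the test function and all names are illustrative choices, not from the presentation. It builds $c$, $\mathbf{b}$, and $A$ for a simple function and checks the model against direct evaluation:

```python
import numpy as np

# Hypothetical test function f(x, y) = x^2 + 3y^2 + xy, expanded about P = 0
def f(x):
    return x[0]**2 + 3*x[1]**2 + x[0]*x[1]

P = np.zeros(2)
c = f(P)                          # c = f(P)
b = -np.array([2*P[0] + P[1],     # b = -grad f at P (zero at the origin)
               6*P[1] + P[0]])
A = np.array([[2.0, 1.0],         # A = Hessian of f (constant for this f)
              [1.0, 6.0]])

def quad_model(x):
    # f(x) ~= c - b.x + 0.5 x.A.x
    return c - b @ x + 0.5 * x @ A @ x

x = np.array([0.3, -0.2])
print(f(x), quad_model(x))        # agree exactly, since this f is quadratic
```

Because this test function is itself quadratic, the model is exact everywhere; for a general $f$ the approximation holds only near $P$.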

  3. IMPROVED EFFICIENCY The number of unknowns is equal to the number of free parameters in $A$ and $\mathbf{b}$: $\tfrac{1}{2}N(N+1)$, of order $N^2$ (for example, $N = 10$ gives 55 parameters). Each change in a parameter can move the minimum, so an equivalent information content is needed to find the actual minimum. Methods like those in 10.7 need of order $N^2$ line minimizations to gather it, but each evaluation of the gradient brings $N$ components of information, meaning ideally only $N$ function evaluations should be necessary.

  4. STEEPEST DESCENT An inefficient way to incorporate gradient information. The problem: if you line-minimize along the local gradient direction, the gradient at the new minimum is perpendicular to the direction just traversed, so the path can only proceed in right-angle turns, zigzagging toward the minimum (see the sketch below). We should instead proceed along a direction conjugate to the previous gradient, and ideally conjugate to all directions previously traversed.
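
The right-angle behavior is easy to demonstrate. Below is a minimal sketch, assuming NumPy, of steepest descent with exact line minimization on the quadratic $f(\mathbf{x}) = \tfrac{1}{2}\,\mathbf{x}\cdot A\cdot\mathbf{x}$; the matrix is an arbitrary illustrative choice. Each new search direction comes out perpendicular to the previous one:

```python
import numpy as np

A = np.array([[3.0, 0.0],           # illustrative ill-conditioned quadratic:
              [0.0, 50.0]])         # a long, narrow valley

x = np.array([10.0, 1.0])
prev_d = None
for i in range(5):
    g = A @ x                       # gradient of 0.5 x.A.x
    d = -g                          # steepest descent direction
    lam = (g @ g) / (d @ A @ d)     # exact line minimization along d
    if prev_d is not None:
        cos = (d @ prev_d) / (np.linalg.norm(d) * np.linalg.norm(prev_d))
        print(f"step {i}: cos(angle to previous direction) = {cos:.1e}")
    x = x + lam * d
    prev_d = d
```

With a valley this elongated, the iterates zigzag across it and converge slowly; removing the zigzag is precisely what conjugate directions accomplish.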

  5. DERIVATION OF THE METHOD Based on the conjugate gradient method for solving linear equations (2.7.6). Start with an arbitrary initial vector $g_0$ and let $h_0 = g_0$, then construct two sequences of vectors from the recurrence

     $g_{i+1} = g_i - \lambda_i A\cdot h_i, \qquad h_{i+1} = g_{i+1} + \gamma_i h_i, \qquad i = 0, 1, 2, \ldots$

     with the scalars

     $\lambda_i = \dfrac{g_i\cdot h_i}{h_i\cdot A\cdot h_i}, \qquad \gamma_i = \dfrac{g_{i+1}\cdot g_{i+1}}{g_i\cdot g_i}$

     The $g$'s are then mutually orthogonal and the $h$'s mutually conjugate: $g_i\cdot g_j = 0$ and $h_i\cdot A\cdot h_j = 0$ for $i \ne j$.
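
As a concrete rendering of the recurrence, here is a minimal sketch, assuming NumPy, of the linear conjugate gradient iteration applied to a small symmetric positive-definite system $A\mathbf{x} = \mathbf{b}$; the function and variable names are illustrative:

```python
import numpy as np

def linear_cg(A, b, tol=1e-10):
    x = np.zeros(len(b))
    g = b - A @ x                      # residual, playing the role of g_0
    h = g.copy()                       # h_0 = g_0
    for _ in range(len(b)):
        if np.linalg.norm(g) < tol:
            break
        lam = (g @ h) / (h @ A @ h)    # lambda_i
        x = x + lam * h
        g_new = g - lam * (A @ h)      # g_{i+1} = g_i - lambda_i A.h_i
        gamma = (g_new @ g_new) / (g @ g)  # gamma_i
        h = g_new + gamma * h          # h_{i+1} = g_{i+1} + gamma_i h_i
        g = g_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(linear_cg(A, b))                 # -> [1/11, 7/11], in at most 2 steps
```

In exact arithmetic the iteration terminates in at most $N$ steps, one per conjugate direction.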

  6. DERIVATION OF THE METHOD The recurrence for $g_{i+1}$ requires the matrix $A$, which we do not have. Instead, from a point $P_i$ we can line-minimize $f$ along the direction $h_i$ to arrive at $P_{i+1}$, and then set $g_{i+1} = -\nabla f(P_{i+1})$; for a quadratic form this produces the same vector the recurrence would. That procedure is the Fletcher-Reeves conjugate gradient algorithm. The Polak-Ribiere algorithm makes one optimization, replacing $\gamma_i$ with

     $\gamma_i = \dfrac{(g_{i+1} - g_i)\cdot g_{i+1}}{g_i\cdot g_i}$

     which tends to perform better when the function is not exactly quadratic.
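
Here is a minimal sketch of the resulting minimization algorithm, assuming NumPy and SciPy's minimize_scalar for the line minimization; the function names, the Rosenbrock test problem, and the iteration cap are illustrative choices, not from the presentation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def conj_grad_minimize(f, grad, p, variant="PR", iters=50, tol=1e-8):
    g = -grad(p)                       # g_0 = -grad f(P_0)
    h = g.copy()                       # h_0 = g_0
    for _ in range(iters):
        lam = minimize_scalar(lambda t: f(p + t * h)).x  # line-minimize along h
        p = p + lam * h                # P_{i+1}
        g_new = -grad(p)               # g_{i+1} = -grad f(P_{i+1}); no A needed
        if np.linalg.norm(g_new) < tol:
            break
        if variant == "FR":            # Fletcher-Reeves
            gamma = (g_new @ g_new) / (g @ g)
        else:                          # Polak-Ribiere
            gamma = ((g_new - g) @ g_new) / (g @ g)
        h = g_new + gamma * h
        g = g_new
    return p

# Rosenbrock function as an illustrative test case
f = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
grad = lambda p: np.array([-2 * (1 - p[0]) - 400 * p[0] * (p[1] - p[0]**2),
                           200 * (p[1] - p[0]**2)])
print(conj_grad_minimize(f, grad, np.array([-1.2, 1.0])))  # should approach [1, 1]
```

The only code difference between the two variants is the formula for $\gamma_i$; the Polak-Ribiere numerator $(g_{i+1} - g_i)\cdot g_{i+1}$ tends to reset the direction toward steepest descent when progress stalls.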
