Advanced Methods in Optimization: ADMM, Dual Ascent, and More


Explore advanced optimization techniques such as ADMM, Dual Ascent, Augmented Lagrangian, and more to tackle complex convex optimization problems with improved convergence and efficiency.

  • Optimization
  • ADMM
  • Dual Ascent
  • Augmented Lagrangian
  • Convex




Presentation Transcript


  1. ADMM AND DSO

  2. Prerequisites for ADMM
     • Dual ascent
     • Dual decomposition
     • Augmented Lagrangian

  3. Dual Ascent
     Constrained convex optimization of the form: minimize f(x) subject to Ax = b
     Lagrangian: L(x, y) = f(x) + y^T (Ax - b)
     Dual function: g(y) = inf_x L(x, y)
     Updates: x^{k+1} = argmin_x L(x, y^k),  y^{k+1} = y^k + α^k (A x^{k+1} - b)
     Issues: slow convergence, and no distributedness unless f happens to be separable.
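To make the updates concrete, here is a minimal numerical sketch of dual ascent on a toy equality-constrained quadratic program; the problem data (P, q, A, b) and the step size alpha are illustrative assumptions, not taken from the slides.

    # Dual ascent sketch for: minimize (1/2) x^T P x + q^T x  subject to  A x = b.
    # Toy data and step size are illustrative assumptions.
    import numpy as np

    np.random.seed(0)
    n, p = 10, 3
    P = np.eye(n)                      # strongly convex quadratic objective
    q = np.random.randn(n)
    A = np.random.randn(p, n)
    b = np.random.randn(p)

    y = np.zeros(p)                    # dual variable
    alpha = 0.05                       # dual step size
    for k in range(500):
        # x-update: argmin_x L(x, y) has a closed form for a quadratic f
        x = -np.linalg.solve(P, q + A.T @ y)
        # y-update: gradient ascent on the dual, since grad g(y) = A x - b
        y = y + alpha * (A @ x - b)

    print("primal residual:", np.linalg.norm(A @ x - b))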

  4. Dual decomposition
     Same constrained convex optimization problem, but with a separable objective:
     f(x) = Σ_i f_i(x_i), with A partitioned as A = [A_1 ... A_N] so that Ax = Σ_i A_i x_i
     Lagrangian: L(x, y) = Σ_i ( f_i(x_i) + y^T A_i x_i ) - y^T b, which splits across the blocks x_i
     The x_i-updates are independent, so parallelization is possible.
     Issue: still slow convergence.

  5. The x_i-updates of iteration t+1 are sent to a central hub, which computes
     y^{t+1} = y^t + α^t (Σ_i A_i x_i^{t+1} - b) and then broadcasts y^{t+1} back to the worker machines.
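A minimal sketch of dual decomposition with this hub pattern; a list comprehension stands in for the parallel worker machines, and the toy data and step size are assumptions.

    # Dual decomposition sketch:
    #   minimize sum_i (1/2)||x_i - c_i||^2  subject to  sum_i A_i x_i = b.
    import numpy as np

    np.random.seed(1)
    N, ni, p = 4, 5, 3                 # N blocks of size ni, p coupling constraints
    A = [np.random.randn(p, ni) for _ in range(N)]
    c = [np.random.randn(ni) for _ in range(N)]
    b = np.random.randn(p)

    y = np.zeros(p)
    alpha = 0.02
    for t in range(1000):
        # parallel step (one per machine): x_i = argmin (1/2)||x_i - c_i||^2 + y^T A_i x_i
        x = [c_i - A_i.T @ y for A_i, c_i in zip(A, c)]
        # hub step: aggregate the residual, take one dual ascent step, broadcast y
        residual = sum(A_i @ x_i for A_i, x_i in zip(A, x)) - b
        y = y + alpha * residual

    print("constraint violation:", np.linalg.norm(residual))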

  6. Augmented Lagrangian method
     Constrained convex optimization: minimize f(x) subject to Ax = b
     Updated objective function: f(x) + (ρ/2) ||Ax - b||_2^2 (same minimizers, since the penalty vanishes on the feasible set)
     So the Lagrangian would look like: L_ρ(x, y) = f(x) + y^T (Ax - b) + (ρ/2) ||Ax - b||_2^2
     Updates: x^{k+1} = argmin_x L_ρ(x, y^k),  y^{k+1} = y^k + ρ (A x^{k+1} - b)
     Issue: the quadratic penalty couples the components of x, so we lose decomposability, but convergence improves substantially.
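A sketch of the method of multipliers on a toy problem; note that the dual step size is fixed to the penalty parameter ρ. The data is an illustrative assumption.

    # Method of multipliers sketch for: minimize (1/2)||x - c||^2 subject to A x = b.
    import numpy as np

    np.random.seed(2)
    n, p = 10, 3
    A = np.random.randn(p, n)
    b = np.random.randn(p)
    c = np.random.randn(n)

    rho = 1.0
    y = np.zeros(p)
    for k in range(50):
        # x-update: argmin_x (1/2)||x - c||^2 + y^T(Ax - b) + (rho/2)||Ax - b||^2
        x = np.linalg.solve(np.eye(n) + rho * A.T @ A,
                            c - A.T @ y + rho * A.T @ b)
        # y-update: the step size equals the penalty parameter rho
        y = y + rho * (A @ x - b)

    print("constraint violation:", np.linalg.norm(A @ x - b))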

  7. ADMM (Alternating Direction Method of Multipliers)
     • Standard ADMM
     • Scaled ADMM

  8. Standard ADMM
     Constrained convex optimization: minimize f(x) + g(z) subject to Ax + Bz = c
     Augmented Lagrangian: L_ρ(x, z, y) = f(x) + g(z) + y^T (Ax + Bz - c) + (ρ/2) ||Ax + Bz - c||_2^2
     The AL (method-of-multipliers) updates would be:
     (x^{k+1}, z^{k+1}) = argmin_{x,z} L_ρ(x, z, y^k),  y^{k+1} = y^k + ρ (A x^{k+1} + B z^{k+1} - c)

  9. Standard ADMM
     A blend of dual decomposition and the augmented Lagrangian method (AL): instead of minimizing over x and z jointly, ADMM alternates between them.
     ADMM updates:
     x^{k+1} = argmin_x L_ρ(x, z^k, y^k)
     z^{k+1} = argmin_z L_ρ(x^{k+1}, z, y^k)
     y^{k+1} = y^k + ρ (A x^{k+1} + B z^{k+1} - c)
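A sketch of these unscaled ADMM updates on a small nonnegative least-squares problem; the choice f(x) = (1/2)||Ax - b||^2, g = indicator of z >= 0 with the constraint x - z = 0 is my illustrative splitting, not one from the slides.

    # Standard (unscaled) ADMM:
    #   minimize (1/2)||Ax - b||^2 + g(z)  subject to  x - z = 0,
    # where g is the indicator of the nonnegative orthant.
    import numpy as np

    np.random.seed(3)
    m, n = 20, 8
    A = np.random.randn(m, n)
    b = np.random.randn(m)

    rho = 1.0
    x = np.zeros(n); z = np.zeros(n); y = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for k in range(200):
        # x-update: argmin_x (1/2)||Ax - b||^2 + y^T x + (rho/2)||x - z||^2
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb - y + rho * z)
        # z-update: proximal step for the indicator g = projection onto z >= 0
        z = np.maximum(0.0, x + y / rho)
        # y-update: dual ascent with step rho on the residual x - z
        y = y + rho * (x - z)

    print("residual ||x - z||:", np.linalg.norm(x - z))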

  10. Scaled ADMM
      Scale the dual variable: u = y / ρ. The standard ADMM updates then become:
      x^{k+1} = argmin_x ( f(x) + (ρ/2) ||Ax + Bz^k - c + u^k||_2^2 )
      z^{k+1} = argmin_z ( g(z) + (ρ/2) ||A x^{k+1} + Bz - c + u^k||_2^2 )
      u^{k+1} = u^k + A x^{k+1} + B z^{k+1} - c
      The formulas are shorter in this version, and it is the one most widely used.
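The same nonnegative least-squares splitting as above, now in scaled form, to show how u = y/ρ folds the linear dual term into the quadratic penalties.

    # Scaled-form ADMM for the same splitting, with u = y / rho.
    import numpy as np

    np.random.seed(3)
    m, n = 20, 8
    A = np.random.randn(m, n)
    b = np.random.randn(m)

    rho = 1.0
    z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for k in range(200):
        # x-update: argmin_x (1/2)||Ax - b||^2 + (rho/2)||x - z + u||^2
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        # z-update: argmin_z g(z) + (rho/2)||x - z + u||^2  ->  project x + u onto z >= 0
        z = np.maximum(0.0, x + u)
        # u-update: running sum of residuals
        u = u + x - z

    print("residual ||x - z||:", np.linalg.norm(x - z))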

  11. Least-squares problem
      Consider the method of least squares, where we minimize the sum of squared errors for regression: minimize_x (1/2) ||Ax - b||_2^2
      For standard ADMM to work, we reformulate the problem in splitting form, e.g.:
      minimize (1/2) ||z - b||_2^2 subject to Ax - z = 0 (so f(x) = 0 and g(z) = (1/2) ||z - b||_2^2)
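A sketch of ADMM under the splitting suggested above; since the slide's exact reformulation is not shown, this particular choice of f and g is an assumption. The result is checked against a direct least-squares solve.

    # ADMM for plain least squares under the assumed splitting:
    #   minimize (1/2)||z - b||^2  subject to  A x - z = 0.
    import numpy as np

    np.random.seed(4)
    m, n = 20, 8
    A = np.random.randn(m, n)
    b = np.random.randn(m)

    rho = 1.0
    z = np.zeros(m); u = np.zeros(m)
    for k in range(300):
        # x-update: argmin_x (rho/2)||Ax - z + u||^2 (a small least-squares solve)
        x, *_ = np.linalg.lstsq(A, z - u, rcond=None)
        # z-update: argmin_z (1/2)||z - b||^2 + (rho/2)||Ax - z + u||^2
        z = (b + rho * (A @ x + u)) / (1.0 + rho)
        # u-update: scaled dual variable accumulates the residual
        u = u + A @ x - z

    # compare against the direct solution of the same least-squares problem
    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("distance to direct solution:", np.linalg.norm(x - x_star))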

  12. DSO (Distributed Stochastic Optimization)

  13. Regularized Risk Minimization
      RRM: minimize_w λ Ω(w) + (1/m) Σ_{i=1}^m ℓ(⟨w, x_i⟩, y_i)
      Introducing constraints: minimize_{w,u} λ Ω(w) + (1/m) Σ_i ℓ(u_i, y_i) subject to u_i = ⟨w, x_i⟩ for all i

  14. Lagrangian
      Lagrangian: L(w, u, α) = λ Ω(w) + (1/m) Σ_i ℓ(u_i, y_i) + (1/m) Σ_i α_i (⟨w, x_i⟩ - u_i)
      Fenchel-Legendre conjugate: ℓ*(α, y) = sup_u ( α u - ℓ(u, y) ), so that ℓ(u, y) = sup_α ( α u - ℓ*(α, y) )
      Minimizing over u, the Lagrangian can be rewritten as the saddle-point problem:
      min_w max_α λ Ω(w) + (1/m) Σ_i ( α_i ⟨w, x_i⟩ - ℓ*(α_i, y_i) )
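As a concrete instance of the conjugate (my example, not from the slide): for the hinge loss ℓ(u) = max(0, 1 - u),

    ℓ*(α) = sup_u ( α u - ℓ(u) ) = α  if -1 <= α <= 0,  and +∞ otherwise,

so the loss is recovered as ℓ(u) = max_{-1 <= α <= 0} ( α u - α ) = max_{-1 <= α <= 0} α (u - 1). The dual variable living in the bounded interval [-1, 0] is what makes the ascent steps in the saddle-point formulation easy to project.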

  15. DSO
      Rewriting the previous equation, but summing only over the non-zero features of each x_i:
      min_w max_α λ Ω(w) + (1/m) Σ_i ( α_i Σ_{j: x_{ij} ≠ 0} w_j x_{ij} - ℓ*(α_i, y_i) )
      Now apply stochastic gradient descent for w and stochastic gradient ascent for α.
      Note that we can parallelize this stochastic optimization algorithm: updates touching disjoint sets of coordinates w_j and dual variables α_i can run on different machines simultaneously.
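A serial sketch of these saddle-point updates (descent in w, ascent in the duals) for an L2-regularized hinge loss, using the conjugate parametrization above with β_i = -α_i in [0, 1]. The real algorithm distributes (example, feature) blocks across machines; this toy version is sequential, and the data, step size, and epoch count are assumptions.

    # DSO-style stochastic saddle-point updates for a regularized hinge loss (SVM).
    import numpy as np

    np.random.seed(5)
    m, d = 200, 10
    X = np.random.randn(m, d)
    w_true = np.random.randn(d)
    y = np.sign(X @ w_true)              # synthetic labels

    lam, eta = 0.1, 0.05
    w = np.zeros(d)
    beta = np.zeros(m)                   # one dual variable per example, in [0, 1]
    for epoch in range(30):
        for i in np.random.permutation(m):
            margin = y[i] * (X[i] @ w)
            # ascent step on the dual variable of example i, clipped to its domain
            beta[i] = np.clip(beta[i] + eta * (1.0 - margin), 0.0, 1.0)
            # descent step on w using only the sampled example's contribution
            w -= eta * (lam * w - beta[i] * y[i] * X[i])

    print("training accuracy:", np.mean(np.sign(X @ w) == y))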

  16. Working of DSO

  17. Thank you
