Iterative Methods in Circuit Simulation and Verification

Explore iterative methods such as Jacobi, Gauss-Seidel, and Multigrid in the context of computer-aided circuit simulation. Learn about matrix condition, norm, scaling, and the Gershgorin Circle Theorem relevant to circuit analysis. Gain insights into stationary and non-stationary iterative techniques for efficient circuit verification.

  • Iterative methods
  • Circuit simulation
  • Verification
  • Matrix computations
  • Computer-aided


Presentation Transcript


  1. CSE 245: Computer Aided Circuit Simulation and Verification. Matrix Computations: Iterative Methods I. Chung-Kuan Cheng.

  2. Outline: Introduction, Direct Methods, Iterative Methods, Formulations, Projection Methods, Krylov Space Methods, Preconditioned Iterations, Multigrid Methods, Domain Decomposition Methods.

  3. Introduction. Direct methods (LU decomposition): general and robust, but can become complicated if N >= 1M. Iterative methods (Jacobi, Gauss-Seidel, Multigrid, Domain Decomposition, Preconditioning, Conjugate Gradient, GMRES): an excellent choice for SPD matrices, but remain an art for arbitrary matrices.

  4. Introduction: Matrix Condition. Given $Ax=b$, the relative error $\|\Delta x\|/\|x\|$ of the solution $x$ is proportional to the relative errors of $A$ and $b$, with the matrix condition number as the factor of proportionality. Derivation: with errors, we have $(A+\Delta A)(x+\Delta x)=b+\Delta b$. Thus we can derive $\Delta x=(A+\Delta A)^{-1}(\Delta b-\Delta A\,x)$, so

$$\frac{\|\Delta x\|}{\|x\|} \le \|(A+\Delta A)^{-1}\|\left(\frac{\|\Delta b\|}{\|x\|}+\frac{\|\Delta A\,x\|}{\|x\|}\right) \le \|A\|\,\|(A+\Delta A)^{-1}\|\left(\frac{\|\Delta b\|}{\|b\|}+\frac{\|\Delta A\|}{\|A\|}\right) \approx \|A\|\,\|A^{-1}\|\left(\frac{\|\Delta b\|}{\|b\|}+\frac{\|\Delta A\|}{\|A\|}\right)$$

We define the matrix condition number as $K(A)=\|A\|\,\|A^{-1}\|$, i.e. $\|\Delta x\|/\|x\| \lesssim K(A)\left(\|\Delta b\|/\|b\|+\|\Delta A\|/\|A\|\right)$.
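
As a quick numerical illustration of this bound, here is a minimal NumPy sketch; the 2x2 matrix, right-hand side, and perturbation below are arbitrary choices, not taken from the slides:

```python
import numpy as np

# Arbitrary small test system, chosen only for illustration.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# Condition number K(A) = ||A|| * ||A^-1||, using the 2-norm.
K = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# Perturb b slightly and compare the relative change in x
# against the bound K(A) * ||db|| / ||b||.
db = 1e-6 * np.array([1.0, -1.0])
x_pert = np.linalg.solve(A, b + db)

rel_err = np.linalg.norm(x_pert - x) / np.linalg.norm(x)
bound = K * np.linalg.norm(db) / np.linalg.norm(b)
print(f"K(A) = {K:.3f}, relative error = {rel_err:.3e}, bound = {bound:.3e}")
```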

  5. Introduction: Matrix Norm

  6. Introduction: Scaling

  7. Introduction: Gershgorin Circle Theorem

  8. Gershgorin Circle Theorem: Example

$$A=\begin{pmatrix} 2 & 0.1 & 0.2 & 0.3\\ 0.1 & 3 & 0 & 0\\ 0 & 0.3 & 4 & 0.1\\ 0 & 0 & 0.2 & 5 \end{pmatrix}$$

The Gershgorin discs are centered at the diagonal entries 2, 3, 4, 5 with radii 0.6, 0.1, 0.4, 0.2 (the row sums of the off-diagonal magnitudes), and every eigenvalue of $A$ lies in their union.
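
The theorem behind this example says that every eigenvalue of $A$ lies in at least one disc centered at a diagonal entry $a_{ii}$ with radius $R_i = \sum_{j \ne i}|a_{ij}|$. A minimal NumPy check on the matrix above (the script is illustrative, not part of the slides):

```python
import numpy as np

A = np.array([[2.0, 0.1, 0.2, 0.3],
              [0.1, 3.0, 0.0, 0.0],
              [0.0, 0.3, 4.0, 0.1],
              [0.0, 0.0, 0.2, 5.0]])

# Gershgorin discs: center a_ii, radius = sum of |off-diagonal| entries in row i.
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)
for c, r in zip(centers, radii):
    print(f"disc: center {c:.1f}, radius {r:.1f}")

# Every eigenvalue must fall inside the union of the discs.
print("eigenvalues:", np.linalg.eigvals(A))
```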

  9. Iterative Methods. Stationary: $x^{(k+1)}=Gx^{(k)}+c$, where $G$ and $c$ do not depend on the iteration count $k$. Non-stationary: $x^{(k+1)}=x^{(k)}+a^{(k)}p^{(k)}$, where the computation involves information (the step size $a^{(k)}$ and the direction $p^{(k)}$) that changes at each iteration.

  10. Stationary: Jacobi Method. In the $i$-th equation, solve for the value of $x_i$ while assuming the other entries of $x$ remain fixed:

$$x_i^{(k)} = \frac{b_i - \sum_{j \ne i} m_{ij}\,x_j^{(k-1)}}{m_{ii}}, \qquad i = 1,\dots,N$$

In matrix terms the method becomes

$$x^{(k)} = D^{-1}(L+U)\,x^{(k-1)} + D^{-1}b$$

where $D$, $-L$, and $-U$ represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of $M$, with $M = D - L - U$.
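
A minimal vectorized Jacobi sketch in NumPy, following the update above (the diagonally dominant test matrix, right-hand side, and tolerance are illustrative choices, not from the lecture):

```python
import numpy as np

def jacobi(M, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_i^(k) = (b_i - sum_{j != i} m_ij x_j^(k-1)) / m_ii."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(M)                # diagonal entries of M
    R = M - np.diagflat(D)        # off-diagonal part, i.e. -(L + U)
    for k in range(max_iter):
        x_new = (b - R @ x) / D   # every entry is updated from the previous iterate
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Diagonally dominant test system (illustrative).
M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iters = jacobi(M, b)
print(x, iters, np.allclose(M @ x, b))
```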

  11. Stationary: Gauss-Seidel. Like Jacobi, but now assume that previously computed results are used as soon as they are available:

$$x_i^{(k)} = \frac{b_i - \sum_{j<i} m_{ij}\,x_j^{(k)} - \sum_{j>i} m_{ij}\,x_j^{(k-1)}}{m_{ii}}, \qquad i = 1,\dots,N$$

In matrix terms the method becomes

$$x^{(k)} = (D-L)^{-1}\left(U\,x^{(k-1)} + b\right)$$

where $D$, $-L$, and $-U$ represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of $M$, with $M = D - L - U$.
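
A corresponding Gauss-Seidel sketch, where each newly computed component is used immediately within the same sweep (again an illustrative implementation on an arbitrary test system):

```python
import numpy as np

def gauss_seidel(M, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: entries x_j^(k) with j < i are used as soon as they exist."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # j < i uses updated values, j > i still uses the previous iterate.
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / M[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(M, b))   # typically fewer sweeps than Jacobi on this system
```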

  12. Stationary: Successive Overrelaxation (SOR). Devised by extrapolation applied to Gauss-Seidel in the form of a weighted average: with the Gauss-Seidel update

$$\bar{x}_i^{(k)} = \frac{b_i - \sum_{j<i} m_{ij}\,x_j^{(k)} - \sum_{j>i} m_{ij}\,x_j^{(k-1)}}{m_{ii}},$$

the SOR iterate is

$$x_i^{(k)} = w\,\bar{x}_i^{(k)} + (1-w)\,x_i^{(k-1)}$$

In matrix terms the method becomes

$$(D - wL)\,x^{(k)} = \left(wU + (1-w)D\right)x^{(k-1)} + w\,b$$

where $D$, $-L$, and $-U$ represent the diagonal, strictly lower-triangular, and strictly upper-triangular parts of $M$, with $M = D - L - U$.
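
An SOR sketch built on the same Gauss-Seidel sweep, applying the weighted average $x_i^{(k)} = w\,\bar{x}_i^{(k)} + (1-w)\,x_i^{(k-1)}$ (the value of $w$ and the test system are illustrative):

```python
import numpy as np

def sor(M, b, w=1.5, x0=None, tol=1e-10, max_iter=500):
    """SOR: relax each Gauss-Seidel update by the factor w (0 < w < 2)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x[i + 1:]
            gs_value = (b[i] - s) / M[i, i]            # plain Gauss-Seidel value
            x[i] = w * gs_value + (1.0 - w) * x_old[i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
for w in (1.0, 1.1, 1.5, 1.9):
    print(w, sor(M, b, w=w)[1], "sweeps")
```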

  13. SOR. Choose $w$ to accelerate the convergence of

$$x_i^{(k)} = w\,\bar{x}_i^{(k)} + (1-w)\,x_i^{(k-1)}$$

$w = 1$: Jacobi / Gauss-Seidel; $1 < w < 2$: over-relaxation; $w < 1$: under-relaxation.

  14. Convergence of Stationary Methods. Linear equation: $Mx=b$. A sufficient condition for convergence of the solution (Gauss-Seidel, Jacobi) is that the matrix $M$ is diagonally dominant: $|m_{ii}| \ge \sum_{j \ne i} |m_{ij}|$. If $M$ is symmetric positive definite, SOR converges for any $w$ with $0 < w < 2$. A necessary and sufficient condition for convergence is that the magnitude of the largest eigenvalue of the iteration matrix $G$ is smaller than 1. Jacobi: $G = D^{-1}(L+U)$; Gauss-Seidel: $G = (D-L)^{-1}U$; SOR: $G = (D-wL)^{-1}\left(wU + (1-w)D\right)$.
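
The spectral-radius criterion can be checked numerically: build each iteration matrix $G$ from the splitting $M = D - L - U$ and verify that its largest eigenvalue magnitude is below 1. A small sketch on the same illustrative matrix used above:

```python
import numpy as np

M = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
D = np.diag(np.diag(M))
L = -np.tril(M, -1)   # M = D - L - U, with L and U strictly triangular
U = -np.triu(M, 1)
w = 1.5

G_jacobi = np.linalg.inv(D) @ (L + U)
G_gs     = np.linalg.inv(D - L) @ U
G_sor    = np.linalg.inv(D - w * L) @ (w * U + (1.0 - w) * D)

for name, G in (("Jacobi", G_jacobi), ("Gauss-Seidel", G_gs), ("SOR", G_sor)):
    rho = max(abs(np.linalg.eigvals(G)))
    print(f"{name}: spectral radius = {rho:.4f}  (< 1 means convergence)")
```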

  15. Convergence of Gauss-Seidel. The eigenvalues of $G=(D-L)^{-1}L^T$ lie inside the unit circle. Proof: $G_1 = D^{1/2}GD^{-1/2} = (I-L_1)^{-1}L_1^T$ with $L_1 = D^{-1/2}LD^{-1/2}$. Let $G_1x = rx$; then $L_1^Tx = r(I-L_1)x$, so $x^*L_1^Tx = r(1 - x^*L_1x)$. Writing $y = x^*L_1x$ (for $\|x\|=1$), this gives $\bar{y} = r(1-y)$, i.e. $r = \bar{y}/(1-y)$, and $|r| \le 1$ iff $\mathrm{Re}(y) \le 1/2$. Since $A = D - L - L^T$ is positive definite, $D^{-1/2}AD^{-1/2} = I - L_1 - L_1^T$ is also positive definite, so $1 - 2\,\mathrm{Re}(y) > 0$, i.e. $\mathrm{Re}(y) < 1/2$.
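
A quick numerical sanity check of this result: for a random symmetric positive-definite $A = D - L - L^T$, the eigenvalues of $(D-L)^{-1}L^T$ should all have magnitude below 1 (illustrative script, not part of the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5.0 * np.eye(5)        # symmetric positive definite by construction

D = np.diag(np.diag(A))
L = -np.tril(A, -1)                  # A = D - L - L^T for symmetric A
G = np.linalg.inv(D - L) @ L.T       # Gauss-Seidel iteration matrix

print("max |eigenvalue| of G:", max(abs(np.linalg.eigvals(G))))   # < 1
```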

  16. Linear Equation: an Optimization Problem. Quadratic function of vector $x$: $f(x) = \tfrac{1}{2}x^TAx - b^Tx + c$. Matrix $A$ is positive definite if $x^TAx > 0$ for any nonzero vector $x$. If $A$ is symmetric positive definite, $f(x)$ is minimized by the solution of $Ax = b$.

  17. Linear Equation: an Optimization Problem. Quadratic function $f(x) = \tfrac{1}{2}x^TAx - b^Tx + c$. Derivative: $f'(x) = \tfrac{1}{2}A^Tx + \tfrac{1}{2}Ax - b$. If $A$ is symmetric, $f'(x) = Ax - b$. If $A$ is positive definite, $f(x)$ is minimized by setting $f'(x)$ to 0, which gives $Ax = b$.
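
A small numerical check that the minimizer of $f$ coincides with the solution of $Ax=b$ when $A$ is symmetric positive definite (this sketch reuses the 2x2 system of the steepest-descent example later in the deck):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
c = 0.0

def f(x):
    # Quadratic form f(x) = 1/2 x^T A x - b^T x + c
    return 0.5 * x @ A @ x - b @ x + c

x_star = np.linalg.solve(A, b)           # solution of Ax = b
grad = A @ x_star - b                    # f'(x) = Ax - b for symmetric A
print("gradient at the solution:", grad) # ~ [0, 0]

# f is larger at any perturbed point around the solution.
for d in (np.array([0.1, 0.0]), np.array([-0.2, 0.3])):
    print(f(x_star + d) > f(x_star))     # True
```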

  18. For a symmetric positive-definite matrix A [figure].

  19. Gradient of the Quadratic Form. The gradient $f'(x)$ points in the direction of steepest increase of $f(x)$.

  20. Symmetric Positive-Definite Matrix A. If $A$ is symmetric positive definite, let $x = A^{-1}b$ be the solution point and $p$ an arbitrary point. Then

$$f(p) = f(x) + \tfrac{1}{2}(p-x)^TA(p-x)$$

Since $\tfrac{1}{2}(p-x)^TA(p-x) > 0$ for $p \ne x$, we have $f(p) > f(x)$ if $p \ne x$.

  21. If A is not positive definite [figure panels]: (a) positive-definite matrix, (b) negative-definite matrix, (c) singular matrix, (d) positive-indefinite matrix.

  22. Non-Stationary Iterative Methods. Start from an initial guess $x^{(0)}$ and adjust it until close enough to the exact solution:

$$x^{(i+1)} = x^{(i)} + a^{(i)}p^{(i)}, \qquad i = 0, 1, 2, 3, \dots$$

where $p^{(i)}$ is the adjustment direction and $a^{(i)}$ is the step size. How do we choose the direction and the step size?

  23. Steepest Descent Method (1). Choose the direction in which $f$ decreases most quickly: the direction opposite to $f'(x^{(i)})$,

$$-f'(x^{(i)}) = b - Ax^{(i)} = r^{(i)}$$

which is also the direction of the residue. The update is $x^{(i+1)} = x^{(i)} + a^{(i)}r^{(i)}$.

  24. Steepest Descent Method (2). How to choose the step size $a^{(i)}$? Line search: $a^{(i)}$ should minimize $f$ along the direction $r^{(i)}$, which means

$$\frac{d}{da}f(x^{(i+1)}) = f'(x^{(i+1)})^T\frac{d\,x^{(i+1)}}{da} = f'(x^{(i+1)})^Tr^{(i)} = 0$$

so $r^{(i+1)T}r^{(i)} = 0$ (successive residues are orthogonal). Expanding,

$$\left(b - A(x^{(i)} + a^{(i)}r^{(i)})\right)^Tr^{(i)} = 0 \;\Rightarrow\; (b - Ax^{(i)})^Tr^{(i)} = a^{(i)}\,(Ar^{(i)})^Tr^{(i)} \;\Rightarrow\; a^{(i)} = \frac{r^{(i)T}r^{(i)}}{r^{(i)T}Ar^{(i)}}$$

  25. Steepest Descent Algorithm. Given $x^{(0)}$, iterate until the residue is smaller than the error tolerance:

$$r^{(i)} = b - Ax^{(i)}, \qquad a^{(i)} = \frac{r^{(i)T}r^{(i)}}{r^{(i)T}Ar^{(i)}}, \qquad x^{(i+1)} = x^{(i)} + a^{(i)}r^{(i)}$$
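
A minimal NumPy implementation of this loop, applied to the 2x2 example on the next slide (starting point, tolerance, and iteration cap are illustrative):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Steepest descent for a symmetric positive-definite A."""
    x = np.asarray(x0, dtype=float)
    for i in range(max_iter):
        r = b - A @ x                  # residue; also the descent direction
        if np.linalg.norm(r) < tol:
            return x, i
        a = (r @ r) / (r @ (A @ r))    # exact line-search step size
        x = x + a * r
    return x, max_iter

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
x, iters = steepest_descent(A, b, np.array([-2.0, -2.0]))
print(x, iters)    # converges to the solution (2, -2)
```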

  26. Steepest Descent Method: Example

$$\begin{pmatrix} 3 & 2\\ 2 & 6 \end{pmatrix}\begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} 2\\ -8 \end{pmatrix}$$

(a) Starting at $(-2,-2)$, take the direction of steepest descent of $f$. (b) Find the point on the intersection of these two surfaces that minimizes $f$. (c) Intersection of surfaces. (d) The gradient at the bottommost point is orthogonal to the gradient of the previous step.

  27. Iterations of the Steepest Descent Method [figure].

  28. Convergence of Steepest Descent-1. Let $e = x^* - x$ be the error, so $f(x) = f(x^*) + \tfrac{1}{2}e^TAe$. Note that $Ae = b - Ax = r$. Expand the error in the eigenvectors of $A$: $e^{(i)} = \sum_{j=1}^{n}\xi_j v_j$, with eigenvalues $Av_j = \lambda_j v_j$, $j = 1, 2, \dots, n$. Energy norm: $\|e\|_A = (e^TAe)^{1/2}$.

  29. Convergence of Steepest Descent-2.

$$\|e^{(i+1)}\|_A^2 = e^{(i+1)T}Ae^{(i+1)} = \left(e^{(i)} - a^{(i)}r^{(i)}\right)^TA\left(e^{(i)} - a^{(i)}r^{(i)}\right) = \|e^{(i)}\|_A^2 - 2a^{(i)}\,r^{(i)T}Ae^{(i)} + (a^{(i)})^2\,r^{(i)T}Ar^{(i)}$$

Using $Ae^{(i)} = r^{(i)}$ and $a^{(i)} = \dfrac{r^{(i)T}r^{(i)}}{r^{(i)T}Ar^{(i)}}$:

$$\|e^{(i+1)}\|_A^2 = \|e^{(i)}\|_A^2 - \frac{(r^{(i)T}r^{(i)})^2}{r^{(i)T}Ar^{(i)}} = \|e^{(i)}\|_A^2\left(1 - \frac{(r^{(i)T}r^{(i)})^2}{(r^{(i)T}Ar^{(i)})(e^{(i)T}Ae^{(i)})}\right) = \|e^{(i)}\|_A^2\,\omega^2$$

With $r^{(i)} = Ae^{(i)} = \sum_j \xi_j\lambda_j v_j$,

$$\omega^2 = 1 - \frac{\left(\sum_j \xi_j^2\lambda_j^2\right)^2}{\left(\sum_j \xi_j^2\lambda_j^3\right)\left(\sum_j \xi_j^2\lambda_j\right)}$$

  30. Convergence of Steepest Descent-2 (continued). The derivation above shows that the error shrinks by a factor of $\omega$ at every iteration:

$$\|e^{(i+1)}\|_A = \omega\,\|e^{(i)}\|_A, \qquad \omega^2 = 1 - \frac{(r^{(i)T}r^{(i)})^2}{(r^{(i)T}Ar^{(i)})(e^{(i)T}Ae^{(i)})}$$

  31. Convergence Study (n = 2). Assume $\lambda_1 \ge \lambda_2$, and let the spectral condition number be $\kappa = \lambda_1/\lambda_2$ and $\mu = \xi_2/\xi_1$. With $e^{(i)} = \xi_1 v_1 + \xi_2 v_2$,

$$\omega^2 = 1 - \frac{(\xi_1^2\lambda_1^2 + \xi_2^2\lambda_2^2)^2}{(\xi_1^2\lambda_1 + \xi_2^2\lambda_2)(\xi_1^2\lambda_1^3 + \xi_2^2\lambda_2^3)} = 1 - \frac{(\kappa^2 + \mu^2)^2}{(\kappa + \mu^2)(\kappa^3 + \mu^2)}$$

  32. Plot of ω [figure].

  33. Case Study [figure].

  34. Bound of Convergence.

$$\omega^2 = 1 - \frac{(\kappa^2 + \mu^2)^2}{(\kappa + \mu^2)(\kappa^3 + \mu^2)}$$

The worst case occurs when $\mu = \pm\kappa$:

$$\omega^2 \le 1 - \frac{4\kappa^4}{\kappa^5 + 2\kappa^4 + \kappa^3} = \frac{(\kappa - 1)^2}{(\kappa + 1)^2}, \qquad \omega \le \frac{\kappa - 1}{\kappa + 1}$$

It can be proved that this bound is also valid for $n > 2$, where $\kappa = \lambda_{\max}/\lambda_{\min}$.
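
The bound can be checked numerically: compute $\kappa$ from the eigenvalues of $A$ and compare $(\kappa-1)/(\kappa+1)$ with the per-iteration reduction of the energy norm of the error (the sketch reuses the 2x2 example from slide 26; it is an illustration, not part of the slides):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
x_star = np.linalg.solve(A, b)

lam = np.linalg.eigvalsh(A)
kappa = lam.max() / lam.min()
bound = (kappa - 1.0) / (kappa + 1.0)   # worst-case reduction of ||e||_A per step

def energy_norm(e):
    return np.sqrt(e @ A @ e)

x = np.array([-2.0, -2.0])              # starting point from slide 26
for i in range(5):
    r = b - A @ x
    a = (r @ r) / (r @ (A @ r))
    x_next = x + a * r
    ratio = energy_norm(x_star - x_next) / energy_norm(x_star - x)
    print(f"iter {i}: ||e||_A reduction = {ratio:.4f}  (bound {bound:.4f})")
    x = x_next
```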
