Understanding Generalized Inverses and Resolution Matrices

Explore the generalized inverse, the data and model resolution matrices, and the unit covariance matrix in this lecture. Learn how to quantify the spread of resolution and the size of the covariance, and how to use their minimization as a guiding principle for solving inverse problems.

  • Generalized Inverses
  • Resolution Matrices
  • Covariance Matrix
  • Inverse Problems
  • Data Analysis




Presentation Transcript


  1. Lecture 6 Resolution and Generalized Inverses

  2. Syllabus
     Lecture 01  Describing Inverse Problems
     Lecture 02  Probability and Measurement Error, Part 1
     Lecture 03  Probability and Measurement Error, Part 2
     Lecture 04  The L2 Norm and Simple Least Squares
     Lecture 05  A Priori Information and Weighted Least Squares
     Lecture 06  Resolution and Generalized Inverses
     Lecture 07  Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
     Lecture 08  The Principle of Maximum Likelihood
     Lecture 09  Inexact Theories
     Lecture 10  Prior Covariance and Gaussian Process Regression
     Lecture 11  Non-uniqueness and Localized Averages
     Lecture 12  Vector Spaces and Singular Value Decomposition
     Lecture 13  Equality and Inequality Constraints
     Lecture 14  L1, L∞ Norm Problems and Linear Programming
     Lecture 15  Nonlinear Problems: Grid and Monte Carlo Searches
     Lecture 16  Nonlinear Problems: Newton's Method
     Lecture 17  Nonlinear Problems: MCMC and Bootstrap Confidence Intervals
     Lecture 18  Factor Analysis
     Lecture 19  Varimax Factors, Empirical Orthogonal Functions
     Lecture 20  Backus-Gilbert Theory for Continuous Problems; Radon's Problem
     Lecture 21  Linear Operators and Their Adjoints
     Lecture 22  Fréchet Derivatives
     Lecture 23  Estimating a Parameter in a Differential Equation
     Lecture 24  Exemplary Inverse Problems, incl. Filter Design
     Lecture 25  Exemplary Inverse Problems, incl. Earthquake Location
     Lecture 26  Exemplary Inverse Problems, incl. Vibrational Problems

  3. Purpose of the Lecture: introduce the idea of a generalized inverse, the data and model resolution matrices, and the unit covariance matrix; quantify the spread of resolution and the size of the covariance; use the minimization of the spread of resolution and/or the size of the covariance as the guiding principle for solving inverse problems

  4. Part 1 The Generalized Inverse, the Data and Model Resolution Matrices and the Unit Covariance Matrix

  5. all of the solutions of the form m^est = M d + v

  6. m^est = M d + v: let's focus on the matrix M

  7. m^est = G^-g d + v: rename M the generalized inverse and use the symbol G^-g

  8. (let's ignore the vector v for a moment) The generalized inverse G^-g operates on the data to give an estimate of the model parameters: if d^pre = G m^est then m^est = G^-g d^obs

  9. Generalized inverse G^-g: if d^pre = G m^est then m^est = G^-g d^obs. It sort of looks like a matrix inverse, except it is M×N, not square, and G G^-g ≠ I and G^-g G ≠ I
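A minimal numerical sketch (hypothetical numbers, using the least-squares form of G^-g that appears later in the lecture) makes these shapes concrete:

```python
import numpy as np

# Hypothetical over-determined straight-line problem: d = m1 + m2*z,
# with N = 4 observations and M = 2 model parameters.
z = np.array([0.0, 1.0, 2.0, 3.0])
G = np.column_stack([np.ones_like(z), z])   # N x M data kernel

# Least-squares generalized inverse: G^-g = [G^T G]^-1 G^T
Gg = np.linalg.solve(G.T @ G, G.T)          # M x N, not square

print(Gg.shape)                             # (2, 4)
print(np.allclose(Gg @ G, np.eye(2)))       # True here: G^-g G = I (M x M)
print(np.allclose(G @ Gg, np.eye(4)))       # False: G G^-g != I (N x N)
```

So even when one of the products equals the identity, the other generally does not, which is why G^-g is not a matrix inverse.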

  10. so actually the generalized inverse is not a matrix inverse at all

  11. plug one equation into the other: d^pre = G m^est and m^est = G^-g d^obs give d^pre = G G^-g d^obs = N d^obs, with N = G G^-g the data resolution matrix

  12. Data Resolution Matrix, N: d^pre = N d^obs. How much does d_i^obs contribute to its own prediction?

  13. if N = I then d^pre = d^obs, so d_i^pre = d_i^obs: d_i^obs completely controls its own prediction

  14. [figure: comparison of d^pre and d^obs] The closer N is to I, the more d_i^obs controls its own prediction

  15. straight line problem [figure: data d (0 to 15) plotted against z (0 to 10)]

  16. d^pre = N d^obs [figure: the elements N_ij of the data resolution matrix]: only the data at the ends control their own prediction
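This end-point behavior can be checked numerically; a sketch with hypothetical z values:

```python
import numpy as np

# Straight-line problem: d = m1 + m2*z at 11 depths (hypothetical values)
z = np.linspace(0.0, 10.0, 11)
G = np.column_stack([np.ones_like(z), z])
Gg = np.linalg.solve(G.T @ G, G.T)   # least-squares generalized inverse

Nres = G @ Gg                        # data resolution matrix N = G G^-g
diag = np.diag(Nres)

# The first and last data have the largest diagonal elements of N:
# they control their own predictions most strongly.
print(diag[0], diag[5], diag[10])
```

Note that the diagonal of N sums to M = 2, so resolution is a fixed budget distributed over the data.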

  17. plug one equation into the other: d^obs = G m^true and m^est = G^-g d^obs give m^est = G^-g G m^true = R m^true, with R = G^-g G the model resolution matrix

  18. Model Resolution Matrix, R: m^est = R m^true. How much does m_i^true contribute to its own estimated value?

  19. if R = I then m^est = m^true, so m_i^est = m_i^true: m_i^est reflects m_i^true only

  20. else if R ≠ I: m_i^est = ... + R_i,i-1 m_i-1^true + R_i,i m_i^true + R_i,i+1 m_i+1^true + ..., so m_i^est is a weighted average of all the elements of m^true

  21. [figure: comparison of m^est and m^true] The closer R is to I, the more m_i^est reflects only m_i^true

  22. Discrete version of the Laplace transform: large c means d is a shallow average of m(z); small c means d is a deep average of m(z)

  23. [figure: m(z) versus depth z, with the kernels e^(-c_lo z) and e^(-c_hi z); each datum integrates the product of a kernel with m(z) over z]

  24. m^est = R m^true [figure: the elements R_ij of the model resolution matrix]: the shallowest model parameters are best resolved
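A numerical sketch of this behavior, with hypothetical decay constants c and depths z, using the minimum-length generalized inverse that appears later in the lecture:

```python
import numpy as np

# Discretized exponentially decaying kernel: d_i = sum_j exp(-c_i z_j) m_j
z = np.linspace(0.0, 1.0, 20)          # 20 model depths (hypothetical)
c = np.array([1.0, 2.0, 4.0, 8.0])     # 4 decay constants (hypothetical)
G = np.exp(-np.outer(c, z))            # 4 x 20 kernel: under-determined

Gg = G.T @ np.linalg.inv(G @ G.T)      # minimum-length generalized inverse
R = Gg @ G                             # model resolution matrix

diag = np.diag(R)
print(diag[0] > diag[-1])              # shallow parameters best resolved
```

Because every kernel decays with depth, the rows of G carry little information about deep structure, and the diagonal of R falls off with depth accordingly.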

  25. Covariance associated with the generalized inverse: the unit covariance matrix, [cov_u m] = G^-g [G^-g]^T; divide [cov m] by σ_d^2 to remove the effect of the overall magnitude of the measurement error

  26. unit covariance for the straight line problem: the model parameters are uncorrelated when the off-diagonal term is zero, which happens when the data are centered about the origin
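A sketch of that claim with hypothetical, symmetric z values, computing the unit covariance as G^-g times its transpose:

```python
import numpy as np

# Straight-line problem with depths centered about the origin (hypothetical)
z = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
G = np.column_stack([np.ones_like(z), z])
Gg = np.linalg.solve(G.T @ G, G.T)   # least-squares generalized inverse

# unit covariance matrix: [cov_u m] = G^-g [G^-g]^T
covu = Gg @ Gg.T
print(covu[0, 1])                    # off-diagonal element is zero
```

With the data centered, the sum of the z_i vanishes, so the intercept and slope estimates are uncorrelated.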

  27. Part 2 The spread of resolution and the size of the covariance

  28. a resolution matrix has small spread if only its main diagonal has large elements; it is then close to the identity matrix

  29. Dirichlet spread functions: spread(N) = Σ_i Σ_j (N_ij − δ_ij)^2 and spread(R) = Σ_i Σ_j (R_ij − δ_ij)^2
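A minimal sketch of the Dirichlet spread as a function, applied to a hypothetical resolution matrix:

```python
import numpy as np

def dirichlet_spread(R):
    """Dirichlet spread: sum of squared deviations of R from the identity."""
    return float(np.sum((R - np.eye(R.shape[0])) ** 2))

# The identity has zero spread; any off-diagonal leakage increases it.
R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
print(dirichlet_spread(np.eye(3)))   # 0.0
print(dirichlet_spread(R))           # positive for this hypothetical R
```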

  30. a unit covariance matrix has small size if its diagonal elements are small: error in the data then corresponds to only small error in the model parameters (ignoring correlations)

  31. Part 3 minimization of spread of resolution and/or size of covariance as the guiding principle for creating a generalized inverse

  32. over-determined case: note that for simple least squares, G^-g = [G^T G]^-1 G^T, the model resolution R = G^-g G = [G^T G]^-1 G^T G = I is always the identity matrix

  33. suggests that we try to minimize the spread of the data resolution matrix, N: find the G^-g that minimizes spread(N)

  34. spread of the k-th row of N: spread_k = Σ_j (N_kj − δ_kj)^2; now compute its derivative with respect to the elements of G^-g

  35. first term

  36. second term; the third term is zero

  37. putting it all together gives G^-g = [G^T G]^-1 G^T, which is just simple least squares

  38. the simple least squares solution minimizes the spread of data resolution and has zero spread of the model resolution

  39. under-determined case: note that for the minimum length solution, G^-g = G^T [G G^T]^-1, the data resolution N = G G^-g = G G^T [G G^T]^-1 = I is always the identity matrix
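A quick numerical check of this identity, using a random hypothetical kernel:

```python
import numpy as np

# Hypothetical under-determined problem: 3 data, 6 model parameters
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 6))

Gg = G.T @ np.linalg.inv(G @ G.T)       # minimum-length generalized inverse

print(np.allclose(G @ Gg, np.eye(3)))   # True: data resolution N = I
print(np.allclose(Gg @ G, np.eye(6)))   # False: model resolution R != I
```

This mirrors the over-determined case: minimum length fixes N at the identity, at the cost of a non-trivial model resolution R.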

  40. suggests that we try to minimize the spread of the model resolution matrix, R: find the G^-g that minimizes spread(R)

  41. minimization leads to G^-g [G G^T] = G^T, which is just the minimum length solution G^-g = G^T [G G^T]^-1

  42. the minimum length solution minimizes the spread of model resolution and has zero spread of the data resolution

  43. general case: minimize a weighted combination of the spread of data resolution, the spread of model resolution, and the size of the unit covariance

  44. the general case leads to a Sylvester equation, so there is no explicit solution in terms of matrices

  45. special case #1 (weights 1, 0, ε^2): [G^T G + ε^2 I] G^-g = G^T, so G^-g = [G^T G + ε^2 I]^-1 G^T: damped least squares

  46. special case #2 (weights 0, 1, ε^2): G^-g [G G^T + ε^2 I] = G^T, so G^-g = G^T [G G^T + ε^2 I]^-1: damped minimum length
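The damped least squares and damped minimum length forms give the same generalized inverse; a sketch verifying the identity [G^T G + ε^2 I]^-1 G^T = G^T [G G^T + ε^2 I]^-1 numerically with a random hypothetical G:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((5, 3))      # hypothetical 5 x 3 kernel
eps2 = 0.1                           # damping parameter epsilon^2

# special case #1: damped least squares
Gg_dls = np.linalg.solve(G.T @ G + eps2 * np.eye(3), G.T)
# special case #2: damped minimum length
Gg_dml = G.T @ np.linalg.inv(G @ G.T + eps2 * np.eye(5))

print(np.allclose(Gg_dls, Gg_dml))   # True: the two forms coincide
```

In practice one picks whichever form inverts the smaller matrix (M×M versus N×N).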

  47. so no new solutions have arisen, just a reinterpretation of previously-derived solutions

  48. reinterpretation: instead of solving for estimates of the model parameters, we are solving for estimates of weighted averages of the model parameters, where the weights are given by the model resolution matrix

  49. a criticism of Dirichlet spread() functions, when m represents m(x), is that they don't capture the sense of being localized very well
