Advanced Environmental Data Analysis using MATLAB or Python


"Explore advanced topics in environmental data analysis using MATLAB or Python, including linear models, Fourier series, hypothesis testing, and incorporating prior information to solve problems effectively. Learn how to ensure failure-proof least squares solutions with examples of using linear prior information. Dive into covariance, linear approximations, and more to enhance your data analysis skills."

  • MATLAB
  • Python
  • Environmental Data Analysis
  • Linear Models
  • Fourier Series




Presentation Transcript


  1. Environmental Data Analysis with MATLAB or Python, 3rd Edition, Lecture 8

  2. SYLLABUS
     Lecture 01  Intro; Using MATLAB or Python
     Lecture 02  Looking At Data
     Lecture 03  Probability and Measurement Error
     Lecture 04  Multivariate Distributions
     Lecture 05  Linear Models
     Lecture 06  The Principle of Least Squares
     Lecture 07  Prior Information
     Lecture 08  Solving Generalized Least Squares Problems
     Lecture 09  Fourier Series
     Lecture 10  Complex Fourier Series
     Lecture 11  Lessons Learned from the Fourier Transform
     Lecture 12  Power Spectra
     Lecture 13  Filter Theory
     Lecture 14  Applications of Filters
     Lecture 15  Factor Analysis and Cluster Analysis
     Lecture 16  Empirical Orthogonal Functions and Clusters
     Lecture 17  Covariance and Autocorrelation
     Lecture 18  Cross-correlation
     Lecture 19  Smoothing, Correlation and Spectra
     Lecture 20  Coherence; Tapering and Spectral Analysis
     Lecture 21  Interpolation and Gaussian Process Regression
     Lecture 22  Linear Approximations and Non-Linear Least Squares
     Lecture 23  Adaptable Approximations with Neural Networks
     Lecture 24  Hypothesis Testing
     Lecture 25  Hypothesis Testing continued; F-Tests
     Lecture 26  Confidence Limits of Spectra; Bootstraps

  3. Goals of the lecture: use prior information to solve exemplary problems

  4. review of last lecture

  5. failure-proof least squares: add information to the problem that guarantees that matrices like [G^T G] are never singular; such information is called prior information

  6. examples of prior information:
     • soil density will be around 1500 kg/m³, give or take 500 or so
     • chemical components sum to 100%
     • pollutant transport is subject to the diffusion equation
     • water in rivers always flows downhill

  7. linear prior information: Hm = h, with covariance C_h

  8. simplest example: model parameters near known values h, with Hm = h

     m_1 = 10 ± 5,  m_2 = 20 ± 5

     so H = I and h = [10, 20]^T

     m_1 and m_2 uncorrelated:  C_h = [ 5²  0  ]
                                      [ 0   5² ]

  9. another example, relevant to chemical constituents: the prior information that the components sum to 100% is written with H a single row of ones, H = [1, 1, …, 1], and h = [100]

  10. use Normal p.d.f. to represent prior information

  11. the Normal p.d.f. defines an error in prior information: individual errors weighted by their certainty
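
  In symbols (a standard form, consistent with the covariance C_h introduced above), the prior information is represented by

     p(h) ∝ exp( -½ (Hm - h)^T C_h^{-1} (Hm - h) )

  so the error in prior information is E_h = (Hm - h)^T C_h^{-1} (Hm - h), with each individual error weighted by its certainty through C_h^{-1}.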

  12. now suppose that we observe some data: d = d^obs, with covariance C_d

  13. represent the observations with a Normal p.d.f. p(d), whose mean is the data predicted by the model, Gm

  14. this Normal p.d.f. defines an error in the data: the prediction error, weighted by its certainty

  15. Generalized Principle of Least Squares: the best model m^est is the one that minimizes the total error with respect to m; justified by Bayes' Theorem in the last lecture
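
  Written out (the standard generalized least squares form, combining the data error and the prior information error defined on the preceding slides), the total error is

     Ψ(m) = (Gm - d)^T C_d^{-1} (Gm - d) + (Hm - h)^T C_h^{-1} (Hm - h)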

  16. generalized least squares solution: the pattern is the same as ordinary least squares, but with more complicated matrices
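
  In the notation above, minimizing Ψ(m) gives the standard generalized least squares solution, with the familiar [A^T A]^{-1} A^T b pattern but more complicated matrices:

     m^est = [ G^T C_d^{-1} G + H^T C_h^{-1} H ]^{-1} [ G^T C_d^{-1} d + H^T C_h^{-1} h ]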

  17. (new material) How to use the Generalized Least Squares Equations

  18. Generalized least squares is equivalent to solving Fm = f by ordinary least squares, with

     F = [ C_d^{-1/2} G ]      f = [ C_d^{-1/2} d ]
         [ C_h^{-1/2} H ]          [ C_h^{-1/2} h ]

  19. uncorrelated, uniform variance case: C_d = σ_d² I and C_h = σ_h² I, so

     [ σ_d^{-1} G ] m = [ σ_d^{-1} d ]
     [ σ_h^{-1} H ]     [ σ_h^{-1} h ]

  20. top part: the data equation Gm = d, weighted by the certainty of measurement σ_d^{-1}:

     σ_d^{-1} { Gm = d }

  21. bottom part: the prior information equation Hm = h, weighted by the certainty of the prior information σ_h^{-1}:

     σ_h^{-1} { Hm = h }

  22. example: no prior information, but each row of the data equation weighted by its certainty:

     [ σ_d1^{-1} G_11   σ_d1^{-1} G_12   …   σ_d1^{-1} G_1M ]        [ σ_d1^{-1} d_1 ]
     [ σ_d2^{-1} G_21   σ_d2^{-1} G_22   …   σ_d2^{-1} G_2M ]  m  =  [ σ_d2^{-1} d_2 ]
     [       ⋮                 ⋮                   ⋮        ]        [       ⋮       ]
     [ σ_dN^{-1} G_N1   σ_dN^{-1} G_N2   …   σ_dN^{-1} G_NM ]        [ σ_dN^{-1} d_N ]

     called weighted least squares
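
  A minimal Python sketch of this weighted least squares recipe; the straight-line setup, sizes, and variable names are illustrative assumptions, not the lecture's own script:

     import numpy as np

     # illustrative straight-line problem (assumed, not from the lecture)
     N = 50
     x = np.linspace(0.0, 100.0, N)
     G = np.column_stack((np.ones(N), x))       # data kernel for intercept and slope
     sigmad = np.where(x < 50.0, 1.0, 10.0)     # low variance on the left, high on the right
     dobs = 3.0 + 2.0*x + np.random.normal(0.0, sigmad)

     # weight each row of the data equation by its certainty 1/sigma_d
     Fw = G / sigmad[:, np.newaxis]
     fw = dobs / sigmad

     # solve the weighted normal equations (F^T F) m = F^T f
     mest = np.linalg.solve(Fw.T @ Fw, Fw.T @ fw)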

  23. straight line fit: no prior information, but the data equation weighted by its certainty. [Figure: straight-line fit to data with low variance and data with high variance; the fit passes close to the low-variance data.]

  24. straight line fit: no prior information, but the data equation weighted by its certainty. [Figure: a second example; the fit again tracks the data with low variance.]

  25. another example: prior information that the model parameters are small, m ≈ 0, so H = I and h = 0; assume uncorrelated errors with uniform variances, C_d = σ_d² I and C_h = σ_h² I

  26. the combined equation Fm = f is

     [ σ_d^{-1} G ] m = [ σ_d^{-1} d ]
     [ σ_h^{-1} I ]     [     0      ]

     and its ordinary least squares solution m^est = [F^T F]^{-1} F^T f reduces to

     m^est = [ G^T G + ε² I ]^{-1} G^T d,  with ε = σ_d / σ_h

  27. called damped least squares:

     m^est = [ G^T G + ε² I ]^{-1} G^T d,  with ε = σ_d / σ_h

     ε → 0: minimize the prediction error
     ε → ∞: minimize the size of the model parameters
     0 < ε < ∞: minimize a combination of the two

  28. advantages of m^est = [ G^T G + ε² I ]^{-1} G^T d with ε = σ_d / σ_h: it is really easy to code, and it always works.

     MATLAB:
        mest = (G'*G+(e^2)*eye(M))\(G'*d);

     Python (np is numpy, la is numpy.linalg):
        GTG = np.matmul(G.T,G)
        GTd = np.matmul(G.T,d)
        mest = la.solve(GTG+(e**2)*np.identity(M),GTd)

  29. disadvantages: ε often needs to be determined empirically, and the prior information that the model parameters are small is not always sensible

  30. smoothness as prior information

  31. model parameters represent the values of a function m(x) at equally spaced increments along the x-axis

  32. function approximated by its values at a sequence of x's: m(x_i) ≈ m_i, m(x_{i+1}) ≈ m_{i+1}, …, on a grid with spacing Δx

     m = [m_1, m_2, m_3, …, m_M]^T

  33. a rough function has a large second derivative; a smooth function is one that is not rough, so a smooth function has a small second derivative

  34. approximate expression for the second derivative (a finite difference, matching the row of H on the next slide):

     d²m/dx² at x_i ≈ (Δx)^{-2} ( m_{i-1} - 2 m_i + m_{i+1} )

  35. i-th row of H, the 2nd derivative at x_i:

     (Δx)^{-2} [ 0, 0, …, 0, 1, -2, 1, 0, …, 0, 0 ]

     with the -2 in column i

  36. what to do about m_1 and m_M? there are not enough points for a 2nd derivative. two possibilities: no prior information for m_1 and m_M, or prior information about flatness (the first derivative)

  37. first row of H, the 1st derivative at x_1:

     (Δx)^{-1} [ -1, 1, 0, …, 0 ]

  38. smooth interior / flat ends version of Hm = h, with h = 0
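
  A sketch in Python of this smooth-interior / flat-ends H; the size M and spacing Δx are placeholders:

     import numpy as np

     M = 7          # number of model parameters (illustrative)
     Dx = 1.0       # grid spacing (assumed)

     H = np.zeros((M, M))
     # flat ends: 1st-derivative rows at x_1 and x_M
     H[0, 0:2] = np.array([-1.0, 1.0]) / Dx
     H[M-1, M-2:M] = np.array([-1.0, 1.0]) / Dx
     # smooth interior: 2nd-derivative rows at x_2 ... x_(M-1)
     for i in range(1, M-1):
         H[i, i-1:i+2] = np.array([1.0, -2.0, 1.0]) / Dx**2

     h = np.zeros(M)   # prior information Hm = h with h = 0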

  39. example problem: fill in the missing model parameters so that the resulting curve is smooth. [Figure: the measured values m = d plotted against x, with gaps between them.]

  40. the model parameters, m: an ordered list of all model parameters,

     m = [m_1, m_2, m_3, m_4, m_5, m_6, m_7]^T

  41. the data, d: just the model parameters that were measured,

     d = [d_3, d_5, d_6]^T = [m_3, m_5, m_6]^T

  42. the data equation Gm = d:

     [ 0 0 1 0 0 0 0 ]        [ d_3 ]
     [ 0 0 0 0 1 0 0 ]  m  =  [ d_5 ]
     [ 0 0 0 0 0 1 0 ]        [ d_6 ]

     the data kernel associates a measured model parameter with an unknown model parameter; the data are just model parameters that have been observed

  43. the prior information equation, Hm = h: the smooth interior / flat ends H, with h = 0

  44. put them together into the Generalized Least Squares equation Fm = f:

     F = [ σ_d^{-1} G ]      f = [ σ_d^{-1} d ]
         [ σ_h^{-1} H ]          [     0      ]

     choose ε = σ_d / σ_h << 1, so that the data takes precedence over the prior information

  45. the solution using MATLAB or Python (the Python version is shown; np is numpy, la is numpy.linalg):

     FTF = np.matmul(F.T,F)
     FTf = np.matmul(F.T,f)
     mest = la.solve(FTF,FTf)
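
  Putting slides 39-45 together, a self-contained Python sketch of the fill-in-the-gaps problem; the grid size, observation pattern, synthetic data, and the value of ε are all assumptions for illustration:

     import numpy as np
     import numpy.linalg as la

     M = 101                             # model parameters on a grid with Dx = 1
     x = np.linspace(0.0, 100.0, M)
     iobs = np.arange(0, M, 10)          # indices of the observed parameters
     N = len(iobs)

     # synthetic observations of a smooth curve (illustrative)
     dobs = np.sin(2.0*np.pi*x[iobs]/100.0) + np.random.normal(0.0, 0.05, N)

     # data kernel G: each row picks out one observed model parameter
     G = np.zeros((N, M))
     G[np.arange(N), iobs] = 1.0

     # prior information H: smooth interior / flat ends (as on slide 38)
     H = np.zeros((M, M))
     H[0, 0:2] = [-1.0, 1.0]
     H[M-1, M-2:M] = [-1.0, 1.0]
     for i in range(1, M-1):
         H[i, i-1:i+2] = [1.0, -2.0, 1.0]

     # combined equation Fm = f, scaled so that eps = sigma_d/sigma_h << 1
     eps = 1.0e-2                        # assumed; data takes precedence
     F = np.vstack((G, eps*H))
     f = np.concatenate((dobs, np.zeros(M)))

     mest = la.solve(np.matmul(F.T, F), np.matmul(F.T, f))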

  46. graph of the solution. [Figure: m^est and d versus x; the solution passes close to the data and the solution is smooth.]

  47. Two MATLAB (or Python) issues.

     Issue 1: matrices like G and F can be very big, but contain mostly zeros.
     Solution 1: use sparse matrices, which don't store the zeros.

     Issue 2: matrices like G^T G and F^T F are not as sparse as G and F.
     Solution 2: solve the equation by a method, such as biconjugate gradients, that doesn't require the calculation of G^T G and F^T F.
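
  A sketch of Solution 2 in Python, assuming SciPy is available: a LinearOperator supplies the product (F^T F)v on the fly, so F^T F itself is never formed. The random F, the damping ε, and the sizes are placeholders:

     import numpy as np
     from scipy import sparse
     from scipy.sparse.linalg import LinearOperator, bicg

     # placeholder sparse system (illustrative)
     F = sparse.random(1000, 500, density=0.01, format='csr', random_state=0)
     f = np.ones(1000)
     eps = 0.1

     def normal_matvec(v):
         # computes (F^T F + eps^2 I) v without ever forming F^T F
         return F.T @ (F @ v) + (eps**2)*v

     A = LinearOperator((500, 500), matvec=normal_matvec)
     mest, info = bicg(A, F.T @ f)   # info == 0 signals convergence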

  48. using sparse matrices, which don't store the zeros:

     N=200000;
     M=100000;
     F=spalloc(N,M,3*M);

     spalloc ("sparse allocate") creates a 200000 × 100000 matrix that can hold up to 300000 non-zero elements; note that an ordinary matrix of this size would have 20,000,000,000 elements
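
  A rough Python counterpart, assuming SciPy; scipy.sparse.lil_matrix plays the role of spalloc as an incrementally fillable sparse matrix (it grows its storage as needed rather than taking a capacity argument):

     from scipy import sparse

     N, M = 200000, 100000
     F = sparse.lil_matrix((N, M))   # only non-zero elements are stored
     F[0, 0] = 1.0                   # elements can be set one at a time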

  49. Building Sparse Matrices. There are many ways to build a sparse matrix; however, some work well when the matrix is small (in which case you often don't really need a sparse matrix at all) but become absurdly slow when it is large, and some work well when the matrix has a simple structure (as it often does for one-dimensional problems) but are not applicable when the structure is complicated (as it often is in multidimensional problems). The presumption in this class is that, if you're going to need sparse matrices, it is because you need to solve a very large, multi-dimensional problem. The method used here works for such problems; however, superficially at least, it will seem more cumbersome than some other methods.

  50. The sparse matrix is built in two steps. Step 1: build a table of the row indices, column indices, and values of its non-zero elements. Step 2: call a function or method that uses the table to create the sparse matrix.
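
  In Python, the two-step recipe maps onto scipy.sparse.coo_matrix, whose constructor takes exactly such a table; the example values are placeholders:

     import numpy as np
     from scipy import sparse

     # Step 1: table of row indices, column indices, and non-zero values
     rows = np.array([0, 1, 1, 2])
     cols = np.array([0, 0, 1, 2])
     vals = np.array([1.0, -2.0, 1.0, 3.0])

     # Step 2: a constructor that uses the table to create the sparse matrix
     F = sparse.coo_matrix((vals, (rows, cols)), shape=(3, 3)).tocsr()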
