Environmental Data Analysis with MATLAB: Adaptable Approximations with Neural Networks


Explore adaptable approximations with neural networks in environmental data analysis using MATLAB. Learn the advantages and disadvantages of look-up tables, the network representation of functions, and practical applications.

  • MATLAB
  • Neural Networks
  • Adaptation
  • Environmental Data
  • Analysis




Presentation Transcript


  1. Environmental Data Analysis with MatLab, 2nd Edition. Lecture 23: Adaptable Approximations with Neural Networks

  2. SYLLABUS
     Lecture 01: Using MatLab
     Lecture 02: Looking At Data
     Lecture 03: Probability and Measurement Error
     Lecture 04: Multivariate Distributions
     Lecture 05: Linear Models
     Lecture 06: The Principle of Least Squares
     Lecture 07: Prior Information
     Lecture 08: Solving Generalized Least Squares Problems
     Lecture 09: Fourier Series
     Lecture 10: Complex Fourier Series
     Lecture 11: Lessons Learned from the Fourier Transform
     Lecture 12: Power Spectra
     Lecture 13: Filter Theory
     Lecture 14: Applications of Filters
     Lecture 15: Factor Analysis
     Lecture 16: Orthogonal Functions
     Lecture 17: Covariance and Autocorrelation
     Lecture 18: Cross-correlation
     Lecture 19: Smoothing, Correlation and Spectra
     Lecture 20: Coherence; Tapering and Spectral Analysis
     Lecture 21: Interpolation
     Lecture 22: Linear Approximations and Non Linear Least Squares
     Lecture 23: Adaptable Approximations with Neural Networks
     Lecture 24: Hypothesis Testing
     Lecture 25: Hypothesis Testing continued; F-Tests
     Lecture 26: Confidence Limits of Spectra, Bootstraps

  3. Goals of the lecture: understand the motivation behind neural networks, what neural networks are, why they are adaptable, and a few simple applications

  4. Look-up table as a form of approximation. Querying x=3 returns d=4.

        x  : 1  2  3  4  5  6  7
      d(x) : 0  2  4  5  4  2  0

  5. advantages: fast. A query such as x=3 is answered with a single table look-up, d=4.

  6. advantages: easy to update. Revising a single entry (here d(3) is changed from 4 to 4.5) immediately changes the value the query x=3 returns.

  7. disadvantages: sharp jumps. Nearby queries can return very different values: x=2.99 returns d=2, while x=3.01 returns d=4.

  8. disadvantages: hard to reconfigure. Representing a new intermediate point (e.g. x=3.5 with d=4.75) requires inserting a whole new row into the table.
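
A minimal MATLAB sketch of the look-up table behavior on slides 4 through 8; the table values and query points come from the slides, while the "previous entry" look-up rule is an assumption:

      % look-up table approximation: fast and easy to update, but jumpy
      x = (1:7)';                       % table column: x
      d = [0 2 4 5 4 2 0]';             % table column: d(x)
      interp1(x, d, 3, 'previous')      % fast: a single look-up returns 4
      interp1(x, d, 2.99, 'previous')   % sharp jump: returns 2 (entry at x=2) ...
      interp1(x, d, 3.01, 'previous')   % ... while 3.01 returns 4 (entry at x=3)
      d(3) = 4.5;                       % easy to update: revise one entry
      interp1(x, d, 3, 'previous')      % the same query now returns 4.5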

  9. network representation of a function

  10. network representation of a function, showing the flow of information from input to output

  11. network representation of a table

  12. row of a table represented as a boxcar or tower function

  13. another network representation of one row of a table: the boxcar expressed in terms of two step functions
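
A sketch of this representation in MATLAB, using the table row x=3, d=4; the edge positions 2.5 and 3.5 are illustrative choices:

      % one table row as a boxcar (tower): the difference of two step functions
      x    = linspace(0, 8, 801)';
      step = @(u) double(u >= 0);                    % unit step function
      tower = 4 * (step(x - 2.5) - step(x - 3.5));   % up-step at 2.5, down-step at 3.5, height 4
      plot(x, tower); xlabel('x'); ylabel('d(x)');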

  14. smooth alternative to a step function: the sigmoid function σ(z) = 1 / (1 + exp(-z))

  15. smooth alternative to a step function: a sigmoid function with weight w and bias b, a = σ(wx + b) = 1 / (1 + exp(-(wx + b)))

  16. smooth alternative to a step function: a big w makes the sigmoid step-like, a small w makes it gentle; the maximum slope is at the center x0, where wx0 + b = 0
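
A brief MATLAB illustration of slides 14 through 16; the center x0 = 3 and the three weights are arbitrary choices made for the plot:

      % sigmoid with weight w and bias b; b = -w*x0 puts the center at x0
      sig = @(x, w, b) 1 ./ (1 + exp(-(w*x + b)));
      x   = linspace(-2, 8, 1001)';
      x0  = 3;                                   % center: where w*x + b = 0
      hold on
      for w = [1 5 50]                           % small w: gentle; big w: step-like
          plot(x, sig(x, w, -w*x0));
      end
      hold off; xlabel('x'); ylabel('\sigma(wx+b)'); legend('w=1', 'w=5', 'w=50');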

  17. neural net

  18. neural net: a neuron

  19. neural net: a neuron with bias b (the bias b is a property of the neuron)

  20. neural net: a layer

  21. neural net

  22. neural net: weight w (a weight is a property of a connection)

  23. neural net: the output, or activity, a of a neuron

  24. neural net: the input z of a neuron

  25. neural net: a network with four inputs and three outputs; information flows from the inputs to the outputs

  26. neural net: let's examine this part

  27. layers numbered from left to right

  28. neurons in each layer are numbered from top to bottom (neuron 1, neuron 2, ...)

  29. bias bi(k) of i-th neuron in k-th layer

  30. input zi(k) of i-th neuron in k-th layer

  31. output (or activity) ai(k) of i-th neuron in k-th layer

  32. weight wij(k) of connection from i-th neuron in layer (k-1) to j-th neuron in layer k

  33. the sigmoid function is NOT applied to the last layer (the output neurons are left linear)
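
A forward-pass sketch under assumed shapes and random weights (not from the lecture); the matrices here are arranged so that z = W*a, which transposes the slide's w_ij(k) index convention:

      sig  = @(z) 1 ./ (1 + exp(-z));           % sigmoid activation
      W{2} = randn(4, 2);  b{2} = randn(4, 1);  % layer 2: 4 neurons, 2 inputs
      W{3} = randn(1, 4);  b{3} = randn(1, 1);  % layer 3: a single output neuron
      a = [0.5; 1.5];                           % layer-1 activities: the input
      K = 3;                                    % number of layers
      for k = 2:K
          z = W{k} * a + b{k};                  % input z of each neuron in layer k
          if k < K
              a = sig(z);                       % sigmoid on interior layers ...
          else
              a = z;                            % ... but NOT on the last layer
          end
      end
      disp(a)                                   % network output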

  34. neural net for a step-like function

  35. neural net for a tower function
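
The tower network can be sketched directly: two sigmoid neurons sharing a large weight, combined with output weights +1 and -1 (the edge positions and steepness are illustrative):

      % smooth tower as the difference of two sigmoids
      sig   = @(z) 1 ./ (1 + exp(-z));
      x     = linspace(0, 6, 601)';
      w     = 50;                                   % big weight: steep sides
      tower = sig(w*(x - 2.5)) - sig(w*(x - 3.5));  % edges at x = 2.5 and 3.5
      plot(x, tower); xlabel('x'); ylabel('tower(x)');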

  36. neural nets can easily be amalgamated, so a function can be constructed from a row of towers

  37. neural net for an arbitrary function (made with a superposition of towers)

  38. neural net for an arbitrary function
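
As an illustrative sketch of the superposition, the slide-4 table can be rebuilt as a row of towers, one per table row (the steepness w is an arbitrary choice):

      % arbitrary function as a superposition of towers
      sig = @(z) 1 ./ (1 + exp(-z));
      xt  = 1:7;  dt = [0 2 4 5 4 2 0];   % the look-up table of slide 4
      w   = 50;                           % steepness of each tower's sides
      x   = linspace(0, 8, 801)';
      d   = zeros(size(x));
      for i = 1:numel(xt)                 % one tower of height dt(i) per row
          d = d + dt(i) * (sig(w*(x - xt(i) + 0.5)) - sig(w*(x - xt(i) - 0.5)));
      end
      plot(x, d, xt, dt, 'o'); xlabel('x'); ylabel('d(x)');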

  39. neural net for a 2D tower function (made with a superposition of towers)

  40. neural net for a 2D tower function

  41. neural net for a linear function

  42. the challenge of designing a neural net: (1) choosing the number of layers, the number of neurons in each layer, and their connections; (2) finding the weights and biases that best approximate a given behavior

  43. training (= machine learning): finding the weights and biases that best approximate a given behavior, given a training dataset, that is, a (large) set of desired input/output pairs

  44. treat training as a least squares problem: find the weights and biases that minimize the total error between the desired outputs and the actual outputs
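
As a toy version of this least squares view (every name and value below is illustrative, not from the lecture), the total error of a single-neuron network over a training set is:

      % total least squares error E for a one-neuron network a = sigma(w*x + b)
      sig  = @(z) 1 ./ (1 + exp(-z));
      net  = @(x, w, b) sig(w*x + b);        % toy network: one sigmoid neuron
      xobs = linspace(0, 6, 20)';            % training inputs
      dobs = double(xobs > 3);               % desired outputs: a step at x = 3
      w = 2;  b = -6;                        % trial weight and bias
      E = sum((dobs - net(xobs, w, b)).^2)   % total error to be minimized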

  45. least squares requires that you know the linearized data kernel, that is, the derivatives of the network output with respect to the weights and biases

  46. the network formulas are simple, so these derivatives can be computed analytically (with copious use of the chain rule)
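
For the toy one-neuron network above, the chain rule gives these derivatives in closed form, since dσ/dz = σ(1 - σ); a hedged sketch of the gradient and one descent step (the step size eta is an arbitrary choice):

      sig  = @(z) 1 ./ (1 + exp(-z));
      xobs = linspace(0, 6, 20)';  dobs = double(xobs > 3);
      w = 2;  b = -6;  eta = 0.1;           % trial parameters and step size
      a    = sig(w*xobs + b);               % outputs for all training inputs
      r    = dobs - a;                      % residuals: desired minus actual
      dadz = a .* (1 - a);                  % d(sigma)/dz
      dEdw = -2 * sum(r .* dadz .* xobs);   % dE/dw via the chain rule (z = w*x + b)
      dEdb = -2 * sum(r .* dadz);           % dE/db
      w = w - eta * dEdw;  b = b - eta * dEdb;   % one step down the error surface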

  47. a simple tower trained to fit a 2D function (figure panels: true function, initial guess, after training)
