
Environmental Data Analysis with MATLAB: Neural Networks for Adaptation
Explore adaptable approximations using neural networks in environmental data analysis with MATLAB. Learn the advantages and disadvantages of look-up tables, the network representation of functions, and practical applications.
Environmental Data Analysis with MatLab, 2nd Edition. Lecture 23: Adaptable Approximations with Neural Networks
SYLLABUS
Lecture 01: Using MatLab
Lecture 02: Looking At Data
Lecture 03: Probability and Measurement Error
Lecture 04: Multivariate Distributions
Lecture 05: Linear Models
Lecture 06: The Principle of Least Squares
Lecture 07: Prior Information
Lecture 08: Solving Generalized Least Squares Problems
Lecture 09: Fourier Series
Lecture 10: Complex Fourier Series
Lecture 11: Lessons Learned from the Fourier Transform
Lecture 12: Power Spectra
Lecture 13: Filter Theory
Lecture 14: Applications of Filters
Lecture 15: Factor Analysis
Lecture 16: Orthogonal Functions
Lecture 17: Covariance and Autocorrelation
Lecture 18: Cross-correlation
Lecture 19: Smoothing, Correlation and Spectra
Lecture 20: Coherence; Tapering and Spectral Analysis
Lecture 21: Interpolation
Lecture 22: Linear Approximations and Non-Linear Least Squares
Lecture 23: Adaptable Approximations with Neural Networks
Lecture 24: Hypothesis Testing
Lecture 25: Hypothesis Testing, continued; F-Tests
Lecture 26: Confidence Limits of Spectra; Bootstraps
Goals of the lecture: understand the motivation behind neural networks, what neural networks are, why they are adaptable, and a few simple applications.
Look-up table as a form of approximation. A query at x = 3 returns d = 4 by reading the corresponding row:

x    d(x)
1    0
2    2
3    4
4    5
5    4
6    2
7    0
Advantage: fast. A query such as x = 3 returns d = 4 with a single table look-up.
Advantage: easy to update. Revising the entry for x = 3 from d = 4 to d = 4.5 changes only one row of the table.
Disadvantage: sharp jumps. The output changes abruptly between neighboring entries: a query at x = 2.99 returns d = 2, while a query at x = 3.01 returns d = 4.
Disadvantage: hard to reconfigure. Inserting an intermediate entry, such as (x = 3.5, d = 4.75), requires restructuring the table.
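To make the look-up behavior concrete, here is a minimal MATLAB sketch of the table above; the "previous-entry" look-up rule is an assumption, chosen to reproduce the jump described on the slide:

    % A minimal sketch of the look-up table (the 'previous' rule is assumed).
    xt = (1:7)';                             % tabulated x
    dt = [0; 2; 4; 5; 4; 2; 0];              % tabulated d(x)
    d1 = interp1(xt, dt, 2.99, 'previous')   % returns 2
    d2 = interp1(xt, dt, 3.01, 'previous')   % returns 4: a sharp jump
    dt(xt == 3) = 4.5;                       % updating is a one-element write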
Network representation of a function: the function is drawn as a network through which information flows from input to output.
A row of the table can be represented as a boxcar, or tower, function.
Another network representation of one row of a table: the boxcar is represented in terms of two step functions, one switching on at the left edge of the row's interval and one switching off at the right edge.
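A minimal MATLAB sketch of this construction; the interval edges 2.5 and 3.5 are illustrative choices matching the row x = 3:

    % A boxcar (tower) for the row (x = 3, d = 4), as the difference of two steps.
    step = @(x, x0) double(x >= x0);         % unit step switching on at x0
    x = linspace(0, 7, 701);
    boxcar = step(x, 2.5) - step(x, 3.5);    % on at 2.5, off at 3.5
    plot(x, 4*boxcar); ylim([-1 6]);         % height 4 matches d(3) = 4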
A smooth alternative to a step function is the sigmoid,

    sigma(z) = 1 / (1 + exp(-z)),   with   z = w*x + b,

where w is the weight and b is the bias. The curve is centered at x0 = -b/w, where its slope is greatest: a big w gives a steep, nearly step-like transition, while a small w gives a gentle one.
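A short MATLAB sketch of the effect of the weight; the particular values w = 2 and w = 20 are illustrative:

    % Sigmoid sigma(w*x + b): both curves centered at x0 = -b/w = 3.
    sigma = @(x, w, b) 1 ./ (1 + exp(-(w*x + b)));
    x = linspace(0, 6, 300);
    plot(x, sigma(x, 2, -6), x, sigma(x, 20, -60));
    legend('small w: gentle slope', 'big w: nearly a step');
    xlabel('x');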
Anatomy of a neural net:
- the basic unit is the neuron;
- each neuron has a bias b (the bias is a property of the neuron);
- neurons are organized into layers;
- each connection between neurons carries a weight w (the weight is a property of the connection);
- each neuron produces an output, or activity, a;
- each neuron receives a net input z.
A network maps several inputs (input 1 through input 4) to several outputs (output 1 through output 3); information flows from the input side to the output side.
Let's examine one part of this network in detail. Neurons in each layer are numbered from top to bottom: neuron 1, neuron 2, and so on.
The weight w_ij(k) is attached to the connection from the i-th neuron in layer (k-1) to the j-th neuron in layer k. The net input of neuron j in layer k is then z_j(k) = sum over i of w_ij(k) * a_i(k-1) + b_j(k), and its activity is a_j(k) = sigma(z_j(k)).
The sigmoid function is NOT applied to the last layer; the activities of the output neurons are their net inputs z themselves.
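Putting these pieces together, here is a minimal forward-pass sketch in MATLAB; the layer sizes and random weights are assumptions for illustration, not the textbook's code:

    % Minimal forward-pass sketch (assumed 4-3-2 architecture).
    % W{k-1}(i,j): weight from neuron i in layer k-1 to neuron j in layer k,
    % matching the w_ij(k) convention above.
    sigma = @(z) 1 ./ (1 + exp(-z));
    a = [1; 2; 3; 4];                  % activities of the input layer
    W = {rand(4,3), rand(3,2)};        % weights into layers 2 and 3
    b = {rand(3,1), rand(2,1)};        % biases of layers 2 and 3
    L = numel(W) + 1;                  % total number of layers
    for k = 2:L
        z = W{k-1}' * a + b{k-1};      % net input to layer k
        if k < L
            a = sigma(z);              % sigmoid on hidden layers only
        else
            a = z;                     % NOT applied to the last layer
        end
    end
    a                                  % network output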
Neural nets can easily be amalgamated, so a function can be constructed from a row of towers, each realized by a small net.
A neural net for an arbitrary function (made with a superposition of towers).
A neural net for a 2D tower function (made with a superposition of towers).
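A minimal sketch of the superposition idea, reusing the look-up table values from earlier (an illustration under assumed widths and weights, not the textbook's code):

    % An arbitrary function built as a superposition of towers, each tower
    % being the difference of two sigmoids (large w gives sharp edges).
    sigma = @(z) 1 ./ (1 + exp(-z));
    tower = @(x, c, w) sigma(w*(x - c + 0.5)) - sigma(w*(x - c - 0.5));
    x = linspace(0, 8, 400);
    heights = [0 2 4 5 4 2 0];         % the look-up table values again
    d = zeros(size(x));
    for n = 1:7
        d = d + heights(n) * tower(x, n, 20);
    end
    plot(x, d); xlabel('x'); ylabel('d(x)');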
The challenge of designing a neural net:
1) choosing the architecture: the number of layers, the number of neurons in each layer, and their connections;
2) finding the weights and biases that best approximate a given behavior.
Training (that is, machine learning) means finding the weights and biases that best approximate a given behavior, given a training dataset: a (large) set of desired input/output pairs.
Treat training as a least squares problem: find the weights and biases that minimize the total error E = sum over i of (d_i_obs - d_i_pre)^2 between the desired outputs and the actual network outputs.
Least squares requires that you know the linearized data kernel, that is, the derivatives of the network outputs with respect to the weights and biases. The network formulas are simple, so these derivatives can be computed analytically (with copious use of the chain rule).
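As a toy illustration of these chain-rule derivatives, here is a sketch that trains a single sigmoid neuron by gradient descent on the least squares error; the learning rate and synthetic data are assumptions, and this is a simplification of the full multi-layer procedure:

    % Toy training example: fit dpre = sigma(w*x + b) to observed data.
    sigma = @(z) 1 ./ (1 + exp(-z));
    x    = linspace(-2, 2, 50)';
    dobs = sigma(3*x - 1);                       % synthetic 'desired' outputs
    w = 1;  b = 0;                               % initial guess
    eta = 0.1;                                   % step size (assumed)
    for iter = 1:5000
        z    = w*x + b;
        dpre = sigma(z);                         % actual output
        e    = dobs - dpre;                      % prediction error
        g    = dpre .* (1 - dpre);               % sigma'(z), via the chain rule
        w = w - eta * (-2 * sum(e .* g .* x));   % dE/dw
        b = b - eta * (-2 * sum(e .* g));        % dE/db
    end
    [w, b]                                       % should approach (3, -1)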
Example: a simple tower network trained to fit a 2D function (figure panels: the true function, the initial guess, and the fit after training).