
Understanding Neural Network Algorithms and Learning Rules
Explore the world of Artificial Neural Networks (ANNs) and learning algorithms, from mimicking the way a child learns to Hebbian learning rules, convergence of the net, and the application of learning rules to logical problems such as the AND gate. Dive into the details of how NNs adapt and learn continuously based on various factors, improving performance in different environments.
ANN Algorithms: Learning Algorithms
Learning Algorithms
NNs mimic the way a child learns to identify shapes and colors. NN algorithms are able to adapt continuously, based on current results, to improve performance. Adaptation, or learning, is an essential feature of NNs, enabling them to handle the new "environments" that are continuously encountered. The performance of a learning procedure depends on many factors, such as:
1. The choice of error function.
2. The net architecture.
3. The types of nodes and any restrictions on the values of the weights.
4. The activation function.
Convergence of the Net
Convergence of the net depends on:
1. The training set.
2. The initial conditions.
3. The learning algorithm.
Note: convergence in the case of complete information is better than in the case of incomplete information.
Training a NN means assigning the weights in the net so as to minimize the output error. The net is said to be trained when convergence is achieved, in other words when the weights stop changing.
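As an illustration of this stopping criterion, here is a minimal sketch (not from the original slides; `train_epoch` is a placeholder for one pass of whatever learning rule is in use):

```python
import numpy as np

def train_until_convergence(weights, train_epoch, tol=1e-6, max_epochs=1000):
    """Repeat training epochs until the weights stop changing (convergence)."""
    for epoch in range(max_epochs):
        new_weights = train_epoch(weights)            # one pass over the training set
        if np.max(np.abs(new_weights - weights)) < tol:
            return new_weights, epoch + 1             # weights stopped changing
        weights = new_weights
    return weights, max_epochs                        # budget exhausted without converging
```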
The Types of Learning Rules
1. Hebbian Learning Rule
The earliest and simplest learning rule is known as the Hebbian learning rule. Hebb's basic idea is that if a unit Uj receives an input from a unit Ui and both units are highly active, then the weight Wij (from unit i to unit j) should be strengthened. This idea is formulated as:

ΔWij = η · Xi · Yj

where η is the learning rate. If η = 1, the weight change is ΔW = x·y, and the weights are updated as:

W(new) = W(old) + ΔW, i.e. W(new) = W(old) + x·y

The main disadvantage of Hebbian learning is that it takes no account of the actual value of the output, only the desired value. This limitation can be overcome if the weights are adjusted by an amount that depends on the error between the desired and actual output.
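A minimal sketch of this update rule in Python (illustrative only; the function and variable names are not from the slides):

```python
import numpy as np

def hebb_update(w, x, y, eta=1.0):
    """One Hebbian update: delta_w = eta * x * y, added to the weights.

    w   : current weight vector (one weight per input, bias included)
    x   : input vector
    y   : scalar target activation
    eta : learning rate (the slides take eta = 1)
    """
    delta_w = eta * x * y      # large when input and output are both highly active
    return w + delta_w         # W(new) = W(old) + delta_W
```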
Ex-4: For the following AND gate, update the initial weights [W1 = 0, W2 = 0, b = 0].

Pattern | X1  X2  b | y | ΔW1 ΔW2 Δb | W1  W2  b
  1     |  1   1  1 | 1 |  1   1   1 |  1   1  1
  2     |  1   0  1 | 0 |  0   0   0 |  1   1  1
  3     |  0   1  1 | 0 |  0   0   0 |  1   1  1
  4     |  0   0  1 | 0 |  0   0   0 |  1   1  1

Presenting the first input pattern shows that the response will be correct. Presenting the second, third, and fourth training input patterns shows that, because the target value is 0, no learning occurs. Thus, using binary target values prevents the net from learning any pattern for which the target is "off". The AND function can be solved if we modify its representation to express the inputs as well as the targets in bipolar form. Bipolar representation of the inputs and targets allows a weight to be modified when the input unit and the target value are both "on" at the same time and when they are both "off" at the same time.
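To make this concrete, here is a short sketch (illustrative code, not from the slides) that runs the four binary AND patterns through the Hebb rule with η = 1:

```python
import numpy as np

# Binary AND patterns: each row is (X1, X2, bias input); binary targets.
X = np.array([[1, 1, 1],
              [1, 0, 1],
              [0, 1, 1],
              [0, 0, 1]])
y = np.array([1, 0, 0, 0])

w = np.zeros(3)              # initial weights [W1, W2, b] = [0, 0, 0]
for xi, yi in zip(X, y):
    w = w + xi * yi          # Hebb rule; zero targets contribute nothing
    print(xi, yi, w)
# Final w = [1, 1, 1]: only the first pattern (target 1) changed the weights.
```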
In bipolar form, the training data and the resulting Hebbian updates are:

Pattern | X1  X2  b |  y | ΔW1 ΔW2 Δb | W1  W2   b
  1     |  1   1  1 |  1 |   1   1   1 |  1   1   1
  2     |  1  -1  1 | -1 |  -1   1  -1 |  0   2   0
  3     | -1   1  1 | -1 |   1  -1  -1 |  1   1  -1
  4     | -1  -1  1 | -1 |   1   1  -1 |  2   2  -2

Second Method: the Hebb rule can also be written per weight as ΔWij = Xi·Yj, or in matrix form as

W = Xᵀ · Y

where X is the matrix whose rows are the input patterns and Y is the vector of targets.

Ex-5: What would the weights be if Hebbian learning is applied to the data shown in the following table? Assume that the initial weights are zero.

X1 | X2 | y
 0 |  0 | 1
 0 |  1 | 1
 1 |  0 | 0
 1 |  1 | 1

What output values are produced, using a hyperbolic tangent activation function?
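A sketch of the matrix form applied to Ex-5 (illustrative code; it assumes a bias input of 1 as in Ex-4, and the table values as reconstructed above):

```python
import numpy as np

# Ex-5 data: rows are (X1, X2, bias input); binary targets.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([1, 1, 0, 1])

# Batch Hebbian learning in matrix form: W = X^T Y (initial weights zero).
W = X.T @ y
print(W)               # accumulated weights [W1, W2, b] = [1, 2, 3]

# Outputs of the trained net through a hyperbolic tangent activation.
print(np.tanh(X @ W))  # net inputs [3, 5, 4, 6] squashed into (-1, 1)
```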