Artificial Intelligence Perceptrons: Linear Classifiers and Geometric Explanation
These slides cover perceptrons for artificial intelligence: linear classifiers, feature vectors, and a geometric view of classification. They explain the weight updates and learning process of binary and multiclass perceptrons, how weights are adjusted from training instances, and the role of hyperplanes in classification tasks.
Presentation Transcript
CS 4/527: Artificial Intelligence Perceptrons Instructor: Jared Saia, University of New Mexico [These slides were created by Dan Klein, Pieter Abbeel, and Anca Dragan for CS188 Intro to AI at UC Berkeley. All CS188 materials are available at http://ai.berkeley.edu.]
Feature Vectors o An input is represented as a vector of feature values. o Spam filtering example: an email such as "Hello, ... Do you want free printr cartriges? Why pay more when you can get them ABSOLUTELY FREE! Just ..." becomes features like # free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ..., labeled SPAM (+) or HAM (-). o Digit recognition example: an image becomes features like PIXEL-7,12 : 1, PIXEL-7,13 : 0, ..., NUM_LOOPS : 1, ..., labeled with the digit (here "2").
Linear Classifiers o Inputs are feature values o Each feature has a weight o Sum is the activation: activation_w(x) = Σ_i w_i · f_i(x) o If the activation is: o Positive, output +1 o Negative, output -1 (Figure: features f1, f2, f3 are multiplied by weights w1, w2, w3, summed, and compared against 0.)
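A minimal Python sketch of this linear classifier, assuming (as an illustration, not from the slides) that the weight vector and feature vector are plain dicts keyed by feature name:

def activation(weights, features):
    # Weighted sum of feature values: sum_i w_i * f_i(x)
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def classify(weights, features):
    # Positive activation -> +1, otherwise -1 (zero is mapped to -1 here by choice)
    return +1 if activation(weights, features) > 0 else -1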
Geometric Explanation o Compare the features to a weight vector, e.g. w = (# free : 4, YOUR_NAME : -1, MISSPELLED : 1, FROM_FRIEND : -3, ...). o Example emails as feature vectors: (# free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ...) and (# free : 0, YOUR_NAME : 1, MISSPELLED : 1, FROM_FRIEND : 1, ...). o A positive dot product w · f(x) means the positive class.
Geometric Explanation o In the space of feature vectors o Examples are points o Any weight vector is a hyperplane o One side corresponds to Y = +1 o Other corresponds to Y = -1 (Figure: with weight vector BIAS : -3, free : 4, money : 2, the line -3 + 4·free + 2·money = 0 in the (free, money) plane separates +1 = SPAM from -1 = HAM.)
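A quick worked check of the figure's weight vector, reusing the classify helper from the earlier sketch; the two email feature vectors below are made up for illustration:

w = {"BIAS": -3, "free": 4, "money": 2}

spammy = {"BIAS": 1, "free": 1, "money": 1}   # activation = -3 + 4 + 2 = 3 > 0
hammy  = {"BIAS": 1, "free": 0, "money": 0}   # activation = -3 < 0

print(classify(w, spammy), classify(w, hammy))  # 1 -1  (SPAM, HAM)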
Learning: Binary Perceptron o Start with weights = 0 o For each training instance: o Classify with current weights o If correct (i.e., y = y*), no change! o If wrong: adjust the weight vector by adding or subtracting the feature vector, w ← w + y* · f(x) (add if y* = +1, subtract if y* = -1).
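A sketch of this binary perceptron training loop, reusing the classify helper from the earlier sketch; the data representation and number of passes are assumptions for illustration:

from collections import defaultdict

def train_binary_perceptron(data, passes=10):
    # data: list of (features_dict, label) pairs with label in {+1, -1}
    w = defaultdict(float)                     # start with weights = 0
    for _ in range(passes):
        for features, y_star in data:
            y = classify(w, features)          # classify with current weights
            if y != y_star:                    # wrong: adjust the weight vector
                for name, value in features.items():
                    w[name] += y_star * value  # add or subtract f(x)
    return w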
Examples: Perceptron o Separable Case
Multiclass Decision Rule o If we have multiple classes: o A weight vector w_y for each class y o Score (activation) of a class y: w_y · f(x) o Prediction: the highest score wins, y = argmax_y w_y · f(x) o Binary = multiclass where the negative class has weight zero
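The decision rule as a small Python sketch, assuming one weight dict per class and the activation helper from the earlier sketch:

def predict(weights_by_class, features):
    # Pick the class whose weight vector gives the highest activation
    return max(weights_by_class,
               key=lambda y: activation(weights_by_class[y], features))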
Learning: Multiclass Perceptron o Start with all weights = 0 o Pick up training examples one by one o Predict with current weights o If correct, no change! o If wrong: lower the score of the wrong answer and raise the score of the right answer: w_y ← w_y - f(x), w_y* ← w_y* + f(x)
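A sketch of the multiclass perceptron training loop, reusing the predict helper from the decision-rule sketch above; again the data representation is an assumption:

from collections import defaultdict

def train_multiclass_perceptron(data, classes, passes=10):
    # data: list of (features_dict, correct_class) pairs
    w = {y: defaultdict(float) for y in classes}     # all weights = 0
    for _ in range(passes):
        for features, y_star in data:
            y = predict(w, features)                 # predict with current weights
            if y != y_star:                          # wrong answer
                for name, value in features.items():
                    w[y][name]      -= value         # lower score of wrong answer
                    w[y_star][name] += value         # raise score of right answer
    return w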
Example: Multiclass Perceptron o Training sentences: "win the vote", "win the election", "win the game" o One weight vector per class, initially e.g. BIAS : 1, win : 0, game : 0, vote : 0, the : 0, ... for the first class and BIAS : 0, win : 0, game : 0, vote : 0, the : 0, ... for the other two.
Multiclass Example o (Example worked through in the lecture video.)
Properties of Perceptrons o Separability: true if some parameters get the training set perfectly correct o Convergence: if the training data are separable, the perceptron will eventually converge (binary case) o Mistake Bound: in the binary case, the maximum number of mistakes is related to the margin (degree of separability): at most k / δ² mistakes, where k is the square of the radius of the ball containing the data and δ is the margin separating the data classes. (Figures: a separable dataset and a non-separable dataset.)
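As a worked instance of the bound (the numbers are purely illustrative): if every feature vector fits in a ball whose squared radius is k = 100 and the classes are separated by a margin of δ = 0.5, the perceptron makes at most k / δ² = 100 / 0.25 = 400 mistakes on that training set, regardless of the order in which the examples are presented.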
Examples: Perceptron o Non-Separable Case
Problems with the Perceptron o Noise: if the data isn't separable, the weights might thrash o Averaging weight vectors over time can help (averaged perceptron) o Mediocre generalization: finds a barely separating solution o Overtraining: test / held-out accuracy usually rises, then falls o Overtraining is a kind of overfitting
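One straightforward (if inefficient) way to implement the averaged perceptron mentioned above, reusing the classify helper: keep a running sum of the weight vector after every example and return the average. This bookkeeping is a sketch; more efficient variants exist.

from collections import defaultdict

def train_averaged_perceptron(data, passes=10):
    w = defaultdict(float)        # current weights
    total = defaultdict(float)    # running sum of weights over all steps
    steps = 0
    for _ in range(passes):
        for features, y_star in data:
            if classify(w, features) != y_star:
                for name, value in features.items():
                    w[name] += y_star * value
            for name, value in w.items():
                total[name] += value              # accumulate after every example
            steps += 1
    # Averaging smooths out the thrashing caused by noisy, non-separable data
    return {name: value / steps for name, value in total.items()}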
Fixing the Perceptron o Idea: adjust the weight update to mitigate these effects o MIRA*: choose an update size that fixes the current mistake o but minimizes the change to w o The +1 (requiring the right answer to beat the wrong one by at least 1) helps to generalize * Margin Infused Relaxed Algorithm
Minimum Correcting Update o Choose the smallest update (minimum change to w) such that, after the update, the correct class scores at least 1 more than the wrong class. o The minimizing step size is not 0 (otherwise no error would have been made), so the minimum is where the constraint holds with equality.
Maximum Step Size o In practice, it's also bad to make updates that are too large o The example may be labeled incorrectly o You may not have enough features o Solution: cap the maximum possible value of the step size τ with some constant C o Corresponds to an optimization that assumes non-separable data o Usually converges faster than the perceptron o Usually better, especially on noisy data
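A sketch of the capped MIRA update described on the last two slides, using the same per-class weight dicts and activation helper as the multiclass sketch. The exact step-size formula and the cap value C = 0.01 are stated here as assumptions for illustration, not quoted from the slides:

def mira_update(w, features, y_star, y_pred, C=0.01):
    # w: per-class weight dicts (defaultdicts); y_star = correct class, y_pred = predicted class
    f_dot_f = sum(v * v for v in features.values())
    if f_dot_f == 0:
        return                                   # nothing to update against
    score_diff = (activation(w[y_pred], features)
                  - activation(w[y_star], features))
    # Smallest step that fixes the mistake with a margin of 1, capped at C
    tau = min(C, (score_diff + 1.0) / (2.0 * f_dot_f))
    for name, value in features.items():
        w[y_star][name] += tau * value           # raise score of the right answer
        w[y_pred][name] -= tau * value           # lower score of the wrong answer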
Linear Separators o Which of these linear separators is optimal?
Support Vector Machines o Maximizing the margin: good according to intuition, theory, and practice o Only the support vectors matter; other training examples are ignorable o Support vector machines (SVMs) find the separator with the maximum margin o Basically, SVMs are like MIRA where you optimize over all examples at once
Classification: Comparison o Naïve Bayes: o Builds a model of the training data o Gives prediction probabilities o Strong assumptions about feature independence o One pass through the data (counting) o Perceptrons / MIRA: o Makes fewer assumptions about the data o Mistake-driven learning o Multiple passes through the data (prediction) o Often more accurate