Convolutional Neural Networks (CNNs) for Image Processing


"Learn about Convolutional Neural Networks (CNNs) and how they extract higher representations of images for better classification compared to traditional image processing methods. Explore the layers, architecture, and applications of CNNs in image classification, segmentation, and generation. Discover why CNNs are essential for tasks like image recognition and feature extraction."

  • CNNs
  • Image Processing
  • Neural Networks
  • Deep Learning
  • Computer Vision


Presentation Transcript


  1. CONVOLUTIONAL NEURAL NETWORKS (CNNS / CONVNETS) Md Mahin

  2. WHAT AND WHY A CNN is a type of neural network model that can extract higher-level representations of an image. In classical image classification you define the image features by hand; a CNN takes the image's raw pixel data, trains the model, and extracts the features itself for better classification. Regular neural nets don't scale well to full images: full connectivity is wasteful, and the huge number of parameters would quickly lead to overfitting.
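
To make the scaling argument concrete, here is a rough, illustrative parameter count (the image and filter sizes are assumptions, not from the slides): a single fully connected neuron looking at even a small 32x32x3 image already needs 3,072 weights, while one 3x3 convolutional filter spanning the full depth needs only 27 shared weights.

```python
# Illustrative parameter-count comparison (sizes are assumptions, not from the slides).
image_h, image_w, depth = 32, 32, 3

fc_weights_per_neuron = image_h * image_w * depth   # fully connected: 3,072 weights for ONE neuron
conv_filter_weights = 3 * 3 * depth                 # one 3x3 filter over full depth: 27 shared weights

print(f"Fully connected neuron: {fc_weights_per_neuron} weights")
print(f"One 3x3 conv filter:    {conv_filter_weights} weights, reused at every spatial position")
```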

  3. NN VS CNN

  4. CNN

  5. LAYERS Input (2D or 3D), Convolutional layer, ReLU layer, Max-pooling layer, Fully connected layer, and additional layers such as Dropout (a minimal stack is sketched below).
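
As a minimal sketch of this layer ordering (written in PyTorch here as an assumption; the channel counts, kernel size, and class count are illustrative, not taken from the slides):

```python
import torch.nn as nn

# Minimal CNN stack mirroring the layer list above (illustrative sizes only):
# Input (3-channel image) -> Conv -> ReLU -> MaxPool -> Dropout -> Fully Connected
model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),  # convolutional layer
    nn.ReLU(),                                                            # ReLU layer
    nn.MaxPool2d(kernel_size=2, stride=2),                                # max-pooling layer
    nn.Flatten(),                                                         # 16 x 16 x 16 features for a 32x32 input
    nn.Dropout(p=0.5),                                                    # additional layer (Dropout)
    nn.Linear(16 * 16 * 16, 10),                                          # fully connected layer, 10 classes
)
```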

  6. CONVOLUTIONAL LAYER The CONV layer's parameters consist of a set of learnable filters. Every filter is small spatially (along width and height) but extends through the full depth of the input. Each position in a filter holds a learnable weight. Each neuron connects only to a local region of the input [the spatial extent of this connectivity is a hyperparameter called the receptive field of the neuron; equivalently, this is the filter size]. Filters share their parameters across all spatial positions.
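
A naive NumPy sketch of sliding one such filter over an input volume, to make local connectivity and parameter sharing concrete (all shapes below are illustrative assumptions):

```python
import numpy as np

# One learnable filter convolved over a small input volume (stride 1, no padding).
H, W, D = 8, 8, 3                  # input height, width, depth (channels)
F = 3                              # receptive field / filter size (hyperparameter)
x = np.random.rand(H, W, D)        # input volume
w = np.random.rand(F, F, D)        # one filter: small spatially, full depth
b = 0.0                            # bias

out = np.zeros((H - F + 1, W - F + 1))        # resulting activation map
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        patch = x[i:i + F, j:j + F, :]        # local region the neuron "sees"
        out[i, j] = np.sum(patch * w) + b     # the same weights w are reused at every position
```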

  7. CONVOLUTIONAL LAYER

  8. CONVOLUTIONAL LAYER

  9. MAXPOOLING LAYER Max-pooling layers are used in CNNs to replace each region of a feature map with its maximum, reducing the data size and processing time. Max pooling takes two hyperparameters: stride and size. The stride determines how many pixels the pooling window moves at each step; the size determines how large the pooled window is.
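
A small NumPy sketch of max pooling with the two hyperparameters named above (the toy feature map and the default size/stride of 2 are illustrative assumptions):

```python
import numpy as np

# Max pooling with its two hyperparameters: window size and stride.
def maxpool2d(x, size=2, stride=2):
    H, W = x.shape
    out_h = (H - size) // stride + 1
    out_w = (W - size) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = x[i * stride:i * stride + size, j * stride:j * stride + size]
            out[i, j] = window.max()          # keep only the max summary of each window
    return out

fmap = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 feature map
print(maxpool2d(fmap))                            # -> 2x2 output: [[5, 7], [13, 15]]
```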

  10. MAXPOOLING LAYER

  11. SAMPLE IMAGE

  12. USE OF CNN 1. Image classification (LeNet, InceptionNet, ResNet) 2. Image segmentation (UNet, FCNN, RCNN) 3. Image generation (GAN)

  13. OTHER EXAMPLE: AUTOENCODER An autoencoder is a neural network that is trained to attempt to copy its input to its output. They can be supervised or unsupervised, depending on the problem being solved. They are mainly used as a dimensionality-reduction algorithm. Components of an autoencoder: 1. Encoder 2. Code 3. Decoder
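
A minimal autoencoder sketch in PyTorch showing the three components named above (encoder, code, decoder); the layer sizes are illustrative assumptions, not from the slides:

```python
import torch.nn as nn

# Minimal fully connected autoencoder: encoder -> code -> decoder.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))              # compress to the "code"
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x):
        code = self.encoder(x)          # dimensionality-reduced representation
        return self.decoder(code)       # attempt to copy the input to the output
```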

  14. PROPERTIES OF AUTOENCODERS 1. Data-specific: They can only compress data similar to what they have been trained on, because they learn features specific to the training data. 2. Lossy: The output of an autoencoder will not be identical to the input; it will be very similar, but not exactly the same. 3. Unsupervised: Autoencoders are considered unsupervised because they don't need explicit labels to train on; more precisely, they are self-supervised, since they generate their own labels (the inputs themselves) from the training data.

  15. AUTOENCODER ARCHITECTURE

  16. DENOISING USING AUTOENCODER
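
As a rough sketch of the denoising setup this slide illustrates (the tiny model, Gaussian noise level, and MSE loss are assumptions): the input is corrupted with noise, but the reconstruction target stays the clean image.

```python
import torch
import torch.nn as nn

# Denoising-autoencoder training step (illustrative sizes and noise level).
model = nn.Sequential(                            # tiny stand-in autoencoder
    nn.Linear(784, 32), nn.ReLU(),                # encoder -> code
    nn.Linear(32, 784), nn.Sigmoid(),             # decoder
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.rand(64, 784)                       # batch of clean (flattened) images
noisy = clean + 0.3 * torch.randn_like(clean)     # add Gaussian noise to the inputs

loss = loss_fn(model(noisy), clean)               # target is the clean image, not the noisy one
loss.backward()
optimizer.step()
```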

  17. REFERENCE https://cs231n.github.io/convolutional-networks/ http://cs231n.stanford.edu/slides/2018/cs231n_2018_lecture05.pdf
