Unsupervised Learning and Neural Networks: A Comprehensive Guide

Training neural networks with unsupervised learning

Explore the world of unsupervised learning in neural networks, including concepts like clustering, autoencoders, and more. Discover the key differences between supervised and unsupervised learning approaches and their applications in solving complex data problems.

  • Unsupervised Learning
  • Neural Networks
  • Clustering
  • Autoencoders
  • Machine Learning




Presentation Transcript


  1. Training Neural Networks with Unsupervised Learning, by Eder Santana

  2. Supervised vs. unsupervised. Supervised learning requires (input x, desired output) pairs; if you have enough labeled data, you are more likely to solve your problem. Unsupervised learning considers only the input data, and unlabeled data is almost always guaranteed to be available.

  3. Unsupervised learning methods: clustering, Principal Component Analysis, Independent Component Analysis, dimensionality reduction, filtering and noise reduction, metric learning.
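One of the methods listed above, Principal Component Analysis, can be sketched in a few lines of NumPy via the SVD; the toy data, sample count, and choice of two components below are illustrative assumptions, not from the slides.

```python
import numpy as np

# Minimal PCA sketch: project centered data onto its top principal directions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # toy data: 200 samples, 5 dims
Xc = X - X.mean(axis=0)                     # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2]                         # top-2 principal directions
Z = Xc @ components.T                       # 2-D projection (dimensionality reduction)
print(Z.shape)                              # (200, 2)
```

The rows of `Vt` are orthonormal, so the projection preserves as much variance as any 2-D linear map can.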

  4. Unsupervised learning with neural networks: autoencoders, Restricted Boltzmann Machines, adversarial networks, Siamese networks.

  5. Unsupervised learning and neural networks: finding modes in the data.
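"Finding modes in the data" can be illustrated with a tiny k-means loop on a bimodal toy dataset; the data, k = 2, and the initial centers below are illustrative assumptions, not from the slides.

```python
import numpy as np

# Mode-finding sketch: k-means on 1-D data drawn from two well-separated clusters.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 0.5, 100), rng.normal(3, 0.5, 100)])
centers = np.array([-1.0, 1.0])             # rough initial guesses
for _ in range(20):
    # assign each point to its nearest center, then move centers to cluster means
    assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([x[assign == k].mean() for k in (0, 1)])
print(np.sort(centers))                     # close to the true modes -3 and 3
```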

  6. Unsupervised learning and neural networks. Autoencoder: L = E[(x - d(e(x)))²], where e is the encoder and d is the decoder.
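The autoencoder loss above can be minimized with plain gradient descent; the sketch below uses a linear encoder and decoder, and all sizes, the learning rate, and the iteration count are illustrative choices, not from the slides.

```python
import numpy as np

# Gradient-descent sketch of the autoencoder objective L = E[(x - d(e(x)))^2]
# with linear e and d (a linear autoencoder recovers the PCA subspace).
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                # toy data: 256 samples, 8 dims
We = rng.normal(scale=0.1, size=(8, 3))      # encoder weights (8 -> 3)
Wd = rng.normal(scale=0.1, size=(3, 8))      # decoder weights (3 -> 8)

def loss(X, We, Wd):
    return np.mean((X - X @ We @ Wd) ** 2)   # E[(x - d(e(x)))^2]

loss_before = loss(X, We, Wd)
lr = 0.5
for _ in range(500):
    H = X @ We                               # e(x) for the whole batch
    R = H @ Wd                               # d(e(x))
    G = 2.0 * (R - X) / X.size               # dL/dR
    gWd = H.T @ G                            # dL/dWd
    gWe = X.T @ (G @ Wd.T)                   # dL/dWe
    Wd -= lr * gWd
    We -= lr * gWe
loss_after = loss(X, We, Wd)
print(loss_before, "->", loss_after)         # reconstruction error decreases
```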

  7. Unsupervised learning and neural networks. Denoising autoencoder: L = E[(x - d(e(x + n)))²], where n is injected noise, e is the encoder, and d is the decoder.
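The key point of the denoising objective is that the input is corrupted before encoding but the target is still the clean x. A one-batch sketch, where the identity encoder/decoder, sizes, and noise level are illustrative placeholders:

```python
import numpy as np

# Denoising autoencoder loss sketch: L = E[(x - d(e(x + n)))^2].
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 4))                 # toy clean inputs
noise = rng.normal(scale=0.3, size=X.shape)   # the corruption n
We = np.eye(4)                                # placeholder encoder weights
Wd = np.eye(4)                                # placeholder decoder weights

H = (X + noise) @ We                          # e(x + n): encode the CORRUPTED input
R = H @ Wd                                    # d(e(x + n))
L = np.mean((X - R) ** 2)                     # compare against the CLEAN x
print(L)                                      # about the noise variance (0.3^2)
```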

  8. Unsupervised learning and neural networks. Sparse autoencoder: L = E[(x - d(e(x)))²] + λ‖e(x)‖₁, where e is the encoder and d is the decoder.
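The sparse objective just adds an L1 penalty on the code to the reconstruction error. A one-batch sketch; λ, the tanh encoder, and all sizes are illustrative assumptions:

```python
import numpy as np

# Sparse autoencoder loss sketch: L = E[(x - d(e(x)))^2] + lam * ||e(x)||_1.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 6))                 # toy data
We = rng.normal(scale=0.1, size=(6, 3))      # encoder weights
Wd = rng.normal(scale=0.1, size=(3, 6))      # decoder weights
lam = 0.01                                   # sparsity weight (illustrative)

H = np.tanh(X @ We)                          # e(x): the code
R = H @ Wd                                   # d(e(x))
recon = np.mean((X - R) ** 2)                # reconstruction term
sparsity = np.mean(np.abs(H).sum(axis=1))    # E[ ||e(x)||_1 ]
L = recon + lam * sparsity
print(recon, "+", lam * sparsity, "=", L)
```

Minimizing the penalty drives code units toward zero, so only a few activate per input.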

  9. Unsupervised learning and neural networks. Functional regularization for autoencoders: L = E[(x - d(e(x)))²] + λ D(e, p₀), where D measures the divergence of the encoder's output from a reference p₀.

  10. Unsupervised learning and neural networks. Sparse autoencoder: L = E[(x - d(e(x)))²] + λ‖e(x)‖₁. Encoder and decoder as MLPs: e(x) = tanh(Wx + b), d(x) = Wᵀe(x) + b′ (tied weights).
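The MLP parameterization on this slide can be written directly in NumPy. The slide's W` is read here as the tied transpose Wᵀ, a common convention (the decoder could also use independent weights); the layer sizes are illustrative.

```python
import numpy as np

# MLP autoencoder from the slide: e(x) = tanh(W x + b), d(h) = W^T h + b'.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 8))   # 8-dim input, 3-dim code
b = np.zeros(3)                          # encoder bias
b2 = np.zeros(8)                         # decoder bias

def encode(x):
    return np.tanh(W @ x + b)            # code lies in (-1, 1)

def decode(h):
    return W.T @ h + b2                  # tied weights: decoder reuses W^T

x = rng.normal(size=8)
h = encode(x)
x_hat = decode(h)
print(x_hat.shape)                       # (8,)
```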

  11. Unsupervised learning and neural networks: how to go deeper?

  12. Unsupervised learning and neural networks: how to go deeper?

  13. More videos:
  • Hugo Larochelle: Neural networks [7.6] : Deep learning - deep autoencoder
  • Hugo Larochelle: Neural networks [7.3] : Deep learning - unsupervised pre-training
  • Nando de Freitas: Machine learning - Deep learning II, the Google autoencoders and dropout
