
Effective Transfer Learning Strategies in Machine Learning
Explore the concept of transfer learning in machine learning, where knowledge gained from one task is applied to a related task. Learn about transfer learning strategies such as inductive, unsupervised, and transductive transfer learning, and how they overcome the isolated learning paradigm. Real-world examples show how humans naturally transfer knowledge across tasks.
Presentation Transcript
Transfer Learning & Domain Adaptation Transfer learning is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. For example, knowledge gained while learning to recognize cars could apply when trying to recognize trucks.
Introduction Humans have an inherent ability to transfer knowledge across tasks. What we acquire as knowledge while learning about one task, we utilize in the same way to solve related tasks.
Example: Know how to ride a motorbike → learn how to drive a car. Know how to play classical piano → learn how to play jazz piano. Know math and statistics → learn machine learning.
Isolation Conventional machine learning and deep learning algorithms have traditionally been designed to work in isolation. These algorithms are trained to solve specific tasks, and the models have to be rebuilt from scratch once the feature-space distribution changes.
Transfer learning Transfer learning is the idea of overcoming the isolated learning paradigm and utilizing knowledge acquired for one task to solve related ones.
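As a minimal sketch of this idea (pure NumPy, toy data, and a hypothetical frozen feature extractor standing in for a real pretrained network), we can reuse source-task weights and train only a new head for the target task:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: weights assumed learned on a source
# task; they stay frozen while we adapt to the target task.
W_frozen = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen layer with a ReLU nonlinearity; never updated during transfer.
    return np.maximum(x @ W_frozen, 0.0)

# Toy target-task data: label depends on the first input dimension.
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(float)

# New task-specific head, trained from scratch on the target task only.
w_head = np.zeros(8)
b_head = 0.0
lr = 0.1
for _ in range(300):
    feats = extract_features(X)
    logits = feats @ w_head + b_head
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - y                          # gradient of logistic loss
    w_head -= lr * feats.T @ grad / len(X)
    b_head -= lr * grad.mean()

acc = ((extract_features(X) @ w_head + b_head > 0) == (y == 1)).mean()
```

Only the small head is trained, which is the practical payoff of transfer learning: far fewer parameters to fit on the (often small) target dataset.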
Transfer Learning Strategies There are different transfer learning strategies and techniques, which can be applied based on the domain, task at hand, and the availability of data. 1. Inductive Transfer learning 2. Unsupervised Transfer Learning 3. Transductive Transfer Learning
Transfer Learning Strategies Inductive Transfer Learning: In this scenario, the source and target domains are the same, yet the source and target tasks are different from each other. The algorithms try to utilize the inductive biases of the source domain to help improve the target task. Depending upon whether the source domain contains labeled data or not, this can be further divided into two subcategories, similar to multitask learning and self-taught learning, respectively.
Transfer Learning Strategies Unsupervised Transfer Learning: This setting is similar to inductive transfer, with a focus on unsupervised tasks in the target domain. The source and target domains are similar, but the tasks are different.
Transductive Transfer Learning: In this scenario, there are similarities between the source and target tasks, but the corresponding domains are different. In this setting, the source domain has a lot of labeled data, while the target domain has none.
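A toy illustration of the transductive setting (hypothetical one-dimensional data; the alignment step is a simple mean/variance matching, not a full domain-adaptation method): a classifier fit on the labeled source domain fails on shifted target data until the unlabeled target features are aligned to the source statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Labeled source domain: class 0 near -1, class 1 near +1.
Xs = np.concatenate([rng.normal(-1, 0.3, 50), rng.normal(1, 0.3, 50)])
ys = np.array([0] * 50 + [1] * 50)

# Unlabeled target domain: same task, but features shifted and rescaled.
Xt = np.concatenate([rng.normal(3, 0.6, 50), rng.normal(7, 0.6, 50)])
yt_true = np.array([0] * 50 + [1] * 50)   # held out, only for checking

# Source "classifier": threshold at the midpoint of the class means.
threshold = (Xs[ys == 0].mean() + Xs[ys == 1].mean()) / 2

# Naive transfer fails because the domains differ.
acc_naive = ((Xt > threshold).astype(int) == yt_true).mean()

# Align target statistics to the source distribution, then reuse the
# source classifier unchanged (uses only unlabeled target data).
Xt_aligned = (Xt - Xt.mean()) / Xt.std() * Xs.std() + Xs.mean()
acc_adapted = ((Xt_aligned > threshold).astype(int) == yt_true).mean()
```

Note the adaptation step never touches target labels, which is exactly the constraint of the transductive setting described above.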
Types of Deep Transfer Learning Multitask learning is a slightly different flavor of the transfer learning world. In multitask learning, several tasks are learned simultaneously without distinction between source and target. The learner receives information about multiple tasks at once, as compared to transfer learning, where the learner initially has no idea about the target task.
One-shot Learning Deep learning systems are data-hungry by nature: they need many training examples to learn their weights, which is one of the limiting aspects of deep neural networks. One-shot learning is a variant of transfer learning where we try to infer the required output based on just one or a few training examples. This is especially helpful in real-world scenarios where it is not possible to have labeled data for every possible class.
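A minimal sketch of one-shot classification by nearest prototype (the embedding here is a fixed hand-made projection standing in for a pretrained network, and the class names and vectors are made up): each novel class is represented by the embedding of its single labeled example, and queries are matched to the closest one.

```python
import numpy as np

def embed(x):
    # Stand-in embedding: in practice this is a network pretrained on
    # other classes; here a fixed projection (an assumption).
    W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    v = W @ x
    return v / (np.linalg.norm(v) + 1e-9)

# One labeled example (one "shot") per novel class.
support = {"cat": np.array([1.0, 0.1]), "dog": np.array([0.1, 1.0])}
prototypes = {name: embed(x) for name, x in support.items()}

def classify(query):
    # Nearest prototype in embedding space (cosine similarity).
    q = embed(query)
    return max(prototypes, key=lambda name: q @ prototypes[name])

pred = classify(np.array([0.9, 0.2]))   # query close to the "cat" shot
```

The heavy lifting is done by the embedding learned elsewhere; the one labeled example per class only anchors each prototype.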
Zero-shot Learning Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples to learn a task. Zero-data or zero-shot learning methods make clever adjustments during the training stage itself to exploit additional information in order to understand unseen data.
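A toy sketch of one common way to exploit such additional information: attribute-based zero-shot classification. The attribute vectors and the identity "attribute predictor" below are assumptions for illustration; in practice a model trained on seen classes learns to map inputs into the attribute space.

```python
import numpy as np

# Side information: each class is described by attributes,
# e.g. [has_stripes, has_mane] (hypothetical values).
class_attributes = {
    "zebra": np.array([1.0, 0.0]),
    "horse": np.array([0.0, 1.0]),   # never seen at training time
}

def predict_attributes(image_features):
    # Stand-in for a model trained on *seen* classes to map inputs
    # into attribute space; here an identity map (an assumption).
    return image_features

def zero_shot_classify(image_features):
    # Match predicted attributes to class descriptions: the model can
    # label a class it never saw, because the description bridges them.
    attrs = predict_attributes(image_features)
    return min(class_attributes,
               key=lambda c: np.linalg.norm(attrs - class_attributes[c]))

pred = zero_shot_classify(np.array([0.1, 0.9]))
```

No labeled example of the unseen class is ever used; the attribute description alone connects it to what the model learned on seen classes.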
Distributed-Representations The concept of distributed representations is often central to deep learning, particularly as it applies to natural language tasks.
Figure 1. Sparse (local, non-distributed) representation of shapes
Figure 2. Distributed representation of shapes
Figure 3. Distributed representation of a circle. This representation is more useful, as it provides information about how the new shape is related to the other shapes.
Variants of CNN: DenseNet Densely Connected Convolutional Network. A DenseNet is a type of convolutional neural network.
Variants of CNN: DenseNet DenseNets are the next step toward further increasing the depth of deep convolutional networks. Problems arise in CNNs as they go deeper: because of the long distance between the input and output layers, gradients vanish and information fades before reaching its destination. DenseNet was specially developed to improve accuracy in the presence of this vanishing-gradient problem.
Each layer adds its features on top of the existing feature maps. All feature maps are concatenated with each other. Downsampling is performed on the feature maps (in transition layers between dense blocks).
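A toy sketch of a dense block's connectivity (NumPy only; a per-pixel linear map stands in for the real 3x3 convolutions, and the layer count and growth-rate value are illustrative):

```python
import numpy as np

def dense_block(x, num_layers=4, growth_rate=12):
    # Each layer receives the concatenation of ALL previous feature maps
    # (dense connectivity) and contributes growth_rate new channels.
    features = [x]
    rng = np.random.default_rng(0)
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)          # channels-first
        c, h, w = inp.shape
        W = rng.normal(size=(growth_rate, c))           # 1x1 "conv" weights
        new = np.maximum(W @ inp.reshape(c, h * w), 0)  # conv + ReLU
        features.append(new.reshape(growth_rate, h, w))
    return np.concatenate(features, axis=0)

x = np.zeros((16, 8, 8))      # 16 input channels on an 8x8 feature map
out = dense_block(x)          # 16 + 4 * 12 = 64 output channels
```

The channel count grows linearly with depth (input channels plus num_layers times growth_rate), which is why real DenseNets insert downsampling transition layers between blocks to keep the size manageable.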