
Insights into Machine Learning and Deep Learning Topics
Explore the diverse realms of machine learning and deep learning, covering classic ML topics, deep learning developments, application areas, and a reflection on the past, present, and future of these technologies. Gain valuable knowledge about hidden Markov models, principal component analysis, support vector machines, neural networks, and more.
Presentation Transcript
Epilogue
Mark Stamp
Classic ML Topics
Hidden Markov Models (HMM)
Principal Component Analysis (PCA)
Support Vector Machines (SVM)
Clustering
Boosting & LDA
k-NN & Random Forest
Applications of most of these
Deep Learning Topics
Historical development
Backpropagation
o Automatic differentiation
o Reverse mode automatic differentiation
Multilayer Perceptron (MLP)
Convolutional Neural Networks (CNN)
Recurrent Neural Networks (RNN)
o Gradient issues, LSTM, GRU
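Since the summary lists reverse mode automatic differentiation (the machinery behind backpropagation), here is a minimal illustrative sketch of the idea; it is my own toy example, not code from the book. Each node records its parents and the local derivative, and gradients flow backward from the output.

```python
# Minimal reverse-mode automatic differentiation sketch.
# Each Var stores its value and (parent, local-derivative) pairs;
# backward() propagates gradients from the output toward the inputs,
# which is exactly what backpropagation does in a neural network.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # list of (parent Var, local derivative)

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

def add(a, b):
    # d(a+b)/da = 1, d(a+b)/db = 1
    return Var(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    # d(ab)/da = b, d(ab)/db = a
    return Var(a.value * b.value, [(a, b.value), (b, a.value)])

# Example: f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x
x, y = Var(3.0), Var(4.0)
f = add(mul(x, y), x)
f.backward()
print(f.value, x.grad, y.grad)  # 15.0 5.0 3.0
```

This naive recursion revisits shared nodes once per use; real frameworks instead traverse the graph once in reverse topological order, but the gradients computed are the same.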
Deep Learning Topics
Generative Adversarial Networks (GAN)
Word2Vec
Mini topics
o Residual Networks (ResNet)
o Extreme Learning Machines (ELM)
o Transfer learning, ensembles, accuracy/loss, overfitting, regularization, explainability, adversarial attacks
Machine Learning: Past, Present, and Future
Past
Discussed history of neural networks
o Neural networks originated in the 1940s
o AI winters, deep networks, big data
Classic ML is (mostly) more recent
o HMM developed in the late 1960s
o SVM only became useful in the mid 1990s
o Some older classic statistical techniques are now considered ML
Present
ML/DL/AI/big data is everywhere
All major tech companies do ML
o And most smaller tech companies too
o Many jobs are available!
ML/DL today is as important as software was a decade ago
Example: Machine translation
o Human translators used to be paid a lot
o Machines now translate most things
Future?
"It's difficult to make predictions, especially about the future" (Yogi Berra)
So far, there have been 2 AI winters
o Will there be a 3rd? Never say never
What might cause another AI winter?
o Overpromising, so that the technology does not match the hype
o There is a lot of hype around DL today
Problematic Problems
Some problems are not well-suited to ML
o Perhaps fake news and hate speech
Some examples of fake news are easy
o But some require judgment calls that learning techniques should not make
Some examples of hate speech are easy
o What about freedom of speech?
Is this asking too much of ML/DL?
Fairness
Lots of research on fairness in ML/DL
Topics on the previous slide are ill-defined
Fairness is often easy to define
o Consider a mortgage example, where a bank uses ML to decide who gets loans
o There are legal definitions of fairness
Can a bank use ML and still be fair (in, say, a legal sense)?
Fairness
How to make ML/DL models "fair"?
o Training data might be biased, but models are less biased than humans!
Sensible ways to improve fairness?
o Get more or better training data
o Use ML as one input to human decisions
But much of the research into fairness tries to modify the models themselves
Fairness
Should we modify the inner workings of models to make sure they are "fair"?
o Models are just based on algorithms
o They do not care what the data is
o So, no inherent bias in ML/DL/AI
Modifying the inner workings of models will cause people to lose faith in ML
Is there any limiting principle?
Goldilocks Principle
Trying to solve ill-defined problems is asking too much of ML
Trying to make ML "fair" is asking too little of ML
ML should be applied to problems that are "just right"!
o Well-defined problems, where we can use ML/DL techniques to their full capabilities
Indexing a Book
How to make an index for a book?
Your author used tools in LaTeX
o Add index entries manually to the text
o A tedious and boring job!
Can we train an ML model to make the index?
o Yes, but could such a model satisfy your quirky author?
o This could be challenging!
Indexing
On p. 83 there is a footnote that mentions 867-5309
o This is supposed to be a joke
o It refers to the song "867-5309/Jenny" by the group Tommy Tutone
The index has an entry for Tommy Tutone
o This index entry points to page 83
o But Tommy Tutone is not mentioned on p. 83
Index Entry
How would an ML/DL algorithm know to create the index entry on the previous slide?
o It would require a lot of cultural knowledge
o It would have to recognize that 867-5309 refers to a song from the 1980s
Where would the training data come from?
o Not many tech authors include lame jokes
o It would have to be highly customized to match your author's specific humor
Indexing Problem
Training an ML model for such an index problem would be extremely difficult!
o Far more difficult than simply creating the index manually
o So, it would make little sense to do so
But suppose we only want ML that generates, say, 98% of the index entries?
o Then it's super-simple (TF-IDF will work)
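To make the TF-IDF claim concrete, here is a minimal sketch of how term frequency times inverse document frequency can propose index candidates per page. The page texts below are hypothetical stand-ins, and this is an illustration of the general technique, not the author's actual tooling.

```python
# Minimal TF-IDF sketch for proposing book-index candidates.
# A term that is frequent on one page but rare across the book
# scores high, making it a plausible index entry for that page.

import math
from collections import Counter

pages = {  # hypothetical page texts
    83: "hidden markov models use hidden states and markov chains",
    84: "support vector machines maximize the margin between classes",
    85: "the margin and the kernel trick define support vector machines",
}

# Document frequency: how many pages contain each term
df = Counter()
for text in pages.values():
    df.update(set(text.split()))

n_pages = len(pages)

def index_candidates(page, top_k=3):
    tf = Counter(pages[page].split())  # term frequency on this page
    scores = {t: count * math.log(n_pages / df[t]) for t, count in tf.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(index_candidates(83))  # "hidden" and "markov" rank highest
```

A real pipeline would also need tokenization, stemming, and phrase detection, but the scoring idea is the same.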
Indexing Bottom Line
It makes more sense to use ML to do the bulk of the work
o Then a human can do whatever remains
o That is, ML is a tool for humans
Computers and software are tools
ML/DL/AI are also tools
Should we fear these tools?
o What does the future hold?
Futurists vs the Future
Futurists seem to think that smart robots will take over the world
o Humans may become slaves to the robots (or worse)
Futurists have a poor record of predicting the future!
o Future Shock (1970) predicted that kids would be overwhelmed by technology
o It claimed that kids would need adult mentors to help them deal with computers
Crystal Ball
ML/DL/AI is very useful today
o But what about the future?
Your author's view of the future
o ML/DL/AI will remain very useful
o Even if there is another AI winter
Humans will remain firmly in charge
o Not the other way around!
o The technology will become ubiquitous and fade into the background of life
Latest Research
An interesting recent article
Train an RNN on an easy version of a problem
Then test on a harder version
Finding: if the RNN "thinks longer", it does better on the hard version of the problem
o That is, more iterations of the RNN
Three problems considered
o Bit manipulation, mazes, and a chess problem
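The key idea, that one set of tied recurrent weights can simply be iterated more times on harder inputs, can be shown with a toy example of my own (the article trains real networks; this hand-coded parity "cell" only illustrates the weight-tying principle):

```python
# Toy illustration of "thinking longer" with tied weights:
# one fixed recurrent update rule computes the parity of a bit string,
# and longer (harder) inputs are handled by running more iterations
# of the very same rule, with no change to the "weights".

def recurrent_step(state, bit):
    # One tied-weight update: new state = state XOR next bit
    return state ^ bit

def parity(bits):
    state = 0
    for b in bits:  # more bits => more iterations, same rule
        state = recurrent_step(state, b)
    return state

print(parity([1, 0, 0]))        # short, "easy" input -> 1
print(parity([1, 1, 1, 0, 1]))  # longer input, same rule, more steps -> 0
```

A trained RNN that learns this update from short strings can, in the same way, be unrolled for more steps at test time on longer strings.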