Lessons Learned from Developing Automated Machine Learning on HPC


This presentation by Romain EGELE explores the development of automated machine learning on High-Performance Computing (HPC) systems. Topics covered include multi-fidelity optimization, hyperparameters, model evaluation methods, learning curve extrapolation, and other insights for efficient machine learning development.

  • Machine Learning
  • Automated
  • HPC
  • Optimization
  • Hyperparameters

Uploaded on Sep 21, 2024



Presentation Transcript


  1. Lessons Learned from Developing Automated Machine Learning on HPC. Romain EGELE (romain.egele@universite-paris-saclay.fr)

  2. Selecting the Baseline for Multi-Fidelity Hyperparameter Optimization: "Is One Epoch All You Need for Multi-Fidelity Hyperparameter Optimization?" (arXiv: 2307.15422)

  3. Hyperparameters considered: augmentation, normalization, optimizer, learning rate, number of layers, type of layer
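The hyperparameter list above can be made concrete as a search space. Below is a minimal sketch of such a space and a random sampler; all names, choices, and ranges are illustrative assumptions, not the presentation's actual search space.

```python
import math
import random

# Hypothetical search space covering the slide's hyperparameters.
search_space = {
    "augmentation": ["none", "flip", "flip+crop"],  # categorical
    "normalization": ["batch", "layer"],            # categorical
    "optimizer": ["sgd", "adam"],                   # categorical
    "learning_rate": (1e-4, 1e-1),                  # log-uniform float range
    "num_layers": (2, 8),                           # integer range
    "layer_type": ["dense", "conv"],                # categorical
}

def sample(space):
    """Draw one random configuration from the search space."""
    cfg = {}
    for name, domain in space.items():
        if isinstance(domain, list):                # categorical choice
            cfg[name] = random.choice(domain)
        elif all(isinstance(v, int) for v in domain):
            cfg[name] = random.randint(*domain)     # integer range
        else:                                       # log-uniform float range
            lo, hi = domain
            cfg[name] = math.exp(random.uniform(math.log(lo), math.log(hi)))
    return cfg

config = sample(search_space)
```

A random sampler like this is the usual baseline on top of which multi-fidelity schemes such as successive halving are layered.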

  4. Multi-Fidelity Example with Successive Halving. From: https://amueller.github.io/aml/04-model-evaluation/parameter_tuning_automl.html#successive-halving-different-example
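Successive halving, as illustrated at the linked page, evaluates many configurations at a small budget and repeatedly keeps only the best fraction at a growing budget. A minimal sketch, where the `evaluate(config, budget)` callback and the `eta=3` reduction factor are illustrative assumptions:

```python
def successive_halving(configs, evaluate, min_budget=1, max_budget=27, eta=3):
    """Repeatedly evaluate all surviving configs at the current budget,
    keep the best 1/eta of them, and multiply the budget by eta."""
    budget = min_budget
    while len(configs) > 1 and budget <= max_budget:
        # Lower loss is better.
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy demo: the "loss" is just the config value, so the smallest must win.
best = successive_halving(list(range(9)), lambda cfg, budget: cfg)
```

The design trade-off is that cheap low-budget evaluations may misrank configurations whose learning curves cross later, which is exactly what the 1-epoch baseline question in this presentation probes.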

  5. Learning Curve Extrapolation (LCE): estimating the probability that a configuration performs worse than the best observation
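Learning curve extrapolation fits a parametric model to the observed curve and predicts performance at a larger budget. The presentation's estimators (e.g. the probability of performing worse than the best observation) are richer than this; the sketch below is only a plain power-law fit on synthetic data, as an illustration of the idea:

```python
import numpy as np

def extrapolate_power_law(epochs, losses, target_epoch):
    """Fit loss ~ a * epoch**b by linear regression in log-log space,
    then predict the loss at a larger training budget."""
    slope, log_a = np.polyfit(np.log(epochs), np.log(losses), 1)
    return np.exp(log_a) * target_epoch ** slope

# Toy learning curve exactly following loss = 2 * epoch**-0.5.
epochs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
losses = 2.0 * epochs ** -0.5
pred = extrapolate_power_law(epochs, losses, target_epoch=100)  # -> 0.2
```

With a noisy real curve one would also estimate the residual spread to turn the point prediction into a probability of underperforming the incumbent.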

  6. RoBER versus Weighted Prob. Mixture: failure and success cases

  7. Baselines for Budget: bounds on the budget in training steps, from a minimum number of training steps (1-Epoch) to a maximum number of training steps (100-Epoch)

  8. WINNERS

  9. Low-fidelity evaluations (1-Epoch) can be accurate predictors for model selection. Links: paper and software.

  10. END. Backup slides follow.

  11. Flowchart of the hyperparameter search: Hyperparameter Search Space → Search continue? (True/False) → Suggest Configuration → Training continue? (True/False) → Execute Training Step → Model Selection → Trained Model with Estimated Best Hyperparameters
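The flowchart on this slide can be sketched as two nested loops: an outer loop that suggests configurations while the search continues, an inner loop that executes training steps, and a model-selection step that keeps the best result. All helper names below are hypothetical:

```python
import itertools

def run_search(suggest, train_step, max_configs=3, max_steps=5):
    """Outer loop suggests configurations; inner loop executes training
    steps; model selection keeps the configuration with the lowest loss."""
    best_loss, best_config = float("inf"), None
    for _ in range(max_configs):             # "Search continue?"
        config = suggest()                   # "Suggest Configuration"
        loss = float("inf")
        for step in range(max_steps):        # "Training continue?"
            loss = train_step(config, step)  # "Execute Training Step"
        if loss < best_loss:                 # "Model Selection"
            best_loss, best_config = loss, config
    return best_config, best_loss            # estimated best hyperparameters

# Toy demo: three candidate "configurations"; loss shrinks with each step.
candidates = itertools.cycle([0.5, 0.2, 0.9])
best_config, best_loss = run_search(lambda: next(candidates),
                                    lambda cfg, step: cfg / (step + 1))
```

In a multi-fidelity setting the fixed `max_steps` bound would be replaced by an adaptive stopping rule, e.g. the learning-curve-based early discarding discussed earlier in the deck.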
