DUNE Requirements for Workload Management

"Exploring workload management requirements for the DUNE project involving large scale data handling, machine learning, and novel architectures. Investigating synergies with other experiments like LSST, SKA, and SBND."

  • DUNE
  • Workload Management
  • Data Handling
  • Machine Learning
  • Experiment


Presentation Transcript


  1. DUNE Requirements for Workload Management. Heidi Schellman, Oregon State University, for the computing teams. 3/3/2025

  2. Where DUNE is the same as existing experiments. The usual (see other talks):
  • Multiple PB of data
  • 1,200 collaborators worldwide
  • Need access to code, databases, and data
  • Need monitoring
  • Need to match jobs up with data: is the network fast enough or not? Go to the data? Get the data? Find the data? (A placement sketch follows below.)
  We would prefer to investigate existing solutions and see which features we need and which we don't. Note: events are large, but CPU/MB is not that extreme.
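
  The "match jobs up with data" question on this slide can be read as a simple placement decision: run where the data already sits, or stream it over the network when the transfer time is small next to the CPU time. Below is a minimal Python sketch of such a heuristic. It is not DUNE's actual workload manager; the Site class, the 10% threshold, and all numbers are illustrative assumptions.

      # Minimal placement heuristic (illustrative only, not DUNE's scheduler).
      from dataclasses import dataclass

      @dataclass
      class Site:
          name: str
          has_replica: bool     # does this site already hold the dataset?
          network_gbps: float   # usable bandwidth to the nearest replica
          free_cpu_slots: int

      def placement(dataset_gb: float, cpu_hours: float, site: Site) -> str:
          """Decide where one job should run relative to one candidate site."""
          if site.has_replica:
              return "run here: data is local"
          # Hours needed to stream the dataset to this site.
          transfer_hours = dataset_gb * 8 / site.network_gbps / 3600
          # Assumed rule of thumb: stream if it costs <10% of the CPU time.
          if transfer_hours < 0.1 * cpu_hours and site.free_cpu_slots > 0:
              return "run here: network is fast enough to stream"
          return "go to the data: send the job to a site with a replica"

      print(placement(dataset_gb=40.0, cpu_hours=2.0,
                      site=Site("siteA", False, 10.0, 500)))

  With a 10 Gb/s link, a 40 GB chunk streams in about half a minute, so the sketch answers "run here"; on a slow link, or for a multi-TB chunk, the same rule sends the job to the data.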

  3. Where DUNE differs. DUNE raw events are very large:
  • 4 detector modules, with at least 2 different readout methods
  • SP is 384,000 channels/module, divided up into 150 APA modules
  • Each APA is ~40 MB of unpacked data for a 5 msec readout
  • Supernova events are 6,000 times larger
  Extra DUNE requirement: the ability to deal with chunks of data that vary in scale between 600-word ntuple summaries and 24 TB supernova readouts (see the arithmetic check after this slide). Machine learning brings new requirements: running across multiple cores, and investigating HPCs and GPUs.
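
  These sizes can be cross-checked with back-of-the-envelope arithmetic. The short script below uses only the numbers quoted on the slide (40 MB per APA, 150 APAs per module, the 6,000x supernova factor); where the naive scaling disagrees with the quoted 24 TB, the gap is flagged rather than explained away.

      # Back-of-the-envelope check of the data volumes quoted on the slide.
      APA_MB = 40               # ~40 MB unpacked per APA for a 5 msec readout
      APAS_PER_MODULE = 150     # APA modules per single-phase detector module
      SUPERNOVA_FACTOR = 6_000  # supernova readout vs. a normal event (slide)

      event_gb = APA_MB * APAS_PER_MODULE / 1024         # one full-module readout
      supernova_tb = event_gb * SUPERNOVA_FACTOR / 1024  # naive linear scaling

      print(f"Normal event: ~{event_gb:.1f} GB per module")       # ~5.9 GB
      print(f"Naive supernova estimate: ~{supernova_tb:.0f} TB")  # ~34 TB

      # The slide quotes 24 TB for a supernova readout; the gap between that
      # and the naive ~34 TB scaling is not explained here (compression or a
      # different readout window are plausible), so treat both numbers as
      # order-of-magnitude estimates.

  Either way, the workload system has to span roughly ten orders of magnitude in chunk size: ~2.4 kB for a 600-word ntuple summary (assuming a 32-bit word) up to ~24 TB for a supernova readout.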

  4. Bottom line. DUNE can use much of the same infrastructure as LHC experiments for many phases of our analysis. However, some use cases, raw data and machine learning for example, will require novel workload handling on unfamiliar architectures. Questions: are there synergies with other experiments? LSST, SKA, SBND? NAXX? New ways of doing LHC experiments?
