
Insights into Ubiquitous Performance Analysis (UPA)
Explore the world of Ubiquitous Performance Analysis (UPA) with discussions on tools, workflows, measurement tuning, common components, infrastructure sharing, visualization, and trade-offs between applications and facilities.
Ubiquitous Performance Analysis (UPA) Notes
Agenda:
- Talk about other tools in UPA workflows.
- Talk about plugging tools into each other as part of UPA.
- Talk about tuning UPA measurement: what's on by default?
- Shared or common tool components (Score-P, Caliper as measurement; Adiak). A minimal Caliper/Adiak sketch follows this list.
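To make "Caliper as measurement" and Adiak metadata sharing concrete, here is a minimal sketch assuming Caliper's ConfigManager and annotation macros together with Adiak's C++ interface; the region name, metadata values, and the choice of the runtime-report config are illustrative, not from the notes:

    #include <caliper/cali.h>
    #include <caliper/cali-manager.h>
    #include <adiak.hpp>

    int main(int argc, char* argv[]) {
        adiak::init(nullptr);                // no MPI communicator in this sketch
        adiak::value("problem_size", 1024);  // illustrative name/value metadata

        cali::ConfigManager mgr;
        mgr.add("runtime-report");  // nothing is on by default; opt in at runtime
        mgr.start();

        CALI_MARK_BEGIN("solve");
        // ... application work ...
        CALI_MARK_END("solve");

        mgr.flush();    // emit the measurement report
        adiak::fini();
        return 0;
    }

This illustrates the shared-component idea: the same Adiak name/value pairs are visible to any tool linked into the run, not just Caliper.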
- ORNL CAASCADE: the module environment will run source-code analysis. Which MPI calls and functions are applications using? XALT-style information; heavy overlap with monitoring.
- ProPE out of Dresden may be in a UPA area, unsure.
- JSC may be collecting high-level metrics that are shared with users.
- Many applications have built-in timers that record data with every run, so nightly test results can be graphed. A minimal timer sketch follows this list.
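As an illustration of the built-in-timer pattern (not any specific application's code), here is a sketch that appends one CSV row per run for a nightly job to graph; the file name and phase label are hypothetical:

    #include <chrono>
    #include <ctime>
    #include <fstream>

    // Hypothetical built-in timer: each run appends one CSV row
    // (timestamp, phase name, seconds) to a file a nightly job can plot.
    class RunTimer {
        std::chrono::steady_clock::time_point start_;
        const char* phase_;
    public:
        explicit RunTimer(const char* phase)
            : start_(std::chrono::steady_clock::now()), phase_(phase) {}
        ~RunTimer() {
            std::chrono::duration<double> elapsed =
                std::chrono::steady_clock::now() - start_;
            std::ofstream out("timings.csv", std::ios::app);  // illustrative path
            out << std::time(nullptr) << ',' << phase_ << ','
                << elapsed.count() << '\n';
        }
    };

    int main() {
        RunTimer t("main_loop");
        // ... application work ...
        return 0;
    }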
Infrastructure:
- Adiak: sharing name/value pairs among tools.
- MDHIM (hxhim): multidimensional hash in memory; also has a key/value space over MPI.
- Gotcha: an alternative to LD_PRELOAD; wraps functions through an API. A minimal wrapping sketch follows this list.
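A minimal sketch of the Gotcha wrapping pattern, assuming its C API (gotcha_wrap and gotcha_get_wrappee); the wrapped function, the counter, and the tool name are illustrative:

    #include <gotcha/gotcha.h>
    #include <mpi.h>

    // Handle through which the original (wrappee) function is retrieved.
    static gotcha_wrappee_handle_t orig_mpi_send;

    // Wrapper that counts MPI_Send calls, then forwards to the real function.
    static int wrap_mpi_send(const void* buf, int count, MPI_Datatype dt,
                             int dest, int tag, MPI_Comm comm) {
        static long calls = 0;
        ++calls;
        typedef int (*send_fn)(const void*, int, MPI_Datatype, int, int, MPI_Comm);
        send_fn real = (send_fn) gotcha_get_wrappee(orig_mpi_send);
        return real(buf, count, dt, dest, tag, comm);
    }

    static gotcha_binding_t bindings[] = {
        { "MPI_Send", (void*) wrap_mpi_send, &orig_mpi_send },
    };

    // Called once at tool startup to install the wrapper, with no
    // LD_PRELOAD involved.
    void install_wrappers() {
        gotcha_wrap(bindings, 1, "upa_example_tool");  // tool name is illustrative
    }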
UPA differences from classic tools:
- Attribution of metrics to application space.
- Job launch is easier.
- Communication is easier (or more tied to the application).
- Outputting data is managed by the application.
Visualization of UPA data:
- SPOT is LLNL's current idea; it uses crossfilter.
- Maybe technologies in ProPE?
- Feeding data into Splunk or Grafana. A sketch of emitting ingestible records follows this list.
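As a generic sketch of "feeding data" into such systems, one common approach is appending newline-delimited JSON that a log forwarder can ship to Splunk or to a data store behind Grafana; the field names, values, and file path here are hypothetical:

    #include <ctime>
    #include <fstream>

    // Hypothetical example: append one JSON record per run so a log
    // forwarder can feed it to Splunk or a Grafana-backed data store.
    int main() {
        double solve_time = 12.7;   // placeholder metric from the run
        std::ofstream out("upa_metrics.jsonl", std::ios::app);
        out << "{\"time\":" << std::time(nullptr)
            << ",\"app\":\"example_app\""
            << ",\"metric\":\"solve_time_s\""
            << ",\"value\":" << solve_time << "}\n";
        return 0;
    }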
UPA trade-offs between apps and facilities:
- It would be nice to have a UPA that doesn't require as tight application integration: sampling and symbol/line-number lookups for attribution, plus techniques to capture application metadata.
- UPA is harder for facilities that don't develop their own applications, or that run community codes.
Tightly integrated tools/applications help each other:
- Calculating FOMs (figures of merit).
- Load-balance metrics. A minimal MPI sketch follows this list.
- Nicer if tools provide the infrastructure, but applications create/interpret these metrics.
- Can be distinct from UPA, but goes along with it too.
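For the load-balance bullet, here is a minimal MPI sketch assuming the common max/mean definition of imbalance, where a value near 1.0 means well balanced; the metric choice and names are illustrative, not prescribed by the notes:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double t0 = MPI_Wtime();
        // ... rank-local application work ...
        double local = MPI_Wtime() - t0;

        // Imbalance = slowest rank's time / mean time across ranks.
        double maxt, sumt;
        MPI_Reduce(&local, &maxt, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        MPI_Reduce(&local, &sumt, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("load imbalance (max/mean): %g\n",
                        maxt / (sumt / size));
        MPI_Finalize();
        return 0;
    }

This is the division of labor the notes describe: the tool side supplies reusable infrastructure (reductions, reporting), while the application decides which quantity is its FOM and how to interpret it.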