HTCondor Monitoring and Operations Insights

In-depth insights into HTCondor monitoring and operations, focusing on tracking status, analyzing job profiles, resolving failures, and creating informative dashboards. Learn about the motivations for monitoring OSG Connect, challenges with existing tools, and the Graphite tool for real-time graphing. Discover how to send HTCondor stats to Graphite and overcome parsing issues with the Python bindings.

  • HTCondor
  • Monitoring
  • Operations
  • Graphite
  • OSG Connect


Presentation Transcript


  1. A Year of HTCondor Monitoring
     Lincoln Bryant, Suchandra Thapa
     HTCondor Week 2015, May 21, 2015

  2. Analytics vs. Operations
     Two parallel tracks in mind: operations and analytics.
     Operations needs to:
       o Observe trends.
       o Be alerted to and respond to problems.
     Analytics needs to:
       o Store and retrieve the full ClassAds of every job.
       o Perform deep queries over the entire history of jobs.

  3. Motivations
     For OSG Connect, we want to:
       o track HTCondor status
       o discover the job profiles of various users to determine what resources users need
       o determine when user jobs are failing and help users correct the failures
       o create dashboards to provide high-level overviews
       o open our monitoring up to end-users
       o be alerted when there are problems
     Existing tools like Cycleserver, Cacti, Ganglia, etc. did not adequately cover our use case.

  4. Operations

  5. Graphite
     Real-time graphing for time series data. Open source, developed by Orbitz.
     Three main components:
       o Whisper - data format, replacement for RRD
       o Carbon - listener daemon
       o Front-end - web interface for metrics
     Dead simple protocol: open a socket and fire off metrics in the form of:
       path.to.metric <value> <timestamp>
     Graphite home page: http://graphite.wikidot.com/
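
     The protocol really is just that one line per metric. As a minimal sketch of the same idea in Python (the hostname and metric path below are placeholders, not from the slides), a metric can be pushed to Carbon's plaintext port with nothing more than a socket:

     import socket
     import time

     metric = "htcondor.running"          # hypothetical metric path
     value = 42                           # whatever was just measured
     line = "%s %s %d\n" % (metric, value, int(time.time()))

     # Carbon's plaintext listener conventionally runs on port 2003
     sock = socket.create_connection(("graphite.yourdomain.edu", 2003))
     sock.sendall(line.encode())
     sock.close()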

  6. Sending HTCondor stats
     To try it out, you can just parse the output of condor_q into the desired format, then simply use netcat to send it to the Graphite server:

     #!/bin/bash
     metric="htcondor.running"
     # count jobs shown in the running (R) state
     value=$(condor_q | grep " R " | wc -l)
     timestamp=$(date +%s)
     echo $metric $value $timestamp | nc \
         graphite.yourdomain.edu 2003

  7. A simple first metric
     Run the script with cron:

  8. Problems with parsing
     Parsing condor_q output is a heavyweight and potentially fragile operation:
       o Especially if you do it once a minute.
       o Be prepared for gaps in the data if your schedd is super busy.
     What to do then?
       o Ask the daemons directly with the Python bindings!

  9. Collecting Collector stats via Python
     Here we ask the collector for slots in the claimed state and sum them up:

     import time
     import classad, htcondor

     coll = htcondor.Collector("htcondor.domain.edu")
     slotState = coll.query(htcondor.AdTypes.Startd, "true",
                            ['Name', 'JobId', 'State', 'RemoteOwner',
                             'COLLECTOR_HOST_STRING'])
     slot_claimed = 0
     timestamp = int(time.time())
     for slot in slotState:
         if slot['State'] == "Claimed":
             slot_claimed += 1
     print("condor.claimed " + str(slot_claimed) + " " + str(timestamp))
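
     In the same spirit, here is a hedged sketch (not from the slides) of replacing the earlier condor_q parsing with a direct schedd query; the metric name is a placeholder:

     import time
     import htcondor

     schedd = htcondor.Schedd()                  # local schedd by default
     running = 0
     for job in schedd.query('true', ['JobStatus']):
         if job['JobStatus'] == 2:               # JobStatus 2 == Running
             running += 1
     print("condor.running %d %d" % (running, int(time.time())))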

  10. Sample HTCondor Collector summary Fire off a cron & come back to a nice plot:

  11. Grafana
     Graphite is nice, but static PNGs are so Web 1.0. Fortunately, Graphite can export raw JSON instead.
     Grafana is an overlay for Graphite that renders the JSON data using flot:
       o Nice HTML/JavaScript-based graphs
       o Admins can quickly assemble dashboards, carousels, etc.
       o Saved dashboards are backed by a NoSQL database
     All stored Graphite metrics should work out of the box.
     Grafana home page: http://grafana.org/
     flot home page: http://www.flotcharts.org/
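
     For a sense of what Grafana consumes, here is a small sketch of pulling the raw JSON for one metric from Graphite's render API (the host and metric name are placeholders):

     import json
     from urllib.request import urlopen

     url = ("http://graphite.yourdomain.edu/render"
            "?target=htcondor.running&from=-1h&format=json")
     for series in json.load(urlopen(url)):
         # each series carries a target name and a list of [value, timestamp] pairs
         print(series["target"], len(series["datapoints"]), "datapoints")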

  12. Sample dashboard

  13. Sample dashboard

  14. An example trend
     A user's jobs were rapidly flipping between idle and running. Why? It turns out to be a problematic job with an aggressive periodic release:
       periodic_release = ((CurrentTime - EnteredCurrentStatus) > 60)
     (Credit to Mats Rynge for pointing this out.)

  15. Active monitoring with Jenkins
     Nominally a continuous integration tool for building software, but easily configured to submit simple Condor jobs instead.
     Behaves similarly to a real user:
       o Grabs the latest functional tests via git
       o Runs condor_submit for a simple job
       o Checks for correct output, notifies upon failure
     Plethora of integrations for notifying sysops of problems: email, IRC, XMPP, SMS, Slack, etc.
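
     The slides do not show the test jobs themselves; as a minimal sketch of this kind of check (the submit file name and expected output string are hypothetical), Jenkins could run something like:

     import os
     import subprocess
     import time

     # Submit a trivial test job; sleep_test.sub is a hypothetical submit file.
     subprocess.check_call(["condor_submit", "sleep_test.sub"])

     # Naive polling loop; a real test might use condor_wait on the job log instead.
     for _ in range(60):
         if os.path.exists("sleep_test.out"):
             break
         time.sleep(10)
     else:
         raise SystemExit("test job output never appeared")

     with open("sleep_test.out") as f:
         assert "hello from the worker" in f.read(), "unexpected job output"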

  16. Jenkins monitoring samples
     The dashboard gives a reassuring all-clear; the Slack integration logs and notifies the support team of problems.

  17. Operations - Lessons learned
       o Using the HTCondor Python bindings for monitoring is just as easy as, if not easier than, scraping condor_{q,status}.
       o If you plan to have a lot of metrics, the sooner you move to SSD(s), the better.
       o Weird oscillatory patterns, sudden drops in running jobs, and huge spikes in idle jobs can all be indicative of problems.
       o Continuous active testing plus alerting infrastructure is key for catching problems before end-users do.

  18. Analytics

  19. In the beginning...
     We started with a summer project where students would be visualizing HTCondor job data.
     To speed up startup, we wrote a small Python script (~50 lines) that queried HTCondor for history information and added any new records to a MongoDB server:
       o Intended so that students would have an existing data source
       o Ended up being used for much longer
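
     The original ~50-line script is not reproduced in the slides. As a rough sketch of the idea (the MongoDB host, database/collection names, and attribute list are assumptions, and the bindings' history call is just one way to fetch the records), the Python bindings plus pymongo are enough:

     import htcondor
     from pymongo import MongoClient

     attrs = ["GlobalJobId", "Owner", "JobStatus", "RemoteWallClockTime"]
     jobs = MongoClient("mongodb.yourdomain.edu").condor.history

     schedd = htcondor.Schedd()
     for ad in schedd.history("true", attrs, 500):      # most recent 500 history records
         record = {a: ad.get(a) for a in attrs}
         # GlobalJobId is unique per job, so upserting avoids duplicate records
         jobs.replace_one({"GlobalJobId": record["GlobalJobId"]}, record, upsert=True)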

  20. Initial data visualization efforts
     Had a few students working on visualizing the data over the last summer:
       o Generated a few visualizations using MongoDB and other sources
       o Tried importing data from MongoDB into Elasticsearch and viewing it with Kibana
       o Used a few visualizations for a while but eventually stopped due to the maintenance required
     Created a homebrew system using MongoDB, Python, Highcharts, and CherryPy.

  21. Current setup
       o Probes to collect condor history from the log file and to check the schedd every minute
       o Redis server providing pub/sub channels for the probes
       o Logstash to follow the Redis channels and insert data into Elasticsearch
       o Elasticsearch cluster for storage and queries
       o Kibana for user and operational dashboards; RStudio/Python scripts for more complicated analytics

  22. Indexing job history information
     A Python script polls the history logs periodically for new entries and publishes them to a Redis channel.
     The ClassAds get published to a channel on the Redis server and read by Logstash.
     Due to the size of the ClassAds in Elasticsearch, and because ES only works on data in memory, the data goes into a new index each month.
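
     A sketch of the publish side of such a probe (the Redis host, channel name, and sample document are placeholders; Logstash's redis input can subscribe to the same channel):

     import json
     import redis

     r = redis.Redis(host="redis.yourdomain.edu")

     # 'doc' stands in for one job ClassAd the probe has converted to a plain dict
     doc = {"GlobalJobId": "submit.example.org#123.0#1416170000",
            "Owner": "someuser", "JobStatus": 4}
     r.publish("htcondor-history", json.dumps(doc))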

  23. Indexing schedd data
     A Python script is run every minute by a cron job and collects the ClassAds for all jobs.
     The complete set of job ClassAds is put into an ES index for that week.
     The script also inserts a record with the number of jobs in each state into another ES index.
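
     A sketch of the state-count part of that probe (the ES host, index name, and use of the elasticsearch-py client are assumptions for illustration):

     from collections import Counter
     from datetime import datetime

     import htcondor
     from elasticsearch import Elasticsearch

     es = Elasticsearch(["es.yourdomain.edu"])
     status_names = {1: "idle", 2: "running", 3: "removed", 4: "completed", 5: "held"}

     counts = Counter()
     for job in htcondor.Schedd().query("true", ["JobStatus"]):
         counts[status_names.get(job["JobStatus"], "other")] += 1

     doc = dict(counts)
     doc["timestamp"] = datetime.utcnow().isoformat()
     es.index(index="condor-job-counts", body=doc)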

  24. Querying/Visualizing information
     All of this information is good, but we need a way of querying and visualizing it.
     Luckily, Elasticsearch integrates with Kibana, which provides a web interface for querying and visualization.

  25. Kibana Overview: time range selector, query field, sampling of documents found, hits over time.

  26. Plotting/Dashboards with Kibana
     Kibana can also generate plots of the data as well as dashboards. Plots and dashboards can be exported!

  27. Some (potentially interesting) plots
     Queries and plots were developed in Kibana; the data was exported as a CSV file and plotted using RStudio/ggplot.

  28. Average number of job starts over time
     94% of jobs succeed on the first attempt.
     Note: most jobs use periodic hold and periodic release ClassAds to retry failed jobs, so invalid submissions may result in multiple restarts, inflating this number.

  29. Average job duration on compute node
     Plot of the average job duration on compute nodes: the majority of jobs complete within an hour, with a few projects having outliers (mean: 1050 s, median: 657 s).

  30. Average bytes transferred per job using HTCondor
     Most jobs transfer significantly less than 1 GB per job through HTCondor (mean: 214 MB, median: 71 MB).
     Note: this does not take into account data transferred by other methods (e.g. wget).

  31. Memory usage
     Most jobs use less than 2 GB of memory, with the proteomics and snoplus projects having the highest average memory utilization (mean use: 378 MB, median use: 118 MB).

  32. Other uses
     The analytics framework is not just for dashboards and pretty plots; it can also be used to troubleshoot issues.

  33. Troubleshooting a job
     In November 2014, there was a spike in the number of restarts. We would like to investigate this and see if we can determine the cause and a fix.

  34. Troubleshooting part 2
     Select the time range in question and split out the projects with the most restarts: the algdock project seems to be responsible for most of the restarts.

  35. Troubleshooting concluded
     Next, we go to the Discover tab in Kibana, search for records in the relevant time frame, and look at the ClassAds. It quickly becomes apparent that the problem is due to a missing output file that results in the job being held, combined with a PeriodicRelease expression for the job.
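
     The same lookup can also be done programmatically against Elasticsearch. In this hedged sketch, the index name and field names are assumptions about the schema described earlier:

     from elasticsearch import Elasticsearch

     es = Elasticsearch(["es.yourdomain.edu"])
     query = {
         "query": {
             "bool": {
                 "must": [
                     {"term": {"ProjectName": "algdock"}},
                     {"range": {"NumJobStarts": {"gt": 1}}},
                     {"range": {"@timestamp": {"gte": "2014-11-01",
                                               "lt": "2014-12-01"}}},
                 ]
             }
         }
     }
     result = es.search(index="condor-history-2014.11", body=query)
     for hit in result["hits"]["hits"]:
         print(hit["_source"].get("HoldReason"))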

  36. Future directions
     Update the schedd probe to use the LogReader API to get ClassAd updates rather than querying the schedd for all ClassAds.
     More and better analytics:
       o Explore the data more and pull out analytics that are useful for operational needs
       o Update user dashboards to present information that users are interested in

  37. Links
     GitHub repo: https://github.com/DHTC-Tools/logstash-confs/tree/master/condor

  38. Questions? Comments?
