Device Failure Prediction Service at University of Wisconsin-Madison

"Explore the project on device failure prediction by Ankit Maharia, Pulkit Kapoor, and Sreyas Krishna Natarajan at the University of Wisconsin-Madison. The project aims to build a service for creating an open dataset of device health metrics for failure prediction using public datasets. Discover the problem statement, motivation, methodology, service dependencies, and flow of the project. Explore the demo and Ceph integration details to understand the innovative approach in enhancing data availability for researchers."

  • University of Wisconsin-Madison
  • Device Failure Prediction
  • Open Dataset
  • Machine Learning
  • Data Analytics




Presentation Transcript


  1. Device Failure Prediction - Ankit Maharia - Pulkit Kapoor - Sreyas Krishna Natarajan UNIVERSITY OF WISCONSIN-MADISON

  2. Outline
     • Problem Statement & Motivation
     • Methodology/Implementation
     • ML Results
     • Future Scope
     • Conclusion

  3. Problem Statement
     • Build a service to facilitate the creation of an open dataset of device health metrics.
     • Perform failure prediction using public datasets.

  4. Motivation
     • Limited availability of public datasets for researchers.
     • Avoid degraded data redundancy or complete data loss.
     • Tune background scrub speed.

  5. Methodology

  6. Service Dependencies
     • Pecan: Python web framework used to build the REST API.
     • Elasticsearch: stores the device metrics.
     • MongoDB: stores the host_id and host_secret on the service side.
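
As a rough sketch of how these dependencies could be wired together (the hosts, index, database, and collection names below are illustrative assumptions, not the project's actual code):

```python
from pecan import expose
from elasticsearch import Elasticsearch
from pymongo import MongoClient

# Assumed local backing stores; adjust hosts/credentials as needed.
es = Elasticsearch(["http://localhost:9200"])                      # device metrics
hosts = MongoClient("mongodb://localhost:27017")["svc"]["hosts"]   # host_id/host_secret

class RootController:
    @expose("json")
    def index(self):
        # Trivial health-check endpoint served by Pecan.
        return {"status": "ok"}
```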

  7. Flow - Registering a Host
     • Done via a POST API call to /register-host.
     • Returns a unique host_id and host_secret, which are to be used when sending metrics.
     • The client can store them in its persistent storage; for Ceph, we have stored them in the manager store.
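
A minimal client-side sketch of the registration step, assuming the service listens on http://localhost:8080 and returns the credentials as JSON (both are assumptions):

```python
import requests

resp = requests.post("http://localhost:8080/register-host")
resp.raise_for_status()
creds = resp.json()             # assumed shape: {"host_id": ..., "host_secret": ...}
host_id, host_secret = creds["host_id"], creds["host_secret"]
# Persist the pair for later calls; Ceph keeps it in the manager (mgr) store.
```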

  8. Flow - Sending Device Metrics
     • Done via a POST API call to /store-device-metrics.
     • The client sets host_id and host_secret in the request headers and posts the payload.
     • The service stores the payload in Elasticsearch, replacing the device serial number under smartctl_json with a SHA-1 hash.
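
A sketch of both halves of this step. Only the SHA-1 replacement is spelled out by the slides; the payload shape, header names, and service URL are assumptions:

```python
import hashlib
import requests

host_id, host_secret = "example-host-id", "example-secret"   # from /register-host

# Client side: post a smartctl report with the stored credentials in the headers.
payload = {"smartctl_json": {"serial_number": "WD-XYZ123",
                             "temperature": {"current": 34}}}
requests.post("http://localhost:8080/store-device-metrics",
              headers={"host_id": host_id, "host_secret": host_secret},
              json=payload)

# Service side: anonymize the serial number before indexing into Elasticsearch.
serial = payload["smartctl_json"]["serial_number"]
payload["smartctl_json"]["serial_number"] = hashlib.sha1(serial.encode()).hexdigest()
```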

  9. Demo
     • Demo 1
     • Demo 2

  10. CEPH INTEGRATION


  13. FAILURE PREDICTION

  14. SMART (Self-Monitoring, Analysis and Reporting Technology)
     SMART is a monitoring system supported by most drives that reports various indicators of drive health, including several types of errors as well as operational data such as drive temperature and power-on hours.
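
These metrics are typically collected with smartmontools. A sketch using its JSON output (smartctl has shipped a --json flag since version 7.0; the device path and exact report fields vary by drive and are assumptions here):

```python
import json
import subprocess

# Run smartctl for one drive and parse its JSON report.
out = subprocess.run(["smartctl", "--json", "-a", "/dev/sda"],
                     capture_output=True, text=True)
report = json.loads(out.stdout)
print("temperature (C):", report.get("temperature", {}).get("current"))
print("power-on hours: ", report.get("power_on_time", {}).get("hours"))
```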

  15. Features

  16. Features: Interesting
     • SMART 5 (S5): count of reallocated sectors. When a read or write operation on a sector fails, the drive marks the sector as bad and remaps (reallocates) it to a spare sector on disk.
     • SMART 187 (S187): count of read errors that could not be recovered using ECC.
     • SMART 197 (S197): count of unstable sectors. Some drives mark a sector as unstable after a failed read, and remap it only after waiting a while to see whether the data can be recovered in a subsequent read or when it gets overwritten.
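
Pulling these three attributes out of a smartctl JSON report could look like the sketch below (the field layout follows smartmontools' --json schema for ATA drives; treat it as an assumption):

```python
def smart_raw_values(report, ids=(5, 187, 197)):
    """Return {attribute_id: raw_value} for the SMART attributes of interest."""
    table = report.get("ata_smart_attributes", {}).get("table", [])
    return {attr["id"]: attr["raw"]["value"] for attr in table if attr["id"] in ids}
```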

  17. Challenges: How do we define a failure?
     • Users rarely (if ever) upload data regarding failures, which leads to a lack of failure signal.
     • If metrics for a device are not reported on a day but were reported the previous day, it can be assumed to be a failure.
     • If a device has moved from one host to another, it can be marked as a failure (fixed and attached back).
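
A toy implementation of these two heuristics (the data layout and names are invented for illustration):

```python
def label_failures(reports_by_day):
    """reports_by_day maps a date to {device_hash: host_id} for that day."""
    failures = set()
    days = sorted(reports_by_day)
    for prev, curr in zip(days, days[1:]):
        today = reports_by_day[curr]
        for dev, host in reports_by_day[prev].items():
            if dev not in today:            # stopped reporting -> assume failed
                failures.add((dev, prev))
            elif today[dev] != host:        # reappeared on another host:
                failures.add((dev, prev))   # likely fixed and attached back
    return failures
```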

  18. Prediction Pipeline
     • Metrics dumped to the Elasticsearch index are processed on a daily basis: adding a failure signal and checking multipath.
     • The flow was validated using Backblaze; we used the Backblaze dataset from 2013 to 2016.
     • Data was sampled to a 3:2 ratio (label 0 : label 1), where the negative label (0) means the device did not fail.
     • Train failure samples: 3250; test failure samples: 350.
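
The 3:2 downsampling can be reproduced on the Backblaze data roughly as follows (a pandas sketch; the combined file name is an assumption, while the failure column is part of the Backblaze schema):

```python
import pandas as pd

df = pd.read_csv("backblaze_2013_2016.csv")               # assumed combined export
pos = df[df["failure"] == 1]                              # label 1: device failed
neg = df[df["failure"] == 0].sample(n=3 * len(pos) // 2,  # 3 negatives : 2 positives
                                    random_state=42)
balanced = pd.concat([neg, pos]).sample(frac=1, random_state=42)  # shuffle rows
```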

  19. Results
     Best model: Random Forest

     Model   Precision   Recall   F1
     1       0.75        0.67     0.71
     2       0.55        0.83     0.66
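
A minimal scikit-learn sketch of training and scoring a Random Forest on such a sample (continuing from the sampling sketch above; the feature columns are assumptions based on the SMART attributes discussed earlier, not the project's exact feature set):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

features = ["smart_5_raw", "smart_187_raw", "smart_197_raw"]   # assumed feature set
X = balanced[features].fillna(0)
y = balanced["failure"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_test, clf.predict(X_test), average="binary")
print(f"precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
```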

  20. Future Scope
     • Dockerize the service (done!)
     • REST API for prediction
     • Better machine learning models
     • Script to publicly release the dataset

  21. Questions?

  22. Thank You!
