
Shared Cache Management Techniques
"Discover innovative cache management approach combining dynamic insertion and promotion policies for improved performance and fairness in multi-core systems. Evaluation shows significant performance gains over traditional methods."
PIPP: Promotion/Insertion Pseudo-Partitioning of Multi-Core Shared Caches
Yuejian Xie et al., ACM, 2009
tuckdae@naver.com
Embedded System Lab.
Table of Contents
- Abstract
- Background
- Reference Papers
- PIPP
- Evaluation
- Conclusion
- References
Abstract
- Common cache management policies (e.g., LRU) can lead to poor performance and poor fairness when multiple cores compete for limited LLC capacity.
- Different memory access patterns cause cache contention in different ways.
- The paper proposes a new cache management approach that combines dynamic insertion and promotion policies.
- It provides the benefits of cache partitioning, adaptive insertion, and capacity stealing, all with a single mechanism.
Background
- MRU, LRU, and promotion policies (see the sketch below).
- Cache partitioning: reduces worst-case execution time for critical tasks, thereby enhancing CPU utilization, especially for multi-core applications.
- Examples: page coloring, UCP.
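To make the baseline concrete, here is a minimal sketch (my own illustration, not code from the paper) of a single cache set managed as a recency stack: conventional LRU inserts every new line at the MRU position and promotes a line all the way to MRU on a hit. PIPP later generalizes exactly these two decisions, the insertion position and the promotion step.

```python
class LRUSet:
    """One cache set managed with conventional LRU insertion/promotion (illustrative sketch)."""

    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.stack = []               # index 0 = MRU position, last index = LRU position

    def access(self, tag):
        if tag in self.stack:         # hit: promote the line all the way to MRU
            self.stack.remove(tag)
            self.stack.insert(0, tag)
            return "hit"
        if len(self.stack) == self.num_ways:
            self.stack.pop()          # miss in a full set: evict the LRU line
        self.stack.insert(0, tag)     # insert the new line at the MRU position
        return "miss"

# Hypothetical usage on a 4-way set.
s = LRUSet(num_ways=4)
for t in ["A", "B", "C", "A", "D", "E"]:
    print(t, s.access(t))
```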
Reference Papers
- Capacity management: M. K. Qureshi and Y. N. Patt, "Utility-Based Cache Partitioning: A Low-Overhead, High-Performance, Runtime Mechanism to Partition Shared Caches" (UCP).
- Dead-time management: M. K. Qureshi, A. Jaleel, Y. N. Patt, S. C. Steely Jr., and J. Emer, "Adaptive Insertion Policies for High-Performance Caching" (DIP); and A. Jaleel, W. Hasenplaugh, M. Qureshi, J. Sebot, S. Steely Jr., and J. Emer, "Adaptive Insertion Policies for Managing Shared Caches" (TADIP). The insertion-policy idea they share is sketched below.
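As background for the DIP/TADIP papers above, the sketch below illustrates the insertion-policy idea they build on: instead of always inserting at MRU, most incoming lines are placed at the LRU end so that never-reused lines leave the cache quickly. This is only an illustrative sketch; DIP's set dueling and TADIP's per-thread adaptation are omitted, and the epsilon value is an assumed example.

```python
import random

BIP_EPSILON = 1 / 32      # assumed example value for the bimodal throttle

def insert_line(stack, tag, num_ways, policy="bip"):
    """Insert `tag` into a recency stack (index 0 = MRU) under the chosen insertion policy."""
    if len(stack) == num_ways:
        stack.pop()                   # evict the line at the LRU position
    if policy == "lru" or random.random() < BIP_EPSILON:
        stack.insert(0, tag)          # traditional MRU insertion
    else:
        stack.append(tag)             # bimodal insertion: place the line at the LRU position
```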
PIPP
- Basic PIPP makes use of UCP's utility monitors to compute the target partitions.
- Dynamic insertion: each core's incoming lines are inserted at a priority position determined by its target allocation.
- Dynamic promotion: on a hit, a line is promoted incrementally rather than jumping straight to MRU, which lets cores steal capacity that other cores are not using.
- Stream-Sensitive PIPP: streaming (low-reuse) cores are handled separately.
(A code sketch of the basic mechanism follows.)
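The following sketch shows the basic PIPP mechanism for one cache set, as I understand it from the paper: core i's incoming lines are inserted at a priority position equal to its target allocation pi_i (supplied by UCP-style utility monitors), a hit promotes a line by only a single position with some probability, and the lowest-priority line is always the eviction victim. The class name, data layout, and parameter value are illustrative assumptions, not the paper's implementation.

```python
import random

P_PROM = 3 / 4                        # example promotion probability; the paper tunes this empirically

class PIPPSet:
    """One set of a shared LLC managed with PIPP-style insertion/promotion (illustrative sketch)."""

    def __init__(self, num_ways, targets):
        self.num_ways = num_ways
        self.targets = targets        # {core_id: pi_i}; target allocations sum to num_ways
        self.stack = []               # (core_id, tag) pairs; index 0 = highest priority (MRU end)

    def access(self, core, tag):
        for pos, entry in enumerate(self.stack):
            if entry == (core, tag):                       # hit
                if pos > 0 and random.random() < P_PROM:
                    # promote by a single position, not all the way to MRU
                    self.stack[pos - 1], self.stack[pos] = self.stack[pos], self.stack[pos - 1]
                return "hit"
        if len(self.stack) == self.num_ways:               # miss in a full set
            self.stack.pop()                               # evict the lowest-priority line
        # insert at priority position pi_i, counted from the LRU end of the stack
        insert_pos = max(0, len(self.stack) + 1 - self.targets[core])
        self.stack.insert(insert_pos, (core, tag))
        return "miss"

# Hypothetical usage: an 8-way set shared by two cores with target partitions 6 and 2.
s = PIPPSet(num_ways=8, targets={0: 6, 1: 2})
print(s.access(0, 0x1A), s.access(1, 0x2B), s.access(0, 0x1A))
```

Because promotion is incremental and probabilistic, a core whose lines keep hitting can gradually climb above another core's nominal allocation, which is where the capacity-stealing behavior comes from; Stream-Sensitive PIPP additionally pins cores detected as streaming to a near-LRU insertion position with a much lower promotion probability.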
Evaluation
- Performance impact of the different cache management techniques on weighted IPC speedup (Cooperative Cache Partitioning for Chip Multiprocessors).
- PIPP consistently outperforms unmanaged LRU by a large margin (19.0% on the harmonic mean) and also outperforms both UCP and TADIP (by 10.6% and 10.1%, respectively).
- Similar results hold for the quad-core case, where PIPP is 21.9% better than LRU, 12.1% better than UCP, and 17.5% better than TADIP.
(The speedup metrics are sketched below.)
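For reference, the sketch below computes the metrics quoted above in one common form (an assumed helper, not taken from the paper): per-core speedup is IPC when sharing the LLC divided by IPC when running alone, weighted speedup sums these ratios, and a harmonic-mean summary is often used as the fairness-aware aggregate.

```python
def weighted_speedup(ipc_shared, ipc_alone):
    """Sum over cores of IPC_shared / IPC_alone (higher is better)."""
    return sum(s / a for s, a in zip(ipc_shared, ipc_alone))

def harmonic_mean_speedup(ipc_shared, ipc_alone):
    """Harmonic mean of the per-core speedups, a fairness-aware summary."""
    ratios = [s / a for s, a in zip(ipc_shared, ipc_alone)]
    return len(ratios) / sum(1.0 / r for r in ratios)

# Hypothetical dual-core numbers, not data from the paper.
print(weighted_speedup([0.8, 1.1], [1.0, 1.5]))       # about 1.53
print(harmonic_mean_speedup([0.8, 1.1], [1.0, 1.5]))  # about 0.77
```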
Conclusion
- In this work, we have introduced a single unified technique that can provide the benefits of capacity management, adaptive insertion, and inter-core capacity stealing.
- This work opens several future directions for research.
Q & A Embedded System Lab.
Backup Slides
- Evaluation (additional result figures)