Energy-Efficient Computing and Scaling of Memcache
This content explores the importance of having a caching tier in the cloud environment to reduce database load, latency, and costs. It delves into the scaling of memcache, challenges in addressing hot spots, load balancing techniques, and considerations for optimizing performance. Various images and diagrams further illustrate key concepts and strategies.
Presentation Transcript
CSE 591: Energy-Efficient Computing — Lecture 16, Scaling: memcache. Anshul Gandhi, 347 CS Building, anshul@cs.stonybrook.edu
Application in the Cloud
[Diagram: requests arrive at λ req/sec through a Load Balancer to the Application Tier; the Caching Tier sits in front of the Database, which sees only λ_DB req/sec.]
Why have a caching tier?
1. Reduce database (DB) load (λ_DB ≪ λ)
Application in the Cloud
[Plot: mean response time (ms) vs. DB req/sec — DB response time rises rapidly at high DB load.]
Why have a caching tier?
1. Reduce database (DB) load (λ_DB ≪ λ)
Application in the Cloud
[Diagram: Load Balancer → Application Tier → Caching Tier → Database; λ req/sec arrive, λ_DB req/sec reach the DB; the caching tier is annotated "> 1/3 of the cost" [Krioukov`10] [Chen`08].]
Why have a caching tier?
1. Reduce database (DB) load (λ_DB ≪ λ)
2. Reduce latency [Ousterhout`10]
⇒ Shrink your cache during low load
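The caching tier reduces DB load via the standard cache-aside pattern: check the cache first and only read the database on a miss. A minimal sketch, assuming hypothetical `CacheAside` and `DictCache` helpers; a real deployment would use a memcached client, which (as the later slides note) needs only GET and SET:

```python
# Cache-aside lookup: check the cache first, fall back to the database on a
# miss, then populate the cache. CacheAside and DictCache are illustrative
# names, not from the slides.

class DictCache:
    """Stand-in for a memcache client: only get and set are needed."""
    def __init__(self):
        self.data = {}
    def get(self, key):
        return self.data.get(key)  # None signals a cache miss
    def set(self, key, value):
        self.data[key] = value

class CacheAside:
    def __init__(self, cache, db):
        self.cache = cache  # cache tier (get/set)
        self.db = db        # dict-like fallback store

    def get(self, key):
        value = self.cache.get(key)
        if value is not None:
            return value            # cache hit: the DB is never touched
        value = self.db[key]        # cache miss: one DB read...
        self.cache.set(key, value)  # ...then fill the cache for next time
        return value
```

With this pattern only misses reach the database, which is exactly why λ_DB ≪ λ when the hit rate is high.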
Is scaling memcache worth it?
It depends on the popularity distribution.
[Plot: hit rate p vs. caching-tier size (% of data cached), comparing Uniform and Zipf popularity — under Zipf, a large decrease in caching-tier size causes only a small decrease in hit rate.]
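The slide's claim can be checked with a toy model: if the cache holds the c most popular of n items, the hit rate is the popularity mass of those c items. This is an illustrative calculation (function names and the choice of Zipf exponent s = 1 are mine, not the slides' workload):

```python
# Hit rate when caching the c most popular of n items, under two
# popularity distributions. Illustrative model only.

def hit_rate_uniform(c, n):
    # every item equally popular: hit rate is just the fraction cached
    return c / n

def hit_rate_zipf(c, n, s=1.0):
    # rank-i item has popularity proportional to 1/i**s;
    # hit rate = popularity mass of the top c items / total mass
    top = sum(1 / i**s for i in range(1, c + 1))
    total = sum(1 / i**s for i in range(1, n + 1))
    return top / total

n = 100_000
for c in (50_000, 5_000):
    print(c, round(hit_rate_uniform(c, n), 3), round(hit_rate_zipf(c, n), 3))
```

Shrinking the cache from 50% to 5% of the data collapses the uniform hit rate tenfold, while the Zipf hit rate drops only modestly — the "small decrease in hit rate for a large decrease in caching-tier size" on the slide.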
Naïve scaling
[Plot: mean response time (ms) vs. time (min).]
Performance can temporarily suffer if we lose a lot of hot data.
Challenges to address
- Hot spots
- Heterogeneity
Load balancing in memcache
- For simplicity, memcache nodes are unaware of each other
  - They act as independent, individual caches
  - Designed to support only GET and SET
  - Not aware of the key→server mapping
- The client (a library at the client) handles the key→server mapping
- mem_hotspots offloads this functionality to a load balancer (LB)
  - Standard consistent hashing for regular keys
  - Special table lookup for hot keys
  - Supports replication or forwarding to a faster server
Challenges
- Calcification: request popularity changes (e.g., from big values to small values)
- Variability across apps: some are overprovisioned, some are underprovisioned
Objectives
- How much memory to allocate to each slab within the total budget
- Which apps will benefit from this?
Solution
Use stack distances to compute the hit-rate function h — the hit rate as a function of cache size.
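The idea is that one pass over an access trace yields h(c) for every LRU cache size c at once: a reference hits in a cache of size c exactly when its stack distance (its depth in the LRU stack) is at most c. A minimal sketch, assuming an LRU cache model (the function name and the O(n·m) stack implementation are mine):

```python
# Compute the LRU hit-rate curve h(c) from stack distances: for each
# reference, its stack distance is its 1-based depth in the LRU stack;
# a cache of size c serves it iff distance <= c. Cold misses (first-ever
# accesses) miss at every size.
from collections import Counter

def lru_hit_rates(trace, max_size):
    """Return {c: hit rate for an LRU cache of size c} for c = 1..max_size."""
    stack = []               # LRU stack, most recently used first
    dist_counts = Counter()  # stack distance -> number of references
    for key in trace:
        if key in stack:
            dist_counts[stack.index(key) + 1] += 1
            stack.remove(key)
        stack.insert(0, key)  # key becomes most recently used
    n = len(trace)
    hits, rates = 0, {}
    for c in range(1, max_size + 1):
        hits += dist_counts[c]  # distances <= c are hits at size c
        rates[c] = hits / n
    return rates
```

For example, `lru_hit_rates(["a", "b", "a", "b"], 2)` gives a 0% hit rate at size 1 (each key is evicted before its reuse) and 50% at size 2. This per-size curve is what lets a memory allocator decide how much each slab or app gains from extra cache space.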