Elasticity in Main Memory Databases

Squall: Fine-Grained Live Reconfiguration for Partitioned Main Memory Databases

Explore the concepts of fine-grained live reconfiguration, scaling out via partitioning, and design approaches for main-memory DBMSs in high-throughput transactional systems. Learn about the challenges of workload skew and the promise of elasticity for managing resources effectively and improving system performance.

  • Main Memory Databases
  • Elasticity
  • Workload Skew
  • High-throughput Systems

Presentation Transcript


  1. Squall: Fine-Grained Live Reconfiguration for Partitioned Main Memory Databases. Aaron J. Elmore, Vaibhav Arora, Rebecca Taft, Andy Pavlo, Divy Agrawal, Amr El Abbadi

  2. Higher OLTP Throughput. Demand for high-throughput transactional (OLTP) systems is growing, especially due to web-based services. The cost per GB of RAM is dropping, and network memory is faster than local disk. Let's use main memory.

  3. Scaling Out via Partitioning. As data grows in scale, data partitioning enables managing that scale by scaling out.

  4. Approaches for Main-Memory DBMSs*. Highly concurrent, latch-free data structures (Hekaton, Silo), or partitioned data with single-threaded executors (H-Store, VoltDB). *Excuse the generalization.

  5. [Diagram: a client application invokes a stored procedure by procedure name and input parameters. Slide credits: Andy Pavlo]

  6. The Problem: Workload Skew. High skew increases latency by 10x and decreases throughput by 4x; partitioned shared-nothing systems are especially susceptible. [Chart: throughput (txns/s) under no skew, low skew, and high skew]

  7. The Problem: Workload Skew. Possible solution: provision resources for peak load (very expensive and brittle!). [Chart: capacity vs. demand over time, showing unused resources]

  8. The Problem: Workload Skew. Possible solution: limit the load on the system (poor performance!). [Chart: resources over time]

  9. Need Elasticity

  10. The Promise of Elasticity. [Chart: capacity adapts to demand over time, minimizing unused resources. Slide credits: Berkeley RAD Lab]

  11. What We Need. Enable the system to elastically scale in or out to dynamically adapt to changes in load. Reconfiguration: change the partition plan, add nodes, or remove nodes.

  12. Problem Statement. We need to migrate tuples between partitions to reflect the updated partition plan. Example: the old plan maps Partition 1 to warehouses [0,1), Partition 2 to [2,3), and Partition 3 to [1,2) and [3,6); the new plan maps Partition 1 to [0,2), Partition 2 to [2,4), and Partition 3 to [4,6). We would like to do this without bringing the system offline: live reconfiguration.
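
A small sketch of how the set of migrations implied by the plan change above could be computed; the data structures and function names here are my own illustration, not Squall's actual code.

```python
# Hypothetical plan-diff sketch: given an old and a new partition plan
# (partition id -> list of half-open warehouse-ID ranges), find which
# warehouse IDs change owner, and from where to where.
OLD_PLAN = {1: [(0, 1)], 2: [(2, 3)], 3: [(1, 2), (3, 6)]}
NEW_PLAN = {1: [(0, 2)], 2: [(2, 4)], 3: [(4, 6)]}

def owner(plan, key):
    """Return the partition that owns `key` under `plan`, or None."""
    for pid, ranges in plan.items():
        if any(lo <= key < hi for lo, hi in ranges):
            return pid
    return None

def migrations(old_plan, new_plan, key_space):
    """Yield (key, source, destination) for every key that changes owner."""
    for key in key_space:
        src, dst = owner(old_plan, key), owner(new_plan, key)
        if src is not None and dst is not None and src != dst:
            yield key, src, dst

for key, src, dst in migrations(OLD_PLAN, NEW_PLAN, range(6)):
    print(f"warehouse {key}: partition {src} -> partition {dst}")
# warehouse 1: partition 3 -> partition 1
# warehouse 3: partition 3 -> partition 2
```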

  13. E-Store. During normal operation, high-level monitoring runs. When a load imbalance is detected, tuple-level monitoring (E-Monitor) collects hot tuples and partition-level access counts, tuple placement planning (E-Planner) produces a new partition plan, and online reconfiguration (Squall) applies it. Once the reconfiguration is complete, the system returns to normal operation.
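
A rough sketch of the E-Store control loop described on this slide, with placeholder monitor, planner, and Squall interfaces invented for illustration.

```python
import time

# Hypothetical control loop: cheap high-level monitoring during normal
# operation; on imbalance, switch to tuple-level monitoring, plan a new
# tuple placement, and apply it with a live reconfiguration.
def e_store_loop(monitor, planner, squall, poll_interval_s=5.0):
    while True:
        stats = monitor.partition_level_stats()              # always-on, low overhead
        if monitor.imbalance_detected(stats):
            hot_tuples, counts = monitor.tuple_level_stats()  # E-Monitor
            new_plan = planner.place(hot_tuples, counts)      # E-Planner
            squall.reconfigure(new_plan)                      # Squall, system stays live
            # reconfiguration complete; fall back to high-level monitoring
        time.sleep(poll_interval_s)
```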

  14. Existing Live Migration Solutions Are Not Suitable. They are predicated on disk-based designs with traditional concurrency control and recovery. Zephyr relies on concurrency control (2PL) and disk pages. ProRea relies on concurrency control (SI and OCC) and disk pages. Albatross relies on replication and shared disk storage, and also puts strain on the source.

  15. Not Your Parents' Migration. With a single-threaded execution model, a partition is either doing work or migrating. There can be more than a single source and destination (and the destination is not cold). We want lightweight coordination, and must cope with distributed transactions and replication.

  16. Squall. Given a plan from E-Planner, Squall physically moves the data while the system is live. It uses a pull-based mechanism: the destination pulls from the source. It conforms to H-Store's single-threaded execution model: while data is moving, transactions are blocked, but only on the partitions moving that data. To avoid performance degradation, Squall moves small chunks of data at a time, interleaved with regular transaction execution.
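
A sketch of the destination-side reactive pull behavior described here, assuming a partition executor that tracks which incoming ranges have not yet arrived; all class and method names are illustrative, not from the Squall codebase.

```python
# Hypothetical destination-side logic: if a transaction touches a key in an
# incoming range that has not arrived yet, block only this single-threaded
# partition while issuing a synchronous ("reactive") pull for just that
# range, then continue executing the transaction locally.
class DestinationPartition:
    def __init__(self, incoming_ranges, rpc):
        # incoming_ranges: {(lo, hi): source_partition_id}, half-open ranges
        self.pending = dict(incoming_ranges)
        self.rpc = rpc            # stand-in for partition-to-partition messaging
        self.local_data = {}      # key -> tuples already resident here

    def _pending_range(self, key):
        """Return the not-yet-migrated range covering `key`, if any."""
        return next(((lo, hi) for (lo, hi) in self.pending if lo <= key < hi), None)

    def execute(self, txn):
        for key in txn.keys:
            rng = self._pending_range(key)
            if rng is not None:
                src = self.pending.pop(rng)
                # Blocks this partition only; other partitions keep running.
                self.local_data.update(self.rpc.pull_range(src, rng))
        return txn.run(self.local_data)
```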

  17. Squall Steps. 1. Initialization and identification of migrating data. 2. Live reactive pulls for required data. 3. Periodic lazy/async pulls for large chunks. [Diagram: four partitions keyed by warehouse ID; after the reconfiguration message (new plan, leader ID), destination partitions issue pulls such as "Pull W_ID=2" and "Pull W_ID>5", while each partition tracks its outgoing and incoming ranges]

  18. Chunk Data for Asynchronous Pulls

  19. Why Chunk? The amount of data is unknown when the table is not partitioned by a clustered index (e.g., customers by W_ID in TPC-C). Time spent extracting data is time not spent on transactions.

  20. Async Pulls. Periodically pull chunks of cold data. These pulls are answered lazily: they start at a lower priority than transactions, and their priority increases with time. Execution is interwoven with extracting and sending data (dirty the range!).
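
One way to realize "lower priority than transactions, rising with time" is simple priority aging; this is a sketch with parameter values and names of my own choosing.

```python
import time

# Hypothetical priority aging for lazy async pulls: a pull starts well below
# transaction priority and gains priority the longer it has been queued, so
# cold-data migration is deferred but never starved forever.
TXN_PRIORITY = 100

class AsyncPull:
    def __init__(self, key_range, base_priority=10, aging_rate=1.0):
        self.key_range = key_range
        self.base_priority = base_priority
        self.aging_rate = aging_rate          # priority points gained per second
        self.enqueued_at = time.monotonic()

    def priority(self, now=None):
        now = time.monotonic() if now is None else now
        return self.base_priority + self.aging_rate * (now - self.enqueued_at)

def should_answer(pull, txn_queue_empty):
    """Answer the pull if the partition is idle or the pull has aged enough."""
    return txn_queue_empty or pull.priority() >= TXN_PRIORITY
```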

  21. Chunking Async Pulls. [Diagram: the destination sends an async pull request to the source, which answers with the data in multiple chunks]

  22. Keys to Performance. Properly size reconfiguration granules and space them apart. Split large reconfigurations to limit demands on a single partition. Redirect or pull only if needed. Tune what gets pulled; sometimes pull a little extra.

  23. Optimization: Splitting Reconfigurations. 1. Split by pairs of source and destination, which avoids contention on a single partition. Example: if partition 1 is migrating W_ID 2 and 3 to partitions 3 and 7, execute this as two reconfigurations. 2. Split large objects and migrate one piece at a time.
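
A sketch of splitting one reconfiguration into per-(source, destination) sub-plans, matching the example above; the representation is my own, not Squall's.

```python
from collections import defaultdict

# Hypothetical sub-plan splitting: group individual migrations by their
# (source, destination) pair so no single partition serves every pull at
# once; the resulting sub-plans can then be executed one after another.
def split_into_subplans(migrations):
    """migrations: iterable of (key, source, destination) triples."""
    subplans = defaultdict(list)
    for key, src, dst in migrations:
        subplans[(src, dst)].append(key)
    return dict(subplans)

# Partition 1 migrating W_ID 2 to partition 3 and W_ID 3 to partition 7
# becomes two sub-plans, executed as two separate reconfigurations.
print(split_into_subplans([(2, 1, 3), (3, 1, 7)]))
# {(1, 3): [2], (1, 7): [3]}
```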

  24. Evaluation. Workloads: YCSB, TPC-C. Baselines: Stop & Copy; Purely Reactive, with only demand-based pulling (semantically equivalent to Zephyr); Zephyr+, which is Purely Reactive plus asynchronous chunking with pull prefetching.

  25. YCSB Latency. [Charts: YCSB data shuffle, 10% pairwise; YCSB cluster consolidation, 4 to 3 nodes]

  26. Results Highlight: TPC-C load balancing of hotspot warehouses.

  27. All About Trade-offs. There is a trade-off between the time to complete the migration and performance degradation. Future work will consider automating this trade-off based on service-level objectives.

  28. I Fell Asleep, What Happened? A partitioned, single-threaded, main-memory environment is susceptible to hotspots. Elastic data management is a solution, and Squall provides a mechanism for executing a fine-grained live reconfiguration. Questions?

  29. Tuning Optimizations

  30. Sizing Chunks. Static analysis is used to set chunk sizes; future work is to set sizing and scheduling dynamically. [Chart: impact of chunk size on a 10% reconfiguration during a YCSB workload]

  31. Spacing Async Pulls. A delay at the destination between new async pull requests. [Chart: impact of the delay on a 10% reconfiguration during a YCSB workload with an 8 MB chunk size]
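
A minimal sketch of this spacing knob, assuming a destination-side loop that waits a fixed delay between consecutive async pull requests; the function and parameter names are illustrative.

```python
import time

# Hypothetical pacing of async pulls: the destination waits `delay_s` between
# issuing chunked pull requests so migration work is spread out over time
# instead of arriving at the source as one burst.
def paced_async_pulls(ranges_to_pull, issue_pull, delay_s=0.1):
    for key_range in ranges_to_pull:
        issue_pull(key_range)   # ask the source for the next chunked range
        time.sleep(delay_s)     # spacing between new async pull requests
```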

  32. Effect of Splitting into Sub-Plans. Set a cap on the number of sub-plan splits; split on source-destination pairs and on the ability to decompose migrating objects.
