Exploring Machine-Dependent Optimization in Computing: A Deep Dive into Modern CPU Design and Pipelined Functional Units

An overview of machine-dependent optimization and its role in modern CPU design: superscalar execution, pipelined functional units, the Haswell microarchitecture, loop unrolling with multiple accumulators, branch prediction, and the Meltdown/Spectre attacks.

  • Optimization
  • Computing
  • CPU Design
  • Pipelining
  • Superscalar Processor


Presentation Transcript


  1. CS 105: Tour of the Black Holes of Computing
     Machine-Dependent Optimization

  2. Machine-Dependent Optimization
     • Requires understanding the target architecture
     • Not portable
     • Not often needed, but critically important when it is
     • Also helps in understanding modern machines

  3. Modern CPU Design
     • [Block diagram: the Instruction Control Unit (fetch control, instruction cache, instruction decode, register file, retirement unit) issues operations to the Execution Unit; functional units (load, store, branch, arithmetic) read operands, exchange data with the data cache, and report operation results, register updates, and whether branch predictions were OK.]

  4. Superscalar Processor
     • Definition: a superscalar processor can issue and execute multiple instructions in one cycle. The instructions are retrieved from a sequential instruction stream and are usually scheduled dynamically.
     • Benefit: without programming effort, a superscalar processor can take advantage of the instruction-level parallelism that most programs have.
     • Most modern CPUs are superscalar. Intel: since Pentium (1993).

  5. What Is a Pipeline?
     • [Animation: a stream of "Mul" operations flows through the stages of a multiplier and into a result bucket, with a new operation entering each cycle.]

  6. Pipelined Functional Units

     long mult_eg(long a, long b, long c)
     {
         long p1 = a*b;
         long p2 = a*c;
         long p3 = p1 * p2;
         return p3;
     }

     Cycle    1      2      3      4      5      6      7
     Stage 1  a*b    a*c    -      -      p1*p2  -      -
     Stage 2  -      a*b    a*c    -      -      p1*p2  -
     Stage 3  -      -      a*b    a*c    -      -      p1*p2

     • Divide the computation into stages (e.g., one per partial product in a multiplication)
     • Pass partial computations from stage to stage
     • Stage i can start a new computation once its values have been passed to stage i+1
     • Here, we complete 3 multiplications in 7 cycles, even though each requires 3 cycles

  7. Haswell CPU
     • 11 functional units in total, so multiple instructions can execute in parallel:
       2 load (with address computation), 1 store (with address computation),
       4 integer, 2 FP multiply, 1 FP add, 1 FP divide
     • Some instructions take > 1 cycle, but can be pipelined:

     Instruction                 Latency   Cycles/Issue
     Load / Store                4         1
     Integer Multiply            3         1
     Integer/Long Divide         3-30      3-30
     Single/Double FP Multiply   5         1
     Single/Double FP Add        3         1
     Single/Double FP Divide     3-15      3-15

  8. x86-64 Compilation of Combine4
     • Inner loop (case: integer multiply); the corresponding C source is sketched below:

     .L519:                             # Loop:
         imull (%rax,%rdx,4), %ecx      #   t = t * d[i]
         addq  $1, %rdx                 #   i++
         cmpq  %rdx, %rbp               #   Compare length:i
         jg    .L519                    #   If >, goto Loop

     Method           Integer            Double FP
     Operation        Add      Mult      Add      Mult
     Combine4         1.27     3.01      3.01     5.01
     Latency Bound    1.00     3.00      3.00     5.00
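
     For reference, a sketch of the C source this inner loop comes from, written in the same vec_ptr/data_t/OP/IDENT style as the unrolled variants on the following slides (those helper names are taken from the later slides; the exact body here is a sketch):

     /* Sketch of combine4: accumulate into a local variable t (the %ecx above). */
     void combine4(vec_ptr v, data_t *dest)
     {
         long i;
         long length = vec_length(v);
         data_t *d = get_vec_start(v);
         data_t t = IDENT;                 /* 1 for *, 0 for + */
         for (i = 0; i < length; i++)
             t = t OP d[i];                /* serial dependence through t */
         *dest = t;
     }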

  9. Combine4 = Serial Computation (OP = *)
     • Computation (length=8): ((((((((1 * d[0]) * d[1]) * d[2]) * d[3]) * d[4]) * d[5]) * d[6]) * d[7])
     • [Dependency diagram: a single chain of multiplications, each combining the previous result with the next element d[i].]
     • Sequential dependence: performance is determined by the latency of OP

  10. Loop Unrolling (2x1)

     void unroll2a_combine(vec_ptr v, data_t *dest)
     {
         long length = vec_length(v);
         long limit = length-1;
         data_t *d = get_vec_start(v);
         data_t x = IDENT;
         long i;
         /* Combine 2 elements at a time */
         for (i = 0; i < limit; i+=2) {
             x = (x OP d[i]) OP d[i+1];
         }
         /* Finish any remaining elements */
         for (; i < length; i++) {
             x = x OP d[i];
         }
         *dest = x;
     }

     • Perform 2x more useful work per iteration

  11. Effect of Loop Unrolling

     Method           Integer            Double FP
     Operation        Add      Mult      Add      Mult
     Combine4         1.27     3.01      3.01     5.01
     Unroll 2x1       1.01     3.01      3.01     5.01
     Latency Bound    1.00     3.00      3.00     5.00

     • x = (x OP d[i]) OP d[i+1];
     • Helps integer add by reducing the number of overhead instructions; (almost) achieves the latency bound
     • The others don't improve. Why? Still a sequential dependency.

  12. Loop Unrolling with Reassociation (2x1a)

     void unroll2aa_combine(vec_ptr v, data_t *dest)
     {
         long length = vec_length(v);
         long limit = length-1;
         data_t *d = get_vec_start(v);
         data_t x = IDENT;
         long i;
         /* Combine 2 elements at a time */
         for (i = 0; i < limit; i+=2) {
             x = x OP (d[i] OP d[i+1]);
         }
         /* Finish any remaining elements */
         for (; i < length; i++) {
             x = x OP d[i];
         }
         *dest = x;
     }

     • Compare to before: x = (x OP d[i]) OP d[i+1];
     • Can this change the result of the computation? Yes, for multiply and floating point. Why?

  13. Effect of Reassociation

     Method            Integer            Double FP
     Operation         Add      Mult      Add      Mult
     Combine4          1.27     3.01      3.01     5.01
     Unroll 2x1        1.01     3.01      3.01     5.01
     Unroll 2x1a       1.01     1.51      1.51     2.51
     Latency Bound     1.00     3.00      3.00     5.00
     Throughput Bound  0.50     1.00      1.00     0.50

     • x = x OP (d[i] OP d[i+1]);
     • Nearly 2x speedup for Int *, FP +, FP *
     • Reason: breaks the sequential dependency. Why is that? (next slide)
     • Throughput bounds come from the functional units: 4 units for int +, 2 units for FP *, and 2 load units

  14. Reassociated Computation
     • x = x OP (d[i] OP d[i+1]);
     • What changed: operations in the next iteration can be started early (no dependency)
     • [Dependency diagram: the products d[0]*d[1], d[2]*d[3], ... are independent; only combining them into x forms a chain.]
     • Overall performance: N elements at D cycles latency/op takes (N/2 + 1)*D cycles, so CPE = D/2
       (e.g., for double FP multiply, D = 5 gives CPE = 2.5, matching the measured 2.51)

  15. Loop Unrolling with Separate Accumulators (2x2)

     void unroll2a_combine(vec_ptr v, data_t *dest)
     {
         long length = vec_length(v);
         long limit = length-1;
         data_t *d = get_vec_start(v);
         data_t x0 = IDENT;
         data_t x1 = IDENT;
         long i;
         /* Combine 2 elements at a time */
         for (i = 0; i < limit; i+=2) {
             x0 = x0 OP d[i];
             x1 = x1 OP d[i+1];
         }
         /* Finish any remaining elements */
         for (; i < length; i++) {
             x0 = x0 OP d[i];
         }
         *dest = x0 OP x1;
     }

     • A different form of reassociation

  16. Effect of Separate Accumulators

     Method            Integer            Double FP
     Operation         Add      Mult      Add      Mult
     Combine4          1.27     3.01      3.01     5.01
     Unroll 2x1        1.01     3.01      3.01     5.01
     Unroll 2x1a       1.01     1.51      1.51     2.51
     Unroll 2x2        0.81     1.51      1.51     2.51
     Latency Bound     1.00     3.00      3.00     5.00
     Throughput Bound  0.50     1.00      1.00     0.50

     • x0 = x0 OP d[i]; x1 = x1 OP d[i+1];
     • Int + makes use of the two load units
     • 2x speedup (over Unroll 2x1) for Int *, FP +, FP *

  17. Separate Accumulators
     • x0 = x0 OP d[i]; x1 = x1 OP d[i+1];
     • What changed: two independent streams of operations
     • [Dependency diagram: one chain combines d[0], d[2], d[4], d[6] into x0; a second, independent chain combines d[1], d[3], d[5], d[7] into x1.]
     • Overall performance: N elements at D cycles latency/operation should take (N/2 + 1)*D cycles, so CPE = D/2
     • CPE matches prediction!
     • What now?

  18. Unrolling & Accumulating Idea Can unroll to any degree L Can accumulate K results in parallel L must be multiple of K Limitations Diminishing returns Cannot go beyond throughput limitations of execution units May run out of registers for accumulators Large overhead for short lengths Finish off iterations sequentially CS 105 18

  19. Unrolling & Accumulating: Double * Case Intel Haswell Double FP Multiplication Latency bound: 5.00. Throughput bound: 0.50 FP * Unrolling Factor L K 1 2 3 4 6 8 10 12 1 5.01 5.01 5.01 5.01 5.01 5.01 5.01 2 2.51 2.51 2.51 3 1.67 4 1.25 1.26 Number of Accumulators 6 0.84 0.88 8 0.63 10 0.51 12 0.52 CS 105 19

  20. Unrolling & Accumulating: Int + Case Intel Haswell Integer addition Latency bound: 1.00. Throughput bound: 1.00 FP * Unrolling Factor L K 1 2 3 4 6 8 10 12 1 1.27 1.01 1.01 1.01 1.01 1.01 1.01 2 0.81 0.69 0.54 3 0.74 4 0.69 1.24 Number of Accumulators 6 0.56 0.56 8 0.54 10 0.54 12 0.56 CS 105 20

  21. Achievable Performance

     Method            Integer            Double FP
     Operation         Add      Mult      Add      Mult
     Best              0.54     1.01      1.01     0.52
     Latency Bound     1.00     3.00      3.00     5.00
     Throughput Bound  0.50     1.00      1.00     0.50

     • Limited only by the throughput of the functional units
     • Up to 42x improvement over the original, unoptimized code

  22. What About Branches?
     • Challenge: the Instruction Control Unit must work well ahead of the Execution Unit to generate enough operations to keep the EU busy

     404663: mov  $0x0,%eax         <- executing
     404668: cmp  (%rdi),%rsi
     40466b: jge  404685            <- how to continue?
     40466d: mov  0x8(%rdi),%rax
       . . .
     404685: repz retq

     • When it encounters a conditional branch, it cannot reliably determine where to continue fetching

  23. Modern CPU Design
     • [Block diagram repeated from slide 3: the Instruction Control Unit (fetch control, instruction cache, instruction decode, register file, retirement unit) issues operations to the Execution Unit's functional units, which report back results and whether branch predictions were OK.]

  24. Branch Outcomes
     • When the fetch unit encounters a conditional branch, it cannot determine where to continue fetching:
       - Branch Taken: transfer control to the branch target
       - Branch Not-Taken: continue with the next instruction in sequence
     • Cannot resolve until the outcome is determined by the branch/integer unit

     404663: mov  $0x0,%eax
     404668: cmp  (%rdi),%rsi
     40466b: jge  404685
     40466d: mov  0x8(%rdi),%rax    <- branch not-taken continues here
       . . .
     404685: repz retq              <- branch taken continues here

  25. Branch Prediction
     • Idea: guess which way the branch will go and begin executing instructions at the predicted position
     • But don't actually modify register or memory data

     404663: mov  $0x0,%eax
     404668: cmp  (%rdi),%rsi
     40466b: jge  404685            <- predict taken
     40466d: mov  0x8(%rdi),%rax
       . . .
     404685: repz retq              <- begin execution here

  26. Branch Prediction Through Loop
     • Assume vector length = 100. The loop body being predicted:

       401029: vmulsd (%rdx),%xmm0,%xmm0
       40102d: add    $0x8,%rdx
       401031: cmp    %rax,%rdx
       401034: jne    401029

     • i = 98:  predict taken (OK)
     • i = 99:  predict taken (oops: this is the last valid iteration, so the branch actually falls through)
     • i = 100: executed speculatively anyway, reading an invalid location
     • i = 101: fetched

  27. Branch Misprediction Invalidation
     • Assume vector length = 100, with the same loop as on the previous slide
     • i = 98:  predict taken (OK)
     • i = 99:  predict taken (oops)
     • i = 100 and i = 101: invalidated once the misprediction is discovered

  28. Branch Misprediction Recovery
     • At i = 99 the branch is definitely not taken, so the pipeline is reloaded at the fall-through path:

       401029: vmulsd (%rdx),%xmm0,%xmm0
       40102d: add    $0x8,%rdx
       401031: cmp    %rax,%rdx
       401034: jne    401029
       401036: jmp    401040
         . . .
       401040: vmovsd %xmm0,(%r12)

     • Performance cost: multiple clock cycles on a modern processor; can be a major performance limiter
     • Current CPUs (2019+) speculate 150 or more instructions ahead!
     • One of the motivations for introducing conditional move (cmov) instructions; a small C illustration follows below
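
     As an illustration of the conditional-move point (not from the original slides): code written as a simple conditional expression gives the compiler the option of emitting cmov instead of a conditional jump, though whether it actually does depends on the compiler and flags.

     /* Branchy version: an unpredictable comparison can cause frequent
      * mispredictions and pipeline flushes. */
     long max_branch(long a, long b)
     {
         if (a > b)
             return a;
         return b;
     }

     /* Conditional-expression version: gcc and clang will often compile
      * this (and the version above) to a cmov, so no prediction is needed. */
     long max_cmov(long a, long b)
     {
         return a > b ? a : b;
     }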

  29. Visualizing Operations
     • Translated operations for one iteration of the multiply loop, with renamed destinations:

       load   (%rax,%rdx.0,4)   ->  t.1
       imull  t.1, %ecx.0       ->  %ecx.1
       incl   %rdx.0            ->  %rdx.1
       cmpl   %rsi, %rdx.1      ->  cc.1
       jl-taken cc.1

     • Operations: vertical position denotes the time at which an operation executes; an operation cannot begin until its operands are available; height denotes latency
     • Operands: arcs are shown only for operands that are passed within the execution unit

  30. 3 Iterations of Combining Product
     • Unlimited-resource analysis: assume an operation can start as soon as its operands are available; operations for multiple iterations overlap in time
     • [Timing diagram: the load, incl, cmpl, and jl operations of iterations i = 0, 1, 2 overlap within the first few cycles, but each imull must wait for the previous iteration's product (%rcx.1, %rcx.2, %rcx.3), so the multiplies run back to back.]
     • Performance: the limiting factor becomes the latency of the integer multiplier, giving a CPE of 4.0

  31. 4 Iterations of Combining Sum
     • Unlimited-resource analysis, for the add loop
     • [Timing diagram: iterations i = 0..3 overlap; each iteration involves a load plus 4 integer operations (addl, incl, cmpl, jl), and a new iteration begins on every cycle.]
     • Performance: starting a new iteration each clock cycle should give a CPE of 1.0
     • Would require executing 4 integer operations in parallel

  32. Combining Sum: Resource Constraints
     • Suppose we have only two integer functional units
     • Some operations are delayed even though their operands are available; priority is set based on program order
     • [Timing diagram: iterations 4 through 8 of the add loop, with operations queueing for the two integer units.]
     • Performance: sustains a CPE of 2.0

  33. Visualizing Parallel Loop
     • The two multiplies within the loop no longer have a data dependency, which allows them to pipeline
     • Translated operations for one iteration of the two-accumulator loop:

       load   (%eax,%edx.0,4)    ->  t.1a
       imull  t.1a, %ecx.0       ->  %ecx.1
       load   4(%eax,%edx.0,4)   ->  t.1b
       imull  t.1b, %ebx.0       ->  %ebx.1
       iaddl  $2, %edx.0         ->  %edx.1
       cmpl   %esi, %edx.1       ->  cc.1
       jl-taken cc.1

  34. Executing with Parallel Loop
     • [Timing diagram: iterations i = 0, 2, 4 of the two-accumulator loop; the two independent imulls in each iteration overlap inside the pipelined multiplier.]
     • Note: execution is actually delayed 1 clock from what the diagram shows. (Why?)
     • Predicted performance: can keep the 4-cycle multiplier busy performing two simultaneous multiplications, giving a CPE of 2.0

  35. Getting High Performance
     • Use a good compiler and appropriate flags
     • Don't do anything stupid: watch out for hidden algorithmic inefficiencies
     • Write compiler-friendly code: watch out for optimization blockers (procedure calls and memory references)
     • Look carefully at the innermost loops (where most of the work is done)
     • Tune code for the machine: exploit instruction-level parallelism, avoid unpredictable branches, make code cache-friendly
     • But DON'T OPTIMIZE UNTIL IT'S DEBUGGED!!!

  36. Meltdown and Spectre
     • Consider a few things:
       - Access to cached data is much faster than to non-cached data
       - Programs have access to detailed timing information: Intel offers a free-running cycle counter to all programs, so a program can tell whether something was cached
       - The OS has access to everything, but carefully checks whether you have access before giving stuff to you
       - The CPU speculates many instructions ahead, and must guess about branch directions
       - User programs can either flush the cache (clflush instruction) or clobber it with a loop
     • A small timing sketch follows below.
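
     As a rough illustration of the timing point (not from the original slides), the sketch below uses the rdtscp cycle counter and clflush to compare a cached load against a flushed one; the exact cycle counts, and how much fencing is needed, vary by machine and compiler.

     #include <stdint.h>
     #include <stdio.h>
     #include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence */

     static uint8_t probe[4096];

     /* Roughly time one load of *p in cycles. */
     static uint64_t time_load(volatile uint8_t *p)
     {
         unsigned aux;
         _mm_mfence();                        /* order earlier memory operations */
         uint64_t start = __rdtscp(&aux);
         (void)*p;                            /* the load being timed */
         uint64_t end = __rdtscp(&aux);
         return end - start;
     }

     int main(void)
     {
         volatile uint8_t *p = &probe[0];

         (void)*p;                            /* warm the cache line */
         uint64_t hit = time_load(p);

         _mm_clflush((const void *)p);        /* evict the line */
         _mm_mfence();
         uint64_t miss = time_load(p);

         /* Typically the flushed load takes several times longer. */
         printf("cached: %llu cycles, flushed: %llu cycles\n",
                (unsigned long long)hit, (unsigned long long)miss);
         return 0;
     }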

  37. Meltdown and Spectre
     • Trick the OS/CPU into doing these steps:
       - Check whether you have access to an arbitrary location x (you don't)
       - Mispredict that branch
       - Speculatively read location x and use its contents: extract bit b, multiply (shift left) bit b by, e.g., 1024, and access y[b*1024], an array you do have access to
       - The hardware will eventually discover the mispredicted branch and cancel all those instructions, but the cache now contains y[b*1024]
       - Scan the cache to see whether y[0] or y[1024] is fast (i.e., in cache); you now know bit b of location x
       - Lather, rinse, repeat until you know all bits of x
       - Lather, rinse, repeat for all locations you want to read
     • A sketch of the speculative gadget follows below.
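
     A hedged sketch of the kind of bounds-check gadget those steps describe (the names victim, array1, and y are illustrative, not from the slides). In a real attack the branch is first trained with in-range values of x so it is predicted taken, and then fed an out-of-range x that points at the secret.

     #include <stddef.h>
     #include <stdint.h>

     static uint8_t array1[16];          /* accessible, bounds-checked array     */
     static size_t  array1_size = 16;
     static uint8_t y[2 * 1024];         /* probe array the attacker times later */

     /* Victim-style code: the bounds check is the branch that gets mispredicted. */
     void victim(size_t x, int bit, volatile uint8_t *sink)
     {
         if (x < array1_size) {                   /* trained to predict "taken"     */
             uint8_t b = (array1[x] >> bit) & 1;  /* speculative read of location x */
             *sink = y[b * 1024];                 /* leaves y[0] or y[1024] cached  */
         }
     }

     Afterwards the attacker times loads of y[0] and y[1024] (as in the timing sketch after slide 36) to recover bit b.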

  38. So What?
     • Can read arbitrary memory at about 2K bits/second
     • No biggie on your laptop, but a huge issue in the cloud: physical machines are often shared, supposedly isolated by virtual-machine technology
     • Grab people's encryption keys, passwords, all sorts of stuff. Next stop: Putin
     • What to do?
       - Disabling speculation kills performance
       - Only certain branches are vulnerable; can do special things for those branches, but they are hard to find (millions of lines in the kernel)
       - The compiler can try to identify risky branches, but it will be conservative
       - The OS will slow down
