gene-bordegaray opened a new pull request, #22159:
URL: https://github.com/apache/datafusion/pull/22159

   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   - Does not close an issue; this is a targeted physical-plan performance 
optimization.
   
   ## Rationale for this change
   
   <!--
    Why are you proposing this change? If this is already explained clearly in 
the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your 
changes and offer better suggestions for fixes.  
   -->
   
   Hash repartition currently builds one output batch per non-empty target 
partition by calling `take_arrays` separately for each partition. At high 
fanout, a single input batch can therefore trigger many Arrow `take` kernel 
invocations, which shows up as a large cost in repartition-heavy queries.
   
   This changes hash repartition to concatenate the per-partition row indices, 
call `take_arrays` once for the input batch, and then slice the reordered batch 
back into per-partition output batches.
   
   ## What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   - Replaces per-partition hash repartition `take_arrays` calls with one 
grouped `take_arrays` call per input batch.
   - Tracks each non-empty partition's `(start, len)` range within the grouped 
reordered batch and returns zero-copy `RecordBatch::slice` outputs for those 
partitions.
   - Adds a concise comment and example documenting how the grouped index 
vector maps back to output partitions.
   
   <details>
   <summary>How the grouped take works</summary>
   
   ```text
   input rows:        0   1   2   3   4   5   6
   
   partition 0:      [2, 5]
   partition 1:      []
   partition 2:      [0, 3, 4]
   partition 3:      [1, 6]
   
   grouped indices:  [2, 5, 0, 3, 4, 1, 6]
   partition ranges: [(0, start=0, len=2),
                      (2, start=2, len=3),
                      (3, start=5, len=2)]
   
   take once:        rows [2, 5, 0, 3, 4, 1, 6]
   slice outputs:    partition 0 = slice(0, 2)
                     partition 2 = slice(2, 3)
                     partition 3 = slice(5, 2)
   ```
   
   </details>
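   The grouped index construction above can be sketched in plain Rust (no Arrow 
dependency; `group_indices` and its input/output shapes are illustrative only, 
not the actual DataFusion code):

   ```rust
   // Build the grouped index vector plus (partition, start, len) ranges
   // from per-partition row indices, skipping empty partitions.
   fn group_indices(
       partition_indices: &[Vec<u32>],
   ) -> (Vec<u32>, Vec<(usize, usize, usize)>) {
       let mut grouped = Vec::new();
       let mut ranges = Vec::new();
       for (partition, indices) in partition_indices.iter().enumerate() {
           if indices.is_empty() {
               continue; // empty partitions produce no output batch
           }
           let start = grouped.len();
           grouped.extend_from_slice(indices);
           ranges.push((partition, start, indices.len()));
       }
       (grouped, ranges)
   }

   fn main() {
       // Same example as the diagram above.
       let parts = vec![vec![2, 5], vec![], vec![0, 3, 4], vec![1, 6]];
       let (grouped, ranges) = group_indices(&parts);
       assert_eq!(grouped, vec![2, 5, 0, 3, 4, 1, 6]);
       assert_eq!(ranges, vec![(0, 0, 2), (2, 2, 3), (3, 5, 2)]);
   }
   ```

   Each `(partition, start, len)` range then maps directly to one 
`take_arrays` result slice, i.e. a zero-copy `slice(start, len)` on the 
reordered batch.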
   
   ## Are these changes tested?
   
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are 
they covered by existing tests)?
   -->
   
   - `cargo fmt --all`
   - `cargo test -p datafusion-physical-plan repartition --lib`
   - `cargo clippy --all-targets --all-features -- -D warnings`
   
   Benchmarks were run against the branch merge-base 
`937dfdad748589aa7372848bb2a57ef04109b931` and this branch commit `a0a727c4dd`. 
Lower is better; negative percentage means this PR was faster.
   
   <details>
   <summary>TPCH SF10, 8 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 640.78 | 451.82 | -29.49% | 1.420x |
   | Q2 | 315.81 | 150.07 | -52.48% | 2.100x |
   | Q3 | 899.21 | 375.88 | -58.20% | 2.390x |
   | Q4 | 469.31 | 217.07 | -53.75% | 2.160x |
   | Q5 | 1131.37 | 446.36 | -60.55% | 2.530x |
   | Q6 | 376.40 | 163.66 | -56.52% | 2.300x |
   | Q7 | 1388.40 | 484.36 | -65.11% | 2.870x |
   | Q8 | 1369.67 | 571.62 | -58.27% | 2.400x |
   | Q9 | 1834.81 | 739.88 | -59.68% | 2.480x |
   | Q10 | 813.73 | 361.94 | -55.52% | 2.250x |
   | Q11 | 267.06 | 114.84 | -57.00% | 2.330x |
   | Q12 | 526.41 | 250.39 | -52.43% | 2.100x |
   | Q13 | 760.54 | 324.78 | -57.30% | 2.340x |
   | Q14 | 446.91 | 221.04 | -50.54% | 2.020x |
   | Q15 | 764.64 | 375.67 | -50.87% | 2.040x |
   | Q16 | 167.74 | 80.36 | -52.09% | 2.090x |
   | Q17 | 1801.72 | 763.58 | -57.62% | 2.360x |
   | Q18 | 3303.89 | 1649.87 | -50.06% | 2.000x |
   | Q19 | 694.16 | 354.97 | -48.86% | 1.960x |
   | Q20 | 693.91 | 323.17 | -53.43% | 2.150x |
   | Q21 | 3112.83 | 1065.36 | -65.78% | 2.920x |
   | Q22 | 205.88 | 95.97 | -53.38% | 2.150x |
   
   </details>
   
   <details>
   <summary>TPCH SF10, 16 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 518.84 | 328.66 | -36.66% | 1.580x |
   | Q2 | 350.47 | 148.64 | -57.59% | 2.360x |
   | Q3 | 1003.55 | 371.04 | -63.03% | 2.700x |
   | Q4 | 589.70 | 258.05 | -56.24% | 2.290x |
   | Q5 | 1343.64 | 506.01 | -62.34% | 2.660x |
   | Q6 | 322.21 | 130.93 | -59.37% | 2.460x |
   | Q7 | 1527.85 | 550.88 | -63.94% | 2.770x |
   | Q8 | 1476.46 | 578.77 | -60.80% | 2.550x |
   | Q9 | 2091.16 | 785.54 | -62.44% | 2.660x |
   | Q10 | 817.98 | 331.02 | -59.53% | 2.470x |
   | Q11 | 341.46 | 123.31 | -63.89% | 2.770x |
   | Q12 | 493.51 | 221.61 | -55.10% | 2.230x |
   | Q13 | 690.52 | 290.54 | -57.92% | 2.380x |
   | Q14 | 410.54 | 171.45 | -58.24% | 2.390x |
   | Q15 | 733.96 | 290.56 | -60.41% | 2.530x |
   | Q16 | 197.35 | 86.09 | -56.37% | 2.290x |
   | Q17 | 2089.12 | 828.96 | -60.32% | 2.520x |
   | Q18 | 2712.00 | 1097.77 | -59.52% | 2.470x |
   | Q19 | 602.77 | 260.74 | -56.74% | 2.310x |
   | Q20 | 661.20 | 288.58 | -56.35% | 2.290x |
   | Q21 | 5490.50 | 1151.50 | -79.03% | 4.770x |
   | Q22 | 198.38 | 103.13 | -48.01% | 1.920x |
   
   </details>
   
   <details>
   <summary>TPCH SF10, 32 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 533.86 | 338.54 | -36.59% | 1.580x |
   | Q2 | 439.59 | 199.50 | -54.62% | 2.200x |
   | Q3 | 1242.19 | 510.11 | -58.93% | 2.440x |
   | Q4 | 743.92 | 363.33 | -51.16% | 2.050x |
   | Q5 | 1711.97 | 666.50 | -61.07% | 2.570x |
   | Q6 | 325.39 | 134.07 | -58.80% | 2.430x |
   | Q7 | 1947.59 | 722.22 | -62.92% | 2.700x |
   | Q8 | 1914.31 | 775.62 | -59.48% | 2.470x |
   | Q9 | 2662.07 | 976.47 | -63.32% | 2.730x |
   | Q10 | 902.80 | 362.71 | -59.82% | 2.490x |
   | Q11 | 400.93 | 170.81 | -57.40% | 2.350x |
   | Q12 | 572.19 | 265.06 | -53.68% | 2.160x |
   | Q13 | 736.31 | 296.82 | -59.69% | 2.480x |
   | Q14 | 430.11 | 180.93 | -57.93% | 2.380x |
   | Q15 | 732.36 | 327.12 | -55.33% | 2.240x |
   | Q16 | 245.97 | 116.24 | -52.74% | 2.120x |
   | Q17 | 2711.18 | 1100.17 | -59.42% | 2.460x |
   | Q18 | 2946.70 | 1176.02 | -60.09% | 2.510x |
   | Q19 | 600.47 | 258.58 | -56.94% | 2.320x |
   | Q20 | 765.20 | 337.01 | -55.96% | 2.270x |
   | Q21 | 10062.70 | 1534.95 | -84.75% | 6.560x |
   | Q22 | 250.50 | 128.27 | -48.79% | 1.950x |
   
   </details>
   
   <details>
   <summary>TPCH SF10, 64 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 595.70 | 324.74 | -45.49% | 1.830x |
   | Q2 | 663.08 | 305.30 | -53.96% | 2.170x |
   | Q3 | 1744.90 | 727.81 | -58.29% | 2.400x |
   | Q4 | 1070.72 | 566.20 | -47.12% | 1.890x |
   | Q5 | 2447.07 | 938.91 | -61.63% | 2.610x |
   | Q6 | 315.47 | 132.73 | -57.93% | 2.380x |
   | Q7 | 2807.33 | 1004.83 | -64.21% | 2.790x |
   | Q8 | 2674.51 | 1069.64 | -60.01% | 2.500x |
   | Q9 | 3777.94 | 1424.08 | -62.31% | 2.650x |
   | Q10 | 1086.91 | 469.38 | -56.82% | 2.320x |
   | Q11 | 575.59 | 264.02 | -54.13% | 2.180x |
   | Q12 | 841.83 | 387.25 | -54.00% | 2.170x |
   | Q13 | 867.57 | 379.90 | -56.21% | 2.280x |
   | Q14 | 470.87 | 214.58 | -54.43% | 2.190x |
   | Q15 | 762.07 | 340.55 | -55.31% | 2.240x |
   | Q16 | 337.20 | 179.25 | -46.84% | 1.880x |
   | Q17 | 3953.82 | 1701.46 | -56.97% | 2.320x |
   | Q18 | 3763.51 | 1606.90 | -57.30% | 2.340x |
   | Q19 | 644.43 | 314.27 | -51.23% | 2.050x |
   | Q20 | 973.56 | 453.24 | -53.45% | 2.150x |
   | Q21 | 19356.91 | 2396.96 | -87.62% | 8.080x |
   | Q22 | 366.20 | 195.40 | -46.64% | 1.870x |
   
   </details>
   
   <details>
   <summary>TPCH SF1, 8 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 74.48 | 55.29 | -25.78% | 1.350x |
   | Q2 | 27.61 | 19.78 | -28.38% | 1.400x |
   | Q3 | 63.53 | 44.89 | -29.33% | 1.420x |
   | Q4 | 26.05 | 18.81 | -27.79% | 1.380x |
   | Q5 | 74.40 | 49.18 | -33.90% | 1.510x |
   | Q6 | 26.16 | 19.15 | -26.82% | 1.370x |
   | Q7 | 71.31 | 49.41 | -30.72% | 1.440x |
   | Q8 | 62.99 | 45.06 | -28.46% | 1.400x |
   | Q9 | 72.78 | 54.76 | -24.76% | 1.330x |
   | Q10 | 85.04 | 59.80 | -29.68% | 1.420x |
   | Q11 | 14.78 | 10.28 | -30.40% | 1.440x |
   | Q12 | 42.51 | 30.72 | -27.74% | 1.380x |
   | Q13 | 62.42 | 44.17 | -29.24% | 1.410x |
   | Q14 | 43.48 | 25.36 | -41.68% | 1.710x |
   | Q15 | 59.18 | 30.70 | -48.12% | 1.930x |
   | Q16 | 23.68 | 14.12 | -40.39% | 1.680x |
   | Q17 | 117.71 | 64.39 | -45.29% | 1.830x |
   | Q18 | 185.95 | 89.57 | -51.83% | 2.080x |
   | Q19 | 75.83 | 38.62 | -49.07% | 1.960x |
   | Q20 | 58.60 | 36.03 | -38.51% | 1.630x |
   | Q21 | 151.04 | 67.61 | -55.24% | 2.230x |
   | Q22 | 31.77 | 20.10 | -36.73% | 1.580x |
   
   </details>
   
   <details>
   <summary>TPCH SF1, 16 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 53.49 | 38.44 | -28.14% | 1.390x |
   | Q2 | 23.02 | 16.41 | -28.71% | 1.400x |
   | Q3 | 62.41 | 42.53 | -31.85% | 1.470x |
   | Q4 | 19.02 | 13.45 | -29.30% | 1.410x |
   | Q5 | 77.90 | 52.31 | -32.85% | 1.490x |
   | Q6 | 19.38 | 14.90 | -23.12% | 1.300x |
   | Q7 | 75.24 | 52.43 | -30.32% | 1.440x |
   | Q8 | 54.93 | 38.27 | -30.34% | 1.440x |
   | Q9 | 99.76 | 51.51 | -48.36% | 1.940x |
   | Q10 | 83.23 | 55.62 | -33.17% | 1.500x |
   | Q11 | 14.90 | 8.90 | -40.27% | 1.670x |
   | Q12 | 53.93 | 27.49 | -49.03% | 1.960x |
   | Q13 | 71.37 | 35.71 | -49.96% | 2.000x |
   | Q14 | 41.78 | 21.93 | -47.52% | 1.910x |
   | Q15 | 56.26 | 26.45 | -52.99% | 2.130x |
   | Q16 | 29.48 | 19.57 | -33.63% | 1.510x |
   | Q17 | 137.60 | 66.52 | -51.66% | 2.070x |
   | Q18 | 193.02 | 86.73 | -55.07% | 2.230x |
   | Q19 | 66.79 | 33.11 | -50.43% | 2.020x |
   | Q20 | 56.24 | 33.80 | -39.90% | 1.660x |
   | Q21 | 200.67 | 70.48 | -64.88% | 2.850x |
   | Q22 | 29.23 | 20.02 | -31.52% | 1.460x |
   
   </details>
   
   <details>
   <summary>TPCH SF1, 32 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 50.06 | 42.15 | -15.81% | 1.190x |
   | Q2 | 22.84 | 16.46 | -27.95% | 1.390x |
   | Q3 | 78.73 | 55.12 | -29.99% | 1.430x |
   | Q4 | 18.57 | 13.07 | -29.65% | 1.420x |
   | Q5 | 112.40 | 73.71 | -34.42% | 1.520x |
   | Q6 | 18.88 | 13.82 | -26.82% | 1.370x |
   | Q7 | 108.11 | 70.60 | -34.70% | 1.530x |
   | Q8 | 89.48 | 50.71 | -43.33% | 1.760x |
   | Q9 | 125.87 | 61.73 | -50.96% | 2.040x |
   | Q10 | 87.31 | 56.89 | -34.84% | 1.530x |
   | Q11 | 15.21 | 9.90 | -34.88% | 1.540x |
   | Q12 | 64.06 | 34.39 | -46.31% | 1.860x |
   | Q13 | 73.14 | 35.26 | -51.80% | 2.070x |
   | Q14 | 50.69 | 24.15 | -52.35% | 2.100x |
   | Q15 | 55.65 | 27.72 | -50.18% | 2.010x |
   | Q16 | 32.59 | 21.60 | -33.71% | 1.510x |
   | Q17 | 171.04 | 82.32 | -51.87% | 2.080x |
   | Q18 | 251.96 | 118.01 | -53.16% | 2.130x |
   | Q19 | 61.62 | 30.81 | -49.99% | 2.000x |
   | Q20 | 68.55 | 40.58 | -40.80% | 1.690x |
   | Q21 | 333.10 | 93.20 | -72.02% | 3.570x |
   | Q22 | 31.16 | 20.65 | -33.72% | 1.510x |
   
   </details>
   
   <details>
   <summary>TPCH SF1, 64 partitions, all queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q1 | 50.74 | 38.53 | -24.06% | 1.320x |
   | Q2 | 25.95 | 18.94 | -27.01% | 1.370x |
   | Q3 | 122.81 | 82.93 | -32.47% | 1.480x |
   | Q4 | 19.21 | 13.65 | -28.94% | 1.410x |
   | Q5 | 189.43 | 125.95 | -33.51% | 1.500x |
   | Q6 | 18.61 | 14.21 | -23.69% | 1.310x |
   | Q7 | 213.40 | 126.30 | -40.81% | 1.690x |
   | Q8 | 141.89 | 86.99 | -38.69% | 1.630x |
   | Q9 | 191.69 | 90.71 | -52.68% | 2.110x |
   | Q10 | 95.25 | 59.23 | -37.82% | 1.610x |
   | Q11 | 19.32 | 11.84 | -38.71% | 1.630x |
   | Q12 | 103.60 | 56.33 | -45.63% | 1.840x |
   | Q13 | 103.22 | 56.79 | -44.98% | 1.820x |
   | Q14 | 63.73 | 28.08 | -55.94% | 2.270x |
   | Q15 | 60.79 | 29.06 | -52.19% | 2.090x |
   | Q16 | 39.55 | 28.59 | -27.70% | 1.380x |
   | Q17 | 228.14 | 118.11 | -48.23% | 1.930x |
   | Q18 | 399.33 | 200.80 | -49.72% | 1.990x |
   | Q19 | 56.17 | 31.83 | -43.33% | 1.760x |
   | Q20 | 162.26 | 100.33 | -38.17% | 1.620x |
   | Q21 | 583.93 | 143.60 | -75.41% | 4.070x |
   | Q22 | 25.60 | 21.68 | -15.34% | 1.180x |
   
   </details>
   
   <details>
   <summary>TPCH SF10, 300 partitions, targeted high-fanout queries</summary>
   
   | Query | main ms | grouped ms | change | speedup |
   |---:|---:|---:|---:|---:|
   | Q3 | 2543.94 | 2250.91 | -11.52% | 1.130x |
   | Q9 | 6495.22 | 4755.78 | -26.78% | 1.370x |
   | Q10 | 1869.05 | 1709.18 | -8.55% | 1.090x |
   | Q13 | 1238.63 | 1157.47 | -6.55% | 1.070x |
   | Q15 | 461.51 | 446.25 | -3.31% | 1.030x |
   | Q21 | 37810.29 | 5594.01 | -85.21% | 6.760x |
   | Q22 | 1084.95 | 1058.74 | -2.42% | 1.020x |
   
   </details>
   
   <details>
   <summary>TPCH SF10, 300 partitions, peak RSS stress</summary>
   
   Measured with `/usr/bin/time -l`, one iteration, no DataFusion memory limit. 
RSS is process peak resident set size from the OS.
   
   | Query | main ms | grouped ms | time change | main peak RSS | grouped peak RSS | RSS change |
   |---:|---:|---:|---:|---:|---:|---:|
   | Q7 | 5171.45 | 4151.15 | -19.73% | 3.69 GiB | 3.75 GiB | 1.61% |
   | Q9 | 6055.57 | 4758.10 | -21.43% | 4.01 GiB | 4.01 GiB | 0.04% |
   | Q21 | 36300.80 | 5810.14 | -83.99% | 2.96 GiB | 2.05 GiB | -30.79% |
   
   </details>
   
   <details>
   <summary>Memory concern and follow-up work</summary>
   
   This PR changes output batches from independently materialized per-partition 
batches to zero-copy slices of one reordered batch. That is the source of the 
speedup, but it has a memory-accounting consideration: sibling slices can share 
the same backing buffers.
   
   Potential concern:
   
   ```text
   one reordered batch allocation
     -> slice for partition 0
     -> slice for partition 1
     -> slice for partition 2
   ```
   
   A slow output partition can keep the shared reordered-batch buffers alive 
until its slice is dropped. In addition, `RecordBatch::get_array_memory_size()` 
may count the shared buffers once per sibling slice, so repartition can 
over-reserve when it reserves memory per output batch.
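
   As a plain-Rust analogy of that double-counting concern (not Arrow's actual 
accounting; the `Slice` type and `reported_size` method here are hypothetical), 
summing a per-slice size that includes the shared backing buffer over-counts 
the one real allocation:

   ```rust
   use std::rc::Rc;

   // Three zero-copy slices sharing one backing allocation.
   struct Slice {
       buffer: Rc<Vec<u8>>, // shared backing buffer
       _start: usize,
       _len: usize,
   }

   impl Slice {
       // Reports the full backing-buffer size regardless of slice length,
       // mirroring how sliced arrays can report their underlying buffers.
       fn reported_size(&self) -> usize {
           self.buffer.len()
       }
   }

   fn main() {
       let backing = Rc::new(vec![0u8; 1024]); // one 1 KiB allocation
       let slices = [
           Slice { buffer: Rc::clone(&backing), _start: 0, _len: 256 },
           Slice { buffer: Rc::clone(&backing), _start: 256, _len: 512 },
           Slice { buffer: Rc::clone(&backing), _start: 768, _len: 256 },
       ];
       // Naive per-slice reservation counts the shared buffer three times...
       let naive: usize = slices.iter().map(|s| s.reported_size()).sum();
       assert_eq!(naive, 3 * 1024);
       // ...while the process only holds the single shared allocation.
       assert_eq!(Rc::strong_count(&backing), 4); // 3 slices + `backing`
   }
   ```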
   
   The peak RSS stress test above did not show a material process-memory 
regression: Q7 was +1.61%, Q9 was +0.04%, and Q21 was -30.79%. Tight 
300-partition memory-limit probes were too slow to use as PR evidence, so the 
right follow-up is a purpose-built repartition memory benchmark and/or 
buffer-aware reservation accounting for shared slices.
   
   </details>
   
   ## Are there any user-facing changes?
   
   <!--
   If there are user-facing changes then we may require documentation to be 
updated before approving the PR.
   -->
   
   No API or user-facing behavior changes.
   
   <!--
   If there are any breaking changes to public APIs, please add the `api 
change` label.
   -->
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]