viirya opened a new pull request, #19558:
URL: https://github.com/apache/datafusion/pull/19558

   Optimized lpad and rpad functions to eliminate per-row allocations by 
reusing buffers for graphemes and fill characters.
   
   The previous implementation allocated new Vec<&str> for graphemes and 
Vec<char> for fill characters on every row, which was inefficient. This 
optimization introduces reusable buffers that are allocated once and 
cleared/refilled for each row.
   
   Changes:
   - lpad: Added graphemes_buf and fill_chars_buf outside the row loops; the buffers are cleared and refilled per row via .clear() and .extend() instead of allocating a new Vec each time
   - rpad: Added graphemes_buf outside the row loops and reused it across iterations in the same way
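
   The pattern is roughly the following (a simplified sketch, not DataFusion's exact code: it uses `char`s instead of grapheme clusters to avoid the unicode-segmentation dependency, and the function name and signature are illustrative). The key point is that `Vec::clear()` keeps the allocation, so `.extend()` on later rows reuses the same backing memory:

   ```rust
   // Pad each input row on the left to `target` characters using `fill`
   // (assumed non-empty), reusing two buffers across all rows instead of
   // allocating fresh Vecs per row.
   fn lpad_all(rows: &[&str], target: usize, fill: &str) -> Vec<String> {
       // Allocated once, outside the per-row loop.
       let mut chars_buf: Vec<char> = Vec::new();
       let mut fill_buf: Vec<char> = Vec::new();
       fill_buf.extend(fill.chars());

       rows.iter()
           .map(|s| {
               // clear() drops the elements but keeps the capacity,
               // so extend() below does not reallocate on later rows.
               chars_buf.clear();
               chars_buf.extend(s.chars());
               if chars_buf.len() >= target {
                   // Input already long enough: truncate to target.
                   chars_buf[..target].iter().collect()
               } else {
                   let pad = target - chars_buf.len();
                   let mut out = String::with_capacity(target);
                   for i in 0..pad {
                       // Cycle through the fill pattern.
                       out.push(fill_buf[i % fill_buf.len()]);
                   }
                   out.extend(chars_buf.iter());
                   out
               }
           })
           .collect()
   }
   ```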
   
   Optimization impact:
   - For lpad with fill parameter: Eliminates 2 Vec allocations per row 
(graphemes + fill_chars)
   - For lpad without fill: Eliminates 1 Vec allocation per row (graphemes)
   - For rpad: Eliminates 1 Vec allocation per row (graphemes)
   
   This optimization is particularly effective for:
   - Large arrays with many rows
   - Strings with multiple graphemes (unicode characters)
   - Workloads with custom fill patterns
   
   Benchmark results comparing main vs optimized branch:
   
   lpad benchmarks:
   - size=1024, str_len=5, target=20:  116.53 µs -> 63.226 µs (45.7% faster)
   - size=1024, str_len=20, target=50: 314.07 µs -> 190.30 µs (39.4% faster)
   - size=4096, str_len=5, target=20:  467.35 µs -> 261.29 µs (44.1% faster)
   - size=4096, str_len=20, target=50: 1.2286 ms -> 754.24 µs (38.6% faster)
   
   rpad benchmarks:
   - size=1024, str_len=5, target=20:  113.89 µs -> 72.645 µs (36.2% faster)
   - size=1024, str_len=20, target=50: 313.68 µs -> 202.98 µs (35.3% faster)
   - size=4096, str_len=5, target=20:  456.08 µs -> 295.57 µs (35.2% faster)
   - size=4096, str_len=20, target=50: 1.2523 ms -> 818.47 µs (34.6% faster)
   
   Overall improvements: 35-46% faster across all workloads
   
   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   - Closes #.
   
   ## Rationale for this change
   
   <!--
    Why are you proposing this change? If this is already explained clearly in 
the issue then this section is not needed.
    Explaining clearly why changes are proposed helps reviewers understand your 
changes and offer better suggestions for fixes.  
   -->
   
   ## What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   ## Are these changes tested?
   
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are 
they covered by existing tests)?
   -->
   
   ## Are there any user-facing changes?
   
   <!--
   If there are user-facing changes then we may require documentation to be 
updated before approving the PR.
   -->
   
   <!--
   If there are any breaking changes to public APIs, please add the `api 
change` label.
   -->
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
