JerAguilon opened a new pull request, #41874:
URL: https://github.com/apache/arrow/pull/41874
### Rationale for this change
This PR is a performance optimization of the asof join, so there are no visible behavioral/test changes.

Please read https://github.com/apache/arrow/issues/41873, where I explain exactly why this optimization works. The idea is that for the left-hand side of the join, rather than copying data into the output arrays cell by cell, we can take `Array::Slice`s, which are zero-copy and have minimal overhead. This results in large speedups that scale with the number of LHS columns being emitted.
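For context, here is a small illustrative snippet (not code from this PR) showing why `Array::Slice` is cheap: the slice is a view that shares the parent array's buffers, so its cost does not depend on how many rows or columns are being emitted.

```cpp
#include <arrow/api.h>

#include <cassert>
#include <memory>

// Illustration only: slicing an array copies no values, it just records an
// offset/length against the same underlying buffers.
arrow::Status SliceDemo() {
  arrow::Int64Builder builder;
  for (int64_t i = 0; i < 1000; ++i) {
    ARROW_RETURN_NOT_OK(builder.Append(i));
  }
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::Array> values, builder.Finish());

  // O(1): the view shares the parent's data buffer rather than copying it.
  std::shared_ptr<arrow::Array> view = values->Slice(/*offset=*/100, /*length=*/50);
  assert(view->length() == 50);
  assert(view->data()->buffers[1] == values->data()->buffers[1]);
  return arrow::Status::OK();
}
```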
### What changes are included in this PR?
**Note**: To reduce merge-conflict headaches, I have rebased this PR on top of https://github.com/apache/arrow/pull/41125, since I am aware it was just accepted.

Aside from the changes inherited from that parent PR, the changes are mostly localized to `unmaterialized_table.h`. We add a new field, `contiguous_srcs`, which contains the set of table IDs that can simply be `Slice`d. `asof_join_node.cc` then initializes an `UnmaterializedCompositeTable` with this new field:
```cpp
return CompositeTable{schema, inputs.size(), dst_to_src, pool,
                      /*contiguous_sources=*/{0}};
```
This indicates that table ID 0 (i.e., the left-hand side) can be sliced cheaply.
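For intuition, here is a rough sketch of how such a flag can be used (the `EmitColumn` helper below is hypothetical and simplified to a single `int64` column, not the PR's actual internals): a contiguous source can be emitted as a zero-copy slice, while other sources still go through the per-cell builder path.

```cpp
#include <arrow/api.h>

#include <memory>

// Hypothetical helper, not the PR's code: emit `num_rows` rows of one LHS
// column starting at `start_row`.
arrow::Result<std::shared_ptr<arrow::Array>> EmitColumn(
    const std::shared_ptr<arrow::Int64Array>& src, int64_t start_row,
    int64_t num_rows, bool contiguous) {
  if (contiguous) {
    // O(1) regardless of num_rows: the slice shares the source buffers.
    return src->Slice(start_row, num_rows);
  }
  // Fallback: O(num_rows) per column, appending each cell through a builder.
  arrow::Int64Builder builder;
  ARROW_RETURN_NOT_OK(builder.Reserve(num_rows));
  for (int64_t i = 0; i < num_rows; ++i) {
    if (src->IsNull(start_row + i)) {
      builder.UnsafeAppendNull();
    } else {
      builder.UnsafeAppend(src->Value(start_row + i));
    }
  }
  return builder.Finish();
}
```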
### Are these changes tested?
Yes - here are some results from running `arrow-acero-asof-join-benchmark`: https://gist.github.com/JerAguilon/68568525f3818f60dc2ffcfe5eb6aba2

This was run on a 14" Apple M1 Pro with 32 GB of RAM.

Generally, we see a 30-65% improvement in rows/sec with no discernible change in peak memory. The size of the improvement scales with the number of columns on the LHS.

Anecdotally, there _are_ peak-memory improvements at very large scale. I've personally asof-joined 50GB+ Parquet files; at that size a nontrivial backlog of work can accumulate on the producer thread, and emitting rows faster keeps that backlog smaller.
### Are there any user-facing changes?