2010YOUY01 opened a new pull request, #18644:
URL: https://github.com/apache/datafusion/pull/18644
## Which issue does this PR close?
<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases. You can
link an issue to this PR using the GitHub syntax. For example `Closes #123`
indicates that this PR will close issue #123.
-->
- Closes #.
## Rationale for this change
<!--
Why are you proposing this change? If this is already explained clearly in
the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand your
changes and offer better suggestions for fixes.
-->
Background for dynamic filter:
https://datafusion.apache.org/blog/2025/09/10/dynamic-filters/
The following queries, which compute a global minimum over the table, are used as examples:
```
-- Q1
select min(l_shipdate) from lineitem;
-- Q2
select min(l_shipdate) from lineitem where l_returnflag = 'R';
```
Q1 can already be executed very efficiently by directly checking the file metadata
when possible:
```
> explain select min(l_shipdate) from lineitem;
+---------------+-------------------------------+
| plan_type | plan |
+---------------+-------------------------------+
| physical_plan | ┌───────────────────────────┐ |
| | │ ProjectionExec │ |
| | │ -------------------- │ |
| | │ min(lineitem.l_shipdate): │ |
| | │ 1992-01-02 │ |
| | └─────────────┬─────────────┘ |
| | ┌─────────────┴─────────────┐ |
| | │ PlaceholderRowExec │ |
| | └───────────────────────────┘ |
| | |
+---------------+-------------------------------+
1 row(s) fetched.
Elapsed 0.007 seconds.
```
However, Q2 currently still scans the whole table, and dynamic filters can be used
to speed it up.
### Benchmarking Q2
#### Setup
1. Generate the TPC-H SF100 Parquet data with `tpchgen-cli -s 100 --format=parquet`
(https://github.com/clflushopt/tpchgen-rs/tree/main/tpchgen-cli)
2. In datafusion-cli, run
```
CREATE EXTERNAL TABLE lineitem
STORED AS PARQUET
LOCATION '/Users/yongting/data/tpch_sf100/lineitem.parquet';
select min(l_shipdate) from lineitem where l_returnflag = 'R';
```
#### Result
- Main: 0.55s
- This PR: 0.09s
### Aggregate Dynamic Filter Pushdown Overview
For a query like
```
-- `example_table(type TEXT, val INT)`
SELECT min(val)
FROM example_table
WHERE type='A';
```
where `example_table` is physically stored as partitioned Parquet files with
column statistics:
- part-0.parquet: val {min=0, max=100}
- part-1.parquet: val {min=100, max=200}
- ...
- part-100.parquet: val {min=10000, max=10100}
After scanning the first file, we know that a remaining file only has to be read
if its minimum `val` is less than 0, the minimum `val` found in the first file.
We can skip scanning the remaining files by implementing a dynamic filter. The
intuition is to keep a shared data structure holding the current minimum in both
`AggregateExec` and `DataSourceExec`, and to update it during execution, so the
scanner can tell at runtime whether certain files can be skipped. See the
physical optimizer rule `FilterPushdown` for details.
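As a minimal Rust sketch of the skip decision, assuming a single `min(val)` and
per-file footer statistics (the names below are illustrative, not the PR's code):
```
/// Minimum value of `val` recorded in a file's footer statistics.
struct FileStats { val_min: i64 }

/// Once the first file has been scanned and a running minimum is known,
/// a remaining file only needs to be read if it could contain a smaller value.
fn must_scan(file: &FileStats, current_min: i64) -> bool {
    file.val_min < current_min
}

fn main() {
    let current_min = 0; // minimum `val` found in part-0.parquet
    assert!(!must_scan(&FileStats { val_min: 100 }, current_min)); // part-1.parquet: skip
    assert!(!must_scan(&FileStats { val_min: 10000 }, current_min)); // part-100.parquet: skip
    assert!(must_scan(&FileStats { val_min: -3 }, current_min)); // may hold a smaller value: read
}
```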
### Implementation
#### Enable Condition
- No grouping (no `GROUP BY` clause in the SQL; only a single global group to
aggregate).
- The aggregate expressions must be `min`/`max`, evaluated directly on columns.

Note that multiple aggregate expressions satisfying this requirement are allowed;
the dynamic filter is constructed by combining the states of all applicable
expressions. See the example with a dynamic filter on multiple columns in the
Filter Construction section below.
#### Filter Construction
The filter is kept in the `DataSourceExec` and is updated during execution. The
reader interprets it as "the upstream only needs rows for which this filter
predicate evaluates to true", and certain scanner implementations such as
`parquet` can evaluate column statistics against the dynamic filter to decide
whether a whole range can be pruned.
**Examples**
- Expr: `min(a)`, Dynamic Filter: `a < a_cur_min`
- Expr: `min(a), max(a), min(b)`, Dynamic Filter: `(a < a_cur_min) OR (a >
a_cur_max) OR (b < b_cur_min)`
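As a rough illustration of how a scanner could evaluate per-file column statistics
against such a combined filter (the types and the `can_prune` helper below are
simplified stand-ins, not DataFusion's actual pruning API):
```
/// Per-column min/max statistics for one file or row group.
struct ColStats { min: i64, max: i64 }

/// Current bounds collected by the aggregate so far.
struct CurrentBounds { a_cur_min: i64, a_cur_max: i64, b_cur_min: i64 }

/// The dynamic filter is `(a < a_cur_min) OR (a > a_cur_max) OR (b < b_cur_min)`.
/// A file can be pruned only if no row in it can possibly satisfy the filter,
/// i.e. every disjunct is provably false given the file's statistics.
fn can_prune(a: &ColStats, b: &ColStats, bounds: &CurrentBounds) -> bool {
    let a_may_improve_min = a.min < bounds.a_cur_min; // some row may have a < a_cur_min
    let a_may_improve_max = a.max > bounds.a_cur_max; // some row may have a > a_cur_max
    let b_may_improve_min = b.min < bounds.b_cur_min; // some row may have b < b_cur_min
    !(a_may_improve_min || a_may_improve_max || b_may_improve_min)
}

fn main() {
    let bounds = CurrentBounds { a_cur_min: 0, a_cur_max: 500, b_cur_min: 10 };
    // a stays within [0, 500] and b never goes below 10: this file cannot change any bound.
    assert!(can_prune(
        &ColStats { min: 5, max: 400 },
        &ColStats { min: 20, max: 30 },
        &bounds
    ));
    // a may reach 600 > a_cur_max: this file must still be scanned.
    assert!(!can_prune(
        &ColStats { min: 5, max: 600 },
        &ColStats { min: 20, max: 30 },
        &bounds
    ));
}
```
A file is pruned only when every disjunct is provably false for that file, which
is exactly when none of its rows can improve any of the current bounds.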
## What changes are included in this PR?
<!--
There is no need to duplicate the description in the issue here but it is
sometimes worth providing a summary of the individual changes in this PR.
-->
The goal is to let `MIN`/`MAX` aggregate expressions whose argument is a plain
column reference (e.g. `min(col1)`) support dynamic filters; the rationale above
explains the approach.
The implementation includes:
1. Added the `AggrDynFilter` struct, which is shared across partition streams to
store the current bounds used for dynamic filter updates (see the sketch after
this list).
2. `init_dynamic_filter` is responsible for checking whether the conditions for
enabling the dynamic filter hold for the current aggregate execution plan, and
finally builds the `AggrDynFilter` inside the operator.
3. During aggregate execution, after each batch is evaluated, the current bound is
refreshed in the dynamic filter, enabling the scanner to skip prunable units using
the latest runtime bounds. (Right now it updates on every batch; perhaps it could
update every k batches to avoid overhead?)
4. Updated the `gather_filters_for_pushdown` and `handle_child_pushdown_result`
APIs in `AggregateExec` to enable self dynamic filter generation and pushdown.
5. Added a configuration option to turn the feature on/off.
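A highly simplified Rust sketch of the shared-bounds shape described in items 1-3
above (the real `AggrDynFilter` holds DataFusion physical expressions; the field
names, the single-column bound, and the `k`-batch flush below are assumptions for
illustration only):
```
use std::sync::{Arc, Mutex};

/// Illustrative stand-in for the shared bound state: one instance is shared by
/// every partition stream of the aggregate, so all streams tighten the same bound.
#[derive(Default)]
struct SharedBounds {
    val_min: Mutex<Option<i64>>,
}

impl SharedBounds {
    /// Tighten the shared minimum if this partition found a smaller value.
    fn tighten_min(&self, candidate: i64) {
        let mut cur = self.val_min.lock().unwrap();
        *cur = Some((*cur).map_or(candidate, |m| m.min(candidate)));
    }
}

/// One partition stream: keep a local running minimum and flush it to the shared
/// filter every `k` batches (the throttling idea mentioned in item 3).
fn process_batches(bounds: &Arc<SharedBounds>, batch_mins: &[i64], k: usize) {
    let mut local_min: Option<i64> = None;
    for (i, &batch_min) in batch_mins.iter().enumerate() {
        local_min = Some(local_min.map_or(batch_min, |m| m.min(batch_min)));
        if (i + 1) % k == 0 {
            bounds.tighten_min(local_min.unwrap()); // periodic update of the dynamic filter
        }
    }
    if let Some(m) = local_min {
        bounds.tighten_min(m); // final flush so the last partial window is not lost
    }
}

fn main() {
    let bounds = Arc::new(SharedBounds::default());
    process_batches(&bounds, &[50, 7, 30, 12], 2);
    assert_eq!(*bounds.val_min.lock().unwrap(), Some(7));
}
```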
## Are these changes tested?
<!--
We typically require tests for all PRs in order to:
1. Prevent the code from being accidentally broken by subsequent changes
2. Serve as another way to document the expected behavior of the code
If tests are not included in your PR, please explain why (for example, are
they covered by existing tests)?
-->
Yes, optimizer unit tests and end-to-end tests.
## Are there any user-facing changes?
No
<!--
If there are user-facing changes then we may require documentation to be
updated before approving the PR.
-->
<!--
If there are any breaking changes to public APIs, please add the `api
change` label.
-->