This is an automated email from the ASF dual-hosted git repository.
github-bot pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/datafusion.git
The following commit(s) were added to refs/heads/main by this push:
new 2589fa8ac0 doc: Add documentation for pushing limit into plan (#20271)
2589fa8ac0 is described below
commit 2589fa8ac01b2911616c9e3fc81a498d91658a5d
Author: Yongting You <[email protected]>
AuthorDate: Wed Mar 11 18:11:14 2026 +0800
doc: Add documentation for pushing limit into plan (#20271)
## Which issue does this PR close?
<!--
We generally require a GitHub issue to be filed for all bug fixes and
enhancements and this helps us generate change logs for our releases.
You can link an issue to this PR using the GitHub syntax. For example
`Closes #123` indicates that this PR will close issue #123.
-->
- Closes #.
## Rationale for this change
<!--
Why are you proposing this change? If this is already explained clearly
in the issue then this section is not needed.
Explaining clearly why changes are proposed helps reviewers understand
your changes and offer better suggestions for fixes.
-->
Besides pushing `LimitExec` down the query plan, there is another
optimization that allows plan nodes to *absorb* a limit, so they can
potentially stop early.
I’ve noticed that this form of limit absorption has not been implemented
by many operators. This suggests the optimization is non-obvious, so I’d
like to improve the documentation for it.
A recent PR that implements this optimization is:
- https://github.com/apache/datafusion/pull/20228
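
As a rough illustration of why absorbing the limit helps, here is a
self-contained sketch (hypothetical toy types, not DataFusion's actual API)
of an output buffer that coalesces rows into fixed-size batches but stops
once an optional `fetch` limit is satisfied:

```rust
/// Toy model of a limited output buffer: rows are grouped into batches of
/// `batch_size`, and production stops once `fetch` rows have been emitted.
struct LimitedCoalescer {
    batch_size: usize,
    fetch: Option<usize>,
    emitted: usize,
}

impl LimitedCoalescer {
    fn new(batch_size: usize, fetch: Option<usize>) -> Self {
        Self { batch_size, fetch, emitted: 0 }
    }

    /// Sizes of the batches produced for `total_rows` available input rows.
    fn batch_sizes(&mut self, total_rows: usize) -> Vec<usize> {
        // Never emit more rows than the fetch limit (if any) allows.
        let limit = self.fetch.unwrap_or(total_rows).min(total_rows);
        let mut sizes = Vec::new();
        while self.emitted < limit {
            let n = self.batch_size.min(limit - self.emitted);
            sizes.push(n);
            self.emitted += n;
        }
        sizes
    }
}

fn main() {
    // Without a limit: full 8192-row batches plus a partial final batch.
    let mut no_limit = LimitedCoalescer::new(8192, None);
    println!("{:?}", no_limit.batch_sizes(20_000)); // [8192, 8192, 3616]

    // With fetch=10 absorbed: a single 10-row batch, then stop.
    let mut with_fetch = LimitedCoalescer::new(8192, Some(10));
    println!("{:?}", with_fetch.batch_sizes(20_000)); // [10]
}
```

Without the absorbed limit, an operator above would keep producing rows until
a full batch is ready, even when only a handful are needed.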
## What changes are included in this PR?
<!--
There is no need to duplicate the description in the issue here but it
is sometimes worth providing a summary of the individual changes in this
PR.
-->
## Are these changes tested?
<!--
We typically require tests for all PRs in order to:
1. Prevent the code from being accidentally broken by subsequent changes
2. Serve as another way to document the expected behavior of the code
If tests are not included in your PR, please explain why (for example,
are they covered by existing tests)?
-->
## Are there any user-facing changes?
<!--
If there are user-facing changes then we may require documentation to be
updated before approving the PR.
-->
<!--
If there are any breaking changes to public APIs, please add the `api
change` label.
-->
---
.../physical-optimizer/src/limit_pushdown.rs | 41 ++++++++++++++++++++++
datafusion/physical-plan/src/execution_plan.rs | 4 +++
2 files changed, 45 insertions(+)
diff --git a/datafusion/physical-optimizer/src/limit_pushdown.rs b/datafusion/physical-optimizer/src/limit_pushdown.rs
index e7bede494d..b556037699 100644
--- a/datafusion/physical-optimizer/src/limit_pushdown.rs
+++ b/datafusion/physical-optimizer/src/limit_pushdown.rs
@@ -17,6 +17,47 @@
//! [`LimitPushdown`] pushes `LIMIT` down through `ExecutionPlan`s to reduce
//! data transfer as much as possible.
+//!
+//! # Plan Limit Absorption
+//! In addition to pushing down [`LimitExec`] in the plan, some operators can
+//! "absorb" a limit and stop early during execution.
+//!
+//! ## Background: vectorized volcano execution model
+//! DataFusion uses a batched volcano model. For most operators, output is
+//! produced in batches of `datafusion.execution.batch_size` (default 8192), so
+//! the batch sizes typically look like:
+//! ```text
+//! 8192, 8192, ..., 8192, 100 (the final batch may be partial)
+//! ```
+//!
+//! ## Example
+//! For a join with an expensive, selective predicate:
+//! ```text
+//! LimitExec(fetch=10)
+//! -- NestedLoopJoinExec(on=expr_expensive_and_selective)
+//! --- DataSourceExec()
+//! --- DataSourceExec()
+//! ```
+//!
+//! Under this model, `NestedLoopJoinExec` would keep working until it can emit
+//! a full batch (8192 rows), even though the query only needs 10. If the limit
+//! cannot be pushed below the join, we can still embed it inside the join so it
+//! stops once the limit is satisfied. The transformed plan looks like:
+//!
+//! ```text
+//! NestedLoopJoinExec(on=expr_expensive_and_selective, fetch=10)
+//! --- DataSourceExec()
+//! --- DataSourceExec()
+//! ```
+//!
+//! ## Implementation
+//! The current optimizer rule optionally pushes `fetch` requirements into
+//! operators via [`ExecutionPlan::with_fetch`].
+//!
+//! To support early termination in operators, [`LimitedBatchCoalescer`](https://docs.rs/datafusion/latest/datafusion/physical_plan/coalesce/struct.LimitedBatchCoalescer.html)
+//! can help manage the output buffer.
+//!
+//! Reference implementation in Hash Join: <https://github.com/apache/datafusion/pull/20228>
use std::fmt::Debug;
use std::sync::Arc;
diff --git a/datafusion/physical-plan/src/execution_plan.rs b/datafusion/physical-plan/src/execution_plan.rs
index d1e0978cfe..a97bb8c865 100644
--- a/datafusion/physical-plan/src/execution_plan.rs
+++ b/datafusion/physical-plan/src/execution_plan.rs
@@ -579,6 +579,10 @@ pub trait ExecutionPlan: Debug + DisplayAs + Send + Sync {
/// Returns a fetching variant of this `ExecutionPlan` node, if it supports
/// fetch limits. Returns `None` otherwise.
+ ///
+ /// See physical optimizer rule [`limit_pushdown`] for details.
+ ///
+/// [`limit_pushdown`]: https://docs.rs/datafusion/latest/datafusion/physical_optimizer/limit_pushdown/index.html
fn with_fetch(&self, _limit: Option<usize>) -> Option<Arc<dyn ExecutionPlan>> {
None
}
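
For readers unfamiliar with the `with_fetch` hook touched by this patch, the
pattern can be sketched in isolation. The trait below is a hypothetical toy,
not DataFusion's real `ExecutionPlan`: an operator that can absorb a limit
returns a copy of itself carrying the fetch, while the default says "no":

```rust
use std::sync::Arc;

/// Toy stand-in for an execution-plan trait (hypothetical, simplified).
trait Plan {
    /// Return a fetching variant of this node, or `None` if the node
    /// cannot absorb a limit.
    fn with_fetch(&self, _limit: Option<usize>) -> Option<Arc<dyn Plan>> {
        None // default: limit absorption is not supported
    }

    /// The absorbed fetch limit, if any.
    fn fetch(&self) -> Option<usize>;
}

/// A join-like node that opts in to limit absorption.
struct JoinNode {
    fetch: Option<usize>,
}

impl Plan for JoinNode {
    fn with_fetch(&self, limit: Option<usize>) -> Option<Arc<dyn Plan>> {
        // Absorb the limit by returning a new node carrying `fetch`;
        // at execution time the node would stop once `fetch` rows are out.
        Some(Arc::new(JoinNode { fetch: limit }))
    }

    fn fetch(&self) -> Option<usize> {
        self.fetch
    }
}

fn main() {
    let join = JoinNode { fetch: None };
    let limited = join.with_fetch(Some(10)).expect("join supports fetch");
    println!("{:?}", limited.fetch()); // Some(10)
}
```

The optimizer rule's job is then only to call `with_fetch` on nodes it cannot
push the limit below, replacing them when the call returns `Some`.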
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]