adriangb commented on code in PR #18393:
URL: https://github.com/apache/datafusion/pull/18393#discussion_r2546312959
##########
datafusion/common/src/config.rs:
##########
@@ -1019,6 +1019,22 @@ config_namespace! {
/// will be collected into a single partition
pub hash_join_single_partition_threshold_rows: usize, default = 1024 * 128
+ /// Maximum size in bytes for the build side of a hash join to be pushed down as an InList expression for dynamic filtering.
+ /// Build sides larger than this will use hash table lookups instead.
+ /// Set to 0 to always use hash table lookups.
+ ///
+ /// InList pushdown can be more efficient for small build sides because it can result in better
+ /// statistics pruning as well as use any bloom filters present on the scan side.
+ /// InList expressions are also more transparent and easier to serialize over the network in distributed uses of DataFusion.
+ /// On the other hand InList pushdown requires making a copy of the data and thus adds some overhead to the build side and uses more memory.
+ ///
+ /// This setting is per-partition, so we may end up using `hash_join_inlist_pushdown_max_size` * `target_partitions` memory.
+ ///
+ /// The default is 128kB per partition.
+ /// This should allow point lookup joins (e.g. joining on a unique primary key) to use InList pushdown in most cases
+ /// but avoids excessive memory usage or overhead for larger joins.
+ pub hash_join_inlist_pushdown_max_size: usize, default = 128 * 1024
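A minimal sketch of how the new option might be set, assuming it lands in the `optimizer` namespace alongside `hash_join_single_partition_threshold_rows` as the diff suggests (i.e. as `datafusion.optimizer.hash_join_inlist_pushdown_max_size`):

```rust
use datafusion::prelude::{SessionConfig, SessionContext};

fn main() {
    let mut config = SessionConfig::new();
    // Raise the per-partition budget to 1 MiB; setting it to 0 would always
    // fall back to hash table lookups for dynamic filter pushdown.
    // (Field name assumes the option is added exactly as in the diff above.)
    config.options_mut().optimizer.hash_join_inlist_pushdown_max_size = 1024 * 1024;
    let _ctx = SessionContext::new_with_config(config);
}
```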
Review Comment:
> For instance, we could end up with a very large list like x IN (1, 2, 3, ..., 1000000) that fits in 128KB but is still inefficient because we'd be duplicating values and performance might decrease

Could you elaborate? In my mind a large InList is not any more or less efficient than pushing down the hash table itself, but if it's big it loses access to the bloom filter pushdown optimization, so it's probably not faster than pushing down the hash table. That said, there are still reasons to push it down instead, namely that custom execution nodes that downcast match on a PhysicalExpr can recognize it.

So the idea with the 128kB limit is to balance how much CPU we burn upfront building the filter. But I agree it could be expressed in terms of rows as well.
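To make the "custom execution nodes can recognize it" point concrete, here is a rough sketch (not from the PR; the function name is illustrative) of a node downcast-matching a pushed-down filter, assuming the filter arrives as an `Arc<dyn PhysicalExpr>` and uses DataFusion's existing `InListExpr`:

```rust
use std::sync::Arc;

use datafusion::physical_expr::expressions::InListExpr;
use datafusion::physical_plan::PhysicalExpr;

/// A custom node inspecting a dynamic filter pushed down from a hash join.
fn describe_pushed_filter(filter: &Arc<dyn PhysicalExpr>) {
    if let Some(in_list) = filter.as_any().downcast_ref::<InListExpr>() {
        // The literal values are visible, so they could be forwarded to a
        // remote scan, an index lookup, or serialized for a distributed plan.
        println!("InList filter with {} values", in_list.list().len());
    } else {
        // A hash-table-backed expression is opaque: it can only be evaluated.
        println!("opaque filter: {filter}");
    }
}
```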
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]