Hi,

I'm wondering whether the current version of Spark still supports bucket
pruning. I see the pull request <https://github.com/apache/spark/pull/10942>
that introduced the change, but the logic that actually skips reading
buckets appears to have been removed as part of other PRs
<https://github.com/apache/spark/pull/12300>, and the check in
BucketedReadSuite that verifies pruned buckets are empty is currently
commented out
<https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala#L114>.
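
For concreteness, here is a minimal sketch of the kind of query I have in
mind (the table and column names are just placeholders): a table bucketed
on id, queried with an equality filter on the bucketing column, which is
where I'd expect pruning to skip all but one bucket.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("bucket-pruning-sketch").getOrCreate()

    // Write a table bucketed into 8 buckets on the id column.
    spark.range(0, 1000000)
      .write
      .bucketBy(8, "id")
      .sortBy("id")
      .saveAsTable("bucketed_ids")

    // If bucket pruning is applied, only the single bucket that can
    // contain id = 42 should be scanned for this equality filter.
    spark.table("bucketed_ids").filter("id = 42").show()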

Thanks,
Joe
