wangyum commented on pull request #29243:
URL: https://github.com/apache/spark/pull/29243#issuecomment-676111794


   Before this PR, we can only prune on the `fact_stats.store_id` column:
   ```
   == Physical Plan ==
   *(3) Project [store_id#2705, code#2706, product_id#2708L]
   +- *(3) BroadcastHashJoin [cast(store_id#2705 as bigint)], [store_id#2709L], Inner, BuildRight, false
      :- *(3) Project [store_id#2705, code#2706]
      :  +- *(3) BroadcastHashJoin [store_id#2705], [store_id#2707], Inner, BuildRight, false
      :     :- *(3) ColumnarToRow
      :     :  +- FileScan parquet default.fact_stats[store_id#2705] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [isnotnull(store_id#2705), dynamicpruningexpression(cast(store_id#2705 as bigint) IN subquery#2716)], PushedFilters: [], ReadSchema: struct<>
      :     :        +- Subquery subquery#2716, [id=#245]
      :     :           +- *(2) HashAggregate(keys=[store_id#2709L#2715L], functions=[])
      :     :              +- Exchange hashpartitioning(store_id#2709L#2715L, 5), true, [id=#241]
      :     :                 +- *(1) HashAggregate(keys=[store_id#2709L AS store_id#2709L#2715L], functions=[])
      :     :                    +- *(1) Filter ((isnotnull(product_id#2708L) AND (product_id#2708L < 3)) AND isnotnull(store_id#2709L))
      :     :                       +- *(1) ColumnarToRow
      :     :                          +- FileScan parquet default.product[product_id#2708L,store_id#2709L] Batched: true, DataFilters: [isnotnull(product_id#2708L), (product_id#2708L < 3), isnotnull(store_id#2709L)], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [], PushedFilters: [IsNotNull(product_id), LessThan(product_id,3), IsNotNull(store_id)], ReadSchema: struct<product_id:bigint,store_id:bigint>
      :     +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[1, int, true] as bigint)),false), [id=#275]
      :        +- *(1) ColumnarToRow
      :           +- FileScan parquet default.code_stats[code#2706,store_id#2707] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [isnotnull(store_id#2707)], PushedFilters: [], ReadSchema: struct<code:int>
      +- BroadcastExchange HashedRelationBroadcastMode(List(input[1, bigint, false]),false), [id=#283]
         +- *(2) Filter ((isnotnull(product_id#2708L) AND (product_id#2708L < 3)) AND isnotnull(store_id#2709L))
            +- *(2) ColumnarToRow
               +- FileScan parquet default.product[product_id#2708L,store_id#2709L] Batched: true, DataFilters: [isnotnull(product_id#2708L), (product_id#2708L < 3), isnotnull(store_id#2709L)], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [], PushedFilters: [IsNotNull(product_id), LessThan(product_id,3), IsNotNull(store_id)], ReadSchema: struct<product_id:bigint,store_id:bigint>
   ```
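
   The query shape can be reconstructed from the scans and join keys above. A minimal sketch, assuming an active `spark` session and that `fact_stats` and `code_stats` are partitioned by `store_id` while `product` is not (the exact test setup is not shown in this comment):
   ```
   // Hypothetical reconstruction of the query behind these plans; the
   // table layout is an assumption inferred from the FileScan nodes.
   val df = spark.sql(
     """SELECT f.store_id, cs.code, p.product_id
       |FROM fact_stats f
       |JOIN code_stats cs ON f.store_id = cs.store_id
       |JOIN product p ON f.store_id = p.store_id
       |WHERE p.product_id < 3""".stripMargin)
   df.explain()
   ```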
   
   After this PR, we can also prune on the `code_stats.store_id` column:
   ```
   == Physical Plan ==
   *(3) Project [store_id#2705, code#2706, product_id#2708L]
   +- *(3) BroadcastHashJoin [cast(store_id#2705 as bigint)], [store_id#2709L], Inner, BuildRight, false
      :- *(3) Project [store_id#2705, code#2706]
      :  +- *(3) BroadcastHashJoin [store_id#2705], [store_id#2707], Inner, BuildRight, false
      :     :- *(3) ColumnarToRow
      :     :  +- FileScan parquet default.fact_stats[store_id#2705] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [isnotnull(store_id#2705), dynamicpruningexpression(cast(store_id#2705 as bigint) IN subquery#2716)], PushedFilters: [], ReadSchema: struct<>
      :     :        +- Subquery subquery#2716, [id=#250]
      :     :           +- *(2) HashAggregate(keys=[store_id#2709L#2715L], functions=[])
      :     :              +- Exchange hashpartitioning(store_id#2709L#2715L, 5), true, [id=#246]
      :     :                 +- *(1) HashAggregate(keys=[store_id#2709L AS store_id#2709L#2715L], functions=[])
      :     :                    +- *(1) Filter ((isnotnull(product_id#2708L) AND (product_id#2708L < 3)) AND isnotnull(store_id#2709L))
      :     :                       +- *(1) ColumnarToRow
      :     :                          +- FileScan parquet default.product[product_id#2708L,store_id#2709L] Batched: true, DataFilters: [isnotnull(product_id#2708L), (product_id#2708L < 3), isnotnull(store_id#2709L)], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [], PushedFilters: [IsNotNull(product_id), LessThan(product_id,3), IsNotNull(store_id)], ReadSchema: struct<product_id:bigint,store_id:bigint>
      :     +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[1, int, true] as bigint)),false), [id=#309]
      :        +- *(1) ColumnarToRow
      :           +- FileScan parquet default.code_stats[code#2706,store_id#2707] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [isnotnull(store_id#2707), dynamicpruningexpression(cast(store_id#2707 as bigint) IN subquery#2718)], PushedFilters: [], ReadSchema: struct<code:int>
      :                 +- Subquery subquery#2718, [id=#279]
      :                    +- *(2) HashAggregate(keys=[store_id#2709L#2717L], functions=[])
      :                       +- Exchange hashpartitioning(store_id#2709L#2717L, 5), true, [id=#275]
      :                          +- *(1) HashAggregate(keys=[store_id#2709L AS store_id#2709L#2717L], functions=[])
      :                             +- *(1) Filter ((isnotnull(product_id#2708L) AND (product_id#2708L < 3)) AND isnotnull(store_id#2709L))
      :                                +- *(1) ColumnarToRow
      :                                   +- FileScan parquet default.product[product_id#2708L,store_id#2709L] Batched: true, DataFilters: [isnotnull(product_id#2708L), (product_id#2708L < 3), isnotnull(store_id#2709L)], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [], PushedFilters: [IsNotNull(product_id), LessThan(product_id,3), IsNotNull(store_id)], ReadSchema: struct<product_id:bigint,store_id:bigint>
      +- BroadcastExchange HashedRelationBroadcastMode(List(input[1, bigint, false]),false), [id=#317]
         +- *(2) Filter ((isnotnull(product_id#2708L) AND (product_id#2708L < 3)) AND isnotnull(store_id#2709L))
            +- *(2) ColumnarToRow
               +- FileScan parquet default.product[product_id#2708L,store_id#2709L] Batched: true, DataFilters: [isnotnull(product_id#2708L), (product_id#2708L < 3), isnotnull(store_id#2709L)], Format: Parquet, Location: InMemoryFileIndex[file:/Users/yumwang/spark/SPARK-27227/sql/core/spark-warehouse/org.apache.spark..., PartitionFilters: [], PushedFilters: [IsNotNull(product_id), LessThan(product_id,3), IsNotNull(store_id)], ReadSchema: struct<product_id:bigint,store_id:bigint>
   ```
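
   The new `dynamicpruningexpression(cast(store_id#2707 as bigint) IN subquery#2718)` entry on the `default.code_stats` scan is the pruning this PR adds. A sketch of how a test might assert it, assuming the reconstructed query above and an active `spark` session (`spark.sql.optimizer.dynamicPartitionPruning.enabled` is the standard flag, on by default in Spark 3.x):
   ```
   // Enable dynamic partition pruning and count the pruning
   // expressions in the executed plan string.
   spark.conf.set("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")
   val plan = spark.sql(
     """SELECT f.store_id, cs.code, p.product_id
       |FROM fact_stats f
       |JOIN code_stats cs ON f.store_id = cs.store_id
       |JOIN product p ON f.store_id = p.store_id
       |WHERE p.product_id < 3""".stripMargin
   ).queryExecution.executedPlan.toString
   // Before this PR only the fact_stats scan is pruned (one match);
   // after it the code_stats scan is pruned as well (two matches).
   assert("dynamicpruningexpression".r.findAllIn(plan).size == 2)
   ```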

