wangyum commented on pull request #31485:
URL: https://github.com/apache/spark/pull/31485#issuecomment-774841202


It may just be that this is not explained correctly. For example:
```sql
create table t1 using parquet as select id as a, id as b, id as c from range(1000000);
create table t2 using parquet as select id as a, id as b, id as c from range(1000000);
create table t3 using parquet as select id as a, id as b, id as c from range(1000000);
analyze table t1 compute statistics for all columns;
analyze table t2 compute statistics for all columns;
analyze table t3 compute statistics for all columns;

set spark.sql.cbo.enabled=true;
explain cost
select * from t3 where c > (select max(t1.c) as tc from t1 join t2 on t1.a = t2.a and t2.b < 10);


== Optimized Logical Plan ==
Filter (isnotnull(c#35782L) AND (c#35782L > scalar-subquery#35774 [])), Statistics(sizeInBytes=30.5 MiB, rowCount=1.00E+6)
:  +- Aggregate [max(c#35785L) AS tc#35773L]
:     +- Project [c#35785L]
:        +- Join Inner, (a#35783L = a#35786L)
:           :- Project [a#35783L, c#35785L]
:           :  +- Filter isnotnull(a#35783L)
:           :     +- Relation[a#35783L,b#35784L,c#35785L] parquet
:           +- Project [a#35786L]
:              +- Filter ((isnotnull(b#35787L) AND (b#35787L < 10)) AND isnotnull(a#35786L))
:                 +- Relation[a#35786L,b#35787L,c#35788L] parquet
+- Relation[a#35780L,b#35781L,c#35782L] parquet, Statistics(sizeInBytes=30.5 MiB, rowCount=1.00E+6)

== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Filter (isnotnull(c#35782L) AND (c#35782L > Subquery subquery#35774, [id=#201]))
   :  +- Subquery subquery#35774, [id=#201]
   :     +- AdaptiveSparkPlan isFinalPlan=false
   :        +- HashAggregate(keys=[], functions=[max(c#35785L)], output=[tc#35773L])
   :           +- Exchange SinglePartition, ENSURE_REQUIREMENTS, [id=#199]
   :              +- HashAggregate(keys=[], functions=[partial_max(c#35785L)], output=[max#35791L])
   :                 +- Project [c#35785L]
   :                    +- BroadcastHashJoin [a#35783L], [a#35786L], Inner, BuildRight, false
   :                       :- Filter isnotnull(a#35783L)
   :                       :  +- FileScan parquet default.t1[a#35783L,c#35785L] Batched: true, DataFilters: [isnotnull(a#35783L)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/root/opensource/apache-spark/spark-warehouse/t1], PartitionFilters: [], PushedFilters: [IsNotNull(a)], ReadSchema: struct<a:bigint,c:bigint>
   :                       +- BroadcastExchange HashedRelationBroadcastMode(List(input[0, bigint, true]),false), [id=#194]
   :                          +- Project [a#35786L]
   :                             +- Filter ((isnotnull(b#35787L) AND (b#35787L < 10)) AND isnotnull(a#35786L))
   :                                +- FileScan parquet default.t2[a#35786L,b#35787L] Batched: true, DataFilters: [isnotnull(b#35787L), (b#35787L < 10), isnotnull(a#35786L)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/root/opensource/apache-spark/spark-warehouse/t2], PartitionFilters: [], PushedFilters: [IsNotNull(b), LessThan(b,10), IsNotNull(a)], ReadSchema: struct<a:bigint,b:bigint>
   +- FileScan parquet default.t3[a#35780L,b#35781L,c#35782L] Batched: true, DataFilters: [isnotnull(c#35782L)], Format: Parquet, Location: InMemoryFileIndex(1 paths)[file:/root/opensource/apache-spark/spark-warehouse/t3], PartitionFilters: [], PushedFilters: [IsNotNull(c)], ReadSchema: struct<a:bigint,b:bigint,c:bigint>
```
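The column statistics that drive this estimate can be inspected directly (a quick sketch against the tables from the repro above):

```sql
-- Column-level stats recorded by ANALYZE TABLE ... FOR ALL COLUMNS
-- (min, max, distinct count, null count, ...):
desc extended t2 b;

-- The size threshold the planner compares against when deciding
-- whether to broadcast; default is 10485760 bytes (10 MiB):
set spark.sql.autoBroadcastJoinThreshold;
```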
   
If the stats were incorrect, the filtered t2 side would be estimated as too large to broadcast and the join would be a SortMergeJoin (SMJ) instead; the BroadcastHashJoin above shows the statistics are being applied as expected.
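For comparison, the SMJ plan is easy to reproduce by disabling auto-broadcast (a minimal sketch reusing the same query; `-1` turns broadcast joins off):

```sql
set spark.sql.autoBroadcastJoinThreshold=-1;

explain
select * from t3 where c > (select max(t1.c) as tc from t1 join t2 on t1.a = t2.a and t2.b < 10);
-- The physical plan now shows SortMergeJoin instead of BroadcastHashJoin.
```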

