GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22957#discussion_r238524763
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/EnsureRequirements.scala ---
    @@ -145,9 +145,14 @@ case class EnsureRequirements(conf: SQLConf) extends Rule[SparkPlan] {
         assert(requiredChildDistributions.length == children.length)
         assert(requiredChildOrderings.length == children.length)
     
    +    val aliasMap = AttributeMap[Expression](children.flatMap(_.expressions.collect {
    +      case a: Alias => (a.toAttribute, a)
    +    }))
    +
         // Ensure that the operator's children satisfy their output distribution requirements.
         children = children.zip(requiredChildDistributions).map {
    -      case (child, distribution) if child.outputPartitioning.satisfies(distribution) =>
    +      case (child, distribution) if child.outputPartitioning.satisfies(
    +          distribution.mapExpressions(replaceAlias(_, aliasMap))) =>
    --- End diff --
    
    As an example, `ProjectExec.outputPartitioning` can be wrong, as it doesn't consider the aliases in the project list. I think it's clearer to adjust the `outputPartitioning` there, instead of dealing with it in a rule. What if we have more rules that need to check `outputPartitioning` against `requiredChildDistribution`?
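    
    To illustrate what "adjust the `outputPartitioning` there" could look like, here is a minimal sketch (not the actual Spark implementation): a hypothetical helper that `ProjectExec.outputPartitioning` might delegate to, rewriting the child's partitioning expressions through the aliases in the project list so that after `SELECT a AS b` a child hash-partitioned on `a` is reported as hash-partitioned on `b`. The helper name and its placement are assumptions; only `HashPartitioning` is handled here, other partitionings are passed through.
    
    ```scala
    import org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, AttributeMap, NamedExpression}
    import org.apache.spark.sql.catalyst.plans.physical.{HashPartitioning, Partitioning}
    
    // Hypothetical helper (name and placement are assumptions, not Spark code):
    // rewrite the child's partitioning through the project list's aliases.
    def projectPartitioning(
        projectList: Seq[NamedExpression],
        childPartitioning: Partitioning): Partitioning = {
      // Map each aliased child attribute to the attribute the project exposes it as.
      val aliasMap = AttributeMap(projectList.collect {
        case a @ Alias(attr: Attribute, _) => (attr, a.toAttribute)
      })
      childPartitioning match {
        case h: HashPartitioning =>
          // hashpartitioning(a) becomes hashpartitioning(b) after `a AS b`.
          h.copy(expressions = h.expressions.map(_.transform {
            case attr: Attribute => aliasMap.getOrElse(attr, attr)
          }))
        case other =>
          // Range/unknown partitionings are left untouched in this sketch.
          other
      }
    }
    ```
    
    With something like this in `ProjectExec`, `EnsureRequirements` could keep comparing `outputPartitioning` to `requiredChildDistribution` directly, without building an alias map inside the rule itself.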

