Github user tejasapatil commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16985#discussion_r102245647
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala ---
    @@ -33,8 +33,8 @@ import org.apache.spark.util.collection.BitSet
      * Performs a sort merge join of two child relations.
      */
     case class SortMergeJoinExec(
    -    leftKeys: Seq[Expression],
    -    rightKeys: Seq[Expression],
    +    var leftKeys: Seq[Expression],
    --- End diff --
    
    @hvanhovell : I had tried that, but it did not work for some classes of queries. When I try to get the `outputPartitioning` for a SMB node, in the [case of an inner join it is](https://github.com/apache/spark/blob/02f203107b8eda1f1576e36c4f12b0e3bc5e910e/sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala#L68) a `PartitioningCollection`. One of its children can contain a `ReusedExchange` that is yet to be resolved; if the other child is already resolved, then [this check](https://github.com/apache/spark/blob/39e2bad6a866d27c3ca594d15e574a1da3ee84cc/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala#L325) fails.
    
    Example query:
    ```
    SELECT a.i, b.i, c.i
    FROM mytable a, mytable b, mytable c
    WHERE a.i = b.i AND a.i = c.i
    ```
    
    Example query plan:
    ```
    :- *SortMergeJoin [i#8], [i#9], Inner
    :  :- *Sort [i#8 ASC NULLS FIRST], false, 0
    :  :  +- Exchange hashpartitioning(i#8, 200)
    :  :     +- *Project [i#8]
    :  :        +- *Filter isnotnull(i#8)
    :  :           +- *FileScan orc default.one_column[i#8] Batched: false, Format: ORC, Location: InMemoryFileIndex[file:/Users/tejasp/Desktop/dev/tp-spark/spark-warehouse/one_column], PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int>
    :  +- *Sort [i#9 ASC NULLS FIRST], false, 0
    :     +- ReusedExchange [i#9], Exchange hashpartitioning(i#8, 200)
    ```
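
    A toy model of the failure described above, with simplified stand-in types
    (I am assuming the linked check is `PartitioningCollection`'s `require` that
    all of its members report the same `numPartitions`; none of the classes
    below are Spark's):
    ```
    // Simplified stand-ins, not Spark's classes.
    sealed trait Partitioning { def numPartitions: Int }
    case class HashPartitioning(keys: Seq[String], numPartitions: Int) extends Partitioning
    // Placeholder for a child whose ReusedExchange has not been resolved yet.
    case class UnknownPartitioning(numPartitions: Int) extends Partitioning

    case class PartitioningCollection(partitionings: Seq[Partitioning]) extends Partitioning {
      // Assumed stand-in for the consistency check behind the second link above.
      require(partitionings.map(_.numPartitions).distinct.length == 1,
        "all partitionings must have the same numPartitions")
      def numPartitions: Int = partitionings.head.numPartitions
    }

    object Demo {
      def main(args: Array[String]): Unit = {
        val resolvedChild   = HashPartitioning(Seq("i#8"), 200)
        val unresolvedChild = UnknownPartitioning(0)
        // For an inner sort merge join the node exposes BOTH children's
        // partitionings as one collection, so computing outputPartitioning
        // blows up while one child is still unresolved.
        PartitioningCollection(Seq(resolvedChild, unresolvedChild))
      }
    }
    ```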
    Just so we are on the same page, here is what I had tried (I will update the PR with this change anyway; I know some unit tests would fail with it):
    ```
    import scala.collection.mutable.ArrayBuffer

    case class SortMergeJoinExec(
        leftKeys: Seq[Expression],
        rightKeys: Seq[Expression],
        ....) {

      // Reorder the join keys to line up with whichever child is already
      // hash-partitioned on them, so that no extra Exchange/Sort is added.
      lazy val (reorderedLeftKeys, reorderedRightKeys) = {
        def reorder(
            expectedOrderOfKeys: Seq[Expression],
            currentOrderOfKeys: Seq[Expression]): (Seq[Expression], Seq[Expression]) = {
          val leftKeysBuffer = ArrayBuffer[Expression]()
          val rightKeysBuffer = ArrayBuffer[Expression]()

          expectedOrderOfKeys.foreach { expression =>
            // Find where this key sits in the current order and pick the
            // corresponding left/right keys from that position.
            val index = currentOrderOfKeys.indexWhere(_.semanticEquals(expression))
            leftKeysBuffer.append(leftKeys(index))
            rightKeysBuffer.append(rightKeys(index))
          }
          (leftKeysBuffer, rightKeysBuffer)
        }

        left.outputPartitioning match {
          case HashPartitioning(leftExpressions, _)
              if leftExpressions.length == leftKeys.length &&
                leftKeys.forall(x => leftExpressions.exists(_.semanticEquals(x))) =>
            reorder(leftExpressions, leftKeys)

          case _ => right.outputPartitioning match {
            case HashPartitioning(rightExpressions, _)
                if rightExpressions.length == rightKeys.length &&
                  rightKeys.forall(x => rightExpressions.exists(_.semanticEquals(x))) =>
              reorder(rightExpressions, rightKeys)

            case _ => (leftKeys, rightKeys)
          }
        }
      }
    }
    ```
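
    The reordering itself is just a lock-step permutation of the two key lists.
    A standalone toy version of that idea (plain `String`s in place of Catalyst
    `Expression`s; every name below is made up for illustration):
    ```
    object ReorderKeysDemo {
      // Permute leftKeys/rightKeys so they follow expectedOrder, by looking up
      // each expected key's position in currentOrder (the real code would use
      // semanticEquals rather than plain equality).
      def reorder(
          expectedOrder: Seq[String],
          currentOrder: Seq[String],
          leftKeys: Seq[String],
          rightKeys: Seq[String]): (Seq[String], Seq[String]) = {
        val pairs = expectedOrder.map { key =>
          val index = currentOrder.indexWhere(_ == key)
          (leftKeys(index), rightKeys(index))
        }
        pairs.unzip
      }

      def main(args: Array[String]): Unit = {
        // Join written as a = x AND b = y, but the left child is already
        // hash-partitioned on (b, a): permute both sides to match it.
        val leftKeys  = Seq("a", "b")
        val rightKeys = Seq("x", "y")
        val expected  = Seq("b", "a")
        // Prints (List(b, a),List(y, x)): both sides reordered consistently,
        // so the existing partitioning of the left child can be reused.
        println(reorder(expected, leftKeys, leftKeys, rightKeys))
      }
    }
    ```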

