leanken commented on a change in pull request #29104:
URL: https://github.com/apache/spark/pull/29104#discussion_r459819627
########## File path: sql/core/src/main/scala/org/apache/spark/sql/execution/joins/BroadcastHashJoinExec.scala ##########

```diff
@@ -454,6 +491,48 @@ case class BroadcastHashJoinExec(
     val (matched, checkCondition, _) = getJoinCondition(ctx, input)
     val numOutput = metricTerm(ctx, "numOutputRows")

+    // fast stop if isOriginalInputEmpty = true, should accept all rows in streamedSide
+    if (broadcastRelation.value.isOriginalInputEmpty) {
+      return s"""
+        |// Anti Join isOriginalInputEmpty(true) accept all
+        |$numOutput.add(1);
+        |${consume(ctx, input)}
+      """.stripMargin
+    }
+
+    if (isNullAwareAntiJoin) {
+      if (broadcastRelation.value.allNullColumnKeyExistsInOriginalInput) {
+        return s"""
+          |// NAAJ
```

(the rest of the diff hunk is truncated in the archived message)

Review comment:

From the interface of `BufferedRowIterator`, normally the generated loop breaks after a row is appended; it will not break until it eventually generates a row. In this case, I don't think we are able to do a fast stop.

```java
/* 043 */         bhj_mutableStateArray_0[0].zeroOutNullBytes();
/* 044 */
/* 045 */         if (localtablescan_isNull_0) {
/* 046 */           bhj_mutableStateArray_0[0].setNullAt(0);
/* 047 */         } else {
/* 048 */           bhj_mutableStateArray_0[0].write(0, localtablescan_value_0);
/* 049 */         }
/* 050 */
/* 051 */         bhj_mutableStateArray_0[0].write(1, localtablescan_value_1, 2, 1);
/* 052 */         append((bhj_mutableStateArray_0[0].getRow()));
/* 053 */         if (shouldStop()) return;
/* 054 */       }
```
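The two fast paths in the diff implement the null-aware anti join (NAAJ) semantics of SQL `NOT IN`: an empty build side accepts every streamed row, while an all-null key on the build side rejects every streamed row. A hedged plain-Java sketch of that decision logic (the flag names `isOriginalInputEmpty` and `allNullColumnKeyExists` mirror the patch, but this is an illustration, not Spark's implementation):

```java
import java.util.Arrays;
import java.util.Set;

public class NaajFastPathSketch {
    // Simplified single-key null-aware anti join: returns the streamed rows
    // that survive a `streamedKey NOT IN (buildKeys)` predicate.
    static Integer[] antiJoin(Integer[] streamed, Set<Integer> buildKeys,
                              boolean isOriginalInputEmpty,
                              boolean allNullColumnKeyExists) {
        // Fast path 1: build side was empty, so no row can match -> accept all.
        if (isOriginalInputEmpty) {
            return streamed;
        }
        // Fast path 2: build side contains an all-null key; under NOT IN
        // semantics the predicate is never true -> reject all.
        if (allNullColumnKeyExists) {
            return new Integer[0];
        }
        // General path: keep rows whose key is non-null and not present
        // on the build side (a null streamed key yields UNKNOWN -> rejected).
        return Arrays.stream(streamed)
            .filter(k -> k != null && !buildKeys.contains(k))
            .toArray(Integer[]::new);
    }
}
```

This matches why the patch can `return` early from codegen in those two cases: the per-row join logic degenerates to "accept everything" or "emit nothing".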