Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21860#discussion_r227650326

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala ---
@@ -831,7 +832,14 @@ case class HashAggregateExec(
     ctx.currentVars = new Array[ExprCode](aggregateBufferAttributes.length) ++ input

     val updateRowInRegularHashMap: String = {
-      ctx.INPUT_ROW = unsafeRowBuffer
+      val updatedTmpAggBuffer =
+        if (isFastHashMapEnabled && !isVectorizedHashMapEnabled) {
+          updatedAggBuffer
--- End diff --

Just realized it. Do we create the `updatedAggBuffer` variable only to improve the readability of the generated code? It looks to me like we don't need this variable. Here we can write:
```
ctx.INPUT_ROW = if (isFastHashMapEnabled && !isVectorizedHashMapEnabled) fastRowBuffer else unsafeRowBuffer
```
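The suggestion relies on `if`/`else` being an expression in Scala, so the chosen buffer can be assigned directly without an intermediate `val`. A minimal standalone sketch of that pattern (the names and `String` buffers here are illustrative, not the actual `HashAggregateExec` code):

```scala
// Sketch: assign the result of an if-expression directly,
// mirroring the suggested `ctx.INPUT_ROW = if (...) ... else ...` shape.
object InputRowChoice {
  // Hypothetical stand-in for choosing between the fast-hash-map row buffer
  // and the regular unsafe row buffer.
  def chooseBuffer(
      isFastHashMapEnabled: Boolean,
      isVectorizedHashMapEnabled: Boolean,
      fastRowBuffer: String,
      unsafeRowBuffer: String): String =
    if (isFastHashMapEnabled && !isVectorizedHashMapEnabled) fastRowBuffer
    else unsafeRowBuffer

  def main(args: Array[String]): Unit = {
    // Fast hash map on, vectorized off: the fast row buffer is used.
    assert(chooseBuffer(true, false, "fast", "unsafe") == "fast")
    // Vectorized hash map on: fall back to the unsafe row buffer.
    assert(chooseBuffer(true, true, "fast", "unsafe") == "unsafe")
    // Fast hash map off entirely: also the unsafe row buffer.
    assert(chooseBuffer(false, false, "fast", "unsafe") == "unsafe")
    println("ok")
  }
}
```

Because the branch is a single expression, no temporary like `updatedTmpAggBuffer` is needed, which is the readability point the review comment makes.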