Qifan Pu created SPARK-16699:
--------------------------------

             Summary: Fix performance bug in hash aggregate on long string keys
                 Key: SPARK-16699
                 URL: https://issues.apache.org/jira/browse/SPARK-16699
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.0.0
            Reporter: Qifan Pu
             Fix For: 2.0.0
In the following code in `VectorizedHashMapGenerator.scala`:

```
def hashBytes(b: String): String = {
  val hash = ctx.freshName("hash")
  val bytes = ctx.freshName("bytes")
  s"""
     |int $result = 0;
     |byte[] $bytes = $b;
     |for (int i = 0; i < $bytes.length; i++) {
     |  ${genComputeHash(ctx, s"$bytes[i]", ByteType, hash)}
     |  $result = ($result ^ (0x9e3779b9)) + $hash + ($result << 6) + ($result >>> 2);
     |}
   """.stripMargin
}
```

When `b = input.getBytes()`, the current 2.0 code results in `getBytes()` being called n times, where n is the length of the input. `getBytes()` involves a memory copy and is therefore expensive, causing a performance degradation. The fix is to evaluate `getBytes()` once, before the for loop.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
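The effect of the fix can be sketched in plain Java (the real code is generated at runtime; the class and method names below are hypothetical, and the per-byte hash stands in for what `genComputeHash` would emit). The key point is that `getBytes()` is evaluated once into a local array before the loop, so only one memory copy happens regardless of string length:

```java
// Hypothetical standalone sketch of the generated hash loop, using the same
// mixing constants as the snippet above. Calling input.getBytes() inside the
// loop condition or body would copy the string's bytes on every iteration;
// hoisting it out reduces that to a single copy.
public class HashBytesSketch {
    static int hashBytes(String input) {
        byte[] bytes = input.getBytes();  // evaluated once: one memory copy total
        int result = 0;
        for (int i = 0; i < bytes.length; i++) {
            int hash = bytes[i];  // stand-in for the generated per-byte hash
            result = (result ^ 0x9e3779b9) + hash + (result << 6) + (result >>> 2);
        }
        return result;
    }
}
```

The mixing step itself is unchanged by the fix; only where `getBytes()` is evaluated moves.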