[ https://issues.apache.org/jira/browse/SPARK-16699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-16699:
------------------------------------

    Assignee:     (was: Apache Spark)

> Fix performance bug in hash aggregate on long string keys
> ---------------------------------------------------------
>
>                 Key: SPARK-16699
>                 URL: https://issues.apache.org/jira/browse/SPARK-16699
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.0
>            Reporter: Qifan Pu
>             Fix For: 2.0.0
>
>
> In the following code in `VectorizedHashMapGenerator.scala`:
> ```
> def hashBytes(b: String): String = {
>   val hash = ctx.freshName("hash")
>   s"""
>      |int $result = 0;
>      |for (int i = 0; i < $b.length; i++) {
>      |  ${genComputeHash(ctx, s"$b[i]", ByteType, hash)}
>      |  $result = ($result ^ (0x9e3779b9)) + $hash + ($result << 6) + ($result >>> 2);
>      |}
>    """.stripMargin
> }
> ```
> When b = input.getBytes(), the current 2.0 code results in getBytes() being called n times, where n is the length of the input. getBytes() involves a memory copy and is thus expensive, causing a performance degradation.
> The fix is to evaluate getBytes() once, before the for loop.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
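The hoisting fix described in the issue can be sketched in plain Java (the language of the code that `VectorizedHashMapGenerator` emits). The class and method names below are illustrative only, not Spark's actual generated code; the per-byte hash step is simplified to the raw byte value:

```java
// Illustrative sketch, not Spark's real codegen output. It contrasts the
// buggy pattern (getBytes() evaluated inside the loop) with the fixed
// pattern (getBytes() hoisted out before the loop).
public class HashBytesSketch {

    // Buggy pattern: input.getBytes() copies the string's bytes on every
    // iteration, turning an O(n) hash into O(n^2) work for long keys.
    static int hashBuggy(String input) {
        int result = 0;
        for (int i = 0; i < input.getBytes().length; i++) {
            int hash = input.getBytes()[i];
            result = (result ^ 0x9e3779b9) + hash + (result << 6) + (result >>> 2);
        }
        return result;
    }

    // Fixed pattern: evaluate getBytes() once, before the for loop,
    // so the byte array is copied a single time.
    static int hashFixed(String input) {
        byte[] bytes = input.getBytes();
        int result = 0;
        for (int i = 0; i < bytes.length; i++) {
            int hash = bytes[i];
            result = (result ^ 0x9e3779b9) + hash + (result << 6) + (result >>> 2);
        }
        return result;
    }

    public static void main(String[] args) {
        // Both variants compute the same hash; only the cost differs.
        System.out.println(hashBuggy("a long string key") == hashFixed("a long string key"));
    }
}
```

The fix changes no hash values, only the number of `getBytes()` calls, which is why it could land as a pure performance patch.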