[ https://issues.apache.org/jira/browse/SPARK-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889090#comment-15889090 ]
Tejas Patil commented on SPARK-17495:
-------------------------------------

[~rxin]:

>> 1. On the read side we shouldn't care which hash function to use. All we
>> need to know is that the data is hash partitioned by some hash function,
>> and that should be sufficient to remove the shuffle needed in aggregation
>> or join.

For joins, if one side is pre-hashed (due to bucketing) and the other is not,
then the non-hashed side needs to be shuffled with the _same_ hashing function
as the pre-hashed one. Otherwise, the output would be wrong. Alternatively, we
can choose to shuffle both sides, but that won't utilize the benefit of
bucketing.

> Hive hash implementation
> ------------------------
>
>                 Key: SPARK-17495
>                 URL: https://issues.apache.org/jira/browse/SPARK-17495
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Tejas Patil
>            Assignee: Tejas Patil
>            Priority: Minor
>             Fix For: 2.2.0
>
>
> Spark internally uses Murmur3Hash for partitioning. This is different from
> the one used by Hive. For queries which use bucketing, this leads to
> different results if one tries the same query on both engines. For us, we
> want users to have backward compatibility, so that one can switch parts of
> applications across the engines without observing regressions.
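The co-location argument above can be sketched with toy code. This is not Spark's Murmur3 nor Hive's hash; `hash_a` and `hash_b` are hypothetical stand-ins for two different hash functions (e.g. the bucketing hash vs. a mismatched shuffle hash). The point it demonstrates: a partition-local join only works if both sides were partitioned with the same hash function.

```python
def hash_a(key: str) -> int:
    # Stand-in for the bucketing-side hash (31-based polynomial, Java-style).
    h = 0
    for ch in key:
        h = (h * 31 + ord(ch)) & 0x7FFFFFFF
    return h

def hash_b(key: str) -> int:
    # Stand-in for a *different* hash function (FNV-1a style).
    h = 2166136261
    for ch in key:
        h = ((h ^ ord(ch)) * 16777619) & 0xFFFFFFFF
    return h

def partition(keys, hash_fn, num_partitions):
    # Hash-partition keys exactly as a shuffle would: hash(key) % N.
    parts = [set() for _ in range(num_partitions)]
    for k in keys:
        parts[hash_fn(k) % num_partitions].add(k)
    return parts

keys = ["user_%d" % i for i in range(100)]
n = 8

bucketed = partition(keys, hash_a, n)      # pre-hashed side (bucketed table)
same_hash = partition(keys, hash_a, n)     # other side shuffled with the SAME hash
mixed_hash = partition(keys, hash_b, n)    # other side shuffled with a DIFFERENT hash

# A partition-local join only sees matches within the same partition index.
matches_same = sum(len(a & b) for a, b in zip(bucketed, same_hash))
matches_mixed = sum(len(a & b) for a, b in zip(bucketed, mixed_hash))

print(matches_same)   # 100: every key co-located, join result is complete
print(matches_mixed)  # fewer than 100: matching keys land in different
                      # partitions, so a partition-local join silently
                      # drops rows -- the "wrong output" described above
```

This is why the non-bucketed side must be shuffled with the bucketing hash (or both sides re-shuffled, forfeiting the bucketing benefit): correctness of a shuffle-free join depends on both inputs agreeing on the partitioning function, not merely on both being "hash partitioned".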