[ https://issues.apache.org/jira/browse/HIVE-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16092647#comment-16092647 ]
Rui Li commented on HIVE-17114:
-------------------------------

Hi [~kellyzly], I have an example in the description. Basically, Spark decides the reducer task for each record by computing {{hash(key)%numReducers}}. Currently, for a single int key, hash(key)==key. Therefore in my example, all records go to the same reducer although they have different keys. I think it's a rare case, but I did hit it in a benchmark. By using MurmurHash, we can distribute the records more evenly, see HIVE-7121.

> HoS: Possible skew in shuffling when data is not really skewed
> --------------------------------------------------------------
>
>                 Key: HIVE-17114
>                 URL: https://issues.apache.org/jira/browse/HIVE-17114
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Rui Li
>            Assignee: Rui Li
>            Priority: Minor
>         Attachments: HIVE-17114.1.patch
>
>
> Observed in HoS and may apply to other engines as well.
> When we join 2 tables on a single int key, we use the key itself as the hash code
> in {{ObjectInspectorUtils.hashCode}}:
> {code}
> case INT:
>   return ((IntObjectInspector) poi).get(o);
> {code}
> Suppose the keys are all different but are all multiples of 10. Then if we
> choose 10 as the number of reducers, the shuffle will be skewed: every key
> hashes to the same reducer.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
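The effect described above can be sketched as follows. This is not Hive code: it is a minimal standalone demo, assuming identity hashing for int keys (as in {{ObjectInspectorUtils.hashCode}}) and using the well-known 32-bit MurmurHash3 finalizer ("fmix32") as a stand-in for a Murmur-style hash. All class and method names here are hypothetical.

```java
import java.util.Arrays;

// Demo: 100 distinct int keys, all multiples of 10, partitioned across
// 10 reducers by hash(key) % numReducers. With identity hashing every
// key lands on reducer 0; with a Murmur-style mix they spread out.
public class ShuffleSkewDemo {
    static final int NUM_REDUCERS = 10;

    // Mirrors the INT case in ObjectInspectorUtils.hashCode: hash == key.
    static int identityHash(int key) {
        return key;
    }

    // MurmurHash3 32-bit finalizer (fmix32): a cheap avalanche step.
    static int murmurMix(int h) {
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    // Non-negative modulo, so negative hashes map to a valid reducer.
    static int partition(int hash) {
        return Math.floorMod(hash, NUM_REDUCERS);
    }

    public static void main(String[] args) {
        int[] identityCounts = new int[NUM_REDUCERS];
        int[] murmurCounts = new int[NUM_REDUCERS];
        // Keys 10, 20, ..., 1000: all distinct, no real data skew.
        for (int key = 10; key <= 1000; key += 10) {
            identityCounts[partition(identityHash(key))]++;
            murmurCounts[partition(murmurMix(key))]++;
        }
        // identityCounts is [100, 0, 0, ..., 0]: total skew.
        System.out.println("identity: " + Arrays.toString(identityCounts));
        // murmurCounts is roughly uniform across the 10 reducers.
        System.out.println("murmur:   " + Arrays.toString(murmurCounts));
    }
}
```

Running the demo shows all 100 records on reducer 0 under identity hashing, while the Murmur-style mix distributes the same keys across all reducers.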