[ https://issues.apache.org/jira/browse/SPARK-30198 ]
Dongjoon Hyun updated SPARK-30198:
----------------------------------
    Affects Version/s: 1.6.3

> BytesToBytesMap does not grow internal long array as expected
> -------------------------------------------------------------
>
>                 Key: SPARK-30198
>                 URL: https://issues.apache.org/jira/browse/SPARK-30198
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.6.3, 2.0.2, 2.1.3, 2.2.3, 2.3.4, 2.4.4, 3.0.0
>            Reporter: L. C. Hsieh
>            Assignee: L. C. Hsieh
>            Priority: Major
>             Fix For: 2.4.5, 3.0.0
>
>
> One Spark job on our cluster hangs forever at BytesToBytesMap.safeLookup.
> On inspection, the internal long array's size is 536870912.
> Currently, BytesToBytesMap.append only grows the internal array if the
> array's size is less than MAX_CAPACITY, which is 536870912. So in the case
> above the array is never grown, and safeLookup never finds an empty slot.
> This check is wrong because we use two array entries per key, so the array
> size is twice the map's capacity. We should compare MAX_CAPACITY against
> the current capacity of the array, not its size.
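To see the arithmetic concretely, below is a small standalone Java sketch (not the Spark source; the MAX_CAPACITY constant and the observed array size are taken from the description above, and the surrounding append() bookkeeping such as growthThreshold tracking and allocation-failure handling is omitted). It shows why a check against the raw array size stops growth while the map's capacity is still only half of MAX_CAPACITY, whereas checking the capacity would allow the map to keep growing.

    // Sketch of the growth check discussed in SPARK-30198 (simplified,
    // not the actual BytesToBytesMap code).
    public class GrowthCheckDemo {
        // Same limit as BytesToBytesMap.MAX_CAPACITY: 1 << 29 = 536870912.
        static final int MAX_CAPACITY = 1 << 29;

        public static void main(String[] args) {
            // Observed state from the hung job: the long array holds
            // 536870912 longs, i.e. two entries per key, so the map's
            // capacity is half of that.
            long longArraySize = 536870912L;
            long capacity = longArraySize / 2;   // 268435456 keys

            // Current (buggy) check compares the raw array size, so growth
            // stops here even though the capacity is below MAX_CAPACITY.
            boolean growsWithSizeCheck = longArraySize < MAX_CAPACITY;   // false

            // Comparing the capacity instead still permits growth, so the
            // map can keep doubling until the capacity itself hits the limit.
            boolean growsWithCapacityCheck = capacity < MAX_CAPACITY;    // true

            System.out.println("size-based check allows growth:     " + growsWithSizeCheck);
            System.out.println("capacity-based check allows growth: " + growsWithCapacityCheck);
        }
    }

With the size-based check, lookups on a full map keep probing for an empty slot that can never be created, which matches the hang reported at BytesToBytesMap.safeLookup.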