liupengcheng created SPARK-25258:
------------------------------------

             Summary: Upgrade kryo package to version 4.0.2+
                 Key: SPARK-25258
                 URL: https://issues.apache.org/jira/browse/SPARK-25258
             Project: Spark
          Issue Type: Wish
          Components: Spark Core
    Affects Versions: 2.3.1, 2.1.0
            Reporter: liupengcheng
Recently, we encountered a Kryo performance issue in Spark 2.1.0. The issue affects all Kryo versions below 4.0.2, so any Spark version might run into it.

Issue description:

In the shuffle write phase or during spilling, Spark serializes data with Kryo if `spark.serializer` is set to `KryoSerializer`. However, when the data contains some extremely large records, KryoSerializer's MapReferenceResolver is expanded to hold them, and its `reset` method then takes a long time to set every entry of the writtenObjects table back to null.

com.esotericsoftware.kryo.util.MapReferenceResolver:
{code:java}
public void reset () {
	readObjects.clear();
	writtenObjects.clear();
}

// writtenObjects is an IdentityObjectIntMap; its clear() walks the whole
// backing keyTable, whose capacity stays at the high-water mark reached
// while serializing the large record.
public void clear () {
	K[] keyTable = this.keyTable;
	for (int i = capacity + stashSize; i-- > 0;)
		keyTable[i] = null;
	size = 0;
	stashSize = 0;
}
{code}
I checked the kryo project on GitHub, and this issue seems to be fixed in 4.0.2+:
[https://github.com/EsotericSoftware/kryo/commit/77935c696ee4976963aa5c6ac53d53d9b40b8bdd#diff-215fa9846e1e4e54bbeede0500de1e28]

I was wondering if we can upgrade Spark's kryo dependency to 4.0.2+ to fix this problem.
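For reference, a minimal standalone sketch of how the slowdown can show up (assumptions: Kryo 4.0.x on the classpath with reference tracking enabled, which matches Spark's default `spark.kryo.referenceTracking=true`; the class name and the way the large record is built below are only illustrative). Writing a single record that contains millions of distinct objects inflates the writtenObjects backing table, and a later `reset()` pays for the full capacity even though the map is logically empty:
{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Output;

import java.io.ByteArrayOutputStream;
import java.util.ArrayList;

public class KryoResetSketch {
    public static void main(String[] args) {
        Kryo kryo = new Kryo();
        // Reference tracking is what populates MapReferenceResolver's
        // writtenObjects map: one entry per distinct object written.
        kryo.setReferences(true);

        // One "extremely large record": a list of ~2 million distinct
        // objects (may need a larger heap, e.g. -Xmx2g).
        ArrayList<Object> record = new ArrayList<>();
        for (int i = 0; i < 2_000_000; i++) {
            record.add(new ArrayList<Integer>());
        }

        Output output = new Output(new ByteArrayOutputStream(), 4096);
        kryo.writeClassAndObject(output, record);
        output.flush();

        // reset() calls MapReferenceResolver.reset(), whose clear() loop is
        // O(capacity) regardless of size, so it stays slow even after the
        // map has been logically emptied.
        long start = System.nanoTime();
        kryo.reset();
        System.out.printf("kryo.reset() took %.1f ms%n",
            (System.nanoTime() - start) / 1e6);
    }
}
{code}
On an affected Kryo version the printed time should grow with the number of distinct objects in the record, and with 4.0.2+ the reset cost should stay bounded.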