[ https://issues.apache.org/jira/browse/SPARK-3630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14180758#comment-14180758 ]

Josh Rosen commented on SPARK-3630:
-----------------------------------

I found another cause:

*Errors in reduce phases for jobs with over 2000 post-shuffle tasks*: When 
running the current master (Spark 1.2), SPARK-4019 causes {{PARSING_ERROR(2)}} 
in post-shuffle stages with more than 2000 partitions.  This can occur when 
performing groupByKey(), reduceByKey(), etc. on large datasets if you haven't 
changed the default number of partitions.  I have an open pull request to fix 
this.  If you encounter this problem, try passing 2000 (or fewer) as the 
explicit number of partitions to the shuffle operation, e.g. groupByKey(2000).
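Until that fix lands, the workaround above can be applied mechanically by capping the requested partition count.  A minimal sketch (plain Python, not the Spark API; the helper name and constant are illustrative, only the 2000 threshold comes from the report above):

```python
# Sketch: cap the requested shuffle partition count at the threshold
# below which SPARK-4019 is not triggered (over 2000 post-shuffle
# tasks).  SAFE_MAX_PARTITIONS and safe_num_partitions are
# hypothetical names, not part of any Spark API.
SAFE_MAX_PARTITIONS = 2000

def safe_num_partitions(requested):
    """Return a partition count at or below the safe threshold."""
    return min(requested, SAFE_MAX_PARTITIONS)

# With a PySpark pair RDD, usage would look like:
#   rdd.groupByKey(safe_num_partitions(5000))   # caps at 2000
```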

In general, it seems that most cases of {{PARSING_ERROR(2)}} occur when we 
accidentally attempt to decompress empty files / streams.  I'm going to open an 
upstream issue with {{snappy-java}} to improve its error-reporting in these 
cases.
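snappy-java isn't in any standard library, but the failure mode described above — a decompressor rejecting empty input — is easy to demonstrate with Python's stdlib zlib as a stand-in (this is an analogy for illustration, not the snappy-java API):

```python
import zlib

# Decompressing an empty byte string raises an error, analogous to
# the PARSING_ERROR(2) that snappy-java raises on empty chunks.
try:
    zlib.decompress(b"")
except zlib.error as e:
    print("decompress failed:", e)
```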

I think we're making a lot of progress in fixing these issues.  Please continue 
to send me bug reports / stacktraces to help me figure out whether there are 
other occurrences of this issue that I haven't fixed.  If you do this, please include 
the Spark version that you're using (or commit SHA if you're running off of 
master).  If you'd rather not post publicly on this JIRA, feel free to email me 
at {{joshro...@databricks.com}}. 

> Identify cause of Kryo+Snappy PARSING_ERROR
> -------------------------------------------
>
>                 Key: SPARK-3630
>                 URL: https://issues.apache.org/jira/browse/SPARK-3630
>             Project: Spark
>          Issue Type: Task
>          Components: Spark Core
>    Affects Versions: 1.1.0, 1.2.0
>            Reporter: Andrew Ash
>            Assignee: Josh Rosen
>
> A recent GraphX commit caused non-deterministic exceptions in unit tests so 
> it was reverted (see SPARK-3400).
> Separately, [~aash] observed the same exception stacktrace in an 
> application-specific Kryo registrator:
> {noformat}
> com.esotericsoftware.kryo.KryoException: java.io.IOException: failed to uncompress the chunk: PARSING_ERROR(2)
> com.esotericsoftware.kryo.io.Input.fill(Input.java:142)
> com.esotericsoftware.kryo.io.Input.require(Input.java:169)
> com.esotericsoftware.kryo.io.Input.readInt(Input.java:325)
> com.esotericsoftware.kryo.io.Input.readFloat(Input.java:624)
> com.esotericsoftware.kryo.serializers.DefaultSerializers$FloatSerializer.read(DefaultSerializers.java:127)
> com.esotericsoftware.kryo.serializers.DefaultSerializers$FloatSerializer.read(DefaultSerializers.java:117)
> com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:732)
> com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:109)
> com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:18)
> com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:732)
> ...
> {noformat}
> This ticket is to identify the cause of the exception in the GraphX commit so 
> the faulty commit can be fixed and merged back into master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
