L. C. Hsieh created SPARK-45678:
-----------------------------------

             Summary: Cover BufferReleasingInputStream.available under tryOrFetchFailedException
                 Key: SPARK-45678
                 URL: https://issues.apache.org/jira/browse/SPARK-45678
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core
    Affects Versions: 4.0.0
            Reporter: L. C. Hsieh


We have encountered a shuffle data corruption issue:

```
Caused by: java.io.IOException: FAILED_TO_UNCOMPRESS(5)
        at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:112)
        at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
        at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:504)
        at org.xerial.snappy.Snappy.uncompress(Snappy.java:543)
        at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:450)
        at org.xerial.snappy.SnappyInputStream.available(SnappyInputStream.java:497)
        at org.apache.spark.storage.BufferReleasingInputStream.available(ShuffleBlockFetcherIterator.scala:1356)
```

Spark shuffle can detect corruption for a few stream operations such as `read` and `skip`: an `IOException` like the one in the stack trace is rethrown as a `FetchFailedException`, which causes the failed shuffle task to be retried. But in this stack trace the failing operation is `available`, which is not covered by that mechanism. So no retry happened and the Spark application simply failed.

Since the `available` operation also involves data decompression, we should be able to guard it the same way `read` and `skip` are guarded, as sketched below.
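For illustration, here is a minimal, self-contained sketch of the idea. The real class is `BufferReleasingInputStream` in `ShuffleBlockFetcherIterator.scala`; the standalone class name and the `FetchFailedException` stand-in below are simplified placeholders, not the actual Spark internals:

```
import java.io.{IOException, InputStream}

// Stand-in for Spark's FetchFailedException: rethrowing the IOException as a
// fetch failure is what lets the scheduler retry the shuffle task.
class FetchFailedException(cause: IOException)
  extends RuntimeException("retryable shuffle fetch failure", cause)

// Simplified paraphrase of BufferReleasingInputStream: delegate calls that may
// decompress data are routed through tryOrFetchFailedException.
class CorruptionDetectingInputStream(delegate: InputStream) extends InputStream {

  // `read` and `skip` are already covered today.
  override def read(): Int = tryOrFetchFailedException(delegate.read())
  override def skip(n: Long): Long = tryOrFetchFailedException(delegate.skip(n))

  // The proposed change: `available` can also decompress (see
  // SnappyInputStream.available in the stack trace), so cover it the same way.
  override def available(): Int = tryOrFetchFailedException(delegate.available())

  override def close(): Unit = delegate.close()

  // Mirrors the private helper in BufferReleasingInputStream: run the
  // operation and convert an IOException into a retryable fetch failure.
  private def tryOrFetchFailedException[T](block: => T): T =
    try block
    catch { case e: IOException => throw new FetchFailedException(e) }
}
```

With `available` covered, a `FAILED_TO_UNCOMPRESS` error thrown from `SnappyInputStream.available` would surface as a fetch failure and trigger a task retry instead of failing the whole application.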


