[ 
https://issues.apache.org/jira/browse/SPARK-30225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mala Chikka Kempanna updated SPARK-30225:
-----------------------------------------
    Description: 
There is an issue with spark.unsafe.sorter.spill.read.ahead.enabled in Spark 
2.4.0, introduced by https://issues.apache.org/jira/browse/SPARK-23366

 

The workaround is to disable read-ahead of unsafe spills with the following 
option:
 --conf spark.unsafe.sorter.spill.read.ahead.enabled=false
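As a sketch, the workaround option can be passed at submit time; the application class and jar names below are placeholders, not part of this issue:

```shell
# Disable read-ahead of unsafe spill files to avoid the
# "Stream is corrupted" IOException described in this issue.
# com.example.MyJob and my-job.jar are hypothetical placeholders.
spark-submit \
  --class com.example.MyJob \
  --conf spark.unsafe.sorter.spill.read.ahead.enabled=false \
  my-job.jar
```

The same property can also be set in spark-defaults.conf or on a SparkConf before the SparkContext is created.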

 

This issue can be reproduced on Spark 2.4.0 by following the steps in this 
comment on SPARK-18105:

https://issues.apache.org/jira/browse/SPARK-18105?focusedCommentId=16981461&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16981461

 

The exception looks like this:

 

<code>

19/12/10 01:51:31 INFO sort.ShuffleExternalSorter: Thread 142 spilling sort data of 5.1 GB to disk (1 time so far)
19/12/10 01:51:31 INFO sort.ShuffleExternalSorter: Thread 142 spilling sort data of 5.1 GB to disk (1 time so far)
19/12/10 01:52:48 INFO sort.ShuffleExternalSorter: Thread 142 spilling sort data of 5.1 GB to disk (2 times so far)
19/12/10 01:53:53 ERROR executor.Executor: Exception in task 6.0 in stage 0.0 (TID 6)
java.io.IOException: Stream is corrupted
	at net.jpountz.lz4.LZ4BlockInputStream.refill(LZ4BlockInputStream.java:202)
	at net.jpountz.lz4.LZ4BlockInputStream.refill(LZ4BlockInputStream.java:228)
	at net.jpountz.lz4.LZ4BlockInputStream.read(LZ4BlockInputStream.java:157)
	at org.apache.spark.io.ReadAheadInputStream$1.run(ReadAheadInputStream.java:168)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
19/12/10 01:53:53 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 33
19/12/10 01:53:53 INFO executor.Executor: Running task 8.1 in stage 0.0 (TID 33)
19/12/10 01:54:00 INFO sort.UnsafeExternalSorter: Thread 142 spilling sort data of 3.3 GB to disk (0 time so far)
19/12/10 01:54:30 INFO executor.Executor: Executor is trying to kill task 8.1 in stage 0.0 (TID 33), reason: Stage cancelled
19/12/10 01:54:30 INFO executor.Executor: Executor killed task 8.1 in stage 0.0 (TID 33), reason: Stage cancelled
19/12/10 01:54:52 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown

</code>

 

 

 

> "Stream is corrupt" exception on reading disk-spilled data of a shuffle 
> operation
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-30225
>                 URL: https://issues.apache.org/jira/browse/SPARK-30225
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 2.4.0
>            Reporter: Mala Chikka Kempanna
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
