[ 
https://issues.apache.org/jira/browse/SPARK-22272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

roncenzhao updated SPARK-22272:
-------------------------------
    Description: 
JVM bug: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8132693

We kill the task using 'Thread.interrupt()', and the ShuffleMapTask uses NIO 
('transferTo') to merge all partition files when 'spark.file.transferTo' is 
true (the default), so it may trigger the JVM bug.
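A possible mitigation sketch, assuming the flag can simply be flipped in spark-defaults.conf (the 'spark.file.transferTo' name comes from this report; the exact fallback copy path Spark takes when it is false is an assumption):

```
# spark-defaults.conf (sketch): avoid the FileChannel.transferTo copy path
# so an interrupt during shuffle merge cannot hit JDK bug 8132693
spark.file.transferTo  false
```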

When the driver sends a task to this bad executor, the task never runs, and 
as a result the job hangs forever with no way to recover.
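The merge step described above can be sketched as follows. This is a hypothetical, simplified stand-in for Spark's actual shuffle-merge code (class and method names here are invented for illustration): it concatenates partition files into one output via 'FileChannel.transferTo', the same NIO call that is vulnerable if the running thread is interrupted mid-transfer.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MergePartitions {
    // Concatenate partition files into 'out' using NIO transferTo,
    // analogous to the copy path taken when spark.file.transferTo=true.
    // An interrupt delivered while a thread is blocked in transferTo is
    // where the JVM bug above can leave the process hung.
    static void merge(Path out, Path... parts) throws IOException {
        try (FileChannel dst = FileChannel.open(out,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            for (Path part : parts) {
                try (FileChannel src = FileChannel.open(part,
                        StandardOpenOption.READ)) {
                    long pos = 0, size = src.size();
                    // transferTo may copy fewer bytes than requested,
                    // so loop until the whole partition is transferred.
                    while (pos < size) {
                        pos += src.transferTo(pos, size - pos, dst);
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path a = Files.createTempFile("part", "0");
        Path b = Files.createTempFile("part", "1");
        Files.write(a, "hello ".getBytes());
        Files.write(b, "world".getBytes());
        Path out = Files.createTempFile("merged", "");
        merge(out, a, b);
        System.out.println(new String(Files.readAllBytes(out))); // prints "hello world"
    }
}
```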

  was:
JVM bug: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8132693

We kill the task using 'Thread.interrupt()', and the ShuffleMapTask uses NIO 
('transferTo') to merge all partition files when 'spark.file.transferTo' is 
true (the default), so it may trigger the JVM bug.




> killing a task may cause the executor process to hang because of a JVM bug
> ------------------------------------------------------------------------
>
>                 Key: SPARK-22272
>                 URL: https://issues.apache.org/jira/browse/SPARK-22272
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.1
>         Environment: java version "1.7.0_75"
> hadoop version 2.5.0
>            Reporter: roncenzhao
>         Attachments: 26883.jstack, screenshot-1.png, screenshot-2.png
>
>
> JVM bug: http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8132693
> We kill the task using 'Thread.interrupt()', and the ShuffleMapTask uses NIO 
> ('transferTo') to merge all partition files when 'spark.file.transferTo' is 
> true (the default), so it may trigger the JVM bug.
> When the driver sends a task to this bad executor, the task never runs, and 
> as a result the job hangs forever with no way to recover.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
