[ https://issues.apache.org/jira/browse/SPARK-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232724#comment-14232724 ]

SaintBacchus commented on SPARK-4694:
-------------------------------------

Thanks for the reply, [~vanzin]. The problem is clear: the scheduler backend 
noticed that the AM had exited, so it called sc.stop() to shut down the driver 
process, but a non-daemon user thread was still alive, which kept the JVM running.
To fix this, use System.exit(-1) instead of sc.stop() so that the JVM does not 
wait for all user threads to finish and exits cleanly.
Can I use System.exit() in Spark code?
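
For what it's worth, here is a minimal sketch of the failure mode (the endless 
user thread below is made up for illustration; in the real report, 
HiveThriftServer2's server thread plays this role):

    import org.apache.spark.{SparkConf, SparkContext}

    object DriverExitSketch {
      def main(args: Array[String]): Unit = {
        // local[*] only so the sketch runs standalone; the report is about
        // yarn-client mode, where the master comes from spark-submit.
        val conf = new SparkConf().setAppName("sketch").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // A long-running, non-daemon user thread: the JVM waits for it
        // before exiting.
        val userThread = new Thread(new Runnable {
          override def run(): Unit = while (true) Thread.sleep(1000)
        })
        userThread.setDaemon(false) // false is the default; shown for emphasis
        userThread.start()

        // sc.stop() tears down Spark and returns; main() then ends, but the
        // JVM keeps waiting on userThread forever -- the "process leak".
        sc.stop()

        // System.exit(-1) terminates the JVM at once (shutdown hooks still
        // run) without waiting for user threads -- the fix proposed above.
        System.exit(-1)
      }
    }

With the System.exit(-1) line commented out, this program never terminates, 
which is exactly the hang described in the issue.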

> Long-running user thread (such as HiveThriftServer2) causes a 'process leak' 
> in yarn-client mode
> ---------------------------------------------------------------------------------------------
>
>                 Key: SPARK-4694
>                 URL: https://issues.apache.org/jira/browse/SPARK-4694
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>            Reporter: SaintBacchus
>
> Recently, when I used YARN HA mode to test HiveThriftServer2, I found a 
> problem: the driver cannot exit by itself.
> To reproduce it, do the following:
> 1. Use YARN HA mode and set am.maxAttempt = 1 for convenience.
> 2. Kill the active resource manager in the cluster.
> The expected result is that the application simply fails, because maxAttempt 
> was 1.
> The actual result is that all executors ended, but the driver was still 
> there and never closed.
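
(A note on step 1: the exact property is not stated above; assuming it refers 
to YARN's cluster-wide cap on ApplicationMaster attempts, the setting would go 
in yarn-site.xml along these lines:)

    <!-- Cap AM attempts at 1 so the application fails on the first AM loss
         instead of retrying; assumed reading of "am.maxAttempt" above. -->
    <property>
      <name>yarn.resourcemanager.am.max-attempts</name>
      <value>1</value>
    </property>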


