[ https://issues.apache.org/jira/browse/SPARK-21656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117241#comment-16117241 ]

Thomas Graves commented on SPARK-21656:
---------------------------------------

As I said above, it DOES help the application to keep them alive. The 
scheduler logic will fall back to them at some point, either when it relaxes 
to rack/any locality or when it finishes the tasks that are getting locality 
on those few nodes. That's why I'm saying it's a conflict within Spark.
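
For reference, the locality fallback I'm describing is driven by the 
spark.locality.wait settings: the scheduler waits that long at each level 
before relaxing PROCESS_LOCAL -> NODE_LOCAL -> RACK_LOCAL -> ANY. A minimal 
sketch of the relevant knobs (defaults shown; values illustrative, not a 
recommendation):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      // Base wait before the scheduler relaxes to the next locality level.
      .set("spark.locality.wait", "3s")
      // Per-level overrides, if finer control is wanted.
      .set("spark.locality.wait.node", "3s")
      .set("spark.locality.wait.rack", "3s")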

Spark should be resilient to any weird things happening.  In the cases I have 
described we could actually release all of our executors and never ask for 
more within a stage; that is a BUG.   We can change the configs so that this 
doesn't normally happen, but a user could change them back, and when they do 
it shouldn't result in a deadlock.
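
To make the conflict concrete, here is an illustrative (not recommended) 
combination of settings under which the idle timeout can fire on executors 
the scheduler still plans to use, because the locality wait outlasts it:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      // Default: an executor with no tasks for 60s is released.
      .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
      // A user-raised locality wait that exceeds the idle timeout: the
      // scheduler holds tasks back waiting for locality longer than
      // executors are allowed to sit idle.
      .set("spark.locality.wait", "3m")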



> spark dynamic allocation should not idle timeout executors when tasks still 
> to run
> ----------------------------------------------------------------------------------
>
>                 Key: SPARK-21656
>                 URL: https://issues.apache.org/jira/browse/SPARK-21656
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.1
>            Reporter: Jong Yoon Lee
>             Fix For: 2.1.1
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Right now spark lets go of executors when they have been idle for 60s (or a 
> configurable time). I have seen spark let them go when they were idle but 
> still really needed. I have seen this issue when the scheduler was waiting 
> for node locality, which takes longer than the default idle timeout. In 
> these jobs the number of executors drops very low (fewer than 10) while 
> there are still around 80,000 tasks to run.
> We should consider not allowing executors to idle timeout if they are still 
> needed according to the number of tasks remaining to be run.
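
A minimal sketch (hypothetical names, not Spark's actual 
ExecutorAllocationManager code) of the guard the description above proposes: 
only let the idle timeout remove an executor once the remaining executors 
can cover the outstanding tasks.

    // Hypothetical helper; assumes we can see the pending task count
    // and the running executor count for the stage.
    def canRemoveIdleExecutor(pendingTasks: Int,
                              runningExecutors: Int,
                              tasksPerExecutor: Int): Boolean = {
      // Executors still needed to run the outstanding tasks.
      val needed = math.ceil(pendingTasks.toDouble / tasksPerExecutor).toInt
      // Allow removal only when we hold more executors than the work needs.
      runningExecutors > needed
    }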


