Miguel Pérez created SPARK-20286:
------------------------------------

             Summary: dynamicAllocation.executorIdleTimeout is ignored after 
unpersist
                 Key: SPARK-20286
                 URL: https://issues.apache.org/jira/browse/SPARK-20286
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.0.1
            Reporter: Miguel Pérez


With dynamic allocation enabled, executors whose cached data has been 
unpersisted still appear to be governed by the 
{{dynamicAllocation.cachedExecutorIdleTimeout}} configuration instead of 
{{dynamicAllocation.executorIdleTimeout}}. With the default configuration 
({{dynamicAllocation.cachedExecutorIdleTimeout = Infinity}}), an executor whose 
data has been unpersisted won't be released until the job ends.

*How to reproduce*
- Set different values for {{dynamicAllocation.executorIdleTimeout}} and 
{{dynamicAllocation.cachedExecutorIdleTimeout}}
- Load a file into an RDD and persist it
- Execute an action on the RDD (such as a count) so that some executors are activated
- When the action has finished, unpersist the RDD
- The application UI correctly removes the persisted data from the *Storage* 
tab, but if you look at the *Executors* tab, you will find that the executors 
remain *active* until {{dynamicAllocation.cachedExecutorIdleTimeout}} is 
reached.
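
The steps above can be sketched as a minimal Spark application. This is only an illustrative repro sketch: the timeout values, the input path, and the assumption of an external shuffle service (required for dynamic allocation on YARN) are placeholders, not part of the original report.

{code:scala}
import org.apache.spark.sql.SparkSession

object Spark20286Repro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SPARK-20286-repro")
      .config("spark.dynamicAllocation.enabled", "true")
      // Deliberately different values so the two timeouts are distinguishable
      .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
      .config("spark.dynamicAllocation.cachedExecutorIdleTimeout", "600s")
      // Dynamic allocation requires the external shuffle service
      .config("spark.shuffle.service.enabled", "true")
      .getOrCreate()

    // Illustrative path; any sizable input file works
    val rdd = spark.sparkContext.textFile("hdfs:///path/to/input")
    rdd.persist()
    rdd.count()     // action: activates executors and caches the data
    rdd.unpersist() // Storage tab clears; Executors tab still shows the
                    // executors as active until cachedExecutorIdleTimeout,
                    // not executorIdleTimeout, elapses

    // Keep the application alive to observe the Executors tab in the UI
    Thread.sleep(15 * 60 * 1000)
    spark.stop()
  }
}
{code}

Expected behavior would be for the executors to be reclaimed after {{dynamicAllocation.executorIdleTimeout}} (60s here) once the data is unpersisted, since they no longer hold cached blocks.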



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
