GitHub user aarondav commented on the pull request:

    https://github.com/apache/spark/pull/2609#issuecomment-57529239
  
    As mentioned in the JIRA, I think it would be very good to also check the 
appId to make sure the Executors are indeed terminated. It does not seem 
unreasonable to me that some Spark clusters might remain idle for a couple of 
days before someone comes back to them, with the expectation that they still 
work.
    
    I think we can achieve this in a pretty type-safe manner by changing the 
ExecutorRunner to take in the "executorWorkDir" instead of "workDir", thus 
giving the Worker control over the fact that app dirs are named with the 
app's ID.
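
    To make the suggestion concrete, here is a minimal sketch of the proposed 
shape. The class and method names mirror Worker and ExecutorRunner, but the 
signatures are heavily simplified and hypothetical; the real classes take many 
more parameters.

    import java.io.File

    // Proposed: ExecutorRunner receives the fully resolved per-executor
    // directory instead of the Worker's top-level workDir.
    class ExecutorRunner(
        val appId: String,
        val execId: Int,
        val executorWorkDir: File) {

      def start(): Unit = {
        // The runner uses the directory it was handed; it no longer needs
        // to know how app dirs are laid out.
        executorWorkDir.mkdirs()
        // ... launch the executor process with executorWorkDir as its cwd ...
      }
    }

    class Worker(val workDir: File) {
      // The Worker alone encodes the invariant "app dirs are named by the
      // app's ID", so cleanup logic can check a directory's name against
      // the set of live applications before deleting it.
      def launchExecutor(appId: String, execId: Int): ExecutorRunner = {
        val appDir = new File(workDir, appId)
        val executorWorkDir = new File(appDir, execId.toString)
        new ExecutorRunner(appId, execId, executorWorkDir)
      }
    }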

