GitHub user Zariel commented on the pull request:

    https://github.com/apache/spark/pull/8358#issuecomment-139751123
  
    What should the correct order for the local dirs be? As far as I can see, the current priority is
`YARN_LOCAL_DIRS > LOCAL_DIRS > SPARK_EXECUTOR_DIRS > SPARK_LOCAL_DIRS > spark.local.dir > java.io.tmpdir`.
    
    To me it makes sense that, when running on YARN or Mesos, Spark should use their local space if it is
available. If someone is running applications on Mesos, it's fair to assume the cluster operator can provide
sufficient local disk space and performance.
    
    I've updated the PR to disable the Mesos sandbox when dynamic allocation is enabled, and also adjusted the
local dir priority so that it now looks like this:
    
    `YARN_LOCAL_DIRS > LOCAL_DIRS > SPARK_LOCAL_DIRS > SPARK_EXECUTOR_DIRS > 
MESOS_DIRECTORY > spark.local.dir > java.io.tmpdir`
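    
    For reference, here is a minimal Scala sketch of how that precedence chain could be resolved: the first
source that yields a non-empty value wins, falling back to `java.io.tmpdir`. The object and method names
(`LocalDirResolutionSketch`, `resolveLocalDirs`) are illustrative only, not Spark's actual implementation.
    
    ```scala
    object LocalDirResolutionSketch {
      // Walk the proposed precedence chain; the first non-empty source wins.
      // `sparkLocalDir` stands in for spark.local.dir read from SparkConf.
      def resolveLocalDirs(sparkLocalDir: Option[String],
                           env: Map[String, String]): List[String] = {
        val candidates: Seq[Option[String]] = Seq(
          env.get("YARN_LOCAL_DIRS"),     // legacy YARN variable
          env.get("LOCAL_DIRS"),          // current YARN variable
          env.get("SPARK_LOCAL_DIRS"),
          env.get("SPARK_EXECUTOR_DIRS"),
          env.get("MESOS_DIRECTORY"),     // Mesos sandbox
          sparkLocalDir                   // spark.local.dir
        )
        candidates.flatten.headOption
          .getOrElse(System.getProperty("java.io.tmpdir"))  // final fallback
          .split(",")
          .map(_.trim)
          .filter(_.nonEmpty)
          .toList
      }
    
      def main(args: Array[String]): Unit = {
        // With SPARK_LOCAL_DIRS set, it takes priority over spark.local.dir.
        val env = Map("SPARK_LOCAL_DIRS" -> "/mnt/spark1,/mnt/spark2")
        println(resolveLocalDirs(Some("/tmp/spark"), env)) // List(/mnt/spark1, /mnt/spark2)
      }
    }
    ```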


