[ https://issues.apache.org/jira/browse/SPARK-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904479#comment-15904479 ]
Ji Yan commented on SPARK-19320:
I'm proposing to add a configuration parameter to guarantee a hard limit
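The parameter under discussion is `spark.mesos.gpus.max`, which caps how many GPUs Spark acquires from Mesos. A minimal sketch of setting it at submit time; the master URL, GPU count, class, and jar name below are placeholders, not values from this issue:

```shell
# Hypothetical spark-submit invocation illustrating the GPU cap on Mesos.
# mesos-master:5050, the value 2, org.example.MyApp, and myapp.jar are
# all placeholders for illustration only.
spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.mesos.gpus.max=2 \
  --class org.example.MyApp \
  myapp.jar
```

The debate in the comments is whether "max" accurately describes the semantics once the value is treated as a guaranteed hard limit rather than a best-effort ceiling.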
[ https://issues.apache.org/jira/browse/SPARK-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888998#comment-15888998 ]
Ji Yan commented on SPARK-19320:
[~tnachen] in this case, should we rename spark.mesos.gpus.max to
[ https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884487#comment-15884487 ]
Ji Yan commented on SPARK-19740:
the problem is that when running Spark on Mesos, there is no way to run
[ https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884426#comment-15884426 ]
Ji Yan commented on SPARK-19740:
proposed change:
[ https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ji Yan updated SPARK-19740:
---
Description:
When running Spark on Mesos with the docker containerizer, the Spark executors are
always launched as root
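For context, the configuration path involved: when an executor docker image is set, the Mesos docker containerizer launches the executor inside that image, and the in-container user defaults to root. A sketch of the relevant submit-time setting; the master URL, image name, class, and jar are placeholders, not values from this issue:

```shell
# Hypothetical invocation showing the docker-containerizer setup the bug
# report describes. example/spark-executor:latest, mesos-master:5050,
# org.example.MyApp, and myapp.jar are placeholders for illustration only.
spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.mesos.executor.docker.image=example/spark-executor:latest \
  --class org.example.MyApp \
  myapp.jar
```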
Ji Yan created SPARK-19740:
--
Summary: Spark executor always runs as root when running on Mesos
Key: SPARK-19740
URL: https://issues.apache.org/jira/browse/SPARK-19740
Project: Spark
Issue Type: Bug