[jira] [Commented] (SPARK-19320) Allow guaranteed amount of GPU to be used when launching jobs

2017-03-09 Thread Ji Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904479#comment-15904479
 ] 

Ji Yan commented on SPARK-19320:


I'm proposing to add a configuration parameter (spark.mesos.gpus) that 
guarantees a hard limit on the number of GPUs. To avoid conflicts, it will 
override spark.mesos.gpus.max whenever spark.mesos.gpus is greater than 0.
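To make the proposed precedence concrete, here is a minimal sketch. Note this is hypothetical: neither spark.mesos.gpus nor this helper exists in Spark today; only spark.mesos.gpus.max is a real conf.

```java
import java.util.Map;

// Hypothetical sketch of the proposed precedence: spark.mesos.gpus (a hard
// guarantee) overrides spark.mesos.gpus.max (a soft cap) whenever it is
// greater than 0. Neither the new key nor this helper is in Spark yet.
public class GpuConf {
    public static int effectiveGpus(Map<String, String> conf) {
        int hard = Integer.parseInt(conf.getOrDefault("spark.mesos.gpus", "0"));
        int max = Integer.parseInt(conf.getOrDefault("spark.mesos.gpus.max", "0"));
        // Hard guarantee wins when set; otherwise fall back to the soft cap.
        return hard > 0 ? hard : max;
    }
}
```

So a job setting both keys would always receive exactly the hard-limit amount, never just "up to" the max.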

> Allow guaranteed amount of GPU to be used when launching jobs
> -
>
> Key: SPARK-19320
> URL: https://issues.apache.org/jira/browse/SPARK-19320
> Project: Spark
>  Issue Type: Improvement
>  Components: Mesos
>Reporter: Timothy Chen
>
> Currently the only configuration for using GPUs with Mesos sets the 
> maximum number of GPUs a job will take from an offer, but it doesn't 
> guarantee exactly how many the job gets.
> We should have a configuration that sets this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-19320) Allow guaranteed amount of GPU to be used when launching jobs

2017-02-28 Thread Ji Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888998#comment-15888998
 ] 

Ji Yan commented on SPARK-19320:


[~tnachen] in this case, should we rename spark.mesos.gpus.max to something 
else, or should we keep spark.mesos.gpus.max and add a new configuration 
for a hard limit on GPUs?




[jira] [Commented] (SPARK-19740) Spark executor always runs as root when running on mesos

2017-02-25 Thread Ji Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884487#comment-15884487
 ] 

Ji Yan commented on SPARK-19740:


The problem is that when running Spark on Mesos, there is no way to run the 
Spark executor as a non-root user.

> Spark executor always runs as root when running on mesos
> 
>
> Key: SPARK-19740
> URL: https://issues.apache.org/jira/browse/SPARK-19740
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos
>Affects Versions: 2.1.0
>Reporter: Ji Yan
>
> When running Spark on Mesos with the Docker containerizer, the Spark 
> executors are always launched with the 'docker run' command without the 
> --user option, which results in the executors running as root. Mesos 
> supports passing arbitrary parameters to 'docker run'; Spark could use 
> that to expose setting the user.
> Background on Mesos arbitrary parameter support: 
> https://issues.apache.org/jira/browse/MESOS-1816
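As an illustration of how the arbitrary-parameters mechanism could carry a user setting, a minimal sketch. The helper and the parameter plumbing here are hypothetical; only the docker run --user flag itself is real.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: expanding user-supplied key=value docker parameters
// into extra flags appended to the `docker run` command line, so that
// "user=nobody" becomes `--user nobody`. Only --user itself is a real
// docker flag; this helper is illustrative.
public class DockerArgs {
    public static List<String> toRunFlags(List<String> keyValues) {
        List<String> flags = new ArrayList<>();
        for (String kv : keyValues) {
            // Split on the first '=' only, so values may contain '='.
            String[] parts = kv.split("=", 2);
            flags.add("--" + parts[0]);
            flags.add(parts[1]);
        }
        return flags;
    }
}
```

With that, passing "user=nobody" would yield the flags `--user nobody`, and the executor container would no longer run as root.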






[jira] [Commented] (SPARK-19740) Spark executor always runs as root when running on mesos

2017-02-25 Thread Ji Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884426#comment-15884426
 ] 

Ji Yan commented on SPARK-19740:


Proposed change: 
https://github.com/yanji84/spark/commit/4f8368ea727e5689e96794884b8d1baf3eccb5d5




[jira] [Updated] (SPARK-19740) Spark executor always runs as root when running on mesos

2017-02-25 Thread Ji Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ji Yan updated SPARK-19740:
---
Description: 
When running Spark on Mesos with the Docker containerizer, the Spark 
executors are always launched with the 'docker run' command without the 
--user option, which results in the executors running as root. Mesos 
supports passing arbitrary parameters to 'docker run'; Spark could use that 
to expose setting the user.

Background on Mesos arbitrary parameter support: 
https://issues.apache.org/jira/browse/MESOS-1816


  was:When running Spark on Mesos with docker containerizer, the spark 
executors are always launched with 'docker run' command without specifying 
--user option, which always results in spark executors running as root. Mesos 
has a way to support arbitrary parameters. Spark could use that to expose 
setting user





[jira] [Created] (SPARK-19740) Spark executor always runs as root when running on mesos

2017-02-25 Thread Ji Yan (JIRA)
Ji Yan created SPARK-19740:
--

 Summary: Spark executor always runs as root when running on mesos
 Key: SPARK-19740
 URL: https://issues.apache.org/jira/browse/SPARK-19740
 Project: Spark
  Issue Type: Bug
  Components: Mesos
Affects Versions: 2.1.0
Reporter: Ji Yan


When running Spark on Mesos with the Docker containerizer, the Spark 
executors are always launched with the 'docker run' command without the 
--user option, which results in the executors running as root. Mesos 
supports passing arbitrary parameters to 'docker run'; Spark could use that 
to expose setting the user.


