[jira] [Commented] (SPARK-13258) --conf properties not honored in Mesos cluster mode

2017-08-31 Thread Stavros Kontopoulos (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16148982#comment-16148982 ]

Stavros Kontopoulos commented on SPARK-13258:
---------------------------------------------

SPARK_JAVA_OPTS has been removed from the code base
(https://issues.apache.org/jira/browse/SPARK-14453). I verified that this is
no longer a bug. I SSH'd to a node in a DC/OS cluster and ran a job in cluster
mode:
./bin/spark-submit --verbose --deploy-mode cluster \
  --master mesos://spark.marathon.mesos:14003 \
  --conf spark.mesos.uris=https://raw.githubusercontent.com/mesosphere/spark/master/README.md \
  --conf spark.executor.memory=1g \
  --conf spark.executor.cores=2 \
  --conf spark.cores.max=4 \
  --conf spark.mesos.executor.home=/opt/spark/dist \
  --conf spark.mesos.executor.docker.image=mesosphere/spark:1.1.1-2.2.0-hadoop-2.6 \
  --class org.apache.spark.examples.SparkPi \
  https://s3-eu-west-1.amazonaws.com/fdp-stavros-test/spark-examples_2.11-2.1.1.jar 1000

Looking at the driver sandbox, I can see the README was fetched:
I0831 13:24:54.727625 11257 fetcher.cpp:580] Fetched 'https://raw.githubusercontent.com/mesosphere/spark/master/README.md' to '/var/lib/mesos/slave/slaves/53b59f05-430e-4837-aa14-658ca13a82d3-S0/frameworks/53b59f05-430e-4837-aa14-658ca13a82d3-0002/executors/driver-20170831132454-0007/runs/54527349-52de-49e2-b090-9587e4547b99/README.md'
I0831 13:24:54.976838 11268 exec.cpp:162] Version: 1.2.2
I0831 13:24:54.983006 11275 exec.cpp:237] Executor registered on agent 53b59f05-430e-4837-aa14-658ca13a82d3-S0
17/08/31 13:24:56 INFO SparkContext: Running Spark version 2.2.0

Also looking at the executor's sandbox:
W0831 13:24:58.799739 11234 fetcher.cpp:322] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://raw.githubusercontent.com/mesosphere/spark/master/README.md
I0831 13:24:58.799767 11234 fetcher.cpp:580] Fetched 'https://raw.githubusercontent.com/mesosphere/spark/master/README.md' to '/var/lib/mesos/slave/slaves/53b59f05-430e-4837-aa14-658ca13a82d3-S2/frameworks/53b59f05-430e-4837-aa14-658ca13a82d3-0002-driver-20170831132454-0007/executors/3/runs/8aa9a13d-c6a6-4449-b37d-8b8ab32339d2/README.md'
I0831 13:24:59.033010 11244 exec.cpp:162] Version: 1.2.2
I0831 13:24:59.038556 11246 exec.cpp:237] Executor registered on agent 53b59f05-430e-4837-aa14-658ca13a82d3-S2
17/08/31 13:25:00 INFO CoarseGrainedExecutorBackend: Started daemon with process name: 7...@ip-10-0-2-43.eu-west-1.compute.internal

[~susanxhuynh] [~arand] [~sowen] This is not an issue anymore for versions >= 2.2.0;
you just need to use the --conf option.
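
As a quick cross-check, the submitted properties can also be printed from inside
the driver. This is a minimal sketch, assuming Spark 2.x on the classpath; the
ConfCheck object and the property list are only illustrative:

{code:scala}
import org.apache.spark.sql.SparkSession

object ConfCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ConfCheck").getOrCreate()
    val conf = spark.sparkContext.getConf
    // Keys passed via --conf at submit time; getOption avoids an exception
    // for keys that were never set.
    Seq("spark.mesos.uris",
        "spark.mesos.executor.docker.image",
        "spark.executor.memory").foreach { key =>
      println(s"$key -> ${conf.getOption(key).getOrElse("<unset>")}")
    }
    spark.stop()
  }
}
{code}

Submitting that class in cluster mode and grepping the driver's stdout for the
printed keys shows whether the --conf values made it through.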



> --conf properties not honored in Mesos cluster mode
> ----------------------------------------------------
>
> Key: SPARK-13258
> URL: https://issues.apache.org/jira/browse/SPARK-13258
> Project: Spark
> Issue Type: Bug
> Components: Mesos
> Affects Versions: 1.6.0
> Reporter: Michael Gummelt
>
> Spark properties set on {{spark-submit}} via the deprecated {{SPARK_JAVA_OPTS}} are passed along to the driver, but those set via the preferred {{--conf}} are not.
> For example, this results in the URI being fetched in the executor:
> {{SPARK_JAVA_OPTS="-Dspark.mesos.uris=https://raw.githubusercontent.com/mesosphere/spark/master/README.md -Dspark.mesos.executor.docker.image=mesosphere/spark:1.6.0" ./bin/spark-submit --deploy-mode cluster --master mesos://10.0.78.140:7077 --class org.apache.spark.examples.SparkPi http://downloads.mesosphere.com.s3.amazonaws.com/assets/spark/spark-examples_2.10-1.5.0.jar}}
> This does not:
> {{SPARK_JAVA_OPTS="-Dspark.mesos.executor.docker.image=mesosphere/spark:1.6.0" ./bin/spark-submit --deploy-mode cluster --master mesos://10.0.78.140:7077 --conf spark.mesos.uris=https://raw.githubusercontent.com/mesosphere/spark/master/README.md --class org.apache.spark.examples.SparkPi http://downloads.mesosphere.com.s3.amazonaws.com/assets/spark/spark-examples_2.10-1.5.0.jar}}
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala#L369
> In the above line of code, you can see that SPARK_JAVA_OPTS is passed along to the driver, so those properties take effect.
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala#L373
> Whereas in this line of code, you see that {{--conf}} variables are set on {{SPARK_EXECUTOR_OPTS}}, which AFAICT has absolutely no effect because this env var is being set on the driver, not the executor.
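
For illustration, one possible direction for a fix can be sketched as: forward each
submitted Spark property to the driver invocation explicitly instead of only copying
it into SPARK_EXECUTOR_OPTS. This is a simplified sketch, not the actual
MesosClusterScheduler code; buildDriverCommand and submissionConf are hypothetical
names used only for illustration:

{code:scala}
object DriverCommandSketch {
  // Turn every submitted Spark property into an explicit --conf key=value
  // argument on the driver's spark-submit invocation, so the driver-side
  // SparkConf actually sees it.
  def buildDriverCommand(submissionConf: Map[String, String],
                         mainClass: String,
                         appJar: String,
                         appArgs: Seq[String]): Seq[String] = {
    val confArgs = submissionConf.toSeq.flatMap { case (k, v) => Seq("--conf", s"$k=$v") }
    Seq("./bin/spark-submit", "--class", mainClass) ++ confArgs ++ Seq(appJar) ++ appArgs
  }

  def main(args: Array[String]): Unit = {
    val cmd = buildDriverCommand(
      Map("spark.mesos.uris" ->
        "https://raw.githubusercontent.com/mesosphere/spark/master/README.md"),
      "org.apache.spark.examples.SparkPi",
      "spark-examples_2.10-1.5.0.jar",
      Seq("1000"))
    println(cmd.mkString(" "))
  }
}
{code}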






[jira] [Commented] (SPARK-13258) --conf properties not honored in Mesos cluster mode

2016-07-13 Thread Michael Gummelt (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375797#comment-15375797 ]

Michael Gummelt commented on SPARK-13258:
-----------------------------------------

Nope, this is still a bug.







[jira] [Commented] (SPARK-13258) --conf properties not honored in Mesos cluster mode

2016-04-04 Thread Jo Voordeckers (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224793#comment-15224793 ]

Jo Voordeckers commented on SPARK-13258:
----------------------------------------

I would assume so. My fix is merged into master and I have a pending backport
against branch-1.6. Can you test it?

https://github.com/apache/spark/pull/12101







[jira] [Commented] (SPARK-13258) --conf properties not honored in Mesos cluster mode

2016-04-04 Thread Michael Gummelt (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224690#comment-15224690 ]

Michael Gummelt commented on SPARK-13258:
-----------------------------------------

[~jayv] Does your PR fix this problem?







[jira] [Commented] (SPARK-13258) --conf properties not honored in Mesos cluster mode

2016-04-03 Thread Vidhya Arvind (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-13258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15223668#comment-15223668 ]

Vidhya Arvind commented on SPARK-13258:
---------------------------------------

Hi, I'm wondering if this has been fixed?



