> However, they are missing in subsequent child processes and the final java
process started doesn't contain them either.

I don't see any evidence of this in your process list.  `launcher.Main` is
not the final java process.  `launcher.Main` prints a java command, which
`spark-class` then runs; that command becomes the final java process.
`launcher.Main` should take the contents of SPARK_EXECUTOR_OPTS and include
those opts in the command it prints out.
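
For context, spark-class builds that final command roughly like this (a
simplified sketch of the bin/spark-class script, not the exact source;
error handling omitted, variable names as in the script):

    # launcher.Main prints the final java command to stdout as NUL-delimited
    # arguments; spark-class collects them and execs the result
    build_command() {
      "$RUNNER" -Xmx128m -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"
    }
    CMD=()
    while IFS= read -d '' -r ARG; do
      CMD+=("$ARG")
    done < <(build_command "$@")
    exec "${CMD[@]}"

So the launcher.Main process you captured (pid 1215) only builds the
command; the java command it prints is what actually runs
CoarseGrainedExecutorBackend.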

If you can include the process listing for that final command and it
doesn't contain the aws system properties from SPARK_EXECUTOR_OPTS, then
I'd agree something is wrong.
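
Something along these lines on the agent host should capture it (a
hypothetical one-liner; the filters are just to skip the wrapper processes,
adjust as needed):

    # show the executor JVM itself, filtering out the shell wrappers,
    # the launcher.Main helper, and grep itself
    ps -ef | grep CoarseGrainedExecutorBackend | grep -v -e launcher.Main -e spark-class -e grep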

On Tue, Aug 9, 2016 at 10:13 AM, Jim Carroll <jimfcarr...@gmail.com> wrote:

> I'm running spark 2.0.0 on Mesos using spark.mesos.executor.docker.image to
> point to a docker image that I built with the Spark installation.
>
> Everything is working except that the Spark client process started inside
> the container doesn't get any of the parameters I set in the Spark config
> on the driver.
>
> I set spark.executor.extraJavaOptions and spark.executor.extraClassPath in
> the driver and they don't get passed all the way through. Here is a capture
> of the chain of processes that are started on the mesos slave, in the
> docker container:
>
> root      1064  1051  0 12:46 ?        00:00:00 docker -H
> unix:///var/run/docker.sock run --cpu-shares 8192 --memory 4723834880 -e
> SPARK_CLASSPATH=[path to my jar] -e SPARK_EXECUTOR_OPTS=
> -Daws.accessKeyId=[myid] -Daws.secretKey=[mykey] -e SPARK_USER=root -e
> SPARK_EXECUTOR_MEMORY=4096m -e MESOS_SANDBOX=/mnt/mesos/sandbox -e
> MESOS_CONTAINER_NAME=mesos-90e2c720-1e45-4dbc-8271-f0c47a33032a-S0.772f8080-6278-4a35-9e57-0009787ac605
> -v /tmp/mesos/slaves/90e2c720-1e45-4dbc-8271-f0c47a33032a-S0/frameworks/f5794f8a-b56f-4958-b906-f05c426dcef0-0001/executors/0/runs/772f8080-6278-4a35-9e57-0009787ac605:/mnt/mesos/sandbox
> --net host --entrypoint /bin/sh --name
> mesos-90e2c720-1e45-4dbc-8271-f0c47a33032a-S0.772f8080-6278-4a35-9e57-0009787ac605
> [my docker image] -c  "/opt/spark/./bin/spark-class"
> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
> spark://CoarseGrainedScheduler@192.168.10.145:46121 --executor-id 0
> --hostname 192.168.10.145 --cores 8 --app-id
> f5794f8a-b56f-4958-b906-f05c426dcef0-0001
>
> root      1193  1175  0 12:46 ?        00:00:00 /bin/sh -c
> "/opt/spark/./bin/spark-class"
> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
> spark://CoarseGrainedScheduler@192.168.10.145:46121 --executor-id 0
> --hostname 192.168.10.145 --cores 8 --app-id
> f5794f8a-b56f-4958-b906-f05c426dcef0-0001
>
> root      1208  1193  0 12:46 ?        00:00:00 bash
> /opt/spark/./bin/spark-class
> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
> spark://CoarseGrainedScheduler@192.168.10.145:46121 --executor-id 0
> --hostname 192.168.10.145 --cores 8 --app-id
> f5794f8a-b56f-4958-b906-f05c426dcef0-0001
>
> root      1213  1208  0 12:46 ?        00:00:00 bash
> /opt/spark/./bin/spark-class
> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
> spark://CoarseGrainedScheduler@192.168.10.145:46121 --executor-id 0
> --hostname 192.168.10.145 --cores 8 --app-id
> f5794f8a-b56f-4958-b906-f05c426dcef0-0001
>
> root      1215  1213  0 12:46 ?        00:00:00
> /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Xmx128m -cp /opt/spark/jars/*
> org.apache.spark.launcher.Main
> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url
> spark://CoarseGrainedScheduler@192.168.10.145:46121 --executor-id 0
> --hostname 192.168.10.145 --cores 8 --app-id
> f5794f8a-b56f-4958-b906-f05c426dcef0-0001
>
> Notice that in the initial process started by mesos, both SPARK_CLASSPATH is
> set to the value of spark.executor.extraClassPath and the -D options are set
> as I specified them in spark.executor.extraJavaOptions (in this case, my aws
> creds) in the driver configuration.
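>
> To be concrete, the driver-side settings amount to something like this
> (shown spark-defaults.conf-style, with the same placeholders as the
> listing above; illustrative, not my literal config):
>
>     spark.mesos.executor.docker.image    [my docker image]
>     spark.executor.extraClassPath        [path to my jar]
>     spark.executor.extraJavaOptions      -Daws.accessKeyId=[myid] -Daws.secretKey=[mykey]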
>
> However, they are missing in subsequent child processes and the final java
> process started doesn't contain them either.
>
> I "fixed" the classpath problem by putting my jar in /opt/spark/jars
> (/opt/spark is where I have Spark installed in the docker container).
>
> Can someone tell me what I'm missing?
>
> Thanks
> Jim
>
>
>
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-mesos-in-docker-not-getting-parameters-tp27500.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>


-- 
Michael Gummelt
Software Engineer
Mesosphere
