I'm playing around with Spark on Windows and have two worker nodes running,
started manually using a script that contains the following:

set SPARK_HOME=C:\dev\programs\spark-1.2.0
set SPARK_MASTER_IP=master.brad.com
spark-class org.apache.spark.deploy.worker.Worker spark://master.brad.com:7077

I then create a copy of this script with a different SPARK_HOME defined to
run my second worker from.
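For reference, the second script looks the same apart from SPARK_HOME (the
"-worker2" path below is just an illustrative example, not my actual path):

set SPARK_HOME=C:\dev\programs\spark-1.2.0-worker2
set SPARK_MASTER_IP=master.brad.com
spark-class org.apache.spark.deploy.worker.Worker spark://master.brad.com:7077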

This works; however, I am unable to configure a different log4j.properties
file for each worker, which is a pain, as they both log to the same log
file. The issue seems to be that it's the conf directory on the master that
supplies the JVM arguments each worker uses when it executes a task, so the
same "-Dlog4j.configuration=file:..." argument is sent to both workers.

Does anyone know how the JVM command sent to the workers from the master can
read an environment variable, so that this can be defined per worker?
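One thing I have been considering (untested, and assuming spark-class honours
SPARK_DAEMON_JAVA_OPTS for standalone daemons on Windows the way the
standalone-mode docs describe) is setting per-worker JVM options in each
launch script before invoking spark-class; the worker2-log4j.properties path
here is a made-up example:

set SPARK_HOME=C:\dev\programs\spark-1.2.0
set SPARK_MASTER_IP=master.brad.com
rem Hypothetical per-worker log4j config; path is illustrative only
set SPARK_DAEMON_JAVA_OPTS=-Dlog4j.configuration=file:C:/dev/conf/worker2-log4j.properties
spark-class org.apache.spark.deploy.worker.Worker spark://master.brad.com:7077

As far as I can tell this would only affect the worker daemon's own logging,
though; the executors launched for tasks may still inherit the
"-Dlog4j.configuration" set via spark.executor.extraJavaOptions on the
driver side, which is the part I can't work out how to vary per worker.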



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Cluster-launch-tp1484p21647.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
