This is what I thought would be the simplest approach, but I can't seem to
figure out how to configure it.
There are two settings involved:

SPARK_WORKER_INSTANCES, which sets the number of worker processes per node, and

SPARK_WORKER_MEMORY, which sets how much total memory the workers have to give
to executors (e.g. 1000m, 2g).
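
For concreteness, something like this in conf/spark-env.sh on each node (the 4
and 8g below are just made-up illustrative values, not my actual config):

    # conf/spark-env.sh (standalone mode) -- illustrative values only
    export SPARK_WORKER_INSTANCES=4   # four worker processes on this node
    export SPARK_WORKER_MEMORY=8g     # memory for executors -- per worker, or total across all four?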

I believe that memory is shared across all of the workers! So when the worker
memory gets set by the master (I tried setting it in spark-env.sh on a worker,
but it was overridden by the setting on the master), it is not multiplied by
the number of workers?
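
To make the ambiguity concrete, with the illustrative values above the two
readings would be:

    per-worker reading:  4 workers x 8g each     = 32g available to executors on the node
    shared reading:      4 workers splitting 8g  = roughly 2g each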

(Also, I'm not sure whether SPARK_WORKER_INSTANCES is overridden by the master as well...)

How would you suggest setting this up?


