Hi Simon,

Thanks.  I did actually have "SPARK_WORKER_CORES=8" in spark-env.sh - it's
commented as 'to set the number of cores to use on this machine'.
I'm not sure how this would interplay with SPARK_EXECUTOR_INSTANCES and
SPARK_EXECUTOR_CORES, but I removed it and still see no scale-up with
increasing cores.  Nothing else is set in spark-env.sh.

However, your email has drawn my attention to the comments in spark-env.sh,
which indicate that SPARK_EXECUTOR_INSTANCES and SPARK_EXECUTOR_CORES are
only read for YARN configurations.  Based also on what is listed under
"Options for the daemons used in the standalone deploy mode", I guess the
standalone thing to do would be to use:

# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
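i.e., just as an illustration (not what I'm running, but assuming one wanted
say 2 workers of 4 cores each under the standalone daemons):

    # spark-env.sh - read by the standalone daemons, not in local mode
    SPARK_WORKER_INSTANCES=2
    SPARK_WORKER_CORES=4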

But as I'm running locally, and there is a separate comment & section for
"Options read when launching programs locally with ./bin/run-example or
./bin/spark-submit", I don't believe the daemon settings would be read for
my setup.  In fact, I just tried switching to SPARK_WORKER_CORES and
SPARK_WORKER_INSTANCES and the cores still don't scale, so it's probably
using all cores available on the machine, and I don't have control of
executors and cores/executor when running local.
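
As far as I can tell, the only knob local mode does respect is the master
URL itself, where the N in local[N] sets the number of task threads, e.g.
(illustrative commands, not my actual job):

    # use every core on the machine
    ./bin/spark-submit --master 'local[*]' ...
    # or cap it at 8 threads
    ./bin/spark-submit --master 'local[8]' ...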

Lan's comments here:

http://stackoverflow.com/questions/24696777/what-is-the-relationship-between-workers-worker-instances-and-executors

mention the standalone cluster manager.  I had assumed they would also apply
to a large local machine.
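
(If that's the case, I suppose the workaround on a single large machine
would be to run the standalone master plus a worker locally and submit
against that. A rough sketch, assuming default ports and that localhost
resolves:

    ./sbin/start-master.sh
    ./sbin/start-slave.sh spark://localhost:7077
    ./bin/spark-submit --master spark://localhost:7077 \
        --executor-cores 4 --total-executor-cores 8 ...

I haven't tried this yet though.)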

Will I be able to control executors and cores/executor in future versions of
Spark?  Are there any plans for this?

Please let me know if my current understanding of what's possible in Spark
local mode is incorrect.

Many thanks

Karen