Re: Correct way of setting executor numbers and executor cores in Spark 1.6.1 for non-clustered mode ?

2016-05-08 Thread Mich Talebzadeh
Hi Karen, You mentioned: "So if I'm reading your email correctly, it sounds like I should be able to increase the number of executors in local mode by adding hostnames for localhost, and cores per executor with SPARK_EXECUTOR_CORES. And by starting master/slave(s) for localhost I can access
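
As a minimal sketch of that approach, assuming the default master port and illustrative resource values (the jar name is a placeholder), starting a standalone master/worker on the same box and submitting against it would look roughly like:

  # Start a standalone master and a worker on this machine
  $SPARK_HOME/sbin/start-master.sh
  $SPARK_HOME/sbin/start-slave.sh spark://localhost:7077

  # Submit to the standalone master instead of local[*], so the
  # executor core/memory settings are actually honoured
  $SPARK_HOME/bin/spark-submit \
    --master spark://localhost:7077 \
    --executor-cores 4 \
    --executor-memory 4G \
    --total-executor-cores 8 \
    your-app.jar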

Re: Correct way of setting executor numbers and executor cores in Spark 1.6.1 for non-clustered mode ?

2016-05-07 Thread kmurph
Hi Simon, Thanks. I did actually have "SPARK_WORKER_CORES=8" in spark-env.sh - it's commented as 'to set the number of cores to use on this machine'. Not sure how this would interplay with SPARK_EXECUTOR_INSTANCES and SPARK_EXECUTOR_CORES, but I removed it and still see no scale-up with increasing
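
For reference, a sketch of how these settings sit in spark-env.sh, with illustrative values; note that the 1.6 spark-env.sh.template lists SPARK_EXECUTOR_INSTANCES and SPARK_EXECUTOR_CORES under options read in YARN client mode, while SPARK_WORKER_CORES is a standalone-daemon option, so their interplay outside those modes is limited:

  # spark-env.sh - illustrative values, not taken from the thread
  # Standalone worker daemon: total cores this machine may hand out
  export SPARK_WORKER_CORES=8
  # Read in YARN client mode: number of executors and cores per executor
  export SPARK_EXECUTOR_INSTANCES=2
  export SPARK_EXECUTOR_CORES=4   # 2 executors x 4 cores fits the 8-core worker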

Re: Correct way of setting executor numbers and executor cores in Spark 1.6.1 for non-clustered mode ?

2016-05-07 Thread Mich Talebzadeh
Check how much free memory you have on your host with /usr/bin/free. As a heuristic, start with these values: export SPARK_EXECUTOR_CORES=4 ## Number of cores for the workers (Default: 1). export SPARK_EXECUTOR_MEMORY=8G ## Memory per Worker (e.g. 1000M, 2G) (Default: 1G). export
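
To make the heuristic concrete, a hypothetical check before sizing executors (the output below is an example for a 16GB host, not from the thread):

  $ /usr/bin/free -g
               total       used       free     shared    buffers     cached
  Mem:            16          6         10          0          1          3

  # Leave headroom for the OS and the driver; on a 16GB host the
  # suggested 8G per executor leaves roughly half the memory free
  export SPARK_EXECUTOR_MEMORY=8G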

Correct way of setting executor numbers and executor cores in Spark 1.6.1 for non-clustered mode ?

2016-05-07 Thread kmurph
Hi, I'm running Spark 1.6.1 on a single machine, initially a small one (8 cores, 16GB RAM), passing "--master local[*]" to spark-submit, and I'm trying to see scaling with increasing cores, unsuccessfully. Initially I'm setting SPARK_EXECUTOR_INSTANCES=1 and increasing the cores for each executor.
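
For anyone reproducing this, a sketch of the submission described (the jar name and memory value are placeholders). In local mode the master URL itself fixes the parallelism - local[N] runs N worker threads inside a single JVM - so varying N is the usual way to test core scaling on one machine:

  # Vary N in local[N] to change the number of worker threads;
  # local[*] uses one thread per available core
  $SPARK_HOME/bin/spark-submit \
    --master local[8] \
    --driver-memory 8G \
    your-app.jar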