That worked great, thanks Andrew.
On Tue, Aug 18, 2015 at 1:39 PM, Andrew Or and...@databricks.com wrote:
Hi Axel,
You can try setting `spark.deploy.spreadOut` to false (through your
conf/spark-defaults.conf file). What this does is essentially try to
schedule as many cores on one worker as possible before spilling over to
other workers.
Hmm, maybe I spoke too soon.
I have an Apache Zeppelin instance running and have configured it to use 48
cores (each node only has 16 cores), so I figured that setting it to 48
would mean that Spark would grab 3 nodes. What happens instead, though, is
that Spark reports that 48 cores are being used.
Hi Axel, what Spark version are you using? Also, what do your
configurations look like for the following?
- spark.cores.max (also --total-executor-cores)
- spark.executor.cores (also --executor-cores)
2015-08-19 9:27 GMT-07:00 Axel Dahl a...@whisperstream.com:
Hmm, maybe I spoke too soon.
By default, standalone mode creates 1 executor on every worker machine per
application.
The overall number of cores is configured with --total-executor-cores.
So in general, if you specify --total-executor-cores=1, there will be only
1 core on some executor and you'll get what you want.
On the other
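The arithmetic above can be sketched as follows (an illustrative approximation, not Spark's actual scheduler code; the function name is made up):

```python
# Rough rule of thumb for standalone mode: the master launches about
# total_executor_cores // executor_cores executors for the application.
def planned_executors(total_executor_cores, executor_cores):
    return total_executor_cores // executor_cores

print(planned_executors(48, 16))  # 3 executors, one per 16-core node
print(planned_executors(1, 1))    # 1 executor with a single core
```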
Hi Axel,
You can try setting `spark.deploy.spreadOut` to false (through your
conf/spark-defaults.conf file). What this does is essentially try to
schedule as many cores on one worker as possible before spilling over to
other workers. Note that you *must* restart the cluster through the sbin
scripts for this to take effect.
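A minimal sketch of what spreadOut changes, assuming 4 workers with 16 free cores each and an app asking for 48 cores (the function is illustrative, not Spark's actual master code):

```python
def assign_cores(workers_free, cores_wanted, spread_out):
    """Return per-worker core grants for one application."""
    assigned = [0] * len(workers_free)
    if spread_out:
        # spreadOut=true (the default): round-robin one core at a time
        # across all workers that still have free cores.
        i = 0
        while cores_wanted > 0 and any(f > a for f, a in zip(workers_free, assigned)):
            if workers_free[i] > assigned[i]:
                assigned[i] += 1
                cores_wanted -= 1
            i = (i + 1) % len(workers_free)
    else:
        # spreadOut=false: fill each worker completely before moving on.
        for i, free in enumerate(workers_free):
            take = min(free, cores_wanted)
            assigned[i] = take
            cores_wanted -= take
            if cores_wanted == 0:
                break
    return assigned

print(assign_cores([16, 16, 16, 16], 48, spread_out=True))   # [12, 12, 12, 12]
print(assign_cores([16, 16, 16, 16], 48, spread_out=False))  # [16, 16, 16, 0]
```

With spreading enabled, 48 cores land on all 4 workers; with it disabled, 3 workers are filled and the fourth is left untouched.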
I have a 4-node cluster and have been playing around with the
num-executors, executor-memory, and executor-cores parameters.
I set the following:
--executor-memory=10G
--num-executors=1
--executor-cores=8
But when I run the job, I see that each worker is running one executor
which has 2 cores.
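One possible explanation for the 2-cores-per-worker pattern, consistent with the spread-out scheduling described earlier in the thread (this is an assumption, not something confirmed in the messages):

```python
# Hedged guess: with spread-out scheduling (the standalone default),
# a request for 8 cores gets distributed evenly across all 4 workers
# instead of landing on a single executor.
workers = 4
requested_cores = 8  # from --executor-cores=8 in the settings above
cores_per_worker = requested_cores // workers
print(cores_per_worker)  # 2
```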