Hi Mohammed,
thanks a lot for the reply.
OK, so from what I understand, I cannot control the number of executors per
worker in standalone cluster mode.
Is that correct?
BR
On 20 February 2015 at 17:46, Mohammed Guller moham...@glassbeam.com
wrote:
SPARK_WORKER_MEMORY=8g
Will allocate 8GB memory to Spark on each worker node. Nothing to do with # of
executors.
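As a sketch of what that looks like (assuming the standard conf/spark-env.sh mechanism; the 8g value is just the example from this thread):

```shell
# conf/spark-env.sh on each worker node
# Total memory this worker can hand out to executors on the machine.
# It caps memory only; it does not control how many executors run.
SPARK_WORKER_MEMORY=8g
```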
Mohammed
From: Yiannis Gkoufas [mailto:johngou...@gmail.com]
Sent: Friday, February 20, 2015 4:55 AM
To: user@spark.apache.org
Subject: Setting the number of executors in standalone
AFAIK, in standalone mode, each Spark application gets one executor on each
worker. You could run multiple workers on a machine, though.
Mohammed
From: Yiannis Gkoufas [mailto:johngou...@gmail.com]
Sent: Friday, February 20, 2015 9:48 AM
To: Mohammed Guller
Cc: user@spark.apache.org
Subject:
Hi,
Currently, there is only one executor per worker. There is a JIRA ticket to
relax this:
https://issues.apache.org/jira/browse/SPARK-1706
But, if you want to use more cores, maybe, you can try increasing
SPARK_WORKER_INSTANCES. It increases the number of workers per machine.
Take a look here:
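For example, a sketch of conf/spark-env.sh with SPARK_WORKER_INSTANCES (the specific core/memory values are illustrative, not from this thread):

```shell
# conf/spark-env.sh -- illustrative values
# Launch two worker daemons per machine; each Spark application then
# gets one executor from each worker, i.e. two executors per machine.
SPARK_WORKER_INSTANCES=2
# Split the machine's resources between the workers so they don't
# oversubscribe it (these limits apply per worker instance).
SPARK_WORKER_CORES=4
SPARK_WORKER_MEMORY=8g
```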