Thank you Nishkam,
I have read your code. So, for the sake of my understanding, it seems that
for each SparkContext there is one executor per node? Can anyone confirm
this?
--
Martin Goodson | VP Data Science
(0)20 3397 1240
On Thu, Jul 24, 2014 at 6:12 AM, Nishkam wrote:
Great - thanks for the clarification Aaron. The offer stands for me to
write some documentation and an example that covers this without leaving
*any* room for ambiguity.
--
Martin Goodson | VP Data Science
(0)20 3397 1240
On Thu, Jul 24, 2014 at 6:09 PM, Aaron wrote:
Whoops, I was mistaken in my original post last year. By default, there is
one executor per node per SparkContext, as you said.
spark.executor.memory is the amount of memory that the application
requests for each of its executors. SPARK_WORKER_MEMORY is the amount of
memory a Spark Worker is willing to allocate to executors on its machine.
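For what it's worth, here's a minimal sketch of how the two fit together in
standalone mode. SPARK_WORKER_MEMORY goes in conf/spark-env.sh on each worker
(e.g. SPARK_WORKER_MEMORY=12g), while spark.executor.memory is set per
application; the master URL and numbers below are made up:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://master:7077")   // hypothetical standalone master URL
  .setAppName("MemoryConfigExample")
  .set("spark.executor.memory", "2g") // each executor JVM asks its Worker for 2 GB
val sc = new SparkContext(conf)       // granted as long as 2g fits within the
                                      // Worker's remaining SPARK_WORKER_MEMORY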
So this is good information for standalone, but how is memory distributed
within Mesos? There's coarse-grained mode, where the executor stays active,
and there's fine-grained mode, where it appears each task is its own process
in Mesos. How do memory allocations work in these cases? Thanks!
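P.S. For concreteness, this is the switch I mean; a rough sketch with a
made-up master URL:

import org.apache.spark.SparkConf

// Coarse-grained: one long-lived executor per node, held for the app's lifetime.
val coarse = new SparkConf()
  .setMaster("mesos://master:5050")   // hypothetical Mesos master
  .setAppName("CoarseGrainedJob")
  .set("spark.mesos.coarse", "true")
  .set("spark.executor.memory", "2g")

// Fine-grained (the default): each Spark task runs as its own Mesos task.
val fine = new SparkConf()
  .setMaster("mesos://master:5050")
  .setAppName("FineGrainedJob")
  .set("spark.mesos.coarse", "false")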
On Thu,
We are having difficulties configuring Spark, partly because we still don't
understand some key concepts. For instance, how many executors are there
per machine in standalone mode? This is after having closely read the
documentation several times:
Hi Martin,
In standalone mode, each SparkContext you initialize gets its own set of
executors across the cluster. So, for example, if you have two shells open,
you'll end up with two executor JVMs on each worker machine in the cluster,
one per shell.
As for the other docs, you can configure the total number of cores
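For example, spark.cores.max caps how many cores a single context grabs
cluster-wide, which is what lets two shells coexist. A quick sketch (master
URL and numbers made up):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://master:7077") // hypothetical standalone master URL
  .setAppName("SharedShell")
  .set("spark.cores.max", "8")      // this SparkContext claims at most 8 cores
                                    // cluster-wide, leaving the rest for others
val sc = new SparkContext(conf)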
On Wed, Jul 23, 2014 at 4:19 PM, Andrew Ash and...@andrewash.com wrote:
In standalone mode, each SparkContext you initialize gets its own set of
executors across the cluster. So, for example, if you have two shells open,
you'll end up with two executor JVMs on each worker machine in the cluster,
one per shell.
Thanks Andrew,
So if there is only one SparkContext, there is only one executor per
machine? This seems to contradict Aaron's message from the link above:
If each machine has 16 GB of RAM and 4 cores, for example, you might set
spark.executor.memory between 2 and 3 GB, totaling 8-12 GB used by Spark.
See if this helps:
https://github.com/nishkamravi2/SparkAutoConfig/
It's a very simple tool for auto-configuring default parameters in Spark.
Takes as input high-level parameters (like number of nodes, cores per node,
memory per node, etc.) and spits out default configuration, user advice and
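Roughly, the idea is something like this (a toy sketch of the approach, not
the tool's actual code; the heuristics are made up for illustration):

case class ClusterSpec(nodes: Int, coresPerNode: Int, memPerNodeGb: Int)

// Purely illustrative defaults derived from the cluster description.
def defaults(c: ClusterSpec): Map[String, String] = Map(
  "spark.executor.memory"     -> s"${math.max(1, c.memPerNodeGb - 1)}g", // leave ~1 GB for OS/daemons
  "spark.cores.max"           -> (c.nodes * c.coresPerNode).toString,    // use every core
  "spark.default.parallelism" -> (c.nodes * c.coresPerNode * 2).toString // ~2 tasks per core
)

// Example: 10 nodes with 4 cores and 16 GB each.
defaults(ClusterSpec(10, 4, 16)).foreach { case (k, v) => println(s"$k=$v") }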