Hi,

Running Spark 1.0.1 on YARN 2.5.

When I specify --executor-memory 4g, the Spark UI shows each executor as
having only 2.3 GB, and similarly for 8g, only 4.6 GB -- in both cases
roughly 58% of what I asked for.

I am guessing that --executor-memory sets the size of the YARN container,
and that the executor JVM only gets a percentage of the container's total
memory. Is there a YARN or Spark parameter I can tune so that my task JVM
actually gets, say, 6 GB out of the 8 GB?
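
To make it concrete, here is a sketch of the kind of tuning I mean, in
case the UI number is really the storage/cache fraction rather than the
whole heap. spark.storage.memoryFraction (default 0.6) is the only knob I
found in the docs that looked related, and the 0.75 value below is just
an arbitrary example, not something I know to be correct:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("executor-memory-test")
      // Guess: raise the storage fraction (default 0.6) so that more of
      // the requested heap shows up as usable memory in the UI.
      .set("spark.storage.memoryFraction", "0.75")
    val sc = new SparkContext(conf)

Is something along those lines the right approach, or is the limit on the
YARN side instead?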


Thanks.
