I am seeing some strange behavior while running Spark in yarn-client mode on a single-node YARN cluster. In spark-defaults.conf I have configured the executor memory as 2g, and I start the spark shell as follows

bin/spark-shell --master yarn-client

which spawns 2 executors on the node, each with 1060 MB of memory. I have figured out that if you don't specify --num-executors, it spawns 2 executors on the node by default.
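For reference, the executor memory setting I mentioned is a single line in conf/spark-defaults.conf; the exact layout of my file may differ, but the property key spark.executor.memory is the standard one:

spark.executor.memory    2g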


Now when I run it again with

bin/spark-shell --master yarn-client --num-executors 1

now it spawns a single executor, again with 1060 MB. I am not able to understand why the executor ends up with roughly 1 GB plus overhead rather than the 2 GB I specified.
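To double-check that the value is at least reaching the driver, this is a minimal check run from the spark-shell prompt, assuming the default SparkContext sc is available; it only reads back what the driver's SparkConf carries, not what YARN actually granted:

// read back the executor memory value held by the driver's SparkConf
sc.getConf.getOption("spark.executor.memory")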

Why am I seeing this strange behavior?
