Re: strange behavior in spark yarn-client mode

2016-01-14 Thread Marcelo Vanzin
On Thu, Jan 14, 2016 at 10:17 AM, Sanjeev Verma wrote:
> now it spawn a single executors with 1060M size, I am not able to understand
> why this time it executes executors with 1G+overhead not 2G what I
> specified.

Where are you looking for the memory size for the
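The "1G+overhead" figure the question refers to comes from how Spark sizes the YARN container around the executor heap. A rough sketch of that arithmetic, assuming the Spark 1.x default for `spark.yarn.executor.memoryOverhead` of max(384 MB, 10% of executor memory) — check the docs for your exact version:

```python
# Hedged sketch: approximate YARN container size for a Spark 1.x executor.
# Assumption: overhead default is max(384 MB, 10% of executor memory),
# controlled by spark.yarn.executor.memoryOverhead.
def yarn_container_mb(executor_mem_mb, overhead_fraction=0.10, min_overhead_mb=384):
    overhead = max(min_overhead_mb, int(executor_mem_mb * overhead_fraction))
    return executor_mem_mb + overhead

print(yarn_container_mb(2048))  # 2048 + 384 = 2432
print(yarn_container_mb(1024))  # 1024 + 384 = 1408
```

So with `spark.executor.memory=2g` the container request would be around 2.4 GB, not 2 GB exactly; the number shown in the UI is a different quantity again, as the reply above explains.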

Re: strange behavior in spark yarn-client mode

2016-01-14 Thread Marcelo Vanzin
Please reply to the list.

The web UI does not show the total size of the executor's heap. It shows the amount of memory available for caching data, which is, give or take, 60% of the heap by default.

On Thu, Jan 14, 2016 at 11:03 AM, Sanjeev Verma wrote:
> I am
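The "give or take, 60%" can be sketched with the Spark 1.x legacy memory-management defaults, where the storage pool shown in the UI was roughly heap × `spark.storage.memoryFraction` (0.6) × `spark.storage.safetyFraction` (0.9). These fraction names and defaults are an assumption for the 1.x line; the exact UI figure varies by Spark version:

```python
# Hedged sketch: storage ("caching") memory reported by the Spark 1.x UI.
# Assumption: pool = heap * spark.storage.memoryFraction (0.6 default)
#                         * spark.storage.safetyFraction (0.9 default).
def storage_pool_mb(heap_mb, memory_fraction=0.6, safety_fraction=0.9):
    return heap_mb * memory_fraction * safety_fraction

print(round(storage_pool_mb(2048)))  # ~1106 MB for a 2 GB heap
```

A 2 GB heap therefore shows up as roughly 1.1 GB of caching memory in the UI, which is in the same ballpark as the ~1060M figure from the thread, not a sign that the executor only got a 1 GB heap.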

strange behavior in spark yarn-client mode

2016-01-14 Thread Sanjeev Verma
I am seeing strange behaviour while running Spark in yarn-client mode. I am observing this on a single-node YARN cluster. In spark-defaults.conf I have configured the executor memory as 2g and started the Spark shell as follows:

bin/spark-shell --master yarn-client

which triggers the 2 executors on
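For reference, the setup described above would look something like this — the property name is the standard one, but the file paths and the rest of the configuration are assumed:

```
# conf/spark-defaults.conf (only the setting mentioned in the thread)
spark.executor.memory  2g

# then launch the shell against YARN in client mode
bin/spark-shell --master yarn-client
```

The same setting can also be passed per-invocation with `--conf spark.executor.memory=2g`, which is handy when checking whether the value in spark-defaults.conf is actually being picked up.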