Hello, I am experimenting with tuning an on-demand Spark cluster on top of our Cloudera Hadoop installation. I am running Cloudera 5.5.2 with Spark 1.5, and I am running Spark in yarn-client mode.
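For reference, here is a minimal sketch of how I create the context (the app name is just a placeholder; the settings match what I describe below):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch of my setup: yarn-client master with a 512M executor heap.
val conf = new SparkConf()
  .setAppName("executor-memory-test") // placeholder name
  .setMaster("yarn-client")
  .set("spark.executor.memory", "512m")
val sc = new SparkContext(conf)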
Right now my main experimentation is with the spark.executor.memory property, and I have noticed some strange behaviour. When I set spark.executor.memory=512m, several things happen:

- for each executor, a container with 1GB of memory is requested and assigned by YARN
- in the Spark UI I can see that each executor has 256M of memory

So what I am seeing is that Spark requests 2x the memory I configured, but each executor ends up with only 1/4 of what was requested. Why is that?

Thanks.

--
Jan Sterba
https://twitter.com/honzasterba | http://flickr.com/honzasterba | http://500px.com/honzasterba