Hi all,

I saw that Spark 1.6 introduces new off-heap settings, e.g. spark.memory.offHeap.size. The docs say that when enabling off-heap memory we need to shrink the on-heap size accordingly. But on YARN, both the executor JVM heap and the YARN container limit are derived from the same setting, spark.executor.memory (and setting heap size via JVM options is not allowed, according to the docs). How can we set the executor JVM heap size and the YARN container memory limit independently?
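For concreteness, here is roughly the kind of setup I am hoping for. This is only a sketch; the spark.yarn.executor.memoryOverhead value is just my guess at how the off-heap allocation might need to be accounted for in the container limit:

```
# spark-defaults.conf (illustrative values, not tested)
spark.executor.memory                8g     # executor JVM heap (-Xmx)
spark.memory.offHeap.enabled         true
spark.memory.offHeap.size            4g     # off-heap allocation, outside the JVM heap

# Guess: does the off-heap size have to fit inside the YARN overhead,
# so that the container limit becomes executor.memory + memoryOverhead?
spark.yarn.executor.memoryOverhead   4608   # in MB; default is max(384, 0.10 * executor memory)
```

Is bumping spark.yarn.executor.memoryOverhead the intended way to make room for off-heap memory in the container, or is there a separate mechanism?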
Thanks.

马晓宇
hzmaxia...@corp.netease.com