Hi, how can we increase the executor memory of a running Spark cluster on YARN? We want to increase the executor memory when new nodes are added to the cluster. We are running Spark version 1.0.2.
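For context, a hedged sketch of how executor memory is normally set on YARN: in Spark 1.0.2 the per-executor memory is fixed when an application is submitted (via `--executor-memory` or `spark.executor.memory`), so a new value only takes effect for applications submitted after the change. The jar name and sizes below are placeholders, not values from this thread.

```
# Set executor memory and count at submission time (yarn-cluster mode):
spark-submit \
  --master yarn-cluster \
  --executor-memory 4g \
  --num-executors 8 \
  my-app.jar

# Equivalent property, e.g. in spark-defaults.conf:
# spark.executor.memory  4g
```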
Thanks,
Mudassar

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Increase-Executor-Memory-on-YARN-tp18489.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.