Did you go through
http://spark.apache.org/docs/latest/job-scheduling.html#configuration-and-setup ?
For YARN, I guess you will have to copy the spark-1.5.1-yarn-shuffle.jar to
the classpath of all NodeManagers in your cluster.
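For reference, the docs linked above also have you register the external shuffle service as a NodeManager aux-service in yarn-site.xml; a rough sketch (the existing mapreduce_shuffle entry is assumed, keep whatever your cluster already lists there):

```xml
<!-- yarn-site.xml on every NodeManager; restart NodeManagers after the change -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```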
Thanks
Best Regards
On Fri, Oct 30, 2015 at 7:41 PM, Tom Stewart <
https://issues.apache.org/jira/browse/SPARK-10790
I changed the settings so that minExecutors < initialExecutors < maxExecutors, and that
works.
spark-shell --conf spark.dynamicAllocation.enabled=true --conf
spark.shuffle.service.enabled=true --conf
spark.dynamicAllocation.minExecutors=2 --conf
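Spelled out in full, the invocation that worked looks roughly like this (the initialExecutors and maxExecutors values here are illustrative, chosen only to satisfy the min < initial < max ordering):

```shell
spark-shell \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.initialExecutors=4 \
  --conf spark.dynamicAllocation.maxExecutors=12
```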
(Apologies if this re-posts; I'm having challenges with the various web front
ends to this mailing list.)
I am running the following command on a Hadoop cluster to launch Spark shell
with DRA:
spark-shell --conf spark.dynamicAllocation.enabled=true --conf
spark.shuffle.service.enabled=true --conf
spark.dynamicAllocation.minExecutors=4 --conf
spark.dynamicAllocation.maxExecutors=12 --conf