Hi,

My team has a cluster running HDP, with Hive and Spark. We set up Spark to use dynamic resource allocation, for benefits such as not having to hard-code the number of executors and freeing resources after use. Everything is running on YARN.
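For context, the Spark side is configured roughly as follows. This is a sketch of the relevant spark-defaults.conf entries rather than a verbatim copy from our cluster, and the executor bounds are only illustrative:

    # spark-defaults.conf (sketch; executor bounds are illustrative)
    spark.dynamicAllocation.enabled        true
    spark.dynamicAllocation.minExecutors   1
    spark.dynamicAllocation.maxExecutors   20
    # dynamic allocation requires the external shuffle service on each NodeManager
    spark.shuffle.service.enabled          true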
The problem is that, for dynamic resource allocation to work properly with Spark 1.5.2, we needed to set yarn.nodemanager.aux-services in yarn-site.xml to spark_shuffle, but this breaks Hive (1.2.1), since it looks for auxService:mapreduce_shuffle. Does anyone know how to configure things so that both services run smoothly? I've put a sketch of the relevant yarn-site.xml fragment below my signature.

Thanks,
Ian
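P.S. For reference, the relevant yarn-site.xml fragment on the NodeManagers looks roughly like the following. This is a sketch; the *.class property names follow the Spark and Hadoop docs for the two shuffle services rather than being copied verbatim from our cluster:

    <!-- yarn-site.xml (sketch) -->
    <property>
      <!-- currently lists only spark_shuffle, which is when Hive/MapReduce
           jobs start failing with "auxService:mapreduce_shuffle does not exist" -->
      <name>yarn.nodemanager.aux-services</name>
      <value>spark_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>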