Is it possible to support both spark-1.5.1 and spark-1.6.0 on one YARN cluster?

From: Saisai Shao [mailto:sai.sai.s...@gmail.com]
Sent: Monday, December 28, 2015 2:29 PM
To: Jeff Zhang
Cc: 顾亮亮; user@spark.apache.org; 刘骋昺
Subject: Re: Opening Dynamic Scaling Executors on Yarn

Replacing all the shuffle jars and restarting the NodeManagers is enough; there
is no need to restart the NameNode.
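
A minimal sketch of that procedure, assuming the Spark 1.6.0 tarball is
unpacked at /opt/spark-1.6.0 and the NodeManagers load auxiliary-service jars
from /opt/hadoop/share/hadoop/yarn/lib (both paths are assumptions; adjust
them for your cluster layout):

    # On every NodeManager host: swap the shuffle service jar
    # (all paths below are illustrative assumptions)
    rm /opt/hadoop/share/hadoop/yarn/lib/spark-1.5.1-yarn-shuffle.jar
    cp /opt/spark-1.6.0/lib/spark-1.6.0-yarn-shuffle.jar \
       /opt/hadoop/share/hadoop/yarn/lib/

    # Restart only the NodeManager; the NameNode is untouched
    /opt/hadoop/sbin/yarn-daemon.sh stop nodemanager
    /opt/hadoop/sbin/yarn-daemon.sh start nodemanager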

On Mon, Dec 28, 2015 at 2:05 PM, Jeff Zhang <zjf...@gmail.com> wrote:
See 
http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
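
For reference, a sketch of the configuration that page describes (the property
names are from the Spark docs; the executor bounds are example values, not
recommendations). In yarn-site.xml on each NodeManager:

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle,spark_shuffle</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>

And in spark-defaults.conf:

    spark.shuffle.service.enabled    true
    spark.dynamicAllocation.enabled  true
    # example bounds; tune for your workload
    spark.dynamicAllocation.minExecutors  1
    spark.dynamicAllocation.maxExecutors  50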



On Mon, Dec 28, 2015 at 2:00 PM, 顾亮亮 <guliangli...@qiyi.com> wrote:
Hi all,

SPARK-3174 (https://issues.apache.org/jira/browse/SPARK-3174) is a useful
feature for saving resources on YARN.
We want to enable this feature on our YARN cluster.
I have a question about the version of the shuffle service.

I’m now using the spark-1.5.1 shuffle service.
If I want to upgrade to spark-1.6.0, should I replace the shuffle service jar
and restart all the NameNodes on YARN?

Thanks a lot.

Mars




--
Best Regards

Jeff Zhang
