Re: Opening Dynamic Scaling Executors on Yarn
Actually, it just happens to be backward compatible because we didn't change
the shuffle file formats. This may not necessarily be the case moving
forward, as Spark offers no such guarantees. Just thought it was worth
clarifying.

2015-12-27 22:34 GMT-08:00 Saisai Shao:

> External shuffle service is backward compatible, so if you deploy the 1.6
> shuffle service on the NM, it can serve both 1.5 and 1.6 Spark
> applications.
>
> Thanks
> Saisai
Re: Opening Dynamic Scaling Executors on Yarn
The external shuffle service is backward compatible, so if you deploy the
1.6 shuffle service on the NM, it can serve both 1.5 and 1.6 Spark
applications.

Thanks
Saisai

On Mon, Dec 28, 2015 at 2:33 PM, 顾亮亮 wrote:

> Is it possible to support both spark-1.5.1 and spark-1.6.0 on one YARN
> cluster?
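[For context, the shuffle service that serves applications of either version is registered once per NodeManager as a YARN auxiliary service. A minimal yarn-site.xml sketch, using the property names from the Spark dynamic-allocation docs (verify the values against your Spark version):]

```xml
<!-- NodeManager-side registration of the Spark external shuffle service.
     The spark-<version>-yarn-shuffle.jar must also be on the NM classpath. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```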
RE: Opening Dynamic Scaling Executors on Yarn
Is it possible to support both spark-1.5.1 and spark-1.6.0 on one YARN
cluster?

From: Saisai Shao [mailto:sai.sai.s...@gmail.com]
Sent: Monday, December 28, 2015 2:29 PM
To: Jeff Zhang
Cc: 顾亮亮; user@spark.apache.org; 刘骋昺
Subject: Re: Opening Dynamic Scaling Executors on Yarn

> Replacing all the shuffle jars and restarting the NodeManager is enough;
> no need to restart the NN.
Re: Opening Dynamic Scaling Executors on Yarn
Replacing all the shuffle jars and restarting the NodeManager is enough; no
need to restart the NN.

On Mon, Dec 28, 2015 at 2:05 PM, Jeff Zhang wrote:

> See
> http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
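[Editor's sketch of what that upgrade might look like on each NodeManager host. All paths and jar names below are assumptions for a typical 1.x layout and will differ per deployment:]

```shell
# Run on every NodeManager host. Paths are illustrative assumptions.
NM_LIB=/usr/lib/hadoop-yarn/lib    # assumed dir on the NM aux-service classpath

# Swap the 1.5.1 shuffle-service jar for the 1.6.0 one.
rm "$NM_LIB"/spark-1.5.1-yarn-shuffle.jar
cp /opt/spark-1.6.0/lib/spark-1.6.0-yarn-shuffle.jar "$NM_LIB"/

# Restart only the NodeManager. The NameNode is an HDFS daemon and is
# unaffected by the YARN shuffle service, so it stays up.
"$HADOOP_HOME"/sbin/yarn-daemon.sh stop nodemanager
"$HADOOP_HOME"/sbin/yarn-daemon.sh start nodemanager
```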
Re: Opening Dynamic Scaling Executors on Yarn
See
http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation

On Mon, Dec 28, 2015 at 2:00 PM, 顾亮亮 wrote:

> Hi all,
>
> SPARK-3174 (https://issues.apache.org/jira/browse/SPARK-3174) is a useful
> feature for saving resources on YARN.
>
> We want to enable this feature on our YARN cluster.
>
> I have a question about the version of the shuffle service.
>
> I'm now using spark-1.5.1 (shuffle service).
>
> If I want to upgrade to spark-1.6.0, should I replace the shuffle service
> jar and restart all the NameNodes on YARN?
>
> Thanks a lot.
>
> Mars

--
Best Regards

Jeff Zhang
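[For quick reference, the application-side settings from that page can be sketched as a minimal spark-defaults.conf fragment; the property names are the documented ones, while the numeric values are purely illustrative:]

```properties
# Enable dynamic executor scaling and point executors at the external
# shuffle service running inside the NodeManagers.
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true

# Optional bounds and idle timeout (illustrative values).
spark.dynamicAllocation.minExecutors         2
spark.dynamicAllocation.maxExecutors         50
spark.dynamicAllocation.executorIdleTimeout  60s
```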