See
http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
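[Editor's note: for readers following the doc link above, dynamic allocation on YARN
comes down to a handful of settings. A minimal sketch, with property names as documented
for Spark 1.x; the min/max executor counts are placeholder values:]

```properties
# spark-defaults.conf -- enable dynamic allocation and the external shuffle service
spark.dynamicAllocation.enabled      true
spark.shuffle.service.enabled        true
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 20
```

The external shuffle service also has to be registered as a YARN auxiliary service on
every NodeManager:

```xml
<!-- yarn-site.xml on each NodeManager -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
```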
On Mon, Dec 28, 2015 at 2:00 PM, 顾亮亮 wrote:
> Hi all,
>
> SPARK-3174 (https://issues.apache.org/jira/browse/SPARK-3174) is a useful
> feature to save resources on yarn.
>
> We want to enable this feature on our yarn cluster.
> I have a question about the version of the shuffle service.
> I'm now using spark-1.5.1 (shuffle service).
> If I want to upgrade to spark-1.6.0, is it possible to support both
> spark-1.5.1 and spark-1.6.0 on one yarn cluster?

Is it possible to support both spark-1.5.1 and spark-1.6.0 on one yarn cluster?
From: Saisai Shao [mailto:sai.sai.s...@gmail.com]
Sent: Monday, December 28, 2015 2:29 PM
To: Jeff Zhang
Cc: 顾亮亮; user@spark.apache.org; 刘骋昺
Subject: Re: Opening Dynamic Scaling Executors on Yarn
Replacing all the shuffle jars and restarting the NodeManager is enough; there is
no need to restart the NameNode.
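[Editor's note: as an illustration of the step above, run something like the following
on each NodeManager host. The jar and install paths are assumptions for a typical
Spark 1.6.0 tarball and Hadoop layout; adjust them to your cluster:]

```shell
# Assumed paths -- adjust for your cluster layout.
# Swap the 1.5.1 YARN shuffle jar for the 1.6.0 one on the NodeManager
# classpath, then restart only the NodeManager daemon.
rm /usr/lib/hadoop-yarn/lib/spark-1.5.1-yarn-shuffle.jar
cp $SPARK_HOME/lib/spark-1.6.0-yarn-shuffle.jar /usr/lib/hadoop-yarn/lib/
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager
# No HDFS NameNode restart is required.
```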
On Mon, Dec 28, 2015 at 2:05 PM, Jeff Zhang wrote:
> See
> http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
>
>
>
> On Mon, Dec 28, 2015 at 2:00 PM, 顾亮亮 wrote: