[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-24 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557911#comment-14557911
 ] 

Sandy Ryza commented on SPARK-7699:
---

[~sowen] I think the possible flaw in your argument is that it relies on 
"initial load" being defined in some reasonable way.

I.e. I think the worry is that the following can happen:
* initial = 3 and min = 1
* cluster is large and uncontended
* first line of user code is a job submission that can make use of at least 3 
executors
* because the executor allocation thread starts immediately, the requested 
executor count ramps down to 1 before the user code has a chance to submit the 
job

Which is to say: what guarantees do we provide about initialExecutors other 
than that it's the number of executor requests we have before some opaque 
internal thing happens to adjust it down?  One possible guarantee we could 
provide is that we won't adjust down for some fixed number of seconds after the 
SparkContext starts.
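A minimal sketch of that proposed guarantee, assuming a hypothetical fixed 
grace window (the names and the threshold below are illustrative, not Spark's 
actual API):

```scala
// Hypothetical sketch: hold the target at initialExecutors for a fixed grace
// window after SparkContext start, then let the normal policy apply.
// gracePeriodMs and guardedTarget are illustrative names, not Spark code.
val gracePeriodMs = 10000L

def guardedTarget(computedTarget: Int, initialExecutors: Int, elapsedMs: Long): Int =
  if (elapsedMs < gracePeriodMs) math.max(computedTarget, initialExecutors)
  else computedTarget
```

With initial = 3 and min = 1, guardedTarget(1, 3, 500L) keeps the target at 3 
inside the window, and the target drops to the computed value of 1 once the 
window elapses.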

> Config "spark.dynamicAllocation.initialExecutors" has no effect 
> 
>
> Key: SPARK-7699
> URL: https://issues.apache.org/jira/browse/SPARK-7699
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Reporter: meiyoula
>
> spark.dynamicAllocation.minExecutors 2
> spark.dynamicAllocation.initialExecutors  3
> spark.dynamicAllocation.maxExecutors 4
> Just run the spark-shell with above configurations, the initial executor 
> number is 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-22 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555871#comment-14555871
 ] 

Sean Owen commented on SPARK-7699:
--

Say initial = 3 and min = 1. There is no load at the start. You may start with 
3 executors (ramp-down occurred after the initial executor request was 
fulfilled) or 1 (ramp-down happened before the request, or the request was 
successfully cancelled). This JIRA says that it should not be 1; you're saying 
it might start at 3. I agree it might start at 3 but disagree with the idea 
that it can't be 1.

In the short term, the right number of executors is 1; 3 executors will ramp 
down to 1 soon anyway. So either of these seems like a reasonable result.

You can say, well, initial = 3 has no effect or is pointless in this situation. 
That's true. There is no point in setting initial = 3 in this situation, and 
the caller shouldn't do that if he/she knows there is no load. But either 
actual resulting behavior seems to be fine.

I don't see a reason to artificially prevent changing from initial for a while, 
and I don't think it should be scrapped since it does serve a good purpose: 
when there *is* initial load, this lets you start at a much more reasonable 
number of executors and increase from there rather than start from the minimum. 
That's the core purpose of initialExecutors, right? 

That's why I disagree with the premise of the JIRA that it has no effect, and 
say that the current behavior seems correct in the no-load case, assuming rapid 
ramp-down is desired, and I think everyone agrees with that. That's why I'd say 
this isn't a problem; does this hold together?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-22 Thread Sandy Ryza (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555837#comment-14555837
 ] 

Sandy Ryza commented on SPARK-7699:
---

Sorry for the delay here.  The desired behavior is to never have outstanding 
requests for more than the number of executors we'd need to satisfy all current 
tasks (unless minExecutors is set).  I.e. we don't want to ramp down gradually.

So I think the concerns here are valid - this policy means that as soon as the 
dynamic allocation thread starts being active, it will cancel any container 
requests that were made as a result of initialExecutors.  If, however, these 
executor requests were actually fulfilled, dynamic allocation wouldn't throw 
away the executors.

This means that the relevant questions are: is there a window of time after 
we've requested the initial executors but before the dynamic allocation thread 
starts?  Is there something fundamental about this window of time that means it 
will probably still be there after future scheduling optimizations?  If not, do 
we want to make sure that ExecutorAllocationManager itself doesn't ramp down 
below initialExecutors until some criterion (probably time) is satisfied?  Or 
should we just scrap the property?





[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-22 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555788#comment-14555788
 ] 

Sean Owen commented on SPARK-7699:
--

[~sandyr] can you comment on the intended behavior? does the number of 
executors ramp down immediately, while it ramps up only gradually?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Saisai Shao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550442#comment-14550442
 ] 

Saisai Shao commented on SPARK-7699:


Yes, you are right; I checked the code of {{addExecutors}}. But I'm not sure 
whether choosing {{minNumExecutors}} when the load is low is the design 
intention. If so, then there's no bug here.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550401#comment-14550401
 ] 

Sean Owen commented on SPARK-7699:
--

Backing up a bit here, after I re-read the code in more detail, I am pretty 
certain initialExecutors has an effect. It's in {{addExecutors}}, and that is 
the "increase executors" code path. Let's say minimum = 1, max = 10, initial = 
3. At the first scheduled check, 6 executors are needed. The code path 
increases the initial value by 1 to 4, and requests 4 executors. The fact that 
the initial value was 3 matters here.

However, yes, the code seems to intentionally ramp down immediately if load is 
less than the target. It doesn't choose the minimum; it chooses a target number 
of executors equal to the required amount (which must be at least the minimum). 
I think that is by design; I think there's much less reason to ramp *down* 
slowly?

But it's not true that initialExecutors has no effect, which seems to be the 
thrust of this JIRA. It has an effect in all cases; in one code path, however, 
its effect is mooted immediately, by design it seems.
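The ramp-up arithmetic in that walk-through can be sketched roughly as follows 
(a simplified model, not the actual {{addExecutors}} code; each check adds a 
delta that doubles while demand persists, capped by both the needed count and 
the maximum):

```scala
// Simplified model of the increase-executors path: starting from the current
// target, add `toAdd` (which doubles each consecutive round), capped at what
// the pending tasks need and at maxExecutors. Returns the new target and the
// next round's delta. Names here are illustrative, not Spark's.
def rampUpStep(target: Int, toAdd: Int, needed: Int, maxExecutors: Int): (Int, Int) = {
  val newTarget = math.min(math.min(target + toAdd, needed), maxExecutors)
  (newTarget, toAdd * 2)
}
```

With target = 3 (the initial value), toAdd = 1, needed = 6, and max = 10, the 
first step requests 4 executors; starting from initial = 3 rather than min = 1 
visibly changes what gets requested.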




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Saisai Shao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550378#comment-14550378
 ] 

Saisai Shao commented on SPARK-7699:


Yes, {{maxNeeded}} is the correct number if we have lots of pending tasks *at 
start*, but if not, is it better to choose {{initialExecutors}} or 
{{minNumExecutors}} *at start*? The current code chooses {{minNumExecutors}}, 
so {{initialExecutors}} never gets a chance to take effect on any request, even 
at the beginning, which is what the configuration is supposed to govern.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550367#comment-14550367
 ] 

Sean Owen commented on SPARK-7699:
--

Isn't maxNeeded the correct number at that point, then? The initial state 
doesn't matter. I agree that the initial setting probably doesn't last long. 
But there's a difference between going from the initial value to the number 
needed and having to start at the minimum and ramp up to the number needed; 
there's still a point to hinting the right initial value.

Are you saying this happens so fast that the initial executors setting never 
has any effect, because it never even gets a chance to influence any request?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Saisai Shao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550357#comment-14550357
 ] 

Saisai Shao commented on SPARK-7699:


I think if we have lots of pending tasks, the number of executors actually 
requested will be larger than the minimum number of executors at start time.

IIUC the problem here is that no matter what we set for {{initialExecutors}}, 
the configuration will not be effective: the requested executor number will be 
either {{maxNeeded}} or {{minNumExecutors}}.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550344#comment-14550344
 ] 

Sean Owen commented on SPARK-7699:
--

Does your invocation use enough tasks to need more than the minimum number of 
executors?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread meiyoula (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550339#comment-14550339
 ] 

meiyoula commented on SPARK-7699:
-

[~srowen] Sorry, I have tested it: running SparkPi with spark-submit gives the 
same result as spark-shell.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550249#comment-14550249
 ] 

Sean Owen commented on SPARK-7699:
--

Isn't this the normal state of a Spark app, run with spark-submit? spark-shell 
is the more unusual case, where a Spark app sits there submitting no work at 
the outset. {{updateAndSyncNumExecutorsTarget}} is called regularly, right? If 
there's no load, I expect the number of executors to quickly reach the minimum. 
How long are you expecting the initial setting to override this logic?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread meiyoula (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550223#comment-14550223
 ] 

meiyoula commented on SPARK-7699:
-

[~srowen] You said "I'm just talking about the case where you are running some 
load versus none"; can you give some examples?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550185#comment-14550185
 ] 

Sean Owen commented on SPARK-7699:
--

The parameter has meaning if the system isn't idle, right? I understood this as 
the whole point of the initial executors setting -- to hint how much load would 
be applied immediately, rather than wait for it to ramp up. If the system is 
actually idle, isn't the right action to switch to minimum as soon as possible?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Saisai Shao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550163#comment-14550163
 ] 

Saisai Shao commented on SPARK-7699:


The problem is as you mentioned here:

{code}
val oldNumExecutorsTarget = numExecutorsTarget
numExecutorsTarget = math.max(maxNeeded, minNumExecutors)
client.requestTotalExecutors(numExecutorsTarget)
{code}

If the system is idle, {{numExecutorsTarget}} will use {{minNumExecutors}}. 
That is OK mid-run, but at the start of the application we should use 
{{initialExecutors}}, as configured (*3* here), not {{minNumExecutors}}; 
otherwise the parameter {{initialExecutors}} has no meaning.
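Reduced to its arithmetic, the quoted line shows why the setting drops out when 
the application is idle (a sketch of just this computation, not the full 
ExecutorAllocationManager logic; the function name is illustrative):

```scala
// The target computation from the quoted snippet: initialExecutors appears
// nowhere in it, so with no pending tasks (maxNeeded == 0) the target
// collapses straight to minNumExecutors.
def syncTarget(maxNeeded: Int, minNumExecutors: Int): Int =
  math.max(maxNeeded, minNumExecutors)
```

With the reporter's settings (min = 2, initial = 3) and an idle shell, the 
target is 2, which matches the observed behavior in the issue description.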




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Saisai Shao (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550157#comment-14550157
 ] 

Saisai Shao commented on SPARK-7699:


I think that the first time {{updateAndSyncNumExecutorsTarget}} is called, we 
don't need to calculate the expected executor number; we could just use 
{{initialExecutors}} to request that number of containers if the configuration 
is set.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-19 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549956#comment-14549956
 ] 

Sean Owen commented on SPARK-7699:
--

Agreed, but this logic also only kicks in a bit later. I'm just talking about 
the case where you are running _some_ load versus _none_. With no or little 
load, I expect it to show the minimum number of executors quickly rather than 
the initial number. That is, I don't agree that the initial number is not the 
configured value; rather, in your case the initial value quickly stops being 
used.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-18 Thread meiyoula (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549623#comment-14549623
 ] 

meiyoula commented on SPARK-7699:
-

[~sandyryza] You are the author of the code, can you express your opinion?




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-18 Thread meiyoula (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549618#comment-14549618
 ] 

meiyoula commented on SPARK-7699:
-

I think the situation you describe will never appear, because the executors 
start running when we create a SparkContext object, while tasks are scheduled 
only after the SparkContext has been initialized.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-18 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547974#comment-14547974
 ] 

Sean Owen commented on SPARK-7699:
--

It would make a difference if the program immediately executed an operation 
that needed more than the minimum number of executors, but a spark-shell idling 
doesn't do that.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-18 Thread meiyoula (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547956#comment-14547956
 ] 

meiyoula commented on SPARK-7699:
-

If so, I think spark.dynamicAllocation.initialExecutors is no longer needed.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-18 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547665#comment-14547665
 ] 

Sean Owen commented on SPARK-7699:
--

I think that's maybe intentional? The logic is detecting that you don't need 3 
executors and is ramping down to the minimum.




[jira] [Commented] (SPARK-7699) Config "spark.dynamicAllocation.initialExecutors" has no effect

2015-05-18 Thread meiyoula (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547622#comment-14547622
 ] 

meiyoula commented on SPARK-7699:
-

{code}
val oldNumExecutorsTarget = numExecutorsTarget
numExecutorsTarget = math.max(maxNeeded, minNumExecutors)
client.requestTotalExecutors(numExecutorsTarget)
{code}

I think maybe the above code causes this.
