[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-24 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
I am closing the PR since there is no consensus regarding the new methods.


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-19 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
> Can we add the methods as experimental and if we will observe some 
problems in the upcoming releases, we will just remove them? 

For clarification, I think we could, but only if there were no explicit objections 
or concerns. That's what I initially thought, and I left my sign-off even though 
I saw rough concerns (for example, it wouldn't work with YARN's dynamic 
allocation as far as I can tell), which now look roughly matched to some of the 
concerns listed here. Also, it's still difficult to remove an API once 
it's added.




---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-19 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> Unless there is some other compelling reason for introducing this which I 
have missed; I am -1 on introducing this change.

I would like to describe one class of use cases which, for some reason, you don't 
seem to consider seriously. You are mostly talking about cases where a 
cluster is shared among many apps/users/jobs, and not all resources are 
available to submitted jobs. In those cases, the proposed methods are no doubt 
useless. But there is another trend nowadays.

Creating a cluster is becoming pretty cheap. The process takes a few 
seconds. Our clients create a new cluster per job, and in typical use cases 
one job occupies all cluster resources. A cluster becomes like a container for 
one job; the analogy between virtual machines and containers is direct here. I 
would like you to look at these use cases more seriously. Users can spin up a new 
cluster for any activity in their app - one for machine/deep learning 
(`numExecutors` is useful here), another for crunching inputs (fine tuning 
of CPU usage is needed here). I believe our users/customers are smart enough to 
use the proposed methods in the right way. 

Can we add the methods as experimental and, if we observe problems 
in the upcoming releases, just remove them? /cc @gatorsmile @rxin 


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread markhamstra
Github user markhamstra commented on the issue:

https://github.com/apache/spark/pull/21589
  
Thank you, @HyukjinKwon 

There are a significant number of Spark users who use the Job Scheduler 
model with a SparkContext shared across many users and many Jobs. Promoting 
tools and patterns based upon the number of cores or executors that a 
SparkContext has access to, encouraging users to create Jobs that try to use 
all of the available cores, very much leads those users in the wrong direction.

As much as possible, the public API should target policy that addresses 
real user problems (all users, not just a subset), and avoid targeting the 
particulars of Spark's internal implementation. A `repartition` that is 
extended to support policy or goal declarations (things along the lines of 
`repartition(availableCores)`, `repartition(availableDataNodes)`, 
`repartition(availableExecutors)`, `repartition(unreservedCores)`, etc.), 
relying upon Spark's internals (with its complete knowledge of the total number 
of cores and executors, scheduling pool shares, number of reserved Task nodes 
sought in barrier scheduling, number of active Jobs, Stages, Tasks and 
Sessions, etc.) may be something that I can get behind. Exposing a couple of 
current Spark scheduler implementation details in the expectation that some 
subset of users in some subset of use cases will be able to make correct use of 
them is not. 


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
I wouldn't argue about who takes more care of, or better represents, users, 
though. That's easily biased. If there's a technical concern from a committer or 
PMC member, I wouldn't go for it.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread ssimeonov
Github user ssimeonov commented on the issue:

https://github.com/apache/spark/pull/21589
  
> Repartitioning based upon a snapshot of the number of cores available 
cluster-wide is clearly not the correct thing to do in many instances and use 
cases.

I wholeheartedly agree and I can't wait for the better approach(es) you 
proposed. In the meantime, repartitioning to a constant number of partitions, 
which is what people do today, is a lot worse in most instances and use cases 
(obviously excluding the situations where a fixed number of partitions is 
driven by a requirement).

In the end, your objections provide absolutely no immediate & practical 
alternative to an immediate & common problem that faces any Spark user whose 
jobs execute on clusters of varying size, a problem that meaningfully affects 
performance and cost.

> ... I don't appreciate being pinned ...

None of us do, @markhamstra, but that's sometimes how we help others, in 
this case, the broader Spark user community.

> I don't accept your assertions of what constitutes the majority and 
minority of Spark users or use cases or their relative importance.

My claims are based on (a) the constitution of data engineering/science 
teams at all non-ISV companies whose engineering structures/head counts I know 
well (7), (b) what multiple recruiters are telling me about hiring trends (East 
Coast-biased but consistently confirmed when talking to West Coast colleagues) 
and (c) the audiences at Spark meetups and the Spark Summit where I speak 
frequently. What is your non-acceptance based on?

> As a long-time maintainer of the Spark scheduler, it is also not my 
concern to define which Spark users are important or not, but rather to foster 
system internals and a public API that benefit all users.

I still do not understand how you evaluate an API. Do you mean you have a 
way of knowing when a public API benefits all users _without_ understanding how 
user personas break down by volume and/or by importance? Or, perhaps, you 
evaluate an API according to how well it serves the "average" user, who must be 
some strange cross between a Scala Spark committer, a Java data engineer and a 
Python/R data scientist, or the "average" Spark job, which must be a mix 
between batch ETL, streaming and ML/AI training? Or, just based on what you 
feel is right?

Your work on the Spark scheduler and its APIs is much appreciated as is 
your expertise in evolving these APIs over time. However, this PR is NOT about 
the scheduler API. It is about the public `SparkContext`/`SparkSession` APIs 
that are exposed to the end users of Spark. @MaxGekk spends his days talking to 
end users of Spark across dozens if not hundreds of companies. I would argue he 
has an excellent, mostly unbiased perspective of the life and needs of people 
using Spark. Do you have an excellent and mostly unbiased perspective of how 
Spark is used in the real world? You work on Spark internals, which means that 
you do not spend your days using Spark. Your users are internal Spark 
developers, not the end users of Spark. You work at a top-notch ISV, a highly 
technical organization, which is not representative of the broader Spark 
community.

I strongly feel that you are trying to do what's right but have you 
considered the possibility that @MaxGekk has a much more accurate perspective 
of Spark user needs, and the urgency of addressing those needs, and that the 
way you judge this PR is biased by your rather unique perspective and 
environment?

I have nothing more to say on the topic of this PR. No matter which way it 
goes, I thank @MaxGekk for looking out for Spark users and @mridulm + 
@markhamstra for trying to do the right thing, as they see it.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread markhamstra
Github user markhamstra commented on the issue:

https://github.com/apache/spark/pull/21589
  
I don't accept your assertions of what constitutes the majority and minority 
of Spark users or use cases, or their relative importance. As a long-time 
maintainer of the Spark scheduler, it is also not my concern to define which 
Spark users are important or not, but rather to foster system internals and a 
public API that benefit all users.

I already have pointed out with some specificity how exposing the 
scheduler's low-level accounting of the number of cores or executors that are 
available at some point can encourage anti-patterns and sub-optimal Job 
execution. Repartitioning based upon a snapshot of the number of cores 
available cluster-wide is clearly not the correct thing to do in many instances 
and use cases. Beyond concern for users, as a developer of Spark internals, I 
don't appreciate being pinned to particular implementation details by having 
them directly exposed to users.

And I'll repeat: this JIRA and PR look to be defining the problem to fit a 
preconception of the solution. Even for the particular users and use cases 
targeted by this PR, I wouldn't expect that those users would embrace "I can't 
repartition based upon the scheduler's notion of the number of cores in the 
cluster at some point" as a more accurate statement of their problem than "My 
Spark Jobs don't use all of the CPU resources that I am entitled to use." Even 
if we were to stipulate that a `repartition` call is inherently the only or 
best place to try to address that real user problem (and I am far from convinced 
that this is the only or best approach), then I'd be far happier with extending 
the `repartition` API to include declarative goals than with exposing to users 
only part of what is needed from Spark's internals to figure out the best 
repartitioning -- perhaps something along the lines of 
`repartition(MaximizeCPUs)` or other appropriate policy/goal enumerations.

And spark packages are not irrelevant here. In fact, a large part of their 
motivation was to handle extensions that are not appropriate for all users or 
to prove out ideas and APIs that are not yet clearly appropriate for inclusion 
in Spark itself. 
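To make the declarative idea concrete, here is a minimal sketch of how a policy/goal 
enumeration might resolve into a partition count. Everything here is hypothetical: the 
policy names mirror the examples in the comments above, the function and its parameters 
are invented for illustration, and this is not a real Spark API.

```python
from enum import Enum

class RepartitionPolicy(Enum):
    """Hypothetical policy/goal declarations, not a real Spark API."""
    MAXIMIZE_CPUS = "maximize_cpus"
    MATCH_DATA_NODES = "match_data_nodes"

def resolve_partitions(policy, total_cores, data_nodes, pool_share_fraction):
    """Sketch of how the scheduler, with its complete view of resources
    (cores, pool shares, data nodes, ...), might turn a declared goal
    into a concrete partition count."""
    if policy is RepartitionPolicy.MAXIMIZE_CPUS:
        # Respect the scheduling pool's share of the cluster, not the raw total.
        return max(1, int(total_cores * pool_share_fraction))
    if policy is RepartitionPolicy.MATCH_DATA_NODES:
        return max(1, data_nodes)
    raise ValueError(policy)
```

The point of the sketch is that the policy-to-number translation stays inside Spark, 
where the scheduler's knowledge lives, instead of being re-derived in user code.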


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread ssimeonov
Github user ssimeonov commented on the issue:

https://github.com/apache/spark/pull/21589
  
@markhamstra I am confused about your API evaluation criteria. 

You are not arguing about the specific benefits these changes can provide 
immediately to an increasing majority of Spark users. Great.

You have some concerns about a minority audience of Spark users and you are 
using those concerns to argue against immediate, simple and specific 
improvements for the majority of Spark users. No problem, except that the 
details of your concerns are rather fuzzy. Can you please make explicit the 
specific harm you see in these APIs, as opposed to just arguing that there is a 
theoretical, yet-to-be-defined-but-surely-just-right way to improve job 
execution performance at an unspecified point in the future that will work well 
for both majority and minority users?

BTW, the package argument is irrelevant here. Tons of things that are in 
Spark can be done with Spark packages but, instead, we add them to the core 
project because this increases the likelihood that they will benefit the most 
users. The use cases discussed here are about essentially any type of job that 
repartitions or coalesces, which clearly falls under the umbrella of 
benefitting the most users.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread markhamstra
Github user markhamstra commented on the issue:

https://github.com/apache/spark/pull/21589
  
It is precisely because the audience that I am concerned with is not 
limited to just data scientists or notebook users and their particular needs 
that I am far from convinced that exposing internals of the Spark scheduler in 
the public API is a good idea.

There are many ways that a higher-level declaration could be made. I'm not 
committed to any particular model at this point. The way that it is done for 
scheduling pools via `sc.setLocalProperty` is one way that Job execution can be 
put into a particular declarative context. That's not necessarily the best way 
to do it, but it isn't necessarily more difficult than figuring out correct 
imperative code after fetching a snapshot of the number of available cores at 
some point.

Doing this the right way likely requires an appropriate SPIP, not just a 
quick hack PR.

A spark-package would be another way to expose additional functionality 
without it needing to be bound into the Spark public API.  


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread ssimeonov
Github user ssimeonov commented on the issue:

https://github.com/apache/spark/pull/21589
  
@markhamstra even the words you are using indicate that you are missing the 
intended audience.

> high-level, declarative abstraction that can be used to specify requested 
Job resource-usage policy

How exactly do you imagine data scientists using something like this as 
they hack in a Jupyter or Databricks notebook in Python to sample data from a 
10TB dataset?


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread markhamstra
Github user markhamstra commented on the issue:

https://github.com/apache/spark/pull/21589
  
@ssimeonov the purpose of a public API is not to offer hack solutions to a 
subset of problems. What is needed is a high-level, declarative abstraction 
that can be used to specify requested Job resource-usage policy. Exposing 
low-level Spark scheduling internals is not the way to achieve that.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread ssimeonov
Github user ssimeonov commented on the issue:

https://github.com/apache/spark/pull/21589
  
@markhamstra the purpose of this PR is not to address the topic of dynamic 
resource management in arbitrarily complex Spark environments. Most Spark users 
do not operate in such environments. It is to help simple Spark users refactor 
code such as

```scala
df.repartition(25) // and related repartition() + coalesce() variants
```

to make job execution take advantage of additional cores, when they are 
available. 

Asking for a greater degree of parallelism than the cores a job has 
available rarely has significant negative effects (for reasonable values). 
Asking for a low degree of parallelism when there are lots of cores available 
has significant negative effects, especially in the common real-world use cases 
where there is lots of data skew. That's the point that both you and @mridulm 
seem to be missing. The arguments about resources flexing during job execution 
do not change this. 
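The refactoring described above amounts to deriving the partition count from the cores 
actually available rather than a hard-coded constant. A minimal sketch, where the helper 
name and the 3x oversubscription default are illustrative rather than taken from the PR, 
and `sc.numCores` refers to the method this PR proposes:

```python
def target_partitions(available_cores, factor=3):
    """Derive a partition count from the cores currently available to the
    job, instead of hard-coding a constant as in df.repartition(25).
    Oversubscribing (factor > 1) helps absorb data skew: more, smaller
    tasks let idle cores pick up work while slow tasks finish."""
    return max(1, available_cores * factor)

# With the methods proposed in this PR, usage would look roughly like:
#   df.repartition(target_partitions(sc.numCores))
```

The same code then scales from a small staging cluster to a large production one 
without touching the constant.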

My team has used this simple technique for years on both static and 
autoscaling clusters and we've seen meaningful performance improvements in both 
ETL and ML/AI-related data production for data ranging from gigabytes to 
petabytes. The idea is simple enough that even data scientists can (and do) 
easily use it. That's the benefit of this PR and that's why I like it. The cost 
of this PR is adding two simple & clear methods. The cost-benefit analysis 
seems obvious.

I agree with you that lots more can be done to handle the general case of 
better matching job resource needs to cluster/pool resources. This work is 
going to take forever given the current priorities. Let's not deny the majority 
of Spark users simple & real execution benefits while we dream about amazing 
architectural improvements. 

When looking at the net present value of performance, the discount factor 
is large. Performance improvements now are a lot more valuable than performance 
improvements in the far future.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread markhamstra
Github user markhamstra commented on the issue:

https://github.com/apache/spark/pull/21589
  
No, defaultParallelism isn't more useful in that case, but that just starts 
getting to my overall assessment of this JIRA and PR: It smells of defining the 
problem to align with a preconception of the solution.

Exposing the driver's current accounting of the number of cores active in 
the cluster is not something that we couldn't do or didn't know how to do a 
long time ago. Rather, it is something that those of us working on the scheduler 
chose not to do because of the expectation that putting this in the public API 
(and thereby implicitly encouraging its use) was likely to produce as many 
problems as it solves. This was primarily because of two factors: 1) the number 
of cores and executors is not static; 2) closely tailoring a Job to some 
expectation of the number of available cores or executors is not obviously a 
correct thing to encourage in general. 

Whether from node failures, dynamic executor allocation, backend scheduler 
elasticity/preemption, or just other Jobs running under the same SparkContext, 
the number of cores and executors available to any particular Job when it is 
created can easily be different from what is available when any of its Stages 
actually runs.

Even if you could get reliable numbers for the cores and executors that 
will be available through the lifecycle of a Job, tailoring a Job to use all of 
those cores and executors is only the right thing to do in a subset of Spark 
use cases. For example, using many more executors than there are DFS partitions 
holding the data, or trying to use all of the cores when there are other Jobs 
pending, or trying to use all of the cores when another Job needs to acquire a 
minimum number for barrier scheduled execution, or trying to use more cores 
than a scheduling pool permits would all be examples of anti-patterns that 
would be more enabled by easy, context-free access to low-level numCores.

There definitely are use cases where users need to be able to set policy 
for whether particular jobs should be encouraged to use more or less of the 
cluster's resources, but I believe that that needs to be done at a much higher 
level of abstraction in a declarative form, and that policy likely needs to be 
enforced dynamically/adaptively at Stage boundaries. The under-developed and 
under-used dynamic shuffle partitioning code in Spark SQL starts to go in that 
direction. 


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread ssimeonov
Github user ssimeonov commented on the issue:

https://github.com/apache/spark/pull/21589
  
@mridulm your comments make an implicit assumption, which is quite 
incorrect: that Spark users read the Spark codebase and/or are aware of Spark 
internals. Please, consider this PR in the context of its intended audience who 
(a) do not read the source code and (b) hardly look at the API docs. What they 
read are things like Stack Overflow, the Databricks Guide, blog posts and 
(quite rarely) the occasional how-to-with-Spark book. The fact that something 
is possible with Spark doesn't make it easy or intuitive. The value of this PR 
is that it makes a common use case easy and intuitive.

Let's consider the practicality of your suggestions:

> Rely on defaultParallelism - this gives the expected result, unless 
explicitly overridden by user.

That doesn't address the core use case as the scope of change & effect is 
very different. In the targeted use cases, a user wants to explicitly control 
the level of parallelism relative to the current cluster physical state for 
potentially a single stage. Relying on `defaultParallelism` exposes the user to 
undesired side-effects as the setting can be changed by other, potentially 
unrelated code the user has no control over. Introducing unintended side 
effects, which your suggestion does, is poor design.

> If you need fine grained information about executors, use spark listener 
(it is trivial to keep a count with onExecutorAdded/onExecutorRemoved).

I'd suggest you reconsider your definition of "trivial". Normal Spark 
users (not people who work on Spark, or at companies like Hortonworks whose job 
is to be Spark experts) have no idea what a listener is, have never hooked one 
up and never will. Not to mention how much fun it is to do this from, say, R.

> If you simply want a current value without own listener - use REST api to 
query for current executors.

This type of suggestion is a prime example of ignoring Spark user concerns. 
You are comparing `sc.numExecutors` with:

1. Knowing that a REST API exists that can produce this result.
2. Learning the details of the API.
3. Picking a synchronous REST client in the language they are using Spark 
with.
4. Initializing the REST client with the correct endpoint which they 
obtain... somehow.
5. Formulating the request.
6. Parsing the response.

I don't think there is any need to say more about this suggestion.
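For concreteness, steps 1-6 might look roughly like the sketch below. The endpoint path 
follows Spark's monitoring REST API (`/api/v1/applications/{app-id}/executors`), but the 
payload here is an abbreviated, hypothetical sample, and real code would first have to 
discover the endpoint, fetch it over HTTP, and handle errors, all of which is omitted:

```python
import json

# Abbreviated, hypothetical response payload; real responses carry many
# more fields per executor.
SAMPLE_RESPONSE = """
[{"id": "driver", "isActive": true},
 {"id": "1", "isActive": true},
 {"id": "2", "isActive": false}]
"""

def active_executor_count(payload):
    """Step 6: parse the response and count active entries. Note that the
    driver appears in the executor list too, so callers may want to
    exclude it depending on what they are measuring."""
    return sum(1 for e in json.loads(payload) if e.get("isActive"))
```

Even this stripped-down version illustrates the gap between `sc.numExecutors` and 
doing it "by hand" over REST.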

Taking a step back, it is important to acknowledge that Spark has become a 
mass-market data platform product and start designing user-facing APIs with 
this in mind. If the teams I know are any indication, the majority of Spark 
users are not experienced backend/data engineers. They are data scientists and 
data hackers: people who are getting into big data via Spark. The imbalance is 
only going to grow. The criteria by which user-focused Spark APIs are evaluated 
should evolve accordingly. 

From an ease-of-use perspective, I'd argue the two new methods should be 
exposed to `SparkSession` also as this is the typical new user "entry point". 
For example, the data scientists on my team never use `SparkContext` but they 
do adjust stage parallelism via implicits equivalent to the ones proposed in 
this PR (to significant benefit in query execution performance).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> it's not terribly useful to know, e.g., that there are 5 million cores in 
the cluster if your Job is running in a scheduler pool that is restricted to 
using far fewer CPUs via the pool's maxShares

Is `defaultParallelism` more useful in that case?


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread mridulm
Github user mridulm commented on the issue:

https://github.com/apache/spark/pull/21589
  
@MaxGekk We are going in circles.
I don't think this is a good API to expose currently - the data is available 
through multiple other means, as I detailed, and while not a succinct one-liner, 
it is usable.
Not to mention @markhamstra's comment.
Unless there is some other compelling reason for introducing this which I 
have missed, I am -1 on introducing this change.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> ... unless explicitly overridden by user.

This is the problem this PR addresses, actually.

> If you need fine grained information about executors, use spark listener 
(it is trivial to keep a count with onExecutorAdded/onExecutorRemoved)

Do you really believe this is the right approach? Instead of 
`spark.sparkContext.numCores`, you propose that our users figure out how to 
register a listener, store the results somewhere (in a thread-safe manner?) and 
keep them updated. Seriously? 

> If you simply want a current value without own listener - use REST api to 
query for current executors.

I hope you know the POLA principle: 
https://en.wikipedia.org/wiki/Principle_of_least_astonishment . Imagine you are 
writing some code in a local or remote notebook in R. First of all, are you sure 
the REST API is even available to users? Compared to a one-line call, how many 
lines and how much effort would calling the REST API and parsing the results 
take? Highly likely, users will just put in some constant (like you proposed, 
even from config) and will get an overloaded/underloaded cluster.

> defaultParallelism exists to give a default when user does not explicitly 
override when creating an RDD : and reflects the current number of executors.

One more time: the `numCores()` method aims to solve the problem where 
`defaultParallelism` is set explicitly. As I showed in a use case above, 
`defaultParallelism` can be changed in one part of an app (for example, in some 
library not available to the user) while the number of cores is needed in 
another part.

> externalize your config's and populate based on resources available to 
application

I have already described one use case in which a notebook can be attached 
to many clusters. The same notebook can be reused/called from other 
notebooks. This _"externalize your config"_ may sound perfect from a 
developer's perspective, but not from a customer's.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread markhamstra
Github user markhamstra commented on the issue:

https://github.com/apache/spark/pull/21589
  
@mridulm scheduler pools could also make the cluster-wide resource numbers 
not very meaningful. I don't think the maxShare work has been merged yet (kind 
of a stalled TODO on an open PR, IIRC), but once that is in, it's not terribly 
useful to know, e.g., that there are 5 million cores in the cluster if your Job 
is running in a scheduler pool that is restricted to using far fewer CPUs via 
the pool's maxShares.   


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread mridulm
Github user mridulm commented on the issue:

https://github.com/apache/spark/pull/21589
  
@MaxGekk The example you cite is literally one of a handful of usages 
which is not easily overridden - and it is prefixed with a 'HACK ALERT'! A few 
others are in mllib, typically for reading schemas.

I will reiterate the solutions available to users currently:
* Rely on `defaultParallelism` - this gives the expected result, unless 
explicitly overridden by user.
* If you need fine grained information about executors, use spark listener 
(it is trivial to keep a count with `onExecutorAdded`/`onExecutorRemoved`).
* If you simply want a current value without own listener - use REST api to 
query for current executors.
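The listener option above can be sketched as follows. Only the counting logic is shown, 
in Python for brevity: in real code this would subclass Spark's `SparkListener` 
(Scala/Java), be registered with the SparkContext, and the callbacks would receive 
event objects, all omitted here as assumptions about the surrounding wiring.

```python
import threading

class ExecutorCounter:
    """Counting logic a SparkListener subclass would carry. Spark fires
    the add/remove callbacks from its event bus, so updates must be
    thread-safe; event arguments are omitted in this sketch."""

    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0

    def on_executor_added(self):
        with self._lock:
            self._count += 1

    def on_executor_removed(self):
        with self._lock:
            self._count -= 1

    @property
    def num_executors(self):
        with self._lock:
            return self._count
```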

Having said this, I will caution against this approach if you are concerned 
about performance. `defaultParallelism` exists to give a default when the user 
does not explicitly override it when creating an `RDD`, and it reflects the 
current number of executors.
Particularly when dynamic resource allocation is enabled, this value is not 
optimal: Spark will acquire or release resources based on pending tasks.

Using available cluster resources (from the cluster manager, not Spark) as a 
way to model parallelism would be a better approach: externalize your configs 
and populate them based on the resources available to the application (in your 
example: the difference between test/staging/production).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93223/
Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93223 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93223/testReport)**
 for PR 21589 at commit 
[`eebb310`](https://github.com/apache/spark/commit/eebb31099f078cc05bf0f6d6e32c94d4ee818f9e).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> Users are not expected to override it unless they want fine-grained 
control over the value

This is actually one of the use cases where a user needs to take control or 
tune a query. The `defaultParallelism` is used in many places, e.g. 
https://github.com/apache/spark/blob/9549a2814951f9ba969955d78ac4bd2240f85989/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L594-L597
 . If users want to tune the behavior of those methods, they have to change 
`defaultParallelism`, and then the factor `5` in `df.repartition(5 * 
sc.defaultParallelism)` has to be tuned accordingly. In this way we just force 
users to introduce absolutely unnecessary complexity and dependencies into 
their code. If I need the number of cores in my cluster, I would like a direct 
way to get it instead of hoping a method returns this number implicitly.

> One thing to be kept in mind is that dynamic resource allocation will 
kick in after tasks are submitted ...

Let me show you another use case which I observe in practice. Our 
customers write code in notebooks and can attach their notebooks to 
different clusters. Usually code is developed and debugged on a small (staging) 
cluster. After that, the notebooks are re-attached to a production cluster 
which may have a completely different size. Pretty often users just leave 
existing params/constants, like those passed to `repartition()`, as is. That 
usually leads to underloading or overloading the cluster. Why can't they use 
`defaultParallelism` everywhere? Look at the use case above - tuning one part 
of a user's app requires changing factors in other parts that are absolutely 
independent from the first one.
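To illustrate the mismatch numerically, here is a minimal, hypothetical sketch (plain Python, not Spark; the parallelism values, `factor`, and function names are invented for illustration) of how a hardcoded partition count diverges from one derived from the cluster's parallelism when a notebook moves between clusters:

```python
def partitions_hardcoded():
    # A constant left over from tuning on the staging cluster.
    return 200

def partitions_scaled(default_parallelism, factor=5):
    # Mirrors the thread's df.repartition(5 * sc.defaultParallelism) pattern.
    return factor * default_parallelism

staging_parallelism = 8       # small staging cluster (hypothetical)
production_parallelism = 160  # much larger production cluster (hypothetical)

# The hardcoded value ignores cluster size; the scaled one tracks it.
print(partitions_hardcoded(), partitions_scaled(staging_parallelism))     # 200 40
print(partitions_hardcoded(), partitions_scaled(production_parallelism))  # 200 800
```

On the production cluster the hardcoded `200` underuses the available slots, while the scaled value grows with the cluster, which is the behavior the comment argues for.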



---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread mridulm
Github user mridulm commented on the issue:

https://github.com/apache/spark/pull/21589
  
+CC @markhamstra since you were looking at API stability.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread mridulm
Github user mridulm commented on the issue:

https://github.com/apache/spark/pull/21589
  

I am not convinced by the rationale given for adding the new APIs in the 
JIRA.
The examples given there can be easily modeled using `defaultParallelism` 
(to get the current state) and executor events (to get numCores and memory per 
executor).
For example: `df.repartition(5 * sc.defaultParallelism)`

The other argument seems to be that users can override this value and set 
it to a static constant.
Users are not expected to override it unless they want fine-grained 
control over the value, and Spark is expected to honor it when specified.

One thing to be kept in mind is that dynamic resource allocation will kick 
in after tasks are submitted (when there are insufficient resources available) 
- so trying to fine-tune this for an application using these APIs, in the 
presence of DRA, is not going to be effective anyway.

If there are corner cases where `defaultParallelism` is not accurate, we 
should fix those to reflect the current value.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> I am not seeing the utility of these two methods.

@mridulm I describe the utility of the methods in the ticket: 
https://issues.apache.org/jira/browse/SPARK-24591

> defaultParallelism already captures the current number of cores.

The `defaultParallelism` can be changed by users, and pretty often it 
does not reflect the number of cores. 

> For monitoring usecases, existing events fired via listener can be used 
to keep track of current executor population (if that is the intended usecase).

Basic cluster properties should be easily discoverable via APIs, I 
believe. And monitoring is just one of the use cases. 


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread mridulm
Github user mridulm commented on the issue:

https://github.com/apache/spark/pull/21589
  
I am not seeing the utility of these two methods.
`defaultParallelism` already captures the current number of cores.

For monitoring use cases, the existing events fired via the listener can be 
used to keep track of the current executor population (if that is the intended 
use case).

Given that this is duplicating information already exposed, I am not very 
keen on adding an additional API.
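The listener-based tracking suggested above can be sketched roughly as follows. This is plain Python standing in for Spark's `SparkListener` executor-added/removed callbacks; the class and method names here are invented for illustration, not Spark's actual API:

```python
class ExecutorTracker:
    """Keeps a running view of the executor population from add/remove
    events, analogous to handling SparkListenerExecutorAdded/Removed
    callbacks in a SparkListener (names here are illustrative only)."""

    def __init__(self):
        self.cores_by_executor = {}

    def on_executor_added(self, executor_id, total_cores):
        # Record the executor and how many cores it brings.
        self.cores_by_executor[executor_id] = total_cores

    def on_executor_removed(self, executor_id):
        # Drop the executor; ignore if we never saw it.
        self.cores_by_executor.pop(executor_id, None)

    @property
    def num_executors(self):
        return len(self.cores_by_executor)

    @property
    def total_cores(self):
        return sum(self.cores_by_executor.values())

tracker = ExecutorTracker()
tracker.on_executor_added("1", 4)
tracker.on_executor_added("2", 4)
tracker.on_executor_removed("1")
print(tracker.num_executors, tracker.total_cores)  # 1 4
```

The point of the comment is that an application can maintain this view itself from existing events, without a new `SparkContext` API.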


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-18 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93223 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93223/testReport)**
 for PR 21589 at commit 
[`eebb310`](https://github.com/apache/spark/commit/eebb31099f078cc05bf0f6d6e32c94d4ee818f9e).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-17 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93174/
Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-17 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-17 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93174 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93174/testReport)**
 for PR 21589 at commit 
[`cf0b024`](https://github.com/apache/spark/commit/cf0b024a23e2d9b2defc5219de1e78d17a0155a9).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `case class Least(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class Greatest(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class MapConcat(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class Concat(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class Coalesce(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-17 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93174 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93174/testReport)**
 for PR 21589 at commit 
[`cf0b024`](https://github.com/apache/spark/commit/cf0b024a23e2d9b2defc5219de1e78d17a0155a9).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-17 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
jenkins, retest this, please


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93135/
Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93135 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93135/testReport)**
 for PR 21589 at commit 
[`cf0b024`](https://github.com/apache/spark/commit/cf0b024a23e2d9b2defc5219de1e78d17a0155a9).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `case class Least(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class Greatest(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class MapConcat(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class Concat(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `
  * `case class Coalesce(children: Seq[Expression]) extends 
ComplexTypeMergingExpression `


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93135 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93135/testReport)**
 for PR 21589 at commit 
[`cf0b024`](https://github.com/apache/spark/commit/cf0b024a23e2d9b2defc5219de1e78d17a0155a9).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93119/
Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93119 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93119/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93119 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93119/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
jenkins, retest this, please


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93092/
Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93092 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93092/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93092 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93092/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
retest this please


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93085/
Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93085 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93085/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).
 * This patch **fails due to an unknown error code, -9**.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93085 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93085/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
jenkins, retest this, please


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93041/
Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-16 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93041 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93041/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).
 * This patch **fails from timeout after a configured wait of \`300m\`**.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93041 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93041/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
retest this please


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93031/
Test FAILed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93031 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93031/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #93031 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93031/testReport)**
 for PR 21589 at commit 
[`128f6f0`](https://github.com/apache/spark/commit/128f6f0c3fc3b89b32554bdd40dddf784d274079).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread jiangxb1987
Github user jiangxb1987 commented on the issue:

https://github.com/apache/spark/pull/21589
  
> @felixcheung I am not sure that our users are so interested in getting a 
list of cores per executor and calculating the total number of cores by 
summing the list. It would just complicate the API and implementation, from my 
point of view.

A list of cores per executor can be useful. One scenario is that users may want 
to know how many slots are available, and that requires summing the slots over 
all executors, with # of slots = # of cores on an executor / CPUS_PER_TASK
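That calculation can be sketched as below (a hypothetical plain-Python snippet, not Spark code; `cpus_per_task` stands in for the `spark.task.cpus` setting, and integer division drops cores that cannot form a full slot on an executor):

```python
def total_slots(cores_per_executor, cpus_per_task):
    """Sum task slots across executors: each executor contributes
    (# of cores on that executor) // cpus_per_task slots."""
    return sum(cores // cpus_per_task for cores in cores_per_executor)

# Executors with heterogeneous core counts, 2 CPUs per task:
print(total_slots([8, 8, 5], cpus_per_task=2))  # 4 + 4 + 2 = 10
```

Note the per-executor rounding: with a single total-cores number (8 + 8 + 5 = 21 cores), 21 // 2 would give 10 as well here, but in general the two computations differ, which is the argument for exposing the per-executor list.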


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
Yarn can use dynamic allocation as well. That's why I said "in general". To 
address @felixcheung's concern, I guess it's good to mention something like 
"see the configuration section; details can vary". We will note it's an 
experimental API, so I guess it's good enough for now.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> AFAIK, we always have num of executor ...

Not in all cases: Databricks clients can create auto-scaling clusters: 
https://docs.databricks.com/user-guide/clusters/sizing.html#cluster-size-and-autoscaling
 . For such clusters, we cannot get the size of the cluster in terms of cores 
via config parameters. We need methods that return the current state of the 
cluster. Static configs don't work here because they lead to overloaded or 
underloaded clusters. 

> ...  and then num of core per executor right?

In general, the number of cores per executor can differ. I don't think 
it is a good idea to force users to perform a complex calculation to get the 
number of cores available in a cluster. 

> maybe we should have the getter factored the same way and probably named 
and described/documented similarly

@felixcheung I am not sure that our users are so interested in getting a 
list of cores per executor and calculating the total number of cores by 
summing the list. It would just complicate the API and implementation, from my 
point of view.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-15 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/21589
  
AFAIK, we always have the number of executors and then the number of cores per 
executor, right?
https://spark.apache.org/docs/latest/configuration.html#execution-behavior

maybe we should have the getter factored the same way, and probably named 
similarly


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-13 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
sgtm


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-13 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/92989/
Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92989 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92989/testReport)**
 for PR 21589 at commit 
[`7533114`](https://github.com/apache/spark/commit/7533114d00110f7350280378b8a3e78f39c5).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-13 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> in this cluster do we really mean cores allocated to the "application" or 
"job"? 

@felixcheung What about `number of CPUs/executors potentially available to 
a job submitted via the Spark Context`?


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-13 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92989 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92989/testReport)**
 for PR 21589 at commit 
[`7533114`](https://github.com/apache/spark/commit/7533114d00110f7350280378b8a3e78f39c5).


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-12 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/92907/
Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-12 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92907 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92907/testReport)**
 for PR 21589 at commit 
[`a39695e`](https://github.com/apache/spark/commit/a39695e059c1a2976be50159e33144ee453d3c2f).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-11 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
LGTM otherwise





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92907 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92907/testReport)**
 for PR 21589 at commit 
[`a39695e`](https://github.com/apache/spark/commit/a39695e059c1a2976be50159e33144ee453d3c2f).





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-11 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
retest this please





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-11 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/21589
  
cc @jiangxb1987 





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-07-11 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
@felixcheung @HyukjinKwon Could you tell me, please, what is preventing the 
PR from getting merged?





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-28 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> Are you maybe able to manually test this in other cluster like standalone 
or yarn too?

I have tested standalone mode but didn't check YARN, though 
`YarnClientSchedulerBackend` and `YarnClusterSchedulerBackend` both use 
`CoarseGrainedSchedulerBackend`, in which the two new methods are defined. 
`CoarseGrainedSchedulerBackend` is also extended by `StandaloneSchedulerBackend`, 
which inherits the implementations of `numCores()` and `numExecutors()`. I hope 
that checking standalone mode is enough.
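For illustration only (the PR itself adds these as Scala methods on `SparkContext`), here is a minimal Python sketch of the aggregation the proposed `numExecutors()`/`numCores()` would perform over a snapshot of registered executors. `ExecutorInfo` and its field names are stand-ins for this sketch, not Spark's actual types.

```python
from dataclasses import dataclass

# Stand-in for the per-executor data a scheduler backend tracks;
# the field names are assumptions for this sketch, not Spark's API.
@dataclass
class ExecutorInfo:
    executor_id: str
    host: str
    total_cores: int

def num_executors(executors):
    """Count registered executors (a real backend would exclude the driver)."""
    return len(executors)

def num_cores(executors):
    """Sum the cores across all registered executors."""
    return sum(e.total_cores for e in executors)

snapshot = [
    ExecutorInfo("1", "node1", 4),
    ExecutorInfo("2", "node2", 4),
    ExecutorInfo("3", "node3", 8),
]
print(num_executors(snapshot))  # 3
print(num_cores(snapshot))      # 16
```

Note that such counts are only a point-in-time snapshot; with dynamic allocation (one of the concerns raised in this thread) they can change between calls.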





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test PASSed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-28 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/92423/
Test PASSed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-28 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92423 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92423/testReport)**
 for PR 21589 at commit 
[`a39695e`](https://github.com/apache/spark/commit/a39695e059c1a2976be50159e33144ee453d3c2f).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-28 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92423 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92423/testReport)**
 for PR 21589 at commit 
[`a39695e`](https://github.com/apache/spark/commit/a39695e059c1a2976be50159e33144ee453d3c2f).





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
Are you maybe able to manually test this in other cluster like standalone 
or yarn too?





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test PASSed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/92402/
Test PASSed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92402 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92402/testReport)**
 for PR 21589 at commit 
[`1405daf`](https://github.com/apache/spark/commit/1405daf18f9ae907f36c64e426bf65a3a9e567e4).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92402 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92402/testReport)**
 for PR 21589 at commit 
[`1405daf`](https://github.com/apache/spark/commit/1405daf18f9ae907f36c64e426bf65a3a9e567e4).





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/92393/
Test FAILed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92393 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92393/testReport)**
 for PR 21589 at commit 
[`c280b6c`](https://github.com/apache/spark/commit/c280b6c6471f2699fa971a48bab958a2e0b40f5a).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Merged build finished. Test FAILed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21589
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/92389/
Test FAILed.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92389 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92389/testReport)**
 for PR 21589 at commit 
[`2e6dce4`](https://github.com/apache/spark/commit/2e6dce489ea2e2cae36732d6a834302b2076bcb2).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92393 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92393/testReport)**
 for PR 21589 at commit 
[`c280b6c`](https://github.com/apache/spark/commit/c280b6c6471f2699fa971a48bab958a2e0b40f5a).





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread MaxGekk
Github user MaxGekk commented on the issue:

https://github.com/apache/spark/pull/21589
  
> what's the convention here, I thought SparkContext has get* methods 
instead

`SparkContext` has a few methods without such a prefix, for example 
`defaultParallelism` and `defaultMinPartitions`. The new methods fall into the 
same category. 





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-27 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21589
  
**[Test build #92389 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92389/testReport)**
 for PR 21589 at commit 
[`2e6dce4`](https://github.com/apache/spark/commit/2e6dce489ea2e2cae36732d6a834302b2076bcb2).





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-26 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/21589
  
what's the convention here, I thought SparkContext has get* methods instead





[GitHub] spark issue #21589: [SPARK-24591][CORE] Number of cores and executors in the...

2018-06-21 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/21589
  
Seems fine.




