Is there a more stable way of doing this? E.g., if I submit a job with 10
executors, I want to see 10 all the time, not a number that fluctuates
with the executors currently available.

In a busy cluster with lots of jobs running, I can see this number climb
slowly and even drop (when an executor fails). What I want is the actual
number of executors that was requested.
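One workaround (a sketch, not a confirmed recommendation): read the requested count back from the job's configuration, which stays fixed no matter how many executors are currently alive. This assumes the count was set via the spark.executor.instances property (e.g. --num-executors on YARN); other deployment modes may use a different setting.

```scala
// Sketch: recover the *requested* executor count from the SparkConf
// rather than from live executor status, so it never fluctuates.
// Assumes spark.executor.instances was set at submit time (e.g. via
// --num-executors on YARN); the property may differ elsewhere.
val requestedExecutors: Option[Int] =
  sc.getConf.getOption("spark.executor.instances").map(_.toInt)
```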


On Mon, Jul 28, 2014 at 4:06 PM, Zongheng Yang <zonghen...@gmail.com> wrote:

> Nicholas,
>
> The (somewhat common) situation you ran into probably means the
> executors were still connecting. A typical workaround is to sleep for a
> couple of seconds before querying that field.
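A fixed sleep can still race on a slow cluster; a small polling loop is arguably sturdier. A sketch only: the timeout and `expected` count are placeholders for your setup, and the +1 accounts for the driver being included in the status array.

```scala
// Sketch: poll until the expected number of executors has registered,
// instead of sleeping for a fixed interval. `expected` is the executor
// count requested at submit time; +1 because the driver is counted too.
def waitForExecutors(sc: org.apache.spark.SparkContext,
                     expected: Int,
                     timeoutMs: Long = 30000L): Boolean = {
  val deadline = System.currentTimeMillis() + timeoutMs
  while (System.currentTimeMillis() < deadline) {
    if (sc.getExecutorStorageStatus.length >= expected + 1) return true
    Thread.sleep(500) // brief back-off between checks
  }
  false // timed out before all executors connected
}
```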
>
> On Mon, Jul 28, 2014 at 3:57 PM, Andrew Or <and...@databricks.com> wrote:
> > Yes, both of these are derived from the same source, and that source
> > includes the driver. In other words, if you submit a job with 10
> > executors, you will get back 11 for both statuses.
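So, to recover the executor count alone from either status, subtract one for the driver (a minimal sketch):

```scala
// Both status collections include the driver, so subtract 1
// to get the number of executors alone.
val numExecutors = sc.getExecutorStorageStatus.length - 1
```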
> >
> >
> > 2014-07-28 15:40 GMT-07:00 Sung Hwan Chung <coded...@cs.stanford.edu>:
> >
> >> Do getExecutorStorageStatus and getExecutorMemoryStatus both return the
> >> number of executors + the driver?
> >> E.g., if I submit a job with 10 executors, I get 11 for both
> >> getExecutorStorageStatus.length and getExecutorMemoryStatus.size.
> >>
> >>
> >> On Thu, Jul 24, 2014 at 4:53 PM, Nicolas Mai <nicolas....@gmail.com>
> >> wrote:
> >>>
> >>> Thanks, this is what I needed :) I should have searched more...
> >>>
> >>> Something I noticed though: after the SparkContext is initialized, I
> >>> had to wait for a few seconds until sc.getExecutorStorageStatus.length
> >>> returned the correct number of workers in my cluster (otherwise it
> >>> returns 1, for the driver)...
> >>>
> >>>
> >>>
> >>> --
> >>> View this message in context:
> >>> http://apache-spark-user-list.1001560.n3.nabble.com/Getting-the-number-of-slaves-tp10604p10619.html
> >>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> >>
> >>
> >
>
