Nothing has changed in that regard, nor is there likely to be "progress",
since more sophisticated or capable resource scheduling at the application
level is really beyond the design goals of standalone mode.  If you want
more in the way of multi-application resource scheduling, then you should
be looking at YARN or Mesos.  Is there some reason why neither of those
options can work for you?
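For what it's worth, the one cross-application knob standalone mode does
expose is a per-application core cap, so queued FIFO applications are not
each handed the entire cluster. A minimal sketch (the value 4 is purely
illustrative):

```properties
# spark-defaults.conf (or pass via --conf on spark-submit)
# Cap the total cores this application may claim cluster-wide, so that
# applications submitted later in the FIFO queue can still get executors.
spark.cores.max  4
```

This only bounds how much each application takes; it does not change the
FIFO ordering itself.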

On Fri, Jul 15, 2016 at 9:15 AM, Teng Qiu <teng...@gmail.com> wrote:

> Hi,
>
>
> http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html#resource-scheduling
> The standalone cluster mode currently only supports a simple FIFO
> scheduler across applications.
>
> Is this sentence still true? Has there been any progress on this? It
> would be really helpful. Is there a roadmap?
>
> Thanks
>
> Teng
>
