You're mixing up app scheduling in the cluster manager (your [1] link)
with job scheduling within an app (your [2] link). They're independent
things: the standalone master schedules whole applications, and that's
FIFO-only; `spark.scheduler.mode` only controls how jobs are scheduled
inside a single running application, and it works the same on any
cluster manager.
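
FWIW, here's a rough sketch of the in-app side (untested; the app name,
the pool name, and the allocation file path are just placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    // Job scheduling *within* this application; applies on standalone too.
    val conf = new SparkConf()
      .setAppName("fair-demo")
      .set("spark.scheduler.mode", "FAIR")          // default is FIFO
      // optional: define fair scheduler pools in an XML file
      .set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")
      // this is the knob that matters *across* apps on standalone:
      .set("spark.cores.max", "4")

    val sc = new SparkContext(conf)

    // jobs submitted from this thread go to the given pool
    sc.setLocalProperty("spark.scheduler.pool", "production")
    sc.parallelize(1 to 1000).count()

Application-level scheduling on standalone stays FIFO either way;
capping spark.cores.max is how you keep one app from grabbing the whole
cluster so that later apps can run concurrently.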

On Fri, Oct 2, 2015 at 2:22 PM, Jacek Laskowski <ja...@japila.pl> wrote:
> Hi,
>
> The docs in Resource Scheduling [1] says:
>
>> The standalone cluster mode currently only supports a simple FIFO scheduler 
>> across applications.
>
> There's, however, `spark.scheduler.mode`, which can be set to `FAIR`,
> `FIFO`, or `NONE`.
>
> Is FAIR available for Spark Standalone cluster mode? Is there a page
> where it's described in more detail? I can't seem to find much about
> FAIR and Standalone in Job Scheduling [2].
>
> [1] 
> http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/spark-standalone.html
> [2] 
> http://people.apache.org/~pwendell/spark-nightly/spark-master-docs/latest/job-scheduling.html
>
> Pozdrawiam,
> Jacek
>
> --
> Jacek Laskowski | http://blog.japila.pl | http://blog.jaceklaskowski.pl
> Follow me at https://twitter.com/jaceklaskowski
> Upvote at http://stackoverflow.com/users/1305344/jacek-laskowski
>



-- 
Marcelo

