Github user markhamstra commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20881#discussion_r178886364
  
    --- Diff: docs/job-scheduling.md ---
    @@ -215,6 +215,9 @@ pool), but inside each pool, jobs run in FIFO order. For example, if you create
     means that each user will get an equal share of the cluster, and that each user's queries will run in
     order instead of later queries taking resources from that user's earlier ones.
     
    +If jobs are not explicitly set to use a given pool, they end up in the default pool. This means that even if
    +`spark.scheduler.mode` is set to `FAIR` those jobs will be run in `FIFO` order (within the default pool).
    +
    --- End diff --
    
    This is not actually correct. There is no reason why you can't define a 
default pool that uses FAIR scheduling.
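    For instance, a sketch of a `fairscheduler.xml` (pointed to by `spark.scheduler.allocation.file`) that overrides the default pool to use FAIR scheduling internally; the `weight` and `minShare` values here are illustrative:
    
    ```xml
    <?xml version="1.0"?>
    <allocations>
      <!-- Redefine the built-in "default" pool so that jobs which do not set
           spark.scheduler.pool are still scheduled fairly within it -->
      <pool name="default">
        <schedulingMode>FAIR</schedulingMode>
        <weight>1</weight>
        <minShare>0</minShare>
      </pool>
    </allocations>
    ```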


---