Github user markhamstra commented on a diff in the pull request: https://github.com/apache/spark/pull/20881#discussion_r178928945

```diff
--- Diff: docs/job-scheduling.md ---
@@ -215,6 +215,9 @@ pool), but inside each pool, jobs run in FIFO order. For example, if you create
 means that each user will get an equal share of the cluster, and that each user's queries will
 run in order instead of later queries taking resources from that user's earlier ones.

+If jobs are not explicitly set to use a given pool, they end up in the default pool. This means that even if
+`spark.scheduler.mode` is set to `FAIR` those jobs will be run in `FIFO` order (within the default pool).
+
--- End diff --
```

You seem to be missing a few things:

1) You can define your own default pool that does FAIR scheduling within that pool, so blanket statements about "the" default pool are dangerous;
2) `spark.scheduler.mode` controls the setup of the rootPool, not the scheduling within any pool;
3) If you don't define your own pool with a name corresponding to the DEFAULT_POOL_NAME (i.e. "default"), then you are going to get a default construction of "default", which does use FIFO scheduling within that pool.

So, item 2) effectively means that `spark.scheduler.mode` controls whether fair scheduling is possible at all, and it also defines the kind of scheduling that is used among the schedulable entities contained in the root pool -- i.e. among the scheduling pools nested within rootPool. One of those nested pools will be DEFAULT_POOL_NAME/"default", which will use FIFO scheduling for schedulable entities within that pool if you haven't defined it to use fair scheduling.

If you just want one scheduling pool that does fair scheduling among its schedulable entities, then you need to set `spark.scheduler.mode` to "FAIR" in your SparkConf and _also_ define, in the pool configuration file, a "default" pool that uses schedulingMode FAIR.
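To make that last point concrete, here is a minimal sketch of an allocation file that redefines the built-in "default" pool to use FAIR scheduling internally (the `weight` and `minShare` values are illustrative, not prescriptive):

```xml
<?xml version="1.0"?>
<!-- fairscheduler.xml: redefine the "default" pool so that jobs landing
     in it are scheduled fairly rather than FIFO within the pool. -->
<allocations>
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>      <!-- illustrative value -->
    <minShare>0</minShare>  <!-- illustrative value -->
  </pool>
</allocations>
```

You then point Spark at this file via `spark.scheduler.allocation.file` (or place it at `conf/fairscheduler.xml`) in addition to setting `spark.scheduler.mode` to "FAIR".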
You could alternatively define such a pool (one that does fair scheduling internally) with a name other than "default", and then make sure that all of your jobs are assigned to that pool.
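A rough Scala sketch of that alternative, assuming a Spark runtime; the app name, file path, and pool name `myFairPool` are made up here, and `myFairPool` would need to be defined with schedulingMode FAIR in the allocation file:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Enable fair scheduling among the pools nested within rootPool,
// and point Spark at the pool configuration file.
val conf = new SparkConf()
  .setAppName("fair-pool-example") // illustrative name
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", "conf/fairscheduler.xml") // assumed path

val sc = new SparkContext(conf)

// All jobs submitted from this thread go to the named pool;
// unset threads would still fall into the "default" pool.
sc.setLocalProperty("spark.scheduler.pool", "myFairPool")
```

`setLocalProperty` is per-thread, so each thread submitting jobs must set the pool property for its jobs to land outside "default".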