…at comparator is called?
>> It looks like the Spark scheduler does not have any form of preemption, am I right?
>>
>> Thank you
>> --
>> *From:* Mark Hamstra <m...@clearstorydata.com>
>> *Sent:* Thursday, September 1, 2016 8:44:10 PM
>> *To:* enrico d'urso
>> *Cc:* user@spark.apache.org
>> *Subject:* Re: Spark scheduling mode
From: Mark Hamstra <m...@clearstorydata.com>
Sent: Thursday, September 1, 2016 8:44:10 PM
To: enrico d'urso
Cc: user@spark.apache.org
Subject: Re: Spark scheduling mode
Spark's FairSchedulingAlgorithm is not round robin:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/SchedulingAlgorithm.scala#L43
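A simplified Python sketch of that comparator's logic may make the point concrete (the actual implementation is Scala, at the link above; the attribute names below are illustrative, and the real code also breaks ties by comparing pool names):

```python
# Simplified model of Spark's FairSchedulingAlgorithm comparator.
# Ordering is NOT round robin: pools below their minShare ("needy") are
# served first, then pools are ordered by runningTasks/minShare or
# runningTasks/weight ratios.

def fair_less_than(s1, s2):
    """True if schedulable s1 should be offered resources before s2.

    s1 and s2 are any objects exposing running_tasks, min_share, and
    weight (illustrative names, mirroring the Scala fields).
    """
    s1_needy = s1.running_tasks < s1.min_share
    s2_needy = s2.running_tasks < s2.min_share

    if s1_needy and not s2_needy:
        return True                      # a needy pool wins outright
    if not s1_needy and s2_needy:
        return False
    if s1_needy and s2_needy:
        # Both below minShare: smaller runningTasks/minShare ratio first.
        return (s1.running_tasks / max(s1.min_share, 1.0)
                < s2.running_tasks / max(s2.min_share, 1.0))
    # Neither needy: smaller runningTasks/weight ratio first.
    return s1.running_tasks / s1.weight < s2.running_tasks / s2.weight
```

So within a pool (and between pools), ordering depends on running task counts, minShare, and weight, not on a fixed rotation.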
> When fair scheduling jobs within the scope of a single pool, the jobs are
> scheduled in a round robin way, am I right?
From: Mark Hamstra <m...@clearstorydata.com>
Sent: Thursday, September 1, 2016 8:19:44 PM
To: enrico d'urso
Cc: user@spark.apache.org
Subject: Re: Spark scheduling mode
The default pool (``) can be configured like any other
pool:
https://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties
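For reference, the allocation file that page describes looks roughly like this (a sketch — the pool names, weights, and minShare values are made-up examples; the file is pointed to via spark.scheduler.allocation.file, and a pool named `default` would, per those docs, configure the default pool):

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Configures the default pool itself (assuming its name is "default") -->
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>2</minShare>
  </pool>
  <!-- An additional, explicitly named pool (example name) -->
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>3</minShare>
  </pool>
</allocations>
```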
On Thu, Sep 1, 2016, enrico d'urso wrote:
> …the default pool?
> I mean, round robin for the jobs that belong to the default pool.
>
> Cheers,
> --
> *From:* Mark Hamstra <m...@clearstorydata.com>
> *Sent:* Thursday, September 1, 2016 7:24:54 PM
> *To:* enrico d'urso
> *Cc:* user@spark.apache.org
From: Mark Hamstra <m...@clearstorydata.com>
Sent: Thursday, September 1, 2016 7:24:54 PM
To: enrico d'urso
Cc: user@spark.apache.org
Subject: Re: Spark scheduling mode
Just because you've flipped spark.scheduler.mode to FAIR, that doesn't mean
that Spark can magically configure and start multiple scheduling pools for
you, nor can it know to which pools you want jobs assigned. Without doing
any setup of additional scheduling pools or assigning of jobs to pools,
I am building a Spark app in which I submit several jobs (pyspark). I am using
threads to run them in parallel, and I am also setting:

conf.set("spark.scheduler.mode", "FAIR")

Still, I see the jobs run serially, in FIFO order. Am I missing something?

Cheers,

Enrico
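Putting Mark's replies together, a minimal sketch of the missing piece: besides setting spark.scheduler.mode, each submitting thread should assign its job to a pool with sc.setLocalProperty. The pool names and helper functions below are illustrative, not part of any Spark API:

```python
import threading

def run_job(sc, pool_name, data):
    """Submit one job from the current thread, assigned to pool_name.

    spark.scheduler.pool is a thread-local property, so it must be set
    on the same thread that triggers the job.
    """
    sc.setLocalProperty("spark.scheduler.pool", pool_name)
    return sc.parallelize(data).map(lambda x: x * x).sum()

def run_in_parallel(sc, jobs):
    """Run (pool_name, data) jobs concurrently, one thread per job."""
    results = {}

    def worker(pool_name, data):
        results[pool_name] = run_job(sc, pool_name, data)

    threads = [threading.Thread(target=worker, args=job) for job in jobs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Usage on a real cluster (requires pyspark; pool names are examples):
#   from pyspark import SparkConf, SparkContext
#   conf = SparkConf().set("spark.scheduler.mode", "FAIR")
#   sc = SparkContext(conf=conf)
#   run_in_parallel(sc, [("pool_a", range(100)), ("pool_b", range(100))])
```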