Thank you.
May I know when that comparator is called?
It looks like the Spark scheduler does not have any form of preemption, am I right?
Thank you
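[Editor's note on the comparator question above: as far as I can tell from the Spark source, the fair-scheduling comparator is consulted whenever the scheduler sorts its queue of pools/task sets while handling resource offers. Below is a rough, unofficial Python sketch of the ordering implemented by Spark's `FairSchedulingAlgorithm`; the class and field names here are illustrative, not Spark's API.]

```python
from dataclasses import dataclass

@dataclass
class Schedulable:
    # Illustrative stand-in for a Spark pool or task-set manager.
    name: str
    running_tasks: int
    min_share: int = 0
    weight: int = 1

def fair_sort_key(s: Schedulable):
    """Sketch of the ordering used by Spark's fair scheduling comparator:
    1. "Needy" schedulables (running fewer tasks than their minShare) go first.
    2. Among needy ones, the smaller runningTasks/minShare ratio wins.
    3. Otherwise, the smaller runningTasks/weight ratio wins.
    4. Ties break by name.
    """
    needy = s.running_tasks < s.min_share
    min_share_ratio = s.running_tasks / max(s.min_share, 1)
    task_to_weight_ratio = s.running_tasks / s.weight
    # needy schedulables must sort first; False < True in Python, so negate.
    return (not needy, min_share_ratio if needy else task_to_weight_ratio, s.name)

pools = [
    Schedulable("a", running_tasks=5, min_share=2, weight=1),
    Schedulable("b", running_tasks=1, min_share=2, weight=1),  # needy: 1 < 2
    Schedulable("c", running_tasks=4, min_share=0, weight=4),
]
print([p.name for p in sorted(pools, key=fair_sort_key)])  # → ['b', 'c', 'a']
```

Note this is only an ordering, not preemption: already-running tasks are not killed; the sort just decides who gets the next free resources.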
From: Mark Hamstra
Sent: Thursday, September 1, 2016 8:44:10 PM
To: enrico d'urso
Cc: user@spark.apache.org
Subject: Re: Spark scheduling mode
From: Mark Hamstra
Sent: Thursday, September 1, 2016 8:19:44 PM
To: enrico d'urso
Cc: user@spark.apache.org
Subject: Re: Spark scheduling mode
The default pool (`default`) can be configured like any other pool:
https://spark.apache.org/docs/latest/job-scheduling.html#configuring-pool-properties
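[Editor's note: per the linked doc page, configuring the default pool to schedule its own jobs fairly would look something like the sketch below, placed in the allocation file that `spark.scheduler.allocation.file` points to (conventionally `fairscheduler.xml`). The weight and minShare values here are just the documented defaults.]

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Override the built-in default pool so that jobs inside it
       are scheduled FAIR (round-robin-ish) instead of FIFO. -->
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```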
On Thu, Sep 1, 2016 at 11:11 AM, enrico d'urso wrote:
Is there a way to force scheduling to be fair inside the default pool?
I mean, round-robin scheduling for the jobs that belong to the default pool.
Cheers,
From: Mark Hamstra
Sent: Thursday, September 1, 2016 7:24:54 PM
To: enrico d'urso
Cc: user@spark.apache.org
Subject: Re: Spark scheduling mode
I am building a Spark app in which I submit several jobs (PySpark). I am using
threads to run them in parallel, and I am also setting:

    conf.set("spark.scheduler.mode", "FAIR")

Still, I see the jobs run serially, in FIFO order. Am I missing something?
Cheers,
Enrico
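[Editor's note on the original question: two things worth checking are that `spark.scheduler.mode` must be set on the `SparkConf` before the `SparkContext` is created, and that a thread can direct its jobs to a scheduler pool via `sc.setLocalProperty("spark.scheduler.pool", ...)`, which is per-thread. The helper below is only an illustrative sketch: it takes an already-created SparkContext `sc` as a parameter rather than building one, and the function name is made up for this example.]

```python
import threading

def submit_in_pool(sc, pool_name, job_fn):
    """Run job_fn in its own thread, with its Spark jobs assigned to pool_name.

    sc is an existing SparkContext (not created here). Local properties such
    as spark.scheduler.pool are thread-local, so each worker thread can
    target its own pool without affecting the others.
    """
    def worker():
        # Must be called from the thread that will submit the jobs.
        sc.setLocalProperty("spark.scheduler.pool", pool_name)
        job_fn()  # e.g. an action such as df.count() that triggers a job

    t = threading.Thread(target=worker)
    t.start()
    return t
```

With `spark.scheduler.mode` set to `FAIR` before the context is created, jobs submitted from different threads into different (or FAIR-configured) pools should interleave rather than run strictly FIFO.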