On Fri, Jul 21, 2017 at 5:00 AM, Gokula Krishnan D wrote:
> Is there any way we can set up the scheduler mode at the Spark cluster
> level, besides the application (SparkContext) level?
That's called the cluster (or resource) manager: e.g., configure
separate queues in YARN with a maximum number of resources per queue.
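For illustration, here is a minimal sketch of pointing an application at a
dedicated YARN queue, so that the cluster manager, not Spark, caps its share
of the cluster. The queue name "analytics" is hypothetical and must already
exist in your YARN scheduler configuration:

    import org.apache.spark.sql.SparkSession

    // The cap on resources comes from the YARN queue definition,
    // not from anything inside Spark itself.
    val spark = SparkSession.builder()
      .appName("queued-app")
      .config("spark.yarn.queue", "analytics")
      .getOrCreate()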
Mark & Ayan, thanks for the inputs.
*Is there any way we can set up the scheduler mode at the Spark cluster
level, besides the application (SparkContext) level?*
Currently YARN is in FAIR mode, and we manually ensure that each Spark
application is also in FAIR mode; however, we have noticed that applications
are not releasing their executors.
Hi
As Mark said, the scheduler mode works within an application, i.e. within a
SparkSession and SparkContext. This is also clear if you think about where
you set the configuration: in a SparkConf, which is used to build a context.
If you are using YARN as the resource manager, however, you can set up YARN
with the fair scheduler.
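To make the scope concrete, here is a minimal sketch, with a hypothetical
pool name "reports" (pools are normally declared in a fairscheduler.xml
file):

    import org.apache.spark.sql.SparkSession

    // FAIR mode is set on the config that builds the context,
    // so it only governs jobs inside THIS application.
    val spark = SparkSession.builder()
      .appName("fair-within-app")
      .config("spark.scheduler.mode", "FAIR")
      .getOrCreate()

    // Jobs submitted from the current thread go to the "reports" pool.
    spark.sparkContext.setLocalProperty("spark.scheduler.pool", "reports")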
The fair scheduler doesn't have anything to do with reallocating resources
across Applications.
https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-across-applications
https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application
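What does help across applications is dynamic allocation, since it lets an
application hand idle executors back to the cluster manager. A minimal
sketch of the relevant settings (the 60s timeout is illustrative, not a
recommendation):

    import org.apache.spark.sql.SparkSession

    // With these settings the application releases executors that have
    // been idle past the timeout, so another application can acquire them.
    // The external shuffle service is required for dynamic allocation on YARN.
    val spark = SparkSession.builder()
      .appName("elastic-app")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.shuffle.service.enabled", "true")
      .config("spark.dynamicAllocation.executorIdleTimeout", "60s")
      .getOrCreate()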
On Thu, Jul 20, 2017 at
Mark, Thanks for the response.
Let me rephrase my statements.
"I am submitting a Spark application(*Application*#A) with scheduler.mode
as FAIR and dynamicallocation=true and it got all the available executors.
In the meantime, submitting another Spark Application (*Application* # B)
with the
First, Executors are not allocated to Jobs, but rather to Applications. If
you run multiple Jobs within a single Application, then each of the Tasks
associated with Stages of those Jobs has the potential to run on any of the
Application's Executors. Second, once a Task starts running on an Executor,
it runs to completion; Spark does not preempt running Tasks.
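To see why FAIR mode is per-application, here is a minimal sketch of two
Jobs sharing one application's Executors from separate threads (the app
name is hypothetical and the RDD work is just a placeholder):

    import org.apache.spark.sql.SparkSession
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._
    import scala.concurrent.{Await, Future}

    val spark = SparkSession.builder()
      .appName("two-jobs-one-app")
      .config("spark.scheduler.mode", "FAIR")
      .getOrCreate()
    val sc = spark.sparkContext

    // Two Jobs inside ONE application, launched concurrently. The FAIR
    // scheduler arbitrates between these Jobs only; it never moves
    // Executors between different applications.
    val jobA = Future { sc.parallelize(1 to 1000000).sum() }
    val jobB = Future { sc.parallelize(1 to 1000000).count() }

    println(Await.result(jobA, 10.minutes))
    println(Await.result(jobB, 10.minutes))
    spark.stop()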
Hello All,
We have a cluster with 50 Executors, each with 4 Cores, so we can use a
maximum of 200 cores.
I am submitting a Spark application (Job A) with scheduler.mode set to FAIR
and dynamicAllocation=true, and it got all the available executors.
In the meantime, submitting another Spark Application