I am working with a shared Spark environment in cluster mode. However, an
existing application (SparkSQL/Thrift Server), running under a different
user, occupies all available cores. Please see the attached screenshot for
an idea of the current resource utilization.

Is there a way I can use this application to submit my jobs (under a user
other than mapr) without restarting the app with a reduced number of cores
(spark.deploy.defaultCores)?
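
As an alternative, my understanding is that in standalone mode an
application's total core usage can be capped at submit time via
spark.cores.max (with spark.deploy.defaultCores only being the fallback when
that is unset). A minimal sketch of what I have in mind, with the app name
and core count as placeholders:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: cap this application's core usage so it does not claim the whole cluster.
// spark.cores.max limits the total cores this app requests from the standalone master;
// when it is unset, the master falls back to spark.deploy.defaultCores.
val conf = new SparkConf()
  .setAppName("my-job")            // placeholder app name
  .set("spark.cores.max", "4")     // assumption: 4 cores would be enough for my jobs
val sc = new SparkContext(conf)

The same property could presumably be passed as --conf spark.cores.max=4 to
spark-submit, but that only helps for newly submitted applications, not the
one that is already running.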

What would be the optimal solution for this type of resource-sharing issue?
The Fair Scheduler, or some other approach? -
http://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application
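
If sharing within one application is the way to go, my reading of that page
is that fair scheduling inside an application is enabled with
spark.scheduler.mode=FAIR plus scheduler pools. A rough sketch, where the
pool name "adhoc" is just a placeholder:

// Assumes spark.scheduler.mode was set to FAIR on the SparkConf
// (or via --conf spark.scheduler.mode=FAIR at submit time).
sc.setLocalProperty("spark.scheduler.pool", "adhoc")  // this thread's jobs run in the "adhoc" pool
// ... submit jobs from this thread; they share resources fairly with other pools ...
sc.setLocalProperty("spark.scheduler.pool", null)     // revert to the default pool

That said, this only shares resources among jobs inside a single
SparkContext, so I am not sure it addresses sharing across applications
submitted by different users.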

