[ https://issues.apache.org/jira/browse/SPARK-20662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035478#comment-16035478 ]

Marcelo Vanzin commented on SPARK-20662:
----------------------------------------

bq. It's probably not a good idea to let one job take all resources while 
starving others.

I'm pretty sure that's why resource managers have queues.

What you want here is a client-controlled, opt-in, application-level "nicety 
config" that tells the app not to submit more than a limited number of tasks 
at a time. That control already exists: set a maximum number of executors for 
the app. Number of executors times number of cores per executor = maximum 
number of concurrently running tasks.
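A minimal sketch of that arithmetic, using the existing Spark settings 
`spark.dynamicAllocation.maxExecutors` and `spark.executor.cores` (the 
numeric values below are illustrative, not defaults):

```python
# With dynamic allocation capped by spark.dynamicAllocation.maxExecutors and
# a fixed spark.executor.cores, an application can never have more than
# max_executors * cores_per_executor tasks running at once.
max_executors = 10      # e.g. --conf spark.dynamicAllocation.maxExecutors=10
cores_per_executor = 4  # e.g. --conf spark.executor.cores=4

max_concurrent_tasks = max_executors * cores_per_executor
print(max_concurrent_tasks)  # 40
```

Note this caps *concurrent* tasks, not the total task count of a job, which 
is why it throttles a large job rather than blocking it outright.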

> Block jobs that have greater than a configured number of tasks
> --------------------------------------------------------------
>
>                 Key: SPARK-20662
>                 URL: https://issues.apache.org/jira/browse/SPARK-20662
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.6.0, 2.0.0
>            Reporter: Xuefu Zhang
>
> In a shared cluster, it's desirable for an admin to block large Spark jobs. 
> While there might not be a single metric defining the size of a job, the 
> number of tasks is usually a good indicator. Thus, it would be useful for 
> the Spark scheduler to block a job whose number of tasks reaches a 
> configured limit. By default, the limit could be infinite, to retain the 
> existing behavior.
> MapReduce has mapreduce.job.max.map and mapreduce.job.max.reduce, which 
> block an MR job at job submission time.
> The proposed configuration is spark.job.max.tasks with a default value of 
> -1 (infinite).
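If adopted, the proposal could be set cluster-wide the same way the MapReduce 
limits are. A hypothetical spark-defaults.conf fragment (spark.job.max.tasks 
is the configuration *proposed* in this issue, not a setting that exists in 
Spark today; 50000 is an arbitrary illustrative value):

```
# Hypothetical: block any job that would launch more than this many tasks.
# The proposed default of -1 means no limit.
spark.job.max.tasks   50000
```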



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
