[ https://issues.apache.org/jira/browse/SPARK-20662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16035487#comment-16035487 ]
Marcelo Vanzin commented on SPARK-20662:
----------------------------------------

BTW if you really, really, really think this is a good idea and you really want it, you can write a listener that just cancels jobs or kills the application whenever a stage with more than x tasks is submitted. No need for any changes in Spark.

> Block jobs that have greater than a configured number of tasks
> --------------------------------------------------------------
>
>                 Key: SPARK-20662
>                 URL: https://issues.apache.org/jira/browse/SPARK-20662
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.6.0, 2.0.0
>            Reporter: Xuefu Zhang
>
> In a shared cluster, it's desirable for an admin to block large Spark jobs. While there might not be a single metric defining the size of a job, the number of tasks is usually a good indicator. Thus, it would be useful for the Spark scheduler to block a job whose number of tasks reaches a configured limit. By default, the limit could be infinite, retaining the existing behavior.
> MapReduce has mapreduce.job.max.map and mapreduce.job.max.reduce, which block an MR job at submission time.
> The proposed configuration is spark.job.max.tasks, with a default value of -1 (infinite).

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
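
A minimal sketch of the listener approach described in the comment above, assuming a Scala application that registers the listener programmatically on the driver; the class name TaskLimitListener and the reuse of the proposed spark.job.max.tasks key are illustrative choices, not anything that exists in Spark.

{code:scala}
import org.apache.spark.SparkContext
import org.apache.spark.scheduler.{SparkListener, SparkListenerStageSubmitted}

// Sketch: cancel any stage whose task count exceeds a configured limit.
// The config key mirrors the spark.job.max.tasks name proposed in this issue;
// when the key is unset, Int.MaxValue effectively disables the check.
class TaskLimitListener(sc: SparkContext) extends SparkListener {
  private val maxTasks: Int =
    sc.getConf.getInt("spark.job.max.tasks", Int.MaxValue)

  override def onStageSubmitted(event: SparkListenerStageSubmitted): Unit = {
    val info = event.stageInfo
    if (info.numTasks > maxTasks) {
      // Cancelling the stage fails the jobs that depend on it; calling
      // sc.stop() here instead would kill the whole application.
      sc.cancelStage(info.stageId)
    }
  }
}
{code}

To use it, register the listener before submitting work, e.g. sc.addSparkListener(new TaskLimitListener(sc)). Wiring it in through the spark.extraListeners configuration is also possible, but that path requires a zero-argument constructor, so the limit would have to be read another way.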