[ https://issues.apache.org/jira/browse/SPARK-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15291425#comment-15291425 ]
Imran Rashid commented on SPARK-15176:
--------------------------------------

I agree with Kay. Not vetoing it or anything yet -- just that I think we need a stronger case for adding it.

> Job Scheduling Within Application Suffers from Priority Inversion
> -----------------------------------------------------------------
>
>                 Key: SPARK-15176
>                 URL: https://issues.apache.org/jira/browse/SPARK-15176
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 1.6.1
>            Reporter: Nick White
>
> Say I have two pools, and N cores in my cluster:
> * I submit a job to one, which has M >> N tasks
> * N of the M tasks are scheduled
> * I submit a job to the second pool - but none of its tasks get scheduled
>   until a task from the other pool finishes!
>
> This can lead to unbounded denial-of-service for the second pool - regardless
> of `minShare` or `weight` settings. Ideally Spark would support a pre-emption
> mechanism, or an upper bound on a pool's resource usage.
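
For reference, a minimal sketch of the reported scenario. It assumes FAIR scheduling with a fairscheduler.xml that defines two pools; the pool names, task counts, and sleep durations below are illustrative, not taken from the report.

    // Illustrative sketch only: pool names, partition counts and sleeps are made up.
    // Assumes conf/fairscheduler.xml defines pools "pool1" and "pool2", e.g.:
    //   <allocations>
    //     <pool name="pool1"><schedulingMode>FAIR</schedulingMode><weight>1</weight><minShare>2</minShare></pool>
    //     <pool name="pool2"><schedulingMode>FAIR</schedulingMode><weight>1</weight><minShare>2</minShare></pool>
    //   </allocations>
    import org.apache.spark.{SparkConf, SparkContext}

    object PoolStarvationRepro {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("pool-starvation-repro")
          .set("spark.scheduler.mode", "FAIR")
        val sc = new SparkContext(conf)

        // Thread 1: a job with M >> N long-running tasks, submitted to pool1.
        // Its first N tasks occupy every core in the cluster.
        val big = new Thread(new Runnable {
          def run(): Unit = {
            sc.setLocalProperty("spark.scheduler.pool", "pool1")
            sc.parallelize(1 to 100000, 10000)
              .map { i => Thread.sleep(60000); i } // each task holds a core for ~1 minute
              .count()
          }
        })

        // Thread 2: a small job submitted to pool2 shortly afterwards.
        // None of its tasks run until a pool1 task finishes and frees a core,
        // regardless of pool2's minShare or weight.
        val small = new Thread(new Runnable {
          def run(): Unit = {
            Thread.sleep(5000) // let pool1 saturate the executors first
            sc.setLocalProperty("spark.scheduler.pool", "pool2")
            sc.parallelize(1 to 10, 10).count()
          }
        })

        big.start(); small.start()
        big.join(); small.join()
        sc.stop()
      }
    }

Because the pool is picked up from a per-thread local property, each thread's job lands in its own pool; with all cores held by pool1's long tasks, pool2's job waits even though fair shares say it should run.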