[ https://issues.apache.org/jira/browse/SPARK-27750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16989249#comment-16989249 ]
t oo edited comment on SPARK-27750 at 1/16/20 6:33 AM:
-------------------------------------------------------
WDYT [~Ngone51] [~squito] [~vanzin] [~mgaido] [~jiangxb1987] [~jiangxb] [~zsxwing] [~jlaskowski] [~cloud_fan] [~srowen] [~dongjoon] [~hyukjin.kwon]

> Standalone scheduler - ability to prioritize applications over drivers, many
> drivers act like Denial of Service
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27750
>                 URL: https://issues.apache.org/jira/browse/SPARK-27750
>             Project: Spark
>          Issue Type: New Feature
>          Components: Scheduler
>    Affects Versions: 3.0.0
>            Reporter: t oo
>            Priority: Minor
>
> If I submit 1,000 spark-submit drivers, they consume all the cores on my
> cluster (essentially acting as a denial of service) and no Spark
> 'application' gets to run, since the cores are all taken by the 'drivers'.
> This feature is about having the ability to prioritize applications over
> drivers so that at least some 'applications' can start running. I guess it
> would work like:
> if (driver.state = 'submitted' and exists some app with app.state = 'submitted')
> then schedule the app first (set app.state = 'running');
> only once all apps have app.state = 'running' should waiting drivers be scheduled.
>
> Secondary to this: why must a driver consume a minimum of one entire core?
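Below is a minimal Scala sketch of the precedence the description asks for, against a toy model of the standalone Master's wait queues. ToyMaster, WaitingDriver, and WaitingApp are hypothetical stand-ins invented for illustration (the real Master tracks DriverInfo/ApplicationInfo objects), so this shows the proposed ordering, not Spark's actual scheduling code:

{code:scala}
import scala.collection.mutable

// Hypothetical stand-ins for the Master's wait queues; only the core
// accounting needed to show the proposed priority order is modeled.
case class WaitingDriver(id: String, cores: Int = 1) // a driver claims at least 1 core today
case class WaitingApp(id: String, coresRequested: Int)

class ToyMaster(var freeCores: Int) {
  val waitingDrivers = mutable.Queue.empty[WaitingDriver]
  val waitingApps = mutable.Queue.empty[WaitingApp]

  // The requested precedence: drain waiting applications first; queued
  // drivers may only claim cores while no application is left waiting.
  def schedule(): Unit = {
    while (waitingApps.nonEmpty && freeCores >= waitingApps.head.coresRequested) {
      val app = waitingApps.dequeue()
      freeCores -= app.coresRequested
      println(s"launched ${app.id} (${app.coresRequested} cores)")
    }
    while (waitingApps.isEmpty && waitingDrivers.nonEmpty &&
           freeCores >= waitingDrivers.head.cores) {
      val drv = waitingDrivers.dequeue()
      freeCores -= drv.cores
      println(s"launched ${drv.id} (${drv.cores} core)")
    }
  }
}

object Demo {
  def main(args: Array[String]): Unit = {
    val master = new ToyMaster(freeCores = 4)
    master.waitingDrivers ++= (1 to 1000).map(i => WaitingDriver(s"driver-$i"))
    master.waitingApps += WaitingApp("app-1", coresRequested = 2)
    // app-1 launches first; the 1,000 queued drivers get only the 2
    // leftover cores instead of saturating the whole cluster.
    master.schedule()
  }
}
{code}

One wrinkle any real implementation would need to handle: an application only appears after its driver has started, so refusing to launch drivers while any app is queued could leave a freshly submitted workload waiting indefinitely. Capping the fraction of cluster cores that drivers may hold at once may be a safer variant of the same idea.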