[ https://issues.apache.org/jira/browse/SPARK-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen resolved SPARK-13433.
-------------------------------
    Resolution: Not A Problem

I don't see that this is a problem. You're saying that if you use all your cores, you don't have any more available. Of course.

> The standalone server should limit the count of cores and memory for
> running Drivers
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-13433
>                 URL: https://issues.apache.org/jira/browse/SPARK-13433
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>    Affects Versions: 1.6.0
>            Reporter: lichenglin
>
> I have a 16-core cluster.
> A running driver uses at least 1 core, possibly more.
> When I submit many jobs to the standalone server in cluster mode,
> all the cores may be taken up by running drivers,
> leaving no cores to run the applications themselves.
> The server is then stuck.
> So I think we should limit the resources (cores and memory) available to running drivers.
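As a per-submission workaround (not the server-side cap this issue asks for), the resources a cluster-mode driver and its executors take can be bounded at submit time. A minimal sketch, assuming a standalone master at spark://master:7077 and a placeholder application class and jar (com.example.MyJob, myjob.jar are hypothetical):

    # Cap each driver at 1 core / 1g so queued drivers are less likely to
    # exhaust the workers, and cap the application's total executor cores
    # (spark.cores.max) so several jobs can coexist on a 16-core cluster.
    spark-submit \
      --master spark://master:7077 \
      --deploy-mode cluster \
      --driver-cores 1 \
      --driver-memory 1g \
      --conf spark.cores.max=4 \
      --class com.example.MyJob \
      myjob.jar

Note that this only bounds each individual submission; it does not give the standalone Master a global limit on how many cores or how much memory can be consumed by drivers, which is what the report requests.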