[ https://issues.apache.org/jira/browse/SPARK-2154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14060813#comment-14060813 ]

Aaron Davidson commented on SPARK-2154:
---------------------------------------

Created this PR to hopefully fix that: https://github.com/apache/spark/pull/1405

> Worker goes down.
> -----------------
>
>                 Key: SPARK-2154
>                 URL: https://issues.apache.org/jira/browse/SPARK-2154
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.8.1, 0.9.0, 1.0.0
>         Environment: Spark on cluster of three nodes on Ubuntu 12.04.4 LTS
>            Reporter: siva venkat gogineni
>              Labels: patch
>         Attachments: Sccreenhot at various states of driver ..jpg
>
>
> The worker dies when I submit more drivers than there are available cores. 
> When I submit 9 drivers, each requesting one core, on a cluster with 8 cores 
> in total, the worker dies as soon as I submit the 9th driver. It works fine 
> up to 8 cores; as soon as I submit the 9th driver, its status remains 
> "Submitted" and the worker crashes. I understand that we cannot run more 
> drivers than there are cores, but the problem is that instead of the 9th 
> driver being queued, it is executed and as a result it crashes the worker. 
> Let me know if there is a way to work around this issue, or whether it is 
> being fixed in an upcoming version.
> Cluster Details:
> Spark 1.0.0
> 2 nodes with 4 cores each.
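
For reference, the scenario above can be driven against the standalone master
with plain spark-submit. A minimal reproduction sketch, assuming a master at
spark://master:7077 and a hypothetical application jar and main class (both
are placeholders, not from the report):

    # Submit 9 one-core drivers to a standalone cluster with 8 cores in total.
    # Per the report, the 9th driver's status stays "Submitted" and the worker
    # crashes instead of the driver waiting in the queue.
    for i in $(seq 1 9); do
      ./bin/spark-submit \
        --master spark://master:7077 \
        --deploy-mode cluster \
        --driver-cores 1 \
        --class com.example.App \
        /path/to/app.jar
    done

The expected behavior, per the report, is that the excess driver waits in the
queue until a core frees up rather than taking down the worker.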



--
This message was sent by Atlassian JIRA
(v6.2#6252)
