Ofer Eliassaf created SPARK-17444:
-------------------------------------

             Summary: spark memory allocation makes workers non responsive
                 Key: SPARK-17444
                 URL: https://issues.apache.org/jira/browse/SPARK-17444
             Project: Spark
          Issue Type: Bug
          Components: PySpark
    Affects Versions: 2.0.0
         Environment: spark standalone
            Reporter: Ofer Eliassaf
            Priority: Critical


I am running a Spark standalone cluster with 3 slaves and 2 masters,
for a total of 12 cores (4 on each machine).
The memory allocated to the executors and workers is 4.5GB, and each machine
has a total of 8GB.
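
For context, that layout corresponds roughly to a standalone configuration
like the following (the file contents and exact values here are illustrative,
not copied from my machines):

conf/spark-env.sh (on each worker machine):
    SPARK_WORKER_CORES=4
    SPARK_WORKER_MEMORY=4608m

conf/spark-defaults.conf:
    spark.executor.memory 4608m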

Steps to reproduce:
Open pyspark and point it to the masters.

Run the following command multiple times:
sc.parallelize(range(1, 50000000), 12).count()
After a few runs, Python stops responding.

Then exit the Python shell.
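
A minimal script that exercises the same path (the master URLs and the
iteration count are placeholders, not my actual values):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("SPARK-17444-repro")
        .setMaster("spark://master1:7077,master2:7077"))  # placeholder master URLs
sc = SparkContext(conf=conf)

# Repeat the count; in my setup the shell stopped responding after a few runs.
for i in range(10):
    print(i, sc.parallelize(range(1, 50000000), 12).count())

sc.stop()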

The critical issue is that after this happens the cluster is no longer usable:
there is no way to submit applications or run any other commands on the
cluster.


Hope this helps!


