[ https://issues.apache.org/jira/browse/SPARK-17554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-17554.
-------------------------------
    Resolution: Invalid

Questions should go to user@. Without seeing how you're running the job or what 
you are looking at specifically in the UI, it's hard to say. The parameter does 
work correctly in all of my usages.
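
For reference, a minimal sketch of the usual ways to set executor memory in a 
standalone cluster (the class and jar names below are illustrative, not from 
this report):

    # conf/spark-defaults.conf -- read by spark-submit at launch
    spark.executor.memory   2g

    # equivalent command-line flag; flags override spark-defaults.conf
    ./bin/spark-submit --executor-memory 2g --class com.example.App app.jar

Note that in standalone mode each worker must also advertise enough memory 
(SPARK_WORKER_MEMORY in conf/spark-env.sh), or the master cannot grant 2g 
per executor.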

> spark.executor.memory option not working
> ----------------------------------------
>
>                 Key: SPARK-17554
>                 URL: https://issues.apache.org/jira/browse/SPARK-17554
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: Sankar Mittapally
>
> Hi,
> I am new to Spark. I have a Spark cluster with 5 slaves (each one has 2 cores 
> and 4 GB RAM). In the Spark cluster dashboard I see that the memory per node 
> is 1 GB. I tried to increase it to 2 GB by setting spark.executor.memory 2g 
> in spark-defaults.conf, but it didn't work. I want to increase the memory. 
> Please let me know how to do that.


