[ 
https://issues.apache.org/jira/browse/SPARK-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-5861:
-----------------------------
    Fix Version/s:     (was: 1.2.2)
                       (was: 1.3.0)

Ah, you're saying that you are in yarn-client mode, but the Application Master 
is still sized using the large amount of memory defined in spark.driver.memory 
instead of the small 512m default it should use.

The lines of code you cite should only execute in yarn-cluster mode, though. 
That path is keyed on whether {{--class}} is set by {{Client.scala}}, which 
again should only happen in yarn-cluster mode.

How are you running this? Are you bypassing this code?

CC [~sandyr] and [~vanzin] for an opinion.

> [yarn-client mode] Application master should not use memory = 
> spark.driver.memory
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-5861
>                 URL: https://issues.apache.org/jira/browse/SPARK-5861
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.2.1
>            Reporter: Shekhar Bansal
>
> I am using
>  {code}spark.driver.memory=6g{code}
> which creates an Application Master container of 7g 
> (yarn.scheduler.minimum-allocation-mb=1024)
> The Application Master doesn't need 7g in yarn-client mode.
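For reference, the 6g-to-7g jump reported above is consistent with YARN adding Spark's memory overhead to the request and then rounding up to the next multiple of yarn.scheduler.minimum-allocation-mb. A minimal sketch of that arithmetic, assuming the default overhead of max(384 MB, 7% of the requested memory) used by Spark around this version (the function name and parameters here are illustrative, not Spark's actual API):

```python
import math

def yarn_container_mb(requested_mb, min_alloc_mb=1024,
                      overhead_floor_mb=384, overhead_factor=0.07):
    """Sketch of YARN container sizing for the Spark AM/driver:
    add the memory overhead, then round up to a multiple of
    yarn.scheduler.minimum-allocation-mb."""
    overhead = max(overhead_floor_mb, int(requested_mb * overhead_factor))
    total = requested_mb + overhead
    return math.ceil(total / min_alloc_mb) * min_alloc_mb

# spark.driver.memory=6g -> request 6144 MB -> 7168 MB (7g) container
print(yarn_container_mb(6144))
```

With a 6144 MB request the overhead is 430 MB, giving 6574 MB, which rounds up to 7168 MB (7g) at a 1024 MB minimum allocation, matching the report.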



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
