[ 
https://issues.apache.org/jira/browse/SPARK-25073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vivek kumar updated SPARK-25073:
--------------------------------
    Description: 
When yarn.nodemanager.resource.memory-mb and/or 
yarn.scheduler.maximum-allocation-mb is insufficient, Spark *always* reports an 
error asking to increase 'yarn.scheduler.maximum-allocation-mb'. The error 
should instead point to 'yarn.scheduler.maximum-allocation-mb' and/or 
'yarn.nodemanager.resource.memory-mb', depending on which limit is actually hit.
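
The message comes from Client.verifyClusterResources (see the stack traces 
below). As a rough, self-contained sketch of the shape of that check, 
reconstructed only from the quoted error text (names and values here are 
placeholders, not copied from the Spark sources):

{code:scala}
// Placeholder sketch of the AM memory check, reconstructed from the quoted
// error text rather than from Client.scala itself.
def verifyAmMemory(amMemoryMb: Long, amOverheadMb: Long, maxContainerMb: Long): Unit = {
  // maxContainerMb stands for the maximum container size YARN reports to the
  // client. In Scenario 2 below it tracks yarn.nodemanager.resource.memory-mb,
  // not yarn.scheduler.maximum-allocation-mb.
  if (amMemoryMb + amOverheadMb > maxContainerMb) {
    // The hint always names yarn.scheduler.maximum-allocation-mb, even when
    // the node capacity appears to be the actual limit.
    throw new IllegalArgumentException(
      s"Required AM memory ($amMemoryMb+$amOverheadMb MB) is above the " +
      s"max threshold ($maxContainerMb MB) of this cluster! " +
      "Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.")
  }
}

verifyAmMemory(10240L, 1024L, 8096L)  // throws with the Scenario 2a message
{code}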

 

*Scenario 1*. yarn.scheduler.maximum-allocation-mb = 4g and 
yarn.nodemanager.resource.memory-mb = 8g

a. Launch the shell on YARN with spark.yarn.am.memory less than 
yarn.nodemanager.resource.memory-mb but greater than 
yarn.scheduler.maximum-allocation-mb

e.g. spark-shell --master yarn --conf spark.yarn.am.memory=5g

Error:

java.lang.IllegalArgumentException: Required AM memory (5120+512 MB) is above 
the max threshold (4096 MB) of this cluster! Please increase the value of 
'yarn.scheduler.maximum-allocation-mb'.

at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)
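
For reference, the "(5120+512 MB)" style figures quoted in these errors (here 
and in Scenario 2 below) are the requested AM memory plus its overhead. A 
small sketch of that arithmetic, assuming the default overhead of 
max(10% of spark.yarn.am.memory, 384 MB), i.e. spark.yarn.am.memoryOverhead 
left unset:

{code:scala}
// Requested AM memory as (heap, overhead) in MB, assuming the default
// overhead of max(10% of the AM memory, 384 MB).
def requestedAmMemoryMb(amMemoryMb: Long): (Long, Long) =
  (amMemoryMb, math.max((amMemoryMb * 0.10).toLong, 384L))

requestedAmMemoryMb(5120L)   // (5120,512)   -> "5120+512 MB"   (Scenario 1)
requestedAmMemoryMb(10240L)  // (10240,1024) -> "10240+1024 MB" (Scenario 2a)
requestedAmMemoryMb(17408L)  // (17408,1740) -> "17408+1740 MB" (Scenario 2b)
{code}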

 

*Scenario 2*. yarn.scheduler.maximum-allocation-mb = 15g and 
yarn.nodemanager.resource.memory-mb = 8g

a. Launch the shell on YARN with spark.yarn.am.memory greater than 
yarn.nodemanager.resource.memory-mb but less than 
yarn.scheduler.maximum-allocation-mb

e.g. *spark-shell --master yarn --conf spark.yarn.am.memory=10g*

Error:

java.lang.IllegalArgumentException: Required AM memory (10240+1024 MB) is above 
the max threshold (*8096 MB*) of this cluster! *Please increase the value of 
'yarn.scheduler.maximum-allocation-mb'.*

at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)

 

b. Launch the shell on YARN with spark.yarn.am.memory greater than both 
yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb

e.g. *spark-shell --master yarn --conf spark.yarn.am.memory=17g*

 Error:

java.lang.IllegalArgumentException: Required AM memory (17408+1740 MB) is above 
the max threshold (*8096 MB*) of this cluster! *Please increase the value of 
'yarn.scheduler.maximum-allocation-mb'.*

at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:325)

 

*Expected*: The error for Scenario 2 should point to 
'yarn.scheduler.maximum-allocation-mb' and/or 
'yarn.nodemanager.resource.memory-mb', since yarn.nodemanager.resource.memory-mb 
is the limit actually being hit there.
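
Illustrative only, not an actual patch: the hint emitted by the AM check could 
name both settings, along the lines of the sketch below (placeholder names; 
values match Scenario 2a).

{code:scala}
// Placeholder sketch of the rewording this report asks for; not taken from
// the Spark sources.
def suggestedMessage(amMb: Long, overheadMb: Long, maxMb: Long): String =
  s"Required AM memory ($amMb+$overheadMb MB) is above the max threshold " +
    s"($maxMb MB) of this cluster! Please check the values of " +
    "'yarn.scheduler.maximum-allocation-mb' and/or " +
    "'yarn.nodemanager.resource.memory-mb'."

suggestedMessage(10240L, 1024L, 8096L)
{code}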


> Spark-submit on Yarn Task : When the yarn.nodemanager.resource.memory-mb 
> and/or yarn.scheduler.maximum-allocation-mb is insufficient, Spark always 
> reports an error request to adjust yarn.scheduler.maximum-allocation-mb
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-25073
>                 URL: https://issues.apache.org/jira/browse/SPARK-25073
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 2.3.0, 2.3.1
>            Reporter: vivek kumar
>            Priority: Minor
>


