[ https://issues.apache.org/jira/browse/MAPREDUCE-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13439943#comment-13439943 ]

Hitesh Shah commented on MAPREDUCE-4508:
----------------------------------------

Sorry for the late reply. I don't believe an error should be thrown when the 
AM's requested memory is greater than the NM's memory. I believe this is more 
of a configuration bug: the scheduler's max allocation should be set such that 
an error is thrown for any AM requesting more than that. The RM should error 
out if the max scheduler allocation for a single container is less than the 
resources required to launch a new AM. Please let me know if you have seen 
something contrary to this. 
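
For concreteness, a minimal sketch of the configuration relationship described 
above (the values are assumptions for illustration, not recommendations): 
yarn.app.mapreduce.am.resource.mb should not exceed 
yarn.scheduler.maximum-allocation-mb, which in turn should not exceed the 
capacity of the largest NM.

    <!-- yarn-site.xml: assuming 3 GB NodeManagers -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>3072</value>
    </property>
    <property>
      <!-- cap on any single container request, including the AM's -->
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>3072</value>
    </property>

    <!-- mapred-site.xml: the MR AM request stays within the scheduler cap -->
    <property>
      <name>yarn.app.mapreduce.am.resource.mb</name>
      <value>1536</value>
    </property>

With the cap set this way, the RM can reject an AM asking for more than 
3072 MB up front instead of letting the request wait indefinitely.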

However, depending on how the scheduler max allocation is configured, there 
will be situations in heterogeneous clusters where certain nodes being down 
creates holes, causing requests for large amounts of resources/memory to wait 
indefinitely. This needs to be addressed separately, and it is a bit more 
tricky to decide when an allocation request cannot be fulfilled (both for a 
new AM and for container requests made by an AM). I will file a separate jira 
for that.  


                
> YARN needs to properly check the NM and AM memory properties in 
> yarn-site.xml and mapred-site.xml and report errors accordingly.
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4508
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4508
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: nodemanager, resourcemanager
>    Affects Versions: 2.0.0-alpha
>         Environment: CentOs6.0, Hadoop2.0.0 Alpha
>            Reporter: Anil Gupta
>              Labels: Map, Reduce, YARN
>
> Please refer to this discussion on the Hadoop Mailing list:
> http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/33110
> Summary:
> I was running YARN (Hadoop 2.0.0-alpha) on an 8-datanode, 4-admin-node 
> Hadoop/HBase cluster. My datanodes had only 3.2 GB of memory, so I 
> configured the yarn.nodemanager.resource.memory-mb property in yarn-site.xml 
> to 1200. After setting that property, if I run any YARN job, the 
> NodeManager won't be able to start any map task, since by default the 
> yarn.app.mapreduce.am.resource.mb property is set to 1500 MB in 
> mapred-site.xml. 
> Expected Behavior: NodeManager should give an error if 
> yarn.app.mapreduce.am.resource.mb >= yarn.nodemanager.resource.memory-mb.
> Please let me know if more information is required.
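
To make the reported mismatch concrete, here is a sketch of the two settings 
involved; the 1200/1500 values come from the summary above, and the corrected 
1024 value is an assumption for illustration only:

    <!-- yarn-site.xml on the 3.2 GB datanodes (as reported) -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>1200</value>
    </property>

    <!-- mapred-site.xml: a 1500 MB AM request can never fit on a 1200 MB NM,
         so the job hangs; lowering it below the NM capacity avoids this
         (1024 is an assumed example value) -->
    <property>
      <name>yarn.app.mapreduce.am.resource.mb</name>
      <value>1024</value>
    </property>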

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
