[ https://issues.apache.org/jira/browse/YARN-1008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13739913#comment-13739913 ]

Sandy Ryza commented on YARN-1008:
----------------------------------

A few comments:

Can we call the config RM_SCHEDULER_INCLUDE_PORT_IN_NODE_NAME instead of 
RM_SCHEDULER_USE_PORT_FOR_NODE_NAME?  The latter makes it seem like we're only 
using the port.

Also, as in yarn.scheduler.minimum-allocation-mb, can we use dashes rather than 
periods for the part that comes after "scheduler"?

Also, it should start with yarn.scheduler, not yarn.resourcemanager.scheduler.
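
Putting the three naming suggestions above together, the property would presumably be spelled something like the fragment below. The exact name is an inference from this comment, not taken from the patch itself:

```xml
<!-- Hypothetical spelling combining the suggestions above
     (dashes after "scheduler", yarn.scheduler prefix, "include-port"
     phrasing); the actual property name is up to the patch. -->
<property>
  <name>yarn.scheduler.include-port-in-node-name</name>
  <value>true</value>
</property>
```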

In the getNodeName doc, "diferentiate" should be "differentiate".

The whole test added to TestFairScheduler needs another space of indentation.

The finally block at the end of the test shouldn't be necessary, because we 
reinitialize with a fresh config before every test already.
                
> MiniYARNCluster with multiple nodemanagers, all nodes have same key for 
> allocations
> -----------------------------------------------------------------------------------
>
>                 Key: YARN-1008
>                 URL: https://issues.apache.org/jira/browse/YARN-1008
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.1.0-beta
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>         Attachments: YARN-1008.patch, YARN-1008.patch, YARN-1008.patch
>
>
> While the NMs are keyed using the NodeId, the allocation is done based on the 
> hostname. 
> This makes the different nodes indistinguishable to the scheduler.
> There should be an option to enable using host:port instead of just the 
> hostname for allocations. The nodes reported to the AM should report the 
> 'key' (host or host:port).
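
The host vs. host:port distinction described above can be sketched as follows. This is a hypothetical helper for illustration, not the actual code from the YARN-1008 patch; the method and flag names are assumptions:

```java
// Sketch of the node-key logic under discussion: when an (assumed)
// include-port flag is set, the scheduler keys nodes by host:port so
// that multiple NodeManagers on one host stay distinguishable.
public class NodeNameSketch {

    // Returns the key the scheduler would use for a node.
    static String getNodeName(String host, int port, boolean includePortInNodeName) {
        return includePortInNodeName ? host + ":" + port : host;
    }

    public static void main(String[] args) {
        // Two NMs on the same host are indistinguishable without the port...
        System.out.println(getNodeName("nm-host", 1234, false)); // nm-host
        System.out.println(getNodeName("nm-host", 5678, false)); // nm-host
        // ...but become distinct keys once the port is included.
        System.out.println(getNodeName("nm-host", 1234, true));  // nm-host:1234
        System.out.println(getNodeName("nm-host", 5678, true));  // nm-host:5678
    }
}
```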

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira