[jira] [Updated] (YARN-3266) RMContext inactiveNodes should have NodeId as map key

2015-04-14 Chengbing Liu (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chengbing Liu updated YARN-3266:

Attachment: YARN-3266.03.patch

Updated patch, which uses NodeId as map key.
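For anyone skimming the thread, a minimal sketch of the shape of this change (a sketch under assumptions, not an excerpt from the patch; the accessors follow the public {{RMNode}} and {{NodeId}} interfaces):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import org.apache.hadoop.yarn.api.records.NodeId;
import org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNode;

class InactiveNodesSketch {
  // Before: host-keyed map. Two NMs lost on the same host collide,
  // and the second put() silently overwrites the first entry.
  ConcurrentMap<String, RMNode> byHost = new ConcurrentHashMap<>();

  // After: NodeId-keyed map (host + port), so each lost NM instance
  // keeps its own entry and the map size can agree with the metric.
  ConcurrentMap<NodeId, RMNode> byNodeId = new ConcurrentHashMap<>();

  void nodeLost(RMNode rmNode) {
    byHost.put(rmNode.getNodeID().getHost(), rmNode); // old behaviour
    byNodeId.put(rmNode.getNodeID(), rmNode);         // patched behaviour
  }
}
{code}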

> RMContext inactiveNodes should have NodeId as map key
> -----------------------------------------------------
>
> Key: YARN-3266
> URL: https://issues.apache.org/jira/browse/YARN-3266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
> Attachments: YARN-3266.01.patch, YARN-3266.02.patch, 
> YARN-3266.03.patch
>
>
> Under the default NM port configuration, which is 0 (an ephemeral port is 
> chosen each time the NM starts), we have observed in the current version that 
> the "lost nodes" count is greater than the length of the lost node list. This 
> happens when we restart the same NM twice in a row:
> * NM started at port 10001
> * NM restarted at port 10002
> * NM restarted at port 10003
> * NM:10001 timeout, {{ClusterMetrics#incrNumLostNMs()}}, # lost node=1; 
> {{rmNode.context.getInactiveRMNodes().put(rmNode.nodeId.getHost(), rmNode)}}, 
> {{inactiveNodes}} has 1 element
> * NM:10002 timeout, {{ClusterMetrics#incrNumLostNMs()}}, # lost node=2; 
> {{rmNode.context.getInactiveRMNodes().put(rmNode.nodeId.getHost(), rmNode)}}, 
> {{inactiveNodes}} still has 1 element
> Since we allow multiple NodeManagers on one host (as discussed in YARN-1888), 
> {{inactiveNodes}} should be of type {{ConcurrentMap<NodeId, RMNode>}}. If 
> changing the key type would break the current API, then the key string should 
> include the NM's port as well.
> Thoughts?
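To make the failure mode above concrete, here is a standalone toy illustration of the mismatch (plain strings stand in for {{RMNode}} values; nothing here is taken from the RM code itself):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class LostNodeCountDemo {
    public static void main(String[] args) {
        // Host-keyed map, as in the current inactiveNodes.
        ConcurrentMap<String, String> inactiveByHost = new ConcurrentHashMap<>();
        int numLostNMs = 0;

        // NM:10001 times out: metric becomes 1, map gains one entry.
        numLostNMs++;
        inactiveByHost.put("host1", "host1:10001");

        // NM:10002 times out: metric becomes 2, but put() on the same
        // host key overwrites the previous entry, so the map stays at
        // size 1 and disagrees with the metric.
        numLostNMs++;
        inactiveByHost.put("host1", "host1:10002");

        System.out.println("lost NMs metric = " + numLostNMs
            + ", inactive map size = " + inactiveByHost.size()); // 2 vs 1
    }
}
{code}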



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3266) RMContext inactiveNodes should have NodeId as map key

2015-02-26 Chengbing Liu (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chengbing Liu updated YARN-3266:

Attachment: YARN-3266.02.patch

Added a test in {{TestRMNodeTransitions}} to prevent regression.
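For reference, a hedged sketch of what such a regression test could look like (the helper {{getRunningNode(String, int)}} and the surrounding fixture are assumptions about the test harness, not quotes from the patch):

{code:java}
// Hypothetical shape only; helper names are assumptions.
@Test
public void testInactiveNodesKeyedByNodeId() {
  // Two NMs on the same host but different ports, as with port 0.
  RMNodeImpl node1 = getRunningNode(null, 10001);
  RMNodeImpl node2 = getRunningNode(null, 10002);

  // Expire both nodes.
  node1.handle(new RMNodeEvent(node1.getNodeID(), RMNodeEventType.EXPIRE));
  node2.handle(new RMNodeEvent(node2.getNodeID(), RMNodeEventType.EXPIRE));

  // With a NodeId key both lost NMs remain visible; with a host key
  // the second expire would have overwritten the first.
  Assert.assertEquals(2, rmContext.getInactiveRMNodes().size());
}
{code}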

> RMContext inactiveNodes should have NodeId as map key
> -----------------------------------------------------
>
> Key: YARN-3266
> URL: https://issues.apache.org/jira/browse/YARN-3266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Chengbing Liu
>Assignee: Chengbing Liu
> Attachments: YARN-3266.01.patch, YARN-3266.02.patch
>





[jira] [Updated] (YARN-3266) RMContext inactiveNodes should have NodeId as map key

2015-02-26 Chengbing Liu (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-3266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chengbing Liu updated YARN-3266:

Attachment: YARN-3266.01.patch

> RMContext inactiveNodes should have NodeId as map key
> -----------------------------------------------------
>
> Key: YARN-3266
> URL: https://issues.apache.org/jira/browse/YARN-3266
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Chengbing Liu
>Assignee: Rohith
> Attachments: YARN-3266.01.patch
>


