[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-12-09 Thread Akira Ajisaka (Jira)


[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16992103#comment-16992103 ]

Akira Ajisaka commented on YARN-9985:
-

Filed HADOOP-16753 for refactoring.

> Unsupported "transitionToObserver" option displaying for rmadmin command
> 
>
> Key: YARN-9985
> URL: https://issues.apache.org/jira/browse/YARN-9985
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: RM, yarn
>Affects Versions: 3.2.1
>Reporter: Souryakanta Dwivedy
>Assignee: Ayush Saxena
>Priority: Minor
> Fix For: 3.3.0, 3.2.2
>
> Attachments: YARN-9985-01.patch, YARN-9985-02.patch, 
> image-2019-11-18-18-31-17-755.png, image-2019-11-18-18-35-54-688.png
>
>
> Unsupported "transitionToObserver" option displaying for rmadmin command
> Check the options printed for the yarn rmadmin command: it displays the
> "-transitionToObserver " option, which is not supported by yarn rmadmin.
> This is wrong behavior. However, yarn rmadmin -help does not list any
> "-transitionToObserver " option.
>  
> !image-2019-11-18-18-31-17-755.png!
>  
> ==
> install/hadoop/resourcemanager/bin> ./yarn rmadmin -help
> rmadmin is the command to execute YARN administrative commands.
> The full syntax is:
> yarn rmadmin [-refreshQueues] [-refreshNodes [-g|graceful [timeout in 
> seconds] -client|server]] [-refreshNodesResources] 
> [-refreshSuperUserGroupsConfiguration] [-refreshUserToGroupsMappings] 
> [-refreshAdminAcls] [-refreshServiceAcl] [-getGroup [username]] 
> [-addToClusterNodeLabels 
> <"label1(exclusive=true),label2(exclusive=false),label3">] 
> [-removeFromClusterNodeLabels ] [-replaceLabelsOnNode 
> <"node1[:port]=label1,label2 node2[:port]=label1"> [-failOnUnknownNodes]] 
> [-directlyAccessNodeLabelStore] [-refreshClusterMaxPriority] 
> [-updateNodeResource [NodeID] [MemSize] [vCores] ([OvercommitTimeout]) or 
> -updateNodeResource [NodeID] [ResourceTypes] ([OvercommitTimeout])] 
> *{color:#FF0000}[-transitionToActive [--forceactive] ]{color} 
> {color:#FF0000}[-transitionToStandby ]{color}* [-getServiceState 
> ] [-getAllServiceState] [-checkHealth ] [-help [cmd]]
> -refreshQueues: Reload the queues' acls, states and scheduler specific 
> properties.
>  ResourceManager will reload the mapred-queues configuration file.
>  -refreshNodes [-g|graceful [timeout in seconds] -client|server]: Refresh the 
> hosts information at the ResourceManager. Here [-g|graceful [timeout in 
> seconds] -client|server] is optional, if we specify the timeout then 
> ResourceManager will wait for timeout before marking the NodeManager as 
> decommissioned. The -client|server indicates if the timeout tracking should 
> be handled by the client or the ResourceManager. The client-side tracking is 
> blocking, while the server-side tracking is not. Omitting the timeout, or a 
> timeout of -1, indicates an infinite timeout. Known Issue: the server-side 
> tracking will immediately decommission if an RM HA failover occurs.
>  -refreshNodesResources: Refresh resources of NodeManagers at the 
> ResourceManager.
>  -refreshSuperUserGroupsConfiguration: Refresh superuser proxy groups mappings
>  -refreshUserToGroupsMappings: Refresh user-to-groups mappings
>  -refreshAdminAcls: Refresh acls for administration of ResourceManager
>  -refreshServiceAcl: Reload the service-level authorization policy file.
>  ResourceManager will reload the authorization policy file.
>  -getGroups [username]: Get the groups which given user belongs to.
>  -addToClusterNodeLabels 
> <"label1(exclusive=true),label2(exclusive=false),label3">: add to cluster 
> node labels. Default exclusivity is true
>  -removeFromClusterNodeLabels  (label splitted by ","): 
> remove from cluster node labels
>  -replaceLabelsOnNode <"node1[:port]=label1,label2 
> node2[:port]=label1,label2"> [-failOnUnknownNodes] : replace labels on nodes 
> (please note that we do not support specifying multiple labels on a single 
> host for now.)
>  [-failOnUnknownNodes] is optional, when we set this option, it will fail if 
> specified nodes are unknown.
>  -directlyAccessNodeLabelStore: This is DEPRECATED, will be removed in future 
> releases. Directly access node label store, with this option, all node label 
> related operations will not connect RM. Instead, they will access/modify 
> stored node labels directly. By default, it is false (access via RM). AND 
> PLEASE NOTE: if you configured yarn.node-labels.fs-store.root-dir to a local 
> directory (instead of NFS or HDFS), this option will only work when the 
> command run on the machine where RM is running.
>  -refreshClusterMaxPriority: Refresh cluster max priority
>  -updateNodeResource [NodeID] [MemSize] [vCores] ([Ove

[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-12-09 Thread Hudson (Jira)


[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991436#comment-16991436 ]

Hudson commented on YARN-9985:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17743 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17743/])
YARN-9985. Unsupported transitionToObserver option displaying for (aajisaka: 
rev dc66de744826e0501040f8c2ca9e1edc076a80cf)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestRMAdminCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/RMAdminCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md



[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-12-09 Thread Akira Ajisaka (Jira)


[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16991414#comment-16991414 ]

Akira Ajisaka commented on YARN-9985:
-

I'm +1 for this change.

{quote} Can we move HDFS-specific command options from HAAdmin to DFSHAAdmin? 
{quote}

This refactoring can be done in a separate jira.


[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-12-02 Thread Akira Ajisaka (Jira)


[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16986506#comment-16986506 ]

Akira Ajisaka commented on YARN-9985:
-

Thanks [~ayushtkn] for providing the patch.

I think it would be better to fix the problem on the HDFS side. Can we move 
the HDFS-specific command options from HAAdmin to DFSHAAdmin?
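The split suggested above could look roughly like this. This is a minimal sketch, not the actual Hadoop code: only the class names HAAdmin and DFSHAAdmin come from the thread, while buildUsage and the usage-map layout are hypothetical. The point is that -transitionToObserver is registered only in the HDFS subclass, so a YARN CLI extending the base class never inherits it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the HAAdmin/DFSHAAdmin split discussed above.
// buildUsage() and the map shape are hypothetical.
class HAAdmin {
    // Options every HA admin CLI supports.
    protected Map<String, String> buildUsage() {
        Map<String, String> usage = new LinkedHashMap<>();
        usage.put("-transitionToActive", "Transitions the service into Active state");
        usage.put("-transitionToStandby", "Transitions the service into Standby state");
        usage.put("-getServiceState", "Returns the state of the service");
        return usage;
    }
}

class DFSHAAdmin extends HAAdmin {
    // HDFS-only option registered in the subclass, so a YARN CLI
    // that extends HAAdmin never advertises it.
    @Override
    protected Map<String, String> buildUsage() {
        Map<String, String> usage = super.buildUsage();
        usage.put("-transitionToObserver", "Transitions the service into Observer state");
        return usage;
    }
}

public class UsageSplitDemo {
    public static void main(String[] args) {
        boolean inBase = new HAAdmin().buildUsage().containsKey("-transitionToObserver");
        boolean inHdfs = new DFSHAAdmin().buildUsage().containsKey("-transitionToObserver");
        if (inBase || !inHdfs) {
            throw new AssertionError("usage split is wrong");
        }
        System.out.println("ok: -transitionToObserver only in DFSHAAdmin");
    }
}
```

With this layering, the base usage table stays HA-generic and each project's subclass owns its own extras.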


[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-12-02 Thread Ayush Saxena (Jira)


[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16986089#comment-16986089 ]

Ayush Saxena commented on YARN-9985:


[~aajisaka] [~tasanuma], could you help review?


[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-11-29 Thread Hadoop QA (Jira)


[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985060#comment-16985060 ]

Hadoop QA commented on YARN-9985:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 90 unchanged - 2 fixed = 90 total (was 92) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m 
56s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9985 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987158/YARN-9985-02.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname 

[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-11-29 Thread Ayush Saxena (Jira)


[ https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984999#comment-16984999 ]

Ayush Saxena commented on YARN-9985:


Found "Failover" mentioned in two places as well, alongside transitionToObserver. 
Removed it too, since it also wasn't supposed to be there, as per YARN-3397.
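The fix this comment describes can be sketched as follows. This is an illustrative sketch only: the real patch edits RMAdminCLI.java directly, and all names below other than the option strings are hypothetical. The idea is that the YARN-side CLI drops HA options it does not support from the inherited usage table before printing help.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Sketch: filter unsupported inherited HA options out of the help
// output. Class and method names are hypothetical.
public class RmAdminUsageFilterDemo {
    // Options inherited from the shared HA base that yarn rmadmin
    // must not advertise (-failover was removed per YARN-3397).
    static final Set<String> UNSUPPORTED =
        Set.of("-transitionToObserver", "-failover");

    // Return a copy of the inherited usage table without the
    // unsupported entries, preserving the original ordering.
    static Map<String, String> filterUsage(Map<String, String> inherited) {
        Map<String, String> filtered = new LinkedHashMap<>(inherited);
        filtered.keySet().removeAll(UNSUPPORTED);
        return filtered;
    }

    public static void main(String[] args) {
        Map<String, String> inherited = new LinkedHashMap<>();
        inherited.put("-transitionToActive", "...");
        inherited.put("-transitionToStandby", "...");
        inherited.put("-transitionToObserver", "...");
        inherited.put("-failover", "...");
        Map<String, String> shown = filterUsage(inherited);
        if (shown.containsKey("-transitionToObserver") || shown.containsKey("-failover")) {
            throw new AssertionError("unsupported options leaked into help");
        }
        System.out.println("help shows: " + shown.keySet());
    }
}
```

Either approach (filtering in the subclass, or moving the options into DFSHAAdmin as discussed earlier in the thread) keeps yarn rmadmin's advertised options consistent with what it actually accepts.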

> related operations will not connect RM. Instead, they will access/modify 
> stored node labels directly. By default, it is false (access via RM). AND 
> PLEASE NOTE: if you configured yarn.node-labels.fs-store.root-dir to a local 
> directory (instead of NFS or HDFS), this option will only work when the 
> command run on the machine where RM is running.
>  -refreshClusterMaxPriority: Refresh cluster max priority
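The mismatch reported above (usage shows -transitionToObserver while -help does not) typically arises when the one-line usage string and the -help listing are assembled from different sources. A minimal, hypothetical Python sketch of that failure mode — the names `build_usage`, `build_help`, `HA_COMMANDS`, and `OBSERVER_SUPPORTED` are illustrative, not Hadoop's actual code:

```python
# Hypothetical sketch (not Hadoop's actual implementation) of how a CLI's
# one-line usage and its -help listing can diverge when built separately.

# Options inherited from a generic HA admin base class.
HA_COMMANDS = ["-transitionToActive", "-transitionToStandby", "-transitionToObserver"]
OBSERVER_SUPPORTED = False  # rmadmin does not support the observer state

def build_usage():
    # Bug pattern: the usage string includes every inherited HA option
    # unconditionally, so unsupported options leak into it.
    return "yarn rmadmin " + " ".join(f"[{c}]" for c in HA_COMMANDS)

def build_help():
    # The help path applies a support check, so the two outputs disagree.
    supported = [c for c in HA_COMMANDS
                 if c != "-transitionToObserver" or OBSERVER_SUPPORTED]
    return "\n".join(f"{c}: ..." for c in supported)

# The discrepancy the report describes: shown in usage, absent from -help.
assert "-transitionToObserver" in build_usage()
assert "-transitionToObserver" not in build_help()
```

Either path alone looks correct; only comparing the two outputs, as the reporter did, exposes the inconsistency.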

[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-11-28 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984638#comment-16984638
 ] 

Hadoop QA commented on YARN-9985:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 26m 10s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.cli.TestRMAdminCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | YARN-9985 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987095/YARN-9985-01.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7efbb3988542 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 44f7b91 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/25244/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/25244/testReport/ |
| Max. process+thread count | 531 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/25244/console |
| Powered 

[jira] [Commented] (YARN-9985) Unsupported "transitionToObserver" option displaying for rmadmin command

2019-11-28 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/YARN-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984617#comment-16984617
 ] 

Ayush Saxena commented on YARN-9985:


Thanks [~SouryakantaDwivedy] for the report.
Uploaded a patch with the fix.
Please review!
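The general shape of such a fix is to gate the observer option in the usage path on the same support check the -help path already applies. A hedged sketch of that idea — an illustrative reconstruction in Python, not the actual Java patch:

```python
# Illustrative sketch of the fix's general approach (not the actual patch):
# filter unsupported HA options out of the usage string with the same
# condition the -help path uses, so both outputs stay consistent.

def build_usage(ha_commands, observer_supported=False):
    shown = [c for c in ha_commands
             if c != "-transitionToObserver" or observer_supported]
    return "yarn rmadmin " + " ".join(f"[{c}]" for c in shown)

usage = build_usage(["-transitionToActive", "-transitionToStandby",
                     "-transitionToObserver"])
assert "-transitionToObserver" not in usage
assert "[-transitionToActive]" in usage and "[-transitionToStandby]" in usage
```

Centralizing the support check in one place, rather than duplicating it in both output paths, is what keeps usage and -help from drifting apart again.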
