[ https://issues.apache.org/jira/browse/AMBARI-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15253787#comment-15253787 ]
Alex Bush commented on AMBARI-15790:
------------------------------------

Hi Dmytro, I'm not sure the haadmin case is covered. I believe all the hdfs haadmin commands now need a -ns flag:

https://github.com/apache/ambari/blob/5772ceb05a1804b1c16cccc7968cd17a25de8ccd/ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py#L101

failover_command = format("hdfs haadmin -failover {namenode_id} {other_namenode_id}")

should be

failover_command = format("hdfs haadmin -ns {!nameservice!} -failover {namenode_id} {other_namenode_id}")

as per the discussion linked above. I could be wrong, but it looks like this tool alone doesn't respect dfs.internal.nameservices. Otherwise you get:

/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs --config /usr/hdp/current/hadoop-client/conf haadmin -getServiceState nn1 | grep active''] {}
2016-04-18 09:51:39,868 - call returned (1, 'Illegal argument: Unable to determine the name service ID. This is an HA configuration with multiple name services configured. dfs.nameservices is set to [CLUSTER2, CLUSTER1]. Please re-run with the -ns option.')

What do you think? Should another bug be opened to cover this?

> Clean up stack scripts that refer to dfs.nameservices to use dfs.internal.nameservices as first option
> ------------------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-15790
>                 URL: https://issues.apache.org/jira/browse/AMBARI-15790
>             Project: Ambari
>          Issue Type: Bug
>          Components: stacks
>    Affects Versions: 2.2.2
>            Reporter: Sumit Mohanty
>            Assignee: Dmytro Sen
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-15790-trunk_5.patch
>
>
> Several stack scripts refer to dfs.nameservices and they should be modified to use dfs.internal.nameservices

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
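
For illustration only, a minimal Python sketch of the fix Alex suggests. This is not the actual Ambari patch; the `hdfs_site` dict, `get_nameservice`, and `build_failover_command` names are hypothetical. It prefers dfs.internal.nameservices, falls back to dfs.nameservices, and passes -ns so haadmin can resolve the right name service when multiple are configured:

```python
def get_nameservice(hdfs_site):
    """Return one name service ID, preferring dfs.internal.nameservices.

    hdfs_site is a plain dict of hdfs-site.xml properties (hypothetical
    stand-in for however the stack script accesses configuration).
    """
    ns = (hdfs_site.get("dfs.internal.nameservices")
          or hdfs_site.get("dfs.nameservices"))
    if not ns:
        return None
    # Either property may be a comma-separated list; take the first entry.
    return ns.split(",")[0].strip()


def build_failover_command(hdfs_site, namenode_id, other_namenode_id):
    """Build the haadmin failover command, adding -ns when a name service is known."""
    nameservice = get_nameservice(hdfs_site)
    ns_flag = "-ns {0} ".format(nameservice) if nameservice else ""
    return "hdfs haadmin {0}-failover {1} {2}".format(
        ns_flag, namenode_id, other_namenode_id)
```

With dfs.nameservices set to [CLUSTER2, CLUSTER1] but dfs.internal.nameservices set to CLUSTER1, this would emit `hdfs haadmin -ns CLUSTER1 -failover nn1 nn2`, avoiding the "Unable to determine the name service ID" error quoted above.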