AnanyaSingh2121 commented on code in PR #3804:
URL: https://github.com/apache/ambari/pull/3804#discussion_r1708975610


##########
ambari-server/src/main/resources/stacks/BIGTOP/3.2.0/services/HDFS/package/scripts/params_linux.py:
##########
@@ -294,6 +295,38 @@
 
 data_dir_mount_file = "/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist"
 
+router_address = None
+if 'dfs.federation.router.rpc-address' in config['configurations']['hdfs-rbf-site']:
+  router_rpcaddress = config['configurations']['hdfs-rbf-site']['dfs.federation.router.rpc-address']
+  router_address = format("hdfs://{router_rpcaddress}")
+else:
+  router_address = config['configurations']['core-site']['fs.defaultFS']
+if router_host:
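
The hunk above is truncated mid-branch (`if router_host:`). A minimal self-contained sketch of the selection logic it introduces, assuming a plain dict in place of Ambari's injected `config` object and a hypothetical `router_host` list derived from the cluster topology:

```python
# Hedged sketch of the router_address selection shown in the diff.
# `config` and `router_host` are stand-ins for the values Ambari injects;
# the real params_linux.py reads them from the command JSON / topology.
def resolve_router_address(config, router_host):
    """Prefer the RBF Router RPC address; fall back to fs.defaultFS."""
    rbf_site = config['configurations'].get('hdfs-rbf-site', {})
    if 'dfs.federation.router.rpc-address' in rbf_site:
        router_rpcaddress = rbf_site['dfs.federation.router.rpc-address']
        router_address = 'hdfs://{0}'.format(router_rpcaddress)
    else:
        router_address = config['configurations']['core-site']['fs.defaultFS']
    return router_address

# Example with a minimal fake config:
config = {'configurations': {
    'hdfs-rbf-site': {'dfs.federation.router.rpc-address': 'router1:8888'},
    'core-site': {'fs.defaultFS': 'hdfs://ns0'},
}}
print(resolve_router_address(config, ['router1']))  # hdfs://router1:8888
```
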

Review Comment:
   When we want to run multiple Routers in the cluster, we need to add a few configurations to hdfs-site.xml.
   Client configuration:
   For clients to use the federated namespace, they need a new nameservice that points to the routers. For example, a cluster with four namespaces ns0, ns1, ns2, ns3 can add a new one to hdfs-site.xml called ns-fed which points to two of the routers:
   
   <configuration>
     <property>
       <name>dfs.nameservices</name>
       <value>ns0,ns1,ns2,ns3,ns-fed</value>
     </property>
     <property>
       <name>dfs.ha.namenodes.ns-fed</name>
       <value>r1,r2</value>
     </property>
     <property>
       <name>dfs.namenode.rpc-address.ns-fed.r1</name>
       <value>router1:rpc-port</value>
     </property>
     <property>
       <name>dfs.namenode.rpc-address.ns-fed.r2</name>
       <value>router2:rpc-port</value>
     </property>
     <property>
       <name>dfs.client.failover.proxy.provider.ns-fed</name>
       <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
     </property>
     <property>
       <name>dfs.client.failover.random.order</name>
       <value>true</value>
     </property>
   </configuration>
   
   Some of these configs create issues with the NameNode and ZKFC if they are distributed to all hosts. These configs are:

     <property>
       <name>dfs.ha.namenodes.ns-fed</name>
       <value>r1,r2</value>
     </property>
     <property>
       <name>dfs.namenode.rpc-address.ns-fed.r1</name>
       <value>router1:rpc-port</value>
     </property>
     <property>
       <name>dfs.namenode.rpc-address.ns-fed.r2</name>
       <value>router2:rpc-port</value>
     </property>
   This is because hdfs-site would then contain duplicate HA-related configurations, which caused errors during ZKFC and NameNode restarts. By managing these configs from the backend, I am making sure they are added to hdfs-site only on the router_hosts, so they do not trigger a restart of the NameNodes and ZKFCs.
   The only limitation as of now is that a Router and a NameNode/ZKFC cannot be colocated on the same host.
   This issue has been raised and discussed in more detail as part of this jira 
: https://issues.apache.org/jira/browse/HDFS-17356
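
   The host-scoped behavior described above can be sketched as a filter step. This is an illustrative sketch only, not the PR's actual implementation; the property values, hostnames, and the `effective_hdfs_site` helper are hypothetical:

   ```python
   # Hedged sketch: add the ns-fed HA client properties only on Router hosts,
   # so NameNode/ZKFC hosts never receive the duplicate HA configs.
   # ROUTER_HA_PROPS values (hostnames, ports) are illustrative, not from the PR.
   ROUTER_HA_PROPS = {
       'dfs.ha.namenodes.ns-fed': 'r1,r2',
       'dfs.namenode.rpc-address.ns-fed.r1': 'router1:8888',
       'dfs.namenode.rpc-address.ns-fed.r2': 'router2:8888',
   }

   def effective_hdfs_site(base_props, hostname, router_hosts):
       """Return hdfs-site properties for one host: base props everywhere,
       router-specific HA props only where a Router actually runs."""
       props = dict(base_props)
       if hostname in router_hosts:
           props.update(ROUTER_HA_PROPS)
       return props

   base = {'dfs.nameservices': 'ns0,ns1,ns2,ns3,ns-fed'}
   routers = {'router1', 'router2'}
   # A Router host gets the HA props; a NameNode host does not:
   print('dfs.ha.namenodes.ns-fed' in effective_hdfs_site(base, 'router1', routers))  # True
   print('dfs.ha.namenodes.ns-fed' in effective_hdfs_site(base, 'nn1', routers))      # False
   ```
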



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

