> On Oct. 19, 2017, 5:36 p.m., Alejandro Fernandez wrote:
> > hdfs-agent/src/main/java/org/apache/ranger/services/hdfs/client/HdfsClient.java
> > Lines 300 (patched)
> > <https://reviews.apache.org/r/63142/diff/1/?file=1863486#file1863486line301>
> >
> >     Doesn't this also have to set dfs.namenode.http-address.$cluster.$nn_id?
> >
> >     And potentially https instead, if SSL is enabled.
Here we only use the HDFS IPC (RPC) port; the HTTP/HTTPS ports (50070 or 50470) are not used.


- Qiang


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/63142/#review188706
-----------------------------------------------------------


On Oct. 19, 2017, 11:41 a.m., Qiang Zhang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/63142/
> -----------------------------------------------------------
> 
> (Updated Oct. 19, 2017, 11:41 a.m.)
> 
> 
> Review request for ranger, Alok Lal, Ankita Sinha, Don Bosco Durai, Colm O
> hEigeartaigh, Gautam Borad, Madhan Neethiraj, pengjianhua, Ramesh Mani,
> Selvamohan Neethiraj, Velmurugan Periasamy, and Qiang Zhang.
> 
> 
> Bugs: RANGER-1844
>     https://issues.apache.org/jira/browse/RANGER-1844
> 
> 
> Repository: ranger
> 
> 
> Description
> -------
> 
> In Ranger Admin, when creating an HDFS service, if the HDFS cluster is in HA
> mode you have to add a lot of configuration entries to hdfs_dev, such as:
> 
> Namenode URL *=hdfs://hdfscluster
> ===Add New Configurations===
> dfs.nameservices=hdfscluster
> dfs.client.failover.proxy.provider.hdfscluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> dfs.ha.namenodes.hdfscluster=nn1,nn2
> dfs.namenode.rpc-address.hdfscluster.nn1=hdfs://10.43.159.240:9000
> dfs.namenode.rpc-address.hdfscluster.nn2=hdfs://10.43.159.245:9000
> ===End of add New Configurations===
> 
> Other big-data components such as HBase and Hive support HA without adding
> lots of "Add New Configurations"; a single URL is enough, like the ZooKeeper
> quorum configuration in HBase or the JDBC URL in Hive. For the HDFS service,
> only "fs.default.name" should need to be configured:
> 
> Namenode URL *=hdfs://hdfscluster (old)
> Namenode URL *=hdfs://dap230-183:9000,hdfs://dap229-183:9000 (new)
> 
> 
> Diffs
> -----
> 
>   hdfs-agent/src/main/java/org/apache/ranger/services/hdfs/client/HdfsClient.java c252213f 
> 
> 
> Diff: https://reviews.apache.org/r/63142/diff/1/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Qiang Zhang
> 
>
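As an aside for readers following the description above, here is a minimal sketch of how a comma-separated Namenode URL could be expanded into the HA client properties listed there. This is an illustration only, not the actual RANGER-1844 patch; the class name, method name, and the hard-coded nameservice id "hdfscluster" are assumptions.

    // Hypothetical sketch: expand a comma-separated NameNode URL list into
    // the HA client properties shown in the review description. Not the
    // actual HdfsClient.java change.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;

    public class HaConfigSketch {

        static Configuration buildHaConfig(String namenodeUrls) {
            Configuration conf = new Configuration();
            String nameservice = "hdfscluster";   // assumed logical nameservice id
            String[] urls = namenodeUrls.split(",");

            conf.set("fs.default.name", "hdfs://" + nameservice);
            conf.set("dfs.nameservices", nameservice);
            conf.set("dfs.client.failover.proxy.provider." + nameservice,
                     "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

            StringBuilder nnIds = new StringBuilder();
            for (int i = 0; i < urls.length; i++) {
                String nnId = "nn" + (i + 1);
                URI uri = URI.create(urls[i].trim());   // e.g. hdfs://dap230-183:9000
                if (i > 0) {
                    nnIds.append(",");
                }
                nnIds.append(nnId);
                // RPC address only; per the reply above, the HTTP/HTTPS addresses
                // (dfs.namenode.http-address.*) are intentionally not set here.
                conf.set("dfs.namenode.rpc-address." + nameservice + "." + nnId,
                         uri.getHost() + ":" + uri.getPort());
            }
            conf.set("dfs.ha.namenodes." + nameservice, nnIds.toString());
            return conf;
        }

        public static void main(String[] args) {
            Configuration conf =
                buildHaConfig("hdfs://dap230-183:9000,hdfs://dap229-183:9000");
            System.out.println(conf.get("dfs.ha.namenodes.hdfscluster"));   // nn1,nn2
        }
    }

The comma-separated form carries the same information as the manual "Add New Configurations" block, so the client can derive one rpc-address entry per listed URL and assign the nn1, nn2, ... ids itself.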
