[ https://issues.apache.org/jira/browse/HDFS-15423?focusedWorklogId=537178&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-537178 ]
ASF GitHub Bot logged work on HDFS-15423:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 17/Jan/21 22:48
            Start Date: 17/Jan/21 22:48
    Worklog Time Spent: 10m

Work Description: fengnanli commented on a change in pull request #2605:
URL: https://github.com/apache/hadoop/pull/2605#discussion_r559250676


##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterWebHdfsMethods.java
##########

@@ -453,21 +456,30 @@ private DatanodeInfo chooseDatanode(final Router router,
       final String path, final HttpOpParam.Op op, final long openOffset,
       final String excludeDatanodes) throws IOException {
     final RouterRpcServer rpcServer = getRPCServer(router);
-    DatanodeInfo[] dns = null;
+    DatanodeInfo[] dns = {};
+    String resolvedNs = "";
     try {
       dns = rpcServer.getCachedDatanodeReport(DatanodeReportType.LIVE);
     } catch (IOException e) {
       LOG.error("Cannot get the datanodes from the RPC server", e);
     }
+    if (op == PutOpParam.Op.CREATE) {
+      try {
+        resolvedNs = rpcServer.getCreateLocation(path).getNameserviceId();
+      } catch (IOException e) {
+        LOG.error("Cannot get the name service to create file", e);
+      }
+    }
+
     HashSet<Node> excludes = new HashSet<Node>();
-    if (excludeDatanodes != null) {
-      Collection<String> collection =
-          getTrimmedStringCollection(excludeDatanodes);
-      for (DatanodeInfo dn : dns) {
-        if (collection.contains(dn.getName())) {
-          excludes.add(dn);
-        }
+    Collection<String> collection =
+        getTrimmedStringCollection(excludeDatanodes);
+    for (DatanodeInfo dn : dns) {
+      String ns = getNsFromDataNodeNetworkLocation(dn.getNetworkLocation());
+      if (collection.contains(dn.getName()) ||

Review comment:
   I made it if () else if () so the logic is clearer. What do you think?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id: (was: 537178)
    Time Spent: 3h 50m  (was: 3h 40m)

> RBF: WebHDFS create shouldn't choose DN from all sub-clusters
> -------------------------------------------------------------
>
>                 Key: HDFS-15423
>                 URL: https://issues.apache.org/jira/browse/HDFS-15423
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: rbf, webhdfs
>            Reporter: Chao Sun
>            Assignee: Fengnan Li
>            Priority: Major
>              Labels: pull-request-available
>            Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> In {{RouterWebHdfsMethods}}, for a {{CREATE}} call, {{chooseDatanode}} first
> gets all DNs via {{getDatanodeReport}} and then randomly picks one from the
> list via {{getRandomDatanode}}. This logic doesn't seem correct: it should
> pick a DN from the specific sub-cluster(s) of the input {{path}}.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
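For readers following the review discussion above: the patched loop excludes a datanode either because the caller listed it in {{excludeDatanodes}} or because its nameservice differs from the one resolved for the CREATE path. Below is a minimal standalone sketch of that filtering, outside of the actual Router classes. The {{Dn}} model class, the {{nsOf}} helper (standing in for {{getNsFromDataNodeNetworkLocation}}), and the assumption that the nameservice is the first component of the network location (e.g. {{/ns0/rack1}}) are all simplifications for illustration, not the real Hadoop API.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Minimal model of a datanode: a name plus a network location like "/ns0/rack1". */
class Dn {
  final String name;
  final String networkLocation;
  Dn(String name, String networkLocation) {
    this.name = name;
    this.networkLocation = networkLocation;
  }
}

public class ChooseDatanodeSketch {

  /**
   * Hypothetical stand-in for getNsFromDataNodeNetworkLocation: assumes the
   * nameservice id is the first path component of the network location.
   */
  static String nsOf(String networkLocation) {
    String[] parts = networkLocation.split("/");
    return parts.length > 1 ? parts[1] : "";
  }

  /**
   * Mirrors the if () / else if () structure from the review comment:
   * a datanode is excluded when the caller explicitly listed it, or when a
   * nameservice was resolved for the CREATE path and the datanode belongs
   * to a different nameservice.
   */
  static Set<Dn> buildExcludes(Dn[] dns, Set<String> excludedNames, String resolvedNs) {
    Set<Dn> excludes = new HashSet<>();
    for (Dn dn : dns) {
      if (excludedNames.contains(dn.name)) {
        excludes.add(dn);                       // caller asked to skip this node
      } else if (!resolvedNs.isEmpty()
          && !resolvedNs.equals(nsOf(dn.networkLocation))) {
        excludes.add(dn);                       // node lives in another sub-cluster
      }
    }
    return excludes;
  }

  public static void main(String[] args) {
    Dn a = new Dn("dn-a:9866", "/ns0/rack1");
    Dn b = new Dn("dn-b:9866", "/ns1/rack1");
    Dn c = new Dn("dn-c:9866", "/ns0/rack2");
    Set<Dn> ex = buildExcludes(new Dn[] {a, b, c},
        new HashSet<>(Arrays.asList("dn-c:9866")), "ns0");
    // b is excluded (wrong nameservice), c is excluded (caller's list), a survives
    System.out.println(!ex.contains(a) && ex.contains(b) && ex.contains(c)); // prints: true
  }
}
```

With this shape, when no nameservice can be resolved ({{resolvedNs}} stays empty) the filter degrades to the old behavior of honoring only the caller's exclude list, which matches the error-tolerant try/catch around {{getCreateLocation}} in the diff.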