ZanderXu commented on code in PR #6784: URL: https://github.com/apache/hadoop/pull/6784#discussion_r1597385087
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientProtocol.java:
##########
```java
@@ -1009,6 +1000,20 @@ public HdfsFileStatus getFileInfo(String src) throws IOException {
     return ret;
   }

+  public RemoteResult<RemoteLocation, HdfsFileStatus> getFileRemoteResult(String path)
+      throws IOException {
+    rpcServer.checkOperation(NameNode.OperationCategory.READ);
+
+    final List<RemoteLocation> locations = rpcServer.getLocationsForPath(path, false, false);
+    RemoteMethod method =
+        new RemoteMethod("getFileInfo", new Class<?>[] {String.class}, new RemoteParam());
+    // Check for file information sequentially
+    RemoteResult<RemoteLocation, HdfsFileStatus> result =
```

Review Comment:
   If `locations` only contains one namespace, we can return that namespace directly instead of resolving it through `getFileInfo`, right?

##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java:
##########
```java
@@ -1224,6 +1224,17 @@ public void testProxyConcatFile() throws Exception {
     String badPath = "/unknownlocation/unknowndir";
     compareResponses(routerProtocol, nnProtocol, m,
         new Object[] {badPath, new String[] {routerFile}});
+
+    // Test when concat trg is an empty file
```

Review Comment:
   Do the NameNode and RBF throw the same exception? Maybe RBF throws an NPE, while the NN throws `org.apache.hadoop.HadoopIllegalArgumentException`.
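The reviewer's first suggestion is a short-circuit: when the mount table resolves a path to exactly one namespace, there is no ambiguity to resolve, so the remote `getFileInfo` round trip can be skipped. The sketch below illustrates that pattern in isolation, using plain Java with hypothetical generic types standing in for Hadoop's `RemoteLocation`/`RemoteResult`; it is not the actual RouterClientProtocol code.

```java
import java.util.AbstractMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class SingleLocationShortCircuit {

  // Hypothetical stand-in for the invokeSequential pattern: given candidate
  // locations and a remote lookup, return the (location, result) pair.
  static <L, R> Map.Entry<L, R> resolve(List<L> locations, Function<L, R> remoteLookup) {
    if (locations.size() == 1) {
      // Single namespace: return it directly, no remote call needed.
      // The result is left null here; callers that need the file status
      // can still fetch it lazily.
      return new AbstractMap.SimpleEntry<>(locations.get(0), null);
    }
    // Multiple namespaces: probe sequentially until one has the file.
    for (L loc : locations) {
      R result = remoteLookup.apply(loc);
      if (result != null) {
        return new AbstractMap.SimpleEntry<>(loc, result);
      }
    }
    return null; // not found in any namespace
  }

  public static void main(String[] args) {
    final int[] calls = {0};
    Function<String, String> lookup = loc -> {
      calls[0]++;
      return "info@" + loc;
    };

    // One candidate location: resolved without any lookup calls.
    Map.Entry<String, String> single = resolve(List.of("ns0"), lookup);
    System.out.println(single.getKey() + " calls=" + calls[0]);

    // Two candidates: the first successful lookup wins.
    Map.Entry<String, String> multi = resolve(List.of("ns0", "ns1"), lookup);
    System.out.println(multi.getKey() + " calls=" + calls[0]);
  }
}
```

The trade-off the review hints at: the short-circuit saves an RPC per single-namespace path, at the cost of callers no longer getting a populated `HdfsFileStatus` for free, so it only helps call sites that need the namespace, not the file status itself.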