aajisaka commented on a change in pull request #3065: URL: https://github.com/apache/hadoop/pull/3065#discussion_r647376745
##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##########

```diff
@@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) {
     LightWeightHashSet<Long> openFileIds = new LightWeightHashSet<>();
     for (DatanodeDescriptor dataNode :
         blockManager.getDatanodeManager().getDatanodes()) {
-      for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) {
+      // Sort open files
+      LightWeightHashSet<Long> dnOpenFiles =
+          dataNode.getLeavingServiceStatus().getOpenFiles();
+      Long[] dnOpenFileIds = new Long[dnOpenFiles.size()];
+      Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds));
+      for (Long ucFileId : dnOpenFileIds) {
```

Review comment:

@ferhui Thank you for your comment. The change makes sense to keep the previous behavior.

Now I have a question about the previous (and current) behavior:
https://github.com/apache/hadoop/blob/85517df11ae33ab3a06654d40a1ef4d8eae013e3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L1997-L1998

If there are multiple DataNodes with open files, are the inode IDs really sorted?

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
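To illustrate the question being raised, here is a minimal, self-contained sketch (not Hadoop code; the plain `List`s stand in for each DataNode's `getLeavingServiceStatus().getOpenFiles()` set, and the ID values are made up). Sorting within each DataNode and then iterating DataNode by DataNode, as the patch does, produces per-DataNode order but not necessarily a globally sorted sequence when multiple DataNodes have open files:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class SortedOpenFilesSketch {
    public static void main(String[] args) {
        // Hypothetical per-DataNode open-file inode IDs, each already
        // sorted within its own DataNode (as the patch ensures).
        List<Long> dn1 = Arrays.asList(16389L, 16401L);
        List<Long> dn2 = Arrays.asList(16390L, 16400L);

        // Per-DataNode sort followed by concatenation: the combined
        // sequence interleaves out of order across DataNodes.
        List<Long> perDnSorted = new ArrayList<>(dn1);
        perDnSorted.addAll(dn2);
        System.out.println("per-DN sort:  " + perDnSorted);
        // per-DN sort:  [16389, 16401, 16390, 16400]

        // Collecting all IDs first and sorting once gives a global order
        // (a TreeSet also deduplicates files open on several DataNodes).
        SortedSet<Long> globalSort = new TreeSet<>(dn1);
        globalSort.addAll(dn2);
        System.out.println("global sort:  " + globalSort);
        // global sort:  [16389, 16390, 16400, 16401]
    }
}
```

Whether the global order matters here depends on what consumers of the `metaSave` output expect; the sketch only shows that per-DataNode sorting alone does not guarantee it.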