ayushtkn commented on a change in pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#discussion_r489589949
##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystemWithMountLinks.java
##########

@@ -61,4 +64,55 @@ public void testCreateOnRoot() throws Exception {
   public void testMountLinkWithNonExistentLink() throws Exception {
     testMountLinkWithNonExistentLink(false);
   }
+
+  @Test
+  public void testRenameOnInternalDirWithFallback() throws Exception {
+    Configuration conf = getConf();
+    URI defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    final Path hdfsTargetPath1 = new Path(defaultFSURI + "/HDFSUser");
+    final Path hdfsTargetPath2 = new Path(defaultFSURI + "/NewHDFSUser/next");
+    ViewFsTestSetup.addMountLinksToConf(defaultFSURI.getAuthority(),
+        new String[] {"/HDFSUser", "/NewHDFSUser/next"},
+        new String[] {hdfsTargetPath1.toUri().toString(),
+            hdfsTargetPath2.toUri().toString()}, conf);
+    // Making sure the parent dir structure of the mount points is available
+    // in the fallback.
+    try (DistributedFileSystem dfs = new DistributedFileSystem()) {
+      dfs.initialize(defaultFSURI, conf);
+      dfs.mkdirs(hdfsTargetPath1);
+      dfs.mkdirs(hdfsTargetPath2);
+    }
+
+    try (FileSystem fs = FileSystem.get(conf)) {
+      Path src = new Path("/newFileOnRoot");
+      Path dst = new Path("/newFileOnRoot1");
+      fs.create(src).close();
+      verifyRename(fs, src, dst);
+
+      src = new Path("/newFileOnRoot1");
+      dst = new Path("/NewHDFSUser/newFileOnRoot");
+      fs.mkdirs(dst.getParent());
+      verifyRename(fs, src, dst);
+
+      src = new Path("/NewHDFSUser/newFileOnRoot");
+      dst = new Path("/NewHDFSUser/newFileOnRoot1");
+      verifyRename(fs, src, dst);
+
+      src = new Path("/NewHDFSUser/newFileOnRoot1");
+      dst = new Path("/newFileOnRoot");
+      verifyRename(fs, src, dst);
+
+      src = new Path("/HDFSUser/newFileOnRoot1");
+      dst = new Path("/HDFSUser/newFileOnRoot");
+      fs.create(src).close();
+      verifyRename(fs, src, dst);
+    }
+  }
+
+  private void verifyRename(FileSystem fs, Path src, Path dst)
+      throws IOException {
+    fs.rename(src, dst);
+    Assert.assertFalse(fs.exists(src));
+    Assert.assertTrue(fs.exists(dst));
+  }

Review comment:
   Thanx @umamaheswararao for the update.

   Regarding Case 2, where the same directory structure isn't available in the fallback: in `ViewFs` I think this was handled by always forcing `createParent` to `true`, so only `rename` would be left with this restriction.

   Consider a mount entry like `/mount/sub1/sub2` --> `/nsPath`. A rename with `dst` as `/mount/sub1/renameFile` will fail, but a create of `/mount/sub1/createFile` without `createParent` will pass, and that `create` call will create the internal directory structure as well. If the user then retries the same rename, it will succeed. The same applies to `mkdir` with `createParent` as `false`. That would be somewhat inconsistent behavior for the end user, with one API behaving differently from the rest.

   Secondly, creating the same directory structure in the `fallback` just so `rename` works doesn't seem feasible: it would mean a lot of empty directories, increasing the number of inodes at the NN. IIRC something similar, creating empty directories for mount entries in the case of RBF, was discussed for some issue recently, and the Uber folks had concerns about inode counts growing due to empty directories.

   I think we should explicitly take care of this in `rename` as well, maybe in a non-atomic way only? Later we might find a better approach, say adding one more flag to `rename2` and a `createParent` argument to `rename` in a follow-up.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
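The create/rename asymmetry discussed in the review comes down to whether a destination path resolves through a mount point or lands in a ViewFS "internal dir" (an ancestor of some mount point that exists only in the mount table). As a standalone illustration, here is a toy model of that classification for the reviewer's single mount entry `/mount/sub1/sub2` --> `/nsPath`. This is not Hadoop's actual `InodeTree` code; the class and method names are hypothetical and only sketch the idea:

```java
/**
 * Toy model of ViewFS path classification (hypothetical names, not Hadoop
 * code): a path either resolves through a mount point to a real namespace
 * path, or it falls under an "internal dir" that exists only in the mount
 * table unless the fallback filesystem also has it.
 */
public class MountResolveSketch {

  // The single mount entry from the reviewer's example:
  // /mount/sub1/sub2 -> /nsPath
  static final String MOUNT_POINT = "/mount/sub1/sub2";

  /** True if 'path' is the mount point or below it, i.e. it resolves
   *  to a concrete target namespace path. */
  public static boolean resolvesToMount(String path) {
    return path.equals(MOUNT_POINT) || path.startsWith(MOUNT_POINT + "/");
  }

  /** True if 'path' is an internal dir: the root or a strict ancestor
   *  of the mount point, materialized only in the mount table. */
  public static boolean isInternalDir(String path) {
    return "/".equals(path)
        || MOUNT_POINT.equals(path)
        || MOUNT_POINT.startsWith(path + "/");
  }

  public static void main(String[] args) {
    // dst of the failing rename: its parent /mount/sub1 is internal-only,
    // so there is nowhere to land the renamed file.
    System.out.println(resolvesToMount("/mount/sub1/renameFile")); // false
    System.out.println(isInternalDir("/mount/sub1"));              // true
    // A path under the mount point resolves normally.
    System.out.println(resolvesToMount("/mount/sub1/sub2/file"));  // true
  }
}
```

In this model, `/mount/sub1/renameFile` resolves to neither the mount target nor (unless `/mount/sub1` already exists there) the fallback, which is why the rename fails until some other call, such as the `create` without `createParent` described above, materializes that internal directory structure.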