umamaheswararao commented on a change in pull request #2305:
URL: https://github.com/apache/hadoop/pull/2305#discussion_r489780767
##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystemWithMountLinks.java
##########

@@ -61,4 +64,55 @@ public void testCreateOnRoot() throws Exception {
   public void testMountLinkWithNonExistentLink() throws Exception {
     testMountLinkWithNonExistentLink(false);
   }
+
+  @Test
+  public void testRenameOnInternalDirWithFallback() throws Exception {
+    Configuration conf = getConf();
+    URI defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    final Path hdfsTargetPath1 = new Path(defaultFSURI + "/HDFSUser");
+    final Path hdfsTargetPath2 = new Path(defaultFSURI + "/NewHDFSUser/next");
+    ViewFsTestSetup.addMountLinksToConf(defaultFSURI.getAuthority(),
+        new String[] {"/HDFSUser", "/NewHDFSUser/next"},
+        new String[] {hdfsTargetPath1.toUri().toString(),
+            hdfsTargetPath2.toUri().toString()}, conf);
+    //Making sure parent dir structure as mount points available in fallback.
+    try (DistributedFileSystem dfs = new DistributedFileSystem()) {
+      dfs.initialize(defaultFSURI, conf);
+      dfs.mkdirs(hdfsTargetPath1);
+      dfs.mkdirs(hdfsTargetPath2);
+    }
+
+    try (FileSystem fs = FileSystem.get(conf)) {
+      Path src = new Path("/newFileOnRoot");
+      Path dst = new Path("/newFileOnRoot1");
+      fs.create(src).close();
+      verifyRename(fs, src, dst);
+
+      src = new Path("/newFileOnRoot1");
+      dst = new Path("/NewHDFSUser/newFileOnRoot");
+      fs.mkdirs(dst.getParent());
+      verifyRename(fs, src, dst);
+
+      src = new Path("/NewHDFSUser/newFileOnRoot");
+      dst = new Path("/NewHDFSUser/newFileOnRoot1");
+      verifyRename(fs, src, dst);
+
+      src = new Path("/NewHDFSUser/newFileOnRoot1");
+      dst = new Path("/newFileOnRoot");
+      verifyRename(fs, src, dst);
+
+      src = new Path("/HDFSUser/newFileOnRoot1");
+      dst = new Path("/HDFSUser/newFileOnRoot");
+      fs.create(src).close();
+      verifyRename(fs, src, dst);
+    }
+  }
+
+  private void verifyRename(FileSystem fs, Path src, Path dst)
+      throws IOException {
+    fs.rename(src, dst);
+    Assert.assertFalse(fs.exists(src));
+    Assert.assertTrue(fs.exists(dst));
+  }

Review comment:

What's your proposal for rename here? Could you elaborate a bit?

I am thinking this way: if the API's nature is to create parent dirs, it will create them. If the API's nature is not to create parents, I am OK with failing and letting users know that the call was redirected to fallback and the directory structure doesn't exist there. In your example above, you made an additional createFile call so that the next rename succeeded.

I see that in ViewFs.java create and mkdir set createParent to true by default. I am OK with taking the flag the user passed instead; that way we stay consistent from an API perspective. Also, if I am correct, this problem exists only in ViewFs.java. ViewFileSystem.java is already compliant with the API, since mkdirs creates parents by default, so there we only do what users expect. What do you think? If someone really wants createParent to default to true in ViewFs.java, that discussion can be taken to a separate JIRA about create/mkdir behavior.

To summarize: in ViewFs.java, create and mkdir create the parent in fallback if it doesn't exist, irrespective of the createParent flag. We did this to provide transparency from the user's perspective: an existence check on the parent dir returns true, but in fact the children actually go to the fallback. That only holds when users create mount points with respect to the fallback cluster; otherwise a mount point can be arbitrary, with no relation to the fallback. When the API does not promise createParent, then we would keep those semantics for fallback as well. What are your thoughts?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
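The reviewer's proposal above boils down to one decision: when an operation resolves into the fallback file system and the destination's parent is missing, propagate the caller's createParent flag instead of forcing it to true. The sketch below models just that decision; `FallbackParentPolicy` and its method are hypothetical names invented for illustration, not Hadoop APIs.

```java
// Illustrative model of the createParent-in-fallback semantics discussed in
// the review. FallbackParentPolicy is a hypothetical class, not Hadoop code.
public class FallbackParentPolicy {

  /**
   * Proposed behavior: honor the caller's createParent flag when resolving
   * into the fallback. Today ViewFs.java forces createParent to true for
   * create/mkdir, which hides the fact that children actually land in the
   * fallback cluster; returning the caller's flag keeps ViewFs consistent
   * with the underlying API contract.
   */
  public static boolean createParentInFallback(boolean callerCreateParent,
                                               boolean parentExists) {
    if (parentExists) {
      return false; // parent already there, nothing to create
    }
    // Propagate the caller's choice; false means the operation should fail
    // with a clear error rather than silently creating the parent.
    return callerCreateParent;
  }

  public static void main(String[] args) {
    // Caller asked for parents, parent missing -> create it in fallback.
    System.out.println(createParentInFallback(true, false));
    // Caller did not ask for parents, parent missing -> fail fast instead.
    System.out.println(createParentInFallback(false, false));
  }
}
```

Under this model, ViewFileSystem.java needs no change (mkdirs already creates parents per the FileSystem contract), and any proposal to keep the always-create behavior in ViewFs.java would move to a separate JIRA, as the comment suggests.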