[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1213975421

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
     assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with a wrong user, both when dfs.permissions.enabled
+   * is true and when it is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+    Configuration conf2 = new Configuration();
+    conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+    conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+    MiniDFSCluster cluster2 =
+        new MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+    cluster2.waitClusterUp();
+    DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+    String testPathDir = "/dir2";
+    Path dir = new Path(testPathDir);
+    dfs2.mkdirs(dir);
+    Path trg = new Path(testPathDir, "trg");
+    Path src = new Path(testPathDir, "src");
+    DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+    DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+    // Check permissions with the wrong user when dfs.permissions.enabled is true.
+    final UserGroupInformation user =
+        UserGroupInformation.createUserForTesting("theDoctor", new String[] {"tardis"});
+    DistributedFileSystem hdfs1 =
+        (DistributedFileSystem) DFSTestUtil.getFileSystemAs(user, conf2);
+    try {
+      hdfs1.concat(trg, new Path[] {src});
+      fail("Permission exception expected");
+    } catch (IOException ie) {
+      LOG.info("Got expected exception for permissions: " + ie.getLocalizedMessage());
+    }
+
+    dfs2.close();
+    cluster2.shutdownDataNodes();
+    cluster2.shutdown();
+
+    conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+    cluster2 = new MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+    cluster2.waitClusterUp();
+    dfs2 = cluster2.getFileSystem();
+    dfs2.mkdirs(dir);
+    DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+    DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+    // Check permissions with the wrong user when dfs.permissions.enabled is false.
+    DistributedFileSystem hdfs2 =
+        (DistributedFileSystem) DFSTestUtil.getFileSystemAs(user, conf2);
+    try {
+      hdfs2.concat(trg, new Path[] {src});
+    } catch (IOException ie) {
+      fail("Got unexpected exception for permissions: " + ie.getLocalizedMessage());
+    }
+    dfs2.close();
+    cluster2.shutdownDataNodes();
+    cluster2.shutdown();
+  }
+
+  /**
+   * Tests permissions of the concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+    String testPathDir = "/dir";
+    Path dir = new Path(testPathDir);
+    dfs.mkdirs(dir);
+    dfs.setPermission(dir, new FsPermission((short) 0777));
+
+    Path dst = new Path(testPathDir, "dst");
+    Path src = new Path(testPathDir, "src");
+    DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+    // Create a user who is not the owner of the file and try the concat operation.
+    final UserGroupInformation user =
+        UserGroupInformation.createUserForTesting("theDoctor", new String[] {"group"});
+    DistributedFileSystem dfs2 =
+        (DistributedFileSystem) DFSTestUtil.getFileSystemAs(user, conf);
+
+    // Test 1: User is not the owner of the file and has src & dst permission.
+    DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+    dfs.setPermission(dst, new FsPermission((short) 0777));
+    dfs.setPermission(src, new FsPermission((short) 0777));
+    try {
+      dfs2.concat(dst, new Path[] {src});
+    } catch (AccessControlException ace) {
+      fail("Got unexpected exception: " + ace.getLocalizedMessage());
+    }

Review Comment:
   @ayushtkn Thanks a lot for your help! I have committed the UT.
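For context on the finally-block discussion later in this thread: as the test stands, cluster2.shutdown() is skipped whenever fail() or an unexpected exception fires, so the mini cluster leaks into later tests. A minimal sketch of the usual try/finally guard, reusing only names from the test above (this is an illustration, not code from the PR):

    MiniDFSCluster cluster2 =
        new MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
    try {
      cluster2.waitClusterUp();
      DistributedFileSystem dfs2 = cluster2.getFileSystem();
      // ... create trg/src and run the concat checks here ...
    } finally {
      // Runs even when fail() or an unexpected IOException is thrown above.
      cluster2.shutdown();
    }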
[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211724811

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

Review Comment:
   @ayushtkn I have updated the UT; please help review. Thank you for your patient answers.
[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211643942

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

Review Comment:
   @ayushtkn Could you give me an example of using a finally block together with LambdaTestUtils.intercept, like below? Thanks.

       LambdaTestUtils.intercept(AccessControlException.class,
           "Permission denied: user=theDoctor, access=WRITE",
           () -> hdfs1.concat(trg, new Path[] {src}));
       cluster2.shutdown();
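For what it's worth, the two pieces compose directly; a minimal sketch, assuming the enclosing test method is widened to "throws Exception" (which LambdaTestUtils.intercept declares):

    try {
      // intercept() fails the test if concat() does not throw, and checks
      // both the exception class and that its message contains the text.
      LambdaTestUtils.intercept(AccessControlException.class,
          "Permission denied: user=theDoctor, access=WRITE",
          () -> hdfs1.concat(trg, new Path[] {src}));
    } finally {
      // Teardown runs whether or not the expected exception was seen.
      cluster2.shutdown();
    }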
[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211583457

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

Review Comment:
   1. I mean, if the call does not throw an exception, how should LambdaTestUtils.intercept be used?
   2. After using LambdaTestUtils.intercept, could I still use a finally block to call cluster2.shutdown()? @ayushtkn
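On question 1, a sketch of my understanding (not from the PR itself): intercept is only for the branch where an exception is expected. On the success path the operation is called directly, since any unexpected IOException propagates and JUnit already reports it as a test failure, making the catch-and-fail wrapper unnecessary:

    // Success path (dfs.permissions.enabled = false): no intercept needed.
    // An unexpected IOException simply propagates and fails the test.
    hdfs2.concat(trg, new Path[] {src});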
[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211433782

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

Review Comment:
   @ayushtkn Can LambdaTestUtils.intercept replace both the expected-exception check and the unexpected-exception check? And can I use a finally block if I use LambdaTestUtils.intercept? Thank you.
[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1204372000

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

@@ -564,4 +564,36 @@ public void testConcatOnSameFile() throws Exception {
     assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with a wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {

Review Comment:
   @Hexiaoqiao I have tried to set the permission config explicitly before invoking concat, but it failed to take effect. Could you tell me how to change dfs.permissions.enabled dynamically?
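One possible approach, sketched from common MiniDFSCluster test patterns (this assumes dfs.permissions.enabled is read at NameNode startup and is not runtime-reconfigurable, so a NameNode restart is needed; the calls below are standard MiniDFSCluster APIs, but this exact flow is a suggestion, not code from the PR):

    // Flip the flag on the running NameNode's configuration, then restart
    // only the NameNode so the new value is loaded; DataNodes keep running.
    cluster.getConfiguration(0)
        .setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
    cluster.restartNameNode(true); // true: wait until the NameNode is up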
[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1204175628

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

Review Comment:
   Can we change the value of dfs.permissions.enabled dynamically, without restarting the MiniDFSCluster?
[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation
lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1204017406

## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java: ##

Review Comment:
   @Hexiaoqiao In TestHDFSConcat::testConcat(), there is already a failing permission case, and I have also added a failing case in testConcatPermissions(). Please help review.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org