Hexiaoqiao commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1204104857


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##########
@@ -564,4 +564,66 @@ public void testConcatOnSameFile() throws Exception {
 
     assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+    Configuration conf2 = new Configuration();
+    conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+    conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+    MiniDFSCluster cluster2 = new MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+    cluster2.waitClusterUp();
+    DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+    String testPathDir = "/dir2";
+    Path dir = new Path(testPathDir);
+    dfs2.mkdirs(dir);
+    Path trg = new Path(testPathDir, "trg");
+    Path src = new Path(testPathDir, "src");
+    DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+    DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+    // check permissions with the wrong user when dfs.permissions.enabled is true
+    final UserGroupInformation user = UserGroupInformation.createUserForTesting(
+        "theDoctor", new String[] { "tardis" });
+    DistributedFileSystem hdfs1 =
+        (DistributedFileSystem)DFSTestUtil.getFileSystemAs(user, conf2);
+    try {
+      hdfs1.concat(trg, new Path[] {src});
+      fail("Permission exception expected");
+    } catch (IOException ie) {
+      System.out.println("Got expected exception for permissions:"

Review Comment:
   Please remove the standard output.
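   One way to address this (a rough sketch, not necessarily the final shape of the change) is to drop the try/fail/catch/println pattern and assert on the exception instead, e.g. with `LambdaTestUtils.intercept`. This assumes the `hdfs1`, `trg` and `src` objects from the diff above and that an `AccessControlException` is the expected failure:

```java
// Additional imports, assuming they are not already present in the class:
import org.apache.hadoop.security.AccessControlException;
import org.apache.hadoop.test.LambdaTestUtils;

// Inside testConcatPermissions(), replacing the try/fail/catch/println block;
// hdfs1, trg and src are the objects created earlier in the test.
// intercept() fails the test if no exception (or the wrong type) is thrown,
// so neither the explicit fail() nor the System.out line is needed.
LambdaTestUtils.intercept(AccessControlException.class,
    () -> hdfs1.concat(trg, new Path[] {src}));
```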



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##########
@@ -564,4 +564,66 @@ public void testConcatOnSameFile() throws Exception {
 
     assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+    Configuration conf2 = new Configuration();
+    conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+    conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+    MiniDFSCluster cluster2 = new MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+    cluster2.waitClusterUp();
+    DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+    String testPathDir = "/dir2";
+    Path dir = new Path(testPathDir);
+    dfs2.mkdirs(dir);
+    Path trg = new Path(testPathDir, "trg");
+    Path src = new Path(testPathDir, "src");
+    DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+    DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+    // check permissions with the wrong user when dfs.permissions.enabled is true

Review Comment:
   Please start every comment with a capital letter and end it with a period.
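   For example, the inline comment in this hunk would then read:

```java
// Check permissions with the wrong user when dfs.permissions.enabled is true.
```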



##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##########
@@ -564,4 +564,36 @@ public void testConcatOnSameFile() throws Exception {
 
     assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {

Review Comment:
   1. We should not set up a cluster and tear it down again and again. `TestHDFSConcat` already sets up a cluster in a shared fixture and tears it down automatically; please refer to the `@Before` and `@After` annotations.
   2. Suggest setting the permission explicitly before invoking concat; that makes the test more readable and its intent clearer.
   3. Please check the Yetus report carefully, including the failed unit tests and the checkstyle warnings.
   Thanks again.
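   Combining points 1 and 2, one possible shape for the test (a rough sketch only) is to reuse the cluster that `TestHDFSConcat` already manages in its `@Before`/`@After` methods, make the restrictive permission explicit, and then expect the failure. This assumes the existing `dfs`, `blockSize` and `REPL_FACTOR` fields, that the shared cluster leaves `dfs.permissions.enabled` at its default of true, and that `AccessControlException` is the expected exception:

```java
// Assumed extra imports: FsPermission, AccessControlException, LambdaTestUtils.
@Test
public void testConcatPermissionDenied() throws Exception {
  // Reuse the cluster and dfs handle created by the class-level @Before method.
  Path dir = new Path("/testConcatPermissionDenied");
  dfs.mkdirs(dir);
  Path trg = new Path(dir, "trg");
  Path src = new Path(dir, "src");
  DFSTestUtil.createFile(dfs, trg, blockSize, REPL_FACTOR, 1);
  DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);

  // Make the expected denial explicit: only the owner may write the target.
  dfs.setPermission(trg, new FsPermission((short) 0644));

  // Run concat as a different, unprivileged user and expect it to be rejected.
  UserGroupInformation user = UserGroupInformation.createUserForTesting(
      "theDoctor", new String[] {"tardis"});
  DistributedFileSystem userFs =
      (DistributedFileSystem) DFSTestUtil.getFileSystemAs(user, dfs.getConf());
  LambdaTestUtils.intercept(AccessControlException.class,
      () -> userFs.concat(trg, new Path[] {src}));
}
```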



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
