[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794066#comment-17794066
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

hadoop-yetus commented on PR #6329:
URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1844822834

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 12s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-nfs.txt)
 |  hadoop-hdfs-nfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 12s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-nfs in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 12s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-hdfs-nfs in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 10s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-hdfs-nfs in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |   0m 10s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-hdfs-nfs in the patch failed with JDK Private 
Build-1.8.0_392-8u392-ga-1~20.04-b08.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 10s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-nfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-nfs: The patch generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1)  |
   | -1 :x: |  mvnsite  |   0m 12s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/3/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-nfs.txt)
 |  hadoop-hdfs-nfs in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  | 

[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794049#comment-17794049
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

yijut2 commented on PR #6329:
URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1844768197

   > Thanks for fixing this bug!
   
   Thanks for the quick response too!




> Detect order dependent flakiness in TestViewfsWithNfs3.java under 
> hadoop-hdfs-nfs module
> 
>
> Key: HDFS-17278
> URL: https://issues.apache.org/jira/browse/HDFS-17278
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: openjdk version "17.0.9"
> Apache Maven 3.9.5
>Reporter: Ruby
>Priority: Minor
>  Labels: pull-request-available
> Attachments: failed-1.png, failed-2.png, success.png
>
>
> The order-dependent flakiness was detected when the test class
> TestDFSClientCache.java runs before TestRpcProgramNfs3.java.
> The error message looks like the following:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   TestRpcProgramNfs3.testAccess:279 Incorrect return code 
> expected:<0> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testCommit:764 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testCreate:493 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   
> TestRpcProgramNfs3.testEncryptedReadWrite:359->createFileUsingNfs:393 
> Incorrect response:  expected: but 
> was:
> [ERROR]   TestRpcProgramNfs3.testFsinfo:714 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testFsstat:696 Incorrect return code: 
> expected:<0> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testGetattr:205 Incorrect return code 
> expected:<0> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testLookup:249 Incorrect return code 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testMkdir:517 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testPathconf:738 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRead:341 Incorrect return code: expected:<0> 
> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testReaddir:642 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testReaddirplus:666 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testReadlink:297 Incorrect return code: 
> expected:<0> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRemove:570 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRename:618 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRmdir:594 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testSetattr:225 Incorrect return code 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testSymlink:546 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testWrite:468 Incorrect return code: 
> expected:<13> but was:<5>
> [INFO] 
> [ERROR] Tests run: 25, Failures: 20, Errors: 0, Skipped: 0
> [INFO] 
> [ERROR] There are test failures. {code}
> The polluter that led to this flakiness was the test method
> testGetUserGroupInformationSecure() in TestDFSClientCache.java. It contains the
> line
> {code:java}
> UserGroupInformation.setLoginUser(currentUserUgi);{code}
> which modifies shared static state, effectively pre-configuring the login user
> for whatever test runs next. To fix this issue, I added a cleanup method to
> TestDFSClientCache.java that resets the UserGroupInformation, ensuring
> isolation between test classes.
> {code:java}
> @AfterClass
> public static void cleanup() {
>   UserGroupInformation.reset();
> }{code}
> This includes setting
> {code:java}
> authenticationMethod = null;
> conf = null; // set configuration to null
> setLoginUser(null); // reset login user to default null{code}
> and so on. The reset() method can be found in
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java.
> After the fix, the error no longer occurred and the run succeeded:
> {code:java}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
> [INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 18.457 s - in org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
> [INFO] 
> [INFO] Results:
> [INFO] 
> [INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> --

[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794048#comment-17794048
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

xinglin commented on code in PR #6329:
URL: https://github.com/apache/hadoop/pull/6329#discussion_r1418445324


##
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java:
##
@@ -31,8 +31,14 @@
 import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.junit.Test;
+import org.junit.AfterClass;

Review Comment:
   Please fix this as well. Otherwise, LGTM. thanks,
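
   The request presumably refers to the import placement in the hunk above: the new
JUnit import was added after org.junit.Test rather than grouped alphabetically with
the other org.junit imports, which would explain the new checkstyle warning. A guess
at the intended ordering, for illustration only:

   ```java
   import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration;
   import org.apache.hadoop.security.UserGroupInformation;
   import org.junit.After;
   import org.junit.Test;
   ```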






[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794047#comment-17794047
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

yijut2 commented on code in PR #6329:
URL: https://github.com/apache/hadoop/pull/6329#discussion_r1418443416


##
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java:
##
@@ -31,8 +31,14 @@
 import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.junit.Test;
+import org.junit.AfterClass;
 
 public class TestDFSClientCache {
+  @AfterClass

Review Comment:
   Agreed, I think that would be better! Just updated the change, thank you.






[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794032#comment-17794032
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

xinglin commented on code in PR #6329:
URL: https://github.com/apache/hadoop/pull/6329#discussion_r1418413021


##
hadoop-hdfs-project/hadoop-hdfs-nfs/src/test/java/org/apache/hadoop/hdfs/nfs/nfs3/TestDFSClientCache.java:
##
@@ -31,8 +31,14 @@
 import org.apache.hadoop.hdfs.nfs.conf.NfsConfiguration;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.junit.Test;
+import org.junit.AfterClass;
 
 public class TestDFSClientCache {
+  @AfterClass

Review Comment:
   nit: maybe @After? Basically reset/clean all side-effects after each test.
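
   A minimal sketch of the suggested variant, assuming JUnit 4 as the test already
uses; the method name and placement are illustrative:

   ```java
   import org.junit.After;

   import org.apache.hadoop.security.UserGroupInformation;

   public class TestDFSClientCache {
     // Runs after every test method, so no single test can leak login
     // state to later tests (unlike @AfterClass, which runs once per class).
     @After
     public void cleanup() {
       // Resets authenticationMethod, conf, and the login user back to
       // their defaults, as described in UserGroupInformation.reset().
       UserGroupInformation.reset();
     }

     // ... existing test methods ...
   }
   ```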






[jira] [Commented] (HDFS-17262) Fixed the verbose log.warn in DFSUtil.addTransferRateMetric()

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794031#comment-17794031
 ] 

ASF GitHub Bot commented on HDFS-17262:
---

xinglin commented on PR #6290:
URL: https://github.com/apache/hadoop/pull/6290#issuecomment-1844557563

   Thanks @Hexiaoqiao for merging!
   
   Checked out the commit from the trunk branch and saw that "Contributed by" was
changed from "Ravindra Dingankar
[rdingan...@linkedin.com](mailto:rdingan...@linkedin.com)" to myself, which
was unexpected. I had intentionally put "Contributed by Rav" in the commit message.
I should have communicated this to @Hexiaoqiao before he merged the PR.
   
   The change was originally created by Rav; I just helped contribute it
back to open source while he was on vacation.
   
   ```
   commit 607c98104284fd6364509bf0d5a62f23abef2a52 (HEAD -> trunk, 
origin/trunk, origin/HEAD)
   Author: Xing Lin 
   Date:   Wed Dec 6 18:16:23 2023 -0800
   
   HDFS-17262.  Fixed the verbose log.warn in 
DFSUtil.addTransferRateMetric().  (#6290). Contributed by Xing Lin.
   ```
   cc @rdingankar 




> Fixed the verbose log.warn in DFSUtil.addTransferRateMetric()
> -
>
> Key: HDFS-17262
> URL: https://issues.apache.org/jira/browse/HDFS-17262
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bryan Beaudreault
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HDFS-16917 added a LOG.warn when the passed duration is 0. The unit for duration
> is millis, and it's very possible for a read to take less than a millisecond
> over a local TCP connection. We are seeing this spam multiple times
> per millisecond. There's another report on the PR for HDFS-16917.
> Please downgrade to debug or remove the log.
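
A minimal sketch of the requested change, assuming SLF4J-style logging as used
across Hadoop; the surrounding names are illustrative and this is not the
committed patch:

{code:java}
// Sketch of a guard in DFSUtil.addTransferRateMetric(): duration is in
// millis, so a fast local read can legitimately measure 0. Logging that
// at warn level floods the log; debug keeps the signal without the spam.
if (duration <= 0) {
  LOG.debug("Invalid duration {} ms when calculating transfer rate.", duration);
  return;
}
// ... record the transfer rate metric as before ...
{code}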






[jira] [Commented] (HDFS-15413) DFSStripedInputStream throws exception when datanodes close idle connections

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17794010#comment-17794010
 ] 

ASF GitHub Bot commented on HDFS-15413:
---

Neilxzn commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1844334814

   I can pass the unit test hadoop.hdfs.TestDFSStripedInputStreamWithTimeout in 
my local development environment, but it fails on GitHub Jenkins.
   
![image](https://github.com/apache/hadoop/assets/10757009/a511b4e1-8413-44bb-9136-5e7cc1f3ff17)
   The test log from my development environment is consistent with this
assumption: when the client reads the file for the first time and then stops for 10
seconds, the connection between the client and the datanode server is
automatically disconnected, causing the client's subsequent read to fail.
@ayushtkn Any other suggestions?




> DFSStripedInputStream throws exception when datanodes close idle connections
> 
>
> Key: HDFS-15413
> URL: https://issues.apache.org/jira/browse/HDFS-15413
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding, hdfs-client
>Affects Versions: 3.1.3
> Environment: - Hadoop 3.1.3
> - erasure coding with ISA-L and RS-3-2-1024k scheme
> - running in kubernetes
> - dfs.client.socket-timeout = 1
> - dfs.datanode.socket.write.timeout = 1
>Reporter: Andrey Elenskiy
>Priority: Critical
>  Labels: pull-request-available
> Attachments: out.log
>
>
> We've run into an issue with compactions failing in HBase when erasure coding 
> is enabled on a table directory. After digging further I was able to narrow
> it down to the seek + read logic and was able to reproduce the issue with the
> HDFS client only:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.FSDataInputStream;
> public class ReaderRaw {
> public static void main(final String[] args) throws Exception {
> Path p = new Path(args[0]);
> int bufLen = Integer.parseInt(args[1]);
> int sleepDuration = Integer.parseInt(args[2]);
> int countBeforeSleep = Integer.parseInt(args[3]);
> int countAfterSleep = Integer.parseInt(args[4]);
> Configuration conf = new Configuration();
> FSDataInputStream istream = FileSystem.get(conf).open(p);
> byte[] buf = new byte[bufLen];
> int readTotal = 0;
> int count = 0;
> try {
>   while (true) {
> istream.seek(readTotal);
> int bytesRemaining = bufLen;
> int bufOffset = 0;
> while (bytesRemaining > 0) {
>   int nread = istream.read(buf, 0, bufLen);
>   if (nread < 0) {
>   throw new Exception("nread is less than zero");
>   }
>   readTotal += nread;
>   bufOffset += nread;
>   bytesRemaining -= nread;
> }
> count++;
> if (count == countBeforeSleep) {
> System.out.println("sleeping for " + sleepDuration + " 
> milliseconds");
> Thread.sleep(sleepDuration);
> System.out.println("resuming");
> }
> if (count == countBeforeSleep + countAfterSleep) {
> System.out.println("done");
> break;
> }
>   }
> } catch (Exception e) {
> System.out.println("exception on read " + count + " read total " 
> + readTotal);
> throw e;
> }
> }
> }
> {code}
> The issue appears to be due to the fact that datanodes close the connection
> of the EC client if it doesn't fetch the next packet for longer than
> dfs.client.socket-timeout. The EC client doesn't retry and instead assumes
> that those datanodes went away, resulting in a "missing blocks" exception.
> I was able to consistently reproduce with the following arguments:
> {noformat}
> bufLen = 100 (just below 1MB which is the size of the stripe) 
> sleepDuration = (dfs.client.socket-timeout + 1) * 1000 (in our case 11000)
> countBeforeSleep = 1
> countAfterSleep = 7
> {noformat}
> I've attached the entire log output of running the snippet above against an
> erasure-coded file with the RS-3-2-1024k policy. And here are the logs from
> the datanodes disconnecting the client:
> datanode 1:
> {noformat}
> 2020-06-15 19:06:20,697 INFO datanode.DataNode: Likely the client has stopped 
> reading, disconnecting it (datanode-v11-0-hadoop.hadoop:9866:DataXceiver 
> error processing READ_BLOCK operation  src: /10.128.23.40:53748 dst: 
> /10.128.14.46:9866); java.net.SocketTimeoutException: 1 millis timeout 
> while waiti

[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793991#comment-17793991
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

hadoop-yetus commented on PR #6329:
URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1844178741

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m  9s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  9s |  |  hadoop-hdfs-nfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 23s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  80m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6329 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7570b4fcffe0 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fe8553ef2922a26cb218b13c148ec282b510fb1a |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/2/testReport/ |
   | Max. process+thread count | 634 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-nfs U: 
hadoop-hdfs-project/hadoop-hdfs-nfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HDFS-17265) RBF: Throwing an exception prevents the permit from being released when using FairnessPolicyController

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793988#comment-17793988
 ] 

ASF GitHub Bot commented on HDFS-17265:
---

KeeProMise commented on PR #6298:
URL: https://github.com/apache/hadoop/pull/6298#issuecomment-1844143655

   @Hexiaoqiao @goiri @slfan1989 Hi, if there are no more comments here, please help
merge it. Thanks!




> RBF: Throwing an exception prevents the permit from being released when using 
> FairnessPolicyController
> --
>
> Key: HDFS-17265
> URL: https://issues.apache.org/jira/browse/HDFS-17265
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-17265.patch
>
>
> *Bug description*
> When the router uses FairnessPolicyController, each time a request is processed,
> the permit for the ns corresponding to the request is obtained first
> {*}(method acquirePermit){*},
> and then the information about the namenodes for that ns is obtained
> {*}(method getOrderedNamenodes){*}.
> getOrderedNamenodes runs after acquirePermit, so if acquirePermit succeeds
> but getOrderedNamenodes throws an exception, the permit is never released.
>  
> *How to reproduce*
> Use the original code to run the new unit test 
> testReleasedWhenExceptionOccurs in this PR
>  
>  
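
The fix this implies is the standard acquire/try/finally shape. Below is a
self-contained sketch using java.util.concurrent.Semaphore to stand in for the
fairness controller; the class and method names are illustrative, not the
router's actual API:

{code:java}
import java.util.concurrent.Semaphore;

public class PermitReleaseSketch {
  // Stands in for the fairness controller's acquirePermit/releasePermit.
  private final Semaphore permits = new Semaphore(10);

  void invoke(String nsId) throws Exception {
    permits.acquire();            // acquirePermit succeeds here
    try {
      // getOrderedNamenodes and the RPC itself may throw; everything
      // after the acquire must sit inside the try block.
      doRpc(nsId);
    } finally {
      permits.release();          // released on every path, exception or not
    }
  }

  private void doRpc(String nsId) throws Exception {
    // resolve the namenodes for nsId and perform the call
  }
}
{code}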






[jira] [Updated] (HDFS-17262) Fixed the verbose log.warn in DFSUtil.addTransferRateMetric()

2023-12-06 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17262:
---
Summary: Fixed the verbose log.warn in DFSUtil.addTransferRateMetric()  
(was: Transfer rate metric warning log is too verbose)




[jira] [Resolved] (HDFS-17262) Transfer rate metric warning log is too verbose

2023-12-06 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-17262.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed




[jira] [Commented] (HDFS-17262) Transfer rate metric warning log is too verbose

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793983#comment-17793983
 ] 

ASF GitHub Bot commented on HDFS-17262:
---

Hexiaoqiao commented on PR #6290:
URL: https://github.com/apache/hadoop/pull/6290#issuecomment-1844090791

   Committed to trunk. Thanks all for your contributions!







[jira] [Commented] (HDFS-17262) Transfer rate metric warning log is too verbose

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793982#comment-17793982
 ] 

ASF GitHub Bot commented on HDFS-17262:
---

Hexiaoqiao merged PR #6290:
URL: https://github.com/apache/hadoop/pull/6290







[jira] [Updated] (HDFS-17277) Delete invalid code logic in namenode format

2023-12-06 Thread zhangzhanchang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangzhanchang updated HDFS-17277:
--
Summary: Delete invalid code logic in namenode format  (was: Delete invalid 
code logic)

> Delete invalid code logic in namenode format
> 
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
>
> There is invalid logic in the namenode format process.






[jira] [Updated] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread Ruby (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruby updated HDFS-17278:

Issue Type: Bug  (was: New Feature)

> Detect order dependent flakiness in TestViewfsWithNfs3.java under 
> hadoop-hdfs-nfs module
> 
>
> Key: HDFS-17278
> URL: https://issues.apache.org/jira/browse/HDFS-17278
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: openjdk version "17.0.9"
> Apache Maven 3.9.5
>Reporter: Ruby
>Priority: Minor
>  Labels: pull-request-available
> Attachments: failed-1.png, failed-2.png, success.png
>
>
> Here is the CustomTest.java file that I used to run these two tests in order;
> the error can be reproduced by running this CustomTest.java.
> {code:java}
> package org.apache.hadoop.hdfs.nfs.nfs3;
> imp
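
The truncated CustomTest above presumably chains the two classes. A generic
JUnit 4 suite that runs them in that order would look like the following sketch
(not the attached file):

{code:java}
package org.apache.hadoop.hdfs.nfs.nfs3;

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Runs TestDFSClientCache before TestRpcProgramNfs3, which is the
// ordering that reproduces the failures described above.
@RunWith(Suite.class)
@Suite.SuiteClasses({TestDFSClientCache.class, TestRpcProgramNfs3.class})
public class CustomTest {
}
{code}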

[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793973#comment-17793973
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

hadoop-yetus commented on PR #6329:
URL: https://github.com/apache/hadoop/pull/6329#issuecomment-1843977545

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 12s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-hdfs-nfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 21s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  79m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6329 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a3f434c3f0b0 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6fd7ea59d2942de7fa519a128b5303a4babd905f |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/testReport/ |
   | Max. process+thread count | 682 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-nfs U: 
hadoop-hdfs-project/hadoop-hdfs-nfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6329/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Pow

[jira] [Updated] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17278:
--
Labels: pull-request-available  (was: )

> Detect order dependent flakiness in TestViewfsWithNfs3.java under 
> hadoop-hdfs-nfs module
> 
>
> Key: HDFS-17278
> URL: https://issues.apache.org/jira/browse/HDFS-17278
> Project: Hadoop HDFS
>  Issue Type: New Feature
> Environment: openjdk version "17.0.9"
> Apache Maven 3.9.5
>Reporter: Ruby
>Priority: Minor
>  Labels: pull-request-available
> Attachments: failed-1.png, failed-2.png, success.png
>
>
> The order dependent flakiness was detected if the test class 
> TestDFSClientCache.java runs before TestRpcProgramNfs3.java.
> The error message looks like below:
> {code:java}
> [ERROR] Failures: 
> [ERROR]   TestRpcProgramNfs3.testAccess:279 Incorrect return code 
> expected:<0> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testCommit:764 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testCreate:493 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   
> TestRpcProgramNfs3.testEncryptedReadWrite:359->createFileUsingNfs:393 
> Incorrect response:  expected: but 
> was:
> [ERROR]   TestRpcProgramNfs3.testFsinfo:714 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testFsstat:696 Incorrect return code: 
> expected:<0> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testGetattr:205 Incorrect return code 
> expected:<0> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testLookup:249 Incorrect return code 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testMkdir:517 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testPathconf:738 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRead:341 Incorrect return code: expected:<0> 
> but was:<13>
> [ERROR]   TestRpcProgramNfs3.testReaddir:642 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testReaddirplus:666 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testReadlink:297 Incorrect return code: 
> expected:<0> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRemove:570 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRename:618 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testRmdir:594 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testSetattr:225 Incorrect return code 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testSymlink:546 Incorrect return code: 
> expected:<13> but was:<5>
> [ERROR]   TestRpcProgramNfs3.testWrite:468 Incorrect return code: 
> expected:<13> but was:<5>
> [INFO] 
> [ERROR] Tests run: 25, Failures: 20, Errors: 0, Skipped: 0
> [INFO] 
> [ERROR] There are test failures. {code}
> The polluter that led to this flakiness was the test method
> testGetUserGroupInformationSecure() in TestDFSClientCache.java. There was a 
> line 
> {code:java}
> UserGroupInformation.setLoginUser(currentUserUgi);{code}
> which modifies some shared state and resource, something like pre-setup the 
> config. To fix this issue, I added the cleanup methods in 
> TestDFSClientCache.java to reset the UserGroupInformation to ensure the 
> isolation among each test class.
> {code:java}
> @AfterClass
> public static void cleanup() {
>     UserGroupInformation.reset();
> }{code}
> This includes setting
> {code:java}
> authenticationMethod = null;
> conf = null; // set configuration to null
> setLoginUser(null); // reset login user to default null{code}
> among other fields. The reset() method can be found in
> hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java.
> After the fix, the errors no longer occurred and the successful output was:
> {code:java}
> [INFO] ---
> [INFO]  T E S T S
> [INFO] ---
> [INFO] Running org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
> [INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 18.457 s - in org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
> [INFO] 
> [INFO] Results:
> [INFO] 
> [INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
>  
> {code}
> Here is the CustomTest.java file that I used to run these two test classes
> in order; the error can be reproduced by running it.
> {code:java}
> package org.ap

[jira] [Commented] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793958#comment-17793958
 ] 

ASF GitHub Bot commented on HDFS-17278:
---

yijut2 opened a new pull request, #6329:
URL: https://github.com/apache/hadoop/pull/6329

   
   
   ### Description of PR
   Order-dependent flakiness occurs when the test class `TestDFSClientCache.java` runs before `TestRpcProgramNfs3.java`.
   The error messages look like the following:
   ```
   [ERROR] Failures: 
   [ERROR]   TestRpcProgramNfs3.testAccess:279 Incorrect return code 
expected:<0> but was:<13>
   [ERROR]   TestRpcProgramNfs3.testCommit:764 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testCreate:493 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   
TestRpcProgramNfs3.testEncryptedReadWrite:359->createFileUsingNfs:393 Incorrect 
response:  expected: but 
was:
   [ERROR]   TestRpcProgramNfs3.testFsinfo:714 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testFsstat:696 Incorrect return code: 
expected:<0> but was:<13>
   [ERROR]   TestRpcProgramNfs3.testGetattr:205 Incorrect return code 
expected:<0> but was:<13>
   [ERROR]   TestRpcProgramNfs3.testLookup:249 Incorrect return code 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testMkdir:517 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testPathconf:738 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testRead:341 Incorrect return code: 
expected:<0> but was:<13>
   [ERROR]   TestRpcProgramNfs3.testReaddir:642 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testReaddirplus:666 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testReadlink:297 Incorrect return code: 
expected:<0> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testRemove:570 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testRename:618 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testRmdir:594 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testSetattr:225 Incorrect return code 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testSymlink:546 Incorrect return code: 
expected:<13> but was:<5>
   [ERROR]   TestRpcProgramNfs3.testWrite:468 Incorrect return code: 
expected:<13> but was:<5>
   [INFO] 
   [ERROR] Tests run: 25, Failures: 20, Errors: 0, Skipped: 0
   [INFO] 
   [ERROR] There are test failures. 
   ```
   The polluter that caused this flakiness was the test method
   `testGetUserGroupInformationSecure()` in `TestDFSClientCache.java`. It
   contains the line `UserGroupInformation.setLoginUser(currentUserUgi);`,
   which modifies shared, JVM-wide state, effectively pre-configuring the
   security setup. To fix this, I added a cleanup method to
   `TestDFSClientCache.java` that resets the `UserGroupInformation`, ensuring
   isolation between test classes.
   ```
   @AfterClass
   public static void cleanup() {
       UserGroupInformation.reset();
   }
   ```
   This includes setting
   ```
   authenticationMethod = null;
   conf = null; // set configuration to null
   setLoginUser(null); // reset login user to default null
   ```
   among other fields. The `reset()` method can be found in
`hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java`.
   After the fix, the errors no longer occurred and the successful output was:
   ```
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
   [INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
18.457 s - in org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0
   [INFO] 
   [INFO] 

   [INFO] BUILD SUCCESS
   [INFO] --

> Detect order dependent flakiness in TestViewfsWithNfs3.java under 
> hadoop-hdfs-nfs module
> 
>
> Key: HDFS-17278
> URL: https://issues.apache.org/jira/browse/HDFS-17278
> Project: Hadoop HDFS
>  Issue Type: New Feature
> Environment: openjdk version "17.0.9"
> Apache Maven 3.9.5
>Reporter: Ruby
>Priority: Minor
> Attachments: failed-1.png, failed-2.png, success.png
>
>
> Order-dependent flakiness occurs when the test class
> TestDFSClientCache.java runs before TestRpcProgramNfs3.java.
> The er

[jira] [Created] (HDFS-17278) Detect order dependent flakiness in TestViewfsWithNfs3.java under hadoop-hdfs-nfs module

2023-12-06 Thread Ruby (Jira)
Ruby created HDFS-17278:
---

 Summary: Detect order dependent flakiness in 
TestViewfsWithNfs3.java under hadoop-hdfs-nfs module
 Key: HDFS-17278
 URL: https://issues.apache.org/jira/browse/HDFS-17278
 Project: Hadoop HDFS
  Issue Type: New Feature
 Environment: openjdk version "17.0.9"
Apache Maven 3.9.5
Reporter: Ruby
 Attachments: failed-1.png, failed-2.png, success.png

Order-dependent flakiness occurs when the test class
TestDFSClientCache.java runs before TestRpcProgramNfs3.java.

The error messages look like the following:
{code:java}
[ERROR] Failures: 
[ERROR]   TestRpcProgramNfs3.testAccess:279 Incorrect return code expected:<0> 
but was:<13>
[ERROR]   TestRpcProgramNfs3.testCommit:764 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testCreate:493 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testEncryptedReadWrite:359->createFileUsingNfs:393 
Incorrect response:  expected: but 
was:
[ERROR]   TestRpcProgramNfs3.testFsinfo:714 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testFsstat:696 Incorrect return code: expected:<0> 
but was:<13>
[ERROR]   TestRpcProgramNfs3.testGetattr:205 Incorrect return code expected:<0> 
but was:<13>
[ERROR]   TestRpcProgramNfs3.testLookup:249 Incorrect return code expected:<13> 
but was:<5>
[ERROR]   TestRpcProgramNfs3.testMkdir:517 Incorrect return code: expected:<13> 
but was:<5>
[ERROR]   TestRpcProgramNfs3.testPathconf:738 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testRead:341 Incorrect return code: expected:<0> 
but was:<13>
[ERROR]   TestRpcProgramNfs3.testReaddir:642 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testReaddirplus:666 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testReadlink:297 Incorrect return code: 
expected:<0> but was:<5>
[ERROR]   TestRpcProgramNfs3.testRemove:570 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testRename:618 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testRmdir:594 Incorrect return code: expected:<13> 
but was:<5>
[ERROR]   TestRpcProgramNfs3.testSetattr:225 Incorrect return code 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testSymlink:546 Incorrect return code: 
expected:<13> but was:<5>
[ERROR]   TestRpcProgramNfs3.testWrite:468 Incorrect return code: expected:<13> 
but was:<5>
[INFO] 
[ERROR] Tests run: 25, Failures: 20, Errors: 0, Skipped: 0
[INFO] 
[ERROR] There are test failures. {code}
The polluter that caused this flakiness was the test method
testGetUserGroupInformationSecure() in TestDFSClientCache.java. It contains
the line
{code:java}
UserGroupInformation.setLoginUser(currentUserUgi);{code}
which modifies shared, JVM-wide state, effectively pre-configuring the
security setup. To fix this, I added a cleanup method to
TestDFSClientCache.java that resets the UserGroupInformation, ensuring
isolation between test classes.
{code:java}
@AfterClass
public static void cleanup() {
    UserGroupInformation.reset();
}{code}
This includes setting
{code:java}
authenticationMethod = null;
conf = null; // set configuration to null
setLoginUser(null); // reset login user to default null{code}
among other fields. The reset() method can be found in
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java.
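To make the pollution mechanism concrete, here is a minimal sketch of the pattern (the class names, principal string, and println are invented for illustration; only setLoginUser()/reset() come from the actual test and fix):
{code:java}
import org.apache.hadoop.security.UserGroupInformation;
import org.junit.AfterClass;
import org.junit.Test;

// --- PolluterTest.java: runs first, standing in for TestDFSClientCache ---
public class PolluterTest {
  @Test
  public void logsInAsRemoteUser() {
    // Static, JVM-wide state -- the analogue of the polluting line in
    // testGetUserGroupInformationSecure().
    UserGroupInformation ugi =
        UserGroupInformation.createRemoteUser("hdfs/localhost@EXAMPLE.COM");
    UserGroupInformation.setLoginUser(ugi);
  }

  // The fix: without this, the login user survives into the next class.
  @AfterClass
  public static void cleanup() {
    UserGroupInformation.reset();
  }
}

// --- VictimTest.java: runs second, standing in for TestRpcProgramNfs3 ---
public class VictimTest {
  @Test
  public void seesWhicheverLoginUserIsCached() throws Exception {
    // With the reset above this triggers a fresh login; without it, the
    // leaked principal from PolluterTest is returned, which matches the
    // expected:<0> but was:<13> failures above (13 is NFS3ERR_ACCES).
    System.out.println(UserGroupInformation.getLoginUser());
  }
}
{code}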

After the fix, the errors no longer occurred and the successful output was:
{code:java}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
[INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.457 
s - in org.apache.hadoop.hdfs.nfs.nfs3.CustomTest
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 25, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] BUILD SUCCESS
[INFO]  
{code}

Here is the CustomTest.java file that I used to run these two test classes in
order; the error can be reproduced by running it.
{code:java}
package org.apache.hadoop.hdfs.nfs.nfs3;

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({
    TestDFSClientCache.class,
    TestRpcProgramNfs3.class
})
public class CustomTest {} {code}
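For reference, one way to run just this suite is Maven Surefire's test filter, e.g. {{mvn test -Dtest=CustomTest -pl hadoop-hdfs-project/hadoop-hdfs-nfs}} from the source root; the exact invocation is an assumption about a standard Surefire setup, so adjust it to the local build.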



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17276) The nn fetch editlog forbidden in kerberos environment

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793860#comment-17793860
 ] 

ASF GitHub Bot commented on HDFS-17276:
---

hadoop-yetus commented on PR #6326:
URL: https://github.com/apache/hadoop/pull/6326#issuecomment-1843372230

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 192m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 279m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.web.TestWebHdfsTokens |
   |   | hadoop.hdfs.qjournal.server.TestGetJournalEditServlet |
   |   | hadoop.hdfs.server.common.TestJspHelper |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6326 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2e1184921843 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a9147ac02d063880895857f8e1062e3a0b54823a |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6326/1/test

[jira] [Commented] (HDFS-17277) Delete invalid code logic

2023-12-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793772#comment-17793772
 ] 

Steve Loughran commented on HDFS-17277:
---

# Moved to HDFS.
# Can you give the JIRA a title that says where you are deleting invalid
code? We've quite a lot of it, after all...

> Delete invalid code logic
> -
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
>
> There is invalid logic in the namenode format process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Moved] (HDFS-17277) Delete invalid code logic

2023-12-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran moved HADOOP-19002 to HDFS-17277:


Key: HDFS-17277  (was: HADOOP-19002)
Project: Hadoop HDFS  (was: Hadoop Common)

> Delete invalid code logic
> -
>
> Key: HDFS-17277
> URL: https://issues.apache.org/jira/browse/HDFS-17277
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhangzhanchang
>Priority: Minor
>  Labels: pull-request-available
>
> There is invalid logic in the namenode format process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17270) RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793723#comment-17793723
 ] 

ASF GitHub Bot commented on HDFS-17270:
---

Hexiaoqiao commented on PR #6315:
URL: https://github.com/apache/hadoop/pull/6315#issuecomment-1842896978

   Committed to trunk. Thanks @ThinkerLei and @zhangshuyan0 




> RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client  to 
> get token in some case
> --
>
> Key: HDFS-17270
> URL: https://issues.apache.org/jira/browse/HDFS-17270
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Attachments: CuratorFrameworkException
>
>
> We use CuratorFramework to simplify using ZooKeeper in
> ZKDelegationTokenSecretManagerImpl, and we always hold the same
> zookeeperClient after ZKDelegationTokenSecretManagerImpl is initialized. But
> in some cases, such as a network problem, CuratorFramework may close the
> current zookeeperClient and create a new one. In that case, we would use a
> zkclient that has already been closed to get a token. We encountered this
> situation in our cluster; the exception information is in the attachment.
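As an aside, a minimal sketch of the stale-client pattern being described (the class and method names are invented for illustration; this is not the actual ZKDelegationTokenSecretManagerImpl code):
{code:java}
import org.apache.curator.framework.CuratorFramework;

// Sketch only: why caching the raw ZooKeeper handle is unsafe.
class TokenReader {
  private final CuratorFramework curator; // long-lived, Curator-managed

  TokenReader(CuratorFramework curator) {
    this.curator = curator;
  }

  // Anti-pattern (the bug): fetch the underlying client once at init,
  // e.g. curator.getZookeeperClient().getZooKeeper(), and keep reusing
  // it. After a network problem Curator may close that client and
  // create a new one, leaving the cached reference on a closed session.

  // Safer pattern (the spirit of the fix): route every read through the
  // CuratorFramework facade so a recreated client is picked up.
  byte[] readToken(String path) throws Exception {
    return curator.getData().forPath(path);
  }
}
{code}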



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17270) RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case

2023-12-06 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17270:
---
Component/s: rbf

> RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client  to 
> get token in some case
> --
>
> Key: HDFS-17270
> URL: https://issues.apache.org/jira/browse/HDFS-17270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: CuratorFrameworkException
>
>
> We use CuratorFramework to simplify using ZooKeeper in
> ZKDelegationTokenSecretManagerImpl, and we always hold the same
> zookeeperClient after ZKDelegationTokenSecretManagerImpl is initialized. But
> in some cases, such as a network problem, CuratorFramework may close the
> current zookeeperClient and create a new one. In that case, we would use a
> zkclient that has already been closed to get a token. We encountered this
> situation in our cluster; the exception information is in the attachment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17270) RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case

2023-12-06 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-17270.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client  to 
> get token in some case
> --
>
> Key: HDFS-17270
> URL: https://issues.apache.org/jira/browse/HDFS-17270
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: CuratorFrameworkException
>
>
> We use CuratorFramework to simplify using ZooKeeper in
> ZKDelegationTokenSecretManagerImpl, and we always hold the same
> zookeeperClient after ZKDelegationTokenSecretManagerImpl is initialized. But
> in some cases, such as a network problem, CuratorFramework may close the
> current zookeeperClient and create a new one. In that case, we would use a
> zkclient that has already been closed to get a token. We encountered this
> situation in our cluster; the exception information is in the attachment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17270) RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793722#comment-17793722
 ] 

ASF GitHub Bot commented on HDFS-17270:
---

Hexiaoqiao merged PR #6315:
URL: https://github.com/apache/hadoop/pull/6315




> RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client  to 
> get token in some case
> --
>
> Key: HDFS-17270
> URL: https://issues.apache.org/jira/browse/HDFS-17270
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Attachments: CuratorFrameworkException
>
>
> We use CuratorFramework to simplify using ZooKeeper in
> ZKDelegationTokenSecretManagerImpl, and we always hold the same
> zookeeperClient after ZKDelegationTokenSecretManagerImpl is initialized. But
> in some cases, such as a network problem, CuratorFramework may close the
> current zookeeperClient and create a new one. In that case, we would use a
> zkclient that has already been closed to get a token. We encountered this
> situation in our cluster; the exception information is in the attachment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17270) RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client to get token in some case

2023-12-06 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17270:
---
Summary: RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper 
client  to get token in some case  (was:  Fix 
ZKDelegationTokenSecretManagerImpl use closed zookeeper Client  to get token in 
some case )

> RBF: Fix ZKDelegationTokenSecretManagerImpl use closed zookeeper client  to 
> get token in some case
> --
>
> Key: HDFS-17270
> URL: https://issues.apache.org/jira/browse/HDFS-17270
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Attachments: CuratorFrameworkException
>
>
> We use CuratorFramework to simplify using ZooKeeper in
> ZKDelegationTokenSecretManagerImpl, and we always hold the same
> zookeeperClient after ZKDelegationTokenSecretManagerImpl is initialized. But
> in some cases, such as a network problem, CuratorFramework may close the
> current zookeeperClient and create a new one. In that case, we would use a
> zkclient that has already been closed to get a token. We encountered this
> situation in our cluster; the exception information is in the attachment.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17276) The nn fetch editlog forbidden in kerberos environment

2023-12-06 Thread kuper (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kuper updated HDFS-17276:
-
Description: 
* In a Kerberos environment, the namenode cannot fetch editlog from journalnode 
because the request is rejected (403).  !image-2023-12-06-20-21-03-557.png!
 * GetJournalEditServlet checks whether the request's username meets the 
requirements via the isValidRequestor function. After HDFS-16686 was merged, 
remotePrincipal became ugi.getUserName().
 * In a Kerberos environment, ugi.getUserName() obtains the username from 
request.getRemoteUser() via DfsServlet's getUGI, and this username is not a 
full principal name.
 * Therefore, the obtained username looks like namenode01 instead of 
namenode01/hos...@realm.tld, which means it fails the isValidRequestor check 
(see the sketch below).  !image-2023-12-06-20-21-46-825.png!
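To illustrate the mismatch, a tiny sketch (the string values are invented examples, and this is a simplification, not the actual isValidRequestor implementation):
{code:java}
public class PrincipalCheckSketch {
  public static void main(String[] args) {
    // The servlet expects to see a full Kerberos principal.
    String allowedPrincipal = "nn/host01@REALM.TLD"; // example value

    // Before HDFS-16686 the comparison saw the full principal; afterwards
    // it sees the short name that request.getRemoteUser() yields.
    String remoteBefore = "nn/host01@REALM.TLD";
    String remoteAfter = "nn";

    System.out.println(remoteBefore.equals(allowedPrincipal)); // true  -> allowed
    System.out.println(remoteAfter.equals(allowedPrincipal));  // false -> 403
  }
}
{code}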

*reproduction*
 * In TestGetJournalEditServlet, add testSecurityRequestNameNode:

{code:java}
@Test
public void testSecurityRequestNameNode() throws IOException, ServletException {
  // Test: Make a request from a namenode
  CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
  UserGroupInformation.setConfiguration(CONF);
  
  HttpServletRequest request = mock(HttpServletRequest.class);
  when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
  when(request.getRemoteUser()).thenReturn("jn");
  boolean isValid = SERVLET.isValidRequestor(request, CONF);
  
  assertThat(isValid).isTrue();
} {code}

  was:
* In a Kerberos environment, the namenode cannot fetch editlog from journalnode 
because the request is rejected (403). !image-2023-12-06-20-21-03-557.png!
 * GetJournalEditServlet checks if the request's username meets the 
requirements through the isValidRequestor function. After HDFS-16686 is merged, 
remotePrincipal becomes ugi.getUserName().
 * In a Kerberos environment, ugi.getUserName() gets the 
request.getRemoteUser() via DfsServlet's getUGI to get the username, and this 
username is not a full name.
 * Therefore, the obtained username is similar to namenode01 instead of 
namenode01/host01@REALM.TLD, which means it fails to pass the isValidRequestor 
check. !image-2023-12-06-20-21-46-825.png!

*reproduction*
 * In the TestGetJournalEditServlet add testSecurityRequestNameNode

{code:java}
@Test
public void testSecurityRequestNameNode() throws IOException, ServletException {
  // Test: Make a request from a namenode
  CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
  UserGroupInformation.setConfiguration(CONF);
  
  HttpServletRequest request = mock(HttpServletRequest.class);
  when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
  when(request.getRemoteUser()).thenReturn("jn");
  boolean isValid = SERVLET.isValidRequestor(request, CONF);
  
  assertThat(isValid).isTrue();
} {code}


> The nn fetch editlog forbidden in kerberos environment
> --
>
> Key: HDFS-17276
> URL: https://issues.apache.org/jira/browse/HDFS-17276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm, security
>Affects Versions: 3.3.5, 3.3.6
>Reporter: kuper
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-12-06-20-21-03-557.png, 
> image-2023-12-06-20-21-46-825.png
>
>
> * In a Kerberos environment, the namenode cannot fetch editlog from 
> journalnode because the request is rejected (403).  
> !image-2023-12-06-20-21-03-557.png!
>  * GetJournalEditServlet checks if the request's username meets the 
> requirements through the isValidRequestor function. After HDFS-16686 is 
> merged, remotePrincipal becomes ugi.getUserName().
>  * In a Kerberos environment, ugi.getUserName() gets the 
> request.getRemoteUser() via DfsServlet's getUGI to get the username, and this 
> username is not a full name.
>  * Therefore, the obtained username is similar to namenode01 instead of 
> namenode01/hos...@realm.tld, which means it fails to pass the isValidRequestor 
> check.  !image-2023-12-06-20-21-46-825.png!
> *reproduction*
>  * In the TestGetJournalEditServlet add testSecurityRequestNameNode
> {code:java}
> @Test
> public void testSecurityRequestNameNode() throws IOException, 
> ServletException {
>   // Test: Make a request from a namenode
>   CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
>   UserGroupInformation.setConfiguration(CONF);
>   
>   HttpServletRequest request = mock(HttpServletRequest.class);
>   when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
>   when(request.getRemoteUser()).thenReturn("jn");
>   boolean isValid = SERVLET.isValidRequestor(request, CONF);
>   
>   assertThat(isValid).isTrue();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HDFS-17276) The nn fetch editlog forbidden in kerberos environment

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793698#comment-17793698
 ] 

ASF GitHub Bot commented on HDFS-17276:
---

gp1314 opened a new pull request, #6326:
URL: https://github.com/apache/hadoop/pull/6326

   
   
   ### Description of PR
   
   - In a Kerberos environment, the namenode cannot fetch editlog from 
journalnode because the request is rejected (403). 
   
![image-2023-12-05-20-59-33-728](https://github.com/apache/hadoop/assets/22268305/f19c2518-3fa9-4ceb-8570-63b0b38f682a)
   
   - GetJournalEditServlet checks if the request's username meets the 
requirements through the isValidRequestor function. After 
[HDFS-16686](https://issues.apache.org/jira/browse/HDFS-16686) is merged, 
remotePrincipal becomes ugi.getUserName().
   
   - In a Kerberos environment, ugi.getUserName() obtains the username from 
request.getRemoteUser() via DfsServlet's getUGI, and this username is not a 
full principal name.
   
   - Therefore, the obtained username is similar to namenode01 instead of 
namenode01/host01@REALM.TLD, which means it fails to pass the isValidRequestor 
check. 
   
![image-2023-12-05-21-05-49-180](https://github.com/apache/hadoop/assets/22268305/1a50c620-c8a3-4499-bdfe-2b064b709d9f)
   
   
   **reproduction**
   
   - In TestGetJournalEditServlet, add testSecurityRequestNameNode:
   ```
   @Test
   public void testSecurityRequestNameNode() throws IOException, 
ServletException {
 // Test: Make a request from a namenode
 CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 UserGroupInformation.setConfiguration(CONF);
 
  HttpServletRequest request = mock(HttpServletRequest.class);
  when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
 when(request.getRemoteUser()).thenReturn("jn");
 boolean isValid = SERVLET.isValidRequestor(request, CONF);
 
 assertThat(isValid).isTrue();
   } 
   ```
   
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> The nn fetch editlog forbidden in kerberos environment
> --
>
> Key: HDFS-17276
> URL: https://issues.apache.org/jira/browse/HDFS-17276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm, security
>Affects Versions: 3.3.5, 3.3.6
>Reporter: kuper
>Priority: Major
> Attachments: image-2023-12-06-20-21-03-557.png, 
> image-2023-12-06-20-21-46-825.png
>
>
> * In a Kerberos environment, the namenode cannot fetch editlog from 
> journalnode because the request is rejected (403). 
> !image-2023-12-06-20-21-03-557.png!
>  * GetJournalEditServlet checks if the request's username meets the 
> requirements through the isValidRequestor function. After HDFS-16686 is 
> merged, remotePrincipal becomes ugi.getUserName().
>  * In a Kerberos environment, ugi.getUserName() gets the 
> request.getRemoteUser() via DfsServlet's getUGI to get the username, and this 
> username is not a full name.
>  * Therefore, the obtained username is similar to namenode01 instead of 
> namenode01/host01@REALM.TLD, which means it fails to pass the 
> isValidRequestor check. !image-2023-12-06-20-21-46-825.png!
> *reproduction*
>  * In the TestGetJournalEditServlet add testSecurityRequestNameNode
> {code:java}
> @Test
> public void testSecurityRequestNameNode() throws IOException, 
> ServletException {
>   // Test: Make a request from a namenode
>   CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
>   UserGroupInformation.setConfiguration(CONF);
>   
>   HttpServletRequest request = mock(HttpServletRequest.class);
>   when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
>   when(request.getRemoteUser()).thenReturn("jn");
>   boolean isValid = SERVLET.isValidRequestor(request, CONF);
>   
>   assertThat(isValid).isTrue();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17276) The nn fetch editlog forbidden in kerberos environment

2023-12-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17276:
--
Labels: pull-request-available  (was: )

> The nn fetch editlog forbidden in kerberos environment
> --
>
> Key: HDFS-17276
> URL: https://issues.apache.org/jira/browse/HDFS-17276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm, security
>Affects Versions: 3.3.5, 3.3.6
>Reporter: kuper
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-12-06-20-21-03-557.png, 
> image-2023-12-06-20-21-46-825.png
>
>
> * In a Kerberos environment, the namenode cannot fetch editlog from 
> journalnode because the request is rejected (403). 
> !image-2023-12-06-20-21-03-557.png!
>  * GetJournalEditServlet checks if the request's username meets the 
> requirements through the isValidRequestor function. After HDFS-16686 is 
> merged, remotePrincipal becomes ugi.getUserName().
>  * In a Kerberos environment, ugi.getUserName() gets the 
> request.getRemoteUser() via DfsServlet's getUGI to get the username, and this 
> username is not a full name.
>  * Therefore, the obtained username is similar to namenode01 instead of 
> namenode01/host01@REALM.TLD, which means it fails to pass the 
> isValidRequestor check. !image-2023-12-06-20-21-46-825.png!
> *reproduction*
>  * In the TestGetJournalEditServlet add testSecurityRequestNameNode
> {code:java}
> @Test
> public void testSecurityRequestNameNode() throws IOException, 
> ServletException {
>   // Test: Make a request from a namenode
>   CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
>   UserGroupInformation.setConfiguration(CONF);
>   
>   HttpServletRequest request = mock(HttpServletRequest.class);
>   when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
>   when(request.getRemoteUser()).thenReturn("jn");
>   boolean isValid = SERVLET.isValidRequestor(request, CONF);
>   
>   assertThat(isValid).isTrue();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17276) The nn fetch editlog forbidden in kerberos environment

2023-12-06 Thread kuper (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kuper updated HDFS-17276:
-
Summary: The nn fetch editlog forbidden in kerberos environment  (was: The 
nn fetch editlog failed in kerberos environment)

> The nn fetch editlog forbidden in kerberos environment
> --
>
> Key: HDFS-17276
> URL: https://issues.apache.org/jira/browse/HDFS-17276
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: qjm, security
>Affects Versions: 3.3.5, 3.3.6
>Reporter: kuper
>Priority: Major
> Attachments: image-2023-12-06-20-21-03-557.png, 
> image-2023-12-06-20-21-46-825.png
>
>
> * In a Kerberos environment, the namenode cannot fetch editlog from 
> journalnode because the request is rejected (403). 
> !image-2023-12-06-20-21-03-557.png!
>  * GetJournalEditServlet checks if the request's username meets the 
> requirements through the isValidRequestor function. After HDFS-16686 is 
> merged, remotePrincipal becomes ugi.getUserName().
>  * In a Kerberos environment, ugi.getUserName() gets the 
> request.getRemoteUser() via DfsServlet's getUGI to get the username, and this 
> username is not a full name.
>  * Therefore, the obtained username is similar to namenode01 instead of 
> namenode01/host01@REALM.TLD, which means it fails to pass the 
> isValidRequestor check. !image-2023-12-06-20-21-46-825.png!
> *reproduction*
>  * In the TestGetJournalEditServlet add testSecurityRequestNameNode
> {code:java}
> @Test
> public void testSecurityRequestNameNode() throws IOException, 
> ServletException {
>   // Test: Make a request from a namenode
>   CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
>   UserGroupInformation.setConfiguration(CONF);
>   
>   HttpServletRequest request = mock(HttpServletRequest.class);
>   when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
>   when(request.getRemoteUser()).thenReturn("jn");
>   boolean isValid = SERVLET.isValidRequestor(request, CONF);
>   
>   assertThat(isValid).isTrue();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17269) RBF: Listing trash directory should return subdirs from all subclusters.

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793680#comment-17793680
 ] 

ASF GitHub Bot commented on HDFS-17269:
---

hadoop-yetus commented on PR #6312:
URL: https://github.com/apache/hadoop/pull/6312#issuecomment-1842774757

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   3m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 10s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 51s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 23s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 104m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6312 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3aa177a5fab8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6deca602b618ed759badba4e6024d490f3b18110 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/3/testReport/ |
   | Max. process+thread count | 2311 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> RBF: Listing trash directory should return subdirs from all subclu

[jira] [Commented] (HDFS-17269) RBF: Listing trash directory should return subdirs from all subclusters.

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793679#comment-17793679
 ] 

ASF GitHub Bot commented on HDFS-17269:
---

hadoop-yetus commented on PR #6312:
URL: https://github.com/apache/hadoop/pull/6312#issuecomment-1842774062

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m  9s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 45s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 107m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6312 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 067f0218bc0c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6deca602b618ed759badba4e6024d490f3b18110 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/2/testReport/ |
   | Max. process+thread count | 2305 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6312/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> RBF: Listing trash directory should return subdirs from all subclu

[jira] [Created] (HDFS-17276) The nn fetch editlog failed in kerberos environment

2023-12-06 Thread kuper (Jira)
kuper created HDFS-17276:


 Summary: The nn fetch editlog failed in kerberos environment
 Key: HDFS-17276
 URL: https://issues.apache.org/jira/browse/HDFS-17276
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: qjm, security
Affects Versions: 3.3.6, 3.3.5
Reporter: kuper
 Attachments: image-2023-12-06-20-21-03-557.png, 
image-2023-12-06-20-21-46-825.png

* In a Kerberos environment, the namenode cannot fetch editlog from journalnode 
because the request is rejected (403). !image-2023-12-06-20-21-03-557.png!
 * GetJournalEditServlet checks if the request's username meets the 
requirements through the isValidRequestor function. After HDFS-16686 is merged, 
remotePrincipal becomes ugi.getUserName().
 * In a Kerberos environment, ugi.getUserName() obtains the username from 
request.getRemoteUser() via DfsServlet's getUGI, and this username is not a 
full principal name.
 * Therefore, the obtained username looks like namenode01 instead of 
namenode01/host01@REALM.TLD, which means it fails the isValidRequestor 
check. !image-2023-12-06-20-21-46-825.png!

*reproduction*
 * In TestGetJournalEditServlet, add testSecurityRequestNameNode:

{code:java}
@Test
public void testSecurityRequestNameNode() throws IOException, ServletException {
  // Test: Make a request from a namenode
  CONF.set(HADOOP_SECURITY_AUTHENTICATION, "kerberos");
  UserGroupInformation.setConfiguration(CONF);
  
  HttpServletRequest request = mock(HttpServletRequest.class);
  when(request.getParameter(UserParam.NAME)).thenReturn("nn/localh...@realm.tld");
  when(request.getRemoteUser()).thenReturn("jn");
  boolean isValid = SERVLET.isValidRequestor(request, CONF);
  
  assertThat(isValid).isTrue();
} {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17272) NNThroughputBenchmark should support specifying the base directory for multi-client test

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793665#comment-17793665
 ] 

ASF GitHub Bot commented on HDFS-17272:
---

hadoop-yetus commented on PR #6319:
URL: https://github.com/apache/hadoop/pull/6319#issuecomment-1842732133

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 30s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  16m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  14m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  15m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  14m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m  7s |  |  root: The patch generated 
0 new + 117 unchanged - 9 fixed = 117 total (was 126)  |
   | +1 :green_heart: |  mvnsite  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   6m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 14s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 264m  5s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 503m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6319/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6319 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 5f859dc6a525 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 14566b74d274a286f28f1efa751c3f2941e74d61 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6319/5/testReport/ |
   | Max. process+thread count | 3476 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hado

[jira] [Commented] (HDFS-15413) DFSStripedInputStream throws exception when datanodes close idle connections

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793662#comment-17793662
 ] 

ASF GitHub Bot commented on HDFS-15413:
---

hadoop-yetus commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1842713669

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   2m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   2m 42s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer to https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 35s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 45 unchanged - 0 fixed = 
46 total (was 45)  |
   | +1 :green_heart: |  mvnsite  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 189m 58s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 293m 19s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithTimeout |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 807603bf2dcf 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:

[jira] [Commented] (HDFS-17269) RBF: Listing trash directory should return subdirs from all subclusters.

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793629#comment-17793629
 ] 

ASF GitHub Bot commented on HDFS-17269:
---

LiuGuH commented on code in PR #6312:
URL: https://github.com/apache/hadoop/pull/6312#discussion_r1417093166


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterTrash.java:
##
@@ -282,6 +282,13 @@ public void testMultipleMountPoint() throws IOException,
 fileStatuses = fs.listStatus(new Path("/user/test-trash/.Trash/Current/" + 
MOUNT_POINT2));
 assertEquals(0, fileStatuses.length);
 
+// In ns1, create a path with a timestamp under .Trash to simulate a
+// trash checkpoint.
+String trashPath = "/user/test-trash/.Trash/" + System.currentTimeMillis();
+client1.mkdirs(trashPath, new FsPermission("770"), true);
+fileStatuses = fs.listStatus(new Path("/user/test-trash/.Trash"));

Review Comment:
   Thanks for the review. For this case it is enough; the other scenarios 
would be more complicated.





> RBF: Listing trash directory should return subdirs from all subclusters.
> 
>
> Key: HDFS-17269
> URL: https://issues.apache.org/jira/browse/HDFS-17269
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Minor
>  Labels: pull-request-available
>
> Same scenario as HDFS-17263.
> If the user trash config fs.trash.checkpoint.interval is set to 10 minutes 
> on the namenodes, the trash root dir /user/$USER/.Trash/Current is renamed 
> every 10 minutes to /user/$USER/.Trash/timestamp.
>  
> When the user runs ls /user/$USER/.Trash, it should return the following:
> /user/$USER/.Trash/Current
> /user/$USER/.Trash/timestamp (this entry is invisible today)
>  
> So listing the trash root dir should show the trash subdirs from every 
> nameservice in which the user has a mount point; a sketch of the idea 
> follows this message.
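
A minimal sketch of the idea described above, assuming one FileSystem handle 
per nameservice the user has a mount point in; the class and method names 
here are illustrative only, not the actual RBF router code from the PR:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MergedTrashListing {
  /**
   * Lists the trash root on every subcluster and concatenates the results,
   * so timestamped checkpoint dirs (e.g. .Trash/1701849600000) created by
   * the fs.trash.checkpoint.interval rename on any nameservice become
   * visible alongside .Trash/Current.
   */
  static List<FileStatus> listTrashAcrossNameservices(
      List<FileSystem> nameserviceClients, Path trashRoot) throws IOException {
    List<FileStatus> merged = new ArrayList<>();
    for (FileSystem fs : nameserviceClients) {
      if (fs.exists(trashRoot)) {
        for (FileStatus status : fs.listStatus(trashRoot)) {
          merged.add(status); // includes Current and timestamped checkpoints
        }
      }
    }
    return merged;
  }
}
```

A real implementation would also need to deduplicate entries that exist in 
more than one subcluster, which is part of what the PR under review handles.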



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17262) Transfer rate metric warning log is too verbose

2023-12-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17793589#comment-17793589
 ] 

ASF GitHub Bot commented on HDFS-17262:
---

hadoop-yetus commented on PR #6290:
URL: https://github.com/apache/hadoop/pull/6290#issuecomment-1842500384

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 126 unchanged - 2 
fixed = 126 total (was 128)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 185m  8s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 270m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6290/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6290 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 571c7939a280 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c18693adfb9f77959f4732bda9b0b65c2af160b1 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6290/5/testReport/ |
   | Max. process+thread count | 4277 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6290/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.