[jira] [Commented] (HDFS-17370) Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf

2024-03-21 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829747#comment-17829747
 ] 

Ayush Saxena commented on HDFS-17370:
-

Thanx [~tasanuma] for the fix; I think there is one more problem:

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test (default-test) on 
project hadoop-hdfs-rbf: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test failed: 
java.lang.NoClassDefFoundError: 
org/junit/platform/launcher/core/LauncherFactory: 
org.junit.platform.launcher.core.LauncherFactory -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-rbf
{noformat}

Refs:
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6637/1/artifact/out/patch-unit-root.txt
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6638/1/artifact/out/patch-unit-root.txt

It happened on one of my PRs as well; something like this fixed it:
https://github.com/apache/hadoop/pull/6629/files#diff-dbf6ea05af8f5d11e74cd87e059a361dd8b06d0f12f1d13ea9899fbbc4ffbc48R185-R189
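
For reference, the usual fix (a sketch of the approach, not the exact diff in the linked PR) is to declare the JUnit 5 engine and launcher artifacts explicitly in the module's pom.xml, since surefire needs LauncherFactory from junit-platform-launcher on the test classpath to run parameterized tests:

```xml
<!-- Sketch only: versions are assumed to be managed by the parent pom. -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-engine</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-launcher</artifactId>
  <scope>test</scope>
</dependency>
```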


> Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf
> ---
>
> Key: HDFS-17370
> URL: https://issues.apache.org/jira/browse/HDFS-17370
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.4.1, 3.5.0
>
>
> We need to add junit-jupiter-engine dependency for running parameterized 
> tests in hadoop-hdfs-rbf.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829746#comment-17829746
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

hadoop-yetus commented on PR #6653:
URL: https://github.com/apache/hadoop/pull/6653#issuecomment-2014358463

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 28s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 84 new + 164 unchanged 
- 1 fixed = 248 total (was 165)  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 198m 10s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6653 |
   | JIRA Issue | HDFS-17408 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f3a233d5d866 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f298262c1275b84f8916a370bed102166fb31661 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Updated] (HDFS-17438) RBF: The latest STANDBY and UNAVAILABLE nn should be the lowest priority.

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17438:
--
Labels: pull-request-available  (was: )

> RBF: The latest STANDBY and UNAVAILABLE nn should be the lowest priority.
> -
>
> Key: HDFS-17438
> URL: https://issues.apache.org/jira/browse/HDFS-17438
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Commented] (HDFS-17438) RBF: The latest STANDBY and UNAVAILABLE nn should be the lowest priority.

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829735#comment-17829735
 ] 

ASF GitHub Bot commented on HDFS-17438:
---

KeeProMise opened a new pull request, #6655:
URL: https://github.com/apache/hadoop/pull/6655

   …ity.
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or description of the PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> RBF: The latest STANDBY and UNAVAILABLE nn should be the lowest priority.
> -
>
> Key: HDFS-17438
> URL: https://issues.apache.org/jira/browse/HDFS-17438
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jian Zhang
>Priority: Major
>







[jira] [Created] (HDFS-17438) RBF: The latest STANDBY and UNAVAILABLE nn should be the lowest priority.

2024-03-21 Thread Jian Zhang (Jira)
Jian Zhang created HDFS-17438:
-

 Summary: RBF: The latest STANDBY and UNAVAILABLE nn should be the 
lowest priority.
 Key: HDFS-17438
 URL: https://issues.apache.org/jira/browse/HDFS-17438
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jian Zhang









[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-17436:

Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
>  Labels: patch, pull-request-available
> Fix For: 3.3.0
>
> Attachments: 
> HDFS-17436__Supplement_log_information_for_AccessControlException.patch
>
>
> In an environment with the *Ranger-HDFS* plugin enabled, I looked at the 
> log of an *AccessControlException* raised by a *du* command and found that 
> the printed log information is inaccurate: the original 
> AccessControlException is swallowed by checkPermission, which makes it hard 
> to judge what actually went wrong. At least part of the original exception 
> information should be printed.
> Later, the *inode* information in the original AccessControlException made 
> me realize that the Ranger-HDFS plug-in in this environment does not 
> incorporate RANGER-2297: the inode currently logged is not the inode 
> actually *passed* to the authorizer. If an external authorizer *has not 
> adjusted its authorization logic* per HDFS-12130, it becomes even harder to 
> locate the real cause of the problem, so this part of the original log 
> information should be surfaced.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The original AccessControlException information printed:
> {code:java}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>  {code}
> Comparing the two messages above shows that the inode information and the 
> exception stack currently printed are not accurate.
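
The fix direction described above can be sketched as follows. All names here are hypothetical, not the actual FSPermissionChecker code; the point is to rethrow with the original exception attached, so the inode detail from the inner check survives:

```java
// Hypothetical sketch: preserving the original AccessControlException.
// Class and method names are illustrative, not Hadoop's actual code.
public class AccessCheckSketch {
    static class AccessControlException extends Exception {
        AccessControlException(String msg) { super(msg); }
        AccessControlException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Inner check: knows the exact inode component that failed.
    static void check() throws AccessControlException {
        throw new AccessControlException(
            "Permission denied: inode=\"dt=2024-01-17\":hive:hadoop:drwxrwx---");
    }

    // Outer check: instead of throwing a fresh exception that only names the
    // full requested path, wrap the original so its details are not lost.
    static void checkPermission(String path) throws AccessControlException {
        try {
            check();
        } catch (AccessControlException ace) {
            throw new AccessControlException(
                "Permission denied while checking " + path + ": " + ace.getMessage(), ace);
        }
    }

    public static void main(String[] args) {
        try {
            checkPermission("/warehouse/tablespace/managed/hive/test.db/stu");
        } catch (AccessControlException ace) {
            // The failing inode from the inner check survives in message and cause.
            if (ace.getCause() == null || !ace.getMessage().contains("dt=2024-01-17")) {
                throw new AssertionError("original exception detail was lost");
            }
            System.out.println(ace.getMessage());
        }
    }
}
```

With the cause chained, the stack trace also shows the original frame (the inner check) rather than only the outer checkPermission frame.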






[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829732#comment-17829732
 ] 

ASF GitHub Bot commented on HDFS-17436:
---

ZanderXu merged PR #6651:
URL: https://github.com/apache/hadoop/pull/6651




> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
>  Labels: patch, pull-request-available
> Fix For: 3.3.0
>
> Attachments: 
> HDFS-17436__Supplement_log_information_for_AccessControlException.patch
>
>






[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829733#comment-17829733
 ] 

ASF GitHub Bot commented on HDFS-17436:
---

ZanderXu commented on PR #6651:
URL: https://github.com/apache/hadoop/pull/6651#issuecomment-2014276916

   merged. Thanks @XbaoWu for your contribution. Thanks @slfan1989 
@hiwangzhihui for your review.




> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
>  Labels: patch, pull-request-available
> Fix For: 3.3.0
>
> Attachments: 
> HDFS-17436__Supplement_log_information_for_AccessControlException.patch
>
>






[jira] [Commented] (HDFS-17415) [FGL] RPCs in NamenodeProtocol support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829729#comment-17829729
 ] 

ASF GitHub Bot commented on HDFS-17415:
---

ZanderXu opened a new pull request, #6654:
URL: https://github.com/apache/hadoop/pull/6654

   [FGL] RPCs in NamenodeProtocol support fine-grained lock.
   
   - getBlocks
   - getBlockKeys
   - getTransactionID
   - getMostRecentCheckpointTxId
   - rollEditLog
   - versionRequest
   - errorReport
   - registerSubordinateNamenode
   - startCheckpoint
   - endCheckpoint
   - getEditLogManifest
   - isUpgradeFinalized
   - isRollingUpgrade




> [FGL] RPCs in NamenodeProtocol support fine-grained lock
> 
>
> Key: HDFS-17415
> URL: https://issues.apache.org/jira/browse/HDFS-17415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> [FGL] RPCs in NamenodeProtocol support fine-grained lock.
>  * getBlocks
>  * getBlockKeys
>  * getTransactionID
>  * getMostRecentCheckpointTxId
>  * rollEditLog
>  * versionRequest
>  * errorReport
>  * registerSubordinateNamenode
>  * startCheckpoint
>  * endCheckpoint
>  * getEditLogManifest
>  * isUpgradeFinalized
>  * isRollingUpgrade






[jira] [Updated] (HDFS-17415) [FGL] RPCs in NamenodeProtocol support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17415:
--
Labels: pull-request-available  (was: )

> [FGL] RPCs in NamenodeProtocol support fine-grained lock
> 
>
> Key: HDFS-17415
> URL: https://issues.apache.org/jira/browse/HDFS-17415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Updated] (HDFS-17415) [FGL] RPCs in NamenodeProtocol support fine-grained lock

2024-03-21 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-17415:

Description: 
[FGL] RPCs in NamenodeProtocol support fine-grained lock.
 * getBlocks
 * getBlockKeys
 * getTransactionID
 * getMostRecentCheckpointTxId
 * rollEditLog
 * versionRequest
 * errorReport
 * registerSubordinateNamenode
 * startCheckpoint
 * endCheckpoint
 * getEditLogManifest
 * isUpgradeFinalized
 * isRollingUpgrade

  was:
[FGL] RPCs in NamenodeProtocol support fine-grained lock.
 * getBlocks
 * getBlockKeys
 * getTransactionID
 * getMostRecentCheckpointTxId
 * rollEditLog
 * versionRequest
 * errorReport
 * registerSubordinateNamenode
 * startCheckpoint
 * endCheckpoint
 * getEditLogManifest
 * isUpgradeFinalized
 * isRollingUpgrade
 * getNextSPSPath


> [FGL] RPCs in NamenodeProtocol support fine-grained lock
> 
>
> Key: HDFS-17415
> URL: https://issues.apache.org/jira/browse/HDFS-17415
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>






[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobao Wu updated HDFS-17436:
--
Attachment: 
HDFS-17436__Supplement_log_information_for_AccessControlException.patch
Labels: patch pull-request-available  (was: pull-request-available)
Status: Patch Available  (was: Open)

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.6, 3.3.0
>Reporter: Xiaobao Wu
>Priority: Minor
>  Labels: pull-request-available, patch
> Fix For: 3.3.0
>
> Attachments: 
> HDFS-17436__Supplement_log_information_for_AccessControlException.patch
>
>






[jira] [Commented] (HDFS-17416) [FGL] Monitor threads in BlockManager.class support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829724#comment-17829724
 ] 

ASF GitHub Bot commented on HDFS-17416:
---

ZanderXu commented on PR #6647:
URL: https://github.com/apache/hadoop/pull/6647#issuecomment-2014192895

   > CI reported checkstyle issue. can also fix that. Thanks
   
   Done, please help me review it again, thanks




> [FGL] Monitor threads in BlockManager.class support fine-grained lock
> -
>
> Key: HDFS-17416
> URL: https://issues.apache.org/jira/browse/HDFS-17416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> There are some monitor threads in BlockManager.class.
>  
> This ticket makes these monitor threads support fine-grained locking:
>  * BlockReportProcessingThread
>  * MarkedDeleteBlockScrubber
>  * RedundancyMonitor
>  * Reconstruction Queue Initializer
>  






[jira] [Commented] (HDFS-17423) [FGL] BlockManagerSafeMode supports fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829723#comment-17829723
 ] 

ASF GitHub Bot commented on HDFS-17423:
---

ferhui commented on PR #6645:
URL: https://github.com/apache/hadoop/pull/6645#issuecomment-2014187264

   @ZanderXu it seems the failed cases are related to this PR, could you check?




> [FGL] BlockManagerSafeMode supports fine-grained lock
> -
>
> Key: HDFS-17423
> URL: https://issues.apache.org/jira/browse/HDFS-17423
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> [FGL] BlockManagerSafeMode supports fine-grained lock






[jira] [Commented] (HDFS-17413) [FGL] CacheReplicationMonitor supports fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829722#comment-17829722
 ] 

ASF GitHub Bot commented on HDFS-17413:
---

ferhui commented on PR #6641:
URL: https://github.com/apache/hadoop/pull/6641#issuecomment-2014183343

   Thanks for contribution. Merged.




> [FGL] CacheReplicationMonitor supports fine-grained lock
> 
>
> Key: HDFS-17413
> URL: https://issues.apache.org/jira/browse/HDFS-17413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> * addCacheDirective
>  * modifyCacheDirective
>  * removeCacheDirective
>  * listCacheDirectives
>  * addCachePool
>  * modifyCachePool
>  * removeCachePool
>  * listCachePools
>  * cacheReport
>  * CacheManager
>  * CacheReplicationMonitor






[jira] [Resolved] (HDFS-17413) [FGL] CacheReplicationMonitor supports fine-grained lock

2024-03-21 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei resolved HDFS-17413.

Resolution: Fixed

> [FGL] CacheReplicationMonitor supports fine-grained lock
> 
>
> Key: HDFS-17413
> URL: https://issues.apache.org/jira/browse/HDFS-17413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Commented] (HDFS-17413) [FGL] CacheReplicationMonitor supports fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829721#comment-17829721
 ] 

ASF GitHub Bot commented on HDFS-17413:
---

ferhui merged PR #6641:
URL: https://github.com/apache/hadoop/pull/6641










[jira] [Commented] (HDFS-17413) [FGL] CacheReplicationMonitor supports fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829719#comment-17829719
 ] 

ASF GitHub Bot commented on HDFS-17413:
---

ferhui commented on PR #6641:
URL: https://github.com/apache/hadoop/pull/6641#issuecomment-2014181960

   TestLargeBlockReport is tracked in 
https://issues.apache.org/jira/browse/HDFS-17437
   TestDFSAdmin is fixed by HDFS-17422
   TestBlockListAsLongs#testFuzz appears to fail for the same reason as TestLargeBlockReport










[jira] [Commented] (HDFS-17413) [FGL] CacheReplicationMonitor supports fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829720#comment-17829720
 ] 

ASF GitHub Bot commented on HDFS-17413:
---

ferhui commented on PR #6641:
URL: https://github.com/apache/hadoop/pull/6641#issuecomment-2014182167

   Failed cases are unrelated.










[jira] [Updated] (HDFS-17437) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit failed

2024-03-21 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei updated HDFS-17437:
---
Parent: HDFS-15646
Issue Type: Sub-task  (was: Test)

> TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit failed
> 
>
> Key: HDFS-17437
> URL: https://issues.apache.org/jira/browse/HDFS-17437
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Hui Fei
>Priority: Minor
>
> reported by https://github.com/apache/hadoop/pull/6641
> log as following
> {quote}
> [ERROR] 
> testBlockReportSucceedsWithLargerLengthLimit(org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport)
>   Time elapsed: 33.802 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.runBlockOp(BlockManager.java:5558)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.blockReport(NameNodeRpcServer.java:1651)
>   at 
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.blockReport(DatanodeProtocolServerSideTranslatorPB.java:182)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:34769)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1249)
>   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1172)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3202)
> Caused by: java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
>   at 
> org.apache.hadoop.thirdparty.protobuf.IterableByteBufferInputStream.read(IterableByteBufferInputStream.java:143)
>   at 
> org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.read(CodedInputStream.java:2080)
>   at 
> org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.tryRefillBuffer(CodedInputStream.java:2831)
>   at 
> org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.refillBuffer(CodedInputStream.java:2777)
>   at 
> org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawByte(CodedInputStream.java:2859)
>   at 
> org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawVarint64SlowPath(CodedInputStream.java:2648)
>   at 
> org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readRawVarint64(CodedInputStream.java:2641)
>   at 
> org.apache.hadoop.thirdparty.protobuf.CodedInputStream$StreamDecoder.readSInt64(CodedInputStream.java:2497)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:419)
>   at 
> org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:397)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.reportDiff(BlockManager.java:3349)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:3171)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.processReport(BlockManager.java:2950)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.lambda$blockReport$0(NameNodeRpcServer.java:1652)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.processQueue(BlockManager.java:5637)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$BlockReportProcessingThread.run(BlockManager.java:5614)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1586)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1531)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1428)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
>   at com.sun.proxy.$Proxy23.blockReport(Unknown Source)
>   at 
> 
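
The NoSuchMethodError on java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; in the trace above is the classic symptom of bytecode compiled on JDK 9+ (where ByteBuffer overrides position(int) with a covariant ByteBuffer return type) being run on a Java 8 JVM, which only has Buffer.position(int). A sketch of the portable call pattern (BufferCompat and positionCompat are illustrative names, not Hadoop code):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class BufferCompat {
    // Casting to the Buffer supertype makes the compiler emit a call to
    // Buffer.position(I)Ljava/nio/Buffer;, which exists on both Java 8 and
    // Java 9+ runtimes. Calling buf.position(pos) directly under a JDK 9+
    // compiler emits the covariant ByteBuffer.position(I)Ljava/nio/ByteBuffer;
    // descriptor that Java 8 lacks, producing the NoSuchMethodError above.
    static ByteBuffer positionCompat(ByteBuffer buf, int pos) {
        ((Buffer) buf).position(pos);
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer b = ByteBuffer.allocate(8);
        b.putInt(42);                      // advances position to 4
        positionCompat(b, 0);              // rewind portably
        System.out.println(b.getInt());    // prints 42
    }
}
```

The alternative fix is at build time: compiling with `javac --release 8` (or `<release>8</release>` in maven-compiler-plugin) makes javac link against the Java 8 API signatures, so the covariant overload is never selected.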

[jira] [Updated] (HDFS-17437) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit failed

2024-03-21 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei updated HDFS-17437:
---
Description: 
reported by https://github.com/apache/hadoop/pull/6641


[jira] [Created] (HDFS-17437) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit failed

2024-03-21 Thread Hui Fei (Jira)
Hui Fei created HDFS-17437:
--

 Summary: 
TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit failed
 Key: HDFS-17437
 URL: https://issues.apache.org/jira/browse/HDFS-17437
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Hui Fei


reported by 


[jira] [Commented] (HDFS-17416) [FGL] Monitor threads in BlockManager.class support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829716#comment-17829716
 ] 

ASF GitHub Bot commented on HDFS-17416:
---

ferhui commented on PR #6647:
URL: https://github.com/apache/hadoop/pull/6647#issuecomment-2014172936

   CI reported a checkstyle issue; can you also fix that? Thanks




> [FGL] Monitor threads in BlockManager.class support fine-grained lock
> -
>
> Key: HDFS-17416
> URL: https://issues.apache.org/jira/browse/HDFS-17416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> There are some monitor threads in BlockManager.class.
>  
> This ticket is used to make these threads support fine-grained locking.
>  * BlockReportProcessingThread
>  * MarkedDeleteBlockScrubber
>  * RedundancyMonitor
>  * Reconstruction Queue Initializer
>  
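
A background monitor of this kind typically scans shared state cheaply under the read lock and re-acquires the write lock only for the short mutation phase. The sketch below illustrates that split under a ReentrantReadWriteLock; all names are hypothetical and, for brevity, it ignores the window between releasing the read lock and taking the write lock that real code must account for:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical monitor-thread sketch: snapshot shared state under the read
// lock, then take the write lock only for the mutation, so the scan phase
// does not block concurrent readers. (Simplification: a report() arriving
// between the two lock acquisitions would be cleared without being counted.)
public class RedundancyMonitorSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<Integer> lowRedundancy = new ArrayList<>();

    void report(int blockId) {
        lock.writeLock().lock();
        try { lowRedundancy.add(blockId); } finally { lock.writeLock().unlock(); }
    }

    int scanAndClear() {
        List<Integer> snapshot;
        lock.readLock().lock();            // cheap scan phase
        try { snapshot = new ArrayList<>(lowRedundancy); } finally { lock.readLock().unlock(); }
        lock.writeLock().lock();           // short write-locked mutation
        try { lowRedundancy.clear(); } finally { lock.writeLock().unlock(); }
        return snapshot.size();
    }

    public static void main(String[] args) {
        RedundancyMonitorSketch m = new RedundancyMonitorSketch();
        m.report(1); m.report(2); m.report(3);
        System.out.println(m.scanAndClear()); // prints 3
        System.out.println(m.scanAndClear()); // prints 0
    }
}
```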






[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829706#comment-17829706
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

hadoop-yetus commented on PR #6653:
URL: https://github.com/apache/hadoop/pull/6653#issuecomment-2014043907

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 13s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 84 new + 164 unchanged 
- 1 fixed = 248 total (was 165)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 266m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 440m 59s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestSymlinkHdfsFileSystem |
   |   | hadoop.fs.TestSymlinkHdfsFileContext |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6653 |
   | JIRA Issue | HDFS-17408 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1aacb8746543 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cf1ce619fbc4ea910411f7cb530574b17d95878c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 

[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829700#comment-17829700
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

hadoop-yetus commented on PR #6653:
URL: https://github.com/apache/hadoop/pull/6653#issuecomment-2013949219

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 57s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 84 new + 164 unchanged 
- 1 fixed = 248 total (was 165)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 225m 51s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 364m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestSymlinkHdfsFileContext |
   |   | hadoop.fs.TestSymlinkHdfsFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6653 |
   | JIRA Issue | HDFS-17408 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux cb18caccc025 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cf1ce619fbc4ea910411f7cb530574b17d95878c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 

[jira] [Commented] (HDFS-17365) EC: Add extra redundancy configuration in checkStreamerFailures to prevent data loss.

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829696#comment-17829696
 ] 

ASF GitHub Bot commented on HDFS-17365:
---

hadoop-yetus commented on PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#issuecomment-2013783212

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  6s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   3m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 33s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  22m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   3m 27s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/5/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 43s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 33 unchanged - 0 fixed = 
34 total (was 33)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 45s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 233m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 355m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | 

[jira] [Commented] (HDFS-17365) EC: Add extra redunency configuration in checkStreamerFailures to prevent data loss.

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829695#comment-17829695
 ] 

ASF GitHub Bot commented on HDFS-17365:
---

hadoop-yetus commented on PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#issuecomment-2013759811

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  8s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   2m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 35s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/6/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  22m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   3m  2s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/6/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/6/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 33 unchanged - 0 fixed = 
34 total (was 33)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 224m 46s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6517/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 338m 55s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSStripedInputStream |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | 

[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829687#comment-17829687
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

hadoop-yetus commented on PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#issuecomment-2013690866

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 20s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 29s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6608/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 83 new + 165 unchanged 
- 0 fixed = 248 total (was 165)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 199m 20s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6608/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 297m  7s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.fs.TestSymlinkHdfsFileSystem |
   |   | hadoop.fs.TestSymlinkHdfsFileContext |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6608/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6608 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8de84988246b 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 376e99b05721ec22a7fda0b53892261e4055ac81 |
   | 

[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829684#comment-17829684
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

hadoop-yetus commented on PR #6653:
URL: https://github.com/apache/hadoop/pull/6653#issuecomment-2013679333

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 29s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 84 new + 192 unchanged 
- 1 fixed = 276 total (was 193)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 198m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 58s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestSymlinkHdfsFileSystem |
   |   | hadoop.fs.TestSymlinkHdfsFileContext |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6653/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6653 |
   | JIRA Issue | HDFS-17408 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 00fe646ca513 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3367e6694ef82dff18a4b92db0d5774ec8c8d114 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 

[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829633#comment-17829633
 ] 

ASF GitHub Bot commented on HDFS-17436:
---

hadoop-yetus commented on PR #6651:
URL: https://github.com/apache/hadoop/pull/6651#issuecomment-2012952640

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 227m 40s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 377m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6651/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6651 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 079b56841608 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 822a00e26d901946130130781696bae914a4d9f5 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6651/3/testReport/ |
   | Max. process+thread count | 3467 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6651/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829615#comment-17829615
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

ThinkerLei opened a new pull request, #6653:
URL: https://github.com/apache/hadoop/pull/6653

   
   In this PR, we do not consider cases where the source is a symlink or a 
snapshot is involved in the rename. There are currently two rename methods, 
and their quota-calculation logic is as follows:
   
   rename(String src, String dst)
   In verifyQuotaForRename, we calculate the quota of the source INode using 
the storage policy of the target directory, without using the cached quota 
usage information (lastSnapshotId=Snapshot.CURRENT_STATE_ID). If the target 
directory exists (the overwrite case), its own storage policy is used to 
calculate its quota.
   
   In removeSrc4OldRename, we calculate the quota of the source INode using 
the storage policy of the source directory, for the subsequent update of the 
source directory's quota, again with 
lastSnapshotId=Snapshot.CURRENT_STATE_ID. If the source is a symlink, the 
lastSnapshotId used would differ; here we do not consider the case where the 
source is a symlink and the snapshot ID is not Snapshot.CURRENT_STATE_ID.
   
   In addSourceToDestination, we graft the source INode into the target 
directory; its quota calculation is consistent with verifyQuotaForRename.
   
   In updateQuotasInSourceTree, a quota calculation is performed if the 
source is in a snapshot. This optimization does not target snapshots, so 
that case is not considered for now.
   
   In restoreSource, if the rename fails, the source directory is restored 
to its original location; at that point the source directory's storage 
policy is used for the calculation.
   
   rename2(String src, String dst, Options.Rename... options) has basically 
the same calculation logic as rename(String src, String dst).
   
   Based on the above, without considering snapshots and symlinks, we can 
eliminate at least one quota calculation for the source INode; when the 
storage policies are the same, we can save two quota calculations.
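   The caching idea above can be illustrated with a small, hypothetical Java 
sketch (not actual HDFS code: QuotaCacheSketch, Counts, and the policy 
strings are invented for illustration). The point is simply that the 
expensive subtree walk runs once and its result is reused as long as the 
storage policy is unchanged, and only a policy change forces a recount:

```java
import java.util.Objects;

/**
 * Hypothetical sketch, not HDFS code: cache one quota computation and
 * reuse it while the storage policy stays the same.
 */
public class QuotaCacheSketch {
    /** Stand-in for QuotaCounts: namespace count + storage space. */
    static final class Counts {
        final long namespace, storageSpace;
        Counts(long ns, long ss) { namespace = ns; storageSpace = ss; }
    }

    private int computations = 0;  // number of full subtree walks performed
    private Counts cached;
    private String cachedPolicy;

    /** Pretend full-subtree quota computation (the expensive part). */
    private Counts computeSubtree(String policy) {
        computations++;
        // Dummy numbers; real code would walk the INode subtree.
        return new Counts(3, Math.abs(policy.hashCode() % 7) + 10);
    }

    /**
     * Return the quota under the given policy, reusing the cached result
     * when the policy matches the previous call.
     */
    Counts quotaFor(String policy) {
        if (cached == null || !Objects.equals(cachedPolicy, policy)) {
            cached = computeSubtree(policy);
            cachedPolicy = policy;
        }
        return cached;
    }

    int computationCount() { return computations; }

    public static void main(String[] args) {
        QuotaCacheSketch q = new QuotaCacheSketch();
        q.quotaFor("HOT");   // first check: one full walk
        q.quotaFor("HOT");   // same policy: cache hit, no walk
        q.quotaFor("COLD");  // policy changed: recount required
        System.out.println(q.computationCount()); // prints 2
    }
}
```

   Under this model, verifyQuotaForRename and addSourceToDestination would 
share one computation whenever the source and target storage policies match, 
which is where the "save two quota calculations" claim comes from.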
   




> Reduce the number of quota calculations in FSDirRenameOp
> 
>
> Key: HDFS-17408
> URL: https://issues.apache.org/jira/browse/HDFS-17408
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
>
> During the execution of the rename operation, we first calculate the quota 
> for the source INode using verifyQuotaForRename, and at the same time, we 
> calculate the quota for the target INode. Subsequently, in 
> RenameOperation#removeSrc, RenameOperation#removeSrc4OldRename, and 
> RenameOperation#addSourceToDestination, the quota for the source directory is 
> calculated again. In exceptional cases, RenameOperation#restoreDst and 
> RenameOperation#restoreSource will also perform quota calculations for the 
> source and target directories. In fact, many of the quota calculations are 
> redundant and unnecessary, so we should optimize them away.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829613#comment-17829613
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

ThinkerLei closed pull request #6608: HDFS-17408. Reduce quota calculation 
times in FSDirRenameOp.
URL: https://github.com/apache/hadoop/pull/6608




> Reduce the number of quota calculations in FSDirRenameOp
> 
>
> Key: HDFS-17408
> URL: https://issues.apache.org/jira/browse/HDFS-17408
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Commented] (HDFS-17365) EC: Add extra redunency configuration in checkStreamerFailures to prevent data loss.

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829611#comment-17829611
 ] 

ASF GitHub Bot commented on HDFS-17365:
---

hfutatzhanghb commented on code in PR #6517:
URL: https://github.com/apache/hadoop/pull/6517#discussion_r1534166846


##
hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml:
##
@@ -3908,6 +3908,18 @@
   
 
 
+
+  dfs.client.ec.EXAMPLEECPOLICYNAME.checkstreamer.redunency

Review Comment:
   @tasanuma @zhangshuyan0 Hi, sir. Sorry for the late response here. I have 
uploaded a new patch addressing the review comments; please help review this 
PR when you have bandwidth. Thanks a lot~





> EC: Add extra redunency configuration in checkStreamerFailures to prevent 
> data loss.
> 
>
> Key: HDFS-17365
> URL: https://issues.apache.org/jira/browse/HDFS-17365
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829601#comment-17829601
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

hadoop-yetus commented on PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#issuecomment-2012579307

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 16s |  |  
https://github.com/apache/hadoop/pull/6608 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6608 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6608/4/console |
   | versions | git=2.34.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Reduce the number of quota calculations in FSDirRenameOp
> 
>
> Key: HDFS-17408
> URL: https://issues.apache.org/jira/browse/HDFS-17408
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829598#comment-17829598
 ] 

ASF GitHub Bot commented on HDFS-17436:
---

hadoop-yetus commented on PR #6651:
URL: https://github.com/apache/hadoop/pull/6651#issuecomment-2012552601

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 239m 57s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6651/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 388m 33s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6651/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6651 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a34c72a38206 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 
13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cbaa42f0cee5217c9874336412409e893f66802c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 

[jira] [Commented] (HDFS-17414) [FGL] RPCs in DatanodeProtocol support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829586#comment-17829586
 ] 

ASF GitHub Bot commented on HDFS-17414:
---

hadoop-yetus commented on PR #6649:
URL: https://github.com/apache/hadoop/pull/6649#issuecomment-2012482210

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 49s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 264m 20s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6649/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 439m 20s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.server.blockmanagement.TestDatanodeManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6649/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6649 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0838c8470f7a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / fee6a943dcb2845a8efceecdad17e62eff3f15f0 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 

[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829584#comment-17829584
 ] 

ASF GitHub Bot commented on HDFS-17436:
---

hadoop-yetus commented on PR #6651:
URL: https://github.com/apache/hadoop/pull/6651#issuecomment-2012475574

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 239m 16s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6651/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 395m 37s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6651/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6651 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux ec95db5873a5 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 
13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 400b7876ed1f7f05c1378f883a55514076b9b653 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 

[jira] [Commented] (HDFS-17430) RecoveringBlock will skip no live replicas when get block recovery command.

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829554#comment-17829554
 ] 

ASF GitHub Bot commented on HDFS-17430:
---

hadoop-yetus commented on PR #6635:
URL: https://github.com/apache/hadoop/pull/6635#issuecomment-2012266961

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  23m 40s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 202m 55s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 295m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6635/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6635 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 10d0020a22a4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c21a656da06274270a468d51b6320b63144ff9c2 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6635/3/testReport/ |
   | Max. process+thread count | 4129 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console 

[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829545#comment-17829545
 ] 

ASF GitHub Bot commented on HDFS-17436:
---

slfan1989 commented on PR #6651:
URL: https://github.com/apache/hadoop/pull/6651#issuecomment-2012235777

   LGTM.




> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the
> log information of the *AccessControlException* caused by a *du* command and
> found that the printed log information is not accurate: the original
> AccessControlException is discarded by checkPermission, which makes it hard
> to judge what actually caused the AccessControlException. At least part of
> the original log information should be printed.
> Later, the *inode* information in the original AccessControlException log
> made me realize that the Ranger-HDFS plugin in the current environment does
> not incorporate RANGER-2297, because the inode information printed by the
> current log is not the "inode information" actually *passed* to the
> authorizers. If an external authorizer *has not adjusted its authorization
> logic* per HDFS-12130, this makes the real cause of the problem much harder
> to locate. So I think it is necessary to surface this part of the log
> information.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The original AccessControlException information printed:
> {code:java}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>  {code}
> Comparing the two logs above shows that the inode information and the
> exception stack printed by the current log are not accurate.
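The fix direction described in the issue — keeping the original exception instead of discarding it — can be sketched generically (class and method names below are illustrative, not the actual FSPermissionChecker code):

```java
// Hedged sketch: when re-throwing an AccessControlException, chain the
// original exception as the cause so the inode information it carried
// is not lost. Names are illustrative, not the real Hadoop code.
public class PreserveCauseSketch {
    static class AccessControlException extends Exception {
        AccessControlException(String msg) { super(msg); }
        AccessControlException(String msg, Throwable cause) { super(msg, cause); }
    }

    static void innerCheck() throws AccessControlException {
        // The original, more specific failure (e.g. from check()).
        throw new AccessControlException(
            "Permission denied: inode=\"dt=2024-01-17\"");
    }

    static void checkPermission() throws AccessControlException {
        try {
            innerCheck();
        } catch (AccessControlException ace) {
            // Wrap instead of discarding: the caller sees both the
            // summary and the original inode information.
            throw new AccessControlException(
                "Permission denied for full path", ace);
        }
    }

    public static void main(String[] args) {
        try {
            checkPermission();
        } catch (AccessControlException e) {
            // The original detail survives as the cause.
            System.out.println(e.getCause().getMessage());
        }
    }
}
```

With chaining, both messages appear in the logged stack trace, which is exactly the information the reporter found missing.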






[jira] [Commented] (HDFS-17408) Reduce the number of quota calculations in FSDirRenameOp

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829526#comment-17829526
 ] 

ASF GitHub Bot commented on HDFS-17408:
---

zhangshuyan0 commented on code in PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#discussion_r1533779361


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java:
##
@@ -88,13 +96,19 @@ private static void verifyQuotaForRename(FSDirectory fsd, 
INodesInPath src,
 final QuotaCounts delta = src.getLastINode()
 .computeQuotaUsage(bsps, storagePolicyID, false,
 Snapshot.CURRENT_STATE_ID);
+srcDelta = Optional.of(delta.negation().negation());

Review Comment:
   Why call negation() twice here?



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java:
##
@@ -88,13 +96,19 @@ private static void verifyQuotaForRename(FSDirectory fsd, 
INodesInPath src,
 final QuotaCounts delta = src.getLastINode()
 .computeQuotaUsage(bsps, storagePolicyID, false,

Review Comment:
   I'm curious why caching is not allowed here.



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java:
##
@@ -88,13 +96,19 @@ private static void verifyQuotaForRename(FSDirectory fsd, 
INodesInPath src,
 final QuotaCounts delta = src.getLastINode()
 .computeQuotaUsage(bsps, storagePolicyID, false,
 Snapshot.CURRENT_STATE_ID);
+srcDelta = Optional.of(delta.negation().negation());
 
 // Reduce the required quota by dst that is being removed
 final INode dstINode = dst.getLastINode();
 if (dstINode != null) {
-  delta.subtract(dstINode.computeQuotaUsage(bsps));
+  QuotaCounts quotaCounts = dstINode.computeQuotaUsage(bsps);
+  dstDelta = dstINode.isQuotaSet() ?
+  dstDelta : Optional.of(quotaCounts.negation().negation());

Review Comment:
   Same as above.
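For context on the review question: one plausible reading (an assumption, not confirmed in the thread) is that negation() returns a new object, so calling it twice yields a defensive copy with the original sign. A generic sketch of that idiom:

```java
// Minimal stand-in for a mutable counts class whose negation() returns
// a fresh instance (hypothetical, mirroring the pattern under review).
public class DoubleNegationCopy {
    static class Counts {
        long value;
        Counts(long v) { value = v; }
        Counts negation() { return new Counts(-value); }
    }

    public static void main(String[] args) {
        Counts delta = new Counts(5);
        // negation().negation(): same value, but a distinct object, so
        // later mutation of `delta` does not affect the cached copy.
        Counts copy = delta.negation().negation();
        delta.value = 99;
        System.out.println(copy.value);    // 5
        System.out.println(copy == delta); // false
    }
}
```

If copying is indeed the intent, an explicit copy constructor or builder would state it more clearly than double negation, which is presumably what the reviewer is driving at.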





> Reduce the number of quota calculations in FSDirRenameOp
> 
>
> Key: HDFS-17408
> URL: https://issues.apache.org/jira/browse/HDFS-17408
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
>
> During the execution of the rename operation, verifyQuotaForRename first
> calculates the quota for the source INode and, at the same time, the quota
> for the target INode. Subsequently, RenameOperation#removeSrc,
> RenameOperation#removeSrc4OldRename, and
> RenameOperation#addSourceToDestination calculate the quota for the source
> directory again. In exceptional cases, RenameOperation#restoreDst and
> RenameOperation#restoreSource also perform quota calculations for the source
> and target directories. Many of these quota calculations are redundant, so
> we should optimize them away.






[jira] [Commented] (HDFS-17416) [FGL] Monitor threads in BlockManager.class support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829489#comment-17829489
 ] 

ASF GitHub Bot commented on HDFS-17416:
---

hadoop-yetus commented on PR #6647:
URL: https://github.com/apache/hadoop/pull/6647#issuecomment-2011925968

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  1s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  2s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6647/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 226 unchanged 
- 0 fixed = 227 total (was 226)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 264m  7s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6647/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 435m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestProvidedStorageMap |
   |   | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6647/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6647 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 29c75e4482d4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 

[jira] [Commented] (HDFS-17416) [FGL] Monitor threads in BlockManager.class support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829472#comment-17829472
 ] 

ASF GitHub Bot commented on HDFS-17416:
---

ferhui commented on code in PR #6647:
URL: https://github.com/apache/hadoop/pull/6647#discussion_r1533604421


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -5346,11 +5346,12 @@ NamenodeCommand startCheckpoint(NamenodeRegistration 
backupNode,
   public void processIncrementalBlockReport(final DatanodeID nodeID,
   final StorageReceivedDeletedBlocks srdb)
   throws IOException {
-writeLock();
+// Needs the FSWriteLock since it may update quota and access storage 
policyId and full path.

Review Comment:
   Better to make the comment clearer: it says FS lock, but the code takes
   the global lock.





> [FGL] Monitor threads in BlockManager.class support fine-grained lock
> -
>
> Key: HDFS-17416
> URL: https://issues.apache.org/jira/browse/HDFS-17416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> There are some monitor threads in BlockManager.class.
>  
> This ticket is used to make these threads supporting fine-grained locking.
>  * BlockReportProcessingThread
>  * MarkedDeleteBlockScrubber
>  * RedundancyMonitor
>  * Reconstruction Queue Initializer
>  
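The fine-grained-lock idea for such monitor threads can be sketched as follows (a minimal illustration with hypothetical lock names, not the actual HDFS-17384 API): a monitor that only touches block-level state takes a dedicated block-manager lock instead of the single global namesystem lock, so namespace operations are not blocked while it runs.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hedged sketch of fine-grained locking for a BlockManager monitor
// thread. Lock names and structure are illustrative only.
public class FineGrainedMonitorSketch {
    // Before FGL: one global lock serializes everything.
    // After FGL: separate locks for namespace and block state.
    static final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
    static final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock();

    static int scrubbedBlocks = 0;

    // MarkedDeleteBlockScrubber-style work: block state only, so it
    // needs only the block-manager lock.
    static void scrubOnce() {
        bmLock.writeLock().lock();
        try {
            scrubbedBlocks++;  // mutate block-manager state
        } finally {
            bmLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        // A namespace reader can proceed concurrently because the
        // monitor no longer holds the namespace lock at all.
        fsLock.readLock().lock();
        try {
            scrubOnce();  // does not block on fsLock
        } finally {
            fsLock.readLock().unlock();
        }
        System.out.println(scrubbedBlocks);  // 1
    }
}
```

The design trade-off is the usual one for lock splitting: better concurrency for block-only work, in exchange for a fixed lock-ordering discipline wherever both locks must be held.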






[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17436:
--
Labels: pull-request-available  (was: )

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.0
>
>
> In an environment with the *Ranger-HDFS* plugin enabled, I looked at the 
> *AccessControlException* log produced by a *du* command and found that the 
> printed information is inaccurate: the original AccessControlException is 
> swallowed by checkPermission, which makes it hard to judge what actually 
> caused the AccessControlException. At least part of the original exception 
> information should be printed.
> Later, the *inode* information in the original AccessControlException made 
> me realize that the Ranger-HDFS plugin in this environment does not include 
> RANGER-2297, because the inode information in the current log is not the 
> "inode information" actually *passed* to the authorizers. If an external 
> authorizer *has not adjusted its authorization logic* per HDFS-12130, it is 
> much harder to locate the real cause of the problem, so I think this part 
> of the log information should be surfaced.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The original AccessControlException information printed:
> {code:java}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>  {code}
> From the comparison results of the above log information, it can be seen that 
> the inode information and the exception stack printed by the log are not 
> accurate.






[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829470#comment-17829470
 ] 

ASF GitHub Bot commented on HDFS-17436:
---

XbaoWu commented on PR #6651:
URL: https://github.com/apache/hadoop/pull/6651#issuecomment-2011844402

   > This is beneficial for troubleshooting issues with external authentication 
components, and this log is necessary; However, printing a large number of 
authentication failure logs is a significant burden, and I suggest changing the 
log level to DEBUG.
   
   Okay, thank you for your reminder.
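The suggestion above (keep the externally visible exception, but preserve the original AccessControlException and log it at DEBUG) can be sketched as follows. The class, logger, and method names below are illustrative stand-ins, not Hadoop's actual FSPermissionChecker code:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

class PermissionCheckSketch {
    private static final Logger LOG = Logger.getLogger("PermissionCheckSketch");

    static class AccessControlException extends Exception {
        AccessControlException(String msg) { super(msg); }
        AccessControlException(String msg, Throwable cause) { super(msg, cause); }
    }

    /** Stand-in for the inner check that fails on one path component. */
    static void check(String inode) throws AccessControlException {
        throw new AccessControlException(
            "Permission denied: inode=\"" + inode + "\"");
    }

    static void checkPermission(String fullPath, String component)
            throws AccessControlException {
        try {
            check(component);
        } catch (AccessControlException ace) {
            // Previously the original exception was dropped here. Logging it at
            // DEBUG (FINE) and attaching it as the cause preserves the real
            // inode information and stack for troubleshooting.
            LOG.log(Level.FINE, "original access check failure", ace);
            throw new AccessControlException(
                "Permission denied: inode=\"" + fullPath + "\"", ace);
        }
    }
}
```

Attaching the original exception as the cause keeps the hot-path log quiet (DEBUG only) while still letting operators recover the real inode from the chained stack trace when needed.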




> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>






[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829468#comment-17829468
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

hadoop-yetus commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-2011838129

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   2m 35s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/11/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  35m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 27s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 133m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6591 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux aaf0a3ef1b26 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 73d6c12734975d4adbd52f39e810478322b55a9f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/11/testReport/ |
   | Max. process+thread count | 699 (vs. ulimit of 5500) |
 

[jira] [Commented] (HDFS-17435) Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829458#comment-17829458
 ] 

ASF GitHub Bot commented on HDFS-17435:
---

hadoop-yetus commented on PR #6650:
URL: https://github.com/apache/hadoop/pull/6650#issuecomment-2011742132

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   0m 50s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6650/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  19m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  25m 50s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 106m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6650/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6650 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9199d9b521e9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7ccd825429a3b70713b4aded8d7e7b44ed2f4113 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6650/1/testReport/ |
   | Max. process+thread count | 3549 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 

[jira] [Commented] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829452#comment-17829452
 ] 

Xiping Zhang commented on HDFS-16016:
-

Yes. Deleting the red-box code here will not remove blk_9 and blk_10 in the figure 
below, and will not affect the removal of blk_1 (blocks lost due to disk damage).

!image-2024-03-21-17-19-01-183.png!
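The ticket's core idea, handling IBR on its own thread so heartbeats and FBR are not delayed, can be sketched as a queue-fed sender. `IbrSender`, `Report`, and `send()` below are hypothetical illustrations of the pattern, not the actual BPServiceActor code:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

class IbrSender implements Runnable {
    enum Kind { RECEIVING_BLOCK, RECEIVED_BLOCK, DELETED_BLOCK }

    static class Report {
        final long blockId;
        final Kind kind;
        Report(long blockId, Kind kind) { this.blockId = blockId; this.kind = kind; }
    }

    private final BlockingQueue<Report> pending = new LinkedBlockingQueue<>();
    // Stand-in for reports delivered to the NameNode via RPC.
    final List<Report> sent = new CopyOnWriteArrayList<>();

    /** Called from the actor's main loop; never blocks the heartbeat path. */
    void notifyBlock(long blockId, Kind kind) {
        pending.add(new Report(blockId, kind));
    }

    /** Dedicated IBR thread: drains the queue and "sends" each report. */
    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                sent.add(pending.take());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The ordering concern discussed in this thread follows from this shape: once IBRs flow on their own thread, an FBR prepared by the main loop can race with queued IBRs, so the NameNode-side merge logic has to tolerate either order.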

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-20-46-746.png, 
> image-2024-03-21-17-17-23-281.png, image-2024-03-21-17-19-01-183.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.






[jira] [Updated] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiping Zhang updated HDFS-16016:

Attachment: image-2024-03-21-17-19-01-183.png

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-20-46-746.png, 
> image-2024-03-21-17-17-23-281.png, image-2024-03-21-17-19-01-183.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.






[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829449#comment-17829449
 ] 

Xiaobao Wu commented on HDFS-17436:
---

[~hiwangzhihui] Okay, I have adjusted the description accordingly.

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>






[jira] [Updated] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiping Zhang updated HDFS-16016:

Attachment: image-2024-03-21-17-17-23-281.png

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-20-46-746.png, 
> image-2024-03-21-17-17-23-281.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.






[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobao Wu updated HDFS-17436:
--
Description: 
In an environment with the *Ranger-HDFS* plugin enabled, I looked at the 
*AccessControlException* log produced by a *du* command and found that the 
printed information is inaccurate: the original AccessControlException is 
swallowed by checkPermission, which makes it hard to judge what actually caused 
the AccessControlException. At least part of the original exception information 
should be printed.

Later, the *inode* information in the original AccessControlException made me 
realize that the Ranger-HDFS plugin in this environment does not include 
RANGER-2297, because the inode information in the current log is not the "inode 
information" actually *passed* to the authorizers. If an external authorizer 
*has not adjusted its authorization logic* per HDFS-12130, it is much harder to 
locate the real cause of the problem, so I think this part of the log 
information should be surfaced.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison results of the above log information, it can be seen that 
the inode information and the exception stack printed by the log are not 
accurate.

  was:
In the environment where the *Ranger-HDFS* plugin is enabled, I look at the log 
information of *AccessControlException* caused by the *du.* I find that the 
printed log information is not accurate, because the original 
AccessControlException is ignored by checkPermission, which is not conducive to 
judging the real situation of the  AccessControlException . At least part of 
the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison results of the above log information, it can be seen that 
the inode information and the exception stack printed by the log are not 
accurate.

Later, the *inode* information prompted by the original AccessControlException 
log information makes me realize that the Ranger-HDFS plug-in in the current 
environment is not incorporated into RANGER-2297.

If certain external authorizers *does not adjust its authentication logic* 
according to HDFS-12130 , it is more difficult to locate the real situation of 
the problem.So I think it is necessary to prompt this part of the log 
information.


> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>

[jira] [Commented] (HDFS-17416) [FGL] Monitor threads in BlockManager.class support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829447#comment-17829447
 ] 

ASF GitHub Bot commented on HDFS-17416:
---

ZanderXu commented on code in PR #6647:
URL: https://github.com/apache/hadoop/pull/6647#discussion_r1533487337


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -2150,7 +2155,7 @@ int computeReconstructionWorkForBlocks(
 }
   }
 } finally {
-  namesystem.writeUnlock("computeReconstructionWorkForBlocks");
+  namesystem.writeUnlock(FSNamesystemLockMode.GLOBAL, "computeReconstructionWorkForBlocks");

Review Comment:
   `bc.getName()` and `bc.getStoragePolicyID()` need the FSReadLock while 
initializing `BlockReconstructionWork`.
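The point being made, that namespace-owned fields (the file name, the storage policy id) should be read under the FS read lock while building the reconstruction work item, can be sketched as follows. The class and field names are illustrative, not the actual BlockManager/BlockReconstructionWork code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReconstructionWorkSketch {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();

    // FS-owned state: only valid to read while holding the FS lock.
    private String name = "/a/file";
    private byte storagePolicyId = 7;

    static class Work {
        final String srcPath;
        final byte policyId;
        Work(String srcPath, byte policyId) {
            this.srcPath = srcPath;
            this.policyId = policyId;
        }
    }

    /** Snapshot the FS-owned fields under the FS read lock, then release it. */
    Work initWork() {
        fsLock.readLock().lock();
        try {
            return new Work(name, storagePolicyId);
        } finally {
            fsLock.readLock().unlock();
        }
    }
}
```

Taking an immutable snapshot under the read lock lets the rest of the reconstruction work proceed under only the block-manager lock, which is the benefit fine-grained locking is after.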





> [FGL] Monitor threads in BlockManager.class support fine-grained lock
> -
>
> Key: HDFS-17416
> URL: https://issues.apache.org/jira/browse/HDFS-17416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Comment Edited] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829425#comment-17829425
 ] 

Xiping Zhang edited comment on HDFS-16016 at 3/21/24 9:14 AM:
--

[~liuguanghua]   Maybe I didn't describe it clearly: the processing above happens 
on the NameNode side and does not need to be handled on the DataNode side. The 
FBR aligns the historical block information between the NameNode and the 
DataNode. We can now determine that the FBR blocks on the DN have already been 
added to the NameNode via IBR, but we cannot guarantee that all blocks added via 
IBR are included in this FBR, right?

For 3.3.0, just delete the red-box code.

!image-2024-03-21-16-20-46-746.png!


was (Author: zhangxiping):
[~liuguanghua]   Maybe I didn't describe it clearly, the above processing is in 
the namenode, 
and does not need to be processed in the datanode end. The FBR is to align the 
block information of the namenode and datanode history. 
Now we can determine that all blocks reported in the DN end have been added to 
the namenode end, 
but we cannot guarantee that all the added blocks are included in all blocks 
reported this time, right?

for 3.3.0 ,Just delete the red box code

!image-2024-03-21-16-20-46-746.png!

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-20-46-746.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.






[jira] [Comment Edited] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread liuguanghua (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829445#comment-17829445
 ] 

liuguanghua edited comment on HDFS-16016 at 3/21/24 9:13 AM:
-

Thanks for the reply, [~zhangxiping].

An IBR contains DELETED_BLOCK, RECEIVED_BLOCK, and RECEIVING_BLOCK entries. 
Mis-ordering of IBR and FBR affects more than just the to_remove blocks.

And the NN should remove blocks that the FBR does not contain, in case blocks 
were lost to disk damage.

 


was (Author: liuguanghua):
Thanks for the reply.

An IBR contains DELETED_BLOCK, RECEIVED_BLOCK, and RECEIVING_BLOCK entries. 
Mis-ordering of IBR and FBR affects more than just the to_remove blocks.

And the NN should remove blocks that the FBR does not contain, in case blocks 
were lost to disk damage.

 

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-20-46-746.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() does many things: FBR, IBR, and heartbeats. 
> We can handle IBR independently to improve the performance of heartbeats and 
> FBR.






[jira] [Commented] (HDFS-17416) [FGL] Monitor threads in BlockManager.class support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829446#comment-17829446
 ] 

ASF GitHub Bot commented on HDFS-17416:
---

ZanderXu commented on code in PR #6647:
URL: https://github.com/apache/hadoop/pull/6647#discussion_r1533484754


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -5346,11 +5346,12 @@ NamenodeCommand startCheckpoint(NamenodeRegistration 
backupNode,
   public void processIncrementalBlockReport(final DatanodeID nodeID,
   final StorageReceivedDeletedBlocks srdb)
   throws IOException {
-writeLock();
+// Needs the FSWriteLock since it may update quota and access storage 
policyId and full path.

Review Comment:
   Yes. `completeBlock` will update the quota, so it needs both the BMWriteLock 
and the FSWriteLock.
   
   `processExtraRedundancyBlock` chooses excess replicas based on the storage 
policyId, so it needs the FSReadLock.
   
   `isInSnapshot` depends on the full path, so it needs the FSReadLock.
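As a rough illustration of the locking order being discussed (the class and method names below are hypothetical stand-ins, not the actual HDFS-17416 API), a path that needs both the namesystem (FS) lock and the BlockManager (BM) lock would acquire them in a fixed order to avoid deadlock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: fine-grained locking splits the single namesystem
// lock into an FS lock (inodes, quota, storage policy, paths) and a BM lock
// (block map). A fixed acquisition order (FS, then BM) prevents deadlock.
public class FineGrainedLockSketch {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock();

    void processIncrementalBlockReport(Runnable work) {
        fsLock.writeLock().lock();      // FS lock first: quota / policy / path
        try {
            bmLock.writeLock().lock();  // then BM lock for block-map updates
            try {
                work.run();
            } finally {
                bmLock.writeLock().unlock();
            }
        } finally {
            fsLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        FineGrainedLockSketch s = new FineGrainedLockSketch();
        s.processIncrementalBlockReport(() -> System.out.println("IBR applied"));
    }
}
```

A path that only touches block-level state would take the BM lock alone, which is the whole point of the split.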





> [FGL] Monitor threads in BlockManager.class support fine-grained lock
> -
>
> Key: HDFS-17416
> URL: https://issues.apache.org/jira/browse/HDFS-17416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> There are some monitor threads in BlockManager.class.
>  
> This ticket is used to make these threads support fine-grained locking.
>  * BlockReportProcessingThread
>  * MarkedDeleteBlockScrubber
>  * RedundancyMonitor
>  * Reconstruction Queue Initializer
>  






[jira] [Commented] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread liuguanghua (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829445#comment-17829445
 ] 

liuguanghua commented on HDFS-16016:


Thanks for the reply.

An IBR contains DELETED_BLOCK, RECEIVED_BLOCK, and RECEIVING_BLOCK entries. 
Mis-ordering of IBR and FBR affects more than just the to_remove blocks.

And the NN should remove blocks that the FBR does not contain, in case blocks 
were lost to disk damage.

 

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-20-46-746.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() does many things: FBR, IBR, and heartbeats. 
> We can handle IBR independently to improve the performance of heartbeats and 
> FBR.






[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread wangzhihui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829443#comment-17829443
 ] 

wangzhihui commented on HDFS-17436:
---

[~wuxiaobao]  You should modify the description to clarify the purpose of 
adding new logs.

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
> log information for an *AccessControlException* caused by *du* and found that 
> the printed log is not accurate: the original AccessControlException is 
> swallowed by checkPermission, which makes it hard to judge what really 
> happened. At least part of the original log information should be printed.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The original AccessControlException information printed:
> {code:java}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>  {code}
> From the comparison results of the above log information, it can be seen that 
> the inode information and the exception stack printed by the log are not 
> accurate.
> Later, the *inode* information in the original AccessControlException log 
> made me realize that the Ranger-HDFS plugin in the current environment does 
> not incorporate RANGER-2297.
> If an external authorizer *does not adjust its authorization logic* according 
> to HDFS-12130, it is harder to locate the real cause of the problem, so I 
> think it is necessary to print this part of the log information.
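A minimal sketch of the kind of fix being proposed (the classes below are self-contained stand-ins, not the actual Hadoop patch): rethrow with the original AccessControlException attached as the cause, so the real inode and stack trace are not lost:

```java
// Illustrative only: a local AccessControlException stand-in shadows the real
// Hadoop class so the sketch is self-contained.
public class PreserveCauseSketch {
    static class AccessControlException extends Exception {
        AccessControlException(String msg) { super(msg); }
        AccessControlException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Analogue of FSPermissionChecker.check(): throws the original exception
    // with the precise inode that failed.
    static void check() throws AccessControlException {
        throw new AccessControlException(
            "Permission denied: inode=\"dt=2024-01-17\":hive:hadoop:drwxrwx---");
    }

    // Analogue of checkPermission(): instead of discarding the original
    // exception, keep it as the cause of the rethrown one.
    static void checkPermission() throws AccessControlException {
        try {
            check();
        } catch (AccessControlException original) {
            throw new AccessControlException(
                "Permission denied: user=test, access=READ_EXECUTE", original);
        }
    }

    public static void main(String[] args) {
        try {
            checkPermission();
        } catch (AccessControlException e) {
            // The original inode detail survives via getCause().
            System.out.println(e.getCause().getMessage().contains("inode"));
        }
    }
}
```

With the cause attached, the logged stack trace shows both the outer check site and the original failure, which is what the comparison above is asking for.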






[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobao Wu updated HDFS-17436:
--
Description: 
In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
log information for an *AccessControlException* caused by *du* and found that 
the printed log is not accurate: the original AccessControlException is 
swallowed by checkPermission, which makes it hard to judge what really 
happened. At least part of the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison of the above log output, it can be seen that the inode 
information and the exception stack printed in the log are not accurate.

Later, the *inode* information in the original AccessControlException log made 
me realize that the Ranger-HDFS plugin in the current environment does not 
incorporate RANGER-2297.

If an external authorizer *does not adjust its authorization logic* according 
to HDFS-12130, it is harder to locate the real cause of the problem, so I 
think it is necessary to print this part of the log information.

  was:
In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
log information for an *AccessControlException* caused by *du* and found that 
the printed log is not accurate: the original AccessControlException is 
swallowed by checkPermission, which makes it hard to judge what really 
happened. At least part of the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison of the above log output, it can be seen that the inode 
information and the exception stack printed in the log are not accurate.

Later, the *inode* information in the original AccessControlException log made 
me realize that the Ranger-HDFS plugin in the current environment does not 
incorporate RANGER-2297.

If an external authorizer *does not adjust its authorization logic* according 
to HDFS-12130, it is harder to locate the real cause of the problem.

So I think it is necessary to print this part of the log information.


> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
> log information for an *AccessControlException* caused by *du* and found that 
> the printed log is not accurate: the original AccessControlException is 
> swallowed by checkPermission, which makes it hard to judge what really 
> happened. At least part of the original log information should be printed.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> 

[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobao Wu updated HDFS-17436:
--
Description: 
In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
log information for an *AccessControlException* caused by *du* and found that 
the printed log is not accurate: the original AccessControlException is 
swallowed by checkPermission, which makes it hard to judge what really 
happened. At least part of the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison of the above log output, it can be seen that the inode 
information and the exception stack printed in the log are not accurate.

Later, the *inode* information in the original AccessControlException log made 
me realize that the Ranger-HDFS plugin in the current environment does not 
incorporate RANGER-2297.

If an external authorizer does not adjust its authorization logic according to 
HDFS-12130, it is harder to locate the real cause of the problem.

So I think it is necessary to print this part of the log information.

  was:
In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
log information for an *AccessControlException* caused by *du* and found that 
the printed log is not accurate: the original AccessControlException is 
swallowed by checkPermission, which makes it hard to judge what really 
happened. At least part of the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison of the above log output, it can be seen that the inode 
information and the exception stack printed in the log are not accurate.

Later, the *inode* information in the original AccessControlException log made 
me realize that the Ranger-HDFS plugin in the current environment does not 
incorporate RANGER-2297, so I think it is necessary to print this part of the 
log information.


> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
> log information for an *AccessControlException* caused by *du* and found that 
> the printed log is not accurate: the original AccessControlException is 
> swallowed by checkPermission, which makes it hard to judge what really 
> happened. At least part of the original log information should be printed.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The 

[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobao Wu updated HDFS-17436:
--
Description: 
In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
log information for an *AccessControlException* caused by *du* and found that 
the printed log is not accurate: the original AccessControlException is 
swallowed by checkPermission, which makes it hard to judge what really 
happened. At least part of the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison of the above log output, it can be seen that the inode 
information and the exception stack printed in the log are not accurate.

Later, the *inode* information in the original AccessControlException log made 
me realize that the Ranger-HDFS plugin in the current environment does not 
incorporate RANGER-2297.

If an external authorizer *does not adjust its authorization logic* according 
to HDFS-12130, it is harder to locate the real cause of the problem.

So I think it is necessary to print this part of the log information.

  was:
In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
log information for an *AccessControlException* caused by *du* and found that 
the printed log is not accurate: the original AccessControlException is 
swallowed by checkPermission, which makes it hard to judge what really 
happened. At least part of the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison of the above log output, it can be seen that the inode 
information and the exception stack printed in the log are not accurate.

Later, the *inode* information in the original AccessControlException log made 
me realize that the Ranger-HDFS plugin in the current environment does not 
incorporate RANGER-2297.

If an external authorizer does not adjust its authorization logic according to 
HDFS-12130, it is harder to locate the real cause of the problem.

So I think it is necessary to print this part of the log information.


> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
> log information for an *AccessControlException* caused by *du* and found that 
> the printed log is not accurate: the original AccessControlException is 
> swallowed by checkPermission, which makes it hard to judge what really 
> happened. At least part of the original log information should be printed.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> 

[jira] [Commented] (HDFS-17416) [FGL] Monitor threads in BlockManager.class support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829441#comment-17829441
 ] 

ASF GitHub Bot commented on HDFS-17416:
---

ferhui commented on code in PR #6647:
URL: https://github.com/apache/hadoop/pull/6647#discussion_r1533464091


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -5346,11 +5346,12 @@ NamenodeCommand startCheckpoint(NamenodeRegistration 
backupNode,
   public void processIncrementalBlockReport(final DatanodeID nodeID,
   final StorageReceivedDeletedBlocks srdb)
   throws IOException {
-writeLock();
+// Needs the FSWriteLock since it may update quota and access storage 
policyId and full path.

Review Comment:
   means both fs and bm locks are needed here?



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -2150,7 +2155,7 @@ int computeReconstructionWorkForBlocks(
 }
   }
 } finally {
-  namesystem.writeUnlock("computeReconstructionWorkForBlocks");
+  namesystem.writeUnlock(FSNamesystemLockMode.GLOBAL, 
"computeReconstructionWorkForBlocks");

Review Comment:
   Why use the global lock here? Didn't see any operations related to inode.





> [FGL] Monitor threads in BlockManager.class support fine-grained lock
> -
>
> Key: HDFS-17416
> URL: https://issues.apache.org/jira/browse/HDFS-17416
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> There are some monitor threads in BlockManager.class.
>  
> This ticket is used to make these threads support fine-grained locking.
>  * BlockReportProcessingThread
>  * MarkedDeleteBlockScrubber
>  * RedundancyMonitor
>  * Reconstruction Queue Initializer
>  






[jira] [Commented] (HDFS-17370) Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf

2024-03-21 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829437#comment-17829437
 ] 

Takanobu Asanuma commented on HDFS-17370:
-

The problem is fixed by HDFS-17432.

> Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf
> ---
>
> Key: HDFS-17370
> URL: https://issues.apache.org/jira/browse/HDFS-17370
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.4.1, 3.5.0
>
>
> We need to add junit-jupiter-engine dependency for running parameterized 
> tests in hadoop-hdfs-rbf.
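For reference, a `pom.xml` dependency block of the kind described (an illustrative fragment; in the real build the version numbers come from the parent's dependencyManagement, and per the follow-up comment the platform launcher is needed as well as the Jupiter engine):

```xml
<!-- Illustrative fragment: surefire's JUnit Platform integration needs the
     Jupiter engine to run parameterized tests, and junit-platform-launcher
     to avoid NoClassDefFoundError: .../LauncherFactory. -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-engine</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.junit.platform</groupId>
  <artifactId>junit-platform-launcher</artifactId>
  <scope>test</scope>
</dependency>
```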






[jira] [Comment Edited] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829429#comment-17829429
 ] 

Xiaobao Wu edited comment on HDFS-17436 at 3/21/24 8:44 AM:


[~hiwangzhihui] Could you help take a look at this issue? I think it is 
necessary to retain the original AccessControlException information.


was (Author: JIRAUSER304049):
[~hiwangzhihui] Could you take a look at this issue?

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
> log information for an *AccessControlException* caused by *du* and found that 
> the printed log is not accurate: the original AccessControlException is 
> swallowed by checkPermission, which makes it hard to judge what really 
> happened. At least part of the original log information should be printed.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The original AccessControlException information printed:
> {code:java}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>  {code}
> From the comparison results of the above log information, it can be seen that 
> the inode information and the exception stack printed by the log are not 
> accurate.
> Later, the *inode* information in the original AccessControlException log 
> made me realize that the Ranger-HDFS plugin in the current environment does 
> not incorporate RANGER-2297, so I think it is necessary to print this part 
> of the log information.






[jira] [Commented] (HDFS-17435) Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829436#comment-17829436
 ] 

ASF GitHub Bot commented on HDFS-17435:
---

tasanuma commented on PR #6650:
URL: https://github.com/apache/hadoop/pull/6650#issuecomment-2011648612

   Hi @simbadzina and @zhangshuyan0.
   Could you review it when you have bandwidth? This is caused by HDFS-17354.




> Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed
> -
>
> Key: HDFS-17435
> URL: https://issues.apache.org/jira/browse/HDFS-17435
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>
> TestRouterRpc and TestRouterRpcMultiDestination are failing with the 
> following error.
> {noformat}
> [ERROR] testProxyGetBlockKeys  Time elapsed: 0.573 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User: jenkins is not allowed to impersonate jenkins
> {noformat}
> This is caused by testClearStaleNamespacesInRouterStateIdContext() which is 
> implemented by HDFS-17354.






[jira] [Resolved] (HDFS-17354) Delay invoke clearStaleNamespacesInRouterStateIdContext during router start up

2024-03-21 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-17354.
-
Fix Version/s: 3.5.0
   Resolution: Fixed

> Delay invoke  clearStaleNamespacesInRouterStateIdContext during router start 
> up
> ---
>
> Key: HDFS-17354
> URL: https://issues.apache.org/jira/browse/HDFS-17354
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Assignee: lei w
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> We should start the clear-expired-namespaces thread in the RouterRpcServer 
> RUNNING phase, because the StateStoreService is initialized during the 
> initialization phase. Currently, the router throws an IOException during 
> startup.
> {panel:title=Exception}
> 2024-01-09 16:27:06,939 WARN 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer: Could not 
> fetch current list of namespaces.
> java.io.IOException: State Store does not have an interface for 
> MembershipStore
> at 
> org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.getStoreInterface(MembershipNamenodeResolver.java:121)
> at 
> org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.getMembershipStore(MembershipNamenodeResolver.java:102)
> at 
> org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.getNamespaces(MembershipNamenodeResolver.java:388)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.clearStaleNamespacesInRouterStateIdContext(RouterRpcServer.java:434)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {panel}
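The fix described above can be illustrated with a minimal, self-contained sketch (class and method names are illustrative, not the actual Router code): the periodic cleanup task is scheduled only in the RUNNING phase, after the state store has been wired up during initialization, so it can never observe an uninitialized state store.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative lifecycle sketch: init phase prepares the state store,
// RUNNING phase schedules the clearStaleNamespaces task.
class RouterLifecycleSketch {
    private volatile boolean stateStoreReady = false;
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "clear-stale-namespaces");
            t.setDaemon(true);
            return t;
        });
    private ScheduledFuture<?> cleanupTask;

    /** Init phase: state store interfaces become available here. */
    void serviceInit() {
        stateStoreReady = true;
    }

    /** RUNNING phase: only now is the cleanup task scheduled. */
    void serviceStart() {
        cleanupTask = scheduler.scheduleAtFixedRate(
            this::clearStaleNamespaces, 0, 1, TimeUnit.MINUTES);
    }

    private void clearStaleNamespaces() {
        if (!stateStoreReady) {
            // This is the failure mode seen in the log panel above.
            throw new IllegalStateException(
                "State Store does not have an interface for MembershipStore");
        }
        // ... clear expired namespaces via the membership store ...
    }

    boolean cleanupScheduled() {
        return cleanupTask != null;
    }

    void serviceStop() {
        scheduler.shutdownNow();
    }
}
```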



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829429#comment-17829429
 ] 

Xiaobao Wu commented on HDFS-17436:
---

[~hiwangzhihui] Could you take a look at this issue?

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
> log output for an *AccessControlException* caused by a *du* command. I found that 
> the printed log information is not accurate, because the original 
> AccessControlException is swallowed by checkPermission, which makes it hard 
> to judge the real cause of the AccessControlException. At least part 
> of the original log information should be printed.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The original AccessControlException information printed:
> {code:java}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>  {code}
> From the comparison of the log output above, it can be seen that the inode 
> information and the exception stack printed in the log are not 
> accurate.
> Later, the *inode* information in the original 
> AccessControlException log made me realize that the Ranger-HDFS 
> plugin in the current environment does not incorporate RANGER-2297, so I 
> think it is necessary to print this part of the log information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobao Wu updated HDFS-17436:
--
Fix Version/s: 3.3.0
  Description: 
In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the log 
output for an *AccessControlException* caused by a *du* command. I found that the 
printed log information is not accurate, because the original 
AccessControlException is swallowed by checkPermission, which makes it hard to 
judge the real cause of the AccessControlException. At least part of 
the original log information should be printed.

AccessControlException information currently printed:
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
 Permission denied: user=test,access=READ_EXECUTE, 
inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
 The original AccessControlException information printed:
{code:java}
org.apache.hadoop.security.AccessControlException: Permission denied: 
user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
 {code}
From the comparison of the log output above, it can be seen that the inode 
information and the exception stack printed in the log are not accurate.

Later, the *inode* information in the original AccessControlException 
log made me realize that the Ranger-HDFS plugin in the current 
environment does not incorporate RANGER-2297, so I think it is necessary to 
print this part of the log information.
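The improvement proposed here can be sketched as follows (a hedged, self-contained illustration; the class and method names are stand-ins, not the actual FSPermissionChecker code): when the outer check rebuilds the AccessControlException with the full path, it keeps the original exception as the cause instead of dropping it, so both inodes and both stacks survive in the log.

```java
// Stand-in for org.apache.hadoop.security.AccessControlException,
// with a cause-preserving constructor.
class SketchAccessControlException extends Exception {
    SketchAccessControlException(String msg) { super(msg); }
    SketchAccessControlException(String msg, Throwable cause) { super(msg, cause); }
}

class PermissionCheckerSketch {
    // Inner check: throws with only the leaf inode, like check(...).
    private void check(String leafInode) throws SketchAccessControlException {
        throw new SketchAccessControlException(
            "Permission denied: user=test, access=READ_EXECUTE, inode=\""
                + leafInode + "\"");
    }

    // Outer check: rethrows with the full path but preserves the original
    // exception as the cause rather than ignoring it.
    void checkPermission(String fullPath, String leafInode)
            throws SketchAccessControlException {
        try {
            check(leafInode);
        } catch (SketchAccessControlException ace) {
            throw new SketchAccessControlException(
                "Permission denied: user=test, access=READ_EXECUTE, inode=\""
                    + fullPath + "\"", ace);
        }
    }
}
```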

> checkPermission should not ignore original AccessControlException 
> --
>
> Key: HDFS-17436
> URL: https://issues.apache.org/jira/browse/HDFS-17436
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0, 3.3.6
>Reporter: Xiaobao Wu
>Priority: Minor
> Fix For: 3.3.0
>
>
> In an environment where the *Ranger-HDFS* plugin is enabled, I looked at the 
> log output for an *AccessControlException* caused by a *du* command. I found that 
> the printed log information is not accurate, because the original 
> AccessControlException is swallowed by checkPermission, which makes it hard 
> to judge the real cause of the AccessControlException. At least part 
> of the original log information should be printed.
> AccessControlException information currently printed:
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>  Permission denied: user=test,access=READ_EXECUTE, 
> inode="/warehouse/tablespace/managed/hive/test.db/stu/dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:226){code}
>  The original AccessControlException information printed:
> {code:java}
> org.apache.hadoop.security.AccessControlException: Permission denied: 
> user=test,access=READ_EXECUTE, inode="dt=2024-01-17":hive:hadoop:drwxrwx---
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>  {code}
> From the comparison of the log output above, it can be seen that the inode 
> information and the exception stack printed in the log are not 
> accurate.
> Later, the *inode* information in the original 
> AccessControlException log made me realize that the Ranger-HDFS 
> plugin in the current environment does not incorporate RANGER-2297, so I 
> think it is necessary to print this part of the log information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17436) checkPermission should not ignore original AccessControlException

2024-03-21 Thread Xiaobao Wu (Jira)
Xiaobao Wu created HDFS-17436:
-

 Summary: checkPermission should not ignore original 
AccessControlException 
 Key: HDFS-17436
 URL: https://issues.apache.org/jira/browse/HDFS-17436
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.3.6, 3.3.0
Reporter: Xiaobao Wu






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiping Zhang updated HDFS-16016:

Attachment: (was: image-2024-03-21-16-19-33-668.png)

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-20-46-746.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829425#comment-17829425
 ] 

Xiping Zhang commented on HDFS-16016:
-

[~liuguanghua]   Maybe I didn't describe it clearly: the above processing is on 
the namenode side 
and does not need to be handled on the datanode side. The FBR aligns the 
historical block information between the namenode and the datanode. 
Now we can be sure that all the blocks reported from the DN side have been 
added on the namenode side, 
but we cannot guarantee that all the added blocks are included in the blocks 
reported this time, right?

For 3.3.0, just delete the code in the red box.

!image-2024-03-21-16-20-46-746.png!

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-19-33-668.png, 
> image-2024-03-21-16-20-46-746.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiping Zhang updated HDFS-16016:

Attachment: image-2024-03-21-16-20-46-746.png

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-19-33-668.png, 
> image-2024-03-21-16-20-46-746.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17430) RecoveringBlock will skip no live replicas when get block recovery command.

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829422#comment-17829422
 ] 

ASF GitHub Bot commented on HDFS-17430:
---

haiyang1987 commented on PR #6635:
URL: https://github.com/apache/hadoop/pull/6635#issuecomment-2011611948

   Update PR.
   
   Hi @ZanderXu @Hexiaoqiao @dineshchitlangia, please help review this PR 
again when you have free time. Thank you very much.




> RecoveringBlock will skip no live replicas when get block recovery command.
> ---
>
> Key: HDFS-17430
> URL: https://issues.apache.org/jira/browse/HDFS-17430
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> A RecoveringBlock may fail to skip non-live replicas when the block recovery 
> command is built.
> *Issue:*
> Currently, the following scenario can cause the datanode's execution of 
> BlockRecoveryWorker to fail, resulting in the file not being closed 
> for a long time.
> *t1.*  The block_xxx_xxx has two replicas [dn1, dn2]; the dn1 machine shuts down 
> and becomes dead, while dn2 remains live.
> *t2.* Occurs block recovery.
> related logs:
> {code:java}
> 2024-03-13 21:58:00.651 WARN hdfs.StateChange DIR* 
> NameSystem.internalReleaseLease: File /xxx/file has not been closed. Lease 
> recovery is in progress. RecoveryId = 28577373754 for block blk_xxx_xxx
> {code}
> *t3.*  dn2 is chosen for block recovery.
> At this time dn1 is marked as stale (it is in the dead state), so the 
> recoveryLocations size is 1. According to the following logic, both dn1 
> and dn2 will then be chosen to participate in block recovery.
> DatanodeManager#getBlockRecoveryCommand
> {code:java}
>// Skip stale nodes during recovery
>  final List<DatanodeStorageInfo> recoveryLocations =
>  new ArrayList<>(storages.length);
>  final List<Integer> storageIdx = new ArrayList<>(storages.length);
>  for (int i = 0; i < storages.length; ++i) {
>if (!storages[i].getDatanodeDescriptor().isStale(staleInterval)) {
>  recoveryLocations.add(storages[i]);
>  storageIdx.add(i);
>}
>  }
>  ...
>  // If we only get 1 replica after eliminating stale nodes, choose all
>  // replicas for recovery and let the primary data node handle failures.
>  DatanodeInfo[] recoveryInfos;
>  if (recoveryLocations.size() > 1) {
>if (recoveryLocations.size() != storages.length) {
>  LOG.info("Skipped stale nodes for recovery : "
>  + (storages.length - recoveryLocations.size()));
>}
>recoveryInfos = DatanodeStorageInfo.toDatanodeInfos(recoveryLocations);
>  } else {
>// If too many replicas are stale, then choose all replicas to
>// participate in block recovery.
>recoveryInfos = DatanodeStorageInfo.toDatanodeInfos(storages);
>  }
> {code}
> {code:java}
> 2024-03-13 21:58:01,425 INFO  datanode.DataNode 
> (BlockRecoveryWorker.java:logRecoverBlock(563))
> [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] -
> BlockRecoveryWorker: NameNode at xxx:8040 calls 
> recoverBlock(BP-xxx:blk_xxx_xxx, 
> targets=[DatanodeInfoWithStorage[dn1:50010,null,null], 
> DatanodeInfoWithStorage[dn2:50010,null,null]], 
> newGenerationStamp=28577373754, newBlock=null, isStriped=false)
> {code}
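The selection behavior quoted above can be modeled with a simplified, self-contained sketch (the real code works on DatanodeStorageInfo arrays; plain strings stand in for nodes here): stale replicas are skipped, but when at most one replica survives the filter, *all* replicas are chosen, which is how a dead node like dn1 re-enters the recovery target list.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified model of the stale-node filter in
// DatanodeManager#getBlockRecoveryCommand (illustrative only).
class RecoverySelectionSketch {
    // staleByNode maps node name -> whether the node is stale.
    static List<String> chooseRecoveryTargets(Map<String, Boolean> staleByNode) {
        // Skip stale nodes during recovery.
        List<String> recoveryLocations = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : staleByNode.entrySet()) {
            if (!e.getValue()) {
                recoveryLocations.add(e.getKey());
            }
        }
        // If we only get <= 1 replica after eliminating stale nodes, choose
        // all replicas and let the primary datanode handle failures.
        if (recoveryLocations.size() > 1) {
            return recoveryLocations;
        }
        return new ArrayList<>(staleByNode.keySet());
    }
}
```

With replicas {dn1: stale, dn2: live}, only one non-stale replica remains, so the fallback path returns both dn1 and dn2, matching the log in *t3.* above.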
> *t4.* When dn2 executes BlockRecoveryWorker#recover, it calls the 
> initReplicaRecovery operation on dn1. However, since the dn1 machine is 
> down at this time, the call takes a very long time to time out;  
> the default number of retries to establish a server connection is 45.
> related logs:
> {code:java}
> 2024-03-13 21:59:31,518 INFO  ipc.Client 
> (Client.java:handleConnectionTimeout(904)) 
> [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - 
> Retrying connect to server: dn1:8010. Already tried 0 time(s); maxRetries=45
> ...
> 2024-03-13 23:05:35,295 INFO  ipc.Client 
> (Client.java:handleConnectionTimeout(904)) 
> [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - 
> Retrying connect to server: dn2:8010. Already tried 44 time(s); maxRetries=45
> 2024-03-13 23:07:05,392 WARN  protocol.InterDatanodeProtocol 
> (BlockRecoveryWorker.java:recover(170)) 
> [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] -
> Failed to recover block (block=BP-xxx:blk_xxx_xxx, 
> datanode=DatanodeInfoWithStorage[dn1:50010,null,null]) 
> org.apache.hadoop.net.ConnectTimeoutException:
> Call From dn2 to dn1:8010 failed on socket timeout exception: 
> org.apache.hadoop.net.ConnectTimeoutException: 9 millis timeout while 
> waiting for channel to be ready for connect.ch : 
> java.nio.channels.SocketChannel[connection-pending 

[jira] [Updated] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread Xiping Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiping Zhang updated HDFS-16016:

Attachment: image-2024-03-21-16-19-33-668.png

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png, image-2024-03-21-16-19-33-668.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17435) Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17435:
--
Labels: pull-request-available  (was: )

> Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed
> -
>
> Key: HDFS-17435
> URL: https://issues.apache.org/jira/browse/HDFS-17435
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>
> TestRouterRpc and TestRouterRpcMultiDestination are failing with the 
> following error.
> {noformat}
> [ERROR] testProxyGetBlockKeys  Time elapsed: 0.573 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User: jenkins is not allowed to impersonate jenkins
> {noformat}
> This is caused by testClearStaleNamespacesInRouterStateIdContext() which is 
> implemented by HDFS-17354.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17435) Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829411#comment-17829411
 ] 

ASF GitHub Bot commented on HDFS-17435:
---

tasanuma opened a new pull request, #6650:
URL: https://github.com/apache/hadoop/pull/6650

   
   
   
   ### Description of PR
   
   TestRouterRpc and TestRouterRpcMultiDestination are failing with the 
following error.
   
   ```
   [ERROR] testProxyGetBlockKeys  Time elapsed: 0.573 s  <<< ERROR!
   
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 User: jenkins is not allowed to impersonate jenkins
   ```
   
   This is caused by `testClearStaleNamespacesInRouterStateIdContext()` which 
is implemented by HDFS-17354.
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   




> Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed
> -
>
> Key: HDFS-17435
> URL: https://issues.apache.org/jira/browse/HDFS-17435
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> TestRouterRpc and TestRouterRpcMultiDestination are failing with the 
> following error.
> {noformat}
> [ERROR] testProxyGetBlockKeys  Time elapsed: 0.573 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User: jenkins is not allowed to impersonate jenkins
> {noformat}
> This is caused by testClearStaleNamespacesInRouterStateIdContext() which is 
> implemented by HDFS-17354.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-17435) Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed

2024-03-21 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-17435:
---

Assignee: Takanobu Asanuma

> Fix TestRouterRpc#testClearStaleNamespacesInRouterStateIdContext() failed
> -
>
> Key: HDFS-17435
> URL: https://issues.apache.org/jira/browse/HDFS-17435
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>
> TestRouterRpc and TestRouterRpcMultiDestination are failing with the 
> following error.
> {noformat}
> [ERROR] testProxyGetBlockKeys  Time elapsed: 0.573 s  <<< ERROR!
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  User: jenkins is not allowed to impersonate jenkins
> {noformat}
> This is caused by testClearStaleNamespacesInRouterStateIdContext() which is 
> implemented by HDFS-17354.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17414) [FGL] RPCs in DatanodeProtocol support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17414:
--
Labels: pull-request-available  (was: )

> [FGL] RPCs in DatanodeProtocol support fine-grained lock
> 
>
> Key: HDFS-17414
> URL: https://issues.apache.org/jira/browse/HDFS-17414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>
> [FGL] RPCs in DatanodeProtocol support fine-grained lock.
>  * registerDatanode
>  * sendHeartbeat
>  * sendLifeline
>  * blockReport
>  * blockReceivedAndDeleted
>  * errorReport
>  * versionRequest
>  * reportBadBlocks
>  * commitBlockSynchronization



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17414) [FGL] RPCs in DatanodeProtocol support fine-grained lock

2024-03-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829406#comment-17829406
 ] 

ASF GitHub Bot commented on HDFS-17414:
---

ZanderXu opened a new pull request, #6649:
URL: https://github.com/apache/hadoop/pull/6649

   [FGL] RPCs in DatanodeProtocol support fine-grained lock.
   
   - registerDatanode
   - sendHeartbeat
   - sendLifeline
   - blockReport
   - blockReceivedAndDeleted
   - errorReport
   - versionRequest
   - reportBadBlocks
   - commitBlockSynchronization




> [FGL] RPCs in DatanodeProtocol support fine-grained lock
> 
>
> Key: HDFS-17414
> URL: https://issues.apache.org/jira/browse/HDFS-17414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> [FGL] RPCs in DatanodeProtocol support fine-grained lock.
>  * registerDatanode
>  * sendHeartbeat
>  * sendLifeline
>  * blockReport
>  * blockReceivedAndDeleted
>  * errorReport
>  * versionRequest
>  * reportBadBlocks
>  * commitBlockSynchronization



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16016) BPServiceActor add a new thread to handle IBR

2024-03-21 Thread liuguanghua (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829403#comment-17829403
 ] 

liuguanghua commented on HDFS-16016:


In step 4, does it work in the following way?

(1)  In a loop: Heartbeat -> IBR (if needed) -> FBR (every 6h)

(2)  And the DN keeps all blocks (for FBR) in memory and merges every IBR into them

[~zhangxiping], thanks.

> BPServiceActor add a new thread to handle IBR
> -
>
> Key: HDFS-16016
> URL: https://issues.apache.org/jira/browse/HDFS-16016
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.6
>
> Attachments: image-2023-11-03-18-11-54-502.png, 
> image-2023-11-06-10-53-13-584.png, image-2023-11-06-10-55-50-939.png, 
> image-2024-03-20-18-31-23-937.png
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Now BPServiceActor#offerService() is doing many things, FBR, IBR, heartbeat. 
> We can handle IBR independently to improve the performance of heartbeat and 
> FBR.
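The idea of handling IBRs independently can be sketched as follows (a hedged illustration; the class and method names are invented, not the actual BPServiceActor code): incremental block reports are drained by a dedicated sender thread, so the main actor loop only handles heartbeats and the occasional full block report.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative sketch: a dedicated thread consumes block events from a
// queue so IBR sending never blocks the heartbeat loop.
class IbrSenderSketch {
    private final BlockingQueue<String> ibrQueue = new LinkedBlockingQueue<>();
    private final CopyOnWriteArrayList<String> sent = new CopyOnWriteArrayList<>();
    private final Thread ibrThread;

    IbrSenderSketch() {
        ibrThread = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    // Block until a received/deleted-block event arrives,
                    // then report it without stalling the heartbeat loop.
                    sent.add(ibrQueue.take());
                }
            } catch (InterruptedException e) {
                // shutting down
            }
        }, "ibr-sender");
        ibrThread.setDaemon(true);
        ibrThread.start();
    }

    // Called by the datanode when a block is received or deleted.
    void notifyBlockReceived(String blockId) {
        ibrQueue.add(blockId);
    }

    List<String> sentReports() {
        return sent;
    }

    void shutdown() {
        ibrThread.interrupt();
    }
}
```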



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17414) [FGL] RPCs in DatanodeProtocol support fine-grained lock

2024-03-21 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-17414:

Description: 
[FGL] RPCs in DatanodeProtocol support fine-grained lock.
 * registerDatanode
 * sendHeartbeat
 * sendLifeline
 * blockReport
 * blockReceivedAndDeleted
 * errorReport
 * versionRequest
 * reportBadBlocks
 * commitBlockSynchronization

  was:
[FGL] RPCs in DatanodeProtocol support fine-grained lock.
 * registerDatanode
 * sendHeartbeat
 * sendLifeline
 * blockReport
 * cacheReport
 * blockReceivedAndDeleted
 * errorReport
 * versionRequest
 * reportBadBlocks
 * commitBlockSynchronization


> [FGL] RPCs in DatanodeProtocol support fine-grained lock
> 
>
> Key: HDFS-17414
> URL: https://issues.apache.org/jira/browse/HDFS-17414
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> [FGL] RPCs in DatanodeProtocol support fine-grained lock.
>  * registerDatanode
>  * sendHeartbeat
>  * sendLifeline
>  * blockReport
>  * blockReceivedAndDeleted
>  * errorReport
>  * versionRequest
>  * reportBadBlocks
>  * commitBlockSynchronization



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17432) Fix junit dependency to enable JUnit4 tests to run in hadoop-hdfs-rbf

2024-03-21 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-17432.
-
Fix Version/s: 3.4.1
   3.5.0
   Resolution: Fixed

> Fix junit dependency to enable JUnit4 tests to run in hadoop-hdfs-rbf
> -
>
> Key: HDFS-17432
> URL: https://issues.apache.org/jira/browse/HDFS-17432
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1, 3.5.0
>
>
> After HDFS-17370, JUnit4 tests stopped running in hadoop-hdfs-rbf. To enable 
> both JUnit4 and JUnit5 tests to run, we need to add junit-vintage-engine to 
> the hadoop-hdfs-rbf/pom.xml.
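> For illustration, the kind of dependency block this adds to 
> hadoop-hdfs-rbf/pom.xml (a sketch of the idea, not the committed patch; the 
> version is assumed to come from the project's dependency management):
> {code:xml}
> <dependency>
>   <groupId>org.junit.vintage</groupId>
>   <artifactId>junit-vintage-engine</artifactId>
>   <scope>test</scope>
> </dependency>
> {code}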



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org