[jira] [Commented] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836419#comment-17836419
 ] 

ASF GitHub Bot commented on HDFS-17383:
---

hadoop-yetus commented on PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#issuecomment-205102

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 38s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 455 unchanged 
- 0 fixed = 458 total (was 455)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 251m 37s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 427m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6562 |
   | JIRA Issue | HDFS-17383 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0cf9a0341f16 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b1ed1a1cc4aed7414936f60ce07b58214e927d8 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/7/testReport/ |
   | Max. process+thread count | 3049 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console 

[jira] [Commented] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836411#comment-17836411
 ] 

ASF GitHub Bot commented on HDFS-17383:
---

hadoop-yetus commented on PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#issuecomment-2050991527

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  4s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 456 unchanged 
- 0 fixed = 459 total (was 456)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 229m  3s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 372m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6562 |
   | JIRA Issue | HDFS-17383 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f23da8b11928 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b1ed1a1cc4aed7414936f60ce07b58214e927d8 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/8/testReport/ |
   | Max. process+thread count | 4448 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console 

[jira] [Commented] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836405#comment-17836405
 ] 

ASF GitHub Bot commented on HDFS-17461:
---

hadoop-yetus commented on PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#issuecomment-2050962070

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   2m 51s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  37m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 50s |  |  
hadoop-hdfs-project/hadoop-hdfs-client generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  39m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 144m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6721 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 05f56d1e9843 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c5e1b9d29fe1f62c1dd40d55c8b4be8c7b77f943 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 

[jira] [Commented] (HDFS-17397) Choose another DN as soon as possible, when encountering network issues

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836400#comment-17836400
 ] 

ASF GitHub Bot commented on HDFS-17397:
---

tangphucnhan commented on PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#issuecomment-2050879234

   Thanks!
   
   On Thu, Mar 28, 2024 at 4:17 PM Apache Hadoop Yetus Account
   <***@***.***> wrote:
   
   >  *-1 overall*
   > Vote Subsystem Runtime Logfile Comment
   > +0  reexec 0m 31s Docker mode activated.
   > _ Prechecks _
   > +1  dupname 0m 0s No case conflicting files found.
   > +0  codespell 0m 1s codespell was not available.
   > +0  detsecrets 0m 1s detect-secrets was not available.
   > +1  @author 0m 0s The patch does not contain any @author tags.
   > -1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or
   > modified tests. Please justify why no new tests are needed for this patch.
   > Also please list what manual steps were performed to verify this patch.
   > _ trunk Compile Tests _
   > +1  mvninstall 44m 30s trunk passed
   > +1  compile 1m 1s trunk passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1  compile 0m 57s trunk passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > +1  checkstyle 0m 34s trunk passed
   > +1  mvnsite 0m 59s trunk passed
   > +1  javadoc 0m 50s trunk passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1  javadoc 0m 44s trunk passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > -1 ❌ spotbugs 2m 38s
   > /branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
   > hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs
   > warnings.
   > +1  shadedclient 34m 49s branch has no errors when building and testing
   > our client artifacts.
   > _ Patch Compile Tests _
   > +1  mvninstall 0m 49s the patch passed
   > +1  compile 0m 53s the patch passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1  javac 0m 53s the patch passed
   > +1  compile 0m 45s the patch passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > +1  javac 0m 45s the patch passed
   > +1  blanks 0m 0s The patch has no blanks issues.
   > +1  checkstyle 0m 21s the patch passed
   > +1  mvnsite 0m 47s the patch passed
   > +1  javadoc 0m 36s the patch passed with JDK
   > Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > +1  javadoc 0m 35s the patch passed with JDK Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > +1  spotbugs 2m 34s the patch passed
   > +1  shadedclient 34m 38s patch has no errors when building and testing
   > our client artifacts.
   > _ Other Tests _
   > +1  unit 2m 25s hadoop-hdfs-client in the patch passed.
   > +1  asflicense 0m 37s The patch does not generate ASF License warnings.
   > 135m 1s
   > Subsystem Report/Notes
   > Docker ClientAPI=1.45 ServerAPI=1.45 base:
   > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/12/artifact/out/Dockerfile
   > GITHUB PR #6591
   > Optional Tests dupname asflicense compile javac javadoc mvninstall
   > mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
   > uname Linux 8e980caff1e4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9
   > 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
   > Build tool maven
   > Personality dev-support/bin/hadoop.sh
   > git revision trunk / 73d6c12
   > Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > Multi-JDK versions
   > /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
   > /usr/lib/jvm/java-8-openjdk-amd64:Private
   > Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
   > Test Results
   > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/12/testReport/
   > Max. process+thread count 552 (vs. ulimit of 5500)
   > modules C: hadoop-hdfs-project/hadoop-hdfs-client U:
   > hadoop-hdfs-project/hadoop-hdfs-client
   > Console output
   > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6591/12/console
   > versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
   > Powered by Apache Yetus 0.14.0 https://yetus.apache.org
   >
   > This message was automatically generated.
   >
   > —
   > Reply to this email directly, view it on GitHub, or unsubscribe.

[jira] [Assigned] (HDFS-17459) [FGL] Summarize this feature

2024-04-11 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu reassigned HDFS-17459:
---

Assignee: Felix N  (was: ZanderXu)

> [FGL] Summarize this feature 
> -
>
> Key: HDFS-17459
> URL: https://issues.apache.org/jira/browse/HDFS-17459
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: Felix N
>Priority: Major
>
> Write a doc to summarize this feature so we can merge it into the trunk.






[jira] [Commented] (HDFS-17459) [FGL] Summarize this feature

2024-04-11 Thread ZanderXu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836399#comment-17836399
 ] 

ZanderXu commented on HDFS-17459:
-

Sure, thanks

> [FGL] Summarize this feature 
> -
>
> Key: HDFS-17459
> URL: https://issues.apache.org/jira/browse/HDFS-17459
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> Write a doc to summarize this feature so we can merge it into the trunk.






[jira] [Commented] (HDFS-17459) [FGL] Summarize this feature

2024-04-11 Thread Felix N (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836396#comment-17836396
 ] 

Felix N commented on HDFS-17459:


I can help with this one. I assume it's documentation for this feature?

> [FGL] Summarize this feature 
> -
>
> Key: HDFS-17459
> URL: https://issues.apache.org/jira/browse/HDFS-17459
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> Write a doc to summarize this feature so we can merge it into the trunk.






[jira] [Commented] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836394#comment-17836394
 ] 

ASF GitHub Bot commented on HDFS-17424:
---

ferhui commented on code in PR #6696:
URL: https://github.com/apache/hadoop/pull/6696#discussion_r1561943317


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenSecretManager.java:
##
@@ -401,7 +402,10 @@ protected void logExpireToken(final 
DelegationTokenIdentifier dtId)
   // closes the edit log files. Doing this inside the
   // fsn lock will prevent being interrupted when stopping
   // the secret manager.
-  namesystem.readLockInterruptibly();
+  // TODO: the delegation token is a largely independent system, so
+  // it would be proper to use a separate r/w lock instead of the fs lock
+  // for getting/renewing/expiring/canceling tokens or updating the master key.

Review Comment:
   @yuanboliu @ZanderXu should we modify the comment or keep it as is?
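
For context, a minimal sketch of the direction the TODO above suggests: a dedicated read/write lock for token operations instead of the namesystem (fs) lock. The class and method names here are assumptions for illustration, not the actual patch:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch, not DelegationTokenSecretManager itself.
class TokenLockSketch {
  // Separate lock guarding token get/renew/expire/cancel and master-key
  // updates, so these operations no longer contend on the namesystem lock.
  private final ReadWriteLock tokenLock = new ReentrantReadWriteLock(true);

  void logExpireToken() throws InterruptedException {
    tokenLock.readLock().lockInterruptibly();
    try {
      // ... log the token expiration to the edit log (elided) ...
    } finally {
      tokenLock.readLock().unlock();
    }
  }
}
```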





> [FGL] DelegationTokenSecretManager supports fine-grained lock
> -
>
> Key: HDFS-17424
> URL: https://issues.apache.org/jira/browse/HDFS-17424
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: ZanderXu
>Assignee: Yuanbo Liu
>Priority: Major
>  Labels: pull-request-available
>
> DelegationTokenSecretManager supports fine-grained lock






[jira] [Commented] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836392#comment-17836392
 ] 

ASF GitHub Bot commented on HDFS-17461:
---

haiyang1987 commented on code in PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#discussion_r1561931239


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PeerCache.java:
##
@@ -155,7 +155,7 @@ public Peer get(DatanodeID dnId, boolean isDomain) {
 
   private synchronized Peer getInternal(DatanodeID dnId, boolean isDomain) {
    List<Value> sockStreamList = multimap.get(new Key(dnId, isDomain));
-if (sockStreamList == null) {
+if (sockStreamList.isEmpty()) {
   return null;

Review Comment:
   Thanks @ayushtkn for your comment.
   Updated the PR; please help review it again, thanks~





> Fix spotbugs in PeerCache#getInternal
> -
>
> Key: HDFS-17461
> URL: https://issues.apache.org/jira/browse/HDFS-17461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> Fix spotbugs in PeerCache#getInternal 
> Spotbugs warnings:
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html






[jira] [Commented] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836372#comment-17836372
 ] 

ASF GitHub Bot commented on HDFS-17461:
---

ayushtkn commented on code in PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#discussion_r1561813524


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/PeerCache.java:
##
@@ -155,7 +155,7 @@ public Peer get(DatanodeID dnId, boolean isDomain) {
 
   private synchronized Peer getInternal(DatanodeID dnId, boolean isDomain) {
    List<Value> sockStreamList = multimap.get(new Key(dnId, isDomain));
-if (sockStreamList == null) {
+if (sockStreamList.isEmpty()) {
   return null;

Review Comment:
   We can just drop this if check itself; the logic below can safely handle an 
empty list.
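
A minimal, self-contained sketch of that simplification, with stand-in types instead of the real PeerCache internals: Guava's LinkedListMultimap.get() returns an empty list view for an absent key, so the iteration visits nothing and no guard is needed.

```java
import com.google.common.collect.LinkedListMultimap;
import java.util.Iterator;

public class GetInternalSketch {
  private static final LinkedListMultimap<String, String> multimap =
      LinkedListMultimap.create();

  // Stand-in for PeerCache#getInternal: get() never returns null, and an
  // empty list simply ends the loop, so the if check can be dropped.
  static synchronized String getInternal(String key) {
    Iterator<String> iter = multimap.get(key).iterator();
    while (iter.hasNext()) {
      String peer = iter.next();
      iter.remove(); // hand out the cached entry and drop it from the cache
      return peer;
    }
    return null; // cache miss
  }

  public static void main(String[] args) {
    System.out.println(getInternal("dn-1")); // prints "null", no exception
  }
}
```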





> Fix spotbugs in PeerCache#getInternal
> -
>
> Key: HDFS-17461
> URL: https://issues.apache.org/jira/browse/HDFS-17461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> Fix spotbugs in PeerCache#getInternal 
> Spotbugs warnings:
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html






[jira] [Commented] (HDFS-17439) Improve NNThroughputBenchmark to allow non super user to use the tool

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836354#comment-17836354
 ] 

ASF GitHub Bot commented on HDFS-17439:
---

fateh288 commented on PR #6677:
URL: https://github.com/apache/hadoop/pull/6677#issuecomment-2050492580

   Requesting review on this patch.
   The style check failures come from legacy code and were not introduced by 
this patch.
   The unit test failures are also unrelated: the same patch passed the unit 
tests previously, and the follow-up patch contains no logic changes, only style 
fixes.




> Improve NNThroughputBenchmark to allow non super user to use the tool
> -
>
> Key: HDFS-17439
> URL: https://issues.apache.org/jira/browse/HDFS-17439
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks, namenode
>Reporter: Fateh Singh
>Priority: Major
>  Labels: pull-request-available
>
> The NNThroughputBenchmark can only be used with the hdfs user or any user with 
> superuser privileges, since entering/exiting safemode is a privileged 
> operation. However, when using a superuser, ACL checks are skipped. This 
> renders the tool useless for testing namenode performance together with 
> authorization frameworks such as Apache Ranger or any other authorization 
> framework.
> An optional argument such as -nonSuperUser can be used to skip statements 
> such as entering/exiting safemode. This optional argument makes the tool 
> useful for incorporating authorization frameworks into performance 
> estimation flows.
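
A minimal sketch of the gating the description proposes; the flag and class names are assumptions for illustration, not the actual NNThroughputBenchmark patch:

```java
// Hypothetical sketch: skip privileged safemode calls when the benchmark is
// run by a non-superuser, so ACL checks stay in effect during the run.
final class SafeModeGateSketch {
  private final boolean nonSuperUser; // assumed -nonSuperUser flag

  SafeModeGateSketch(boolean nonSuperUser) {
    this.nonSuperUser = nonSuperUser;
  }

  // Wraps a privileged operation such as entering or exiting safemode.
  void runPrivileged(Runnable privilegedOp) {
    if (nonSuperUser) {
      return; // privileged ops are skipped rather than failing the run
    }
    privilegedOp.run();
  }
}
```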






[jira] [Commented] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836199#comment-17836199
 ] 

ASF GitHub Bot commented on HDFS-17461:
---

hadoop-yetus commented on PR #6721:
URL: https://github.com/apache/hadoop/pull/6721#issuecomment-2049777273

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   2m 50s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  39m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  
hadoop-hdfs-project/hadoop-hdfs-client generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  37m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6721/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6721 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 33e5234aa821 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4116fa0bb6144c988fc8a5291d16d01107e42121 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 

[jira] [Commented] (HDFS-17458) Remove unnecessary BP lock in ReplicaMap

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836192#comment-17836192
 ] 

ASF GitHub Bot commented on HDFS-17458:
---

hadoop-yetus commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049744473

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 46s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  42m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 265m 27s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 442m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6717 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e0f27f718e00 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4acc5b4369d4e0528645386df1720ee3bb8cced3 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/4/testReport/ |
   | Max. process+thread count | 2635 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[jira] [Updated] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17462:
--
Labels: pull-request-available  (was: )

> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Minor
>  Labels: pull-request-available
>
> When trg of Router concat is an empty file, it will trigger NPE in Router, 
> and the concat will fail, example:
> This is because when trg is an empty file, NameNode will return 
> lastLocatedBlock as null in the response of getBlockLocations. And Router 
> will not check null of lastLocatedBlock returned, instead Router will use it 
> to get block pool id directly.
> Trg of concat is an empty file should be allowed in router since this case is 
> supported by concat of NameNode.






[jira] [Commented] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836182#comment-17836182
 ] 

ASF GitHub Bot commented on HDFS-17462:
---

fannaihao opened a new pull request, #6722:
URL: https://github.com/apache/hadoop/pull/6722

   
   
   ### Description of PR
   When the trg of a Router concat is an empty file, it will trigger an NPE in 
the Router and the concat will fail; example:
   
![image](https://github.com/apache/hadoop/assets/40593494/4edc0aed-08ee-4e1d-8236-84c20f61d15d)
   
   This is because when trg is an empty file, the NameNode returns a null 
lastLocatedBlock in the getBlockLocations response, and the Router uses the 
returned lastLocatedBlock directly to get the block pool id, without a null 
check.
   An empty trg file should be allowed in the Router, since this case is 
supported by the NameNode's concat.
   This PR fixes this NPE.
   
   ### How was this patch tested?
   
![image](https://github.com/apache/hadoop/assets/40593494/23a46672-cd3a-4a54-8f4d-9c833b2d560c)
   
   
   ### For code changes:
   If the lastLocatedBlock returned from getBlockLocations is null in Router 
concat, it is not used to get the block pool id.
   In this case, the block pool id check of trg is deferred: concat continues 
to get and check the block pool ids of the src files, and only checks those.
   The trg block pool id is then checked in the following steps, i.e., 
getLocationForPath and the concat request forwarded to the NameNode.
   An exception is thrown if the block pool id of trg does not match the block 
pool id of any file in src; a minimal sketch of the null guard follows.
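
A minimal sketch of that null guard, assuming a hypothetical helper around the RBF concat path (the real RouterClientProtocol code is more involved):

```java
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

class ConcatTrgGuardSketch {
  /**
   * Returns the block pool id of trg, or null when trg is an empty file
   * (i.e., getLastLocatedBlock() is null), letting the caller defer the check.
   */
  static String trgBlockPoolId(LocatedBlocks trgBlocks) {
    LocatedBlock last = trgBlocks.getLastLocatedBlock();
    return last == null ? null : last.getBlock().getBlockPoolId();
  }
}
```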
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Minor
>
> When trg of Router concat is an empty file, it will trigger NPE in Router, 
> and the concat will fail, example:
> This is because when trg is an empty file, NameNode will return 
> lastLocatedBlock as null in the response of getBlockLocations. And Router 
> will not check null of lastLocatedBlock returned, instead Router will use it 
> to get block pool id directly.
> Trg of concat is an empty file should be allowed in router since this case is 
> supported by concat of NameNode.






[jira] [Commented] (HDFS-17458) Remove unnecessary BP lock in ReplicaMap

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836178#comment-17836178
 ] 

ASF GitHub Bot commented on HDFS-17458:
---

hfutatzhanghb commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049700613

   > @hfutatzhanghb Thanks for your work. We should be careful about removing 
   > the BP lock here. Take one of the changes as an example: before this PR it 
   > returned a definite value because the RW lock was held, but after this PR 
   > the result is uncertain. For instance, if another thread invokes `map.put` 
   > between `map.get` and the `return`, it will return null, but if `map.put` 
   > is invoked before them, it will return a `ReplicaInfo` object.
   > 
   > ```
   >   ReplicaInfo get(String bpid, long blockId) {
   >     checkBlockPool(bpid);
   > -   try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) {
   > -     LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
   > -     return m != null ? m.get(new Block(blockId)) : null;
   > -   }
   > +   LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
   > +   return m != null ? m.get(new Block(blockId)) : null;
   >   }
   > ```
   > 
   > I didn't traverse all the invokers here, and I'm not sure whether it will 
   > involve some potential risk. FYI.
   
   Sir, thanks for your reply. Yes, we need to be very careful when modifying 
the ReplicaMap class. In fact, I have checked the methods one by one, and I 
think we can push this PR forward after it has run stably on our production 
clusters for a long time.
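
A self-contained illustration of the check-then-act window discussed above (not the ReplicaMap code itself): each map operation stays thread safe on its own, but without the outer lock the outcome of get depends on when a concurrent put lands.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CheckThenActSketch {
  private final Map<String, Map<Long, String>> map = new ConcurrentHashMap<>();

  // A concurrent put of bpid between map.get(bpid) and the return can make
  // two near-simultaneous calls observe different results: one null, one a
  // value. Each individual operation is still thread safe.
  String get(String bpid, long blockId) {
    Map<Long, String> m = map.get(bpid);
    return m != null ? m.get(blockId) : null;
  }
}
```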




> Remove unnecessary BP lock in ReplicaMap
> 
>
> Key: HDFS-17458
> URL: https://issues.apache.org/jira/browse/HDFS-17458
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-16429 we made LightWeightResizableGSet thread safe, and in 
> HDFS-16511 we changed some methods in ReplicaMap to acquire the read lock 
> instead of the write lock.
> This PR tries to remove the unnecessary Block_Pool read lock further.
> Recently, I performed stress tests on datanodes to measure their read/write 
> operations per second.
> Before removing these locks, a datanode could only achieve ~2K write ops; 
> after the optimization, it can achieve more than 5K write ops.






[jira] [Commented] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread NaihaoFan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836159#comment-17836159
 ] 

NaihaoFan commented on HDFS-17462:
--

Hi [~Keepromise], thanks for your comment; I will update the picture later.

There is already a fix; I'm working on raising the PR now, and it has been 
verified inside my team.

> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Minor
>
> When trg of Router concat is an empty file, it will trigger NPE in Router, 
> and the concat will fail, example:
> This is because when trg is an empty file, NameNode will return 
> lastLocatedBlock as null in the response of getBlockLocations. And Router 
> will not check null of lastLocatedBlock returned, instead Router will use it 
> to get block pool id directly.
> Trg of concat is an empty file should be allowed in router since this case is 
> supported by concat of NameNode.






[jira] [Updated] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread NaihaoFan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NaihaoFan updated HDFS-17462:
-
Description: 
When trg of Router concat is an empty file, it will trigger NPE in Router, and 
the concat will fail, example:

This is because when trg is an empty file, NameNode will return 
lastLocatedBlock as null in the response of getBlockLocations. And Router will 
not check null of lastLocatedBlock returned, instead Router will use it to get 
block pool id directly.
Trg of concat is an empty file should be allowed in router since this case is 
supported by concat of NameNode.

  was:
When trg of Router concat is an empty file, it will trigger NPE in Router, and 
the concat will fail, example:
!https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png|width=980,height=229!
This is because when trg is an empty file, NameNode will return 
lastLocatedBlock as null in the response of getBlockLocations. And Router will 
not check null of lastLocatedBlock returned, instead Router will use it to get 
block pool id directly.
Trg of concat is an empty file should be allowed in router since this case is 
supported by concat of NameNode.


> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Minor
>
> When trg of Router concat is an empty file, it will trigger NPE in Router, 
> and the concat will fail, example:
> This is because when trg is an empty file, NameNode will return 
> lastLocatedBlock as null in the response of getBlockLocations. And Router 
> will not check null of lastLocatedBlock returned, instead Router will use it 
> to get block pool id directly.
> Trg of concat is an empty file should be allowed in router since this case is 
> supported by concat of NameNode.






[jira] [Commented] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread Jian Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836146#comment-17836146
 ] 

Jian Zhang commented on HDFS-17462:
---

[~naihaofan]  hi, I can't see your picture, and if you haven't fixed this 
problem yet, I can try to fix it.

> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Minor
>
> When trg of Router concat is an empty file, it will trigger NPE in Router, 
> and the concat will fail, example:
> !https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png|width=980,height=229!
> This is because when trg is an empty file, NameNode will return 
> lastLocatedBlock as null in the response of getBlockLocations. And Router 
> will not check null of lastLocatedBlock returned, instead Router will use it 
> to get block pool id directly.
> Trg of concat is an empty file should be allowed in router since this case is 
> supported by concat of NameNode.






[jira] [Commented] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836136#comment-17836136
 ] 

ASF GitHub Bot commented on HDFS-17461:
---

haiyang1987 opened a new pull request, #6721:
URL: https://github.com/apache/hadoop/pull/6721

   ### Description of PR
   https://issues.apache.org/jira/browse/HDFS-17461
   
   Fix spotbugs in PeerCache#getInternal
   
   Spotbugs warnings:
   
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html
   
   `private final LinkedListMultimap<Key, Value> multimap = 
LinkedListMultimap.create();`
   Per the Guava javadoc, get(key) returns a collection view containing the 
values associated with key in this multimap, if any. Note that even when 
containsKey(key) is false, get(key) still returns an empty collection, not null.
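
A small demo of that documented Guava behavior (a sketch for illustration, not part of the patch):

```java
import com.google.common.collect.LinkedListMultimap;
import java.util.List;

public class MultimapGetDemo {
  public static void main(String[] args) {
    LinkedListMultimap<String, Integer> multimap = LinkedListMultimap.create();
    List<Integer> values = multimap.get("missing-key");
    System.out.println(values == null);   // false: get() never returns null
    System.out.println(values.isEmpty()); // true: empty view for absent keys
  }
}
```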
   




> Fix spotbugs in PeerCache#getInternal
> -
>
> Key: HDFS-17461
> URL: https://issues.apache.org/jira/browse/HDFS-17461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> Fix spotbugs in PeerCache#getInternal 
> Spotbugs warnings:
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html






[jira] [Updated] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17461:
--
Labels: pull-request-available  (was: )

> Fix spotbugs in PeerCache#getInternal
> -
>
> Key: HDFS-17461
> URL: https://issues.apache.org/jira/browse/HDFS-17461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> Fix spotbugs in PeerCache#getInternal 
> Spotbugs warnings:
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html






[jira] [Commented] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836134#comment-17836134
 ] 

ASF GitHub Bot commented on HDFS-17455:
---

haiyang1987 commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2049510136

   Thanks @ZanderXu @Hexiaoqiao for reviewing and merging it.




> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When the client reads data and connects to the datanode, an invalid datanode 
> access token causes an InvalidBlockTokenException to be thrown. The 
> subsequent call to the fetchBlockAt method then throws 
> java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * The DFSInputStream#getBlockReader call connects to the datanode; because 
> the datanode access token is invalid at this time, it throws 
> InvalidBlockTokenException, and the subsequent DFSInputStream#fetchBlockAt 
> call throws java.lang.IndexOutOfBoundsException.
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target)
>  throws IOException {
>if (target >= getFileLength()) {
>// the target size is smaller than fileLength (completeBlockSize + 
> lastBlockBeingWrittenLength),
>// here at this time target is 1024 and getFileLength is 2048
>  throw new IOException("Attempted to read past end of file");
>}
>...
>while (true) {
>  ...
>  try {
>blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>storageType, chosenNode);
>if(connectFailedOnce) {
>  DFSClient.LOG.info("Successfully connected to " + targetAddr +
> " for " + targetBlock.getBlock());
>}
>return chosenNode;
>  } catch (IOException ex) {
>...
>} else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>  refetchToken--;
>  // Here will catch InvalidBlockTokenException.
>  fetchBlockAt(target);
>} else {
>  ...
>}
>  }
>}
>  }
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>   throws IOException {
> maybeRegisterBlockRefresh();
> synchronized(infoLock) {
>   // Here the locatedBlocks only contains one locatedBlock, at this time 
> the offset is 1024 and fileLength is 0,
>   // so the targetBlockIdx is -2
>   int targetBlockIdx = locatedBlocks.findBlock(offset);
>   if (targetBlockIdx < 0) { // block is not cached
> targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
> // Here the targetBlockIdx is 1;
> useCache = false;
>   }
>   if (!useCache) { // fetch blocks
> final LocatedBlocks newBlocks = (length == 0)
> ? dfsClient.getLocatedBlocks(src, offset)
> : dfsClient.getLocatedBlocks(src, offset, length);
> if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>   throw new EOFException("Could not find target position " + offset);
> }
> // Update the LastLocatedBlock, if offset is for last block.
> if (offset >= locatedBlocks.getFileLength()) {
>   setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
> } else {
>   locatedBlocks.insertRange(targetBlockIdx,
>   newBlocks.getLocatedBlocks());
> }
>   }
>   // Here the locatedBlocks only contains one locatedBlock, so will throw 
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>   return locatedBlocks.get(targetBlockIdx);
> }
>   }
> {code}
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
> at 
> java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
> at 
> java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
> at 
> java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
> at java.base/java.util.Objects.checkIndex(Objects.java:359)
> at java.base/java.util.ArrayList.get(ArrayList.java:427)
> at 
> org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
>   
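
To make the index arithmetic in the report above concrete, here is a minimal, self-contained sketch of the same binary-search encoding. The class name, the offset list, and the values are illustrative stand-ins, not the Hadoop classes; the point is how a search miss past the single cached block yields index 1 for a one-element list.

{code:java}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Illustrative stand-in for LocatedBlocks#findBlock / #getInsertIndex (not the
// Hadoop classes): findBlock is a binary search that, on a miss, returns the
// encoded insertion point -(insertionPoint) - 1, and getInsertIndex decodes it
// back with -(idx + 1).
public class InsertIndexSketch {
  static int getInsertIndex(int binSearchResult) {
    return -(binSearchResult + 1);
  }

  public static void main(String[] args) {
    // One cached block starting at offset 0, as in the single-RBW-block case.
    List<Long> blockStartOffsets = Arrays.asList(0L);
    long offset = 1024L;

    // Searching for 1024 in [0] misses past the single element:
    // insertion point 1, encoded as -(1) - 1 = -2.
    int idx = Collections.binarySearch(blockStartOffsets, offset);
    int targetBlockIdx = getInsertIndex(idx);
    System.out.println(idx + " -> " + targetBlockIdx); // prints "-2 -> 1"

    // get(1) on a one-element list is exactly the reported
    // "Index 1 out of bounds for length 1".
    blockStartOffsets.get(targetBlockIdx); // throws IndexOutOfBoundsException
  }
}
{code}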

[jira] [Commented] (HDFS-17458) Remove unnecessary BP lock in ReplicaMap

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836122#comment-17836122
 ] 

ASF GitHub Bot commented on HDFS-17458:
---

hadoop-yetus commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049462537

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 197m  0s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 292m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6717 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 159c60de77f8 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4acc5b4369d4e0528645386df1720ee3bb8cced3 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/3/testReport/ |
   | Max. process+thread count | 4388 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[jira] [Updated] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread NaihaoFan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NaihaoFan updated HDFS-17462:
-
Priority: Minor  (was: Critical)

> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Minor
>
> When the trg of a Router concat is an empty file, it triggers an NPE in the 
> Router and the concat fails, for example:
> !https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png!
> This happens because, when trg is an empty file, the NameNode returns a null 
> lastLocatedBlock in the getBlockLocations response. The Router does not 
> null-check the returned lastLocatedBlock; instead it uses it directly to get 
> the block pool id.
> An empty trg file should be allowed in the Router concat, since this case is 
> supported by the NameNode's concat.
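
A fix along the lines the description suggests only needs a null check before the block pool id is read. The following is a minimal sketch of that control flow; the nested interfaces are hypothetical stand-ins for the HDFS types, not the actual Router source.

{code:java}
// Stand-in types modeling the API shape described above; only the control
// flow (null-check before dereference) is the point of this sketch.
public final class ConcatTargetSketch {
  interface Block { String getBlockPoolId(); }
  interface LocatedBlock { Block getBlock(); }
  interface LocatedBlocks { LocatedBlock getLastLocatedBlock(); }

  // Returns the block pool id of trg's last block, or null for an empty file.
  static String blockPoolIdOfTarget(LocatedBlocks blocks) {
    LocatedBlock last = blocks.getLastLocatedBlock();
    if (last == null) {
      // Empty trg: the NameNode returned no last block, so there is nothing
      // to dereference; the caller must handle this instead of hitting an NPE.
      return null;
    }
    return last.getBlock().getBlockPoolId();
  }

  public static void main(String[] args) {
    LocatedBlocks emptyTrg = () -> null; // models the empty-file response
    System.out.println(blockPoolIdOfTarget(emptyTrg)); // prints null, no NPE
  }
}
{code}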



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread NaihaoFan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NaihaoFan updated HDFS-17462:
-
Priority: Critical  (was: Major)

> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Critical
>
> When the trg of a Router concat is an empty file, it triggers an NPE in the 
> Router and the concat fails, for example:
> !https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png!
> This happens because, when trg is an empty file, the NameNode returns a null 
> lastLocatedBlock in the getBlockLocations response. The Router does not 
> null-check the returned lastLocatedBlock; instead it uses it directly to get 
> the block pool id.
> An empty trg file should be allowed in the Router concat, since this case is 
> supported by the NameNode's concat.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread NaihaoFan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NaihaoFan updated HDFS-17462:
-
Description: 
When the trg of a Router concat is an empty file, it triggers an NPE in the 
Router and the concat fails, for example:
!https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png|width=980,height=229!
This happens because, when trg is an empty file, the NameNode returns a null 
lastLocatedBlock in the getBlockLocations response. The Router does not 
null-check the returned lastLocatedBlock; instead it uses it directly to get 
the block pool id.
An empty trg file should be allowed in the Router concat, since this case is 
supported by the NameNode's concat.

  was:
When the trg of a Router concat is an empty file, it triggers an NPE in the 
Router and the concat fails, for example:
!https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png!
This happens because, when trg is an empty file, the NameNode returns a null 
lastLocatedBlock in the getBlockLocations response. The Router does not 
null-check the returned lastLocatedBlock; instead it uses it directly to get 
the block pool id.
An empty trg file should be allowed in the Router concat, since this case is 
supported by the NameNode's concat.


> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Minor
>
> When the trg of a Router concat is an empty file, it triggers an NPE in the 
> Router and the concat fails, for example:
> !https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png|width=980,height=229!
> This happens because, when trg is an empty file, the NameNode returns a null 
> lastLocatedBlock in the getBlockLocations response. The Router does not 
> null-check the returned lastLocatedBlock; instead it uses it directly to get 
> the block pool id.
> An empty trg file should be allowed in the Router concat, since this case is 
> supported by the NameNode's concat.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread NaihaoFan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NaihaoFan updated HDFS-17462:
-
Description: 
When the trg of a Router concat is an empty file, it triggers an NPE in the 
Router and the concat fails, for example:
!https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png!
This happens because, when trg is an empty file, the NameNode returns a null 
lastLocatedBlock in the getBlockLocations response. The Router does not 
null-check the returned lastLocatedBlock; instead it uses it directly to get 
the block pool id.
An empty trg file should be allowed in the Router concat, since this case is 
supported by the NameNode's concat.

> NPE in Router concat when trg is an empty file.
> ---
>
> Key: HDFS-17462
> URL: https://issues.apache.org/jira/browse/HDFS-17462
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.2, 3.3.6
>Reporter: NaihaoFan
>Priority: Major
>
> When the trg of a Router concat is an empty file, it triggers an NPE in the 
> Router and the concat fails, for example:
> !https://msasg.visualstudio.com/b9b38275-a912-4222-a7b7-0b8b968719c0/_apis/git/repositories/39259a38-581f-4d2a-9dfd-4f4660702c00/pullRequests/4614947/attachments/image%20%284%29.png!
> This happens because, when trg is an empty file, the NameNode returns a null 
> lastLocatedBlock in the getBlockLocations response. The Router does not 
> null-check the returned lastLocatedBlock; instead it uses it directly to get 
> the block pool id.
> An empty trg file should be allowed in the Router concat, since this case is 
> supported by the NameNode's concat.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17462) NPE in Router concat when trg is an empty file.

2024-04-11 Thread NaihaoFan (Jira)
NaihaoFan created HDFS-17462:


 Summary: NPE in Router concat when trg is an empty file.
 Key: HDFS-17462
 URL: https://issues.apache.org/jira/browse/HDFS-17462
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.3.6, 2.10.2
Reporter: NaihaoFan






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17458) Remove unnecessary BP lock in ReplicaMap

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836102#comment-17836102
 ] 

ASF GitHub Bot commented on HDFS-17458:
---

Hexiaoqiao commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049395174

   @hfutatzhanghb Thanks for your work. We should be careful about removing the 
BP lock here. Taking one of the changes as an example: before this PR the method 
returns a definite value because it holds the read lock, but after this PR the 
result is uncertain. If another thread invokes `map.put` between `map.get` and 
the `return`, it returns null; if the `map.put` happens before them, it returns 
a `ReplicaInfo` object.
   
   ```
     ReplicaInfo get(String bpid, long blockId) {
       checkBlockPool(bpid);
     - try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) {
     -   LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
     -   return m != null ? m.get(new Block(blockId)) : null;
     - }
     + LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
     + return m != null ? m.get(new Block(blockId)) : null;
     }
   ```
   
   I didn't traverse all the invokers here, and I am not sure whether this 
involves some potential risk. FYI.
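
   The interleaving described above can be reproduced with a tiny stand-alone demo (a sketch with stand-in map and key types, not the ReplicaMap source); whether the lock-free read observes the entry depends entirely on thread scheduling:

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;

   public class LocklessGetRace {
     static final Map<String, String> map = new ConcurrentHashMap<>();

     static String get(String bpid) {
       // Lock-free read: a concurrent put() may land just before or just
       // after this lookup, which is exactly the uncertainty noted above.
       return map.get(bpid);
     }

     public static void main(String[] args) throws InterruptedException {
       Thread writer = new Thread(() -> map.put("bp-1", "replica"));
       Thread reader = new Thread(() -> System.out.println("read: " + get("bp-1")));
       writer.start();
       reader.start(); // prints "read: replica" or "read: null", run-dependent
       writer.join();
       reader.join();
     }
   }
   ```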




> Remove unnecessary BP lock in ReplicaMap
> 
>
> Key: HDFS-17458
> URL: https://issues.apache.org/jira/browse/HDFS-17458
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-16429 we made LightWeightResizableGSet thread safe, and in HDFS-16511 
> we changed some methods in ReplicaMap to acquire the read lock instead of the 
> write lock.
> This PR tries to go further and remove the unnecessary Block_Pool read lock.
> Recently, I performed stress tests on datanodes to measure their read/write 
> operations per second.
> Before removing the lock, a datanode could only achieve ~2K write ops; after 
> optimizing, it can achieve more than 5K write ops.
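
The cost being removed can be illustrated outside HDFS with a rough micro-benchmark: reading an already thread-safe map through an extra read lock versus reading it directly. This sketch is only indicative; the ~2K vs. 5K figures above come from the reporter's datanode stress tests, not from this loop.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockOverheadSketch {
  public static void main(String[] args) {
    Map<Long, Long> map = new ConcurrentHashMap<>();
    for (long i = 0; i < 1_000; i++) {
      map.put(i, i);
    }
    ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    long sink = 0;

    // Variant 1: wrap each lookup of the thread-safe map in a read lock.
    long t0 = System.nanoTime();
    for (long i = 0; i < 10_000_000; i++) {
      lock.readLock().lock();
      try {
        sink += map.get(i % 1_000);
      } finally {
        lock.readLock().unlock();
      }
    }
    long locked = System.nanoTime() - t0;

    // Variant 2: rely on the map's own thread safety, as the PR does.
    t0 = System.nanoTime();
    for (long i = 0; i < 10_000_000; i++) {
      sink += map.get(i % 1_000);
    }
    long lockFree = System.nanoTime() - t0;

    System.out.printf("locked=%dms lockFree=%dms (sink=%d)%n",
        locked / 1_000_000, lockFree / 1_000_000, sink);
  }
}
{code}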



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-11 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17455:
---
Component/s: dfsclient

> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When the client reads data and connects to the datanode while the datanode 
> access token is invalid, the datanode throws an InvalidBlockTokenException. 
> The subsequent call to the fetchBlockAt method then throws a 
> java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * DFSInputStream#getBlockReader connects to the datanode; because the datanode 
> access token is invalid at this time, it throws an InvalidBlockTokenException, 
> and the subsequent call to DFSInputStream#fetchBlockAt throws a 
> java.lang.IndexOutOfBoundsException.
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target)
>  throws IOException {
>if (target >= getFileLength()) {
>// the target size is smaller than fileLength (completeBlockSize + 
> lastBlockBeingWrittenLength),
>// here at this time target is 1024 and getFileLength is 2048
>  throw new IOException("Attempted to read past end of file");
>}
>...
>while (true) {
>  ...
>  try {
>blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>storageType, chosenNode);
>if(connectFailedOnce) {
>  DFSClient.LOG.info("Successfully connected to " + targetAddr +
> " for " + targetBlock.getBlock());
>}
>return chosenNode;
>  } catch (IOException ex) {
>...
>} else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>  refetchToken--;
>  // Here will catch InvalidBlockTokenException.
>  fetchBlockAt(target);
>} else {
>  ...
>}
>  }
>}
>  }
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>   throws IOException {
> maybeRegisterBlockRefresh();
> synchronized(infoLock) {
>   // Here the locatedBlocks only contains one locatedBlock, at this time 
> the offset is 1024 and fileLength is 0,
>   // so the targetBlockIdx is -2
>   int targetBlockIdx = locatedBlocks.findBlock(offset);
>   if (targetBlockIdx < 0) { // block is not cached
> targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
> // Here the targetBlockIdx is 1;
> useCache = false;
>   }
>   if (!useCache) { // fetch blocks
> final LocatedBlocks newBlocks = (length == 0)
> ? dfsClient.getLocatedBlocks(src, offset)
> : dfsClient.getLocatedBlocks(src, offset, length);
> if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>   throw new EOFException("Could not find target position " + offset);
> }
> // Update the LastLocatedBlock, if offset is for last block.
> if (offset >= locatedBlocks.getFileLength()) {
>   setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
> } else {
>   locatedBlocks.insertRange(targetBlockIdx,
>   newBlocks.getLocatedBlocks());
> }
>   }
>   // Here the locatedBlocks only contains one locatedBlock, so will throw 
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>   return locatedBlocks.get(targetBlockIdx);
> }
>   }
> {code}
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
> at 
> java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
> at 
> java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
> at 
> java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
> at java.base/java.util.Objects.checkIndex(Objects.java:359)
> at java.base/java.util.ArrayList.get(ArrayList.java:427)
> at 
> org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockAt(DFSInputStream.java:569)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockAt(DFSInputStream.java:540)
> at 

[jira] [Resolved] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-11 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-17455.

Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When the client reads data and connects to the datanode while the datanode 
> access token is invalid, the datanode throws an InvalidBlockTokenException. 
> The subsequent call to the fetchBlockAt method then throws a 
> java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * DFSInputStream#getBlockReader connects to the datanode; because the datanode 
> access token is invalid at this time, it throws an InvalidBlockTokenException, 
> and the subsequent call to DFSInputStream#fetchBlockAt throws a 
> java.lang.IndexOutOfBoundsException.
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target)
>  throws IOException {
>if (target >= getFileLength()) {
>// the target size is smaller than fileLength (completeBlockSize + 
> lastBlockBeingWrittenLength),
>// here at this time target is 1024 and getFileLength is 2048
>  throw new IOException("Attempted to read past end of file");
>}
>...
>while (true) {
>  ...
>  try {
>blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>storageType, chosenNode);
>if(connectFailedOnce) {
>  DFSClient.LOG.info("Successfully connected to " + targetAddr +
> " for " + targetBlock.getBlock());
>}
>return chosenNode;
>  } catch (IOException ex) {
>...
>} else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>  refetchToken--;
>  // Here will catch InvalidBlockTokenException.
>  fetchBlockAt(target);
>} else {
>  ...
>}
>  }
>}
>  }
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>   throws IOException {
> maybeRegisterBlockRefresh();
> synchronized(infoLock) {
>   // Here the locatedBlocks only contains one locatedBlock, at this time 
> the offset is 1024 and fileLength is 0,
>   // so the targetBlockIdx is -2
>   int targetBlockIdx = locatedBlocks.findBlock(offset);
>   if (targetBlockIdx < 0) { // block is not cached
> targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
> // Here the targetBlockIdx is 1;
> useCache = false;
>   }
>   if (!useCache) { // fetch blocks
> final LocatedBlocks newBlocks = (length == 0)
> ? dfsClient.getLocatedBlocks(src, offset)
> : dfsClient.getLocatedBlocks(src, offset, length);
> if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>   throw new EOFException("Could not find target position " + offset);
> }
> // Update the LastLocatedBlock, if offset is for last block.
> if (offset >= locatedBlocks.getFileLength()) {
>   setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
> } else {
>   locatedBlocks.insertRange(targetBlockIdx,
>   newBlocks.getLocatedBlocks());
> }
>   }
>   // Here the locatedBlocks only contains one locatedBlock, so will throw 
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>   return locatedBlocks.get(targetBlockIdx);
> }
>   }
> {code}
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
> at 
> java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
> at 
> java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
> at 
> java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
> at java.base/java.util.Objects.checkIndex(Objects.java:359)
> at java.base/java.util.ArrayList.get(ArrayList.java:427)
> at 
> org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockAt(DFSInputStream.java:569)
> at 
> 

[jira] [Commented] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836089#comment-17836089
 ] 

ASF GitHub Bot commented on HDFS-17455:
---

Hexiaoqiao commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2049353263

   Committed to trunk. Thanks @haiyang1987 and @ZanderXu.




> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> When the client reads data and connects to the datanode while the datanode 
> access token is invalid, the datanode throws an InvalidBlockTokenException. 
> The subsequent call to the fetchBlockAt method then throws a 
> java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * DFSInputStream#getBlockReader connects to the datanode; because the datanode 
> access token is invalid at this time, it throws an InvalidBlockTokenException, 
> and the subsequent call to DFSInputStream#fetchBlockAt throws a 
> java.lang.IndexOutOfBoundsException.
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target)
>  throws IOException {
>if (target >= getFileLength()) {
>// the target size is smaller than fileLength (completeBlockSize + 
> lastBlockBeingWrittenLength),
>// here at this time target is 1024 and getFileLength is 2048
>  throw new IOException("Attempted to read past end of file");
>}
>...
>while (true) {
>  ...
>  try {
>blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>storageType, chosenNode);
>if(connectFailedOnce) {
>  DFSClient.LOG.info("Successfully connected to " + targetAddr +
> " for " + targetBlock.getBlock());
>}
>return chosenNode;
>  } catch (IOException ex) {
>...
>} else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>  refetchToken--;
>  // Here will catch InvalidBlockTokenException.
>  fetchBlockAt(target);
>} else {
>  ...
>}
>  }
>}
>  }
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>   throws IOException {
> maybeRegisterBlockRefresh();
> synchronized(infoLock) {
>   // Here the locatedBlocks only contains one locatedBlock, at this time 
> the offset is 1024 and fileLength is 0,
>   // so the targetBlockIdx is -2
>   int targetBlockIdx = locatedBlocks.findBlock(offset);
>   if (targetBlockIdx < 0) { // block is not cached
> targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
> // Here the targetBlockIdx is 1;
> useCache = false;
>   }
>   if (!useCache) { // fetch blocks
> final LocatedBlocks newBlocks = (length == 0)
> ? dfsClient.getLocatedBlocks(src, offset)
> : dfsClient.getLocatedBlocks(src, offset, length);
> if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>   throw new EOFException("Could not find target position " + offset);
> }
> // Update the LastLocatedBlock, if offset is for last block.
> if (offset >= locatedBlocks.getFileLength()) {
>   setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
> } else {
>   locatedBlocks.insertRange(targetBlockIdx,
>   newBlocks.getLocatedBlocks());
> }
>   }
>   // Here the locatedBlocks only contains one locatedBlock, so will throw 
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>   return locatedBlocks.get(targetBlockIdx);
> }
>   }
> {code}
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
> at 
> java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
> at 
> java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
> at 
> java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
> at java.base/java.util.Objects.checkIndex(Objects.java:359)
> at java.base/java.util.ArrayList.get(ArrayList.java:427)
> at 
> org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
> at 
> 

[jira] [Commented] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836088#comment-17836088
 ] 

ASF GitHub Bot commented on HDFS-17455:
---

Hexiaoqiao merged PR #6710:
URL: https://github.com/apache/hadoop/pull/6710




> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> When the client reads data and connects to the datanode while the datanode 
> access token is invalid, the datanode throws an InvalidBlockTokenException. 
> The subsequent call to the fetchBlockAt method then throws a 
> java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * DFSInputStream#getBlockReader connects to the datanode; because the datanode 
> access token is invalid at this time, it throws an InvalidBlockTokenException, 
> and the subsequent call to DFSInputStream#fetchBlockAt throws a 
> java.lang.IndexOutOfBoundsException.
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target)
>  throws IOException {
>if (target >= getFileLength()) {
>// the target size is smaller than fileLength (completeBlockSize + 
> lastBlockBeingWrittenLength),
>// here at this time target is 1024 and getFileLength is 2048
>  throw new IOException("Attempted to read past end of file");
>}
>...
>while (true) {
>  ...
>  try {
>blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>storageType, chosenNode);
>if(connectFailedOnce) {
>  DFSClient.LOG.info("Successfully connected to " + targetAddr +
> " for " + targetBlock.getBlock());
>}
>return chosenNode;
>  } catch (IOException ex) {
>...
>} else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>  refetchToken--;
>  // Here will catch InvalidBlockTokenException.
>  fetchBlockAt(target);
>} else {
>  ...
>}
>  }
>}
>  }
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>   throws IOException {
> maybeRegisterBlockRefresh();
> synchronized(infoLock) {
>   // Here the locatedBlocks only contains one locatedBlock, at this time 
> the offset is 1024 and fileLength is 0,
>   // so the targetBlockIdx is -2
>   int targetBlockIdx = locatedBlocks.findBlock(offset);
>   if (targetBlockIdx < 0) { // block is not cached
> targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
> // Here the targetBlockIdx is 1;
> useCache = false;
>   }
>   if (!useCache) { // fetch blocks
> final LocatedBlocks newBlocks = (length == 0)
> ? dfsClient.getLocatedBlocks(src, offset)
> : dfsClient.getLocatedBlocks(src, offset, length);
> if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>   throw new EOFException("Could not find target position " + offset);
> }
> // Update the LastLocatedBlock, if offset is for last block.
> if (offset >= locatedBlocks.getFileLength()) {
>   setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
> } else {
>   locatedBlocks.insertRange(targetBlockIdx,
>   newBlocks.getLocatedBlocks());
> }
>   }
>   // Here the locatedBlocks only contains one locatedBlock, so will throw 
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
>   return locatedBlocks.get(targetBlockIdx);
> }
>   }
> {code}
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
> at 
> java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
> at 
> java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
> at 
> java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
> at java.base/java.util.Objects.checkIndex(Objects.java:359)
> at java.base/java.util.ArrayList.get(ArrayList.java:427)
> at 
> org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.fetchBlockAt(DFSInputStream.java:569)
> at 
> 

[jira] [Updated] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17461:
--
Description: 
Fix spotbugs in PeerCache#getInternal 

Spotbugs warnings:
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html

  was:
Fix spotbugs in PeerCache#getInternal 

https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html


> Fix spotbugs in PeerCache#getInternal
> -
>
> Key: HDFS-17461
> URL: https://issues.apache.org/jira/browse/HDFS-17461
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> Fix spotbugs in PeerCache#getInternal 
> Spotbugs warnings:
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17461) Fix spotbugs in PeerCache#getInternal

2024-04-11 Thread Haiyang Hu (Jira)
Haiyang Hu created HDFS-17461:
-

 Summary: Fix spotbugs in PeerCache#getInternal
 Key: HDFS-17461
 URL: https://issues.apache.org/jira/browse/HDFS-17461
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haiyang Hu
Assignee: Haiyang Hu


Fix spotbugs in PeerCache#getInternal 

https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6710/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17458) Remove unnecessary BP lock in ReplicaMap

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836029#comment-17836029
 ] 

ASF GitHub Bot commented on HDFS-17458:
---

hadoop-yetus commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049035589

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 48s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/2/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  22m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 202m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 294m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Return value of putIfAbsent is ignored, but curSet is reused in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap.mergeAll(ReplicaMap)
  At ReplicaMap.java:ignored, but curSet is reused in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap.mergeAll(ReplicaMap)
  At ReplicaMap.java:[line 178] |
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6717/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6717 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fbf778c6fa4c 5.15.0-94-generic 
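
   The spotbugs finding above (the return value of `putIfAbsent` is ignored while `curSet` is reused) follows a well-known pattern. Here is a minimal, self-contained sketch of the usual fix, with illustrative map and key types rather than the ReplicaMap source:

   ```java
   import java.util.Set;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;

   public class PutIfAbsentPattern {
     public static void main(String[] args) {
       ConcurrentMap<String, Set<Long>> map = new ConcurrentHashMap<>();

       Set<Long> curSet = ConcurrentHashMap.newKeySet();
       Set<Long> prev = map.putIfAbsent("bp-1", curSet);
       if (prev != null) {
         // Another thread won the race: reuse the set that is actually in
         // the map, not the one that lost the putIfAbsent.
         curSet = prev;
       }
       curSet.add(42L); // guaranteed to mutate the set the map references

       System.out.println(map.get("bp-1")); // [42]
     }
   }
   ```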

[jira] [Commented] (HDFS-17458) Remove unnecessary BP lock in ReplicaMap

2024-04-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836024#comment-17836024
 ] 

ASF GitHub Bot commented on HDFS-17458:
---

hfutatzhanghb commented on PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#issuecomment-2049008972

   @Hexiaoqiao  @zhangshuyan0 @tomscut Sir, could you please take a look at 
this PR when you have free time? Thanks a lot.




> Remove unnecessary BP lock in ReplicaMap
> 
>
> Key: HDFS-17458
> URL: https://issues.apache.org/jira/browse/HDFS-17458
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> In HDFS-16429 we made LightWeightResizableGSet thread safe, and in HDFS-16511 
> we changed some methods in ReplicaMap to acquire the read lock instead of the 
> write lock.
> This PR tries to go further and remove the unnecessary Block_Pool read lock.
> Recently, I performed stress tests on datanodes to measure their read/write 
> operations per second.
> Before removing the lock, a datanode could only achieve ~2K write ops; after 
> optimizing, it can achieve more than 5K write ops.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org