[jira] [Created] (HDFS-17456) Fix the dfsused statistics of datanode are incorrect when appending a file.

2024-04-06 Thread fuchaohong (Jira)
fuchaohong created HDFS-17456:
-

 Summary: Fix the dfsused statistics of datanode are incorrect when 
appending a file.
 Key: HDFS-17456
 URL: https://issues.apache.org/jira/browse/HDFS-17456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.3.3
Reporter: fuchaohong









[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834612#comment-17834612
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

hadoop-yetus commented on PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#issuecomment-2041337533

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 231m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 389m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6709 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux cbc24e121190 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d3b4744de37b64fd717794b217c99d10dfb1eac4 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/5/testReport/ |
   | Max. process+thread count | 3703 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https

[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Attachment: (was: image-2024-04-06-17-51-54-356.png)

> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-07-13-22-22-493.png, 
> image-2024-04-07-13-22-46-684.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-07-13-22-22-493.png!
> !image-2024-04-07-13-22-46-684.png!






[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Attachment: image-2024-04-07-13-22-46-684.png

> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-54-356.png, 
> image-2024-04-07-13-22-22-493.png, image-2024-04-07-13-22-46-684.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!






[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Attachment: image-2024-04-07-13-22-22-493.png

> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-54-356.png, 
> image-2024-04-07-13-22-22-493.png, image-2024-04-07-13-22-46-684.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!






[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Attachment: (was: image-2024-04-06-17-51-49-316.png)

> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-54-356.png, 
> image-2024-04-07-13-22-22-493.png, image-2024-04-07-13-22-46-684.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-07-13-22-22-493.png!
> !image-2024-04-07-13-22-46-684.png!






[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Description: 
When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
reason, because the exception stacktrace is not appended to the LOG. Original code:

!image-2024-04-05-15-40-37-147.png!

 

After my fix, we can see the exception stacktrace:

!image-2024-04-07-13-22-22-493.png!

!image-2024-04-07-13-22-46-684.png!

  was:
When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
reason, because the exception stacktrace is not appended to the LOG. Original code:

!image-2024-04-05-15-40-37-147.png!

 

After my fix, we can see the exception stacktrace:

!image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!


> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-54-356.png, 
> image-2024-04-07-13-22-22-493.png, image-2024-04-07-13-22-46-684.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-07-13-22-22-493.png!
> !image-2024-04-07-13-22-46-684.png!






[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834598#comment-17834598
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

xiaojunxiang2023 commented on code in PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#discussion_r1554813666


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java:
##
@@ -1201,7 +1201,9 @@ private void lostFoundInit(DFSClient dfs) {
 lfInitedOk = true;
   }
 }  catch (Exception e) {
-  e.printStackTrace();
+  if (!lfInitedOk) {
+throw new IOException("failed to initialize " + lfName);

Review Comment:
   ok, I will fix it.





> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-49-316.png, 
> image-2024-04-06-17-51-54-356.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!






[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834597#comment-17834597
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

xiaojunxiang2023 commented on code in PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#discussion_r1554813666


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java:
##
@@ -1201,7 +1201,9 @@ private void lostFoundInit(DFSClient dfs) {
 lfInitedOk = true;
   }
 }  catch (Exception e) {
-  e.printStackTrace();
+  if (!lfInitedOk) {
+throw new IOException("failed to initialize " + lfName);

Review Comment:
   OK, it seems better not to use catch here.





> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-49-316.png, 
> image-2024-04-06-17-51-54-356.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!






[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834596#comment-17834596
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

hiwangzhihui commented on code in PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#discussion_r1554813100


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java:
##
@@ -1201,7 +1201,9 @@ private void lostFoundInit(DFSClient dfs) {
 lfInitedOk = true;
   }
 }  catch (Exception e) {
-  e.printStackTrace();
+  if (!lfInitedOk) {
+throw new IOException("failed to initialize " + lfName);

Review Comment:
   This should not lose the original exception information.
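
A minimal, self-contained sketch of the point above (a hypothetical stand-in class, not the NamenodeFsck patch): passing the caught exception as the cause argument keeps the original stack trace when rethrowing.

{code:java}
import java.io.IOException;

public class RethrowWithCauseSketch {
  // Hypothetical stand-in for the lost+found initialization step under review.
  static void initLostFound(String lfName) throws IOException {
    try {
      throw new RuntimeException("simulated init failure");  // placeholder failure
    } catch (Exception e) {
      // Passing e as the cause preserves the original stack trace,
      // which is the concern raised in the review comment above.
      throw new IOException("failed to initialize " + lfName, e);
    }
  }

  public static void main(String[] args) {
    try {
      initLostFound("/lost+found");
    } catch (IOException ioe) {
      ioe.printStackTrace();  // output includes "Caused by: ... simulated init failure"
    }
  }
}
{code}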





> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-49-316.png, 
> image-2024-04-06-17-51-54-356.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!






[jira] [Commented] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834594#comment-17834594
 ] 

ASF GitHub Bot commented on HDFS-17383:
---

zhangshuyan0 commented on PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#issuecomment-2041304944

   There is a problem with this fix. Consider the following situation:
   1. nn1 and nn2 are both standby; after dn1 registers with them, its currentKey is null.
   2. nn1 transitions to active, dn1 reports a heartbeat, and nn1 sends some DNA_TRANSFER commands plus a DNA_ACCESSKEYUPDATE command.
   3. Because of the command order, dn1 processes DNA_TRANSFER before DNA_ACCESSKEYUPDATE, so processing DNA_TRANSFER fails due to a null currentKey (a small illustration of this ordering follows below).
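
To make the ordering concern concrete, here is a purely illustrative sketch with hypothetical types; it is not Hadoop's DataNode command-processing code, only a demonstration of reordering so that the key update is applied before any transfer command:

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CommandOrderSketch {
  // Hypothetical command kinds mirroring the names used in the comment above.
  enum CommandType { DNA_TRANSFER, DNA_ACCESSKEYUPDATE }

  public static void main(String[] args) {
    // A heartbeat response carrying a transfer command before the key update,
    // i.e. the order described above.
    List<CommandType> fromHeartbeat = new ArrayList<>();
    fromHeartbeat.add(CommandType.DNA_TRANSFER);
    fromHeartbeat.add(CommandType.DNA_ACCESSKEYUPDATE);

    // One conceivable mitigation (illustration only): apply key updates first,
    // so a later transfer never observes a null currentKey.
    fromHeartbeat.sort(
        Comparator.comparing((CommandType c) -> c != CommandType.DNA_ACCESSKEYUPDATE));

    fromHeartbeat.forEach(System.out::println);  // DNA_ACCESSKEYUPDATE, then DNA_TRANSFER
  }
}
{code}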




> Datanode current block token should come from active NameNode in HA mode
> 
>
> Key: HDFS-17383
> URL: https://issues.apache.org/jira/browse/HDFS-17383
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Priority: Major
>  Labels: pull-request-available
> Attachments: reproduce.diff
>
>
> We found that block transfer failed during a namenode upgrade. The specific 
> error reported was that block token verification failed. During the datanode 
> transfer-block process, the source datanode uses a block token it generated 
> itself, whose keyid comes from the ANN or SBN. Because the newly upgraded NN 
> has only just started, the keyid held by the source datanode may not yet be 
> held by the target datanode, so the write fails. The attachment shows how to 
> reproduce this situation.






[jira] [Commented] (HDFS-17455) Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834587#comment-17834587
 ] 

ASF GitHub Bot commented on HDFS-17455:
---

haiyang1987 commented on PR #6710:
URL: https://github.com/apache/hadoop/pull/6710#issuecomment-2041284092

   Hi @ZanderXu @Hexiaoqiao @ayushtkn @zhangshuyan0 please help me review this 
PR when you are free, thanks ~




> Fix Client throw IndexOutOfBoundsException in DFSInputStream#fetchBlockAt
> -
>
> Key: HDFS-17455
> URL: https://issues.apache.org/jira/browse/HDFS-17455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> When the client reads data and connects to the datanode, an 
> InvalidBlockTokenException is thrown because the datanode access token is 
> invalid at that moment. The subsequent call to the fetchBlockAt method then 
> throws java.lang.IndexOutOfBoundsException, causing the read to fail.
> *Root cause:*
> * The HDFS file contains only one RBW block, with a block data size of 2048KB.
> * The client opens this file and seeks to the offset of 1024KB to read data.
> * The DFSInputStream#getBlockReader call that connects to the datanode throws 
> InvalidBlockTokenException because the access token is invalid, and the 
> subsequent DFSInputStream#fetchBlockAt call then throws 
> java.lang.IndexOutOfBoundsException (the index arithmetic is illustrated 
> after the stack trace below).
> {code:java}
> private synchronized DatanodeInfo blockSeekTo(long target)
>  throws IOException {
>if (target >= getFileLength()) {
>    // In this scenario target (1024) is smaller than getFileLength() (2048, i.e.
>    // completeBlockSize + lastBlockBeingWrittenLength), so this branch is not taken.
>  throw new IOException("Attempted to read past end of file");
>}
>...
>while (true) {
>  ...
>  try {
>blockReader = getBlockReader(targetBlock, offsetIntoBlock,
>targetBlock.getBlockSize() - offsetIntoBlock, targetAddr,
>storageType, chosenNode);
>if(connectFailedOnce) {
>  DFSClient.LOG.info("Successfully connected to " + targetAddr +
> " for " + targetBlock.getBlock());
>}
>return chosenNode;
>  } catch (IOException ex) {
>...
>} else if (refetchToken > 0 && tokenRefetchNeeded(ex, targetAddr)) {
>  refetchToken--;
>  // Here will catch InvalidBlockTokenException.
>  fetchBlockAt(target);
>} else {
>  ...
>}
>  }
>}
>  }
> private LocatedBlock fetchBlockAt(long offset, long length, boolean useCache)
>   throws IOException {
> maybeRegisterBlockRefresh();
> synchronized(infoLock) {
>   // Here locatedBlocks contains only one locatedBlock; the offset is 1024 and
>   // fileLength is 0, so findBlock returns targetBlockIdx = -2.
>   int targetBlockIdx = locatedBlocks.findBlock(offset);
>   if (targetBlockIdx < 0) { // block is not cached
> targetBlockIdx = LocatedBlocks.getInsertIndex(targetBlockIdx);
> // Here the targetBlockIdx is 1;
> useCache = false;
>   }
>   if (!useCache) { // fetch blocks
> final LocatedBlocks newBlocks = (length == 0)
> ? dfsClient.getLocatedBlocks(src, offset)
> : dfsClient.getLocatedBlocks(src, offset, length);
> if (newBlocks == null || newBlocks.locatedBlockCount() == 0) {
>   throw new EOFException("Could not find target position " + offset);
> }
> // Update the LastLocatedBlock, if offset is for last block.
> if (offset >= locatedBlocks.getFileLength()) {
>   setLocatedBlocksFields(newBlocks, getLastBlockLength(newBlocks));
> } else {
>   locatedBlocks.insertRange(targetBlockIdx,
>   newBlocks.getLocatedBlocks());
> }
>   }
>   // Here locatedBlocks still contains only one locatedBlock, so this get(1) throws
>   // java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1.
>   return locatedBlocks.get(targetBlockIdx);
> }
>   }
> {code}
> The client exception:
> {code:java}
> java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
> at 
> java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
> at 
> java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
> at 
> java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:266)
> at java.base/java.util.Objects.checkIndex(Objects.java:359)
> at java.base/java.util.ArrayList.get(ArrayList.java:427)
> at 
> org.apache.hadoop.hdfs.protocol.LocatedBlocks.get(LocatedBlocks.java:87)
> 
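
For reference, the -2 to 1 step described in the comments above follows the usual binary-search encoding, in which a negative result r encodes the insertion point as -(r) - 1. A minimal, self-contained illustration using a plain Collections.binarySearch over block start offsets (a simplification of what LocatedBlocks.findBlock does):

{code:java}
import java.util.Collections;
import java.util.List;

public class InsertIndexSketch {
  // Recovers the insertion point from a negative binary-search result.
  static int getInsertIndex(int binSearchResult) {
    return binSearchResult >= 0 ? binSearchResult : -(binSearchResult + 1);
  }

  public static void main(String[] args) {
    // A cache holding the start offset of the single located block.
    List<Long> blockStartOffsets = Collections.singletonList(0L);

    // Searching for offset 1024 past the only cached block returns -2 ...
    int result = Collections.binarySearch(blockStartOffsets, 1024L);
    System.out.println(result);                  // -2

    // ... which maps to insert index 1, one past the end of the one-element
    // list; get(1) on such a list is the IndexOutOfBoundsException above.
    System.out.println(getInsertIndex(result));  // 1
  }
}
{code}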

[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834542#comment-17834542
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

hadoop-yetus commented on PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#issuecomment-2041150281

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 302m 35s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 449m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6709 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5af967045041 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4c5dee325e327595ada3a22daa82f2eb1410cb37 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/3/testReport/ |
   | Max. process+thread count | 2970 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus

[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834540#comment-17834540
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

hadoop-yetus commented on PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#issuecomment-2041148709

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 304m 23s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 448m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6709 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 20c8310f094d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e0e05aadea76e37124f11930a670f3769f745b76 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/2/testReport/ |
   | Max. process+thread count | 2883 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fix namenode fsck swallo

[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834539#comment-17834539
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

hadoop-yetus commented on PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#issuecomment-2041147563

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 270m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 429m 15s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6709 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 672a536b9e66 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 880699a9bc61727d79f9c9908dd58677ae63d5e0 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6709/4/testReport/ |
   | Max. process+thread count | 2734 (vs. ulimit of 550

[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Description: 
When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
reason, because the exception stacktrace is not appended to the LOG. Original code:

!image-2024-04-05-15-40-37-147.png!

 

After my fix, we can see the exception stacktrace:

!image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!

  was:
When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
reason, because the exception stacktrace is not appended to the LOG. Original code:

!image-2024-04-05-15-40-37-147.png!

 

After my fix, we can see the exception stacktrace:

!image-2024-04-05-15-41-38-420.png!


> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-49-316.png, 
> image-2024-04-06-17-51-54-356.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-06-17-51-49-316.png!!image-2024-04-06-17-51-54-356.png!






[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Attachment: image-2024-04-06-17-51-54-356.png

> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-49-316.png, 
> image-2024-04-06-17-51-54-356.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-05-15-41-38-420.png!






[jira] [Updated] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread xiaojunxiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojunxiang updated HDFS-17454:

Attachment: image-2024-04-06-17-51-49-316.png

> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png, image-2024-04-06-17-51-49-316.png, 
> image-2024-04-06-17-51-54-356.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-05-15-41-38-420.png!






[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834509#comment-17834509
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

xiaojunxiang2023 commented on code in PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#discussion_r1554551429


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java:
##
@@ -1201,7 +1200,7 @@ private void lostFoundInit(DFSClient dfs) {
 lfInitedOk = true;
   }
 }  catch (Exception e) {
-  e.printStackTrace();
+  LOG.error(lfName + " dir init failed.", e);
   lfInitedOk = false;
 }

Review Comment:
   Thank you for your guidance. It seems better to throw the exception so that 
the client can also see the specific exception information. I will modify it right away.
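
As a side note on the LOG.error variant shown in the diff above, here is a minimal SLF4J sketch (hypothetical class, not the NamenodeFsck code): passing the exception as the last argument logs the full stack trace, unlike e.printStackTrace(), which bypasses the logging framework.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogWithStackTraceSketch {
  private static final Logger LOG = LoggerFactory.getLogger(LogWithStackTraceSketch.class);

  public static void main(String[] args) {
    String lfName = "/lost+found";  // hypothetical value for illustration
    try {
      throw new RuntimeException("simulated init failure");
    } catch (Exception e) {
      // With SLF4J, a Throwable passed after the format arguments is logged
      // together with its full stack trace.
      LOG.error("{} dir init failed.", lfName, e);
    }
  }
}
{code}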





> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png
>
>
> When I used `hdfs fsck /xxx.txt -move`, it hit an error, but I couldn't tell the 
> reason, because the exception stacktrace is not appended to the LOG. Original code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After my fix, we can see the exception stacktrace:
> !image-2024-04-05-15-41-38-420.png!






[jira] [Commented] (HDFS-17453) IncrementalBlockReport can have race condition with Edit Log Tailer

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834506#comment-17834506
 ] 

ASF GitHub Bot commented on HDFS-17453:
---

hadoop-yetus commented on PR #6708:
URL: https://github.com/apache/hadoop/pull/6708#issuecomment-2041015894

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 58s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6708/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 86 unchanged - 
0 fixed = 92 total (was 86)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  36m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 282m 26s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6708/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 425m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestPendingDataNodeMessages |
   |   | hadoop.hdfs.server.datanode.TestBlockReplacement |
   |   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6708/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6708 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux a59a5c1df2b8 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4ec0e6c6f1d16b4cbd6e0e4bc9203d73bb67613d |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JD

[jira] [Resolved] (HDFS-17449) Fix ill-formed decommission host name and port pair triggers IndexOutOfBound error

2024-04-06 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-17449.
-
Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix ill-formed decommission host name and port pair triggers IndexOutOfBound 
> error 
> ---
>
> Key: HDFS-17449
> URL: https://issues.apache.org/jira/browse/HDFS-17449
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ConfX
>Assignee: ConfX
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> h2. What happened:
> Got IndexOutOfBound when trying to run 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart
>  with namenode host provider set to 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.
> h2. Buggy code:
> In HostsFileWriter.java:
> {code:java}
> String[] hostAndPort = hostNameAndPort.split(":"); // hostNameAndPort might 
> be invalid
> dn.setHostName(hostAndPort[0]);
> dn.setPort(Integer.parseInt(hostAndPort[1])); // here IndexOutOfBound might 
> be thrown
> dn.setAdminState(AdminStates.DECOMMISSIONED);{code}
> h2. StackTrace:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at 
> org.apache.hadoop.hdfs.util.HostsFileWriter.initOutOfServiceHosts(HostsFileWriter.java:110){code}
> h2. How to reproduce:
> (1) Set {{dfs.namenode.hosts.provider.classname}} to 
> {{org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager}}
> (2) Run test: 
> {{org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart}}
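
A minimal sketch of a defensive parse for the host:port string shown in the "Buggy code" section above (hypothetical helper, not the committed fix): validating the split before indexing turns an ill-formed entry into a clear error instead of an ArrayIndexOutOfBoundsException.

{code:java}
public class HostPortParseSketch {
  // Hypothetical holder mirroring the hostName/port pair set in HostsFileWriter.
  static final class HostAndPort {
    final String host;
    final int port;
    HostAndPort(String host, int port) { this.host = host; this.port = port; }
  }

  static HostAndPort parse(String hostNameAndPort) {
    // Note: a plain split(":") would also mis-handle IPv6 literals; this is
    // only an illustration of failing fast on an ill-formed pair.
    String[] hostAndPort = hostNameAndPort.split(":");
    if (hostAndPort.length != 2) {
      throw new IllegalArgumentException(
          "Invalid host:port pair: '" + hostNameAndPort + "'");
    }
    return new HostAndPort(hostAndPort[0], Integer.parseInt(hostAndPort[1]));
  }

  public static void main(String[] args) {
    System.out.println(parse("dn1.example.com:9866").port);  // 9866
    try {
      parse("dn1.example.com");  // ill-formed: no port
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());  // descriptive message, no index error
    }
  }
}
{code}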






[jira] [Commented] (HDFS-17449) Fix ill-formed decommission host name and port pair triggers IndexOutOfBound error

2024-04-06 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834501#comment-17834501
 ] 

Ayush Saxena commented on HDFS-17449:
-

Committed to trunk.
Thanx [~FuzzingTeam] for the contribution!!!

> Fix ill-formed decommission host name and port pair triggers IndexOutOfBound 
> error 
> ---
>
> Key: HDFS-17449
> URL: https://issues.apache.org/jira/browse/HDFS-17449
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ConfX
>Assignee: ConfX
>Priority: Major
>  Labels: pull-request-available
>
> h2. What happened:
> Got IndexOutOfBound when trying to run 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart
>  with namenode host provider set to 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.
> h2. Buggy code:
> In HostsFileWriter.java:
> {code:java}
> String[] hostAndPort = hostNameAndPort.split(":"); // hostNameAndPort might 
> be invalid
> dn.setHostName(hostAndPort[0]);
> dn.setPort(Integer.parseInt(hostAndPort[1])); // here IndexOutOfBound might 
> be thrown
> dn.setAdminState(AdminStates.DECOMMISSIONED);{code}
> h2. StackTrace:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at 
> org.apache.hadoop.hdfs.util.HostsFileWriter.initOutOfServiceHosts(HostsFileWriter.java:110){code}
> h2. How to reproduce:
> (1) Set {{dfs.namenode.hosts.provider.classname}} to 
> {{org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager}}
> (2) Run test: 
> {{org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart}}






[jira] [Updated] (HDFS-17449) Fix ill-formed decommission host name and port pair triggers IndexOutOfBound error

2024-04-06 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-17449:

Summary: Fix ill-formed decommission host name and port pair triggers 
IndexOutOfBound error   (was: Ill-formed decommission host name and port pair 
would trigger IndexOutOfBound error)

> Fix ill-formed decommission host name and port pair triggers IndexOutOfBound 
> error 
> ---
>
> Key: HDFS-17449
> URL: https://issues.apache.org/jira/browse/HDFS-17449
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ConfX
>Assignee: ConfX
>Priority: Major
>  Labels: pull-request-available
>
> h2. What happened:
> Got IndexOutOfBound when trying to run 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart
>  with namenode host provider set to 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.
> h2. Buggy code:
> In HostsFileWriter.java:
> {code:java}
> String[] hostAndPort = hostNameAndPort.split(":"); // hostNameAndPort might be invalid
> dn.setHostName(hostAndPort[0]);
> dn.setPort(Integer.parseInt(hostAndPort[1])); // here IndexOutOfBound might be thrown
> dn.setAdminState(AdminStates.DECOMMISSIONED);{code}
> h2. StackTrace:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at org.apache.hadoop.hdfs.util.HostsFileWriter.initOutOfServiceHosts(HostsFileWriter.java:110){code}
> h2. How to reproduce:
> (1) Set {{dfs.namenode.hosts.provider.classname}} to 
> {{org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager}}
> (2) Run test: 
> {{org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17449) Ill-formed decommission host name and port pair would trigger IndexOutOfBound error

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834500#comment-17834500
 ] 

ASF GitHub Bot commented on HDFS-17449:
---

ayushtkn merged PR #6691:
URL: https://github.com/apache/hadoop/pull/6691




> Ill-formed decommission host name and port pair would trigger IndexOutOfBound 
> error
> ---
>
> Key: HDFS-17449
> URL: https://issues.apache.org/jira/browse/HDFS-17449
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ConfX
>Assignee: ConfX
>Priority: Major
>  Labels: pull-request-available
>
> h2. What happened:
> Got IndexOutOfBound when trying to run 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart
>  with namenode host provider set to 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.
> h2. Buggy code:
> In HostsFileWriter.java:
> {code:java}
> String[] hostAndPort = hostNameAndPort.split(":"); // hostNameAndPort might be invalid
> dn.setHostName(hostAndPort[0]);
> dn.setPort(Integer.parseInt(hostAndPort[1])); // here IndexOutOfBound might be thrown
> dn.setAdminState(AdminStates.DECOMMISSIONED);{code}
> h2. StackTrace:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at org.apache.hadoop.hdfs.util.HostsFileWriter.initOutOfServiceHosts(HostsFileWriter.java:110){code}
> h2. How to reproduce:
> (1) Set {{dfs.namenode.hosts.provider.classname}} to 
> {{org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager}}
> (2) Run test: 
> {{org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-17449) Ill-formed decommission host name and port pair would trigger IndexOutOfBound error

2024-04-06 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-17449:
---

Assignee: ConfX

> Ill-formed decommission host name and port pair would trigger IndexOutOfBound 
> error
> ---
>
> Key: HDFS-17449
> URL: https://issues.apache.org/jira/browse/HDFS-17449
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ConfX
>Assignee: ConfX
>Priority: Major
>  Labels: pull-request-available
>
> h2. What happened:
> Got IndexOutOfBound when trying to run 
> org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart
>  with namenode host provider set to 
> org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager.
> h2. Buggy code:
> In HostsFileWriter.java:
> {code:java}
> String[] hostAndPort = hostNameAndPort.split(":"); // hostNameAndPort might be invalid
> dn.setHostName(hostAndPort[0]);
> dn.setPort(Integer.parseInt(hostAndPort[1])); // here IndexOutOfBound might be thrown
> dn.setAdminState(AdminStates.DECOMMISSIONED);{code}
> h2. StackTrace:
> {code:java}
> java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
>     at org.apache.hadoop.hdfs.util.HostsFileWriter.initOutOfServiceHosts(HostsFileWriter.java:110){code}
> h2. How to reproduce:
> (1) Set {{dfs.namenode.hosts.provider.classname}} to 
> {{org.apache.hadoop.hdfs.server.blockmanagement.CombinedHostFileManager}}
> (2) Run test: 
> {{org.apache.hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor#testDecommissionStatusAfterDNRestart}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17454) Fix namenode fsck swallows the exception stacktrace, this can help us to troubleshooting log.

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834499#comment-17834499
 ] 

ASF GitHub Bot commented on HDFS-17454:
---

ayushtkn commented on code in PR #6709:
URL: https://github.com/apache/hadoop/pull/6709#discussion_r1554546040


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java:
##
@@ -1201,7 +1200,7 @@ private void lostFoundInit(DFSClient dfs) {
 lfInitedOk = true;
   }
 }  catch (Exception e) {
-  e.printStackTrace();
+  LOG.error(lfName + " dir init failed.", e);
   lfInitedOk = false;
 }

Review Comment:
   Why do we need to catch the exception here at all? Can't we just throw it? If I follow the logic, we catch the exception, set `lfInitedOk` to `false`, and then in the calling method we throw:
   ```
     if (!lfInitedOk) {
       throw new IOException("failed to initialize lost+found");
     }
   ```
   Can't we throw directly? The client will get the exception and can log it (or do whatever it wants) rather than us logging it here.
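For illustration, a self-contained sketch of the alternative being suggested (hypothetical names, not the actual NamenodeFsck code): let the initialization failure propagate with context instead of catching it, logging, and signalling through a flag.

```java
import java.io.IOException;

public class ThrowInsteadOfFlagSketch {
  // Hypothetical stand-in for the lost+found initialization step.
  static void initLostFound(String lfName) throws IOException {
    // ... create the lost+found directory here; on failure, simply propagate ...
    throw new IOException("failed to initialize " + lfName + " (simulated)");
  }

  public static void main(String[] args) {
    try {
      initLostFound("/lost+found");
    } catch (IOException e) {
      // The caller (ultimately the fsck client) decides how to log it,
      // so the stacktrace is never swallowed along the way.
      e.printStackTrace();
    }
  }
}
```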





> Fix namenode fsck swallows the exception stacktrace, this can help us to 
> troubleshooting log.
> -
>
> Key: HDFS-17454
> URL: https://issues.apache.org/jira/browse/HDFS-17454
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.6
>Reporter: xiaojunxiang
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-04-05-15-40-37-147.png, 
> image-2024-04-05-15-41-38-420.png
>
>
> When I ran `hdfs fsck /xxx.txt -move` and it hit an error, I could not tell the 
> reason, because the exception stacktrace is never appended to the LOG. Original 
> code:
> !image-2024-04-05-15-40-37-147.png!
>  
> After the fix, we can see the exception stacktrace:
> !image-2024-04-05-15-41-38-420.png!
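For context, a minimal runnable sketch (hypothetical class name; SLF4J assumed on the classpath, as Hadoop uses for logging) of why handing the exception to the logger, rather than calling e.printStackTrace(), gets the stacktrace into the service log:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogWithStacktraceSketch {
  private static final Logger LOG = LoggerFactory.getLogger(LogWithStacktraceSketch.class);

  public static void main(String[] args) {
    String lfName = "/lost+found";
    try {
      throw new java.io.IOException("simulated lost+found init failure");
    } catch (Exception e) {
      // Passing the throwable as the last argument makes the logging framework
      // emit the full stacktrace to the configured appenders, whereas
      // e.printStackTrace() writes only to stderr and bypasses the log file.
      LOG.error(lfName + " dir init failed.", e);
    }
  }
}
{code}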



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17453) IncrementalBlockReport can have race condition with Edit Log Tailer

2024-04-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834496#comment-17834496
 ] 

ASF GitHub Bot commented on HDFS-17453:
---

hadoop-yetus commented on PR #6708:
URL: https://github.com/apache/hadoop/pull/6708#issuecomment-2040998440

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 30s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 59s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6708/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 86 unchanged - 0 fixed = 92 total (was 86)  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m  2s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 237m  9s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6708/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 387m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.blockmanagement.TestPendingDataNodeMessages |
   |   | hadoop.hdfs.server.common.blockaliasmap.impl.TestLevelDbMockAliasMapClient |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover |
   |   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6708/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6708 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 43af3362757d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-supp