[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-10-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17888276#comment-17888276
 ] 

ASF GitHub Bot commented on HDFS-17595:
---

hfutatzhanghb opened a new pull request, #6983:
URL: https://github.com/apache/hadoop/pull/6983

   ### Description of PR
   The main new addition is AsyncErasureCoding, which extends ErasureCoding so that it supports asynchronous RPC.
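
The PR itself is not quoted here, so as a rough sketch only: the "extend the synchronous class, add async variants" pattern the description names can look like the following. All class bodies and the policy string are illustrative stand-ins, not Hadoop's actual implementation.

```java
import java.util.concurrent.CompletableFuture;

// Stand-in for the synchronous base module (not the real Hadoop class body).
class ErasureCoding {
  String getErasureCodingPolicyName(String path) {
    // Placeholder result standing in for a blocking router RPC.
    return "RS-6-3-1024k";
  }
}

// Async subclass overlaying an asynchronous variant on the blocking call.
class AsyncErasureCoding extends ErasureCoding {
  CompletableFuture<String> getErasureCodingPolicyNameAsync(String path) {
    // Run the blocking base-class call off the caller's thread.
    return CompletableFuture.supplyAsync(() -> getErasureCodingPolicyName(path));
  }
}

public class Main {
  public static void main(String[] args) throws Exception {
    // Caller gets a future immediately and blocks only when consuming it.
    System.out.println(
        new AsyncErasureCoding().getErasureCodingPolicyNameAsync("/ec/dir").get());
  }
}
```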




> [ARR] ErasureCoding supports asynchronous rpc.
> --
>
> Key: HDFS-17595
> URL: https://issues.apache.org/jira/browse/HDFS-17595
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is AsyncErasureCoding, which extends ErasureCoding so
> that it supports asynchronous RPC.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-10-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17888275#comment-17888275
 ] 

ASF GitHub Bot commented on HDFS-17595:
---

hfutatzhanghb closed pull request #6983: HDFS-17595. [ARR] ErasureCoding 
supports asynchronous rpc.
URL: https://github.com/apache/hadoop/pull/6983




> [ARR] ErasureCoding supports asynchronous rpc.
> --
>
> Key: HDFS-17595
> URL: https://issues.apache.org/jira/browse/HDFS-17595
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is AsyncErasureCoding, which extends ErasureCoding so
> that it supports asynchronous RPC.






[jira] [Updated] (HDFS-17601) [ARR] RouterRpcServer supports asynchronous rpc.

2024-10-10 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17601:
--
Labels: pull-request-available  (was: )

> [ARR] RouterRpcServer supports asynchronous rpc.
> 
>
> Key: HDFS-17601
> URL: https://issues.apache.org/jira/browse/HDFS-17601
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> RouterRpcServer supports asynchronous RPC; the following methods need to be
> transformed to asynchronous versions:
>  * {{invokeOnNsAsync}}
>  * {{invokeAtAvailableNsAsync}}
>  * {{getExistingLocationAsync}}
>  * {{getDatanodeReportAsync}}
>  * {{getDatanodeStorageReportMapAsync}}
>  * {{getSlowDatanodeReportAsync}}
>  * {{getCreateLocationAsync}}






[jira] [Commented] (HDFS-17601) [ARR] RouterRpcServer supports asynchronous rpc.

2024-10-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17888272#comment-17888272
 ] 

ASF GitHub Bot commented on HDFS-17601:
---

hfutatzhanghb opened a new pull request, #7108:
URL: https://github.com/apache/hadoop/pull/7108

   ### Description of PR
   RouterRpcServer supports asynchronous RPC; the following methods need to be transformed to asynchronous versions:
   
   1. invokeOnNsAsync
   2. invokeAtAvailableNsAsync
   3. getExistingLocationAsync
   4. getDatanodeReportAsync
   5. getDatanodeStorageReportMapAsync
   6. getSlowDatanodeReportAsync
   7. getCreateLocationAsync
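
The notification does not show what such a transformation looks like; a minimal sketch of turning one blocking method into an async variant follows. The method name mirrors the list above, but the body, the return type choice (`CompletableFuture`), and the executor are assumptions, not Hadoop's actual code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main {
  // Stand-in for a blocking router RPC (illustrative only).
  static String getDatanodeReport(String nsId) {
    return "report-for-" + nsId;
  }

  // Async variant: schedule the blocking call on a worker pool and
  // return a future immediately instead of holding the handler thread.
  static CompletableFuture<String> getDatanodeReportAsync(
      String nsId, ExecutorService pool) {
    return CompletableFuture.supplyAsync(() -> getDatanodeReport(nsId), pool);
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    try {
      System.out.println(getDatanodeReportAsync("ns0", pool).get());
    } finally {
      pool.shutdown();
    }
  }
}
```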




> [ARR] RouterRpcServer supports asynchronous rpc.
> 
>
> Key: HDFS-17601
> URL: https://issues.apache.org/jira/browse/HDFS-17601
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>
> RouterRpcServer supports asynchronous RPC; the following methods need to be
> transformed to asynchronous versions:
>  * {{invokeOnNsAsync}}
>  * {{invokeAtAvailableNsAsync}}
>  * {{getExistingLocationAsync}}
>  * {{getDatanodeReportAsync}}
>  * {{getDatanodeStorageReportMapAsync}}
>  * {{getSlowDatanodeReportAsync}}
>  * {{getCreateLocationAsync}}






[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-10-10 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17888193#comment-17888193
 ] 

ASF GitHub Bot commented on HDFS-17595:
---

hadoop-yetus commented on PR #6983:
URL: https://github.com/apache/hadoop/pull/6983#issuecomment-2404406745

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | -1 :x: |  mvninstall  |  23m 45s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/branch-mvninstall-root.txt)
 |  root in HDFS-17531 failed.  |
   | -1 :x: |  compile  |   0m 29s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs-rbf in HDFS-17531 failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  HDFS-17531 passed  |
   | -1 :x: |  javadoc  |   0m 29s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-hdfs-rbf in HDFS-17531 failed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | -1 :x: |  spotbugs  |   0m 46s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in HDFS-17531 failed.  |
   | +1 :green_heart: |  shadedclient  |  45m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 14s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m  8s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | -1 :x: |  spotbugs  |   0m 38s | 
[/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/patch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | -1 :x: |  shadedclient  |   4m 53s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m  9s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/5/artifact/out/patch-unit-hadoop-

[jira] [Updated] (HDFS-17643) RBF:rm src and dst are in different NS, causing an error

2024-10-10 Thread chuanjie.duan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chuanjie.duan updated HDFS-17643:
-
Attachment: HDFS-17643.patch

> RBF:rm src and dst are in different NS, causing an error
> 
>
> Key: HDFS-17643
> URL: https://issues.apache.org/jira/browse/HDFS-17643
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.2
>Reporter: chuanjie.duan
>Priority: Major
> Attachments: HDFS-17643.patch
>
>
> hdfs dfsrouteradmin -add /nnThroughputBenchmark flashHadoop,hdspacexHadoop 
> /nnThroughputBenchmark
> hdfs dfsrouteradmin -add /user flashHadoop /user
>  
> hdfs dfs -put file /nnThroughputBenchmark
> hdfs dfs -rm /nnThroughputBenchmark/file
>  
> error log
> rm: Failed to move to trash: hdfs://flashHadoop/nnThroughputBenchmark/out: 
> rename destination parent /user/hdfs/.Trash/Current/nnThroughputBenchmark/out 
> not found.
>  
>  






[jira] [Created] (HDFS-17643) RBF:rm src and dst are in different NS, causing an error

2024-10-10 Thread chuanjie.duan (Jira)
chuanjie.duan created HDFS-17643:


 Summary: RBF:rm src and dst are in different NS, causing an error
 Key: HDFS-17643
 URL: https://issues.apache.org/jira/browse/HDFS-17643
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.2.2
Reporter: chuanjie.duan


hdfs dfsrouteradmin -add /nnThroughputBenchmark flashHadoop,hdspacexHadoop 
/nnThroughputBenchmark

hdfs dfsrouteradmin -add /user flashHadoop /user

 

hdfs dfs -put file /nnThroughputBenchmark

hdfs dfs -rm /nnThroughputBenchmark/file

 

error log

rm: Failed to move to trash: hdfs://flashHadoop/nnThroughputBenchmark/out: 
rename destination parent /user/hdfs/.Trash/Current/nnThroughputBenchmark/out 
not found.

 

 






[jira] [Updated] (HDFS-17601) [ARR] RouterRpcServer supports asynchronous rpc.

2024-10-09 Thread farmmamba (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

farmmamba updated HDFS-17601:
-
Description: 
RouterRpcServer supports asynchronous RPC; the following methods need to be
transformed to asynchronous versions:
 * {{invokeOnNsAsync}}
 * {{invokeAtAvailableNsAsync}}
 * {{getExistingLocationAsync}}
 * {{getDatanodeReportAsync}}
 * {{getDatanodeStorageReportMapAsync}}
 * {{getSlowDatanodeReportAsync}}
 * {{getCreateLocationAsync}}

  was:
RouterRpcServer supports asynchronous RPC; the following methods need to be
transformed to asynchronous versions:
 * {{invokeOnNsAsync}} {{invokeAtAvailableNsAsync}}
 * {{getExistingLocationAsync}}
 * {{getDatanodeReportAsync}}
 * {{getDatanodeStorageReportMapAsync}}
 * {{getSlowDatanodeReportAsync}}
 * {{getCreateLocationAsync}}


> [ARR] RouterRpcServer supports asynchronous rpc.
> 
>
> Key: HDFS-17601
> URL: https://issues.apache.org/jira/browse/HDFS-17601
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>
> RouterRpcServer supports asynchronous RPC; the following methods need to be
> transformed to asynchronous versions:
>  * {{invokeOnNsAsync}}
>  * {{invokeAtAvailableNsAsync}}
>  * {{getExistingLocationAsync}}
>  * {{getDatanodeReportAsync}}
>  * {{getDatanodeStorageReportMapAsync}}
>  * {{getSlowDatanodeReportAsync}}
>  * {{getCreateLocationAsync}}






[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-10-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17888124#comment-17888124
 ] 

ASF GitHub Bot commented on HDFS-17595:
---

hadoop-yetus commented on PR #6983:
URL: https://github.com/apache/hadoop/pull/6983#issuecomment-2403980673

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 50s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  35m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 19 new + 0 
unchanged - 0 fixed = 19 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  36m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 167m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload |
   |   | 
hadoop.hdfs.server.federation.router.TestRouterFederationRenameInKerberosEnv |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6983 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6b4922be6a9b 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / d633f0466ec6f81c91910d28988d1b4da47e6065 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib

[jira] [Updated] (HDFS-17601) [ARR] RouterRpcServer supports asynchronous rpc.

2024-10-09 Thread farmmamba (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

farmmamba updated HDFS-17601:
-
Description: 
RouterRpcServer supports asynchronous RPC; the following methods need to be
transformed to asynchronous versions:
 * {{invokeOnNsAsync}} {{invokeAtAvailableNsAsync}}
 * {{getExistingLocationAsync}}
 * {{getDatanodeReportAsync}}
 * {{getDatanodeStorageReportMapAsync}}
 * {{getSlowDatanodeReportAsync}}
 * {{getCreateLocationAsync}}

  was:
RouterRpcServer supports asynchronous RPC; the following methods need to be
transformed to asynchronous versions:

```java
invokeOnNsAsync
invokeAtAvailableNsAsync
getExistingLocationAsync
getDatanodeReportAsync
getDatanodeStorageReportMapAsync
getSlowDatanodeReportAsync
getCreateLocationAsync
```


> [ARR] RouterRpcServer supports asynchronous rpc.
> 
>
> Key: HDFS-17601
> URL: https://issues.apache.org/jira/browse/HDFS-17601
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>
> RouterRpcServer supports asynchronous RPC; the following methods need to be
> transformed to asynchronous versions:
>  * {{invokeOnNsAsync}} {{invokeAtAvailableNsAsync}}
>  * {{getExistingLocationAsync}}
>  * {{getDatanodeReportAsync}}
>  * {{getDatanodeStorageReportMapAsync}}
>  * {{getSlowDatanodeReportAsync}}
>  * {{getCreateLocationAsync}}






[jira] [Updated] (HDFS-17601) [ARR] RouterRpcServer supports asynchronous rpc.

2024-10-09 Thread farmmamba (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

farmmamba updated HDFS-17601:
-
Description: 
RouterRpcServer supports asynchronous RPC; the following methods need to be
transformed to asynchronous versions:

```java
invokeOnNsAsync
invokeAtAvailableNsAsync
getExistingLocationAsync
getDatanodeReportAsync
getDatanodeStorageReportMapAsync
getSlowDatanodeReportAsync
getCreateLocationAsync
```

  was:
RouterRpcServer supports asynchronous RPC.

The following methods need to be transformed to asynchronous versions:

invokeOnNsAsync
invokeAtAvailableNsAsync
getExistingLocationAsync
getDatanodeReportAsync
getDatanodeStorageReportMapAsync
getSlowDatanodeReportAsync
getCreateLocationAsync


> [ARR] RouterRpcServer supports asynchronous rpc.
> 
>
> Key: HDFS-17601
> URL: https://issues.apache.org/jira/browse/HDFS-17601
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>
> RouterRpcServer supports asynchronous RPC; the following methods need to be
> transformed to asynchronous versions:
> ```java
> invokeOnNsAsync
> invokeAtAvailableNsAsync
> getExistingLocationAsync
> getDatanodeReportAsync
> getDatanodeStorageReportMapAsync
> getSlowDatanodeReportAsync
> getCreateLocationAsync
> ```






[jira] [Updated] (HDFS-17641) Add detailed metrics for low redundancy blocks

2024-10-09 Thread Prateek Sane (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prateek Sane updated HDFS-17641:

Description: 
Low Redundancy blocks have different priority levels including highest 
priority, very low redundancy, low redundancy, badly distributed, and corrupt.

Having a metric for the number of badly distributed blocks would be helpful.

  was:
Low Redundancy blocks have different priority levels including highest 
priority, very low redundancy, low redundancy, badly distributed, and corrupt.

While there are metrics for the aggregate number of lowRedundancy blocks and
for highest priority and corrupt, it would be useful to also know the specific
quantities of very low redundancy, low redundancy, and badly distributed blocks.


> Add detailed metrics for low redundancy blocks 
> ---
>
> Key: HDFS-17641
> URL: https://issues.apache.org/jira/browse/HDFS-17641
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Prateek Sane
>Priority: Minor
>
> Low Redundancy blocks have different priority levels including highest 
> priority, very low redundancy, low redundancy, badly distributed, and corrupt.
> Having a metric for the number of badly distributed blocks would be helpful.






[jira] [Created] (HDFS-17642) Support specifying Datanodes to exclude for balancing at source/target granularity

2024-10-09 Thread Joseph Dell'Aringa (Jira)
Joseph Dell'Aringa created HDFS-17642:
-

 Summary: Support specifying Datanodes to exclude for balancing at 
source/target granularity
 Key: HDFS-17642
 URL: https://issues.apache.org/jira/browse/HDFS-17642
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer
Reporter: Joseph Dell'Aringa
Assignee: Joseph Dell'Aringa


In some cases it can be useful to exclude a list of datanodes from being
selected as a target while leaving them available as a source, and vice versa.

The fix for this ticket will add an additional exclude list for source and 
target datanodes.






[jira] [Created] (HDFS-17641) Add detailed metrics for low redundancy blocks

2024-10-09 Thread Prateek Sane (Jira)
Prateek Sane created HDFS-17641:
---

 Summary: Add detailed metrics for low redundancy blocks 
 Key: HDFS-17641
 URL: https://issues.apache.org/jira/browse/HDFS-17641
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Prateek Sane


Low Redundancy blocks have different priority levels including highest 
priority, very low redundancy, low redundancy, badly distributed, and corrupt.

While there are metrics for the aggregate number of lowRedundancy blocks and
for highest priority and corrupt, it would be useful to also know the specific
quantities of very low redundancy, low redundancy, and badly distributed blocks.






[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-10-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17887971#comment-17887971
 ] 

ASF GitHub Bot commented on HDFS-17595:
---

hadoop-yetus commented on PR #6983:
URL: https://github.com/apache/hadoop/pull/6983#issuecomment-2402581473

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  0s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  20m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/3/artifact/out/blanks-eol.txt)
 |  The patch has 6 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 11s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  27m 14s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 109m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6983/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6983 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux af125593a099 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / 746d3f257c9ef4ff59291760ca5d2c3802398839 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-m

[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-10-09 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17887930#comment-17887930
 ] 

ASF GitHub Bot commented on HDFS-17595:
---

hfutatzhanghb commented on PR #6983:
URL: https://github.com/apache/hadoop/pull/6983#issuecomment-2402300976

   @Hexiaoqiao @KeeProMise Sir, I have added a UT. PTAL when you have free time, 
thanks~




> [ARR] ErasureCoding supports asynchronous rpc.
> --
>
> Key: HDFS-17595
> URL: https://issues.apache.org/jira/browse/HDFS-17595
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is AsyncErasureCoding, which extends ErasureCoding so 
> that supports asynchronous rpc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17596) [ARR] RouterStoragePolicy supports asynchronous rpc.

2024-10-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17887743#comment-17887743
 ] 

ASF GitHub Bot commented on HDFS-17596:
---

KeeProMise commented on code in PR #6988:
URL: https://github.com/apache/hadoop/pull/6988#discussion_r1792697438


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:
##
@@ -791,6 +800,36 @@ <T> T invokeAtAvailableNs(RemoteMethod method, Class<T> clazz)
 return invokeOnNs(method, clazz, io, nss);
   }
 
+  <T> T invokeAtAvailableNsAsync(RemoteMethod method, Class<T> clazz)
+  throws IOException {
+String nsId = subclusterResolver.getDefaultNamespace();
+// If default Ns is not present return result from first namespace.
+Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
+// If no namespace is available, throw IOException.
+IOException io = new IOException("No namespace available.");
+
+asyncComplete(null);
+if (!nsId.isEmpty()) {
+  asyncTry(() -> {
+rpcClient.invokeSingle(nsId, method, clazz);
+  });
+
+  asyncCatch((AsyncCatchFunction)(res, ioe) -> {
+if (!clientProto.isUnavailableSubclusterException(ioe)) {
+  LOG.debug("{} exception cannot be retried",
+  ioe.getClass().getSimpleName());
+  throw ioe;
+}
+nss.removeIf(n -> n.getNameserviceId().equals(nsId));
+invokeOnNs(method, clazz, io, nss);

Review Comment:
   Should use  invokeOnNsAsync.
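
   The point of the comment can be sketched in a standalone way. This is not the
   router's actual AsyncUtil API: `CompletableFuture` stands in for
   `asyncTry`/`asyncCatch`, and the namespace names are hypothetical. The idea it
   illustrates is that a fallback started inside an async exception handler must
   itself be chained asynchronously (the `invokeOnNsAsync` variant), otherwise the
   fallback's result never joins the async chain:

   ```java
   import java.util.Arrays;
   import java.util.Iterator;
   import java.util.List;
   import java.util.concurrent.CompletableFuture;

   public class AsyncFallbackSketch {

     // Simulated per-namespace RPC: fails for "ns-bad", succeeds otherwise.
     static CompletableFuture<String> invokeSingle(String nsId) {
       if ("ns-bad".equals(nsId)) {
         CompletableFuture<String> f = new CompletableFuture<>();
         f.completeExceptionally(new RuntimeException("unavailable: " + nsId));
         return f;
       }
       return CompletableFuture.completedFuture("ok@" + nsId);
     }

     // Async fallback over the remaining namespaces (analogous to
     // invokeOnNsAsync): on failure, recurse to the next one, still async.
     static CompletableFuture<String> invokeOnNsAsync(Iterator<String> nss) {
       if (!nss.hasNext()) {
         CompletableFuture<String> f = new CompletableFuture<>();
         f.completeExceptionally(new RuntimeException("No namespace available."));
         return f;
       }
       String nsId = nss.next();
       return invokeSingle(nsId)
           .thenApply(CompletableFuture::completedFuture)
           .exceptionally(e -> invokeOnNsAsync(nss))  // chain, don't block
           .thenCompose(x -> x);
     }

     // Analogous to invokeAtAvailableNsAsync: try the default namespace, and
     // on failure chain the async fallback from the exception handler.
     static CompletableFuture<String> invokeAtAvailableNsAsync(
         String defaultNs, List<String> others) {
       return invokeSingle(defaultNs)
           .thenApply(CompletableFuture::completedFuture)
           .exceptionally(e -> invokeOnNsAsync(others.iterator()))
           .thenCompose(x -> x);
     }

     public static void main(String[] args) throws Exception {
       // Default namespace is down; the async fallback finds a healthy one.
       System.out.println(invokeAtAvailableNsAsync(
           "ns-bad", Arrays.asList("ns-bad", "ns0")).get());  // prints ok@ns0
     }
   }
   ```

   Calling the blocking `invokeOnNs` from inside the handler would instead run the
   fallback on the callback thread and drop its result from the chain.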



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:
##
@@ -791,6 +800,36 @@ <T> T invokeAtAvailableNs(RemoteMethod method, Class<T> clazz)
 return invokeOnNs(method, clazz, io, nss);
   }
 
+  <T> T invokeAtAvailableNsAsync(RemoteMethod method, Class<T> clazz)
+  throws IOException {
+String nsId = subclusterResolver.getDefaultNamespace();
+// If default Ns is not present return result from first namespace.
+Set<FederationNamespaceInfo> nss = namenodeResolver.getNamespaces();
+// If no namespace is available, throw IOException.
+IOException io = new IOException("No namespace available.");
+
+asyncComplete(null);

Review Comment:
   Hi, @hfutatzhanghb IMO, asyncComplete(null) is not needed in this place.



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterAsyncStoragePolicy.java:
##
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
+import org.apache.hadoop.hdfs.server.federation.resolver.RemoteLocation;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
+
+import java.io.IOException;
+import java.util.List;
+
+import static 
org.apache.hadoop.hdfs.server.federation.router.async.AsyncUtil.asyncReturn;

Review Comment:
   Remove unused imports.



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:
##
@@ -824,6 +863,49 @@ <T> T invokeOnNs(RemoteMethod method, Class<T> clazz, IOException ioe,
 throw ioe;
   }
 
+  <T> T invokeOnNsAsync(RemoteMethod method, Class<T> clazz, IOException ioe,
+      Set<FederationNamespaceInfo> nss) throws IOException {
+if (nss.isEmpty()) {
+  throw ioe;
+}
+
+asyncComplete(null);
+Iterator<FederationNamespaceInfo> nsIterator = nss.iterator();
+asyncForEach(nsIterator, (foreach, fnInfo) -> {
+  String nsId = fnInfo.getNameserviceId();
+  LOG.debug("Invoking {} on namespace {}", method, nsId);
+  asyncTry(() -> {
+rpcClient.invokeSingle(nsId, method, clazz);
+asyncApply(result -> {
+  if (result != null && isExpectedClass(clazz, result)) {
+foreach.breakNow();
+return result;
+  }
+  return null;
+});
+  });
+
+  asyncCatch((AsyncCatchFunction)(ret, ex) -> {

Review Com

[jira] [Commented] (HDFS-15169) RBF: Router FSCK should consider the mount table

2024-10-08 Thread Felix N (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17887500#comment-17887500
 ] 

Felix N commented on HDFS-15169:


Hi [~hexiaoqiao], is there any update on this ticket?

> RBF: Router FSCK should consider the mount table
> 
>
> Key: HDFS-15169
> URL: https://issues.apache.org/jira/browse/HDFS-15169
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Akira Ajisaka
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-15169.001.patch, HDFS-15169.002.patch, 
> HDFS-15169.003.patch, HDFS-15169.004.patch, HDFS-15169.005.patch
>
>
> HDFS-13989 implemented FSCK to DFSRouter, however, it just redirects the 
> requests to all the active downstream NameNodes for now. The DFSRouter should 
> consider the mount table when redirecting the requests.






[jira] [Commented] (HDFS-17640) [ARR] RouterClientProtocol supports asynchronous rpc.

2024-10-08 Thread farmmamba (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17887520#comment-17887520
 ] 

farmmamba commented on HDFS-17640:
--

[~keepromise] Sir, I created this issue for RouterAsyncClientProtocol, PTAL. 
Thanks in advance for any suggestions.

> [ARR] RouterClientProtocol supports asynchronous rpc.
> -
>
> Key: HDFS-17640
> URL: https://issues.apache.org/jira/browse/HDFS-17640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: farmmamba
>Priority: Major
>
> RouterClientProtocol should support asynchronous rpc.






[jira] [Updated] (HDFS-17640) [ARR] RouterClientProtocol supports asynchronous rpc.

2024-10-08 Thread farmmamba (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

farmmamba updated HDFS-17640:
-
Parent: HDFS-17531
Issue Type: Sub-task  (was: New Feature)

> [ARR] RouterClientProtocol supports asynchronous rpc.
> -
>
> Key: HDFS-17640
> URL: https://issues.apache.org/jira/browse/HDFS-17640
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: farmmamba
>Priority: Major
>
> RouterClientProtocol should support asynchronous rpc.






[jira] [Created] (HDFS-17640) [ARR] RouterClientProtocol supports asynchronous rpc.

2024-10-08 Thread farmmamba (Jira)
farmmamba created HDFS-17640:


 Summary: [ARR] RouterClientProtocol supports asynchronous rpc.
 Key: HDFS-17640
 URL: https://issues.apache.org/jira/browse/HDFS-17640
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: rbf
Reporter: farmmamba


RouterClientProtocol should support asynchronous rpc.






[jira] [Comment Edited] (HDFS-14601) NameNode HA with single DNS record for NameNode discovery prevent running ZKFC

2024-10-08 Thread fuchaohong (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17887510#comment-17887510
 ] 

fuchaohong edited comment on HDFS-14601 at 10/8/24 8:18 AM:


[~fengnanli] Is this patch complete? If so, would you be so kind as to upload 
it? Thank you very much.


was (Author: fuchaohong):
[~fengnanli] Is this patch complete? If so, please upload it. Thanks.

> NameNode HA with single DNS record for NameNode discovery prevent running ZKFC
> --
>
> Key: HDFS-14601
> URL: https://issues.apache.org/jira/browse/HDFS-14601
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kei Kori
>Assignee: Fengnan Li
>Priority: Major
>
> ZKFC seems not treat one DNS record for NameNode discovery as multiple 
> NameNodes, so launching ZKFC is blocked on NameNodes which has only one 
> "dfs.ha.namenodes" definition with DNS for resolving multiple NameNodes.






[jira] [Updated] (HDFS-17639) Lock contention for hasStorageType when the number of storage nodes is large

2024-10-04 Thread Goodness Ayinmode (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goodness Ayinmode updated HDFS-17639:
-
Description: 
I was looking into methods associated with storages and storageTypes. I found 
[DatanodeDescriptor.hasStorageType|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L1138]
 could be a source of bottlenecks. To check whether a specific storage type 
exists among the storage locations associated with a DatanodeDescriptor, 
[hasStorageType|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L1138]
 iterates over an array of DatanodeStorageInfos returned by 
[getStorageInfos()|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L305].
 This retrieves the storage information from a storageMap and converts it to an 
array while under a lock. As the system scales and the size of storageMap grows 
with more datanodes, the duration spent in the synchronized block will 
increase. This issue could become more significant when hasStorageType is 
called in methods like 
[DatanodeDescriptor.pruneStorageMap|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L568]
 that could iterate (resulting in a form of nested iteration) over a large data 
structure. The combination of a repeated linear search (within hasStorageType) 
and the iteration within a lock can lead to significant (potentially quadratic) 
complexity and synchronization bottlenecks.

 

[DFSNetworkTopology.chooseRandomWithStorageType|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java#L180]
 and 
[DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java#L107]
 are affected because they both invoke hasStorageType. Additionally, 
[INodeFile.assertAllBlocksComplete|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java#L345]
 and 
[BlockManager.checkRedundancy()|https://github.com/apache/hadoop/blob/6be04633b55bbd67c2875e39977cd9d2308dc1d1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5018]
 faces a similar issue 
([FSNamesystem.finalizeINodeFileUnderConstruction|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3908]
 invokes both methods under a writeLock)

This appears to be a similar issue to 
https://issues.apache.org/jira/browse/HDFS-17638. I’m curious to know whether my 
analysis is wrong and whether anything can be done to reduce the impact of 
these issues.
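
The copy-then-scan shape described above can be sketched in miniature. This is a
hypothetical simplification (a plain enum and a Map stand in for the HDFS
DatanodeStorageInfo/storageMap types), contrasting the current shape — copy the
whole map to an array inside the synchronized block, then scan the copy — with a
direct scan under the same lock that exits early and allocates nothing per call:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StorageTypeScanSketch {
  public enum StorageType { DISK, SSD, ARCHIVE }

  private final Map<String, StorageType> storageMap = new LinkedHashMap<>();

  public void addStorage(String id, StorageType t) {
    synchronized (storageMap) {
      storageMap.put(id, t);
    }
  }

  // Current shape: O(n) copy built while holding the lock, then O(n) scan.
  public boolean hasStorageTypeViaCopy(StorageType t) {
    StorageType[] copy;
    synchronized (storageMap) {
      copy = storageMap.values().toArray(new StorageType[0]);
    }
    for (StorageType s : copy) {
      if (s == t) {
        return true;
      }
    }
    return false;
  }

  // Alternative: scan directly under the lock with early exit; no temporary
  // array is allocated on each call.
  public boolean hasStorageTypeDirect(StorageType t) {
    synchronized (storageMap) {
      for (StorageType s : storageMap.values()) {
        if (s == t) {
          return true;
        }
      }
      return false;
    }
  }
}
```

The early-exit variant still holds the lock during the scan, so it does not
remove the contention entirely; it only avoids the per-call array allocation
and the second pass.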


[jira] [Created] (HDFS-17639) Lock contention for hasStorageType when the number of storage nodes is large

2024-10-04 Thread Goodness Ayinmode (Jira)
Goodness Ayinmode created HDFS-17639:


 Summary: Lock contention for hasStorageType when the number of 
storage nodes is large
 Key: HDFS-17639
 URL: https://issues.apache.org/jira/browse/HDFS-17639
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, server
Affects Versions: 3.4.0
Reporter: Goodness Ayinmode


Lock contention for hasStorageType when the number of storage nodes is large

 

Hi,

I was looking into methods associated with storages and storageTypes. I found 
[DatanodeDescriptor.hasStorageType|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L1138]
 could be a source of bottlenecks. To check whether a specific storage type 
exists among the storage locations associated with a DatanodeDescriptor, 
[hasStorageType|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L1138]
 iterates over an array of DatanodeStorageInfos returned by 
[getStorageInfos()|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L305].
 This retrieves the storage information from a storageMap and converts it to an 
array while under a lock. As the system scales and the size of storageMap grows 
with more datanodes, the duration spent in the synchronized block will 
increase. This issue could become more significant when hasStorageType is 
called in methods like 
[DatanodeDescriptor.pruneStorageMap|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L568]
 that could iterate (resulting in a form of nested iteration) over a large data 
structure. The combination of a repeated linear search (within hasStorageType) 
and the iteration within a lock can lead to significant (potentially quadratic) 
complexity and synchronization bottlenecks.

 

[DFSNetworkTopology.chooseRandomWithStorageType|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java#L180]
 and 
[DFSNetworkTopology.chooseRandomWithStorageTypeTwoTrial|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java#L107]
 are affected because they both invoke hasStorageType. Additionally, 
[INodeFile.assertAllBlocksComplete|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java#L345]
 and 
[BlockManager.checkRedundancy()|https://github.com/apache/hadoop/blob/6be04633b55bbd67c2875e39977cd9d2308dc1d1/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5018]
 faces a similar issue 
([FSNamesystem.finalizeINodeFileUnderConstruction|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3908]
 invokes both methods under a writeLock)


This appears to be a similar issue to 
https://issues.apache.org/jira/browse/HDFS-17638. I’m curious to know whether my 
analysis is wrong and whether anything can be done to reduce the impact of 
these issues.






[jira] [Updated] (HDFS-17638) Lock contention for DatanodeStorageInfo when the number of storage nodes is large

2024-10-04 Thread Goodness Ayinmode (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goodness Ayinmode updated HDFS-17638:
-
Description: 
Hi, 

I was looking into the DatanodeStorageInfo class and I think some of its 
methods could cause problems at large scale. For example, to convert 
DatanodeStorageInfo objects into their respective DatanodeDescriptor and 
Storage ID forms, 
[DatanodeStorageInfo.toDatanodeInfos()|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java#L44]
 and 
[DatanodeStorageInfo.toStorageIDs()|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java#L61]
 iterate over the entire array of storage nodes. Each operation is linear; 
however, performance issues can arise when they are called under a lock, as 
in 
[bumpBlockGenerationStamp|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L5987],
 where 
[newLocatedBlock|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5461]
 calls both methods 
(bumpBlockGenerationStamp->newLocatedBlock->([newLocatedBlock|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5437]
 or [newLocatedStripedBlock 
|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5435]->
 toDatanodeInfos and toStorageIDs under the writeLock). This situation can be 
even more problematic when these methods are repeatedly invoked within an 
iteration like in 
[createLocatedBlockList|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1450]
 ([createLocatedBlocks -> 
createLocatedBlockList|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1601]
 
->[createLocatedBlock|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1487]
 ->(newLocatedBlock or newLocatedStripedBlock) -> toDatanodeInfos and 
toStorageIDs). Such behaviors cause significant synchronization bottlenecks 
when the number of blocks or number of storage nodes is large. 

[BlockPlacementPolicyDefault.getPipeline|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1147],
 
[BlockPlacementPolicyDefault.chooseTarget|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L287],
 and 
[BlockManager.validateReconstructionWork|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L2355]
 (  
[BlockManager.computeReconstructionWorkForBlocks|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L2187]
 --> BlockManager.validateReconstructionWork -->  
[incrementBlocksScheduled|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java#L338]
 )  also faces a similar issue with lock contention.

Please let me know if my analysis is wrong, and if there are suggestions to 
make this better. Thanks
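
One possible mitigation for the double conversion described above can be
sketched as follows. This is a hypothetical illustration (plain strings stand
in for DatanodeStorageInfo, DatanodeDescriptor and storage IDs, and `fuse` is
an invented helper): since toDatanodeInfos() and toStorageIDs() each make a
separate pass over the same storages, a caller holding the write lock pays two
iterations, while a fused single pass builds both arrays at once:

```java
public class FusedConversionSketch {
  public static String[][] fuse(String[] storages) {
    String[] nodes = new String[storages.length];
    String[] ids = new String[storages.length];
    for (int i = 0; i < storages.length; i++) { // one pass instead of two
      nodes[i] = "dn-" + storages[i];  // stand-in for getDatanodeDescriptor()
      ids[i] = storages[i] + ":sid";   // stand-in for getStorageID()
    }
    return new String[][] { nodes, ids };
  }
}
```

This halves the iteration work under the lock but does not address the deeper
per-block repetition the description points out.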


[jira] [Updated] (HDFS-17638) Lock contention for DatanodeStorageInfo when the number of storage nodes is large

2024-10-04 Thread Goodness Ayinmode (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goodness Ayinmode updated HDFS-17638:
-
Description: 
Hi, 

I was looking into the DatanodeStorageInfo class and I think some of its 
methods could cause problems at large scale. For example, to convert 
DatanodeStorageInfo objects into their respective DatanodeDescriptor and 
Storage ID forms, 
[DatanodeStorageInfo.toDatanodeInfos()|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java#L44]
 and 
[DatanodeStorageInfo.toStorageIDs()|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java#L61]
 iterate over the entire array of storage nodes. Each operation is linear; 
however, performance issues can arise when they are called under a lock, as 
in 
[bumpBlockGenerationStamp|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L5987],
 where 
[newLocatedBlock|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5461]
 calls both methods 
(bumpBlockGenerationStamp -> newLocatedBlock -> ([newLocatedBlock|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5437]
 or 
[newLocatedStripedBlock|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L5435])
 -> toDatanodeInfos and toStorageIDs under the writeLock). This situation can be even more 
problematic when these methods are repeatedly invoked within an iteration like 
in 
[createLocatedBlockList|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1450]
 ([createLocatedBlocks -> createLocatedBlockList|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1601]
 
->[createLocatedBlock|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L1487]
 ->(newLocatedBlock or newLocatedStripedBlock) -> toDatanodeInfos and 
toStorageIDs). Such behaviors cause significant synchronization bottlenecks 
when the number of blocks or number of storage nodes is large. 

[BlockPlacementPolicyDefault.getPipeline|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1147],
 
[BlockPlacementPolicyDefault.chooseTarget|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L287],
 and 
[BlockManager.validateReconstructionWork|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L2355]
 (  
[BlockManager.computeReconstructionWorkForBlocks|https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L2187]
 --> BlockManager.validateReconstructionWork -->  
[incrementBlocksScheduled|https://github.com/apache/hadoop/blob/2f0dd7c4feb1e482d47786d26d6d32483f39414b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java#L338]
 ) also face a similar issue with lock contention.

Please let me know if my analysis is wrong, and if there are suggestions to 
make this better. Thanks
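The pattern described above can be sketched in a few lines. This is a hypothetical simplification (the class and method names below are illustrative, not the actual Hadoop code) showing why a linear conversion serializes all callers when it runs under a shared write lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified model of the pattern described above. All names are
// illustrative; this is not the actual Hadoop code.
public class StorageSnapshotSketch {
  static final class Storage {
    final String datanodeInfo;
    final String storageId;
    Storage(String datanodeInfo, String storageId) {
      this.datanodeInfo = datanodeInfo;
      this.storageId = storageId;
    }
  }

  static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

  // O(n) conversion performed while holding the write lock: every other
  // reader and writer waits for the whole loop to finish.
  static String[] toStorageIDsUnderLock(Storage[] storages) {
    LOCK.writeLock().lock();
    try {
      String[] ids = new String[storages.length];
      for (int i = 0; i < storages.length; i++) {
        ids[i] = storages[i].storageId;  // linear work inside the lock
      }
      return ids;
    } finally {
      LOCK.writeLock().unlock();
    }
  }

  public static void main(String[] args) {
    Storage[] s = {
      new Storage("dn1", "storage-1"),
      new Storage("dn2", "storage-2")
    };
    System.out.println(String.join(",", toStorageIDsUnderLock(s)));  // prints storage-1,storage-2
  }
}
```

One direction consistent with the report would be to build these parallel arrays on a snapshot taken outside the lock, or cache them per block; whether that is safe depends on invariants the report does not cover.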



[jira] [Created] (HDFS-17638) Lock contention for DatanodeStorageInfo when the number of storage nodes is large

2024-10-04 Thread Goodness Ayinmode (Jira)
Goodness Ayinmode created HDFS-17638:


 Summary: Lock contention for DatanodeStorageInfo when the number 
of storage nodes is large
 Key: HDFS-17638
 URL: https://issues.apache.org/jira/browse/HDFS-17638
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, server
Affects Versions: 3.4.0
Reporter: Goodness Ayinmode





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Assigned] (HDFS-17633) `CombinedFileRange.merge` should not convert disjoint ranges into overlapped ones

2024-10-04 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina reassigned HDFS-17633:
-

Assignee: Hemanth Boyina

> `CombinedFileRange.merge` should not convert disjoint ranges into overlapped 
> ones
> -
>
> Key: HDFS-17633
> URL: https://issues.apache.org/jira/browse/HDFS-17633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.9, 3.4.1, 3.5.0
>Reporter: Dongjoon Hyun
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: Screenshot 2024-09-28 at 21.59.09.png
>
>
> Currently, Hadoop has a bug that converts disjoint ranges into overlapping 
> ones and eventually fails on its own.
>  !Screenshot 2024-09-28 at 21.59.09.png! 
> {code}
> +  public void testMergeSortedRanges() {
> +    List<FileRange> input = asList(
> +        createFileRange(13816220, 24, null),
> +        createFileRange(13816244, 7423960, null)
> +    );
> +    assertIsNotOrderedDisjoint(input, 100, 800);
> +    final List<CombinedFileRange> outputList = mergeSortedRanges(
> +        sortRangeList(input), 100, 1001, 2500);
> +
> +    assertRangeListSize(outputList, 1);
> +    assertFileRange(outputList.get(0), 13816200, 7424100);
> +  }
> {code}
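The numbers in the test above are consistent with rounding the merged start down and the merged end up to a chunk boundary. A minimal standalone sketch of that arithmetic follows; the helper names are hypothetical, not the actual Hadoop vectored-read API:

```java
// Sketch of the rounding arithmetic implied by the test above: round the
// merged start down and the merged end up to a chunk boundary. Helper names
// are hypothetical; this is not the actual Hadoop code.
public class RangeRoundingSketch {
  static long roundDown(long offset, long chunk) {
    return (offset / chunk) * chunk;
  }

  static long roundUp(long offset, long chunk) {
    return ((offset + chunk - 1) / chunk) * chunk;
  }

  public static void main(String[] args) {
    long chunk = 100;
    // The two disjoint input ranges from the test: [13816220, +24) and
    // [13816244, +7423960).
    long start = roundDown(13816220, chunk);        // 13816200
    long end = roundUp(13816244 + 7423960, chunk);  // 21240300
    // Matches assertFileRange(outputList.get(0), 13816200, 7424100).
    System.out.println(start + " " + (end - start)); // prints 13816200 7424100
  }
}
```

Because the rounded start (13816200) lies below the original start (13816220), a merged range built this way can extend into territory that a neighboring range already covers, which is presumably how disjoint input becomes overlapping.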






[jira] [Assigned] (HDFS-17633) `CombinedFileRange.merge` should not convert disjoint ranges into overlapped ones

2024-10-04 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina reassigned HDFS-17633:
-

Assignee: (was: Hemanth Boyina)

> `CombinedFileRange.merge` should not convert disjoint ranges into overlapped 
> ones
> -
>
> Key: HDFS-17633
> URL: https://issues.apache.org/jira/browse/HDFS-17633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.9, 3.4.1, 3.5.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Attachments: Screenshot 2024-09-28 at 21.59.09.png
>
>
> Currently, Hadoop has a bug that converts disjoint ranges into overlapping 
> ones and eventually fails on its own.
>  !Screenshot 2024-09-28 at 21.59.09.png! 
> {code}
> +  public void testMergeSortedRanges() {
> +    List<FileRange> input = asList(
> +        createFileRange(13816220, 24, null),
> +        createFileRange(13816244, 7423960, null)
> +    );
> +    assertIsNotOrderedDisjoint(input, 100, 800);
> +    final List<CombinedFileRange> outputList = mergeSortedRanges(
> +        sortRangeList(input), 100, 1001, 2500);
> +
> +    assertRangeListSize(outputList, 1);
> +    assertFileRange(outputList.get(0), 13816200, 7424100);
> +  }
> {code}






[jira] [Updated] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-10-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-17381:
---
Fix Version/s: 3.4.2

> Distcp of EC files should not be limited to DFS.
> 
>
> Key: HDFS-17381
> URL: https://issues.apache.org/jira/browse/HDFS-17381
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.4.0
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.2
>
>
> Currently, EC file support in distcp is limited to DFS, and the code checks 
> whether the given FS instance is DistributedFileSystem. Ozone now supports 
> EC, so this limitation can be removed: any filesystem that supports EC files 
> should be supported through a general contract, implemented via a few 
> interfaces/methods.






[jira] [Created] (HDFS-17637) Fix spotbugs in HttpFSFileSystem#getXAttr

2024-10-04 Thread Hualong Zhang (Jira)
Hualong Zhang created HDFS-17637:


 Summary: Fix spotbugs in HttpFSFileSystem#getXAttr
 Key: HDFS-17637
 URL: https://issues.apache.org/jira/browse/HDFS-17637
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: httpfs
Affects Versions: 3.5.0, 3.4.2
Reporter: Hualong Zhang
Assignee: Hualong Zhang


Fix spotbugs in HttpFSFileSystem#getXAttr

Spotbugs warnings:
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/22/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html






[jira] [Created] (HDFS-17636) Don't add declspec for Windows

2024-10-02 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-17636:
-

 Summary: Don't add declspec for Windows
 Key: HDFS-17636
 URL: https://issues.apache.org/jira/browse/HDFS-17636
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.5.0
 Environment: Windows 10/11
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


* Windows doesn't want the macro _JNI_IMPORT_OR_EXPORT_ to be defined in the
  function definition. It fails to compile with the following error:
  "definition of dllimport function not allowed".
* However, Linux needs it. Hence, we're going to add this macro based on the
  OS.






[jira] [Commented] (HDFS-17635) MutableQuantiles.getQuantiles() should be made a static method

2024-10-01 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886285#comment-17886285
 ] 

Wei-Chiu Chuang commented on HDFS-17635:


Found when building Ozone 2.0 on Hadoop 3.4.1 RC2.

> MutableQuantiles.getQuantiles() should be made a static method
> --
>
> Key: HDFS-17635
> URL: https://issues.apache.org/jira/browse/HDFS-17635
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Trivial
>
> MutableQuantiles.getQuantiles() returns the static member variable QUANTILES, 
> so this method should be a static method too.
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L157
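The suggested change is mechanical; a minimal before/after sketch (class and field names here are illustrative, not the actual MutableQuantiles API):

```java
// Minimal sketch of the change suggested above: an accessor that only reads
// a static field can itself be static. Names are illustrative, not the
// actual MutableQuantiles API.
public class QuantilesSketch {
  // Static member analogous to MutableQuantiles.QUANTILES.
  static final double[] QUANTILES = {0.50, 0.75, 0.90, 0.95, 0.99};

  // Before: `public double[] getQuantiles()` forced callers to hold an
  // instance. After: static, so `QuantilesSketch.getQuantiles()` works.
  public static double[] getQuantiles() {
    return QUANTILES;
  }

  public static void main(String[] args) {
    // No instance needed to read the static configuration.
    System.out.println(getQuantiles().length);  // prints 5
  }
}
```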






[jira] [Created] (HDFS-17635) MutableQuantiles.getQuantiles() should be made a static method

2024-10-01 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDFS-17635:
--

 Summary: MutableQuantiles.getQuantiles() should be made a static 
method
 Key: HDFS-17635
 URL: https://issues.apache.org/jira/browse/HDFS-17635
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


MutableQuantiles.getQuantiles() returns the static member variable QUANTILES, 
so this method should be a static method too.

https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L157






[jira] [Created] (HDFS-17634) RBF: Web UI missing DN last block report

2024-09-30 Thread Felix N (Jira)
Felix N created HDFS-17634:
--

 Summary: RBF: Web UI missing DN last block report
 Key: HDFS-17634
 URL: https://issues.apache.org/jira/browse/HDFS-17634
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Felix N
Assignee: Felix N
 Attachments: image-2024-09-30-15-47-07-392.png

!image-2024-09-30-15-47-07-392.png|width=160,height=69!






[jira] [Commented] (HDFS-17623) RBF:The router service fails to delete a mount table with multiple subclusters mounted on it through MultipleDestinationMountTableResolver

2024-09-29 Thread Guo Wei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885789#comment-17885789
 ] 

Guo Wei commented on HDFS-17623:


I'm doing this

> RBF:The router service fails to delete a mount table with multiple 
> subclusters mounted on it through MultipleDestinationMountTableResolver
> --
>
> Key: HDFS-17623
> URL: https://issues.apache.org/jira/browse/HDFS-17623
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Guo Wei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Please see the error message in the following example:
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test1/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir -p 
> hdfs://hh-rbf-test2/guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -add /guov100/data 
> hh-rbf-test1,hh-rbf-test2 /guov100/data -order RANDOM
> Successfully added mount point /guov100/data
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source                    Destinations              Owner                     
> Group                     Mode       Quota/Usage
> /guov100/data              
> hh-rbf-test1->/guov100/data,hh-rbf-test2->/guov100/data hdfs                  
>     hadoop                    rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -mkdir 
> hdfs://test-fed/guov100/data/test
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -touch 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test1/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test1/guov100/data/test/file-test1.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://hh-rbf-test2/guov100/data/test/
> Found 1 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://hh-rbf-test2/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -ls 
> hdfs://test-fed/guov100/data/test/
> Found 2 items
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test1.txt
> -rw-r--r--   3 hdfs hdfs          0 2024-09-13 09:56 
> hdfs://test-fed/guov100/data/test/file-test2.txt
> [hdfs@sjsy-hh202-zbxh55w root]$ 
> [hdfs@sjsy-hh202-zbxh55w root]$ hdfs dfs -rm -r 
> hdfs://test-fed/guov100/data/test/
> rm: Failed to move to trash: hdfs://test-fed/guov100/data/test: Rename of 
> /guov100/data/test to /user/hdfs/.Trash/Current/guov100/data/test is not 
> allowed, no eligible destination in the same namespace was found.
>  






[jira] [Updated] (HDFS-17631) Fix RedundantEditLogInputStream.nextOp() state error when EditLogInputStream.skipUntil() throw IOException

2024-09-29 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-17631:
---
Description: 
In namenode HA mode, the standby namenode loads edit logs from journalnodes 
via QuorumJournalManager.selectInputStreams(), and RedundantEditLogInputStream 
is used to combine multiple remote journalnode input streams.

The problem is that when reading the edit log with 
RedundantEditLogInputStream.nextOp(), if the first stream's skipUntil() throws 
an IOException (network errors, hardware problems, etc.), the state becomes 
State.OK rather than State.STREAM_FAILED.

The proper, fault-tolerant state sequence would be:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL 
-> State.OK
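The intended transition can be modeled with a tiny failover loop. This is a simplified sketch only; the interface and names are invented for illustration and are not the actual RedundantEditLogInputStream code:

```java
import java.io.IOException;
import java.util.List;

// Simplified model of the fault-tolerant transition described above: a
// skipUntil() failure moves the state to STREAM_FAILED and the next stream
// is tried, instead of reporting OK. Names are illustrative.
public class SkipUntilSketch {
  enum State { SKIP_UNTIL, OK, STREAM_FAILED }

  interface Stream {
    void skipUntil(long txid) throws IOException;
  }

  // Returns the index of the first stream whose skipUntil() succeeds.
  static int skipWithFailover(List<Stream> streams, long txid) throws IOException {
    for (int i = 0; i < streams.size(); i++) {
      State state = State.SKIP_UNTIL;
      try {
        streams.get(i).skipUntil(txid);
        state = State.OK;              // reached only on success
        return i;
      } catch (IOException e) {
        state = State.STREAM_FAILED;   // fall through and try the next stream
      }
    }
    throw new IOException("all redundant streams failed");
  }

  public static void main(String[] args) throws IOException {
    Stream failing = txid -> { throw new IOException("network error"); };
    Stream healthy = txid -> { /* skips successfully */ };
    System.out.println(skipWithFailover(List.of(failing, healthy), 42));  // prints 1
  }
}
```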



> Fix RedundantEditLogInputStream.nextOp()  state error when 
> EditLogInputStream.skipUntil() throw IOException
> ---
>
> Key: HDFS-17631
> URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
> In namenode HA mode, the standby namenode loads edit logs from journalnodes 
> via QuorumJournalManager.selectInputStreams(), and RedundantEditLogInputStream 
> is used to combine multiple remote journalnode input streams.
> The problem is that when reading the edit log with 
> RedundantEditLogInputStream.nextOp(), if the first stream's skipUntil() throws 
> an IOException (network errors, hardware problems, etc.), the state remains 
> State.OK rather than becoming State.STREAM_FAILED.
> The proper, fault-tolerant state transition would be:
> State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL 
> -> State.OK
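The failover behaviour described above can be sketched as a small state machine. This is a simplified, hypothetical sketch (the class, enum, and stream interface below are illustrative, not Hadoop's actual RedundantEditLogInputStream code): the point is that a skipUntil() failure must move the state to STREAM_FAILED so the next stream is tried, instead of staying in OK.

```java
import java.io.IOException;
import java.util.List;

public class RedundantStreamSketch {
    enum State { SKIP_UNTIL, STREAM_FAILED, OK }

    // Stand-in for an edit log input stream; only the skip step matters here.
    interface Stream { void skipUntil(long txid) throws IOException; }

    // Try each stream in order; on IOException move to STREAM_FAILED and
    // fall through to the next stream rather than reporting OK.
    static State nextOp(List<Stream> streams, long txid) {
        State state = State.SKIP_UNTIL;
        for (Stream s : streams) {
            try {
                state = State.SKIP_UNTIL;
                s.skipUntil(txid);
                return State.OK;             // OK only after skipUntil succeeds
            } catch (IOException e) {
                state = State.STREAM_FAILED; // the fix: do not stay in OK
            }
        }
        return state;  // all streams failed
    }

    public static void main(String[] args) {
        Stream bad = txid -> { throw new IOException("network error"); };
        Stream good = txid -> { /* skip succeeds */ };
        // The first stream fails, the second is tried and succeeds.
        System.out.println(nextOp(List.of(bad, good), 42L));  // prints OK
    }
}
```

With only failing streams the method returns STREAM_FAILED, which is what lets the caller distinguish a dead quorum from a successful skip.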






[jira] [Updated] (HDFS-17631) Fix RedundantEditLogInputStream.nextOp() state error when EditLogInputStream.skipUntil() throw IOException

2024-09-29 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-17631:
---
Description: 
In namenode HA mode, the standby namenode loads edit logs from journalnodes via 
QuorumJournalManager.selectInputStreams(), and RedundantEditLogInputStream is 
used to combine multiple remote journalnode input streams.

The problem is that when reading the edit log with 
RedundantEditLogInputStream.nextOp(), if the first stream's skipUntil() throws 
an IOException (network errors, hardware problems, etc.), the state remains 
State.OK rather than becoming State.STREAM_FAILED.

The proper state transition would be:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL 
-> State.OK

  was:
In namenode HA mode, the standby namenode loads edit logs from journalnodes via 
QuorumJournalManager.selectInputStreams(), and RedundantEditLogInputStream is 
used to combine multiple remote journalnode input streams.

Now, when EditLogInputStream.skipUntil() throws an IOException in 
RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
than State.STREAM_FAILED.

The proper state transition would be:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL


> Fix RedundantEditLogInputStream.nextOp()  state error when 
> EditLogInputStream.skipUntil() throw IOException
> ---
>
> Key: HDFS-17631
>     URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
> In namenode HA mode, the standby namenode loads edit logs from journalnodes 
> via QuorumJournalManager.selectInputStreams(), and RedundantEditLogInputStream 
> is used to combine multiple remote journalnode input streams.
> The problem is that when reading the edit log with 
> RedundantEditLogInputStream.nextOp(), if the first stream's skipUntil() throws 
> an IOException (network errors, hardware problems, etc.), the state remains 
> State.OK rather than becoming State.STREAM_FAILED.
>  
> The proper state transition would be:
> State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL 
> -> State.OK






[jira] [Updated] (HDFS-17631) Fix RedundantEditLogInputStream.nextOp() state error when EditLogInputStream.skipUntil() throw IOException

2024-09-29 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-17631:
---
Description: 
In namenode HA mode, the standby namenode loads edit logs from journalnodes via 
QuorumJournalManager.selectInputStreams(), and RedundantEditLogInputStream is 
used to combine multiple remote journalnode input streams.

Now, when EditLogInputStream.skipUntil() throws an IOException in 
RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
than State.STREAM_FAILED.

The proper state transition would be:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL

  was:
In namenode HA mode, the standby namenode loads edit logs from journalnodes.

Now, when EditLogInputStream.skipUntil() throws an IOException in 
RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
than State.STREAM_FAILED.

The proper state transition would be:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL


> Fix RedundantEditLogInputStream.nextOp()  state error when 
> EditLogInputStream.skipUntil() throw IOException
> ---
>
> Key: HDFS-17631
> URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
> In namenode HA mode, the standby namenode loads edit logs from journalnodes 
> via QuorumJournalManager.selectInputStreams(), and RedundantEditLogInputStream 
> is used to combine multiple remote journalnode input streams.
> Now, when EditLogInputStream.skipUntil() throws an IOException in 
> RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
> than State.STREAM_FAILED.
> The proper state transition would be:
> State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL






[jira] [Updated] (HDFS-17631) Fix RedundantEditLogInputStream.nextOp() state error when EditLogInputStream.skipUntil() throw IOException

2024-09-29 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-17631:
---
Description: 
In namenode HA mode, the standby namenode loads edit logs from journalnodes.

Now, when EditLogInputStream.skipUntil() throws an IOException in 
RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
than State.STREAM_FAILED.

The proper state transition would be:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL

  was:
Now, when EditLogInputStream.skipUntil() throws an IOException in 
RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
than State.STREAM_FAILED.

The proper state transition would be:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL


> Fix RedundantEditLogInputStream.nextOp()  state error when 
> EditLogInputStream.skipUntil() throw IOException
> ---
>
> Key: HDFS-17631
> URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
> In namenode HA mode, the standby namenode loads edit logs from journalnodes.
>  
> Now, when EditLogInputStream.skipUntil() throws an IOException in 
> RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
> than State.STREAM_FAILED.
> The proper state transition would be:
> State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL






[jira] [Updated] (HDFS-17631) Fix RedundantEditLogInputStream.nextOp() state error when EditLogInputStream.skipUntil() throw IOException

2024-09-29 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-17631:
---
Summary: Fix RedundantEditLogInputStream.nextOp()  state error when 
EditLogInputStream.skipUntil() throw IOException  (was: 
RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when 
EditLogInputStream.skipUntil() throw IOException)

> Fix RedundantEditLogInputStream.nextOp()  state error when 
> EditLogInputStream.skipUntil() throw IOException
> ---
>
> Key: HDFS-17631
> URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
> Now, when EditLogInputStream.skipUntil() throws an IOException in 
> RedundantEditLogInputStream.nextOp(), the state still becomes State.OK rather 
> than State.STREAM_FAILED.
> The proper state transition would be:
> State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL






[jira] [Comment Edited] (HDFS-15098) Add SM4 encryption method for HDFS

2024-09-28 Thread wayne cook (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885643#comment-17885643
 ] 

wayne cook edited comment on HDFS-15098 at 9/29/24 6:13 AM:


I get the following error message after upgrading from Hadoop 3.3.4 to 3.4.0.

 
{code:java}
# The namenode log
2024-09-27 16:41:39 ERROR org.apache.ranger.plugin.util.PolicyRefresher: PolicyRefresher(serviceName=hdfs-service): failed to refresh policies. Will continue to use last known version of policies (10)
javax.ws.rs.WebApplicationException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
java.util.Map is an interface, and JAXB can't handle interfaces.
	this problem is related to the following location:
		at java.util.Map
		at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
		at org.apache.ranger.plugin.model.RangerPolicy
		at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
		at org.apache.ranger.plugin.util.ServicePolicies
java.util.Map does not have a no-arg default constructor.
	this problem is related to the following location:
		at java.util.Map
		at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
		at org.apache.ranger.plugin.model.RangerPolicy
		at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
		at org.apache.ranger.plugin.util.ServicePolicies
	at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.readFrom(AbstractRootElementProvider.java:115)
	at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:634)
	at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:586)
	at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdatedWithCred(RangerAdminRESTClient.java:858)
	at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:146)
	at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:308)
	at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:247)
	at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:209)
Caused by: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
java.util.Map is an interface, and JAXB can't handle interfaces.
	this problem is related to the following location:
		at java.util.Map
		at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
		at org.apache.ranger.plugin.model.RangerPolicy
		at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
		at org.apache.ranger.plugin.util.ServicePolicies
java.util.Map does not have a no-arg default constructor.
	this problem is related to the following location:
		at java.util.Map
		at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
		at org.apache.ranger.plugin.model.RangerPolicy
		at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
		at org.apache.ranger.plugin.util.ServicePolicies
	at com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
	at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
	at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
	at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
	at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247)
	at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
	at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
	at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
	at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
	at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getStoredJAXBContext(AbstractJAXBProvider.java:196)
	at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getJAXBContext(AbstractJAXBProvider.java:188)
	at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.g

[jira] [Resolved] (HDFS-17633) `CombinedFileRange.merge` should not convert disjoint ranges into overlapped ones

2024-09-28 Thread Dongjoon Hyun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongjoon Hyun resolved HDFS-17633.
--
Resolution: Duplicate

> `CombinedFileRange.merge` should not convert disjoint ranges into overlapped 
> ones
> -
>
> Key: HDFS-17633
> URL: https://issues.apache.org/jira/browse/HDFS-17633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.9, 3.4.1, 3.5.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Attachments: Screenshot 2024-09-28 at 21.59.09.png
>
>
> Currently, Hadoop has a bug that converts disjoint ranges into overlapping 
> ones, and the merge eventually fails on its own result.
>  !Screenshot 2024-09-28 at 21.59.09.png! 
> {code}
> +  public void testMergeSortedRanges() {
> +    List<FileRange> input = asList(
> +        createFileRange(13816220, 24, null),
> +        createFileRange(13816244, 7423960, null)
> +    );
> +    assertIsNotOrderedDisjoint(input, 100, 800);
> +    final List<CombinedFileRange> outputList = mergeSortedRanges(
> +        sortRangeList(input), 100, 1001, 2500);
> +
> +    assertRangeListSize(outputList, 1);
> +    assertFileRange(outputList.get(0), 13816200, 7424100);
> +  }
> {code}






[jira] [Created] (HDFS-17633) `CombinedFileRange.merge` should not convert disjoint ranges into overlapped ones

2024-09-28 Thread Dongjoon Hyun (Jira)
Dongjoon Hyun created HDFS-17633:


 Summary: `CombinedFileRange.merge` should not convert disjoint 
ranges into overlapped ones
 Key: HDFS-17633
 URL: https://issues.apache.org/jira/browse/HDFS-17633
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.9, 3.4.1, 3.5.0
Reporter: Dongjoon Hyun
 Attachments: Screenshot 2024-09-28 at 21.59.09.png

Currently, Hadoop has a bug that converts disjoint ranges into overlapping 
ones, and the merge eventually fails on its own result.

 !Screenshot 2024-09-28 at 21.59.09.png! 

{code}
+  public void testMergeSortedRanges() {
+    List<FileRange> input = asList(
+        createFileRange(13816220, 24, null),
+        createFileRange(13816244, 7423960, null)
+    );
+    assertIsNotOrderedDisjoint(input, 100, 800);
+    final List<CombinedFileRange> outputList = mergeSortedRanges(
+        sortRangeList(input), 100, 1001, 2500);
+
+    assertRangeListSize(outputList, 1);
+    assertFileRange(outputList.get(0), 13816200, 7424100);
+  }
{code}






[jira] [Updated] (HDFS-17376) Distcp creates Factor 1 replication file on target if Source is EC

2024-09-28 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17376:
---
Fix Version/s: 3.4.2
   (was: 3.4.1)

> Distcp creates Factor 1 replication file on target if Source is EC
> --
>
> Key: HDFS-17376
> URL: https://issues.apache.org/jira/browse/HDFS-17376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.3.6
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.2
>
>
> If the source file is erasure-coded, distcp without the preserve option 
> creates a file with replication factor 1, which is not intended. 
> This is because getReplication() always returns 1 for an EC file. Instead, 
> distcp should create the file with the target's default replication.
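The intended fix can be sketched as a small decision helper. This is a hypothetical sketch (the method and parameter names are illustrative, not the actual distcp code): an EC source reports getReplication() == 1, which is meaningless on a replicated target, so the target's default should be used instead.

```java
public class EcReplicationSketch {
    static short chooseTargetReplication(boolean sourceIsErasureCoded,
                                         short sourceReplication,
                                         short targetDefaultReplication) {
        // For an EC source, sourceReplication is always 1; fall back to the
        // target filesystem's default instead of copying that 1.
        return sourceIsErasureCoded ? targetDefaultReplication : sourceReplication;
    }

    public static void main(String[] args) {
        // EC source: use the target default (e.g. 3), not the reported 1.
        System.out.println(chooseTargetReplication(true, (short) 1, (short) 3));   // prints 3
        // Replicated source: preserve the source's replication factor.
        System.out.println(chooseTargetReplication(false, (short) 2, (short) 3));  // prints 2
    }
}
```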






[jira] [Updated] (HDFS-17376) Distcp creates Factor 1 replication file on target if Source is EC

2024-09-28 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17376:
---
Fix Version/s: 3.4.1

> Distcp creates Factor 1 replication file on target if Source is EC
> --
>
> Key: HDFS-17376
> URL: https://issues.apache.org/jira/browse/HDFS-17376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.3.6
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1, 3.5.0
>
>
> If the source file is erasure-coded, distcp without the preserve option 
> creates a file with replication factor 1, which is not intended. 
> This is because getReplication() always returns 1 for an EC file. Instead, 
> distcp should create the file with the target's default replication.






[jira] [Resolved] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-28 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-17626.

Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During datanode startup there is a debug log statement without a 
> LOG.isDebugEnabled() guard, so the read lock is acquired even when debug 
> logging is disabled. Adding the guard reduces lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!
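The guard pattern described above can be sketched as follows. This is a simplified illustration (a boolean flag and a counter stand in for the SLF4J logger and the dataset read lock; the names are made up, not the actual datanode code):

```java
public class DebugGuardSketch {
    static boolean debugEnabled = false;   // debug logging is off
    static int lockAcquisitions = 0;

    static String dumpVolumeState() {
        lockAcquisitions++;                // stands in for taking the read lock
        return "volume state";
    }

    static void logStartupState() {
        // The fix: without this guard, dumpVolumeState() -- and the lock it
        // takes -- would run even when debug logging is disabled.
        if (debugEnabled) {
            System.out.println("DEBUG: " + dumpVolumeState());
        }
    }

    public static void main(String[] args) {
        logStartupState();
        System.out.println(lockAcquisitions);  // prints 0: the lock was never taken
    }
}
```

The same effect can be had with SLF4J's parameterized logging when only argument construction is expensive, but here the lock is taken before the call, so an explicit isDebugEnabled() check is the right shape.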






[jira] [Comment Edited] (HDFS-15098) Add SM4 encryption method for HDFS

2024-09-28 Thread wayne cook (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885643#comment-17885643
 ] 

wayne cook edited comment on HDFS-15098 at 9/29/24 2:12 AM:


I have a error message  as follows, when i update to  hadoop 3.4.0 from 3.3.4.

 
{code:java}
# The namenode log
2024-09-27 16:41:39 ERROR org.apache.ranger.plugin.util.PolicyRefresher: 
PolicyRefresher(serviceName=hdfs-service): failed to refresh policies. Will 
continue to use last known version of policies 
(10)javax.ws.rs.WebApplicationException: 
com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of 
IllegalAnnotationExceptionsjava.util.Map is an interface, and JAXB can't handle 
interfaces.   this problem is related to the following location:  
at java.util.Mapat private java.util.List 
org.apache.ranger.plugin.model.RangerPolicy.additionalResources   
at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePoliciesjava.util.Map does not 
have a no-arg default constructor.   this problem is related to the 
following location:  at java.util.Mapat private 
java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources  
 at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePolicies
at 
com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.readFrom(AbstractRootElementProvider.java:115)
 at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:634) 
 at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:586) 
 at 
org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdatedWithCred(RangerAdminRESTClient.java:858)
 at 
org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:146)
 at 
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:308)
at 
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:247)
   at 
org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:209)Caused
 by: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of 
IllegalAnnotationExceptionsjava.util.Map is an interface, and JAXB can't handle 
interfaces.   this problem is related to the following location:
  at java.util.Mapat private java.util.List 
org.apache.ranger.plugin.model.RangerPolicy.additionalResources   
at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePoliciesjava.util.Map does not 
have a no-arg default constructor.   this problem is related to the 
following location:  at java.util.Mapat private 
java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources  
 at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePolicies
at 
com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
  at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
 at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319) at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
  at 
com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)   
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at 
javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) at 
javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) at 
javax.xml.bind.ContextFinder.find(ContextFinder.java:441)at 
javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) at 
javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) at 
com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getStoredJAXBContext(AbstractJAXBProvider.java:196)
   at 
com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getJAXBContext(AbstractJAXBProvider.java:188)
 at 
com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.g

[jira] [Comment Edited] (HDFS-15098) Add SM4 encryption method for HDFS

2024-09-28 Thread wayne cook (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885643#comment-17885643
 ] 

wayne cook edited comment on HDFS-15098 at 9/29/24 2:11 AM:


I have a error message  as follows, when i update to  hadoop 3.4.0 from 3.3.4.

 
{code:java}
# The namenode log
2024-09-27 16:41:39 ERROR org.apache.ranger.plugin.util.PolicyRefresher: 
PolicyRefresher(serviceName=hdfs-service): failed to refresh policies. Will 
continue to use last known version of policies 
(10)javax.ws.rs.WebApplicationException: 
com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of 
IllegalAnnotationExceptionsjava.util.Map is an interface, and JAXB can't handle 
interfaces.   this problem is related to the following location:  
at java.util.Mapat private java.util.List 
org.apache.ranger.plugin.model.RangerPolicy.additionalResources   
at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePoliciesjava.util.Map does not 
have a no-arg default constructor.   this problem is related to the 
following location:  at java.util.Mapat private 
java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources  
 at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePolicies
at 
com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.readFrom(AbstractRootElementProvider.java:115)
 at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:634) 
 at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:586) 
 at 
org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdatedWithCred(RangerAdminRESTClient.java:858)
 at 
org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:146)
 at 
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:308)
at 
org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:247)
   at 
org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:209)Caused
 by: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of 
IllegalAnnotationExceptionsjava.util.Map is an interface, and JAXB can't handle 
interfaces.   this problem is related to the following location:
  at java.util.Mapat private java.util.List 
org.apache.ranger.plugin.model.RangerPolicy.additionalResources   
at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePoliciesjava.util.Map does not 
have a no-arg default constructor.   this problem is related to the 
following location:  at java.util.Mapat private 
java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources  
 at org.apache.ranger.plugin.model.RangerPolicy  at private 
java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies   
 at org.apache.ranger.plugin.util.ServicePolicies
at 
com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
  at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
 at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319) at 
com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
  at 
com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)   
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at 
javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) at 
javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) at 
javax.xml.bind.ContextFinder.find(ContextFinder.java:441)at 
javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) at 
javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) at 
com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getStoredJAXBContext(AbstractJAXBProvider.java:196)
   at 
com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getJAXBContext(AbstractJAXBProvider.java:188)
 at 
com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.g

[jira] [Comment Edited] (HDFS-15098) Add SM4 encryption method for HDFS

2024-09-28 Thread wayne cook (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885643#comment-17885643
 ] 

wayne cook edited comment on HDFS-15098 at 9/29/24 2:11 AM:


I have a error message  as follows, when i update to  hadoop 3.4.0 from 3.3.4.

 
{code:java}
# The namenode log
2024-09-27 16:41:39 ERROR org.apache.ranger.plugin.util.PolicyRefresher: PolicyRefresher(serviceName=hdfs-service): failed to refresh policies. Will continue to use last known version of policies (10)
javax.ws.rs.WebApplicationException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
java.util.Map is an interface, and JAXB can't handle interfaces.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
java.util.Map does not have a no-arg default constructor.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
    at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.readFrom(AbstractRootElementProvider.java:115)
    at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:634)
    at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:586)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdatedWithCred(RangerAdminRESTClient.java:858)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:146)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:308)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:247)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:209)
Caused by: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
java.util.Map is an interface, and JAXB can't handle interfaces.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
java.util.Map does not have a no-arg default constructor.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
    at com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
    at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
    at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getStoredJAXBContext(AbstractJAXBProvider.java:196)
    at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getJAXBContext(AbstractJAXBProvider.java:188)
    at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.g

[jira] [Commented] (HDFS-15098) Add SM4 encryption method for HDFS

2024-09-28 Thread wayne cook (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885643#comment-17885643
 ] 

wayne cook commented on HDFS-15098:
---

I get the following error message after upgrading from Hadoop 3.3.4 to 3.4.0.

 
{code:java}
# The namenode log
2024-09-27 16:41:39 ERROR org.apache.ranger.plugin.util.PolicyRefresher: PolicyRefresher(serviceName=hdfs-service): failed to refresh policies. Will continue to use last known version of policies (10)
javax.ws.rs.WebApplicationException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
java.util.Map is an interface, and JAXB can't handle interfaces.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
java.util.Map does not have a no-arg default constructor.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
    at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.readFrom(AbstractRootElementProvider.java:115)
    at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:634)
    at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:586)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdatedWithCred(RangerAdminRESTClient.java:858)
    at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:146)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:308)
    at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:247)
    at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:209)
Caused by: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of IllegalAnnotationExceptions
java.util.Map is an interface, and JAXB can't handle interfaces.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
java.util.Map does not have a no-arg default constructor.
    this problem is related to the following location:
        at java.util.Map
        at private java.util.List org.apache.ranger.plugin.model.RangerPolicy.additionalResources
        at org.apache.ranger.plugin.model.RangerPolicy
        at private java.util.List org.apache.ranger.plugin.util.ServicePolicies.policies
        at org.apache.ranger.plugin.util.ServicePolicies
    at com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319)
    at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170)
    at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247)
    at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234)
    at javax.xml.bind.ContextFinder.find(ContextFinder.java:441)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641)
    at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584)
    at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getStoredJAXBContext(AbstractJAXBProvider.java:196)
    at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getJAXBContext(AbstractJAXBProvider.java:188)
    at com.sun.jersey.core.provider.jaxb.AbstractJAXBProvider.getUnmarshaller(AbstractJAXBProvider.java:1

[jira] [Commented] (HDFS-17629) The IP address is incorrectly displayed in the IPv6 environment.

2024-09-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885562#comment-17885562
 ] 

ASF GitHub Bot commented on HDFS-17629:
---

hadoop-yetus commented on PR #7078:
URL: https://github.com/apache/hadoop/pull/7078#issuecomment-2380608198

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  79m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  shadedclient  |  34m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 118m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7078/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7078 |
   | Optional Tests | dupname asflicense shadedclient codespell detsecrets 
jshint |
   | uname | Linux 9b683fc1bea7 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 338ee9c42b78b8f62f09e0a01fd3053ea7a3fed2 |
   | Max. process+thread count | 699 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7078/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> The IP address is incorrectly displayed in the IPv6 environment.
> 
>
> Key: HDFS-17629
> URL: https://issues.apache.org/jira/browse/HDFS-17629
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zeekling
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-09-23-09-22-28-495.png
>
>
>  
> !image-2024-09-23-09-22-28-495.png!
>  
> The root cause is in function open_hostip_list in histogram-hostip.js:
> {code:java}
> if (index > x0 && index <= x1) {
>       ips.push(dn.infoAddr.split(":")[0]);
> } {code}
> It needs to be changed to:
> {code:java}
> if (index > x0 && index <= x1) {
>    var idx = dn.infoAddr.lastIndexOf(":");
>    var dnIp = dn.infoAddr.substring(0, idx);
>    ips.push(dnIp);
> }{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17629) The IP address is incorrectly displayed in the IPv6 environment.

2024-09-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17629:
--
Labels: pull-request-available  (was: )

> The IP address is incorrectly displayed in the IPv6 environment.
> 
>
> Key: HDFS-17629
> URL: https://issues.apache.org/jira/browse/HDFS-17629
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zeekling
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-09-23-09-22-28-495.png
>
>
>  
> !image-2024-09-23-09-22-28-495.png!
>  
> The root cause is in function open_hostip_list in histogram-hostip.js:
> {code:java}
> if (index > x0 && index <= x1) {
>       ips.push(dn.infoAddr.split(":")[0]);
> } {code}
> It needs to be changed to:
> {code:java}
> if (index > x0 && index <= x1) {
>    var idx = dn.infoAddr.lastIndexOf(":");
>    var dnIp = dn.infoAddr.substring(0, idx);
>    ips.push(dnIp);
> }{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17629) The IP address is incorrectly displayed in the IPv6 environment.

2024-09-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1788#comment-1788
 ] 

ASF GitHub Bot commented on HDFS-17629:
---

zeekling opened a new pull request, #7078:
URL: https://github.com/apache/hadoop/pull/7078

   …nment.
   
   ### Description of PR
   
   for https://issues.apache.org/jira/browse/HDFS-17629
   
   The root cause is in function open_hostip_list in histogram-hostip.js:
   
   ```js
   if (index > x0 && index <= x1) {
 ips.push(dn.infoAddr.split(":")[0]);
   } 
   ```
   It needs to be changed to:
   
   ```js
   if (index > x0 && index <= x1) {
     var idx = dn.infoAddr.lastIndexOf(":");
     var dnIp = dn.infoAddr.substring(0, idx);
     ips.push(dnIp);
   }
   ```
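   As a quick sanity check (a sketch, not part of the patch; Node.js is assumed, and `firstSplit`/`extractIp` are illustrative helper names mirroring the before/after logic), the two approaches can be compared on IPv4 and IPv6 `host:port` strings:

   ```javascript
   const assert = require("assert");

   // Buggy approach: split(":") truncates an IPv6 address at its first colon.
   function firstSplit(infoAddr) {
     return infoAddr.split(":")[0];
   }

   // Proposed approach: strip only the trailing ":port" using lastIndexOf.
   function extractIp(infoAddr) {
     return infoAddr.substring(0, infoAddr.lastIndexOf(":"));
   }

   // IPv4 "host:port": both approaches return the full address.
   assert.strictEqual(firstSplit("10.0.0.1:9870"), "10.0.0.1");
   assert.strictEqual(extractIp("10.0.0.1:9870"), "10.0.0.1");

   // IPv6 "host:port": split keeps only the first group, lastIndexOf keeps the host.
   assert.strictEqual(firstSplit("fe80::1:9870"), "fe80");
   assert.strictEqual(extractIp("fe80::1:9870"), "fe80::1");
   ```

   This is why the patch replaces the `split(":")[0]` call rather than adding to it.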
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> The IP address is incorrectly displayed in the IPv6 environment.
> 
>
>     Key: HDFS-17629
> URL: https://issues.apache.org/jira/browse/HDFS-17629
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zeekling
>Priority: Major
> Attachments: image-2024-09-23-09-22-28-495.png
>
>
>  
> !image-2024-09-23-09-22-28-495.png!
>  
> The root cause is in function open_hostip_list in histogram-hostip.js:
> {code:java}
> if (index > x0 && index <= x1) {
>       ips.push(dn.infoAddr.split(":")[0]);
> } {code}
> It needs to be changed to:
> {code:java}
> if (index > x0 && index <= x1) {
>    var idx = dn.infoAddr.lastIndexOf(":");
>    var dnIp = dn.infoAddr.substring(0, idx);
>    ips.push(dnIp);
> }{code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17594) [ARR] RouterCacheAdmin supports asynchronous rpc.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885318#comment-17885318
 ] 

ASF GitHub Bot commented on HDFS-17594:
---

KeeProMise commented on PR #6986:
URL: https://github.com/apache/hadoop/pull/6986#issuecomment-2379024105

   Hi, @Archie-wang 
[HDFS-17545](https://issues.apache.org/jira/browse/HDFS-17545) has already been 
merged into [HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531), 
please rebase your development branch using 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531).




> [ARR] RouterCacheAdmin supports asynchronous rpc.
> -
>
> Key: HDFS-17594
> URL: https://issues.apache.org/jira/browse/HDFS-17594
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is RouterAsyncCacheAdmin, which extends 
> RouterCacheAdmin so that cache admin supports asynchronous rpc.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17597) [ARR] RouterSnapshot supports asynchronous rpc.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885366#comment-17885366
 ] 

ASF GitHub Bot commented on HDFS-17597:
---

hadoop-yetus commented on PR #6994:
URL: https://github.com/apache/hadoop/pull/6994#issuecomment-2379287223

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 24s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 38s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  21m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  29m 37s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6994/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 22s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 115m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6994/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6994 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 85c883ace502 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 
10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / 428c52d91875bfd3a688c31467d7037f19c1b760 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6994/2/testReport/ |
   | Max. process+thread count | 3244 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
ha

[jira] [Commented] (HDFS-17632) RBF: Support listOpenFiles for routers

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885427#comment-17885427
 ] 

ASF GitHub Bot commented on HDFS-17632:
---

hadoop-yetus commented on PR #7075:
URL: https://github.com/apache/hadoop/pull/7075#issuecomment-2379709792

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   5m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   4m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   6m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   5m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  6s | 
[/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7075/1/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
with JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 3116 
unchanged - 0 fixed = 3117 total (was 3116)  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   5m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 224m 14s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  unit  |  30m 57s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7075/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 443m 16s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.security.token.TestZKDelegationTokenSecretManagerImpl
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7075/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7075 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9033aff95b68 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/L

[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-09-27 Thread farmmamba (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885378#comment-17885378
 ] 

farmmamba commented on HDFS-17595:
--

Nice, sir. I will do it soon after the vacation.



 Replied Message 

   [ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885320#comment-17885320
 ]

ASF GitHub Bot commented on HDFS-17595:
---

KeeProMise commented on PR #6983:
URL: https://github.com/apache/hadoop/pull/6983#issuecomment-2379024667

  Hi, @hfutatzhanghb  
[HDFS-17545](https://issues.apache.org/jira/browse/HDFS-17545) has already been 
merged into [HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531), 
please rebase your development branch using 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531).







--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


> [ARR] ErasureCoding supports asynchronous rpc.
> --
>
> Key: HDFS-17595
> URL: https://issues.apache.org/jira/browse/HDFS-17595
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is AsyncErasureCoding, which extends ErasureCoding so
> that it supports asynchronous RPC.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17597) [ARR] RouterSnapshot supports asynchronous rpc.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885370#comment-17885370
 ] 

ASF GitHub Bot commented on HDFS-17597:
---

hadoop-yetus commented on PR #6994:
URL: https://github.com/apache/hadoop/pull/6994#issuecomment-2379302639

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 15s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  21m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  27m  5s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 114m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6994/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6994 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 705019f217c8 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 
10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / 50a13b3a9bd6d832713496c43dded767c1fc8bc1 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6994/3/testReport/ |
   | Max. process+thread count | 3704 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6994/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HDFS-17596) [ARR] RouterStoragePolicy supports asynchronous rpc.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885317#comment-17885317
 ] 

ASF GitHub Bot commented on HDFS-17596:
---

KeeProMise commented on PR #6988:
URL: https://github.com/apache/hadoop/pull/6988#issuecomment-2379023538

   Hi, @hfutatzhanghb 
[HDFS-17545](https://issues.apache.org/jira/browse/HDFS-17545) has already been 
merged into [HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531), 
please rebase your development branch using 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531).




> [ARR] RouterStoragePolicy supports asynchronous rpc.
> 
>
> Key: HDFS-17596
> URL: https://issues.apache.org/jira/browse/HDFS-17596
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is RouterAsyncStoragePolicy, which extends 
> RouterStoragePolicy so that it supports asynchronous rpc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17545) [ARR] router async rpc client.

2024-09-27 Thread Jian Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Zhang resolved HDFS-17545.
---
Fix Version/s: HDFS-17531
 Hadoop Flags: Reviewed
   Resolution: Done

> [ARR] router async rpc client.
> --
>
> Key: HDFS-17545
> URL: https://issues.apache.org/jira/browse/HDFS-17545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: HDFS-17531
>
>
> *Describe*
> 1. Mainly uses AsyncUtil to implement {*}RouterAsyncRpcClient{*}; this class 
> extends RouterRpcClient, enabling the {*}invokeAll{*}, {*}invokeMethod{*}, 
> {*}invokeSequential{*}, {*}invokeConcurrent{*}, and *invokeSingle* methods 
> to support asynchrony.
> 2. Use two thread pools, *asyncRouterHandler* and {*}asyncRouterResponder{*}, 
> to handle asynchronous requests and responses, respectively.
> 3. Added {*}DFS_ROUTER_RPC_ENABLE_ASYNC{*}, 
> {*}DFS_ROUTER_RPC_ASYNC_HANDLER_COUNT{*}, 
> *DFS_ROUTER_RPC_ASYNC_RESPONDER_COUNT_DEFAULT* to configure whether to use 
> async router, as well as the number of asyncRouterHandlers and 
> asyncRouterResponders.
> 4. Using *ThreadLocalContext* to maintain thread local variables, ensuring 
> that thread local variables can be correctly passed between handler, 
> asyncRouterHandler, and asyncRouterResponder.
>  
> *Test*
> new UT TestRouterAsyncRpcClient
> Note: For discussions on *AsyncUtil* and client {*}protocolPB{*}, please 
> refer to HDFS-17543 and HDFS-17544.
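The handoff between the two thread pools in points 1 and 2 can be sketched with plain JDK executors. This is a hypothetical illustration only: `AsyncRouterSketch`, its pool sizes, and the string "RPC" stand in for the real RouterAsyncRpcClient, AsyncUtil, and protobuf plumbing from HDFS-17545.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncRouterSketch {
    // Daemon pools so the JVM can exit without an explicit shutdown.
    static ExecutorService pool(String name) {
        return Executors.newFixedThreadPool(2, r -> {
            Thread t = new Thread(r, name);
            t.setDaemon(true);
            return t;
        });
    }

    // Handler pool issues the downstream RPC; responder pool completes
    // the client response, mirroring asyncRouterHandler/asyncRouterResponder.
    static final ExecutorService asyncRouterHandler = pool("asyncRouterHandler");
    static final ExecutorService asyncRouterResponder = pool("asyncRouterResponder");

    static CompletableFuture<String> invokeMethod(String request) {
        return CompletableFuture
            .supplyAsync(() -> "reply:" + request, asyncRouterHandler)
            .thenApplyAsync(String::toUpperCase, asyncRouterResponder);
    }

    public static void main(String[] args) {
        // The calling thread is never blocked on the namenode round trip.
        System.out.println(invokeMethod("getBlockLocations").join());
        // prints: REPLY:GETBLOCKLOCATIONS
    }
}
```

The design point is that the router handler thread returns immediately after scheduling the call, so a slow downstream namenode does not pin a handler thread.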



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17597) [ARR] RouterSnapshot supports asynchronous rpc.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885335#comment-17885335
 ] 

ASF GitHub Bot commented on HDFS-17597:
---

LeoLee commented on PR #6994:
URL: https://github.com/apache/hadoop/pull/6994#issuecomment-2379082449

   > Hi, @LeoLee 
[HDFS-17545](https://issues.apache.org/jira/browse/HDFS-17545) has already been 
merged into [HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531), 
please rebase your development branch using 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531).
   
   @KeeProMise Ok, recommit, please take a look again.




> [ARR] RouterSnapshot supports asynchronous rpc.
> ---
>
> Key: HDFS-17597
> URL: https://issues.apache.org/jira/browse/HDFS-17597
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is RouterAsyncSnapshot, which extends RouterSnapshot so 
> that it supports asynchronous rpc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885312#comment-17885312
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

KeeProMise commented on PR #6871:
URL: https://github.com/apache/hadoop/pull/6871#issuecomment-2378999830

   > @KeeProMise Thanks for your works. LGTM. I think it is ready to commit to 
branch-17531 and let's continue the rest PR. Thanks again.
   
   @Hexiaoqiao @hfutatzhanghb Thanks for your review!




> [ARR] router async rpc client.
> --
>
> Key: HDFS-17545
> URL: https://issues.apache.org/jira/browse/HDFS-17545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> 1. Mainly uses AsyncUtil to implement {*}RouterAsyncRpcClient{*}; this class 
> extends RouterRpcClient, enabling the {*}invokeAll{*}, {*}invokeMethod{*}, 
> {*}invokeSequential{*}, {*}invokeConcurrent{*}, and *invokeSingle* methods 
> to support asynchrony.
> 2. Use two thread pools, *asyncRouterHandler* and {*}asyncRouterResponder{*}, 
> to handle asynchronous requests and responses, respectively.
> 3. Added {*}DFS_ROUTER_RPC_ENABLE_ASYNC{*}, 
> {*}DFS_ROUTER_RPC_ASYNC_HANDLER_COUNT{*}, 
> *DFS_ROUTER_RPC_ASYNC_RESPONDER_COUNT_DEFAULT* to configure whether to use 
> async router, as well as the number of asyncRouterHandlers and 
> asyncRouterResponders.
> 4. Using *ThreadLocalContext* to maintain thread local variables, ensuring 
> that thread local variables can be correctly passed between handler, 
> asyncRouterHandler, and asyncRouterResponder.
>  
> *Test*
> new UT TestRouterAsyncRpcClient
> Note: For discussions on *AsyncUtil* and client {*}protocolPB{*}, please 
> refer to HDFS-17543 and HDFS-17544.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17595) [ARR] ErasureCoding supports asynchronous rpc.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885320#comment-17885320
 ] 

ASF GitHub Bot commented on HDFS-17595:
---

KeeProMise commented on PR #6983:
URL: https://github.com/apache/hadoop/pull/6983#issuecomment-2379024667

   Hi, @hfutatzhanghb  
[HDFS-17545](https://issues.apache.org/jira/browse/HDFS-17545) has already been 
merged into [HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531), 
please rebase your development branch using 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531).




> [ARR] ErasureCoding supports asynchronous rpc.
> --
>
> Key: HDFS-17595
> URL: https://issues.apache.org/jira/browse/HDFS-17595
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is AsyncErasureCoding, which extends ErasureCoding so 
> that it supports asynchronous rpc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885311#comment-17885311
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

KeeProMise merged PR #6871:
URL: https://github.com/apache/hadoop/pull/6871




> [ARR] router async rpc client.
> --
>
> Key: HDFS-17545
> URL: https://issues.apache.org/jira/browse/HDFS-17545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> 1. Mainly uses AsyncUtil to implement {*}RouterAsyncRpcClient{*}; this class 
> extends RouterRpcClient, enabling the {*}invokeAll{*}, {*}invokeMethod{*}, 
> {*}invokeSequential{*}, {*}invokeConcurrent{*}, and *invokeSingle* methods 
> to support asynchrony.
> 2. Use two thread pools, *asyncRouterHandler* and {*}asyncRouterResponder{*}, 
> to handle asynchronous requests and responses, respectively.
> 3. Added {*}DFS_ROUTER_RPC_ENABLE_ASYNC{*}, 
> {*}DFS_ROUTER_RPC_ASYNC_HANDLER_COUNT{*}, 
> *DFS_ROUTER_RPC_ASYNC_RESPONDER_COUNT_DEFAULT* to configure whether to use 
> async router, as well as the number of asyncRouterHandlers and 
> asyncRouterResponders.
> 4. Using *ThreadLocalContext* to maintain thread local variables, ensuring 
> that thread local variables can be correctly passed between handler, 
> asyncRouterHandler, and asyncRouterResponder.
>  
> *Test*
> new UT TestRouterAsyncRpcClient
> Note: For discussions on *AsyncUtil* and client {*}protocolPB{*}, please 
> refer to HDFS-17543 and HDFS-17544.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17597) [ARR] RouterSnapshot supports asynchronous rpc.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885316#comment-17885316
 ] 

ASF GitHub Bot commented on HDFS-17597:
---

KeeProMise commented on PR #6994:
URL: https://github.com/apache/hadoop/pull/6994#issuecomment-2379022443

   Hi, @LeoLee HDFS-17545 has already been merged into HDFS-17531, please 
rebase your development branch using HDFS-17531.




> [ARR] RouterSnapshot supports asynchronous rpc.
> ---
>
> Key: HDFS-17597
> URL: https://issues.apache.org/jira/browse/HDFS-17597
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Jian Zhang
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> The main new addition is RouterAsyncSnapshot, which extends RouterSnapshot so 
> that it supports asynchronous rpc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17632) RBF: Support listOpenFiles for routers

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885289#comment-17885289
 ] 

ASF GitHub Bot commented on HDFS-17632:
---

kokon191 opened a new pull request, #7075:
URL: https://github.com/apache/hadoop/pull/7075

   ### Description of PR
   
   Routers don't have support for `listOpenFiles` yet. Single-destination paths 
are straightforward. Multi-destination paths are joined; entries with inodeId > 
min(max(entry inodeIds)) are ignored to get a consistent prevId for the 
iterator.
   
   ### How was this patch tested?
   
   UTs.
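The multi-destination join rule above can be sketched on bare inode ids: take the minimum over subclusters of each batch's maximum inodeId as the cutoff, and drop anything above it so every subcluster has fully reported up to the returned prevId. `OpenFilesMerge` and the plain `Long` lists are illustrative assumptions, not the PR's actual code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class OpenFilesMerge {
    // Join per-subcluster batches; entries whose inodeId exceeds
    // min(max(inodeIds per batch)) are deferred to the next iteration
    // so the iterator's prevId stays consistent across subclusters.
    static List<Long> merge(List<List<Long>> batches) {
        long cutoff = Long.MAX_VALUE;
        for (List<Long> batch : batches) {
            if (!batch.isEmpty()) {
                cutoff = Math.min(cutoff, Collections.max(batch));
            }
        }
        List<Long> merged = new ArrayList<>();
        for (List<Long> batch : batches) {
            for (long id : batch) {
                if (id <= cutoff) {
                    merged.add(id);
                }
            }
        }
        Collections.sort(merged);
        return merged;
    }

    public static void main(String[] args) {
        List<Long> out = merge(Arrays.asList(
            Arrays.asList(1L, 5L, 9L),   // subcluster A reported up to 9
            Arrays.asList(2L, 6L)));     // subcluster B reported up to 6
        // cutoff = min(9, 6) = 6, so id 9 waits for the next batch
        System.out.println(out);         // prints: [1, 2, 5, 6]
    }
}
```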




> RBF: Support listOpenFiles for routers
> --
>
> Key: HDFS-17632
> URL: https://issues.apache.org/jira/browse/HDFS-17632
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Major
>
> {code:java}
> @Override
> public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
> EnumSet<OpenFilesType> openFilesTypes, String path)
> throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17545) [ARR] router async rpc client.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885303#comment-17885303
 ] 

ASF GitHub Bot commented on HDFS-17545:
---

Hexiaoqiao commented on PR #6871:
URL: https://github.com/apache/hadoop/pull/6871#issuecomment-2378911936

   @KeeProMise Thanks for your works. LGTM. I think it is ready to commit to 
branch-17531 and let's continue the rest PR. Thanks again.




> [ARR] router async rpc client.
> --
>
> Key: HDFS-17545
> URL: https://issues.apache.org/jira/browse/HDFS-17545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>
> *Describe*
> 1. Mainly uses AsyncUtil to implement {*}RouterAsyncRpcClient{*}; this class 
> extends RouterRpcClient, enabling the {*}invokeAll{*}, {*}invokeMethod{*}, 
> {*}invokeSequential{*}, {*}invokeConcurrent{*}, and *invokeSingle* methods 
> to support asynchrony.
> 2. Use two thread pools, *asyncRouterHandler* and {*}asyncRouterResponder{*}, 
> to handle asynchronous requests and responses, respectively.
> 3. Added {*}DFS_ROUTER_RPC_ENABLE_ASYNC{*}, 
> {*}DFS_ROUTER_RPC_ASYNC_HANDLER_COUNT{*}, 
> *DFS_ROUTER_RPC_ASYNC_RESPONDER_COUNT_DEFAULT* to configure whether to use 
> async router, as well as the number of asyncRouterHandlers and 
> asyncRouterResponders.
> 4. Using *ThreadLocalContext* to maintain thread local variables, ensuring 
> that thread local variables can be correctly passed between handler, 
> asyncRouterHandler, and asyncRouterResponder.
>  
> *Test*
> new UT TestRouterAsyncRpcClient
> Note: For discussions on *AsyncUtil* and client {*}protocolPB{*}, please 
> refer to HDFS-17543 and HDFS-17544.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17607) Reduce the number of times conf is loaded when DataNode startUp

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885291#comment-17885291
 ] 

ASF GitHub Bot commented on HDFS-17607:
---

Hexiaoqiao commented on code in PR #7012:
URL: https://github.com/apache/hadoop/pull/7012#discussion_r1778337276


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java:
##
@@ -335,7 +335,7 @@ public VolumeBuilder prepareVolume(DataNode datanode,
 VolumeBuilder builder =
 new VolumeBuilder(this, sd);
 for (NamespaceInfo nsInfo : nsInfos) {
-  location.makeBlockPoolDir(nsInfo.getBlockPoolID(), null);
+  location.makeBlockPoolDir(nsInfo.getBlockPoolID(), datanode.getConf());

Review Comment:
   what about `datanode.getDNConf()` here?





> Reduce the number of times conf is loaded when DataNode startUp
> ---
>
> Key: HDFS-17607
> URL: https://issues.apache.org/jira/browse/HDFS-17607
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: lei w
>Priority: Major
>  Labels: pull-request-available
>
> The value of the conf parameter in the current access method 
> StorageLocation#makeBlockPoolDir is null, which leads to the problem of 
> loading conf multiple times when DataNode startUp



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17632) RBF: Support listOpenFiles for routers

2024-09-27 Thread Felix N (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Felix N updated HDFS-17632:
---
Component/s: hdfs
 rbf

> RBF: Support listOpenFiles for routers
> --
>
> Key: HDFS-17632
> URL: https://issues.apache.org/jira/browse/HDFS-17632
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, rbf
>Reporter: Felix N
>Assignee: Felix N
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> @Override
> public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
> EnumSet<OpenFilesType> openFilesTypes, String path)
> throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17632) RBF: Support listOpenFiles for routers

2024-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17632:
--
Labels: pull-request-available  (was: )

> RBF: Support listOpenFiles for routers
> --
>
> Key: HDFS-17632
> URL: https://issues.apache.org/jira/browse/HDFS-17632
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Major
>  Labels: pull-request-available
>
> {code:java}
> @Override
> public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
> EnumSet<OpenFilesType> openFilesTypes, String path)
> throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17632) RBF: Support listOpenFiles for routers

2024-09-27 Thread Felix N (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Felix N updated HDFS-17632:
---
Summary: RBF: Support listOpenFiles for routers  (was: Support 
listOpenFiles for routers)

> RBF: Support listOpenFiles for routers
> --
>
> Key: HDFS-17632
> URL: https://issues.apache.org/jira/browse/HDFS-17632
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Felix N
>Assignee: Felix N
>Priority: Major
>
> {code:java}
> @Override
> public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
> EnumSet<OpenFilesType> openFilesTypes, String path)
> throws IOException {
>   rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
>   return null;
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17632) Support listOpenFiles for routers

2024-09-27 Thread Felix N (Jira)
Felix N created HDFS-17632:
--

 Summary: Support listOpenFiles for routers
 Key: HDFS-17632
 URL: https://issues.apache.org/jira/browse/HDFS-17632
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Felix N
Assignee: Felix N


{code:java}
@Override
public BatchedEntries<OpenFileEntry> listOpenFiles(long prevId,
EnumSet<OpenFilesType> openFilesTypes, String path)
throws IOException {
  rpcServer.checkOperation(NameNode.OperationCategory.READ, false);
  return null;
} {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17624) Fix DFSNetworkTopology#chooseRandomWithStorageType() availableCount when excluded node is not in selected scope.

2024-09-27 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17624:
---
Summary: Fix DFSNetworkTopology#chooseRandomWithStorageType() 
availableCount when excluded node is not in selected scope.  (was: 
DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the 
storage count for excluded nodes that are not part of the selected scope.)

> Fix DFSNetworkTopology#chooseRandomWithStorageType() availableCount when 
> excluded node is not in selected scope.
> 
>
> Key: HDFS-17624
> URL: https://issues.apache.org/jira/browse/HDFS-17624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: fuchaohong
>Assignee: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> Presently, if the chosen scope is /default/rack1 and the excluded node is 
> /default/rack2/host2, the available count is still deducted.
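The fixed behaviour can be sketched with a toy counter: only deduct the available-storage count for excluded nodes whose path falls inside the chosen scope. `ScopeCountSketch` and its simple path-prefix check are hypothetical stand-ins for the real DFSNetworkTopology bookkeeping.

```java
public class ScopeCountSketch {
    // Deduct one slot per excluded node, but only when the node actually
    // lies inside the chosen scope; nodes in other racks leave the count
    // untouched (the bug was deducting for them as well).
    static int availableAfterExclusion(int available, String scope,
                                       String... excludedNodes) {
        for (String node : excludedNodes) {
            if (node.startsWith(scope + "/")) {
                available--;
            }
        }
        return available;
    }

    public static void main(String[] args) {
        // /default/rack2/host2 is outside /default/rack1: no deduction.
        System.out.println(availableAfterExclusion(
            3, "/default/rack1", "/default/rack2/host2"));  // prints: 3
        // /default/rack1/host1 is inside the scope: deduct one.
        System.out.println(availableAfterExclusion(
            3, "/default/rack1", "/default/rack1/host1"));  // prints: 2
    }
}
```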



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-17624) Fix DFSNetworkTopology#chooseRandomWithStorageType() availableCount when excluded node is not in selected scope.

2024-09-27 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-17624.

Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Fix DFSNetworkTopology#chooseRandomWithStorageType() availableCount when 
> excluded node is not in selected scope.
> 
>
> Key: HDFS-17624
> URL: https://issues.apache.org/jira/browse/HDFS-17624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: fuchaohong
>Assignee: fuchaohong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Presently, if the chosen scope is /default/rack1 and the excluded node is 
> /default/rack2/host2, the available count is still deducted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17624) DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the storage count for excluded nodes that are not part of the selected scope.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885279#comment-17885279
 ] 

ASF GitHub Bot commented on HDFS-17624:
---

Hexiaoqiao commented on PR #7042:
URL: https://github.com/apache/hadoop/pull/7042#issuecomment-2378790373

   Committed to trunk. Thanks @fuchaohong .




> DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the 
> storage count for excluded nodes that are not part of the selected scope.
> --
>
> Key: HDFS-17624
> URL: https://issues.apache.org/jira/browse/HDFS-17624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: fuchaohong
>Assignee: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> Presently, if the chosen scope is /default/rack1 and the excluded node is 
> /default/rack2/host2, the available count is still deducted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17624) DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the storage count for excluded nodes that are not part of the selected scope.

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885278#comment-17885278
 ] 

ASF GitHub Bot commented on HDFS-17624:
---

Hexiaoqiao merged PR #7042:
URL: https://github.com/apache/hadoop/pull/7042




> DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the 
> storage count for excluded nodes that are not part of the selected scope.
> --
>
> Key: HDFS-17624
> URL: https://issues.apache.org/jira/browse/HDFS-17624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: fuchaohong
>Assignee: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> Presently, if the chosen scope is /default/rack1 and the excluded node is 
> /default/rack2/host2, the available count is still deducted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885263#comment-17885263
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

KeeProMise commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1778211573


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   > > HI, IMO, "if (LOG.isDebugEnabled()) {...}" is better.
   > 
   > Thanks for your comment. I agree with @ayushtkn and @virajjasani here. 
LOG.debug already does isDebugEnabled() internally.
   > 
   
   @tomscut Got it, thanks, LGTM.
   
   





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During datanode startup there is a debug log without a LOG.isDebugEnabled() 
> guard, so the read lock is acquired even when debug is disabled. The guard 
> should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!
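The trade-off debated in this thread (an explicit isDebugEnabled() guard versus SLF4J's parameterized LOG.debug) comes down to when the message arguments are evaluated. A minimal JDK-only sketch, with `DebugLogSketch` and its lazy `Supplier` form standing in for the SLF4J machinery:

```java
import java.util.function.Supplier;

public class DebugLogSketch {
    static int expensiveCalls = 0;
    static final boolean DEBUG_ENABLED = false;

    // Stand-in for a toString() that takes a lock or does other real work.
    static String expensiveToString() {
        expensiveCalls++;
        return "nsInfo";
    }

    // String concatenation at the call site runs expensiveToString()
    // unconditionally, even though the message is then discarded.
    static void debugConcat(String message) {
        if (DEBUG_ENABLED) {
            System.out.println(message);
        }
    }

    // The lazy form only builds the message when debug is on; SLF4J's
    // LOG.debug("{} ...", this, nsInfo) similarly defers toString() until
    // the level check passes, so no explicit guard is needed.
    static void debugLazy(Supplier<String> message) {
        if (DEBUG_ENABLED) {
            System.out.println(message.get());
        }
    }

    public static void main(String[] args) {
        debugConcat("received: " + expensiveToString()); // evaluated anyway
        int afterConcat = expensiveCalls;                // 1
        debugLazy(() -> "received: " + expensiveToString()); // skipped
        System.out.println(afterConcat + "," + expensiveCalls); // prints: 1,1
    }
}
```

Note this sketch only shows deferred formatting; the `{}` placeholders still evaluate their argument expressions eagerly, which is why the thread's conclusion (cheap references are fine, no guard needed) holds for this particular log line.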



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885265#comment-17885265
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

KeeProMise commented on PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#issuecomment-2378696840

   > LGTM. Hi @ayushtkn @virajjasani @KeeProMise any more comments here? Thanks.
   
   @Hexiaoqiao @tomscut Hi, LGTM!




> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During datanode startup there is a debug log without a LOG.isDebugEnabled() 
> guard, so the read lock is acquired even when debug is disabled. The guard 
> should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885264#comment-17885264
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

tomscut commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1770720963


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   > HI, IMO, "if (LOG.isDebugEnabled()) {...}" is better.
   
   Thanks for your comment. I agree with @ayushtkn and @virajjasani here. 
LOG.debug already does isDebugEnabled() internally.
   https://github.com/user-attachments/assets/ed4f9b82-8dbd-4a89-80b5-1263f982c782





> Reduce lock contention at datanode startup
> --
>
> Key: HDFS-17626
> URL: https://issues.apache.org/jira/browse/HDFS-17626
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2024-09-18-20-45-56-999.png
>
>
> During the datanode startup process, there is a debug log, because there is 
> no LOG.isDebugEnabled() guard, so even if debug is not enabled, the read lock 
> will be obtained. The guard should be added here to reduce lock contention.
> !image-2024-09-18-20-45-56-999.png|width=333,height=263!
> !https://docs.corp.vipshop.com/uploader/f/4DSEukZKf6cV5VRY.png?accessToken=eyJhbGciOiJIUzI1NiIsImtpZCI6ImRlZmF1bHQiLCJ0eXAiOiJKV1QifQ.eyJleHAiOjE3MjY2NjQxNjYsImZpbGVHVUlEIjoiQWxvNE5uOU9OYko2aDJ4WCIsImlhdCI6MTcyNjY2MzU2NiwiaXNzIjoidXBsb2FkZXJfYWNjZXNzX3Jlc291cmNlIiwidXNlcklkIjo2MTYyMTQwfQ.DwDBnJ6I8vCFd14A-wsq2oLU5a0rcPoUvq49Z4aWg2A|width=334,height=133!






[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885262#comment-17885262
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

tomscut commented on code in PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#discussion_r1770720963


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
##
@@ -258,7 +258,7 @@ NamespaceInfo retrieveNamespaceInfo() throws IOException {
 while (shouldRun()) {
   try {
 nsInfo = bpNamenode.versionRequest();
-LOG.debug(this + " received versionRequest response: " + nsInfo);
+LOG.debug("{} received versionRequest response: {}", this, nsInfo);

Review Comment:
   > HI, IMO, "if (LOG.isDebugEnabled()) {...}" is better.
   
   Thanks for your comment. I agree with @ayushtkn and @virajjasani here. 
LOG.debug already does isDebugEnabled() internally.
   https://github.com/user-attachments/assets/ed4f9b82-8dbd-4a89-80b5-1263f982c782
   
   @tomscut Got it, thanks, LGTM.











[jira] [Commented] (HDFS-17626) Reduce lock contention at datanode startup

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885260#comment-17885260
 ] 

ASF GitHub Bot commented on HDFS-17626:
---

Hexiaoqiao commented on PR #7053:
URL: https://github.com/apache/hadoop/pull/7053#issuecomment-2378666988

   LGTM. Hi @ayushtkn @virajjasani @KeeProMise any more comments here? Thanks.










[jira] [Updated] (HDFS-17631) RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException

2024-09-27 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-17631:
---
Description: 
Now when EditLogInputStream.skipUntil() throws an IOException in 
RedundantEditLogInputStream.nextOp(), the stream still transitions into 
State.OK rather than State.STREAM_FAILED. 

The proper state transition should be as below:

State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL
Environment: (was: Now when EditLogInputStream.skipUntil() throw 
IOException in RedundantEditLogInputStream.nextOp(), it is still into State.OK 
rather than State.STREAM_FAILED. 

The proper state will be like blew:

State.SKIP_UNTIL -> State.STREAM_FAILED ->(try next stream)  State.SKIP_UNTIL)
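The state transition described above can be sketched as a small state machine. The names mirror RedundantEditLogInputStream, but this is a simplified model of the proposed behavior, not the actual HDFS class: a failed skipUntil() moves the machine to STREAM_FAILED so the next redundant stream is tried, instead of the stream being treated as OK.

```java
// Hedged sketch of the SKIP_UNTIL -> STREAM_FAILED -> (next stream) cycle
// described above; a simplified model, not the real RedundantEditLogInputStream.
import java.io.IOException;
import java.util.List;

public class SkipUntilStateDemo {
  enum State { SKIP_UNTIL, STREAM_FAILED, OK }

  interface Stream { void skipUntil(long txid) throws IOException; }

  /** Returns the index of the first redundant stream that skips successfully. */
  static int nextOp(List<Stream> streams, long txid) {
    State state = State.SKIP_UNTIL;
    int cur = 0;
    while (true) {
      switch (state) {
        case SKIP_UNTIL:
          try {
            streams.get(cur).skipUntil(txid);
            state = State.OK; // only reached on success
          } catch (IOException e) {
            state = State.STREAM_FAILED; // proposed fix: do not stay in OK
          }
          break;
        case STREAM_FAILED:
          cur++; // fall back to the next redundant stream
          if (cur >= streams.size()) {
            throw new RuntimeException("all redundant streams failed");
          }
          state = State.SKIP_UNTIL;
          break;
        case OK:
          return cur;
      }
    }
  }

  public static void main(String[] args) {
    Stream bad = txid -> { throw new IOException("corrupt stream"); };
    Stream good = txid -> { /* skip succeeds */ };
    System.out.println("fell back to stream " + nextOp(List.of(bad, good), 42L));
  }
}
```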

> RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when 
> EditLogInputStream.skipUntil() throw IOException
> --
>
> Key: HDFS-17631
> URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
> Now when EditLogInputStream.skipUntil() throws an IOException in 
> RedundantEditLogInputStream.nextOp(), the stream still transitions into 
> State.OK rather than State.STREAM_FAILED. 
> The proper state transition should be as below:
> State.SKIP_UNTIL -> State.STREAM_FAILED -> (try next stream) State.SKIP_UNTIL






[jira] [Commented] (HDFS-10665) Provide a way to add a new Journalnode to an existing quorum

2024-09-26 Thread Mohamed Aashif (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-10665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885225#comment-17885225
 ] 

Mohamed Aashif commented on HDFS-10665:
---

Has this issue been fixed? I could not find a fix to date.
We are in the process of replacing JN nodes in multiple deployments, and this
workaround is time-consuming.

Kindly fix this as soon as possible.

> Provide a way to add a new Journalnode to an existing quorum
> 
>
> Key: HDFS-10665
> URL: https://issues.apache.org/jira/browse/HDFS-10665
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: ha, hdfs, journal-node
>Reporter: Amit Anand
>Priority: Major
>
> In the current implementation of {{HDFS}} {{HA}} using {{QJOURNAL}}, there is 
> no way to add a new {{Journalnode(JN)}} to an existing {{JN}} quorum or to 
> reinstall a failed {{JN}} machine.
> The current process to populate {{JN}} directories is:
> * Start {{JN}} daemons on multiple machines (usually an odd number, 3 or 5)
> * Shut down the {{Namenode}}
> * Issue {{hdfs namenode -initializeSharedEdits}} - This will populate the {{JN}}
> After the {{JNs}} are populated, if a machine is reinstalled after hardware 
> failure, or a new set of machines is added to expand the {{JN}} quorum, the 
> new {{JN}} machines will not be populated by the {{NameNode}} without 
> following the process described above. 
> The current process causes downtime on a 24x7 operation cluster whenever a 
> {{JN}} needs maintenance. 
> However, one can follow the steps below to work around the issue described 
> above:
> 1. Install a new {{JN}} or reinstall an existing {{JN}} machine.
> 2. Create the required {{JN}} directory structure.
> 3. Copy the {{VERSION}} file from an existing {{JN}} to the new {{JN's}} 
> {{current}} directory.
> 4. Manually create the {{paxos}} directory under the {{JN's}} {{current}} 
> directory.
> 5. Start the {{JN}} daemon.
> 6. Add the new set of {{JNs}} to {{hdfs-site.xml}} and restart the {{NN}}.






[jira] [Commented] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884999#comment-17884999
 ] 

ASF GitHub Bot commented on HDFS-17381:
---

steveloughran commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-2376871824

   separately please, easier to track, revert etc.




> Distcp of EC files should not be limited to DFS.
> 
>
> Key: HDFS-17381
> URL: https://issues.apache.org/jira/browse/HDFS-17381
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.4.0
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Currently, EC file support in distcp is limited to DFS: the code checks 
> whether the given FS instance is a DistributedFileSystem. EC is now 
> supported in Ozone, so this limitation can be removed; any filesystem that 
> supports EC files should be allowed by implementing a few 
> interfaces/methods.






[jira] [Commented] (HDFS-17376) Distcp creates Factor 1 replication file on target if Source is EC

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884945#comment-17884945
 ] 

ASF GitHub Bot commented on HDFS-17376:
---

hadoop-yetus commented on PR #7073:
URL: https://github.com/apache/hadoop/pull/7073#issuecomment-2376513325

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  4s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  33m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 26s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 148m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7073/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7073 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9553871c7015 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 7c20682aa5ce4e575e392c7c6806e47cf2656018 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7073/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7073/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Distcp creates F

[jira] [Commented] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884905#comment-17884905
 ] 

ASF GitHub Bot commented on HDFS-17381:
---

sadanand48 commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-2376162304

   Thanks @steveloughran. HDFS-17376 is a dependency not present in the 3.4 
branch; I have raised https://github.com/apache/hadoop/pull/7073 for that 
cherry-pick. Once that goes in, I will create a backport for this PR too, or 
can we club them both into one?










[jira] [Commented] (HDFS-17376) Distcp creates Factor 1 replication file on target if Source is EC

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884902#comment-17884902
 ] 

ASF GitHub Bot commented on HDFS-17376:
---

sadanand48 opened a new pull request, #7073:
URL: https://github.com/apache/hadoop/pull/7073

   ### Description of PR
   Cherrypicking HDFS-17376 from trunk to  branch 3.4.
   
   ### How was this patch tested?
   Unit tests




> Distcp creates Factor 1 replication file on target if Source is EC
> --
>
> Key: HDFS-17376
> URL: https://issues.apache.org/jira/browse/HDFS-17376
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.3.6
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> If the source file is EC, distcp without the preserve option creates a file 
> with replication factor 1 (this is not intended). 
> This is because getReplication() always returns 1 for an EC file. Instead, 
> distcp should create the file with the default replication on the target.
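The decision described above can be sketched as follows. This is a hedged illustration, not the actual distcp code: chooseReplication() and its parameters are stand-ins for the FileStatus and FileSystem calls involved. The idea is that an EC source reports replication 1, so the copy should fall back to the target's default replication instead of propagating that 1.

```java
// Illustrative sketch (assumed names, not the real distcp implementation):
// an EC source reports getReplication() == 1, so without meaningful
// replication to preserve we use the target filesystem's default.
public class TargetReplicationDemo {
  static short chooseReplication(boolean preserveReplication,
                                 boolean sourceIsErasureCoded,
                                 short sourceReplication,
                                 short targetDefaultReplication) {
    if (preserveReplication && !sourceIsErasureCoded) {
      return sourceReplication; // ordinary file with preserve: keep source factor
    }
    // EC source (or no preserve option): use the target default so the copy
    // is not created as an under-replicated factor-1 file.
    return targetDefaultReplication;
  }

  public static void main(String[] args) {
    short r = chooseReplication(false, true, (short) 1, (short) 3);
    System.out.println("replication chosen for EC source = " + r); // 3, not 1
  }
}
```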






[jira] [Commented] (HDFS-17624) DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the storage count for excluded nodes that are not part of the selected scope.

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884836#comment-17884836
 ] 

ASF GitHub Bot commented on HDFS-17624:
---

fuchaohong commented on PR #7042:
URL: https://github.com/apache/hadoop/pull/7042#issuecomment-2375778759

   Hi @Hexiaoqiao @goiri, could you help me review this patch? Thank you.




> DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the 
> storage count for excluded nodes that are not part of the selected scope.
> --
>
> Key: HDFS-17624
> URL: https://issues.apache.org/jira/browse/HDFS-17624
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: fuchaohong
>Assignee: fuchaohong
>Priority: Major
>  Labels: pull-request-available
>
> Presently, if the chosen scope is /default/rack1 and the excluded node is 
> /default/rack2/host2, the available count will still be deducted.
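The scope check discussed above can be sketched as follows. The path handling is simplified and the method names are illustrative; the real logic lives in DFSNetworkTopology#chooseRandomWithStorageType(). The point is that only excluded nodes whose network location falls under the chosen scope should reduce the available storage count.

```java
// Hedged sketch (assumed names, simplified paths): deduct the available
// count only for excluded nodes located under the chosen topology scope.
public class ExcludedScopeDemo {
  static boolean isInScope(String scope, String nodeLocation) {
    return nodeLocation.equals(scope) || nodeLocation.startsWith(scope + "/");
  }

  static int availableCount(int total, String scope, String... excludedNodes) {
    int count = total;
    for (String excluded : excludedNodes) {
      if (isInScope(scope, excluded)) {
        count--; // the excluded node really is inside the chosen scope
      }
      // otherwise: node is outside the scope, so (per the proposed fix)
      // it must not be deducted
    }
    return count;
  }

  public static void main(String[] args) {
    // /default/rack2/host2 lies outside /default/rack1: count stays at 5
    System.out.println(availableCount(5, "/default/rack1", "/default/rack2/host2"));
  }
}
```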






[jira] [Updated] (HDFS-17624) DFSNetworkTopology#chooseRandomWithStorageType() should not decrease the storage count for excluded nodes that are not part of the selected scope.

2024-09-25 Thread fuchaohong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fuchaohong updated HDFS-17624:
--
Summary: DFSNetworkTopology#chooseRandomWithStorageType() should not 
decrease the storage count for excluded nodes that are not part of the selected 
scope.  (was: The availableCount will be deducted only if the excludedNode is 
included in the selected scope.)







[jira] [Updated] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-25 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-17381:
--
Affects Version/s: 3.4.0







[jira] [Updated] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-25 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-17381:
--
Fix Version/s: 3.5.0







[jira] [Commented] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884676#comment-17884676
 ] 

ASF GitHub Bot commented on HDFS-17381:
---

steveloughran commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-2374634133

   @sadanand48 merged to trunk; if you can do a PR on hadoop branch-3.4 then 
once Yetus is happy we can merge it there too










[jira] [Commented] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884672#comment-17884672
 ] 

ASF GitHub Bot commented on HDFS-17381:
---

steveloughran merged PR #6551:
URL: https://github.com/apache/hadoop/pull/6551










[jira] [Commented] (HDFS-17631) RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884556#comment-17884556
 ] 

ASF GitHub Bot commented on HDFS-17631:
---

LiuGuH commented on PR #7066:
URL: https://github.com/apache/hadoop/pull/7066#issuecomment-2373609756

   @Hexiaoqiao Hello, sir. Please review this PR if you have time. Thanks.




> RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when 
> EditLogInputStream.skipUntil() throw IOException
> --
>
> Key: HDFS-17631
> URL: https://issues.apache.org/jira/browse/HDFS-17631
> Project: Hadoop HDFS
>  Issue Type: Bug
> Environment: Now when EditLogInputStream.skipUntil() throw 
> IOException in RedundantEditLogInputStream.nextOp(), it is still into 
> State.OK rather than State.STREAM_FAILED. 
> The proper state will be like blew:
> State.SKIP_UNTIL -> State.STREAM_FAILED ->(try next stream)  State.SKIP_UNTIL
>Reporter: liuguanghua
>Assignee: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Commented] (HDFS-17631) RedundantEditLogInputStream.nextOp() will be State.STREAM_FAILED when EditLogInputStream.skipUntil() throw IOException

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884534#comment-17884534
 ] 

ASF GitHub Bot commented on HDFS-17631:
---

hadoop-yetus commented on PR #7066:
URL: https://github.com/apache/hadoop/pull/7066#issuecomment-2373394885

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7066/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 12 unchanged - 
0 fixed = 13 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 222m  4s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 369m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7066/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7066 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8229d926b0f3 5.15.0-119-generic #129-Ubuntu SMP Fri Aug 2 
19:25:20 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ffc70421d0eaaeb59aeb7ae82b27188b68fcc3f0 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7066/3/testReport/ |
   | Max. process+thread count | 3271 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibr

[jira] [Commented] (HDFS-17381) Distcp of EC files should not be limited to DFS.

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884422#comment-17884422
 ] 

ASF GitHub Bot commented on HDFS-17381:
---

hadoop-yetus commented on PR #6551:
URL: https://github.com/apache/hadoop/pull/6551#issuecomment-2372149333

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   8m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   8m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   8m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m  0s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/13/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 5 new + 142 unchanged - 0 fixed = 147 total (was 
142)  |
   | +1 :green_heart: |  mvnsite  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   4m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  3s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  24m 35s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 191m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6551/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6551 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 665d106f1ac4 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6b056ffc4a183f7e6d128e81cf48209298bf2303 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/ha
