[jira] [Commented] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742633#comment-17742633
 ] 

ASF GitHub Bot commented on HDFS-17082:
---

slfan1989 commented on PR #5834:
URL: https://github.com/apache/hadoop/pull/5834#issuecomment-1633451166

   @haiyang1987 Thanks for the contribution, but we need to fix checkstyle.




> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be added 
> to the documentation.
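As context for the documentation being written, here is a minimal usage sketch of the command (syntax per HDFS-15607, with the -all option from HDFS-15997; the target path is a hypothetical example):

```shell
# Provision a .Trash directory inside one snapshottable directory
# (/data/projects is a hypothetical example path).
hdfs dfsadmin -provisionSnapshotTrash /data/projects

# Or provision trash for all snapshottable directories (HDFS-15997).
hdfs dfsadmin -provisionSnapshotTrash -all
```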



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742631#comment-17742631
 ] 

ASF GitHub Bot commented on HDFS-17082:
---

haiyang1987 commented on PR #5834:
URL: https://github.com/apache/hadoop/pull/5834#issuecomment-1633444779

   Update PR, @ayushtkn @slfan1989 please help me review it again, thanks.




> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be added 
> to the documentation.






[jira] [Commented] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742630#comment-17742630
 ] 

ASF GitHub Bot commented on HDFS-17082:
---

haiyang1987 commented on code in PR #5834:
URL: https://github.com/apache/hadoop/pull/5834#discussion_r1261901247


##
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md:
##
@@ -722,8 +724,6 @@ Usage: `hdfs debug verifyEC -file <file>`
 |:--

> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be added 
> to the documentation.






[jira] [Commented] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742619#comment-17742619
 ] 

ASF GitHub Bot commented on HDFS-17083:
---

slfan1989 commented on PR #5836:
URL: https://github.com/apache/hadoop/pull/5836#issuecomment-1633417646

   LGTM.




> Support getErasureCodeCodecs API in WebHDFS
> ---
>
> Key: HDFS-17083
> URL: https://issues.apache.org/jira/browse/HDFS-17083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-07-12-22-52-15-954.png
>
>
> WebHDFS should support getErasureCodeCodecs:
> !image-2023-07-12-22-52-15-954.png|width=799,height=210!






[jira] [Commented] (HDFS-17084) Utilize StringTable for numerable XAttributes

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742595#comment-17742595
 ] 

ASF GitHub Bot commented on HDFS-17084:
---

hadoop-yetus commented on PR #5835:
URL: https://github.com/apache/hadoop/pull/5835#issuecomment-1633348418

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  17m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   4m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   8m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  42m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  cc  |  17m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  cc  |  17m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 41s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5835/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 9 new + 547 unchanged - 0 fixed = 556 total (was 
547)  |
   | +1 :green_heart: |  mvnsite  |   4m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   9m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  18m 47s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5835/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 37s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 244m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5835/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 531m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestHarFileSystem |
   |   | hadoop.fs.TestFilterFileSystem |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 

[jira] [Commented] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742589#comment-17742589
 ] 

ASF GitHub Bot commented on HDFS-17083:
---

hadoop-yetus commented on PR #5836:
URL: https://github.com/apache/hadoop/pull/5836#issuecomment-161630

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   5m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   5m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   5m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 244m  8s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 420m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5836/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5836 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 28fa8c1086f9 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5df3b1c8110bc7a5de355fbf9aef6911429bc422 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5836/1/testReport/ |
   | Max. process+thread count | 2856 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Commented] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider striped block

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742501#comment-17742501
 ] 

ASF GitHub Bot commented on HDFS-17081:
---

hadoop-yetus commented on PR #5833:
URL: https://github.com/apache/hadoop/pull/5833#issuecomment-1632950287

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 234m  5s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 393m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5833/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5833 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0e7e6fe3c900 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ca9aee8198d6f013b423cd7f8a4b015880e8bcff |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5833/1/testReport/ |
   | Max. process+thread count | 2473 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5833/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | 

[jira] [Commented] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742494#comment-17742494
 ] 

ASF GitHub Bot commented on HDFS-17082:
---

hadoop-yetus commented on PR #5834:
URL: https://github.com/apache/hadoop/pull/5834#issuecomment-1632929487

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5834/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 121 unchanged 
- 0 fixed = 125 total (was 121)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 214m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5834/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 355m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5834/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5834 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 76717f7a6e1f 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b87ab046cb3b14000ab9b92b3d7e669c8f5f923b |
   | Default Java | Private 

[jira] [Updated] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS

2023-07-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17083:
--
Labels: pull-request-available  (was: )

> Support getErasureCodeCodecs API in WebHDFS
> ---
>
> Key: HDFS-17083
> URL: https://issues.apache.org/jira/browse/HDFS-17083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-07-12-22-52-15-954.png
>
>
> WebHDFS should support getErasureCodeCodecs:
> !image-2023-07-12-22-52-15-954.png|width=799,height=210!
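For reference, a hedged sketch of what the new WebHDFS call might look like; the op name GETECCODECS is an assumption by analogy with existing WebHDFS operations such as GETECPOLICY, not something confirmed in this thread:

```shell
# Hypothetical invocation; <namenode> and the op name are assumptions.
curl -i "http://<namenode>:9870/webhdfs/v1/?op=GETECCODECS"
```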






[jira] [Commented] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742484#comment-17742484
 ] 

ASF GitHub Bot commented on HDFS-17083:
---

zhtttylz opened a new pull request, #5836:
URL: https://github.com/apache/hadoop/pull/5836

   JIRA: HDFS-17083. Support getECPolices API in WebHDFS




> Support getErasureCodeCodecs API in WebHDFS
> ---
>
> Key: HDFS-17083
> URL: https://issues.apache.org/jira/browse/HDFS-17083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
> Attachments: image-2023-07-12-22-52-15-954.png
>
>
> WebHDFS should support getErasureCodeCodecs:
> !image-2023-07-12-22-52-15-954.png|width=799,height=210!






[jira] [Updated] (HDFS-17072) DFSAdmin: add a triggerDirectoryScanner command

2023-07-12 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-17072:

Summary: DFSAdmin: add a triggerDirectoryScanner command   (was: DFSAdmin: 
add a triggerVolumeScanner command )

> DFSAdmin: add a triggerDirectoryScanner command 
> 
>
> Key: HDFS-17072
> URL: https://issues.apache.org/jira/browse/HDFS-17072
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Major
>  Labels: pull-request-available
>
> Like -triggerBlockReport, I think we should add a command named 
> -triggerVolumeScanner to manually trigger an immediate volume scanner run.
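By analogy, the existing trigger command and the proposed one might look like this (the new command is only a proposal in this ticket and does not exist yet; it is shown with the renamed -triggerDirectoryScanner from the updated summary):

```shell
# Existing: ask a datanode to generate and send a block report immediately.
hdfs dfsadmin -triggerBlockReport <datanode_host:ipc_port>

# Proposed by this ticket (hypothetical; not yet implemented):
hdfs dfsadmin -triggerDirectoryScanner <datanode_host:ipc_port>
```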






[jira] [Commented] (HDFS-17068) Datanode should record last directory scan time.

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742467#comment-17742467
 ] 

ASF GitHub Bot commented on HDFS-17068:
---

ayushtkn commented on code in PR #5809:
URL: https://github.com/apache/hadoop/pull/5809#discussion_r1261357967


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java:
##
@@ -1304,6 +1305,23 @@ public void testLocalReplicaUpdateWithReplica() throws 
Exception {
 assertEquals(realBlkFile, localReplica.getBlockFile());
   }
 
+  @Test(timeout = 60000)
+  public void testLastDirScannerFinishTimeIsUpdated() throws Exception {
+Configuration conf = getConfiguration();
+conf.setLong(DFSConfigKeys.DFS_DATANODE_DIRECTORYSCAN_INTERVAL_KEY, 3L);
+cluster = new MiniDFSCluster.Builder(conf).build();
+try {
+  cluster.waitActive();
+  bpid = cluster.getNamesystem().getBlockPoolId();
+  fds = DataNodeTestUtils.getFSDataset(cluster.getDataNodes().get(0));
+  assertEquals(fds.getLastDirScannerFinishTime(), 0L);
+  Thread.sleep(4000);
+  assertNotEquals(0L, fds.getLastDirScannerFinishTime());
+} finally {
+  cluster.shutdown();
+}
+  }
+

Review Comment:
   can we rather than doing this sleep thing, have a test like this
   ```
  @Test(timeout = 60000)
 public void testLastDirScannerFinishTimeIsUpdated() throws Exception {
   Configuration conf = getConfiguration();
   cluster = new MiniDFSCluster.Builder(conf).build();
   try {
 cluster.waitActive();
 bpid = cluster.getNamesystem().getBlockPoolId();
 final DataNode dn = cluster.getDataNodes().get(0);
 fds = DataNodeTestUtils.getFSDataset(dn);
 long lastDirScannerFinishTime = fds.getLastDirScannerFinishTime();
 dn.getDirectoryScanner().run();
        assertNotEquals(lastDirScannerFinishTime, fds.getLastDirScannerFinishTime());
   } finally {
 cluster.shutdown();
   }
 }
   ```





> Datanode should record last directory scan time.
> 
>
> Key: HDFS-17068
> URL: https://issues.apache.org/jira/browse/HDFS-17068
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Minor
>  Labels: pull-request-available
>
> I think it is useful for us to record the last directory scan time for each 
> datanode. 






[jira] [Commented] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742460#comment-17742460
 ] 

ASF GitHub Bot commented on HDFS-17082:
---

ayushtkn commented on code in PR #5834:
URL: https://github.com/apache/hadoop/pull/5834#discussion_r1261335811


##
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md:
##
@@ -722,8 +724,6 @@ Usage: `hdfs debug verifyEC -file <file>`
 |:--

> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be added 
> to the documentation.






[jira] [Updated] (HDFS-17084) Utilize StringTable for numerable XAttributes

2023-07-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17084:
--
Labels: pull-request-available  (was: )

> Utilize StringTable for numerable XAttributes
> -
>
> Key: HDFS-17084
> URL: https://issues.apache.org/jira/browse/HDFS-17084
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
>
> Currently only the name of an XAttr uses SerialNumber's StringTable; the 
> values themselves are stored as raw "byte[]".
> If the XAttr values are numerable, StringTable could be used for efficiency.
> This ticket is to let users create numerable attributes.
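A toy illustration of the idea (this is not Hadoop's actual SerialNumberManager or StringTable; the class and method names are invented for the sketch): when XAttr values come from a small, enumerable set, each distinct value can be stored once and referenced by a small integer id instead of keeping a per-inode byte[] copy.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical value-interning table: dedupes enumerable XAttr values.
public class XAttrValueTable {
    private final Map<String, Integer> ids = new HashMap<>();
    private final List<String> values = new ArrayList<>();

    // Intern a value, returning its stable integer id.
    public int intern(String value) {
        return ids.computeIfAbsent(value, v -> {
            values.add(v);
            return values.size() - 1;
        });
    }

    // Resolve an id back to the shared value.
    public String lookup(int id) {
        return values.get(id);
    }

    public static void main(String[] args) {
        XAttrValueTable table = new XAttrValueTable();
        // Two inodes with the same "hot" value share one table entry.
        System.out.println(table.intern("hot"));   // prints 0
        System.out.println(table.intern("cold"));  // prints 1
        System.out.println(table.intern("hot"));   // prints 0 again
        System.out.println(table.lookup(1));       // prints cold
    }
}
```

The saving comes entirely from deduplication, which is why the ticket restricts the optimization to "numerable" (enumerable) values.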






[jira] [Commented] (HDFS-17084) Utilize StringTable for numerable XAttributes

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742457#comment-17742457
 ] 

ASF GitHub Bot commented on HDFS-17084:
---

symious opened a new pull request, #5835:
URL: https://github.com/apache/hadoop/pull/5835

   
   
   ### Description of PR
   
   Currently only the name of an XAttr uses SerialNumber's StringTable; the 
values themselves are stored as raw "byte[]".
   
   If the XAttr values are numerable, StringTable could be used for efficiency.
   
   This ticket is to let users create numerable attributes.
   
   ### How was this patch tested?
   
   unit test.
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Utilize StringTable for numerable XAttributes
> -
>
> Key: HDFS-17084
> URL: https://issues.apache.org/jira/browse/HDFS-17084
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>
> Currently only the name of an XAttr uses SerialNumber's StringTable; the 
> values themselves are stored as raw "byte[]".
> If the XAttr values are numerable, StringTable could be used for efficiency.
> This ticket is to let users create numerable attributes.






[jira] [Updated] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS

2023-07-12 Thread Hualong Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hualong Zhang updated HDFS-17083:
-
Description: 
WebHDFS should support getErasureCodeCodecs:
!image-2023-07-12-22-52-15-954.png|width=799,height=210!

  was:
WebHDFS should support getErasureCodeCodecs:
!image-2023-07-12-22-52-15-954.png|width=643,height=169!


> Support getErasureCodeCodecs API in WebHDFS
> ---
>
> Key: HDFS-17083
> URL: https://issues.apache.org/jira/browse/HDFS-17083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
> Attachments: image-2023-07-12-22-52-15-954.png
>
>
> WebHDFS should support getErasureCodeCodecs:
> !image-2023-07-12-22-52-15-954.png|width=799,height=210!
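A hedged sketch of how the new endpoint might be exercised once implemented (the host, port, and the operation name `GETECCODECS` are assumptions based on how other erasure-coding RPCs are exposed in WebHDFS, not confirmed details of this patch):

```shell
# Query the available erasure coding codecs through WebHDFS.
curl -i "http://<namenode>:9870/webhdfs/v1/?op=GETECCODECS"
# A successful response would carry a JSON map of codec name -> coder list.
```

The `<namenode>` placeholder must be replaced with a real NameNode address, so this is a usage fragment rather than a runnable test.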






[jira] [Created] (HDFS-17084) Utilize StringTable for numerable XAttributes

2023-07-12 Thread Janus Chow (Jira)
Janus Chow created HDFS-17084:
-

 Summary: Utilize StringTable for numerable XAttributes
 Key: HDFS-17084
 URL: https://issues.apache.org/jira/browse/HDFS-17084
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Janus Chow
Assignee: Janus Chow


Currently only the name of an XAttr uses SerialNumber's StringTable; the 
values themselves are stored as raw "byte[]".

If the XAttr values are numerable, StringTable could be used for efficiency.

This ticket is to let users create numerable attributes.






[jira] [Created] (HDFS-17083) Support getErasureCodeCodecs API in WebHDFS

2023-07-12 Thread Hualong Zhang (Jira)
Hualong Zhang created HDFS-17083:


 Summary: Support getErasureCodeCodecs API in WebHDFS
 Key: HDFS-17083
 URL: https://issues.apache.org/jira/browse/HDFS-17083
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Affects Versions: 3.4.0
Reporter: Hualong Zhang
Assignee: Hualong Zhang
 Attachments: image-2023-07-12-22-52-15-954.png

WebHDFS should support getErasureCodeCodecs:
!image-2023-07-12-22-52-15-954.png|width=643,height=169!






[jira] [Commented] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742408#comment-17742408
 ] 

ASF GitHub Bot commented on HDFS-17082:
---

slfan1989 commented on PR #5834:
URL: https://github.com/apache/hadoop/pull/5834#issuecomment-1632453448

   LGTM.




> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be 
> added to the documentation.






[jira] [Updated] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17082:
--
Labels: pull-request-available  (was: )

> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be 
> added to the documentation.






[jira] [Commented] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742384#comment-17742384
 ] 

ASF GitHub Bot commented on HDFS-17082:
---

haiyang1987 opened a new pull request, #5834:
URL: https://github.com/apache/hadoop/pull/5834

   ### Description of PR
   https://issues.apache.org/jira/browse/HDFS-17082
   
   [HDFS-15607](https://issues.apache.org/jira/browse/HDFS-15607) and 
[HDFS-15997](https://issues.apache.org/jira/browse/HDFS-15997) introduced 
provisionSnapshotTrash; it should be added to the documentation.




> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be 
> added to the documentation.






[jira] [Updated] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17082:
--
Description: HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash 
should add it to the document.  (was: Add documentation for 
provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md, )

> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> HDFS-15607 and HDFS-15997 introduced provisionSnapshotTrash; it should be 
> added to the documentation.






[jira] [Updated] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17082:
--
Description: Add documentation for provisionSnapshotTrash command to 
HDFSCommands.md and HdfsSnapshots.md, 

> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> Add documentation for provisionSnapshotTrash command to HDFSCommands.md and 
> HdfsSnapshots.md, 






[jira] [Assigned] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu reassigned HDFS-17082:
-

Assignee: Haiyang Hu

> Add documentation for provisionSnapshotTrash command to HDFSCommands.md  and 
> HdfsSnapshots.md
> -
>
> Key: HDFS-17082
> URL: https://issues.apache.org/jira/browse/HDFS-17082
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>







[jira] [Created] (HDFS-17082) Add documentation for provisionSnapshotTrash command to HDFSCommands.md and HdfsSnapshots.md

2023-07-12 Thread Haiyang Hu (Jira)
Haiyang Hu created HDFS-17082:
-

 Summary: Add documentation for provisionSnapshotTrash command to 
HDFSCommands.md  and HdfsSnapshots.md
 Key: HDFS-17082
 URL: https://issues.apache.org/jira/browse/HDFS-17082
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haiyang Hu









[jira] [Commented] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider striped block

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742379#comment-17742379
 ] 

ASF GitHub Bot commented on HDFS-17081:
---

haiyang1987 opened a new pull request, #5833:
URL: https://github.com/apache/hadoop/pull/5833

   ### Description of PR
   
   https://issues.apache.org/jira/browse/HDFS-17081
   
   When appending to an EC file, the check that a block is replicated to at 
least the minimum replication needs to consider striped blocks.
   
   Currently only the minimum replication of replicated blocks is considered; 
the code is as follows:
   
   ```java
   /**
    * Check if a block is replicated to at least the minimum replication.
    */
   public boolean isSufficientlyReplicated(BlockInfo b) {
     // Compare against the lesser of the minReplication and number of live DNs.
     final int liveReplicas = countNodes(b).liveReplicas();
     if (liveReplicas >= minReplication) {
       return true;
     }
     // getNumLiveDataNodes() is very expensive and we minimize its use by
     // comparing with minReplication first.
     return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
   }
   ```




> Append ec file check if a block is replicated to at least the minimum 
> replication need consider striped block
> -
>
> Key: HDFS-17081
> URL: https://issues.apache.org/jira/browse/HDFS-17081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> When appending to an EC file, the check that a block is replicated to at 
> least the minimum replication needs to consider striped blocks.
> Currently only the minimum replication of replicated blocks is considered; 
> the code is as follows:
> {code:java}
> /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int liveReplicas = countNodes(b).liveReplicas();
> if (liveReplicas >= minReplication) {
>   return true;
> }
> // getNumLiveDataNodes() is very expensive and we minimize its use by
> // comparing with minReplication first.
> return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
>   }
> {code}
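The quoted method's replica threshold is not meaningful for striped blocks: an RS(6,3) block group needs at least six live internal blocks before an append is safe, not `minReplication` replicas. A self-contained toy sketch of the striped-aware rule the issue asks for (these are not Hadoop's actual classes; `dataBlockNum` stands in for something like `BlockInfoStriped`'s real data-block count):

```java
// Toy model showing why the replica-only check misjudges EC blocks.
public class MinReplicationSketch {
    static final int MIN_REPLICATION = 1;

    // Hypothetical striped-aware variant of isSufficientlyReplicated().
    static boolean isSufficientlyReplicated(boolean striped, int dataBlockNum,
                                            int liveReplicas, int liveDataNodes) {
        // For EC blocks the threshold is the number of data units in the
        // schema (e.g. 6 for RS-6-3); for replicated blocks it stays at
        // minReplication.
        final int minRequired = striped ? dataBlockNum : MIN_REPLICATION;
        if (liveReplicas >= minRequired) {
            return true;
        }
        // Fall back to comparing with the live-datanode count, as the
        // original replica-only check does.
        return liveReplicas >= liveDataNodes;
    }

    public static void main(String[] args) {
        // Replicated block: one live replica satisfies minReplication = 1.
        System.out.println(isSufficientlyReplicated(false, 0, 1, 9)); // true
        // RS-6-3 striped block with only 5 live internal blocks on a 9-DN
        // cluster is not appendable; the replica-only rule would say it is.
        System.out.println(isSufficientlyReplicated(true, 6, 5, 9));  // false
    }
}
```

The second case is the bug scenario: `liveReplicas (5) >= minReplication (1)` holds, so the unmodified check would wrongly allow the append.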






[jira] [Updated] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider striped block

2023-07-12 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17081:
--
Labels: pull-request-available  (was: )

> Append ec file check if a block is replicated to at least the minimum 
> replication need consider striped block
> -
>
> Key: HDFS-17081
> URL: https://issues.apache.org/jira/browse/HDFS-17081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>
> When appending to an EC file, the check that a block is replicated to at 
> least the minimum replication needs to consider striped blocks.
> Currently only the minimum replication of replicated blocks is considered; 
> the code is as follows:
> {code:java}
> /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int liveReplicas = countNodes(b).liveReplicas();
> if (liveReplicas >= minReplication) {
>   return true;
> }
> // getNumLiveDataNodes() is very expensive and we minimize its use by
> // comparing with minReplication first.
> return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
>   }
> {code}






[jira] [Updated] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider striped block

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17081:
--
Summary: Append ec file check if a block is replicated to at least the 
minimum replication need consider striped block  (was: Append ec file check if 
a block is replicated to at least the minimum replication need consider ec 
block)

> Append ec file check if a block is replicated to at least the minimum 
> replication need consider striped block
> -
>
> Key: HDFS-17081
> URL: https://issues.apache.org/jira/browse/HDFS-17081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> When appending to an EC file, the check that a block is replicated to at 
> least the minimum replication needs to consider striped blocks.
> Currently only the minimum replication of replicated blocks is considered; 
> the code is as follows:
> {code:java}
> /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int liveReplicas = countNodes(b).liveReplicas();
> if (liveReplicas >= minReplication) {
>   return true;
> }
> // getNumLiveDataNodes() is very expensive and we minimize its use by
> // comparing with minReplication first.
> return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
>   }
> {code}






[jira] [Commented] (HDFS-17068) Datanode should record last directory scan time.

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742345#comment-17742345
 ] 

ASF GitHub Bot commented on HDFS-17068:
---

hadoop-yetus commented on PR #5809:
URL: https://github.com/apache/hadoop/pull/5809#issuecomment-1632207612

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 200 unchanged - 1 
fixed = 200 total (was 201)  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 216m 42s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 367m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5809/22/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5809 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9a563291821c 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 
13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2c497595ddac53325a67ed8db06e40085ea350d6 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5809/22/testReport/ |
   | Max. process+thread count | 3122 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5809/22/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[jira] [Updated] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider ec block

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17081:
--
Description: 
Append ec file check if a block is replicated to at least the minimum 
replication need consider ec block.

currently only the minimum replication of the replica is considered, the code 
is as follows:


{code:java}
/**
   * Check if a block is replicated to at least the minimum replication.
   */
  public boolean isSufficientlyReplicated(BlockInfo b) {
// Compare against the lesser of the minReplication and number of live DNs.
final int liveReplicas = countNodes(b).liveReplicas();
if (liveReplicas >= minReplication) {
  return true;
}
// getNumLiveDataNodes() is very expensive and we minimize its use by
// comparing with minReplication first.
return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
  }
{code}



  was:

/**
   * Check if a block is replicated to at least the minimum replication.
   */
  public boolean isSufficientlyReplicated(BlockInfo b) {
// Compare against the lesser of the minReplication and number of live DNs.
final int liveReplicas = countNodes(b).liveReplicas();
if (liveReplicas >= minReplication) {
  return true;
}
// getNumLiveDataNodes() is very expensive and we minimize its use by
// comparing with minReplication first.
return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
  }



> Append ec file check if a block is replicated to at least the minimum 
> replication need consider ec block
> 
>
> Key: HDFS-17081
> URL: https://issues.apache.org/jira/browse/HDFS-17081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> When appending to an EC file, the check that a block is replicated to at 
> least the minimum replication needs to consider striped blocks.
> Currently only the minimum replication of replicated blocks is considered; 
> the code is as follows:
> {code:java}
> /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int liveReplicas = countNodes(b).liveReplicas();
> if (liveReplicas >= minReplication) {
>   return true;
> }
> // getNumLiveDataNodes() is very expensive and we minimize its use by
> // comparing with minReplication first.
> return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
>   }
> {code}






[jira] [Updated] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider ec block

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17081:
--
Description: 
/**
   * Check if a block is replicated to at least the minimum replication.
   */
  public boolean isSufficientlyReplicated(BlockInfo b) {
// Compare against the lesser of the minReplication and number of live DNs.
final int liveReplicas = countNodes(b).liveReplicas();
if (liveReplicas >= minReplication) {
  return true;
}
// getNumLiveDataNodes() is very expensive and we minimize its use by
// comparing with minReplication first.
return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
  }


> Append ec file check if a block is replicated to at least the minimum 
> replication need consider ec block
> 
>
> Key: HDFS-17081
> URL: https://issues.apache.org/jira/browse/HDFS-17081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int liveReplicas = countNodes(b).liveReplicas();
> if (liveReplicas >= minReplication) {
>   return true;
> }
> // getNumLiveDataNodes() is very expensive and we minimize its use by
> // comparing with minReplication first.
> return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
>   }






[jira] [Updated] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider ec block

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-17081:
--
Description: 

/**
   * Check if a block is replicated to at least the minimum replication.
   */
  public boolean isSufficientlyReplicated(BlockInfo b) {
// Compare against the lesser of the minReplication and number of live DNs.
final int liveReplicas = countNodes(b).liveReplicas();
if (liveReplicas >= minReplication) {
  return true;
}
// getNumLiveDataNodes() is very expensive and we minimize its use by
// comparing with minReplication first.
return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
  }


  was:
/**
   * Check if a block is replicated to at least the minimum replication.
   */
  public boolean isSufficientlyReplicated(BlockInfo b) {
// Compare against the lesser of the minReplication and number of live DNs.
final int liveReplicas = countNodes(b).liveReplicas();
if (liveReplicas >= minReplication) {
  return true;
}
// getNumLiveDataNodes() is very expensive and we minimize its use by
// comparing with minReplication first.
return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
  }



> Append ec file check if a block is replicated to at least the minimum 
> replication need consider ec block
> 
>
> Key: HDFS-17081
> URL: https://issues.apache.org/jira/browse/HDFS-17081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>
> /**
>* Check if a block is replicated to at least the minimum replication.
>*/
>   public boolean isSufficientlyReplicated(BlockInfo b) {
> // Compare against the lesser of the minReplication and number of live 
> DNs.
> final int liveReplicas = countNodes(b).liveReplicas();
> if (liveReplicas >= minReplication) {
>   return true;
> }
> // getNumLiveDataNodes() is very expensive and we minimize its use by
> // comparing with minReplication first.
> return liveReplicas >= getDatanodeManager().getNumLiveDataNodes();
>   }






[jira] [Assigned] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider ec block

2023-07-12 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu reassigned HDFS-17081:
-

Assignee: Haiyang Hu

> Append ec file check if a block is replicated to at least the minimum 
> replication need consider ec block
> 
>
> Key: HDFS-17081
> URL: https://issues.apache.org/jira/browse/HDFS-17081
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>







[jira] [Created] (HDFS-17081) Append ec file check if a block is replicated to at least the minimum replication need consider ec block

2023-07-12 Thread Haiyang Hu (Jira)
Haiyang Hu created HDFS-17081:
-

 Summary: When appending to an EC file, the check that a block is replicated to at least the minimum replication needs to consider EC blocks
 Key: HDFS-17081
 URL: https://issues.apache.org/jira/browse/HDFS-17081
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haiyang Hu









[jira] [Commented] (HDFS-17074) Remove incorrect comment in TestRedudantBlocks#setup

2023-07-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742321#comment-17742321
 ] 

ASF GitHub Bot commented on HDFS-17074:
---

hfutatzhanghb commented on PR #5822:
URL: https://github.com/apache/hadoop/pull/5822#issuecomment-1632098271

   @ayushtkn @zhangshuyan0 Hi, sir. Could you please help me review this simple 
modification when you have free time? Thanks a lot.




> Remove incorrect comment in TestRedudantBlocks#setup
> 
>
> Key: HDFS-17074
> URL: https://issues.apache.org/jira/browse/HDFS-17074
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.4.0
>Reporter: farmmamba
>Assignee: farmmamba
>Priority: Trivial
>  Labels: pull-request-available
>
> In TestRedudantBlocks#setup(), the comment below is incorrect.
> {code:java}
> // disable block recovery 
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);{code}
> We should delete this comment.
> The correct usage is in TestAddOverReplicatedStripedBlocks#setup()
> {code:java}
> // disable block recovery
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REPLICATION_MAX_STREAMS_KEY, 0);
> conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
> conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1); {code}
>  






[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-12 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I run it in pseudo-distributed mode.

core-site.xml is as follows.
{code:java}
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/Mutil_Component/tmp</value>
  </property>
</configuration>
{code}
hdfs-site.xml is as follows.
{code:java}
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>128k</value>
  </property>
</configuration>
{code}
Then I format the namenode and start HDFS.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, I use dfs to put a file, and I get a message saying that 128k is less 
than the 1M minimum.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But the documentation for dfs.blocksize in hdfs-default.xml says that it can be 
set to 128k and other suffixed values.
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, is there an issue with the documentation here? Or should it warn users to 
set this configuration to a value larger than 1M?
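For reference, the value parser and the namenode limit are two separate checks: any suffixed value (such as 128k) is accepted by the configuration parser, but file creation additionally enforces dfs.namenode.fs-limits.min-block-size, which the error message above shows defaulting to 1048576 bytes. A sketch of an hdfs-site.xml fragment that passes both checks (the 1m value is illustrative; the commented-out alternative of lowering the floor is possible but generally not recommended):

```xml
<configuration>
  <!-- Keep dfs.blocksize at or above the namenode's enforced floor,
       dfs.namenode.fs-limits.min-block-size (default 1048576 bytes = 1m). -->
  <property>
    <name>dfs.blocksize</name>
    <value>1m</value>
  </property>
  <!-- Alternative (illustrative only): lower the floor to allow 128k blocks.
  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>131072</value>
  </property>
  -->
</configuration>
```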

 

Additionally, I start YARN and run the bundled MapReduce example job.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-yarn.sh 
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep 
input output 'dfs[a-z.]+'{code}

 And the shell throws an exception like the one below.
{code:java}
2023-07-12 15:12:29,964 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
2023-07-12 15:12:30,430 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
2023-07-12 15:12:30,542 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2690)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1567)
        at org.apache.hadoop.ipc.Client.call(Client.java:1513)
        at org.apache.hadoop.ipc.Client.call(Client.java:1410)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
        at 

[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-12 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I run it in pseudo-distributed mode.

core-site.xml is as follows.
{code:java}
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/Mutil_Component/tmp</value>
  </property>
</configuration>
{code}
hdfs-site.xml is as follows.
{code:java}
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>128k</value>
  </property>
</configuration>
{code}
Then I format the namenode and start HDFS.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, I use dfs to put a file, and I get a message saying that 128k is less 
than the 1M minimum.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But the documentation for dfs.blocksize in hdfs-default.xml says that it can be 
set to 128k and other suffixed values.
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, is there an issue with the documentation here? Or should it warn users to 
set this configuration to a value larger than 1M?

 

Additionally, 
 


> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>
> My Hadoop version is 3.3.6, and I run it in pseudo-distributed mode.
> core-site.xml like below.
> {code:java}
>