[GitHub] [hadoop] hadoop-yetus commented on pull request #4963: YARN-11326. [Federation] Add RM FederationStateStoreService Metrics.

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #4963:
URL: https://github.com/apache/hadoop/pull/4963#issuecomment-1512435547

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 39s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 25s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 103m 37s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 227m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/37/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4963 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 27fe0941a89e 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4ef0b90d3552fb7a90bdf3a02609e3209954b271 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/37/testReport/ |
   | Max. process+thread count | 855 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4963/37/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
  

[GitHub] [hadoop] hadoop-yetus commented on pull request #5551: YARN-11378. [Federation] Support checkForDecommissioningNodes、refreshClusterMaxPriority API's for Federation.

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5551:
URL: https://github.com/apache/hadoop/pull/5551#issuecomment-1512420946

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  13m 21s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   8m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  cc  |   9m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  cc  |   8m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 37s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 8 unchanged - 
1 fixed = 15 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 10s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 41s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 45s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5551 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux 4092f6e7095e 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 47208be556240db7f5aac0fd3c22126d698475c6 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5551: YARN-11378. [Federation] Support checkForDecommissioningNodes、refreshClusterMaxPriority API's for Federation.

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5551:
URL: https://github.com/apache/hadoop/pull/5551#issuecomment-1512412414

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 51s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 36s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   8m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  cc  |   9m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  cc  |   8m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 36s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 7 new + 8 unchanged - 
1 fixed = 15 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 42s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 45s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 14s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5551 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux bf835188ca01 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 47208be556240db7f5aac0fd3c22126d698475c6 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5551: YARN-11378. [Federation] Support checkForDecommissioningNodes、refreshClusterMaxPriority API's for Federation.

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5551:
URL: https://github.com/apache/hadoop/pull/5551#issuecomment-1512407910

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  16m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   8m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  cc  |   9m  6s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  cc  |   8m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 40s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 4 new + 8 unchanged - 
1 fixed = 12 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 42s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 45s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 182m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5551/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5551 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux e65c98c2f46c 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b07ac001cc448d9e5b61c0414300265e04903ec3 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 

[GitHub] [hadoop] smarthanwang opened a new pull request, #5564: HDFS-16985. delete local block file when FileNotFoundException occurred may lead to missing block.

2023-04-17 Thread via GitHub


smarthanwang opened a new pull request, #5564:
URL: https://github.com/apache/hadoop/pull/5564

   
   ### Description of PR
   see https://issues.apache.org/jira/browse/HDFS-16985
   
   ### How was this patch tested?
   no unit tests needed
   
   ### For code changes:
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5537: YARN-11438. [Federation] ZookeeperFederationStateStore Support Version.

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5537:
URL: https://github.com/apache/hadoop/pull/5537#issuecomment-1512374432

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m  0s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  4s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 122m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5537/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5537 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 238ed8ad2d06 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fcc88cb56426c25b2b861e34fe8513e8ed312e25 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5537/6/testReport/ |
   | Max. process+thread count | 590 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5537/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Commented] (HADOOP-18671) Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713340#comment-17713340
 ] 

ASF GitHub Bot commented on HADOOP-18671:
-

taklwu commented on code in PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#discussion_r1169365930


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java:
##
@@ -137,4 +142,30 @@ public void testRenameNonExistentPath() throws Exception {
 () -> super.testRenameNonExistentPath());
 
   }
+
+  @Test
+  public void testFileSystemCapabilities() throws Exception {

Review Comment:
   yeah, that's why I chose to use a `default` implementation when declaring 
the interface: any filesystem that doesn't override it will throw 
UnsupportedOperationException.
   
   but we should still change the expectations for those filesystems; they 
don't extend or implement these interfaces for now. (we don't know whether 
anyone will want to build a new feature on top in the future, but that's 
very unlikely for s3a and abfs)
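
   For illustration, a minimal sketch of that `default` pattern; the 
interface and method names here are assumed from this discussion rather 
than copied from the PR:

import java.io.IOException;
import org.apache.hadoop.fs.Path;

public interface LeaseRecoverable {

  /**
   * Attempt to recover the lease on the given file. A filesystem that
   * does not override this default advertises no lease recovery at all.
   */
  default boolean recoverLease(Path file) throws IOException {
    throw new UnsupportedOperationException(
        getClass().getName() + " does not support recoverLease");
  }

  /** Same pattern for the closed-file probe. */
  default boolean isFileClosed(Path file) throws IOException {
    throw new UnsupportedOperationException(
        getClass().getName() + " does not support isFileClosed");
  }
}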



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java:
##
@@ -105,5 +108,27 @@ public void testFileSystemCapabilities() throws Throwable {
 ETAGS_PRESERVED_IN_RENAME, etagsAcrossRename,
 FS_ACLS, acls, fs)
 .isEqualTo(acls);
+
+final boolean leaseRecovery = fs.hasPathCapability(p, LEASE_RECOVERABLE);

Review Comment:
   so, if you prefer me to remove them, that would trim the PR a bit lol, 
but let me wait for your reply first.



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java:
##
@@ -1633,6 +1634,53 @@ public DatanodeInfo[] getDataNodeStats(final DatanodeReportType type)
    * @see org.apache.hadoop.hdfs.protocol.ClientProtocol#setSafeMode(
    *      HdfsConstants.SafeModeAction,boolean)
    */
+  @Override
+  public boolean setSafeMode(SafeModeAction action)
+      throws IOException {
+    return setSafeMode(action, false);
+  }
+
+  /**
+   * Enter, leave or get safe mode.
+   *
+   * @param action
+   *          One of SafeModeAction.ENTER, SafeModeAction.LEAVE and
+   *          SafeModeAction.GET
+   * @param isChecked
+   *          If true check only for Active NNs status, else check first NN's
+   *          status
+   */
+  @Override
+  public boolean setSafeMode(SafeModeAction action, boolean isChecked)
+      throws IOException {
+    return dfs.setSafeMode(convertToClientProtocolSafeModeAction(action),
+        isChecked);
+  }
+
+  private HdfsConstants.SafeModeAction convertToClientProtocolSafeModeAction(

Review Comment:
   my understanding from reading the code is that webhdfs does not support 
entering or leaving safe mode, so we can keep this function here, but 
making it static would be good.
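
   The hunk above is cut off at the converter's signature; a hedged sketch 
of the static form this comment asks for, assuming the new FileSystem-level 
SafeModeAction enum mirrors the HDFS constants (the exact enum values are 
an assumption):

  // Map the FileSystem-level enum onto the HDFS client protocol enum;
  // static, since it touches no instance state.
  private static HdfsConstants.SafeModeAction convertToClientProtocolSafeModeAction(
      SafeModeAction action) {
    switch (action) {
    case ENTER:
      return HdfsConstants.SafeModeAction.SAFEMODE_ENTER;
    case LEAVE:
      return HdfsConstants.SafeModeAction.SAFEMODE_LEAVE;
    case GET:
      return HdfsConstants.SafeModeAction.SAFEMODE_GET;
    default:
      throw new UnsupportedOperationException("Unsupported action " + action);
    }
  }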





> Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem
> 
>
> Key: HADOOP-18671
> URL: https://issues.apache.org/jira/browse/HADOOP-18671
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>
> We are in the midst of enabling HBase and Solr to run on Ozone.
> An obstacle is that HBase relies heavily on HDFS APIs and semantics for its 
> Write Ahead Log (WAL) file (similarly, for Solr's transaction log). We 
> propose to push up these HDFS APIs, i.e. recoverLease(), setSafeMode(), 
> isFileClosed(), to the FileSystem abstraction so that HBase and other 
> applications do not need to take on an Ozone dependency at compile time. 
> This work will 
> (hopefully) enable HBase to run on other storage system implementations in 
> the future.
> There are other HDFS features that HBase uses, including hedged read and 
> favored nodes. Those are FS-specific optimizations and are not critical to 
> enable HBase on Ozone.
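
A hedged sketch of the intended call pattern, assuming the LeaseRecoverable 
interface discussed in the review comments above (the class name and the 
single-shot retry policy are illustrative only):

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class WalRecovery {
  /**
   * One lease-recovery round trip through the FileSystem abstraction
   * alone, with no compile-time dependency on HDFS or Ozone client
   * classes. Returns true once the WAL file is closed and safe to replay.
   */
  static boolean tryRecover(FileSystem fs, Path wal) throws IOException {
    if (!(fs instanceof LeaseRecoverable)) {
      return false; // filesystem advertises no lease recovery
    }
    LeaseRecoverable lr = (LeaseRecoverable) fs;
    lr.recoverLease(wal);
    return lr.isFileClosed(wal);
  }
}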



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] taklwu commented on a diff in pull request #5553: HADOOP-18671 Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-04-17 Thread via GitHub


taklwu commented on code in PR #5553:
URL: https://github.com/apache/hadoop/pull/5553#discussion_r1169365930


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileSystemContract.java:
##
@@ -137,4 +142,30 @@ public void testRenameNonExistentPath() throws Exception {
 () -> super.testRenameNonExistentPath());
 
   }
+
+  @Test
+  public void testFileSystemCapabilities() throws Exception {

Review Comment:
   yeah, that's why I chose to use a `default` implementation when declaring 
the interface: any filesystem that doesn't override it will throw 
UnsupportedOperationException.
   
   but we should still change the expectations for those filesystems; they 
don't extend or implement these interfaces for now. (we don't know whether 
anyone will want to build a new feature on top in the future, but that's 
very unlikely for s3a and abfs)



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestFileSystemInitialization.java:
##
@@ -105,5 +108,27 @@ public void testFileSystemCapabilities() throws Throwable {
 ETAGS_PRESERVED_IN_RENAME, etagsAcrossRename,
 FS_ACLS, acls, fs)
 .isEqualTo(acls);
+
+final boolean leaseRecovery = fs.hasPathCapability(p, LEASE_RECOVERABLE);

Review Comment:
   so, if you prefer me to remove them, that would trim the PR a bit lol, 
but let me wait for your reply first.



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java:
##
@@ -1633,6 +1634,53 @@ public DatanodeInfo[] getDataNodeStats(final DatanodeReportType type)
    * @see org.apache.hadoop.hdfs.protocol.ClientProtocol#setSafeMode(
    *      HdfsConstants.SafeModeAction,boolean)
    */
+  @Override
+  public boolean setSafeMode(SafeModeAction action)
+      throws IOException {
+    return setSafeMode(action, false);
+  }
+
+  /**
+   * Enter, leave or get safe mode.
+   *
+   * @param action
+   *          One of SafeModeAction.ENTER, SafeModeAction.LEAVE and
+   *          SafeModeAction.GET
+   * @param isChecked
+   *          If true check only for Active NNs status, else check first NN's
+   *          status
+   */
+  @Override
+  public boolean setSafeMode(SafeModeAction action, boolean isChecked)
+      throws IOException {
+    return dfs.setSafeMode(convertToClientProtocolSafeModeAction(action),
+        isChecked);
+  }
+
+  private HdfsConstants.SafeModeAction convertToClientProtocolSafeModeAction(

Review Comment:
   my understanding from reading the code is that webhdfs does not support 
entering or leaving safe mode, so we can keep this function here, but 
making it static would be good.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18691) Add a CallerContext getter on the Schedulable interface

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713335#comment-17713335
 ] 

ASF GitHub Bot commented on HADOOP-18691:
-

hadoop-yetus commented on PR #5540:
URL: https://github.com/apache/hadoop/pull/5540#issuecomment-1512306901

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  24m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  21m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 223m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5540/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5540 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux d535459e9d58 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 16c2b412308c0515f9e1e9214a472049923a09c4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5540/8/testReport/ |
   | Max. process+thread count | 1264 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5540/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Add a CallerContext getter on the Schedulable interface

[GitHub] [hadoop] hadoop-yetus commented on pull request #5540: HADOOP-18691. Add a CallerContext getter on the Schedulable interface

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5540:
URL: https://github.com/apache/hadoop/pull/5540#issuecomment-1512306901

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  24m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  21m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 223m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5540/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5540 |
   | Optional Tests | dupname asflicense codespell detsecrets compile javac 
javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux d535459e9d58 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 16c2b412308c0515f9e1e9214a472049923a09c4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5540/8/testReport/ |
   | Max. process+thread count | 1264 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5540/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [hadoop] hadoop-yetus commented on pull request #5556: HDFS-16982 Use the right Quantiles Array for Inverse Quantiles snapshot

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5556:
URL: https://github.com/apache/hadoop/pull/5556#issuecomment-1512298008

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 53s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  20m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   4m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  22m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  20m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 38s |  |  root: The patch generated 
0 new + 66 unchanged - 2 fixed = 66 total (was 68)  |
   | +1 :green_heart: |  mvnsite  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   4m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 42s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  24m 11s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 252m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5556/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5556 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 42c1ef96f143 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d8dc71d1b32f73e4c95755f33a4e0a7500cf5e01 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5556/7/testReport/ |
   | Max. process+thread count | 2321 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5556/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | 

[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713266#comment-17713266
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

cbevard1 opened a new pull request, #5563:
URL: https://github.com/apache/hadoop/pull/5563

   
   
   ### Description of PR
   
   This PR improves the ability to recover partial S3A uploads.
   1. Changed handleSyncableInvocation() to call flush() after warning that 
the Syncable API isn't supported (a sketch follows this list). This mirrors 
the downgradeSyncable behavior of BufferedIOStatisticsOutputStream and 
RawLocalFileSystem.
   2. Changed the DiskBlock temporary file names to include the S3 key so 
that partial uploads can be recovered.
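
   A hedged sketch of the downgrade behaviour in point 1; the class, logger 
and method names are illustrative, not the actual S3ABlockOutputStream code:

import java.io.IOException;
import java.io.OutputStream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// When the Syncable API (hsync/hflush) is invoked on a stream that cannot
// honour it, warn and degrade to flush() instead of failing outright.
abstract class SyncableDowngradeSketch extends OutputStream {
  private static final Logger LOG =
      LoggerFactory.getLogger(SyncableDowngradeSketch.class);

  protected void handleSyncableInvocation() throws IOException {
    LOG.warn("Syncable API is not supported; downgrading to flush()");
    flush(); // push buffered bytes into the active block file
  }
}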
   
   ### How was this patch tested?
   
   Unit testing and regression testing with Accumulo
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>
> If an application crashes during an S3ABlockOutputStream upload, it's 
> possible to complete the upload if fast.upload.buffer is set to disk by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel though 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.
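
A minimal sketch of the naming scheme the report proposes; the prefix layout 
and key sanitization are assumptions for illustration, not the actual S3A 
block-file format:

import java.io.File;
import java.io.IOException;

final class BlockFileNames {
  /**
   * Create a disk-buffer block file whose name embeds a sanitized copy of
   * the destination S3 key and the part number, so an interrupted
   * multipart upload can be matched back to its object after a crash.
   */
  static File createBlockFile(File bufferDir, String s3Key, long partNumber)
      throws IOException {
    String safeKey = s3Key.replaceAll("[^A-Za-z0-9._-]", "_");
    return File.createTempFile(
        "s3ablock-" + safeKey + "-part" + partNumber + "-", ".tmp", bufferDir);
  }
}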



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18706:

Labels: pull-request-available  (was: )

> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>
> If an application crashes during an S3ABlockOutputStream upload, it's 
> possible to complete the upload if fast.upload.buffer is set to disk by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel though 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cbevard1 opened a new pull request, #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-04-17 Thread via GitHub


cbevard1 opened a new pull request, #5563:
URL: https://github.com/apache/hadoop/pull/5563

   
   
   ### Description of PR
   
   This PR improves the ability to recover partial S3A uploads.
   1. Changed handleSyncableInvocation() to call flush() after warning that 
the Syncable API isn't supported. This mirrors the downgradeSyncable 
behavior of BufferedIOStatisticsOutputStream and RawLocalFileSystem.
   2. Changed the DiskBlock temporary file names to include the S3 key so 
that partial uploads can be recovered.
   
   ### How was this patch tested?
   
   Unit testing and regression testing with Accumulo
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18149) The FSDownload verifyAndCopy method doesn't support S3

2023-04-17 Thread Chris Bevard (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Bevard resolved HADOOP-18149.
---
Resolution: Invalid

> The FSDownload verifyAndCopy method doesn't support S3
> --
>
> Key: HADOOP-18149
> URL: https://issues.apache.org/jira/browse/HADOOP-18149
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The modification time comparison in FSDownload's verifyAndCopy method fails 
> for S3, which prevents distributed cache files from being loaded from S3. 
> This change allows S3 to be supported via a config change that would 
> replace the IOException with a warning log entry.
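
For illustration, a hedged sketch of the proposed behaviour; the 
configuration key is invented here and the method is a simplification of 
FSDownload's actual check:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class TimestampCheckSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(TimestampCheckSketch.class);

  // Illustrative key only, not a real Hadoop/YARN configuration property.
  static final String SKIP_TIMESTAMP_CHECK =
      "yarn.localizer.skip-timestamp-check";

  /** Tolerate stores whose reported mtime differs from the recorded one. */
  static void verifyTimestamp(Configuration conf, FileStatus status,
      long expectedTimestamp) throws IOException {
    if (status.getModificationTime() == expectedTimestamp) {
      return;
    }
    String msg = "Resource " + status.getPath() + " changed on src filesystem"
        + " (expected " + expectedTimestamp + ", was "
        + status.getModificationTime() + ")";
    if (conf.getBoolean(SKIP_TIMESTAMP_CHECK, false)) {
      LOG.warn(msg); // the Jira proposes a warning instead of failing
    } else {
      throw new IOException(msg);
    }
  }
}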



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#issuecomment-1511935124

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 218m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5561/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 338m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5561/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5561 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 62a547ee9b45 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5ff358ca067fe556aac691af44fbe1282d9bb608 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5561/1/testReport/ |
   | Max. process+thread count | 2153 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5561/1/console |
   | versions | git=2.25.1 

[jira] [Created] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-04-17 Thread Chris Bevard (Jira)
Chris Bevard created HADOOP-18706:
-

 Summary: The temporary files for disk-block buffer aren't unique 
enough to recover partial uploads. 
 Key: HADOOP-18706
 URL: https://issues.apache.org/jira/browse/HADOOP-18706
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Reporter: Chris Bevard


If an application crashes during an S3ABlockOutputStream upload, it's possible 
to complete the upload when fast.upload.buffer is set to disk, by uploading the 
s3ablock file with putObject as the final part of the multipart upload. If the 
application has multiple uploads running in parallel, though, and they're on 
the same part number when the application fails, there is no way to determine 
which file belongs to which object, and recovery of either upload is impossible.

If the temporary file name for disk buffering included the s3 key, then every 
partial upload would be recoverable.
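
A minimal sketch of that naming idea (the helper class and its sanitization 
scheme are illustrative assumptions, not the actual implementation):

```java
import java.io.File;
import java.io.IOException;

final class BlockFileNaming {
  // Embed a sanitized form of the S3 key in the temp-file name so that a
  // crashed upload's buffered blocks can be matched back to their object.
  static File createBlockFile(File bufferDir, String key, long blockIndex)
      throws IOException {
    String sanitizedKey = key.replaceAll("[^0-9a-zA-Z]", "_");
    return File.createTempFile(
        "s3ablock-" + sanitizedKey + "-" + blockIndex + "-", ".tmp", bufferDir);
  }
}
```

With the key embedded, recovery tooling could group leftover block files by 
object instead of guessing from part numbers alone.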



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] rdingankar commented on a diff in pull request #5556: HDFS-16982 Use the right Quantiles Array for Inverse Quantiles snapshot

2023-04-17 Thread via GitHub


rdingankar commented on code in PR #5556:
URL: https://github.com/apache/hadoop/pull/5556#discussion_r1169036218


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestMutableMetrics.java:
##
@@ -52,6 +52,8 @@ public class TestMutableMetrics {
   private static final Logger LOG =
   LoggerFactory.getLogger(TestMutableMetrics.class);
   private static final double EPSILON = 1e-42;
+  private static final int SLEEP_TIME = 6000;

Review Comment:
   updated



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #5556: HDFS-16982 Use the right Quantiles Array for Inverse Quantiles snapshot

2023-04-17 Thread via GitHub


goiri commented on code in PR #5556:
URL: https://github.com/apache/hadoop/pull/5556#discussion_r1169028613


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/lib/TestMutableMetrics.java:
##
@@ -52,6 +52,8 @@ public class TestMutableMetrics {
   private static final Logger LOG =
   LoggerFactory.getLogger(TestMutableMetrics.class);
   private static final double EPSILON = 1e-42;
+  private static final int SLEEP_TIME = 6000;

Review Comment:
   SLEEP_TIME_MS = 6 * 1000; // 6 seconds



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18704) Support a "permissive" mode for secure clusters to allow "simple" auth clients

2023-04-17 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713194#comment-17713194
 ] 

Viraj Jasani commented on HADOOP-18704:
---

FYI [~bbeaudreault] 

> Support a "permissive" mode for secure clusters to allow "simple" auth clients
> --
>
> Key: HADOOP-18704
> URL: https://issues.apache.org/jira/browse/HADOOP-18704
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.4.0, 2.10.3, 3.2.5, 3.3.6
>Reporter: Ravi Kishore Valeti
>Priority: Minor
>
> Similar to HBASE-14700, we would like to add support for a secure server to 
> fall back to simple auth for non-secure clients.
> Secure Hadoop to support a permissive mode to allow mixed secure and insecure 
> clients. This allows clients to be incrementally migrated over to a secure 
> configuration. To enable clients to continue to connect using SIMPLE 
> authentication when the cluster is configured for security, set 
> "hadoop.ipc.server.fallback-to-simple-auth-allowed" equal to "true" in 
> hdfs-site.xml. NOTE: This setting should ONLY be used as a temporary measure 
> while converting clients over to secure authentication. It MUST BE DISABLED 
> for secure operation.
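
A minimal illustration of the quoted property (the key and value are taken 
from the description above; setting it programmatically rather than in 
hdfs-site.xml is shown only for brevity):

```java
import org.apache.hadoop.conf.Configuration;

final class PermissiveAuthExample {
  // Temporary migration measure only, as noted above; it MUST be disabled
  // again once all clients use secure authentication.
  static Configuration permissiveConf() {
    Configuration conf = new Configuration();
    conf.setBoolean("hadoop.ipc.server.fallback-to-simple-auth-allowed", true);
    return conf;
  }
}
```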



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#issuecomment-1511702885

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   5m 29s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5562/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt)
 |  hadoop-yarn-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.nodelabels.TestFileSystemNodeLabelsStore |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5562/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5562 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8aba91f002da 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9b677dd55e539d58f0697528ead09a8eda343f00 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5562/1/testReport/ |
   | Max. process+thread count | 666 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
   | Console output | 

[GitHub] [hadoop] rdingankar commented on pull request #5556: HDFS-16982 Use the right Quantiles Array for Inverse Quantiles snapshot

2023-04-17 Thread via GitHub


rdingankar commented on PR #5556:
URL: https://github.com/apache/hadoop/pull/5556#issuecomment-1511698162

   @goiri fixed the checkstyle warnings. Can you please take a look at the PR 
and help merge if it looks good. Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-18688) s3a audit info to include #of items in a DeleteObjects request

2023-04-17 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713186#comment-17713186
 ] 

Viraj Jasani edited comment on HADOOP-18688 at 4/17/23 4:20 PM:


For instance, in addition to "object_delete_objects", we could also 
introduce "object_deleted_objects", with the value derived from 
DeleteObjectsResult.

Hence, while "object_delete_objects" represents how many files were meant 
to be deleted before making the request to S3, "object_deleted_objects" would 
represent how many files were actually deleted after receiving the response 
from S3.


was (Author: vjasani):
For instance, in addition to "object_delete_objects", we could also 
introduce "object_deleted_objects", with the value derived from 
DeleteObjectsResult.

Hence, while "object_delete_objects" represents how many files were meant 
to be deleted, "object_deleted_objects" would represent how many files were 
actually deleted.

> s3a audit info to include #of items in a DeleteObjects request
> --
>
> Key: HADOOP-18688
> URL: https://issues.apache.org/jira/browse/HADOOP-18688
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>
> it would be good to find out how many files were deleted in a DeleteObjects 
> call



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18688) s3a audit info to include #of items in a DeleteObjects request

2023-04-17 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713186#comment-17713186
 ] 

Viraj Jasani commented on HADOOP-18688:
---

For instance, in addition to "object_delete_objects", we could also 
introduce "object_deleted_objects", with the value derived from 
DeleteObjectsResult.

Hence, while "object_delete_objects" represents how many files were meant 
to be deleted, "object_deleted_objects" would represent how many files were 
actually deleted.
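
A minimal sketch of how both counters could be derived (AWS SDK v1 types; the 
helper method and its wiring into S3A's statistics are assumptions):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsResult;

final class DeleteAudit {
  // Count objects requested for deletion vs. objects the response confirms.
  static void auditedDelete(AmazonS3 s3, DeleteObjectsRequest request) {
    int requested = request.getKeys().size();         // object_delete_objects
    DeleteObjectsResult result = s3.deleteObjects(request);
    int deleted = result.getDeletedObjects().size();  // object_deleted_objects
    System.out.printf("requested=%d deleted=%d%n", requested, deleted);
  }
}
```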

> s3a audit info to include #of items in a DeleteObjects request
> --
>
> Key: HADOOP-18688
> URL: https://issues.apache.org/jira/browse/HADOOP-18688
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>
> it would be good to find out how many files were deleted in a DeleteObjects 
> call



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szetszwo commented on pull request #5532: HDFS-16972. Delete a snapshot may deleteCurrentFile.

2023-04-17 Thread via GitHub


szetszwo commented on PR #5532:
URL: https://github.com/apache/hadoop/pull/5532#issuecomment-1511669706

   > Are we planning to address here? [#5532 
(comment)](https://github.com/apache/hadoop/pull/5532#issuecomment-1499923239)
   
   Probably not.  I found a potential fix, shown below -- it should update 
`snapshotId` only if it is larger, which might be enough to fix the bug.  
However, it is a very big change, since it changes the rename diff entries and 
completely changes the rename-cleanup algorithm.
   ```java
   +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
   @@ -181,7 +181,9 @@ static INodesInPath resolve(final INodeDirectory startingDir,
              (sf = curNode.asDirectory().getDirectoryWithSnapshotFeature()) != null) {
            lastSnapshot = sf.getLastSnapshotId();
          }
   -      snapshotId = lastSnapshot;
   +      if (lastSnapshot > snapshotId) {
   +        snapshotId = lastSnapshot;
   +      }
        }
      }
    }
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713180#comment-17713180
 ] 

ASF GitHub Bot commented on HADOOP-18399:
-

virajjasani commented on code in PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#discussion_r1168961829


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/prefetch/TestBlockCache.java:
##
@@ -67,7 +72,7 @@ public void testPutAndGet() throws Exception {
 
 assertEquals(0, cache.size());
 assertFalse(cache.containsBlock(0));
-cache.put(0, buffer1);
+cache.put(0, buffer1, CONF, new LocalDirAllocator(HADOOP_TMP_DIR));

Review Comment:
   For Test* classes, using `BUFFER_DIR` is not helpful as they don't use 
`S3ATestUtils#prepareTestConfiguration`.
   
   Hence, using `HADOOP_TMP_DIR` for Test* classes.





> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> 
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit
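
A minimal sketch of the proposed allocation (the config key matches the 
Constants.BUFFER_DIR value described above; the class and file name are 
illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.fs.Path;

final class PrefetchCacheAllocation {
  // Allocate a cache-block file under the directories named by
  // fs.s3a.buffer.dir instead of a JVM-default temp file.
  static Path allocateBlockFile(Configuration conf, long blockSize)
      throws IOException {
    LocalDirAllocator allocator = new LocalDirAllocator("fs.s3a.buffer.dir");
    return allocator.getLocalPathForWrite("s3a-prefetch-block", blockSize, conf);
  }
}
```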



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-04-17 Thread via GitHub


virajjasani commented on code in PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#discussion_r1168961829


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/impl/prefetch/TestBlockCache.java:
##
@@ -67,7 +72,7 @@ public void testPutAndGet() throws Exception {
 
 assertEquals(0, cache.size());
 assertFalse(cache.containsBlock(0));
-cache.put(0, buffer1);
+cache.put(0, buffer1, CONF, new LocalDirAllocator(HADOOP_TMP_DIR));

Review Comment:
   For Test* classes, using `BUFFER_DIR` is not helpful as they don't use 
`S3ATestUtils#prepareTestConfiguration`.
   
   Hence, using `HADOOP_TMP_DIR` for Test* classes.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szetszwo commented on a diff in pull request #5532: HDFS-16972. Delete a snapshot may deleteCurrentFile.

2023-04-17 Thread via GitHub


szetszwo commented on code in PR #5532:
URL: https://github.com/apache/hadoop/pull/5532#discussion_r1168961332


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java:
##
@@ -35,32 +35,45 @@
 import org.apache.hadoop.security.AccessControlException;
 
 /**
- * An anonymous reference to an inode.
- *
+ * A reference to an inode.
+ * 
  * This class and its subclasses are used to support multiple access paths.
  * A file/directory may have multiple access paths when it is stored in some
- * snapshots and it is renamed/moved to other locations.
- * 
+ * snapshots, and it is renamed/moved to other locations.
+ * 
  * For example,
- * (1) Suppose we have /abc/foo, say the inode of foo is 
inode(id=1000,name=foo)
- * (2) create snapshot s0 for /abc
+ * (1) Suppose we have /abc/foo and the inode is inode(id=1000,name=foo).
+ * Suppose foo is created after snapshot s0,
+ * i.e. foo is not in s0 and inode(id=1000,name=foo)
+ * is in the create-list of /abc for the s0 diff entry.
+ * (2) Create snapshot s1, s2 for /abc, i.e. foo is in s1 and s2.
+ * Suppose sDst is the last snapshot /xyz.
  * (3) mv /abc/foo /xyz/bar, i.e. inode(id=1000,name=...) is renamed from "foo"
  * to "bar" and its parent becomes /xyz.
- * 
- * Then, /xyz/bar and /abc/.snapshot/s0/foo are two different access paths to
- * the same inode, inode(id=1000,name=bar).
- *
+ * 
+ * Then, /xyz/bar, /abc/.snapshot/s1/foo and /abc/.snapshot/s2/foo
+ * are different access paths to the same inode, inode(id=1000,name=bar).
+ * 
  * With references, we have the following
- * - /abc has a child ref(id=1001,name=foo).
- * - /xyz has a child ref(id=1002) 
- * - Both ref(id=1001,name=foo) and ref(id=1002) point to another reference,
- *   ref(id=1003,count=2).
- * - Finally, ref(id=1003,count=2) points to inode(id=1000,name=bar).
- * 
- * Note 1: For a reference without name, e.g. ref(id=1002), it uses the name
- * of the referred inode.
+ * - The source /abc/foo inode(id=1000,name=foo) is replaced with
+ *   a WithName(name=foo,lastSnapshot=s2) and then it is moved
+ *   to the delete-list of /abc for the s2 diff entry.
+ *   The replacement also replaces inode(id=1000,name=foo)
+ *   in the create-list of /abc for the s0 diff entry with the WithName.
+ *   The same as before, /abc/foo is in s1 and s2, but not in s0.
+ * - The destination /xyz adds a child DstReference(dstSnapshot=sDst).
+ *   DstReference is added to the create-list of /xyz for the sDst diff entry.
+ *   /abc/bar is not in sDst.

Review Comment:
   Oops, it is a typo -- should be `/xyz/bar`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713173#comment-17713173
 ] 

ASF GitHub Bot commented on HADOOP-18399:
-

virajjasani commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1511638514

   Re-run against us-west-2:
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   ```




> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> 
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-04-17 Thread via GitHub


virajjasani commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1511638514

   Re-run against us-west-2:
   
   ```
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   
   $ mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713157#comment-17713157
 ] 

ASF GitHub Bot commented on HADOOP-18565:
-

ahmarsuhail commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1511606609

   I added 
   ```
     <!-- Match entry stripped by the mailing-list archiver -->
   ```
   
   to hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml, but that hasn't 
suppressed the spotbugs warning for some reason; not sure what I did wrong.




> AWS SDK V2 - Complete outstanding items
> ---
>
> Key: HADOOP-18565
> URL: https://issues.apache.org/jira/browse/HADOOP-18565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> The following work remains to complete the SDK upgrade work:
>  * S3A allows users configure to custom signers, add in support for this.
>  * Remove SDK V1 bundle dependency
>  * Update `getRegion()` logic to use retries. 
>  * Add in progress listeners for `S3ABlockOutputStream`
>  * Fix any failing tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on pull request #5421: HADOOP-18565. Completes outstanding items for the SDK V2 upgrade.

2023-04-17 Thread via GitHub


ahmarsuhail commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1511606609

   I added 
   ```
     <!-- Match entry stripped by the mailing-list archiver -->
   ```
   
   to hadoop-tools/hadoop-aws/dev-support/findbugs-exclude.xml, but that hasn't 
suppressed the spotbugs warning for some reason; not sure what I did wrong.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on a diff in pull request #5532: HDFS-16972. Delete a snapshot may deleteCurrentFile.

2023-04-17 Thread via GitHub


umamaheswararao commented on code in PR #5532:
URL: https://github.com/apache/hadoop/pull/5532#discussion_r1168909843


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java:
##
@@ -35,32 +35,45 @@
 import org.apache.hadoop.security.AccessControlException;
 
 /**
- * An anonymous reference to an inode.
- *
+ * A reference to an inode.
+ * 
  * This class and its subclasses are used to support multiple access paths.
  * A file/directory may have multiple access paths when it is stored in some
- * snapshots and it is renamed/moved to other locations.
- * 
+ * snapshots, and it is renamed/moved to other locations.
+ * 
  * For example,
- * (1) Suppose we have /abc/foo, say the inode of foo is 
inode(id=1000,name=foo)
- * (2) create snapshot s0 for /abc
+ * (1) Suppose we have /abc/foo and the inode is inode(id=1000,name=foo).
+ * Suppose foo is created after snapshot s0,
+ * i.e. foo is not in s0 and inode(id=1000,name=foo)
+ * is in the create-list of /abc for the s0 diff entry.
+ * (2) Create snapshot s1, s2 for /abc, i.e. foo is in s1 and s2.
+ * Suppose sDst is the last snapshot /xyz.
  * (3) mv /abc/foo /xyz/bar, i.e. inode(id=1000,name=...) is renamed from "foo"
  * to "bar" and its parent becomes /xyz.
- * 
- * Then, /xyz/bar and /abc/.snapshot/s0/foo are two different access paths to
- * the same inode, inode(id=1000,name=bar).
- *
+ * 
+ * Then, /xyz/bar, /abc/.snapshot/s1/foo and /abc/.snapshot/s2/foo
+ * are different access paths to the same inode, inode(id=1000,name=bar).
+ * 
  * With references, we have the following
- * - /abc has a child ref(id=1001,name=foo).
- * - /xyz has a child ref(id=1002) 
- * - Both ref(id=1001,name=foo) and ref(id=1002) point to another reference,
- *   ref(id=1003,count=2).
- * - Finally, ref(id=1003,count=2) points to inode(id=1000,name=bar).
- * 
- * Note 1: For a reference without name, e.g. ref(id=1002), it uses the name
- * of the referred inode.
+ * - The source /abc/foo inode(id=1000,name=foo) is replaced with
+ *   a WithName(name=foo,lastSnapshot=s2) and then it is moved
+ *   to the delete-list of /abc for the s2 diff entry.
+ *   The replacement also replaces inode(id=1000,name=foo)
+ *   in the create-list of /abc for the s0 diff entry with the WithName.
+ *   The same as before, /abc/foo is in s1 and s2, but not in s0.
+ * - The destination /xyz adds a child DstReference(dstSnapshot=sDst).
+ *   DstReference is added to the create-list of /xyz for the sDst diff entry.
+ *   /abc/bar is not in sDst.

Review Comment:
   /abc/bar does not exist at all, right? I thought we renamed /abc/foo to 
/xyz/bar, so I am quite lost as to why /abc/bar comes up here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] umamaheswararao commented on pull request #5532: HDFS-16972. Delete a snapshot may deleteCurrentFile.

2023-04-17 Thread via GitHub


umamaheswararao commented on PR #5532:
URL: https://github.com/apache/hadoop/pull/5532#issuecomment-1511590848

   Are we planning to address here? 
https://github.com/apache/hadoop/pull/5532#issuecomment-1499923239


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brumi1024 commented on a diff in pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-17 Thread via GitHub


brumi1024 commented on code in PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#discussion_r1168885162


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/AbstractFSNodeStore.java:
##
@@ -65,8 +65,30 @@ protected void initStore(Configuration conf, Path 
fsStorePath,
 this.fsWorkingPath = fsStorePath;
 this.manager = mgr;
 initFileSystem(conf);
-// mkdir of root dir path
-fs.mkdirs(fsWorkingPath);
+// mkdir of root dir path with retry logic
+int maxRetries = 3;

Review Comment:
   Can you please create a configuration entry for the retry count and the 
interval? Something like the yarn.resourcemanager.zk-num-retries and 
yarn.resourcemanager.zk-retry-interval-ms parameters defined for ZK 
connections. That way the user could configure a longer period in case HDFS is 
in safe mode for a few minutes when the RM tries to start; a sketch of the 
idea follows below.
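   
   A minimal sketch of what that could look like (the property names and 
defaults are assumptions, not existing YARN keys):
   ```java
   import java.io.IOException;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   final class NodeLabelsStoreInit {
     // Retry mkdirs() of the node-labels root dir with configurable attempts.
     static void mkdirsWithRetry(FileSystem fs, Path root, Configuration conf)
         throws IOException, InterruptedException {
       int maxRetries = conf.getInt("yarn.node-labels.fs-store.num-retries", 3);
       long intervalMs =
           conf.getLong("yarn.node-labels.fs-store.retry-interval-ms", 1000L);
       for (int attempt = 1; ; attempt++) {
         try {
           fs.mkdirs(root);
           return;
         } catch (IOException e) {
           if (attempt >= maxRetries) {
             throw e;
           }
           Thread.sleep(intervalMs);
         }
       }
     }
   }
   ```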
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brumi1024 commented on a diff in pull request #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-17 Thread via GitHub


brumi1024 commented on code in PR #5562:
URL: https://github.com/apache/hadoop/pull/5562#discussion_r1168885162


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/nodelabels/store/AbstractFSNodeStore.java:
##
@@ -65,8 +65,30 @@ protected void initStore(Configuration conf, Path 
fsStorePath,
 this.fsWorkingPath = fsStorePath;
 this.manager = mgr;
 initFileSystem(conf);
-// mkdir of root dir path
-fs.mkdirs(fsWorkingPath);
+// mkdir of root dir path with retry logic
+int maxRetries = 3;

Review Comment:
   Can you please create a configuration entry for the retry count and the 
interval? Something like the yarn.resourcemanager.zk-num-retries and 
yarn.resourcemanager.connect.retry-interval.ms parameters defined for ZK 
connections. That way the user could configure a longer period in case HDFS is 
in safe mode for a few minutes when the RM tries to start.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713134#comment-17713134
 ] 

ASF GitHub Bot commented on HADOOP-18565:
-

hadoop-yetus commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1511537779

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 28 new or modified test files.  |
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 28s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  25m 20s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  mvnsite  |   2m 59s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   1m 14s | 
[/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/15/artifact/out/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html)
 |  hadoop-tools/hadoop-aws in feature-HADOOP-18073-s3a-sdk-upgrade has 1 
extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  27m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 58s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  24m 32s |  |  
root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 0 new + 2840 unchanged 
- 2 fixed = 2840 total (was 2842)  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  21m 41s |  |  
root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 0 new + 2641 unchanged 
- 2 fixed = 2641 total (was 2643)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  root: The patch generated 
0 new + 49 unchanged - 5 fixed = 49 total (was 54)  |
   | +1 :green_heart: |  mvnsite  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  hadoop-project in the patch 
passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 
with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 0 new 
+ 1 unchanged - 3 fixed = 1 total (was 4)  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  spotbugs  |   2m 43s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  spotbugs  |   1m 25s |  |  hadoop-tools/hadoop-aws 
generated 0 new + 0 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5421: HADOOP-18565. Completes outstanding items for the SDK V2 upgrade.

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1511537779

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 28 new or modified test files.  |
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 28s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  25m 20s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  mvnsite  |   2m 59s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   1m 14s | 
[/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/15/artifact/out/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html)
 |  hadoop-tools/hadoop-aws in feature-HADOOP-18073-s3a-sdk-upgrade has 1 
extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  27m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 58s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  24m 32s |  |  
root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 0 new + 2840 unchanged 
- 2 fixed = 2840 total (was 2842)  |
   | +1 :green_heart: |  compile  |  21m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  21m 41s |  |  
root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 0 new + 2641 unchanged 
- 2 fixed = 2641 total (was 2643)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  root: The patch generated 
0 new + 49 unchanged - 5 fixed = 49 total (was 54)  |
   | +1 :green_heart: |  mvnsite  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  hadoop-project in the patch 
passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 
with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 0 new 
+ 1 unchanged - 3 fixed = 1 total (was 4)  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  spotbugs  |   2m 43s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  spotbugs  |   1m 25s |  |  hadoop-tools/hadoop-aws 
generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  27m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 26s |  |  

[GitHub] [hadoop] ashutoshcipher opened a new pull request, #5562: YARN-11463. Node Labels root directory creation doesn't have a retry logic

2023-04-17 Thread via GitHub


ashutoshcipher opened a new pull request, #5562:
URL: https://github.com/apache/hadoop/pull/5562

   ### Description of PR
   
   Node Labels root directory creation doesn't have a retry logic
   
   JIRA - YARN-11463
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713111#comment-17713111
 ] 

ASF GitHub Bot commented on HADOOP-18705:
-

hadoop-yetus commented on PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#issuecomment-1511439932

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 105m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5560 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 785a46dea41b 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c093fe1297cb91d261e100aa9c898ffe3de4d983 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/testReport/ |
   | Max. process+thread count | 535 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> hadoop-azure: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5560: HADOOP-18705. hadoop-azure: AzureBlobFileSystem should exclude incomp…

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5560:
URL: https://github.com/apache/hadoop/pull/5560#issuecomment-1511439932

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 37s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 105m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5560 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 785a46dea41b 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c093fe1297cb91d261e100aa9c898ffe3de4d983 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/testReport/ |
   | Max. process+thread count | 535 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5560/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [hadoop] lfxy opened a new pull request, #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-04-17 Thread via GitHub


lfxy opened a new pull request, #5561:
URL: https://github.com/apache/hadoop/pull/5561

   In the concat RPC, FSDirConcatOp::verifySrcFiles() is called to check the 
source files, and it performs a permission check on the srcs. Whether that 
permission check runs should be decided by the dfs.permissions.enabled 
configuration; as written, it is gated on a null test of the 'pc' parameter, 
which is never null, so the check always runs.
   ```
   // permission check for srcs
   if (pc != null) {
 fsd.checkPathAccess(pc, iip, FsAction.READ); // read the file
 fsd.checkParentAccess(pc, iip, FsAction.WRITE); // for delete
   }
   ```
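   
   A minimal sketch of the gating this PR argues for, assuming FSDirectory 
exposes the flag as isPermissionEnabled() (placement and surrounding names are 
illustrative, not the actual patch):
   ```java
   // Gate the check on dfs.permissions.enabled rather than on a null test
   // of 'pc', which never fires in practice.
   if (fsd.isPermissionEnabled()) {
     fsd.checkPathAccess(pc, iip, FsAction.READ);    // read the source file
     fsd.checkParentAccess(pc, iip, FsAction.WRITE); // for the implied delete
   }
   ```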


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-17 Thread Tamas Domok (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Domok reassigned HADOOP-18705:


Assignee: Tamas Domok

> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ---
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.4.0
>Reporter: Tamas Domok
>Assignee: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
> at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
> at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
> at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.(AbstractIDBClient.java:139)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.(AbfsIDBClient.java:74)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
> at 
> org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
> at 
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at 

[GitHub] [hadoop] ayushtkn commented on pull request #5547: HDFS-16977. Forbid assigned characters in pathname.

2023-04-17 Thread via GitHub


ayushtkn commented on PR #5547:
URL: https://github.com/apache/hadoop/pull/5547#issuecomment-1511269735

   > Some pathnames which contains special character(s) may lead to unexpected 
results. For example, there is a file named "/foo/file*" in my cluster, created 
by "DistributedFileSystem.create(new Path("/foo/file*"))". When I want to 
remove it, I type in "hadoop fs -rm /foo/file*" in shell. However, I remove all 
the files with the prefix of "/foo/file*" unexpectedly. There are also some 
other characters just like '*', such as ' ', '|', '&', etc.
   
   You should have escaped the special character; that is how the shell behaves.
   ```
   bash-4.2$ hdfs dfs -ls /dir
   Found 6 items
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:41 /dir/a
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:41 /dir/ab*
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abc
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abcd
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abcde
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abcdef
   bash-4.2$ hdfs dfs -rm -r /dir/ab\\*
   Deleted /dir/ab*
   bash-4.2$ hdfs dfs -ls /dir
   Found 5 items
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:41 /dir/a
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abc
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abcd
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abcde
   drwxr-xr-x   - hadoop supergroup  0 2023-04-17 12:42 /dir/abcdef
   ```
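   
   Single-quoting the path works as well; either way the shell passes the 
literal `\*` through for the HDFS globber to unescape, for example:
   ```
   hdfs dfs -rm -r '/dir/ab\*'
   ```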
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18028) High performance S3A input stream with prefetching & caching

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713065#comment-17713065
 ] 

ASF GitHub Bot commented on HADOOP-18028:
-

ahmarsuhail commented on PR #5559:
URL: https://github.com/apache/hadoop/pull/5559#issuecomment-1511252967

   looks good so far; not sure if this is helpful, but the patches that came 
after this big commit are (listed in the order they were committed to trunk): 
   
   - ITestS3ACannedACLs failure; not in a span: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18385), 
[PR](https://github.com/apache/hadoop/pull/4736)
   - fs.s3a.prefetch.block.size to be read through longBytesOption: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18380), 
[PR](https://github.com/apache/hadoop/pull/4762)
   - s3a prefetching to use SemaphoredDelegatingExecutor for submitting work: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18186), 
[PR](https://github.com/apache/hadoop/pull/4796)
   - hadoop-aws maven build to add a prefetch profile to run all tests with 
prefetching: [JIRA](https://issues.apache.org/jira/browse/HADOOP-18377), 
[PR](https://github.com/apache/hadoop/pull/4914)
   - s3a prefetching Executor should be closed: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18455), 
[PR](https://github.com/apache/hadoop/pull/4879) & 
[PR](https://github.com/apache/hadoop/pull/4926)
   - Implement readFully(long position, byte[] buffer, int offset, int length): 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18378), 
[PR](https://github.com/apache/hadoop/pull/4955)
   - S3PrefetchingInputStream to support status probes when closed: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18189), 
[PR](https://github.com/apache/hadoop/pull/5036)
   - assertion failure in ITestS3APrefetchingInputStream: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18531), 
[PR](https://github.com/apache/hadoop/pull/5149)
   - Remove lower limit on s3a prefetching/caching block size: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18246), 
[PR](https://github.com/apache/hadoop/pull/5120)
   - S3A prefetching: Error logging during reads: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18351), 
[PR](https://github.com/apache/hadoop/pull/5274)
   
   Patch available, but not merged yet:
   - SingleFilePerBlockCache to use LocalDirAllocator for file allocation: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18399), 
[PR](https://github.com/apache/hadoop/pull/5054)




> High performance S3A input stream with prefetching & caching
> 
>
> Key: HADOOP-18028
> URL: https://issues.apache.org/jira/browse/HADOOP-18028
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Bhalchandra Pandit
>Assignee: Bhalchandra Pandit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 14.5h
>  Remaining Estimate: 0h
>
> I work for Pinterest. I developed a technique for vastly improving read 
> throughput when reading from the S3 file system. It not only helps the 
> sequential read case (like reading a SequenceFile) but also significantly 
> improves read throughput of a random access case (like reading Parquet). This 
> technique has been very useful in significantly improving efficiency of the 
> data processing jobs at Pinterest. 
>  
> I would like to contribute that feature to Apache Hadoop. More details on 
> this technique are available in this blog I wrote recently:
> [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on pull request #5559: HADOOP-18028. High performance S3A input stream (#4752)

2023-04-17 Thread via GitHub


ahmarsuhail commented on PR #5559:
URL: https://github.com/apache/hadoop/pull/5559#issuecomment-1511252967

   looks good so far; not sure if this is helpful, but the patches that came 
after this big commit are (listed in the order they were committed to trunk): 
   
   - ITestS3ACannedACLs failure; not in a span: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18385), 
[PR](https://github.com/apache/hadoop/pull/4736)
   - fs.s3a.prefetch.block.size to be read through longBytesOption: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18380), 
[PR](https://github.com/apache/hadoop/pull/4762)
   - s3a prefetching to use SemaphoredDelegatingExecutor for submitting work: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18186), 
[PR](https://github.com/apache/hadoop/pull/4796)
   - hadoop-aws maven build to add a prefetch profile to run all tests with 
prefetching: [JIRA](https://issues.apache.org/jira/browse/HADOOP-18377), 
[PR](https://github.com/apache/hadoop/pull/4914)
   - s3a prefetching Executor should be closed: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18455), 
[PR](https://github.com/apache/hadoop/pull/4879) & 
[PR](https://github.com/apache/hadoop/pull/4926)
   - Implement readFully(long position, byte[] buffer, int offset, int length): 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18378), 
[PR](https://github.com/apache/hadoop/pull/4955)
   - S3PrefetchingInputStream to support status probes when closed: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18189), 
[PR](https://github.com/apache/hadoop/pull/5036)
   - assertion failure in ITestS3APrefetchingInputStream: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18531), 
[PR](https://github.com/apache/hadoop/pull/5149)
   - Remove lower limit on s3a prefetching/caching block size: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18246), 
[PR](https://github.com/apache/hadoop/pull/5120)
   - S3A prefetching: Error logging during reads: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18351), 
[PR](https://github.com/apache/hadoop/pull/5274)
   
   Patch available, but not merged yet:
   - SingleFilePerBlockCache to use LocalDirAllocator for file allocation: 
[JIRA](https://issues.apache.org/jira/browse/HADOOP-18399), 
[PR](https://github.com/apache/hadoop/pull/5054)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713063#comment-17713063
 ] 

ASF GitHub Bot commented on HADOOP-18705:
-

tomicooler opened a new pull request, #5560:
URL: https://github.com/apache/hadoop/pull/5560

   …atible credential providers when binding DelegationTokenManagers
   
   Change-Id: I1ad8b5856a0b8c0b75d4538019d43e7fdb1962d2
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   I tested my change manually with a non-existing jceks file. Without my 
change I received the error described in the Jira: `Caused by: 
org.apache.hadoop.fs.PathIOException: `jceks://abfs@a@b.c.d/tmp/a.jceks': 
Recursive load of credential provider; if loading a JCEKS file, this means that 
the filesystem connector is trying to load the same file`.
   
   With my change the job ran successfully; I also added some extra debug logs 
to verify that the credential provider path is indeed correct.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-18705. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ---
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.4.0
>Reporter: Tamas Domok
>Priority: Major
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
> at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
> at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
> at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
> at 
> 

[jira] [Updated] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18705:

Labels: pull-request-available  (was: )

> hadoop-azure: AzureBlobFileSystem should exclude incompatible credential 
> providers when binding DelegationTokenManagers
> ---
>
> Key: HADOOP-18705
> URL: https://issues.apache.org/jira/browse/HADOOP-18705
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.4.0
>Reporter: Tamas Domok
>Priority: Major
>  Labels: pull-request-available
>
> The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
> untouched configuration which may contain a credentialProviderPath config 
> with incompatible credential providers (e.g.: jceks stored on abfs). This 
> results in an error:
> {quote}
> Caused by: org.apache.hadoop.fs.PathIOException: 
> `jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
> loading a JCEKS file, this means that the filesystem connector is trying to 
> load the same file
> {quote}
> {code}
> this.delegationTokenManager = 
> abfsConfiguration.getDelegationTokenManager();
> delegationTokenManager.bind(getUri(), configuration);
> {code}
> The abfsConfiguration excludes the incompatible credential providers already.
> Reproduction steps:
> {code}
> export HADOOP_ROOT_LOGGER=DEBUG,console
> hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
> hadoop-mapreduce-examples.jar randomwriter 
> "-Dmapreduce.randomwriter.totalbytes=100" 
> "-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
> /user/qa/sort_input 
> {code}
> Error:
> {code}
> ...
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
> at 
> org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
> at 
> org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
> at 
> org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
> at 
> org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
> at 
> org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
> at 
> org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
> at 
> org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.(AbstractIDBClient.java:139)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.(AbfsIDBClient.java:74)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
> at 
> org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
> at 
> org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
> at 
> org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
> at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
> at 

[GitHub] [hadoop] tomicooler opened a new pull request, #5560: HADOOP-18705. hadoop-azure: AzureBlobFileSystem should exclude incomp…

2023-04-17 Thread via GitHub


tomicooler opened a new pull request, #5560:
URL: https://github.com/apache/hadoop/pull/5560

   …atible credential providers when binding DelegationTokenManagers
   
   Change-Id: I1ad8b5856a0b8c0b75d4538019d43e7fdb1962d2
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   I tested my change manually with a non-existing jceks file. Without my 
change I received the error described in the Jira: `Caused by: 
org.apache.hadoop.fs.PathIOException: `jceks://abfs@a@b.c.d/tmp/a.jceks': 
Recursive load of credential provider; if loading a JCEKS file, this means that 
the filesystem connector is trying to load the same file`.
   
   With my change the job ran successfully; I also added some extra debug logs 
to verify that the credential provider path is indeed correct.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-18705. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18705) hadoop-azure: AzureBlobFileSystem should exclude incompatible credential providers when binding DelegationTokenManagers

2023-04-17 Thread Tamas Domok (Jira)
Tamas Domok created HADOOP-18705:


 Summary: hadoop-azure: AzureBlobFileSystem should exclude 
incompatible credential providers when binding DelegationTokenManagers
 Key: HADOOP-18705
 URL: https://issues.apache.org/jira/browse/HADOOP-18705
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.4.0
Reporter: Tamas Domok


The DelegationTokenManager in AzureBlobFileSystem.initialize() gets the 
untouched configuration which may contain a credentialProviderPath config with 
incompatible credential providers (e.g.: jceks stored on abfs). This results in 
an error:

{quote}
Caused by: org.apache.hadoop.fs.PathIOException: 
`jceks://abfs@a@b.c.d/tmp/a.jceks': Recursive load of credential provider; if 
loading a JCEKS file, this means that the filesystem connector is trying to 
load the same file
{quote}

{code}
this.delegationTokenManager = 
abfsConfiguration.getDelegationTokenManager();
delegationTokenManager.bind(getUri(), configuration);
{code}

The abfsConfiguration excludes the incompatible credential providers already.
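
A minimal sketch of the direction a fix could take, assuming AbfsConfiguration 
exposes its provider-filtered Configuration (getRawConfiguration() here is an 
assumption about the hadoop-azure API, not a confirmed fix):
{code}
// Bind the DelegationTokenManager against the configuration from which
// incompatible credential providers have already been excluded, instead
// of the raw Configuration handed to AzureBlobFileSystem.initialize().
this.delegationTokenManager = abfsConfiguration.getDelegationTokenManager();
delegationTokenManager.bind(getUri(), abfsConfiguration.getRawConfiguration());
{code}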

Reproduction steps:
{code}
export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -rm -r -skipTrash /user/qa/sort_input; hadoop jar 
hadoop-mapreduce-examples.jar randomwriter 
"-Dmapreduce.randomwriter.totalbytes=100" 
"-Dhadoop.security.credential.provider.path=jceks://abfs@a@b.c.d/tmp/a.jceks" 
/user/qa/sort_input 
{code}

Error:
{code}
...
org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at 
org.apache.hadoop.security.alias.KeyStoreProvider.initFileSystem(KeyStoreProvider.java:84)
at 
org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.(AbstractJavaKeyStoreProvider.java:85)
at 
org.apache.hadoop.security.alias.KeyStoreProvider.(KeyStoreProvider.java:49)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:42)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:35)
at 
org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:68)
at 
org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:91)
at 
org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2450)
at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2388)
at 
org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.getTruststorePassword(AbfsIDBClient.java:104)
at 
org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.initializeAsFullIDBClient(AbstractIDBClient.java:860)
at 
org.apache.knox.gateway.cloud.idbroker.AbstractIDBClient.(AbstractIDBClient.java:139)
at 
org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBClient.(AbfsIDBClient.java:74)
at 
org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.getClient(AbfsIDBIntegration.java:287)
at 
org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.serviceStart(AbfsIDBIntegration.java:240)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at 
org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBIntegration.fromDelegationTokenManager(AbfsIDBIntegration.java:205)
at 
org.apache.knox.gateway.cloud.idbroker.abfs.AbfsIDBDelegationTokenManager.bind(AbfsIDBDelegationTokenManager.java:66)
at 
org.apache.hadoop.fs.azurebfs.extensions.ExtensionHelper.bind(ExtensionHelper.java:54)
at 
org.apache.hadoop.fs.azurebfs.security.AbfsDelegationTokenManager.bind(AbfsDelegationTokenManager.java:96)
at 
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:224)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3452)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:162)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3557)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3504)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:522)
at 
org.apache.hadoop.yarn.logaggregation.filecontroller.ifile.LogAggregationIndexedFileController.getRollOverLogMaxSize(LogAggregationIndexedFileController.java:1164)
at 

[GitHub] [hadoop] slfan1989 commented on pull request #5431: YARN-11444. Improve YARN md documentation format.

2023-04-17 Thread via GitHub


slfan1989 commented on PR #5431:
URL: https://github.com/apache/hadoop/pull/5431#issuecomment-1511186191

   @ayushtkn Could you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18695) S3A: reject multipart copy requests when disabled

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713035#comment-17713035
 ] 

ASF GitHub Bot commented on HADOOP-18695:
-

hadoop-yetus commented on PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#issuecomment-1511162947

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 22s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 107m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5548/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5548 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 56f63622110e 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a5c714e233e651ad14799b7fb62fd991f496b86 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5548/4/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5548/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> S3A: reject multipart copy requests when disabled
[GitHub] [hadoop] hadoop-yetus commented on pull request #5548: HADOOP-18695. S3A: reject multipart copy requests when disabled

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5548:
URL: https://github.com/apache/hadoop/pull/5548#issuecomment-1511162947

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 22s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 107m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5548/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5548 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 56f63622110e 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a5c714e233e651ad14799b7fb62fd991f496b86 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5548/4/testReport/ |
   | Max. process+thread count | 531 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5548/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [hadoop] steveloughran commented on a diff in pull request #5519: MAPREDUCE-7435. Manifest Committer OOM on abfs

2023-04-17 Thread via GitHub


steveloughran commented on code in PR #5519:
URL: https://github.com/apache/hadoop/pull/5519#discussion_r1168511215


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/lib/output/TestFileOutputCommitter.java:
##
@@ -756,66 +756,75 @@ private void testConcurrentCommitTaskWithSubDir(int 
version)
 conf.setInt(FileOutputCommitter.FILEOUTPUTCOMMITTER_ALGORITHM_VERSION,
 version);
 
-conf.setClass("fs.file.impl", RLFS.class, FileSystem.class);
+final String fileImpl = "fs.file.impl";
+final String fileImplClassname = "org.apache.hadoop.fs.LocalFileSystem";
+conf.setClass(fileImpl, RLFS.class, FileSystem.class);
 FileSystem.closeAll();
 
-final JobContext jContext = new JobContextImpl(conf, taskID.getJobID());
-final FileOutputCommitter amCommitter =
-new FileOutputCommitter(outDir, jContext);
-amCommitter.setupJob(jContext);
-
-final TaskAttemptContext[] taCtx = new TaskAttemptContextImpl[2];
-taCtx[0] = new TaskAttemptContextImpl(conf, taskID);
-taCtx[1] = new TaskAttemptContextImpl(conf, taskID1);
-
-final TextOutputFormat[] tof = new TextOutputFormat[2];
-for (int i = 0; i < tof.length; i++) {
-  tof[i] = new TextOutputFormat() {
-@Override
-public Path getDefaultWorkFile(TaskAttemptContext context,
-String extension) throws IOException {
-  final FileOutputCommitter foc = (FileOutputCommitter)
-  getOutputCommitter(context);
-  return new Path(new Path(foc.getWorkPath(), SUB_DIR),
-  getUniqueFile(context, getOutputName(context), extension));
-}
-  };
-}
-
-final ExecutorService executor = HadoopExecutors.newFixedThreadPool(2);
 try {
-  for (int i = 0; i < taCtx.length; i++) {
-final int taskIdx = i;
-executor.submit(new Callable() {
+  final JobContext jContext = new JobContextImpl(conf, taskID.getJobID());
+  final FileOutputCommitter amCommitter =
+  new FileOutputCommitter(outDir, jContext);
+  amCommitter.setupJob(jContext);
+
+  final TaskAttemptContext[] taCtx = new TaskAttemptContextImpl[2];
+  taCtx[0] = new TaskAttemptContextImpl(conf, taskID);
+  taCtx[1] = new TaskAttemptContextImpl(conf, taskID1);
+
+  final TextOutputFormat[] tof = new TextOutputFormat[2];
+  for (int i = 0; i < tof.length; i++) {
+tof[i] = new TextOutputFormat() {
   @Override
-  public Void call() throws IOException, InterruptedException {
-final OutputCommitter outputCommitter =
-tof[taskIdx].getOutputCommitter(taCtx[taskIdx]);
-outputCommitter.setupTask(taCtx[taskIdx]);
-final RecordWriter rw =
-tof[taskIdx].getRecordWriter(taCtx[taskIdx]);
-writeOutput(rw, taCtx[taskIdx]);
-outputCommitter.commitTask(taCtx[taskIdx]);
-return null;
+  public Path getDefaultWorkFile(TaskAttemptContext context,
+  String extension) throws IOException {
+final FileOutputCommitter foc = (FileOutputCommitter)
+getOutputCommitter(context);
+return new Path(new Path(foc.getWorkPath(), SUB_DIR),
+getUniqueFile(context, getOutputName(context), extension));
   }
-});
+};
   }
-} finally {
-  executor.shutdown();
-  while (!executor.awaitTermination(1, TimeUnit.SECONDS)) {
-LOG.info("Awaiting thread termination!");
+
+  final ExecutorService executor = HadoopExecutors.newFixedThreadPool(2);
+  try {
+for (int i = 0; i < taCtx.length; i++) {
+  final int taskIdx = i;
+  executor.submit(new Callable() {
+@Override
+public Void call() throws IOException, InterruptedException {
+  final OutputCommitter outputCommitter =
+  tof[taskIdx].getOutputCommitter(taCtx[taskIdx]);
+  outputCommitter.setupTask(taCtx[taskIdx]);
+  final RecordWriter rw =
+  tof[taskIdx].getRecordWriter(taCtx[taskIdx]);
+  writeOutput(rw, taCtx[taskIdx]);
+  outputCommitter.commitTask(taCtx[taskIdx]);
+  return null;
+}
+  });
+}
+  } finally {
+executor.shutdown();
+while (!executor.awaitTermination(1, TimeUnit.SECONDS)) {
+  LOG.info("Awaiting thread termination!");
+}
   }
-}
 
-amCommitter.commitJob(jContext);
-final RawLocalFileSystem lfs = new RawLocalFileSystem();
-lfs.setConf(conf);
-assertFalse("Must not end up with sub_dir/sub_dir",
-lfs.exists(new Path(OUT_SUB_DIR, SUB_DIR)));
+  amCommitter.commitJob(jContext);
+  final RawLocalFileSystem lfs = new RawLocalFileSystem();
+  lfs.setConf(conf);
+  assertFalse("Must not end up with 

[GitHub] [hadoop] steveloughran commented on a diff in pull request #5519: MAPREDUCE-7435. Manifest Committer OOM on abfs

2023-04-17 Thread via GitHub


steveloughran commented on code in PR #5519:
URL: https://github.com/apache/hadoop/pull/5519#discussion_r1168507366


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/impl/ManifestCommitterSupport.java:
##
@@ -224,6 +231,23 @@ public static ManifestSuccessData createManifestOutcome(
 return outcome;
   }
 
+  /**
+   * Add heap information to IOStatisticSetters gauges, with a stage in front 
of every key.
+   * @param ioStatisticsSetters map to update
+   * @param stage stage
+   */
+  public static void addHeapInformation(IOStatisticsSetters 
ioStatisticsSetters,
+  String stage) {
+// force a gc. bit of bad form but it makes for better numbers
+System.gc();

Review Comment:
   yes, I will do that
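   
   For reference, a minimal sketch of recording heap gauges from MemoryMXBean 
without forcing a collection (gauge keys and the method shape are 
illustrative, not the patch's actual names):
   ```java
   import java.lang.management.ManagementFactory;
   import java.lang.management.MemoryMXBean;
   
   import org.apache.hadoop.fs.statistics.IOStatisticsSetters;
   
   /** Record used/committed heap for a stage without calling System.gc(). */
   public static void addHeapInformation(IOStatisticsSetters setters,
       String stage) {
     MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
     setters.setGauge(stage + ".heap.used",
         memory.getHeapMemoryUsage().getUsed());
     setters.setGauge(stage + ".heap.committed",
         memory.getHeapMemoryUsage().getCommitted());
   }
   ```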



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713018#comment-17713018
 ] 

ASF GitHub Bot commented on HADOOP-18565:
-

ahmarsuhail commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1511108657

   thanks @steveloughran, `s3AsyncClient` looks safe to me, so I have suppressed 
the warning. Like you mentioned, similar to the `futurePool`, it is created in 
the unsynchronized initialize() and the only synchronized usage is in the 
`close()` method.
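   
   One common way to express such a suppression, if done with the SpotBugs 
annotation rather than an exclude-file entry (the exact mechanism is not shown 
in this thread), is:
   ```java
   import edu.umd.cs.findbugs.annotations.SuppressFBWarnings;
   import software.amazon.awssdk.services.s3.S3AsyncClient;
   
   // Written once in initialize() before concurrent access; only read
   // under lock in close(), so the inconsistent-sync warning is benign.
   @SuppressFBWarnings(value = "IS2_INCONSISTENT_SYNC",
       justification = "set once in initialize() before any synchronized use")
   private S3AsyncClient s3AsyncClient;
   ```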




> AWS SDK V2 - Complete outstanding items
> ---
>
> Key: HADOOP-18565
> URL: https://issues.apache.org/jira/browse/HADOOP-18565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> The following work remains to complete the SDK upgrade work:
>  * S3A allows users configure to custom signers, add in support for this.
>  * Remove SDK V1 bundle dependency
>  * Update `getRegion()` logic to use retries. 
>  * Add in progress listeners for `S3ABlockOutputStream`
>  * Fix any failing tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail commented on pull request #5421: HADOOP-18565. Completes outstanding items for the SDK V2 upgrade.

2023-04-17 Thread via GitHub


ahmarsuhail commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1511108657

   thanks @steveloughran, `s3AsyncClient` looks safe to me, so I have suppressed 
the warning. Like you mentioned, similar to the `futurePool`, it is created in 
the unsynchronized initialize() and the only synchronized usage is in the 
`close()` method.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18696) S3A ITestS3ABucketExistence access point test failure

2023-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18696:

Affects Version/s: 3.3.9

> S3A ITestS3ABucketExistence access point test failure
> -
>
> Key: HADOOP-18696
> URL: https://issues.apache.org/jira/browse/HADOOP-18696
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> this is inevitably me having some config option in the way; just need to find 
> out what and clear it for the test case so the probes are to the same 
> region/endpoint as the mock bucket
> {code}
> [ERROR] 
> testAccessPointProbingV2(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  
> Time elapsed: 1.748 s  <<< ERROR!
> java.lang.IllegalArgumentException: The region field of the ARN being passed 
> as a bucket parameter to an S3 operation does not match the region the client 
> was configured with. Provided region: 'eu-west-1'; client region: 'eu-west-2'.
> at 
> com.amazonaws.services.s3.AmazonS3Client.validateIsTrue(AmazonS3Client.java:6588)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18696) S3A ITestS3ABucketExistence access point test failure

2023-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18696.
-
Fix Version/s: 3.3.9
   Resolution: Fixed

> S3A ITestS3ABucketExistence access point test failure
> -
>
> Key: HADOOP-18696
> URL: https://issues.apache.org/jira/browse/HADOOP-18696
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> this is inevitably me having some config option in the way; just need to find 
> out what and clear it for the test case so the probes are to the same 
> region/endpoint as the mock bucket
> {code}
> [ERROR] 
> testAccessPointProbingV2(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  
> Time elapsed: 1.748 s  <<< ERROR!
> java.lang.IllegalArgumentException: The region field of the ARN being passed 
> as a bucket parameter to an S3 operation does not match the region the client 
> was configured with. Provided region: 'eu-west-1'; client region: 'eu-west-2'.
> at 
> com.amazonaws.services.s3.AmazonS3Client.validateIsTrue(AmazonS3Client.java:6588)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18696) S3A ITestS3ABucketExistence access point test failure

2023-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-18696:
---

Assignee: Steve Loughran

> S3A ITestS3ABucketExistence access point test failure
> -
>
> Key: HADOOP-18696
> URL: https://issues.apache.org/jira/browse/HADOOP-18696
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0, 3.3.9
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> this is inevitably me having some config option in the way; just need to find 
> out what and clear it for the test case so the probes are to the same 
> region/endpoint as the mock bucket
> {code}
> [ERROR] 
> testAccessPointProbingV2(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  
> Time elapsed: 1.748 s  <<< ERROR!
> java.lang.IllegalArgumentException: The region field of the ARN being passed 
> as a bucket parameter to an S3 operation does not match the region the client 
> was configured with. Provided region: 'eu-west-1'; client region: 'eu-west-2'.
> at 
> com.amazonaws.services.s3.AmazonS3Client.validateIsTrue(AmazonS3Client.java:6588)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18696) S3A ITestS3ABucketExistence access point test failure

2023-04-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18696:

Component/s: test

> S3A ITestS3ABucketExistence access point test failure
> -
>
> Key: HADOOP-18696
> URL: https://issues.apache.org/jira/browse/HADOOP-18696
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9
>
>
> this is inevitably me having some config option in the way; just need to find 
> out what and clear it for the test case so the probes are to the same 
> region/endpoint as the mock bucket
> {code}
> [ERROR] 
> testAccessPointProbingV2(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  
> Time elapsed: 1.748 s  <<< ERROR!
> java.lang.IllegalArgumentException: The region field of the ARN being passed 
> as a bucket parameter to an S3 operation does not match the region the client 
> was configured with. Provided region: 'eu-west-1'; client region: 'eu-west-2'.
> at 
> com.amazonaws.services.s3.AmazonS3Client.validateIsTrue(AmazonS3Client.java:6588)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18696) S3A ITestS3ABucketExistence access point test failure

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712979#comment-17712979
 ] 

ASF GitHub Bot commented on HADOOP-18696:
-

steveloughran merged PR #5557:
URL: https://github.com/apache/hadoop/pull/5557




> S3A ITestS3ABucketExistence access point test failure
> -
>
> Key: HADOOP-18696
> URL: https://issues.apache.org/jira/browse/HADOOP-18696
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> this is inevitably me having some config option in the way; just need to find 
> out what and clear it for the test case so the probes are to the same 
> region/endpoint as the mock bucket
> {code}
> [ERROR] 
> testAccessPointProbingV2(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  
> Time elapsed: 1.748 s  <<< ERROR!
> java.lang.IllegalArgumentException: The region field of the ARN being passed 
> as a bucket parameter to an S3 operation does not match the region the client 
> was configured with. Provided region: 'eu-west-1'; client region: 'eu-west-2'.
> at 
> com.amazonaws.services.s3.AmazonS3Client.validateIsTrue(AmazonS3Client.java:6588)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18696) S3A ITestS3ABucketExistence access point test failure

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712978#comment-17712978
 ] 

ASF GitHub Bot commented on HADOOP-18696:
-

steveloughran commented on code in PR #5557:
URL: https://github.com/apache/hadoop/pull/5557#discussion_r1168410522


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java:
##
@@ -189,6 +190,17 @@ public void testAccessPointRequired() throws Exception {
         () -> FileSystem.get(uri, configuration));
   }
 
+  /**
+   * Create a configuration which has bucket probe 2 and the endpoint.region
+   * option set to "eu-west-1" to match that of the ARNs generated.
+   * @return a configuration for tests which are expected to fail in specific ways.
+   */
+  private Configuration createArnConfiguration() {
+    Configuration configuration = createConfigurationWithProbe(2);
+    configuration.set(AWS_REGION, "eu-west-1");
+    return configuration;
+  }

Review Comment:
   noted
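
For context, a sketch of how a test might consume this helper: "intercept" is 
the LambdaTestUtils assertion used in this test class, the bucket name and ARN 
are invented, and surfacing UnknownStoreException for the missing access point 
is an assumption here rather than the test's actual assertion.
{code}
import static org.apache.hadoop.test.LambdaTestUtils.intercept;

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.s3a.UnknownStoreException;

Configuration configuration = createArnConfiguration();
// Per-bucket access point binding; "example" and the ARN are made up.
configuration.set("fs.s3a.bucket.example.accesspoint.arn",
    "arn:aws:s3:eu-west-1:123456789012:accesspoint/example-ap");
// With matching regions, the probe reaches the (nonexistent) access point
// instead of failing the SDK's region check.
intercept(UnknownStoreException.class,
    () -> FileSystem.get(new URI("s3a://example/"), configuration));
{code}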





> S3A ITestS3ABucketExistence access point test failure
> -
>
> Key: HADOOP-18696
> URL: https://issues.apache.org/jira/browse/HADOOP-18696
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> this is inevitably me having some config option in the way; just need to find 
> out what and clear it for the test case so the probes are to the same 
> region/endpont as the mock bucket
> {code}
> [ERROR] 
> testAccessPointProbingV2(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  
> Time elapsed: 1.748 s  <<< ERROR!
> java.lang.IllegalArgumentException: The region field of the ARN being passed 
> as a bucket parameter to an S3 operation does not match the region the client 
> was configured with. Provided region: 'eu-west-1'; client region: 'eu-west-2'.
> at 
> com.amazonaws.services.s3.AmazonS3Client.validateIsTrue(AmazonS3Client.java:6588)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #5557: HADOOP-18696. ITestS3ABucketExistence ARN test failures.

2023-04-17 Thread via GitHub


steveloughran merged PR #5557:
URL: https://github.com/apache/hadoop/pull/5557


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5557: HADOOP-18696. ITestS3ABucketExistence ARN test failures.

2023-04-17 Thread via GitHub


steveloughran commented on code in PR #5557:
URL: https://github.com/apache/hadoop/pull/5557#discussion_r1168410522


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABucketExistence.java:
##
@@ -189,6 +190,17 @@ public void testAccessPointRequired() throws Exception {
         () -> FileSystem.get(uri, configuration));
   }
 
+  /**
+   * Create a configuration which has bucket probe 2 and the endpoint.region
+   * option set to "eu-west-1" to match that of the ARNs generated.
+   * @return a configuration for tests which are expected to fail in specific ways.
+   */
+  private Configuration createArnConfiguration() {
+    Configuration configuration = createConfigurationWithProbe(2);
+    configuration.set(AWS_REGION, "eu-west-1");
+    return configuration;
+  }

Review Comment:
   noted



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-04-17 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712952#comment-17712952
 ] 

ASF GitHub Bot commented on HADOOP-18399:
-

hadoop-yetus commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1510923937

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 53s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  24m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  21m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   4m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 35s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 38s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 240m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/27/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5054 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 173cdf5d1cf7 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4b5c6ac5498d574abbd9a4b2a5692b344debd8c4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/27/testReport/ |
   | Max. process+thread count | 2832 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/27/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-04-17 Thread via GitHub


hadoop-yetus commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1510923937

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 53s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  24m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  21m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   4m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 35s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 38s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 240m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/27/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5054 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 173cdf5d1cf7 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4b5c6ac5498d574abbd9a4b2a5692b344debd8c4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/27/testReport/ |
   | Max. process+thread count | 2832 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/27/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
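   
   On the change itself, a minimal sketch of what LocalDirAllocator-based 
allocation looks like; the wrapper class is invented for illustration, and 
"fs.s3a.buffer.dir" is assumed to be the configuration key carrying the local 
directory list.
   
   ```java
   import java.io.File;
   import java.io.IOException;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.LocalDirAllocator;
   
   // Illustrative only: allocate cache block files through LocalDirAllocator
   // rather than a single hard-coded temporary directory.
   public class PrefetchCacheFileAllocator {
   
     // The allocator reads its comma-separated directory list from this
     // configuration key at allocation time.
     private final LocalDirAllocator allocator =
         new LocalDirAllocator("fs.s3a.buffer.dir");
   
     public File newBlockFile(Configuration conf, long blockSize)
         throws IOException {
       // Picks a directory with enough free space (round-robin across the
       // configured list) and creates a unique temporary file there.
       return allocator.createTmpFileForWrite("s3a-prefetch", blockSize, conf);
     }
   }
   ```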

[GitHub] [hadoop] YuanbenWang commented on pull request #5547: HDFS-16977. Forbid assigned characters in pathname.

2023-04-17 Thread via GitHub


YuanbenWang commented on PR #5547:
URL: https://github.com/apache/hadoop/pull/5547#issuecomment-1510845148

   @ayushtkn @Hexiaoqiao Could you please help review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org