[jira] [Commented] (HADOOP-18606) Add reason in x-ms-client-request-id on a retry API call.

2023-02-26 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693821#comment-17693821 ]

ASF GitHub Bot commented on HADOOP-18606:
-----------------------------------------

hadoop-yetus commented on PR #5299:
URL: https://github.com/apache/hadoop/pull/5299#issuecomment-1445811333

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 41s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m  9s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 6 new + 2 unchanged - 0 
fixed = 8 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5299 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2184c714cc2f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5299: HADOOP-18606. Add reason in x-ms-client-request-id on a retry API call.

2023-02-26 Thread via GitHub


hadoop-yetus commented on PR #5299:
URL: https://github.com/apache/hadoop/pull/5299#issuecomment-1445811333

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 41s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m  9s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 6 new + 2 unchanged - 0 
fixed = 8 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5299/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5299 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2184c714cc2f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 75594b923a084d7269a1704af4ab308144d60b00 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 

[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-26 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693820#comment-17693820 ]

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

hadoop-yetus commented on PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#issuecomment-1445810167

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 43s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 18s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 17s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  javac  |   0m 17s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  compile  |   0m 15s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |   0m 15s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 12 new + 2 unchanged - 0 
fixed = 14 total (was 2)  |
   | -1 :x: |  mvnsite  |   0m 17s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 15s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-26 Thread via GitHub


hadoop-yetus commented on PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#issuecomment-1445810167

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 43s | 
[/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 18s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 17s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  javac  |   0m 17s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  compile  |   0m 15s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | -1 :x: |  javac  |   0m 15s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 12 new + 2 unchanged - 0 
fixed = 14 total (was 2)  |
   | -1 :x: |  mvnsite  |   0m 17s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 15s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5399/6/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-azure in the patch failed with JDK 

[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-26 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693819#comment-17693819 ]

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

saxenapranav commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118348558


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -134,7 +140,7 @@ String getSasToken() {
   AbfsRestOperation(final AbfsRestOperationType operationType,
 final AbfsClient client,
 final String method,
-final URL url,
+URL url,

Review Comment:
   same comment for final.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -117,9 +122,10 @@ String getSasToken() {
   AbfsRestOperation(final AbfsRestOperationType operationType,
 final AbfsClient client,
 final String method,
-final URL url,
+URL url,

Review Comment:
   it can be left final. Any reason to remove it?
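   
   As an aside, a minimal illustration of what a `final` constructor parameter guarantees (toy code, not taken from this PR; the class and names below are made up):
   
   ```java
   class FinalParamExample {
     private final java.net.URL url;
   
     // 'final' documents that the constructor never rebinds the parameter.
     FinalParamExample(final java.net.URL url) {
       // url = null;   // would not compile: the parameter is final
       this.url = url;  // assigning the field is still allowed
     }
   }
   ```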





> ABFS: Customize and optimize timeouts made based on each separate request
> -
>
> Key: HADOOP-18632
> URL: https://issues.apache.org/jira/browse/HADOOP-18632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sree Bhattacharyya
>Assignee: Sree Bhattacharyya
>Priority: Minor
>  Labels: pull-request-available
>
> Today, all ABFS driver API request calls use the same default timeout
> values. This is sub-optimal in scenarios where a request is failing because
> it hit a particularly busy node and would succeed sooner simply by retrying
> more quickly.
> The change therefore chooses customized timeouts based on which API call is
> being made. Requests start with smaller, optimized timeout values, and the
> timeout is increased by an incremental factor on each subsequent retry, so
> that retries happen quickly while still allowing the request to succeed.
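
As a rough illustration of the escalating-timeout idea described above (a sketch only; the class, field names, and factor handling here are placeholders, not the actual patch):

```java
// Minimal sketch: per-retry timeout escalation with a configured base value,
// an increase factor applied on every retry, and a hard upper cap.
public final class EscalatingTimeout {
  private final int baseTimeoutSec;
  private final int increaseFactor;
  private final int maxTimeoutSec;

  public EscalatingTimeout(int baseTimeoutSec, int increaseFactor, int maxTimeoutSec) {
    this.baseTimeoutSec = baseTimeoutSec;
    this.increaseFactor = increaseFactor;
    this.maxTimeoutSec = maxTimeoutSec;
  }

  /** Timeout (in seconds) to use for the given attempt; attempt 0 is the first try. */
  public int timeoutForAttempt(int attempt) {
    long timeout = baseTimeoutSec;
    for (int i = 0; i < attempt; i++) {
      // grow the timeout on each retry, but never beyond the configured cap
      timeout = Math.min(maxTimeoutSec, timeout * increaseFactor);
    }
    return (int) timeout;
  }
}
```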






[GitHub] [hadoop] saxenapranav commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-26 Thread via GitHub


saxenapranav commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118348558


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -134,7 +140,7 @@ String getSasToken() {
   AbfsRestOperation(final AbfsRestOperationType operationType,
 final AbfsClient client,
 final String method,
-final URL url,
+URL url,

Review Comment:
   same comment for final.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -117,9 +122,10 @@ String getSasToken() {
   AbfsRestOperation(final AbfsRestOperationType operationType,
 final AbfsClient client,
 final String method,
-final URL url,
+URL url,

Review Comment:
   it can be left final. Any reason to remove it?






[GitHub] [hadoop] hadoop-yetus commented on pull request #5420: YARN-11442. Refactor FederationInterceptorREST Code.

2023-02-26 Thread via GitHub


hadoop-yetus commented on PR #5420:
URL: https://github.com/apache/hadoop/pull/5420#issuecomment-1445743376

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 18s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5420/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 
new + 10 unchanged - 0 fixed = 11 total (was 10)  |
   | +1 :green_heart: |  mvnsite  |   1m  5s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5420/3/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-router in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5420/3/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-yarn-server-router in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | -1 :x: |  spotbugs  |   0m 59s | 
[/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5420/3/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  20m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 14s |  |  hadoop-yarn-server-common 

[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-26 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693796#comment-17693796 ]

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118303110


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,244 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+/**
+ * Class handling whether timeout values should be optimized.
+ * Timeout values optimized per request level,
+ * based on configs in the settings.
+ */
+public class TimeoutOptimizer {
+  private AbfsConfiguration abfsConfiguration;
+  private URL url;
+  private AbfsRestOperationType opType;
+  private ExponentialRetryPolicy retryPolicy;
+  private int requestTimeout;
+  private int readTimeout = -1;
+  private int maxReqTimeout = -1;
+  private int timeoutIncRate = -1;
+  private boolean shouldOptimizeTimeout;
+
+  /**
+   * Constructor to initialize the parameters in class,
+   * depending upon what is configured in the settings.
+   * @param url request URL
+   * @param opType operation type
+   * @param retryPolicy retry policy set for this instance of AbfsClient
+   * @param abfsConfiguration current configuration
+   */
+  public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+      ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+    this.url = url;
+    this.opType = opType;
+    if (opType != null) {
+      this.retryPolicy = retryPolicy;
+      this.abfsConfiguration = abfsConfiguration;
+      String shouldOptimize =
+          abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS);
+      if (shouldOptimize == null || shouldOptimize.isEmpty()) {
+        // config is not set
+        this.shouldOptimizeTimeout = false;
+      }
+      else {
+        this.shouldOptimizeTimeout = Boolean.parseBoolean(shouldOptimize);
+        if (this.shouldOptimizeTimeout) {
+          // config is set to true
+          if (abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT) != null) {
+            this.maxReqTimeout = Integer.parseInt(
+                abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+          }
+          if (abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE) != null) {
+            this.timeoutIncRate = Integer.parseInt(
+                abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+          }
+          if (this.maxReqTimeout == -1 || this.timeoutIncRate == -1) {
+            this.shouldOptimizeTimeout = false;
+          } else {
+            initTimeouts();
+            updateUrl();
+          }
+        }
+      }
+    } else {
+      // optimization not required for opType == null
+      this.shouldOptimizeTimeout = false;
+    }
+  }
+
+  public void updateRetryTimeout(int retryCount) {
+    if (!this.shouldOptimizeTimeout) {
+      return;
+    }
+
+    // update all timeout values
+    updateTimeouts(retryCount);
+    updateUrl();
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+  public boolean getShouldOptimizeTimeout() {
+    return this.shouldOptimizeTimeout;
+  }

[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-26 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118303110


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,244 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+/**
+ * Class handling whether timeout values should be optimized.
+ * Timeout values optimized per request level,
+ * based on configs in the settings.
+ */
+public class TimeoutOptimizer {
+  private AbfsConfiguration abfsConfiguration;
+  private URL url;
+  private AbfsRestOperationType opType;
+  private ExponentialRetryPolicy retryPolicy;
+  private int requestTimeout;
+  private int readTimeout = -1;
+  private int maxReqTimeout = -1;
+  private int timeoutIncRate = -1;
+  private boolean shouldOptimizeTimeout;
+
+  /**
+   * Constructor to initialize the parameters in class,
+   * depending upon what is configured in the settings.
+   * @param url request URL
+   * @param opType operation type
+   * @param retryPolicy retry policy set for this instance of AbfsClient
+   * @param abfsConfiguration current configuration
+   */
+  public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+      ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+    this.url = url;
+    this.opType = opType;
+    if (opType != null) {
+      this.retryPolicy = retryPolicy;
+      this.abfsConfiguration = abfsConfiguration;
+      String shouldOptimize =
+          abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS);
+      if (shouldOptimize == null || shouldOptimize.isEmpty()) {
+        // config is not set
+        this.shouldOptimizeTimeout = false;
+      }
+      else {
+        this.shouldOptimizeTimeout = Boolean.parseBoolean(shouldOptimize);
+        if (this.shouldOptimizeTimeout) {
+          // config is set to true
+          if (abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT) != null) {
+            this.maxReqTimeout = Integer.parseInt(
+                abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+          }
+          if (abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE) != null) {
+            this.timeoutIncRate = Integer.parseInt(
+                abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+          }
+          if (this.maxReqTimeout == -1 || this.timeoutIncRate == -1) {
+            this.shouldOptimizeTimeout = false;
+          } else {
+            initTimeouts();
+            updateUrl();
+          }
+        }
+      }
+    } else {
+      // optimization not required for opType == null
+      this.shouldOptimizeTimeout = false;
+    }
+  }
+
+  public void updateRetryTimeout(int retryCount) {
+    if (!this.shouldOptimizeTimeout) {
+      return;
+    }
+
+    // update all timeout values
+    updateTimeouts(retryCount);
+    updateUrl();
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+  public boolean getShouldOptimizeTimeout() {
+    return this.shouldOptimizeTimeout;
+  }
+
+  public int getRequestTimeout() {
+    return requestTimeout;
+  }
+
+  public int getReadTimeout() {
+    return readTimeout;
+  }
+
+  public int getReadTimeout(final int defaultTimeout) {
+    if (readTimeout != -1 

[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-26 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693793#comment-17693793 ]

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118295742


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+  AbfsConfiguration abfsConfiguration;
+  private URL url;
+  private AbfsRestOperationType opType;
+  private ExponentialRetryPolicy retryPolicy;
+  private int requestTimeout;
+  private int readTimeout = -1;
+  private int connTimeout = -1;
+  private int maxReqTimeout;
+  private int timeoutIncRate;
+  private boolean shouldOptimizeTimeout;
+
+  public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+      ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+    this.url = url;
+    this.opType = opType;
+    if (opType != null) {
+      this.retryPolicy = retryPolicy;
+      this.abfsConfiguration = abfsConfiguration;
+      if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+        this.shouldOptimizeTimeout = false;
+      }
+      else {
+        this.shouldOptimizeTimeout = Boolean.parseBoolean(
+            abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+      }
+      if (this.shouldOptimizeTimeout) {
+        this.maxReqTimeout = Integer.parseInt(
+            abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+        this.timeoutIncRate = Integer.parseInt(
+            abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+        initTimeouts();
+        updateUrl();
+      }
+
+    } else {
+      this.shouldOptimizeTimeout = false;
+    }
+  }
+
+  public void updateRetryTimeout(int retryCount) {
+    if (!this.shouldOptimizeTimeout) {
+      return;
+    }
+
+    // update all timeout values
+    updateTimeouts(retryCount);
+    updateUrl();
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+  public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+  public int getRequestTimeout() { return requestTimeout; }
+
+  public int getReadTimeout() {
+    return readTimeout;
+  }
+
+  public int getReadTimeout(final int defaultTimeout) {
+    if (readTimeout != -1 && shouldOptimizeTimeout) {
+      return readTimeout;
+    }
+    return defaultTimeout;
+  }
+
+  public int getConnTimeout() {
+    return connTimeout;
+  }
+
+  public int getConnTimeout(final int defaultTimeout) {
+    if (connTimeout == -1) {
+      return defaultTimeout;
+    }
+    return connTimeout;
+  }
+
+  private void initTimeouts() {
+    if (!shouldOptimizeTimeout) {
+      requestTimeout = -1;
+      readTimeout = -1;
+      connTimeout = -1;
+      return;
+    }
+
+    String query = url.getQuery();
+    int timeoutPos = query.indexOf("timeout");
+    if (timeoutPos < 0) {
+      // no value of timeout exists in the URL
+      // no optimization is needed for this particular request as well
+      requestTimeout = -1;
+      readTimeout = -1;
+      connTimeout = -1;
+      shouldOptimizeTimeout = false;
+      return;
+    }
+
+    String timeout = "";
+

[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-26 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118295742


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+  AbfsConfiguration abfsConfiguration;
+  private URL url;
+  private AbfsRestOperationType opType;
+  private ExponentialRetryPolicy retryPolicy;
+  private int requestTimeout;
+  private int readTimeout = -1;
+  private int connTimeout = -1;
+  private int maxReqTimeout;
+  private int timeoutIncRate;
+  private boolean shouldOptimizeTimeout;
+
+  public TimeoutOptimizer(URL url, AbfsRestOperationType opType,
+      ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+    this.url = url;
+    this.opType = opType;
+    if (opType != null) {
+      this.retryPolicy = retryPolicy;
+      this.abfsConfiguration = abfsConfiguration;
+      if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+        this.shouldOptimizeTimeout = false;
+      }
+      else {
+        this.shouldOptimizeTimeout = Boolean.parseBoolean(
+            abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+      }
+      if (this.shouldOptimizeTimeout) {
+        this.maxReqTimeout = Integer.parseInt(
+            abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+        this.timeoutIncRate = Integer.parseInt(
+            abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+        initTimeouts();
+        updateUrl();
+      }
+
+    } else {
+      this.shouldOptimizeTimeout = false;
+    }
+  }
+
+  public void updateRetryTimeout(int retryCount) {
+    if (!this.shouldOptimizeTimeout) {
+      return;
+    }
+
+    // update all timeout values
+    updateTimeouts(retryCount);
+    updateUrl();
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+  public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+  public int getRequestTimeout() { return requestTimeout; }
+
+  public int getReadTimeout() {
+    return readTimeout;
+  }
+
+  public int getReadTimeout(final int defaultTimeout) {
+    if (readTimeout != -1 && shouldOptimizeTimeout) {
+      return readTimeout;
+    }
+    return defaultTimeout;
+  }
+
+  public int getConnTimeout() {
+    return connTimeout;
+  }
+
+  public int getConnTimeout(final int defaultTimeout) {
+    if (connTimeout == -1) {
+      return defaultTimeout;
+    }
+    return connTimeout;
+  }
+
+  private void initTimeouts() {
+    if (!shouldOptimizeTimeout) {
+      requestTimeout = -1;
+      readTimeout = -1;
+      connTimeout = -1;
+      return;
+    }
+
+    String query = url.getQuery();
+    int timeoutPos = query.indexOf("timeout");
+    if (timeoutPos < 0) {
+      // no value of timeout exists in the URL
+      // no optimization is needed for this particular request as well
+      requestTimeout = -1;
+      readTimeout = -1;
+      connTimeout = -1;
+      shouldOptimizeTimeout = false;
+      return;
+    }
+
+    String timeout = "";
+    if (opType == AbfsRestOperationType.CreateFileSystem) {

Review Comment:
   Resolving for future addition. 




[GitHub] [hadoop] anoopsjohn opened a new pull request, #5436: YARN-10896 RM fail over is not reporting the nodes DECOMMISSIONED

2023-02-26 Thread via GitHub


anoopsjohn opened a new pull request, #5436:
URL: https://github.com/apache/hadoop/pull/5436

   
   
   ### Description of PR
   When the RM is not in HA mode and restarts, it re-reads the exclude file and
keeps track of the NMs whose decommission was requested, so their in-memory
state correctly transitions to DECOMMISSIONED.
   In HA mode, however, this state transition does not happen properly when a
standby RM becomes active. The decommissioning NMs do receive a kill request
from the RM when they heartbeat to the new active RM, but their in-memory state
never becomes DECOMMISSIONED, so the node state API does not report
DECOMMISSIONED for them. This PR fixes that issue.
   
   ### How was this patch tested?
   Manually tested with RM failover while some of the NMs are in graceful 
decommissioning state.
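   
   For anyone re-verifying this manually, node states can also be queried through the public `YarnClient` API. The snippet below is only an illustrative sketch (it assumes a reachable cluster whose configuration is on the classpath) and is not part of this PR:
   
   ```java
   import org.apache.hadoop.yarn.api.records.NodeReport;
   import org.apache.hadoop.yarn.api.records.NodeState;
   import org.apache.hadoop.yarn.client.api.YarnClient;
   import org.apache.hadoop.yarn.conf.YarnConfiguration;
   
   public class ListDecommissionedNodes {
     public static void main(String[] args) throws Exception {
       YarnClient client = YarnClient.createYarnClient();
       client.init(new YarnConfiguration());
       client.start();
       try {
         // After the failover, gracefully decommissioned NMs should show up here.
         for (NodeReport report : client.getNodeReports(NodeState.DECOMMISSIONED)) {
           System.out.println(report.getNodeId() + " -> " + report.getNodeState());
         }
       } finally {
         client.stop();
       }
     }
   }
   ```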
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-26 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693783#comment-17693783 ]

ASF GitHub Bot commented on HADOOP-18632:
-----------------------------------------

saxenapranav commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118282237


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -167,6 +169,11 @@ public String getResponseHeader(String httpHeader) {
 return connection.getHeaderField(httpHeader);
   }
 
+  @VisibleForTesting
+  protected TimeoutOptimizer getTimeoutOptimizer() {

Review Comment:
   We have it package-protected:
   ```
   TimeoutOptimizer getTimeoutOptimizer()
   ```



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -94,6 +96,9 @@ public URL getUrl() {
 return url;
   }
 
+  @VisibleForTesting
+  protected TimeoutOptimizer getTimeoutOptimizer() { return timeoutOptimizer; }

Review Comment:
   we have it package-protected: 
   ```
   TimeoutOptimizer getTimeoutOptimizer() { return timeoutOptimizer; }
   ```





> ABFS: Customize and optimize timeouts made based on each separate request
> -
>
> Key: HADOOP-18632
> URL: https://issues.apache.org/jira/browse/HADOOP-18632
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sree Bhattacharyya
>Assignee: Sree Bhattacharyya
>Priority: Minor
>  Labels: pull-request-available
>
> Today, all ABFS driver API request calls use the same default timeout
> values. This is sub-optimal in scenarios where a request is failing because
> it hit a particularly busy node and would succeed sooner simply by retrying
> more quickly.
> The change therefore chooses customized timeouts based on which API call is
> being made. Requests start with smaller, optimized timeout values, and the
> timeout is increased by an incremental factor on each subsequent retry, so
> that retries happen quickly while still allowing the request to succeed.






[GitHub] [hadoop] saxenapranav commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-26 Thread via GitHub


saxenapranav commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1118282237


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java:
##
@@ -167,6 +169,11 @@ public String getResponseHeader(String httpHeader) {
 return connection.getHeaderField(httpHeader);
   }
 
+  @VisibleForTesting
+  protected TimeoutOptimizer getTimeoutOptimizer() {

Review Comment:
   We have it package-protected:
   ```
   TimeoutOptimizer getTimeoutOptimizer()
   ```



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -94,6 +96,9 @@ public URL getUrl() {
 return url;
   }
 
+  @VisibleForTesting
+  protected TimeoutOptimizer getTimeoutOptimizer() { return timeoutOptimizer; }

Review Comment:
   we have it package-protected: 
   ```
   TimeoutOptimizer getTimeoutOptimizer() { return timeoutOptimizer; }
   ```






[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5382: YARN-8972. [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size.

2023-02-26 Thread via GitHub


slfan1989 commented on code in PR #5382:
URL: https://github.com/apache/hadoop/pull/5382#discussion_r111823


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestApplicationSubmissionContextInterceptor.java:
##
@@ -0,0 +1,160 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.router.clientrm;
+
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.LambdaTestUtils;
+import org.apache.hadoop.yarn.api.protocolrecords.SubmitApplicationRequest;
+import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
+import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
+import org.apache.hadoop.yarn.api.records.LocalResource;
+import org.apache.hadoop.yarn.api.records.LocalResourceType;
+import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
+import org.apache.hadoop.yarn.api.records.Priority;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.URL;
+import org.apache.hadoop.yarn.api.records.impl.pb.ApplicationSubmissionContextPBImpl;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnException;
+import org.apache.hadoop.yarn.server.router.RouterServerUtil;
+import org.junit.Test;
+
+/**
+ * Extends the {@code BaseRouterClientRMTest} and overrides methods in order to
+ * use the {@code RouterClientRMService} pipeline test cases for testing the
+ * {@code ApplicationSubmissionContextInterceptor} class. The tests for
+ * {@code RouterClientRMService} has been written cleverly so that it can be
+ * reused to validate different request interceptor chains.
+ */
+public class TestApplicationSubmissionContextInterceptor extends
+    BaseRouterClientRMTest {
+
+  @Override
+  protected YarnConfiguration createConfiguration() {
+    YarnConfiguration conf = new YarnConfiguration();
+    String mockPassThroughInterceptorClass =
+        PassThroughClientRequestInterceptor.class.getName();
+
+    // Create a request interceptor pipeline for testing. The last one in the
+    // chain is the application submission context interceptor that checks
+    // for exceeded submission context size
+    // The others in the chain will simply forward it to the next one in the
+    // chain
+    conf.set(YarnConfiguration.ROUTER_CLIENTRM_INTERCEPTOR_CLASS_PIPELINE,

Review Comment:
   Thanks for your suggestion, I will add documentation.






[GitHub] [hadoop] hadoop-yetus commented on pull request #5434: HDFS-16934. org.apache.hadoop.hdfs.tools.TestDFSAdmin#testAllDatanodesReconfig regression

2023-02-26 Thread via GitHub


hadoop-yetus commented on PR #5434:
URL: https://github.com/apache/hadoop/pull/5434#issuecomment-1445374330

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 205m 32s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5434/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 312m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5434/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5434 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b482bad4c369 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d2846c2703df45db725d704bbe864b9d9b86966d |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5434/3/testReport/ |
   | Max. process+thread count | 2989 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5434/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Updated] (HADOOP-18646) Upgrade Netty to 4.1.89.Final

2023-02-26 Thread Aleksandr Nikolaev (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Nikolaev updated HADOOP-18646:

Status: Patch Available  (was: Open)

> Upgrade Netty to 4.1.89.Final
> -
>
> Key: HADOOP-18646
> URL: https://issues.apache.org/jira/browse/HADOOP-18646
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5, 3.2.5
>
>
> h4. Netty 4.1.89 includes a fix for the following CVE:
> [CVE-2022-41881|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41881]
>  
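
As a small, hedged sketch (not part of this change), one way to confirm which Netty artifacts and versions actually end up on a process's classpath after such an upgrade is Netty's public io.netty.util.Version API:

```java
import java.util.Map;
import io.netty.util.Version;

public class NettyVersionCheck {
  public static void main(String[] args) {
    // Version.identify() reads the version properties bundled in each
    // Netty jar and reports the artifact id and version actually loaded.
    Map<String, Version> versions = Version.identify();
    for (Version v : versions.values()) {
      System.out.println(v.artifactId() + " " + v.artifactVersion());
    }
  }
}
```

Each netty-* jar ships such a properties file, so once the upgrade is in place the output should list 4.1.89.Final for every Netty module pulled in by hadoop-project.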



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




[jira] [Commented] (HADOOP-18646) Upgrade Netty to 4.1.89.Final

2023-02-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17693604#comment-17693604
 ] 

ASF GitHub Bot commented on HADOOP-18646:
-

hadoop-yetus commented on PR #5435:
URL: https://github.com/apache/hadoop/pull/5435#issuecomment-1445299913

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  61m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 18s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  87m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5435 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 98214de68199 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f852b7cb38ba4bb3448412b8d6c1ed47e03c3157 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/1/testReport/ |
   | Max. process+thread count | 699 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Upgrade Netty to 4.1.89.Final
> -
>
> Key: HADOOP-18646
> URL: https://issues.apache.org/jira/browse/HADOOP-18646
> Project: Hadoop Common
>  Issue 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5435: HADOOP-18646 update Netty dependency

2023-02-26 Thread via GitHub


hadoop-yetus commented on PR #5435:
URL: https://github.com/apache/hadoop/pull/5435#issuecomment-1445299913

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  61m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 18s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  87m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5435 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 98214de68199 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f852b7cb38ba4bb3448412b8d6c1ed47e03c3157 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/1/testReport/ |
   | Max. process+thread count | 699 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

