Re: [PR] HADOOP-19124. Update org.ehcache from 3.3.1 to 3.8.2. [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6665:
URL: https://github.com/apache/hadoop/pull/6665#issuecomment-2022016531

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  4s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  12m  4s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 26s | 
[/branch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/branch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. 
 |
   | -1 :x: |  compile  |   0m 26s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  mvnsite  |   0m 24s | 
[/branch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/branch-mvnsite-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 25s | 
[/branch-javadoc-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. 
 |
   | -1 :x: |  javadoc  |   0m 32s | 
[/branch-javadoc-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/branch-javadoc-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  shadedclient  |   1m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 21s | 
[/patch-mvninstall-hadoop-client-modules_hadoop-client-check-invariants.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/patch-mvninstall-hadoop-client-modules_hadoop-client-check-invariants.txt)
 |  hadoop-client-check-invariants in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-client-modules_hadoop-client-check-test-invariants.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/patch-mvninstall-hadoop-client-modules_hadoop-client-check-test-invariants.txt)
 |  hadoop-client-check-test-invariants in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 21s | 
[/patch-mvninstall-hadoop-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/patch-mvninstall-hadoop-project.txt)
 |  hadoop-project in the patch failed.  |
   | -1 :x: |  compile  |   0m 21s | 
[/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 20s | 
[/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root in the patch failed with JDK 

Re: [PR] YARN-11664: Remove HDFS Binaries/Jars Dependency From Yarn [hadoop]

2024-03-26 Thread via GitHub


shameersss1 commented on PR #6631:
URL: https://github.com/apache/hadoop/pull/6631#issuecomment-2022015512

   > looks ok to me, but the hdfs mailing list should be invited to comment.
   > 
   > made some minor suggestions
   
   I have sent an email to the hdfs mailing list asking for their opinion as well.





Re: [PR] HADOOP-19071. Update maven-surefire-plugin from 3.0.0 to 3.2.5. [hadoop]

2024-03-26 Thread via GitHub


slfan1989 commented on code in PR #6664:
URL: https://github.com/apache/hadoop/pull/6664#discussion_r1540513670


##
dev-support/bin/hadoop.sh:
##
@@ -614,7 +615,7 @@ function shadedclient_rebuild
   echo_and_redirect "${logfile}" \
 "${MAVEN}" "${MAVEN_ARGS[@]}" verify -fae --batch-mode -am \
   "${modules[@]}" \
-  -DskipShade -Dtest=NoUnitTests -Dmaven.javadoc.skip=true 
-Dcheckstyle.skip=true \
+  -DskipShade -Dsurefire.failIfNoSpecifiedTests=false -Dtest=NoUnitTests 
-Dmaven.javadoc.skip=true -Dcheckstyle.skip=true \

Review Comment:
   I carefully reviewed the build report, and 
`-Dsurefire.failIfNoSpecifiedTests=false` is necessary because we have some 
modules without JUnit tests. We pass `-Dtest=NoUnitTests` so that nothing runs 
there, but newer Surefire fails the build by default when the specified test 
pattern matches no tests, so the flag is needed to suppress that error.
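   
   For context, a minimal sketch of the Surefire behavior involved (hypothetical 
standalone commands for illustration, not output copied from this build):
   
   ```shell
   # A deliberately unmatchable test class in a module with no JUnit tests:
   # recent Surefire fails the build by default, because
   # failIfNoSpecifiedTests defaults to true whenever -Dtest is given.
   mvn verify -Dtest=NoUnitTests
   
   # Relaxing the check lets the build proceed with zero tests executed:
   mvn verify -Dtest=NoUnitTests -Dsurefire.failIfNoSpecifiedTests=false
   ```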
   
   






[jira] [Commented] (HADOOP-19071) Update maven-surefire-plugin from 3.0.0 to 3.2.5

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831177#comment-17831177
 ] 

ASF GitHub Bot commented on HADOOP-19071:
-----------------------------------------

slfan1989 commented on code in PR #6664:
URL: https://github.com/apache/hadoop/pull/6664#discussion_r1540513670


##
dev-support/bin/hadoop.sh:
##
@@ -614,7 +615,7 @@ function shadedclient_rebuild
   echo_and_redirect "${logfile}" \
 "${MAVEN}" "${MAVEN_ARGS[@]}" verify -fae --batch-mode -am \
   "${modules[@]}" \
-  -DskipShade -Dtest=NoUnitTests -Dmaven.javadoc.skip=true 
-Dcheckstyle.skip=true \
+  -DskipShade -Dsurefire.failIfNoSpecifiedTests=false -Dtest=NoUnitTests 
-Dmaven.javadoc.skip=true -Dcheckstyle.skip=true \

Review Comment:
   I carefully reviewed the build report, and 
`-Dsurefire.failIfNoSpecifiedTests=false` is necessary because we have some 
modules without JUnit tests. We pass `-Dtest=NoUnitTests` so that nothing runs 
there, but newer Surefire fails the build by default when the specified test 
pattern matches no tests, so the flag is needed to suppress that error.
   
   





> Update maven-surefire-plugin from 3.0.0 to 3.2.5  
> -------------------------------------------------
>
> Key: HADOOP-19071
> URL: https://issues.apache.org/jira/browse/HADOOP-19071
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, common
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Shilun Fan
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>








Re: [PR] HDFS-17216. Distcp: When handle the small files, the bandwidth parameter will be invalid, fix this bug. [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6138:
URL: https://github.com/apache/hadoop/pull/6138#issuecomment-2022013520

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6138/14/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 1 new + 6 unchanged - 0 
fixed = 7 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  19m 33s | 
[/patch-unit-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6138/14/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-distcp in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.util.TestThrottledInputStream |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6138/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6138 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 44f97d7d362a 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 
13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1b025cd46eb48b41ad1f88bc79a274fcc729c4ef |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6138/14/testReport/ |
   | Max. process+thread count | 754 (vs. ulimit of 5500) |
   | modules | C: 


Re: [PR] HADOOP-19071. Update maven-surefire-plugin from 3.0.0 to 3.2.5. [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6664:
URL: https://github.com/apache/hadoop/pull/6664#issuecomment-2022007319

   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6664/2/console in 
case of problems.
   





Re: [PR] HDFS-17439. Support -nonSuperUser for NNThroughputBenchmark: useful f… [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6677:
URL: https://github.com/apache/hadoop/pull/6677#issuecomment-2022006984

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m  4s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6677/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 20 new + 113 unchanged - 4 fixed = 133 total (was 
117)  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 17s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 200m 31s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6677/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 353m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6677/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6677 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 30c762afd2ce 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e6dd85d00a599d017f51cfd3447b6e67ac5e7eca |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

Re: [PR] YARN-11664: Remove HDFS Binaries/Jars Dependency From Yarn [hadoop]

2024-03-26 Thread via GitHub


shameersss1 commented on code in PR #6631:
URL: https://github.com/apache/hadoop/pull/6631#discussion_r1540493331


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOStreamPair.java:
##
@@ -15,15 +15,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hadoop.hdfs.protocol.datatransfer;
+package org.apache.hadoop.io;
 
 import java.io.Closeable;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
 
 import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.io.IOUtils;
 
 /**
  * A little struct class to wrap an InputStream and an OutputStream.

Review Comment:
   ack






Re: [PR] YARN-11664: Remove HDFS Binaries/Jars Dependency From Yarn [hadoop]

2024-03-26 Thread via GitHub


shameersss1 commented on code in PR #6631:
URL: https://github.com/apache/hadoop/pull/6631#discussion_r1540456198


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/Constants.java:
##
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop;
+
+import org.apache.hadoop.io.Text;
+
+/**
+ * This class contains constants for configuration keys and default values.
+ */
+public final class Constants {

Review Comment:
   ack



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/Constants.java:
##
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop;
+
+import org.apache.hadoop.io.Text;
+
+/**
+ * This class contains constants for configuration keys and default values.
+ */
+public final class Constants {
+
+  public static final Text HDFS_DELEGATION_KIND =

Review Comment:
   ack






[PR] HDFS-17420. EditLogTailer and FSEditLogLoader support FGL [hadoop]

2024-03-26 Thread via GitHub


ZanderXu opened a new pull request, #6679:
URL: https://github.com/apache/hadoop/pull/6679

   EditLogTailer and FSEditLogLoader support FGL.
   
   Both EditLogTailer and FSEditLogLoader need the global lock.





[jira] [Updated] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-03-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19130:

Labels: pull-request-available  (was: )

> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-03-27-09-59-12-381.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> When using the fs shell to rename a file on an FTP server, it always fails
> with "Input/output error" when a fully qualified path is passed to it
> (e.g. ftp://user:password@localhost/pathxxx). The reason is that the
> changeWorkingDirectory command underneath is passed a string with a file://
> URI prefix, which the FTP server does not understand.
> !image-2024-03-27-09-59-12-381.png!
> The solution should be to pass
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the file://
> URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> A related issue was already filed:
> https://issues.apache.org/jira/browse/HADOOP-8653
> I wonder why this bug hasn't been fixed.
>  
>  






[jira] [Commented] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831137#comment-17831137
 ] 

ASF GitHub Bot commented on HADOOP-19130:
-----------------------------------------

zj619 opened a new pull request, #6678:
URL: https://github.com/apache/hadoop/pull/6678

   
   
   ### Description of PR
   For more information about this PR, please refer to the following issue:
   [HADOOP-19130](https://issues.apache.org/jira/browse/HADOOP-19130): fixes a 
bug in FTPFileSystem.rename(FTPClient client, Path src, Path dst) by passing 
absoluteSrc.getParent().toUri().getPath().toString() to changeWorkingDirectory, 
avoiding the file:// URI prefix that the FTP server cannot understand.
   
   
   ### How was this patch tested?
   Added FTPFileSystem.testRenameFileWithFullQualifiedPath().
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Priority: Major
> Attachments: image-2024-03-27-09-59-12-381.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> When using the fs shell to rename a file on an FTP server, it always fails
> with "Input/output error" when a fully qualified path is passed to it
> (e.g. ftp://user:password@localhost/pathxxx). The reason is that the
> changeWorkingDirectory command underneath is passed a string with a file://
> URI prefix, which the FTP server does not understand.
> !image-2024-03-27-09-59-12-381.png!
> The solution should be to pass
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the file://
> URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> A related issue was already filed:
> https://issues.apache.org/jira/browse/HADOOP-8653
> I wonder why this bug hasn't been fixed.
>  
>  






[PR] HADOOP-19130. FTPFileSystem rename with full qualified path broken [hadoop]

2024-03-26 Thread via GitHub


zj619 opened a new pull request, #6678:
URL: https://github.com/apache/hadoop/pull/6678

   
   
   ### Description of PR
   For more information about this PR, please refer to the following issue:
   [HADOOP-19130](https://issues.apache.org/jira/browse/HADOOP-19130): fixes a 
bug in FTPFileSystem.rename(FTPClient client, Path src, Path dst) by passing 
absoluteSrc.getParent().toUri().getPath().toString() to changeWorkingDirectory, 
avoiding the file:// URI prefix that the FTP server cannot understand.
   
   
   ### How was this patch tested?
   Added FTPFileSystem.testRenameFileWithFullQualifiedPath().
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Commented] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE-2024-23944

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831135#comment-17831135
 ] 

ASF GitHub Bot commented on HADOOP-19116:
-----------------------------------------

hadoop-yetus commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2021802435

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  11m 23s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  18m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  25m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  21m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  11m  4s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  11m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  14m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 605m 34s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 775m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.yarn.applications.distributedshell.TestDSTimelineV20 |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
   |   | hadoop.yarn.client.api.impl.TestAMRMClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6675 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux b537a41f8b84 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 712c2ec9cf7e768675a0c6cb262e1a0a3812b285 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/testReport/ |
   | Max. process+thread count | 3839 (vs. ulimit of 5500) |
   | modules | C: hadoop-project . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> update to zookeeper client 3.8.4 due to CVE-2024-23944
> 

Re: [PR] HADOOP-19116. Update to zookeeper client 3.8.4 due to CVE-2024-23944 [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2021802435

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  11m 23s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  18m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  25m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  21m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  11m  4s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  11m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  14m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 605m 34s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 775m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.yarn.applications.distributedshell.TestDSTimelineV20 |
   |   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
   |   | hadoop.yarn.client.api.impl.TestAMRMClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6675 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux b537a41f8b84 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 712c2ec9cf7e768675a0c6cb262e1a0a3812b285 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/testReport/ |
   | Max. process+thread count | 3839 (vs. ulimit of 5500) |
   | modules | C: hadoop-project . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/2/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-03-26 Thread shawn (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shawn updated HADOOP-19130:
---------------------------
External issue ID:   (was: 
https://issues.apache.org/jira/browse/HADOOP-8653)

> FTPFileSystem rename with full qualified path broken
> 
>
> Key: HADOOP-19130
> URL: https://issues.apache.org/jira/browse/HADOOP-19130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2, 3.3.3, 3.3.4, 3.3.6
>Reporter: shawn
>Priority: Major
> Attachments: image-2024-03-27-09-59-12-381.png
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> When using the fs shell to rename a file on an FTP server, it always fails
> with "Input/output error" when a fully qualified path is passed to it
> (e.g. ftp://user:password@localhost/pathxxx). The reason is that the
> changeWorkingDirectory command underneath is passed a string with a file://
> URI prefix, which the FTP server does not understand.
> !image-2024-03-27-09-59-12-381.png!
> The solution should be to pass
> absoluteSrc.getParent().toUri().getPath().toString() to avoid the file://
> URI prefix, like this: 
> {code:java}
> --- a/FTPFileSystem.java
> +++ b/FTPFileSystem.java
> @@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
>        throw new IOException("Destination path " + dst
>            + " already exist, cannot rename!");
>      }
> -    String parentSrc = absoluteSrc.getParent().toUri().toString();
> -    String parentDst = absoluteDst.getParent().toUri().toString();
> +    URI parentSrc = absoluteSrc.getParent().toUri();
> +    URI parentDst = absoluteDst.getParent().toUri();
>      String from = src.getName();
>      String to = dst.getName();
> -    if (!parentSrc.equals(parentDst)) {
> +    if (!parentSrc.toString().equals(parentDst.toString())) {
>        throw new IOException("Cannot rename parent(source): " + parentSrc
>            + ", parent(destination):  " + parentDst);
>      }
> -    client.changeWorkingDirectory(parentSrc);
> +    client.changeWorkingDirectory(parentSrc.getPath().toString());
>      boolean renamed = client.rename(from, to);
>      return renamed;
>    }{code}
> A related issue was already filed:
> https://issues.apache.org/jira/browse/HADOOP-8653
> I wonder why this bug hasn't been fixed.
>  
>  






[jira] [Created] (HADOOP-19130) FTPFileSystem rename with full qualified path broken

2024-03-26 Thread shawn (Jira)
shawn created HADOOP-19130:
---------------------------

 Summary: FTPFileSystem rename with full qualified path broken
 Key: HADOOP-19130
 URL: https://issues.apache.org/jira/browse/HADOOP-19130
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.3.6, 3.3.4, 3.3.3, 0.20.2
Reporter: shawn
 Attachments: image-2024-03-27-09-59-12-381.png

When using the fs shell to rename a file on an FTP server, it always fails with
"Input/output error" when a fully qualified path is passed to it
(e.g. ftp://user:password@localhost/pathxxx). The reason is that the
changeWorkingDirectory command underneath is passed a string with a file://
URI prefix, which the FTP server does not understand.

!image-2024-03-27-09-59-12-381.png!

The solution should be to pass
absoluteSrc.getParent().toUri().getPath().toString() to avoid the file:// URI
prefix, like this: 
{code:java}
--- a/FTPFileSystem.java
+++ b/FTPFileSystem.java
@@ -549,15 +549,15 @@ public class FTPFileSystem extends FileSystem {
       throw new IOException("Destination path " + dst
           + " already exist, cannot rename!");
     }
-    String parentSrc = absoluteSrc.getParent().toUri().toString();
-    String parentDst = absoluteDst.getParent().toUri().toString();
+    URI parentSrc = absoluteSrc.getParent().toUri();
+    URI parentDst = absoluteDst.getParent().toUri();
     String from = src.getName();
     String to = dst.getName();
-    if (!parentSrc.equals(parentDst)) {
+    if (!parentSrc.toString().equals(parentDst.toString())) {
       throw new IOException("Cannot rename parent(source): " + parentSrc
           + ", parent(destination):  " + parentDst);
     }
-    client.changeWorkingDirectory(parentSrc);
+    client.changeWorkingDirectory(parentSrc.getPath().toString());
     boolean renamed = client.rename(from, to);
     return renamed;
   }{code}

A related issue was already filed:
https://issues.apache.org/jira/browse/HADOOP-8653

I wonder why this bug hasn't been fixed.
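
To make the mismatch concrete, here is a minimal sketch of the difference (a
hypothetical standalone snippet; only org.apache.hadoop.fs.Path is the real
API, the demo class itself is made up):
{code:java}
import java.net.URI;

import org.apache.hadoop.fs.Path;

public class FtpCwdArgumentDemo {
  public static void main(String[] args) {
    Path src = new Path("ftp://user:password@localhost/dir/file.txt");
    URI parentSrc = src.getParent().toUri();

    // What rename() currently hands to client.changeWorkingDirectory():
    // the full URI string, scheme and authority included.
    System.out.println(parentSrc.toString()); // ftp://user:password@localhost/dir

    // What the FTP CWD command actually expects: the path component only.
    System.out.println(parentSrc.getPath());  // /dir
  }
}
{code}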

 

 






Re: [PR] HDFS-17412. [FGL] Client RPCs involving maintenance supports fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


ferhui commented on PR #6667:
URL: https://github.com/apache/hadoop/pull/6667#issuecomment-2021773879

   @ZanderXu there are conflicts after merging other PRs. Could you resolve them?





Re: [PR] HDFS-17417. [FGL] HeartbeatManager and DatanodeAdminMonitor support fine-grained locking [hadoop]

2024-03-26 Thread via GitHub


ferhui commented on PR #6656:
URL: https://github.com/apache/hadoop/pull/6656#issuecomment-2021771515

   @ZanderXu Thanks for the contribution. @huangzhaobo99 Thanks for the review. Merged.





Re: [PR] HDFS-17417. [FGL] HeartbeatManager and DatanodeAdminMonitor support fine-grained locking [hadoop]

2024-03-26 Thread via GitHub


ferhui merged PR #6656:
URL: https://github.com/apache/hadoop/pull/6656





Re: [PR] HDFS-17415. [FGL] RPCs in NamenodeProtocol support fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


ferhui commented on PR #6654:
URL: https://github.com/apache/hadoop/pull/6654#issuecomment-2021770281

   Thanks for the contribution. Merged.





Re: [PR] HDFS-17415. [FGL] RPCs in NamenodeProtocol support fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


ferhui merged PR #6654:
URL: https://github.com/apache/hadoop/pull/6654





Re: [PR] HDFS-17410. [FGL] Client RPCs that changes file attributes supports fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


ferhui commented on PR #6634:
URL: https://github.com/apache/hadoop/pull/6634#issuecomment-2021767985

   Thanks for the contribution. Merged.





Re: [PR] HDFS-17410. [FGL] Client RPCs that changes file attributes supports fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


ferhui merged PR #6634:
URL: https://github.com/apache/hadoop/pull/6634





[jira] [Commented] (HADOOP-19093) Improve rate limiting through ABFS in Manifest Committer

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831118#comment-17831118
 ] 

ASF GitHub Bot commented on HADOOP-19093:
-----------------------------------------

hadoop-yetus commented on PR #6596:
URL: https://github.com/apache/hadoop/pull/6596#issuecomment-2021753216

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 14s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  18m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   5m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   6m 14s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 23 unchanged - 0 fixed = 24 total (was 
23)  |
   | -1 :x: |  mvnsite  |   0m 56s | 
[/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt)
 |  hadoop-mapreduce-client-core in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 54s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 42s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 46s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  
hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 
with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 4 new + 
15 unchanged - 0 fixed = 19 total (was 15)  |
   | -1 :x: |  spotbugs  |   0m 48s | 

Re: [PR] HADOOP-19093. [ABFS] Improve rate limiting through ABFS in Manifest Committer [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6596:
URL: https://github.com/apache/hadoop/pull/6596#issuecomment-2021753216

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 14s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  18m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   5m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   6m 14s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 23 unchanged - 0 fixed = 24 total (was 
23)  |
   | -1 :x: |  mvnsite  |   0m 56s | 
[/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt)
 |  hadoop-mapreduce-client-core in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 54s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 42s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javadoc  |   0m 46s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  
hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 
with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 4 new + 
15 unchanged - 0 fixed = 19 total (was 15)  |
   | -1 :x: |  spotbugs  |   0m 48s | 
[/patch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6596/5/artifact/out/patch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt)
 |  hadoop-mapreduce-client-core in the patch 

Re: [PR] HDFS-17429. Fixing wrong log file name in datatransfer Sender.java [hadoop]

2024-03-26 Thread via GitHub


slfan1989 commented on PR #6670:
URL: https://github.com/apache/hadoop/pull/6670#issuecomment-2021608269

   @wzk784533 Thanks for the contribution! Merged into trunk. @ayushtkn Thanks 
for the review!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17429. Fixing wrong log file name in datatransfer Sender.java [hadoop]

2024-03-26 Thread via GitHub


slfan1989 merged PR #6670:
URL: https://github.com/apache/hadoop/pull/6670


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11663. [Federation] Add Cache Entity Nums Limit. [hadoop]

2024-03-26 Thread via GitHub


slfan1989 commented on code in PR #6662:
URL: https://github.com/apache/hadoop/pull/6662#discussion_r1540208851


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -4031,6 +4031,11 @@ public static boolean isAclEnabled(Configuration conf) {
   // 5 minutes
   public static final int DEFAULT_FEDERATION_CACHE_TIME_TO_LIVE_SECS = 5 * 60;
 
+  public static final String FEDERATION_CACHE_ENTITY_NUMS =
+  FEDERATION_PREFIX + "cache-entity.nums";
+  // default 1000

Review Comment:
   Thanks for reviewing the code! I will improve the code.
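
The diff hunk above is cut off right after the `// default 1000` marker. For 
orientation, here is a minimal sketch of the conventional YarnConfiguration 
key/default pair it introduces; only the string key and the value 1000 come 
from the diff, while the name of the default constant is an assumption 
following the surrounding style.

```java
// Sketch: FEDERATION_PREFIX and the key string are from the diff above; the
// default-constant name is assumed, not copied from the merged patch.
public static final String FEDERATION_CACHE_ENTITY_NUMS =
    FEDERATION_PREFIX + "cache-entity.nums";
public static final int DEFAULT_FEDERATION_CACHE_ENTITY_NUMS = 1000;
```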



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831098#comment-17831098
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540183142


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -283,7 +285,7 @@ String getClientLatency() {
   private boolean executeHttpOperation(final int retryCount,
 TracingContext tracingContext) throws AzureBlobFileSystemException {
 AbfsHttpOperation httpOperation;
-boolean wasIOExceptionThrown = false;
+boolean wasExceptionThrown = false;

Review Comment:
   Maybe add a comment on the usage of this.





> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests. If requests are failing because of throttling 
> at the server, we update our metrics, and the client-side backoff is 
> calculated from those metrics.
> This PR enhances the logic that decides which requests should be considered 
> when computing the client-side backoff interval, as follows:
> For each request made by the ABFS driver, we determine whether it should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.
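
As a reading aid, the six rules above reduce to a single small predicate. A 
minimal, self-contained sketch follows, assuming illustrative names: the 
class, the method, and the failure-reason abbreviations are placeholders, not 
the ABFS driver's actual identifiers.

```java
import java.net.HttpURLConnection;

public final class CstContributionSketch {

  // Placeholder abbreviations for the three 503 client-limit breaches.
  private static final String INGRESS_LIMIT_BREACH = "ING";
  private static final String EGRESS_LIMIT_BREACH = "EGR";
  private static final String TPS_LIMIT_BREACH = "TPS";

  /** Rules 1-6 above: should this request feed the CST metrics? */
  static boolean shouldContribute(int statusCode,
                                  String failureReason,
                                  boolean wasExceptionThrown) {
    if (wasExceptionThrown) {
      return false;                                       // rule 6
    }
    if (statusCode < HttpURLConnection.HTTP_MULT_CHOICE) {
      return true;                                        // rule 1: 2xx
    }
    if (statusCode == HttpURLConnection.HTTP_UNAVAILABLE) {
      return INGRESS_LIMIT_BREACH.equals(failureReason)   // rule 4.a
          || EGRESS_LIMIT_BREACH.equals(failureReason)    // rule 4.b
          || TPS_LIMIT_BREACH.equals(failureReason);      // rule 4.c
    }
    return false;                                         // rules 2, 3, 4.d, 5
  }
}
```

So a 2xx response always contributes, 3xx and 4xx never do, and a 503 
contributes only when the failure reason marks an ingress, egress, or TPS 
limit breach; any other 5xx, or a thrown exception, is excluded.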



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19096: [ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-03-26 Thread via GitHub


mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540183142


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -283,7 +285,7 @@ String getClientLatency() {
   private boolean executeHttpOperation(final int retryCount,
 TracingContext tracingContext) throws AzureBlobFileSystemException {
 AbfsHttpOperation httpOperation;
-boolean wasIOExceptionThrown = false;
+boolean wasExceptionThrown = false;

Review Comment:
   Maybe add a comment on the usage of this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831097#comment-17831097
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540181623


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -321,7 +323,27 @@ private boolean executeHttpOperation(final int retryCount,
   } else if (httpOperation.getStatusCode() == 
HttpURLConnection.HTTP_UNAVAILABLE) {
 incrementCounter(AbfsStatistic.SERVER_UNAVAILABLE, 1);
   }
+
+  // If no exception occurred till here it means http operation was 
successfully complete and
+  // a response from server has been received which might be failure or 
success.
+  // If any kind of exception has occurred it will be caught below.
+  // If request failed determine failure reason and retry policy here.
+  // else simply return with success after saving the result.
+  LOG.debug("HttpRequest: {}: {}", operationType, httpOperation);
+
+  int status = httpOperation.getStatusCode();
+  failureReason = RetryReason.getAbbreviation(null, status, 
httpOperation.getStorageErrorMessage());
+  retryPolicy = client.getRetryPolicy(failureReason);
+
+  if (retryPolicy.shouldRetry(retryCount, httpOperation.getStatusCode())) {
+return false;
+  }
+
+  // If the request has succeeded or failed with non-retrial error, save 
the operation and return.
+  result = httpOperation;
+
 } catch (UnknownHostException ex) {
+  wasExceptionThrown = true;

Review Comment:
   Should we rename this? Because in the case of other exceptions, we update the 
metric below in the finally block.





> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests. If requests are failing because of throttling 
> at the server, we update our metrics, and the client-side backoff is 
> calculated from those metrics.
> This PR enhances the logic that decides which requests should be considered 
> when computing the client-side backoff interval, as follows:
> For each request made by the ABFS driver, we determine whether it should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19096: [ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-03-26 Thread via GitHub


mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540181623


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -321,7 +323,27 @@ private boolean executeHttpOperation(final int retryCount,
   } else if (httpOperation.getStatusCode() == 
HttpURLConnection.HTTP_UNAVAILABLE) {
 incrementCounter(AbfsStatistic.SERVER_UNAVAILABLE, 1);
   }
+
+  // If no exception occurred till here it means http operation was 
successfully complete and
+  // a response from server has been received which might be failure or 
success.
+  // If any kind of exception has occurred it will be caught below.
+  // If request failed determine failure reason and retry policy here.
+  // else simply return with success after saving the result.
+  LOG.debug("HttpRequest: {}: {}", operationType, httpOperation);
+
+  int status = httpOperation.getStatusCode();
+  failureReason = RetryReason.getAbbreviation(null, status, 
httpOperation.getStorageErrorMessage());
+  retryPolicy = client.getRetryPolicy(failureReason);
+
+  if (retryPolicy.shouldRetry(retryCount, httpOperation.getStatusCode())) {
+return false;
+  }
+
+  // If the request has succeeded or failed with non-retrial error, save 
the operation and return.
+  result = httpOperation;
+
 } catch (UnknownHostException ex) {
+  wasExceptionThrown = true;

Review Comment:
   Should we rename this? Because in the case of other exceptions, we update the 
metric below in the finally block.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831096#comment-17831096
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540174936


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -321,7 +323,27 @@ private boolean executeHttpOperation(final int retryCount,
   } else if (httpOperation.getStatusCode() == 
HttpURLConnection.HTTP_UNAVAILABLE) {
 incrementCounter(AbfsStatistic.SERVER_UNAVAILABLE, 1);
   }
+
+  // If no exception occurred till here it means http operation was 
successfully complete and
+  // a response from server has been received which might be failure or 
success.
+  // If any kind of exception has occurred it will be caught below.
+  // If request failed determine failure reason and retry policy here.

Review Comment:
   nit : failed to 





> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests. If requests are failing because of throttling 
> at the server, we update our metrics, and the client-side backoff is 
> calculated from those metrics.
> This PR enhances the logic that decides which requests should be considered 
> when computing the client-side backoff interval, as follows:
> For each request made by the ABFS driver, we determine whether it should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if they are 
> due to client limits breach as follows:
>  ## 503, Ingress Over Account Limit: Should Contribute
>  ## 503, Egress Over Account Limit: Should Contribute
>  ## 503, TPS Over Account Limit: Should Contribute
>  ## 503, Other Server Throttling: Should not Contribute.
>  # Status code in 5xx range other than 503: Should not Contribute.
>  # IOException and UnknownHostExceptions: Should not Contribute.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19096: [ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-03-26 Thread via GitHub


mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540174936


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -321,7 +323,27 @@ private boolean executeHttpOperation(final int retryCount,
   } else if (httpOperation.getStatusCode() == 
HttpURLConnection.HTTP_UNAVAILABLE) {
 incrementCounter(AbfsStatistic.SERVER_UNAVAILABLE, 1);
   }
+
+  // If no exception occurred till here it means http operation was 
successfully complete and
+  // a response from server has been received which might be failure or 
success.
+  // If any kind of exception has occurred it will be caught below.
+  // If request failed determine failure reason and retry policy here.

Review Comment:
   nit : failed to 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831095#comment-17831095
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540174681


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -333,57 +355,44 @@ private boolean executeHttpOperation(final int retryCount,
   }
   return false;
 } catch (IOException ex) {
+  wasExceptionThrown = true;
   if (LOG.isDebugEnabled()) {
 LOG.debug("HttpRequestFailure: {}, {}", httpOperation, ex);
   }
 
   failureReason = RetryReason.getAbbreviation(ex, -1, "");
   retryPolicy = client.getRetryPolicy(failureReason);
-  wasIOExceptionThrown = true;
   if (!retryPolicy.shouldRetry(retryCount, -1)) {
 throw new InvalidAbfsRestOperationException(ex, retryCount);
   }
 
   return false;
 } finally {
-  int status = httpOperation.getStatusCode();
   /*
-   A status less than 300 (2xx range) or greater than or equal
-   to 500 (5xx range) should contribute to throttling metrics being 
updated.
-   Less than 200 or greater than or equal to 500 show failed operations. 
2xx
-   range contributes to successful operations. 3xx range is for redirects
-   and 4xx range is for user errors. These should not be a part of
-   throttling backoff computation.
+Updating Client Side Throttling Metrics for relevant response status 
codes.
+1. Status code in 2xx range: Successful Operations should contribute
+2. Status code in 3xx range: Redirection Operations should not 
contribute
+3. Status code in 4xx range: User Errors should not contribute
+4. Status code is 503: Throttling Error should contribute as following:
+  a. 503, Ingress Over Account Limit: Should Contribute
+  b. 503, Egress Over Account Limit: Should Contribute
+  c. 503, TPS Over Account Limit: Should Contribute
+  d. 503, Other Server Throttling: Should not contribute
+5. Status code in 5xx range other than 503: Should not contribute
+6. IOException and UnknownHostExceptions: Should not contribute
*/
-  boolean updateMetricsResponseCode = (status < 
HttpURLConnection.HTTP_MULT_CHOICE
-  || status >= HttpURLConnection.HTTP_INTERNAL_ERROR);
-
-  /*
-   Connection Timeout failures should not contribute to throttling
-   In case the current request fails with Connection Timeout we will have
-   ioExceptionThrown true and failure reason as CT
-   In case the current request failed with 5xx, failure reason will be
-   updated after finally block but wasIOExceptionThrown will be false;
-   */
-  boolean isCTFailure = 
CONNECTION_TIMEOUT_ABBREVIATION.equals(failureReason) && wasIOExceptionThrown;
-
-  if (updateMetricsResponseCode && !isCTFailure) {
+  int statusCode = httpOperation.getStatusCode();
+  boolean shouldUpdateCSTMetrics = (statusCode <  
HttpURLConnection.HTTP_MULT_CHOICE // Case 1
+  || INGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.a
+  || EGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.b
+  || OPERATION_LIMIT_BREACH_ABBREVIATION.equals(failureReason)) // 
Case 4.c
+  && !wasExceptionThrown; // Case 6

Review Comment:
   +1 on this.





> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS has a client-side throttling mechanism which works on the metrics 
> collected from past requests. If requests are failing because of throttling 
> at the server, we update our metrics, and the client-side backoff is 
> calculated from those metrics.
> This PR enhances the logic that decides which requests should be considered 
> when computing the client-side backoff interval, as follows:
> For each request made by the ABFS driver, we determine whether it should 
> contribute to Client-Side Throttling based on the status code and result:
>  # Status code in 2xx range: Successful Operations should contribute.
>  # Status code in 3xx range: Redirection Operations should not contribute.
>  # Status code in 4xx range: User Errors should not contribute.
>  # Status code is 503: Throttling Error should contribute only if 

Re: [PR] HADOOP-19096: [ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-03-26 Thread via GitHub


mukund-thakur commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1540174681


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -333,57 +355,44 @@ private boolean executeHttpOperation(final int retryCount,
   }
   return false;
 } catch (IOException ex) {
+  wasExceptionThrown = true;
   if (LOG.isDebugEnabled()) {
 LOG.debug("HttpRequestFailure: {}, {}", httpOperation, ex);
   }
 
   failureReason = RetryReason.getAbbreviation(ex, -1, "");
   retryPolicy = client.getRetryPolicy(failureReason);
-  wasIOExceptionThrown = true;
   if (!retryPolicy.shouldRetry(retryCount, -1)) {
 throw new InvalidAbfsRestOperationException(ex, retryCount);
   }
 
   return false;
 } finally {
-  int status = httpOperation.getStatusCode();
   /*
-   A status less than 300 (2xx range) or greater than or equal
-   to 500 (5xx range) should contribute to throttling metrics being 
updated.
-   Less than 200 or greater than or equal to 500 show failed operations. 
2xx
-   range contributes to successful operations. 3xx range is for redirects
-   and 4xx range is for user errors. These should not be a part of
-   throttling backoff computation.
+Updating Client Side Throttling Metrics for relevant response status 
codes.
+1. Status code in 2xx range: Successful Operations should contribute
+2. Status code in 3xx range: Redirection Operations should not 
contribute
+3. Status code in 4xx range: User Errors should not contribute
+4. Status code is 503: Throttling Error should contribute as following:
+  a. 503, Ingress Over Account Limit: Should Contribute
+  b. 503, Egress Over Account Limit: Should Contribute
+  c. 503, TPS Over Account Limit: Should Contribute
+  d. 503, Other Server Throttling: Should not contribute
+5. Status code in 5xx range other than 503: Should not contribute
+6. IOException and UnknownHostExceptions: Should not contribute
*/
-  boolean updateMetricsResponseCode = (status < 
HttpURLConnection.HTTP_MULT_CHOICE
-  || status >= HttpURLConnection.HTTP_INTERNAL_ERROR);
-
-  /*
-   Connection Timeout failures should not contribute to throttling
-   In case the current request fails with Connection Timeout we will have
-   ioExceptionThrown true and failure reason as CT
-   In case the current request failed with 5xx, failure reason will be
-   updated after finally block but wasIOExceptionThrown will be false;
-   */
-  boolean isCTFailure = 
CONNECTION_TIMEOUT_ABBREVIATION.equals(failureReason) && wasIOExceptionThrown;
-
-  if (updateMetricsResponseCode && !isCTFailure) {
+  int statusCode = httpOperation.getStatusCode();
+  boolean shouldUpdateCSTMetrics = (statusCode <  
HttpURLConnection.HTTP_MULT_CHOICE // Case 1
+  || INGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.a
+  || EGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.b
+  || OPERATION_LIMIT_BREACH_ABBREVIATION.equals(failureReason)) // 
Case 4.c
+  && !wasExceptionThrown; // Case 6

Review Comment:
   +1 on this.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17439. Support -nonSuperUser for NNThroughputBenchmark: useful f… [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6677:
URL: https://github.com/apache/hadoop/pull/6677#issuecomment-2021543381

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 14s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6677/1/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   2m  6s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6677/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 20 new + 113 unchanged - 4 fixed = 133 total (was 
117)  |
   | +1 :green_heart: |  mvnsite  |   1m 51s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 38s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 197m 42s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 351m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6677/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6677 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux ed4364f301bc 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d7944e903f9a517ffbc7e18846783ae122dafab4 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 

[jira] [Commented] (HADOOP-19093) Improve rate limiting through ABFS in Manifest Committer

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831082#comment-17831082
 ] 

ASF GitHub Bot commented on HADOOP-19093:
-

steveloughran commented on PR #6596:
URL: https://github.com/apache/hadoop/pull/6596#issuecomment-2021481003

   I am having fun testing at scale with a throttled store. 
   ```
   [ERROR] 
testSeekZeroByteFile(org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractSeek)
  Time elapsed: 0.339 s  <<< ERROR!
   java.io.IOException: Failed with java.io.IOException while processing 
file/directory :[/fork-0003/test/seekfile.txt] in method:[Operation failed: 
"The resource was created or modified by the Azure Blob Service API and cannot 
be written to by the Azure Data Lake Storage Service API.", 409, PUT, 
https://stevelukwest.dfs.core.windows.net/stevel-testing/fork-0003/test/seekfile.txt?action=flush=false=1024=true=90,
 rId: 2c806902-b01f-0087-20c1-7fec4400, InvalidFlushOperation, "The 
resource was created or modified by the Azure Blob Service API and cannot be 
written to by the Azure Data Lake Storage Service API. 
RequestId:2c806902-b01f-0087-20c1-7fec4400 
Time:2024-03-26T21:04:12.1799296Z"]
   ```
   
   + likely regression from the manifest changes
   ```
   [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
10.484 s <<< FAILURE! - in 
org.apache.hadoop.fs.azurebfs.commit.ITestAbfsCreateOutputDirectoriesStage
   [ERROR] 
testPrepareDirtyTree(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsCreateOutputDirectoriesStage)
  Time elapsed: 9.767 s  <<< FAILURE!
   java.lang.AssertionError: Expected a java.io.IOException to be thrown, but 
got the result: : Result{directory count=64}
   at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499)
   at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
   at 
org.apache.hadoop.mapreduce.lib.output.committer.manifest.TestCreateOutputDirectoriesStage.testPrepareDirtyTree(TestCreateOutputDirectoriesStage.java:254)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at 
   ```
   
   ```
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 27.232 s  <<< ERROR!
   java.io.IOException: 
   java.io.IOException: Unknown Job job_1711487009902_0002
   at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:240)
   at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getCounters(HistoryClientService.java:254)
   at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getCounters(MRClientProtocolPBServiceImpl.java:159)
   at 
org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:287)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1249)
   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1172)
   ```
will need to isolate
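
The ITestAbfsCreateOutputDirectoriesStage failure above is the standard signal 
from LambdaTestUtils.intercept that the expected exception never surfaced: the 
closure returned `Result{directory count=64}` instead of throwing. For 
context, such assertions take roughly this shape; the wrapper class and the 
`operation` parameter are placeholders, not the real test body.

```java
import java.io.IOException;
import java.util.concurrent.Callable;
import org.apache.hadoop.test.LambdaTestUtils;

class InterceptSketch {
  // `operation` stands in for the directory-creation stage invocation.
  static void expectIOE(Callable<Object> operation) throws Exception {
    // intercept() fails with "Expected a java.io.IOException to be thrown,
    // but got the result: ..." when the callable completes normally, which
    // is exactly the AssertionError in the log above.
    IOException ex = LambdaTestUtils.intercept(IOException.class, operation);
    // On the expected path, `ex` holds the caught exception for further checks.
  }
}
```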




> Improve rate limiting through ABFS in Manifest Committer
> 
>
> Key: HADOOP-19093
> URL: https://issues.apache.org/jira/browse/HADOOP-19093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> I need a load test to verify that the rename resilience of the manifest 
> committer actually works as intended
> * test suite with name ILoadTest* prefix (as with s3)
> * parallel test running with many threads doing many renames
> * verify that rename recovery should be detected
> * and that all renames MUST NOT fail.
> maybe also: metrics for this in fs and doc update. 
> Possibly; LogExactlyOnce to warn of load issues



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: 

Re: [PR] HADOOP-19093. [ABFS] Improve rate limiting through ABFS in Manifest Committer [hadoop]

2024-03-26 Thread via GitHub


steveloughran commented on PR #6596:
URL: https://github.com/apache/hadoop/pull/6596#issuecomment-2021481003

   I am having fun testing at scale with a throttled store. 
   ```
   [ERROR] 
testSeekZeroByteFile(org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractSeek)
  Time elapsed: 0.339 s  <<< ERROR!
   java.io.IOException: Failed with java.io.IOException while processing 
file/directory :[/fork-0003/test/seekfile.txt] in method:[Operation failed: 
"The resource was created or modified by the Azure Blob Service API and cannot 
be written to by the Azure Data Lake Storage Service API.", 409, PUT, 
https://stevelukwest.dfs.core.windows.net/stevel-testing/fork-0003/test/seekfile.txt?action=flush=false=1024=true=90,
 rId: 2c806902-b01f-0087-20c1-7fec4400, InvalidFlushOperation, "The 
resource was created or modified by the Azure Blob Service API and cannot be 
written to by the Azure Data Lake Storage Service API. 
RequestId:2c806902-b01f-0087-20c1-7fec4400 
Time:2024-03-26T21:04:12.1799296Z"]
   ```
   
   + likely regression from the manifest changes
   ```
   [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
10.484 s <<< FAILURE! - in 
org.apache.hadoop.fs.azurebfs.commit.ITestAbfsCreateOutputDirectoriesStage
   [ERROR] 
testPrepareDirtyTree(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsCreateOutputDirectoriesStage)
  Time elapsed: 9.767 s  <<< FAILURE!
   java.lang.AssertionError: Expected a java.io.IOException to be thrown, but 
got the result: : Result{directory count=64}
   at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:499)
   at 
org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
   at 
org.apache.hadoop.mapreduce.lib.output.committer.manifest.TestCreateOutputDirectoriesStage.testPrepareDirtyTree(TestCreateOutputDirectoriesStage.java:254)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:498)
   at 
   ```
   
   ```
   [ERROR] 
test_120_terasort(org.apache.hadoop.fs.azurebfs.commit.ITestAbfsTerasort)  Time 
elapsed: 27.232 s  <<< ERROR!
   java.io.IOException: 
   java.io.IOException: Unknown Job job_1711487009902_0002
   at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:240)
   at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getCounters(HistoryClientService.java:254)
   at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getCounters(MRClientProtocolPBServiceImpl.java:159)
   at 
org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:287)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1249)
   at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1172)
   ```
will need to isolate


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19098) Vector IO: consistent specified rejection of overlapping ranges

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831079#comment-17831079
 ] 

ASF GitHub Bot commented on HADOOP-19098:
-

hadoop-yetus commented on PR #6604:
URL: https://github.com/apache/hadoop/pull/6604#issuecomment-2021454419

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  15m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   8m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 11s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 14s |  |  root: The patch generated 
0 new + 83 unchanged - 1 fixed = 83 total (was 84)  |
   | +1 :green_heart: |  mvnsite  |   5m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   9m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 37s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 273m  5s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 26s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 33s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 546m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6604 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
xmllint |
   | uname | Linux 756da2da6d7c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a6a21432099fa99a8e9f8eb70adb7725bc0445c3 |

Re: [PR] HADOOP-19098 Vector IO: consistent specified rejection of overlapping ranges [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6604:
URL: https://github.com/apache/hadoop/pull/6604#issuecomment-2021454419

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  15m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   8m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 11s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 14s |  |  root: The patch generated 
0 new + 83 unchanged - 1 fixed = 83 total (was 84)  |
   | +1 :green_heart: |  mvnsite  |   5m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   9m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 37s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 273m  5s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 26s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 33s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 546m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6604 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
xmllint |
   | uname | Linux 756da2da6d7c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a6a21432099fa99a8e9f8eb70adb7725bc0445c3 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

Re: [PR] YARN-11664: Remove HDFS Binaries/Jars Dependency From Yarn [hadoop]

2024-03-26 Thread via GitHub


steveloughran commented on code in PR #6631:
URL: https://github.com/apache/hadoop/pull/6631#discussion_r1539932538


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOStreamPair.java:
##
@@ -15,15 +15,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hadoop.hdfs.protocol.datatransfer;
+package org.apache.hadoop.io;
 
 import java.io.Closeable;
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
 
 import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.io.IOUtils;
 
 /**
  * A little struct class to wrap an InputStream and an OutputStream.

Review Comment:
   add that they both get closed in `close()`
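   
   A possible wording for that javadoc addition (a suggestion only, not the 
committed text):
   
   ```java
   /**
    * A little struct class to wrap an InputStream and an OutputStream.
    * Both streams are closed when {@link #close()} is called.
    */
   ```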



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/Constants.java:
##
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop;
+
+import org.apache.hadoop.io.Text;
+
+/**
+ * This class contains constants for configuration keys and default values.
+ */
+public final class Constants {
+
+  public static final Text HDFS_DELEGATION_KIND =

Review Comment:
   javadocs with {@value} entries here and below
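   
   For illustration, a javadoc in that style might look like the sketch below 
(hypothetical constant name; note that `{@value}` only expands for 
compile-time constants such as `String` or primitive fields, so a `Text` 
constant would need the literal value via `{@code ...}` instead):
   
   ```java
   /**
    * Kind name used for HDFS delegation tokens: {@value}.
    */
   public static final String HDFS_DELEGATION_KIND_NAME = "HDFS_DELEGATION_TOKEN";
   ```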



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/Constants.java:
##
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop;
+
+import org.apache.hadoop.io.Text;
+
+/**
+ * This class contains constants for configuration keys and default values.
+ */
+public final class Constants {

Review Comment:
not in this package; it should go somewhere under filesystem, maybe a class like
   
   `org.apache.hadoop.fs.HdfsCommonConstants`, for all HDFS-related constants 
which need to go into hadoop-common



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831046#comment-17831046
 ] 

ASF GitHub Bot commented on HADOOP-19114:
-

pjfanning commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021243187

   `createArchiveEntry` returns a `TarArchiveEntry`, so I just need to change 
the variable type.
   
   I'm checking to see if there might be similar issues in other parts of 
Hadoop but so far, the rest of the code seems ok.
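   
   A minimal sketch of that variable-type change, assuming the generified 
commons-compress 1.26 API in which `createArchiveEntry(File, String)` returns, 
and `putArchiveEntry` accepts, a `TarArchiveEntry` (`out`, `file`, and 
`IOUtils` are as in the snippet quoted elsewhere in this thread):
   
   ```java
   try (FileInputStream inputStream = new FileInputStream(file)) {
     // Retype the local from ArchiveEntry to TarArchiveEntry; the rest is unchanged.
     TarArchiveEntry entry = out.createArchiveEntry(file, file.getName());
     out.putArchiveEntry(entry);
     IOUtils.copyBytes(inputStream, out, 1024 * 1024);
     out.closeArchiveEntry();
   }
   ```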




> upgrade to commons-compress 1.26.1 due to cves
> --
>
> Key: HADOOP-19114
> URL: https://issues.apache.org/jira/browse/HADOOP-19114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, CVE
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> 2 recent CVEs fixed - 
> https://mvnrepository.com/artifact/org.apache.commons/commons-compress



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19114. Upgrade to commons-compress 1.26.1 due to CVEs. [hadoop]

2024-03-26 Thread via GitHub


pjfanning commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021243187

   `createArchiveEntry` returns a `TarArchiveEntry`, so I just need to change 
the variable type.
   
   I'm checking to see if there might be similar issues in other parts of 
Hadoop but so far, the rest of the code seems ok.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831044#comment-17831044
 ] 

ASF GitHub Bot commented on HADOOP-19114:
-

steveloughran commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021234507

   looking at the code which doesn't compile
   
   ```java
   TarArchiveOutputStream out = new TarArchiveOutputStream(targetStream)
   ...
   try (FileInputStream inputStream = new FileInputStream(file)) {
 ArchiveEntry entry = out.createArchiveEntry(file, file.getName());
 out.putArchiveEntry(entry);  // HERE
 IOUtils.copyBytes(inputStream, out, 1024 * 1024);
 out.closeArchiveEntry();
   }
   ```
   
   I suspect that `TarArchiveOutputStream`'s `createArchiveEntry`/`putArchiveEntry` 
now return and require a `TarArchiveEntry`; the current library just casts it.
   
   ```java
 public void putArchiveEntry(ArchiveEntry archiveEntry) throws IOException {
   if (this.finished) {
 throw new IOException("Stream has already been finished");
   } else {
 TarArchiveEntry entry = (TarArchiveEntry)archiveEntry;
   ```
   
   




> upgrade to commons-compress 1.26.1 due to cves
> --
>
> Key: HADOOP-19114
> URL: https://issues.apache.org/jira/browse/HADOOP-19114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, CVE
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> 2 recent CVEs fixed - 
> https://mvnrepository.com/artifact/org.apache.commons/commons-compress



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19114. Upgrade to commons-compress 1.26.1 due to CVEs. [hadoop]

2024-03-26 Thread via GitHub


steveloughran commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021234507

   looking at the code which doesn't compile
   
   ```java
   TarArchiveOutputStream out = new TarArchiveOutputStream(targetStream)
   ...
   try (FileInputStream inputStream = new FileInputStream(file)) {
 ArchiveEntry entry = out.createArchiveEntry(file, file.getName());
 out.putArchiveEntry(entry);  // HERE
 IOUtils.copyBytes(inputStream, out, 1024 * 1024);
 out.closeArchiveEntry();
   }
   ```
   
   I suspect that `TarArchiveOutputStream`'s `createArchiveEntry`/`putArchiveEntry` 
now return and require a `TarArchiveEntry`; the current library just casts it.
   
   ```java
 public void putArchiveEntry(ArchiveEntry archiveEntry) throws IOException {
   if (this.finished) {
 throw new IOException("Stream has already been finished");
   } else {
 TarArchiveEntry entry = (TarArchiveEntry)archiveEntry;
   ```
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831040#comment-17831040
 ] 

ASF GitHub Bot commented on HADOOP-19114:
-

pjfanning commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021227384

   @steveloughran I'll have a look. Looks like I will need to adopt some API 
changes in commons-compress to get the code to compile again.




> upgrade to commons-compress 1.26.1 due to cves
> --
>
> Key: HADOOP-19114
> URL: https://issues.apache.org/jira/browse/HADOOP-19114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, CVE
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> 2 recent CVEs fixed - 
> https://mvnrepository.com/artifact/org.apache.commons/commons-compress



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19114. Upgrade to commons-compress 1.26.1 due to CVEs. [hadoop]

2024-03-26 Thread via GitHub


pjfanning commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021227384

   @steveloughran I'll have a look. Looks like I will need to adopt some API 
changes in commons-compress to get the code to compile again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831038#comment-17831038
 ] 

ASF GitHub Bot commented on HADOOP-19114:
-

steveloughran commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021221194

   not a happy build
   ```
   [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.10.1:compile (default-compile) 
on project hadoop-mapreduce-client-uploader: Compilation failure
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-6636/ubuntu-focal/src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java:[341,31]
 incompatible types: org.apache.commons.compress.archivers.ArchiveEntry cannot 
be converted to org.apache.commons.compress.archivers.tar.TarArchiveEntry
   [ERROR] -> [Help 1]
   [ERROR] 
   ```
   




> upgrade to commons-compress 1.26.1 due to cves
> --
>
> Key: HADOOP-19114
> URL: https://issues.apache.org/jira/browse/HADOOP-19114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, CVE
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> 2 recent CVEs fixed - 
> https://mvnrepository.com/artifact/org.apache.commons/commons-compress



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19114. Upgrade to commons-compress 1.26.1 due to CVEs. [hadoop]

2024-03-26 Thread via GitHub


steveloughran commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2021221194

   not a happy build
   ```
   [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.10.1:compile (default-compile) 
on project hadoop-mapreduce-client-uploader: Compilation failure
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-6636/ubuntu-focal/src/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader/src/main/java/org/apache/hadoop/mapred/uploader/FrameworkUploader.java:[341,31]
 incompatible types: org.apache.commons.compress.archivers.ArchiveEntry cannot 
be converted to org.apache.commons.compress.archivers.tar.TarArchiveEntry
   [ERROR] -> [Help 1]
   [ERROR] 
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17410. [FGL] Client RPCs that changes file attributes supports fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6634:
URL: https://github.com/apache/hadoop/pull/6634#issuecomment-2021155840

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 39s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 262m 12s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 418m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6634/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6634 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 35958051c223 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / 7a54a91e0cd8ced983698956cf73cd61eff8a2d9 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6634/4/testReport/ |
   | Max. process+thread count | 3169 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6634/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to 

[jira] [Resolved] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-19047.
-
Fix Version/s: 3.5.0
   3.4.1
   Resolution: Fixed

> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> The following are the operations which happen within a Task when it uses the 
> S3A Magic Committer.
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using a PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
> for more information. This is done so that downstream applications like 
> Spark can get the size of the file which is being written.
> 2. MultiPartUpload(MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (there will be 'x' 
> metadata files in S3 if a single task writes to 'x' files) is read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information
> Since these operations happen within the Task JVM, we could optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will:
> 1. Store the metadata of the MPU in memory till the Task is committed
> 2. Store the size of the file which can be used by the downstream application 
> to get the file size before it is committed/visible to the output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 call 
> given a Task writes only 1 file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19096) [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831006#comment-17831006
 ] 

ASF GitHub Bot commented on HADOOP-19096:
-

rakeshadr commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1539600695


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/retryReasonCategories/ServerErrorRetryReason.java:
##
@@ -56,10 +58,14 @@ String getAbbreviation(final Integer statusCode,
   splitedServerErrorMessage)) {
 return EGRESS_LIMIT_BREACH_ABBREVIATION;
   }
-  if (OPERATION_BREACH_MESSAGE.equalsIgnoreCase(
+  if (TPS_OVER_ACCOUNT_LIMIT.getErrorMessage().equalsIgnoreCase(
   splitedServerErrorMessage)) {
 return OPERATION_LIMIT_BREACH_ABBREVIATION;

Review Comment:
   Could you please rename `OPERATION_LIMIT_BREACH_ABBREVIATION` to 
`TPS_LIMIT_BREACH_ABBREVIATION`? This would maintain consistency with the 
naming convention of the other constants.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -333,57 +355,44 @@ private boolean executeHttpOperation(final int retryCount,
   }
   return false;
 } catch (IOException ex) {
+  wasExceptionThrown = true;
   if (LOG.isDebugEnabled()) {
 LOG.debug("HttpRequestFailure: {}, {}", httpOperation, ex);
   }
 
   failureReason = RetryReason.getAbbreviation(ex, -1, "");
   retryPolicy = client.getRetryPolicy(failureReason);
-  wasIOExceptionThrown = true;
   if (!retryPolicy.shouldRetry(retryCount, -1)) {
 throw new InvalidAbfsRestOperationException(ex, retryCount);
   }
 
   return false;
 } finally {
-  int status = httpOperation.getStatusCode();
   /*
-   A status less than 300 (2xx range) or greater than or equal
-   to 500 (5xx range) should contribute to throttling metrics being 
updated.
-   Less than 200 or greater than or equal to 500 show failed operations. 
2xx
-   range contributes to successful operations. 3xx range is for redirects
-   and 4xx range is for user errors. These should not be a part of
-   throttling backoff computation.
+Updating Client Side Throttling Metrics for relevant response status 
codes.
+1. Status code in 2xx range: Successful Operations should contribute
+2. Status code in 3xx range: Redirection Operations should not 
contribute
+3. Status code in 4xx range: User Errors should not contribute
+4. Status code is 503: Throttling Error should contribute as following:
+  a. 503, Ingress Over Account Limit: Should Contribute
+  b. 503, Egress Over Account Limit: Should Contribute
+  c. 503, TPS Over Account Limit: Should Contribute
+  d. 503, Other Server Throttling: Should not contribute
+5. Status code in 5xx range other than 503: Should not contribute
+6. IOException and UnknownHostExceptions: Should not contribute
*/
-  boolean updateMetricsResponseCode = (status < 
HttpURLConnection.HTTP_MULT_CHOICE
-  || status >= HttpURLConnection.HTTP_INTERNAL_ERROR);
-
-  /*
-   Connection Timeout failures should not contribute to throttling
-   In case the current request fails with Connection Timeout we will have
-   ioExceptionThrown true and failure reason as CT
-   In case the current request failed with 5xx, failure reason will be
-   updated after finally block but wasIOExceptionThrown will be false;
-   */
-  boolean isCTFailure = 
CONNECTION_TIMEOUT_ABBREVIATION.equals(failureReason) && wasIOExceptionThrown;
-
-  if (updateMetricsResponseCode && !isCTFailure) {
+  int statusCode = httpOperation.getStatusCode();
+  boolean shouldUpdateCSTMetrics = (statusCode <  
HttpURLConnection.HTTP_MULT_CHOICE // Case 1
+  || INGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.a
+  || EGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.b
+  || OPERATION_LIMIT_BREACH_ABBREVIATION.equals(failureReason)) // 
Case 4.c
+  && !wasExceptionThrown; // Case 6

Review Comment:
   Can you please encapsulate the chain of conditions in a method 
`#shouldUpdateCSTMetrics()`
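   
   A possible shape for that extraction (a sketch only; the abbreviation 
constants and the `wasExceptionThrown` flag are the ones from the diff above, 
and `HttpURLConnection` is from `java.net`):
   
   ```java
   /**
    * Whether this response/failure should feed the client-side throttling
    * (CST) metrics, per cases 1, 4.a-4.c and 6 documented above.
    */
   private boolean shouldUpdateCSTMetrics(final int statusCode,
       final String failureReason, final boolean wasExceptionThrown) {
     return (statusCode < HttpURLConnection.HTTP_MULT_CHOICE           // Case 1
         || INGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason)    // Case 4.a
         || EGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason)     // Case 4.b
         || OPERATION_LIMIT_BREACH_ABBREVIATION.equals(failureReason)) // Case 4.c
         && !wasExceptionThrown;                                       // Case 6
   }
   ```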





> [ABFS] Enhancing Client-Side Throttling Metrics Updation Logic
> --
>
> Key: HADOOP-19096
> URL: https://issues.apache.org/jira/browse/HADOOP-19096
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.1
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> ABFS 

Re: [PR] HADOOP-19096: [ABFS] [CST Optimization] Enhancing Client-Side Throttling Metrics Updating Logic [hadoop]

2024-03-26 Thread via GitHub


rakeshadr commented on code in PR #6276:
URL: https://github.com/apache/hadoop/pull/6276#discussion_r1539600695


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/retryReasonCategories/ServerErrorRetryReason.java:
##
@@ -56,10 +58,14 @@ String getAbbreviation(final Integer statusCode,
   splitedServerErrorMessage)) {
 return EGRESS_LIMIT_BREACH_ABBREVIATION;
   }
-  if (OPERATION_BREACH_MESSAGE.equalsIgnoreCase(
+  if (TPS_OVER_ACCOUNT_LIMIT.getErrorMessage().equalsIgnoreCase(
   splitedServerErrorMessage)) {
 return OPERATION_LIMIT_BREACH_ABBREVIATION;

Review Comment:
   Could you please rename `OPERATION_LIMIT_BREACH_ABBREVIATION` to 
`TPS_LIMIT_BREACH_ABBREVIATION`? This would maintain consistency with the 
naming convention of the other constants.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -333,57 +355,44 @@ private boolean executeHttpOperation(final int retryCount,
   }
   return false;
 } catch (IOException ex) {
+  wasExceptionThrown = true;
   if (LOG.isDebugEnabled()) {
 LOG.debug("HttpRequestFailure: {}, {}", httpOperation, ex);
   }
 
   failureReason = RetryReason.getAbbreviation(ex, -1, "");
   retryPolicy = client.getRetryPolicy(failureReason);
-  wasIOExceptionThrown = true;
   if (!retryPolicy.shouldRetry(retryCount, -1)) {
 throw new InvalidAbfsRestOperationException(ex, retryCount);
   }
 
   return false;
 } finally {
-  int status = httpOperation.getStatusCode();
   /*
-   A status less than 300 (2xx range) or greater than or equal
-   to 500 (5xx range) should contribute to throttling metrics being 
updated.
-   Less than 200 or greater than or equal to 500 show failed operations. 
2xx
-   range contributes to successful operations. 3xx range is for redirects
-   and 4xx range is for user errors. These should not be a part of
-   throttling backoff computation.
+Updating Client Side Throttling Metrics for relevant response status 
codes.
+1. Status code in 2xx range: Successful Operations should contribute
+2. Status code in 3xx range: Redirection Operations should not 
contribute
+3. Status code in 4xx range: User Errors should not contribute
+4. Status code is 503: Throttling Error should contribute as following:
+  a. 503, Ingress Over Account Limit: Should Contribute
+  b. 503, Egress Over Account Limit: Should Contribute
+  c. 503, TPS Over Account Limit: Should Contribute
+  d. 503, Other Server Throttling: Should not contribute
+5. Status code in 5xx range other than 503: Should not contribute
+6. IOException and UnknownHostExceptions: Should not contribute
*/
-  boolean updateMetricsResponseCode = (status < 
HttpURLConnection.HTTP_MULT_CHOICE
-  || status >= HttpURLConnection.HTTP_INTERNAL_ERROR);
-
-  /*
-   Connection Timeout failures should not contribute to throttling
-   In case the current request fails with Connection Timeout we will have
-   ioExceptionThrown true and failure reason as CT
-   In case the current request failed with 5xx, failure reason will be
-   updated after finally block but wasIOExceptionThrown will be false;
-   */
-  boolean isCTFailure = 
CONNECTION_TIMEOUT_ABBREVIATION.equals(failureReason) && wasIOExceptionThrown;
-
-  if (updateMetricsResponseCode && !isCTFailure) {
+  int statusCode = httpOperation.getStatusCode();
+  boolean shouldUpdateCSTMetrics = (statusCode <  
HttpURLConnection.HTTP_MULT_CHOICE // Case 1
+  || INGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.a
+  || EGRESS_LIMIT_BREACH_ABBREVIATION.equals(failureReason) // 
Case 4.b
+  || OPERATION_LIMIT_BREACH_ABBREVIATION.equals(failureReason)) // 
Case 4.c
+  && !wasExceptionThrown; // Case 6

Review Comment:
   Can you please encapsulate the chain of conditions in a method 
`#shouldUpdateCSTMetrics()`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831004#comment-17831004
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

hadoop-yetus commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2020947224

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 13 unchanged - 0 fixed 
= 14 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 49s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 144m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6468 |
   | JIRA Issue | HADOOP-19047 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 6a4507cb7348 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8d739beced49d1469593671cd7412b4662ca2f04 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/8/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: 

Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2020947224

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 13 unchanged - 0 fixed 
= 14 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 49s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 144m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6468 |
   | JIRA Issue | HADOOP-19047 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 6a4507cb7348 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8d739beced49d1469593671cd7412b4662ca2f04 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/8/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

Re: [PR] YARN-11663. [Federation] Add Cache Entity Nums Limit. [hadoop]

2024-03-26 Thread via GitHub


dineshchitlangia commented on code in PR #6662:
URL: https://github.com/apache/hadoop/pull/6662#discussion_r1539567214


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java:
##
@@ -4031,6 +4031,11 @@ public static boolean isAclEnabled(Configuration conf) {
   // 5 minutes
   public static final int DEFAULT_FEDERATION_CACHE_TIME_TO_LIVE_SECS = 5 * 60;
 
+  public static final String FEDERATION_CACHE_ENTITY_NUMS =
+  FEDERATION_PREFIX + "cache-entity.nums";
+  // default 1000

Review Comment:
   NIT - This comment can be removed as the variable name is appropriate.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HDFS-17439. Support -nonSuperUser for NNThroughputBenchmark: useful f… [hadoop]

2024-03-26 Thread via GitHub


fateh288 opened a new pull request, #6677:
URL: https://github.com/apache/hadoop/pull/6677

   …or testing auth frameworks such as Ranger
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   1. Unit tests defined in TestNNThroughputBenchmark worked after the patch 
was applied
   2. Added a new unit test case
   3. Tried some commands, such as opening files, which worked both with and 
without the -nonSuperUser option (in a cluster),
   e.g. hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op 
open -threads 100 -files 10 -filesPerDir 1 -keepResults [-nonSuperUser]
   
   ### For code changes:
   
   - [Y] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ N/A] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ N/A] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ N/A] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17412. [FGL] Client RPCs involving maintenance supports fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6667:
URL: https://github.com/apache/hadoop/pull/6667#issuecomment-2020823299

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 20s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  42m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 263m  4s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6667/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 422m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6667/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6667 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2f80c931fcb5 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / cb9a23ba15c35b3630924841c2862304ae3e30ea |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6667/4/testReport/ |
   | Max. process+thread count | 2766 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17830988#comment-17830988
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

steveloughran merged PR #6468:
URL: https://github.com/apache/hadoop/pull/6468




> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses the 
> S3A Magic Committer.
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using a PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
> for more information. This is done so that downstream applications like 
> Spark can get the size of the file which is being written.
> 2. MultiPartUpload(MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (there will be 'x' 
> metadata files in S3 if a single task writes to 'x' files) is read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information
> Since these operations happen within the Task JVM, we could optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will:
> 1. Store the metadata of the MPU in memory till the Task is committed
> 2. Store the size of the file which can be used by the downstream application 
> to get the file size before it is committed/visible to the output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 call 
> given a Task writes only 1 file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-03-26 Thread via GitHub


steveloughran merged PR #6468:
URL: https://github.com/apache/hadoop/pull/6468


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17412. [FGL] Client RPCs involving maintenance supports fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6667:
URL: https://github.com/apache/hadoop/pull/6667#issuecomment-2020685124

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 52s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 27s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  36m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 226m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6667/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 376m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6667/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6667 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fe76f849b785 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / cb9a23ba15c35b3630924841c2862304ae3e30ea |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6667/3/testReport/ |
   | Max. process+thread count | 3702 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

Re: [PR] YARN-11387. [GPG] YARN GPG mistakenly deleted applicationid. [hadoop]

2024-03-26 Thread via GitHub


slfan1989 commented on code in PR #6660:
URL: https://github.com/apache/hadoop/pull/6660#discussion_r1539407981


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/applicationcleaner/DefaultApplicationCleaner.java:
##
@@ -46,47 +45,38 @@ public void run() {
 LOG.info("Application cleaner run at time {}", now);
 
 FederationStateStoreFacade facade = getGPGContext().getStateStoreFacade();

Review Comment:
   Step 1: Retrieve all applications stored in the StateStore; these represent 
all applications submitted through the Router.
   Step 2: Use the Router's REST API to fetch all running applications. This 
API queries all active SubClusters.
   Step 3: Compare the results of Step 1 and Step 2 to identify applications 
that exist in Step 1 but not in Step 2, and delete them.
   
   There is a potential issue with this approach. If a particular SubCluster is 
undergoing maintenance, such as an RM restart, Step 2 will not be able to fetch 
the complete list of running applications. As a result, during the comparison 
in Step 3, there is a risk of mistakenly deleting applications that are still 
running.
   
   Suppose we have three SubClusters, subClusterA, subClusterB, and 
subClusterC, with an equal allocation ratio of 1:1:1, and we submit six 
applications through routerA:
   
   app1 and app2 are allocated to subClusterA,
   app3 and app4 to subClusterB,
   app5 and app6 to subClusterC.
   Among these, app1, app3, and app5 have completed their execution, so we 
expect to retain app2, app4, and app6 in the StateStore.
   
   In the normal scenario, following the steps above:
   
   Step 1: We retrieve six applications [app1, app2, app3, app4, app5, app6] 
from the StateStore.
   Step 2: We fetch three applications [app2, app4, app6] from the Router's 
REST interface.
   Step 3: By comparing Step 1 and Step 2, we identify that applications 
[app1, app3, app5] should be deleted.
   
   In the exceptional scenario, following the same steps:
   
   Step 1: We retrieve six applications [app1, app2, app3, app4, app5, app6] 
from the StateStore.
   Step 2: We fetch the list of running applications from the Router's REST 
interface. However, because subClusterB and subClusterC are under maintenance, 
we can only obtain the application running in subClusterA [app2].
   Step 3: By comparing Step 1 and Step 2, we conclude that applications 
[app1, app3, app4, app5, app6] should be deleted.
   
   In this case, app4 and app6 are mistakenly deleted even though they are 
still running.
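   
   A minimal sketch of this compare-and-delete logic; the class and method 
names are hypothetical, not the actual DefaultApplicationCleaner code:
   
   ```java
   // Hypothetical sketch of the three-step cleanup described above.
   import java.util.HashSet;
   import java.util.Set;
   
   class ApplicationCleanerSketch {
   
     // appsInStateStore: Step 1, all applications recorded in the StateStore.
     // appsSeenByRouter: Step 2, running applications fetched via the
     //                   Router's REST API (covers *active* SubClusters only).
     void cleanup(Set<String> appsInStateStore, Set<String> appsSeenByRouter) {
       // Step 3: candidates = in the StateStore but not reported as running.
       Set<String> toDelete = new HashSet<>(appsInStateStore);
       toDelete.removeAll(appsSeenByRouter);
   
       // Pitfall: if a SubCluster is under maintenance (e.g. an RM restart),
       // its still-running applications are missing from appsSeenByRouter,
       // so they end up in toDelete and are removed by mistake.
       for (String appId : toDelete) {
         deleteFromStateStore(appId);
       }
     }
   
     private void deleteFromStateStore(String appId) {
       System.out.println("Deleting " + appId + " from the StateStore");
     }
   }
   ```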



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11663. [Federation] Add Cache Entity Nums Limit. [hadoop]

2024-03-26 Thread via GitHub


slfan1989 commented on PR #6662:
URL: https://github.com/apache/hadoop/pull/6662#issuecomment-2020639828

   @goiri Can you help review this PR? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19072) S3A: expand optimisations on stores with "fs.s3a.create.performance"

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830962#comment-17830962
 ] 

ASF GitHub Bot commented on HADOOP-19072:
-

hadoop-yetus commented on PR #6543:
URL: https://github.com/apache/hadoop/pull/6543#issuecomment-2020614729

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   7m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 154m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6543/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6543 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 106dcbd22ec9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ff8a9d2e9dab36887a266a0eb00d6268c4331a1c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6543/5/testReport/ |
   | Max. process+thread count | 1273 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 

Re: [PR] HADOOP-19072. S3A: expand optimisations on stores with "fs.s3a.create.performance" [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6543:
URL: https://github.com/apache/hadoop/pull/6543#issuecomment-2020614729

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   7m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 154m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6543/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6543 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux 106dcbd22ec9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ff8a9d2e9dab36887a266a0eb00d6268c4331a1c |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6543/5/testReport/ |
   | Max. process+thread count | 1273 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6543/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   

Re: [PR] HDFS-17415. [FGL] RPCs in NamenodeProtocol support fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6654:
URL: https://github.com/apache/hadoop/pull/6654#issuecomment-2020574709

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | -1 :x: |  mvninstall  |   7m  8s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6654/4/artifact/out/branch-mvninstall-root.txt)
 |  root in HDFS-17384 failed.  |
   | +1 :green_heart: |  compile  |   2m 50s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  39m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 225m  6s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 338m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6654/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6654 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2bfae9bbd71c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / 7440dbeb561f721f646225f9744c1ddabe050c61 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6654/4/testReport/ |
   | Max. process+thread count | 4157 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6654/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was 

Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-03-26 Thread via GitHub


shameersss1 commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2020548007

   @steveloughran  - I have addressed your comments.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830950#comment-17830950
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

shameersss1 commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2020548007

   @steveloughran  - I have addressed your comments.




> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations that happen within a Task when it uses the 
> S3A Magic Committer.
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using a PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that downstream applications like 
> Spark can get the size of the file being written.
> 2. MultiPartUpload (MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (there will be 'x' 
> metadata files in S3 if a single task writes 'x' files) is read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information.
> Since these operations happen within the Task JVM, we could optimize, as 
> well as save cost, by storing this information in memory when Task memory 
> usage is not a constraint. Hence the proposal is to introduce a new 
> MagicCommit tracker called "InMemoryMagicCommitTracker" which will store:
> 1. The MPU metadata in memory until the Task is committed.
> 2. The size of each file, which downstream applications can use to get the 
> file size before it is committed/visible at the output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 
> call, given that a Task writes only 1 file.
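
A minimal sketch of the in-memory tracking idea described above; the names 
are hypothetical, not the actual InMemoryMagicCommitTracker from this PR. 
MPU metadata and file lengths stay in JVM-local maps until task commit 
instead of being PUT to, and later listed and read back from, S3:

```java
// Hypothetical sketch of in-memory magic-commit tracking; not the code
// from HADOOP-19047. All names are illustrative.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class InMemoryCommitTrackerSketch {

  // taskAttemptId -> multipart uploads recorded as each stream closes
  private final Map<String, List<String>> pendingUploads = new ConcurrentHashMap<>();
  // destination path -> bytes written, queryable before the file is visible
  private final Map<String, Long> fileLengths = new ConcurrentHashMap<>();

  // Called on stream close: replaces the 0-byte marker PUT and the
  // per-file MPU metadata PUT.
  void recordUpload(String taskAttemptId, String destPath, String uploadId,
      long length) {
    pendingUploads.computeIfAbsent(taskAttemptId, k -> new ArrayList<>())
        .add(uploadId);
    fileLengths.put(destPath, length);
  }

  // Called at task commit: replaces the LIST + GET of per-file metadata.
  List<String> uploadsToCommit(String taskAttemptId) {
    return pendingUploads.getOrDefault(taskAttemptId, Collections.emptyList());
  }

  // Lets downstream applications (e.g. Spark) see the size pre-commit.
  long lengthOf(String destPath) {
    return fileLengths.getOrDefault(destPath, 0L);
  }
}
```

Keying the pending uploads by task attempt makes task commit a pure in-memory 
lookup, at the cost of holding the metadata in the Task JVM heap until commit, 
which is the trade-off the proposal states.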



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] [ABFS] Test Fixes [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6676:
URL: https://github.com/apache/hadoop/pull/6676#issuecomment-2020457859

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 14 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6676/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 9 new + 6 unchanged - 1 
fixed = 15 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  shellcheck  |   0m  1s | 
[/results-shellcheck.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6676/1/artifact/out/results-shellcheck.txt)
 |  The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 13s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6676/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6676 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets 
shellcheck shelldocs compile javac javadoc mvninstall shadedclient spotbugs 
checkstyle |
   | uname | Linux 897f9c1f0eb3 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a54229c8793f86496a030fdeb6a471d8d4f6f80 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6676/1/testReport/ |
   | Max. process+thread count | 526 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: 

Re: [PR] YARN-11261 Avoid unnecessary reconstruction of ConfigurationProperties [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6652:
URL: https://github.com/apache/hadoop/pull/6652#issuecomment-2020426986

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   7m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   2m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m  3s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6652/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 140 unchanged - 0 fixed = 147 total (was 
140)  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  spotbugs  |   1m 33s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6652/3/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 42s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  |  88m  3s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6652/3/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 237m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-common-project/hadoop-common |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.conf.Configuration.properties; locked 87% of time. 
Unsynchronized access at Configuration.java:[line 771] |
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNewQueueAutoCreation
 |
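   
   For context, a hypothetical illustration of the "inconsistent 
synchronization" pattern SpotBugs flags here; this is not the actual 
Configuration.java code:
   
   ```java
   // A field guarded by the object lock on most paths but read unguarded
   // on another path; SpotBugs reports this as inconsistent synchronization.
   import java.util.Properties;
   
   class ConfigLike {
     private Properties properties;
   
     synchronized Properties getProps() {   // guarded access (common path)
       if (properties == null) {
         properties = new Properties();
       }
       return properties;
     }
   
     boolean isLoaded() {
       return properties != null;           // unguarded read -> flagged
     }
   }
   ```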
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6652/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6652 |
   | Optional Tests | dupname 

[jira] [Commented] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE-2024-23944

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830925#comment-17830925
 ] 

ASF GitHub Bot commented on HADOOP-19116:
-

hadoop-yetus commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2020394416

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   3m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  28m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  5s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  11m 28s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  24m 39s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   5m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  25m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  21m 44s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/1/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |  11m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  11m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  14m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 49s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |  26m 16s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 643m 14s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/1/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  asflicense  |   1m  4s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/1/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 839m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdminWithHA |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.tools.TestECAdmin |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetCache |
   |   | hadoop.hdfs.TestDFSUpgradeFromImage |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLock |
   |   | hadoop.hdfs.server.namenode.TestEditsDoubleBuffer |
   |   | hadoop.hdfs.tools.TestViewFileSystemOverloadSchemeWithFSCommands |
   |   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
   |   | hadoop.cli.TestAclCLI |
   |   | hadoop.cli.TestHDFSCLI |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverReadProxyProvider |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.TestEncryptedTransfer |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
   |   | hadoop.hdfs.tools.TestDebugAdmin |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.cli.TestAclCLIWithPosixAclInheritance |
   |   | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | 

Re: [PR] HADOOP-19116. Update to zookeeper client 3.8.4 due to CVE-2024-23944 [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2020394416

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   3m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  28m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  5s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  11m 28s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  24m 39s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   5m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  25m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  21m 44s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/1/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |  11m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  11m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  14m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 49s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |  26m 16s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 643m 14s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/1/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  asflicense  |   1m  4s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/1/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 839m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdminWithHA |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.tools.TestECAdmin |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.TestQuota |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetCache |
   |   | hadoop.hdfs.TestDFSUpgradeFromImage |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLock |
   |   | hadoop.hdfs.server.namenode.TestEditsDoubleBuffer |
   |   | hadoop.hdfs.tools.TestViewFileSystemOverloadSchemeWithFSCommands |
   |   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
   |   | hadoop.cli.TestAclCLI |
   |   | hadoop.cli.TestHDFSCLI |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverReadProxyProvider |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
   |   | hadoop.hdfs.TestEncryptedTransfer |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer |
   |   | hadoop.hdfs.tools.TestDebugAdmin |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.cli.TestAclCLIWithPosixAclInheritance |
   |   | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
   |   | hadoop.hdfs.qjournal.server.TestJournaledEditsCache |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
   |   | hadoop.hdfs.server.namenode.TestCheckpoint |
   |   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
   |   | 

[jira] [Commented] (HADOOP-19124) Update org.ehcache from 3.3.1 to 3.8.2.

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830912#comment-17830912
 ] 

ASF GitHub Bot commented on HADOOP-19124:
-

hadoop-yetus commented on PR #6665:
URL: https://github.com/apache/hadoop/pull/6665#issuecomment-2020301592

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  24m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  10m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  mvnsite  |  19m  7s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   7m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   5m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  shadedclient  |  31m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 44s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  17m 41s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/2/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |   8m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   8m 37s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   5m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  shadedclient  |  30m  8s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 634m 53s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 832m 33s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileCreation |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6665 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux 7a531ce2c251 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3d971045a68b596b6b9773b9a0332066b79d8c2f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   

Re: [PR] HADOOP-19124. Update org.ehcache from 3.3.1 to 3.8.2. [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6665:
URL: https://github.com/apache/hadoop/pull/6665#issuecomment-2020301592

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  24m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  10m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  mvnsite  |  19m  7s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   7m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   5m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  shadedclient  |  31m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 44s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  17m 41s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/2/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |   8m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   8m 37s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   5m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | -1 :x: |  shadedclient  |  30m  8s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 634m 53s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 832m 33s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileCreation |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6665 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux 7a531ce2c251 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3d971045a68b596b6b9773b9a0332066b79d8c2f |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6665/2/testReport/ |
   | Max. process+thread count | 4495 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 

Re: [PR] HDFS-17368. HA: Standby should exit safemode when resources are available. [hadoop]

2024-03-26 Thread via GitHub


zhuzilong2013 commented on PR #6518:
URL: https://github.com/apache/hadoop/pull/6518#issuecomment-2020272541

   Thanks @Hexiaoqiao for your review and merge~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830903#comment-17830903
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

hadoop-yetus commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2020230242

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 15s | 
[/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/buildtool-patch-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  The patch fails to run checkstyle in hadoop-aws  |
   | -1 :x: |  mvnsite  |   0m 17s | 
[/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | -1 :x: |  javadoc  |   0m 17s | 

Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2020230242

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 15s | 
[/patch-mvninstall-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-mvninstall-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | -1 :x: |  compile  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 16s | 
[/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-compile-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/buildtool-patch-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/buildtool-patch-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  The patch fails to run checkstyle in hadoop-aws  |
   | -1 :x: |  mvnsite  |   0m 17s | 
[/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | -1 :x: |  javadoc  |   0m 17s | 
[/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/7/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  spotbugs  |   0m 16s | 

[jira] [Commented] (HADOOP-19098) Vector IO: consistent specified rejection of overlapping ranges

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830898#comment-17830898
 ] 

ASF GitHub Bot commented on HADOOP-19098:
-

steveloughran commented on code in PR #6604:
URL: https://github.com/apache/hadoop/pull/6604#discussion_r1539064441


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java:
##
@@ -133,26 +172,42 @@ private static void 
readNonByteBufferPositionedReadable(PositionedReadable strea
   /**
* Read bytes from stream into a byte buffer using an
* intermediate byte array.
-   * @param length number of bytes to read.
+   *   
+   * (position, buffer, buffer-offset, length): Void
+   * position:= the position within the file to read data.
+   * buffer := a buffer to read fully `length` bytes into.
+   * buffer-offset := the offset within the buffer to write data
+   * length := the number of bytes to read.
+   *   
+   * The passed in function MUST block until the required length of
+   * data is read, or an exception is thrown.
+   * @param range range to read
* @param buffer buffer to fill.
* @param operation operation to use for reading data.
* @throws IOException any IOE.
*/
-  public static void readInDirectBuffer(int length,
-ByteBuffer buffer,
-Function4RaisingIOE operation) 
throws IOException {
+  public static void readInDirectBuffer(FileRange range,
+  ByteBuffer buffer,
+  Function4RaisingIOE operation)
+  throws IOException {
+
+LOG.debug("Reading {} into a direct buffer", range);
+validateRangeRequest(range);

Review Comment:
   Because it is a public method called by the filesystems, and I'm just being 
rigorous. It is a low-cost probe





> Vector IO: consistent specified rejection of overlapping ranges
> ---
>
> Key: HADOOP-19098
> URL: https://issues.apache.org/jira/browse/HADOOP-19098
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Related to PARQUET-2171 q: "how do you deal with overlapping ranges?"
> I believe s3a rejects this, but the other impls may not.
> Proposed
> FS spec to say 
> * "overlap triggers IllegalArgumentException". 
> * special case: 0 byte ranges may be short circuited to return empty buffer 
> even without checking file length etc.
> Contract tests to validate this
> (+ common helper code to do this).
> I'll copy the validation stuff into the parquet PR for consistency with older 
> releases
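
To make the proposed "overlap triggers IllegalArgumentException" contract concrete, a sort-then-scan check might look like the sketch below; this is illustrative only, not the actual Hadoop validation code, and it assumes only the public FileRange getters.

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import org.apache.hadoop.fs.FileRange;

// Illustrative sketch of the proposed contract, not the real implementation.
final class OverlapCheck {
  static void validateNonOverlapping(List<? extends FileRange> ranges) {
    List<FileRange> sorted = new ArrayList<>(ranges);
    sorted.sort(Comparator.comparingLong(FileRange::getOffset));
    for (int i = 1; i < sorted.size(); i++) {
      FileRange prev = sorted.get(i - 1);
      FileRange next = sorted.get(i);
      // after sorting by offset, two ranges overlap iff the previous one
      // ends past the start of the next
      if (prev.getOffset() + prev.getLength() > next.getOffset()) {
        throw new IllegalArgumentException(
            "overlapping ranges: " + prev + " and " + next);
      }
    }
  }
}
{code}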



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19098 Vector IO: consistent specified rejection of overlapping ranges [hadoop]

2024-03-26 Thread via GitHub


steveloughran commented on code in PR #6604:
URL: https://github.com/apache/hadoop/pull/6604#discussion_r1539064441


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/VectoredReadUtils.java:
##
@@ -133,26 +172,42 @@ private static void 
readNonByteBufferPositionedReadable(PositionedReadable strea
   /**
* Read bytes from stream into a byte buffer using an
* intermediate byte array.
-   * @param length number of bytes to read.
+   *   
+   * (position, buffer, buffer-offset, length): Void
+   * position:= the position within the file to read data.
+   * buffer := a buffer to read fully `length` bytes into.
+   * buffer-offset := the offset within the buffer to write data
+   * length := the number of bytes to read.
+   *   
+   * The passed in function MUST block until the required length of
+   * data is read, or an exception is thrown.
+   * @param range range to read
* @param buffer buffer to fill.
* @param operation operation to use for reading data.
* @throws IOException any IOE.
*/
-  public static void readInDirectBuffer(int length,
-ByteBuffer buffer,
-Function4RaisingIOE operation) 
throws IOException {
+  public static void readInDirectBuffer(FileRange range,
+  ByteBuffer buffer,
+  Function4RaisingIOE operation)
+  throws IOException {
+
+LOG.debug("Reading {} into a direct buffer", range);
+validateRangeRequest(range);

Review Comment:
   Because it is a public method called by the filesystems, and I'm just being 
rigorous. It is a low-cost probe
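
For context, a minimal usage sketch of the new signature under review follows. The generics of Function4RaisingIOE are elided in the quoted hunk, so the lambda shape below is an assumption based on the javadoc's (position, buffer, buffer-offset, length): Void contract; the surrounding class and the PositionedReadable argument are hypothetical.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.PositionedReadable;
import org.apache.hadoop.fs.VectoredReadUtils;

// Sketch only; not the PR's verbatim call site.
final class ReadSketch {
  static void readOneRange(PositionedReadable stream) throws IOException {
    FileRange range = FileRange.createFileRange(0, 4096);
    ByteBuffer target = ByteBuffer.allocateDirect(range.getLength());
    VectoredReadUtils.readInDirectBuffer(range, target,
        (position, buffer, offset, length) -> {
          // per the new javadoc, this MUST block until 'length' bytes
          // are read or an exception is thrown
          stream.readFully(position, buffer, offset, length);
          return null;
        });
  }
}
{code}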



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830894#comment-17830894
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

shameersss1 commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1539051935


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/impl/CommitOperations.java:
##
@@ -584,7 +584,7 @@ public SinglePendingCommit uploadFileToPendingCommit(File 
localFile,
   destKey,
   uploadId,
   partNumber,
-  size).build();
+  size).build();x

Review Comment:
   ack





> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses S3A 
> Magic Committer. 
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that a downstream application like 
> Spark could get the size of the file which is being written.
> 2. MultiPartUpload(MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (There will be 'x' number 
> of metadata files in S3 if a single task writes to 'x' files) are read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information
> Since these operations happen within the Task JVM, we could optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will store the 
> 1. Metadata of MPU in memory till the Task is committed
> 2. Store the size of the file which can be used by the downstream application 
> to get the file size before it is committed/visible to the output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 call 
> given a Task writes only 1 file.
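
A conceptual sketch of the data structure such a tracker could keep follows; class and method names are illustrative, not the PR's actual code.

{code}
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Conceptual sketch only; the PR's real InMemoryMagicCommitTracker differs.
// Idea: keep each task's pending multipart-upload metadata in the task JVM
// instead of writing .pending files to S3, then drain it at task commit.
final class InMemoryCommitStore {
  // taskAttemptId -> metadata of every multipart upload the task started
  private static final Map<String, List<Object>> PENDING =
      new ConcurrentHashMap<>();

  // called when a stream closes, instead of PUTting a .pending file to S3
  static void track(String taskAttemptId, Object pendingCommit) {
    PENDING.computeIfAbsent(taskAttemptId,
        k -> new CopyOnWriteArrayList<>()).add(pendingCommit);
  }

  // called at task commit: removes and returns the task's metadata so it
  // can be written out as a single aggregate file
  static List<Object> drain(String taskAttemptId) {
    return PENDING.remove(taskAttemptId);
  }
}
{code}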



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-03-26 Thread via GitHub


shameersss1 commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1539051935


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/impl/CommitOperations.java:
##
@@ -584,7 +584,7 @@ public SinglePendingCommit uploadFileToPendingCommit(File 
localFile,
   destKey,
   uploadId,
   partNumber,
-  size).build();
+  size).build();x

Review Comment:
   ack



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18980) S3A credential provider remapping: make extensible

2024-03-26 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18980:

Description: 
A new option fs.s3a.aws.credentials.provider.mapping takes key=value pairs for 
automatic mapping of v1 credential providers to v2 credential providers.


h2. Backporting

There's a followup PR to the main patch which *should* be applied, as it 
hardens the parser.

{code}
HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals 
(ADDENDUM) (#6546)
{code}


  was:
s3afs will now remap the common com.amazonaws credential providers to 
equivalents in the v2 sdk or in hadoop-aws

We could do the same for third party credential providers by taking a key=value 
list in a configuration property and adding to the map. 


h2. Backporting

There's a followup PR to the main patch which *should* be applied, as it 
hardens the parser.

{code}
HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals 
(ADDENDUM) (#6546)
{code}



> S3A credential provider remapping: make extensible
> --
>
> Key: HADOOP-18980
> URL: https://issues.apache.org/jira/browse/HADOOP-18980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0, 3.4.1
>
>
> A new option fs.s3a.aws.credentials.provider.mapping takes key=value pairs 
> for automatic mapping of v1 credential providers to v2 credential providers.
> h2. Backporting
> There's a followup PR to the main patch which *should* be applied, as it 
> hardens the parser.
> {code}
> HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals 
> (ADDENDUM) (#6546)
> {code}
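
To make the option concrete, a hypothetical usage sketch follows; both provider class names are placeholders, not real classes.

{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch of setting the new mapping option.
public class RemapExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // maps a legacy v1 provider class name to its v2 replacement
    conf.set("fs.s3a.aws.credentials.provider.mapping",
        "com.example.auth.V1CustomProvider=com.example.auth.V2CustomProvider");
  }
}
{code}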



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals (ADDENDUM) [hadoop]

2024-03-26 Thread via GitHub


steveloughran commented on PR #6546:
URL: https://github.com/apache/hadoop/pull/6546#issuecomment-2020161193

   in trunk; testing cherrypick to 3.4


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18980. Invalid inputs for getTrimmedStringCollectionSplitByEquals (ADDENDUM) [hadoop]

2024-03-26 Thread via GitHub


steveloughran merged PR #6546:
URL: https://github.com/apache/hadoop/pull/6546


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830882#comment-17830882
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

steveloughran commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1539027620


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/impl/CommitOperations.java:
##
@@ -584,7 +584,7 @@ public SinglePendingCommit uploadFileToPendingCommit(File 
localFile,
   destKey,
   uploadId,
   partNumber,
-  size).build();
+  size).build();x

Review Comment:
   typo





> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses S3A 
> Magic Committer. 
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that a downstream application like 
> Spark could get the size of the file which is being written.
> 2. MultiPartUpload(MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (There will be 'x' number 
> of metadata files in S3 if a single task writes to 'x' files) are read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information
> Since these operations happen within the Task JVM, we could optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will store the 
> 1. Metadata of MPU in memory till the Task is committed
> 2. Store the size of the file which can be used by the downstream application 
> to get the file size before it is committed/visible to the output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 call 
> given a Task writes only 1 file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-03-26 Thread via GitHub


steveloughran commented on code in PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#discussion_r1539027620


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/impl/CommitOperations.java:
##
@@ -584,7 +584,7 @@ public SinglePendingCommit uploadFileToPendingCommit(File 
localFile,
   destKey,
   uploadId,
   partNumber,
-  size).build();
+  size).build();x

Review Comment:
   typo



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] [ABFS] Test Fixes [hadoop]

2024-03-26 Thread via GitHub


anujmodi2021 opened a new pull request, #6676:
URL: https://github.com/apache/hadoop/pull/6676

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-03-26 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi updated HADOOP-19129:
---
Description: 
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.

 # While running the test suite for different combinations of Auth type and 
account type, we add the combination specific configs first and then include 
the account specific configs in core-site.xml file. This will override the 
combination specific configs like auth type if the same config is present in 
account specific config file. To avoid this, we will first include the account 
specific configs and then add the combination specific configs.

Due to the above bug in the test script, some test failures in ABFS were not 
getting our attention. This PR also aims to resolve them. The following are the 
tests fixed:

The following tests are failing on OSS trunk:
 # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Fails 
Intermittently only for HNS enabled accounts. Test wants to assert that 
client.list() does not return more objects than what is configured in 
maxListResults. Assertions should be that number of objects returned should be 
less than expected as server might end up returning even lesser due to 
partition splits along with a continuation token.
 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fail 
when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
config is missing.
 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): Fail 
when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
config is missing.
 # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fail when 
"fs.azure.test.namespace.enabled" config is missing. Ignore the test if config 
is missing.
 # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
"fs.azure.test.appendblob.enabled" is set to true. Test wanted to assert that 
number of read operations can be more in case of append blobs as compared to 
normal blob because of automatic flush. It could be the same as that of a normal blob 
as well.
 # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS: 
Fails for FNS Account only when following config is present:  
fs.azure.account.hns.enabled". Failure is because test wants to assert that 
when driver does not know if the account is HNS enabled or not it makes a 
server call and fails. But above config is letting driver know the account type 
and skipping the head call. Remove these configs from the test specific 
configurations and not from the account settings file.
 # ITestAbfsTerasort.test_120_terasort: Fails with OAuth on HNS account. 
Failure is because of identity mismatch. OAuth uses the service principal OID as 
the owner of the resources, whereas Shared Key uses local system identities. The 
fix is to set configs that allow overwriting the OID with a local identity. This 
requires a new config, set by the user, that specifies which OID has to be 
substituted. OAuth by default uses the Superuser Identity, so the same needs to 
be configured to be overwritten as well.
*New test config: "fs.azure.account.oauth2.client.service.principal.object.id"*
 # ITestExponentialRetryPolicy.testThrottlingIntercept: Fails with SharedKey 
only. Test was using a dummy account to create a new instance of 
AbfsConfiguration and for that dummy account, SharedKey was not configured. Fix 
is to Add non-account specific SharedKey in accountconfigs.
 # ITestAzureBlobFileSystemAuthorization: Fails when contract related configs 
are not present with NPE. Fix is to Ignore if required configs are not present.
 # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): Fails 
when append blob is enabled. Append Blobs does not require explicit flush and 
hence flush/close related asserts do not work for append blobs.
 # ITestAzureBlobFileSystemLease:testTwoCreate(): Fail when 
"fs.azure.test.namespace.enabled" config is missing. Fix is to Ignore the test 
if config is missing.
 # ITestAzureBlobFileSystemChecksum.testAppendWithChecksumAtDifferentOffsets: 
Fail when "fs.azure.test.appendblob.enabled" config is true. Fix is to Ignore 
the test if config is true.

  was:
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.
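
Several of the fixes above amount to "skip the test when a required config is absent". A minimal sketch of that pattern with JUnit's Assume follows; the test class is hypothetical and the actual ABFS changes may be structured differently.

{code}
import org.apache.hadoop.conf.Configuration;
import org.junit.Assume;
import org.junit.Test;

// Minimal sketch of the "ignore the test if config is missing" fix pattern.
public class ExampleAbfsConfigGuard {
  private final Configuration rawConfig = new Configuration();

  @Test
  public void testNeedsNamespaceConfig() {
    String v = rawConfig.get("fs.azure.test.namespace.enabled");
    // Assume.assumeTrue() marks the test as skipped, not failed, when false
    Assume.assumeTrue("fs.azure.test.namespace.enabled not set", v != null);
    // ...the real assertions would follow here...
  }
}
{code}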

Re: [PR] HDFS-17410. [FGL] Client RPCs that changes file attributes supports fine-grained lock [hadoop]

2024-03-26 Thread via GitHub


hadoop-yetus commented on PR #6634:
URL: https://github.com/apache/hadoop/pull/6634#issuecomment-2020088958

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ HDFS-17384 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  63m  7s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  HDFS-17384 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  HDFS-17384 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  HDFS-17384 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  0s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6634/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 136 unchanged 
- 0 fixed = 137 total (was 136)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 263m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6634/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 433m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6634/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6634 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 42ace5baffa9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17384 / 9e2abae70c1b0c964b68fb52715dd53b5d873e82 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 

Re: [PR] HDFS-17408:Reduce the number of quota calculations in FSDirRenameOp [hadoop]

2024-03-26 Thread via GitHub


Hexiaoqiao commented on code in PR #6653:
URL: https://github.com/apache/hadoop/pull/6653#discussion_r1538927685


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java:
##
@@ -681,14 +703,46 @@ private static class RenameOperation {
 this.srcIIP = INodesInPath.replace(srcIIP, srcIIP.length() - 1,
 srcChild);
 // get the counts before rename
-oldSrcCounts.add(withCount.getReferredINode().computeQuotaUsage(bsps));
+
oldSrcCountsInSnapshot.add(withCount.getReferredINode().computeQuotaUsage(bsps));
   } else if (srcChildIsReference) {
 // srcChild is reference but srcChild is not in latest snapshot
 withCount = (INodeReference.WithCount) srcChild.asReference()
 .getReferredINode();
   } else {
 withCount = null;
   }
+  // set quota for src and dst, ignore src is in Snapshot or is Reference
+  this.srcSubTreeCountOp = withCount == null ?
+  quotaPair.getLeft() : Optional.empty();
+  this.dstSubTreeCountOp = quotaPair.getRight();
+}
+
+boolean isSameStoragePolicy() {

Review Comment:
   Got it. Makes sense to me.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-03-26 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi updated HADOOP-19129:
---
Description: 
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.

 # While running the test suite for different combinations of Auth type and 
account type, we add the combination specific configs first and then include 
the account specific configs in core-site.xml file. This will override the 
combination specific configs like auth type if the same config is present in 
account specific config file. To avoid this, we will first include the account 
specific configs and then add the combination specific configs.

Due to the above bug in the test script, some test failures in ABFS were not 
getting our attention. This PR also aims to resolve them. The following are the 
tests fixed:

The following tests are failing on OSS trunk:
 # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Fails 
Intermittently only for HNS enabled accounts. Test wants to assert that 
client.list() does not return more objects than what is configured in 
maxListResults. Assertions should be that number of objects returned should be 
less than expected as server might end up returning even lesser due to 
partition splits along with a continuation token.
 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fail 
when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
config is missing.
 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): Fail 
when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
config is missing.
 # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fail when 
"fs.azure.test.namespace.enabled" config is missing. Ignore the test if config 
is missing.
 # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
"fs.azure.test.appendblob.enabled" is set to true. Test wanted to assert that 
number of read operations can be more in case of append blobs as compared to 
normal blob because of automatic flush. It could be the same as that of a normal blob 
as well.
 # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS: 
Fails for FNS Account only when following config is present:  
fs.azure.account.hns.enabled". Failure is because test wants to assert that 
when driver does not know if the account is HNS enabled or not it makes a 
server call and fails. But above config is letting driver know the account type 
and skipping the head call. Remove these configs from the test specific 
configurations and not from the account settings file.
 # ITestAbfsTerasort.test_120_terasort: Fails with OAuth on HNS account. 
Failure is because of identity mismatch. OAuth uses the service principal OID as 
the owner of the resources, whereas Shared Key uses local system identities. The 
fix is to set configs that allow overwriting the OID with a local identity. This 
requires a new config, set by the user, that specifies which OID has to be 
substituted. OAuth by default uses the Superuser Identity, so the same needs to 
be configured to be overwritten as well.
*New test config: "fs.azure.account.oauth2.client.service.principal.object.id"*
 # ITestExponentialRetryPolicy.testThrottlingIntercept: Fails with SharedKey 
only. Test was using a dummy account to create a new instance of 
AbfsConfiguration and for that dummy account, SharedKey was not configured. Fix 
is to Add non-account specific SharedKey in accountconfigs.
 # ITestAzureBlobFileSystemAuthorization: Fails when contract related configs 
are not present with NPE. Fix is to Ignore if required configs are not present.
 # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): Fails 
when append blob is enabled. Append Blobs does not require explicit flush and 
hence flush/close related asserts do not work for append blobs.
 # ITestAzureBlobFileSystemLease:testTwoCreate(): Fail when 
"fs.azure.test.namespace.enabled" config is missing. Fix is to Ignore the test 
if config is missing.

  was:
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.

 # While running the test suite for different combinations of Auth type and 
account type, we add the combination specific configs first and then include 
the account specific configs in core-site.xml file.

[jira] [Updated] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-03-26 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi updated HADOOP-19129:
---
Description: 
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.

 # While running the test suite for different combinations of Auth type and 
account type, we add the combination specific configs first and then include 
the account specific configs in core-site.xml file. This will override the 
combination specific configs like auth type if the same config is present in 
account specific config file. To avoid this, we will first include the account 
specific configs and then add the combination specific configs.

Due to the above bug in the test script, some test failures in ABFS were not 
getting our attention. This PR also aims to resolve them. The following are the 
tests fixed:

The following tests are failing on OSS trunk:
 # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): Fails 
Intermittently only for HNS enabled accounts. Test wants to assert that 
client.list() does not return more objects than what is configured in 
maxListResults. Assertions should be that number of objects returned should be 
less than expected as server might end up returning even lesser due to 
partition splits along with a continuation token.
 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue(): Fail 
when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
config is missing.


 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse(): Fail 
when "fs.azure.test.namespace.enabled" config is missing. Ignore the test if 
config is missing.


 # ITestGetNameSpaceEnabled.testNonXNSAccount(): Fail when 
"fs.azure.test.namespace.enabled" config is missing. Ignore the test if config 
is missing.


 # ITestAbfsStreamStatistics.testAbfsStreamOps: Fails when 
"fs.azure.test.appendblob.enabled" is set to true. Test wanted to assert that 
number of read operations can be more in case of append blobs as compared to 
normal blob because of automatic flush. It could be the same as that of a normal blob 
as well.


 # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS: 
Fails for FNS Account only when following config is present:  
fs.azure.account.hns.enabled". Failure is because test wants to assert that 
when driver does not know if the account is HNS enabled or not it makes a 
server call and fails. But above config is letting driver know the account type 
and skipping the head call. Remove these configs from the test specific 
configurations and not from the account settings file.


 # ITestAbfsTerasort.test_120_terasort: Fails with OAuth on HNS account. 
Failure is because of identity mismatch. OAuth uses the service principal OID as 
the owner of the resources, whereas Shared Key uses local system identities. The 
fix is to set configs that allow overwriting the OID with a local identity. This 
requires a new config, set by the user, that specifies which OID has to be 
substituted. OAuth by default uses the Superuser Identity, so the same needs to 
be configured to be overwritten as well.
*New test config: "fs.azure.account.oauth2.client.service.principal.object.id"*
 # ITestExponentialRetryPolicy.testThrottlingIntercept: Fails with SharedKey 
only. Test was using a dummy account to create a new instance of 
AbfsConfiguration and for that dummy account, SharedKey was not configured. Fix 
is to Add non-account specific SharedKey in accountconfigs.


 # ITestAzureBlobFileSystemAuthorization: Fails when contract related configs 
are not present with NPE. Fix is to Ignore if required configs are not present.


 # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete(): Fails 
when append blob is enabled. Append Blobs does not require explicit flush and 
hence flush/close related asserts do not work for append blobs.


 # ITestAzureBlobFileSystemLease:testTwoCreate(): Fail when 
"fs.azure.test.namespace.enabled" config is missing. Fix is to Ignore the test 
if config is missing.

  was:
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.


 # While running the test suite for different combinations of Auth type and 
account type, we add the combination specific configs first and then include 
the account specific configs in core-site.xml file.

[jira] [Updated] (HADOOP-19129) ABFS: Fixing Test Script Bug and Some Known test Failures in ABFS Test Suite

2024-03-26 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi updated HADOOP-19129:
---
Description: 
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.


 # While running the test suite for different combinations of Auth type and 
account type, we add the combination specific configs first and then include 
the account specific configs in core-site.xml file. This will override the 
combination specific configs like auth type if the same config is present in 
account specific config file. To avoid this, we will first include the account 
specific configs and then add the combination specific configs.

Due to the above bug in the test script, some test failures in ABFS were not 
getting our attention. This PR also aims to resolve them. The following are the 
tests fixed:

The following tests are failing on OSS trunk:
 # ITestAbfsClient.testListPathWithValueGreaterThanServerMaximum(): 
 # Fails Intermittently only for HNS enabled accounts.
 # Test wants to assert that client.list() does not return more objects than 
what is configured in maxListResults
 # Fix: Assertions should be that number of objects returned should be less 
than expected as server might end up returning even lesser due to partition 
splits along with a continuation token.




 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsTrue()
 # test fail when "fs.azure.test.namespace.enabled" config is missing.
 # Fix: Ignore the test if config is missing.




 # ITestGetNameSpaceEnabled.testGetIsNamespaceEnabledWhenConfigIsFalse()
 # test fail when "fs.azure.test.namespace.enabled" config is missing.
 # Fix: Ignore the test if config is missing.




 # ITestGetNameSpaceEnabled.testNonXNSAccount()
 # test fail when "fs.azure.test.namespace.enabled" config is missing.
 # Fix: Ignore the test if config is missing.




 # ITestAbfsStreamStatistics.testAbfsStreamOps:
 # fails when "fs.azure.test.appendblob.enabled" is set to true.
 # Test wanted to assert that number of read operations can be more in case of 
append blobs as compared to a normal blob because of automatic flush.
 # Fix: It could be the same as that of a normal blob as well.




 # ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS
 # Fails for FNS Account only when following config is present: 
"fs.azure.account.hns.enabled"
 # Failure is because test wants to assert that when driver does not know if 
the account is HNS enabled or not it makes a server call and fails. But above 
config is letting driver know the account type and skipping the head call.
 # Fix: Remove these configs from the test specific configurations and not from 
the account settings file.




 # ITestAbfsTerasort.test_120_terasort:
 # Fails with OAuth on HNS account
 # Failure is because of identity mismatch. OAuth uses the service principal OID 
as the owner of the resources, whereas Shared Key uses local system identities.
 # Fix is to set configs that allow overwriting the OID with a local identity. 
This requires a new config, set by the user, that specifies which OID has to be 
substituted. OAuth by default uses the Superuser Identity, so the same needs to 
be configured to be overwritten as well.




 # ITestExponentialRetryPolicy.testThrottlingIntercept:
 # Fails with SharedKey only
 # test was using a dummy account to create a new instance of AbfsConfiguration 
and for that dummy account, SharedKey was not configured.
 # Fix: Add non-account specific SharedKey in accountconfigs.




 # ITestAzureBlobFileSystemAuthorization:
 # Fails when contract related configs are not present with NPE
 # Fix: Ignore if required configs are not present




 # ITestAzureBlobFileSystemAppend.testCloseOfDataBlockOnAppendComplete()
 # Fails when append blob is enabled
 # Append Blobs does not require explicit flush and hence flush/close related 
asserts do not work for append blobs.




 # ITestAzureBlobFileSystemLease:testTwoCreate()
 # test fail when "fs.azure.test.namespace.enabled" config is missing.
 # Fix: Ignore the test if config is missing.

  was:
Test Script used by ABFS to validate changes has the following two issues:
 # When there are a lot of test failures or when the error message of any failing 
test becomes very large, the regex used today to filter test results does not 
work as expected and fails to report all the failing tests.
To resolve this, we have come up with a new regex that will only target one-line 
test names for reporting them into aggregated test results.


 # While running the test suite for different combinations of Auth type and 
account type, we add the combination specific configs first and then include 
the account specific configs in core-site.xml file.

Re: [PR] HDFS-17408:Reduce the number of quota calculations in FSDirRenameOp [hadoop]

2024-03-26 Thread via GitHub


ThinkerLei commented on code in PR #6653:
URL: https://github.com/apache/hadoop/pull/6653#discussion_r1538895771


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java:
##
@@ -681,14 +703,46 @@ private static class RenameOperation {
 this.srcIIP = INodesInPath.replace(srcIIP, srcIIP.length() - 1,
 srcChild);
 // get the counts before rename
-oldSrcCounts.add(withCount.getReferredINode().computeQuotaUsage(bsps));
+
oldSrcCountsInSnapshot.add(withCount.getReferredINode().computeQuotaUsage(bsps));
   } else if (srcChildIsReference) {
 // srcChild is reference but srcChild is not in latest snapshot
 withCount = (INodeReference.WithCount) srcChild.asReference()
 .getReferredINode();
   } else {
 withCount = null;
   }
+  // set quota for src and dst, ignore src is in Snapshot or is Reference
+  this.srcSubTreeCountOp = withCount == null ?
+  quotaPair.getLeft() : Optional.empty();
+  this.dstSubTreeCountOp = quotaPair.getRight();
+}
+
+boolean isSameStoragePolicy() {

Review Comment:
   This scenario corresponds to the case where the source INode itself has a 
storage policy. Under such circumstances, we consistently need to use its own 
storage policy to calculate the quota, rather than the storage policy of the 
target directory.
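
A tiny sketch of the selection rule being described; all names are hypothetical, not the PR's actual code.

{code}
// Illustrative only; UNSPECIFIED mirrors HDFS's "no policy set" id.
final class PolicyChoice {
  static final byte UNSPECIFIED = 0;

  // a source INode carrying its own storage policy keeps it through the
  // rename, so its quota must be computed with that policy, not the target's
  static byte storagePolicyForQuota(byte srcOwnPolicyId,
      byte dstParentPolicyId) {
    return srcOwnPolicyId != UNSPECIFIED ? srcOwnPolicyId : dstParentPolicyId;
  }
}
{code}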



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]

2024-03-26 Thread via GitHub


shameersss1 commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2019942477

   @steveloughran  - Thanks a lot for the detailed review. I have addressed 
your comments. 
   > note, you will need to do a followup in the docs - but we can get this in and 
tested while you do that...
   What docs are we mentioning here? I have added the details in the 
committer.md file though.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits

2024-03-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830856#comment-17830856
 ] 

ASF GitHub Bot commented on HADOOP-19047:
-

shameersss1 commented on PR #6468:
URL: https://github.com/apache/hadoop/pull/6468#issuecomment-2019942477

   @steveloughran  - Thanks a lot for the detailed review. I have addressed 
your comments. 
   > note, you will need to do a followup in the docs - but we can get this in and 
tested while you do that...
   What docs are we mentioning here? I have added the details in the 
committer.md file though.




> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> The following are the operations which happen within a Task when it uses S3A 
> Magic Committer. 
> *During closing of stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 
> using PUT operation. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152]
>  for more information. This is done so that a downstream application like 
> Spark could get the size of the file which is being written.
> 2. MultiPartUpload(MPU) metadata is uploaded to S3. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176]
>  for more information.
> *During TaskCommit*
> 1. All the MPU metadata which the task wrote to S3 (There will be 'x' number 
> of metadata files in S3 if a single task writes to 'x' files) are read and 
> rewritten to S3 as a single metadata file. Refer 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201]
>  for more information
> Since these operations happen within the Task JVM, we could optimize as well 
> as save cost by storing this information in memory when Task memory usage is 
> not a constraint. Hence the proposal here is to introduce a new MagicCommit 
> Tracker called "InMemoryMagicCommitTracker" which will store the 
> 1. Metadata of MPU in memory till the Task is committed
> 2. Store the size of the file which can be used by the downstream application 
> to get the file size before it is committed/visible to the output path.
> This optimization will save 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 call 
> given a Task writes only 1 file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


