[jira] [Commented] (HADOOP-17752) Remove lock contention in REGISTRY of Configuration
[ https://issues.apache.org/jira/browse/HADOOP-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698221#comment-17698221 ]

ASF GitHub Bot commented on HADOOP-17752:
-----------------------------------------

liangxs closed pull request #3085: HADOOP-17752. Remove lock contention in REGISTRY of Configuration
URL: https://github.com/apache/hadoop/pull/3085

> Remove lock contention in REGISTRY of Configuration
> ---------------------------------------------------
>
>                 Key: HADOOP-17752
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17752
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: common
>            Reporter: Xuesen Liang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Every Configuration instance is put into *Configuration#REGISTRY* by its
> constructor. This operation is guarded by Configuration.class.
> REGISTRY is a *WeakHashMap*, which should be replaced by a *ConcurrentHashMap*.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
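The contention described in the issue can be illustrated with a minimal sketch. The class and method names below are hypothetical stand-ins, not the actual Hadoop code: the point is that a class-wide lock serializes every registration, while a ConcurrentHashMap-backed set admits concurrent writers.

```java
import java.util.Map;
import java.util.Set;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical reduction of the registry under discussion; Configuration
// instances are stood in for by plain Object keys.
class ConfigRegistry {
    // Before: a weak-keyed map guarded by a class-wide lock, so every
    // constructor call contends on the same monitor.
    private static final Map<Object, Object> REGISTRY = new WeakHashMap<>();

    static void registerLocked(Object conf) {
        synchronized (ConfigRegistry.class) {
            REGISTRY.put(conf, null);
        }
    }

    // After: a lock-free concurrent set. Note the trade-off this sketch
    // glosses over: it drops WeakHashMap's weak keys, so a real patch must
    // still allow unreferenced instances to be garbage collected.
    private static final Set<Object> CONCURRENT_REGISTRY =
            ConcurrentHashMap.newKeySet();

    static void registerConcurrent(Object conf) {
        CONCURRENT_REGISTRY.add(conf);
    }

    static int concurrentSize() {
        return CONCURRENT_REGISTRY.size();
    }
}
```

The weak-key semantics are the crux: a plain ConcurrentHashMap pins every Configuration in memory, which is presumably why the discussion on the PR continued rather than being a one-line swap.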
[GitHub] [hadoop] susheel-gupta opened a new pull request, #5465: YARN-11427. + YARN-11404. Pull up the versioned imports in pom of hadoop-mapreduce-client-app to hadoop-project pom
susheel-gupta opened a new pull request, #5465:
URL: https://github.com/apache/hadoop/pull/5465

   ### Description of PR

   ### How was this patch tested?

   ### For code changes:

   - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Commented] (HADOOP-18640) ABFS: Enabling Client-side Backoff only for new requests
[ https://issues.apache.org/jira/browse/HADOOP-18640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698201#comment-17698201 ]

ASF GitHub Bot commented on HADOOP-18640:
-----------------------------------------

sreeb-msft commented on code in PR #5446:
URL: https://github.com/apache/hadoop/pull/5446#discussion_r1130546200

## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java: ##

@@ -222,6 +224,10 @@ AbfsThrottlingIntercept getIntercept() {
     return intercept;
   }

+  boolean shouldThrottleRetries() {
+    return throttleRetries;
+  }
+

Review Comment:
   Right. That makes sense. We can directly invoke it from within AbfsRestOperation.

> ABFS: Enabling Client-side Backoff only for new requests
> --------------------------------------------------------
>
>                 Key: HADOOP-18640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18640
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Sree Bhattacharyya
>            Assignee: Sree Bhattacharyya
>            Priority: Minor
>              Labels: pull-request-available
>
> Enable backoff only for new requests, and disable it for retried requests.
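The behaviour the issue describes can be sketched as a small predicate. This is an illustrative reduction under assumed semantics, not the actual AbfsRestOperation API: client-side backoff always applies to a brand-new request, and applies to a retry only when retry throttling is left enabled.

```java
// Hypothetical sketch: backoff applies to the first attempt unconditionally,
// and to retries only when throttleRetries is enabled. Names are illustrative.
class BackoffPolicySketch {
    private final boolean throttleRetries;

    BackoffPolicySketch(boolean throttleRetries) {
        this.throttleRetries = throttleRetries;
    }

    boolean shouldApplyBackoff(int retryCount) {
        // retryCount == 0 models a brand-new request
        return retryCount == 0 || throttleRetries;
    }
}
```

With `throttleRetries` disabled, retries skip the client-side wait and rely on the server-driven retry policy alone, which is the split the issue title refers to.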
[jira] [Commented] (HADOOP-18624) Leaked calls may cause ObserverNameNode OOM.
[ https://issues.apache.org/jira/browse/HADOOP-18624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698198#comment-17698198 ]

ASF GitHub Bot commented on HADOOP-18624:
-----------------------------------------

xinglin commented on code in PR #5367:
URL: https://github.com/apache/hadoop/pull/5367#discussion_r1130534446

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java: ##

@@ -1485,6 +1487,10 @@ Writable call(RPC.RpcKind rpcKind, Writable rpcRequest,
         releaseAsyncCall();
       }
       throw e;
+    } finally {
+      if (!success) {
+        connection.calls.remove(call.id);

Review Comment:
   Then the change makes sense to me. Thanks.

> Leaked calls may cause ObserverNameNode OOM.
> --------------------------------------------
>
>                 Key: HADOOP-18624
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18624
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>
> Leaked calls may cause ObserverNameNode OOM.
>
> While an Observer NameNode tails edits from the JournalNodes, it cancels
> slow requests with an InterruptedException once a majority of responses
> have succeeded. There is a bug in Client.java: the interrupted call is
> never removed from the calls map, and these leaked calls may cause the
> ObserverNameNode to OOM.
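The fix follows a general pattern: a pending call registered in a connection's call table must be unregistered on every exit path, including interruption. The sketch below is a hypothetical reduction with invented names, not the real Client.java; before the equivalent of the `finally` block, an interrupted call stayed in the table forever.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical reduction of the leak and its fix: each call gets an id,
// is registered in the connection's table, and must be removed whether it
// completes, fails, or is interrupted.
class MiniClient {
    private final AtomicInteger ids = new AtomicInteger();
    final ConcurrentHashMap<Integer, String> calls = new ConcurrentHashMap<>();

    String call(String request, boolean interrupted) throws InterruptedException {
        int id = ids.incrementAndGet();
        calls.put(id, request);
        boolean success = false;
        try {
            if (interrupted) {
                // models the edit-tailing path cancelling a slow request
                throw new InterruptedException("cancelled slow request");
            }
            String response = "ok:" + request;
            calls.remove(id);   // normal completion unregisters the call
            success = true;
            return response;
        } finally {
            // the fix: also clean up when the call did not complete,
            // otherwise the entry (and its request payload) leaks
            if (!success) {
                calls.remove(id);
            }
        }
    }
}
```

On an Observer NameNode the leak matters because cancellation happens on every edit-tailing round, so the table grows without bound under normal operation, not just under errors.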
[jira] [Commented] (HADOOP-18646) Upgrade Netty to 4.1.89.Final
[ https://issues.apache.org/jira/browse/HADOOP-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698193#comment-17698193 ]

ASF GitHub Bot commented on HADOOP-18646:
-----------------------------------------

hadoop-yetus commented on PR #5435:
URL: https://github.com/apache/hadoop/pull/5435#issuecomment-1461343164

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:--------:|:-------:|
| +0 :ok: | reexec | 0m 38s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |

_ trunk Compile Tests _

| +0 :ok: | mvndep | 15m 13s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 40s | | trunk passed |
| +1 :green_heart: | compile | 23m 2s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 20m 42s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | mvnsite | 25m 17s | | trunk passed |
| +1 :green_heart: | javadoc | 8m 9s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 7m 22s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | shadedclient | 34m 55s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 35m 15s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |

_ Patch Compile Tests _

| +0 :ok: | mvndep | 1m 1s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 22m 7s | | the patch passed |
| +1 :green_heart: | compile | 22m 45s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 22m 45s | | the patch passed |
| +1 :green_heart: | compile | 20m 29s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| -1 :x: | javac | 20m 29s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/3/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt) | root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 4 new + 2623 unchanged - 1 fixed = 2627 total (was 2624) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 19m 49s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 7m 47s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 7m 21s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | shadedclient | 35m 48s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| -1 :x: | unit | 724m 48s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/3/artifact/out/patch-unit-root.txt) | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 27s | | The patch does not generate ASF License warnings. |
| | | | 996m 48s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.mapreduce.v2.app.TestRuntimeEstimators |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5435/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5435 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
| uname | Linux 87609d29b95f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git
[GitHub] [hadoop] saxenapranav commented on pull request #5461: Backport Merged pr https://github.com/apache/hadoop/pull/5299 in branch-3.3
saxenapranav commented on PR #5461:
URL: https://github.com/apache/hadoop/pull/5461#issuecomment-1461328265

   > ok. regarding that conflict, looks like it is because [HADOOP-17836](https://issues.apache.org/jira/browse/HADOOP-17836)/ #3281 never got backported. I think I would like that in...let me pull it into branch-3.3 and then you can try to cherrypick again

   Thank you so much @steveloughran for cherry-picking the mentioned PR into branch-3.3. I have backmerged branch-3.3 into this PR. Requesting you to kindly review the PR. Thanks.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5461: Backport Merged pr https://github.com/apache/hadoop/pull/5299 in branch-3.3
hadoop-yetus commented on PR #5461:
URL: https://github.com/apache/hadoop/pull/5461#issuecomment-1461325136

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:--------:|:-------:|
| +0 :ok: | reexec | 0m 37s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |

_ branch-3.3 Compile Tests _

| +1 :green_heart: | mvninstall | 36m 20s | | branch-3.3 passed |
| +1 :green_heart: | compile | 0m 37s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 0m 35s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 0m 43s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 0m 39s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 1m 17s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 24m 16s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 24m 38s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |

_ Patch Compile Tests _

| +1 :green_heart: | mvninstall | 0m 37s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 33s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed |
| +1 :green_heart: | spotbugs | 1m 2s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 23s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| +1 :green_heart: | unit | 2m 4s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | | 95m 45s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5461/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5461 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux efc01f079e87 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 829a9bd95b73a530649034f587c06689bfbf1fab |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~18.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5461/2/testReport/ |
| Max. process+thread count | 552 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5461/2/console |
| versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-18647) x-ms-client-request-id to have some way that identifies retry of an API.
[ https://issues.apache.org/jira/browse/HADOOP-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698186#comment-17698186 ]

ASF GitHub Bot commented on HADOOP-18647:
-----------------------------------------

hadoop-yetus commented on PR #5437:
URL: https://github.com/apache/hadoop/pull/5437#issuecomment-1461316790

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:--------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |

_ trunk Compile Tests _

| +1 :green_heart: | mvninstall | 42m 7s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 39s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 15s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 28s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 23m 45s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |

_ Patch Compile Tests _

| +1 :green_heart: | mvninstall | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 0m 29s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 32s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 4s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 22s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| +1 :green_heart: | unit | 2m 0s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. |
| | | | 102m 23s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5437/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5437 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux c851258d8045 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f6283a923451e48d7e49f7ddd0a65e2a9abeeb20 |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5437/6/testReport/ |
| Max. process+thread count | 536 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5437/6/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-18647) x-ms-client-request-id to have some way that identifies retry of an API.
[ https://issues.apache.org/jira/browse/HADOOP-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698184#comment-17698184 ]

ASF GitHub Bot commented on HADOOP-18647:
-----------------------------------------

saxenapranav commented on PR #5437:
URL: https://github.com/apache/hadoop/pull/5437#issuecomment-1461312778

   @steveloughran, requesting you to kindly review the PR. Thanks.

> x-ms-client-request-id to have some way that identifies retry of an API.
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-18647
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18647
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> If the primaryRequestId in x-ms-client-request-id is an empty string, the
> retry's primaryRequestId has to contain the last part of the clientRequestId
> UUID.
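The rule in the issue description can be sketched as follows. This is an illustrative reduction, not the actual ABFS implementation, and the class and method names are invented: when the primaryRequestId slot of the x-ms-client-request-id header is empty, a retry fills it with the last hyphen-separated segment of the client-request UUID, so all retries of one API call can be correlated in server logs.

```java
import java.util.UUID;

// Hypothetical sketch of the retry-correlation rule described in the issue.
class RequestIdSketch {
    static String primaryRequestIdForRetry(String primaryRequestId,
                                           UUID clientRequestId) {
        if (!primaryRequestId.isEmpty()) {
            return primaryRequestId; // already set by the first attempt
        }
        // fall back to the last segment of the client-request UUID
        String id = clientRequestId.toString();
        return id.substring(id.lastIndexOf('-') + 1);
    }
}
```

Using the UUID's final segment keeps the header short while remaining stable across retries of the same operation, since the clientRequestId of the first attempt is what the retries derive it from.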
[jira] [Commented] (HADOOP-18647) x-ms-client-request-id to have some way that identifies retry of an API.
[ https://issues.apache.org/jira/browse/HADOOP-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698183#comment-17698183 ]

ASF GitHub Bot commented on HADOOP-18647:
-----------------------------------------

hadoop-yetus commented on PR #5437: URL: https://github.com/apache/hadoop/pull/5437#issuecomment-1461308899

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 53s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 41m 57s | | trunk passed |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 14s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 34s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 23m 51s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 17s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 1m 6s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 33s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 58s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | | 102m 14s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5437/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5437 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 0697d3f6f776 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / e15ce58b128e339cbdf984b6eb5c0d870b872f3b |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5437/5/testReport/ |
| Max. process+thread count | 531 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5437/5/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-18647) x-ms-client-request-id to have some way that identifies retry of an API.
[ https://issues.apache.org/jira/browse/HADOOP-18647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698178#comment-17698178 ]

ASF GitHub Bot commented on HADOOP-18647:
-----------------------------------------

saxenapranav commented on PR #5437: URL: https://github.com/apache/hadoop/pull/5437#issuecomment-1461287194

AGGREGATED TEST RESULT

HNS-OAuth
[ERROR] Failures:
[ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[ERROR] Errors:
[ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
[ERROR] Tests run: 138, Failures: 1, Errors: 1, Skipped: 1
[ERROR] Failures:
[ERROR]   ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=26).
[ERROR] Errors:
[ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataContributor:84 » AccessDenied Operat...
[ERROR]   ITestAzureBlobFileSystemOauth.testBlobDataReader:143 » AccessDenied Operation ...
[ERROR] Tests run: 568, Failures: 1, Errors: 3, Skipped: 99
[ERROR] Errors:
[ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The ownership o...
[ERROR] Tests run: 336, Failures: 0, Errors: 1, Skipped: 55

HNS-SharedKey
[ERROR] Failures:
[ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[ERROR] Errors:
[ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
[ERROR] Tests run: 138, Failures: 1, Errors: 1, Skipped: 2
[ERROR] Errors:
[ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[ERROR] Tests run: 568, Failures: 0, Errors: 1, Skipped: 54
[WARNING] Tests run: 336, Failures: 0, Errors: 0, Skipped: 41

NonHNS-SharedKey
[ERROR] Failures:
[ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: : "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider"
[ERROR] Errors:
[ERROR]   TestExponentialRetryPolicy.testOperationOnAccountIdle:216 » AccessDenied Opera...
[ERROR] Tests run: 138, Failures: 1, Errors: 1, Skipped: 2
[ERROR] Failures:
[ERROR]   ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89 There should not be any network I/O (elapsedTimeMs=117).
[ERROR] Errors:
[ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:344->lambda$testAcquireRetry$6:345 » TestTimedOut
[ERROR] Tests run: 568, Failures: 1, Errors: 1, Skipped: 277
[ERROR] Failures:
[ERROR]   ITestAbfsTerasort.test_110_teragen:244->executeStage:211->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89 teragen(1000, abfs://testcontai...@pranavsaxenanonhns.dfs.core.windows.net/ITestAbfsTerasort/sortin) failed expected:<0> but was:<1>
[ERROR] Errors:
[ERROR]   ITestAbfsJobThroughManifestCommitter.test_0420_validateJob » OutputValidation ...
[ERROR]   ITestAbfsManifestCommitProtocol.testCommitLifecycle » OutputValidation `abfs:/...
[ERROR]   ITestAbfsManifestCommitProtocol.testCommitterWithDuplicatedCommit » OutputValidation
[ERROR]   ITestAbfsManifestCommitProtocol.testConcurrentCommitTaskWithSubDir » OutputValidation
[ERROR]   ITestAbfsManifestCommitProtocol.testMapFileOutputCommitter » OutputValidation ...
[ERROR]   ITestAbfsManifestCommitProtocol.testOutputFormatIntegration » OutputValidation
[ERROR]   ITestAbfsManifestCommitProtocol.testParallelJobsToAdjacentPaths » OutputValidation
[ERROR]   ITestAbfsManifestCommitProtocol.testTwoTaskAttemptsCommit » OutputValidation `...
[ERROR] Tests run: 336, Failures: 1, Errors: 8, Skipped: 46

AppendBlob-HNS-OAuth
[ERROR] Failures:
[ERROR]   TestAccountConfiguration.testConfigPropNotFound:386->testMissingConfigKey:399 Expected a org.apache.hadoop.fs.azurebfs.contracts.exceptions.TokenAccessProviderException to be thrown, but got the result: :
[jira] [Commented] (HADOOP-18657) Tune ABFS create() retry logic
[ https://issues.apache.org/jira/browse/HADOOP-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698176#comment-17698176 ]

ASF GitHub Bot commented on HADOOP-18657:
-----------------------------------------

saxenapranav commented on code in PR #5462: URL: https://github.com/apache/hadoop/pull/5462#discussion_r1130438266

## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:

```diff
@@ -621,37 +622,57 @@ private AbfsRestOperation conditionalCreateOverwriteFile(final String relativePa
           isAppendBlob, null, tracingContext);
     } catch (AbfsRestOperationException e) {
+      LOG.debug("Failed to create {}", relativePath, e);
       if (e.getStatusCode() == HttpURLConnection.HTTP_CONFLICT) {
         // File pre-exists, fetch eTag
+        LOG.debug("Fetching etag of {}", relativePath);
         try {
           op = client.getPathStatus(relativePath, false, tracingContext);
         } catch (AbfsRestOperationException ex) {
+          LOG.debug("Failed to to getPathStatus {}", relativePath, ex);
           if (ex.getStatusCode() == HttpURLConnection.HTTP_NOT_FOUND) {
             // Is a parallel access case, as file which was found to be
             // present went missing by this request.
-            throw new ConcurrentWriteOperationDetectedException(
-                "Parallel access to the create path detected. Failing request "
-                + "to honor single writer semantics");
+            // this means the other thread deleted it and the conflict
+            // has implicitly been resolved.
+            LOG.debug("File at {} has been deleted; creation can continue", relativePath);
           } else {
             throw ex;
           }
         }
-        String eTag = op.getResult()
-            .getResponseHeader(HttpHeaderConfigurations.ETAG);
+        String eTag = op != null
+            ? op.getResult().getResponseHeader(HttpHeaderConfigurations.ETAG)
+            : null;
+        LOG.debug("Attempting to create file {} with etag of {}", relativePath, eTag);
         try {
-          // overwrite only if eTag matches with the file properties fetched befpre
-          op = client.createPath(relativePath, true, true, permission, umask,
+          // overwrite only if eTag matches with the file properties fetched or the file
+          // was deleted and there is no etag.
+          // if the etag was not retrieved, overwrite is still false, so will fail
+          // if another process has just created the file
+          op = client.createPath(relativePath, true, eTag != null, permission, umask,
               isAppendBlob, eTag, tracingContext);
         } catch (AbfsRestOperationException ex) {
-          if (ex.getStatusCode() == HttpURLConnection.HTTP_PRECON_FAILED) {
+          final int sc = ex.getStatusCode();
+          LOG.debug("Failed to create file {} with etag {}; status code={}",
+              relativePath, eTag, sc, ex);
+          if (sc == HttpURLConnection.HTTP_PRECON_FAILED
+              || sc == HttpURLConnection.HTTP_CONFLICT) {
```

Review Comment:
Good that the 409 is handled too; it can arise when `etag != null` makes the overwrite argument to `client.createPath` false. It would be good to document this in comments and log each case accordingly.

Log 1: a file exists and our process holds its eTag; when we go back to createPath with that eTag, another process may have replaced the file in the meantime, which leads to a 412. That case is covered by the added code:

```
final ConcurrentWriteOperationDetectedException ex2 =
    new ConcurrentWriteOperationDetectedException(
        AbfsErrors.ERR_PARALLEL_ACCESS_DETECTED
            + " Path =\"" + relativePath + "\""
            + "; Status code =" + sc
            + "; etag = \"" + eTag + "\""
            + "; error =" + ex.getErrorMessage());
```

Suggested log 2: when the eTag lookup found no file, we retry createPath with overwrite = false; this produces a 409 if some other process created a file at the same path in the meantime.

Also, a 409 here is the same situation this method started with. Should we re-enter the 409 handling at https://github.com/apache/hadoop/blob/7f9ca101e2ae057a42829883596085732f8d5fa6/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L624 a bounded number of times? For example, with a threshold of 2: if we get a 409 at this line, handle the 409 once more, and fail after that. @snvijaya @anmolanmol1234 @sreeb-msft, what do you feel?

> Tune ABFS create() retry logic
> ------------------------------
>
>                 Key: HADOOP-18657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18657
>
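The conditional create-overwrite flow discussed in the review above can be reduced to a small sketch. This is a toy model, not the Hadoop code: a `ConcurrentHashMap` stands in for the remote store (value = etag), and the class and method names are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the conditional create-overwrite flow under review.
// A map plays the remote store (value = etag); all names are invented.
public class ConditionalCreateSketch {
    static final Map<String, String> store = new ConcurrentHashMap<>();

    // First try to create without overwrite; on conflict (HTTP 409 in the
    // real client) fetch the current etag and retry, overwriting only when
    // an etag was found -- i.e. only when we know which version we replace.
    static String conditionalCreateOverwrite(String path, String newEtag) {
        if (store.putIfAbsent(path, newEtag) == null) {
            return "created";                 // no pre-existing file
        }
        String eTag = store.get(path);        // conflict: fetch current etag
        if (eTag == null) {
            // models the race where the other writer deleted the file between
            // the conflict and the lookup; plain create can then continue
            store.put(path, newEtag);
            return "created-after-delete";
        }
        // etag-conditional overwrite; in the real client a 412 (etag changed)
        // or 409 (file recreated) at this point signals yet another concurrent
        // writer and maps to ConcurrentWriteOperationDetectedException
        store.replace(path, eTag, newEtag);
        return "overwritten";
    }

    public static void main(String[] args) {
        System.out.println(conditionalCreateOverwrite("/a", "e1")); // created
        System.out.println(conditionalCreateOverwrite("/a", "e2")); // overwritten
    }
}
```

The point the reviewer raises maps onto the last branch: after the etag-conditional attempt fails with a conflict, the situation is equivalent to the initial one, so a bounded re-entry (e.g. at most one more round) is a plausible design rather than failing immediately.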
[jira] [Commented] (HADOOP-18487) protobuf-2.5.0 dependencies => provided
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698171#comment-17698171 ]

ASF GitHub Bot commented on HADOOP-18487:
-----------------------------------------

hadoop-yetus commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1461262587

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 59s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 50s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 38s | | trunk passed |
| +1 :green_heart: | compile | 23m 13s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 20m 29s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 3m 46s | | trunk passed |
| +1 :green_heart: | mvnsite | 15m 9s | | trunk passed |
| +1 :green_heart: | javadoc | 12m 9s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 12m 16s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 53s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| -1 :x: | spotbugs | 4m 16s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/14/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 21m 13s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 21m 37s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 9m 43s | | the patch passed |
| +1 :green_heart: | compile | 22m 31s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| -1 :x: | javac | 22m 31s | [/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/14/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt) | root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 1 new + 2823 unchanged - 1 fixed = 2824 total (was 2824) |
| +1 :green_heart: | compile | 20m 34s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| -1 :x: | javac | 20m 34s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/14/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt) | root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 1 new + 2620 unchanged - 1 fixed = 2621 total (was 2621) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 3m 38s | | root: The patch generated 0 new + 273 unchanged - 5 fixed = 273 total (was 278) |
| +1 :green_heart: | mvnsite | 15m 0s | | the patch passed |
| +1 :green_heart: | javadoc | 12m 6s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 12m 9s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 37s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 21m 2s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
[GitHub] [hadoop] hadoop-yetus commented on pull request #4996: HADOOP-18487. protobuf 2.5.0 marked as provided.
hadoop-yetus commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1461262587 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 50s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 38s | | trunk passed | | +1 :green_heart: | compile | 23m 13s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 20m 29s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | checkstyle | 3m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 15m 9s | | trunk passed | | +1 :green_heart: | javadoc | 12m 9s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 12m 16s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +0 :ok: | spotbugs | 0m 53s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 4m 16s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/14/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. 
| | +1 :green_heart: | shadedclient | 21m 13s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 21m 37s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 9m 43s | | the patch passed | | +1 :green_heart: | compile | 22m 31s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | -1 :x: | javac | 22m 31s | [/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/14/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt) | root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 1 new + 2823 unchanged - 1 fixed = 2824 total (was 2824) | | +1 :green_heart: | compile | 20m 34s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | -1 :x: | javac | 20m 34s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/14/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt) | root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 1 new + 2620 unchanged - 1 fixed = 2621 total (was 2621) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 3m 38s | | root: The patch generated 0 new + 273 unchanged - 5 fixed = 273 total (was 278) | | +1 :green_heart: | mvnsite | 15m 0s | | the patch passed | | +1 :green_heart: | javadoc | 12m 6s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 12m 9s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +0 :ok: | spotbugs | 0m 37s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 21m 2s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 39s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 18m 28s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 47s | | hadoop-hdfs-client in the
[jira] [Commented] (HADOOP-18644) Add bswap support for LoongArch
[ https://issues.apache.org/jira/browse/HADOOP-18644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698167#comment-17698167 ] ASF GitHub Bot commented on HADOOP-18644: - Hexiaoqiao commented on PR #5453: URL: https://github.com/apache/hadoop/pull/5453#issuecomment-1461236437 @iwasakims Would you mind giving another review? Thanks. > Add bswap support for LoongArch > --- > > Key: HADOOP-18644 > URL: https://issues.apache.org/jira/browse/HADOOP-18644 > Project: Hadoop Common > Issue Type: Improvement > Components: native >Affects Versions: 3.4.0 >Reporter: zhaixiaojuan >Assignee: zhaixiaojuan >Priority: Major > Labels: pull-request-available > > The LoongArch architecture (LoongArch) is a RISC-style Instruction Set > Architecture (ISA). > Documentation: > ISA: > [https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html] > ABI: > [https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html] > More docs can be found at: > [https://loongson.github.io/LoongArch-Documentation/README-EN.html] -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] Hexiaoqiao commented on pull request #5453: HADOOP-18644. Add bswap support for LoongArch
Hexiaoqiao commented on PR #5453: URL: https://github.com/apache/hadoop/pull/5453#issuecomment-1461236437 @iwasakims Would you mind giving another review? Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
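The PR above adds LoongArch byte-swap support to Hadoop's native code. For reference, the semantics any native bswap routine has to match are those of the JDK's built-in `Integer.reverseBytes`/`Long.reverseBytes`; the sketch below is illustrative (the class name is an assumption, not from the patch):

```java
// Illustrative reference for the byte-swap semantics discussed in
// HADOOP-18644: a native bswap implementation (x86, ARM, LoongArch, ...)
// must agree with these JDK built-ins, which reverse the byte order of
// a 32-bit or 64-bit value.
class BswapSketch {
    static int bswap32(int v) {
        return Integer.reverseBytes(v); // 0x12345678 -> 0x78563412
    }

    static long bswap64(long v) {
        return Long.reverseBytes(v);
    }

    public static void main(String[] args) {
        System.out.printf("%08x%n", bswap32(0x12345678)); // prints 78563412
    }
}
```

bswap is an involution, so applying it twice returns the original value; that property makes a convenient sanity check for a new native implementation.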
[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
Hexiaoqiao commented on code in PR #5460: URL: https://github.com/apache/hadoop/pull/5460#discussion_r1130375460 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java: ## @@ -791,6 +792,9 @@ private void offerService() throws Exception { shouldServiceRun = false; return; } +if (InvalidBlockReportLeaseException.class.getName().equals(reClass)) { + fullBlockReportLeaseId = 0; Review Comment: > It's really the isBlockReportDue() method that controls whether a new one should be sent or not, and that is based on time since the last one. Then blockReport() updates the time after a successful block report, but if it gets an exception, like this change causes, it will not update the time and so it will try again on the next heartbeat if it gets a new lease. Thanks for the detailed explanation. Makes sense to me. > perhaps we can add a one-liner log to indicate that the particular FBR went through this trouble (i.e. log report id and lease id) +1 from my side. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
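The behaviour under review, including the one-liner log the reviewers suggest, can be condensed into a sketch. The class and method names below are illustrative assumptions, not Hadoop's actual BPServiceActor code:

```java
// Illustrative sketch (NOT Hadoop's BPServiceActor) of the handling
// discussed above: when the namenode rejects a full block report (FBR)
// with InvalidBlockReportLeaseException, the datanode clears its cached
// lease id so that a fresh lease is requested on the next heartbeat and
// the FBR is retried once isBlockReportDue() says one is due again.
class FbrLeaseSketch {
    static final String INVALID_LEASE_CLASS =
        "org.apache.hadoop.hdfs.server.protocol.InvalidBlockReportLeaseException";

    long fullBlockReportLeaseId;

    FbrLeaseSketch(long leaseId) {
        this.fullBlockReportLeaseId = leaseId;
    }

    /** Returns true if the lease was reset and the FBR will be retried later. */
    boolean onRemoteException(String reClass, long reportId) {
        if (INVALID_LEASE_CLASS.equals(reClass)) {
            // One-liner log suggested in the review: record report id and lease id.
            System.out.println("FBR " + reportId + " rejected for invalid lease "
                + fullBlockReportLeaseId + "; requesting a new lease on next heartbeat");
            fullBlockReportLeaseId = 0; // 0 means "no lease held"
            return true;
        }
        return false; // other remote exceptions keep their existing handling
    }
}
```

The key point from the discussion: the exception path deliberately does not update the last-report timestamp, so the due-time check naturally schedules the retry.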
[jira] [Commented] (HADOOP-18640) ABFS: Enabling Client-side Backoff only for new requests
[ https://issues.apache.org/jira/browse/HADOOP-18640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698157#comment-17698157 ] ASF GitHub Bot commented on HADOOP-18640: - saxenapranav commented on code in PR #5446: URL: https://github.com/apache/hadoop/pull/5446#discussion_r1130357512 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java: ## @@ -222,6 +224,10 @@ AbfsThrottlingIntercept getIntercept() { return intercept; } + boolean shouldThrottleRetries() { +return throttleRetries; + } + Review Comment: Either of the two is fine. If we keep it in AbfsClient, it will be stored at the account level, and we don't need to check anything for as long as the AbfsClient object is alive. In AbfsRestOperation, a new field will be created, since a new object is created for each API call. If we keep it in AbfsRestOperation, it is something that is actually required in AbfsRestOperation. Though it doesn't matter either way; you may please resolve this comment. > ABFS: Enabling Client-side Backoff only for new requests > > > Key: HADOOP-18640 > URL: https://issues.apache.org/jira/browse/HADOOP-18640 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sree Bhattacharyya >Assignee: Sree Bhattacharyya >Priority: Minor > Labels: pull-request-available > > Enabling backoff only for new requests that happen, and disabling for retried > requests. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18640) ABFS: Enabling Client-side Backoff only for new requests
[ https://issues.apache.org/jira/browse/HADOOP-18640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698159#comment-17698159 ] ASF GitHub Bot commented on HADOOP-18640: - saxenapranav commented on code in PR #5446: URL: https://github.com/apache/hadoop/pull/5446#discussion_r1130357920 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java: ## @@ -268,10 +269,62 @@ DefaultValue = DEFAULT_ACCOUNT_OPERATION_IDLE_TIMEOUT_MS) private int accountOperationIdleTimeout; + /** + * Analysis Period for client-side throttling + */ Review Comment: nit: spacing. > ABFS: Enabling Client-side Backoff only for new requests > > > Key: HADOOP-18640 > URL: https://issues.apache.org/jira/browse/HADOOP-18640 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sree Bhattacharyya >Assignee: Sree Bhattacharyya >Priority: Minor > Labels: pull-request-available > > Enabling backoff only for new requests that happen, and disabling for retried > requests. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] saxenapranav commented on a diff in pull request #5446: HADOOP-18640: [ABFS] Enabling Client-side Backoff only for new requests
saxenapranav commented on code in PR #5446: URL: https://github.com/apache/hadoop/pull/5446#discussion_r1130357920 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java: ## @@ -268,10 +269,62 @@ DefaultValue = DEFAULT_ACCOUNT_OPERATION_IDLE_TIMEOUT_MS) private int accountOperationIdleTimeout; + /** + * Analysis Period for client-side throttling + */ Review Comment: nit: spacing. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] saxenapranav commented on a diff in pull request #5446: HADOOP-18640: [ABFS] Enabling Client-side Backoff only for new requests
saxenapranav commented on code in PR #5446: URL: https://github.com/apache/hadoop/pull/5446#discussion_r1130357512 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java: ## @@ -222,6 +224,10 @@ AbfsThrottlingIntercept getIntercept() { return intercept; } + boolean shouldThrottleRetries() { +return throttleRetries; + } + Review Comment: Either of the two is fine. If we keep it in AbfsClient, it will be stored at the account level, and we don't need to check anything for as long as the AbfsClient object is alive. In AbfsRestOperation, a new field will be created, since a new object is created for each API call. If we keep it in AbfsRestOperation, it is something that is actually required in AbfsRestOperation. Though it doesn't matter either way; you may please resolve this comment. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
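The gating discussed in the review above can be sketched as a simple predicate. The names below are illustrative assumptions, not the actual ABFS driver classes:

```java
// Illustrative sketch of HADOOP-18640's intent (names assumed, not the
// real ABFS code): client-side throttling backoff is always applied to
// first attempts, while retried requests skip it unless an account-level
// throttleRetries flag says otherwise.
class BackoffGateSketch {
    private final boolean throttleRetries; // account-level, AbfsClient-style

    BackoffGateSketch(boolean throttleRetries) {
        this.throttleRetries = throttleRetries;
    }

    boolean shouldThrottleRetries() {
        return throttleRetries;
    }

    /** Backoff applies to new requests always, to retries only if enabled. */
    boolean shouldApplyBackoff(int retryCount) {
        return retryCount == 0 || shouldThrottleRetries();
    }
}
```

Keeping the flag on the client object (rather than on each per-call operation object) matches the reviewer's point that it is account-level state with the same lifetime as the client.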
[jira] [Commented] (HADOOP-18653) LogLevel servlet to determine log impl before using setLevel
[ https://issues.apache.org/jira/browse/HADOOP-18653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698135#comment-17698135 ] ASF GitHub Bot commented on HADOOP-18653: - hadoop-yetus commented on PR #5456: URL: https://github.com/apache/hadoop/pull/5456#issuecomment-1461128752 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 3s | | trunk passed | | +1 :green_heart: | compile | 23m 7s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 20m 24s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | checkstyle | 1m 11s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 46s | | trunk passed | | +1 :green_heart: | javadoc | 1m 17s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 2m 44s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 1s | | the patch passed | | +1 :green_heart: | compile | 22m 25s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 22m 25s | | the patch passed | | +1 :green_heart: | compile | 20m 33s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | javac | 20m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 9s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 40s | | the patch passed | | +1 :green_heart: | javadoc | 1m 5s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 2m 43s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 26s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 19s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 1m 3s | | The patch does not generate ASF License warnings. 
| | | | 207m 32s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5456/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5456 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 80c4ea3fdcc6 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5ff719cbc28ba6a2bce291f50359fbcc1941e046 | | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5456/2/testReport/ | | Max. process+thread count | 1549 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5456/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. >
[GitHub] [hadoop] hadoop-yetus commented on pull request #5456: HADOOP-18653. LogLevel servlet to determine log impl before using setLevel
hadoop-yetus commented on PR #5456: URL: https://github.com/apache/hadoop/pull/5456#issuecomment-1461128752 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 39m 3s | | trunk passed | | +1 :green_heart: | compile | 23m 7s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 20m 24s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | checkstyle | 1m 11s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 46s | | trunk passed | | +1 :green_heart: | javadoc | 1m 17s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 52s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 2m 44s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 1s | | the patch passed | | +1 :green_heart: | compile | 22m 25s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 22m 25s | | the patch passed | | +1 :green_heart: | compile | 20m 33s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | javac | 20m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 9s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 40s | | the patch passed | | +1 :green_heart: | javadoc | 1m 5s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 2m 43s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 26s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 18m 19s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 1m 3s | | The patch does not generate ASF License warnings. 
| | | | 207m 32s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5456/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5456 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 80c4ea3fdcc6 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5ff719cbc28ba6a2bce291f50359fbcc1941e046 | | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5456/2/testReport/ | | Max. process+thread count | 1549 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5456/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about
[jira] [Commented] (HADOOP-18652) Path.suffix raises NullPointerException
[ https://issues.apache.org/jira/browse/HADOOP-18652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698131#comment-17698131 ] Viraj Jasani commented on HADOOP-18652: --- Sure, you can refer to [https://cwiki.apache.org/confluence/display/hadoop/how+to+contribute#HowToContribute-Provideapatch] Thanks > Path.suffix raises NullPointerException > --- > > Key: HADOOP-18652 > URL: https://issues.apache.org/jira/browse/HADOOP-18652 > Project: Hadoop Common > Issue Type: Bug > Components: hdfs-client >Reporter: Patrick Grandjean >Priority: Minor > > Calling the Path.suffix method on root raises a NullPointerException. Tested > with hadoop-client-api 3.3.2 > Scenario: > {code:java} > import org.apache.hadoop.fs.* > Path root = new Path("/") > root.getParent == null // true > root.suffix("bar") // NPE is raised > {code} > Stack: > {code:none} > 23/03/03 15:13:18 ERROR Uncaught throwable from user code: > java.lang.NullPointerException > at org.apache.hadoop.fs.Path.(Path.java:104) > at org.apache.hadoop.fs.Path.(Path.java:93) > at org.apache.hadoop.fs.Path.suffix(Path.java:361) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
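The failure mode in the report can be reproduced with a self-contained stand-in for `Path` (this is not Hadoop's `org.apache.hadoop.fs.Path`; it only mimics the reported behaviour and shows a null-safe variant as one possible fix):

```java
// Self-contained stand-in (NOT org.apache.hadoop.fs.Path) mimicking the
// reported NPE: suffix() dereferences getParent(), which is null for "/".
class SuffixSketch {
    final String path;

    SuffixSketch(String path) {
        this.path = path;
    }

    SuffixSketch getParent() {
        if ("/".equals(path)) {
            return null; // root has no parent, as in the report
        }
        int i = path.lastIndexOf('/');
        return new SuffixSketch(i == 0 ? "/" : path.substring(0, i));
    }

    String getName() {
        return "/".equals(path) ? "" : path.substring(path.lastIndexOf('/') + 1);
    }

    /** Mirrors the buggy pattern: throws NullPointerException on the root. */
    String suffixUnsafe(String suffix) {
        String parentPath = getParent().path; // NPE here when path == "/"
        return ("/".equals(parentPath) ? "" : parentPath) + "/" + getName() + suffix;
    }

    /** A null-safe variant that treats the missing parent as an empty prefix. */
    String suffixSafe(String suffix) {
        SuffixSketch parent = getParent();
        String prefix = (parent == null || "/".equals(parent.path)) ? "" : parent.path;
        return prefix + "/" + getName() + suffix;
    }
}
```

Whether the real fix should return "/bar" for the root case or reject it with a clearer exception is a design decision for the JIRA; the sketch only demonstrates where the null creeps in.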
[GitHub] [hadoop] virajjasani commented on pull request #5464: HDFS-16944 Add audit log for RouterAdminServer to save privileged operation log seperately.
virajjasani commented on PR #5464: URL: https://github.com/apache/hadoop/pull/5464#issuecomment-1461110917 For RBF, if we really want audit logs, they should cover all operations, like add/update mount table entry, etc., and not just name service APIs. If we only need auditing for name service RPCs, then we should rather name the logger `NameserviceManager.class.getName() + ".audit"` instead of `RouterAdminServer.class.getName() + ".audit"`, correct? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18649) CLA and CRLA appenders to be replaced with RFA
[ https://issues.apache.org/jira/browse/HADOOP-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated HADOOP-18649: -- Release Note: ContainerLogAppender and ContainerRollingLogAppender both have functionality quite similar to RollingFileAppender. Both are marked as IS.Unstable. Before migrating to log4j2, we are replacing them with RollingFileAppender. Any downstreamers using them should do the same. > CLA and CRLA appenders to be replaced with RFA > -- > > Key: HADOOP-18649 > URL: https://issues.apache.org/jira/browse/HADOOP-18649 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > > ContainerLogAppender and ContainerRollingLogAppender both have functionality quite > similar to RollingFileAppender. Maintenance of custom appenders for > Log4J2 is costly when there is only a very minor difference in comparison with the > built-in appender provided by Log4J. > The goal of this sub-task is to replace both the ContainerLogAppender and > ContainerRollingLogAppender custom appenders with RollingFileAppender without > changing any system properties already being used to determine file name, > file size, backup index, pattern layout properties, etc. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
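The replacement described in the release note can be sketched as a log4j.properties fragment. The appender name and the system-property names below are assumptions for illustration only, not copied from the actual patch:

```properties
# Hypothetical sketch: the custom container appender swapped for the
# built-in RollingFileAppender, still driven by system properties for
# file location, size cap, and backup count (names assumed).
log4j.appender.CLA=org.apache.log4j.RollingFileAppender
log4j.appender.CLA.File=${yarn.app.container.log.dir}/syslog
log4j.appender.CLA.MaxFileSize=${yarn.app.container.log.filesize}
log4j.appender.CLA.MaxBackupIndex=${yarn.app.container.log.backups}
log4j.appender.CLA.layout=org.apache.log4j.PatternLayout
log4j.appender.CLA.layout.ConversionPattern=%d{ISO8601} %p [%t] %c: %m%n
```

`MaxFileSize` and `MaxBackupIndex` are the standard log4j 1.x RollingFileAppender settings, which is why no custom rolling logic needs to be maintained.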
[jira] [Updated] (HADOOP-18654) Remove unused custom appender TaskLogAppender
[ https://issues.apache.org/jira/browse/HADOOP-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated HADOOP-18654: -- Release Note: TaskLogAppender is IA.Private and IS.Unstable. Removing it before migrating to log4j2 as it is no longer used within Hadoop. Any downstreamers using it should use RollingFileAppender instead. > Remove unused custom appender TaskLogAppender > - > > Key: HADOOP-18654 > URL: https://issues.apache.org/jira/browse/HADOOP-18654 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > > TaskLogAppender is no longer being used in the codebase. The only past references > we have are from old release notes (HADOOP-7308, MAPREDUCE-3208, > MAPREDUCE-2372, HADOOP-1355). > Before we migrate to log4j2, it would be good to remove TaskLogAppender. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
virajjasani commented on PR #5460: URL: https://github.com/apache/hadoop/pull/5460#issuecomment-1461046706 @sodonnel also, I am curious: was it just a specific log (only one case, e.g. the lease was expired) or a combination of logs from `checkLease(DatanodeDescriptor dn, long monotonicNowMs, long id)` that you have seen in various issues? I wonder if `lease expiry` or `invalid lease` are worth having some dedicated metrics in `NameNodeActivity`; maybe not, as with this patch the subsequent attempt should anyway have a new lease available to it from the response of the heartbeat API. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
virajjasani commented on code in PR #5460: URL: https://github.com/apache/hadoop/pull/5460#discussion_r1130195994 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java: ## @@ -791,6 +792,9 @@ private void offerService() throws Exception { shouldServiceRun = false; return; } +if (InvalidBlockReportLeaseException.class.getName().equals(reClass)) { + fullBlockReportLeaseId = 0; Review Comment: Or maybe `reportId` and `leaseId` could be added as constructor arguments to `InvalidBlockReportLeaseException`. That way the `RemoteException in offerService` log would likely print them anyway? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
virajjasani commented on code in PR #5460: URL: https://github.com/apache/hadoop/pull/5460#discussion_r1130188008 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java: ## @@ -791,6 +792,9 @@ private void offerService() throws Exception { shouldServiceRun = false; return; } +if (InvalidBlockReportLeaseException.class.getName().equals(reClass)) { + fullBlockReportLeaseId = 0; Review Comment: As we don't expect to reach here frequently (hopefully the datanode is able to acquire the lease successfully most of the time), perhaps we can add a one-liner log to indicate that the particular FBR went through this trouble (i.e. log report id and lease id)? (just in case it helps further debugging) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on a diff in pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
virajjasani commented on code in PR #5460: URL: https://github.com/apache/hadoop/pull/5460#discussion_r1130188008 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java: ## @@ -791,6 +792,9 @@ private void offerService() throws Exception { shouldServiceRun = false; return; } +if (InvalidBlockReportLeaseException.class.getName().equals(reClass)) { + fullBlockReportLeaseId = 0; Review Comment: As we don't expect to reach here frequently (hopefully the datanode is able to acquire the lease successfully most of the time), perhaps we can add a one-liner log to indicate that the particular block id went through this trouble? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] virajjasani commented on pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
virajjasani commented on PR #5460:
URL: https://github.com/apache/hadoop/pull/5460#issuecomment-1461015907

```
Duplicate classes found:

  Found in:
    org.apache.hadoop:hadoop-client-minicluster:jar:3.4.0-SNAPSHOT:compile
    org.apache.hadoop:hadoop-client-api:jar:3.4.0-SNAPSHOT:compile
  Duplicate classes:
    org/apache/hadoop/hdfs/server/protocol/package-info.class
```

I believe this could be avoided by excluding it from either of the poms:

```
--- a/hadoop-client-modules/hadoop-client-api/pom.xml
+++ b/hadoop-client-modules/hadoop-client-api/pom.xml
@@ -126,6 +126,12 @@
                 org/apache/hadoop/yarn/client/api/package-info.class
+
+              org.apache.hadoop:hadoop-hdfs
+
+                org/apache/hadoop/hdfs/server/protocol/package-info.class
+
+
```

For instance, the above patch might help. I haven't tested this, but I guess it might work.
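The patch quoted above lost its XML markup in the mail archive. Since hadoop-client-api builds its jar with the maven-shade-plugin, an exclusion of that shape would plausibly be expressed as the following shade filter; this is a reconstruction under that assumption, not the exact committed change:

```xml
<!-- Hypothetical maven-shade-plugin filter: drop the duplicated
     package-info.class from the shaded hadoop-hdfs content. -->
<filter>
  <artifact>org.apache.hadoop:hadoop-hdfs</artifact>
  <excludes>
    <exclude>org/apache/hadoop/hdfs/server/protocol/package-info.class</exclude>
  </excludes>
</filter>
```

Excluding the class from only one of the two client modules is enough to satisfy the duplicate-classes check, since the check flags the same class file appearing in both jars.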
[GitHub] [hadoop] slfan1989 commented on pull request #5382: YARN-8972. [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size.
slfan1989 commented on PR #5382:
URL: https://github.com/apache/hadoop/pull/5382#issuecomment-1461013411

@goiri Thank you very much for your help reviewing the code! I will continue to follow up on YARN-11376 and YARN-11445.
[jira] [Commented] (HADOOP-18649) CLA and CRLA appenders to be replaced with RFA
[ https://issues.apache.org/jira/browse/HADOOP-18649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698092#comment-17698092 ]

ASF GitHub Bot commented on HADOOP-18649:

virajjasani commented on PR #5448:
URL: https://github.com/apache/hadoop/pull/5448#issuecomment-1460911573

@Apache9 @jojochuang I just checked usages with https://github.com/search?l=Java+Properties=org%3Aapache+org.apache.hadoop.yarn.ContainerRollingLogAppender=Code and https://github.com/search?l=Java+Properties=org%3Aapache+org.apache.hadoop.yarn.ContainerLogAppender=Code and it looks like only YARN has the references.

> CLA and CRLA appenders to be replaced with RFA
> ----------------------------------------------
>
> Key: HADOOP-18649
> URL: https://issues.apache.org/jira/browse/HADOOP-18649
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> ContainerLogAppender and ContainerRollingLogAppender both have functionality quite similar to RollingFileAppender. Maintaining custom appenders through the Log4j 2 migration is costly when they differ only slightly from the built-in appender Log4j provides.
> The goal of this sub-task is to replace both ContainerLogAppender and ContainerRollingLogAppender with RollingFileAppender without changing any of the system properties already used to determine file name, file size, backup index, pattern-layout properties, etc.
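For context on the proposed replacement: Log4j 1.x's built-in RollingFileAppender already exposes the file-name, maximum-file-size, and backup-index knobs that the custom container appenders wired up. A hypothetical log4j.properties fragment might look like the following; the appender name, file path, and values are placeholders, not Hadoop's actual container-log configuration:

```properties
# Hypothetical sketch: RollingFileAppender standing in for
# ContainerRollingLogAppender. File, MaxFileSize and MaxBackupIndex are the
# built-in RollingFileAppender properties that map onto the size and
# backup-index settings the custom appenders exposed.
log4j.appender.CRLA=org.apache.log4j.RollingFileAppender
log4j.appender.CRLA.File=${yarn.app.container.log.dir}/syslog
log4j.appender.CRLA.MaxFileSize=256MB
log4j.appender.CRLA.MaxBackupIndex=1
log4j.appender.CRLA.layout=org.apache.log4j.PatternLayout
log4j.appender.CRLA.layout.ConversionPattern=%d{ISO8601} %p [%t] %c: %m%n
```

Because the replacement reuses the existing system properties for these values, downstream log4j.properties files that set file name, size, and backup index should keep working unchanged.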
[GitHub] [hadoop] goiri merged pull request #5382: YARN-8972. [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size.
goiri merged PR #5382:
URL: https://github.com/apache/hadoop/pull/5382
[jira] [Commented] (HADOOP-18654) Remove unused custom appender TaskLogAppender
[ https://issues.apache.org/jira/browse/HADOOP-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698063#comment-17698063 ]

ASF GitHub Bot commented on HADOOP-18654:

virajjasani commented on PR #5457:
URL: https://github.com/apache/hadoop/pull/5457#issuecomment-1460746880

I will update the Jira release notes to indicate that downstream users should use RFA and not TLA (TLA is not public/stable anyway).

> Remove unused custom appender TaskLogAppender
> ---------------------------------------------
>
> Key: HADOOP-18654
> URL: https://issues.apache.org/jira/browse/HADOOP-18654
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> TaskLogAppender is no longer used in the codebase. The only past references are in old release notes (HADOOP-7308, MAPREDUCE-3208, MAPREDUCE-2372, HADOOP-1355).
> Before we migrate to Log4j 2, it would be good to remove TaskLogAppender.
[jira] [Commented] (HADOOP-18654) Remove unused custom appender TaskLogAppender
[ https://issues.apache.org/jira/browse/HADOOP-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698053#comment-17698053 ]

ASF GitHub Bot commented on HADOOP-18654:

virajjasani commented on PR #5457:
URL: https://github.com/apache/hadoop/pull/5457#issuecomment-1460724938

> If this is a Hadoop 3.4.0 only change then it's okay.

Oh yes, this is for 3.4.0 only; at least that's what I am proposing for this Jira.
[jira] [Commented] (HADOOP-18654) Remove unused custom appender TaskLogAppender
[ https://issues.apache.org/jira/browse/HADOOP-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698052#comment-17698052 ]

ASF GitHub Bot commented on HADOOP-18654:

jojochuang commented on PR #5457:
URL: https://github.com/apache/hadoop/pull/5457#issuecomment-1460722816

The only use of this class among Apache projects is in Chukwa, which is in the attic. But a number of Apache projects (Ambari and Bigtop, among others) have log4j.properties referencing TaskLogAppender (https://github.com/search?l=Java+Properties=2=org%3Aapache+org.apache.hadoop.mapred.TaskLogAppender=Code). If this is a Hadoop 3.4.0-only change then it's okay.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5464: HDFS-16944 Add audit log for RouterAdminServer to save privileged operation log seperately.
hadoop-yetus commented on PR #5464: URL: https://github.com/apache/hadoop/pull/5464#issuecomment-1460612337 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 25s | | trunk passed | | +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | checkstyle | 0m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 48s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 1m 31s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 29s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5464/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | mvnsite | 0m 34s | | the patch passed | | +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 1m 22s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 24s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 21m 1s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 123m 29s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5464/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5464 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux d6d53c4ce83d 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 78b6013e1edec591cf0d66d748f46268d8aa43e3 | | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5464/1/testReport/ | | Max. process+thread count | 2205 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5464/1/console | | versions |
[GitHub] [hadoop] hadoop-yetus commented on pull request #5424: HDFS-16931. Observer nn delete blocks asynchronously when tail OP_DEL…
hadoop-yetus commented on PR #5424: URL: https://github.com/apache/hadoop/pull/5424#issuecomment-1460617729 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 15s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 59s | | trunk passed | | +1 :green_heart: | compile | 1m 28s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | checkstyle | 1m 7s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 30s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 3m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 55s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 26m 13s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 22s | | the patch passed | | +1 :green_heart: | compile | 1m 22s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 1m 22s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5424/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 118 unchanged - 0 fixed = 125 total (was 118) | | +1 :green_heart: | mvnsite | 1m 23s | | the patch passed | | -1 :x: | javadoc | 0m 53s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5424/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1. | | +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 3m 27s | | the patch passed | | +1 :green_heart: | shadedclient | 26m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 283m 54s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5424/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 401m 19s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestNameEditsConfigs | | | hadoop.fs.TestHDFSFileContextMainOperations | | | hadoop.hdfs.TestAclsEndToEnd | | | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshot | | | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5424/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5424 | |
[GitHub] [hadoop] hadoop-yetus commented on pull request #5463: Bump snakeyaml from 1.33 to 2.0 in /hadoop-project
hadoop-yetus commented on PR #5463: URL: https://github.com/apache/hadoop/pull/5463#issuecomment-1460505727 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 44s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 27s | | trunk passed | | +1 :green_heart: | compile | 0m 23s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 22s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | shadedclient | 60m 29s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 0m 14s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 14s | | the patch passed | | +1 :green_heart: | compile | 0m 14s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | javac | 0m 14s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | shadedclient | 21m 5s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 17s | | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 86m 56s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5463/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5463 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint | | uname | Linux 500c2e2f868f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7456e359a7628b8a6d01857cb32bef1f5cddf362 | | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5463/1/testReport/ | | Max. process+thread count | 562 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5463/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To
[jira] [Commented] (HADOOP-18657) Tune ABFS create() retry logic
[ https://issues.apache.org/jira/browse/HADOOP-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698009#comment-17698009 ] ASF GitHub Bot commented on HADOOP-18657: - hadoop-yetus commented on PR #5462: URL: https://github.com/apache/hadoop/pull/5462#issuecomment-1460491719 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 38m 13s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | checkstyle | 0m 35s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | | trunk passed | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 1m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 31s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 20s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | +1 :green_heart: | spotbugs | 1m 3s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 11s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 93m 52s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5462/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5462 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 0304206b7a96 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / a4122e276ad2264c6303eecc3584b63f865dd353 | | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5462/1/testReport/ | | Max. process+thread count | 627 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5462/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus
[jira] [Commented] (HADOOP-18656) ABFS: Support for Pagination in Recursive Directory Delete
[ https://issues.apache.org/jira/browse/HADOOP-18656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697996#comment-17697996 ] Steve Loughran commented on HADOOP-18656: - isn't it an O(1) operation on a HNS store? > ABFS: Support for Pagination in Recursive Directory Delete > --- > > Key: HADOOP-18656 > URL: https://issues.apache.org/jira/browse/HADOOP-18656 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sree Bhattacharyya >Assignee: Sree Bhattacharyya >Priority: Minor >
[GitHub] [hadoop] curie71 opened a new pull request, #5464: HDFS-16944 Add audit log for RouterAdminServer to save privileged operation log separately.
curie71 opened a new pull request, #5464: URL: https://github.com/apache/hadoop/pull/5464 HDFS-16944. We found that in other components (like the namenode in HDFS or the resourcemanager in YARN), *debug logs and audit logs are recorded separately*, except in RouterAdminServer. There are lots of +simple+ logs to help with debugging for the *developers* who have access to the source code. And there are also audit logs recording +privileged operations+ with more +detailed+ information to help *system admins* understand what happened in a real run. There is an example in yarn: ```java public static final Log auditLog = LogFactory.getLog( FSNamesystem.class.getName() + ".audit"); try { // Safety userUgi = UserGroupInformation.getCurrentUser(); user = userUgi.getShortUserName(); } catch (IOException ie) { LOG.warn("Unable to get the current user.", ie); // debug log RMAuditLogger.logFailure(user, AuditConstants.SUBMIT_APP_REQUEST, ie.getMessage(), "ClientRMService", "Exception in submitting application", applicationId, callerContext, submissionContext.getQueue()); // audit log throw RPCUtil.getRemoteException(ie); } ``` So I suggest adding an audit log for *RouterAdminServer* to save privileged operation logs separately. The logger's name may be: ```java // hadoop security public static final Logger AUDITLOG = LoggerFactory.getLogger( "SecurityLogger." + ServiceAuthorizationManager.class.getName()); // namenode public static final Log auditLog = LogFactory.getLog( FSNamesystem.class.getName() + ".audit"); ``` I chose className.audit in the end, and log to AUDITLOG instead of LOG for the privileged operations that call the permission check function _checkSuperuserPrivilege_. ### Description of PR ### How was this patch tested? ### For code changes: - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
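The debug-vs-audit split the PR describes — a developer-facing LOG plus a separate AUDITLOG named `<className>.audit` for privileged operations — can be sketched with two distinct logger names. This is a minimal illustration, not the PR's code: it uses java.util.logging so it is self-contained, whereas the actual patch would use Hadoop's log4j/slf4j loggers, and the `RouterAdminServer` logger names and messages here are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class AuditLogSketch {
  // Two separate loggers: one for developer debugging, one for audit records,
  // mirroring the FSNamesystem ".audit" naming convention quoted in the PR.
  static final Logger LOG = Logger.getLogger("RouterAdminServer");
  static final Logger AUDITLOG = Logger.getLogger("RouterAdminServer.audit");

  public static int debugCount;
  public static int auditCount;

  // Attach an in-memory handler so the sketch can verify which logger got what.
  static List<String> capture(Logger logger) {
    List<String> records = new ArrayList<>();
    logger.setUseParentHandlers(false);  // also stops audit records propagating to LOG
    logger.setLevel(Level.ALL);
    logger.addHandler(new Handler() {
      @Override public void publish(LogRecord r) { records.add(r.getMessage()); }
      @Override public void flush() {}
      @Override public void close() {}
    });
    return records;
  }

  public static void main(String[] args) {
    List<String> debugRecords = capture(LOG);
    List<String> auditRecords = capture(AUDITLOG);

    // A privileged operation writes one audit record; incidental detail goes to LOG.
    String user = "admin";  // hypothetical caller
    LOG.fine("checking superuser privilege for " + user);
    AUDITLOG.info("allowed=true ugi=" + user + " cmd=refreshRouterAdmin");

    debugCount = debugRecords.size();
    auditCount = auditRecords.size();
    if (debugCount != 1 || auditCount != 1) {
      throw new AssertionError("each logger should have exactly one record");
    }
    System.out.println("audit: " + auditRecords.get(0));
  }
}
```

Because `RouterAdminServer.audit` is a child of `RouterAdminServer` in the logger hierarchy, disabling parent handlers is what keeps the two record streams separate — the same property the PR wants for its audit file.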
[jira] [Commented] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697976#comment-17697976 ] ASF GitHub Bot commented on HADOOP-18655: - rohit-kb commented on PR #5458: URL: https://github.com/apache/hadoop/pull/5458#issuecomment-1460389510 Thanks @steveloughran for the review and the update. Since there is no reference to LdapIdentityBackend, I assume we are not porting it to branch-3.3 then? In which case, I will mark the jira as resolved. > Upgrade Kerby to 2.0.3 due to CVE-2023-25613 > > > Key: HADOOP-18655 > URL: https://issues.apache.org/jira/browse/HADOOP-18655 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 3.3.4 >Reporter: Rohit Kumar Badeau >Assignee: Rohit Kumar Badeau >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > > An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache > Kerby before 2.0.3. > CVSSv3 Score:- 9.8(Critical) > [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[GitHub] [hadoop] rohit-kb commented on pull request #5458: HADOOP-18655. Upgrade kerby to 2.0.3 due to CVE-2023-25613
rohit-kb commented on PR #5458: URL: https://github.com/apache/hadoop/pull/5458#issuecomment-1460389510 Thanks @steveloughran for the review and the update. Since there is no reference to LdapIdentityBackend, I assume we are not porting it to branch-3.3 then? In which case, I will mark the jira as resolved.
[jira] [Updated] (HADOOP-17836) ABFS connection reset reporting excessively noisy
[ https://issues.apache.org/jira/browse/HADOOP-17836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17836: Parent: (was: HADOOP-17736) Issue Type: Bug (was: Sub-task) > ABFS connection reset reporting excessively noisy > - > > Key: HADOOP-17836 > URL: https://issues.apache.org/jira/browse/HADOOP-17836 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 3.3.1 > Environment: long haul FTTH link to Azure Cardiff ~50 miles away; > laptop with wifi >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 50m > Remaining Estimate: 0h > > Large 10GB download from abfs failing after 50 minutes, connection reset > Assumptions > * Azure storage/routers etc get bored of long-lived HTTP connections > * ABFS client doesn't recover from socket exceptions
[jira] [Commented] (HADOOP-18653) LogLevel servlet to determine log impl before using setLevel
[ https://issues.apache.org/jira/browse/HADOOP-18653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697972#comment-17697972 ] ASF GitHub Bot commented on HADOOP-18653: - steveloughran commented on code in PR #5456: URL: https://github.com/apache/hadoop/pull/5456#discussion_r1129648239 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java: ## @@ -338,14 +341,18 @@ public void doGet(HttpServletRequest request, HttpServletResponse response out.println(MARKER + "Submitted Class Name: " + logName + ""); -Logger log = Logger.getLogger(logName); +org.slf4j.Logger log = LoggerFactory.getLogger(logName); out.println(MARKER + "Log Class: " + log.getClass().getName() +""); if (level != null) { out.println(MARKER + "Submitted Level: " + level + ""); } -process(log, level, out); +if (GenericsUtil.isLog4jLogger(logName)) { + process(Logger.getLogger(logName), level, out); +} else { + out.println("Sorry, " + log.getClass() + " not supported."); Review Comment: text to explain "log4j loggers only" ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java: ## @@ -89,10 +89,30 @@ public static boolean isLog4jLogger(Class clazz) { } Logger log = LoggerFactory.getLogger(clazz); try { - Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); + Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); Review Comment: make this a constant string and use everywhere it is needed ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java: ## @@ -89,10 +89,30 @@ public static boolean isLog4jLogger(Class clazz) { } Logger log = LoggerFactory.getLogger(clazz); try { - Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); + Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); return log4jClass.isInstance(log); } catch (ClassNotFoundException e) { return false; } } + + /** + * Determine whether the log of 
the given logger is of Log4J implementation. + * + * @param logger the logger name, usually class name as string. + * @return true if the logger uses Log4J implementation. + */ + public static boolean isLog4jLogger(String logger) { +if (logger == null) { + return false; +} +Logger log = LoggerFactory.getLogger(logger); +try { + Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); Review Comment: if the class isn't found, then that fact can be remembered in an atomic boolean so future loads/checks skipped. > LogLevel servlet to determine log impl before using setLevel > > > Key: HADOOP-18653 > URL: https://issues.apache.org/jira/browse/HADOOP-18653 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > > LogLevel GET API is used to set log level for a given class name dynamically. > While we have cleaned up the commons-logging references, it would be great to > determine whether slf4j log4j adapter is in the classpath before allowing > client to set the log level. > Proposed changes: > * Use slf4j logger factory to get the log reference for the given class name > * Use generic utility to identify if the slf4j log4j adapter is in the > classpath before using log4j API to update the log level > * If the log4j adapter is not in the classpath, report error in the output
[GitHub] [hadoop] steveloughran commented on a diff in pull request #5456: HADOOP-18653. LogLevel servlet to determine log impl before using setLevel
steveloughran commented on code in PR #5456: URL: https://github.com/apache/hadoop/pull/5456#discussion_r1129648239 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java: ## @@ -338,14 +341,18 @@ public void doGet(HttpServletRequest request, HttpServletResponse response out.println(MARKER + "Submitted Class Name: " + logName + ""); -Logger log = Logger.getLogger(logName); +org.slf4j.Logger log = LoggerFactory.getLogger(logName); out.println(MARKER + "Log Class: " + log.getClass().getName() +""); if (level != null) { out.println(MARKER + "Submitted Level: " + level + ""); } -process(log, level, out); +if (GenericsUtil.isLog4jLogger(logName)) { + process(Logger.getLogger(logName), level, out); +} else { + out.println("Sorry, " + log.getClass() + " not supported."); Review Comment: text to explain "log4j loggers only" ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java: ## @@ -89,10 +89,30 @@ public static boolean isLog4jLogger(Class clazz) { } Logger log = LoggerFactory.getLogger(clazz); try { - Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); + Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); Review Comment: make this a constant string and use everywhere it is needed ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericsUtil.java: ## @@ -89,10 +89,30 @@ public static boolean isLog4jLogger(Class clazz) { } Logger log = LoggerFactory.getLogger(clazz); try { - Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); + Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); return log4jClass.isInstance(log); } catch (ClassNotFoundException e) { return false; } } + + /** + * Determine whether the log of the given logger is of Log4J implementation. + * + * @param logger the logger name, usually class name as string. + * @return true if the logger uses Log4J implementation. 
+ */ + public static boolean isLog4jLogger(String logger) { +if (logger == null) { + return false; +} +Logger log = LoggerFactory.getLogger(logger); +try { + Class log4jClass = Class.forName("org.slf4j.impl.Log4jLoggerAdapter"); Review Comment: if the class isn't found, then that fact can be remembered in an atomic boolean so future loads/checks skipped.
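The last review comment suggests memoizing the `Class.forName` probe so repeated `isLog4jLogger` calls skip the classpath lookup once the result is known. The sketch below is one way to implement that suggestion, not the committed patch; only the adapter class name is taken from the diff, and an `AtomicReference<Boolean>` stands in for the suggested atomic boolean so the "not probed yet" state is explicit.

```java
import java.util.concurrent.atomic.AtomicReference;

public class Log4jAdapterCheck {
  // Class name probed by GenericsUtil in the patch; a constant, per the review.
  private static final String LOG4J_ADAPTER = "org.slf4j.impl.Log4jLoggerAdapter";

  // Memoized result of the classpath probe: null means "not probed yet".
  private static final AtomicReference<Boolean> ADAPTER_PRESENT =
      new AtomicReference<>();

  static boolean isLog4jAdapterPresent() {
    Boolean cached = ADAPTER_PRESENT.get();
    if (cached == null) {
      boolean found;
      try {
        Class.forName(LOG4J_ADAPTER);
        found = true;
      } catch (ClassNotFoundException e) {
        found = false;  // remember the miss so future calls skip Class.forName
      }
      ADAPTER_PRESENT.compareAndSet(null, found);
      cached = ADAPTER_PRESENT.get();
    }
    return cached;
  }

  public static void main(String[] args) {
    // First call performs the probe; the second returns the cached value.
    boolean first = isLog4jAdapterPresent();
    boolean second = isLog4jAdapterPresent();
    if (first != second) {
      throw new AssertionError("cached probe should be stable");
    }
    System.out.println("log4j adapter present: " + first);
  }
}
```

Run standalone without slf4j-log4j12 on the classpath, the probe reports false; in a Hadoop JVM with the adapter present it would report true, and either way the class lookup happens at most once per JVM.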
[jira] [Commented] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697971#comment-17697971 ] Steve Loughran commented on HADOOP-18655: - update: there is no reference to LdapIdentityBackend, therefore i don't think we are exposed. not targeting 3.3.5 with this > Upgrade Kerby to 2.0.3 due to CVE-2023-25613 > > > Key: HADOOP-18655 > URL: https://issues.apache.org/jira/browse/HADOOP-18655 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 3.3.4 >Reporter: Rohit Kumar Badeau >Assignee: Rohit Kumar Badeau >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache > Kerby before 2.0.3. > CVSSv3 Score:- 9.8(Critical) > [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[jira] [Updated] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18655: Priority: Minor (was: Major) > Upgrade Kerby to 2.0.3 due to CVE-2023-25613 > > > Key: HADOOP-18655 > URL: https://issues.apache.org/jira/browse/HADOOP-18655 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 3.3.4 >Reporter: Rohit Kumar Badeau >Assignee: Rohit Kumar Badeau >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > > An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache > Kerby before 2.0.3. > CVSSv3 Score:- 9.8(Critical) > [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[jira] [Assigned] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-18655: --- Assignee: Rohit Kumar Badeau > Upgrade Kerby to 2.0.3 due to CVE-2023-25613 > > > Key: HADOOP-18655 > URL: https://issues.apache.org/jira/browse/HADOOP-18655 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 3.3.4 >Reporter: Rohit Kumar Badeau >Assignee: Rohit Kumar Badeau >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache > Kerby before 2.0.3. > CVSSv3 Score:- 9.8(Critical) > [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[jira] [Updated] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18655: Fix Version/s: 3.4.0 (was: 3.3.5) > Upgrade Kerby to 2.0.3 due to CVE-2023-25613 > > > Key: HADOOP-18655 > URL: https://issues.apache.org/jira/browse/HADOOP-18655 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 3.3.4 >Reporter: Rohit Kumar Badeau >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache > Kerby before 2.0.3. > CVSSv3 Score:- 9.8(Critical) > [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[jira] [Commented] (HADOOP-18487) protobuf-2.5.0 dependencies => provided
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697968#comment-17697968 ] ASF GitHub Bot commented on HADOOP-18487: - steveloughran commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1460343979 spotbugs still unhappy. will have to explicitly exclude > protobuf-2.5.0 dependencies => provided > --- > > Key: HADOOP-18487 > URL: https://issues.apache.org/jira/browse/HADOOP-18487 > Project: Hadoop Common > Issue Type: Improvement > Components: build, ipc >Affects Versions: 3.3.4 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > > uses of protobuf 2.5 and RpcEnginej have been deprecated since 3.3.0 in > HADOOP-17046 > while still keeping those files around (for a long time...), how about we > make the protobuf 2.5.0 export off hadoop common and hadoop-hdfs *provided*, > rather than *compile* > that way, if apps want it for their own apis, they have to explicitly ask for > it, but at least our own scans don't break. > i have no idea what will happen to the rest of the stack at this point, it > will be "interesting" to see
[GitHub] [hadoop] steveloughran commented on pull request #4996: HADOOP-18487. protobuf 2.5.0 marked as provided.
steveloughran commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1460343979 spotbugs still unhappy. will have to explicitly exclude
[jira] [Commented] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697966#comment-17697966 ] ASF GitHub Bot commented on HADOOP-18655: - steveloughran commented on PR #5458: URL: https://github.com/apache/hadoop/pull/5458#issuecomment-1460341615 merged to trunk. thanks! @rohit-kb can you do a pr with this patch cherrypicked into branch-3.3? that'll get into people's hands faster. we are doing a 3.3.5 RC this week, but I am reluctant to do a last minute change here. How exposed do you think hadoop apps are to this? > Upgrade Kerby to 2.0.3 due to CVE-2023-25613 > > > Key: HADOOP-18655 > URL: https://issues.apache.org/jira/browse/HADOOP-18655 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 3.3.4 >Reporter: Rohit Kumar Badeau >Priority: Major > Labels: pull-request-available > Fix For: 3.3.5 > > > An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache > Kerby before 2.0.3. > CVSSv3 Score:- 9.8(Critical) > [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[GitHub] [hadoop] dependabot[bot] opened a new pull request, #5463: Bump snakeyaml from 1.33 to 2.0 in /hadoop-project
dependabot[bot] opened a new pull request, #5463: URL: https://github.com/apache/hadoop/pull/5463 Bumps [snakeyaml](https://bitbucket.org/snakeyaml/snakeyaml) from 1.33 to 2.0. Commits https://bitbucket.org/snakeyaml/snakeyaml/commits/c98ffba9cd065d1ead94c9ec580d8b5a5966c9d3;>c98ffba issue 561: add negative test case https://bitbucket.org/snakeyaml/snakeyaml/commits/e2ca740df5510abf4f8de49c56e4ec53ec7b5624;>e2ca740 Use Maven wrapper on github https://bitbucket.org/snakeyaml/snakeyaml/commits/49d91a1e2d7fbd756f1d5f380b0c07e13546222d;>49d91a1 Fix target for github https://bitbucket.org/snakeyaml/snakeyaml/commits/19e331dd722325758263bfdfdd1d72872d8451bd;>19e331d Disable toolchain for github https://bitbucket.org/snakeyaml/snakeyaml/commits/42c781297909a3c7e61a234071540b91c6bf5834;>42c7812 Cobertura plugin does not work https://bitbucket.org/snakeyaml/snakeyaml/commits/03c82b5d8ef3525ba407f3a96cbb6d5f6f9d364d;>03c82b5 Rename GlobalTagRejectionTest to be run by Maven https://bitbucket.org/snakeyaml/snakeyaml/commits/6e8cd890716dfe22d5ba56f9a592225fb7fa2803;>6e8cd89 Remove cobertura https://bitbucket.org/snakeyaml/snakeyaml/commits/d9b0f480b1a63aca4678da7ab1915fcfc7d2a856;>d9b0f48 Improve Javadoc https://bitbucket.org/snakeyaml/snakeyaml/commits/519791aa35b5415494234cd91c250ba5ed9fa80a;>519791a Run install and site goals under docker https://bitbucket.org/snakeyaml/snakeyaml/commits/82f33d25ae189560ebeed29bbe3aff5bc44556fc;>82f33d2 Merge branch 'master' into add-module-info Additional commits viewable in https://bitbucket.org/snakeyaml/snakeyaml/branches/compare/snakeyaml-2.0..snakeyaml-1.33;>compare view [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=org.yaml:snakeyaml=maven=1.33=2.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it 
yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- Dependabot commands and options You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/apache/hadoop/network/alerts). 
[GitHub] [hadoop] steveloughran commented on pull request #5458: HADOOP-18655. Upgrade kerby to 2.0.3 due to CVE-2023-25613
steveloughran commented on PR #5458: URL: https://github.com/apache/hadoop/pull/5458#issuecomment-1460341615 merged to trunk. thanks! @rohit-kb can you do a pr with this patch cherrypicked into branch-3.3? that'll get into people's hands faster. we are doing a 3.3.5 RC this week, but I am reluctant to do a last minute change here. How exposed do you think hadoop apps are to this?
[jira] [Commented] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697965#comment-17697965 ] ASF GitHub Bot commented on HADOOP-18655: - steveloughran merged PR #5458: URL: https://github.com/apache/hadoop/pull/5458 > Upgrade Kerby to 2.0.3 due to CVE-2023-25613 > > > Key: HADOOP-18655 > URL: https://issues.apache.org/jira/browse/HADOOP-18655 > Project: Hadoop Common > Issue Type: Task > Components: build >Affects Versions: 3.3.4 >Reporter: Rohit Kumar Badeau >Priority: Major > Labels: pull-request-available > Fix For: 3.3.5 > > > An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache > Kerby before 2.0.3. > CVSSv3 Score:- 9.8(Critical) > [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[GitHub] [hadoop] steveloughran merged pull request #5458: HADOOP-18655. Upgrade kerby to 2.0.3 due to CVE-2023-25613
steveloughran merged PR #5458: URL: https://github.com/apache/hadoop/pull/5458
[jira] [Commented] (HADOOP-18646) Upgrade Netty to 4.1.89.Final
[ https://issues.apache.org/jira/browse/HADOOP-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697964#comment-17697964 ] ASF GitHub Bot commented on HADOOP-18646: - nao-it commented on PR #5435: URL: https://github.com/apache/hadoop/pull/5435#issuecomment-1460340312 > all the test failures are unrelated; we have jiras on these being flaky/brittle to timing issues. > > could you rebase as there's now merge problems with the license file...this will trigger a new run and we can see if the failures go away Done > Upgrade Netty to 4.1.89.Final > - > > Key: HADOOP-18646 > URL: https://issues.apache.org/jira/browse/HADOOP-18646 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.3.4 >Reporter: Aleksandr Nikolaev >Assignee: Aleksandr Nikolaev >Priority: Major > Labels: pull-request-available > > h4. Netty version - 4.1.89 has fix CVEs: > [CVE-2022-41881|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-41881] >
[GitHub] [hadoop] nao-it commented on pull request #5435: HADOOP-18646 update Netty dependency
nao-it commented on PR #5435:
URL: https://github.com/apache/hadoop/pull/5435#issuecomment-1460340312

   > all the test failures are unrelated; we have jiras on these being flaky/brittle to timing issues.
   >
   > could you rebase as there's now merge problems with the license file...this will trigger a new run and we can see if the failures go away

   Done
[jira] [Updated] (HADOOP-18655) Upgrade Kerby to 2.0.3 due to CVE-2023-25613
[ https://issues.apache.org/jira/browse/HADOOP-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18655:
------------------------------------
    Fix Version/s: 3.3.5

> Upgrade Kerby to 2.0.3 due to CVE-2023-25613
> --------------------------------------------
>
>                 Key: HADOOP-18655
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18655
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: build
>    Affects Versions: 3.3.4
>            Reporter: Rohit Kumar Badeau
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.5
>
> An LDAP Injection vulnerability exists in the LdapIdentityBackend of Apache
> Kerby before 2.0.3.
> CVSSv3 Score: 9.8 (Critical)
> [https://nvd.nist.gov/vuln/detail/CVE-2023-25613]
[jira] [Commented] (HADOOP-18657) Tune ABFS create() retry logic
[ https://issues.apache.org/jira/browse/HADOOP-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697951#comment-17697951 ]

ASF GitHub Bot commented on HADOOP-18657:
-----------------------------------------

steveloughran commented on PR #5462:
URL: https://github.com/apache/hadoop/pull/5462#issuecomment-1460326545

   fyi @saxenapranav @mehakmeet as well as improving diagnostics, this patch also changes the recovery code by handling a deletion of the target file between the first failure and the retry.

> Tune ABFS create() retry logic
> ------------------------------
>
>                 Key: HADOOP-18657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18657
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/azure
>    Affects Versions: 3.3.5
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> Based on experience trying to debug this happening:
> # add debug statements when create() fails
> # generated exception text to reference string shared with tests, path and error code
> # generated exception to include inner exception for full stack trace
> Currently the retry logic is:
> # create(overwrite=false)
> # if HTTP_CONFLICT/409 raised, call HEAD
> # use etag in create(path, overwrite=true, etag)
> # special handling of error HTTP_PRECON_FAILED = 412
> There's a race condition here: if the existing file is deleted between steps 1 and 2, the retry should succeed, but currently a 404 from the HEAD is escalated to a failure.
> Proposed changes:
> # if HEAD is 404, leave etag == null and continue
> # special handling of 412 also to handle 409
[GitHub] [hadoop] steveloughran commented on pull request #5462: HADOOP-18657. Tune ABFS create() retry logic
steveloughran commented on PR #5462:
URL: https://github.com/apache/hadoop/pull/5462#issuecomment-1460326545

   fyi @saxenapranav @mehakmeet as well as improving diagnostics, this patch also changes the recovery code by handling a deletion of the target file between the first failure and the retry.
[GitHub] [hadoop] steveloughran commented on pull request #5461: Backport Merged pr https://github.com/apache/hadoop/pull/5299 in branch-3.3
steveloughran commented on PR #5461:
URL: https://github.com/apache/hadoop/pull/5461#issuecomment-1460321462

   ok. regarding that conflict, looks like it is because HADOOP-17836 / #3281 never got backported. I think I would like that in... let me pull it into branch-3.3 and then you can try to cherrypick again
[jira] [Updated] (HADOOP-18657) Tune ABFS create() retry logic
[ https://issues.apache.org/jira/browse/HADOOP-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-18657:
------------------------------------
    Labels: pull-request-available (was: )

> Tune ABFS create() retry logic
> ------------------------------
>
>                 Key: HADOOP-18657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18657
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/azure
>    Affects Versions: 3.3.5
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> Based on experience trying to debug this happening:
> # add debug statements when create() fails
> # generated exception text to reference string shared with tests, path and error code
> # generated exception to include inner exception for full stack trace
> Currently the retry logic is:
> # create(overwrite=false)
> # if HTTP_CONFLICT/409 raised, call HEAD
> # use etag in create(path, overwrite=true, etag)
> # special handling of error HTTP_PRECON_FAILED = 412
> There's a race condition here: if the existing file is deleted between steps 1 and 2, the retry should succeed, but currently a 404 from the HEAD is escalated to a failure.
> Proposed changes:
> # if HEAD is 404, leave etag == null and continue
> # special handling of 412 also to handle 409
[jira] [Commented] (HADOOP-18657) Tune ABFS create() retry logic
[ https://issues.apache.org/jira/browse/HADOOP-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697947#comment-17697947 ]

ASF GitHub Bot commented on HADOOP-18657:
-----------------------------------------

steveloughran opened a new pull request, #5462:
URL: https://github.com/apache/hadoop/pull/5462

   ### Description of PR

   Tunes how abfs handles a failure during create which may be due to concurrency *or* load-related retries happening in the store.

   * better logging
   * happy with the conflict being resolved by the file being deleted
   * more diagnostics in failure raised

   ### How was this patch tested?

   lease test run already; doing full hadoop-azure test run

   ### For code changes:

   - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [X] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> Tune ABFS create() retry logic
> ------------------------------
>
>                 Key: HADOOP-18657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18657
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/azure
>    Affects Versions: 3.3.5
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>
> Based on experience trying to debug this happening:
> # add debug statements when create() fails
> # generated exception text to reference string shared with tests, path and error code
> # generated exception to include inner exception for full stack trace
> Currently the retry logic is:
> # create(overwrite=false)
> # if HTTP_CONFLICT/409 raised, call HEAD
> # use etag in create(path, overwrite=true, etag)
> # special handling of error HTTP_PRECON_FAILED = 412
> There's a race condition here: if the existing file is deleted between steps 1 and 2, the retry should succeed, but currently a 404 from the HEAD is escalated to a failure.
> Proposed changes:
> # if HEAD is 404, leave etag == null and continue
> # special handling of 412 also to handle 409
[jira] [Created] (HADOOP-18657) Tune ABFS create() retry logic
Steve Loughran created HADOOP-18657:
---------------------------------------

             Summary: Tune ABFS create() retry logic
                 Key: HADOOP-18657
                 URL: https://issues.apache.org/jira/browse/HADOOP-18657
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs/azure
    Affects Versions: 3.3.5
            Reporter: Steve Loughran

Based on experience trying to debug this happening:
# add debug statements when create() fails
# generated exception text to reference string shared with tests, path and error code
# generated exception to include inner exception for full stack trace
Currently the retry logic is:
# create(overwrite=false)
# if HTTP_CONFLICT/409 raised, call HEAD
# use etag in create(path, overwrite=true, etag)
# special handling of error HTTP_PRECON_FAILED = 412
There's a race condition here: if the existing file is deleted between steps 1 and 2, the retry should succeed, but currently a 404 from the HEAD is escalated to a failure.
Proposed changes:
# if HEAD is 404, leave etag == null and continue
# special handling of 412 also to handle 409
[jira] [Assigned] (HADOOP-18657) Tune ABFS create() retry logic
[ https://issues.apache.org/jira/browse/HADOOP-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-18657:
---------------------------------------
    Assignee: Steve Loughran

> Tune ABFS create() retry logic
> ------------------------------
>
>                 Key: HADOOP-18657
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18657
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/azure
>    Affects Versions: 3.3.5
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>
> Based on experience trying to debug this happening:
> # add debug statements when create() fails
> # generated exception text to reference string shared with tests, path and error code
> # generated exception to include inner exception for full stack trace
> Currently the retry logic is:
> # create(overwrite=false)
> # if HTTP_CONFLICT/409 raised, call HEAD
> # use etag in create(path, overwrite=true, etag)
> # special handling of error HTTP_PRECON_FAILED = 412
> There's a race condition here: if the existing file is deleted between steps 1 and 2, the retry should succeed, but currently a 404 from the HEAD is escalated to a failure.
> Proposed changes:
> # if HEAD is 404, leave etag == null and continue
> # special handling of 412 also to handle 409
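The retry sequence and the proposed 404 handling above can be simulated end-to-end with a toy in-memory store. This is an illustrative sketch only: the class names, the map-backed "store", and the exception type are all invented for the example and are not the hadoop-azure code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy simulation of the ABFS create() retry flow described in HADOOP-18657.
class AbfsCreateRetrySketch {
  static final int HTTP_CONFLICT = 409;      // create(overwrite=false) hit an existing file
  static final int HTTP_NOT_FOUND = 404;     // HEAD found nothing (file deleted in between)
  static final int HTTP_PRECON_FAILED = 412; // supplied etag no longer matches

  static class RestException extends RuntimeException {
    final int status;
    RestException(int status) { super("HTTP " + status); this.status = status; }
  }

  // Simulated store mapping path -> etag.
  final Map<String, String> store = new ConcurrentHashMap<>();

  void create(String path, boolean overwrite, String requiredEtag) {
    String existing = store.get(path);
    if (existing != null && !overwrite) {
      throw new RestException(HTTP_CONFLICT);
    }
    if (requiredEtag != null && !requiredEtag.equals(existing)) {
      throw new RestException(HTTP_PRECON_FAILED);
    }
    store.put(path, "etag-" + System.nanoTime());
  }

  /** HEAD: returns the etag, or throws 404 if the path is absent. */
  String head(String path) {
    String etag = store.get(path);
    if (etag == null) throw new RestException(HTTP_NOT_FOUND);
    return etag;
  }

  /**
   * create(overwrite=false); on 409, HEAD for the etag and retry with
   * overwrite=true. With the proposed change, a 404 from the HEAD (file
   * deleted between the two calls) leaves etag == null and the retry
   * still proceeds instead of escalating the 404 to a failure.
   */
  void createWithRecovery(String path) {
    try {
      create(path, false, null);
    } catch (RestException e) {
      if (e.status != HTTP_CONFLICT) throw e;
      String etag = null;
      try {
        etag = head(path);
      } catch (RestException headFailure) {
        if (headFailure.status != HTTP_NOT_FOUND) throw headFailure;
        // proposed change: a 404 here is fine, continue with etag == null
      }
      create(path, true, etag);
    }
  }
}
```

Overriding `head()` in a test double makes the race itself reproducible: delete the path between the failed create and the HEAD, and the recovery path must still succeed.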
[GitHub] [hadoop] hadoop-yetus commented on pull request #5382: YARN-8972. [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size.
hadoop-yetus commented on PR #5382:
URL: https://github.com/apache/hadoop/pull/5382#issuecomment-1460152208

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 15m 28s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 56s | | trunk passed |
| +1 :green_heart: | compile | 9m 42s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 8m 33s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 1m 50s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 35s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 20s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 3m 9s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 35s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 20m 21s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 20m 43s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 57s | | the patch passed |
| +1 :green_heart: | compile | 9m 17s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 9m 17s | | the patch passed |
| +1 :green_heart: | compile | 8m 27s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 8m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 39s | | the patch passed |
| +1 :green_heart: | mvnsite | 3m 14s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 57s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 48s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 32s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs |
| +1 :green_heart: | shadedclient | 20m 24s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| +1 :green_heart: | unit | 1m 14s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 5m 44s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | unit | 0m 45s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | unit | 0m 31s | | hadoop-yarn-site in the patch passed. |
| +1 :green_heart: | asflicense | 0m 55s | | The patch does not generate ASF License warnings. |
| | | 169m 47s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5382/14/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5382 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint |
| uname | Linux de418a0f76e3 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f73c703bf5629e86c77ba4478d58b6f44ca94e2c |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions |
[GitHub] [hadoop] hadoop-yetus commented on pull request #5382: YARN-8972. [Router] Add support to prevent DoS attack over ApplicationSubmissionContext size.
hadoop-yetus commented on PR #5382:
URL: https://github.com/apache/hadoop/pull/5382#issuecomment-1460150196

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 15m 50s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 37s | | trunk passed |
| +1 :green_heart: | compile | 9m 45s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 8m 27s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 1m 48s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 34s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 19s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 3m 8s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 37s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 20m 29s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 20m 51s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 59s | | the patch passed |
| +1 :green_heart: | compile | 9m 2s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 9m 2s | | the patch passed |
| +1 :green_heart: | compile | 8m 32s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 8m 32s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 37s | | the patch passed |
| +1 :green_heart: | mvnsite | 3m 13s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 54s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 49s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 33s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs |
| +1 :green_heart: | shadedclient | 20m 20s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| +1 :green_heart: | unit | 1m 12s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 5m 43s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | unit | 0m 44s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | unit | 0m 34s | | hadoop-yarn-site in the patch passed. |
| +1 :green_heart: | asflicense | 0m 57s | | The patch does not generate ASF License warnings. |
| | | 169m 27s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5382/13/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5382 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint |
| uname | Linux 46d44b7d2d17 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f73c703bf5629e86c77ba4478d58b6f44ca94e2c |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions |
[jira] [Commented] (HADOOP-18640) ABFS: Enabling Client-side Backoff only for new requests
[ https://issues.apache.org/jira/browse/HADOOP-18640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697835#comment-17697835 ]

ASF GitHub Bot commented on HADOOP-18640:
-----------------------------------------

sreeb-msft commented on code in PR #5446:
URL: https://github.com/apache/hadoop/pull/5446#discussion_r1129256701

   ##########
   hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
   ##########
   @@ -222,6 +224,10 @@ AbfsThrottlingIntercept getIntercept() {
        return intercept;
      }

   +  boolean shouldThrottleRetries() {
   +    return throttleRetries;
   +  }
   +

   Review Comment:
      Within AbfsRestOperation, would have to do client.getAbfsConfiguration.getShouldThrottleRetries. Would that be more preferable?

> ABFS: Enabling Client-side Backoff only for new requests
> --------------------------------------------------------
>
>                 Key: HADOOP-18640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18640
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Sree Bhattacharyya
>            Assignee: Sree Bhattacharyya
>            Priority: Minor
>              Labels: pull-request-available
>
> Enabling backoff only for new requests that happen, and disabling for retried requests.
[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5446: HADOOP-18640: [ABFS] Enabling Client-side Backoff only for new requests
sreeb-msft commented on code in PR #5446:
URL: https://github.com/apache/hadoop/pull/5446#discussion_r1129256701

   ##########
   hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
   ##########
   @@ -222,6 +224,10 @@ AbfsThrottlingIntercept getIntercept() {
        return intercept;
      }

   +  boolean shouldThrottleRetries() {
   +    return throttleRetries;
   +  }
   +

   Review Comment:
      Within AbfsRestOperation, would have to do client.getAbfsConfiguration.getShouldThrottleRetries. Would that be more preferable?
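The change under review gates client-side backoff on whether a request is new or a retry. A minimal sketch of that idea, assuming a retry-count-based hook; the class and method names here (`beforeAttempt` and friends) are invented for illustration and are not the actual AbfsClient / throttling-intercept API:

```java
// Illustrative sketch of "backoff only for new requests": the client sleeps
// before the first attempt of an operation, but retried attempts (which the
// retry policy already delays) skip the extra client-side backoff.
class BackoffOnNewRequestsSketch {
  private final boolean throttleRetries; // cf. shouldThrottleRetries() in the diff above
  private long backoffMillisApplied;

  BackoffOnNewRequestsSketch(boolean throttleRetries) {
    this.throttleRetries = throttleRetries;
  }

  /** Called before each HTTP attempt; retryCount == 0 means a new request. */
  void beforeAttempt(int retryCount, long suggestedBackoffMillis) {
    if (retryCount > 0 && !throttleRetries) {
      return; // retried request: skip client-side backoff entirely
    }
    backoffMillisApplied += suggestedBackoffMillis;
    try {
      Thread.sleep(suggestedBackoffMillis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  long getBackoffMillisApplied() {
    return backoffMillisApplied;
  }
}
```

The review thread is about where that boolean should live (on the client vs. read from `AbfsConfiguration` inside `AbfsRestOperation`); the sketch simply takes it as a constructor argument.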
[GitHub] [hadoop] sodonnel commented on a diff in pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
sodonnel commented on code in PR #5460:
URL: https://github.com/apache/hadoop/pull/5460#discussion_r1129185145

   ##########
   hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
   ##########
   @@ -791,6 +792,9 @@ private void offerService() throws Exception {
                shouldServiceRun = false;
                return;
              }
   +          if (InvalidBlockReportLeaseException.class.getName().equals(reClass)) {
   +            fullBlockReportLeaseId = 0;

   Review Comment:
      Also, do you have any idea about fixing the checkstyle issue? As I mentioned above, trying to add a package-info.java file broke my compile locally.
[GitHub] [hadoop] sodonnel commented on a diff in pull request #5460: HDFS-16942. Send error to datanode if FBR is rejected due to bad lease
sodonnel commented on code in PR #5460:
URL: https://github.com/apache/hadoop/pull/5460#discussion_r1129184134

   ##########
   hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java:
   ##########
   @@ -791,6 +792,9 @@ private void offerService() throws Exception {
                shouldServiceRun = false;
                return;
              }
   +          if (InvalidBlockReportLeaseException.class.getName().equals(reClass)) {
   +            fullBlockReportLeaseId = 0;

   Review Comment:
      At line 717, we can see where it attempts to get a lease from the heartbeat if the lease in the DN == 0:
      ```
      boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
                scheduler.isBlockReportDue(startTime);
      ```
      So its the isBlockReportDue that controls this. Then later, if we have a non zero lease, it will try to create the block report:
      ```
      boolean forceFullBr = scheduler.forceFullBlockReport.getAndSet(false);
      if (forceFullBr) {
        LOG.info("Forcing a full block report to " + nnAddr);
      }
      if ((fullBlockReportLeaseId != 0) || forceFullBr) {
        cmds = blockReport(fullBlockReportLeaseId);
        fullBlockReportLeaseId = 0;
      }
      ```
      It's really the `isBlockReportDue()` method that controls whether a new one should be sent or not, and that is based on time since the last one. In `blockReport()`, it updates the time after a successful block report, but if it gets an exception, like this change causes, it will not update the time and so it will try again on the next heartbeat if it gets a new lease.

      I think `forceFullBlockReport` is only for tests, or the command to force a DN block report from the CLI.
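The lease flow described in the review — request a lease from the heartbeat only when none is held and a report is due, and zero the lease id when the namenode rejects the report with an InvalidBlockReportLeaseException — can be sketched as a toy state machine. The field and method names below are simplified stand-ins, not the real BPServiceActor code:

```java
// Toy sketch of the datanode-side full block report (FBR) lease handling
// discussed above. A lease id of 0 means "no lease held", so zeroing it on
// a rejected report makes the next due heartbeat request a fresh lease and
// retry the report.
class BlockReportLeaseSketch {
  long fullBlockReportLeaseId; // 0 means no lease held
  boolean blockReportDue = true;

  /** Heartbeat requests a lease only when none is held and a report is due. */
  boolean shouldRequestLease() {
    return fullBlockReportLeaseId == 0 && blockReportDue;
  }

  /**
   * Namenode error path: cf. the diff's
   * InvalidBlockReportLeaseException.class.getName().equals(reClass) check.
   * Matching on the simple name here is a simplification for the sketch.
   */
  void onNamenodeError(String reClass) {
    if (reClass != null && reClass.endsWith("InvalidBlockReportLeaseException")) {
      fullBlockReportLeaseId = 0;
    }
  }
}
```

The point of the patch is visible in the sketch: without the reset, a datanode holding a stale non-zero lease never asks for a new one, so the rejected report is never retried.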
[jira] [Commented] (HADOOP-18606) Add reason in x-ms-client-request-id on a retry API call.
[ https://issues.apache.org/jira/browse/HADOOP-18606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697784#comment-17697784 ]

Pranav Saxena commented on HADOOP-18606:
----------------------------------------

PR for backport to branch-3.3: https://github.com/apache/hadoop/pull/5461

> Add reason in x-ms-client-request-id on a retry API call.
> ---------------------------------------------------------
>
>                 Key: HADOOP-18606
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18606
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> In the header, x-ms-client-request-id contains information on what retry this particular API call is, for example:
> :eb06d8f6-5693-461b-b63c-5858fa7655e6:29cb0d19-2b68-4409-bc35-cb7160b90dd8:::CF:1.
> We want to add the reason for the retry in the header value. The same header would then include the retry reason when it is not the 0th iteration of the API operation. It would be like
> :eb06d8f6-5693-461b-b63c-5858fa7655e6:29cb0d19-2b68-4409-bc35-cb7160b90dd8:::CF:1_RT.
> This corresponds to retry number 1, where the 0th iteration failed due to a read timeout.
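The retry-count field described in the issue — retry 1 after a read timeout rendering as `1_RT` instead of `1` — can be illustrated with a small helper. The method name and the idea of passing a reason abbreviation as a string are assumptions made for this sketch; only the `<retryCount>_<reason>` suffix format (and `RT` for read timeout) comes from the ticket:

```java
// Sketch of building the final field of x-ms-client-request-id: the retry
// count, suffixed with an abbreviated failure reason on retried calls.
class RequestIdRetrySketch {
  /**
   * retryCount 0 (a fresh request) or a missing reason yields just the
   * count; otherwise the previous attempt's failure reason is appended,
   * e.g. buildRetryField(1, "RT") for retry 1 after a read timeout.
   */
  static String buildRetryField(int retryCount, String failureReasonAbbrev) {
    if (retryCount == 0 || failureReasonAbbrev == null) {
      return Integer.toString(retryCount);
    }
    return retryCount + "_" + failureReasonAbbrev;
  }
}
```

Because the value travels in a request header, anyone correlating server-side logs can now see not just which retry an attempt was, but why the previous attempt failed.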
[GitHub] [hadoop] saxenapranav commented on pull request #5461: Backport Merged pr https://github.com/apache/hadoop/pull/5299 in branch-3.3
saxenapranav commented on PR #5461:
URL: https://github.com/apache/hadoop/pull/5461#issuecomment-1459757162

   @steveloughran, as discussed on https://github.com/apache/hadoop/pull/5299, I have backported the change to branch-3.3. Requesting you to kindly review it. Thank you so much.
[jira] [Commented] (HADOOP-18487) protobuf-2.5.0 dependencies => provided
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697782#comment-17697782 ]

ASF GitHub Bot commented on HADOOP-18487:
-----------------------------------------

hadoop-yetus commented on PR #4996:
URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1459746434

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 16m 50s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 15m 19s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 33s | | trunk passed |
| +1 :green_heart: | compile | 23m 14s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 20m 38s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 3m 50s | | trunk passed |
| +1 :green_heart: | mvnsite | 15m 4s | | trunk passed |
| +1 :green_heart: | javadoc | 12m 14s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 12m 0s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 54s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| -1 :x: | spotbugs | 4m 7s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/13/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 20m 53s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 21m 17s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 9m 40s | | the patch passed |
| +1 :green_heart: | compile | 22m 29s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| -1 :x: | javac | 22m 29s | [/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/13/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt) | root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 1 new + 2823 unchanged - 1 fixed = 2824 total (was 2824) |
| +1 :green_heart: | compile | 20m 42s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| -1 :x: | javac | 20m 42s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt) | root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 1 new + 2622 unchanged - 1 fixed = 2623 total (was 2623) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 3m 38s | | root: The patch generated 0 new + 272 unchanged - 5 fixed = 272 total (was 277) |
| +1 :green_heart: | mvnsite | 15m 5s | | the patch passed |
| +1 :green_heart: | javadoc | 11m 58s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 11m 57s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 36s | | hadoop-project has no data from spotbugs |
| -1 :x: | spotbugs | 2m 42s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #4996: HADOOP-18487. protobuf 2.5.0 marked as provided.
hadoop-yetus commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1459746434

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 16m 50s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 19s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 33s | | trunk passed |
| +1 :green_heart: | compile | 23m 14s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 20m 38s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 3m 50s | | trunk passed |
| +1 :green_heart: | mvnsite | 15m 4s | | trunk passed |
| +1 :green_heart: | javadoc | 12m 14s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 12m 0s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 54s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| -1 :x: | spotbugs | 4m 7s | [/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/13/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html) | hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 20m 53s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 21m 17s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 9m 40s | | the patch passed |
| +1 :green_heart: | compile | 22m 29s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| -1 :x: | javac | 22m 29s | [/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/13/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt) | root-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 1 new + 2823 unchanged - 1 fixed = 2824 total (was 2824) |
| +1 :green_heart: | compile | 20m 42s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| -1 :x: | javac | 20m 42s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/13/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09.txt) | root-jdkPrivateBuild-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 generated 1 new + 2622 unchanged - 1 fixed = 2623 total (was 2623) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 3m 38s | | root: The patch generated 0 new + 272 unchanged - 5 fixed = 272 total (was 277) |
| +1 :green_heart: | mvnsite | 15m 5s | | the patch passed |
| +1 :green_heart: | javadoc | 11m 58s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 11m 57s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +0 :ok: | spotbugs | 0m 36s | | hadoop-project has no data from spotbugs |
| -1 :x: | spotbugs | 2m 42s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/13/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 :green_heart: | shadedclient | 21m 13s | | patch has no