[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time
[ https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828609#comment-17828609 ] ASF GitHub Bot commented on HADOOP-19052:

hadoop-yetus commented on PR #6587:
URL: https://github.com/apache/hadoop/pull/6587#issuecomment-2008697038

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 46s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 50m 55s | | trunk passed |
| +1 :green_heart: | compile | 19m 11s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 17m 23s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 1m 20s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 14s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 50s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 2m 43s | | trunk passed |
| +1 :green_heart: | shadedclient | 40m 18s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 40m 43s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 54s | | the patch passed |
| +1 :green_heart: | compile | 18m 28s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 18m 28s | | the patch passed |
| +1 :green_heart: | compile | 17m 24s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 17m 24s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 13s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 38s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 7s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 2m 41s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 1s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 19m 14s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 244m 11s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6587/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6587 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux abf724bfec2a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4973e96d6ce61ab78daaa0209f1fb5ba9a282e8e |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6587/9/testReport/ |
| Max. process+thread count | 3134 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6587/9/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HDFS-17430. RecoveringBlock will skip no live replicas when get block recovery command. [hadoop]
dineshchitlangia commented on PR #6635: URL: https://github.com/apache/hadoop/pull/6635#issuecomment-2008619571 @ZanderXu as you had posted the first set of suggestions, could you confirm if your suggestions are addressed? We can merge once we have your +1 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17430. RecoveringBlock will skip no live replicas when get block recovery command. [hadoop]
haiyang1987 commented on PR #6635: URL: https://github.com/apache/hadoop/pull/6635#issuecomment-2008615230 Hi @dineshchitlangia @ayushtkn, would you mind helping to review this PR when you have free time? Thank you so much.
Re: [PR] HDFS-17431. Fix log format for BlockRecoveryWorker#recoverBlocks [hadoop]
haiyang1987 commented on PR #6643: URL: https://github.com/apache/hadoop/pull/6643#issuecomment-2008608210 Thanks @dineshchitlangia @ayushtkn @wzk784533 for your review and merge~
Re: [PR] HDFS-17431. Fix log format for BlockRecoveryWorker#recoverBlocks [hadoop]
dineshchitlangia merged PR #6643: URL: https://github.com/apache/hadoop/pull/6643
Re: [PR] HDFS-17431. Fix log format for BlockRecoveryWorker#recoverBlocks [hadoop]
haiyang1987 commented on PR #6643:
URL: https://github.com/apache/hadoop/pull/6643#issuecomment-2008603273

Thanks @wzk784533 @ayushtkn @dineshchitlangia for your review. I found that the log format has some problems, such as the one mentioned in https://github.com/apache/hadoop/pull/6635:

```
2024-03-13 23:07:05,401 WARN datanode.DataNode (BlockRecoveryWorker.java:run(623)) [org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1@54e291ac] - recover Block: RecoveringBlock{BP-xxx:blk_xxx_xxx; getBlockSize()=0; corrupt=false; offset=-1; locs=[DatanodeInfoWithStorage[dn1:50010,null,null], DatanodeInfoWithStorage[dn2:50010,null,null]]; cachedLocs=[]} FAILED: {}
org.apache.hadoop.ipc.RemoteException(java.io.IOException): The recovery id 28577373754 does not match current recovery id 28578772548 for block BP-xxx:blk_xxx_xxx
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.commitBlockSynchronization(FSNamesystem.java:4129)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.commitBlockSynchronization(NameNodeRpcServer.java:1184)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.commitBlockSynchronization(DatanodeProtocolServerSideTranslatorPB.java:310)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:34391)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:635)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:603)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:587)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1137)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1236)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1134)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:2005)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3360)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1579)
    at org.apache.hadoop.ipc.Client.call(Client.java:1511)
    at org.apache.hadoop.ipc.Client.call(Client.java:1402)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:268)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:142)
    at com.sun.proxy.$Proxy17.commitBlockSynchronization(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.commitBlockSynchronization(DatanodeProtocolClientSideTranslatorPB.java:342)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.syncBlock(BlockRecoveryWorker.java:334)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskContiguous.recover(BlockRecoveryWorker.java:189)
    at org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:620)
    at java.lang.Thread.run(Thread.java:748)
```

In `LOG.warn("recover Block: {} FAILED: {}", b, e);`, passing `e` as the last argument already prints the entire stack trace, so the second placeholder is meaningless. I think we should either remove the second placeholder or change `e` to `e.toString()`. Hi @ayushtkn @dineshchitlangia @wzk784533, what do you think? Thanks~
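The SLF4J convention behind this fix can be sketched with a toy formatter. This is a hypothetical stand-in, not the real org.slf4j implementation: a trailing Throwable that is left unconsumed by `{}` placeholders gets its stack trace appended, which is why `LOG.warn("recover Block: {} FAILED", b, e)` needs no second placeholder.

```java
// Toy illustration (NOT the real SLF4J code) of why a "{}" for the exception is redundant:
// SLF4J treats an unconsumed trailing Throwable as the stack-trace argument.
public class PlaceholderDemo {

    public static String format(String msg, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < msg.length()) {
            // Substitute "{}" with the next argument, as SLF4J's MessageFormatter does.
            if (i + 1 < msg.length() && msg.charAt(i) == '{' && msg.charAt(i + 1) == '}'
                    && argIdx < args.length) {
                out.append(args[argIdx++]);
                i += 2;
            } else {
                out.append(msg.charAt(i++));
            }
        }
        // SLF4J convention: a trailing Throwable not consumed by a placeholder
        // is logged with its full stack trace (represented here by a marker).
        if (argIdx < args.length && args[args.length - 1] instanceof Throwable) {
            out.append(System.lineSeparator())
               .append("<stack trace of ")
               .append(args[args.length - 1].getClass().getName())
               .append('>');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Exception e = new IllegalStateException("recovery id mismatch");
        // One placeholder for the block; the exception still gets its trace printed.
        System.out.println(format("recover Block: {} FAILED", "blk_123", e));
    }
}
```

With the extra `: {}` in the original message, the placeholder consumes nothing useful while `e` is still printed as a trace, which is what the comment above calls meaningless.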
Re: [PR] HDFS-17433. metrics sumOfActorCommandQueueLength should only record valid commands. [hadoop]
hfutatzhanghb commented on PR #6644:
URL: https://github.com/apache/hadoop/pull/6644#issuecomment-2008595490

> +1 LGTM, pending CI. @hfutatzhanghb thanks for finding this issue and contributing the fix.

Sir, thanks a lot for reviewing.
[PR] HDFS-17433. metrics sumOfActorCommandQueueLength should only record valid commands. [hadoop]
hfutatzhanghb opened a new pull request, #6644: URL: https://github.com/apache/hadoop/pull/6644 ### Description of PR We added a phone alarm on the metric sumOfActorCommandQueueLength for when it exceeds 3000. Recently we received the alarm and found that a `DatanodeCommand[] cmds` array of length 0 was still being put into the queue and counted by incrActorCmdQueueLength. When processedCommandsOpAvgTime is high, these empty cmds are queued at every heartbeat interval. sumOfActorCommandQueueLength should only record valid commands.
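The fix described in the PR above can be sketched as a guard before the enqueue-and-count step. The names here (`ActorCmdQueueSketch`, `enqueue`) are illustrative stand-ins for the BPServiceActor internals, not the actual Hadoop code.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch: only count heartbeat responses that actually carry commands.
public class ActorCmdQueueSketch {
    private final Queue<String[]> queue = new ArrayDeque<>();
    private long sumOfActorCommandQueueLength = 0;

    public void enqueue(String[] cmds) {
        // Previously, zero-length command arrays arriving on every heartbeat
        // were queued and counted; skip them so the metric reflects only
        // valid (non-empty) command batches.
        if (cmds == null || cmds.length == 0) {
            return;
        }
        queue.add(cmds);
        sumOfActorCommandQueueLength++;
    }

    public long getSumOfActorCommandQueueLength() {
        return sumOfActorCommandQueueLength;
    }
}
```

Under the old behavior the metric grew at every heartbeat once command processing was slow; with the guard, an idle heartbeat leaves it untouched.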
[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time
[ https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828561#comment-17828561 ] ASF GitHub Bot commented on HADOOP-19052:

slfan1989 commented on PR #6587:
URL: https://github.com/apache/hadoop/pull/6587#issuecomment-2008519566

@liangyu-1 Can we fix the checkstyle issue?

> Hadoop use Shell command to get the count of the hard link which takes a lot
> of time
>
> Key: HADOOP-19052
> URL: https://issues.apache.org/jira/browse/HADOOP-19052
> Project: Hadoop Common
> Issue Type: Improvement
> Environment: Hadoop 3.3.4
> Reporter: liang yu
> Priority: Major
> Labels: pull-request-available
> Attachments: debuglog.png
>
> Using Hadoop 3.3.4.
>
> When the QPS of `append` executions is very high, at a rate above 1/s, we
> found that the write speed in Hadoop is very slow. We traced some datanodes'
> logs and found this warning:
> {code:java}
> 2024-01-26 11:09:44,292 WARN impl.FsDatasetImpl
> (InstrumentedLock.java:logWaitWarning(165)) Waited above threshold(300 ms) to
> acquire lock: lock identifier: FsDatasetRwlock waitTimeMs=336 ms. Suppressed 0
> lock wait warnings. Longest suppressed waitTimeMs=0. The stack trace is
> java.lang.Thread.getStackTrace(Thread.java:1559)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1060)
> org.apache.hadoop.util.InstrumentedLock.logWaitWarning(InstrumentedLock.java:171)
> org.apache.hadoop.util.InstrumentedLock.check(InstrumentedLock.java:222)
> org.apache.hadoop.util.InstrumentedLock.lock(InstrumentedLock.java:105)
> org.apache.hadoop.util.AutoCloseableLock.acquire(AutoCloseableLock.java:67)
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:230)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1313)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:764)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:176)
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:110)
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:293)
> java.lang.Thread.run(Thread.java:748)
> {code}
>
> Then we traced the method
> _org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:1239)_
> and printed how long each command takes to finish, and found that it takes
> 700 ms to get the linkCount of the file, which is really slow.
> !debuglog.png!
>
> We traced the code and found that Java 1.8 uses a shell command to get the
> linkCount; this execution starts a new process and waits for the process to
> fork. When the QPS is very high, forking the process can sometimes take a
> long time. Here is the shell command:
> {code:java}
> stat -c%h /path/to/file
> {code}
>
> Solution:
> For a FileStore that supports the "unix" file attributes, we can use the
> method _Files.getAttribute(f.toPath(), "unix:nlink")_ to get the linkCount.
> This method does not need to start a new process and returns the result in a
> very short time.
>
> When we use this method to get the file linkCount, we rarely see the WARN log
> above, even when the QPS of append executions is high.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
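The `unix:nlink` approach from the solution above can be sketched as follows. `LinkCountSketch` and `getLinkCount` are illustrative names, not the actual patch, and `unix:nlink` only works on FileStores that expose the "unix" attribute view (so not on Windows).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: read the hard-link count via NIO instead of forking "stat -c%h <path>".
public class LinkCountSketch {

    public static long getLinkCount(Path f) throws IOException {
        // "unix:nlink" reads st_nlink through the "unix" attribute view,
        // avoiding the process fork that stalls under high append QPS.
        return ((Number) Files.getAttribute(f, "unix:nlink")).longValue();
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("nlink-demo", ".dat");
        try {
            // A freshly created regular file has a single hard link.
            System.out.println("link count: " + getLinkCount(f));
        } finally {
            Files.delete(f);
        }
    }
}
```

A production version would presumably fall back to the shell command when the FileStore does not support the "unix" view, which is why the issue scopes the fix to FileStores supporting those attributes.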
[jira] [Commented] (HADOOP-19118) KeyShell fails with NPE when KMS throws Exception with null as message
[ https://issues.apache.org/jira/browse/HADOOP-19118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828517#comment-17828517 ] Steve Loughran commented on HADOOP-19118:

If e.getLocalizedMessage() is null it should fall back to e.toString().

> KeyShell fails with NPE when KMS throws Exception with null as message
>
> Key: HADOOP-19118
> URL: https://issues.apache.org/jira/browse/HADOOP-19118
> Project: Hadoop Common
> Issue Type: Bug
> Components: common, crypto
> Affects Versions: 3.3.6
> Reporter: Dénes Bodó
> Priority: Major
>
> There is an issue in specific Ranger versions (where RANGER-3989 is not
> fixed) which throws an Exception with message *null* in case of concurrent
> access to a HashMap:
> {noformat}
> java.util.ConcurrentModificationException: null
>     at java.util.HashMap$HashIterator.nextNode(HashMap.java:1469)
>     at java.util.HashMap$EntryIterator.next(HashMap.java:1503)
>     at java.util.HashMap$EntryIterator.next(HashMap.java:1501) {noformat}
> This manifests in Hadoop's KeyShell as an Exception with message *null*. So when
> {code:java}
> private String prettifyException(Exception e) {
>   return e.getClass().getSimpleName() + ": " +
>       e.getLocalizedMessage().split("\n")[0];
> } {code}
> tries to print out the Exception, the user experiences an NPE:
> {noformat}
> Exception in thread "main" java.lang.NullPointerException
>     at org.apache.hadoop.crypto.key.KeyShell.prettifyException(KeyShell.java:541)
>     at org.apache.hadoop.crypto.key.KeyShell.printException(KeyShell.java:536)
>     at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:79)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
>     at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:553)
> {noformat}
> This is unwanted behaviour because the user gets no feedback about what went
> wrong and where.
>
> My suggestion is to add *null checking* to the affected *prettifyException*
> method. I'll create the GitHub PR soon.
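The suggested fallback can be sketched as a null-safe variant of the method quoted in the issue. This is a standalone illustration following the shape of `KeyShell#prettifyException`, not the actual patch.

```java
// Null-safe sketch of prettifyException: fall back to e.toString() when the
// exception carries no message (e.g. ConcurrentModificationException: null).
public class PrettyError {

    public static String prettifyException(Exception e) {
        String msg = e.getLocalizedMessage();
        // Guard against the NPE: only split when a message exists.
        String firstLine = (msg != null) ? msg.split("\n")[0] : e.toString();
        return e.getClass().getSimpleName() + ": " + firstLine;
    }

    public static void main(String[] args) {
        // A message-less exception no longer crashes the formatter.
        System.out.println(prettifyException(new java.util.ConcurrentModificationException()));
    }
}
```

With this guard the user still gets the exception class and whatever detail `toString()` provides, instead of a bare NullPointerException from the error-printing path itself.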
[jira] [Commented] (HADOOP-14837) Handle S3A "glacier" data
[ https://issues.apache.org/jira/browse/HADOOP-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828513#comment-17828513 ] ASF GitHub Bot commented on HADOOP-14837:

steveloughran commented on code in PR #6407:
URL: https://github.com/apache/hadoop/pull/6407#discussion_r1530919362

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/api/S3ObjectStorageClassFilter.java:
## @@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.api;
+
+import java.util.Set;
+import java.util.function.Function;
+
+import software.amazon.awssdk.services.s3.model.ObjectStorageClass;
+import software.amazon.awssdk.services.s3.model.S3Object;
+
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.thirdparty.com.google.common.collect.Sets;
+
+/**
+ * {@link S3ObjectStorageClassFilter} will filter the S3 files based on the
+ * {@code fs.s3a.glacier.read.restored.objects} configuration set in {@link S3AFileSystem}.
+ * The config can have 3 values:
+ * {@code READ_ALL}: Retrieval of Glacier files will fail with InvalidObjectStateException:
+ * The operation is not valid for the object's storage class.
+ * {@code SKIP_ALL_GLACIER}: If this value is set then this will ignore any S3 Objects which are
+ * tagged with Glacier storage classes and retrieve the others.
+ * {@code READ_RESTORED_GLACIER_OBJECTS}: If this value is set then restored status of the Glacier
+ * object will be checked, if restored the objects would be read like normal S3 objects
+ * else they will be ignored as the objects would not have been retrieved from the S3 Glacier.
+ */
+public enum S3ObjectStorageClassFilter {
+  READ_ALL(o -> true),
+  SKIP_ALL_GLACIER(S3ObjectStorageClassFilter::isNotGlacierObject),
+  READ_RESTORED_GLACIER_OBJECTS(S3ObjectStorageClassFilter::isCompletedRestoredObject);
+
+  private static final Set GLACIER_STORAGE_CLASSES = Sets.newHashSet(
+      ObjectStorageClass.GLACIER, ObjectStorageClass.DEEP_ARCHIVE);
+
+  private final Function filter;
+
+  S3ObjectStorageClassFilter(Function filter) {

Review Comment: should this be private, or is it implicit with enums?

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/api/S3ObjectStorageClassFilter.java:
## @@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.api;
+
+import java.util.Set;
+import java.util.function.Function;
+
+import software.amazon.awssdk.services.s3.model.ObjectStorageClass;
+import software.amazon.awssdk.services.s3.model.S3Object;
+
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.thirdparty.com.google.common.collect.Sets;

Review Comment: could you use org.apache.hadoop.util.Sets here. It's part of our attempt to isolate ourselves better from guava changes and the pain that causes downstream.

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/list/ITestS3AReadRestoredGlacierObjects.java:
## @@ -0,0 +1,193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ *
Re: [PR] HADOOP-14837 : Support Read Restored Glacier Objects [hadoop]
steveloughran commented on code in PR #6407: URL: https://github.com/apache/hadoop/pull/6407#discussion_r1530919362 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/api/S3ObjectStorageClassFilter.java: ## @@ -0,0 +1,93 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.api; + +import java.util.Set; +import java.util.function.Function; + +import software.amazon.awssdk.services.s3.model.ObjectStorageClass; +import software.amazon.awssdk.services.s3.model.S3Object; + +import org.apache.hadoop.fs.s3a.S3AFileSystem; +import org.apache.hadoop.thirdparty.com.google.common.collect.Sets; + + +/** + * + * {@link S3ObjectStorageClassFilter} will filter the S3 files based on the + * {@code fs.s3a.glacier.read.restored.objects} configuration set in {@link S3AFileSystem} + * The config can have 3 values: + * {@code READ_ALL}: Retrieval of Glacier files will fail with InvalidObjectStateException: + * The operation is not valid for the object's storage class. + * {@code SKIP_ALL_GLACIER}: If this value is set then this will ignore any S3 Objects which are + * tagged with Glacier storage classes and retrieve the others. 
+ * {@code READ_RESTORED_GLACIER_OBJECTS}: If this value is set then restored status of the Glacier + * object will be checked, if restored the objects would be read like normal S3 objects + * else they will be ignored as the objects would not have been retrieved from the S3 Glacier. + * + */ +public enum S3ObjectStorageClassFilter { + READ_ALL(o -> true), + SKIP_ALL_GLACIER(S3ObjectStorageClassFilter::isNotGlacierObject), + READ_RESTORED_GLACIER_OBJECTS(S3ObjectStorageClassFilter::isCompletedRestoredObject); + + private static final Set GLACIER_STORAGE_CLASSES = Sets.newHashSet( + ObjectStorageClass.GLACIER, ObjectStorageClass.DEEP_ARCHIVE); + + private final Function filter; + + S3ObjectStorageClassFilter(Function filter) { Review Comment: should this be private, or it implicit with enums? ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/api/S3ObjectStorageClassFilter.java: ## @@ -0,0 +1,93 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a.api; + +import java.util.Set; +import java.util.function.Function; + +import software.amazon.awssdk.services.s3.model.ObjectStorageClass; +import software.amazon.awssdk.services.s3.model.S3Object; + +import org.apache.hadoop.fs.s3a.S3AFileSystem; +import org.apache.hadoop.thirdparty.com.google.common.collect.Sets; Review Comment: could you use org.apache.hadoop.util.Sets here. its part of our attempt to isolate ourselves better from guava changes and the pain that causes downstream ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/list/ITestS3AReadRestoredGlacierObjects.java: ## @@ -0,0 +1,193 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing,
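For context on the thread above: the enum under review binds each filter policy to a predicate over `S3Object`. The sketch below is a self-contained approximation — the AWS SDK types are replaced by minimal stand-ins, the `restoreCompleted` flag is a hypothetical simplification of the SDK's restore-status check, and stdlib `EnumSet` stands in for the `Sets` helper discussed in the review. It also illustrates the answer to the review question: enum constructors are implicitly private, so an explicit `private` modifier is allowed but redundant.

```java
import java.util.EnumSet;
import java.util.Set;
import java.util.function.Function;

// Stand-ins for the AWS SDK model types, so the sketch compiles without software.amazon.awssdk.
enum ObjectStorageClass { STANDARD, GLACIER, DEEP_ARCHIVE }

final class S3Object {
    final ObjectStorageClass storageClass;
    final boolean restoreCompleted; // hypothetical flag; the real code inspects the object's restore status
    S3Object(ObjectStorageClass sc, boolean restored) { this.storageClass = sc; this.restoreCompleted = restored; }
}

// Mirrors the pattern under review: each constant carries a predicate over S3Object.
enum S3ObjectStorageClassFilter {
    READ_ALL(o -> true),
    SKIP_ALL_GLACIER(S3ObjectStorageClassFilter::isNotGlacierObject),
    READ_RESTORED_GLACIER_OBJECTS(S3ObjectStorageClassFilter::isCompletedRestoredObject);

    private static final Set<ObjectStorageClass> GLACIER_STORAGE_CLASSES =
        EnumSet.of(ObjectStorageClass.GLACIER, ObjectStorageClass.DEEP_ARCHIVE);

    private final Function<S3Object, Boolean> filter;

    // Enum constructors are implicitly private; writing `private` here would be redundant.
    S3ObjectStorageClassFilter(Function<S3Object, Boolean> filter) {
        this.filter = filter;
    }

    private static boolean isNotGlacierObject(S3Object o) {
        return !GLACIER_STORAGE_CLASSES.contains(o.storageClass);
    }

    private static boolean isCompletedRestoredObject(S3Object o) {
        return isNotGlacierObject(o) || o.restoreCompleted;
    }

    public boolean accept(S3Object o) {
        return filter.apply(o);
    }
}

public class FilterSketch {
    public static void main(String[] args) {
        S3Object frozen = new S3Object(ObjectStorageClass.GLACIER, false);
        S3Object thawed = new S3Object(ObjectStorageClass.GLACIER, true);
        System.out.println(S3ObjectStorageClassFilter.READ_ALL.accept(frozen));                      // true
        System.out.println(S3ObjectStorageClassFilter.SKIP_ALL_GLACIER.accept(frozen));              // false
        System.out.println(S3ObjectStorageClassFilter.READ_RESTORED_GLACIER_OBJECTS.accept(thawed)); // true
    }
}
```

Method references to private static helpers are legal in enum constant arguments; the static `GLACIER_STORAGE_CLASSES` set is safe to use because the predicates only run after class initialization completes.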
Re: [PR] YARN-5305. Allow log aggregation to discard expired delegation tokens [hadoop]
hadoop-yetus commented on PR #6625: URL: https://github.com/apache/hadoop/pull/6625#issuecomment-2008119232 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 44s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 32m 9s | | trunk passed | | +1 :green_heart: | compile | 17m 41s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 16m 15s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | checkstyle | 4m 26s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 42s | | trunk passed | | +1 :green_heart: | javadoc | 2m 14s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | -1 :x: | spotbugs | 2m 35s | [/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6625/4/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html) | hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 34m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 33s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 32s | | the patch passed | | +1 :green_heart: | compile | 16m 48s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 16m 48s | | the patch passed | | +1 :green_heart: | compile | 16m 13s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 16m 13s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 21s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6625/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 197 unchanged - 0 fixed = 198 total (was 197) | | +1 :green_heart: | mvnsite | 2m 42s | | the patch passed | | +1 :green_heart: | javadoc | 2m 10s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 51s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 4m 32s | | the patch passed | | +1 :green_heart: | shadedclient | 34m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 40s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 24m 43s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. 
| | | | 268m 12s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6625/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6625 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux c8c47b10ec91 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 772720878905eb7caa1ca4ca2936d727d54ee7b9 | | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | Test Results |
[jira] [Commented] (HADOOP-19098) Vector IO: consistent specified rejection of overlapping ranges
[ https://issues.apache.org/jira/browse/HADOOP-19098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828511#comment-17828511 ] ASF GitHub Bot commented on HADOOP-19098: - hadoop-yetus commented on PR #6604: URL: https://github.com/apache/hadoop/pull/6604#issuecomment-2008090350 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 15 new or modified test files. 
| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 45s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 34m 40s | | trunk passed | | +1 :green_heart: | compile | 18m 42s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 17m 8s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | checkstyle | 4m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 1s | | trunk passed | | +1 :green_heart: | javadoc | 3m 46s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 4m 12s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | -1 :x: | spotbugs | 2m 37s | [/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/8/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html) | hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 35m 46s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 36m 13s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 12s | | the patch passed | | +1 :green_heart: | compile | 18m 5s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 18m 5s | | the patch passed | | +1 :green_heart: | compile | 17m 8s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 17m 8s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 4m 38s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/8/artifact/out/results-checkstyle-root.txt) | root: The patch generated 2 new + 83 unchanged - 1 fixed = 85 total (was 84) | | +1 :green_heart: | mvnsite | 5m 4s | | the patch passed | | +1 :green_heart: | javadoc | 3m 43s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 4m 16s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 9m 4s | | the patch passed | | +1 :green_heart: | shadedclient | 35m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 34s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 278m 13s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 4m 8s | | hadoop-aws in the patch passed. | | +1 :green_heart: | unit | 2m 32s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. | | | | 562m 26s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6604 | | Optional Tests | dupname
Re: [PR] HDFS-17431. Fix log format for BlockRecoveryWorker#recoverBlocks [hadoop]
ayushtkn commented on code in PR #6643: URL: https://github.com/apache/hadoop/pull/6643#discussion_r1530927655 ## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java: ## @@ -628,7 +628,7 @@ public void run() { new RecoveryTaskContiguous(b).recover(); } } catch (IOException e) { - LOG.warn("recover Block: {} FAILED: {}", b, e); + LOG.warn("recover Block: {} FAILED: ", b, e); Review Comment: What's wrong here? The number of placeholders is correct; for the second one it will invoke e.toString(). Now you are changing it to print the entire stack trace. I don't think it is broken; it looks like it was intentional -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
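The disagreement above turns on SLF4J's trailing-throwable rule: when every `{}` placeholder has a matching argument, a final exception fills the last placeholder via `toString()`; when there is one placeholder too few, the leftover `Throwable` is attached to the log event and its full stack trace is printed. The snippet below mimics that documented contract without slf4j on the classpath — the `format` helper is an illustration of the rule, not the real `MessageFormatter`.

```java
// Self-contained illustration of SLF4J's trailing-throwable rule. The format()
// helper below is a stand-in that mimics the documented behaviour of
// org.slf4j.helpers.MessageFormatter; it is not the actual library code.
public class LogFormatDemo {
    /** Returns what a logger would emit: the formatted message, plus a stack-trace
     *  marker when the final Throwable is NOT consumed by a {} placeholder. */
    public static String format(String pattern, Object... args) {
        int placeholders = 0;
        for (int i = pattern.indexOf("{}"); i >= 0; i = pattern.indexOf("{}", i + 2)) {
            placeholders++;
        }
        // A trailing Throwable with no placeholder left for it is logged as a stack trace.
        boolean danglingThrowable =
            args.length > 0 && args[args.length - 1] instanceof Throwable && placeholders < args.length;
        String msg = pattern;
        int used = danglingThrowable ? args.length - 1 : args.length;
        for (int i = 0; i < used && msg.contains("{}"); i++) {
            msg = msg.replaceFirst("\\{\\}", String.valueOf(args[i]));
        }
        return danglingThrowable ? msg + " + FULL STACK TRACE" : msg;
    }

    public static void main(String[] args) {
        Exception e = new java.io.IOException("disk gone");
        // Two placeholders, two args: the exception fills the second {} via toString().
        System.out.println(format("recover Block: {} FAILED: {}", "blk_1", e));
        // One placeholder, two args: the exception dangles, so the stack trace is logged.
        System.out.println(format("recover Block: {} FAILED: ", "blk_1", e));
    }
}
```

So both forms in the diff are valid SLF4J; they simply log different things, which is why the reviewer argues the original `{}` was intentional.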
Re: [PR] HDFS-17431. Fix log format for BlockRecoveryWorker#recoverBlocks [hadoop]
hadoop-yetus commented on PR #6643: URL: https://github.com/apache/hadoop/pull/6643#issuecomment-2007841730 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 47m 17s | | trunk passed | | +1 :green_heart: | compile | 1m 29s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 18s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 29s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 30s | | trunk passed | | +1 :green_heart: | shadedclient | 38m 29s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 1m 13s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 4s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 18s | | the patch passed | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 38s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 25s | | the patch passed | | +1 :green_heart: | shadedclient | 38m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 228m 41s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 379m 37s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6643/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6643 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 28104aeadaba 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 0ce2a9e09116ee8807a24c37e87595b52f3713da | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6643/1/testReport/ | | Max. process+thread count | 4053 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6643/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail:
[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
[ https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828478#comment-17828478 ] ASF GitHub Bot commented on HADOOP-19102: - steveloughran commented on code in PR #6617: URL: https://github.com/apache/hadoop/pull/6617#discussion_r1530854506 ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java: ## @@ -167,24 +200,55 @@ public void testSeekToEndAndReadWithConfFalse() throws Exception { private void testSeekAndReadWithConf(boolean optimizeFooterRead, Review Comment: nit: javadocs > [ABFS]: FooterReadBufferSize should not be greater than readBufferSize > -- > > Key: HADOOP-19102 > URL: https://issues.apache.org/jira/browse/HADOOP-19102 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Pranav Saxena >Assignee: Pranav Saxena >Priority: Major > Labels: pull-request-available > > The method `optimisedRead` creates a buffer array of size `readBufferSize`. > If footerReadBufferSize is greater than readBufferSize, abfs will attempt to > read more data than the buffer array can hold, which causes an exception. > Change: To avoid this, we will keep footerBufferSize = > min(readBufferSizeConfig, footerBufferSizeConfig) > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
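The fix described in HADOOP-19102 is a one-line clamp of the footer buffer to the stream buffer. As a sketch (method and variable names here are illustrative, not the actual `AbfsInputStream` fields):

```java
public class FooterBufferSize {
    // Sketch of the clamping described in HADOOP-19102: the footer read buffer must
    // never exceed the main read buffer, or optimisedRead() overruns its array.
    public static int effectiveFooterReadBufferSize(int readBufferSize, int footerReadBufferSize) {
        return Math.min(readBufferSize, footerReadBufferSize);
    }

    public static void main(String[] args) {
        int readBufferSize = 4 * 1024 * 1024;  // 4 MB stream buffer
        int footerConfig = 16 * 1024 * 1024;   // misconfigured, larger footer buffer
        // The effective footer buffer is clamped to the 4 MB stream buffer.
        System.out.println(effectiveFooterReadBufferSize(readBufferSize, footerConfig));
    }
}
```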
[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
[ https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828472#comment-17828472 ] ASF GitHub Bot commented on HADOOP-19102: - steveloughran commented on PR #6617: URL: https://github.com/apache/hadoop/pull/6617#issuecomment-2007808498 yeah, spotbugs unrelated; should be fixed in trunk now and for future PRs. > [ABFS]: FooterReadBufferSize should not be greater than readBufferSize > -- > > Key: HADOOP-19102 > URL: https://issues.apache.org/jira/browse/HADOOP-19102 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Pranav Saxena >Assignee: Pranav Saxena >Priority: Major > Labels: pull-request-available > > The method `optimisedRead` creates a buffer array of size `readBufferSize`. > If footerReadBufferSize is greater than readBufferSize, abfs will attempt to > read more data than the buffer array can hold, which causes an exception. > Change: To avoid this, we will keep footerBufferSize = > min(readBufferSizeConfig, footerBufferSizeConfig) > > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] YARN-11664: Remove HDFS Binaries/Jars Dependency From Yarn [hadoop]
shameersss1 commented on PR #6631: URL: https://github.com/apache/hadoop/pull/6631#issuecomment-2007805123 > -1. Please do not change the following `@Public` and `@Evolving` classes: > > * QuotaExceededException.java > > * DSQuotaExceededException.java > > > > https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/Compatibility.html > > Evolving interfaces must not change between minor releases. > > Can we use ClusterStorageCapacityExceededException (hadoop-common) instead of DSQuotaExceededException/QuotaExceededException (hadoop-hdfs) in YARN source code? > > IOStreamPair.java is `@Private` and I think we can relocate to hadoop-common. ClusterStorageCapacityExceededException is a parent exception of DSQuotaExceededException and hence catching it will serve the purpose as well. I will raise a revision of this change. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
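The substitution proposed above works because catch clauses match subclasses: a `catch` of the hadoop-common parent type also handles the HDFS subclass, so YARN can drop its compile-time dependency on hadoop-hdfs. A minimal sketch, with stub classes standing in for the real Hadoop exceptions so it runs without hadoop-common or hadoop-hdfs on the classpath:

```java
// Stub hierarchy standing in for the real Hadoop classes.
class ClusterStorageCapacityExceededException extends Exception {        // lives in hadoop-common
    ClusterStorageCapacityExceededException(String m) { super(m); }
}
class DSQuotaExceededException extends ClusterStorageCapacityExceededException { // lives in hadoop-hdfs
    DSQuotaExceededException(String m) { super(m); }
}

public class CatchParentDemo {
    public static String aggregateLogs(boolean quotaBlown) {
        try {
            if (quotaBlown) {
                // HDFS throws its own subclass...
                throw new DSQuotaExceededException("DiskSpace quota exceeded");
            }
            return "aggregated";
        } catch (ClusterStorageCapacityExceededException e) {
            // ...but the caller only needs the hadoop-common parent type to handle it.
            return "skipped: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(aggregateLogs(false)); // aggregated
        System.out.println(aggregateLogs(true));  // skipped: DiskSpace quota exceeded
    }
}
```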
[jira] [Commented] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828468#comment-17828468 ] ASF GitHub Bot commented on HADOOP-19116: - ayushtkn commented on PR #6638: URL: https://github.com/apache/hadoop/pull/6638#issuecomment-2007802017 There is a failure in the patch-unit https://github.com/apache/hadoop/assets/25608848/27684f97-4dd4-42e9-8760-2a32f4202401 ``` [INFO] [INFO] BUILD FAILURE [INFO] [INFO] Total time: 12:24 h [INFO] Finished at: 2024-03-19T16:20:08Z [INFO] [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test (default-test) on project hadoop-hdfs-rbf: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test failed: java.lang.NoClassDefFoundError: org/junit/platform/launcher/core/LauncherFactory: org.junit.platform.launcher.core.LauncherFactory -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn -rf :hadoop-hdfs-rbf ``` Link: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6638/1/artifact/out/patch-unit-root.txt Quite happy that it happened here; it doesn't happen in our daily jobs, but it was happening in my PR where I was trying to play with hbase, and I almost killed myself trying to figure out how I was inducing this without even going near RBF. Happy that it is an existing problem. 
Anyway, this change fixes the error: https://github.com/apache/hadoop/pull/6629/files#diff-dbf6ea05af8f5d11e74cd87e059a361dd8b06d0f12f1d13ea9899fbbc4ffbc48R185-R189 At least it did for me. Post that, you will be here: https://issues.apache.org/jira/browse/HDFS-17370?focusedCommentId=17828028=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17828028 but you may ignore that. The same failure is there in your other two PRs as well; not copy-pasting it there. > update to zookeeper client 3.8.4 due to CVE > --- > > Key: HADOOP-19116 > URL: https://issues.apache.org/jira/browse/HADOOP-19116 > Project: Hadoop Common > Issue Type: Bug > Components: CVE >Affects Versions: 3.4.0, 3.3.6 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > https://github.com/advisories/GHSA-r978-9m6m-6gm6 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-19050. - Fix Version/s: 3.5.0 Resolution: Fixed Fixed in trunk; backport to 3.4 should go in later. > Add S3 Access Grants Support in S3A > --- > > Key: HADOOP-19050 > URL: https://issues.apache.org/jira/browse/HADOOP-19050 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Jason Han >Assignee: Jason Han >Priority: Minor > Labels: pull-request-available > Fix For: 3.5.0 > > > Add support for S3 Access Grants > (https://aws.amazon.com/s3/features/access-grants/) in S3A. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]
steveloughran merged PR #6544: URL: https://github.com/apache/hadoop/pull/6544
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828465#comment-17828465 ] ASF GitHub Bot commented on HADOOP-19050: - adnanhemani commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007791751 Yup, this makes sense. I will ensure that Hadoop gets the SDK changes as soon as the SDK updates are complete. Thank you again for all your time reviewing this!
Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]
adnanhemani commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007791751 Yup, this makes sense. I will ensure that Hadoop gets the SDK changes as soon as the SDK updates are complete. Thank you again for all your time reviewing this!
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828463#comment-17828463 ] ASF GitHub Bot commented on HADOOP-19050: - steveloughran commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007781626 hey, if yetus is unhappy, rebase is the right thing to do, so don't worry too much. just trying to remember where i was, that's all
Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]
steveloughran commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007781626 hey, if yetus is unhappy, rebase is the right thing to do, so don't worry too much. just trying to remember where i was, that's all
[jira] [Commented] (HADOOP-19079) check that class that is loaded is really an exception
[ https://issues.apache.org/jira/browse/HADOOP-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828461#comment-17828461 ] ASF GitHub Bot commented on HADOOP-19079: - steveloughran commented on code in PR #6557: URL: https://github.com/apache/hadoop/pull/6557#discussion_r1530821435 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java: ## @@ -150,9 +156,14 @@ public static void validateResponse(HttpURLConnection conn, try { ClassLoader cl = HttpExceptionUtils.class.getClassLoader(); Class klass = cl.loadClass(exClass); -Constructor constr = klass.getConstructor(String.class); -toThrow = (Exception) constr.newInstance(exMsg); - } catch (Exception ex) { +if (!Exception.class.isAssignableFrom(klass)) { Review Comment: use Preconditions.checkState() with error text including classname. if it ever does happen, we will want to debug ## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHttpExceptionUtils.java: ## @@ -164,4 +164,30 @@ public void testValidateResponseJsonErrorUnknownException() } } + @Test + public void testValidateResponseJsonErrorNonException() throws IOException { +Map json = new HashMap(); +json.put(HttpExceptionUtils.ERROR_EXCEPTION_JSON, "invalid"); +// test case where the exception classname is not a valid exception class +json.put(HttpExceptionUtils.ERROR_CLASSNAME_JSON, String.class.getName()); +json.put(HttpExceptionUtils.ERROR_MESSAGE_JSON, "EX"); +Map response = new HashMap(); +response.put(HttpExceptionUtils.ERROR_JSON, json); +ObjectMapper jsonMapper = new ObjectMapper(); +String msg = jsonMapper.writeValueAsString(response); +InputStream is = new ByteArrayInputStream(msg.getBytes()); +HttpURLConnection conn = Mockito.mock(HttpURLConnection.class); +Mockito.when(conn.getErrorStream()).thenReturn(is); +Mockito.when(conn.getResponseMessage()).thenReturn("msg"); +Mockito.when(conn.getResponseCode()).thenReturn( 
+HttpURLConnection.HTTP_BAD_REQUEST); +try { + HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_CREATED); Review Comment: old code, going near the test is the time to clean it up. > check that class that is loaded is really an exception > -- > > Key: HADOOP-19079 > URL: https://issues.apache.org/jira/browse/HADOOP-19079 > Project: Hadoop Common > Issue Type: Task > Components: common, security >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > It can be dangerous taking class names as inputs from HTTP messages even if > we control the source. Issue is in HttpExceptionUtils in hadoop-common > (validateResponse method). > I can provide a PR that will highlight the issue.
Re: [PR] HADOOP-19079. Check class is an exception class before constructing an instance [hadoop]
steveloughran commented on code in PR #6557: URL: https://github.com/apache/hadoop/pull/6557#discussion_r1530821435 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java: ## @@ -150,9 +156,14 @@ public static void validateResponse(HttpURLConnection conn, try { ClassLoader cl = HttpExceptionUtils.class.getClassLoader(); Class klass = cl.loadClass(exClass); -Constructor constr = klass.getConstructor(String.class); -toThrow = (Exception) constr.newInstance(exMsg); - } catch (Exception ex) { +if (!Exception.class.isAssignableFrom(klass)) { Review Comment: use Preconditions.checkState() with error text including classname. if it ever does happen, we will want to debug ## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestHttpExceptionUtils.java: ## @@ -164,4 +164,30 @@ public void testValidateResponseJsonErrorUnknownException() } } + @Test + public void testValidateResponseJsonErrorNonException() throws IOException { +Map json = new HashMap(); +json.put(HttpExceptionUtils.ERROR_EXCEPTION_JSON, "invalid"); +// test case where the exception classname is not a valid exception class +json.put(HttpExceptionUtils.ERROR_CLASSNAME_JSON, String.class.getName()); +json.put(HttpExceptionUtils.ERROR_MESSAGE_JSON, "EX"); +Map response = new HashMap(); +response.put(HttpExceptionUtils.ERROR_JSON, json); +ObjectMapper jsonMapper = new ObjectMapper(); +String msg = jsonMapper.writeValueAsString(response); +InputStream is = new ByteArrayInputStream(msg.getBytes()); +HttpURLConnection conn = Mockito.mock(HttpURLConnection.class); +Mockito.when(conn.getErrorStream()).thenReturn(is); +Mockito.when(conn.getResponseMessage()).thenReturn("msg"); +Mockito.when(conn.getResponseCode()).thenReturn( +HttpURLConnection.HTTP_BAD_REQUEST); +try { + HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_CREATED); Review Comment: old code, going near the test is the time to clean it up. 
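The pattern discussed in the review above can be sketched as follows. This is a minimal, self-contained illustration (not the actual Hadoop patch, and with a plain state check instead of Guava's `Preconditions.checkState` to keep it dependency-free): before reflectively constructing an exception from a classname received over HTTP, verify that the loaded class really is an `Exception` subclass, and fall back to a generic exception otherwise.

```java
import java.lang.reflect.Constructor;

public class SafeExceptionLoader {

  /**
   * Load className and build an Exception carrying msg.
   * If the class is not an Exception subclass (or cannot be built),
   * return a generic exception instead of instantiating arbitrary code.
   */
  public static Exception buildException(String className, String msg) {
    try {
      Class<?> klass = Class.forName(className);
      // The review suggests Preconditions.checkState with the classname
      // in the error text; an IllegalStateException stands in for it here.
      if (!Exception.class.isAssignableFrom(klass)) {
        throw new IllegalStateException(
            "Class " + className + " is not an Exception");
      }
      Constructor<?> constr = klass.getConstructor(String.class);
      return (Exception) constr.newInstance(msg);
    } catch (Exception ex) {
      // Fallback: keep the original classname and message for debugging.
      return new RuntimeException(className + ": " + msg, ex);
    }
  }
}
```

Rejecting non-exception classes up front matters because `newInstance` on an attacker-chosen class would otherwise run arbitrary constructor code before the cast ever fails.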
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828459#comment-17828459 ] ASF GitHub Bot commented on HADOOP-19050: - adnanhemani commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007775835 Sorry! Yetus was complaining that it could not apply the changes on top of `trunk` so I instinctively rebased and force pushed - will keep this in mind!
Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]
adnanhemani commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007775835 Sorry! Yetus was complaining that it could not apply the changes on top of `trunk` so I instinctively rebased and force pushed - will keep this in mind!
Re: [PR] [SPARK-38958]: Override S3 Client in Spark Write/Read calls [hadoop]
steveloughran commented on PR #6550: URL: https://github.com/apache/hadoop/pull/6550#issuecomment-2007773085 Can use HADOOP-18562 for the JIRA ID here; hadoop codebase, you see. Thanks
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828456#comment-17828456 ] ASF GitHub Bot commented on HADOOP-19050: - steveloughran commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007753756 please, please, please: no force push once reviews have started, unless there are merge problems or it has been neglected for too long... it makes it harder to see what's changed between reviews
Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]
steveloughran commented on PR #6544: URL: https://github.com/apache/hadoop/pull/6544#issuecomment-2007753756 please, please, please: no force push once reviews have started, unless there are merge problems or it has been neglected for too long... it makes it harder to see what's changed between reviews
[jira] [Updated] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize()
[ https://issues.apache.org/jira/browse/HADOOP-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19119: Priority: Minor (was: Major) > spotbugs complaining about possible NPE in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize() > > > Key: HADOOP-19119 > URL: https://issues.apache.org/jira/browse/HADOOP-19119 > Project: Hadoop Common > Issue Type: Sub-task > Components: crypto >Affects Versions: 3.5.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Fix For: 3.5.0, 3.4.1 > > > PRs against hadoop-common are reporting spotbugs problems > {code} > Dodgy code Warnings > Code Warning > NPPossible null pointer dereference in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return > value of called method > Bug type NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE (click for details) > In class org.apache.hadoop.crypto.key.kms.ValueQueue > In method org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) > Local variable stored in JVM register ? > Dereferenced at ValueQueue.java:[line 332] > Known null at ValueQueue.java:[line 332] > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
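The spotbugs warning above (NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE) flags the classic pattern of dereferencing a `Map.get()` result that may be null. As a hedged illustration only (the class and field names below are hypothetical stand-ins, not the actual `ValueQueue` code), the flagged style and its null-safe fix look like this:

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class QueueSizes {
  private final ConcurrentMap<String, Queue<byte[]>> keyQueues =
      new ConcurrentHashMap<>();

  /**
   * Flagged style would be: return keyQueues.get(keyName).size();
   * which NPEs when keyName is unknown. Null-safe style checks the
   * return value of get() before dereferencing it.
   */
  public int getSize(String keyName) {
    Queue<byte[]> queue = keyQueues.get(keyName);
    return queue == null ? 0 : queue.size();
  }

  public void add(String keyName, byte[] value) {
    // computeIfAbsent creates the per-key queue on first use.
    keyQueues.computeIfAbsent(keyName, k -> new LinkedList<>()).add(value);
  }
}
```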
[jira] [Resolved] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize()
[ https://issues.apache.org/jira/browse/HADOOP-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-19119. - Fix Version/s: 3.5.0 3.4.1 Resolution: Fixed
[jira] [Commented] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize()
[ https://issues.apache.org/jira/browse/HADOOP-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828452#comment-17828452 ] ASF GitHub Bot commented on HADOOP-19119: - steveloughran merged PR #6642: URL: https://github.com/apache/hadoop/pull/6642
Re: [PR] HADOOP-19119. Spotbugs: possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize() [hadoop]
steveloughran merged PR #6642: URL: https://github.com/apache/hadoop/pull/6642
Re: [PR] HADOOP-19112. Hadoop 3.4.0 release wrap-up. [hadoop]
hadoop-yetus commented on PR #6640: URL: https://github.com/apache/hadoop/pull/6640#issuecomment-2007709572 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 23s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 44s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 2s | | trunk passed | | +1 :green_heart: | compile | 10m 20s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 9m 29s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | mvnsite | 4m 53s | | trunk passed | | +1 :green_heart: | javadoc | 4m 45s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 4m 40s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | shadedclient | 96m 26s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 39s | | the patch passed | | +1 :green_heart: | compile | 9m 45s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 9m 45s | | the patch passed | | +1 :green_heart: | compile | 9m 3s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 9m 3s | | the patch passed | | -1 :x: | blanks | 0m 1s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/2/artifact/out/blanks-eol.txt) | The patch has 2283 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -1 :x: | blanks | 0m 1s | [/blanks-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/2/artifact/out/blanks-tabs.txt) | The patch 4 line(s) with tabs. | | +1 :green_heart: | mvnsite | 4m 58s | | the patch passed | | +1 :green_heart: | javadoc | 4m 10s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 4m 29s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | shadedclient | 36m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 16s | | hadoop-project-dist in the patch passed. | | +1 :green_heart: | unit | 16m 25s | | hadoop-common in the patch passed. | | -1 :x: | unit | 198m 5s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 209m 55s | | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 134m 38s | | hadoop-mapreduce-project in the patch passed. 
| | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. | | | | 720m 1s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6640 | | Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint xmllint compile javac javadoc mvninstall unit shadedclient | | uname | Linux d3d113cb1087 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
Re: [PR] YARN-11216. Avoid unnecessary reconstruction of ConfigurationProperties [hadoop]
hadoop-yetus commented on PR #4655: URL: https://github.com/apache/hadoop/pull/4655#issuecomment-2007703551 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 44s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 22s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 36m 21s | | trunk passed | | +1 :green_heart: | compile | 19m 1s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 17m 30s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | checkstyle | 4m 42s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 53s | | trunk passed | | +1 :green_heart: | javadoc | 2m 17s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 48s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | -1 :x: | spotbugs | 2m 34s | [/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html) | hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 41m 12s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 32s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. | | -1 :x: | compile | 8m 13s | [/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. | | -1 :x: | javac | 8m 13s | [/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. | | -1 :x: | compile | 7m 37s | [/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06. | | -1 :x: | javac | 7m 37s | [/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 4m 20s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/results-checkstyle-root.txt) | root: The patch generated 6 new + 139 unchanged - 0 fixed = 145 total (was 139) | | -1 :x: | mvnsite | 0m 37s | [/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4655/16/artifact/out/patch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch failed. | | +1 :green_heart: | javadoc | 1m 38s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc
Re: [PR] YARN-5305. Allow log aggregation to discard expired delegation tokens [hadoop]
K0K0V0K commented on PR #6625: URL: https://github.com/apache/hadoop/pull/6625#issuecomment-2007689009 Thanks, @p-szucs for this fix. Nice work! LGTM!
[jira] [Commented] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves
[ https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828442#comment-17828442 ]

ASF GitHub Bot commented on HADOOP-19114:
-----------------------------------------

hadoop-yetus commented on PR #6636:
URL: https://github.com/apache/hadoop/pull/6636#issuecomment-2007664941

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 46s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 32m 29s | | trunk passed |
| +1 :green_heart: | compile | 17m 32s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 16m 15s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | mvnsite | 21m 15s | | trunk passed |
| +1 :green_heart: | javadoc | 9m 6s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 7m 54s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | shadedclient | 49m 5s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 37s | | Maven dependency ordering for patch |
| -1 :x: | mvninstall | 29m 22s | [/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6636/1/artifact/out/patch-mvninstall-root.txt) | root in the patch failed. |
| -1 :x: | compile | 15m 13s | [/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6636/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. |
| -1 :x: | javac | 15m 13s | [/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6636/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1. |
| -1 :x: | compile | 14m 54s | [/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6636/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06. |
| -1 :x: | javac | 14m 54s | [/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6636/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06. |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -1 :x: | mvnsite | 4m 53s | [/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6636/1/artifact/out/patch-mvnsite-root.txt) | root in the patch failed. |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 49s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 7m 56s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | shadedclient | 50m 43s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 794m 47s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6636/1/artifact/out/patch-unit-root.txt) | root in the patch failed. |
| +1 :green_heart: | asflicense | 1m 36s | | The patch does not generate ASF License warnings. |
| | | 1070m 49s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
[jira] [Commented] (HADOOP-19115) upgrade to nimbus-jose-jwt 9.37.2 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828438#comment-17828438 ]

ASF GitHub Bot commented on HADOOP-19115:
-----------------------------------------

hadoop-yetus commented on PR #6637:
URL: https://github.com/apache/hadoop/pull/6637#issuecomment-2007632453

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 30s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 35s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 31m 39s | | trunk passed |
| +1 :green_heart: | compile | 17m 39s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 16m 3s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | mvnsite | 21m 41s | | trunk passed |
| +1 :green_heart: | javadoc | 9m 12s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 7m 52s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 49m 12s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 35s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 29m 13s | | the patch passed |
| +1 :green_heart: | compile | 17m 6s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 17m 6s | | the patch passed |
| +1 :green_heart: | compile | 16m 23s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 16m 23s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 19m 20s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 45s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 7m 50s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 51m 49s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 756m 39s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6637/1/artifact/out/patch-unit-root.txt) | root in the patch failed. |
| +1 :green_heart: | asflicense | 1m 40s | | The patch does not generate ASF License warnings. |
| | | 1050m 16s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6637/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6637 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
| uname | Linux fd2b72f472eb 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 0377c239f6a80bd4574c917dbaa8f00861db03cc |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6637/1/testReport/ |
| Max. process+thread count | 4490 (vs. ulimit of 5500) |
| modules | C: hadoop-project . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6637/1/console |
| versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
Re: [PR] YARN-5305. Allow log aggregation to discard expired delegation tokens [hadoop]
p-szucs commented on code in PR #6625:
URL: https://github.com/apache/hadoop/pull/6625#discussion_r1530716191

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java:

```diff
@@ -286,7 +294,13 @@ private void uploadLogsForContainers(boolean appFinished)
     }
     addCredentials();
-
+    if (UserGroupInformation.isSecurityEnabled()) {
```

Review Comment: Thanks @K0K0V0K for the review. Sure, I updated the PR with the fix.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828436#comment-17828436 ]

ASF GitHub Bot commented on HADOOP-19116:
-----------------------------------------

hadoop-yetus commented on PR #6638:
URL: https://github.com/apache/hadoop/pull/6638#issuecomment-2007616577

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 52s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 31m 23s | | trunk passed |
| +1 :green_heart: | compile | 17m 20s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 15m 36s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | mvnsite | 20m 54s | | trunk passed |
| +1 :green_heart: | javadoc | 9m 4s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 7m 42s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 48m 16s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 38s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 29m 40s | | the patch passed |
| +1 :green_heart: | compile | 17m 48s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 17m 48s | | the patch passed |
| +1 :green_heart: | compile | 16m 26s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 16m 26s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 16m 27s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 58s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 8m 12s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 54m 17s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 744m 49s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6638/1/artifact/out/patch-unit-root.txt) | root in the patch failed. |
| +1 :green_heart: | asflicense | 1m 32s | | The patch does not generate ASF License warnings. |
| | | 1036m 12s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6638/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6638 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
| uname | Linux 3187f649cc11 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c4a24158901c02d0dc6000ac8c8699729998e520 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6638/1/testReport/ |
| Max. process+thread count | 4271 (vs. ulimit of 5500) |
| modules | C: hadoop-project . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6638/1/console |
| versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] YARN-5305. Allow log aggregation to discard expired delegation tokens [hadoop]
K0K0V0K commented on code in PR #6625:
URL: https://github.com/apache/hadoop/pull/6625#discussion_r1530670402

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java:

```diff
@@ -286,7 +294,13 @@ private void uploadLogsForContainers(boolean appFinished)
     }
     addCredentials();
-
+    if (UserGroupInformation.isSecurityEnabled()) {
```

Review Comment: Nit: maybe we can move this if into the removeExpiredDelegationTokens method as an early return, so it will be more similar to the addCredentials() method
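The early-return refactor suggested in the review comments above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern only, not the actual AppLogAggregatorImpl code: the class name, the boolean flag standing in for `UserGroupInformation.isSecurityEnabled()`, and the expiry-time list standing in for the delegation tokens are all invented for the example.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch: the security check lives inside
// removeExpiredDelegationTokens() as an early return, so callers can
// invoke it unconditionally, mirroring the shape of addCredentials().
public class TokenCleanupSketch {
    // Stand-in for UserGroupInformation.isSecurityEnabled().
    static boolean securityEnabled = false;
    // Stand-in for the app's delegation tokens, as expiry timestamps.
    static List<Long> tokenExpiryTimes = new ArrayList<>();

    static void removeExpiredDelegationTokens(long now) {
        if (!securityEnabled) {
            return; // early return: no-op when security is off
        }
        for (Iterator<Long> it = tokenExpiryTimes.iterator(); it.hasNext();) {
            if (it.next() <= now) {
                it.remove(); // drop tokens whose expiry has passed
            }
        }
    }

    public static void main(String[] args) {
        tokenExpiryTimes.add(5L);
        tokenExpiryTimes.add(100L);

        removeExpiredDelegationTokens(50L);          // security off: nothing removed
        System.out.println(tokenExpiryTimes.size()); // 2

        securityEnabled = true;
        removeExpiredDelegationTokens(50L);          // security on: expired token dropped
        System.out.println(tokenExpiryTimes.size()); // 1
    }
}
```

The guard clause keeps the call site in `uploadLogsForContainers` symmetric with `addCredentials()`, which was the point of the review nit.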
Re: [PR] HDFS-17413. [FGL] CacheReplicationMonitor supports fine-grained lock [hadoop]
hadoop-yetus commented on PR #6641:
URL: https://github.com/apache/hadoop/pull/6641#issuecomment-2007547095

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 12m 26s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ HDFS-17384 Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 55s | | HDFS-17384 passed |
| +1 :green_heart: | compile | 1m 21s | | HDFS-17384 passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 1m 16s | | HDFS-17384 passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 1m 13s | | HDFS-17384 passed |
| +1 :green_heart: | mvnsite | 1m 24s | | HDFS-17384 passed |
| +1 :green_heart: | javadoc | 1m 11s | | HDFS-17384 passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 41s | | HDFS-17384 passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 17s | | HDFS-17384 passed |
| +1 :green_heart: | shadedclient | 35m 33s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 1m 8s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 0s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 10s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 32s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 15s | | the patch passed |
| +1 :green_heart: | shadedclient | 35m 11s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 229m 58s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6641/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | 381m 28s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
| | hadoop.hdfs.tools.TestDFSAdmin |
| | hadoop.hdfs.protocol.TestBlockListAsLongs |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6641/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6641 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux dbd98b8aab75 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | HDFS-17384 / 901fff7cbf4ac90b8be0b4799ea19426eff89a20 |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6641/1/testReport/ |
| Max. process+thread count | 3429 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
[jira] [Commented] (HADOOP-19112) Hadoop 3.4.0 release wrap-up
[ https://issues.apache.org/jira/browse/HADOOP-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828352#comment-17828352 ] ASF GitHub Bot commented on HADOOP-19112: - hadoop-yetus commented on PR #6640: URL: https://github.com/apache/hadoop/pull/6640#issuecomment-2007274658 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 7m 30s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 17s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 38s | | trunk passed | | +1 :green_heart: | compile | 8m 54s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 8m 5s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | mvnsite | 4m 0s | | trunk passed | | +1 :green_heart: | javadoc | 3m 50s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 4m 10s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | shadedclient | 82m 23s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 45s | | the patch passed | | +1 :green_heart: | compile | 8m 32s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 8m 32s | | the patch passed | | +1 :green_heart: | compile | 7m 58s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 7m 58s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/1/artifact/out/blanks-eol.txt) | The patch has 2279 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -1 :x: | blanks | 0m 1s | [/blanks-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/1/artifact/out/blanks-tabs.txt) | The patch 4 line(s) with tabs. | | +1 :green_heart: | mvnsite | 3m 54s | | the patch passed | | +1 :green_heart: | javadoc | 3m 33s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 4m 11s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | shadedclient | 32m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 16s | | hadoop-project-dist in the patch passed. | | -1 :x: | unit | 195m 3s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 207m 55s | | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 136m 59s | | hadoop-mapreduce-project in the patch passed. | | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. 
| | | | 686m 26s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6640/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6640 | | Optional Tests | dupname asflicense codespell detsecrets xmllint compile javac javadoc mvninstall mvnsite unit shadedclient | | uname | Linux 1d63186c1246 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64
[jira] [Commented] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize()
[ https://issues.apache.org/jira/browse/HADOOP-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828342#comment-17828342 ] ASF GitHub Bot commented on HADOOP-19119: - hadoop-yetus commented on PR #6642: URL: https://github.com/apache/hadoop/pull/6642#issuecomment-2007214433 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 41s | | trunk passed | | +1 :green_heart: | compile | 8m 58s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 8m 6s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 40s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 58s | | trunk passed | | +1 :green_heart: | javadoc | 0m 49s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 0m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | -1 :x: | spotbugs | 1m 28s | [/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6642/1/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html) | hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs warnings. 
| | +1 :green_heart: | shadedclient | 20m 55s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 8m 33s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 8m 33s | | the patch passed | | +1 :green_heart: | compile | 8m 9s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 8m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 37s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 55s | | the patch passed | | +1 :green_heart: | javadoc | 0m 41s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 0m 36s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 32s | | hadoop-common-project/hadoop-common generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | shadedclient | 20m 48s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 16m 21s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 137m 41s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6642/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6642 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 710c536cd994 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 11e6ec5c627ccb722ac9d67086ed4d9440958c4e | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6642/1/testReport/ | | Max. process+thread count |
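The SpotBugs warning tracked by HADOOP-19119 concerns a possible NullPointerException in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(); the actual fix is in PR #6642. As a rough, simplified illustration only (the names and the plain HashMap below are stand-ins, not the real ValueQueue, which is backed by a Guava cache), the null-safe lookup pattern that silences this class of SpotBugs warning might look like:

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// Hypothetical sketch of the SpotBugs "possible NPE" pattern: Map.get()
// may return null, so dereferencing its result without a check trips the
// NP_NULL_ON_SOME_PATH family of warnings.
public class QueueSizeDemo {
    private final Map<String, Queue<byte[]>> keyQueues = new HashMap<>();

    public void add(String keyName, byte[] material) {
        keyQueues.computeIfAbsent(keyName, k -> new LinkedList<>()).add(material);
    }

    // Null-safe size lookup: an unknown key yields 0 instead of an NPE.
    public int getSize(String keyName) {
        Queue<byte[]> queue = keyQueues.get(keyName);
        return queue == null ? 0 : queue.size();
    }
}
```

The point SpotBugs enforces is the explicit null check between the get() and the dereference.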
Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]
hadoop-yetus commented on PR #6633: URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2007163057 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 18m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 18 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 44m 17s | | trunk passed | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | checkstyle | 0m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 32m 54s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 33m 15s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 137 new + 18 unchanged - 0 fixed = 155 total (was 18) | | +1 :green_heart: | mvnsite | 0m 30s | | the patch passed | | -1 :x: | javadoc | 0m 25s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -1 :x: | javadoc | 0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -1 :x: | spotbugs | 1m 7s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/8/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0) | | +1 :green_heart: | shadedclient | 33m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 32s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. | | | | 145m 14s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-azure | | | Unread field:AbfsConnectionManager.java:[line 113] | | | Unread field:AbfsApacheHttpClient.java:[line 63] | | | Unread field:AbfsApacheHttpClient.java:[line 88] |
Re: [PR] HDFS-17431. Fix log format for BlockRecoveryWorker#recoverBlocks [hadoop]
wzk784533 commented on PR #6643: URL: https://github.com/apache/hadoop/pull/6643#issuecomment-2007159204 LGTM -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan updated HADOOP-18487: Fix Version/s: (was: 3.4.0) > Make protobuf 2.5 an optional runtime dependency. > - > > Key: HADOOP-18487 > URL: https://issues.apache.org/jira/browse/HADOOP-18487 > Project: Hadoop Common > Issue Type: Improvement > Components: build, ipc > Affects Versions: 3.3.4 > Reporter: Steve Loughran > Assignee: Steve Loughran > Priority: Major > Labels: pull-request-available > Fix For: 3.3.9 > > > uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in > HADOOP-17046 > while still keeping those files around (for a long time...), how about we > make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs *provided*, > rather than *compile* > that way, if apps want it for their own apis, they have to explicitly ask for > it, but at least our own scans don't break. > i have no idea what will happen to the rest of the stack at this point, it > will be "interesting" to see -- This message was sent by Atlassian Jira (v8.20.10#820010)
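The HADOOP-18487 proposal above is a Maven scope change. As an illustrative sketch only (the exact module and placement are not taken from the actual patch), declaring protobuf 2.5 with `provided` scope keeps it on the compile classpath of hadoop-common/hadoop-hdfs without exporting it transitively to downstream consumers at runtime:

```xml
<!-- Illustrative only: "provided" scope means the artifact is available at
     compile time but is NOT propagated as a transitive runtime dependency;
     downstream applications that still need protobuf 2.5 for their own APIs
     must declare it explicitly in their own pom. -->
<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.5.0</version>
  <scope>provided</scope>
</dependency>
```

This is exactly the "apps have to explicitly ask for it" behavior the issue describes.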
[jira] [Updated] (HADOOP-19089) [ABFS] Reverting Back Support of setXAttr() and getXAttr() on root path
[ https://issues.apache.org/jira/browse/HADOOP-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan updated HADOOP-19089: Fix Version/s: (was: 3.4.0) > [ABFS] Reverting Back Support of setXAttr() and getXAttr() on root path > --- > > Key: HADOOP-19089 > URL: https://issues.apache.org/jira/browse/HADOOP-19089 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure > Affects Versions: 3.4.0, 3.4.1 > Reporter: Anuj Modi > Assignee: Anuj Modi > Priority: Major > Labels: pull-request-available > Fix For: 3.4.1 > > > A while back changes were made to support HDFS.setXAttr() and HDFS.getXAttr() > on root path for ABFS Driver. > For these, filesystem level APIs were introduced and used to set/get metadata > of container. > Refer to Jira: [HADOOP-18869] ABFS: Fixing Behavior of a File System APIs on > root path - ASF JIRA (apache.org) > Ideally, same set of APIs should be used, and root should be treated as a > path like any other path. > This change is to avoid calling container APIs for these HDFS calls. > As a result of this these APIs will fail on root path (as earlier) because > service does not support get/set of user properties on root path. > This change will also update the documentation to reflect that these > operations are not supported on root path.
[jira] [Resolved] (HADOOP-19018) Release Hadoop 3.4.0
[ https://issues.apache.org/jira/browse/HADOOP-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan resolved HADOOP-19018. - Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed > Release Hadoop 3.4.0 > > > Key: HADOOP-19018 > URL: https://issues.apache.org/jira/browse/HADOOP-19018 > Project: Hadoop Common > Issue Type: Task > Affects Versions: 3.4.0 > Reporter: Shilun Fan > Assignee: Shilun Fan > Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > Confirmed features to be included in the release: > - Enhanced functionality for YARN Federation. > - Redesigned resource allocation in YARN Capacity Scheduler > - Optimization of HDFS RBF. > - Introduction of fine-grained global locks for DataNodes. > - Improvements in the stability of HDFS EC, and more. > - Fixes for important CVEs. > *Issues that need to be addressed in hadoop-3.4.0-RC0 version.* > 1. confirm the JIRA target version/fix version is 3.4.0 to ensure that the > version setting is correct. > 2. confirm the highlight of hadoop-3.4.0. > 3. backport branch-3.4.0/branch-3.4. > {code:java} > HADOOP-19040. mvn site commands fails due to MetricsSystem And > MetricsSystemImpl changes. > YARN-11634. [Addendum] Speed-up TestTimelineClient. > MAPREDUCE-7468. [Addendum] Fix TestMapReduceChildJVM unit tests. > > Revert HDFS-16016. BPServiceActor to provide new thread to handle IBR. > {code}
[jira] [Resolved] (HADOOP-19112) Hadoop 3.4.0 release wrap-up
[ https://issues.apache.org/jira/browse/HADOOP-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan resolved HADOOP-19112. - Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed > Hadoop 3.4.0 release wrap-up > > > Key: HADOOP-19112 > URL: https://issues.apache.org/jira/browse/HADOOP-19112 > Project: Hadoop Common > Issue Type: Sub-task > Components: common > Affects Versions: 3.4.0 > Reporter: Shilun Fan > Assignee: Shilun Fan > Priority: Major > Labels: pull-request-available > Fix For: 3.4.0
[jira] [Resolved] (HADOOP-19117) 3.4.0 release documents
[ https://issues.apache.org/jira/browse/HADOOP-19117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan resolved HADOOP-19117. - Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed > 3.4.0 release documents > --- > > Key: HADOOP-19117 > URL: https://issues.apache.org/jira/browse/HADOOP-19117 > Project: Hadoop Common > Issue Type: Sub-task > Components: common > Affects Versions: 3.4.0 > Reporter: Shilun Fan > Assignee: Shilun Fan > Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 50m > Remaining Estimate: 0h
[jira] [Updated] (HADOOP-19117) 3.4.0 release documents
[ https://issues.apache.org/jira/browse/HADOOP-19117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan updated HADOOP-19117: Target Version/s: 3.4.0 > 3.4.0 release documents > --- > > Key: HADOOP-19117 > URL: https://issues.apache.org/jira/browse/HADOOP-19117 > Project: Hadoop Common > Issue Type: Sub-task > Components: common > Affects Versions: 3.4.0 > Reporter: Shilun Fan > Assignee: Shilun Fan > Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 50m > Remaining Estimate: 0h
[jira] [Commented] (HADOOP-19112) Hadoop 3.4.0 release wrap-up
[ https://issues.apache.org/jira/browse/HADOOP-19112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828301#comment-17828301 ] ASF GitHub Bot commented on HADOOP-19112: - slfan1989 commented on PR #6640: URL: https://github.com/apache/hadoop/pull/6640#issuecomment-2007013296 @Hexiaoqiao Thank you very much for the review! > Hadoop 3.4.0 release wrap-up > > > Key: HADOOP-19112 > URL: https://issues.apache.org/jira/browse/HADOOP-19112 > Project: Hadoop Common > Issue Type: Sub-task > Components: common > Affects Versions: 3.4.0 > Reporter: Shilun Fan > Assignee: Shilun Fan > Priority: Major > Labels: pull-request-available
Re: [PR] HADOOP-19112. Hadoop 3.4.0 release wrap-up. [hadoop]
slfan1989 merged PR #6640: URL: https://github.com/apache/hadoop/pull/6640
Re: [PR] HADOOP-19117. 3.4.0 release documents. [hadoop-site]
slfan1989 commented on PR #53: URL: https://github.com/apache/hadoop-site/pull/53#issuecomment-2007011255 @steveloughran @Hexiaoqiao Thank you very much for the review!
Re: [PR] HADOOP-19117. 3.4.0 release documents. [hadoop-site]
slfan1989 merged PR #53: URL: https://github.com/apache/hadoop-site/pull/53
[PR] HDFS-17431. Fix log format for BlockRecoveryWorker#recoverBlocks [hadoop]
haiyang1987 opened a new pull request, #6643: URL: https://github.com/apache/hadoop/pull/6643 ### Description of PR https://issues.apache.org/jira/browse/HDFS-17431 Fix log format for BlockRecoveryWorker#recoverBlocks
[jira] [Updated] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves
[ https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19114: Component/s: build CVE > upgrade to commons-compress 1.26.1 due to cves > -- > > Key: HADOOP-19114 > URL: https://issues.apache.org/jira/browse/HADOOP-19114 > Project: Hadoop Common > Issue Type: Bug > Components: build, CVE >Affects Versions: 3.4.0 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > 2 recent CVEs fixed - > https://mvnrepository.com/artifact/org.apache.commons/commons-compress
[jira] [Updated] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves
[ https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19114: Affects Version/s: 3.4.0 > upgrade to commons-compress 1.26.1 due to cves > -- > > Key: HADOOP-19114 > URL: https://issues.apache.org/jira/browse/HADOOP-19114 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.4.0 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > 2 recent CVEs fixed - > https://mvnrepository.com/artifact/org.apache.commons/commons-compress
[jira] [Updated] (HADOOP-19115) upgrade to nimbus-jose-jwt 9.37.2 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19115: Component/s: build CVE > upgrade to nimbus-jose-jwt 9.37.2 due to CVE > > > Key: HADOOP-19115 > URL: https://issues.apache.org/jira/browse/HADOOP-19115 > Project: Hadoop Common > Issue Type: Bug > Components: build, CVE >Affects Versions: 3.4.0, 3.5.0 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > https://github.com/advisories/GHSA-gvpg-vgmx-xg6w
[jira] [Updated] (HADOOP-19115) upgrade to nimbus-jose-jwt 9.37.2 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19115: Affects Version/s: 3.4.0 3.5.0 > upgrade to nimbus-jose-jwt 9.37.2 due to CVE > > > Key: HADOOP-19115 > URL: https://issues.apache.org/jira/browse/HADOOP-19115 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.4.0, 3.5.0 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > https://github.com/advisories/GHSA-gvpg-vgmx-xg6w
Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]
hadoop-yetus commented on PR #6633: URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2006966559 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 18 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 44m 23s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | checkstyle | 0m 32s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 33m 18s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 33m 39s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 0m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 136 new + 18 unchanged - 0 fixed = 154 total (was 18) | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | -1 :x: | javadoc | 0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -1 :x: | javadoc | 0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -1 :x: | spotbugs | 1m 7s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/7/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0) | | +1 :green_heart: | shadedclient | 33m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 26s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. | | | | 128m 7s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-azure | | | Unread field:AbfsConnectionManager.java:[line 113] | | | Unread field:AbfsApacheHttpClient.java:[line 63] | | | Unread field:AbfsApacheHttpClient.java:[line 88] |
Re: [PR] YARN-11656 RMStateStore event queue blocked [hadoop]
p-szucs commented on code in PR #6569: URL: https://github.com/apache/hadoop/pull/6569#discussion_r1530182598 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/multidispatcher/MultiDispatcherExecutor.java: ## @@ -0,0 +1,122 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.event.multidispatcher; + +import java.util.Arrays; +import java.util.Map; +import java.util.concurrent.BlockingQueue; +import java.util.concurrent.LinkedBlockingQueue; +import java.util.stream.Collectors; + +import org.slf4j.Logger; + +import org.apache.hadoop.yarn.event.Event; +import org.apache.hadoop.yarn.util.Clock; +import org.apache.hadoop.yarn.util.MonotonicClock; + +/** + * This class contains the thread which process the {@link MultiDispatcher}'s events. 
+ */ +public class MultiDispatcherExecutor { + + private final Logger log; + private final MultiDispatcherConfig config; + private final MultiDispatcherExecutorThread[] threads; + private final Clock clock = new MonotonicClock(); + + public MultiDispatcherExecutor( + Logger log, + MultiDispatcherConfig config, + String dispatcherName + ) { +this.log = log; +this.config = config; +this.threads = new MultiDispatcherExecutorThread[config.getDefaultPoolSize()]; +ThreadGroup group = new ThreadGroup(dispatcherName); +for (int i = 0; i < threads.length; ++i) { + threads[i] = new MultiDispatcherExecutorThread(group, i, config.getQueueSize()); +} + } + + public void start() { +for(Thread t : threads) { + t.start(); +} + } + + public void execute(Event event, Runnable runnable) { +String lockKey = event.getLockKey(); +// abs of Integer.MIN_VALUE is Integer.MIN_VALUE +int threadIndex = lockKey == null || lockKey.hashCode() == Integer.MIN_VALUE ? +0 : Math.abs(lockKey.hashCode() % threads.length); Review Comment: Based on our discussion, I think a comment or description probably would be useful to make the goal of this computation more clear ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/multidispatcher/MultiDispatcherConfig.java: ## @@ -0,0 +1,77 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.yarn.event.multidispatcher; + +import org.apache.hadoop.conf.Configuration; + +/** + * All the config what can be use in the {@link MultiDispatcher} + */ +class MultiDispatcherConfig extends Configuration { + + private final String prefix; + + public MultiDispatcherConfig(Configuration configuration, String dispatcherName) { +super(configuration); +this.prefix = String.format("yarn.dispatcher.multi-thread.%s.", dispatcherName); + } + + /** + * How many executor thread should be created to handle the incoming events + * @return configured value, or default 4 + */ + public int getDefaultPoolSize() { +return super.getInt(prefix + "default-pool-size", 4); + } + + /** + * Maximus size of the event queue of the executor threads. Review Comment: Just a typo, if you touch the code again anyways :)
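The first review comment above concerns the thread-index computation, which special-cases Integer.MIN_VALUE because `Math.abs(Integer.MIN_VALUE)` is still Integer.MIN_VALUE (negative). A sketch of the same mapping using `Math.floorMod`, which makes that edge case disappear without an explicit branch; the class and method names here are invented for illustration and are not taken from the patch:

```java
public class LockKeyIndex {

    // Map a (possibly null) event lock key to a thread index in [0, poolSize).
    // Math.floorMod always returns a non-negative result for a positive modulus,
    // so the Math.abs(Integer.MIN_VALUE) == Integer.MIN_VALUE pitfall never arises.
    static int threadIndex(String lockKey, int poolSize) {
        if (lockKey == null) {
            return 0; // keyless events all go to thread 0, as in the patch under review
        }
        return Math.floorMod(lockKey.hashCode(), poolSize);
    }

    public static void main(String[] args) {
        // "polygenelubricants".hashCode() is Integer.MIN_VALUE, exactly the
        // edge case the review comment calls out.
        System.out.println(threadIndex("polygenelubricants", 4)); // prints 0
        System.out.println(threadIndex(null, 4));                 // prints 0
    }
}
```

With plain `Math.abs(hash) % n`, a key hashing to Integer.MIN_VALUE would produce a negative index and an ArrayIndexOutOfBoundsException; `floorMod` folds every hash into range without the dedicated `== Integer.MIN_VALUE` check.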
Re: [PR] YARN-11664: Remove HDFS Binaries/Jars Dependency From Yarn [hadoop]
aajisaka commented on PR #6631: URL: https://github.com/apache/hadoop/pull/6631#issuecomment-2006945704 -1. Please do not change the following `@Public` and `@Evolving` classes: - QuotaExceededException.java - DSQuotaExceededException.java > https://apache.github.io/hadoop/hadoop-project-dist/hadoop-common/Compatibility.html > Evolving interfaces must not change between minor releases. Can we use ClusterStorageCapacityExceededException (hadoop-common) instead of DSQuotaExceededException/QuotaExceededException (hadoop-hdfs) in YARN source code? IOStreamPair.java is `@Private` and I think we can relocate to hadoop-common.
[jira] [Updated] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19116: Affects Version/s: 3.3.6 3.4.0 > update to zookeeper client 3.8.4 due to CVE > --- > > Key: HADOOP-19116 > URL: https://issues.apache.org/jira/browse/HADOOP-19116 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.4.0, 3.3.6 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > https://github.com/advisories/GHSA-r978-9m6m-6gm6
[jira] [Commented] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828287#comment-17828287 ] Steve Loughran commented on HADOOP-19116: - I've just created a new component "CVE" which we can use for CVE stuff; makes it easier to get reports > update to zookeeper client 3.8.4 due to CVE > --- > > Key: HADOOP-19116 > URL: https://issues.apache.org/jira/browse/HADOOP-19116 > Project: Hadoop Common > Issue Type: Bug > Components: CVE >Affects Versions: 3.4.0, 3.3.6 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > https://github.com/advisories/GHSA-r978-9m6m-6gm6
[jira] [Updated] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE
[ https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19116: Component/s: CVE > update to zookeeper client 3.8.4 due to CVE > --- > > Key: HADOOP-19116 > URL: https://issues.apache.org/jira/browse/HADOOP-19116 > Project: Hadoop Common > Issue Type: Bug > Components: CVE >Affects Versions: 3.4.0, 3.3.6 >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > https://github.com/advisories/GHSA-r978-9m6m-6gm6
Re: [PR] HADOOP-19117. 3.4.0 release documents. [hadoop-site]
steveloughran commented on PR #53: URL: https://github.com/apache/hadoop-site/pull/53#issuecomment-2006936741 Really tips github PR review over the edge. I've trusted the generated job to be good and just verified that the links in the existing pages have been updated. All good!
Re: [PR] WIP: ApacheHttpClient adaptation in ABFS. [hadoop]
hadoop-yetus commented on PR #6633: URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2006936579 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 18 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 45m 20s | | trunk passed | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 35m 1s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 35m 22s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 21s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 134 new + 18 unchanged - 0 fixed = 152 total (was 18) | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | -1 :x: | javadoc | 0m 27s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt) | hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -1 :x: | javadoc | 0m 26s | [/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt) | hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 3 new + 15 unchanged - 0 fixed = 18 total (was 15) | | -1 :x: | spotbugs | 1m 12s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 18 new + 0 unchanged - 0 fixed = 18 total (was 0) | | +1 :green_heart: | shadedclient | 34m 42s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 2m 27s | [/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6633/6/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt) | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. | | | | 132m 21s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-azure | | | Unread field:AbfsConnectionManager.java:[line 113] | | | Unread
[jira] [Updated] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize()
[ https://issues.apache.org/jira/browse/HADOOP-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-19119: Labels: pull-request-available (was: ) > spotbugs complaining about possible NPE in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize() > > > Key: HADOOP-19119 > URL: https://issues.apache.org/jira/browse/HADOOP-19119 > Project: Hadoop Common > Issue Type: Sub-task > Components: crypto >Affects Versions: 3.5.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > > PRs against hadoop-common are reporting spotbugs problems > {code} > Dodgy code Warnings > Code Warning > NPPossible null pointer dereference in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return > value of called method > Bug type NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE (click for details) > In class org.apache.hadoop.crypto.key.kms.ValueQueue > In method org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) > Local variable stored in JVM register ? > Dereferenced at ValueQueue.java:[line 332] > Known null at ValueQueue.java:[line 332] > {code}
[jira] [Commented] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize()
[ https://issues.apache.org/jira/browse/HADOOP-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828282#comment-17828282 ] ASF GitHub Bot commented on HADOOP-19119: - steveloughran opened a new pull request, #6642: URL: https://github.com/apache/hadoop/pull/6642 Spotbugs is mistaken here as it doesn't observe the read/write locks used to manage exclusive access to the maps. * cache the value between checks * tag as @VisibleForTesting ### How was this patch tested? No tests, expecting spotbugs to STFU and existing tests to complete ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? > spotbugs complaining about possible NPE in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize() > > > Key: HADOOP-19119 > URL: https://issues.apache.org/jira/browse/HADOOP-19119 > Project: Hadoop Common > Issue Type: Sub-task > Components: crypto >Affects Versions: 3.5.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > PRs against hadoop-common are reporting spotbugs problems > {code} > Dodgy code Warnings > Code Warning > NPPossible null pointer dereference in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return > value of called method > Bug type NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE (click for details) > In class org.apache.hadoop.crypto.key.kms.ValueQueue > In method org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) > Local variable stored in JVM register ?
> Dereferenced at ValueQueue.java:[line 332] > Known null at ValueQueue.java:[line 332] > {code}
[PR] HADOOP-19119. Spotbugs: possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize() [hadoop]
steveloughran opened a new pull request, #6642: URL: https://github.com/apache/hadoop/pull/6642 Spotbugs is mistaken here as it doesn't observe the read/write locks used to manage exclusive access to the maps. * cache the value between checks * tag as @VisibleForTesting ### How was this patch tested? No tests, expecting spotbugs to STFU and existing tests to complete ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
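The "cache the value between checks" fix described in the PR body above is the standard way to clear an NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE warning: call `Map.get()` once and test the cached reference, rather than null-checking one `get()` call and dereferencing a second. The class below is an invented illustration of that pattern, not the actual ValueQueue code:

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

public class SizeLookup {

    // ArrayDeque for brevity; real concurrent use would need a thread-safe queue.
    private final Map<String, Queue<byte[]>> keyQueues = new ConcurrentHashMap<>();

    void add(String name, byte[] value) {
        keyQueues.computeIfAbsent(name, k -> new ArrayDeque<>()).add(value);
    }

    // Flagged shape: two separate get() calls. Static analysis cannot prove the
    // second call returns the same non-null value as the first:
    //
    //     if (keyQueues.get(name) == null) { return 0; }
    //     return keyQueues.get(name).size();  // NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE
    //
    // Fixed shape: a single get() whose result is cached in a local variable.
    int getSize(String name) {
        Queue<byte[]> q = keyQueues.get(name);
        return q == null ? 0 : q.size();
    }

    public static void main(String[] args) {
        SizeLookup lookup = new SizeLookup();
        System.out.println(lookup.getSize("keyA")); // prints 0: no queue yet
        lookup.add("keyA", new byte[]{1});
        System.out.println(lookup.getSize("keyA")); // prints 1
    }
}
```

Caching the reference also removes a real race in the two-call form: with concurrent removal, the second `get()` can genuinely return null after the first one passed the check.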
[jira] [Updated] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize()
[ https://issues.apache.org/jira/browse/HADOOP-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19119: Summary: spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize() (was: spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.alueQueue.getSize()) > spotbugs complaining about possible NPE in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize() > > > Key: HADOOP-19119 > URL: https://issues.apache.org/jira/browse/HADOOP-19119 > Project: Hadoop Common > Issue Type: Sub-task > Components: crypto >Affects Versions: 3.5.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > PRs against hadoop-common are reporting spotbugs problems > {code} > Dodgy code Warnings > Code Warning > NPPossible null pointer dereference in > org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return > value of called method > Bug type NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE (click for details) > In class org.apache.hadoop.crypto.key.kms.ValueQueue > In method org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) > Local variable stored in JVM register ? > Dereferenced at ValueQueue.java:[line 332] > Known null at ValueQueue.java:[line 332] > {code}
[jira] [Created] (HADOOP-19119) spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.alueQueue.getSize()
Steve Loughran created HADOOP-19119: --- Summary: spotbugs complaining about possible NPE in org.apache.hadoop.crypto.key.kms.ValueQueue.alueQueue.getSize() Key: HADOOP-19119 URL: https://issues.apache.org/jira/browse/HADOOP-19119 Project: Hadoop Common Issue Type: Sub-task Components: crypto Affects Versions: 3.5.0 Reporter: Steve Loughran Assignee: Steve Loughran PRs against hadoop-common are reporting spotbugs problems {code} Dodgy code Warnings CodeWarning NP Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method Bug type NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE (click for details) In class org.apache.hadoop.crypto.key.kms.ValueQueue In method org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) Local variable stored in JVM register ? Dereferenced at ValueQueue.java:[line 332] Known null at ValueQueue.java:[line 332] {code}
[jira] [Commented] (HADOOP-19100) Fix Spotbugs warnings in the build
[ https://issues.apache.org/jira/browse/HADOOP-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828280#comment-17828280 ] Steve Loughran commented on HADOOP-19100: - PRs against hadoop-common are reporting this; let me do a trivial fix which is spotbugs missing the previous .get() call {code} Dodgy code Warnings Code Warning NP Possible null pointer dereference in org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) due to return value of called method Bug type NP_NULL_ON_SOME_PATH_FROM_RETURN_VALUE (click for details) In class org.apache.hadoop.crypto.key.kms.ValueQueue In method org.apache.hadoop.crypto.key.kms.ValueQueue.getSize(String) Local variable stored in JVM register ? Dereferenced at ValueQueue.java:[line 332] Known null at ValueQueue.java:[line 332] {code} > Fix Spotbugs warnings in the build > -- > > Key: HADOOP-19100 > URL: https://issues.apache.org/jira/browse/HADOOP-19100 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ayush Saxena >Priority: Major > > We are getting spotbugs warnings in every PR.
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-common-project-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-hdfs-project-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html] > [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-root-warnings.html] > > Source: > 
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1532/console -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19085) Compatibility Benchmark over HCFS Implementations
[ https://issues.apache.org/jira/browse/HADOOP-19085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828279#comment-17828279 ] Steve Loughran commented on HADOOP-19085: - One thing I'd like to say is: what are the compatibility reports so far? > Compatibility Benchmark over HCFS Implementations > - > > Key: HADOOP-19085 > URL: https://issues.apache.org/jira/browse/HADOOP-19085 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, test >Affects Versions: 3.4.0 >Reporter: Han Liu >Assignee: Han Liu >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > Attachments: HADOOP-19085.001.patch, HDFS Compatibility Benchmark > Design.pdf > > > {*}Background:{*}Hadoop-Compatible File System (HCFS) is a core concept in > the big data storage ecosystem, providing unified interfaces and generally clear > semantics, and has become the de facto standard for industry storage systems > to follow and conform with. There have been a series of HCFS implementations > in Hadoop, such as S3AFileSystem for Amazon's S3 Object Store, WASB for > Microsoft's Azure Blob Storage and the OSS connector for Alibaba Cloud Object > Storage, and more from storage service providers on their own. > {*}Problems:{*}However, as indicated by introduction.md, there is no formal > suite to do compatibility assessment of a file system for all such HCFS > implementations. Thus, whether the functionality is well accomplished and > meets the core compatibility expectations mainly relies on the service provider's > own report. Meanwhile, Hadoop is also developing and new features are > continuously contributed to HCFS interfaces for existing implementations to > follow and update, in which case Hadoop also needs a tool to quickly assess > whether these features are supported or not for a specific HCFS implementation.
> Besides, the known hadoop command line tool or hdfs shell is used to directly > interact with an HCFS storage system, where most commands correspond to > specific HCFS interfaces and work well. Still, there are cases that are > complicated and may not work, like the expunge command. To check such commands > for an HCFS, we also need an approach to figure them out. > {*}Proposal:{*}Accordingly, we propose to define a formal HCFS compatibility > benchmark and provide a corresponding tool to do the compatibility assessment > for an HCFS storage system. The benchmark and tool should consider both HCFS > interfaces and hdfs shell commands. Different scenarios require different > kinds of compatibilities. For such consideration, we could define different > suites in the benchmark. > *Benefits:* We intend the benchmark and tool to be useful for both storage > providers and storage users. For end users, it can be used to evaluate the > compatibility level and determine if the storage system in question is > suitable for the required scenarios. For storage providers, it helps to > quickly generate an objective and reliable report about core functions of > the storage service. As an instance, if the HCFS got 100% on a suite named > 'tpcds', it is demonstrated that all functions needed by a tpcds program have > been well achieved. It is also a guide indicating how storage service > abilities can map to HCFS interfaces, such as storage class on S3. > Any thoughts? Comments and feedback are most welcome. Thanks in advance. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-19085) Compatibility Benchmark over HCFS Implementations
[ https://issues.apache.org/jira/browse/HADOOP-19085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19085: Component/s: fs test -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19085) Compatibility Benchmark over HCFS Implementations
[ https://issues.apache.org/jira/browse/HADOOP-19085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828278#comment-17828278 ] Steve Loughran commented on HADOOP-19085: - bq. The patch was just committed I see that. This JIRA should be closed as fixed, and the new work split out as top-level or under a new uber-JIRA covering the work. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-19085) Compatibility Benchmark over HCFS Implementations
[ https://issues.apache.org/jira/browse/HADOOP-19085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19085: Fix Version/s: 3.5.0 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-19118) KeyShell fails with NPE when KMS throws Exception with null as message
[ https://issues.apache.org/jira/browse/HADOOP-19118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dénes Bodó updated HADOOP-19118: Description: There is an issue in specific Ranger versions (where RANGER-3989 is not fixed) which throws Exception in case of concurrent access to a HashMap with Message {*}null{*}. {noformat} java.util.ConcurrentModificationException: null at java.util.HashMap$HashIterator.nextNode(HashMap.java:1469) at java.util.HashMap$EntryIterator.next(HashMap.java:1503) at java.util.HashMap$EntryIterator.next(HashMap.java:1501) {noformat} This manifests in Hadoop's KeyShell as an Exception with message {*}null{*}. So when {code:java} private String prettifyException(Exception e) { return e.getClass().getSimpleName() + ": " + e.getLocalizedMessage().split("\n")[0]; } {code} tries to print out the Exception the user experiences NPE {noformat} Exception in thread "main" java.lang.NullPointerException at org.apache.hadoop.crypto.key.KeyShell.prettifyException(KeyShell.java:541) at org.apache.hadoop.crypto.key.KeyShell.printException(KeyShell.java:536) at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:79) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81) at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:553) {noformat} This is an unwanted behaviour because the user does not have any feedback what and where went wrong. My suggestion is to add *null checking* into the affected *prettifyException* method. I'll create the Github PR soon. was: There is an issue in specific Ranger version where RANGER-3989 which throws Exception in case of concurrent access to a HashMap with Message {*}null{*}. 
{noformat} java.util.ConcurrentModificationException: null at java.util.HashMap$HashIterator.nextNode(HashMap.java:1469) at java.util.HashMap$EntryIterator.next(HashMap.java:1503) at java.util.HashMap$EntryIterator.next(HashMap.java:1501) {noformat} This manifests in Hadoop's KeyShell as an Exception with message {*}null{*}. So when {code:java} private String prettifyException(Exception e) { return e.getClass().getSimpleName() + ": " + e.getLocalizedMessage().split("\n")[0]; } {code} tries to print out the Exception the user experiences NPE {noformat} Exception in thread "main" java.lang.NullPointerException at org.apache.hadoop.crypto.key.KeyShell.prettifyException(KeyShell.java:541) at org.apache.hadoop.crypto.key.KeyShell.printException(KeyShell.java:536) at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:79) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81) at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:553) {noformat} This is an unwanted behaviour because the user does not have any feedback what and where went wrong. My suggestion is to add *null checking* into the affected *prettifyException* method. I'll create the Github PR soon. > KeyShell fails with NPE when KMS throws Exception with null as message > -- > > Key: HADOOP-19118 > URL: https://issues.apache.org/jira/browse/HADOOP-19118 > Project: Hadoop Common > Issue Type: Bug > Components: common, crypto >Affects Versions: 3.3.6 >Reporter: Dénes Bodó >Priority: Major > > There is an issue in specific Ranger versions (where RANGER-3989 is not > fixed) which throws Exception in case of concurrent access to a HashMap with > Message {*}null{*}. 
> {noformat} > java.util.ConcurrentModificationException: null > at java.util.HashMap$HashIterator.nextNode(HashMap.java:1469) > at java.util.HashMap$EntryIterator.next(HashMap.java:1503) > at java.util.HashMap$EntryIterator.next(HashMap.java:1501) {noformat} > This manifests in Hadoop's KeyShell as an Exception with message {*}null{*}. > So when > {code:java} > private String prettifyException(Exception e) { > return e.getClass().getSimpleName() + ": " + > e.getLocalizedMessage().split("\n")[0]; > } {code} > tries to print out the Exception the user experiences NPE > {noformat} > Exception in thread "main" java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.KeyShell.prettifyException(KeyShell.java:541) > at > org.apache.hadoop.crypto.key.KeyShell.printException(KeyShell.java:536) > at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:79) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81) > at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:553) > {noformat} > This is an unwanted behaviour because the user does not have any feedback > what and where went wrong. > > My suggestion is to add *null checking* into the affected *prettifyException* method. > I'll create the Github PR soon.
[jira] [Created] (HADOOP-19118) KeyShell fails with NPE when KMS throws Exception with null as message
Dénes Bodó created HADOOP-19118: --- Summary: KeyShell fails with NPE when KMS throws Exception with null as message Key: HADOOP-19118 URL: https://issues.apache.org/jira/browse/HADOOP-19118 Project: Hadoop Common Issue Type: Bug Components: common, crypto Affects Versions: 3.3.6 Reporter: Dénes Bodó There is an issue in specific Ranger version where RANGER-3989 which throws Exception in case of concurrent access to a HashMap with Message {*}null{*}. {noformat} java.util.ConcurrentModificationException: null at java.util.HashMap$HashIterator.nextNode(HashMap.java:1469) at java.util.HashMap$EntryIterator.next(HashMap.java:1503) at java.util.HashMap$EntryIterator.next(HashMap.java:1501) {noformat} This manifests in Hadoop's KeyShell as an Exception with message {*}null{*}. So when {code:java} private String prettifyException(Exception e) { return e.getClass().getSimpleName() + ": " + e.getLocalizedMessage().split("\n")[0]; } {code} tries to print out the Exception the user experiences NPE {noformat} Exception in thread "main" java.lang.NullPointerException at org.apache.hadoop.crypto.key.KeyShell.prettifyException(KeyShell.java:541) at org.apache.hadoop.crypto.key.KeyShell.printException(KeyShell.java:536) at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:79) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81) at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:553) {noformat} This is an unwanted behaviour because the user does not have any feedback what and where went wrong. My suggestion is to add *null checking* into the affected *prettifyException* method. I'll create the Github PR soon. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
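The suggested fix is a straightforward null guard before the split() call. Below is a standalone sketch whose shape mirrors the KeyShell snippet quoted in the report; it is an illustration of the proposed null check, not the actual PR.

```java
// Standalone illustration of the proposed null check in prettifyException.
// Mirrors the method quoted from KeyShell above; not the real Hadoop patch.
public class PrettifySketch {
  static String prettifyException(Exception e) {
    String msg = e.getLocalizedMessage();
    // Exceptions such as ConcurrentModificationException can be constructed
    // with a null message; guard before calling split() so the tool reports
    // the original exception instead of dying with a secondary NPE.
    return e.getClass().getSimpleName() + ": "
        + (msg == null ? "" : msg.split("\n")[0]);
  }
}
```

With the guard, a message-less exception prettifies to "ConcurrentModificationException: " rather than crashing the shell.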
[PR] HDFS-17413. [FGL] CacheReplicationMonitor supports fine-grained lock [hadoop]
ZanderXu opened a new pull request, #6641: URL: https://github.com/apache/hadoop/pull/6641 Using FSLock to make cache-pool and cache-directive state thread safe, since clients will access or modify this information and it has nothing to do with blocks. Using BMLock to make cachedBlock state thread safe, since the related logic will access block information and modify the cache-related information of one DN. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
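The lock split described in the PR can be sketched with two independent locks. FSLock and BMLock here are simple ReentrantReadWriteLock stand-ins for the namenode's actual fine-grained locks, and the counters are hypothetical state, so treat this purely as an illustration of keeping directive metadata and cached-block state off a single global lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: separate locks so cache-pool/cache-directive metadata
// (an FSLock analogue) and cached-block state (a BMLock analogue) do not
// contend on one global lock. Not the CacheReplicationMonitor code.
public class FineGrainedLockSketch {
  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private final ReentrantReadWriteLock bmLock = new ReentrantReadWriteLock();
  private int cacheDirectives = 0; // guarded by fsLock
  private int cachedBlocks = 0;    // guarded by bmLock

  public void addDirective() {
    fsLock.writeLock().lock();
    try { cacheDirectives++; } finally { fsLock.writeLock().unlock(); }
  }

  public void cacheBlock() {
    bmLock.writeLock().lock();
    try { cachedBlocks++; } finally { bmLock.writeLock().unlock(); }
  }

  public int directiveCount() {
    fsLock.readLock().lock();
    try { return cacheDirectives; } finally { fsLock.readLock().unlock(); }
  }

  public int blockCount() {
    bmLock.readLock().lock();
    try { return cachedBlocks; } finally { bmLock.readLock().unlock(); }
  }
}
```

The design point is the one the PR makes: a thread updating directive metadata never blocks a thread updating per-DN cached-block state, because the two never share a lock.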
[jira] [Commented] (HADOOP-19107) Drop support for HBase v1
[ https://issues.apache.org/jira/browse/HADOOP-19107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828232#comment-17828232 ] Ayush Saxena commented on HADOOP-19107: --- I tried removing HBase v1 & upgrading v2 as part of: https://github.com/apache/hadoop/pull/6629 It shows some spotbugs warnings: HADOOP-19100 Had to go near RBF in this, because my build started failing without that: HDFS-17370 Rest, there were a lot of HBase compat. entries declared and used; I removed/cleaned all of that. I don't think that is required: HBase shades most of them & they weren't creating any issue. The test failures in the build: I don't think they are related. I had 2 consecutive runs, both had different tests failing, one with "Unable to create native thread"; shouldn't be me. I will check on the related tickets to see if I can get a green build. > Drop support for HBase v1 > - > > Key: HADOOP-19107 > URL: https://issues.apache.org/jira/browse/HADOOP-19107 > Project: Hadoop Common > Issue Type: Task >Reporter: Ayush Saxena >Priority: Major > > Drop support for HBase v1 and make building HBase v2 the default. > Dev List: > [https://lists.apache.org/thread/vb2gh5ljwncbrmqnk0oflb8ftdz64hhs] > https://lists.apache.org/thread/o88hnm7q8n3b4bng81q14vsj3fbhfx5w -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19117. 3.4.0 release documents. [hadoop-site]
slfan1989 commented on PR #53: URL: https://github.com/apache/hadoop-site/pull/53#issuecomment-2006271532 @Hexiaoqiao Can you help review this pr? Thank you very much! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org