[GitHub] [hadoop] huangtianhua commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type
huangtianhua commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-754470304

@ayushtkn, in fact we don't have to hold this for HDFS-15660, as Vinay said. The code here fixes the specific issues of NVDIMM: it avoids storage-type-related operations during rolling upgrade and keeps the original storage type, to make sure the editLog/fsimage still works after the namenode restarts. IIUC, the minCompatLV of the namenode layout version is introduced to refuse such operations during rolling upgrade, so I think the approach is appropriate for this situation.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17452) Upgrade guice to 4.1.0
[ https://issues.apache.org/jira/browse/HADOOP-17452?focusedWorklogId=531047&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531047 ]

ASF GitHub Bot logged work on HADOOP-17452:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 05/Jan/21 07:22
Start Date: 05/Jan/21 07:22
Worklog Time Spent: 10m
Work Description: wangyum commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-754454361

> The latest Guice version is 4.2.3. Is there any reason to use 4.1.0?

OK. Upgrade it to 4.2.3.

Issue Time Tracking
-------------------
Worklog Id: (was: 531047)
Time Spent: 1h (was: 50m)

> Upgrade guice to 4.1.0
> ----------------------
>
> Key: HADOOP-17452
> URL: https://issues.apache.org/jira/browse/HADOOP-17452
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Yuming Wang
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Upgrade guice to 4.1.0 to fix compatibility issue:
> {noformat}
> Exception in thread "main" java.lang.NoSuchMethodError: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
>   at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
>   at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
>   at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
>   at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
>   at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
>   at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
>   at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
>   at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>   at com.google.inject.spi.Elements.getElements(Elements.java:110)
>   at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
>   at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
>   at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>   at com.google.inject.spi.Elements.getElements(Elements.java:110)
>   at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
>   at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
>   at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>   at com.google.inject.spi.Elements.getElements(Elements.java:110)
>   at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
>   at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
>   at com.google.inject.Guice.createInjector(Guice.java:96)
>   at com.google.inject.Guice.createInjector(Guice.java:73)
>   at com.google.inject.Guice.createInjector(Guice.java:62)
>   at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
>   at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
>   at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
>   at org.apache.druid.cli.Main.main(Main.java:113)
> {noformat}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
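The NoSuchMethodError in the stack trace above is a binary-compatibility failure: Druid's multibindings code was compiled against a Guice whose com.google.inject.util.Types provides collectionOf(Type), while the older Guice on the runtime classpath does not. As an editorial aside (not part of the original thread), one way to surface such mismatches before they throw mid-startup is a reflective probe for the expected method signature. The sketch below is hypothetical and probes JDK classes as stand-ins, so it runs without Guice on the classpath:

```java
import java.lang.reflect.Method;

// Hypothetical helper (not from the Hadoop or Druid code base): check whether
// a class on the classpath actually provides a method with the expected
// signature. A false result here is the condition that otherwise surfaces as
// NoSuchMethodError at runtime.
public class MethodPresenceCheck {

    public static boolean hasMethod(String className, String methodName,
                                    Class<?>... parameterTypes) {
        try {
            Class<?> cls = Class.forName(className);
            Method m = cls.getMethod(methodName, parameterTypes);
            return m != null;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // JDK stand-ins for com.google.inject.util.Types.collectionOf(Type):
        // a method that exists on the classpath...
        System.out.println(hasMethod("java.util.Arrays", "asList", Object[].class));
        // ...and one that does not, the mismatch behind HADOOP-17452.
        System.out.println(hasMethod("java.util.Arrays", "collectionOf", Object[].class));
    }
}
```

Against a real classpath one would probe "com.google.inject.util.Types" and "collectionOf" with java.lang.reflect.Type.class as the parameter type; upgrading the Guice dependency, as this PR does, is what makes that probe succeed.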
[GitHub] [hadoop] wangyum commented on pull request #2582: HADOOP-17452. Upgrade Guice to 4.2.3
wangyum commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-754454361

> The latest Guice version is 4.2.3. Is there any reason to use 4.1.0?

OK. Upgrade it to 4.2.3.
[GitHub] [hadoop] ayushtkn commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type
ayushtkn commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-754453252

@huangtianhua nopes, you didn't; I know that is merged. That is what I said, but there were assertions earlier on the jira that we should hold this change for HDFS-15660, since it would fix something or change our code here. So we held this jira because of that only; I just want to wait so that it can be clarified what needs to be done here post HDFS-15660. Secondly, the NamenodeLayout version approach had objections too, as I quoted above, and we need to reach agreement there. For me the code is good enough; once we have clarifications regarding these things, we can conclude this.
[jira] [Work logged] (HADOOP-17452) Upgrade guice to 4.1.0
[ https://issues.apache.org/jira/browse/HADOOP-17452?focusedWorklogId=531045&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531045 ]

ASF GitHub Bot logged work on HADOOP-17452:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 05/Jan/21 07:15
Start Date: 05/Jan/21 07:15
Worklog Time Spent: 10m
Work Description: aajisaka commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-754451089

The latest Guice version is 4.2.3. Is there any reason to use 4.1.0?

Issue Time Tracking
-------------------
Worklog Id: (was: 531045)
Time Spent: 50m (was: 40m)
[GitHub] [hadoop] aajisaka commented on pull request #2582: HADOOP-17452. Upgrade Guice to 4.1.0
aajisaka commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-754451089

The latest Guice version is 4.2.3. Is there any reason to use 4.1.0?
[jira] [Work logged] (HADOOP-17452) Upgrade guice to 4.1.0
[ https://issues.apache.org/jira/browse/HADOOP-17452?focusedWorklogId=531043&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531043 ]

ASF GitHub Bot logged work on HADOOP-17452:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 05/Jan/21 07:14
Start Date: 05/Jan/21 07:14
Worklog Time Spent: 10m
Work Description: aajisaka commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-754450332

The change itself looks good. Let me check if there are any test failures due to this change. @wangyum Is there any related JIRA in Apache Druid?

Issue Time Tracking
-------------------
Worklog Id: (was: 531043)
Time Spent: 40m (was: 0.5h)
[GitHub] [hadoop] aajisaka commented on pull request #2582: HADOOP-17452. Upgrade Guice to 4.1.0
aajisaka commented on pull request #2582:
URL: https://github.com/apache/hadoop/pull/2582#issuecomment-754450332

The change itself looks good. Let me check if there are any test failures due to this change. @wangyum Is there any related JIRA in Apache Druid?
[GitHub] [hadoop] huangtianhua edited a comment on pull request #2377: HDFS-15624. fix the function of setting quota by storage type
huangtianhua edited a comment on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-754446695

@ayushtkn, thanks for reviewing this. HDFS-15660 supports handling storage types for older clients in a generic way, and it has been merged, or did I miss something?
[GitHub] [hadoop] huangtianhua commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type
huangtianhua commented on pull request #2377:
URL: https://github.com/apache/hadoop/pull/2377#issuecomment-754446695

@ayushtkn, thanks for reviewing this. HDFS-15660 supports handling storage types for older clients in a generic way, and it has been merged, or did I miss it?
[GitHub] [hadoop] hadoop-yetus commented on pull request #2549: Hadoop 17428. ABFS: Implementation for getContentSummary
hadoop-yetus commented on pull request #2549:
URL: https://github.com/apache/hadoop/pull/2549#issuecomment-754446269

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 37s |  | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s |  | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s |  | The patch does not contain any @author tags. |
| +1 :green_heart: |  | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| -1 :x: | mvninstall | 32m 50s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| -1 :x: | compile | 0m 25s | [/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-azure in trunk failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | compile | 0m 31s | [/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/branch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-azure in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| -0 :warning: | checkstyle | 0m 27s | [/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/buildtool-branch-checkstyle-hadoop-tools_hadoop-azure.txt) | The patch fails to run checkstyle in hadoop-azure |
| -1 :x: | mvnsite | 0m 33s | [/branch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/branch-mvnsite-hadoop-tools_hadoop-azure.txt) | hadoop-azure in trunk failed. |
| +1 :green_heart: | shadedclient | 1m 39s |  | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 29s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-azure in trunk failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javadoc | 0m 29s | [/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-azure in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| +0 :ok: | spotbugs | 3m 11s |  | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 :x: | findbugs | 0m 29s | [/branch-findbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/branch-findbugs-hadoop-tools_hadoop-azure.txt) | hadoop-azure in trunk failed. |
| -0 :warning: | patch | 3m 42s |  | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| -1 :x: | mvninstall | 0m 24s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) | hadoop-azure in the patch failed. |
| -1 :x: | compile | 0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-azure in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javac | 0m 23s | [/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/10/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-azure in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | compile | 0m 22s |
[GitHub] [hadoop] jojochuang commented on pull request #2533: HDFS-15719. [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout
jojochuang commented on pull request #2533:
URL: https://github.com/apache/hadoop/pull/2533#issuecomment-754395404

Thanks Ayush and Stephen!
[GitHub] [hadoop] jojochuang merged pull request #2533: HDFS-15719. [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout
jojochuang merged pull request #2533:
URL: https://github.com/apache/hadoop/pull/2533
[GitHub] [hadoop] iwasakims commented on pull request #2586: YARN-10558. Fix failure of TestDistributedShell#testDSShellWithOpportunisticContainers.
iwasakims commented on pull request #2586:
URL: https://github.com/apache/hadoop/pull/2586#issuecomment-754389601

Thanks, @aajisaka. I merged this.
[GitHub] [hadoop] iwasakims merged pull request #2586: YARN-10558. Fix failure of TestDistributedShell#testDSShellWithOpportunisticContainers.
iwasakims merged pull request #2586:
URL: https://github.com/apache/hadoop/pull/2586
[jira] [Work logged] (HADOOP-15984) Update jersey from 1.19 to 2.x
[ https://issues.apache.org/jira/browse/HADOOP-15984?focusedWorklogId=531011&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531011 ]

ASF GitHub Bot logged work on HADOOP-15984:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 05/Jan/21 04:27
Start Date: 05/Jan/21 04:27
Worklog Time Spent: 10m
Work Description: aajisaka commented on pull request #763:
URL: https://github.com/apache/hadoop/pull/763#issuecomment-754385819

> May I know when can we expect this PR to be merged?

We have some whitesource findings which we need to resolve. I'd like to merge this by the end of 2021Q1.

Issue Time Tracking
-------------------
Worklog Id: (was: 531011)
Time Spent: 1h 10m (was: 1h)

> Update jersey from 1.19 to 2.x
> ------------------------------
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Akira Ajisaka
> Priority: Critical
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.
[GitHub] [hadoop] aajisaka commented on pull request #763: [WIP] HADOOP-15984. Update jersey from 1.19 to 2.x
aajisaka commented on pull request #763:
URL: https://github.com/apache/hadoop/pull/763#issuecomment-754385819

> May I know when can we expect this PR to be merged?

We have some whitesource findings which we need to resolve. I'd like to merge this by the end of 2021Q1.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2588: HDFS-15761. Dead NORMAL DN shouldn't transit to DECOMMISSIONED immediately
hadoop-yetus commented on pull request #2588:
URL: https://github.com/apache/hadoop/pull/2588#issuecomment-754315251

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 1m 15s |  | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s |  | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s |  | The patch does not contain any @author tags. |
| +1 :green_heart: |  | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 36m 48s |  | trunk passed |
| +1 :green_heart: | compile | 1m 20s |  | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 1m 11s |  | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 50s |  | trunk passed |
| +1 :green_heart: | mvnsite | 1m 18s |  | trunk passed |
| +1 :green_heart: | shadedclient | 20m 11s |  | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 55s |  | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 1m 26s |  | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 3m 33s |  | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 29s |  | trunk passed |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 16s |  | the patch passed |
| +1 :green_heart: | compile | 1m 15s |  | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 1m 15s |  | the patch passed |
| +1 :green_heart: | compile | 1m 9s |  | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 1m 9s |  | the patch passed |
| -0 :warning: | checkstyle | 0m 43s | [/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2588/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 13 unchanged - 0 fixed = 14 total (was 13) |
| +1 :green_heart: | mvnsite | 1m 15s |  | the patch passed |
| -1 :x: | whitespace | 0m 0s | [/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2588/1/artifact/out/whitespace-eol.txt) | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | shadedclient | 19m 25s |  | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 0s |  | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 1m 34s |  | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 3m 53s |  | the patch passed |
|||| _ Other Tests _ |
| -1 :x: | unit | 202m 5s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2588/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | asflicense | 0m 49s | [/patch-asflicense-problems.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2588/1/artifact/out/patch-asflicense-problems.txt) | The patch generated 4 ASF License warnings. |
|  |  | 305m 33s |  |  |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData |
|  | hadoop.hdfs.TestDatanodeDeath |
|  | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary |
|  | hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover |
|  | hadoop.hdfs.TestFileChecksum |
|  | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|  | hadoop.hdfs.TestSetrepIncreasing |
|  | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
|  | hadoop.hdfs.server.datanode.TestBPOfferService |
|  | hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC |
|  | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
|  | hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData |
|  | hadoop.hdfs.TestDistributedFileSystem |
|  | hadoop.hdfs.TestErasureCodingExerciseAPIs |
|  | hadoop.cli.TestHDFSCLI |
|  | hadoop.hdfs.server.namenode.TestAuditLogs |
[GitHub] [hadoop] iwasakims commented on pull request #2586: YARN-10558. Fix failure of TestDistributedShell#testDSShellWithOpportunisticContainers.
iwasakims commented on pull request #2586:
URL: https://github.com/apache/hadoop/pull/2586#issuecomment-754275129

The failure of testDSShellWithEnforceExecutionType is expected. I'm going to address that in [YARN-10040](https://issues.apache.org/jira/browse/YARN-10040) later.
[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.35
[ https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=530914&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530914 ] ASF GitHub Bot logged work on HADOOP-17371: --- Author: ASF GitHub Bot Created on: 04/Jan/21 21:37 Start Date: 04/Jan/21 21:37 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2590: URL: https://github.com/apache/hadoop/pull/2590#issuecomment-754235012 :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 10m 43s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ branch-3.2 Compile Tests _ |
| +0 :ok: | mvndep | 4m 19s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 24m 44s | branch-3.2 passed |
| +1 :green_heart: | compile | 15m 31s | branch-3.2 passed |
| +1 :green_heart: | checkstyle | 2m 47s | branch-3.2 passed |
| +1 :green_heart: | mvnsite | 5m 12s | branch-3.2 passed |
| +1 :green_heart: | shadedclient | 23m 51s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 4m 31s | branch-3.2 passed |
| +0 :ok: | spotbugs | 0m 28s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +0 :ok: | findbugs | 0m 41s | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
| +0 :ok: | findbugs | 0m 28s | branch/hadoop-client-modules/hadoop-client-minicluster no findbugs output file (findbugsXml.xml) |
| -1 :x: | findbugs | 0m 35s | hadoop-auth in branch-3.2 failed. |
| -1 :x: | findbugs | 0m 35s | hadoop-common in branch-3.2 failed. |
| -1 :x: | findbugs | 0m 35s | hadoop-kms in branch-3.2 failed. |
| -1 :x: | findbugs | 0m 36s | hadoop-hdfs in branch-3.2 failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for patch |
| -1 :x: | mvninstall | 0m 21s | hadoop-client-minicluster in the patch failed. |
| -1 :x: | mvninstall | 0m 22s | hadoop-auth in the patch failed. |
| -1 :x: | mvninstall | 0m 22s | hadoop-common in the patch failed. |
| -1 :x: | mvninstall | 0m 21s | hadoop-kms in the patch failed. |
| -1 :x: | mvninstall | 0m 21s | hadoop-hdfs in the patch failed. |
| -1 :x: | mvninstall | 0m 22s | hadoop-project in the patch failed. |
| -1 :x: | compile | 0m 21s | root in the patch failed. |
| -1 :x: | javac | 0m 21s | root in the patch failed. |
| -0 :warning: | checkstyle | 0m 19s | The patch fails to run checkstyle in root |
| -1 :x: | mvnsite | 0m 21s | hadoop-client-minicluster in the patch failed. |
| -1 :x: | mvnsite | 0m 22s | hadoop-auth in the patch failed. |
| -1 :x: | mvnsite | 0m 21s | hadoop-common in the patch failed. |
| -1 :x: | mvnsite | 0m 22s | hadoop-kms in the patch failed. |
| -1 :x: | mvnsite | 0m 22s | hadoop-hdfs in the patch failed. |
| -1 :x: | mvnsite | 0m 21s | hadoop-project in the patch failed. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 5s | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 0m 25s | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 22s | hadoop-client-minicluster in the patch failed. |
| -1 :x: | javadoc | 0m 22s | hadoop-auth in the patch failed. |
| -1 :x: | javadoc | 0m 21s | hadoop-common in the patch failed. |
| -1 :x: | javadoc | 0m 22s | hadoop-kms in the patch failed. |
| -1 :x: | javadoc | 0m 22s | hadoop-hdfs in the patch failed. |
| -1 :x: | javadoc | 0m 21s | hadoop-project in the patch failed. |
| -1 :x: | findbugs | 0m 21s | hadoop-client-minicluster in the patch failed. |
| -1 :x: | findbugs | 0m 22s | hadoop-auth in the patch failed. |
| -1 :x: | findbugs | 0m 21s | hadoop-common in the patch failed. |
| -1 :x: | findbugs | 0m 22s | hadoop-kms in the patch failed. |
| -1 :x: | findbugs | 0m 22s | hadoop-hdfs in the patch failed. |
| -1 :x: | findbugs | 0m 22s | hadoop-project in the patch failed. |
||| _ Other Tests _ |
| -1 :x: | unit | 0m 21s | hadoop-client-minicluster in the patch failed. |
| -1 :x: | unit | 0m 21s | hadoop-auth in the patch failed. |
| -1 :x: | unit | 0m 22s | hadoop-common in the patch failed. |
| -1 :x: | unit | 0m 21s | hadoop-kms in the patch failed. |
| -1 :x: | unit | 0m 22s | hadoop-hdfs in the patch failed. |
| -1 :x: | unit | 0m 23s | hadoop-project in
[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api
[ https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=530904&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530904 ] ASF GitHub Bot logged work on HADOOP-16080: --- Author: ASF GitHub Bot Created on: 04/Jan/21 21:18 Start Date: 04/Jan/21 21:18 Worklog Time Spent: 10m Work Description: sunchao commented on pull request #2575: URL: https://github.com/apache/hadoop/pull/2575#issuecomment-754224870 @steveloughran my bad. Should've done this in a proper way. Issue Time Tracking --- Worklog Id: (was: 530904) Time Spent: 5h 40m (was: 5.5h)
> hadoop-aws does not work with hadoop-client-api
> ---
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.2.0, 3.1.1
> Reporter: Keith Turner
> Assignee: Chao Sun
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1
> Time Spent: 5h 40m
> Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
> * hadoop-client-api-3.1.1.jar
> * hadoop-client-runtime-3.1.1.jar
> * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> 	at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> 	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> 	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> 	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> 	at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> 	at org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> 	at org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> 	at org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> 	at org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> 	at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> 	at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> 	at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> 	at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}, which does not exist in hadoop-client-api-3.1.1.jar. What does exist is {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that relocated references to Guava.
-- This message was sent by Atlassian Jira (v8.3.4#803005)
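The workaround described above, rebuilding hadoop-aws with its Guava references relocated to the prefix used inside the shaded client jars, can be sketched with the maven-shade-plugin. This is an illustrative fragment only: the plugin version is an assumption, and the relocation prefix is taken from the `org.apache.hadoop.shaded` package named in the report, not from the actual Hadoop build files.

```xml
<!-- Illustrative sketch only; plugin version is an assumption. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- Rewrite Guava references so they match the prefix used
               inside hadoop-client-api / hadoop-client-runtime. -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a relocation like this, the rebuilt jar calls `SemaphoredDelegatingExecutor.<init>` with the shaded `ListeningExecutorService` type, matching the signature that actually exists in the client-api jar.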
[jira] [Updated] (HADOOP-17371) Bump Jetty to the latest version 9.4.35
[ https://issues.apache.org/jira/browse/HADOOP-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-17371: - Fix Version/s: 3.3.1 > Bump Jetty to the latest version 9.4.35 > --- > > Key: HADOOP-17371 > URL: https://issues.apache.org/jira/browse/HADOOP-17371 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > The Hadoop 3 branches are on 9.4.20. We should update to the latest version: > 9.4.34
[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=530879&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530879 ] ASF GitHub Bot logged work on HADOOP-13327: --- Author: ASF GitHub Bot Created on: 04/Jan/21 20:29 Start Date: 04/Jan/21 20:29 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2587: URL: https://github.com/apache/hadoop/pull/2587#issuecomment-754200640 :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 32s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 9 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 48s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 12s | | trunk passed |
| +1 :green_heart: | compile | 20m 4s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 17m 15s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 44s | | trunk passed |
| +1 :green_heart: | mvnsite | 5m 43s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 27s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 4m 3s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 4m 57s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 47s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 9m 47s | | trunk passed |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 31s | | the patch passed |
| +1 :green_heart: | compile | 26m 52s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| -1 :x: | javac | 26m 52s | [/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/1/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 12 new + 2038 unchanged - 0 fixed = 2050 total (was 2038) |
| +1 :green_heart: | compile | 24m 4s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| -1 :x: | javac | 24m 4s | [/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/1/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 12 new + 1931 unchanged - 0 fixed = 1943 total (was 1931) |
| -0 :warning: | checkstyle | 3m 28s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/1/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 4 new + 128 unchanged - 7 fixed = 132 total (was 135) |
| +1 :green_heart: | mvnsite | 6m 6s | | the patch passed |
| -1 :x: | whitespace | 0m 0s | [/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/1/artifact/out/whitespace-eol.txt) | The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | xml | 0m 5s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 19m 15s | | patch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 1m 10s | [/diff-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2587/1/artifact/out/diff-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 1 new + 98 unchanged - 1 fixed = 99 total (was 99) |
| +1 :green_heart: | javadoc | 5m 33s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 13m 45s | | the patch passed |
|||| _ Other Tests _ |
| -1 :x: | unit | 11m 18s |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2583: HDFS-15549. Improve DISK/ARCHIVE movement if they are on same filesystem
hadoop-yetus commented on pull request #2583: URL: https://github.com/apache/hadoop/pull/2583#issuecomment-754188800 :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 59s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 24m 15s | | trunk passed |
| +1 :green_heart: | compile | 22m 23s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 26m 9s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 4m 36s | | trunk passed |
| -1 :x: | mvnsite | 1m 9s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
| +1 :green_heart: | shadedclient | 9m 58s | | branch has no errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 51s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javadoc | 0m 59s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-common in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| -1 :x: | javadoc | 1m 0s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| +0 :ok: | spotbugs | 16m 45s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 :x: | findbugs | 1m 2s | [/branch-findbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. |
| -1 :x: | findbugs | 1m 1s | [/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 40s | | Maven dependency ordering for patch |
| -1 :x: | mvninstall | 0m 32s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. |
| -1 :x: | mvninstall | 0m 28s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | compile | 0m 30s | [/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | root in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javac | 0m 30s | [/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | root in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | compile | 0m 33s |
[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.35
[ https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=530851&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530851 ] ASF GitHub Bot logged work on HADOOP-17371: --- Author: ASF GitHub Bot Created on: 04/Jan/21 19:58 Start Date: 04/Jan/21 19:58 Worklog Time Spent: 10m Work Description: jojochuang commented on pull request #2590: URL: https://github.com/apache/hadoop/pull/2590#issuecomment-754184397 The only small conflict is in hadoop-client-minicluster/pom.xml where excluding jetty-http is not needed in branch-3.2. Issue Time Tracking --- Worklog Id: (was: 530851) Time Spent: 3h 50m (was: 3h 40m) > Bump Jetty to the latest version 9.4.35 > --- > > Key: HADOOP-17371 > URL: https://issues.apache.org/jira/browse/HADOOP-17371 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > The Hadoop 3 branches are on 9.4.20. We should update to the latest version: > 9.4.34
[GitHub] [hadoop] jojochuang commented on pull request #2590: [branch-3.2] Backport HADOOP-17371. Bump Jetty to the latest version 9.4.35.
jojochuang commented on pull request #2590: URL: https://github.com/apache/hadoop/pull/2590#issuecomment-754184397 The only small conflict is in hadoop-client-minicluster/pom.xml where excluding jetty-http is not needed in branch-3.2.
[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.35
[ https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=530847&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530847 ] ASF GitHub Bot logged work on HADOOP-17371: --- Author: ASF GitHub Bot Created on: 04/Jan/21 19:53 Start Date: 04/Jan/21 19:53 Worklog Time Spent: 10m Work Description: jojochuang opened a new pull request #2590: URL: https://github.com/apache/hadoop/pull/2590 Issue Time Tracking --- Worklog Id: (was: 530847) Time Spent: 3h 40m (was: 3.5h) > Bump Jetty to the latest version 9.4.35 > --- > > Key: HADOOP-17371 > URL: https://issues.apache.org/jira/browse/HADOOP-17371 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.1, 3.4.0, 3.1.5, 3.2.3 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 40m > Remaining Estimate: 0h > > The Hadoop 3 branches are on 9.4.20. We should update to the latest version: > 9.4.34
[GitHub] [hadoop] jojochuang opened a new pull request #2590: [branch-3.2] Backport HADOOP-17371. Bump Jetty to the latest version 9.4.35.
jojochuang opened a new pull request #2590: URL: https://github.com/apache/hadoop/pull/2590
[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.35
[ https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=530844&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530844 ] ASF GitHub Bot logged work on HADOOP-17371: --- Author: ASF GitHub Bot Created on: 04/Jan/21 19:52 Start Date: 04/Jan/21 19:52 Worklog Time Spent: 10m Work Description: jojochuang opened a new pull request #2589: URL: https://github.com/apache/hadoop/pull/2589 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute Issue Time Tracking --- Worklog Id: (was: 530844) Time Spent: 3h 20m (was: 3h 10m)
[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.35
[ https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=530845&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530845 ] ASF GitHub Bot logged work on HADOOP-17371: --- Author: ASF GitHub Bot Created on: 04/Jan/21 19:52 Start Date: 04/Jan/21 19:52 Worklog Time Spent: 10m Work Description: jojochuang closed pull request #2589: URL: https://github.com/apache/hadoop/pull/2589 Issue Time Tracking --- Worklog Id: (was: 530845) Time Spent: 3.5h (was: 3h 20m)
[GitHub] [hadoop] jojochuang closed pull request #2589: HADOOP-17371 [branch-3.2] Backport HADOOP-17371. Bump Jetty to the latest version 9.4.35.
jojochuang closed pull request #2589: URL: https://github.com/apache/hadoop/pull/2589
[GitHub] [hadoop] jojochuang opened a new pull request #2589: HADOOP-17371 [branch-3.2] Backport HADOOP-17371. Bump Jetty to the latest version 9.4.35.
jojochuang opened a new pull request #2589: URL: https://github.com/apache/hadoop/pull/2589
[jira] [Updated] (HADOOP-17371) Bump Jetty to the latest version 9.4.35
[ https://issues.apache.org/jira/browse/HADOOP-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-17371: - Summary: Bump Jetty to the latest version 9.4.35 (was: Bump Jetty to the latest version 9.4.34)
[jira] [Resolved] (HADOOP-17441) Update Jetty hadoop dependency
[ https://issues.apache.org/jira/browse/HADOOP-17441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HADOOP-17441. -- Resolution: Duplicate Let's use HADOOP-17371 to track the Jetty update. > Update Jetty hadoop dependency > -- > > Key: HADOOP-17441 > URL: https://issues.apache.org/jira/browse/HADOOP-17441 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.3.0, 3.2.1 >Reporter: Souryakanta Dwivedy >Assignee: Ravuri Sushma sree >Priority: Major > Attachments: Jetty_CVEs.png > > > Vulnerability fixes needed for Jetty hadoop dependency library > The jetty jars where CVEs are found are , > = > Jetty [version 9.4.20.v20190813 ] > jetty-server-9.4.20.v20190813.jar > CVE details :- [ CVE-2020-27216 ] > = > Jetty-http [version 9.4.20.v20190813 ] > jetty-http-9.4.20.v20190813.jar > CVE details :- [ CVE-2020-27216 ]
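Version bumps of this kind are typically a one-line change to the shared Jetty version property in the Hadoop parent pom. A hypothetical sketch, not the actual HADOOP-17371 patch (the exact property layout and the 9.4.35 release tag shown here are assumptions):

```xml
<!-- hadoop-project/pom.xml (illustrative fragment, not the actual diff) -->
<properties>
  <!-- was 9.4.20.v20190813, the release affected by CVE-2020-27216 -->
  <jetty.version>9.4.35.v20201120</jetty.version>
</properties>
```

Because jetty-server, jetty-http, and the other Jetty artifacts all inherit this one property, a single bump covers every CVE-affected jar listed above.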
[GitHub] [hadoop] NickyYe opened a new pull request #2588: HDFS-15761. Dead NORMAL DN shouldn't transit to DECOMMISSIONED immediately
NickyYe opened a new pull request #2588: URL: https://github.com/apache/hadoop/pull/2588 https://issues.apache.org/jira/browse/HDFS-15761
[jira] [Resolved] (HADOOP-17449) Jetty 9.4.20 can't generate resourceBase with NPE
[ https://issues.apache.org/jira/browse/HADOOP-17449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HADOOP-17449. -- Resolution: Duplicate Thanks for reporting the issue. Let's use HADOOP-17371 to update the Jetty version. > Jetty 9.4.20 can't generate resourceBase with NPE > - > > Key: HADOOP-17449 > URL: https://issues.apache.org/jira/browse/HADOOP-17449 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ahmed Hussein >Priority: Major > > While I was looking into TestDistributedShell logs, I noticed the following > {{Warning}} > {code:bash} > 2020-12-29 16:22:26,379 INFO [Time-limited test] handler.ContextHandler > (ContextHandler.java:doStart(824)) - Started > o.e.j.s.ServletContextHandler@75389179{logs,/logs,file:///hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/target/log,AVAILABLE} > 2020-12-29 16:22:26,380 INFO [Time-limited test] handler.ContextHandler > (ContextHandler.java:doStart(824)) - Started > o.e.j.s.ServletContextHandler@116ed75c{static,/static,jar:file:~/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.4.0-SNAPSHOT/hadoop-yarn-common-3.4.0-SNAPSHOT.jar!/webapps/static,AVAILABLE} > 2020-12-29 16:22:26,390 WARN [Time-limited test] webapp.WebInfConfiguration > (WebInfConfiguration.java:getCanonicalNameForWebAppTmpDir(794)) - Can't > generate resourceBase as part of webapp tmp dir name: > java.lang.NullPointerException > 2020-12-29 16:22:26,469 INFO [Time-limited test] util.TypeUtil > (TypeUtil.java:(201)) - JVM Runtime does not support Modules > {code} > For OS X, it looks like {{webAppContext.setBaseResource}} and accessing the > sources from a jar file will cause {{file.resource.toURI().getPath()}} to > return {{null}} for {{jar:-urls}} > I checked that changing the jetty-version from {{9.4.20.v20190813}} to > something above {{9.4.21}} (aka., 9.4.23.v20191118) fixes the warning. 
> [~inigoiri], [~aajisaka], [~weichiu], [~ayushtkn] > Do you guys think we should consider upgrading Jetty to the [latest versions > of 9.4.x|https://mvnrepository.com/artifact/org.eclipse.jetty/jetty-webapp] > like 9.4.35?
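The NPE described in HADOOP-17449 traces back to how opaque `jar:` URIs behave: `java.net.URI.getPath()` returns null for them, which Jetty 9.4.20's `WebInfConfiguration` did not guard against (per the report above, versions from roughly 9.4.21 onward do). A small illustration, independent of Jetty:

```java
import java.net.URI;

public class JarUriDemo {
    public static void main(String[] args) {
        // Hierarchical file: URI -- getPath() works as expected.
        URI file = URI.create("file:///tmp/webapps/static");
        System.out.println(file.getPath());   // /tmp/webapps/static

        // jar: URIs are opaque (the whole "file:...!/..." part is the
        // scheme-specific part), so getPath() returns null -- the value
        // Jetty then dereferenced while naming the webapp tmp dir.
        URI jar = URI.create("jar:file:/tmp/app.jar!/webapps/static");
        System.out.println(jar.isOpaque());   // true
        System.out.println(jar.getPath());    // null
    }
}
```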
[GitHub] [hadoop] hadoop-yetus commented on pull request #2585: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode
hadoop-yetus commented on pull request #2585: URL: https://github.com/apache/hadoop/pull/2585#issuecomment-754160262 :broken_heart: **-1 overall**

| Vote | Subsystem | Comment |
|:----:|:----------|:--------|
| +1 | dupname, @author, test4tests | No case-conflicting files; no @author tags; the patch appears to include 5 new or modified test files. |
| +1 | mvninstall, compile, javac, checkstyle, mvnsite, shadedclient, javadoc, findbugs | Passed on trunk and on the patch with both JDKs (Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 and Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01). |
| +1 | whitespace, xml | The patch has no whitespace issues and no ill-formed XML file. |
| +1 | unit (hadoop-common) | hadoop-common in the patch passed. |
| -1 | unit (hadoop-hdfs) | hadoop-hdfs unit tests failed: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2585/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| +1 | asflicense | The patch does not generate ASF License warnings. |

Failed junit tests: hadoop.hdfs.TestReconstructStripedFileWithValidator, hadoop.hdfs.TestMultipleNNPortQOP. Total run time: 296m 1s.

Docker ClientAPI=1.41 ServerAPI=1.41 | GITHUB PR: https://github.com/apache/hadoop/pull/2585 | git revision: trunk / 2825d060cf9 | Test results: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2585/1/testReport/ | Max. process+thread count: 5250 (vs. ulimit of 5500)
[jira] [Commented] (HADOOP-16415) Speed up S3A test runs
[ https://issues.apache.org/jira/browse/HADOOP-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258420#comment-17258420 ] Steve Loughran commented on HADOOP-16415: - v1 list API calls are 100s. Do we need these? Better: a minimal set of tests 102.663 s - in org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List ITestS3ARemoteFileChanged is 800s, because it is so parameterized, including on change detection policy on open streams. Not all tests change behaviour on those options, especially the rename ones. Better: split into tests which read file data, and tests which just manipulate files. With S3Guard off, we should still need to test what happens when a file is changed while open. We shouldn't need to worry about mismatch between listing and opened/renamed files. > Speed up S3A test runs > -- > > Key: HADOOP-16415 > URL: https://issues.apache.org/jira/browse/HADOOP-16415 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > > S3A Test runs are way too slow. > Speed them by > * reducing test setup/teardown costs > * eliminating obsolete test cases > * merge small tests into larger ones. > One thing i see is that the main S3A test cases create and destroy new FS > instances; There's both a setup and teardown cost there, but it does > guarantee better isolation. > Maybe if we know all test cases in a specific suite need the same options, we > can manage that better; demand create the FS but only delete it in an > @Afterclass method. That'd give us the OO-inheritance based setup of tests, > but mean only one instance is done per suite
[jira] [Commented] (HADOOP-16415) Speed up S3A test runs
[ https://issues.apache.org/jira/browse/HADOOP-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258415#comment-17258415 ] Steve Loughran commented on HADOOP-16415: - h3. Huge tests we have too many of the Huge upload tests, one for each buffer mechanism. Proposed: * only test disk buffering * make sure we have unit tests for the others for large buffers which verify we can mark/reset back to the beginning, which is what the aws sdk needs h3. Surprisingly slow 85.901 s - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextURI -- too many exists/isFile/isDir checks. Best to only do isDir
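The "demand create the FS but only delete it in an @Afterclass method" idea from HADOOP-16415 can be sketched as a lazily initialized, suite-scoped resource. This is a hypothetical illustration in plain Java (class and method names are not Hadoop's actual test base; in JUnit the teardown would carry the `@AfterClass` annotation):

```java
// Sketch: demand-create an expensive resource once per test suite and
// close it once, instead of paying setup/teardown per test method.
public class SuiteScopedFs {
    static int created = 0;                 // visible so the demo can count setups
    private static AutoCloseable sharedFs;  // one shared instance per suite

    /** Create on first use; later tests in the suite reuse the instance. */
    static synchronized AutoCloseable getFs() {
        if (sharedFs == null) {
            created++;                      // expensive setup happens exactly once
            sharedFs = () -> { };           // stand-in for a real S3A FileSystem
        }
        return sharedFs;
    }

    /** In JUnit this would be the @AfterClass method. */
    static synchronized void teardownSuite() throws Exception {
        if (sharedFs != null) {
            sharedFs.close();               // deleted once per suite
            sharedFs = null;
        }
    }

    public static void main(String[] args) throws Exception {
        getFs(); getFs(); getFs();          // three "tests" in one suite
        teardownSuite();
        System.out.println(created);        // 1 -- setup ran once, not three times
    }
}
```

Subclasses that all need the same FS options inherit `getFs()`, which keeps the OO-inheritance-based setup while creating only one instance per suite.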
[jira] [Work logged] (HADOOP-17371) Bump Jetty to the latest version 9.4.34
[ https://issues.apache.org/jira/browse/HADOOP-17371?focusedWorklogId=530791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530791 ] ASF GitHub Bot logged work on HADOOP-17371: --- Author: ASF GitHub Bot Created on: 04/Jan/21 17:44 Start Date: 04/Jan/21 17:44 Worklog Time Spent: 10m Work Description: jojochuang merged pull request #2453: URL: https://github.com/apache/hadoop/pull/2453 Issue Time Tracking --- Worklog Id: (was: 530791) Time Spent: 3h 10m (was: 3h)
[jira] [Updated] (HADOOP-17371) Bump Jetty to the latest version 9.4.34
[ https://issues.apache.org/jira/browse/HADOOP-17371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-17371: - Fix Version/s: 3.4.0
[GitHub] [hadoop] jojochuang merged pull request #2453: HADOOP-17371. Bump Jetty to the latest version 9.4.34. Contributed by Wei-Chiu Chuang.
jojochuang merged pull request #2453: URL: https://github.com/apache/hadoop/pull/2453
[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=530789&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530789 ] ASF GitHub Bot logged work on HADOOP-17338: --- Author: ASF GitHub Bot Created on: 04/Jan/21 17:43 Start Date: 04/Jan/21 17:43 Worklog Time Spent: 10m Work Description: yzhangal commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-754115484 Happy new year and many thanks again @steveloughran ! Good info about unbuffer() too! Issue Time Tracking --- Worklog Id: (was: 530789) Time Spent: 4h 10m (was: 4h) > Intermittent S3AInputStream failures: Premature end of Content-Length > delimited message body etc > > > Key: HADOOP-17338 > URL: https://issues.apache.org/jira/browse/HADOOP-17338 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1 > > Attachments: HADOOP-17338.001.patch > > Time Spent: 4h 10m > Remaining Estimate: 0h > > We are seeing the following two kinds of intermittent exceptions when using > S3AInputSteam: > 1.
> {code:java} > Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: > Premature end of Content-Length delimited message body (expected: 156463674; > received: 150001089 > at > com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178) > at > com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at > com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107) > at > com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82) > at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181) > at java.io.DataInputStream.readFully(DataInputStream.java:195) > at java.io.DataInputStream.readFully(DataInputStream.java:169) > at > org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779) > at > org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214) > 
at > org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350) > ... 15 more > {code} > 2. > {code:java} > Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly > at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596) > at sun.security.ssl.InputRecord.read(InputRecord.java:532) > at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990) > at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948) > at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) > at > com.amazonaws.thirdparty.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) > at >
[GitHub] [hadoop] yzhangal commented on pull request #2497: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …
yzhangal commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-754115484 Happy new year and many thanks again @steveloughran ! Good info about unbuffer() too!
[jira] [Commented] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258363#comment-17258363 ] Yongjun Zhang commented on HADOOP-17338: Happy new year and many thanks again [~ste...@apache.org]! I will work out a 2.10.x version asap.
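Both HADOOP-17338 failure modes (premature end of a Content-Length-delimited body, SSL peer shutdown) surface as an IOException mid-read. The generic client-side mitigation is to remember the logical offset, reopen the object from that offset, and retry. The sketch below illustrates that pattern only; `openAt()`, the retry limit, and the class itself are hypothetical stand-ins, not the actual S3AInputStream patch:

```java
import java.io.IOException;
import java.io.InputStream;

// Generic reopen-and-retry pattern for mid-stream connection drops.
// openAt() is a placeholder for something like a ranged GET.
abstract class RetryingStream {
    private static final int MAX_ATTEMPTS = 3;
    private long pos;         // logical offset, tracked so we can reopen here
    private InputStream in;

    abstract InputStream openAt(long offset) throws IOException;

    int read(byte[] buf, int off, int len) throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            try {
                if (in == null) {
                    in = openAt(pos);       // (re)open at the current offset
                }
                int n = in.read(buf, off, len);
                if (n > 0) {
                    pos += n;               // advance only by what we got
                }
                return n;
            } catch (IOException e) {       // connection reset / premature EOF
                last = e;
                closeQuietly();             // force a reopen at pos next attempt
            }
        }
        throw last;
    }

    private void closeQuietly() {
        try { if (in != null) in.close(); } catch (IOException ignored) { }
        in = null;
    }
}
```

Callers see at most a retried read rather than a failed Parquet/Hive job, at the cost of one extra ranged request per dropped connection.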
[GitHub] [hadoop] goiri commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell
goiri commented on a change in pull request #2581: URL: https://github.com/apache/hadoop/pull/2581#discussion_r551457156

## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/DistributedShellBaseTest.java

## @@ -0,0 +1,496 @@

```java
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.yarn.applications.distributedshell;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.URL;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TemporaryFolder;
import org.junit.rules.TestName;
import org.junit.rules.Timeout;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.net.ServerSocketUtil;
import org.apache.hadoop.test.GenericTestUtils;
import org.apache.hadoop.util.JarFinder;
import org.apache.hadoop.util.Shell;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.ContainerReport;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain;
import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;
import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
import org.apache.hadoop.yarn.server.timeline.NameValuePair;
import org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin;
import org.apache.hadoop.yarn.util.ProcfsBasedProcessTree;

import static org.junit.Assert.assertTrue;

public class DistributedShellBaseTest {

  protected final static String APPMASTER_JAR =
      JarFinder.getJar(ApplicationMaster.class);
  protected static final int MIN_ALLOCATION_MB = 128;
  protected static final int TEST_TIME_OUT = 16;
  // set the timeout of the yarnClient to be 95% of the globalTimeout.
  protected static final int TEST_TIME_WINDOW_EXPIRE =
      (TEST_TIME_OUT * 90) / 100;
  private static final Logger LOG =
      LoggerFactory.getLogger(DistributedShellBaseTest.class);
  private static final int NUM_NMS = 1;
  private static final float DEFAULT_TIMELINE_VERSION = 1.0f;
  private static final String TIMELINE_AUX_SERVICE_NAME = "timeline_collector";
  // set the timeout of the yarnClient to be 95% of the globalTimeout.
  private final String yarnClientTimeout =
      String.valueOf(TEST_TIME_WINDOW_EXPIRE);
  private final String[] commonArgs = {
      "--jar",
      APPMASTER_JAR,
      "--timeout",
      yarnClientTimeout,
      "--appname",
      ""
  };
  @Rule
  public Timeout globalTimeout = new Timeout(TEST_TIME_OUT,
      TimeUnit.MILLISECONDS);
  @Rule
  public TemporaryFolder tmpFolder = new TemporaryFolder();
  @Rule
  public TestName name = new TestName();

  protected MiniYARNCluster yarnCluster;
  protected YarnConfiguration conf = null;
  // location of the filesystem timeline writer for timeline service v.2
  private String timelineV2StorageDir = null;

  protected float getTimelineVersion() {
    return DEFAULT_TIMELINE_VERSION;
  }

  public String getTimelineV2StorageDir() {
    return timelineV2StorageDir;
  }

  public void
```
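The base class under review exposes `getTimelineVersion()` as an overridable hook (returning `DEFAULT_TIMELINE_VERSION`), which is what lets the refactor split the monolithic `TestDistributedShell` into per-timeline-version suites. A minimal, self-contained sketch of that pattern; the class names here are invented for illustration and are not taken from the PR:

```java
public class Main {
    // Stand-in for DistributedShellBaseTest's overridable hook.
    static class BaseTest {
        protected float getTimelineVersion() {
            return 1.0f; // mirrors DEFAULT_TIMELINE_VERSION
        }
    }

    // A version-specific suite only overrides the hook; shared setup
    // in the base class reads whatever value the subclass provides.
    static class TimelineV2Test extends BaseTest {
        @Override
        protected float getTimelineVersion() {
            return 2.0f;
        }
    }

    public static void main(String[] args) {
        System.out.println(new BaseTest().getTimelineVersion());
        System.out.println(new TimelineV2Test().getTimelineVersion());
    }
}
```

The design keeps version-dependent configuration (timeline v1 vs v2) out of the shared setup code: the base class calls the hook wherever the version matters, and each concrete test class states its version exactly once.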
[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark
[ https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=530767=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530767 ]

ASF GitHub Bot logged work on HADOOP-17414:
-------------------------------------------

Author: ASF GitHub Bot
Created on: 04/Jan/21 17:13
Start Date: 04/Jan/21 17:13
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-754100106 :confetti_ball: **+1 overall**
[GitHub] [hadoop] hadoop-yetus commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark
hadoop-yetus commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-754100106

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 1m 23s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 7 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 32s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 33s | | trunk passed |
| +1 :green_heart: | compile | 27m 45s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 20m 36s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 50s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 23s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 2s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 2m 6s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 1m 13s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 30s | | trunk passed |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 29s | | the patch passed |
| +1 :green_heart: | compile | 20m 57s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 20m 57s | | the patch passed |
| +1 :green_heart: | compile | 18m 15s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 18m 15s | | the patch passed |
| +1 :green_heart: | checkstyle | 2m 47s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 15s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 17m 19s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 25s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 2m 4s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 3m 50s | | the patch passed |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 9m 58s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 1m 27s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | 204m 52s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2530 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
| uname | Linux 7869e92f3a79 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2825d060cf9 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/7/testReport/ |
| Max. process+thread count | 1440 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/7/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT |
[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally
[ https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=530757=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530757 ]

ASF GitHub Bot logged work on HADOOP-16202:
-------------------------------------------

Author: ASF GitHub Bot
Created on: 04/Jan/21 16:41
Start Date: 04/Jan/21 16:41
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #2584: URL: https://github.com/apache/hadoop/pull/2584#issuecomment-754082437 :broken_heart: **-1 overall**
[GitHub] [hadoop] hadoop-yetus commented on pull request #2584: HADOOP-16202. Enhance openFile()
hadoop-yetus commented on pull request #2584: URL: https://github.com/apache/hadoop/pull/2584#issuecomment-754082437

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 10 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 5m 15s | | Maven dependency ordering for branch |
| -1 :x: | mvninstall | 0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| -1 :x: | compile | 18m 15s | [/branch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/branch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | root in trunk failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | compile | 0m 28s | [/branch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | root in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| +1 :green_heart: | checkstyle | 7m 9s | | trunk passed |
| -1 :x: | mvnsite | 0m 36s | [/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. |
| -1 :x: | mvnsite | 0m 21s | [/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/branch-mvnsite-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt) | hadoop-yarn-common in trunk failed. |
| +1 :green_heart: | shadedclient | 26m 40s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 3m 36s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 4m 16s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 44s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 :x: | findbugs | 0m 20s | [/branch-findbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. |
| -1 :x: | findbugs | 0m 19s | [/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt) | hadoop-yarn-common in trunk failed. |
| | | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 51s | | Maven dependency ordering for patch |
| -1 :x: | mvninstall | 0m 12s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. |
| -1 :x: | mvninstall | 0m 12s | [/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt) | hadoop-yarn-common in the patch failed. |
| -1 :x: | mvninstall | 0m 22s | [/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-client-core in the patch failed. |
| -1 :x: | mvninstall | 0m 19s | [/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/1/artifact/out/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-client-app in the patch failed. |
| -1 :x: |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2586: YARN-10558. Fix failure of TestDistributedShell#testDSShellWithOpportunisticContainers.
hadoop-yetus commented on pull request #2586: URL: https://github.com/apache/hadoop/pull/2586#issuecomment-754076761

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 27m 12s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 6s | | trunk passed |
| +1 :green_heart: | compile | 0m 29s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 0m 28s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 26s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 47s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 46s | | trunk passed |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 27s | | the patch passed |
| +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 0m 21s | | the patch passed |
| +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 0m 20s | | the patch passed |
| +1 :green_heart: | checkstyle | 0m 14s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 21s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 3s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 0m 45s | | the patch passed |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 29m 36s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2586/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt) | hadoop-yarn-applications-distributedshell in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 131m 11s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.yarn.applications.distributedshell.TestDistributedShell |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2586/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2586 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 50e9a3edf83b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2825d060cf9 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2586/1/testReport/ |
| Max. process+thread count | 888 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell |
| Console output |
[GitHub] [hadoop] amahussein commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell
amahussein commented on a change in pull request #2581: URL: https://github.com/apache/hadoop/pull/2581#discussion_r551400951

## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/DistributedShellBaseTest.java

## @@ -0,0 +1,496 @@
[GitHub] [hadoop] amahussein commented on a change in pull request #2581: YARN-10553. Refactor TestDistributedShell
amahussein commented on a change in pull request #2581: URL: https://github.com/apache/hadoop/pull/2581#discussion_r551400242

## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/DistributedShellBaseTest.java

## @@ -0,0 +1,496 @@
[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=530711=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530711 ] ASF GitHub Bot logged work on HADOOP-13327: --- Author: ASF GitHub Bot Created on: 04/Jan/21 14:46 Start Date: 04/Jan/21 14:46 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2587: URL: https://github.com/apache/hadoop/pull/2587#issuecomment-754016813 * does not address final comments in that review * no ITest runs (I don't think it needs it; will look at again) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 530711) Time Spent: 6h 20m (was: 6h 10m) > Add OutputStream + Syncable to the Filesystem Specification > --- > > Key: HADOOP-13327 > URL: https://issues.apache.org/jira/browse/HADOOP-13327 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, > HADOOP-13327-branch-2-001.patch > > Time Spent: 6h 20m > Remaining Estimate: 0h > > Write down what a Filesystem output stream should do. While core the API is > defined in Java, that doesn't say what's expected about visibility, > durability, etc —and Hadoop Syncable interface is entirely ours to define. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2587: HADOOP-13327 Output Stream Specification.
steveloughran commented on pull request #2587: URL: https://github.com/apache/hadoop/pull/2587#issuecomment-754016813 * does not address final comments in that review * no ITest runs (I don't think it needs it; will look at again)
[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=530691=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530691 ] ASF GitHub Bot logged work on HADOOP-13327: --- Author: ASF GitHub Bot Created on: 04/Jan/21 14:26 Start Date: 04/Jan/21 14:26 Worklog Time Spent: 10m Work Description: steveloughran closed pull request #2102: URL: https://github.com/apache/hadoop/pull/2102 Issue Time Tracking --- Worklog Id: (was: 530691) Time Spent: 6h 10m (was: 6h)
[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=530690=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530690 ] ASF GitHub Bot logged work on HADOOP-13327: --- Author: ASF GitHub Bot Created on: 04/Jan/21 14:26 Start Date: 04/Jan/21 14:26 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2102: URL: https://github.com/apache/hadoop/pull/2102#issuecomment-754005159 Closing as #2587 is a rebased successor, one where the use of the new `BufferedIOStatisticsOutputStream` wrapper passes syncable through. With that change raw local FS does now support Syncable *correctly* Issue Time Tracking --- Worklog Id: (was: 530690) Time Spent: 6h (was: 5h 50m)
[GitHub] [hadoop] steveloughran closed pull request #2102: HADOOP-13327. Specify Output Stream and Syncable
steveloughran closed pull request #2102: URL: https://github.com/apache/hadoop/pull/2102
[GitHub] [hadoop] steveloughran commented on pull request #2102: HADOOP-13327. Specify Output Stream and Syncable
steveloughran commented on pull request #2102: URL: https://github.com/apache/hadoop/pull/2102#issuecomment-754005159 Closing as #2587 is a rebased successor, one where the use of the new `BufferedIOStatisticsOutputStream` wrapper passes syncable through. With that change raw local FS does now support Syncable *correctly*
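[Editor's note] The pass-through behaviour described above can be sketched in miniature. The `Syncable` and `StreamCapabilities` interfaces below are simplified stand-ins for the Hadoop ones (the real interfaces live in `org.apache.hadoop.fs`), and `PassThroughBufferedStream` is a hypothetical analogue of `BufferedIOStatisticsOutputStream`: unlike `java.io.BufferedOutputStream`, it forwards `hsync()` and capability probes to the wrapped stream instead of hiding them.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Simplified stand-ins for Hadoop's Syncable and StreamCapabilities.
interface Syncable {
  void hsync() throws IOException;
}

interface StreamCapabilities {
  boolean hasCapability(String capability);
}

// A wrapper that, like the PR's BufferedIOStatisticsOutputStream,
// passes durability calls and capability probes through to the inner
// stream rather than swallowing them.
class PassThroughBufferedStream extends OutputStream
    implements Syncable, StreamCapabilities {
  private final OutputStream inner;

  PassThroughBufferedStream(OutputStream inner) {
    this.inner = inner;
  }

  @Override
  public void write(int b) throws IOException {
    inner.write(b);
  }

  @Override
  public void hsync() throws IOException {
    if (inner instanceof Syncable) {
      ((Syncable) inner).hsync();   // forward the durability call
    } else {
      inner.flush();                // best effort: flush only
    }
  }

  @Override
  public boolean hasCapability(String capability) {
    // a capability is only claimed if the inner stream claims it
    return inner instanceof StreamCapabilities
        && ((StreamCapabilities) inner).hasCapability(capability);
  }
}

public class SyncablePassThroughDemo {
  // Inner stream that counts hsync() calls, standing in for the raw
  // local file stream.
  static class CountingSyncStream extends ByteArrayOutputStream
      implements Syncable, StreamCapabilities {
    int hsyncCalls;
    public void hsync() { hsyncCalls++; }
    public boolean hasCapability(String c) { return "hsync".equals(c); }
  }

  static int demo() throws IOException {
    CountingSyncStream raw = new CountingSyncStream();
    PassThroughBufferedStream out = new PassThroughBufferedStream(raw);
    out.write('x');
    if (out.hasCapability("hsync")) { // probe before relying on durability
      out.hsync();
    }
    return raw.hsyncCalls;
  }

  public static void main(String[] args) throws IOException {
    System.out.println("hsync forwarded " + demo() + " time(s)");
  }
}
```

The capability probe before calling `hsync()` is the pattern the specification work encourages: callers should check `StreamCapabilities` rather than assume every stream is durable.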
[jira] [Work logged] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?focusedWorklogId=530688=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530688 ] ASF GitHub Bot logged work on HADOOP-13327: --- Author: ASF GitHub Bot Created on: 04/Jan/21 14:24 Start Date: 04/Jan/21 14:24 Worklog Time Spent: 10m Work Description: steveloughran opened a new pull request #2587: URL: https://github.com/apache/hadoop/pull/2587 Specification of OutputStream and Syncable with * RawLocalFileSystem to implement Syncable * Consistent use of StreamCapabilities everywhere This is a rebase of #2102. Because the RawLocalOutputStream is now wrapped by `BufferedIOStatisticsOutputStream`, which does passthrough of stream capabilities and the Syncable API, the tests which were failing there should now work Issue Time Tracking --- Worklog Id: (was: 530688) Time Spent: 5h 50m (was: 5h 40m)
[GitHub] [hadoop] steveloughran opened a new pull request #2587: HADOOP-13327 Output Stream Specification.
steveloughran opened a new pull request #2587: URL: https://github.com/apache/hadoop/pull/2587 Specification of OutputStream and Syncable with * RawLocalFileSystem to implement Syncable * Consistent use of StreamCapabilities everywhere This is a rebase of #2102. Because the RawLocalOutputStream is now wrapped by `BufferedIOStatisticsOutputStream`, which does passthrough of stream capabilities and the Syncable API, the tests which were failing there should now work
[GitHub] [hadoop] iwasakims opened a new pull request #2586: YARN-10558. Fix failure of TestDistributedShell#testDSShellWithOpportunisticContainers.
iwasakims opened a new pull request #2586: URL: https://github.com/apache/hadoop/pull/2586 https://issues.apache.org/jira/browse/YARN-10558 The TestDistributedShell#testDSShellWithOpportunisticContainers always fails due to insufficient test configuration.
[GitHub] [hadoop] touchida opened a new pull request #2585: HDFS-15759. EC: Verify EC reconstruction correctness on DataNode
touchida opened a new pull request #2585: URL: https://github.com/apache/hadoop/pull/2585 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally
[ https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=530679=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530679 ] ASF GitHub Bot logged work on HADOOP-16202: --- Author: ASF GitHub Bot Created on: 04/Jan/21 14:09 Start Date: 04/Jan/21 14:09 Worklog Time Spent: 10m Work Description: steveloughran opened a new pull request #2584: URL: https://github.com/apache/hadoop/pull/2584 #2168 rebased to trunk 1. Does not (yet) address Thomas's comments 2. not retested Issue Time Tracking --- Worklog Id: (was: 530679) Time Spent: 5h 20m (was: 5h 10m) > Stabilize openFile() and adopt internally > - > > Key: HADOOP-16202 > URL: https://issues.apache.org/jira/browse/HADOOP-16202 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs, fs/s3, tools/distcp >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 5h 20m > Remaining Estimate: 0h > > The {{openFile()}} builder API lets us add new options when reading a file > Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows > the length of the file to be declared. If set, *no check for the existence of > the file is issued when opening the file* > Also: withFileStatus() to take any FileStatus implementation, rather than > only S3AFileStatus -and not check that the path matches the path being > opened. Needed to support viewFS-style wrapping and mounting.
> and Adopt where appropriate to stop clusters with S3A reads switched to > random IO from killing download/localization > * fs shell copyToLocal > * distcp > * IOUtils.copy
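[Editor's note] The option mechanism the JIRA describes can be sketched with a toy builder. This is a hypothetical miniature, not the Hadoop `openFile()` API itself: option keys are plain strings, and when the caller declares the length via `fs.s3a.open.option.length` (the key named in the JIRA), the simulated store skips its existence probe — the HEAD request an S3 open would otherwise issue.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the openFile() builder pattern: a declared length lets
// the store skip its existence check.
public class OpenFileDemo {
  static final String LEN_OPT = "fs.s3a.open.option.length"; // from the JIRA

  static class OpenFileBuilder {
    private final Map<String, String> options = new HashMap<>();
    static int headRequests;  // counts simulated existence checks

    OpenFileBuilder opt(String key, String value) {
      options.put(key, value);
      return this;
    }

    // "opens" the file, returning the length it will assume
    long build() {
      String len = options.get(LEN_OPT);
      if (len != null) {
        // length declared up front: no existence check issued
        return Long.parseLong(len);
      }
      headRequests++;           // simulate the HEAD request
      return -1L;               // length unknown until the probe returns
    }
  }

  public static void main(String[] args) {
    OpenFileBuilder b = new OpenFileBuilder()
        .opt(LEN_OPT, "1048576"); // caller already knows the file size
    System.out.println("declared length = " + b.build()
        + ", HEAD requests = " + OpenFileBuilder.headRequests);
  }
}
```

The point of the real API is the same as this sketch: callers that already hold a listing entry (and so know the length) avoid a round trip per file opened, which matters for distcp and shell copies against S3.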
[GitHub] [hadoop] steveloughran opened a new pull request #2584: HADOOP-16202. Enhance openFile()
steveloughran opened a new pull request #2584: URL: https://github.com/apache/hadoop/pull/2584 #2168 rebased to trunk 1. Does not (yet) address Thomas's comments 2. not retested
[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally
[ https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=530678=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530678 ] ASF GitHub Bot logged work on HADOOP-16202: --- Author: ASF GitHub Bot Created on: 04/Jan/21 14:08 Start Date: 04/Jan/21 14:08 Worklog Time Spent: 10m Work Description: steveloughran closed pull request #2168: URL: https://github.com/apache/hadoop/pull/2168 Issue Time Tracking --- Worklog Id: (was: 530678) Time Spent: 5h 10m (was: 5h)
[GitHub] [hadoop] steveloughran closed pull request #2168: HADOOP-16202. Enhance/Stabilize openFile()
steveloughran closed pull request #2168: URL: https://github.com/apache/hadoop/pull/2168
[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally
[ https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=530677=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530677 ] ASF GitHub Bot logged work on HADOOP-16202: --- Author: ASF GitHub Bot Created on: 04/Jan/21 14:03 Start Date: 04/Jan/21 14:03 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2168: URL: https://github.com/apache/hadoop/pull/2168#issuecomment-753992078 going to close this PR and open a new one as I've rebased to trunk; will merge everything back into a single patch Issue Time Tracking --- Worklog Id: (was: 530677) Time Spent: 5h (was: 4h 50m)
[GitHub] [hadoop] steveloughran commented on pull request #2168: HADOOP-16202. Enhance/Stabilize openFile()
steveloughran commented on pull request #2168: URL: https://github.com/apache/hadoop/pull/2168#issuecomment-753992078 going to close this PR and open a new one as I've rebased to trunk; will merge everything back into a single patch
[GitHub] [hadoop] steveloughran commented on pull request #2548: DRAFT PR: Implementing ListStatusRemoteIterator
steveloughran commented on pull request #2548: URL: https://github.com/apache/hadoop/pull/2548#issuecomment-753991353 I need you to use `org.apache.hadoop.util.functional.RemoteIterators` as the wrapper iterators. These are only in trunk but will be backported with the rest of HADOOP-16380 after a few days of stabilisation. These iterators propagate the IOStatisticsSource interface, so when the innermost iterator collects cost/count of list calls, the stats will be visible to and collectable by callers.
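[Editor's note] The stats-propagation property described in the comment above can be sketched as follows. `StatsSource` here is a simplified stand-in for Hadoop's `IOStatisticsSource`, and `MappingIterator` a hypothetical analogue of the `RemoteIterators` wrappers (the real ones also handle the `IOException`-throwing `RemoteIterator` contract): the wrapper implements the stats interface itself and delegates to the innermost iterator, so the cost of the underlying list calls stays visible to callers.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.function.Function;

// Simplified stand-in for Hadoop's IOStatisticsSource.
interface StatsSource {
  String getIOStatistics();
}

public class RemoteIteratorsDemo {
  // A mapping wrapper that keeps the inner iterator's statistics
  // visible, as the RemoteIterators wrappers do for IOStatisticsSource.
  static class MappingIterator<S, T> implements Iterator<T>, StatsSource {
    private final Iterator<S> inner;
    private final Function<S, T> mapper;

    MappingIterator(Iterator<S> inner, Function<S, T> mapper) {
      this.inner = inner;
      this.mapper = mapper;
    }

    public boolean hasNext() { return inner.hasNext(); }
    public T next() { return mapper.apply(inner.next()); }

    // delegate stats to the innermost iterator if it exposes any
    public String getIOStatistics() {
      return inner instanceof StatsSource
          ? ((StatsSource) inner).getIOStatistics()
          : null;
    }
  }

  // Inner iterator that "collects" listing statistics as it is consumed.
  static class ListingIterator implements Iterator<String>, StatsSource {
    private final Iterator<String> names =
        Arrays.asList("a", "b").iterator();
    private int listCalls;

    public boolean hasNext() { return names.hasNext(); }
    public String next() { listCalls++; return names.next(); }
    public String getIOStatistics() { return "list_calls=" + listCalls; }
  }

  static String demo() {
    MappingIterator<String, Integer> it =
        new MappingIterator<>(new ListingIterator(), String::length);
    while (it.hasNext()) {
      it.next();
    }
    return it.getIOStatistics(); // stats collected by the inner iterator
  }

  public static void main(String[] args) {
    System.out.println(demo());
  }
}
```

A hand-rolled wrapper that does not delegate this way would silently drop the statistics, which is exactly what the review comment asks the PR author to avoid.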
[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark
[ https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=530667=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530667 ] ASF GitHub Bot logged work on HADOOP-17414: --- Author: ASF GitHub Bot Created on: 04/Jan/21 13:47 Start Date: 04/Jan/21 13:47 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-753983870 rebased to trunk. _not yet retested/reviewed_ Issue Time Tracking --- Worklog Id: (was: 530667) Time Spent: 3h 10m (was: 3h) > Magic committer files don't have the count of bytes written collected by spark > -- > > Key: HADOOP-17414 > URL: https://issues.apache.org/jira/browse/HADOOP-17414 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 3h 10m > Remaining Estimate: 0h > > The spark statistics tracking doesn't correctly assess the size of the > uploaded files as it only calls getFileStatus on the zero byte objects -not > the yet-to-manifest files. Which, given they don't exist yet, isn't easy to > do. > Solution: > * Add getXAttr and listXAttr API calls to S3AFileSystem > * Return all S3 object headers as XAttr attributes prefixed "header." That's > custom and standard (e.g header.Content-Length). > The setXAttr call isn't implemented, so for correctness the FS doesn't > declare its support for the API in hasPathCapability(). > The magic commit file write sets the custom header > set the length of the data final data in the header > x-hadoop-s3a-magic-data-length in the marker file.
> A matching patch in Spark will look for the XAttr > "header.x-hadoop-s3a-magic-data-length" when the file > being probed for output data is zero byte long. > As a result, the job tracking statistics will report the > bytes written but yet to be manifest.
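[Editor's note] The probe described in the JIRA can be sketched like this. The XAttr name `header.x-hadoop-s3a-magic-data-length` is from the JIRA itself; the map-backed store and the `bytesWritten` helper are hypothetical stand-ins for `S3AFileSystem.getXAttr()` and the Spark-side logic: when the marker file is zero bytes long, the pending data length is read from the exposed object header instead.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of the zero-byte-marker probe the JIRA describes.
public class MagicLengthDemo {
  static final String MAGIC_LEN_XATTR =
      "header.x-hadoop-s3a-magic-data-length"; // name from the JIRA

  // xattrs is a plain map standing in for S3AFileSystem.getXAttr().
  static long bytesWritten(long fileLength, Map<String, byte[]> xattrs) {
    if (fileLength > 0) {
      return fileLength;          // normal file: trust its length
    }
    byte[] v = xattrs.get(MAGIC_LEN_XATTR);
    return v == null
        ? 0L                      // no marker header: really zero bytes
        : Long.parseLong(new String(v, StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    Map<String, byte[]> xattrs = new HashMap<>();
    // the magic file write sets this header with the final data length
    xattrs.put(MAGIC_LEN_XATTR,
        "4096".getBytes(StandardCharsets.UTF_8));
    System.out.println("bytes written = " + bytesWritten(0L, xattrs));
  }
}
```

This is why job statistics can report bytes that are written but not yet manifest: the data itself is still a pending multipart upload, and only the marker file plus its header are visible.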
[GitHub] [hadoop] steveloughran commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark
steveloughran commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-753983870 rebased to trunk. _not yet retested/reviewed_
[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api
[ https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=530664=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530664 ] ASF GitHub Bot logged work on HADOOP-16080: --- Author: ASF GitHub Bot Created on: 04/Jan/21 13:35 Start Date: 04/Jan/21 13:35 Worklog Time Spent: 10m Work Description: steveloughran edited a comment on pull request #2575: URL: https://github.com/apache/hadoop/pull/2575#issuecomment-753978149 #2522 should have gone into trunk first _there should be nothing in an older branch which is not in trunk_ Issue Time Tracking --- Worklog Id: (was: 530664) Time Spent: 5.5h (was: 5h 20m) > hadoop-aws does not work with hadoop-client-api > --- > > Key: HADOOP-16080 > URL: https://issues.apache.org/jira/browse/HADOOP-16080 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.0, 3.1.1 >Reporter: Keith Turner >Assignee: Chao Sun >Priority: Major > Labels: pull-request-available > Fix For: 3.2.2, 3.3.1 > > Time Spent: 5.5h > Remaining Estimate: 0h > > I attempted to use Accumulo and S3a with the following jars on the classpath. > * hadoop-client-api-3.1.1.jar > * hadoop-client-runtime-3.1.1.jar > * hadoop-aws-3.1.1.jar > This failed with the following exception.
> {noformat} > Exception in thread "init" java.lang.NoSuchMethodError: > org.apache.hadoop.util.SemaphoredDelegatingExecutor.(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V > at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149) > at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108) > at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413) > at > org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184) > at > org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479) > at > org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487) > at > org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370) > at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348) > at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967) > at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129) > at java.lang.Thread.run(Thread.java:748) > {noformat} > The problem is that {{S3AFileSystem.create()}} looks for > {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}} > which does not exist in hadoop-client-api-3.1.1.jar. What does exist is > {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}. > To work around this issue I created a version of hadoop-aws-3.1.1.jar that > relocated references to Guava.
[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api
[ https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=530663&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530663 ] ASF GitHub Bot logged work on HADOOP-16080: --- Author: ASF GitHub Bot Created on: 04/Jan/21 13:35 Start Date: 04/Jan/21 13:35 Worklog Time Spent: 10m
Work Description: steveloughran commented on pull request #2575: URL: https://github.com/apache/hadoop/pull/2575#issuecomment-753978149
#2522 should have gone into trunk first * there should be nothing in an older branch which is not in trunk *
Issue Time Tracking --- Worklog Id: (was: 530663) Time Spent: 5h 20m (was: 5h 10m)
> hadoop-aws does not work with hadoop-client-api
[GitHub] [hadoop] steveloughran edited a comment on pull request #2575: HADOOP-16080. hadoop-aws does not work with hadoop-client-api
steveloughran edited a comment on pull request #2575: URL: https://github.com/apache/hadoop/pull/2575#issuecomment-753978149 #2522 should have gone into trunk first _there should be nothing in an older branch which is not in trunk_
[GitHub] [hadoop] steveloughran commented on pull request #2575: HADOOP-16080. hadoop-aws does not work with hadoop-client-api
steveloughran commented on pull request #2575: URL: https://github.com/apache/hadoop/pull/2575#issuecomment-753978149 #2522 should have gone into trunk first * there should be nothing in an older branch which is not in trunk *
[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
[ https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=530660&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-530660 ] ASF GitHub Bot logged work on HADOOP-17338: --- Author: ASF GitHub Bot Created on: 04/Jan/21 13:32 Start Date: 04/Jan/21 13:32 Worklog Time Spent: 10m
Work Description: steveloughran commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-753976842
> Great, thanks so much @steveloughran !
thank you for finding/fixing an obscure bug. Incidentally, if you are hanging on to streams for a long time, the `unbuffer()` method will release the stream and push out the current statistics to the FileSystem stats; this is how Impala manages long-lived streams
Issue Time Tracking --- Worklog Id: (was: 530660) Time Spent: 4h (was: 3h 50m)
> Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0
> Reporter: Yongjun Zhang
> Assignee: Yongjun Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.1
>
> Attachments: HADOOP-17338.001.patch
>
> Time Spent: 4h
> Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using S3AInputStream:
> 1.
> {code:java}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 156463674; received: 150001089)
> at com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 15 more
> {code}
> 2.
> {code:java}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
> at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
> at
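Both failures above hit mid-read on a long-lived connection, and the usual mitigation is to re-open the object at the current offset and retry the read. A minimal sketch of that recovery shape, under stated assumptions: the `Opener` interface, the toy byte-array source, and the single retry are hypothetical simplifications for illustration, not the actual S3AInputStream code.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class RetryingReader {
    // Hypothetical factory: yields a fresh stream positioned at `offset`
    // (for an object store this would be a new ranged GET request).
    interface Opener {
        InputStream open(long offset) throws IOException;
    }

    // Read one byte at `pos`, re-opening once if the connection dies mid-read.
    static int readWithRetry(Opener opener, long pos) throws IOException {
        try (InputStream in = opener.open(pos)) {
            return in.read();
        } catch (IOException e) {
            // The position is still known, so a fresh connection can resume
            // exactly where the aborted response body left off.
            try (InputStream in = opener.open(pos)) {
                return in.read();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {10, 20, 30};
        Opener opener = off ->
                new ByteArrayInputStream(data, (int) off, data.length - (int) off);
        System.out.println(readWithRetry(opener, 1)); // prints 20
    }
}
```

Real implementations bound the retries and distinguish recoverable connection failures from permanent errors; this sketch only shows the reposition-and-reopen idea.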
[GitHub] [hadoop] steveloughran commented on pull request #2497: HADOOP-17338. Intermittent S3AInputStream failures: Premature end of …
steveloughran commented on pull request #2497: URL: https://github.com/apache/hadoop/pull/2497#issuecomment-753976842 > Great, thanks so much @steveloughran ! thank you for finding/fixing an obscure bug. Incidentally, if you are hanging on to streams for a long time, the `unbuffer()` method will release the stream and push out the current statistics to the FileSystem stats; this is how Impala manages long-lived streams
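The `unbuffer()` pattern mentioned above lets a client keep a stream logically open for hours without pinning buffers or a network connection. A toy illustration of the contract (the real interface is `org.apache.hadoop.fs.CanUnbuffer` on `FSDataInputStream`; the stand-in class here only shows the release-and-reacquire life cycle, not Hadoop's implementation):

```java
public class UnbufferDemo {
    // Stand-in for a long-lived input stream that caches a read buffer.
    static class LongLivedStream {
        private byte[] buffer = new byte[8 * 1024];

        int read() {
            if (buffer == null) {
                buffer = new byte[8 * 1024]; // lazily re-acquire after unbuffer()
            }
            return 0;
        }

        // Drop held resources (buffers, and for S3A the HTTP connection)
        // while keeping the stream open for a later read().
        void unbuffer() {
            buffer = null;
        }

        boolean holdsResources() {
            return buffer != null;
        }
    }

    public static void main(String[] args) {
        LongLivedStream s = new LongLivedStream();
        s.read();
        s.unbuffer();                          // idle period: nothing pinned
        System.out.println(s.holdsResources()); // prints false
        s.read();                               // resumes transparently
        System.out.println(s.holdsResources()); // prints true
    }
}
```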
[GitHub] [hadoop] hadoop-yetus commented on pull request #2578: [HDFS-15754] Add DataNode packet metrics
hadoop-yetus commented on pull request #2578: URL: https://github.com/apache/hadoop/pull/2578#issuecomment-753954286 :confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 47s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 13m 39s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 26m 51s | | trunk passed |
| +1 :green_heart: | compile | 24m 25s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 19m 57s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 44s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 7s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 24s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 2m 10s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 3m 20s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 3m 17s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 38s | | trunk passed |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 10s | | the patch passed |
| +1 :green_heart: | compile | 20m 50s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 20m 50s | | the patch passed |
| +1 :green_heart: | compile | 18m 32s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 18m 32s | | the patch passed |
| -0 :warning: | checkstyle | 2m 39s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/3/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 4 new + 124 unchanged - 0 fixed = 128 total (was 124) |
| +1 :green_heart: | mvnsite | 3m 4s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 25s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 2m 6s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 3m 17s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 5m 52s | | the patch passed |
|||| _ Other Tests _ ||
| +1 :green_heart: | unit | 9m 58s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 102m 1s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. |
| | | 311m 16s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2578 |
| Optional Tests | dupname asflicense mvnsite markdownlint compile javac javadoc mvninstall unit shadedclient findbugs checkstyle |
| uname | Linux 63beae1e56dc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2825d060cf9 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/3/testReport/ |
| Max. process+thread count | 4667 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2578/3/console |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2583: HDFS-15549. Improve DISK/ARCHIVE movement if they are on same filesystem
hadoop-yetus commented on pull request #2583: URL: https://github.com/apache/hadoop/pull/2583#issuecomment-753874888 :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 47m 34s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for branch |
| -1 :x: | mvninstall | 0m 23s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| -1 :x: | compile | 0m 25s | [/branch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | root in trunk failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | compile | 0m 22s | [/branch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | root in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| -0 :warning: | checkstyle | 0m 21s | [/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/buildtool-branch-checkstyle-root.txt) | The patch fails to run checkstyle in root |
| -1 :x: | mvnsite | 0m 24s | [/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. |
| -1 :x: | mvnsite | 4m 15s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
| -1 :x: | shadedclient | 11m 37s | | branch has errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 23s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javadoc | 0m 29s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. |
| -1 :x: | javadoc | 0m 24s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-common in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| -1 :x: | javadoc | 0m 24s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. |
| +0 :ok: | spotbugs | 14m 11s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 :x: | findbugs | 0m 30s | [/branch-findbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. |
| -1 :x: | findbugs | 0m 22s | [/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2583/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: |
[GitHub] [hadoop] ayushtkn commented on pull request #2377: HDFS-15624. fix the function of setting quota by storage type
ayushtkn commented on pull request #2377: URL: https://github.com/apache/hadoop/pull/2377#issuecomment-753872315 Thanx @huangtianhua for the work here, Sorry I couldn't revert back to your emails & pings. @brahmareddybattula has objections on the jira with the approach itself. Quoting him from the jira >I dn't think bumping the namelayout is best solution, need to check other way. ( may be like checking the client version during the upgrade.) There is no code change post HDFS-15660? It was asserted the generic solution shall solve this problem or will change something So, We might need changes here post HDFS-15660. should wait for him, unless he is convinced.