Re: [PR] Hadoop 18993 - target 3.4 [hadoop]
tmnd1991 commented on PR #6529: URL: https://github.com/apache/hadoop/pull/6529#issuecomment-1928923862

```
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR]   ITestS3AFileSystemStatistic.testBytesReadWithStream:72->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89 Mismatch in number of FS bytes read by InputStreams expected:<2048> but was:<19632008>
[ERROR] Errors:
[ERROR]   ITestS3APrefetchingCacheFiles.testCacheFileExistence:111 » AWSRedirect Receive...
[ERROR]   ITestAWSStatisticCollection.testCommonCrawlStatistics:74 » AccessDenied s3a://...
[ERROR]   ITestAWSStatisticCollection.testLandsatStatistics:56 » AccessDenied s3a://land...
[INFO]
[ERROR] Tests run: 343, Failures: 1, Errors: 3, Skipped: 53
```

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HDFS-17372. CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high priority command blocked by low priority command [hadoop]
hfutatzhanghb commented on PR #6530: URL: https://github.com/apache/hadoop/pull/6530#issuecomment-1928872491

@Hexiaoqiao @zhangshuyan0 @tasanuma @tomscut Hi, could you please take a look at this problem? If needed, I will post a unit test soon.
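The subject line's claim rests on a property of `java.util.concurrent.LinkedBlockingDeque`: unlike a plain `LinkedBlockingQueue`, a deque lets urgent commands be enqueued at the head so they are not stuck behind queued low-priority work. A minimal sketch of the idea (the class and method names here are illustrative, not taken from the patch):

```java
import java.util.concurrent.LinkedBlockingDeque;

public class CommandQueueSketch {
    // Hypothetical command wrapper; not the DataNode's real command type.
    record Command(String name, boolean highPriority) {}

    private final LinkedBlockingDeque<Command> queue = new LinkedBlockingDeque<>();

    void enqueue(Command c) throws InterruptedException {
        if (c.highPriority()) {
            queue.putFirst(c);   // jumps ahead of queued low-priority commands
        } else {
            queue.putLast(c);    // normal FIFO ordering
        }
    }

    Command take() throws InterruptedException {
        return queue.takeFirst();
    }

    public static void main(String[] args) throws InterruptedException {
        CommandQueueSketch q = new CommandQueueSketch();
        q.enqueue(new Command("low-1", false));
        q.enqueue(new Command("low-2", false));
        q.enqueue(new Command("urgent", true));
        System.out.println(q.take().name()); // the high-priority command is served first
    }
}
```

With a single-ended `LinkedBlockingQueue`, the "urgent" command above would only be processed after both low-priority commands.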
[jira] [Updated] (HADOOP-19058) [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
[ https://issues.apache.org/jira/browse/HADOOP-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T updated HADOOP-19058:
-------------------------------
Description:

Most of the UTs failed with the below exception:

```
Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @d13f7c [in thread "Time-limited test"]
```

> [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
> -----------------------------------------------------
>
> Key: HADOOP-19058
> URL: https://issues.apache.org/jira/browse/HADOOP-19058
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Bilwa S T
> Assignee: Bilwa S T
> Priority: Major
> Labels: pull-request-available
>
> Most of the UTs failed with the same exception quoted above.

--
This message was sent by Atlassian Jira (v8.20.10#820010)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
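The `InaccessibleObjectException` above is the standard JDK 9+ strong-encapsulation failure: test code reflectively touches `java.lang.ClassLoader`, which `java.base` no longer opens to the unnamed module. One common remedy, shown here only as a sketch since the actual patch may solve it differently, is to pass `--add-opens` to the forked test JVM via the Maven Surefire `argLine`:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Opens java.lang to reflective access from test code on JDK 17. -->
    <argLine>--add-opens java.base/java.lang=ALL-UNNAMED</argLine>
  </configuration>
</plugin>
```

Note that the flag must reach the forked test JVM (hence `argLine`); setting it only in `MAVEN_OPTS` affects the Maven process, not the Surefire fork.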
[jira] [Updated] (HADOOP-19058) [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
[ https://issues.apache.org/jira/browse/HADOOP-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-19058:
------------------------------------
Labels: pull-request-available (was: )

> [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
> -----------------------------------------------------
>
> Key: HADOOP-19058
> URL: https://issues.apache.org/jira/browse/HADOOP-19058
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Bilwa S T
> Assignee: Bilwa S T
> Priority: Major
> Labels: pull-request-available
[jira] [Commented] (HADOOP-19058) [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
[ https://issues.apache.org/jira/browse/HADOOP-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814619#comment-17814619 ]

ASF GitHub Bot commented on HADOOP-19058:
-----------------------------------------

BilwaST opened a new pull request, #6531: URL: https://github.com/apache/hadoop/pull/6531

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
> -----------------------------------------------------
>
> Key: HADOOP-19058
> URL: https://issues.apache.org/jira/browse/HADOOP-19058
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Bilwa S T
> Assignee: Bilwa S T
> Priority: Major
[PR] HADOOP-19058. [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn [hadoop]
BilwaST opened a new pull request, #6531: URL: https://github.com/apache/hadoop/pull/6531

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[jira] [Updated] (HADOOP-19058) [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
[ https://issues.apache.org/jira/browse/HADOOP-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T updated HADOOP-19058:
-------------------------------
Summary: [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn (was: [JDK-17] Fix UT Failures in hadoop common)

> [JDK-17] Fix UT Failures in hadoop common, hdfs, yarn
> -----------------------------------------------------
>
> Key: HADOOP-19058
> URL: https://issues.apache.org/jira/browse/HADOOP-19058
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Bilwa S T
> Assignee: Bilwa S T
> Priority: Major
[jira] [Updated] (HADOOP-19058) [JDK-17] Fix UT Failures in hadoop common
[ https://issues.apache.org/jira/browse/HADOOP-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T updated HADOOP-19058:
-------------------------------
Description: (was:

```
java.lang.NullPointerException: Cannot enter synchronized block because "this.closeLock" is null
    at java.base/java.io.FilterOutputStream.close(FilterOutputStream.java:173)
    at org.apache.hadoop.crypto.CryptoOutputStream.close(CryptoOutputStream.java:249)
    at org.apache.hadoop.crypto.TestCryptoOutputStreamClosing.lambda$testUnderlyingOutputStreamClosedWhenExceptionClosing$0(TestCryptoOutputStreamClosing.java:70)
    at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:408)
    at org.apache.hadoop.crypto.TestCryptoOutputStreamClosing.testUnderlyingOutputStreamClosedWhenExceptionClosing(TestCryptoOutputStreamClosing.java:69)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
```

)

> [JDK-17] Fix UT Failures in hadoop common
> -----------------------------------------
>
> Key: HADOOP-19058
> URL: https://issues.apache.org/jira/browse/HADOOP-19058
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Bilwa S T
> Assignee: Bilwa S T
> Priority: Major
[jira] [Updated] (HADOOP-19058) [JDK-17] Fix UT Failures in hadoop common
[ https://issues.apache.org/jira/browse/HADOOP-19058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilwa S T updated HADOOP-19058:
-------------------------------
Summary: [JDK-17] Fix UT Failures in hadoop common (was: [JDK-17] TestCryptoOutputStreamClosing#testUnderlyingOutputStreamClosedWhenExceptionClosing fails)

> [JDK-17] Fix UT Failures in hadoop common
> -----------------------------------------
>
> Key: HADOOP-19058
> URL: https://issues.apache.org/jira/browse/HADOOP-19058
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Bilwa S T
> Assignee: Bilwa S T
> Priority: Major
>
> (Description: the same NullPointerException stack trace quoted in full above,
> from FilterOutputStream.close down through the JUnit and Surefire runners.)
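The trace shows `FilterOutputStream.close()` failing because its internal `closeLock` is null: on recent JDKs, `close()` synchronizes on that field, so any instance whose constructor never ran (as happens with bytecode-generated mocks) throws a `NullPointerException`. A self-contained sketch of that failure shape, where the `Stream` class is illustrative rather than the Hadoop code and reflection stands in for a mock that skips the constructor:

```java
import java.lang.reflect.Field;

public class CloseLockSketch {
    static class Stream {
        private Object closeLock = new Object();
        private boolean closed;

        // Mirrors the structure of FilterOutputStream.close() on newer JDKs.
        public void close() {
            synchronized (closeLock) {  // NPE here if closeLock was never initialized
                if (closed) {
                    return;
                }
                closed = true;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        new Stream().close();  // constructor ran, closeLock initialized: fine

        Stream broken = new Stream();
        Field lock = Stream.class.getDeclaredField("closeLock");
        lock.setAccessible(true);
        lock.set(broken, null);  // simulate an instance whose constructor never ran
        try {
            broken.close();
        } catch (NullPointerException e) {
            System.out.println("close() failed: closeLock is null");
        }
    }
}
```

This is why such tests tend to pass on JDK 8 (no `closeLock` synchronization in `close()`) but fail on JDK 17.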
[PR] HDFS-17372. CommandProcessingThread#queue should use LinkedBlockingDeque to prevent high priority command blocked by low priority command [hadoop]
hfutatzhanghb opened a new pull request, #6530: URL: https://github.com/apache/hadoop/pull/6530

### Description of PR

Refer to HDFS-17372.
[jira] [Commented] (HADOOP-17461) Add thread-level IOStatistics Context
[ https://issues.apache.org/jira/browse/HADOOP-17461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814580#comment-17814580 ]

junyuc25 commented on HADOOP-17461:
-----------------------------------

Hi [~ste...@apache.org] and [~mehakmeetSingh], I have a quick question here. Please correct me if I'm wrong: currently it looks like the AWS SDK metrics are only collected and aggregated at the FS level: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/statistics/impl/AwsStatisticsCollector.java. Are there any plans to collect SDK metrics in thread-level IOStatistics? When Spark uses S3A to access S3 data, it would be helpful to see S3 request statistics (request counts, latency, etc.) at the Spark task level, but I'm not sure whether Hadoop supports this use case currently.

> Add thread-level IOStatistics Context
> -------------------------------------
>
> Key: HADOOP-17461
> URL: https://issues.apache.org/jira/browse/HADOOP-17461
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs, fs/azure, fs/s3
> Affects Versions: 3.3.1
> Reporter: Steve Loughran
> Assignee: Mehakmeet Singh
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.3.5
>
> Time Spent: 11h 20m
> Remaining Estimate: 0h
>
> For effective reporting of the iostatistics of individual worker threads, we need a thread-level context which IO components update.
> * this context needs to be passed into background threads performing work on behalf of a task.
> * IO components (streams, iterators, filesystems) need to update this context's statistics as they perform work
> * without double counting anything.
> I imagine a ThreadLocal IOStatisticContext which will be updated in the FileSystem API calls. This context MUST be passed into the background threads used by a task, so that IO is correctly aggregated.
> I don't want streams and listIterators to do the updating, as there is more risk of double counting. However, we need to see their statistics if we want to know things like "bytes discarded in backwards seeks". And I don't want to be updating a shared context object on every read() call.
> If all we want is store IO (HEAD, GET, DELETE, list performance etc.) then the FS is sufficient.
> If we do want the stream-specific detail, then I propose
> * caching the context in the constructor
> * updating it only in close() or unbuffer() (as we do from S3AInputStream to S3AInstrumentation)
> * excluding those we know the FS already collects.
> h3. important
> when backporting, please follow with HADOOP-18373
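The ThreadLocal design sketched in the issue can be illustrated in a few lines of plain Java. This is a deliberately simplified model of the idea (a counter map plus explicit propagation into worker threads), not Hadoop's actual `IOStatisticsContext` API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class IOStatsContextSketch {
    static final class Stats {
        final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

        void increment(String key, long n) {
            counters.computeIfAbsent(key, k -> new LongAdder()).add(n);
        }

        long value(String key) {
            LongAdder a = counters.get(key);
            return a == null ? 0 : a.sum();
        }
    }

    // Each thread sees its own context by default...
    private static final ThreadLocal<Stats> CONTEXT = ThreadLocal.withInitial(Stats::new);

    static Stats current() { return CONTEXT.get(); }

    // ...so a task must explicitly hand its context to its worker threads.
    static void setCurrent(Stats s) { CONTEXT.set(s); }

    public static void main(String[] args) throws Exception {
        Stats taskStats = current();
        Thread worker = new Thread(() -> {
            setCurrent(taskStats);  // without this, the worker's IO is lost to aggregation
            current().increment("stream_read_bytes", 2048);
        });
        worker.start();
        worker.join();
        System.out.println(taskStats.value("stream_read_bytes"));
    }
}
```

The `setCurrent` call inside the worker is exactly the "MUST be passed into the background threads" requirement from the issue: drop it and the worker increments a fresh per-thread context that nobody reports.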
Re: [PR] HDFS-17370. Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf [hadoop]
tasanuma commented on PR #6522: URL: https://github.com/apache/hadoop/pull/6522#issuecomment-1928690446

Thanks for your review, @simbadzina.
Re: [PR] HDFS-17370. Fix junit dependency for running parameterized tests in hadoop-hdfs-rbf [hadoop]
tasanuma merged PR #6522: URL: https://github.com/apache/hadoop/pull/6522
Re: [PR] HDFS-17362. RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally [hadoop]
tasanuma commented on code in PR #6510: URL: https://github.com/apache/hadoop/pull/6510#discussion_r1479162410

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RouterObserverReadProxyProvider.java:

```diff
@@ -84,7 +84,8 @@ public class RouterObserverReadProxyProvider extends AbstractNNFailoverProxyP
   public RouterObserverReadProxyProvider(Configuration conf, URI uri,
       Class xface, HAProxyFactory factory) {
-    this(conf, uri, xface, factory, new IPFailoverProxyProvider<>(conf, uri, xface, factory));
+    this(conf, uri, xface, factory,
+        new ConfiguredFailoverProxyProvider<>(conf, uri, xface, factory));
```

Review Comment:

I'd prefer to create a new class. I'd like to keep the existing class as is and create a new class named `RouterObserverReadConfiguredFailoverProxyProvider` using `ConfiguredFailoverProxyProvider` internally. I will update the PR soon.
[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
[ https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814576#comment-17814576 ]

Viraj Jasani commented on HADOOP-19066:
---------------------------------------

Indeed! Hopefully some final stabilization work.

> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> ------------------------------------------------------------------
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.5.0, 3.4.1
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK considers overriding the endpoint and enabling FIPS as mutually exclusive, we fail fast if fs.s3a.endpoint is set with FIPS support (details on HADOOP-18975).
> Now, we no longer override the SDK endpoint for the central endpoint since we enable cross-region access (details on HADOOP-19044), but we would still fail fast if the endpoint is central and FIPS is enabled.
> Changes proposed:
> * S3A to fail fast only if FIPS is enabled and a non-central endpoint is configured.
> * Tests to ensure the S3 bucket is accessible with default region us-east-2 with cross-region access (expected with central endpoint).
> * Document FIPS support with central endpoint on connecting.html.
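The proposed behaviour reduces to a simple predicate: reject the configuration only when FIPS is enabled *and* the endpoint has been overridden to something other than the central one. A hedged sketch of that logic (the constant and method names are illustrative; S3A's real validation lives in its client/config setup code):

```java
public class FipsEndpointCheckSketch {
    // The "central" S3 endpoint; an assumption for this sketch.
    static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

    static void validate(String endpoint, boolean fipsEnabled) {
        // An unset/empty endpoint, or the central one, no longer counts as an override.
        boolean endpointOverridden = endpoint != null
            && !endpoint.isEmpty()
            && !CENTRAL_ENDPOINT.equals(endpoint);
        if (fipsEnabled && endpointOverridden) {
            throw new IllegalArgumentException(
                "An endpoint cannot be set when fs.s3a.endpoint.fips is true");
        }
    }

    public static void main(String[] args) {
        validate(CENTRAL_ENDPOINT, true);               // now allowed: FIPS + central endpoint
        validate("s3.eu-west-1.amazonaws.com", false);  // allowed: FIPS off
        try {
            validate("s3.eu-west-1.amazonaws.com", true);  // still rejected
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Before this change the first call would also have failed fast, which is exactly what the JIRA relaxes.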
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814574#comment-17814574 ]

ASF GitHub Bot commented on HADOOP-19050:
-----------------------------------------

hadoop-yetus commented on PR #6507: URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1928684496

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 30s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 26s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 30m 37s | | trunk passed |
| +1 :green_heart: | compile | 16m 16s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 14m 41s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 41s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 32s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 12s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 17s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +0 :ok: | spotbugs | 0m 41s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 32m 58s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 33m 22s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 43s | | the patch passed |
| +1 :green_heart: | compile | 15m 33s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 15m 33s | | the patch passed |
| +1 :green_heart: | compile | 15m 13s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 15m 13s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 4s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/10/artifact/out/results-checkstyle-root.txt) | root: The patch generated 7 new + 2 unchanged - 0 fixed = 9 total (was 2) |
| +1 :green_heart: | mvnsite | 1m 25s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 8s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 9s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +0 :ok: | spotbugs | 0m 34s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 34m 27s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 32s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 3m 10s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 56s | | The patch does not generate ASF License warnings. |
| | | 207m 7s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6507 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle markdownlint |
| uname | Linux e6a0ac0030c9 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / dcbab4bc5973ef4886ebe3c350160fcb7f9df0f6 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]
shahrs87 commented on code in PR #6513: URL: https://github.com/apache/hadoop/pull/6513#discussion_r1479041865

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java:

```diff
@@ -1607,8 +1607,11 @@ private void transfer(final DatanodeInfo src, final DatanodeInfo[] targets,
    * it can be written to.
    * This happens when a file is appended or data streaming fails
    * It keeps on trying until a pipeline is setup
+   *
+   * Returns boolean whether pipeline was setup successfully or not.
+   * This boolean is used upstream on whether to continue creating pipeline or throw exception
    */
-  private void setupPipelineForAppendOrRecovery() throws IOException {
+  private boolean setupPipelineForAppendOrRecovery() throws IOException {
```

Review Comment: We are changing the return type of the `setupPipelineForAppendOrRecovery` and `setupPipelineInternal` methods. IIRC this is the reason: `handleBadDatanode` can silently fail to handle a bad datanode [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1700-L1706) and `setupPipelineInternal` will silently return [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1637-L1638) without bubbling up the exception.
## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java:

```diff
@@ -1618,24 +1621,33 @@ private void setupPipelineForAppendOrRecovery() throws IOException {
       LOG.warn(msg);
       lastException.set(new IOException(msg));
       streamerClosed = true;
-      return;
+      return false;
     }
-    setupPipelineInternal(nodes, storageTypes, storageIDs);
+    return setupPipelineInternal(nodes, storageTypes, storageIDs);
   }

-  protected void setupPipelineInternal(DatanodeInfo[] datanodes,
+  protected boolean setupPipelineInternal(DatanodeInfo[] datanodes,
       StorageType[] nodeStorageTypes, String[] nodeStorageIDs)
       throws IOException {
     boolean success = false;
     long newGS = 0L;
+    boolean isCreateStage = BlockConstructionStage.PIPELINE_SETUP_CREATE == stage;
     while (!success && !streamerClosed && dfsClient.clientRunning) {
       if (!handleRestartingDatanode()) {
-        return;
+        return false;
+      }
+
+      final boolean isRecovery = errorState.hasInternalError() && !isCreateStage;
+
+      // During create stage, if we remove a node (nodes.length - 1)
+      // min replication should still be satisfied.
+      if (isCreateStage && !(dfsClient.dtpReplaceDatanodeOnFailureReplication > 0 &&
```

Review Comment: Reason behind adding this check here: we are already doing this check in the catch block of the `addDatanode2ExistingPipeline` method [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1528-L1539). But when the `isAppend` flag is set to `false` and we are in the `PIPELINE_SETUP_CREATE` phase, we exit early from `addDatanode2ExistingPipeline` [here](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java#L1489-L1492). Let's say the replication factor is 3, the config property `dfs.client.block.write.replace-datanode-on-failure.min-replication` is set to 3, and there is one bad node in the pipeline. Even if we have set the config property to `ReplaceDatanodeOnFailure.CONDITION_TRUE`, the code will exit the `addDatanode2ExistingPipeline` method early since `isAppend` is set to false and the stage is `PIPELINE_SETUP_CREATE`. Assuming that there are NO available nodes in the rack, the pipeline will succeed with 2 nodes in the pipeline, which violates the config property `dfs.client.block.write.replace-datanode-on-failure.min-replication`. Having written all of this, I realized that even if there are some good nodes available in the rack, we will exit early after this patch. Should we move this check after the `handleDatanodeReplacement` method? @ritegarg

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
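The review above argues for returning a boolean instead of void so a silently swallowed setup failure can be turned into an exception by the caller. A minimal, self-contained sketch of that control flow — the method names echo the PR, but the bodies are illustrative stand-ins, not actual HDFS code:

```java
import java.io.IOException;

public class PipelineSetupSketch {

    // Stand-in for handleBadDatanode(): reports failure via a boolean
    // instead of throwing, which is what made the void version lossy.
    static boolean handleBadDatanode(int liveNodes) {
        return liveNodes > 0;
    }

    // With a void return, a false result from handleBadDatanode() would be
    // dropped on the floor; returning boolean bubbles it up to the caller.
    static boolean setupPipelineForAppendOrRecovery(int liveNodes) {
        if (!handleBadDatanode(liveNodes)) {
            return false; // silent failure now visible upstream
        }
        return true; // pipeline (re)established
    }

    // The caller decides: keep going on true, throw on false.
    static void writeBlock(int liveNodes) throws IOException {
        if (!setupPipelineForAppendOrRecovery(liveNodes)) {
            throw new IOException("Failed to set up pipeline");
        }
    }
}
```

The point of the shape is that the decision to throw moves to the layer that owns the stream, rather than being lost inside the recovery helper.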
[jira] [Commented] (HADOOP-19039) Hadoop 3.4.0 Highlight big features and improvements.
[ https://issues.apache.org/jira/browse/HADOOP-19039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814533#comment-17814533 ]

ASF GitHub Bot commented on HADOOP-19039:

slfan1989 commented on PR #6462: URL: https://github.com/apache/hadoop/pull/6462#issuecomment-1928481660

> Compile time support for JDK-8 & Runtime is till JDK-11, JDK-17 runtime isn't there itself: [HADOOP-18716](https://issues.apache.org/jira/browse/HADOOP-18716) tells about some issues with JDK-17, I haven't followed up on that

Thank you for the information! We plan to support JDK 17 in the production environment (we will upgrade to Spark 4.0 in the future). I will continue to follow up on the compilation of JDK 11 and JDK 17. I look forward to successfully completing this task together.

> Hadoop 3.4.0 Highlight big features and improvements.
> Key: HADOOP-19039
> URL: https://issues.apache.org/jira/browse/HADOOP-19039
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.4.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
> While preparing for the release of Hadoop 3.4.0, I've noticed the inclusion of numerous commits in this version. Therefore, highlighting significant features and improvements becomes crucial. I've completed the initial version and now seek the review of more experienced partners to ensure the finalization of the version's highlights.

-- This message was sent by Atlassian Jira (v8.20.10#820010)

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19039. Hadoop 3.4.0 Highlight big features and improvements. [hadoop]
slfan1989 commented on PR #6462: URL: https://github.com/apache/hadoop/pull/6462#issuecomment-1928481660
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814530#comment-17814530 ]

ASF GitHub Bot commented on HADOOP-19050:

hadoop-yetus commented on PR #6507: URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1928393471
Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]
hadoop-yetus commented on PR #6507: URL: https://github.com/apache/hadoop/pull/6507#issuecomment-1928393471

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 30s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 34s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 30m 49s | | trunk passed |
| +1 :green_heart: | compile | 16m 23s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 15m 4s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 12s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 15s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +0 :ok: | spotbugs | 0m 42s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 32m 57s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 33m 21s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 33s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 44s | | the patch passed |
| +1 :green_heart: | compile | 15m 37s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 15m 37s | | the patch passed |
| +1 :green_heart: | compile | 14m 50s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 14m 50s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 2s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 7 new + 2 unchanged - 0 fixed = 9 total (was 2) |
| +1 :green_heart: | mvnsite | 1m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 12s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +0 :ok: | spotbugs | 0m 33s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 33m 14s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 34s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 3m 12s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 57s | | The patch does not generate ASF License warnings. |
| | | | 206m 37s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6507 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
| uname | Linux e44bc4938765 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2bc7c41b6f41fe5ba3ab29e52c58ab538d1bcfa1 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6507/9/testReport/ |
Re: [PR] Hadoop 18993 - target 3.4 [hadoop]
hadoop-yetus commented on PR #6529: URL: https://github.com/apache/hadoop/pull/6529#issuecomment-1928391700

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ branch-3.4 Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 59s | | branch-3.4 passed |
| +1 :green_heart: | compile | 0m 25s | | branch-3.4 passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 21s | | branch-3.4 passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 22s | | branch-3.4 passed |
| +1 :green_heart: | mvnsite | 0m 25s | | branch-3.4 passed |
| +1 :green_heart: | javadoc | 0m 19s | | branch-3.4 passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 22s | | branch-3.4 passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 44s | | branch-3.4 passed |
| +1 :green_heart: | shadedclient | 19m 19s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 19s | | the patch passed |
| +1 :green_heart: | compile | 0m 14s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 12s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 20s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 10s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 40s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 6s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 17s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. |
| | | | 81m 5s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6529/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6529 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux 13cffa8b801e 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.4 / ab292b470a5efdd06a930501482b22a7feed0c47 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6529/1/testReport/ |
| Max. process+thread count | 552 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6529/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-14837) Handle S3A "glacier" data
[ https://issues.apache.org/jira/browse/HADOOP-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814523#comment-17814523 ]

ASF GitHub Bot commented on HADOOP-14837:

steveloughran commented on code in PR #6407: URL: https://github.com/apache/hadoop/pull/6407#discussion_r1478954399
Re: [PR] HADOOP-14837 : Support Read Restored Glacier Objects [hadoop]
steveloughran commented on code in PR #6407: URL: https://github.com/apache/hadoop/pull/6407#discussion_r1478954399

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContext.java:

```diff
@@ -25,6 +25,7 @@
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.ExecutorService;

+import org.apache.hadoop.fs.s3a.S3ObjectStorageClassFilter;
```

Review Comment: can you move down to the rest of the org.apache. these guava things are in the wrong block due to the big search and replace which created them

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:

```diff
@@ -441,6 +441,8 @@ public class S3AFileSystem extends FileSystem implements StreamCapabilities,
    */
   private boolean isCSEEnabled;

+  private S3ObjectStorageClassFilter s3ObjectStorageClassFilter;
```

Review Comment: nit: add a javadoc - and remember a "." at the end to keep all javadoc versions happy

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ObjectStorageClassFilter.java:

```diff
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.thirdparty.com.google.common.collect.Sets;
+import java.util.Set;
+import java.util.function.Function;
+import software.amazon.awssdk.services.s3.model.ObjectStorageClass;
+import software.amazon.awssdk.services.s3.model.S3Object;
+
+/**
+ * {@link S3ObjectStorageClassFilter} will filter the S3 files based on the
+ * {@code fs.s3a.glacier.read.restored.objects} configuration set in {@link S3AFileSystem}.
+ * The config can have 3 values:
+ * {@code READ_ALL}: Retrieval of Glacier files will fail with InvalidObjectStateException:
+ * The operation is not valid for the object's storage class.
+ * {@code SKIP_ALL_GLACIER}: If this value is set then this will ignore any S3 Objects which
+ * are tagged with Glacier storage classes and retrieve the others.
+ * {@code READ_RESTORED_GLACIER_OBJECTS}: If this value is set then restored status of the
+ * Glacier object will be checked, if restored the objects would be read like normal S3
+ * objects else they will be ignored as the objects would not have been retrieved from
+ * the S3 Glacier.
+ */
+public enum S3ObjectStorageClassFilter {
+  READ_ALL(o -> true),
+  SKIP_ALL_GLACIER(S3ObjectStorageClassFilter::isNotGlacierObject),
+  READ_RESTORED_GLACIER_OBJECTS(S3ObjectStorageClassFilter::isCompletedRestoredObject);
+
+  private static final Set<ObjectStorageClass> GLACIER_STORAGE_CLASSES =
+      Sets.newHashSet(ObjectStorageClass.GLACIER, ObjectStorageClass.DEEP_ARCHIVE);
+
+  private final Function<S3Object, Boolean> filter;
+
+  S3ObjectStorageClassFilter(Function<S3Object, Boolean> filter) {
+    this.filter = filter;
+  }
+
+  private static boolean isNotGlacierObject(S3Object object) {
```

Review Comment: add javadocs all the way down here, thanks

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:

```diff
@@ -581,6 +583,12 @@ public void initialize(URI name, Configuration originalConf)
     s3aInternals = createS3AInternals();

+    s3ObjectStorageClassFilter = Optional.ofNullable(conf.get(READ_RESTORED_GLACIER_OBJECTS))
```

Review Comment: @ahmarsuhail but doing it the way it is does handle case differences. I'd go for getTrimmed(READ_RESTORED_GLACIER_OBJECTS, ""); if empty string map to empty optional, otherwise .toupper and valueof. one thing to consider: meaningful failure if the value doesn't map. I'd change Configuration to do that case mapping if it wasn't such a critical class

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:

```diff
@@ -581,6 +583,12 @@ public void initialize(URI name, Configuration originalConf)
     s3aInternals = createS3AInternals();

+    s3ObjectStorageClassFilter = Optional.ofNullable(conf.get(READ_RESTORED_GLACIER_OBJECTS))
```

Review Comment: or we just go for "upper case is required" and use what you've proposed. more brittle but simpler?

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/list/ITestS3AReadRestoredGlacierObjects.java: ## @@
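The parsing approach suggested in the review (trim, treat empty as the default, upper-case, then `valueOf` with a meaningful failure) can be sketched without Hadoop's `Configuration` class. This is an illustrative stand-alone version, not the code in the PR; the enum here mirrors the PR's three values but drops the predicates:

```java
import java.util.Arrays;
import java.util.Locale;

public class GlacierFilterConfigSketch {

    // Mirrors the three values from the PR; per-value predicates elided.
    enum S3ObjectStorageClassFilter { READ_ALL, SKIP_ALL_GLACIER, READ_RESTORED_GLACIER_OBJECTS }

    // Case-insensitive parse with a meaningful failure message:
    // trim -> empty means "use the default" -> upper-case -> valueOf.
    static S3ObjectStorageClassFilter parse(String raw) {
        String value = raw == null ? "" : raw.trim();
        if (value.isEmpty()) {
            return S3ObjectStorageClassFilter.READ_ALL; // default when unset
        }
        try {
            return S3ObjectStorageClassFilter.valueOf(value.toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            // Surface the config key and the legal values instead of a bare enum error.
            throw new IllegalArgumentException(
                "Invalid value '" + raw + "' for fs.s3a.glacier.read.restored.objects; "
                + "expected one of " + Arrays.toString(S3ObjectStorageClassFilter.values()), e);
        }
    }
}
```

The "upper case is required" alternative from the follow-up comment is the same code minus the `toUpperCase` call: simpler, but any lower-case config value then fails.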
Re: [PR] HDFS-17362. RBF: RouterObserverReadProxyProvider should use ConfiguredFailoverProxyProvider internally [hadoop]
simbadzina commented on code in PR #6510: URL: https://github.com/apache/hadoop/pull/6510#discussion_r1478942086

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RouterObserverReadProxyProvider.java:

```diff
@@ -84,7 +84,8 @@ public class RouterObserverReadProxyProvider extends AbstractNNFailoverProxyProvider
   public RouterObserverReadProxyProvider(Configuration conf, URI uri,
       Class<T> xface, HAProxyFactory<T> factory) {
-    this(conf, uri, xface, factory, new IPFailoverProxyProvider<>(conf, uri, xface, factory));
+    this(conf, uri, xface, factory,
+        new ConfiguredFailoverProxyProvider<>(conf, uri, xface, factory));
```

Review Comment: Yes, I was suggesting adding a new parameter, like the `dfs.client.failover.router.internal.proxy.provider` key you named. Creating a new class is also a good solution. I'm a bit worried, though, about the update story for clients who are already using the existing class. The new parameter approach makes the update backward compatible with existing client configs.
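The backward-compatible option discussed in this thread — a config key that selects the internal failover provider, defaulting to current behavior when unset — might look like the sketch below. The key name comes from the discussion and is hypothetical (not an existing Hadoop property), and `java.util.Properties` stands in for Hadoop's `Configuration`:

```java
import java.util.Properties;

public class InternalProxyProviderSketch {

    // Hypothetical key from the review discussion; not a real Hadoop key today.
    static final String INTERNAL_PROVIDER_KEY =
        "dfs.client.failover.router.internal.proxy.provider";

    // Resolve the internal provider class name. Falling back to the supplied
    // default is what keeps existing client configs working unchanged.
    static String resolveInternalProvider(Properties conf, String defaultProvider) {
        return conf.getProperty(INTERNAL_PROVIDER_KEY, defaultProvider);
    }
}
```

With this shape, clients that never set the key keep whatever provider the release defaults to, while clients that need the old `IPFailoverProxyProvider` behavior can opt back in via config rather than changing class names.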
Re: [PR] Hadoop 18993 - target 3.4 [hadoop]
tmnd1991 commented on PR #6529: URL: https://github.com/apache/hadoop/pull/6529#issuecomment-1928138070 @steveloughran opened against 3.4, I'm running tests rn, I'll let you know :smile: -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19067) Allow tag passing to AWS Assume Role Credential Provider
[ https://issues.apache.org/jira/browse/HADOOP-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814512#comment-17814512 ] Jason Martin commented on HADOOP-19067: --- I hadn't seen the audit logging, thank you for that. In this environment I don't think I can rely on links back to the spark cluster since they are ephemeral but centrally managed. I can get session data in Cloudtrail Data Events and map the credential back to the AssumeRole, and the platform could have added in all the breadcrumbs in those tags. Being able to define these additional fields in the referrer header would also do it; I'll probably open a separate ticket about that. > Allow tag passing to AWS Assume Role Credential Provider > > > Key: HADOOP-19067 > URL: https://issues.apache.org/jira/browse/HADOOP-19067 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Jason Martin >Priority: Minor > > [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java#L131-L133] > passes a session name and role arn to AssumeRoleRequest. The AWS AssumeRole > API also supports passing a list of tags: > [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sts/model/AssumeRoleRequest.html#tags()] > These tags could be used by platforms to enhance the data encoded into > CloudTrail entries to provide better information about the client. For > example, a 'notebook' based platform could encode the notebook / jobname / > invoker-id in these tags, enabling more granular access controls and leaving > a richer breadcrumb-trail as to what operations are being performed. > This is particularly useful in larger environments where jobs do not get > individual roles to assume, and there is a desire to track what > jobs/notebooks are reading a given set of files in S3. 
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-19067) Allow tag passing to AWS Assume Role Credential Provider
[ https://issues.apache.org/jira/browse/HADOOP-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19067: Affects Version/s: 3.4.0 (was: 3.3.6) > Allow tag passing to AWS Assume Role Credential Provider > > > Key: HADOOP-19067 > URL: https://issues.apache.org/jira/browse/HADOOP-19067 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Jason Martin >Priority: Minor > > [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java#L131-L133] > passes a session name and role arn to AssumeRoleRequest. The AWS AssumeRole > API also supports passing a list of tags: > [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sts/model/AssumeRoleRequest.html#tags()] > These tags could be used by platforms to enhance the data encoded into > CloudTrail entries to provide better information about the client. For > example, a 'notebook' based platform could encode the notebook / jobname / > invoker-id in these tags, enabling more granular access controls and leaving > a richer breadcrumb-trail as to what operations are being performed. > This is particularly useful in larger environments where jobs do not get > individual roles to assume, and there is a desire to track what > jobs/notebooks are reading a given set of files in S3. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19067) Allow tag passing to AWS Credential Provider
[ https://issues.apache.org/jira/browse/HADOOP-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814511#comment-17814511 ] Steve Loughran commented on HADOOP-19067: - you've seen the s3 auditing stuff right? where we can map HTTP requests to kerberos principals, spark job IDs, even fs commands? main issue there is the http referrer header doesn't get to cloudtrail - if you could express your need for that to anyone @ AWS you know, that'd be great. I want to tie every single GET operation to the job and task which does it. mapping assume role to (principal, job, id) helps, but if you have multiple jobs with the same role active at the same time, it's insufficient. as for the adding of tags: * an option to add that referrer header would be good * and if you look at the fs.s3a.header design, something similar to that for assumed role tags will be welcome too. usual test process as documented in testing.md. thanks. Hadoop 3.4+ only BTW; 3.3.x is feature frozen for s3a, just critical bug fixes - the move to the v2 sdk makes backporting too hard. > Allow tag passing to AWS Credential Provider > > > Key: HADOOP-19067 > URL: https://issues.apache.org/jira/browse/HADOOP-19067 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.6 >Reporter: Jason Martin >Priority: Minor > > [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java#L131-L133] > passes a session name and role arn to AssumeRoleRequest. The AWS AssumeRole > API also supports passing a list of tags: > [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sts/model/AssumeRoleRequest.html#tags()] > These tags could be used by platforms to enhance the data encoded into > CloudTrail entries to provide better information about the client. 
For > example, a 'notebook' based platform could encode the notebook / jobname / > invoker-id in these tags, enabling more granular access controls and leaving > a richer breadcrumb-trail as to what operations are being performed. > This is particularly useful in larger environments where jobs do not get > individual roles to assume, and there is a desire to track what > jobs/notebooks are reading a given set of files in S3. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
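The JIRA's ask maps fairly directly onto the v2 SDK. A hedged sketch of what the change inside AssumedRoleCredentialProvider might look like; the helper name and the idea of a configured tag map are illustrative, only `AssumeRoleRequest.tags()` and `sts.model.Tag` come from the links above:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;
import software.amazon.awssdk.services.sts.model.Tag;

/** Illustrative helper: build an AssumeRoleRequest carrying session tags. */
public final class TaggedAssumeRole {

  static AssumeRoleRequest buildTaggedRequest(
      String roleArn, String sessionName, Map<String, String> sessionTags) {
    // Convert configured key/value pairs (e.g. notebook, jobname,
    // invoker-id) into STS session tags.
    List<Tag> tags = sessionTags.entrySet().stream()
        .map(e -> Tag.builder().key(e.getKey()).value(e.getValue()).build())
        .collect(Collectors.toList());
    // Same role ARN and session name the provider already passes,
    // plus the tags that surface in the CloudTrail AssumeRole event.
    return AssumeRoleRequest.builder()
        .roleArn(roleArn)
        .roleSessionName(sessionName)
        .tags(tags)
        .build();
  }
}
```

Per the comments above, the usual fs.s3a pattern would be to source the map from configuration keys, similar to the fs.s3a.header design.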
[jira] [Updated] (HADOOP-19067) Allow tag passing to AWS Assume Role Credential Provider
[ https://issues.apache.org/jira/browse/HADOOP-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19067: Summary: Allow tag passing to AWS Assume Role Credential Provider (was: Allow tag passing to AWS Credential Provider) > Allow tag passing to AWS Assume Role Credential Provider > > > Key: HADOOP-19067 > URL: https://issues.apache.org/jira/browse/HADOOP-19067 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.6 >Reporter: Jason Martin >Priority: Minor > > [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java#L131-L133] > passes a session name and role arn to AssumeRoleRequest. The AWS AssumeRole > API also supports passing a list of tags: > [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sts/model/AssumeRoleRequest.html#tags()] > These tags could be used by platforms to enhance the data encoded into > CloudTrail entries to provide better information about the client. For > example, a 'notebook' based platform could encode the notebook / jobname / > invoker-id in these tags, enabling more granular access controls and leaving > a richer breadcrumb-trail as to what operations are being performed. > This is particularly useful in larger environments where jobs do not get > individual roles to assume, and there is a desire to track what > jobs/notebooks are reading a given set of files in S3. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-19067) Allow tag passing to AWS Credential Provider
Jason Martin created HADOOP-19067: - Summary: Allow tag passing to AWS Credential Provider Key: HADOOP-19067 URL: https://issues.apache.org/jira/browse/HADOOP-19067 Project: Hadoop Common Issue Type: Improvement Components: fs/s3 Affects Versions: 3.3.6 Reporter: Jason Martin [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/AssumedRoleCredentialProvider.java#L131-L133] passes a session name and role arn to AssumeRoleRequest. The AWS AssumeRole API also supports passing a list of tags: [https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/sts/model/AssumeRoleRequest.html#tags()] These tags could be used by platforms to enhance the data encoded into CloudTrail entries to provide better information about the client. For example, a 'notebook' based platform could encode the notebook / jobname / invoker-id in these tags, enabling more granular access controls and leaving a richer breadcrumb-trail as to what operations are being performed. This is particularly useful in larger environments where jobs do not get individual roles to assume, and there is a desire to track what jobs/notebooks are reading a given set of files in S3. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17181 WebHDFS: Route all CREATE requests to the BlockManager [hadoop]
lfrancke commented on PR #6108: URL: https://github.com/apache/hadoop/pull/6108#issuecomment-1928029263 This has now been running in production since September without a problem. I'll update the branch. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814464#comment-17814464 ] ASF GitHub Bot commented on HADOOP-19050: - jxhan3 commented on code in PR #6507: URL: https://github.com/apache/hadoop/pull/6507#discussion_r1478681708 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java: ## @@ -401,4 +409,32 @@ private static Region getS3RegionFromEndpoint(final String endpoint, return Region.of(AWS_S3_DEFAULT_REGION); } + public static <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void Review Comment: This is for testing purposes; otherwise we may need to use reflection to test a private method. Please share your thoughts on this. Thanks. > Add S3 Access Grants Support in S3A > --- > > Key: HADOOP-19050 > URL: https://issues.apache.org/jira/browse/HADOOP-19050 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Jason Han >Assignee: Jason Han >Priority: Minor > Labels: pull-request-available > > Add support for S3 Access Grants > (https://aws.amazon.com/s3/features/access-grants/) in S3A. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19050, Add Support for AWS S3 Access Grants [hadoop]
jxhan3 commented on code in PR #6507: URL: https://github.com/apache/hadoop/pull/6507#discussion_r1478681708 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java: ## @@ -401,4 +409,32 @@ private static Region getS3RegionFromEndpoint(final String endpoint, return Region.of(AWS_S3_DEFAULT_REGION); } + public static <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void Review Comment: This is for testing purposes; otherwise we may need to use reflection to test a private method. Please share your thoughts on this. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed
[ https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814461#comment-17814461 ] Steve Loughran commented on HADOOP-18993: - [~tmnd91] : merged to trunk. Can you create a PR against branch-3.4 and rerun the tests, and then we can merge there and so target 3.4.1 as the release with this. thanks! > Allow to not isolate S3AFileSystem classloader when needed > -- > > Key: HADOOP-18993 > URL: https://issues.apache.org/jira/browse/HADOOP-18993 > Project: Hadoop Common > Issue Type: Improvement > Components: hadoop-thirdparty >Affects Versions: 3.3.6 >Reporter: Antonio Murgia >Assignee: Antonio Murgia >Priority: Minor > Labels: pull-request-available > Fix For: 3.5.0 > > > In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be > the same as the one that loaded S3AFileSystem. This leads to the > impossibility in Spark applications to load third party credentials providers > as user jars. > I propose to add a configuration key > {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} > that if set to {{false}} will not perform the classloader set. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed
[ https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-18993. - Fix Version/s: 3.5.0 Resolution: Fixed > Allow to not isolate S3AFileSystem classloader when needed > -- > > Key: HADOOP-18993 > URL: https://issues.apache.org/jira/browse/HADOOP-18993 > Project: Hadoop Common > Issue Type: Improvement > Components: hadoop-thirdparty >Affects Versions: 3.3.6 >Reporter: Antonio Murgia >Assignee: Antonio Murgia >Priority: Minor > Labels: pull-request-available > Fix For: 3.5.0 > > > In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be > the same as the one that loaded S3AFileSystem. This leads to the > impossibility in Spark applications to load third party credentials providers > as user jars. > I propose to add a configuration key > {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} > that if set to {{false}} will not perform the classloader set. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed
[ https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-18993: --- Assignee: Antonio Murgia > Allow to not isolate S3AFileSystem classloader when needed > -- > > Key: HADOOP-18993 > URL: https://issues.apache.org/jira/browse/HADOOP-18993 > Project: Hadoop Common > Issue Type: Improvement > Components: hadoop-thirdparty >Affects Versions: 3.3.6 >Reporter: Antonio Murgia >Assignee: Antonio Murgia >Priority: Minor > Labels: pull-request-available > > In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be > the same as the one that loaded S3AFileSystem. This leads to the > impossibility in Spark applications to load third party credentials providers > as user jars. > I propose to add a configuration key > {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} > that if set to {{false}} will not perform the classloader set. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed
[ https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814460#comment-17814460 ] ASF GitHub Bot commented on HADOOP-18993: - steveloughran merged PR #6301: URL: https://github.com/apache/hadoop/pull/6301 > Allow to not isolate S3AFileSystem classloader when needed > -- > > Key: HADOOP-18993 > URL: https://issues.apache.org/jira/browse/HADOOP-18993 > Project: Hadoop Common > Issue Type: Improvement > Components: hadoop-thirdparty >Affects Versions: 3.3.6 >Reporter: Antonio Murgia >Priority: Minor > Labels: pull-request-available > > In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be > the same as the one that loaded S3AFileSystem. This leads to the > impossibility in Spark applications to load third party credentials providers > as user jars. > I propose to add a configuration key > {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} > that if set to {{false}} will not perform the classloader set. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]
steveloughran merged PR #6301: URL: https://github.com/apache/hadoop/pull/6301 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18993) Allow to not isolate S3AFileSystem classloader when needed
[ https://issues.apache.org/jira/browse/HADOOP-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814457#comment-17814457 ] ASF GitHub Bot commented on HADOOP-18993: - steveloughran commented on PR #6301: URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1927622965 I've looked at the closed fs test. It is verifying that threads are all gone, so it's potentially a sign of some thread leakage. Can you post the stack trace/output you are seeing? I don't see this as the cause, so will merge to trunk/branch-3.4; we can create a new JIRA with the new failure, as it may need to be made more resilient, as well as maybe improving reporting > Allow to not isolate S3AFileSystem classloader when needed > -- > > Key: HADOOP-18993 > URL: https://issues.apache.org/jira/browse/HADOOP-18993 > Project: Hadoop Common > Issue Type: Improvement > Components: hadoop-thirdparty >Affects Versions: 3.3.6 >Reporter: Antonio Murgia >Priority: Minor > Labels: pull-request-available > > In HADOOP-17372 the S3AFileSystem forces the configuration classloader to be > the same as the one that loaded S3AFileSystem. This leads to the > impossibility in Spark applications to load third party credentials providers > as user jars. > I propose to add a configuration key > {{fs.s3a.extensions.isolated.classloader}} with a default value of {{true}} > that if set to {{false}} will not perform the classloader set. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18993 Allow to not isolate S3AFileSystem classloader when needed [hadoop]
steveloughran commented on PR #6301: URL: https://github.com/apache/hadoop/pull/6301#issuecomment-1927622965 I've looked at the closed fs test. It is verifying that threads are all gone, so it's potentially a sign of some thread leakage. Can you post the stack trace/output you are seeing? I don't see this as the cause, so will merge to trunk/branch-3.4; we can create a new JIRA with the new failure, as it may need to be made more resilient, as well as maybe improving reporting -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement
[ https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814450#comment-17814450 ] ASF GitHub Bot commented on HADOOP-19057: - hadoop-yetus commented on PR #6515: URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1927581664 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 18 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 7s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 30s | | trunk passed | | +1 :green_heart: | compile | 18m 29s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 17m 37s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 31s | | trunk passed | | +1 :green_heart: | javadoc | 1m 48s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 48s | | trunk passed | | +1 :green_heart: | shadedclient | 38m 19s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 31s | | the patch passed | | +1 :green_heart: | compile | 19m 9s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 19m 9s | | the patch passed | | +1 :green_heart: | compile | 21m 21s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 21m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 6m 39s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 22s | | the patch passed | | +1 :green_heart: | javadoc | 2m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 6m 13s | | the patch passed | | -1 :x: | shadedclient | 41m 29s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 0m 41s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | unit | 0m 42s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/5/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +0 :ok: | asflicense | 0m 44s | | ASF License check generated no output? 
| | | | 253m 0s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6515 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint xmllint | | uname | Linux 653c18f7dae1 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 35edd7cb2b49c7b21fe7b5ef1da9a5d603db05b7 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions |
Re: [PR] HADOOP-19057. Landsat bucket deleted [hadoop]
hadoop-yetus commented on PR #6515: URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1927581664 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 18 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 7s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 30s | | trunk passed | | +1 :green_heart: | compile | 18m 29s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 17m 37s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 31s | | trunk passed | | +1 :green_heart: | javadoc | 1m 48s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 48s | | trunk passed | | +1 :green_heart: | shadedclient | 38m 19s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 31s | | the patch passed | | +1 :green_heart: | compile | 19m 9s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 19m 9s | | the patch passed | | +1 :green_heart: | compile | 21m 21s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 21m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 6m 39s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 22s | | the patch passed | | +1 :green_heart: | javadoc | 2m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 6m 13s | | the patch passed | | -1 :x: | shadedclient | 41m 29s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 0m 41s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | unit | 0m 42s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/5/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch failed. | | +0 :ok: | asflicense | 0m 44s | | ASF License check generated no output? 
| | | | 253m 0s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6515 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint xmllint | | uname | Linux 653c18f7dae1 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 35edd7cb2b49c7b21fe7b5ef1da9a5d603db05b7 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/5/testReport/ | | Max. process+thread count | 585 (vs. ulimit of
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814442#comment-17814442 ] ASF GitHub Bot commented on HADOOP-19050: - ahmarsuhail commented on code in PR #6507: URL: https://github.com/apache/hadoop/pull/6507#discussion_r1478502544 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java: ## @@ -401,4 +409,32 @@ private static Region getS3RegionFromEndpoint(final String endpoint, return Region.of(AWS_S3_DEFAULT_REGION); } + public static <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void Review Comment: method can be private? ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/S3AccessGrantsUtil.java: ## @@ -0,0 +1,60 @@ +package org.apache.hadoop.fs.s3a.tools; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.s3a.DefaultS3ClientFactory; +import org.apache.hadoop.fs.store.LogExactlyOnce; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin; +import software.amazon.awssdk.services.s3.S3BaseClientBuilder; + +import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED; + +public class S3AccessGrantsUtil { + + protected static final Logger LOG = + LoggerFactory.getLogger(S3AccessGrantsUtil.class); + + private static final LogExactlyOnce LOG_EXACTLY_ONCE = new LogExactlyOnce(LOG); Review Comment: rename from `LOG_EXACTLY_ONCE` to what this log is actually for, eg: `IAM_FALLBACK_WARN`. look at `WARN_OF_DEFAULT_REGION_CHAIN` in DefaultS3ClientFactory as an example. 
## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/S3AccessGrantsUtil.java: ## @@ -0,0 +1,60 @@ +package org.apache.hadoop.fs.s3a.tools; + +import org.apache.hadoop.conf.Configuration; Review Comment: add apache license to the top of this class (copy it over from any other class) ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/S3AccessGrantsUtil.java: ## @@ -0,0 +1,60 @@ +package org.apache.hadoop.fs.s3a.tools; + Review Comment: wrong package for this class, move to the impl package. ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java: ## @@ -0,0 +1,89 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.test.AbstractHadoopTestBase; +import org.junit.Test; + +import software.amazon.awssdk.services.s3.S3AsyncClient; +import software.amazon.awssdk.services.s3.S3BaseClientBuilder; +import software.amazon.awssdk.services.s3.S3Client; + +import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED; +import static org.junit.Assert.assertEquals; + + +/** + * Test S3 Access Grants configurations. 
+ */ +public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase { + + @Test + public void testS3AccessGrantsEnabled() { +applyVerifyS3AGPlugin(S3Client.builder(), false, true); + } + + @Test + public void testS3AccessGrantsEnabledAsync() { +applyVerifyS3AGPlugin(S3AsyncClient.builder(), false, true); + } + + @Test + public void testS3AccessGrantsDisabled() { +applyVerifyS3AGPlugin(S3Client.builder(), false, false); + } + + @Test + public void testS3AccessGrantsDisabledByDefault() { +applyVerifyS3AGPlugin(S3Client.builder(), true, false); + } + + @Test + public void testS3AccessGrantsDisabledAsync() { +applyVerifyS3AGPlugin(S3AsyncClient.builder(), false, false); + } + + @Test + public void testS3AccessGrantsDisabledByDefaultAsync() { +applyVerifyS3AGPlugin(S3AsyncClient.builder(), true, false); + } + + private Configuration createConfig(boolean isDefault, boolean s3agEnabled) { +Configuration conf = new Configuration(); +if (!isDefault){ + conf.setBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, s3agEnabled); +} +return conf; + } + + private , ClientT> void + applyVerifyS3AGPlugin(BuilderT builder,
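The review above asks for `LOG_EXACTLY_ONCE` to be renamed after its purpose, as with `WARN_OF_DEFAULT_REGION_CHAIN`. The idea behind Hadoop's `LogExactlyOnce` helper is a warning that fires once per process no matter how many times the code path is hit. The following is a minimal, self-contained sketch of that pattern — it is illustrative only, not the actual `org.apache.hadoop.fs.store.LogExactlyOnce` implementation, and the `WarnOnce` name and `System.out` output are assumptions made for the example:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of a warn-once logger in the spirit of Hadoop's
// LogExactlyOnce; NOT the actual Hadoop class.
public final class WarnOnce {

    // Flips to true exactly once; later calls see it set and do nothing.
    private final AtomicBoolean done = new AtomicBoolean(false);

    /**
     * Emit the warning only on the first call.
     * compareAndSet guarantees exactly one caller wins, even under races.
     *
     * @return true if this call actually emitted the warning.
     */
    public boolean warn(String message) {
        if (done.compareAndSet(false, true)) {
            System.out.println("WARN: " + message);
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Named after what it warns about, as the review suggests.
        WarnOnce iamFallbackWarn = new WarnOnce();
        iamFallbackWarn.warn("S3 Access Grants falling back to IAM"); // emitted
        iamFallbackWarn.warn("S3 Access Grants falling back to IAM"); // suppressed
    }
}
```

Naming the field for its message (`IAM_FALLBACK_WARN` rather than `LOG_EXACTLY_ONCE`) tells the reader at the call site *what* is being warned about, not merely *how often*.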
[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable -needs replacement
[ https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814418#comment-17814418 ]

ASF GitHub Bot commented on HADOOP-19057:
-

hadoop-yetus commented on PR #6515: URL: https://github.com/apache/hadoop/pull/6515#issuecomment-1927324582

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 18 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 6s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 19m 17s | | trunk passed |
| +1 :green_heart: | compile | 8m 27s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 7m 32s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 2m 3s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 26s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 11s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 44s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 53s | | the patch passed |
| +1 :green_heart: | compile | 7m 55s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 7m 55s | | the patch passed |
| +1 :green_heart: | compile | 7m 34s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 7m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 0s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 24s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 0s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 42s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 16m 41s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 24s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. |
| | | 144m 53s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6515 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint xmllint |
| uname | Linux 3d4c304237dd 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 35edd7cb2b49c7b21fe7b5ef1da9a5d603db05b7 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/6/testReport/ |
| Max. process+thread count | 3153 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6515/6/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
Re: [PR] HDFS-17146.Use the dfsadmin -reconfig command to initiate reconfiguration on all decommissioning datanodes. [hadoop]
hadoop-yetus commented on PR #6504: URL: https://github.com/apache/hadoop/pull/6504#issuecomment-1927285846

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 38s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 41m 0s | | trunk passed |
| +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 14s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 8s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 20s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 18s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 42s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 8s | | the patch passed |
| +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 13s | | the patch passed |
| +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 6s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 56s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 12s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 28s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 17s | | the patch passed |
| +1 :green_heart: | shadedclient | 34m 32s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 220m 11s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
| | | 354m 37s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6504 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux 4e2d111fc67d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 33ff6f18f5e52802cb3b5d03df192c6179d76bec |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/6/testReport/ |
| Max. process+thread count | 3435 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6504/6/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For
Re: [PR] YARN-11654. [JDK17] TestLinuxContainerExecutorWithMocks.testStartLoca… [hadoop]
hadoop-yetus commented on PR #6528: URL: https://github.com/apache/hadoop/pull/6528#issuecomment-1927177125

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 33s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 42m 46s | | trunk passed |
| +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 37s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 25s | | trunk passed |
| +1 :green_heart: | shadedclient | 32m 37s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 27s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager: The patch generated 0 new + 3 unchanged - 1 fixed = 3 total (was 4) |
| +1 :green_heart: | mvnsite | 0m 33s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 23s | | the patch passed |
| +1 :green_heart: | shadedclient | 32m 39s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 24m 16s | | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 150m 0s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6528/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6528 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 274d96550475 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / be835e44742a7755c26d1b26d196db8cb2894e6a |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6528/1/testReport/ |
| Max. process+thread count | 558 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6528/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the
[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
[ https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814372#comment-17814372 ]

Steve Loughran commented on HADOOP-19066:
-

ha! what a moving target region support is. fs.s3a.endpoint was so much simpler

> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> ------------------------------------------------------------------
>
>            Key: HADOOP-19066
>            URL: https://issues.apache.org/jira/browse/HADOOP-19066
>        Project: Hadoop Common
>     Issue Type: Sub-task
>     Components: fs/s3
> Affects Versions: 3.5.0, 3.4.1
>       Reporter: Viraj Jasani
>       Assignee: Viraj Jasani
>       Priority: Major
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK
> considers overriding endpoint and enabling fips as mutually exclusive, we
> fail fast if fs.s3a.endpoint is set with fips support (details on HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable
> cross region access (details on HADOOP-19044) but we would still fail fast if
> endpoint is central and fips is enabled.
> Changes proposed:
> * S3A to fail fast only if FIPS is enabled and a non-central endpoint is configured.
> * Tests to ensure S3 bucket is accessible with default region us-east-2 with
> cross region access (expected with central endpoint).
> * Document FIPS support with central endpoint on connecting.html.

--
This message was sent by Atlassian Jira (v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
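The proposed change above narrows the fail-fast rule: reject FIPS only when a *non-central* endpoint override is configured. A minimal sketch of that validation logic follows; the class name `FipsEndpointCheck`, the method name `validate`, and the error message are illustrative assumptions, not the actual S3A code, though "s3.amazonaws.com" is the central endpoint the issue refers to:

```java
// Illustrative sketch of the proposed fail-fast rule from HADOOP-19066;
// not the actual S3A implementation.
public final class FipsEndpointCheck {

    // The AWS "central" endpoint, which S3A treats like no endpoint override.
    static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

    /**
     * Fail fast only when FIPS is enabled together with a non-central
     * endpoint override; no endpoint, or the central endpoint, is allowed.
     */
    public static void validate(String endpoint, boolean fipsEnabled) {
        boolean endpointSet = endpoint != null && !endpoint.isEmpty();
        boolean central = endpointSet && CENTRAL_ENDPOINT.equals(endpoint);
        if (fipsEnabled && endpointSet && !central) {
            throw new IllegalArgumentException(
                "An endpoint cannot be set when fs.s3a.endpoint.fips is true: "
                    + endpoint);
        }
    }

    public static void main(String[] args) {
        validate(null, true);               // FIPS with no endpoint: allowed
        validate(CENTRAL_ENDPOINT, true);   // FIPS + central: allowed under the proposal
        try {
            validate("s3.eu-west-1.amazonaws.com", true); // FIPS + regional: fail fast
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Before this change, the check rejected any endpoint override whenever FIPS was set; the sketch simply carves out the central-endpoint case, since cross-region access means the SDK endpoint is no longer overridden there.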
[PR] YARN-11654. [JDK17] TestLinuxContainerExecutorWithMocks.testStartLoca… [hadoop]
BilwaST opened a new pull request, #6528: URL: https://github.com/apache/hadoop/pull/6528

…lizer fails

I have run this test locally and verified it.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits
[ https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814324#comment-17814324 ]

ASF GitHub Bot commented on HADOOP-19047:
-

hadoop-yetus commented on PR #6468: URL: https://github.com/apache/hadoop/pull/6468#issuecomment-1926764689

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 46m 54s | | trunk passed |
| +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 32s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 9s | | trunk passed |
| +1 :green_heart: | shadedclient | 39m 5s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 22s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) |
| +1 :green_heart: | mvnsite | 0m 37s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 12s | | the patch passed |
| +1 :green_heart: | shadedclient | 38m 24s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 55s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 141m 44s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6468 |
| JIRA Issue | HADOOP-19047 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
| uname | Linux f10aab5720a0 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 7d213000e14bcd1ee7aa6c1369f996a0815347e8 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/5/testReport/ |
| Max. process+thread count | 528 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console
Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]
hadoop-yetus commented on PR #6468: URL: https://github.com/apache/hadoop/pull/6468#issuecomment-1926764689 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 46m 54s | | trunk passed | | +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 32s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | | trunk passed | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 9s | | trunk passed | | +1 :green_heart: | shadedclient | 39m 5s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 22s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 12s | | the patch passed | | +1 :green_heart: | shadedclient | 38m 24s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 55s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 141m 44s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6468 | | JIRA Issue | HADOOP-19047 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint | | uname | Linux f10aab5720a0 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7d213000e14bcd1ee7aa6c1369f996a0815347e8 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/5/testReport/ | | Max. process+thread count | 528 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6468/5/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was
[jira] [Commented] (HADOOP-14837) Handle S3A "glacier" data
[ https://issues.apache.org/jira/browse/HADOOP-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814321#comment-17814321 ] ASF GitHub Bot commented on HADOOP-14837: - hadoop-yetus commented on PR #6407: URL: https://github.com/apache/hadoop/pull/6407#issuecomment-1926752080 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 31s | | trunk passed | | +1 :green_heart: | compile | 0m 23s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 19s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 17s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 21s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 16s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | compile | 0m 16s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 13s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 21 new + 6 unchanged - 0 fixed = 27 total (was 6) | | +1 :green_heart: | mvnsite | 0m 18s | | the patch passed | | +1 :green_heart: | javadoc | 0m 10s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 5s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 27s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. 
| | | | 81m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6407 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint | | uname | Linux 6f1c70e86dea 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d035d1561d64e8d1279a7c6240bff6f09494b64c | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/8/testReport/ | | Max. process+thread count | 663 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output |
Re: [PR] HADOOP-14837 : Support Read Restored Glacier Objects [hadoop]
hadoop-yetus commented on PR #6407: URL: https://github.com/apache/hadoop/pull/6407#issuecomment-1926752080 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 31s | | trunk passed | | +1 :green_heart: | compile | 0m 23s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 19s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 17s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 21s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 16s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | compile | 0m 16s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 13s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 21 new + 6 unchanged - 0 fixed = 27 total (was 6) | | +1 :green_heart: | mvnsite | 0m 18s | | the patch passed | | +1 :green_heart: | javadoc | 0m 10s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 19m 5s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 27s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. 
| | | | 81m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6407 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint | | uname | Linux 6f1c70e86dea 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / d035d1561d64e8d1279a7c6240bff6f09494b64c | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/8/testReport/ | | Max. process+thread count | 663 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/8/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. --
[jira] [Commented] (HADOOP-19052) Hadoop use Shell command to get the count of the hard link which takes a lot of time
[ https://issues.apache.org/jira/browse/HADOOP-19052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814294#comment-17814294 ] ASF GitHub Bot commented on HADOOP-19052: - hadoop-yetus commented on PR #6527: URL: https://github.com/apache/hadoop/pull/6527#issuecomment-1926551401 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 22s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 18s | | trunk passed | | +1 :green_heart: | compile | 8m 11s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 7m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 40s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 51s | | trunk passed | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 0s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 7m 45s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 7m 45s | | the patch passed | | +1 :green_heart: | compile | 7m 23s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 7m 23s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 34s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 4 new + 42 unchanged - 0 fixed = 46 total (was 42) | | +1 :green_heart: | mvnsite | 0m 49s | | the patch passed | | +1 :green_heart: | javadoc | 0m 36s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 16m 15s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 130m 47s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6527 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux d611acab831f 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f810715d570523fead07564f3fb3dd20db5afcd1 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/1/testReport/ | | Max. process+thread count | 1273 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output |
Re: [PR] HADOOP-19052.Hadoop use Shell command to get the count of the hard link which takes a lot of time [hadoop]
hadoop-yetus commented on PR #6527: URL: https://github.com/apache/hadoop/pull/6527#issuecomment-1926551401 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 22s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 18s | | trunk passed | | +1 :green_heart: | compile | 8m 11s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 7m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 40s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 51s | | trunk passed | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 0s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 29s | | the patch passed | | +1 :green_heart: | compile | 7m 45s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 7m 45s | | the patch passed | | +1 :green_heart: | compile | 7m 23s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 7m 23s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 34s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 4 new + 42 unchanged - 0 fixed = 46 total (was 42) | | +1 :green_heart: | mvnsite | 0m 49s | | the patch passed | | +1 :green_heart: | javadoc | 0m 36s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 11s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 16m 15s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 130m 47s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6527 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux d611acab831f 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / f810715d570523fead07564f3fb3dd20db5afcd1 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/1/testReport/ | | Max. process+thread count | 1273 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6527/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message
[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits
[ https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814285#comment-17814285 ] ASF GitHub Bot commented on HADOOP-19047: - shameersss1 commented on PR #6468: URL: https://github.com/apache/hadoop/pull/6468#issuecomment-1926512939 @steveloughran - Thanks a lot for the detailed review as well as the amazing follow-up questions. I have addressed your comments; please let me know your thoughts.
> Support InMemory Tracking Of S3A Magic Commits
> --
>
> Key: HADOOP-19047
> URL: https://issues.apache.org/jira/browse/HADOOP-19047
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Syed Shameerur Rahman
> Assignee: Syed Shameerur Rahman
> Priority: Major
> Labels: pull-request-available
>
> The following are the operations that happen within a Task when it uses the S3A Magic Committer.
> *During closing of the stream*
> 1. A 0-byte file with the same name as the original file is uploaded to S3 using a PUT operation. Refer [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L152] for more information. This is done so that a downstream application such as Spark can get the size of the file being written.
> 2. MultiPartUpload (MPU) metadata is uploaded to S3. Refer [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java#L176] for more information.
> *During TaskCommit*
> 1. All the MPU metadata the task wrote to S3 (there will be 'x' metadata files in S3 if a single task writes to 'x' files) is read and rewritten to S3 as a single metadata file. Refer [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L201] for more information.
> Since these operations happen within the Task JVM, we can optimize them and save cost by storing this information in memory when task memory usage is not a constraint. Hence the proposal is to introduce a new MagicCommit tracker called "InMemoryMagicCommitTracker" which will store:
> 1. The metadata of the MPU in memory until the Task is committed.
> 2. The size of the file, which downstream applications can use to get the file size before it is committed/visible at the output path.
> This optimization saves 2 PUT S3 calls, 1 LIST S3 call, and 1 GET S3 call when a Task writes only 1 file.
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
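The in-memory tracking proposed in the issue description above can be sketched roughly as follows. This is an illustrative, hypothetical Java sketch: the class and method names are my own, not the actual `InMemoryMagicCommitTracker` API in the PR. The idea is that the task records each multipart upload and the bytes written in process memory, so TaskCommit can build one aggregated manifest without the extra per-file S3 PUT/LIST/GET calls.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch only -- NOT the actual Hadoop InMemoryMagicCommitTracker.
 * Keeps multipart-upload metadata and bytes-written counts in task memory
 * instead of writing marker and .pending objects to S3.
 */
public class InMemoryCommitSketch {

  // taskAttemptId -> paths of uploads the task has started but not committed
  private static final Map<String, List<String>> uploadsByTask = new ConcurrentHashMap<>();

  // destination path -> bytes written, so callers can query size pre-commit
  private static final Map<String, Long> bytesByPath = new ConcurrentHashMap<>();

  /** Called when a magic output stream closes: record in memory instead of PUT to S3. */
  public static void recordUpload(String taskAttemptId, String path, long bytes) {
    uploadsByTask.computeIfAbsent(taskAttemptId, k -> new ArrayList<>()).add(path);
    bytesByPath.put(path, bytes);
  }

  /** Size visible to downstream readers before the task commit. */
  public static long bytesWritten(String path) {
    return bytesByPath.getOrDefault(path, -1L);
  }

  /** On TaskCommit: drain the in-memory records into one aggregated manifest. */
  public static List<String> drain(String taskAttemptId) {
    List<String> uploads = uploadsByTask.remove(taskAttemptId);
    return uploads == null ? Collections.emptyList() : uploads;
  }
}
```

A caveat the JIRA itself notes: this only works "when Task memory usage is not a constraint", since pending-upload state now lives in the task JVM rather than in S3.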
Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]
shameersss1 commented on PR #6468: URL: https://github.com/apache/hadoop/pull/6468#issuecomment-1926512939 @steveloughran - Thanks a lot for the detailed review as well as the amazing follow-up questions. I have addressed your comments; please let me know your thoughts. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14837) Handle S3A "glacier" data
[ https://issues.apache.org/jira/browse/HADOOP-14837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814283#comment-17814283 ] ASF GitHub Bot commented on HADOOP-14837: - hadoop-yetus commented on PR #6407: URL: https://github.com/apache/hadoop/pull/6407#issuecomment-1926503914 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 16s | | https://github.com/apache/hadoop/pull/6407 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/6407 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/7/console | | versions | git=2.34.1 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
> Handle S3A "glacier" data
> -
>
> Key: HADOOP-14837
> URL: https://issues.apache.org/jira/browse/HADOOP-14837
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0-beta1
> Reporter: Steve Loughran
> Assignee: Bhavay Pahuja
> Priority: Minor
> Labels: pull-request-available
>
> SPARK-21797 covers how, if you have AWS S3 set to copy some files to Glacier, they appear in the listing but GETs fail, and so does everything else. We should think about how best to handle this:
> # report better
> # if listings can identify files which are glaciated, then maybe we could have an option to filter them out
> # test & see what happens
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
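The second option floated in the issue (filtering glaciated files out of listings) could be sketched along these lines. This is purely illustrative: the `ObjectSummary` type below is a stand-in I invented, not the AWS SDK or S3A type; a real implementation would inspect the storage class that the S3 listing API reports for each object.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch only: hide objects archived to Glacier tiers from a
 * listing, since GETs against them fail until the object is restored.
 */
public class GlacierFilterSketch {

  /** Stand-in for a listing entry; not the real AWS SDK type. */
  public static final class ObjectSummary {
    public final String key;
    public final String storageClass;

    public ObjectSummary(String key, String storageClass) {
      this.key = key;
      this.storageClass = storageClass;
    }
  }

  /** Keep only entries whose data can actually be read without a restore. */
  public static List<ObjectSummary> readableObjects(List<ObjectSummary> listing) {
    List<ObjectSummary> out = new ArrayList<>();
    for (ObjectSummary o : listing) {
      // Archived tiers: data is listed but not retrievable until restored.
      if (!"GLACIER".equals(o.storageClass) && !"DEEP_ARCHIVE".equals(o.storageClass)) {
        out.add(o);
      }
    }
    return out;
  }
}
```

The trade-off the PR discussion touches on is whether to filter silently, fail fast with a clearer error, or (as the PR title says) support reading objects that have already been restored from Glacier.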
Re: [PR] HADOOP-14837 : Support Read Restored Glacier Objects [hadoop]
hadoop-yetus commented on PR #6407: URL: https://github.com/apache/hadoop/pull/6407#issuecomment-1926503914 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 16s | | https://github.com/apache/hadoop/pull/6407 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/6407 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6407/7/console | | versions | git=2.34.1 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits
[ https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814280#comment-17814280 ] ASF GitHub Bot commented on HADOOP-19047: - shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477845326 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/magic/ITestMagicCommitProtocol.java: ## @@ -71,6 +79,26 @@ public void setup() throws Exception { CommitUtils.verifyIsMagicCommitFS(getFileSystem()); } + @Parameterized.Parameters(name = "track-commit-in-memory-{0}") + public static Collection params() { +return Arrays.asList(new Object[][]{ +{false}, +{true} +}); + } + + public ITestMagicCommitProtocol(boolean trackCommitsInMemory) { +this.trackCommitsInMemory = trackCommitsInMemory; + } + + @Override + protected Configuration createConfiguration() { +Configuration conf = super.createConfiguration(); +conf.setBoolean(FS_S3A_COMMITTER_MAGIC_TRACK_COMMITS_IN_MEMORY_ENABLED, trackCommitsInMemory); Review Comment: ack -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]
shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477845326 ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/magic/ITestMagicCommitProtocol.java: ## @@ -71,6 +79,26 @@ public void setup() throws Exception { CommitUtils.verifyIsMagicCommitFS(getFileSystem()); } + @Parameterized.Parameters(name = "track-commit-in-memory-{0}") + public static Collection<Object[]> params() { +return Arrays.asList(new Object[][]{ +{false}, +{true} +}); + } + + public ITestMagicCommitProtocol(boolean trackCommitsInMemory) { +this.trackCommitsInMemory = trackCommitsInMemory; + } + + @Override + protected Configuration createConfiguration() { +Configuration conf = super.createConfiguration(); +conf.setBoolean(FS_S3A_COMMITTER_MAGIC_TRACK_COMMITS_IN_MEMORY_ENABLED, trackCommitsInMemory); Review Comment: ack -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits
[ https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814250#comment-17814250 ] ASF GitHub Bot commented on HADOOP-19047: - shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477808239 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java: ## @@ -52,6 +52,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import javax.annotation.Nullable; +import org.apache.hadoop.fs.s3a.commit.magic.InMemoryMagicCommitTracker; Review Comment: Ack. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]
shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477808239 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java: ## @@ -52,6 +52,7 @@ import java.util.concurrent.atomic.AtomicBoolean; import javax.annotation.Nullable; +import org.apache.hadoop.fs.s3a.commit.magic.InMemoryMagicCommitTracker; Review Comment: Ack. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits
[ https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814247#comment-17814247 ] ASF GitHub Bot commented on HADOOP-19047: - shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477804569 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/S3MagicCommitTracker.java: ## @@ -0,0 +1,124 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.commit.magic; + +import org.apache.commons.lang3.StringUtils; Review Comment: Ack. I will import `Code formatter xml is present here: https://github.com/apache/hadoop/tree/trunk/dev-support/code-formatter . IntelliJ users can directly import hadoop_idea_formatter.xml` ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/InMemoryMagicCommitTracker.java: ## @@ -0,0 +1,126 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.commit.magic; + +import org.apache.commons.lang3.StringUtils; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.s3a.WriteOperationHelper; +import org.apache.hadoop.fs.s3a.commit.files.SinglePendingCommit; +import org.apache.hadoop.fs.s3a.statistics.PutTrackerStatistics; +import org.apache.hadoop.fs.statistics.IOStatistics; +import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot; +import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions; +import software.amazon.awssdk.services.s3.model.CompletedPart; Review Comment: Ack. I will import `Code formatter xml is present here: https://github.com/apache/hadoop/tree/trunk/dev-support/code-formatter . IntelliJ users can directly import hadoop_idea_formatter.xml` -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]
shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477804569 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/S3MagicCommitTracker.java: ## @@ -0,0 +1,124 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.commit.magic; + +import org.apache.commons.lang3.StringUtils; Review Comment: Ack. I will import `Code formatter xml is present here: https://github.com/apache/hadoop/tree/trunk/dev-support/code-formatter . IntelliJ users can directly import hadoop_idea_formatter.xml` ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/InMemoryMagicCommitTracker.java: ## @@ -0,0 +1,126 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.commit.magic; + +import org.apache.commons.lang3.StringUtils; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.s3a.WriteOperationHelper; +import org.apache.hadoop.fs.s3a.commit.files.SinglePendingCommit; +import org.apache.hadoop.fs.s3a.statistics.PutTrackerStatistics; +import org.apache.hadoop.fs.statistics.IOStatistics; +import org.apache.hadoop.fs.statistics.IOStatisticsSnapshot; +import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions; +import software.amazon.awssdk.services.s3.model.CompletedPart; Review Comment: Ack. I will import `Code formatter xml is present here: https://github.com/apache/hadoop/tree/trunk/dev-support/code-formatter . IntelliJ users can directly import hadoop_idea_formatter.xml` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19047) Support InMemory Tracking Of S3A Magic Commits
[ https://issues.apache.org/jira/browse/HADOOP-19047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814234#comment-17814234 ] ASF GitHub Bot commented on HADOOP-19047: - shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477795947 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java: ## @@ -248,6 +236,80 @@ private PendingSet innerCommitTask( return pendingSet; } + /** + * Loads pending commits from either memory or from the remote store (S3) based on the config. + * @param context TaskAttemptContext + * @return All pending commit data for the given TaskAttemptContext + * @throws IOException + * if there is an error trying to read the commit data + */ + protected PendingSet loadPendingCommits(TaskAttemptContext context) throws IOException { +PendingSet pendingSet = new PendingSet(); +if (isTrackMagicCommitsInMemoryEnabled(context.getConfiguration())) { + // load from memory + List<SinglePendingCommit> pendingCommits = loadPendingCommitsFromMemory(context); + + for (SinglePendingCommit singleCommit : pendingCommits) { +// aggregate stats +pendingSet.getIOStatistics() +.aggregate(singleCommit.getIOStatistics()); +// then clear so they aren't marshalled again. +singleCommit.getIOStatistics().clear(); + } + pendingSet.setCommits(pendingCommits); +} else { + // Load from remote store + CommitOperations actions = getCommitOperations(); + Path taskAttemptPath = getTaskAttemptPath(context); + try (CommitContext commitContext = initiateTaskOperation(context)) { +Pair<PendingSet, List<Pair<LocatedFileStatus, IOException>>> loaded = +actions.loadSinglePendingCommits(taskAttemptPath, true, commitContext); +pendingSet = loaded.getKey(); +List<Pair<LocatedFileStatus, IOException>> failures = loaded.getValue(); +if (!failures.isEmpty()) { + // At least one file failed to load + // revert all which did; report failure with first exception + LOG.error("At least one commit file could not be read: failing"); + abortPendingUploads(commitContext, pendingSet.getCommits(), true); + throw failures.get(0).getValue(); +} + } +} +return pendingSet; + } + + private List<SinglePendingCommit> loadPendingCommitsFromMemory(TaskAttemptContext context) Review Comment: ack. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19047: Support InMemory Tracking Of S3A Magic Commits [hadoop]
shameersss1 commented on code in PR #6468: URL: https://github.com/apache/hadoop/pull/6468#discussion_r1477795947 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java: ## @@ -248,6 +236,80 @@ private PendingSet innerCommitTask( return pendingSet; } + /** + * Loads pending commits from either memory or from the remote store (S3) based on the config. + * @param context TaskAttemptContext + * @return All pending commit data for the given TaskAttemptContext + * @throws IOException + * if there is an error trying to read the commit data + */ + protected PendingSet loadPendingCommits(TaskAttemptContext context) throws IOException { +PendingSet pendingSet = new PendingSet(); +if (isTrackMagicCommitsInMemoryEnabled(context.getConfiguration())) { + // load from memory + List<SinglePendingCommit> pendingCommits = loadPendingCommitsFromMemory(context); + + for (SinglePendingCommit singleCommit : pendingCommits) { +// aggregate stats +pendingSet.getIOStatistics() +.aggregate(singleCommit.getIOStatistics()); +// then clear so they aren't marshalled again. +singleCommit.getIOStatistics().clear(); + } + pendingSet.setCommits(pendingCommits); +} else { + // Load from remote store + CommitOperations actions = getCommitOperations(); + Path taskAttemptPath = getTaskAttemptPath(context); + try (CommitContext commitContext = initiateTaskOperation(context)) { +Pair<PendingSet, List<Pair<LocatedFileStatus, IOException>>> loaded = +actions.loadSinglePendingCommits(taskAttemptPath, true, commitContext); +pendingSet = loaded.getKey(); +List<Pair<LocatedFileStatus, IOException>> failures = loaded.getValue(); +if (!failures.isEmpty()) { + // At least one file failed to load + // revert all which did; report failure with first exception + LOG.error("At least one commit file could not be read: failing"); + abortPendingUploads(commitContext, pendingSet.getCommits(), true); + throw failures.get(0).getValue(); +} + } +} +return pendingSet; + } + + private List<SinglePendingCommit> loadPendingCommitsFromMemory(TaskAttemptContext context) Review Comment: ack.
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
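The control flow of loadPendingCommits quoted in the review above reduces to a single config-driven branch: read pending commit data from task-JVM memory when in-memory tracking is enabled, otherwise list and parse the .pending files from S3. A tiny stand-in makes the shape of that branch explicit (hypothetical types: the real method returns a PendingSet, aggregates IOStatistics, and aborts the uploads if any .pending file fails to load):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

/** Sketch of the config-driven load path in loadPendingCommits (simplified). */
public class LoadPathSketch {

  static List<String> loadPendingCommits(boolean trackCommitsInMemory,
      Supplier<List<String>> fromMemory,
      Supplier<List<String>> fromRemoteStore) {
    // mirrors: if (isTrackMagicCommitsInMemoryEnabled(conf)) { ... } else { ... }
    if (trackCommitsInMemory) {
      return fromMemory.get();     // no S3 round-trip
    }
    return fromRemoteStore.get();  // list + read .pending files from S3
  }

  public static void main(String[] args) {
    List<String> commits = loadPendingCommits(
        true,
        () -> Arrays.asList("in-memory-commit"),
        () -> Arrays.asList("s3-commit"));
    System.out.println(commits); // prints [in-memory-commit]
  }
}
```

Keeping both paths behind one method means the rest of the committer is unchanged whichever tracker is configured, which is presumably why the PR threads the flag through the configuration rather than subclassing the committer.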