[jira] [Commented] (HADOOP-18716) [JDK-17] Failed unit tests with Java 17 runtime and compiled Java 8
[ https://issues.apache.org/jira/browse/HADOOP-18716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812565#comment-17812565 ]

Bilwa S T commented on HADOOP-18716:
------------------------------------

[~ayushtkn] Are you working on this?

> [JDK-17] Failed unit tests with Java 17 runtime and compiled Java 8
> -------------------------------------------------------------------
>
> Key: HADOOP-18716
> URL: https://issues.apache.org/jira/browse/HADOOP-18716
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Vinay Devadiga
> Priority: Critical
>
> Compiled Hadoop branch 3.3.3 with JAVA_HOME pointing to Java 8, Maven 3.8.8:
> mvn clean install -DskipTests=true
>
> Then changed JAVA_HOME to Java 17 and ran the whole test suite in a private
> cloud environment:
> mvn surefire:test
>
> Out of ~22,000 tests, ~2,500 failed.
[jira] [Created] (HADOOP-19058) [JDK-17] TestCryptoOutputStreamClosing#testUnderlyingOutputStreamClosedWhenExceptionClosing fails
Bilwa S T created HADOOP-19058:
-----------------------------------

Summary: [JDK-17] TestCryptoOutputStreamClosing#testUnderlyingOutputStreamClosedWhenExceptionClosing fails
Key: HADOOP-19058
URL: https://issues.apache.org/jira/browse/HADOOP-19058
Project: Hadoop Common
Issue Type: Sub-task
Reporter: Bilwa S T
Assignee: Bilwa S T
[jira] [Updated] (HADOOP-19055) Preparing for 1.3.0 development
[ https://issues.apache.org/jira/browse/HADOOP-19055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19055:
--------------------------------
    Hadoop Flags: Reviewed

> Preparing for 1.3.0 development
> -------------------------------
>
> Key: HADOOP-19055
> URL: https://issues.apache.org/jira/browse/HADOOP-19055
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: hadoop-thirdparty
> Affects Versions: thirdparty-1.3.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: thirdparty-1.2.0, thirdparty-1.3.0
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
[jira] [Resolved] (HADOOP-19054) Update hadoop-thirdparty index.md.vm
[ https://issues.apache.org/jira/browse/HADOOP-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan resolved HADOOP-19054.
---------------------------------
    Hadoop Flags: Reviewed
    Target Version/s: thirdparty-1.2.0, thirdparty-1.3.0  (was: thirdparty-1.2.0)
    Resolution: Fixed

> Update hadoop-thirdparty index.md.vm
> ------------------------------------
>
> Key: HADOOP-19054
> URL: https://issues.apache.org/jira/browse/HADOOP-19054
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: hadoop-thirdparty
> Affects Versions: thirdparty-1.2.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: thirdparty-1.2.0
>
> Time Spent: 40m
> Remaining Estimate: 0h
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812551#comment-17812551 ]

Jason Han commented on HADOOP-19050:
------------------------------------

[~ste...@apache.org], the SDK v2.23.7 update and the change to enable the S3 Access Grants plugin are all the code changes we need for this new feature. Thanks.

Two PRs:
[https://github.com/apache/hadoop/pull/6506]
[https://github.com/apache/hadoop/pull/6507]

> Add S3 Access Grants Support in S3A
> -----------------------------------
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Jason Han
> Priority: Minor
> Labels: pull-request-available
>
> Add support for S3 Access Grants
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.
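For readers following along, this is roughly what "attaching the plugin to the S3 SDK v2 client" looks like, following the pattern in the plugin's README; the package, builder, and option names below are taken from that README and should be treated as assumptions, not the final Hadoop integration.

{code:java}
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin;
import software.amazon.awssdk.services.s3.S3Client;

public class S3AccessGrantsSketch {
  public static void main(String[] args) {
    // Fallback disabled: a request fails if no matching grant exists,
    // rather than falling back to the client's own credentials.
    S3AccessGrantsPlugin accessGrants = S3AccessGrantsPlugin.builder()
        .enableFallback(false)
        .build();

    // Attaching the plugin makes the client fetch grant-scoped
    // credentials for each S3 request.
    try (S3Client s3 = S3Client.builder()
        .region(Region.US_EAST_1)
        .addPlugin(accessGrants)
        .build()) {
      // use the client as normal; Access Grants is enforced per request
    }
  }
}
{code}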
[jira] [Resolved] (HADOOP-19055) Preparing for 1.3.0 development
[ https://issues.apache.org/jira/browse/HADOOP-19055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan resolved HADOOP-19055.
---------------------------------
    Fix Version/s: thirdparty-1.2.0
    Resolution: Fixed

> Preparing for 1.3.0 development
> -------------------------------
>
> Key: HADOOP-19055
> URL: https://issues.apache.org/jira/browse/HADOOP-19055
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: hadoop-thirdparty
> Affects Versions: thirdparty-1.3.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: thirdparty-1.2.0, thirdparty-1.3.0
>
> Time Spent: 0.5h
> Remaining Estimate: 0h
[jira] [Updated] (HADOOP-19054) Update hadoop-thirdparty index.md.vm
[ https://issues.apache.org/jira/browse/HADOOP-19054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19054:
--------------------------------
    Component/s: hadoop-thirdparty

> Update hadoop-thirdparty index.md.vm
> ------------------------------------
>
> Key: HADOOP-19054
> URL: https://issues.apache.org/jira/browse/HADOOP-19054
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: hadoop-thirdparty
> Affects Versions: thirdparty-1.2.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: thirdparty-1.2.0
>
> Time Spent: 40m
> Remaining Estimate: 0h
[jira] [Updated] (HADOOP-19056) Highlight RBF features and improvements targeting version 3.4
[ https://issues.apache.org/jira/browse/HADOOP-19056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19056:
--------------------------------
    Target Version/s: 3.4.0, 3.5.0
    Affects Version/s: 3.5.0

> Highlight RBF features and improvements targeting version 3.4
> -------------------------------------------------------------
>
> Key: HADOOP-19056
> URL: https://issues.apache.org/jira/browse/HADOOP-19056
> Project: Hadoop Common
> Issue Type: Task
> Components: build, common
> Affects Versions: 3.4.0, 3.5.0
> Reporter: Takanobu Asanuma
> Assignee: Takanobu Asanuma
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> I want to highlight recent RBF features and improvements:
> - Support observer node from Router-Based Federation
> - Enhanced IPC throughput between Router and NameNode
> - Improved isolation for downstream name nodes.
[jira] [Updated] (HADOOP-19056) Highlight RBF features and improvements targeting version 3.4
[ https://issues.apache.org/jira/browse/HADOOP-19056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19056:
--------------------------------
    Component/s: build
                 common

> Highlight RBF features and improvements targeting version 3.4
> -------------------------------------------------------------
>
> Key: HADOOP-19056
> URL: https://issues.apache.org/jira/browse/HADOOP-19056
> Project: Hadoop Common
> Issue Type: Task
> Components: build, common
> Affects Versions: 3.4.0
> Reporter: Takanobu Asanuma
> Assignee: Takanobu Asanuma
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> I want to highlight recent RBF features and improvements:
> - Support observer node from Router-Based Federation
> - Enhanced IPC throughput between Router and NameNode
> - Improved isolation for downstream name nodes.
[jira] [Updated] (HADOOP-19056) Highlight RBF features and improvements targeting version 3.4
[ https://issues.apache.org/jira/browse/HADOOP-19056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19056:
--------------------------------
    Fix Version/s: 3.4.0

> Highlight RBF features and improvements targeting version 3.4
> -------------------------------------------------------------
>
> Key: HADOOP-19056
> URL: https://issues.apache.org/jira/browse/HADOOP-19056
> Project: Hadoop Common
> Issue Type: Task
> Affects Versions: 3.4.0
> Reporter: Takanobu Asanuma
> Assignee: Takanobu Asanuma
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> I want to highlight recent RBF features and improvements:
> - Support observer node from Router-Based Federation
> - Enhanced IPC throughput between Router and NameNode
> - Improved isolation for downstream name nodes.
[jira] [Updated] (HADOOP-19056) Highlight RBF features and improvements targeting version 3.4
[ https://issues.apache.org/jira/browse/HADOOP-19056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19056:
--------------------------------
    Hadoop Flags: Reviewed
    Resolution: Fixed
    Status: Resolved  (was: Patch Available)

> Highlight RBF features and improvements targeting version 3.4
> -------------------------------------------------------------
>
> Key: HADOOP-19056
> URL: https://issues.apache.org/jira/browse/HADOOP-19056
> Project: Hadoop Common
> Issue Type: Task
> Components: build, common
> Affects Versions: 3.4.0
> Reporter: Takanobu Asanuma
> Assignee: Takanobu Asanuma
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
> I want to highlight recent RBF features and improvements:
> - Support observer node from Router-Based Federation
> - Enhanced IPC throughput between Router and NameNode
> - Improved isolation for downstream name nodes.
[jira] [Updated] (HADOOP-19056) Highlight RBF features and improvements targeting version 3.4
[ https://issues.apache.org/jira/browse/HADOOP-19056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19056:
--------------------------------
    Affects Version/s: 3.4.0

> Highlight RBF features and improvements targeting version 3.4
> -------------------------------------------------------------
>
> Key: HADOOP-19056
> URL: https://issues.apache.org/jira/browse/HADOOP-19056
> Project: Hadoop Common
> Issue Type: Task
> Affects Versions: 3.4.0
> Reporter: Takanobu Asanuma
> Assignee: Takanobu Asanuma
> Priority: Major
> Labels: pull-request-available
>
> I want to highlight recent RBF features and improvements:
> - Support observer node from Router-Based Federation
> - Enhanced IPC throughput between Router and NameNode
> - Improved isolation for downstream name nodes.
[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A
[ https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812492#comment-17812492 ]

Adnan Hemani commented on HADOOP-19050:
---------------------------------------

Hi Steve, I'm working on this along with Jason Han. We've introduced a patch that uses AWS' officially released S3 Access Grants plugin ([https://github.com/aws/aws-s3-accessgrants-plugin-java-v2] and [https://mvnrepository.com/artifact/software.amazon.s3.accessgrants/aws-s3-accessgrants-java-plugin/2.0.0]). That should clear most of the complexities you've mentioned above, as the main logic for supporting S3 Access Grants comes from the plugin once it is attached to the S3 SDK v2 client. We only need to introduce code in Hadoop that lets users enable adding this plugin to their S3 SDK v2 clients.

I'm taking a look at the build failures mentioned above and will update the GitHub PR with any findings. I doubt these are related to the new code, as all the listed issues are unrelated to the code diff, but I will check to make sure.

Thanks for your support! Please let us know your thoughts on the code when you next have time.

-Adnan

> Add S3 Access Grants Support in S3A
> -----------------------------------
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Jason Han
> Priority: Minor
> Labels: pull-request-available
>
> Add support for S3 Access Grants
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.
[jira] [Commented] (HADOOP-19057) S3 public test bucket landsat-pds unreadable - needs replacement
[ https://issues.apache.org/jira/browse/HADOOP-19057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812452#comment-17812452 ]

Steve Loughran commented on HADOOP-19057:
-----------------------------------------

HADOOP-14661 added requester pays, so 3.3.5+ can move to a new source.

> S3 public test bucket landsat-pds unreadable - needs replacement
> ----------------------------------------------------------------
>
> Key: HADOOP-19057
> URL: https://issues.apache.org/jira/browse/HADOOP-19057
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, test
> Affects Versions: 3.4.0, 3.2.4, 3.3.9, 3.3.6, 3.5.0
> Reporter: Steve Loughran
> Priority: Critical
>
> The s3 test bucket used in hadoop-aws tests of S3 Select and large file reads
> is no longer publicly accessible
> {code}
> java.nio.file.AccessDeniedException: landsat-pds: getBucketMetadata() on landsat-pds: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: S3, Status Code: 403, Request ID: 06QNYQ9GND5STQ2S, Extended Request ID: O+u2Y1MrCQuuSYGKRAWHj/5LcDLuaFS8owNuXXWSJ0zFXYfuCaTVLEP351S/umti558eKlUqV6U=):null
> {code}
> * Because HADOOP-18830 has cut S3 Select, all we need in 3.4.1+ is a large file for some reading tests
> * changing the default value disables S3 Select tests on older releases
> * if fs.s3a.scale.test.csvfile is set to " " then other tests which need it will be skipped
> Proposed
> * we locate a new large file under the (requester pays) s3a://usgs-landsat/ bucket. All releases with HADOOP-18168 can use this
> * update 3.4.1 source to use this; document it
> * do something similar for 3.3.9, and maybe even cut S3 Select there too
> * document how to use it on older releases with requester-pays support
> * document how to completely disable it on older releases.
[jira] [Resolved] (HADOOP-17784) hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021
[ https://issues.apache.org/jira/browse/HADOOP-17784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-17784.
-------------------------------------
    Resolution: Duplicate

HADOOP-19057 will address this now that the bucket is completely gone.

> hadoop-aws landsat-pds test bucket will be deleted after Jul 1, 2021
> --------------------------------------------------------------------
>
> Key: HADOOP-17784
> URL: https://issues.apache.org/jira/browse/HADOOP-17784
> Project: Hadoop Common
> Issue Type: Test
> Components: fs/s3, test
> Reporter: Leona Yoda
> Priority: Major
> Attachments: org.apache.hadoop.fs.s3a.select.ITestS3SelectMRJob.txt
>
> I found an announcement that the landsat-pds bucket will be deleted on July 1, 2021
> (https://registry.opendata.aws/landsat-8/),
> and this bucket is used in the tests of the hadoop-aws module:
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/S3ATestConstants.java#L93]
>
> At this time I can access the bucket, but we might have to change the test
> bucket someday.
[jira] [Created] (HADOOP-19057) S3 public test bucket landsat-pds unreadable - needs replacement
Steve Loughran created HADOOP-19057:
---------------------------------------

Summary: S3 public test bucket landsat-pds unreadable - needs replacement
Key: HADOOP-19057
URL: https://issues.apache.org/jira/browse/HADOOP-19057
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3, test
Affects Versions: 3.3.6, 3.2.4, 3.4.0, 3.3.9, 3.5.0
Reporter: Steve Loughran

The s3 test bucket used in hadoop-aws tests of S3 Select and large file reads is no longer publicly accessible

{code}
java.nio.file.AccessDeniedException: landsat-pds: getBucketMetadata() on landsat-pds: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: S3, Status Code: 403, Request ID: 06QNYQ9GND5STQ2S, Extended Request ID: O+u2Y1MrCQuuSYGKRAWHj/5LcDLuaFS8owNuXXWSJ0zFXYfuCaTVLEP351S/umti558eKlUqV6U=):null
{code}

* Because HADOOP-18830 has cut S3 Select, all we need in 3.4.1+ is a large file for some reading tests
* changing the default value disables S3 Select tests on older releases
* if fs.s3a.scale.test.csvfile is set to " " then other tests which need it will be skipped

Proposed
* we locate a new large file under the (requester pays) s3a://usgs-landsat/ bucket. All releases with HADOOP-18168 can use this
* update 3.4.1 source to use this; document it
* do something similar for 3.3.9, and maybe even cut S3 Select there too
* document how to use it on older releases with requester-pays support
* document how to completely disable it on older releases.
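To make the proposal concrete, a sketch of how a test run might be pointed at a replacement file; the object key below is hypothetical, and the per-bucket requester-pays switch is the one added by HADOOP-14661 (fs.s3a.requester.pays.enabled):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ScaleTestSourceSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Point the reading tests at a large object in the requester-pays
    // bucket; this key is hypothetical, a real replacement is still TBD.
    conf.set("fs.s3a.scale.test.csvfile",
        "s3a://usgs-landsat/some/large/object.csv");

    // Opt in to requester-pays for that bucket only.
    conf.setBoolean("fs.s3a.bucket.usgs-landsat.requester.pays.enabled", true);

    // Alternatively, disable the dependent tests on older releases.
    conf.set("fs.s3a.scale.test.csvfile", " ");
  }
}
{code}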
[jira] [Resolved] (HADOOP-19022) S3A: ITestS3AConfiguration#testRequestTimeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-19022.
-------------------------------------
    Fix Version/s: 3.5.0
                   3.4.1
    Assignee: Steve Loughran
    Resolution: Duplicate

> S3A: ITestS3AConfiguration#testRequestTimeout failure
> -----------------------------------------------------
>
> Key: HADOOP-19022
> URL: https://issues.apache.org/jira/browse/HADOOP-19022
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3, test
> Affects Versions: 3.4.0
> Reporter: Viraj Jasani
> Assignee: Steve Loughran
> Priority: Minor
> Fix For: 3.5.0, 3.4.1
>
> "fs.s3a.connection.request.timeout" should be specified in milliseconds, as per
> {code:java}
> Duration apiCallTimeout = getDuration(conf, REQUEST_TIMEOUT,
>     DEFAULT_REQUEST_TIMEOUT_DURATION, TimeUnit.MILLISECONDS, Duration.ZERO);
> {code}
> The test fails consistently because it sets a 120 ms timeout, which is less than
> the 15 s minimum network operation duration, and hence gets reset to 15000 ms by
> the enforcement logic.
>
> {code:java}
> [ERROR] testRequestTimeout(org.apache.hadoop.fs.s3a.ITestS3AConfiguration)  Time elapsed: 0.016 s  <<< FAILURE!
> java.lang.AssertionError: Configured fs.s3a.connection.request.timeout is different than what AWS sdk configuration uses internally expected:<120> but was:<15000>
> at org.junit.Assert.fail(Assert.java:89)
> at org.junit.Assert.failNotEquals(Assert.java:835)
> at org.junit.Assert.assertEquals(Assert.java:647)
> at org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testRequestTimeout(ITestS3AConfiguration.java:444)
> {code}
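A minimal sketch of the clamping behaviour the test tripped over; the names are illustrative, not the S3A implementation:

{code:java}
import java.time.Duration;

public class TimeoutClampSketch {
  /** Raise a configured timeout to the enforced minimum if it is shorter. */
  static Duration enforceMinimum(Duration configured, Duration minimum) {
    return configured.compareTo(minimum) < 0 ? minimum : configured;
  }

  public static void main(String[] args) {
    // 120 ms is below the 15 s minimum network operation duration, so the
    // effective timeout becomes 15000 ms; hence the assertion failure.
    System.out.println(
        enforceMinimum(Duration.ofMillis(120), Duration.ofSeconds(15))); // PT15S
  }
}
{code}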
[jira] [Resolved] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-19045.
-------------------------------------
    Fix Version/s: 3.5.0
                   3.4.1
    Resolution: Fixed

> S3A: pass request timeouts down to sdk clients
> ----------------------------------------------
>
> Key: HADOOP-19045
> URL: https://issues.apache.org/jira/browse/HADOOP-19045
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
> s3a client timeout settings are getting down to the http client, but not the sdk
> timeouts, so you can't have a longer timeout than the default. This surfaces in
> the inability to tune the timeouts for CreateSession calls, even now that the
> latest SDK does pick it up.
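For reference, this is the SDK v2 mechanism the fix wires the S3A setting into: the apiCallTimeout/apiCallAttemptTimeout options on the client override configuration. A minimal sketch with an illustrative 32-second budget, not the actual S3A client factory code:

{code:java}
import java.time.Duration;
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.services.s3.S3Client;

public class SdkTimeoutSketch {
  public static void main(String[] args) {
    // Whole-request and per-attempt budgets passed down to the SDK;
    // without these, only the HTTP-client timeouts take effect.
    ClientOverrideConfiguration override = ClientOverrideConfiguration.builder()
        .apiCallTimeout(Duration.ofSeconds(32))
        .apiCallAttemptTimeout(Duration.ofSeconds(32))
        .build();

    try (S3Client s3 = S3Client.builder()
        .overrideConfiguration(override)
        .build()) {
      // client calls now fail with a timeout once the budget is exceeded
    }
  }
}
{code}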
[jira] [Updated] (HADOOP-18830) S3A: Cut S3 Select
[ https://issues.apache.org/jira/browse/HADOOP-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18830:
------------------------------------
    Fix Version/s: 3.5.0

> S3A: Cut S3 Select
> ------------------
>
> Key: HADOOP-18830
> URL: https://issues.apache.org/jira/browse/HADOOP-18830
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
> getting s3 select to work with the v2 sdk is tricky: we need to add extra
> libraries to the classpath beyond just bundle.jar. We can do this, but:
> * AFAIK nobody has ever done CSV predicate pushdown, as it breaks split logic completely
> * CSV is a bad format
> * one-line JSON is more structured but also way less efficient
> ORC/Parquet benefit from vectored IO and work spanning the cluster.
> Accordingly, I'm wondering what to do about s3 select:
> # cut?
> # downgrade to optional and document the extra classes on the classpath
> Option #2 is straightforward and effectively the default. We can also declare
> the feature deprecated.
> {code}
> [ERROR] testReadLandsatRecordsNoMatch(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)  Time elapsed: 147.958 s  <<< ERROR!
> java.io.IOException: java.lang.NoClassDefFoundError: software/amazon/eventstream/MessageDecoder
> at org.apache.hadoop.fs.s3a.select.SelectObjectContentHelper.select(SelectObjectContentHelper.java:75)
> at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$select$10(WriteOperationHelper.java:660)
> at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$withinAuditSpan$0(AuditingFunctions.java:62)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
> {code}
[jira] [Resolved] (HADOOP-18830) S3A: Cut S3 Select
[ https://issues.apache.org/jira/browse/HADOOP-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-18830.
-------------------------------------
    Fix Version/s: 3.4.1
    Hadoop Flags: Incompatible change
    Release Note: S3 Select is no longer supported through the S3A connector
    Resolution: Fixed

> S3A: Cut S3 Select
> ------------------
>
> Key: HADOOP-18830
> URL: https://issues.apache.org/jira/browse/HADOOP-18830
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.1
>
> getting s3 select to work with the v2 sdk is tricky: we need to add extra
> libraries to the classpath beyond just bundle.jar. We can do this, but:
> * AFAIK nobody has ever done CSV predicate pushdown, as it breaks split logic completely
> * CSV is a bad format
> * one-line JSON is more structured but also way less efficient
> ORC/Parquet benefit from vectored IO and work spanning the cluster.
> Accordingly, I'm wondering what to do about s3 select:
> # cut?
> # downgrade to optional and document the extra classes on the classpath
> Option #2 is straightforward and effectively the default. We can also declare
> the feature deprecated.
> {code}
> [ERROR] testReadLandsatRecordsNoMatch(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)  Time elapsed: 147.958 s  <<< ERROR!
> java.io.IOException: java.lang.NoClassDefFoundError: software/amazon/eventstream/MessageDecoder
> at org.apache.hadoop.fs.s3a.select.SelectObjectContentHelper.select(SelectObjectContentHelper.java:75)
> at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$select$10(WriteOperationHelper.java:660)
> at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$withinAuditSpan$0(AuditingFunctions.java:62)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
> {code}
Re: [PR] HDFS-17353. Fix failing RBF module tests. [hadoop]
hadoop-yetus commented on PR #6491:
URL: https://github.com/apache/hadoop/pull/6491#issuecomment-1917623902

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 46m 21s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 21s | | trunk passed |
| +1 :green_heart: | shadedclient | 37m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 31s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 37m 57s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 24m 48s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | | 162m 15s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6491/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6491 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 8627f072c7dd 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 7ba5e5f8e7817423298ffd523dc703d0518d4b5c |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6491/4/testReport/ |
| Max. process+thread count | 2085 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6491/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR being zero. [hadoop]
hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1917563938

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 22s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 45s | | trunk passed |
| +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 40s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 52s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 42s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 40s | | the patch passed |
| +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 35s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/12/artifact/out/blanks-eol.txt) | The patch has 5 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | checkstyle | 0m 27s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 36s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 44s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 51s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 199m 34s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/12/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 291m 26s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/12/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux ab7ea8abd688 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4619de20d50f9ad1380d3ee407d5a615e94539b5 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/12/testReport/ |
| Max. process+thread count | 4069 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/12/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812422#comment-17812422 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471667589

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
## @@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);

Review Comment:
   Sure, will set region here because the null-region case is anyway covered below.

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
> Key: HADOOP-19044
> URL: https://issues.apache.org/jira/browse/HADOOP-19044
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Ahmar Suhail
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471667589

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
## @@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);

Review Comment:
   Sure, will set region here because the null-region case is anyway covered below.
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812421#comment-17812421 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471664768

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
## @@ -289,17 +290,36 @@ private , ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   I meant "central endpoint" with "us-west-1" region, to access a bucket created on us-west-2. I will test out the combination. Thanks.

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
> Key: HADOOP-19044
> URL: https://issues.apache.org/jira/browse/HADOOP-19044
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Ahmar Suhail
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471664768

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
## @@ -289,17 +290,36 @@ private , ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   I meant "central endpoint" with "us-west-1" region, to access a bucket created on us-west-2. I will test out the combination. Thanks.
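A condensed, self-contained sketch of the decision being reviewed here: skip the endpoint override for the global endpoint so that the SDK's cross-region support can resolve the bucket's real region. The constant and method names echo the diff, but this is not the committed code:

```java
public class CentralEndpointSketch {
  static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

  /** Should the SDK client be pinned to the configured endpoint? */
  static boolean shouldOverrideEndpoint(String endpoint) {
    // For the central endpoint, leave the override off and let
    // cross-region access find the bucket's actual region.
    return endpoint != null && !endpoint.endsWith(CENTRAL_ENDPOINT);
  }

  public static void main(String[] args) {
    System.out.println(shouldOverrideEndpoint("s3.amazonaws.com"));           // false
    System.out.println(shouldOverrideEndpoint("s3.us-west-2.amazonaws.com")); // true
  }
}
```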
[jira] [Commented] (HADOOP-18830) S3A: Cut S3 Select
[ https://issues.apache.org/jira/browse/HADOOP-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812394#comment-17812394 ]

ASF GitHub Bot commented on HADOOP-18830:
-----------------------------------------

steveloughran merged PR #6144:
URL: https://github.com/apache/hadoop/pull/6144

> S3A: Cut S3 Select
> ------------------
>
> Key: HADOOP-18830
> URL: https://issues.apache.org/jira/browse/HADOOP-18830
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> getting s3 select to work with the v2 sdk is tricky: we need to add extra
> libraries to the classpath beyond just bundle.jar. We can do this, but:
> * AFAIK nobody has ever done CSV predicate pushdown, as it breaks split logic completely
> * CSV is a bad format
> * one-line JSON is more structured but also way less efficient
> ORC/Parquet benefit from vectored IO and work spanning the cluster.
> Accordingly, I'm wondering what to do about s3 select:
> # cut?
> # downgrade to optional and document the extra classes on the classpath
> Option #2 is straightforward and effectively the default. We can also declare
> the feature deprecated.
> {code}
> [ERROR] testReadLandsatRecordsNoMatch(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat)  Time elapsed: 147.958 s  <<< ERROR!
> java.io.IOException: java.lang.NoClassDefFoundError: software/amazon/eventstream/MessageDecoder
> at org.apache.hadoop.fs.s3a.select.SelectObjectContentHelper.select(SelectObjectContentHelper.java:75)
> at org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$select$10(WriteOperationHelper.java:660)
> at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$withinAuditSpan$0(AuditingFunctions.java:62)
> at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
> {code}
Re: [PR] HADOOP-18830. Cut S3 Select [hadoop]
steveloughran merged PR #6144:
URL: https://github.com/apache/hadoop/pull/6144
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
steveloughran commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471515567

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
## @@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);

Review Comment:
   what should region be set to here? either unset it or explicitly set it.

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
## @@ -289,17 +290,36 @@ private , ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   I don't think anyone should set region=us-west-2 and endpoint=us-west-1 unless they like debugging things. all we want is to handle situations where things are not set.

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
## @@ -146,7 +150,21 @@ public void testCentralEndpoint() throws Throwable {
     describe("Create a client with the central endpoint");
     Configuration conf = getConfiguration();

-    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_1, false);
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_2, false);
+
+    expectInterceptorException(client);
+  }
+
+  @Test
+  public void testCentralEndpointWithRegion() throws Throwable {
+    describe("Create a client with the central endpoint but also specify region");
+    Configuration conf = getConfiguration();
+
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, US_WEST_2, US_EAST_2, false);

Review Comment:
   as in #6466 I'm going to propose we make the static methods accessible and add unit tests to validate them, because
   * this stuff is so important and complicated we need it running on every PR
   * everyone's ITest setup is different, so may miss things.
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812391#comment-17812391 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

steveloughran commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471515567

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
## @@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);

Review Comment:
   what should region be set to here? either unset it or explicitly set it.

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
## @@ -289,17 +290,36 @@ private , ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   I don't think anyone should set region=us-west-2 and endpoint=us-west-1 unless they like debugging things. all we want is to handle situations where things are not set.

## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
## @@ -146,7 +150,21 @@ public void testCentralEndpoint() throws Throwable {
     describe("Create a client with the central endpoint");
     Configuration conf = getConfiguration();

-    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_1, false);
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_2, false);
+
+    expectInterceptorException(client);
+  }
+
+  @Test
+  public void testCentralEndpointWithRegion() throws Throwable {
+    describe("Create a client with the central endpoint but also specify region");
+    Configuration conf = getConfiguration();
+
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, US_WEST_2, US_EAST_2, false);

Review Comment:
   as in #6466 I'm going to propose we make the static methods accessible and add unit tests to validate them, because
   * this stuff is so important and complicated we need it running on every PR
   * everyone's ITest setup is different, so may miss things.

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
> Key: HADOOP-19044
> URL: https://issues.apache.org/jira/browse/HADOOP-19044
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Ahmar Suhail
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.
Re: [PR] HADOOP-18938. AWS SDK v2: Fix endpoint region parsing for vpc endpoints. [hadoop]
steveloughran commented on code in PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#discussion_r1471504240

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
## @@ -82,6 +84,9 @@ public class DefaultS3ClientFactory extends Configured
   private static final String S3_SERVICE_NAME = "s3";

+  private static final Pattern VPC_ENDPOINT_PATTERN =
+      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");

Review Comment:
   does aws govcloud have a different pattern? or is it just .cn
[jira] [Commented] (HADOOP-18938) S3A region logic to handle vpce and non standard endpoints
[ https://issues.apache.org/jira/browse/HADOOP-18938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812385#comment-17812385 ]

ASF GitHub Bot commented on HADOOP-18938:
-----------------------------------------

steveloughran commented on code in PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#discussion_r1471504240

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
## @@ -82,6 +84,9 @@ public class DefaultS3ClientFactory extends Configured
   private static final String S3_SERVICE_NAME = "s3";

+  private static final Pattern VPC_ENDPOINT_PATTERN =
+      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");

Review Comment:
   does aws govcloud have a different pattern? or is it just .cn

> S3A region logic to handle vpce and non standard endpoints
> ----------------------------------------------------------
>
> Key: HADOOP-18938
> URL: https://issues.apache.org/jira/browse/HADOOP-18938
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Ahmar Suhail
> Priority: Major
> Labels: pull-request-available
>
> For non-standard endpoints such as VPCE, the region parsing added in
> HADOOP-18908 doesn't work. This is expected, as that logic is only meant to be
> used for standard endpoints.
> If you are using a non-standard endpoint, check if a region is also provided,
> else fail fast.
> Also update documentation to explain the region and endpoint behaviour with
> SDK V2.
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812384#comment-17812384 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

steveloughran commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471495390

## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
## @@ -320,6 +321,11 @@ private , ClientT> void
     origin = "SDK region chain";
   }

+    if (endpointStr != null && endpointStr.endsWith(CENTRAL_ENDPOINT)) {

Review Comment:
   landsat bucket seems to have been closed off; we will need to move off it - but the replacement must be something other than us-east so it stresses more of the system

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
> Key: HADOOP-19044
> URL: https://issues.apache.org/jira/browse/HADOOP-19044
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Ahmar Suhail
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.
[jira] [Commented] (HADOOP-18938) S3A region logic to handle vpce and non standard endpoints
[ https://issues.apache.org/jira/browse/HADOOP-18938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812381#comment-17812381 ]

ASF GitHub Bot commented on HADOOP-18938:
-----------------------------------------

steveloughran commented on code in PR #6466:
URL: https://github.com/apache/hadoop/pull/6466#discussion_r1471473061

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -361,6 +366,13 @@ private static URI getS3Endpoint(String endpoint, final Configuration conf) {
    */
   private static Region getS3RegionFromEndpoint(String endpoint) {

+    // S3 VPC endpoint parsing
+    Matcher matcher = VPC_ENDPOINT_PATTERN.matcher(endpoint);
+    if(matcher.find()) {
+      LOG.debug("Endpoint {} is vpc endpoint; parsing", endpoint);
+      return Region.of(matcher.group(1));

Review Comment:
   could you log the group(1) value in the debug log so we can see exactly which one it is. region and endpoint bindings are the cause of many of our support calls.

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -361,7 +366,15 @@ private static URI getS3Endpoint(String endpoint, final Configuration conf) {
    */
   private static Region getS3RegionFromEndpoint(String endpoint) {

Review Comment:
   this could be made package private/visible for testing and then have some unit tests which will run every time yetus does its work. this should include a test which passes an endpoint which shouldn't match the regexp -and verifies that it is rejected

> S3A region logic to handle vpce and non standard endpoints
> -----------------------------------------------------------
>
>                 Key: HADOOP-18938
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18938
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Priority: Major
>              Labels: pull-request-available
>
> For non standard endpoints such as VPCE the region parsing added in
> HADOOP-18908 doesn't work. This is expected as that logic is only meant to be
> used for standard endpoints.
> If you are using a non-standard endpoint, check if a region is also provided,
> else fail fast.
> Also update documentation to explain the region and endpoint behaviour with
> SDK V2.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
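A sketch of the kind of unit test the second review comment asks for, written against a local stand-in for a package-private getS3RegionFromEndpoint (the stand-in is assumed to return null, rather than throw, on a non-matching endpoint):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.junit.Test;

// JUnit 4 sketch; parseVpcRegion is a local stand-in for the package-private
// DefaultS3ClientFactory method the review proposes, not the real signature.
public class TestVpcEndpointParsing {
  private static final Pattern VPC_ENDPOINT_PATTERN =
      Pattern.compile("^(?:.+\\.)?([a-z0-9-]+)\\.vpce\\.amazonaws\\.(?:com|com\\.cn)$");

  static String parseVpcRegion(String endpoint) {
    Matcher m = VPC_ENDPOINT_PATTERN.matcher(endpoint);
    return m.find() ? m.group(1) : null;
  }

  @Test
  public void testVpcEndpointParsed() {
    assertEquals("us-west-2", parseVpcRegion("s3.us-west-2.vpce.amazonaws.com"));
  }

  @Test
  public void testNonVpcEndpointRejected() {
    // the negative case the review asks for: a standard endpoint must not match
    assertNull(parseVpcRegion("s3.eu-west-1.amazonaws.com"));
  }
}
```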
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812378#comment-17812378 ]

ASF GitHub Bot commented on HADOOP-19045:
-----------------------------------------

steveloughran merged PR #6470:
URL: https://github.com/apache/hadoop/pull/6470

> S3A: pass request timeouts down to sdk clients
> ----------------------------------------------
>
>                 Key: HADOOP-19045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19045
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> s3a client timeout settings are getting down to the http client, but not the
> sdk timeouts, so you can't have a longer timeout than the default. This
> surfaces in the inability to tune the timeouts for CreateSession calls even
> now that the latest SDK does pick it up

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
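For context, a hedged sketch of what "pass request timeouts down to sdk clients" looks like at the AWS SDK v2 API level; the Duration values are illustrative, not Hadoop's defaults:

```java
import java.time.Duration;
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// Sketch only: these are the SDK-level knobs that HTTP-client timeouts alone
// cannot raise, which is the gap the JIRA describes.
public final class SdkTimeoutSketch {
  static S3Client buildWithTimeouts() {
    ClientOverrideConfiguration override = ClientOverrideConfiguration.builder()
        // total budget for one logical API call, including retries
        .apiCallTimeout(Duration.ofMinutes(15))
        // per-attempt budget, so a single slow attempt is cut short and retried
        .apiCallAttemptTimeout(Duration.ofMinutes(5))
        .build();
    return S3Client.builder()
        .region(Region.US_EAST_1) // placeholder region for the sketch
        .overrideConfiguration(override)
        .build();
  }
}
```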
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812376#comment-17812376 ]

ASF GitHub Bot commented on HADOOP-19045:
-----------------------------------------

steveloughran commented on PR #6470:
URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1917224296

   ok, unrelated. we will need to fix that. let's try some other public dataset

> S3A: pass request timeouts down to sdk clients
> ----------------------------------------------
>
>                 Key: HADOOP-19045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19045
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> s3a client timeout settings are getting down to the http client, but not the
> sdk timeouts, so you can't have a longer timeout than the default. This
> surfaces in the inability to tune the timeouts for CreateSession calls even
> now that the latest SDK does pick it up

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812374#comment-17812374 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471439003

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -289,17 +290,36 @@ private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   Got it, will test this out today. Thanks a lot for the reviews!!

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case, region will be US_EAST_1), cross region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: Update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
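As a usage note: pinning fs.s3a.endpoint.region explicitly sidesteps the central-endpoint region parsing discussed above, since an explicit region always wins. A minimal sketch, with the bucket's region (us-west-2) assumed for illustration:

```java
import org.apache.hadoop.conf.Configuration;

// Illustrative only; the region value is an assumption about where the bucket lives.
public final class S3AEndpointConfigSketch {
  static Configuration s3aConf() {
    Configuration conf = new Configuration();
    // the global endpoint, as Spark may set when nothing is configured
    conf.set("fs.s3a.endpoint", "s3.amazonaws.com");
    // an explicit region takes precedence over anything parsed from the endpoint
    conf.set("fs.s3a.endpoint.region", "us-west-2");
    return conf;
  }
}
```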
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812373#comment-17812373 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

virajjasani commented on PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#issuecomment-1917189869

   > > for no. 5,
   > > endpoint s3-us-east-2.amazonaws.com and region us-east-2 (and null)
   > > unable to perform any operation, as expected (no central endpoint, no cross-region access)
   >
   > you should be able to perform all operations right?

   It's not central endpoint, and cross region access is also not enabled. Bucket is on us-west-2.

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case, region will be US_EAST_1), cross region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: Update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812372#comment-17812372 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471432558

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -289,17 +290,36 @@ private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   no, SDK will figure out the endpoint even if cross region is not enabled. cross region is only if you don't know the region, so we set a random region and enable it. it doesn't affect endpoint resolution behaviour afaik

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case, region will be US_EAST_1), cross region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: Update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
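A minimal sketch of the behaviour described in that comment, assuming the SDK v2 crossRegionAccessEnabled option; the starting region and bucket name are placeholders:

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadBucketRequest;

// Sketch: with cross-region access enabled, the region given at build time is
// only a starting point; on a redirect the SDK re-resolves the bucket's actual
// region and retries there, so the call can succeed even if the bucket lives
// in a different region than the one configured.
public final class CrossRegionSketch {
  public static void main(String[] args) {
    try (S3Client s3 = S3Client.builder()
        .region(Region.US_EAST_2)        // "random" starting region, as the comment puts it
        .crossRegionAccessEnabled(true)  // let the SDK follow the bucket's real region
        .build()) {
      // placeholder bucket; may actually live in, say, us-west-2
      s3.headBucket(HeadBucketRequest.builder().bucket("example-bucket").build());
    }
  }
}
```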
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812371#comment-17812371 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471431028

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -289,17 +290,36 @@ private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   Otherwise we will have the same problem, I suppose: e.g. a bucket on us-west-2 won't be accessible by the central endpoint and us-west-1 combination. It will only be accessible by the central endpoint and null region combination.

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case, region will be US_EAST_1), cross region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: Update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812368#comment-17812368 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471423884

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -289,17 +290,36 @@ private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   Even for a user-configured region, for the sdk to figure it out, we still need to enable cross region access, right?

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case, region will be US_EAST_1), cross region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: Update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812367#comment-17812367 ]

ASF GitHub Bot commented on HADOOP-19044:
-----------------------------------------

virajjasani commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471423884

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -289,17 +290,36 @@ private <BuilderT extends S3BaseClientBuilder<BuilderT, ClientT>, ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment:
   For the sdk to figure it out, we still need to enable cross region access, right?

> AWS SDK V2 - Update S3A region logic
> ------------------------------------
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case, region will be US_EAST_1), cross region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: Update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812364#comment-17812364 ]

ASF GitHub Bot commented on HADOOP-19045:
-----------------------------------------

virajjasani commented on PR #6470:
URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1917133434

   Not only that, when I checked for any recent regression by running the landsat-pds tests on a relatively old branch (PR #6406 has an outdated branch), it failed there too, with 403.

> S3A: pass request timeouts down to sdk clients
> ----------------------------------------------
>
>                 Key: HADOOP-19045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19045
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> s3a client timeout settings are getting down to the http client, but not the
> sdk timeouts, so you can't have a longer timeout than the default. This
> surfaces in the inability to tune the timeouts for CreateSession calls even
> now that the latest SDK does pick it up

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812363#comment-17812363 ]

ASF GitHub Bot commented on HADOOP-19045:
-----------------------------------------

virajjasani commented on PR #6470:
URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1917118379

   @steveloughran I am also getting 403 for landsat-pds since yesterday. I was planning to keep it in the backlog and look into it later on, but IAM has S3 full access and yet I see 403 Access Denied. Looks like something has changed globally.

> S3A: pass request timeouts down to sdk clients
> ----------------------------------------------
>
>                 Key: HADOOP-19045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19045
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> s3a client timeout settings are getting down to the http client, but not the
> sdk timeouts, so you can't have a longer timeout than the default. This
> surfaces in the inability to tune the timeouts for CreateSession calls even
> now that the latest SDK does pick it up

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17348. Enhance Log when checkLocations in RecoveryTaskStriped. [hadoop]
hadoop-yetus commented on PR #6485:
URL: https://github.com/apache/hadoop/pull/6485#issuecomment-1917065883

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 57s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 42m 19s | | trunk passed |
| +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 8s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 22s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 9s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 21s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 7s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 55s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 11s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 17s | | the patch passed |
| +1 :green_heart: | shadedclient | 34m 50s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 271m 59s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6485/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
| | | | 407m 32s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6485/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6485 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux dd2ed517106c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b5155ca3abaa6d320b8cc0f3fc90518c59d3d813 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6485/4/testReport/ |
| Max. process+thread count | 3508 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6485/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812342#comment-17812342 ]

ASF GitHub Bot commented on HADOOP-19045:
-----------------------------------------

hadoop-yetus commented on PR #6470:
URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1917034642

   :confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 19s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 48s | | trunk passed |
| +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 18s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 18s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 23s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 17s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 44s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 6s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 16s | | the patch passed |
| +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 19s | | the patch passed |
| +1 :green_heart: | compile | 0m 15s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 15s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 10s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 18s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 9s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 39s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 43s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 10s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 25s | | The patch does not generate ASF License warnings. |
| | | | 88m 11s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6470/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6470 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux e6cc280ba859 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 6113e9ab0fc69182a6e37cca403fc7cf5745daa9 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6470/6/testReport/ |
| Max. process+thread count | 551 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6470/6/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> S3A: pass request timeouts down to sdk clients
> ----------------------------------------------
>
>                 Key: HADOOP-19045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19045
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> s3a client timeout settings are getting down to the http client, but not the
> sdk timeouts, so you can't have a longer timeout than the default. This
> surfaces in the inability to tune the timeouts for CreateSession calls even
> now that the latest SDK does pick it up

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17353. Fix failing RBF module tests. [hadoop]
hadoop-yetus commented on PR #6491:
URL: https://github.com/apache/hadoop/pull/6491#issuecomment-1916976857

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 50s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 46m 39s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 29s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 41s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 22s | | trunk passed |
| +1 :green_heart: | shadedclient | 37m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 32s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 20s | | the patch passed |
| +1 :green_heart: | shadedclient | 38m 1s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 24m 59s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6491/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | | 162m 44s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6491/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6491 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 6b2fc8fb8df1 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / cdd5e9907fda28eb8cbb128c4048bdda73240cde |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6491/3/testReport/ |
| Max. process+thread count | 2074 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6491/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]
hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1916966515

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 36s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 48m 10s | | trunk passed |
| +1 :green_heart: | compile | 1m 28s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 18s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 33s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 11s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 40m 55s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 21s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/10/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | checkstyle | 1m 8s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 23s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 28s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 16s | | the patch passed |
| +1 :green_heart: | shadedclient | 34m 18s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 223m 16s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. |
| | | | 372m 36s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux c0a7f19e0fa7 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 6c1081a5fb506f9a96078569ee11a6a242bf7552 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/10/testReport/ |
| Max. process+thread count | 3981 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/10/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]
hadoop-yetus commented on PR #6509:
URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1916911462

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 20s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 27s | | trunk passed |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 44s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 0s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/11/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | checkstyle | 0m 28s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 36s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 42s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 3s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 194m 6s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 27s | | The patch does not generate ASF License warnings. |
| | | | 278m 56s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6509 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 1dac899ed063 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 64d91bcf6cefbda4b6f89517b06da48f2b3fefc5 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/11/testReport/ |
| Max. process+thread count | 4207 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/11/console |
| versions | git=2.25.1 ma
[jira] [Commented] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls
[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812309#comment-17812309 ]

Steve Loughran commented on HADOOP-18883:

OK. I think I've merged it everywhere and updated the fix versions to match.

> Expect-100 JDK bug resolution: prevent multiple server calls
>
>                 Key: HADOOP-18883
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18883
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.9, 3.5.0, 3.4.1
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>
> With the current implementation of HttpURLConnection, if the server rejects the
> "Expect 100-continue" handshake, a 'java.net.ProtocolException' is thrown from
> the 'expect100Continue()' method.
> After that exception is thrown, if we call any other method on the same instance
> (e.g. getHeaderField() or getHeaderFields()), it will internally call
> getOutputStream(), which invokes writeRequests(), which makes the actual server
> call.
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the
> expect-100 error, we consume the exception and let the code go ahead, so
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered
> after getOutputStream() has failed. These invocations will lead to server calls.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls
[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18883:
    Fix Version/s: 3.4.1
                   (was: 3.4.0)

> Expect-100 JDK bug resolution: prevent multiple server calls
>
>                 Key: HADOOP-18883
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18883
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.9, 3.5.0, 3.4.1
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>
> With the current implementation of HttpURLConnection, if the server rejects the
> "Expect 100-continue" handshake, a 'java.net.ProtocolException' is thrown from
> the 'expect100Continue()' method.
> After that exception is thrown, if we call any other method on the same instance
> (e.g. getHeaderField() or getHeaderFields()), it will internally call
> getOutputStream(), which invokes writeRequests(), which makes the actual server
> call.
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the
> expect-100 error, we consume the exception and let the code go ahead, so
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered
> after getOutputStream() has failed. These invocations will lead to server calls.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls
[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18883:
    Fix Version/s: 3.3.9

> Expect-100 JDK bug resolution: prevent multiple server calls
>
>                 Key: HADOOP-18883
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18883
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.9, 3.5.0
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>
> With the current implementation of HttpURLConnection, if the server rejects the
> "Expect 100-continue" handshake, a 'java.net.ProtocolException' is thrown from
> the 'expect100Continue()' method.
> After that exception is thrown, if we call any other method on the same instance
> (e.g. getHeaderField() or getHeaderFields()), it will internally call
> getOutputStream(), which invokes writeRequests(), which makes the actual server
> call.
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the
> expect-100 error, we consume the exception and let the code go ahead, so
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered
> after getOutputStream() has failed. These invocations will lead to server calls.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls
[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-18883:
    Fix Version/s: 3.4.0

> Expect-100 JDK bug resolution: prevent multiple server calls
>
>                 Key: HADOOP-18883
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18883
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.5.0
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>
> With the current implementation of HttpURLConnection, if the server rejects the
> "Expect 100-continue" handshake, a 'java.net.ProtocolException' is thrown from
> the 'expect100Continue()' method.
> After that exception is thrown, if we call any other method on the same instance
> (e.g. getHeaderField() or getHeaderFields()), it will internally call
> getOutputStream(), which invokes writeRequests(), which makes the actual server
> call.
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the
> expect-100 error, we consume the exception and let the code go ahead, so
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered
> after getOutputStream() has failed. These invocations will lead to server calls.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls
[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812306#comment-17812306 ]

ASF GitHub Bot commented on HADOOP-18883:

steveloughran merged PR #6511:
URL: https://github.com/apache/hadoop/pull/6511

> Expect-100 JDK bug resolution: prevent multiple server calls
>
>                 Key: HADOOP-18883
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18883
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Pranav Saxena
>            Assignee: Pranav Saxena
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.5.0
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>
> With the current implementation of HttpURLConnection, if the server rejects the
> "Expect 100-continue" handshake, a 'java.net.ProtocolException' is thrown from
> the 'expect100Continue()' method.
> After that exception is thrown, if we call any other method on the same instance
> (e.g. getHeaderField() or getHeaderFields()), it will internally call
> getOutputStream(), which invokes writeRequests(), which makes the actual server
> call.
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the
> expect-100 error, we consume the exception and let the code go ahead, so
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered
> after getOutputStream() has failed. These invocations will lead to server calls.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls [hadoop]
steveloughran merged PR #6511: URL: https://github.com/apache/hadoop/pull/6511 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
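[Editor's note] The hazard HADOOP-18883 describes is easy to reproduce outside ABFS. Below is a minimal, self-contained sketch of how one logical request can become two server calls once the Expect: 100-continue handshake is rejected; the URL and header name are illustrative, and this is not the ABFS code itself:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;

public class Expect100Demo {
  public static void main(String[] args) throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) new URL("https://example.invalid/upload").openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    // Ask the server for permission before sending the request body.
    conn.setRequestProperty("Expect", "100-continue");

    boolean expect100Failed = false;
    try (OutputStream out = conn.getOutputStream()) {
      out.write(new byte[]{1, 2, 3});
    } catch (ProtocolException e) {
      // The server rejected the 100-continue handshake: that was the
      // first server call.
      expect100Failed = true;
    }

    // Hazard per the JIRA description: getHeaderField() internally calls
    // getOutputStream() again, which re-sends the request -- a second,
    // unintended server call. Guarding on the earlier failure avoids it.
    if (!expect100Failed) {
      System.out.println("x-request-id: " + conn.getHeaderField("x-request-id"));
    }
  }
}
```

The merged patch applies the same idea inside the ABFS client: once the expect-100 write has failed, header lookups that would implicitly re-send the request are avoided.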
[jira] [Commented] (HADOOP-19049) Class loader leak caused by StatisticsDataReferenceCleaner thread
[ https://issues.apache.org/jira/browse/HADOOP-19049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812305#comment-17812305 ]

ASF GitHub Bot commented on HADOOP-19049:

steveloughran commented on PR #6488:
URL: https://github.com/apache/hadoop/pull/6488#issuecomment-1916823954

Those Java links have convinced me; StreamCloser in the JDK does exactly this. If you can do a test for this, fine, but it may be too hard to write a test for... in which case I will merge as is.

> Class loader leak caused by StatisticsDataReferenceCleaner thread
>
>                 Key: HADOOP-19049
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19049
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.3.6
>            Reporter: Jia Fan
>            Priority: Major
>              Labels: pull-request-available
>
> The "org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner"
> daemon thread is created by FileSystem.
> This is fine if the thread's context class loader is the system class loader,
> but it's bad if the context class loader is a custom class loader. The
> reference held by this daemon thread means that the class loader can never
> become eligible for GC.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19049. Fix StatisticsDataReferenceCleaner classloader leak [hadoop]
steveloughran commented on PR #6488:
URL: https://github.com/apache/hadoop/pull/6488#issuecomment-1916823954

Those Java links have convinced me; StreamCloser in the JDK does exactly this. If you can do a test for this, fine, but it may be too hard to write a test for... in which case I will merge as is.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
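[Editor's note] For readers unfamiliar with the leak mechanism HADOOP-19049 describes: a Java thread captures the creating thread's context class loader at construction time, so a long-lived daemon thread started from code loaded by a webapp or plugin class loader pins that loader for the life of the JVM. A minimal sketch follows; the class and thread names are illustrative, not the Hadoop code, and the final line shows a common remedy for such leaks:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ContextClassLoaderLeakDemo {
  public static void main(String[] args) {
    // Stand-in for a webapp/plugin class loader that should be unloadable.
    ClassLoader custom = new URLClassLoader(new URL[0],
        ContextClassLoaderLeakDemo.class.getClassLoader());
    Thread.currentThread().setContextClassLoader(custom);

    // A long-lived daemon thread created now inherits 'custom' as its
    // context class loader, keeping the loader strongly reachable for
    // the JVM's lifetime -- the leak described in the issue.
    Thread cleaner = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        try {
          Thread.sleep(60_000L);
        } catch (InterruptedException e) {
          return;
        }
      }
    }, "reference-cleaner-demo");
    cleaner.setDaemon(true);

    // Common remedy: drop the inherited loader before starting the thread,
    // so it holds no reference back to the creator's class loader.
    cleaner.setContextClassLoader(null);
    cleaner.start();
  }
}
```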
Re: [PR] HDFS-17360. Record the number of times a block is read during a certain time period. [hadoop]
hadoop-yetus commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1916822235

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 23s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| -1 :x: | mvninstall | 33m 7s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/7/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 45s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 46s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 40s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 54s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 42s | | the patch passed |
| +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 39s | | the patch passed |
| +1 :green_heart: | compile | 0m 41s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 41s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 34s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 39s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 42s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 54s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 204m 23s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 27s | | The patch does not generate ASF License warnings. |
| | | | 296m 6s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
| | hadoop.hdfs.TestDFSStripedOutputStream |
| | hadoop.hdfs.server.namenode.TestAuditLogs |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6505 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux c79c577e83a7 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 973a9034cb1e1b37e4c3f21dd01e276eddb8313a |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/7/testReport/ |
| Max. process+thread count | 4427 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812303#comment-17812303 ]

ASF GitHub Bot commented on HADOOP-19045:

steveloughran commented on PR #6470:
URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1916810101

S3 london with `-Dparallel-tests -DtestsThreadCount=8`. The landsat tests are failing; they're also failing on command-line installations I've set up, so either something bad has happened with regions/endpoints or there's been a change there.

```
[ERROR] Failures:
[ERROR] ITestS3ACommitterFactory.testEverything:112->testImplicitFileBinding:127->assertFactoryCreatesExpectedCommitter:187->Assert.assertEquals:120->Assert.failNotEquals:835->Assert.fail:89 Wrong Committer from factory expected: but was:
[ERROR] Errors:
[ERROR] ITestS3AAWSCredentialsProvider.testAnonymousProvider:180 » IllegalArgument An ...
[ERROR] ITestS3AConfiguration.testS3SpecificSignerOverride:577 » SdkClient Unable to l...
[ERROR] ITestS3AFailureHandling.testMultiObjectDeleteNoPermissions:175 » IllegalArgument
[ERROR] ITestS3AFailureHandling.testSingleObjectDeleteNoPermissionsTranslated:210 » IllegalArgument
[ERROR] ITestDelegatedMRJob.testJobSubmissionCollectsTokens:247 » IllegalArgument An e...
[ERROR] ITestDelegatedMRJob.testJobSubmissionCollectsTokens:247 » IllegalArgument An e...
[ERROR] ITestSessionDelegationInFilesystem.testDelegatedFileSystem:347->readLandsatMetadata:614 » AccessDenied
[ERROR] ITestS3GuardTool.testLandsatBucketRequireEncrypted:85->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129 » IllegalArgument
[ERROR] ITestS3GuardTool.testLandsatBucketRequireGuarded:68->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129 » IllegalArgument
[ERROR] ITestS3GuardTool.testLandsatBucketRequireUnencrypted:78->AbstractS3GuardToolTestBase.run:114 » IllegalArgument
[ERROR] ITestS3GuardTool.testLandsatBucketUnguarded:61->AbstractS3GuardToolTestBase.run:114 » IllegalArgument
[ERROR] ITestAWSStatisticCollection.testLandsatStatistics:56 » AccessDenied s3a://land...
[INFO]
```

> S3A: pass request timeouts down to sdk clients
>
>                 Key: HADOOP-19045
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19045
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>
> S3A client timeout settings are getting down to the http client, but not to the
> sdk timeouts, so you can't have a longer timeout than the default. This surfaces
> in the inability to tune the timeouts for CreateSession calls, even now that the
> latest SDK does pick it up.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19045. S3A: Validate CreateSession Timeout Propagation [hadoop]
steveloughran commented on PR #6470:
URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1916810101

S3 london with `-Dparallel-tests -DtestsThreadCount=8`. The landsat tests are failing; they're also failing on command-line installations I've set up, so either something bad has happened with regions/endpoints or there's been a change there.

```
[ERROR] Failures:
[ERROR] ITestS3ACommitterFactory.testEverything:112->testImplicitFileBinding:127->assertFactoryCreatesExpectedCommitter:187->Assert.assertEquals:120->Assert.failNotEquals:835->Assert.fail:89 Wrong Committer from factory expected: but was:
[ERROR] Errors:
[ERROR] ITestS3AAWSCredentialsProvider.testAnonymousProvider:180 » IllegalArgument An ...
[ERROR] ITestS3AConfiguration.testS3SpecificSignerOverride:577 » SdkClient Unable to l...
[ERROR] ITestS3AFailureHandling.testMultiObjectDeleteNoPermissions:175 » IllegalArgument
[ERROR] ITestS3AFailureHandling.testSingleObjectDeleteNoPermissionsTranslated:210 » IllegalArgument
[ERROR] ITestDelegatedMRJob.testJobSubmissionCollectsTokens:247 » IllegalArgument An e...
[ERROR] ITestDelegatedMRJob.testJobSubmissionCollectsTokens:247 » IllegalArgument An e...
[ERROR] ITestSessionDelegationInFilesystem.testDelegatedFileSystem:347->readLandsatMetadata:614 » AccessDenied
[ERROR] ITestS3GuardTool.testLandsatBucketRequireEncrypted:85->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129 » IllegalArgument
[ERROR] ITestS3GuardTool.testLandsatBucketRequireGuarded:68->AbstractS3GuardToolTestBase.runToFailure:128->AbstractS3GuardToolTestBase.lambda$runToFailure$0:129 » IllegalArgument
[ERROR] ITestS3GuardTool.testLandsatBucketRequireUnencrypted:78->AbstractS3GuardToolTestBase.run:114 » IllegalArgument
[ERROR] ITestS3GuardTool.testLandsatBucketUnguarded:61->AbstractS3GuardToolTestBase.run:114 » IllegalArgument
[ERROR] ITestAWSStatisticCollection.testLandsatStatistics:56 » AccessDenied s3a://land...
[INFO]
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
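[Editor's note] On the substance of HADOOP-19045 (timeouts must reach the SDK itself, not just the underlying HTTP client), here is a minimal sketch of setting request-level timeouts on an AWS SDK v2 S3 client. The region and durations are illustrative, and this is not the S3A wiring itself:

```java
import java.time.Duration;

import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class S3TimeoutDemo {
  public static void main(String[] args) {
    // apiCallAttemptTimeout bounds each HTTP attempt; apiCallTimeout bounds
    // the whole call including retries. These are distinct from the HTTP
    // client's connection/socket timeouts, which is exactly the gap the
    // JIRA describes.
    ClientOverrideConfiguration overrides = ClientOverrideConfiguration.builder()
        .apiCallAttemptTimeout(Duration.ofSeconds(60))
        .apiCallTimeout(Duration.ofMinutes(5))
        .build();

    S3Client s3 = S3Client.builder()
        .region(Region.EU_WEST_2)
        .overrideConfiguration(overrides)
        .build();

    System.out.println("client configured: " + s3);
  }
}
```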
Re: [PR] HDFS-17360. Record the number of times a block is read during a certain time period. [hadoop]
hadoop-yetus commented on PR #6505:
URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1916809170

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 22s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 26s | | trunk passed |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 36s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 41s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 0s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 46s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 55s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 34s | | the patch passed |
| +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 33s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/6/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 322 unchanged - 0 fixed = 328 total (was 322) |
| +1 :green_heart: | mvnsite | 0m 43s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 57s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 11s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 203m 50s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 23s | | The patch does not generate ASF License warnings. |
| | | | 294m 23s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.namenode.TestAddStripedBlocks |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6505 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 6a820b7f4e6c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 42aa99c807ed521cf66a1be5308b77673d17d5e4 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6505/6/testReport/ |
| Max. process+thread count | 4541 (vs. ulimit of
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812288#comment-17812288 ]

ASF GitHub Bot commented on HADOOP-19044:

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471137568

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);
+

Review Comment: should we do `removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);` and then set it to us-west-2 or something, just to ensure this always gets tested with a different region?

> AWS SDK V2 - Update S3A region logic
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471137568

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);
+

Review Comment: should we do `removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);` and then set it to us-west-2 or something, just to ensure this always gets tested with a different region?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812285#comment-17812285 ]

ASF GitHub Bot commented on HADOOP-19044:

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471137568

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);
+

Review Comment: should we do `removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);` and then set it to us-west-2 or something, just to ensure this always gets tested with a different region?

> AWS SDK V2 - Update S3A region logic
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471137568

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -257,6 +275,65 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    final Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT);
+

Review Comment: should we do `removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);` and then set it to us-west-2 or something, just to ensure this always gets tested with a different region?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812284#comment-17812284 ]

ASF GitHub Bot commented on HADOOP-19044:

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471135658

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -257,6 +275,33 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+    Configuration newConf = new Configuration(conf);
+
+    newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+
+    newFS = new S3AFileSystem();
+    newFS.initialize(getFileSystem().getUri(), newConf);
+
+    final String file = getMethodName();

Review Comment: ack, that makes sense, thanks for explaining. Happy to keep as is.

> AWS SDK V2 - Update S3A region logic
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471135658

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -257,6 +275,33 @@ public void testWithVPCE() throws Throwable {
     expectInterceptorException(client);
   }

+  @Test
+  public void testCentralEndpointCrossRegionAccess() throws Throwable {
+    describe("Create bucket on different region and access it using central endpoint");
+    Configuration conf = getConfiguration();
+    removeBaseAndBucketOverrides(conf, ENDPOINT, AWS_REGION);
+
+    Configuration newConf = new Configuration(conf);
+
+    newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
+
+    newFS = new S3AFileSystem();
+    newFS.initialize(getFileSystem().getUri(), newConf);
+
+    final String file = getMethodName();

Review Comment: ack, that makes sense, thanks for explaining. Happy to keep as is.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812283#comment-17812283 ]

ASF GitHub Bot commented on HADOOP-19044:

ahmarsuhail commented on PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#issuecomment-1916744338

for no. 5,

> endpoint s3-us-east-2.amazonaws.com and region us-east-2 (and null) unable to perform any operation, as expected (no central endpoint, no cross-region access)

you should be able to perform all operations, right?

> AWS SDK V2 - Update S3A region logic
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
ahmarsuhail commented on PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#issuecomment-1916744338

for no. 5,

> endpoint s3-us-east-2.amazonaws.com and region us-east-2 (and null) unable to perform any operation, as expected (no central endpoint, no cross-region access)

you should be able to perform all operations, right?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812282#comment-17812282 ]

ASF GitHub Bot commented on HADOOP-19044:

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471123277

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -146,7 +150,21 @@ public void testCentralEndpoint() throws Throwable {
     describe("Create a client with the central endpoint");
     Configuration conf = getConfiguration();

-    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_1, false);
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_2, false);
+
+    expectInterceptorException(client);
+  }
+
+  @Test
+  public void testCentralEndpointWithRegion() throws Throwable {
+    describe("Create a client with the central endpoint but also specify region");
+    Configuration conf = getConfiguration();
+
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, US_WEST_2, US_EAST_2, false);

Review Comment: for example here, if the configured region is US_WEST_2, the expected region should also be US_WEST_2, not US_EAST_2.

> AWS SDK V2 - Update S3A region logic
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3a.endpoint to s3.amazonaws.com here:
> [https://github.com/apache/spark/blob/9a2f39318e3af8b3817dc5e4baf52e548d82063c/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala#L540]
>
> HADOOP-18908 updated the region logic such that if fs.s3a.endpoint.region is
> set, or if a region can be parsed from fs.s3a.endpoint (which will happen in
> this case; the region will be US_EAST_1), cross-region access is not enabled.
> This will cause 400 errors if the bucket is not in US_EAST_1.
>
> Proposed: update the logic so that if the endpoint is the global
> s3.amazonaws.com, cross-region access is enabled.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471123277

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -146,7 +150,21 @@ public void testCentralEndpoint() throws Throwable {
     describe("Create a client with the central endpoint");
     Configuration conf = getConfiguration();

-    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_1, false);
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_2, false);
+
+    expectInterceptorException(client);
+  }
+
+  @Test
+  public void testCentralEndpointWithRegion() throws Throwable {
+    describe("Create a client with the central endpoint but also specify region");
+    Configuration conf = getConfiguration();
+
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, US_WEST_2, US_EAST_2, false);

Review Comment: for example here, if the configured region is US_WEST_2, the expected region should also be US_WEST_2, not US_EAST_2.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19044) AWS SDK V2 - Update S3A region logic
[ https://issues.apache.org/jira/browse/HADOOP-19044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812281#comment-17812281 ]

ASF GitHub Bot commented on HADOOP-19044:

ahmarsuhail commented on code in PR #6479:
URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471123903

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##########

@@ -289,17 +290,36 @@ private , ClientT> void
     builder.fipsEnabled(fipsEnabled);

     if (endpoint != null) {
+      boolean overrideEndpoint = true;
       checkArgument(!fipsEnabled,
           "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
-      builder.endpointOverride(endpoint);
-      // No region was configured, try to determine it from the endpoint.
-      if (region == null) {
-        region = getS3RegionFromEndpoint(parameters.getEndpoint());
+      boolean endpointEndsWithCentral =
+          endpointStr.endsWith(CENTRAL_ENDPOINT);
+      // No region was configured or the endpoint is central,
+      // determine the region from the endpoint.
+      if (region == null || endpointEndsWithCentral) {

Review Comment: hmm, not sure about this. Now we're parsing the region if the region is null or the endpoint is s3.amazonaws.com. So if you set `s3.amazonaws.com` and the region to eu-west-2, you still end up with us setting the region to `us-east-2` and cross-region access enabled. My thinking here is that a lot of people may have the endpoint set to s3.amazonaws.com (as, at least with SDK V1, it was harmless to do that, I think). We only want to get into this parsing if region == null, so let's revert to the previous condition here. And we never want to override the endpoint if it is s3.amazonaws.com.

Suggested:

```
if (endpoint != null) {
  checkArgument(!fipsEnabled,
      "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint);
  boolean endpointEndsWithCentral =
      endpointStr.endsWith(CENTRAL_ENDPOINT);

  // No region was configured or the endpoint is central,
  // determine the region from the endpoint.
  if (region == null) {
    region = getS3RegionFromEndpoint(endpointStr,
        endpointEndsWithCentral);
    if (region != null) {
      origin = "endpoint";
      if (endpointEndsWithCentral) {
        builder.crossRegionAccessEnabled(true);
        origin = "origin with cross region access";
        LOG.debug("Enabling cross region access for endpoint {}",
            endpointStr);
      }
    }
  }

  // No need to override endpoint with "s3.amazonaws.com".
  // Let the client take care of endpoint resolution. Overriding
  // the endpoint with "s3.amazonaws.com" causes 400 Bad Request
  // errors for non-existent buckets and objects.
  // ref: https://github.com/aws/aws-sdk-java-v2/issues/4846
  if (!endpointEndsWithCentral) {
    builder.endpointOverride(endpoint);
    LOG.debug("Setting endpoint to {}", endpoint);
  }
}
```

So now:
1) if the endpoint is s3.amazonaws.com and the region is null, set the region to US_EAST_2, enable cross-region access, and don't override the endpoint.
2) if the endpoint is s3.amazonaws.com and the region is set (e.g. to eu-west-1), set the region but do not override the endpoint; let the SDK figure it out.

##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##########

@@ -146,7 +150,21 @@ public void testCentralEndpoint() throws Throwable {
     describe("Create a client with the central endpoint");
     Configuration conf = getConfiguration();

-    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_1, false);
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_2, false);
+
+    expectInterceptorException(client);
+  }
+
+  @Test
+  public void testCentralEndpointWithRegion() throws Throwable {
+    describe("Create a client with the central endpoint but also specify region");
+    Configuration conf = getConfiguration();
+
+    S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, US_WEST_2, US_EAST_2, false);

Review Comment: for example here, if the configured region is US_WEST_2, the expected region should also be US_EAST_2.

> AWS SDK V2 - Update S3A region logic
>
>                 Key: HADOOP-19044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19044
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>
> If both fs.s3a.endpoint & fs.s3a.endpoint.region are empty, Spark will set
> fs.s3
Re: [PR] HADOOP-19044. AWS SDK V2 - Update S3A region logic [hadoop]
ahmarsuhail commented on code in PR #6479: URL: https://github.com/apache/hadoop/pull/6479#discussion_r1471123903 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java: ## @@ -289,17 +290,36 @@ private , ClientT> void builder.fipsEnabled(fipsEnabled); if (endpoint != null) { + boolean overrideEndpoint = true; checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint); - builder.endpointOverride(endpoint); - // No region was configured, try to determine it from the endpoint. - if (region == null) { -region = getS3RegionFromEndpoint(parameters.getEndpoint()); + boolean endpointEndsWithCentral = + endpointStr.endsWith(CENTRAL_ENDPOINT); + // No region was configured or the endpoint is central, + // determine the region from the endpoint. + if (region == null || endpointEndsWithCentral) { Review Comment: hmm, not sure about this. now we're parsing region if region is null or endpoint = s3.amazonaws.com. So if you set `s3.amazonaws.com` and region to eu-west-2, you still end up with us setting the region to `us-east-2` and cross region enabled. My thinking here is that a lot of people may have endpoint set to s3.amazonaws.com (as atleast with SDK V1 it was harmless to do that I think) . we only want to get into this parsing if region == null. so let's revert to the previous condition here. And then we never don't want to override if the endpoint is s3.amazonaws.com. Suggested: ``` if (endpoint != null) { checkArgument(!fipsEnabled, "%s : %s", ERROR_ENDPOINT_WITH_FIPS, endpoint); boolean endpointEndsWithCentral = endpointStr.endsWith(CENTRAL_ENDPOINT); // No region was configured or the endpoint is central, // determine the region from the endpoint. if (region == null) { region = getS3RegionFromEndpoint(endpointStr, endpointEndsWithCentral); if (region != null) { origin = "endpoint"; if (endpointEndsWithCentral) { builder.crossRegionAccessEnabled(true); origin = "origin with cross region access"; LOG.debug("Enabling cross region access for endpoint {}", endpointStr); } } } // No need to override endpoint with "s3.amazonaws.com". // Let the client take care of endpoint resolution. Overriding // the endpoint with "s3.amazonaws.com" causes 400 Bad Request // errors for non-existent buckets and objects. // ref: https://github.com/aws/aws-sdk-java-v2/issues/4846 if (!endpointEndsWithCentral) { builder.endpointOverride(endpoint); LOG.debug("Setting endpoint to {}", endpoint); } } ``` So now: 1) if endpoint = s3.amazonaws.com and region is null, set to US_EAST_2 and enable cross region, and don't override endpoint. 
2) if endpoint = s3.amazonaws.com and region is set (eg to eu-west-1), set the region but do not override the endpoint ... let the SDK figure it out ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java: ## @@ -146,7 +150,21 @@ public void testCentralEndpoint() throws Throwable { describe("Create a client with the central endpoint"); Configuration conf = getConfiguration(); -S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_1, false); +S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, null, US_EAST_2, false); + +expectInterceptorException(client); + } + + @Test + public void testCentralEndpointWithRegion() throws Throwable { +describe("Create a client with the central endpoint but also specify region"); +Configuration conf = getConfiguration(); + +S3Client client = createS3Client(conf, CENTRAL_ENDPOINT, US_WEST_2, US_EAST_2, false); Review Comment: for example here, if the configured region is US_WEST_2, the expected region should also be US_WEST_2 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
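For illustration, the resolution rules proposed in the review above can be summarised in a short, self-contained sketch. The class, field, and simplified region-parsing logic below are assumptions made for readability; this is not the DefaultS3ClientFactory code from PR #6479.

```java
import java.net.URI;

/** Minimal sketch of the proposed endpoint/region resolution rules. */
public class EndpointRegionSketch {

  private static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

  static final class Resolution {
    String region;              // region the client would be configured with
    boolean overrideEndpoint;   // whether endpointOverride() would be called
    boolean crossRegionAccess;  // whether cross-region access would be enabled
  }

  static Resolution resolve(String endpoint, String configuredRegion) {
    Resolution r = new Resolution();
    boolean central = endpoint != null && endpoint.endsWith(CENTRAL_ENDPOINT);
    if (configuredRegion != null) {
      // Rule 2: an explicitly configured region always wins.
      r.region = configuredRegion;
    } else if (central) {
      // Rule 1: central endpoint + no region -> us-east-2 with
      // cross-region access enabled.
      r.region = "us-east-2";
      r.crossRegionAccess = true;
    } else if (endpoint != null) {
      // Guess the region from a regional endpoint such as
      // s3.eu-west-1.amazonaws.com (simplified parsing, illustration only).
      String[] parts = URI.create("https://" + endpoint).getHost().split("\\.");
      if (parts.length >= 4 && parts[0].equals("s3")) {
        r.region = parts[1];
      }
    }
    // Never override the central endpoint; let the SDK resolve it, avoiding
    // the 400 Bad Request behaviour referenced in the review.
    r.overrideEndpoint = endpoint != null && !central;
    return r;
  }

  public static void main(String[] args) {
    Resolution a = resolve(CENTRAL_ENDPOINT, null);
    System.out.println(a.region + " cross=" + a.crossRegionAccess
        + " override=" + a.overrideEndpoint); // us-east-2 cross=true override=false
    Resolution b = resolve(CENTRAL_ENDPOINT, "eu-west-1");
    System.out.println(b.region + " override=" + b.overrideEndpoint); // eu-west-1 override=false
  }
}
```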
Re: [PR] HDFS-17333. DFSClient support lazy resolve host->ip. [hadoop]
KeeProMise commented on PR #6430: URL: https://github.com/apache/hadoop/pull/6430#issuecomment-1916683372 Hi @tasanuma @Hexiaoqiao @zhangshuyan0, please kindly review this PR as well if you have the bandwidth. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
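The PR title refers to deferring DNS resolution of namenode/datanode hostnames until a connection is actually made. Below is a hedged sketch of that general technique, using a hypothetical wrapper class; it is not the DFSClient change in PR #6430.

```java
import java.net.InetSocketAddress;

/** Sketch: defer host->ip resolution until the address is really needed. */
public class LazyResolveSketch {
  private final String host;
  private final int port;
  private volatile InetSocketAddress resolved;

  public LazyResolveSketch(String host, int port) {
    this.host = host;
    this.port = port;
  }

  /** Cheap: building the address triggers no DNS lookup. */
  public InetSocketAddress unresolved() {
    return InetSocketAddress.createUnresolved(host, port);
  }

  /** Resolves on first use and caches the result. */
  public InetSocketAddress resolve() {
    InetSocketAddress addr = resolved;
    if (addr == null || addr.isUnresolved()) {
      addr = new InetSocketAddress(host, port); // DNS lookup happens here
      resolved = addr;
    }
    return addr;
  }
}
```

With many nodes, constructing only unresolved addresses up front avoids one blocking DNS lookup per host at client start-up; the cost is paid lazily, per connection.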
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812272#comment-17812272 ] ASF GitHub Bot commented on HADOOP-19045: - steveloughran commented on PR #6470: URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1916662486 something has gone wrong with my setup testing against s3a://landsat-pds/ right now: 403 on all access. Thought it was my S3 Select stuff, but it seems to blow up for me everywhere now. Anyway, doing this merge without worrying about it, as it did work yesterday. Maybe I've gone and broken my test setup. > S3A: pass request timeouts down to sdk clients > -- > > Key: HADOOP-19045 > URL: https://issues.apache.org/jira/browse/HADOOP-19045 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > > s3a client timeout settings are getting down to the http client, but not to the sdk > timeouts, so you can't have a longer timeout than the default. This surfaces > in the inability to tune the timeouts for CreateSession calls, even now that the > latest SDK does pick it up -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
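To make the gap concrete: in AWS SDK v2, per-request timeouts live on the client's override configuration, separate from the HTTP client's socket timeouts. Below is a hedged sketch of wiring a configured timeout through to the SDK client; the builder wiring and the 90-second value are illustrative, not the actual patch in PR #6470.

```java
import java.time.Duration;
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

/** Sketch: pass a configured request timeout down to the SDK client. */
public class SdkTimeoutSketch {
  public static S3Client buildClient(Duration requestTimeout) {
    ClientOverrideConfiguration override = ClientOverrideConfiguration.builder()
        // Upper bound on the whole API call, including retries.
        .apiCallTimeout(requestTimeout)
        // Upper bound on each individual attempt (each HTTP request).
        .apiCallAttemptTimeout(requestTimeout)
        .build();
    return S3Client.builder()
        .region(Region.US_EAST_1)
        .overrideConfiguration(override)
        .build();
  }

  public static void main(String[] args) {
    // e.g. a duration read from a setting such as fs.s3a.connection.request.timeout
    try (S3Client client = buildClient(Duration.ofSeconds(90))) {
      // this client may now wait longer than the SDK defaults allow
    }
  }
}
```

Without this, only the HTTP-level timeouts are tuned, so long-running calls such as CreateSession cannot be given more time than the SDK default.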
Re: [PR] HADOOP-19045. S3A: Validate CreateSession Timeout Propagation [hadoop]
steveloughran commented on PR #6470: URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1916662486 something has gone wrong with my setup testing against s3a://landsat-pds/ right now: 403 on all access. Thought it was my S3 Select stuff, but it seems to blow up for me everywhere now. Anyway, doing this merge without worrying about it, as it did work yesterday. Maybe I've gone and broken my test setup. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19045) S3A: pass request timeouts down to sdk clients
[ https://issues.apache.org/jira/browse/HADOOP-19045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17812271#comment-17812271 ] ASF GitHub Bot commented on HADOOP-19045: - hadoop-yetus commented on PR #6470: URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1916661209 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 15s | | https://github.com/apache/hadoop/pull/6470 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/6470 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6470/5/console | | versions | git=2.34.1 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. > S3A: pass request timeouts down to sdk clients > -- > > Key: HADOOP-19045 > URL: https://issues.apache.org/jira/browse/HADOOP-19045 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > > s3a client timeout settings are getting down to the http client, but not to the sdk > timeouts, so you can't have a longer timeout than the default. This surfaces > in the inability to tune the timeouts for CreateSession calls, even now that the > latest SDK does pick it up -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19045. S3A: Validate CreateSession Timeout Propagation [hadoop]
hadoop-yetus commented on PR #6470: URL: https://github.com/apache/hadoop/pull/6470#issuecomment-1916661209 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 15s | | https://github.com/apache/hadoop/pull/6470 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/6470 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6470/5/console | | versions | git=2.34.1 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]
hadoop-yetus commented on PR #6509: URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1916570884 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 47m 39s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 12s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 39m 51s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/9/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 1m 1s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/9/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 :green_heart: | mvnsite | 1m 12s | | the patch passed | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 39m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 253m 49s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 405m 54s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6509 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 1f5d84414741 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 96ecedc969d91205d9cd40f9045a1b8d3538926b | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]
hadoop-yetus commented on PR #6509: URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1916558695 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 46m 24s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 11s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 39m 29s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/7/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 1m 1s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 22s | | the patch passed | | +1 :green_heart: | shadedclient | 39m 46s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 249m 58s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 400m 47s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestRollingUpgrade | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/7/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6509 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 26ad0358d483 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 96ecedc969d91205d9cd40f9045a1b8d3538926b | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/P
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]
hadoop-yetus commented on PR #6509: URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1916474196 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 47s | | trunk passed | | +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 9s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 22s | | trunk passed | | +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 34m 39s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 8s | | the patch passed | | +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 4s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 1m 4s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/6/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 56s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/6/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 :green_heart: | mvnsite | 1m 12s | | the patch passed | | +1 :green_heart: | javadoc | 0m 53s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 14s | | the patch passed | | +1 :green_heart: | shadedclient | 34m 14s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 225m 0s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. 
| | | | 359m 49s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestLeaseRecoveryStriped | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6509 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux f19d056e8500 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 96ecedc969d91205d9cd40f9045a1b8d3538926b | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/6/testReport/ | | Max. process+t
Re: [PR] HDFS-17348. Enhance Log when checkLocations in RecoveryTaskStriped. [hadoop]
hfutatzhanghb commented on PR #6485: URL: https://github.com/apache/hadoop/pull/6485#issuecomment-1916397746 Moved the changes to HDFS-17358. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17348. Enhance Log when checkLocations in RecoveryTaskStriped. [hadoop]
hfutatzhanghb closed pull request #6485: HDFS-17348. Enhance Log when checkLocations in RecoveryTaskStriped. URL: https://github.com/apache/hadoop/pull/6485 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]
hadoop-yetus commented on PR #6509: URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1916349542 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 55s | | trunk passed | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 37s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 44s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 3s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 42s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 25s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 36s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 36s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/8/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 28s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/8/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 39s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 205m 23s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 291m 41s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestLeaseRecoveryStriped | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6509 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux af6ba03f974e 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 96ecedc969d91205d9cd40f9045a1b8d3538926b | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-
Re: [PR] HDFS-17358. EC: infinite lease recovery caused by the length of RWR equals to zero. [hadoop]
hadoop-yetus commented on PR #6509: URL: https://github.com/apache/hadoop/pull/6509#issuecomment-1916338179 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 57s | | trunk passed | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 38s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 48s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 32s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/5/artifact/out/blanks-eol.txt) | The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 28s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) | | +1 :green_heart: | mvnsite | 0m 36s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 57s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 40s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 17s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 200m 33s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 27s | | The patch does not generate ASF License warnings. 
| | | | 287m 9s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6509/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6509 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux fac5502bec56 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 96ecedc969d91205d9cd40f9045a1b8d3538926b | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-
Re: [PR] HDFS-17360. Record the number of times a block is read during a certain time period. [hadoop]
huangzhaobo99 commented on PR #6505: URL: https://github.com/apache/hadoop/pull/6505#issuecomment-1916294300 > @huangzhaobo99 Thanks for your reply. If this feature is practical in your scenario, I suggest adding a switch for this feature. @zhangshuyan0 Thanks for the guidance; I have added a switch for this feature. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
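As an illustration of such a feature switch, here is a minimal sketch with a hypothetical configuration key and counter; the actual key and class names in HDFS-17360 may well differ.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import org.apache.hadoop.conf.Configuration;

/** Sketch: a config switch gating per-block read counting (names hypothetical). */
public class BlockReadCountSketch {
  // Defaults to off, so the feature costs nothing unless explicitly enabled.
  public static final String BLOCK_READ_COUNT_ENABLED =
      "dfs.datanode.block-read-count.enabled";

  private final boolean enabled;
  private final ConcurrentHashMap<Long, LongAdder> readCounts = new ConcurrentHashMap<>();

  public BlockReadCountSketch(Configuration conf) {
    this.enabled = conf.getBoolean(BLOCK_READ_COUNT_ENABLED, false);
  }

  /** Called on each block read; a no-op when the switch is off. */
  public void onBlockRead(long blockId) {
    if (enabled) {
      readCounts.computeIfAbsent(blockId, id -> new LongAdder()).increment();
    }
  }

  /** Number of reads recorded for a block in the current window. */
  public long readCount(long blockId) {
    LongAdder adder = readCounts.get(blockId);
    return adder == null ? 0 : adder.sum();
  }
}
```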