[ https://issues.apache.org/jira/browse/HADOOP-17198?focusedWorklogId=720869&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-720869 ]

ASF GitHub Bot logged work on HADOOP-17198:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 04/Feb/22 13:40
            Start Date: 04/Feb/22 13:40
    Worklog Time Spent: 10m 
      Work Description: steveloughran commented on pull request #3958:
URL: https://github.com/apache/hadoop/pull/3958#issuecomment-1029994933


   hmm
   ```
   [ERROR] testAccessPointRequired(org.apache.hadoop.fs.s3a.ITestS3ABucketExistence)  Time elapsed: 0.768 s  <<< ERROR!
   java.lang.IllegalArgumentException: The region field of the ARN being passed as a bucket parameter to an S3 operation does not match the region the client was configured with. Provided region: 'eu-west-1'; client region: 'accesspoint-eu-west-1'.
        at com.amazonaws.services.s3.AmazonS3Client.validateIsTrue(AmazonS3Client.java:6584)
        at com.amazonaws.services.s3.AmazonS3Client.validateS3ResourceArn(AmazonS3Client.java:5155)
        at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4956)
        at com.amazonaws.services.s3.AmazonS3Client.createRequest(AmazonS3Client.java:4920)
        at com.amazonaws.services.s3.AmazonS3Client.getAcl(AmazonS3Client.java:4040)
        at com.amazonaws.services.s3.AmazonS3Client.getBucketAcl(AmazonS3Client.java:1278)
        at com.amazonaws.services.s3.AmazonS3Client.getBucketAcl(AmazonS3Client.java:1268)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExistsV2$2(S3AFileSystem.java:731)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:119)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:348)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:440)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:344)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:319)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.verifyBucketExistsV2(S3AFileSystem.java:724)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.doBucketProbing(S3AFileSystem.java:611)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:506)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:537)
        at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.lambda$testAccessPointRequired$14(ITestS3ABucketExistence.java:189)
        at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:498)
        at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:384)
        at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.expectUnknownStore(ITestS3ABucketExistence.java:103)
        at org.apache.hadoop.fs.s3a.ITestS3ABucketExistence.testAccessPointRequired(ITestS3ABucketExistence.java:188)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
   
   [INFO]
   ```
   SDK merge pain: the access point ARN's region no longer matches what the client was configured with. The 3.3.2 patch is actually easier here.
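
   The failure comes from the AWS SDK's `validateS3ResourceArn` check: when an access point ARN is passed where a bucket name is expected, the region field embedded in the ARN must equal the region the `AmazonS3Client` was configured with. A self-contained sketch of that validation (hypothetical helper `ArnRegionCheck`, not the SDK's actual code) showing how the mismatch above arises:

   ```java
   public class ArnRegionCheck {

       // Access point ARN format: arn:aws:s3:<region>:<account-id>:accesspoint/<name>
       static String arnRegion(String arn) {
           String[] parts = arn.split(":");
           if (parts.length < 6 || !"s3".equals(parts[2])) {
               throw new IllegalArgumentException("not an S3 ARN: " + arn);
           }
           return parts[3];
       }

       // Mirrors the SDK check: ARN region must match the client's configured region.
       static void validateClientRegion(String arn, String clientRegion) {
           String region = arnRegion(arn);
           if (!region.equals(clientRegion)) {
               throw new IllegalArgumentException(
                   "The region field of the ARN does not match the client region. "
                   + "Provided region: '" + region
                   + "'; client region: '" + clientRegion + "'.");
           }
       }

       public static void main(String[] args) {
           // hypothetical ARN for illustration only
           String ap = "arn:aws:s3:eu-west-1:123456789012:accesspoint/finance";
           validateClientRegion(ap, "eu-west-1");   // passes
           try {
               // client pinned to the wrong region, as in the failure above
               validateClientRegion(ap, "accesspoint-eu-west-1");
           } catch (IllegalArgumentException e) {
               System.out.println("mismatch: " + e.getMessage());
           }
       }
   }
   ```

   In the test run above the client ended up configured with `accesspoint-eu-west-1` rather than `eu-west-1`, so the check rejects the request before it is ever sent.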


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 720869)
    Time Spent: 14h 40m  (was: 14.5h)

> Support S3 Access Points
> ------------------------
>
>                 Key: HADOOP-17198
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17198
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.3.0
>            Reporter: Steve Loughran
>            Assignee: Bogdan Stolojan
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 14h 40m
>  Remaining Estimate: 0h
>
> Improve VPC integration by supporting access points for buckets
> https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points.html
> *important*: when backporting, always include HADOOP-17951 as the followup 
> patch



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
