steveloughran commented on pull request #1820:
URL: https://github.com/apache/hadoop/pull/1820#issuecomment-620053085


   @mukund-thakur thanks for the review; I've tried to address your comments. Look 
in the package-info for some docs on how I imagine these being used.
   
   Because the code doesn't compile, I'm going to rebase on trunk and submit 
the result as a new PR.
   
   That PR will revert to the current S3A code for creating an S3 client, 
because the builder mechanism needed to wire up AWS metrics collection in 
the SDK is failing some of the tests; it is complex enough that it's slowing 
down the rest of the patch. We can add it later.
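   For later reference, a minimal sketch of the sort of builder wiring involved, assuming the v1 SDK's `AwsClientBuilder.withMetricsCollector()`. The collector class and counter here are illustrative, not the patch's actual code:

   ```java
   import com.amazonaws.Request;
   import com.amazonaws.Response;
   import com.amazonaws.metrics.RequestMetricCollector;
   import com.amazonaws.services.s3.AmazonS3;
   import com.amazonaws.services.s3.AmazonS3ClientBuilder;
   import java.util.concurrent.atomic.AtomicLong;

   public class MetricsWiringSketch {

     /** Hypothetical collector: counts SDK requests so S3A statistics could pick them up. */
     static final class CountingCollector extends RequestMetricCollector {
       private final AtomicLong requests = new AtomicLong();

       @Override
       public void collectMetrics(Request<?> request, Response<?> response) {
         requests.incrementAndGet();
       }

       long requestCount() {
         return requests.get();
       }
     }

     static AmazonS3 createClient(CountingCollector collector) {
       return AmazonS3ClientBuilder.standard()
           .withMetricsCollector(collector)   // wire SDK request metrics into our collector
           .build();
     }
   }
   ```

   The fiddly part is doing this without breaking the existing endpoint/region/credential setup, which is what the failure below shows.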
   ```
   
   [ERROR] testJobSubmissionCollectsTokens[2](org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob)  Time elapsed: 8.485 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSRedirectException: getFileStatus on s3a://osm-pds/planet/planet-latest.orc#_partition.lst: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this region: us-east-1. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: E6941931CE0008E4; S3 Extended Request ID: TkeYaR0Vf0nc/l6QgPLaKBO7E956CFqZYoSjTy4lLuqcQMQ0+U2gvnkewhAQan+0oYOT7UNeZF0=), S3 Extended Request ID: TkeYaR0Vf0nc/l6QgPLaKBO7E956CFqZYoSjTy4lLuqcQMQ0+U2gvnkewhAQan+0oYOT7UNeZF0=:301 Moved Permanently: The bucket is in this region: us-east-1. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: E6941931CE0008E4; S3 Extended Request ID: TkeYaR0Vf0nc/l6QgPLaKBO7E956CFqZYoSjTy4lLuqcQMQ0+U2gvnkewhAQan+0oYOT7UNeZF0=)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:234)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:168)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3019)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2937)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2821)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:325)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:236)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:105)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:69)
        at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:222)
        at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:135)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1576)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1573)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1573)
        at org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob.testJobSubmissionCollectsTokens(ITestDelegatedMRJob.java:286)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
   Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this region: us-east-1. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: E6941931CE0008E4; S3 Extended Request ID: TkeYaR0Vf0nc/l6QgPLaKBO7E956CFqZYoSjTy4lLuqcQMQ0+U2gvnkewhAQan+0oYOT7UNeZF0=), S3 Extended Request ID: TkeYaR0Vf0nc/l6QgPLaKBO7E956CFqZYoSjTy4lLuqcQMQ0+U2gvnkewhAQan+0oYOT7UNeZF0=
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
        at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1320)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$6(S3AFileSystem.java:1899)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:370)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1892)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1868)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3006)
        ... 32 more
   ```
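   As an aside on the 301 itself: the error says the `osm-pds` bucket is in us-east-1, so when client creation isn't picking the right region, a per-bucket endpoint override in `core-site.xml` should keep requests in the right place. A sketch (a test-setup workaround, not part of this patch):

   ```xml
   <property>
     <name>fs.s3a.bucket.osm-pds.endpoint</name>
     <value>s3.us-east-1.amazonaws.com</value>
   </property>
   ```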
   

