[ https://issues.apache.org/jira/browse/HADOOP-19317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17893573#comment-17893573 ]
ASF GitHub Bot commented on HADOOP-19317:
-----------------------------------------
steveloughran commented on PR #7134:
URL: https://github.com/apache/hadoop/pull/7134#issuecomment-2442314469
Tested s3 london with -Dscale.
One really interesting AWS-side failure we've never seen before: it looks like
a bulk delete hit a 500 error at the back end. Now we know what that looks like.
This also highlights something important: irrespective of the availability
assertions of AWS, things do fail, and the S3A code gets enough use
every day that someone, somewhere, will hit them *every single day*.
Today it was me.
```
[ERROR] testMultiPagesListingPerformanceAndCorrectness(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)  Time elapsed: 73.877 s <<< ERROR!
org.apache.hadoop.fs.s3a.AWSS3IOException: Remove S3 Files on s3a://stevel-london/job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness: org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException: [S3Error(Key=job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null):MultiObjectDeleteException: InternalError: job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558: We encountered an internal error. Please try again.
:
[S3Error(Key=job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null)
    at org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException.translateException(MultiObjectDeleteException.java:132)
    at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:350)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:163)
    at org.apache.hadoop.fs.s3a.impl.DeleteOperation.asyncDeleteAction(DeleteOperation.java:431)
    at org.apache.hadoop.fs.s3a.impl.DeleteOperation.lambda$submitDelete$2(DeleteOperation.java:403)
    at org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$callableWithinAuditSpan$3(AuditingFunctions.java:119)
    at org.apache.hadoop.fs.s3a.impl.CallableSupplier.get(CallableSupplier.java:88)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
    at org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hadoop.fs.s3a.impl.MultiObjectDeleteException: [S3Error(Key=job-00-fork-0001/test/testMultiPagesListingPerformanceAndCorrectness/file-558, Code=InternalError, Message=We encountered an internal error. Please try again.)] (Service: Amazon S3, Status Code: 200, Request ID: null)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteObjects(S3AFileSystem.java:3278)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeysS3(S3AFileSystem.java:3478)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.removeKeys(S3AFileSystem.java:3548)
    at org.apache.hadoop.fs.s3a.S3AFileSystem$OperationCallbacksImpl.removeKeys(S3AFileSystem.java:2653)
    at org.apache.hadoop.fs.s3a.impl.DeleteOperation.lambda$asyncDeleteAction$5(DeleteOperation.java:433)
    at org.apache.hadoop.fs.s3a.Invoker.lambda$once$0(Invoker.java:165)
    at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
    ... 11 more
```
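Worth noting: the trace shows Status Code: 200 because a bulk DeleteObjects call can succeed at the HTTP level while reporting failures per key in the response body. A minimal AWS SDK v2 sketch of how those per-key errors surface (bucket and keys below are placeholders, not the real test paths):
```java
// Sketch: a bulk delete returns HTTP 200 yet individual keys can still
// fail with per-key S3Error entries (e.g. Code=InternalError).
import java.util.List;

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.Delete;
import software.amazon.awssdk.services.s3.model.DeleteObjectsRequest;
import software.amazon.awssdk.services.s3.model.DeleteObjectsResponse;
import software.amazon.awssdk.services.s3.model.ObjectIdentifier;
import software.amazon.awssdk.services.s3.model.S3Error;

public class BulkDeleteSketch {
  public static void main(String[] args) {
    S3Client s3 = S3Client.create();
    // placeholder keys, standing in for the test's file-NNN objects
    List<ObjectIdentifier> keys = List.of(
        ObjectIdentifier.builder().key("test/file-1").build(),
        ObjectIdentifier.builder().key("test/file-2").build());

    DeleteObjectsResponse response = s3.deleteObjects(
        DeleteObjectsRequest.builder()
            .bucket("example-bucket") // placeholder bucket
            .delete(Delete.builder().objects(keys).build())
            .build());

    // The POST as a whole "succeeded"; failures are reported per key.
    for (S3Error e : response.errors()) {
      System.out.printf("failed key %s: %s %s%n",
          e.key(), e.code(), e.message());
      // A caller could re-issue the delete for just the failed keys,
      // which is what "Please try again" suggests.
    }
  }
}
```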
> S3A: export all http connection settings as configuration options
> -----------------------------------------------------------------
>
> Key: HADOOP-19317
> URL: https://issues.apache.org/jira/browse/HADOOP-19317
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.1
> Reporter: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> There are a few extra settings for the httpclient which can be passed down to
> the AWS client configuration: expect-continue, TCP keepalive and more.
> Make them *all* configurable through fs.s3a options.
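A rough sketch of the wiring the task implies. The fs.s3a option names here are hypothetical placeholders, not necessarily the keys the PR adds; expectContinueEnabled and tcpKeepAlive are real methods on the AWS SDK v2 ApacheHttpClient.Builder:
```java
// Sketch only: map hypothetical fs.s3a.* keys onto the SDK v2
// Apache http client builder when constructing the S3 client.
import org.apache.hadoop.conf.Configuration;

import software.amazon.awssdk.http.apache.ApacheHttpClient;

public class HttpSettingsSketch {
  public static ApacheHttpClient.Builder configure(Configuration conf) {
    return ApacheHttpClient.builder()
        // hypothetical key: enable the 100-continue handshake on uploads
        .expectContinueEnabled(
            conf.getBoolean("fs.s3a.connection.expect.continue", true))
        // hypothetical key: TCP-level keepalive on pooled connections
        .tcpKeepAlive(
            conf.getBoolean("fs.s3a.connection.tcp.keepalive", false));
  }
}
```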