[
https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455434#comment-15455434
]
Steve Loughran edited comment on HADOOP-13560 at 9/1/16 1:45 PM:
-----------------------------------------------------------------
Stack trace under load. The client does retry, but the blocking should be
happening further up the stack.
This doesn't show up with the default settings, where
{{fs.s3a.connection.maximum}} is 15 and {{fs.s3a.threads.max}} is 10. If
{{fs.s3a.threads.max}} is set to > 15 then things will fail, as there can be
more upload threads than pooled HTTP connections. Retries may complete the
work, but they add delays.
{code}
2016-09-01 14:36:06,879 [s3a-transfer-shared-pool1-t5] INFO http.AmazonHttpClient (AmazonHttpClient.java:executeHelper(496)) - Unable to execute HTTP request: Timeout waiting for connection from pool
org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection from pool
    at org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:230)
    at org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:199)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.amazonaws.http.conn.ClientConnectionRequestFactory$Handler.invoke(ClientConnectionRequestFactory.java:70)
    at com.amazonaws.http.conn.$Proxy10.getConnection(Unknown Source)
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:424)
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
    at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2921)
    at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2906)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:1080)
    at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload$1.call(S3AFastOutputStream.java:358)
    at org.apache.hadoop.fs.s3a.S3AFastOutputStream$MultiPartUpload$1.call(S3AFastOutputStream.java:353)
    at org.apache.hadoop.fs.s3a.BlockingThreadPoolExecutorService$CallableWithPermitRelease.call(BlockingThreadPoolExecutorService.java:239)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2016-09-01 14:36:06,879 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(221)) - Event CLIENT_REQUEST_RETRY_EVENT, bytes: 0
...
{code}
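For illustration, a minimal sketch (not the actual fix) of the sizing rule
implied above: keep {{fs.s3a.connection.maximum}} at least as large as
{{fs.s3a.threads.max}}, so writers queue in the executor rather than timing
out in the HTTP pool. The property keys are the real S3A ones; the +5
headroom for other in-flight requests is an assumption, not a tuned value.
{code}
import org.apache.hadoop.conf.Configuration;

public class S3APoolSizing {
  // Sketch only: ensure every upload thread can get an HTTP connection,
  // so work blocks in BlockingThreadPoolExecutorService ("further up")
  // instead of failing with ConnectionPoolTimeoutException.
  public static void sizePools(Configuration conf) {
    int maxThreads = conf.getInt("fs.s3a.threads.max", 10);
    int connections = conf.getInt("fs.s3a.connection.maximum", 15);
    if (connections <= maxThreads) {
      // +5 is a hypothetical margin for metadata/listing requests.
      conf.setInt("fs.s3a.connection.maximum", maxThreads + 5);
    }
  }
}
{code}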
> S3A to support huge file writes and operations -with tests
> ----------------------------------------------------------
>
> Key: HADOOP-13560
> URL: https://issues.apache.org/jira/browse/HADOOP-13560
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 2.9.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Minor
>
> An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights
> that metadata isn't copied on large copies.
> 1. Add a test to do that large copy/rename and verify that the copy really
> works.
> 2. Verify that metadata makes it over.
> Verifying large file rename is important on its own, as it is needed for very
> large commit operations for committers using rename.
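A hedged sketch of the large-rename check described in that plan, against the
plain FileSystem API; the bucket, paths, and 6 GB size (anything over 5 GB
forces a multipart copy) are illustrative, and the metadata verification step
would need direct S3 client access, so it is elided here.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HugeRenameCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical bucket and paths.
    Path src = new Path("s3a://test-bucket/huge/src.bin");
    Path dst = new Path("s3a://test-bucket/huge/dst.bin");
    FileSystem fs = src.getFileSystem(conf);

    // Anything over 5 GB cannot be copied in a single PUT-copy, so this
    // size exercises the multipart copy path during rename.
    long size = 6L * 1024 * 1024 * 1024;
    byte[] block = new byte[1024 * 1024];
    try (FSDataOutputStream out = fs.create(src, true)) {
      for (long written = 0; written < size; written += block.length) {
        out.write(block);
      }
    }

    // rename() drives the server-side copy under test.
    if (!fs.rename(src, dst)) {
      throw new AssertionError("rename returned false");
    }
    FileStatus status = fs.getFileStatus(dst);
    if (status.getLen() != size) {
      throw new AssertionError("length mismatch: " + status.getLen());
    }
  }
}
{code}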