[ 
https://issues.apache.org/jira/browse/HADOOP-16823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16823:
------------------------------------
    Release Note: 
The AWS SDK client no longer handles 503/slow-down responses from S3 with its 
own internal retry mechanism; these throttling messages are handled purely in 
the S3A client, which updates its counters/metrics before performing its own 
backoff/retry strategy.

The defaults of "fs.s3a.retry.throttle.interval" and 
"fs.s3a.retry.throttle.limit" have been changed to compensate for the fact that 
the SDK will no longer be retrying internally: the values are now 500ms and 20 
respectively.
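If you need to override these settings, the core-site.xml entries look like the following (the values shown here are illustrative examples only, not recommendations):

```xml
<!-- Example overrides for core-site.xml; values are illustrative only -->
<property>
  <name>fs.s3a.retry.throttle.interval</name>
  <value>1000ms</value>
</property>
<property>
  <name>fs.s3a.retry.throttle.limit</name>
  <value>30</value>
</property>
```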

If you have explicitly set these values, make them larger. The default values 
for the AWS SDK are defined in com.amazonaws.retry.PredefinedRetryPolicies; 
currently a 500ms base delay plus exponential/jittered backoff up to 20 
seconds, which is about 4-5 attempts. The S3A throttle limit has been increased 
from 10 to 20 to (over)compensate. The S3A retry policy's jitter is slightly 
randomised so that multiple threads encountering throttling situations will not 
all sleep for exactly the same time; the AWS jitter appears to be more 
deterministic.
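To illustrate the idea, here is a minimal sketch (not the actual S3A implementation) of exponential backoff with randomised jitter: a 500ms base delay doubling per attempt, capped near 20 seconds, with a random jitter added so that concurrent threads do not all sleep for the same interval.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of exponential backoff with randomised jitter; constants mirror
// the new S3A defaults but the code itself is illustrative only.
public class ThrottleBackoff {
    static final long BASE_DELAY_MS = 500;    // fs.s3a.retry.throttle.interval default
    static final long MAX_DELAY_MS = 20_000;  // cap comparable to the SDK's ~20s ceiling
    static final int RETRY_LIMIT = 20;        // fs.s3a.retry.throttle.limit default

    /** Delay before the given retry attempt (0-based), with random jitter. */
    static long delayMillis(int attempt) {
        // exponential growth, capped so the shift cannot overflow or exceed the ceiling
        long exp = Math.min(MAX_DELAY_MS, BASE_DELAY_MS << Math.min(attempt, 6));
        // add up to 50% random jitter so threads desynchronise their retries
        long jitter = ThreadLocalRandom.current().nextLong(exp / 2 + 1);
        return exp + jitter;
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < RETRY_LIMIT; attempt++) {
            System.out.printf("attempt %d: sleep %d ms%n", attempt, delayMillis(attempt));
        }
    }
}
```

Because the jitter is drawn per-thread, two threads throttled at the same instant will almost never wake and retry simultaneously.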

You can now inspect the metrics/statistics for a filesystem and know that the 
throttle counter records the number of retries performed by S3A itself.
All other connections to S3 services (especially DynamoDB) are still retried 
within the AWS clients, with the S3A code wrapping these.

If you are curious, consult PredefinedRetryPolicies to see what the internal 
default backoff/retry policies are for S3, DynamoDB (S3Guard), etc.


> Manage S3 Throttling exclusively in S3A client
> ----------------------------------------------
>
>                 Key: HADOOP-16823
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16823
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>
> Currently AWS S3 throttling is initially handled in the AWS SDK, only 
> reaching the S3 client code after it has given up.
> This means we don't always directly observe when throttling is taking place.
> Proposed:
> * disable throttling retries in the AWS client library
> * add a quantile for the S3 throttle events, as DDB has
> * isolate counters of s3 and DDB throttle events to classify issues better
> Because we are taking over the AWS retries, we will need to expand the 
> initial delay in retries and the number of retries we should support before 
> giving up.
> Also: should we log throttling events? It could be useful, but there is a 
> risk of overloading the logs, especially if many threads in the same process 
> were triggering the problem.
> Proposed: log at debug.
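The first proposed step, disabling throttling retries in the AWS client library, could be sketched as follows (assuming the v1 SDK's ClientConfiguration/PredefinedRetryPolicies API; this is an illustration, not the committed patch):

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.retry.PredefinedRetryPolicies;

public class DisableSdkThrottleRetries {
    // Sketch: a client configuration whose retry policy never retries,
    // so 503/slow-down responses surface directly to the S3A layer.
    public static ClientConfiguration noSdkRetries() {
        ClientConfiguration conf = new ClientConfiguration();
        conf.setRetryPolicy(PredefinedRetryPolicies.NO_RETRY_POLICY);
        return conf;
    }
}
```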



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
