[ https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16415501#comment-16415501 ]

Steve Loughran commented on FLINK-9061:
---------------------------------------

[~StephanEwen]: I knew that, but it's still the same AWS SDK underneath.

500 is not normal throttling; that's 503.

[~jgrier]: What you are seeing here is that something has gone wrong in S3 itself. 
It is usually transient, so treat it as retriable on all requests.

S3A on Hadoop 3.1+ will treat this as a connectivity error and use whatever 
settings you configure there (the retryUpToMaximumCountWithFixedSleep policy). 
If a 500 can be caused by overload, that could/should be switched to the 
exponential-backoff policy used for 503 events.
In the meantime:
 # file a support request with the AWS team, including the request ID of a 
failing request
 # add a follow-up here listing what they said/recommended

Obviously, I can't fix the stack trace here, but we can at least change the S3A 
connector to recognize this error and retry appropriately.
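A minimal sketch of that retry behaviour, assuming a hypothetical caller-supplied request callable and a `TransientS3Error` stand-in exception (these are illustrations, not the real S3A or AWS SDK classes):

```python
import random
import time


class TransientS3Error(Exception):
    """Stand-in for a transient 500/503 response from S3."""


def retry_with_backoff(request, max_attempts=7, base_delay=0.5, max_delay=30.0):
    """Retry `request` on transient errors, sleeping exponentially longer
    (with full jitter) between attempts, roughly as S3A backs off on 503s."""
    for attempt in range(max_attempts):
        try:
            return request()
        except TransientS3Error:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff capped at max_delay, with full jitter
            # so many clients don't retry in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The jitter matters: if the 500s are overload-related, synchronized retries from many tasks would only make the hot partition hotter.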

Thank you for finding another interesting failure mode of S3 itself :)

+[~fabbri]

> S3 checkpoint data not partitioned well -- causes errors and poor performance
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-9061
>                 URL: https://issues.apache.org/jira/browse/FLINK-9061
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, State Backends, Checkpointing
>    Affects Versions: 1.4.2
>            Reporter: Jamie Grier
>            Priority: Critical
>
> I think we need to modify the way we write checkpoints to S3 for high-scale 
> jobs (those with many total tasks).  The issue is that we are writing all the 
> checkpoint data under a common key prefix.  This is the worst case scenario 
> for S3 performance since the key is used as a partition key.
>  
> In the worst case checkpoints fail with a 500 status code coming back from S3 
> and an internal error type of TooBusyException.
>  
> One possible solution would be to add a hook in the Flink filesystem code 
> that allows me to "rewrite" paths.  For example, say I have the checkpoint 
> directory set to:
>  
> s3://bucket/flink/checkpoints
>  
> I would hook that and rewrite that path to:
>  
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original 
> path
>  
> This would distribute the checkpoint write load around the S3 cluster evenly.
>  
> For reference: 
> https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
>  
> Any other people hit this issue?  Any other ideas for solutions?  This is a 
> pretty serious problem for people trying to checkpoint to S3.
>  
> -Jamie
>  
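The hash-prefix rewrite Jamie describes above can be sketched as a simple, deterministic path transformation; `rewrite_checkpoint_path` is a hypothetical helper, not an existing Flink hook:

```python
import hashlib


def rewrite_checkpoint_path(path, hash_len=8):
    """Insert a short hash of the original path right after the bucket,
    so checkpoint keys spread across S3's key-space partitions instead
    of piling up under one common prefix."""
    scheme, rest = path.split("://", 1)
    bucket, _, key = rest.partition("/")
    digest = hashlib.sha256(path.encode("utf-8")).hexdigest()[:hash_len]
    return f"{scheme}://{bucket}/{digest}/{key}"
```

For example, `s3://bucket/flink/checkpoints` becomes `s3://bucket/<hash>/flink/checkpoints`. Because the hash is derived from the original path, every writer and reader computes the same rewritten location.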



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
