[ https://issues.apache.org/jira/browse/BEAM-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17123019#comment-17123019 ]

Beam JIRA Bot commented on BEAM-6031:
-------------------------------------

This issue is P2 but has been unassigned without any comment for 60 days, so it 
has been labeled "stale-P2". If this issue is still affecting you, we care! 
Please comment and remove the label. Otherwise, in 14 days the issue will be 
moved to P3.

Please see https://beam.apache.org/contribute/jira-priorities/ for a detailed 
explanation of what these priorities mean.


> Add retry logic to S3FileSystem 
> --------------------------------
>
>                 Key: BEAM-6031
>                 URL: https://issues.apache.org/jira/browse/BEAM-6031
>             Project: Beam
>          Issue Type: Bug
>          Components: io-java-aws
>    Affects Versions: 2.7.0, 2.8.0
>            Reporter: Pawel Bartoszek
>            Priority: P2
>              Labels: stale-P2
>
> S3FileSystem should have some retry behaviour when an ObjectsDelete call 
> fails. I have seen a case in our job where one item from the delete batch 
> could not be deleted due to an S3 InternalError, causing the whole job to 
> restart. The source code I am referring to:  
> [https://github.com/apache/beam/blob/8a88e72f293ef7f9be6c872aa0dda681458c7ca5/sdks/java/io/amazon-web-services/src/main/java/org/apache/beam/sdk/io/aws/s3/S3FileSystem.java#L633]
>  
> Similar retry logic might be added to other S3 calls in S3FileSystem as well.
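>
> A minimal sketch of what such retry behaviour could look like, assuming the 
> AWS SDK for Java v1 that this module uses; the helper class name, 
> MAX_ATTEMPTS, and the backoff values are illustrative assumptions, not 
> existing Beam code:
> {code:java}
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.model.DeleteObjectsRequest;
> import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;
> import com.amazonaws.services.s3.model.MultiObjectDeleteException;
> import java.util.List;
> import java.util.stream.Collectors;
>
> // Hypothetical helper: retry only the keys that failed in a batch delete,
> // with bounded attempts and simple exponential backoff, instead of letting
> // a single transient InternalError fail the whole job.
> class S3BatchDeleteRetry {
>   private static final int MAX_ATTEMPTS = 3;
>
>   static void deleteWithRetry(AmazonS3 s3, String bucket, List<KeyVersion> keys)
>       throws InterruptedException {
>     for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
>       try {
>         s3.deleteObjects(new DeleteObjectsRequest(bucket).withKeys(keys));
>         return;
>       } catch (MultiObjectDeleteException e) {
>         if (attempt == MAX_ATTEMPTS) {
>           throw e; // retries exhausted; surface the partial failure
>         }
>         // Re-issue the delete only for the keys S3 reported as failed.
>         keys = e.getErrors().stream()
>             .map(err -> new KeyVersion(err.getKey()))
>             .collect(Collectors.toList());
>         Thread.sleep(200L << (attempt - 1)); // simple exponential backoff
>       }
>     }
>   }
> }
> {code}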



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
