[ https://issues.apache.org/jira/browse/HADOOP-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16218442#comment-16218442 ]
ASF GitHub Bot commented on HADOOP-14971:
-----------------------------------------

Github user steveloughran commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/282#discussion_r146830482

    --- Diff: hadoop-common-project/hadoop-common/src/main/resources/core-default.xml ---
    @@ -1344,34 +1338,34 @@
           </description>
         </property>

    +    <property>
    -      <name>fs.s3a.retry.limit</name>
    -      <value>4</value>
    -      <description>
    -        Number of times to retry any repeatable S3 client request on failure,
    -        excluding throttling requests.
    -      </description>
    +      <name>fs.s3a.attempts.maximum</name>
    +      <value>20</value>
    +      <description>How many times we should retry commands on transient errors,
    +        excluding throttling errors.</description>
    --- End diff --

    I worry about over-configuring things which aren't easy to test. At the
    same time, the bulk/recursive ops are special cases because we need to
    think "how best to retry?". Is it per-op, or would you want to step back
    and start from scratch? Given that delete retries aren't isolated from
    other ops in the bucket, that's one where we could make a case for saying
    "no, no retries except in higher-level algorithms (like the commit one)"
    (see the sketch at the end of this message).

> Merge S3A committers into trunk
> -------------------------------
>
>                 Key: HADOOP-14971
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14971
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>
> Merge the HADOOP-13786 committer into trunk. This branch is being set up as a
> github PR for review there & to keep it out the mailboxes of the watchers on
> the main JIRA
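A minimal sketch of the "retries only in the higher-level algorithm" option discussed above: the low-level delete makes a single attempt, and the calling algorithm re-drives the whole bulk operation on failure. This is illustrative only; the ObjectStore interface, the method names and the maxAttempts parameter are invented for this example and are not the S3A API.

import java.io.IOException;
import java.util.List;

public class BulkDeleteSketch {

  /** Hypothetical low-level store call: one attempt, no retry policy of its own. */
  interface ObjectStore {
    void deleteObjects(List<String> keys) throws IOException;
  }

  /**
   * Hypothetical higher-level algorithm which owns the retry decision:
   * on failure it re-drives the whole bulk delete from scratch rather than
   * retrying each individual request.
   */
  static void deleteWithRetries(ObjectStore store, List<String> keys,
      int maxAttempts) throws IOException {
    IOException lastFailure = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        store.deleteObjects(keys);   // single attempt per invocation
        return;                      // success: no further retries
      } catch (IOException e) {
        lastFailure = e;             // remember the failure, re-drive the whole op
      }
    }
    throw new IOException("bulk delete failed after " + maxAttempts
        + " attempts", lastFailure);
  }
}

The point of the pattern is that the retry count then belongs to the algorithm which understands the consequences of a partially-applied delete, rather than being a per-request knob that is hard to test in isolation.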