[ https://issues.apache.org/jira/browse/HADOOP-14971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16217896#comment-16217896 ]
ASF GitHub Bot commented on HADOOP-14971:
-----------------------------------------

Github user ajfabbri commented on a diff in the pull request:

    https://github.com/apache/hadoop/pull/282#discussion_r146723076

--- Diff: hadoop-common-project/hadoop-common/src/main/resources/core-default.xml ---
@@ -1344,34 +1338,34 @@
       </description>
     </property>

+    <property>
-      <name>fs.s3a.retry.limit</name>
-      <value>4</value>
-      <description>
-        Number of times to retry any repeatable S3 client request on failure,
-        excluding throttling requests.
-      </description>
+      <name>fs.s3a.attempts.maximum</name>
+      <value>20</value>
+      <description>How many times we should retry commands on transient errors,
+        excluding throttling errors.</description>
--- End diff --

Interesting. One of my concerns about all the retry logic being added here is that it is an invasive change, and I suspect there may be unintended consequences somewhere. I've been thinking that making it more configurable would mitigate that risk, so I'd lean towards more types/classes of retry configuration rather than fewer. For example, here I'd like to have the SDK's retries configured separately. I also mentioned earlier the idea of a separate retry policy for riskier operations (e.g. delete). Thoughts?

> Merge S3A committers into trunk
> -------------------------------
>
>                 Key: HADOOP-14971
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14971
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>
> Merge the HADOOP-13786 committer into trunk.
> This branch is being set up as a github PR for review there & to keep it
> out of the mailboxes of the watchers on the main JIRA.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
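
A rough sketch of the split the reviewer is suggesting, as a core-site.xml fragment: keep the S3A-layer retry count and the AWS-SDK-layer retry count as two separately tunable properties. The two property names are the ones appearing in the diff above; the values and description wording here are illustrative only, not the committed defaults.

    <!-- Illustrative core-site.xml fragment; values are examples only. -->

    <!-- Retries performed by S3A's own retry logic. -->
    <property>
      <name>fs.s3a.retry.limit</name>
      <value>4</value>
      <description>Number of times S3A retries a repeatable S3 request
        on failure, excluding throttling errors.</description>
    </property>

    <!-- Retries delegated to the AWS SDK client, tuned independently. -->
    <property>
      <name>fs.s3a.attempts.maximum</name>
      <value>20</value>
      <description>How many times the AWS SDK client retries a command
        on transient errors, excluding throttling errors.</description>
    </property>

Keeping the two knobs distinct means an operator who hits unintended retry interactions can dial one layer down to 1 (or 0) without losing retries at the other layer, which is the risk-mitigation argument made in the comment above.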