anujmodi2021 opened a new pull request, #5711:
URL: https://github.com/apache/hadoop/pull/5711

   ### Description of PR
   Today, when a request fails with a connection timeout, it falls into the same exponential-retry loop as other failures. Unlike Azure Storage server-side errors, there is no guarantee that an exponentially retried request will succeed, nor is there a recommended retry policy for Azure network failures or other generic failures. Failing fast and retrying sooner might be more beneficial for such generic connection timeout failures.
   
   This PR introduces a new Linear Retry Policy, which will currently be used only for connection timeout failures.
   Two types of linear backoff calculation will be supported (a small sketch of both follows this list):
   
   1. The minimum backoff starts at 500 ms and, with each attempted retry, the backoff doubles, capped at a 30 sec maximum.
   2. The minimum backoff starts at 500 ms and, with each attempted retry, the backoff grows by 1 sec, capped at a 30 sec maximum.
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

