[ https://issues.apache.org/jira/browse/HDDS-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ASF GitHub Bot updated HDDS-3046:
---------------------------------
    Labels: OMHA pull-request-available  (was: OMHA)

> Fix Retry handling in Hadoop RPC Client
> ---------------------------------------
>
>                 Key: HDDS-3046
>                 URL: https://issues.apache.org/jira/browse/HDDS-3046
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: OMHA, pull-request-available
>
> Right now, for all exceptions other than ServiceException we use
> FailoverOnNetworkException. This exception policy is created with 15 max
> failovers and 15 max retries.
>
> {code:java}
> retryPolicyOnNetworkException.shouldRetry(
>     exception, retries, failovers, isIdempotentOrAtMostOnce);{code}
>
> *Two issues with this:*
> # When shouldRetry returns the action FAILOVER_AND_RETRY, the client stays
> stuck on the same OM and never fails over to the next OM, because
> OMFailoverProxyProvider#performFailover() is a dummy call that performs no
> actual failover.
> # When ozone.client.failover.max.attempts is set to 15, with two policies
> each set to 15 we will retry 15 * 2 = 30 times in the worst case.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
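The worst-case attempt count described in issue #2 can be sketched as follows. This is a minimal toy model, not the actual Hadoop RetryPolicy code: the class name `StackedRetrySketch`, the `Action` enum, and the combined `shouldRetry` helper are hypothetical, and it only illustrates how two stacked policies, each capped at 15, add up to 30 attempts against the same OM when failover is a no-op.

```java
// Toy model of the double-retry problem (hypothetical names, not Hadoop's API):
// a plain retry policy and a failover policy are each capped at 15, so a
// permanently failing call is attempted 15 + 15 = 30 times, and because
// performFailover() is a dummy, every attempt hits the same OM.
public class StackedRetrySketch {

    static final int MAX = 15;  // mirrors ozone.client.failover.max.attempts

    enum Action { RETRY, FAILOVER_AND_RETRY, FAIL }

    // Combined decision: exhaust plain retries first, then failover retries.
    static Action shouldRetry(int retries, int failovers) {
        if (retries < MAX) {
            return Action.RETRY;
        }
        if (failovers < MAX) {
            return Action.FAILOVER_AND_RETRY;
        }
        return Action.FAIL;
    }

    // Simulate a call that fails on every attempt; count total attempts.
    static int worstCaseAttempts() {
        int retries = 0;
        int failovers = 0;
        int attempts = 0;
        Action action;
        while ((action = shouldRetry(retries, failovers)) != Action.FAIL) {
            attempts++;                     // the RPC fails again
            if (action == Action.RETRY) {
                retries++;                  // same OM, plain retry
            } else {
                failovers++;                // "failover" is a no-op: still the same OM
            }
        }
        return attempts;
    }

    public static void main(String[] args) {
        System.out.println("worst-case attempts = " + worstCaseAttempts());
    }
}
```

Running the sketch prints `worst-case attempts = 30`, matching the 15 * 2 figure above.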