[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1324: HDDS-4068. Client should not retry same OM on network connection failure

2020-08-24 Thread GitBox


bharatviswa504 commented on a change in pull request #1324:
URL: https://github.com/apache/hadoop-ozone/pull/1324#discussion_r475966746



##
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
##
@@ -172,9 +174,12 @@ private OzoneManagerProtocolPB createOMProxy(InetSocketAddress omAddress)
     LegacyHadoopConfigurationSource.asHadoopConfiguration(conf);
     RPC.setProtocolEngine(hadoopConf, OzoneManagerProtocolPB.class,
         ProtobufRpcEngine.class);
-    return RPC.getProxy(OzoneManagerProtocolPB.class, omVersion, omAddress, ugi,
-        hadoopConf, NetUtils.getDefaultSocketFactory(hadoopConf),
-        (int) OmUtils.getOMClientRpcTimeOut(conf));
+    RetryPolicy connectionRetryPolicy = RetryPolicies
+        .failoverOnNetworkException(0);
+    return RPC.getProtocolProxy(OzoneManagerProtocolPB.class, omVersion,
+        omAddress, ugi, hadoopConf, NetUtils.getDefaultSocketFactory(
+        hadoopConf), (int) OmUtils.getOMClientRpcTimeOut(conf),
+        connectionRetryPolicy).getProxy();

Review comment:
   One question: can we use the RetryPolicy TRY_ONCE_THEN_FAIL here?
   
   With failoverOnNetworkException(0), the retry count and maxFailovers are both set to zero anyway.
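   The equivalence this question hinges on can be illustrated with a minimal, self-contained sketch. The `Policy` interface and the names below are hypothetical simplifications, not Hadoop's actual `RetryPolicy` API; they only model the decision being discussed: a never-retry policy and a failover policy with a budget of zero reach the same verdict on the first failure.

```java
// Hypothetical simplification, NOT Hadoop's actual RetryPolicy classes.
interface Policy {
  // Should the client retry after `failoversSoFar` failovers?
  boolean retryAfterFailure(int failoversSoFar);
}

public class PolicyEquivalence {
  // Models TRY_ONCE_THEN_FAIL: never retry.
  static final Policy TRY_ONCE_THEN_FAIL = failoversSoFar -> false;

  // Models failoverOnNetworkException(maxFailovers): retry only while
  // the failover budget is not exhausted.
  static Policy failoverOnNetworkException(int maxFailovers) {
    return failoversSoFar -> failoversSoFar < maxFailovers;
  }

  public static void main(String[] args) {
    Policy viaFailover = failoverOnNetworkException(0);
    // Both policies give up on the very first failure.
    System.out.println(TRY_ONCE_THEN_FAIL.retryAfterFailure(0)); // false
    System.out.println(viaFailover.retryAfterFailure(0));        // false
  }
}
```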
  





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1324: HDDS-4068. Client should not retry same OM on network connection failure

2020-08-24 Thread GitBox


bharatviswa504 commented on a change in pull request #1324:
URL: https://github.com/apache/hadoop-ozone/pull/1324#discussion_r476093708



##
File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/ha/OMFailoverProxyProvider.java
##
@@ -172,9 +174,12 @@ private OzoneManagerProtocolPB createOMProxy(InetSocketAddress omAddress)
     LegacyHadoopConfigurationSource.asHadoopConfiguration(conf);
     RPC.setProtocolEngine(hadoopConf, OzoneManagerProtocolPB.class,
         ProtobufRpcEngine.class);
-    return RPC.getProxy(OzoneManagerProtocolPB.class, omVersion, omAddress, ugi,
-        hadoopConf, NetUtils.getDefaultSocketFactory(hadoopConf),
-        (int) OmUtils.getOMClientRpcTimeOut(conf));
+    RetryPolicy connectionRetryPolicy = RetryPolicies
+        .failoverOnNetworkException(0);
+    return RPC.getProtocolProxy(OzoneManagerProtocolPB.class, omVersion,
+        omAddress, ugi, hadoopConf, NetUtils.getDefaultSocketFactory(
+        hadoopConf), (int) OmUtils.getOMClientRpcTimeOut(conf),
+        connectionRetryPolicy).getProxy();

Review comment:
   Yes. Since failoverOnNetworkException uses TRY_ONCE_THEN_FAIL as its fallback policy and maxFailovers is zero, it behaves like TRY_ONCE_THEN_FAIL: I think shouldRetry will fail in the check below.
   
   
   ```
   if (failovers >= maxFailovers) {
     return new RetryAction(RetryAction.RetryDecision.FAIL, 0,
         "failovers (" + failovers + ") exceeded maximum allowed ("
             + maxFailovers + ")");
   }
   ```
   
   So, technically, we are using it as the equivalent of TRY_ONCE_THEN_FAIL in this scenario.
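
   To make that concrete, here is a self-contained sketch; the class and enum names are hypothetical, and only the guard condition itself mirrors the snippet quoted above. With maxFailovers = 0 the guard fires on the very first evaluation, so the RPC is attempted exactly once.

```java
// Hypothetical sketch of the failover guard discussed above;
// not Hadoop's actual FailoverOnNetworkExceptionRetry class.
public class FailoverGuardSketch {

  enum Decision { FAIL, FAILOVER_AND_RETRY }

  // Once the number of failovers performed reaches the configured
  // maximum, stop retrying.
  static Decision shouldRetry(int failovers, int maxFailovers) {
    if (failovers >= maxFailovers) {
      return Decision.FAIL;
    }
    return Decision.FAILOVER_AND_RETRY;
  }

  public static void main(String[] args) {
    // maxFailovers = 0: the first check already fails, i.e. the call
    // is tried exactly once -- the TRY_ONCE_THEN_FAIL behaviour.
    System.out.println(shouldRetry(0, 0)); // FAIL
    // maxFailovers = 1 would allow a single failover to another OM.
    System.out.println(shouldRetry(0, 1)); // FAILOVER_AND_RETRY
  }
}
```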





