snvijaya commented on code in PR #5711:
URL: https://github.com/apache/hadoop/pull/5711#discussion_r1217958883


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##########
@@ -347,9 +350,11 @@ private boolean executeHttpOperation(final int retryCount,
 
     LOG.debug("HttpRequest: {}: {}", operationType, httpOperation);
 
-    if (client.getRetryPolicy().shouldRetry(retryCount, httpOperation.getStatusCode())) {
-      int status = httpOperation.getStatusCode();
-      failureReason = RetryReason.getAbbreviation(null, status, httpOperation.getStorageErrorMessage());
+    int status = httpOperation.getStatusCode();
+    failureReason = RetryReason.getAbbreviation(null, status, httpOperation.getStorageErrorMessage());
+    retryPolicy = client.getRetryPolicy(failureReason);
+
+    if (retryPolicy.shouldRetry(retryCount, httpOperation.getStatusCode())) {

Review Comment:
   This overlaps with Anmol's question later: the call that happens around line 344
   `intercept.updateMetrics(operationType, httpOperation);`
   will treat socket exceptions as throttling. When a read timeout happens, we currently know of cases where it may indeed be better to count it that way.
   But if the failure is due to a connection timeout, we don't want ThrottlingInterceptor to take it as an input for incrementing the throttling-related metrics.
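   Just to illustrate the idea (a rough sketch, not a suggestion of the exact code): one option would be to gate the metrics update on the failure reason that is already being computed for retry-policy selection. The constant name `CONNECTION_TIMEOUT_ABBREVIATION` and the exact placement are only assumptions for the sake of the example:
   ```java
   int status = httpOperation.getStatusCode();
   failureReason = RetryReason.getAbbreviation(null, status,
       httpOperation.getStorageErrorMessage());

   // Sketch only: skip the throttling metrics update when the failure was
   // classified as a connection timeout, so ThrottlingInterceptor does not
   // count it as a throttling signal. CONNECTION_TIMEOUT_ABBREVIATION is an
   // assumed constant, used here just to illustrate the distinction.
   if (!CONNECTION_TIMEOUT_ABBREVIATION.equals(failureReason)) {
     intercept.updateMetrics(operationType, httpOperation);
   }

   retryPolicy = client.getRetryPolicy(failureReason);
   if (retryPolicy.shouldRetry(retryCount, status)) {
     // existing retry handling
   }
   ```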



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
