[ https://issues.apache.org/jira/browse/HDFS-15856?focusedWorklogId=559163&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-559163 ]

ASF GitHub Bot logged work on HDFS-15856:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 01/Mar/21 03:15
            Start Date: 01/Mar/21 03:15
    Worklog Time Spent: 10m 
      Work Description: qizhu-lucas commented on a change in pull request #2721:
URL: https://github.com/apache/hadoop/pull/2721#discussion_r584422780



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##########
@@ -1263,14 +1265,18 @@ private boolean processDatanodeOrExternalError() throws 
IOException {
       packetSendTime.clear();
     }
 
-    // If we had to recover the pipeline five times in a row for the
+    // If we had to recover the pipeline more than the value
+    // defined by maxPipelineRecoveryRetries in a row for the
     // same packet, this client likely has corrupt data or corrupting
     // during transmission.
-    if (!errorState.isRestartingNode() && ++pipelineRecoveryCount > 5) {
+    if (!errorState.isRestartingNode() && ++pipelineRecoveryCount >
+        maxPipelineRecoveryRetries) {
       LOG.warn("Error recovering pipeline for writing " +
-          block + ". Already retried 5 times for the same packet.");
+          block + ". Already retried " + maxPipelineRecoveryRetries
+          + " times for the same packet.");
       lastException.set(new IOException("Failing write. Tried pipeline " +
-          "recovery 5 times without success."));
+          "recovery "+ maxPipelineRecoveryRetries

Review comment:
       Thanks @jojochuang for the review.
   Fixed it in the latest pull request.
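
The diff above replaces the hard-coded limit of 5 with a `maxPipelineRecoveryRetries` field. A minimal standalone sketch of that idea follows; note that the config key name, the `Properties`-based lookup, and the `PipelineRecoveryPolicy` class here are illustrative assumptions for this sketch, not Hadoop's actual API (the real code reads the value via the HDFS client configuration):

```java
// Sketch of the HDFS-15856 change in isolation: the hard-coded limit of 5
// pipeline-recovery retries becomes a value read from configuration.
// The key name and Properties-based lookup are assumptions, not Hadoop's API.
import java.util.Properties;

public class PipelineRecoveryPolicy {
  // Hypothetical key; the real HDFS client key may be named differently.
  static final String MAX_RETRIES_KEY =
      "dfs.client.pipeline.recovery.max.retries";
  static final int DEFAULT_MAX_RETRIES = 5; // previously hard-coded in DataStreamer

  private final int maxPipelineRecoveryRetries;
  private int pipelineRecoveryCount = 0;

  public PipelineRecoveryPolicy(Properties conf) {
    this.maxPipelineRecoveryRetries = Integer.parseInt(
        conf.getProperty(MAX_RETRIES_KEY, String.valueOf(DEFAULT_MAX_RETRIES)));
  }

  /**
   * Mirrors the condition in processDatanodeOrExternalError(): counts one more
   * recovery attempt and reports whether the configured limit is now exceeded.
   * As in the patch, the short-circuit means restarting-node recoveries are
   * not counted against the limit.
   */
  public boolean recordRecoveryAndCheckExceeded(boolean restartingNode) {
    return !restartingNode
        && ++pipelineRecoveryCount > maxPipelineRecoveryRetries;
  }
}
```

With no key set, the policy behaves exactly like the old hard-coded check (fail on the sixth consecutive recovery of the same packet); setting the key lowers or raises that threshold per cluster.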





----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 559163)
    Time Spent: 1h 40m  (was: 1.5h)

> Make the number of pipeline recovery retries for the same packet before 
> closing the stream configurable.
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-15856
>                 URL: https://issues.apache.org/jira/browse/HDFS-15856
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Qi Zhu
>            Assignee: Qi Zhu
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Currently, recovering the pipeline five times in a row for the same packet 
> closes the stream. This threshold should be configurable to suit the needs 
> of different clusters.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
