[ 
https://issues.apache.org/jira/browse/HDFS-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274126#comment-17274126
 ] 

huhaiyang edited comment on HDFS-15798 at 1/29/21, 3:07 AM:
------------------------------------------------------------

Thanks for the reviews, [~sodonnell]   

{quote}

If I understand this correctly, this problem can only occur if there are 
several tasks to process in the loop:

1. First pass around the loop, sets xmitsSubmitted = X, say 5.

2. This is used to increment the DN XmitsInProgress.

3. Next pass around the loop, the exception is thrown. As xmitsSubmitted was 
never reset to zero, the DN XmitsInProgress is decremented by the previous 
value from the first pass (5 in this example).

{quote}

Just as you said, this problem can only occur if there are several tasks to 
process in the loop.
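The stale-value sequence described above can be reproduced with a short, self-contained simulation. Everything here is a hypothetical stand-in (the class `XmitsBugSketch`, the method `runBuggyLoop`, and the fixed xmit cost of 5), not the real DataNode/ErasureCodingWorker API; `xmitsSubmitted` is declared outside the loop, matching the failure mode in the quoted steps:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the buggy loop shape; not the real HDFS code.
public class XmitsBugSketch {

  // xmitsSubmitted lives OUTSIDE the loop, so a task that throws before
  // recomputing it decrements the counter by the previous task's value.
  static int runBuggyLoop(boolean[] taskFails) {
    AtomicInteger xmitsInProgress = new AtomicInteger(0);
    int submittedTotal = 0;
    int xmitsSubmitted = 0; // never reset per iteration: the bug
    for (boolean fails : taskFails) {
      try {
        if (fails) {
          // stands in for the IllegalArgumentException thrown while
          // constructing the reconstruction task
          throw new IllegalArgumentException("bad task");
        }
        xmitsSubmitted = 5; // e.g. Math.max((int)(xmits * weight), 1)
        xmitsInProgress.addAndGet(xmitsSubmitted);
        submittedTotal += xmitsSubmitted;
      } catch (Throwable e) {
        // decrements by the STALE value from the previous pass
        xmitsInProgress.addAndGet(-xmitsSubmitted);
      }
    }
    // each successfully submitted task later decrements its own xmits,
    // as in StripedBlockReconstructor#run()'s finally block
    xmitsInProgress.addAndGet(-submittedTotal);
    return xmitsInProgress.get();
  }

  public static void main(String[] args) {
    // task 1 succeeds (xmits 5), task 2 throws before submission:
    // the counter ends at -5 instead of 0
    System.out.println(runBuggyLoop(new boolean[] {false, true}));
  }
}
```

With a single task the counter stays consistent; the negative value only appears once a later task fails with a stale `xmitsSubmitted`, which matches the observation that several tasks are needed.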

As you suggested, I updated the patch.

 


was (Author: haiyang hu):
Thanks for the reviews, [~sodonnell] 

As you suggested, I updated the patch.

 

{quote}

If I understand this correctly, this problem can only occur if there are 
several tasks to process in the loop:

1. First pass around the loop, sets xmitsSubmitted = X, say 5.

2. This is used to increment the DN XmitsInProgress.

3. Next pass around the loop, the exception is thrown. As xmitsSubmitted was 
never reset to zero, the DN XmitsInProgress is decremented by the previous 
value from the first pass (5 in this example).

{quote}

Just as you said, this problem can only occur if there are several tasks to 
process in the loop.


> EC: Reconstruct task failed, and It would be XmitsInProgress of DN has 
> negative number
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-15798
>                 URL: https://issues.apache.org/jira/browse/HDFS-15798
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: huhaiyang
>            Assignee: huhaiyang
>            Priority: Major
>         Attachments: HDFS-15798.001.patch, HDFS-15798.002.patch
>
>
> When an EC reconstruction task fails, the decrementXmitsInProgress call in 
> processErasureCodingTasks subtracts an incorrect (stale) value.
>  As a result, the DN's XmitsInProgress can go negative, which skews how the 
> NN assigns pending tasks based on the ratio between the lengths of the 
> replication and erasure-coded block queues.
> {code:java}
> // 1.ErasureCodingWorker.java
> public void processErasureCodingTasks(
>     Collection<BlockECReconstructionInfo> ecTasks) {
>   for (BlockECReconstructionInfo reconInfo : ecTasks) {
>     int xmitsSubmitted = 0;
>     try {
>       ...
>       // It may throw IllegalArgumentException from task#stripedReader
>       // constructor.
>       final StripedBlockReconstructor task =
>           new StripedBlockReconstructor(this, stripedReconInfo);
>       if (task.hasValidTargets()) {
>         // See HDFS-12044. We increase xmitsInProgress even the task is only
>         // enqueued, so that
>         //   1) NN will not send more tasks than what DN can execute and
>         //   2) DN will not throw away reconstruction tasks, and instead keeps
>         //      an unbounded number of tasks in the executor's task queue.
>         xmitsSubmitted = Math.max((int)(task.getXmits() * xmitWeight), 1);
>         getDatanode().incrementXmitsInProcess(xmitsSubmitted); // 1. task start: increment
>         stripedReconstructionPool.submit(task);
>       } else {
>         LOG.warn("No missing internal block. Skip reconstruction for task:{}",
>             reconInfo);
>       }
>     } catch (Throwable e) {
>       getDatanode().decrementXmitsInProgress(xmitsSubmitted); // 2.2. task failed: decrement
>       LOG.warn("Failed to reconstruct striped block {}",
>           reconInfo.getExtendedBlock().getLocalBlock(), e);
>     }
>   }
> }
> // 2.StripedBlockReconstructor.java
> public void run() {
>   try {
>     initDecoderIfNecessary();
>    ...
>   } catch (Throwable e) {
>     LOG.warn("Failed to reconstruct striped block: {}", getBlockGroup(), e);
>     getDatanode().getMetrics().incrECFailedReconstructionTasks();
>   } finally {
>     float xmitWeight = getErasureCodingWorker().getXmitWeight();
>     // if the xmits is smaller than 1, xmitsSubmitted should be set to 1,
>     // because if it were set to zero, we could not measure the xmits submitted
>     int xmitsSubmitted = Math.max((int) (getXmits() * xmitWeight), 1);
>     getDatanode().decrementXmitsInProgress(xmitsSubmitted); // 2.1. task complete: decrement
>     ...
>   }
> }{code}
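One way to make the decrement safe is to reset xmitsSubmitted at the start of every iteration, so a task that fails before the increment decrements by zero. The sketch below is an assumed illustration of that shape, not the actual HDFS-15798 patch; the class `XmitsFixSketch`, method `runFixedLoop`, and the fixed xmit cost of 5 are invented for the example:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-alone illustration of the per-iteration reset.
public class XmitsFixSketch {

  static int runFixedLoop(boolean[] taskFails) {
    AtomicInteger xmitsInProgress = new AtomicInteger(0);
    int submittedTotal = 0;
    for (boolean fails : taskFails) {
      int xmitsSubmitted = 0; // reset for EVERY task
      try {
        if (fails) {
          throw new IllegalArgumentException("bad task");
        }
        xmitsSubmitted = 5;
        xmitsInProgress.addAndGet(xmitsSubmitted);
        submittedTotal += xmitsSubmitted;
      } catch (Throwable e) {
        // a task that failed before incrementing decrements by zero
        xmitsInProgress.addAndGet(-xmitsSubmitted);
      }
    }
    // submitted tasks eventually finish and decrement their own xmits
    xmitsInProgress.addAndGet(-submittedTotal);
    return xmitsInProgress.get();
  }

  public static void main(String[] args) {
    // a failing task mixed in with successful ones leaves the counter at 0
    System.out.println(runFixedLoop(new boolean[] {false, true, false}));
  }
}
```

Under this shape, XmitsInProgress ends at zero for any mix of failing and succeeding tasks, so the NN's queue-ratio scheduling is unaffected by submission failures.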



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
