[ 
https://issues.apache.org/jira/browse/HDFS-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12044:
---------------------------------
    Attachment: HDFS-12044.03.patch

Thanks for the review, Andrew.

As you suggested, in the latest patch, {{xmitsInProgress}} now increases for queued 
tasks as well, to throttle the rate at which the NN sends tasks to the DN.  Also, for an 
EC reconstruction task, the xmit count increases by a "weight", currently calculated as 
{{len(sources) + len(targets)}}, to represent the number of network connections the task opens. 

 I feel that this weight does not need to be calculated precisely, as long as it 
represents the relative cost of a reconstruction task (i.e., more 
connections *_usually_* mean more I/O (block size being equal) and more CPU 
(because more data is processed)). 
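
To illustrate the idea (this is a simplified sketch, not the actual patch code; the class and method names here are hypothetical), the weight is just the count of connections a reconstruction task opens, one per source plus one per target:

```java
// Hypothetical sketch of the connection-based xmit weight discussed above.
// Not the real Hadoop code; names are illustrative only.
public class XmitWeightSketch {

    /**
     * Weight of an EC reconstruction task, approximated by the number of
     * network connections it opens: one per source DN plus one per target DN.
     */
    static int reconstructionWeight(int numSources, int numTargets) {
        return numSources + numTargets;
    }

    public static void main(String[] args) {
        // e.g. RS(6,3) recovery of one lost block: read from 6 sources,
        // write to 1 target, so the task counts as 7 xmits instead of 1.
        System.out.println(reconstructionWeight(6, 1));
    }
}
```

Under this scheme a plain replication transfer would still count as a single xmit, while a wide EC recovery consumes proportionally more of the {{maxReplicationStreams}} budget.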

> Mismatch between BlockManager#maxReplicationStreams and 
> ErasureCodingWorker.stripedReconstructionPool pool size causes slow and 
> bursty recovery
> -----------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-12044
>                 URL: https://issues.apache.org/jira/browse/HDFS-12044
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>              Labels: hdfs-ec-3.0-must-do
>         Attachments: HDFS-12044.00.patch, HDFS-12044.01.patch, 
> HDFS-12044.02.patch, HDFS-12044.03.patch
>
>
> {{ErasureCodingWorker#stripedReconstructionPool}} uses {{corePoolSize=2}} 
> and {{maxPoolSize=8}} by default, and it rejects additional tasks once its 
> queue is full.
> The problem arises when {{BlockManager#maxReplicationStreams}} is larger than 
> {{ErasureCodingWorker#stripedReconstructionPool}}'s {{corePoolSize}}/{{maxPoolSize}}, 
> for example, {{maxReplicationStreams=20}} with {{corePoolSize=2, 
> maxPoolSize=8}}.  Meanwhile, the NN sends up to {{maxTransfer}} reconstruction 
> tasks to a DN on each heartbeat, calculated in {{FSNamesystem}}:
> {code}
> final int maxTransfer = blockManager.getMaxReplicationStreams() - 
> xmitsInProgress;
> {code}
> However, at any given time, 
> {{ErasureCodingWorker#stripedReconstructionPool}} only accounts for 2 {{xmitsInProgress}}. 
> So on each 3-second heartbeat, the NN sends about {{20 - 2 = 18}} reconstruction 
> tasks to the DN, and the DN throws most of them away if there are already 8 tasks 
> in the queue. The NN then takes longer to re-discover that these blocks are 
> under-replicated and to schedule new tasks.
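
The arithmetic in the description above can be sketched as follows (a back-of-the-envelope model only; the variable names mirror the example numbers in this issue, not actual Hadoop code, and it assumes an empty task queue at heartbeat time):

```java
// Hypothetical model of the scheduling mismatch described in this issue.
public class HeartbeatMathSketch {
    public static void main(String[] args) {
        int maxReplicationStreams = 20; // BlockManager#maxReplicationStreams
        int xmitsInProgress = 2;        // only the 2 running pool threads count
        int queueCapacity = 8;          // stripedReconstructionPool queue size

        // NN schedules this many reconstruction tasks per heartbeat.
        int maxTransfer = maxReplicationStreams - xmitsInProgress;

        // DN can only queue up to queueCapacity; the remainder are rejected
        // and must wait for the NN to reschedule them later.
        int rejected = Math.max(0, maxTransfer - queueCapacity);

        System.out.println(maxTransfer + " scheduled, " + rejected + " rejected");
    }
}
```

Counting queued tasks (and weighting EC tasks) in {{xmitsInProgress}} shrinks {{maxTransfer}} toward what the DN can actually absorb, which is the motivation for the patch.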



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
