[ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277909#comment-14277909 ]

Andrew Wang commented on HDFS-7411:
-----------------------------------

bq. Why is blocks.per.interval "more powerful" than blocks per minute?

I don't think the end goal is to achieve a certain rate per minute. Rather, 
it's about how often the DecomManager wakes up, and how much work it does each 
time it wakes. This tunes latency vs. throughput: a short pause gives better 
latency, a long run gives better throughput. This can't be expressed by 
blocks.per.minute alone, since a high blocks.per.minute could mean waking up 
very often to do a little work, or very occasionally to do a lot of work.

It also fixes the timescale to "per minute", which naively implies that it'd 
be okay to wake up once a minute and do a minute's worth of work. But maybe 
the user wants to see progress within a few seconds rather than a minute. 
Without a tunable interval, that flexibility is gone.
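To make the two-knob point concrete, here is a toy sketch (not Hadoop code; the method and parameter names are illustrative) of why interval plus blocks-per-interval carries more information than a single blocks-per-minute rate: the same average rate can correspond to very different worst-case latencies.

```java
// Toy model of the pacing discussed above: the monitor wakes every
// intervalMs and examines at most blocksPerInterval blocks per wake-up.
public class DecomPacing {

    // Average throughput (blocks per minute) implied by the two knobs.
    static double blocksPerMinute(long intervalMs, int blocksPerInterval) {
        return blocksPerInterval * (60_000.0 / intervalMs);
    }

    // Worst-case wait before a newly queued block is first examined:
    // bounded by the wake-up interval.
    static long worstCaseLatencyMs(long intervalMs) {
        return intervalMs;
    }

    public static void main(String[] args) {
        // Both settings average 30,000 blocks/minute, but the first
        // responds within a second while the second can stall a minute.
        System.out.printf("%.0f blocks/min, %d ms worst-case latency%n",
                blocksPerMinute(1_000, 500), worstCaseLatencyMs(1_000));
        System.out.printf("%.0f blocks/min, %d ms worst-case latency%n",
                blocksPerMinute(60_000, 30_000), worstCaseLatencyMs(60_000));
    }
}
```

With only blocks.per.minute, these two configurations would be indistinguishable.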

The event-triggered idea is also something I considered, but even then we'd 
still need to do the full scan at the start of decom, which means some kind of 
limiting scheme.

> Refactor and improve decommissioning logic into DecommissionManager
> -------------------------------------------------------------------
>
>                 Key: HDFS-7411
>                 URL: https://issues.apache.org/jira/browse/HDFS-7411
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.5.1
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, 
> hdfs-7411.003.patch, hdfs-7411.004.patch, hdfs-7411.005.patch, 
> hdfs-7411.006.patch
>
>
> Would be nice to split out decommission logic from DatanodeManager to 
> DecommissionManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
