[ 
https://issues.apache.org/jira/browse/HDFS-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14358492#comment-14358492
 ] 

Walter Su commented on HDFS-7068:
---------------------------------

Thanks [~drankye] for enlightening me on the difference between striping EC 
mode and pure EC mode. An extended storage policy is a great idea. Per [comments 
on 
HDFS-7285|https://issues.apache.org/jira/browse/HDFS-7285?focusedCommentId=14357754&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14357754]
, we should first decide how EC fits in with the other storage policies.

[~zhz]
{quote}
The basic logic is just to spread across as many racks as possible based on m 
and k. So maybe we should start with implementing option #1.
{quote}
Could you check out HDFS-7891? That jira already spreads blocks across as many 
racks as possible, and the policy isn't based on m and k. I think they are 
unnecessary for placement.
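
To illustrate the idea, here is a rough, self-contained sketch of spreading 
targets over as many racks as possible. This is not the HDFS-7891 patch and not 
the real BlockPlacementPolicy API; the class, method, and parameter names 
(RackSpreadSketch, chooseTargets, nodesByRack) are made up for the example.

{code:java}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RackSpreadSketch {

  /**
   * Pick {@code numTargets} nodes so that each rack is used at most once
   * until every rack has been touched, then wrap around. This spreads the
   * block group across as many racks as possible, independent of m and k.
   */
  static List<String> chooseTargets(Map<String, List<String>> nodesByRack,
                                    int numTargets) {
    List<String> chosen = new ArrayList<>();
    int round = 0;
    while (chosen.size() < numTargets) {
      boolean progress = false;
      for (List<String> rackNodes : nodesByRack.values()) {
        if (round < rackNodes.size() && chosen.size() < numTargets) {
          chosen.add(rackNodes.get(round));  // one node per rack per round
          progress = true;
        }
      }
      if (!progress) {
        break;  // cluster has fewer usable nodes than requested targets
      }
      round++;
    }
    return chosen;
  }

  public static void main(String[] args) {
    Map<String, List<String>> cluster = new LinkedHashMap<>();
    cluster.put("/rack1", List.of("dn1", "dn2", "dn3"));
    cluster.put("/rack2", List.of("dn4", "dn5"));
    cluster.put("/rack3", List.of("dn6"));
    // 5 targets: every rack is used once before any rack is reused.
    System.out.println(chooseTargets(cluster, 5));
  }
}
{code}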

> Support multiple block placement policies
> -----------------------------------------
>
>                 Key: HDFS-7068
>                 URL: https://issues.apache.org/jira/browse/HDFS-7068
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: namenode
>    Affects Versions: 2.5.1
>            Reporter: Zesheng Wu
>            Assignee: Walter Su
>         Attachments: HDFS-7068.patch
>
>
> According to the code, the current implementation of HDFS only supports one 
> specific block placement policy, which is BlockPlacementPolicyDefault by 
> default.
> The default policy is enough for most circumstances, but under some special 
> circumstances it does not work well.
> For example, on a shared cluster we may want to erasure-code all the files 
> under some specified directories, so the files under those directories need 
> to use a new placement policy.
> But at the same time, other files should still use the default placement 
> policy. Here we need HDFS to support multiple placement policies.
> One plain thought is that the default placement policy stays configured as 
> the default, while HDFS lets the user specify a customized placement policy 
> through extended attributes (xattrs). When HDFS chooses the replica targets, 
> it first checks the customized placement policy; if none is specified, it 
> falls back to the default one. 
> Any thoughts?
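
For the xattr lookup-with-fallback idea in the description above, a minimal 
sketch could look like the following. All names here are assumptions for 
illustration only: the xattr key, the PolicyLookupSketch/resolve helpers, and 
the policy class names are not the attached patch and not the real xattr API.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class PolicyLookupSketch {

  interface PlacementPolicy { String name(); }

  static final PlacementPolicy DEFAULT =
      () -> "BlockPlacementPolicyDefault";
  static final PlacementPolicy EC_SPREAD =
      () -> "BlockPlacementPolicyRackFaultTolerant";  // assumed EC-aware policy

  // Stand-in for the per-directory xattr store; the key mimics an xattr name.
  static final Map<String, String> XATTRS = new HashMap<>();
  static final String POLICY_XATTR = "user.block.placement.policy";

  /** Resolve the policy for a path: use the xattr if set, else the default. */
  static PlacementPolicy resolve(String path) {
    String configured = XATTRS.get(path + "#" + POLICY_XATTR);
    if ("rack-fault-tolerant".equals(configured)) {
      return EC_SPREAD;
    }
    return DEFAULT;  // no xattr (or unknown value): fall back to the default
  }

  public static void main(String[] args) {
    XATTRS.put("/ec-data#" + POLICY_XATTR, "rack-fault-tolerant");
    System.out.println(resolve("/ec-data").name());   // customized policy
    System.out.println(resolve("/user/logs").name()); // default policy
  }
}
{code}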



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
