Zesheng Wu created HDFS-7068:
--------------------------------

             Summary: Support multiple block placement policies
                 Key: HDFS-7068
                 URL: https://issues.apache.org/jira/browse/HDFS-7068
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
    Affects Versions: 2.5.1
            Reporter: Zesheng Wu
            Assignee: Zesheng Wu


According to the code, the current HDFS implementation supports only a single block 
placement policy for the whole cluster, which is BlockPlacementPolicyDefault by default.
The default policy is sufficient for most circumstances, but it does not work well in 
some special cases.

For example, on a shared cluster we may want to erasure encode all the files under 
certain specified directories, so the files under these directories need a new 
placement policy, while at the same time other files keep using the default placement 
policy. For this, HDFS needs to support multiple placement policies.

One straightforward idea is to keep the default placement policy configured as the 
cluster-wide default, and let users specify a customized placement policy through 
extended attributes (xattrs). When HDFS chooses replica targets, it first checks for a 
customized placement policy; if none is specified, it falls back to the default one.
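
Just to make the idea concrete, below is a rough, hypothetical sketch of the lookup. 
The PlacementPolicyResolver class, the PlacementPolicy interface, and the xattr key 
user.hdfs.block.placement.policy are all illustrative names for this sketch, not 
existing HDFS APIs.

{code:java}
// Hypothetical, self-contained sketch of the proposed xattr-based lookup.
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PlacementPolicyResolver {

  /** Stand-in for the real block placement policy abstraction. */
  public interface PlacementPolicy {
    // Choose numReplicas target nodes for a block of the given file.
    String[] chooseTargets(String src, int numReplicas);
  }

  // Illustrative xattr key carrying the customized policy class name.
  private static final String POLICY_XATTR = "user.hdfs.block.placement.policy";

  private final PlacementPolicy defaultPolicy;
  // Cache of already-instantiated customized policies, keyed by class name.
  private final Map<String, PlacementPolicy> cache = new ConcurrentHashMap<>();

  public PlacementPolicyResolver(PlacementPolicy defaultPolicy) {
    this.defaultPolicy = defaultPolicy;
  }

  /**
   * Pick the policy for a file: if the file (or its directory) carries the
   * policy xattr, load and use that class; otherwise fall back to the default.
   */
  public PlacementPolicy resolve(Map<String, byte[]> xattrs) {
    byte[] value = (xattrs == null) ? null : xattrs.get(POLICY_XATTR);
    if (value == null) {
      return defaultPolicy;                      // no customized policy set
    }
    String className = new String(value, StandardCharsets.UTF_8);
    return cache.computeIfAbsent(className, this::load);
  }

  private PlacementPolicy load(String className) {
    try {
      return (PlacementPolicy) Class.forName(className)
          .getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      return defaultPolicy;                      // unknown class: use default
    }
  }
}
{code}

With something along these lines, the resolver would be consulted wherever replica 
targets are chosen, and the directories to be erasure encoded would simply carry the 
xattr naming the customized policy class.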

Any thoughts?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
