[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-04-20 Thread Walter Su (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Su updated HDFS-7068:

Issue Type: Improvement  (was: Sub-task)
Parent: (was: HDFS-8031)

 Support multiple block placement policies
 ------------------------------------------

 Key: HDFS-7068
 URL: https://issues.apache.org/jira/browse/HDFS-7068
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.5.1
Reporter: Zesheng Wu
Assignee: Walter Su

 According to the code, the current implementation of HDFS supports only one 
 block placement policy at a time, which is BlockPlacementPolicyDefault by 
 default.
 The default policy is sufficient for most circumstances, but under some 
 special circumstances it does not work well.
 For example, on a shared cluster we may want to erasure-code all the files 
 under certain directories, so the files under those directories need to use 
 a new placement policy, while other files keep using the default policy.
 Therefore HDFS needs to support multiple placement policies at once.
 One straightforward approach: keep the default placement policy configured 
 as the default, and let users specify a customized placement policy through 
 extended attributes (xattrs). When HDFS chooses replica targets, it first 
 checks for a customized placement policy; if none is specified, it falls 
 back to the default one.
 Any thoughts?
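 To make the idea concrete, here is a minimal self-contained sketch of the 
 xattr lookup with fallback. The resolver class, the xattr key, the registry, 
 and the simplified PlacementPolicy interface are illustrative stand-ins, not 
 actual HDFS APIs:
{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PlacementPolicyResolver {

  /** Simplified stand-in for HDFS's BlockPlacementPolicy. */
  interface PlacementPolicy {
    List<String> chooseTargets(String srcPath, int numReplicas);
  }

  // Hypothetical xattr key under which a custom policy name would be stored.
  static final String POLICY_XATTR = "user.hdfs.placement.policy";

  private final Map<String, PlacementPolicy> registry = new HashMap<>();
  private final PlacementPolicy defaultPolicy;

  PlacementPolicyResolver(PlacementPolicy defaultPolicy) {
    this.defaultPolicy = defaultPolicy;
  }

  /** Register a named policy, e.g. an erasure-coding-aware one. */
  void register(String name, PlacementPolicy policy) {
    registry.put(name, policy);
  }

  /**
   * Resolve the policy for a file from its directory's xattrs: if the
   * policy xattr is set and names a registered policy, use it; otherwise
   * fall back to the default policy.
   */
  PlacementPolicy resolve(Map<String, String> dirXAttrs) {
    String name = dirXAttrs.get(POLICY_XATTR);
    return name == null ? defaultPolicy
                        : registry.getOrDefault(name, defaultPolicy);
  }
}
{code}
 A directory could then be marked through the existing setfattr command, e.g. 
 hdfs dfs -setfattr -n user.hdfs.placement.policy -v ec /data/ec (the xattr 
 name here is hypothetical).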



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-04-20 Thread Walter Su (JIRA)

Walter Su updated HDFS-7068:

Attachment: (was: HDFS-7068.patch)


[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-04-20 Thread Walter Su (JIRA)

Walter Su updated HDFS-7068:

Target Version/s:   (was: HDFS-7285)


[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-03-31 Thread Zhe Zhang (JIRA)

Zhe Zhang updated HDFS-7068:

Parent Issue: HDFS-8031  (was: HDFS-7285)


[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-03-12 Thread Arpit Agarwal (JIRA)

Arpit Agarwal updated HDFS-7068:

Assignee: Walter Su  (was: Arpit Agarwal)


[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-03-10 Thread Walter Su (JIRA)

Walter Su updated HDFS-7068:

Target Version/s: HDFS-7285  (was: 3.0.0)


[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-03-09 Thread Walter Su (JIRA)

Walter Su updated HDFS-7068:

Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-7285


[jira] [Updated] (HDFS-7068) Support multiple block placement policies

2015-03-08 Thread Walter Su (JIRA)

Walter Su updated HDFS-7068:

Attachment: HDFS-7068.patch