[ https://issues.apache.org/jira/browse/HDFS-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16399788#comment-16399788 ]
Íñigo Goiri commented on HDFS-13088:
------------------------------------

I guess there is no way to add new fields to the header. I think keeping backwards compatibility is pretty important. In addition, we should support over-replication factors higher than 8.

> Allow HDFS files/blocks to be over-replicated.
> ----------------------------------------------
>
>                 Key: HDFS-13088
>                 URL: https://issues.apache.org/jira/browse/HDFS-13088
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Virajith Jalaparti
>            Assignee: Virajith Jalaparti
>            Priority: Major
>         Attachments: HDFS-13088.001.patch
>
>
> This JIRA is to add a per-file "over-replication" factor to HDFS. As mentioned in HDFS-13069, the over-replication factor is the number of excess replicas allowed to exist for a file or block. This is beneficial when an application deems additional replicas of a file necessary. In the case of HDFS-13069, it would allow copies of data in PROVIDED storage to be cached locally in HDFS in a read-through manner.
> The Namenode will not proactively meet the over-replication, i.e., it does not schedule replications if the number of replicas for a block is less than (replication factor + over-replication factor), as long as they are more than the replication factor of the file.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
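The replication semantics described in the issue can be sketched in a few lines. This is a minimal illustration of the decision rule only, not actual Namenode code; the class and method names are hypothetical.

```java
// Hypothetical sketch of the decision rule described in HDFS-13088:
// the Namenode schedules new replications only when live replicas fall
// below the base replication factor; replicas between the base factor and
// (replication + overReplication) are tolerated but never created
// proactively, and anything above that sum is excess.
public final class OverReplicationCheck {

    /** Schedule replication only when below the base replication factor. */
    public static boolean shouldScheduleReplication(int liveReplicas,
                                                    int replication,
                                                    int overReplication) {
        return liveReplicas < replication;
    }

    /** Replicas above (replication + overReplication) are excess. */
    public static boolean isExcess(int liveReplicas,
                                   int replication,
                                   int overReplication) {
        return liveReplicas > replication + overReplication;
    }

    public static void main(String[] args) {
        // replication = 3, over-replication = 2
        System.out.println(shouldScheduleReplication(2, 3, 2)); // true: below base factor
        System.out.println(shouldScheduleReplication(4, 3, 2)); // false: tolerated excess
        System.out.println(isExcess(6, 3, 2));                  // true: above 3 + 2
    }
}
```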