I agree: good work, Konstantin! I'll file new feature requests for the sticky bits from the discussion.
On 4/10/06, Doug Cutting (JIRA) <[EMAIL PROTECTED]> wrote:
>
> [ http://issues.apache.org/jira/browse/HADOOP-51?page=all ]
>
> Doug Cutting resolved HADOOP-51:
> --------------------------------
>
>     Resolution: Fixed
>
> I just committed this. I fixed the comment on dfs.replication.min. I
> also added a message to CHANGES.txt.
>
> Thanks, Konstantin!
>
> Bryan: I think the issues you raise bear further discussion that I do not
> wish to stifle. For example, we may someday want to be able to specify
> more than 2^16 replications, and we may wish to handle replication requests
> outside of the configured limits differently. But, for now, I think the
> patch fixes this bug and that those issues can be addressed through
> subsequent bugs as we gain experience. So please file new bugs for any
> related issues that are important to you.
>
> > per-file replication counts
> > ---------------------------
> >
> >          Key: HADOOP-51
> >          URL: http://issues.apache.org/jira/browse/HADOOP-51
> >      Project: Hadoop
> >         Type: New Feature
> >   Components: dfs
> >     Versions: 0.2
> >     Reporter: Doug Cutting
> >     Assignee: Konstantin Shvachko
> >      Fix For: 0.2
> >  Attachments: Replication.patch
> >
> > It should be possible to specify different replication counts for
> > different files. Perhaps an option when creating a new file should be the
> > desired replication count. MapReduce should take advantage of this feature
> > so that job.xml and job.jar files, which are frequently accessed by lots
> > of machines, are more highly replicated than large data files.
>
> --
> This message is automatically generated by JIRA.
> -
> If you think it was sent incorrectly contact one of the administrators:
>    http://issues.apache.org/jira/secure/Administrators.jspa
> -
> For more information on JIRA, see:
>    http://www.atlassian.com/software/jira

--
Bryan A. Pendleton
Ph: (877) geek-1-bp
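For archive readers: the "configured limits" and the dfs.replication.min comment that Doug mentions live in the site configuration. A minimal sketch of the relevant properties, assuming the 0.x-era hadoop-site.xml layout and the property names dfs.replication, dfs.replication.min, and dfs.replication.max (values here are illustrative; check the shipped hadoop-default.xml for the actual defaults in your version):

```xml
<configuration>
  <!-- Default replication factor used when a file is created
       without an explicit per-file replication count. -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <!-- Lower bound on any requested per-file replication; the comment
       on this property is what the committed patch corrected. -->
  <property>
    <name>dfs.replication.min</name>
    <value>1</value>
  </property>

  <!-- Upper bound on any requested per-file replication. Requests
       outside [min, max] are what the thread discusses handling
       differently in the future. -->
  <property>
    <name>dfs.replication.max</name>
    <value>512</value>
  </property>
</configuration>
```

With per-file counts in place, a job client can then ask for higher replication on hot files like job.xml and job.jar while leaving large data files at the default.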
