[ https://issues.apache.org/jira/browse/HDFS-1751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Daryn Sharp updated HDFS-1751:
------------------------------

    Attachment: HADOOP-7175-2.patch

The minimum component length isn't strictly needed; I only added it for symmetry. I'll remove it if it's deemed completely superfluous.

Added @Override annotations. Converted the tests to JUnit 4.

I'd prefer to keep the specific exception types so that client code can differentiate between errors (parsing exception message strings is fragile). As currently implemented, clients can catch FSLimitException generically, or catch a specific limit exception. The other quota exceptions are modeled this way. Keep in mind that Pig wants to be able to differentiate between errors. Thoughts?

> Intrinsic limits for HDFS files, directories
> --------------------------------------------
>
>                 Key: HDFS-1751
>                 URL: https://issues.apache.org/jira/browse/HDFS-1751
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: data-node
>    Affects Versions: 0.22.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>             Fix For: 0.23.0
>
>         Attachments: HDFS-1751.patch
>
>
> Enforce a configurable limit on:
> - the length of a path component
> - the number of names in a directory
>
> The intention is to prevent a too-long name or a too-full directory. This is
> not about RPC buffers, the length of command lines, etc. There may be good
> reasons for those kinds of limits, but they are not in the intended scope of
> this feature. Consequently, a reasonable implementation might be to extend
> the existing quota checker so that it faults the creation of a name that
> violates the limits. Faulting only new creations avoids the problem of
> existing names or directories that already violate the limits.
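To make the proposed error handling concrete, here is a minimal sketch of the exception hierarchy described in the comment. The class and constructor names (FSLimitException, PathComponentTooLongException, MaxDirectoryItemsExceededException) and the message formats are illustrative assumptions, not taken from the attached patch:

{code:java}
import java.io.IOException;

// Illustrative sketch only; names and messages are assumed, not from the patch.
public abstract class FSLimitException extends IOException {
  protected FSLimitException(String msg) {
    super(msg);
  }

  /** Raised when a new path component exceeds the configured maximum length. */
  public static class PathComponentTooLongException extends FSLimitException {
    public PathComponentTooLongException(int limit, String component) {
      super("The maximum path component name limit of " + limit
          + " is exceeded by: " + component);
    }
  }

  /** Raised when a directory already holds the configured maximum number of names. */
  public static class MaxDirectoryItemsExceededException extends FSLimitException {
    public MaxDirectoryItemsExceededException(int limit, int count) {
      super("The directory item limit of " + limit
          + " is exceeded: items=" + count);
    }
  }
}
{code}

A client such as Pig could then catch MaxDirectoryItemsExceededException to react specifically to a full directory, or catch FSLimitException to treat all intrinsic limit violations uniformly, without parsing message strings.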
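And a minimal sketch, assuming a hypothetical helper class and the convention that a limit of 0 means unlimited, of the enforcement strategy the description proposes: fault only the creation of a new name, so pre-existing violators are left alone. This is not the attached patch, just an illustration of the idea:

{code:java}
// Illustrative sketch, not the attached patch; field names and the
// "0 = unlimited" convention are assumptions for this example.
public class FsLimitChecker {
  private final int maxComponentLength; // 0 = unlimited (assumed convention)
  private final int maxDirItems;        // 0 = unlimited (assumed convention)

  public FsLimitChecker(int maxComponentLength, int maxDirItems) {
    this.maxComponentLength = maxComponentLength;
    this.maxDirItems = maxDirItems;
  }

  /**
   * Called only when a new name is being created, so existing names and
   * directories that already violate the limits are never faulted.
   */
  public void verifyFsLimits(String component, int parentItemCount)
      throws FSLimitException {
    if (maxComponentLength > 0 && component.length() > maxComponentLength) {
      throw new FSLimitException.PathComponentTooLongException(
          maxComponentLength, component);
    }
    if (maxDirItems > 0 && parentItemCount + 1 > maxDirItems) {
      throw new FSLimitException.MaxDirectoryItemsExceededException(
          maxDirItems, parentItemCount + 1);
    }
  }
}
{code}

The two limits would presumably be read from configuration at namenode startup, in the same way the existing quota settings are.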