[ https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235816#comment-14235816 ]
Chris Nauroth commented on HDFS-7473:
-------------------------------------

Thank you for the patch, [~ajisakaa]. However, I think we actually need to allow 0; the intended behavior of 0 is to skip enforcement of the max directory item check. This was broken in HDFS-6102, which shipped in 2.4.0, so this is a regression from prior versions. We need to restore the old behavior.

> Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-7473
>                 URL: https://issues.apache.org/jira/browse/HDFS-7473
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: documentation
>    Affects Versions: 2.4.0, 2.5.2
>            Reporter: Jason Keller
>            Assignee: Akira AJISAKA
>              Labels: newbie
>         Attachments: HDFS-7473-001.patch
>
>
> When setting dfs.namenode.fs-limits.max-directory-items to 0 in hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater than 6400000" is produced. However, the documentation shows that 0 is a valid setting for dfs.namenode.fs-limits.max-directory-items, turning the check off.
> Looking into the code in hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java shows that the culprit is
> Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS, "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
> This checks whether maxDirItems is greater than 0. Since 0 is not greater than 0, the check fails and the exception is thrown.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
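A minimal sketch of the kind of fix being discussed: relax the lower bound in the quoted Preconditions check from `> 0` to `>= 0`, and treat 0 as "enforcement disabled". The class and method names here are hypothetical stand-ins for the real FSDirectory code; only the config key, the bound 6400000, and the error message come from the report above.

```java
// Hypothetical illustration of the corrected bound check (not the actual
// FSDirectory patch). Assumes 0 means "skip the max directory item check".
public class MaxDirItemsCheck {

    // Upper bound taken from the error message in the report (6400000).
    static final int MAX_DIR_ITEMS = 6400000;

    // Corrected validation: >= 0 allows 0, matching the documented behavior.
    static void validate(int maxDirItems) {
        if (maxDirItems < 0 || maxDirItems > MAX_DIR_ITEMS) {
            throw new IllegalArgumentException(
                "Cannot set dfs.namenode.fs-limits.max-directory-items"
                + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
        }
    }

    // 0 disables enforcement entirely; any positive value enforces the limit.
    static boolean isEnforced(int maxDirItems) {
        return maxDirItems != 0;
    }

    public static void main(String[] args) {
        validate(0);                                // accepted after the fix
        System.out.println(isEnforced(0));          // enforcement skipped
        System.out.println(isEnforced(1048576));    // enforcement active
    }
}
```

With the original `maxDirItems > 0` condition, `validate(0)` would have thrown the IllegalArgumentException shown in the report, even though the documentation describes 0 as a valid, check-disabling value.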