[ https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15710421#comment-15710421 ]
Hudson commented on HDFS-5517:
------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10921 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/10921/])
HDFS-5517. Lower the default maximum number of blocks per file. (wang: rev
7226a71b1f684f562bd88ee121f1dd7aa8b73816)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>             Fix For: 3.0.0-alpha2
>
>         Attachments: HDFS-5517.002.patch, HDFS-5517.003.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set
> the default to 1MM (one million). In practice this limit is so high as to
> never be hit, whereas we know that an individual file with tens of thousands
> of blocks can cause problems. We should lower the default value, in my
> opinion to 10k.
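[Editor's note] For a sense of scale: with the stock dfs.blocksize of 128 MB,
a 10k-block cap still permits single files of roughly 1.25 TB, so the lowered
default mainly guards against pathological files built from very many small
blocks. Below is a minimal sketch of how an operator could override the cap in
hdfs-site.xml, assuming the property name introduced by HDFS-4305
(dfs.namenode.fs-limits.max-blocks-per-file); the exact committed default
lives in the patched hdfs-default.xml and DFSConfigKeys.java listed above.

  <property>
    <!-- Per-file block cap enforced by the NameNode. HDFS-5517 lowers the
         shipped default; the value below restores the old ~1MM ceiling and
         is shown only as an illustration, not a recommendation. -->
    <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
    <value>1048576</value>
  </property>

The NameNode reads fs-limits settings at startup, so an override like this
typically requires a NameNode restart to take effect.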