[ https://issues.apache.org/jira/browse/HDFS-5463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Andrew Wang resolved HDFS-5463.
-------------------------------
    Resolution: Duplicate

As Uma said above, I think this is handled as of 2.1.0 by HDFS-4305. Please re-open if you feel this is incorrect. Thanks Vinay.

> NameNode should limit the number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5463
>                 URL: https://issues.apache.org/jira/browse/HDFS-5463
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Vinay
>            Assignee: Vinay
>
> Currently there is no limit on the number of blocks a user can write to a file,
> and the block size can also be set to the minimum possible value.
> A user can write any number of blocks continuously, which may hurt the
> NameNode's performance and service as the number of blocks in the file grows:
> each time a new block is allocated, all blocks of the file are persisted,
> which can cause serious performance degradation.
> So the proposal is to limit the maximum number of blocks a user can write to a
> file.
> Perhaps 1024 blocks (with a 128 MB block size, the maximum file size would be 128 GB).

--
This message was sent by Atlassian JIRA
(v6.1#6144)
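For reference, the HDFS-4305 change mentioned in the resolution exposes these limits as NameNode settings in hdfs-site.xml. A minimal sketch of how they might be configured is below; the values shown are illustrative, not necessarily the shipped defaults, so check hdfs-default.xml for your release:

```xml
<!-- hdfs-site.xml: per-file limits added by HDFS-4305 (illustrative values) -->
<configuration>
  <property>
    <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
    <value>1048576</value>
    <!-- A write that would allocate more blocks than this for one file
         is rejected by the NameNode. -->
  </property>
  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>1048576</value>
    <!-- Minimum allowed block size in bytes; rejects file creation with a
         pathologically small block size that would inflate the per-file
         block count. -->
  </property>
</configuration>
```

Together these two settings bound both the block count and the block size per file, which addresses the two failure modes described in the report below.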