[ https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-4305:
------------------------------

    Attachment: hdfs-4305-1.patch

Took a hack at this, patch attached.

I set the default for the min block size to 0. Going non-zero would, I think, 
break a lot of tests, since they set small block sizes, and it would also be 
an incompatible change.

For the max number of blocks per file, I chose 1 million; with the default 
64MB block size, that works out to a 64TB file.
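
To make the two checks concrete, here is a rough standalone sketch of the 
validation logic. The class name, method names, and config keys below are 
placeholders of mine and may not match what the patch actually uses; in the 
patch the checks would live in the NameNode itself (e.g. FSNamesystem):

    // BlockLimitChecker.java -- illustrative sketch only; all names are placeholders
    import java.io.IOException;

    public class BlockLimitChecker {
      // Assumed config keys and defaults, mirroring the values described above.
      public static final String MIN_BLOCK_SIZE_KEY =
          "dfs.namenode.fs-limits.min-block-size";
      public static final long MIN_BLOCK_SIZE_DEFAULT = 0L; // 0 disables the check

      public static final String MAX_BLOCKS_PER_FILE_KEY =
          "dfs.namenode.fs-limits.max-blocks-per-file";
      public static final long MAX_BLOCKS_PER_FILE_DEFAULT = 1000000L;

      private final long minBlockSize;
      private final long maxBlocksPerFile;

      public BlockLimitChecker(long minBlockSize, long maxBlocksPerFile) {
        this.minBlockSize = minBlockSize;
        this.maxBlocksPerFile = maxBlocksPerFile;
      }

      // Called on file create: reject block sizes below the configured minimum.
      public void verifyBlockSize(long requestedBlockSize) throws IOException {
        if (requestedBlockSize < minBlockSize) {
          throw new IOException("Specified block size " + requestedBlockSize
              + " is less than the configured minimum " + minBlockSize);
        }
      }

      // Called on addBlock: reject once the file has hit the block-count cap.
      public void verifyBlockCount(int currentBlockCount) throws IOException {
        if (currentBlockCount >= maxBlocksPerFile) {
          throw new IOException("File already has " + currentBlockCount
              + " blocks; the configured maximum is " + maxBlocksPerFile);
        }
      }
    }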

Feedback appreciated, especially if these defaults should be altered.
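
For anyone who wants to try it out: assuming the key names in the sketch 
above (again, placeholders), the limits would be set in hdfs-site.xml along 
these lines, with illustrative values:

    <property>
      <name>dfs.namenode.fs-limits.min-block-size</name>
      <!-- illustrative: reject requested block sizes under 1MB -->
      <value>1048576</value>
    </property>
    <property>
      <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
      <value>1000000</value>
    </property>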
                
> Add a configurable limit on number of blocks per file, and min block size
> -------------------------------------------------------------------------
>
>                 Key: HDFS-4305
>                 URL: https://issues.apache.org/jira/browse/HDFS-4305
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 1.0.4, 3.0.0, 2.0.2-alpha
>            Reporter: Todd Lipcon
>            Assignee: Andrew Wang
>            Priority: Minor
>         Attachments: hdfs-4305-1.patch
>
>
> We recently had an issue where a user set the block size very, very low and 
> managed to create a single file with hundreds of thousands of blocks. This 
> caused problems with the edit log, since the OP_ADD op was so large 
> (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To 
> prevent users from making such mistakes, we should:
> - introduce a configurable minimum block size, below which requests are 
> rejected
> - introduce a configurable maximum number of blocks per file, above which 
> requests to add another block are rejected (with a suitably high default so 
> as not to prevent legitimate large files)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
