[ https://issues.apache.org/jira/browse/HADOOP-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tsz Wo Nicholas Sze resolved HADOOP-8502.
-----------------------------------------
Resolution: Not a Problem
If the file is known to be small, it can use a small block size. In this
example, the block size can be set to 16kB; then the put will not trigger the
quota exception.
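For illustration, here is a minimal sketch of writing the file with a per-file 16kB block size through the Java FileSystem API. The destination path and replication factor are assumptions, not taken from the report.
{code}
// Minimal sketch (not from the original report): copy /etc/passwd into HDFS with a
// 16kB per-file block size, so the space quota is charged 16kB * replication per
// block instead of a full default block. The destination path and replication
// factor below are illustrative assumptions.
import java.io.FileInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class SmallBlockPut {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path dst = new Path("/user/esammer/quota-test/passwd"); // hypothetical destination
    long blockSize = 16 * 1024;   // 16kB; must be a multiple of the 512-byte checksum chunk
    short replication = 3;        // assumed cluster default
    int bufferSize = conf.getInt("io.file.buffer.size", 4096);

    // FileSystem.create accepts a per-file block size.
    FSDataOutputStream out = fs.create(dst, true, bufferSize, replication, blockSize);
    IOUtils.copyBytes(new FileInputStream("/etc/passwd"), out, conf, true);
  }
}
{code}
From the shell, passing -D dfs.block.size=16384 (dfs.blocksize on newer releases) to hadoop fs -put should have the same effect.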
Resolving as not-a-problem. Please feel free to reopen if you disagree.
> Quota accounting should be calculated based on actual size rather than block
> size
> ---------------------------------------------------------------------------------
>
> Key: HADOOP-8502
> URL: https://issues.apache.org/jira/browse/HADOOP-8502
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: E. Sammer
>
> When calculating quotas, the block size is used rather than the actual size
> of the file. This limits the granularity of quota enforcement to increments
> of the block size, which is wasteful and limits its usefulness (i.e. it's
> possible to violate the quota in a way that's not at all intuitive).
> {code}
> [esammer@xxx ~]$ hadoop fs -count -q /user/esammer/quota-test
> none inf 1048576 1048576 1 2 0 hdfs://xxx/user/esammer/quota-test
> [esammer@xxx ~]$ du /etc/passwd
> 4 /etc/passwd
> [esammer@xxx ~]$ hadoop fs -put /etc/passwd /user/esammer/quota-test/
> 12/06/09 13:56:16 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException:
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of
> /user/esammer/quota-test is exceeded: quota=1048576 diskspace consumed=384.0m
> ...
> {code}
> Obviously the file in question would only occupy 12KB, not 384MB, and should
> easily fit within the 1MB quota.
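For context, a rough sketch of the arithmetic behind those two figures; the 128MB block size and replication factor of 3 are inferred from the 384.0m in the exception, not stated in the report.
{code}
// Rough sketch of the accounting; the 128MB block size and replication factor of 3
// are assumptions inferred from the "384.0m" figure, not stated in the report.
long blockSize = 128L * 1024 * 1024;            // 128 MB
short replication = 3;

// What the exception reports as charged against the space quota while the block is open:
long reservedAtWrite = blockSize * replication; // 402653184 bytes, i.e. 384 MB

// What the file would actually consume once closed:
long fileSize = 4L * 1024;                      // /etc/passwd, ~4 kB per du
long actualUsage = fileSize * replication;      // ~12 kB, well under the 1 MB quota
{code}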
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)