[ 
https://issues.apache.org/jira/browse/HDFS-6423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6423:
----------------------------

    Attachment: HDFS-6423.001.patch

Update the patch to fix the unit test.

> Diskspace quota usage is wrongly updated when appending data from partial 
> block
> -------------------------------------------------------------------------------
>
>                 Key: HDFS-6423
>                 URL: https://issues.apache.org/jira/browse/HDFS-6423
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Jing Zhao
>            Assignee: Jing Zhao
>         Attachments: HDFS-6423.000.patch, HDFS-6423.001.patch
>
>
> When appending new data to a file whose last block is a partial block, the 
> diskspace quota usage is not correctly updated. For example, suppose the block 
> size is 1024 bytes and a file has a size of 1536 bytes (1.5 blocks). If we then 
> append another 1024 bytes to the file, the diskspace usage for this file will 
> not be updated to (2560 * replication) as expected, but to (2048 * replication).
> The cause of the issue is that in FSNamesystem#commitOrCompleteLastBlock, we 
> have 
> {code}
>     // Adjust disk space consumption if required
>     final long diff = fileINode.getPreferredBlockSize() - commitBlock.getNumBytes();
>     if (diff > 0) {
>       try {
>         String path = fileINode.getFullPathName();
>         dir.updateSpaceConsumed(path, 0, -diff * fileINode.getFileReplication());
>       } catch (IOException e) {
>         LOG.warn("Unexpected exception while updating disk space.", e);
>       }
>     }
> {code}
> This code assumes that the last block of the file has never been completed 
> before, and is therefore always counted at the preferred block size in quota 
> computation.
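To make the numbers in the description concrete, here is a minimal arithmetic sketch (not HDFS code; the class and method names are hypothetical, and replication is assumed to be 1). It models how subtracting the already-counted bytes of the old partial block a second time produces exactly the 2048 figure observed:

```java
// Hypothetical sketch of the quota arithmetic from the report; not HDFS code.
public class QuotaSketch {
    static final long BLOCK_SIZE = 1024;
    static final long REPLICATION = 1; // assumed 1 for readability

    // Quota usage the report expects: actual bytes times replication.
    static long expectedUsage(long fileSize, long appended) {
        return (fileSize + appended) * REPLICATION;
    }

    // Usage observed per the report: the bytes of the old partial block,
    // which were already counted at their actual size, are effectively
    // subtracted again by the adjustment code.
    static long buggyUsage(long fileSize, long appended) {
        long lastPartial = fileSize % BLOCK_SIZE; // 512 bytes already counted
        return expectedUsage(fileSize, appended) - lastPartial * REPLICATION;
    }

    public static void main(String[] args) {
        System.out.println("expected = " + expectedUsage(1536, 1024)); // 2560
        System.out.println("observed = " + buggyUsage(1536, 1024));    // 2048
    }
}
```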



--
This message was sent by Atlassian JIRA
(v6.2#6252)
