[ https://issues.apache.org/jira/browse/HDFS-6489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravi Prakash updated HDFS-6489:
-------------------------------
    Attachment: HDFS-6489.007.patch

Andrew! Thanks for the review. Here's a patch with the changes you suggested. I 
am sticking with the conditional for RBW for now because (1) it's the common 
case, and (2) RUR doesn't have {{originalBytesReserved}}.
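
For reference, roughly the shape of that conditional; this is a minimal sketch 
only, assuming a {{releaseReservedSpace}} helper on the volume and an accessor 
for the reserved bytes (names are illustrative, not necessarily the exact patch):

{code:java}
// Minimal sketch, not the actual patch: give back the up-front reservation
// only for RBW replicas. ReplicaUnderRecovery (RUR) never tracked
// originalBytesReserved, so there is nothing to release for it.
if (replicaInfo.getState() == ReplicaState.RBW) {
  // Hypothetical accessor/helper names; the idea is that dfsUsed should end
  // up reflecting only the bytes actually written, not the reservation.
  long reserved = ((ReplicaBeingWritten) replicaInfo).getOriginalBytesReserved();
  volume.releaseReservedSpace(reserved);
}
{code}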

Even with this, the {{dfsUsed}} and {{numBlocks}} accounting still looks broken. 
For example, 
[FsDatasetImpl.removeOldBlock|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java#L2886]
 calls {{decDfsUsedAndNumBlocks}} twice, so even though {{dfsUsed}} is 
correctly decremented, {{numBlocks}} is not. [~brahmareddy], [~vinayrpet], am 
I reading this right?
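
To make the suspected over-decrement concrete, here's a self-contained toy 
model (not the actual {{FsDatasetImpl}} code; the block/meta split is my guess 
at why two calls can still sum {{dfsUsed}} correctly):

{code:java}
// Toy model of the counters: if removing one block calls the decrement twice
// (say, once for the block file and once for the meta file), dfsUsed can sum
// to the right total while numBlocks is decremented twice for a single block.
class Counters {
  long dfsUsed = 100;
  long numBlocks = 1;

  void decDfsUsedAndNumBlocks(long bytes) {
    dfsUsed -= bytes;
    numBlocks--; // runs on every call, not once per removed block
  }

  public static void main(String[] args) {
    Counters c = new Counters();
    c.decDfsUsedAndNumBlocks(90); // block file bytes
    c.decDfsUsedAndNumBlocks(10); // meta file bytes
    System.out.println(c.dfsUsed);   // 0  -> correct
    System.out.println(c.numBlocks); // -1 -> off by one
  }
}
{code}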

To really get this sorted out, we should probably have a unit test framework 
that verifies DN-side accounting is correct across several operations (block 
creation, appends, block transfers from one storage to another, etc.); see the 
sketch below. Unfortunately, I don't think I'll have the cycles for that. 
Sorry :(
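
If someone does pick it up, something along these lines is what I mean. This is 
a rough sketch assuming a MiniDFSCluster test and that 
{{FsDatasetSpi#getDfsUsed}} reflects writes promptly; in practice the 
disk-usage refresh interval and RBW reservations would need handling:

{code:java}
// Rough sketch, not a working test: check that a tiny append moves dfsUsed
// by the bytes written, not by the full block length.
@Test
public void testDfsUsedAfterSmallAppend() throws Exception {
  Configuration conf = new HdfsConfiguration();
  MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
      .numDataNodes(1).build();
  try {
    cluster.waitActive();
    DistributedFileSystem fs = cluster.getFileSystem();
    Path p = new Path("/accounting");
    DFSTestUtil.createFile(fs, p, 60 * 1024 * 1024, (short) 1, 0L);

    long before = cluster.getDataNodes().get(0).getFSDataset().getDfsUsed();
    try (FSDataOutputStream out = fs.append(p)) {
      out.write(new byte[10]); // tiny append
    }
    long after = cluster.getDataNodes().get(0).getFSDataset().getDfsUsed();

    // The invariant we want: dfsUsed grows by the bytes written, not by
    // the block length (which is what this JIRA is about).
    assertEquals(10, after - before);
  } finally {
    cluster.shutdown();
  }
}
{code}
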
Arpit has seemed amenable to [removing 
reservations|https://issues.apache.org/jira/browse/HDFS-9530?focusedCommentId=15248968&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15248968]
 if there is some other alternative. I think we should disable DNs' rejection 
of writes based on reservations until we can be sure our accounting is 
correct. Just my $0.02.

> DFS Used space is not correctly computed on frequent append operations
> ----------------------------------------------------------------------
>
>                 Key: HDFS-6489
>                 URL: https://issues.apache.org/jira/browse/HDFS-6489
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.2.0, 2.7.1, 2.7.2
>            Reporter: stanley shi
>            Assignee: Weiwei Yang
>         Attachments: HDFS-6489.001.patch, HDFS-6489.002.patch, 
> HDFS-6489.003.patch, HDFS-6489.004.patch, HDFS-6489.005.patch, 
> HDFS-6489.006.patch, HDFS-6489.007.patch, HDFS6489.java
>
>
> The current implementation of the Datanode increases the DFS used space on 
> each block write operation. This is correct in most scenarios (creating a 
> new file), but sometimes it behaves incorrectly (appending small amounts of 
> data to a large block).
> For example, I have a file with only one block (say, 60M). Then I try to 
> append to it very frequently, but each time I append only 10 bytes. 
> On each append, DFS used is increased by the length of the 
> block (60M), not the actual data length (10 bytes).
> Consider a scenario where I use many clients to append concurrently to a 
> large number of files (1000+). Assume the block size is 32M (half the 
> default value); then DFS used will be increased by 1000*32M = 32G on each 
> round of appends to the files, even though I only wrote 10K bytes. This 
> causes the datanode to report insufficient disk space on data writes.
> {quote}2014-06-04 15:27:34,719 INFO 
> org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock  
> BP-1649188734-10.37.7.142-1398844098971:blk_1073742834_45306 received 
> exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: 
> Insufficient space for appending to FinalizedReplica, blk_1073742834_45306, 
> FINALIZED{quote}
> But the actual disk usage:
> {quote}
> [root@hdsh143 ~]# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda3              16G  2.9G   13G  20% /
> tmpfs                 1.9G   72K  1.9G   1% /dev/shm
> /dev/sda1              97M   32M   61M  35% /boot
> {quote}



