[ https://issues.apache.org/jira/browse/HDFS-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834633#comment-17834633 ]

ASF GitHub Bot commented on HDFS-17456:
---------------------------------------

fuchaohong commented on code in PR #6713:
URL: https://github.com/apache/hadoop/pull/6713#discussion_r1554843103


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java:
##########
@@ -1291,7 +1291,9 @@ public ReplicaInPipeline append(String bpid, ReplicaInfo replicaInfo,
 
     // rename meta file to rbw directory
     // rename block file to rbw directory
+    long oldReplicaLength = replicaInfo.getMetadataLength() +
+        replicaInfo.getBlockDataLength();

Review Comment:
   Thanks @ZanderXu for your reviews; I have edited accordingly.
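
For context, here is a minimal, self-contained model of the accounting
problem the patch addresses. This is an illustrative sketch, not the actual
FsVolumeImpl code; every name below is a hypothetical stand-in except
oldReplicaLength, which mirrors the variable added in the diff above. The
buggy path charges the full new replica length to dfsUsed on every append;
the fixed path captures the old on-disk length first and charges only the
growth.

// Illustrative model only -- not Hadoop code. Simulates the per-volume
// dfsUsed counter before and after the fix.
public class DfsUsedAppendModel {
  private long dfsUsed;        // simulated per-volume dfsUsed counter
  private long replicaLength;  // simulated on-disk replica footprint
                               // (block data + metadata)

  // Buggy variant: charges the full new replica length on every append,
  // so bytes already counted get counted again.
  void appendBuggy(long bytes) {
    replicaLength += bytes;
    dfsUsed += replicaLength;
  }

  // Fixed variant: captures the old length first (as the patch does)
  // and charges only the delta.
  void appendFixed(long bytes) {
    long oldReplicaLength = replicaLength;
    replicaLength += bytes;
    dfsUsed += replicaLength - oldReplicaLength;
  }

  public static void main(String[] args) {
    DfsUsedAppendModel buggy = new DfsUsedAppendModel();
    DfsUsedAppendModel fixed = new DfsUsedAppendModel();
    for (int i = 0; i < 3; i++) {
      buggy.appendBuggy(100);
      fixed.appendFixed(100);
      System.out.println(buggy.dfsUsed + " vs " + fixed.dfsUsed);
    }
    // Prints: 100 vs 100, then 300 vs 200, then 600 vs 300 --
    // matching the Error/Expect columns in the issue description below.
  }
}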





> Fix incorrect dfsUsed statistics of the DataNode when appending a file.
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-17456
>                 URL: https://issues.apache.org/jira/browse/HDFS-17456
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>    Affects Versions: 3.3.3
>            Reporter: fuchaohong
>            Priority: Major
>              Labels: pull-request-available
>
> In our production environment, the NameNode web UI showed that DataNode 
> space had been used up, while the DataNode machines actually still had 
> plenty of free space. After troubleshooting, we found that the DataNode's 
> dfsUsed statistics are incorrect when appending to a file. The following 
> shows dfsUsed after each append of 100 bytes.
> |*dfsUsed (actual, incorrect)*|*dfsUsed (expected)*|
> |0|0|
> |100|100|
> |300|200|
> |600|300|
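
Reading the table: the erroneous counter appears to re-add the entire
replica length on each append rather than just the appended bytes
(0; 0+100=100; 100+200=300; 300+300=600), whereas the expected dfsUsed
grows by exactly the 100 appended bytes each time (0, 100, 200, 300).
The oldReplicaLength captured in the patch is what makes delta-based
accounting possible.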



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
