[ https://issues.apache.org/jira/browse/HDFS-16593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600337#comment-17600337 ]

ASF GitHub Bot commented on HDFS-16593:
---------------------------------------

Hexiaoqiao merged PR #4353:
URL: https://github.com/apache/hadoop/pull/4353




> Correct inaccurate BlocksRemoved metric on DataNode side
> --------------------------------------------------------
>
>                 Key: HDFS-16593
>                 URL: https://issues.apache.org/jira/browse/HDFS-16593
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> While tracing the root cause of a production issue, I found that the 
> BlocksRemoved metric on the DataNode side was inaccurate.
> {code:java}
> case DatanodeProtocol.DNA_INVALIDATE:
>       //
>       // Some local block(s) are obsolete and can be 
>       // safely garbage-collected.
>       //
>       Block toDelete[] = bcmd.getBlocks();
>       try {
>         // using global fsdataset
>         dn.getFSDataset().invalidate(bcmd.getBlockPoolId(), toDelete);
>       } catch(IOException e) {
>         // Exceptions caught here are not expected to be disk-related.
>         throw e;
>       }
>       dn.metrics.incrBlocksRemoved(toDelete.length);
>       break;
> {code}
> The metric is inaccurate because even if the invalidate method throws an 
> exception, some of the blocks may already have been deleted successfully; 
> those deletions are never counted, since incrBlocksRemoved(toDelete.length) 
> is skipped once the exception is rethrown.
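> A minimal sketch (not the actual PR #4353 change; deleteBlock() here is a 
> hypothetical per-block helper standing in for the real FsDataset call) of one 
> way to keep the counter accurate: attempt each block individually, count only 
> the deletions that succeed, and increment the metric by that count before 
> rethrowing the first failure.
> {code:java}
>       // Sketch only: count successful deletions so the metric reflects
>       // what was actually removed, even when an exception is rethrown.
>       int removed = 0;
>       IOException firstFailure = null;
>       for (Block b : toDelete) {
>         try {
>           // hypothetical per-block delete in place of the bulk invalidate()
>           deleteBlock(bcmd.getBlockPoolId(), b);
>           removed++;
>         } catch (IOException e) {
>           if (firstFailure == null) {
>             firstFailure = e;
>           }
>         }
>       }
>       dn.metrics.incrBlocksRemoved(removed);
>       if (firstFailure != null) {
>         throw firstFailure;
>       }
> {code}
> Incrementing per successful deletion, rather than by toDelete.length only on 
> the success path, keeps the metric consistent with the blocks actually 
> removed.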



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
