[ https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15596287#comment-15596287 ]

Andrew Wang commented on HDFS-8411:
-----------------------------------

Thanks for picking this up [~Sammi]. I'd like to go a bit deeper on this patch.

* It looks like the metrics only increment at block granularity; they don't 
track partial block reads/writes that then fail.
* To normalize with the other reconstruction metrics, maybe name 
"ecReconstructionBytesRead" and "ecReconstructionBytesWritten"?
* This is an optional comment (can do in a follow-on maybe), but since it looks 
like the ECWorker can write both locally and remotely, should we differentiate 
these as well? This is similar to how we differentiate remote vs. local reads in 
FileSystem$Statistics, e.g. bytesRead, bytesReadLocalHost, 
bytesReadDistanceOfOneOrTwo, etc.
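
For illustration, the per-byte tracking with a local/remote split could look 
something like the sketch below. The class and method names here are 
hypothetical (a real patch would hang @Metric MutableCounterLong fields off 
DataNodeMetrics rather than use plain AtomicLongs), but it shows the idea: 
counters advance on every read/write call, so partially completed transfers 
that later fail are still counted, and local bytes are tracked separately 
alongside the totals.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch, not the actual HDFS metrics API: byte-granularity
// EC reconstruction counters with a local-host split, in the spirit of
// FileSystem$Statistics (bytesRead vs. bytesReadLocalHost).
public class EcReconstructionMetrics {
  private final AtomicLong ecReconstructionBytesRead = new AtomicLong();
  private final AtomicLong ecReconstructionBytesReadLocalHost = new AtomicLong();
  private final AtomicLong ecReconstructionBytesWritten = new AtomicLong();
  private final AtomicLong ecReconstructionBytesWrittenLocalHost = new AtomicLong();

  // Called after each read, not once per block, so bytes from a partial
  // block read that subsequently fails are still reflected.
  public void incrBytesRead(long numBytes, boolean localHost) {
    ecReconstructionBytesRead.addAndGet(numBytes);
    if (localHost) {
      ecReconstructionBytesReadLocalHost.addAndGet(numBytes);
    }
  }

  // Same per-call accounting on the write path; the ECWorker may write
  // the reconstructed block either locally or to a remote datanode.
  public void incrBytesWritten(long numBytes, boolean localHost) {
    ecReconstructionBytesWritten.addAndGet(numBytes);
    if (localHost) {
      ecReconstructionBytesWrittenLocalHost.addAndGet(numBytes);
    }
  }

  public long getBytesRead() { return ecReconstructionBytesRead.get(); }
  public long getBytesReadLocalHost() { return ecReconstructionBytesReadLocalHost.get(); }
  public long getBytesWritten() { return ecReconstructionBytesWritten.get(); }
  public long getBytesWrittenLocalHost() { return ecReconstructionBytesWrittenLocalHost.get(); }
}
```

The local-host totals are kept as a subset of the overall totals (remote bytes 
are the difference), matching how FileSystem$Statistics exposes its counters.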

> Add bytes count metrics to datanode for ECWorker
> ------------------------------------------------
>
>                 Key: HDFS-8411
>                 URL: https://issues.apache.org/jira/browse/HDFS-8411
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Li Bo
>            Assignee: SammiChen
>         Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, 
> HDFS-8411-003.patch, HDFS-8411-004.patch
>
>
> This is a sub-task of HDFS-7674. It counts the amount of data read from 
> local or remote datanodes for decoding work, and also the amount of data 
> written to local or remote datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
