[ 
https://issues.apache.org/jira/browse/HDFS-7129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152913#comment-14152913
 ] 

Arpit Agarwal commented on HDFS-7129:
-------------------------------------

+1 for the patch. I will commit it shortly.

Two nitpicks we can clean up later (a rough sketch follows the list):
# {{if (replicaInfo.getIsPersisted() == false)}} can just be written as
{{if (!replicaInfo.getIsPersisted())}}.
# We can eliminate {{FsDatasetImpl.discardRamDiskReplica}} since it just
forwards to {{ramDiskReplicaTracker.discardReplica}}.
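
For illustration, here is a rough sketch of how the cleaned-up code could look. The class and method names follow the discussion above, but the signatures and surrounding context are assumed rather than copied from the patch:

{code:java}
// Illustrative only -- names follow the nitpicks above; signatures and
// surrounding context are assumed, not taken from the HDFS-6581 branch.
class ReplicaInfo {
  private volatile boolean isPersisted;
  boolean getIsPersisted() { return isPersisted; }
}

class RamDiskReplicaTracker {
  void discardReplica(ReplicaInfo replica) { /* drop tracking state */ }
}

class FsDatasetImpl {
  private final RamDiskReplicaTracker ramDiskReplicaTracker = new RamDiskReplicaTracker();

  void example(ReplicaInfo replicaInfo) {
    // Nitpick 1: prefer negation over an explicit comparison with false.
    if (!replicaInfo.getIsPersisted()) {
      // Nitpick 2: call the tracker directly instead of going through a
      // one-line forwarding method such as FsDatasetImpl.discardRamDiskReplica.
      ramDiskReplicaTracker.discardReplica(replicaInfo);
    }
  }
}
{code}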

> Metrics to track usage of memory for writes
> -------------------------------------------
>
>                 Key: HDFS-7129
>                 URL: https://issues.apache.org/jira/browse/HDFS-7129
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: HDFS-6581
>            Reporter: Arpit Agarwal
>            Assignee: Xiaoyu Yao
>         Attachments: HDFS-7129.0.patch, HDFS-7129.1.patch, HDFS-7129.2.patch, 
> HDFS-7129.3.patch
>
>
> A few metrics to evaluate feature usage and suggest improvements. Thanks to 
> [~sureshms] for some of these suggestions.
> # Number of times a block in memory was read (before being ejected)
> # Average block size for data written to memory tier
> # Time the block was in memory before being ejected
> # Number of blocks written to memory
> # Number of memory writes requested but not satisfied (failed-over to disk)
> # Number of blocks evicted without ever being read from memory
> # Average delay between memory write and disk write (window where a node 
> restart could cause data loss).
> # Replicas written to disk by lazy writer
> # Bytes written to disk by lazy writer
> # Replicas deleted by application before being persisted to disk
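
As background on how counters like the ones listed in the description are typically exposed on the DataNode, here is an illustrative sketch using the Hadoop metrics2 annotations. The class and metric names below are assumptions for the example, not the names added by the attached patches:

{code:java}
// Illustrative sketch only -- the class name and metric names below are
// assumptions for this example, not the names added by HDFS-7129.*.patch.
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableRate;

@Metrics(about = "Memory-tier (RAM disk) write metrics", context = "dfs")
public class ExampleRamDiskMetrics {
  @Metric("Blocks written to the memory tier")
  MutableCounterLong ramDiskBlocksWrite;

  @Metric("Memory writes requested but failed over to disk")
  MutableCounterLong ramDiskBlocksWriteFallback;

  @Metric("Blocks evicted from memory without ever being read")
  MutableCounterLong ramDiskBlocksEvictedWithoutRead;

  @Metric("Replicas written to disk by the lazy writer")
  MutableCounterLong ramDiskBlocksLazyPersisted;

  @Metric("Delay between memory write and lazy persist to disk (ms)")
  MutableRate ramDiskLazyPersistWindowMs;

  // The @Metric fields are instantiated when this source is registered with
  // the metrics system (e.g. DefaultMetricsSystem.instance().register(...)).
  // Example update path: the lazy writer would call this after persisting a replica.
  void onLazyPersist(long windowMs) {
    ramDiskBlocksLazyPersisted.incr();
    ramDiskLazyPersistWindowMs.add(windowMs);
  }
}
{code}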



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
