[ https://issues.apache.org/jira/browse/HDFS-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14132214#comment-14132214 ]

Jitendra Nath Pandey commented on HDFS-6978:
--------------------------------------------

+1 for the patch.

A minor comment:
{code}
+          if (d < blockpoolReport.length) {
+            // There may be multiple on-disk records for the same block, don't increment
+            // the memory record pointer if so.
+            ScanInfo nextInfo = blockpoolReport[Math.min(d, blockpoolReport.length - 1)];
{code}
Math.min(d, blockpoolReport.length - 1) will always be equal to d, unless we 
have race conditions. I think the same applies to the previous lines with 
Math.min as well.
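
To illustrate the point, here is a minimal standalone sketch (a hypothetical example, not the actual DirectoryScanner code): inside the {{d < blockpoolReport.length}} branch, {{d}} is already at most {{blockpoolReport.length - 1}}, so the clamp is a no-op.
{code}
public class MinClampExample {
  public static void main(String[] args) {
    int[] blockpoolReport = new int[5];
    for (int d = 0; d < blockpoolReport.length; d++) {
      // Inside the bounds check, d <= blockpoolReport.length - 1 already
      // holds, so Math.min(d, blockpoolReport.length - 1) == d every time.
      int clamped = Math.min(d, blockpoolReport.length - 1);
      assert clamped == d;
    }
    System.out.println("Clamp was a no-op for every in-bounds index.");
  }
}
{code}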


> Directory scanner should correctly reconcile blocks on RAM disk
> ---------------------------------------------------------------
>
>                 Key: HDFS-6978
>                 URL: https://issues.apache.org/jira/browse/HDFS-6978
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: HDFS-6581
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-6978.01.patch
>
>
> It used to be very unlikely that the directory scanner encountered two 
> replicas of the same block on different volumes.
> With memory storage, it is very likely to hit this with the following 
> sequence of events:
> # Block is written to RAM disk
> # Lazy writer saves a copy on persistent volume
> # DN attempts to evict the original replica from RAM disk; the file deletion 
> fails because the replica is in use.
> # Directory scanner finds a replica on both RAM disk and persistent storage.
> The directory scanner should never delete the block on persistent storage.
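
For reference, a minimal sketch of the reconciliation preference described above, using made-up types and a hypothetical helper (not the actual DirectoryScanner or FsDatasetImpl code): when duplicate records for a block are found on RAM disk and on a persistent volume, only the RAM disk copy is a candidate for deletion.
{code}
// Hypothetical sketch only; names and types below are invented for illustration.
class ReconcileSketch {
  enum StorageType { RAM_DISK, DISK }

  static class OnDiskRecord {
    final long blockId;
    final StorageType storage;
    OnDiskRecord(long blockId, StorageType storage) {
      this.blockId = blockId;
      this.storage = storage;
    }
  }

  // When duplicate on-disk records exist for the same block, only the RAM
  // disk copy may be considered for deletion; the copy on persistent
  // storage must always survive reconciliation.
  static OnDiskRecord chooseReplicaToDelete(OnDiskRecord a, OnDiskRecord b) {
    if (a.storage == StorageType.RAM_DISK && b.storage == StorageType.DISK) {
      return a;
    }
    if (b.storage == StorageType.RAM_DISK && a.storage == StorageType.DISK) {
      return b;
    }
    return null; // same storage type: no safe automatic choice in this sketch
  }
}
{code}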


