[ https://issues.apache.org/jira/browse/HDFS-900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12990446#comment-12990446 ]

Konstantin Shvachko commented on HDFS-900:
------------------------------------------

test failures:
TestFileConcurrentReader - HDFS-1401
TestStorageRestore - HDFS-1496

test-patch results:
{code}
     [exec] -1 overall.  
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec]     -1 tests included.  The patch doesn't appear to include any new or modified tests.
     [exec]                         Please justify why no new tests are needed for this patch.
     [exec]                         Also please list what manual steps were performed to verify this patch.
     [exec]     +1 javadoc.  The javadoc tool did not generate any warning messages.
     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.
     [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
     [exec]     +1 system test framework.  The patch passed system test framework compile.
     [exec] 
======================================================================
{code}
Testing of this patch was done manually and using Todd's utility attached 
above.
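
For reference, the scenario can also be driven from a test along the lines 
below. This is only a rough sketch, not the attached utility: it assumes the 
standard test helpers (MiniDFSCluster, DFSTestUtil), and the exact 
constructors and the block-report trigger may differ in 0.22.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class CorruptReplicaReportSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 3, true, null);
    try {
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/corrupt-replica-test");

      // Create a file with replication 3 and wait until all replicas exist.
      DFSTestUtil.createFile(fs, file, 1024L, (short) 3, 0L);
      DFSTestUtil.waitReplication(fs, file, (short) 3);

      // Report one replica of the first block as corrupt, the way a reading
      // client does after a checksum failure.
      ClientProtocol namenode = cluster.getNameNode();
      LocatedBlock blk =
          namenode.getBlockLocations(file.toString(), 0, 1024L).get(0);
      namenode.reportBadBlocks(new LocatedBlock[] { blk });

      // After re-replication completes there are 3 good replicas plus the
      // corrupt one. Forcing a block report from the DN that still holds the
      // corrupt replica (trigger omitted here, it depends on the test
      // framework) should lead the NN to invalidate it; the bug is that the
      // NN counts it as live again and deletes a good replica instead.
    } finally {
      cluster.shutdown();
    }
  }
}
{code}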

> Corrupt replicas are not tracked correctly through block report from DN
> -----------------------------------------------------------------------
>
>                 Key: HDFS-900
>                 URL: https://issues.apache.org/jira/browse/HDFS-900
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.22.0
>
>         Attachments: log-commented, reportCorruptBlock.patch, 
> to-reproduce.patch
>
>
> This one is tough to describe, but essentially the following order of events 
> is seen to occur:
> # A client marks one replica of a block to be corrupt by telling the NN about 
> it
> # Replication is then scheduled to make a new replica of this block
> # The replication completes, such that there are now 3 good replicas and 1 
> corrupt replica
> # The DN holding the corrupt replica sends a block report. Rather than 
> telling this DN to delete the replica, the NN instead marks it as a new 
> *good* replica of the block, and schedules deletion of one of the good replicas.
> I don't know if this is a data loss bug in the case of 1 corrupt replica with 
> dfs.replication=2, but it seems feasible. I will attach a debug log with some 
> commentary marked by '============>', plus a unit test patch with which I can 
> reproduce this behavior reliably. (It's not a proper unit test, just some 
> edits to an existing one to show it.)
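
Reading the sequence quoted above, the crux is how the NN treats a reported 
replica that it has already marked corrupt. Schematically (this is not the 
attached reportCorruptBlock.patch, and the helper names below are hypothetical 
placeholders, not the real NN methods), the block-report path needs a check of 
the form:
{code}
// Hypothetical sketch only; isReplicaCorrupt(), invalidateCorruptReplica()
// and addStoredReplica() are placeholder names.
void processReportedReplica(Block reported, DatanodeDescriptor dn) {
  if (isReplicaCorrupt(reported, dn)) {
    // This replica was previously reported corrupt on this DN: schedule it
    // for invalidation instead of re-adding it as a live replica.
    invalidateCorruptReplica(reported, dn);
    return;
  }
  // Otherwise account for it as a valid stored replica of the block.
  addStoredReplica(reported, dn);
}
{code}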

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
