[ https://issues.apache.org/jira/browse/HDFS-6289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13985094#comment-13985094 ]

Aaron T. Myers commented on HDFS-6289:
--------------------------------------

Thanks a lot for the review, Todd. The TestDNFencingWithReplication test failed 
with the following error:

{noformat}
java.lang.RuntimeException: Deferred
        at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
        at org.apache.hadoop.test.MultithreadedTestUtil$TestContext.stop(MultithreadedTestUtil.java:166)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress(TestDNFencingWithReplication.java:135)
Caused by: java.io.IOException: Timed out waiting for 2 replicas on path /test-3
        at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.waitForReplicas(TestDNFencingWithReplication.java:96)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication$ReplicationToggler.doAnAction(TestDNFencingWithReplication.java:78)
        at org.apache.hadoop.test.MultithreadedTestUtil$RepeatingTestThread.doWork(MultithreadedTestUtil.java:222)
        at org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189)
{noformat}

I'm fairly confident this was just a one-off flake, especially since the code 
change in this patch is only triggered by DN restarts, which 
TestDNFencingWithReplication doesn't do. Just to be sure, though, I looped 
TestDNFencingWithReplication 50 times on my box and never saw a failure. I've 
also kicked Jenkins to build this JIRA again, so hopefully it'll pass this 
time. If that run is clean, I'll go ahead and commit this based on your 
previous +1.
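
For reference, looping the test locally can be done with a small JUnit 4 
driver along these lines. This is purely illustrative and not part of the 
patch; it assumes the Hadoop test classes and JUnit are on the classpath:

{code}
// Illustrative only -- not part of the HDFS-6289 patch. Runs the test class
// repeatedly with plain JUnit 4 to look for intermittent failures.
import org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class LoopTestDNFencing {
  public static void main(String[] args) {
    for (int i = 1; i <= 50; i++) {
      Result result = JUnitCore.runClasses(TestDNFencingWithReplication.class);
      if (!result.wasSuccessful()) {
        System.err.println("Iteration " + i + " failed:");
        for (Failure f : result.getFailures()) {
          System.err.println(f.getTrace());
        }
        System.exit(1);
      }
      System.out.println("Iteration " + i + " passed");
    }
  }
}
{code}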

> HA failover can fail if there are pending DN messages for DNs which no longer 
> exist
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-6289
>                 URL: https://issues.apache.org/jira/browse/HDFS-6289
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ha
>    Affects Versions: 2.4.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>            Priority: Critical
>         Attachments: HDFS-6289.patch, HDFS-6289.patch
>
>
> In an HA setup, the standby NN may receive messages from DNs for blocks which 
> the standby NN is not yet aware of. It queues up these messages and replays 
> them when it next reads from the edit log or fails over. On a failover, all 
> of these pending DN messages must be processed successfully in order for the 
> failover to succeed. If one of these pending DN messages refers to a DN 
> storageId that no longer exists (because the DN with that transfer address 
> has been reformatted and has re-registered with the same transfer address) 
> then on transition to active the NN will not be able to process this DN 
> message and will suicide with an error like the following:
> {noformat}
> 2014-04-25 14:23:17,922 FATAL namenode.NameNode (NameNode.java:doImmediateShutdown(1525)) - Error encountered requiring NN shutdown. Shutting down immediately.
> java.io.IOException: Cannot mark blk_1073741825_900(stored=blk_1073741825_1001) as corrupt because datanode 127.0.0.1:33324 does not exist
> {noformat}
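
To make the failure mode above concrete, here is a minimal sketch of replaying 
the queued DN messages on transition to active while guarding against 
datanodes that no longer exist. It is illustrative only and not the actual 
patch; the names (PendingMessageReplayer, QueuedMessage, datanodeStillExists, 
applyMessage) are hypothetical stand-ins for the BlockManager internals:

{code}
// Hypothetical sketch -- not the HDFS-6289 patch. Shows dropping queued DN
// messages whose datanode has since been reformatted and re-registered,
// instead of letting the replay abort the failover.
import java.io.IOException;
import java.util.Queue;

class PendingMessageReplayer {

  /** Replay messages queued while in standby state. */
  void replayOnFailover(Queue<QueuedMessage> pending) throws IOException {
    while (!pending.isEmpty()) {
      QueuedMessage msg = pending.poll();
      if (!datanodeStillExists(msg.getStorageId())) {
        // Stale message for a DN that no longer exists; skip it rather than
        // failing with "Cannot mark ... as corrupt because datanode ...
        // does not exist" and shutting the NN down.
        continue;
      }
      applyMessage(msg);
    }
  }

  // --- hypothetical helpers standing in for BlockManager internals ---
  boolean datanodeStillExists(String storageId) { return true; }

  void applyMessage(QueuedMessage msg) throws IOException { }

  static class QueuedMessage {
    String getStorageId() { return null; }
  }
}
{code}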



--
This message was sent by Atlassian JIRA
(v6.2#6252)
