[ https://issues.apache.org/jira/browse/HADOOP-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wendy Chien updated HADOOP-1044:
--------------------------------

    Attachment: hadoop-1044.patch

This patch changes the test to keep track of the datanodes that have been 
decommissioned and keep them all in the exclude file.  The problem with the 
previous test was that the exclude file was overwritten with only the most 
recently decommissioned node.  Any previously decommissioned node would then 
be marked as normal instead of decommissioned, making it a valid target for 
replication. 
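
For illustration, here is a minimal sketch of the approach in Java (not the 
actual patch; the class and method names are hypothetical): accumulate every 
decommissioned node and rewrite the exclude file with the full set each time, 
rather than overwriting it with only the latest node.

import java.io.FileWriter;
import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;

class ExcludeFileTracker {
  // All nodes decommissioned so far, in decommission order.
  private final Set<String> decommissionedNodes = new LinkedHashSet<>();
  private final String excludeFile;

  ExcludeFileTracker(String excludeFile) {
    this.excludeFile = excludeFile;
  }

  // Record the node and rewrite the exclude file with *all* nodes
  // decommissioned so far, so earlier nodes stay excluded.
  void decommission(String nodeName) throws IOException {
    decommissionedNodes.add(nodeName);
    try (FileWriter out = new FileWriter(excludeFile)) {
      for (String node : decommissionedNodes) {
        out.write(node + "\n");
      }
    }
  }
}

Each call to decommission() rewrites the whole file, so nodes decommissioned 
in earlier iterations remain excluded in later ones.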

> TestDecommission fails because it attempts to transfer blocks to a dead 
> datanode
> -------------------------------------------------------------------------------
>
>                 Key: HADOOP-1044
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1044
>             Project: Hadoop
>          Issue Type: Bug
>          Components: test
>            Reporter: Wendy Chien
>         Assigned To: Wendy Chien
>         Attachments: hadoop-1044.patch
>
>
> There are two iterations in TestDecommission.  After the first iteration, one 
> datanode is shut down because it was decommissioned.  In the second 
> iteration, while decommissioning another node, the test fails if replication 
> attempts to transfer blocks to the shut-down node.
> http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/29/console 

