[ https://issues.apache.org/jira/browse/HDFS-15170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17246841#comment-17246841 ]

Wei-Chiu Chuang commented on HDFS-15170:
----------------------------------------

Dumb question: I am not sure what makes EC blocks different. It seems like the 
same thing would happen for replicated blocks, and with HDFS-15200's change, 
BlockManager calls removeStoredBlock(b.getStored(), node), which seems to do 
the same thing (and potentially more, more completely).

> EC: Block gets marked as CORRUPT in case of failover and pipeline recovery
> --------------------------------------------------------------------------
>
>                 Key: HDFS-15170
>                 URL: https://issues.apache.org/jira/browse/HDFS-15170
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: erasure-coding
>            Reporter: Ayush Saxena
>            Assignee: Ayush Saxena
>            Priority: Critical
>         Attachments: HDFS-15170-01.patch, HDFS-15170-02.patch, 
> HDFS-15170-03.patch
>
>
> Steps to Repro:
> 1. Start writing an EC file.
> 2. After more than one stripe has been written, stop one datanode.
> 3. After pipeline recovery, keep writing data.
> 4. Close the file.
> 5. Transition the namenode to standby and back to active.
> 6. Restart the datanode that was shut down in step 2.
> The block report from that datanode will mark the block as corrupt, and the 
> subsequent block invalidation won't remove it, since post-failover the 
> blocks are on stale storage.
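
The repro steps above can be approximated as a MiniDFSCluster HA test. This is 
only a rough sketch built from standard Hadoop test utilities 
(MiniDFSNNTopology, HATestUtil, MiniDFSCluster), not the test from the attached 
patches; the class name, policy name RS-3-2-1024k, paths, write sizes, and the 
choice of which datanode to stop are all assumptions.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil;

// Hypothetical test class name; a sketch of the repro steps, not the patch's test.
public class TestEcCorruptAfterFailoverSketch {

  public void testRepro() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .nnTopology(MiniDFSNNTopology.simpleHATopology())
        .numDataNodes(6)   // enough datanodes for an RS-3-2 block group plus a spare
        .build();
    try {
      cluster.waitActive();
      cluster.transitionToActive(0);
      DistributedFileSystem fs =
          (DistributedFileSystem) HATestUtil.configureFailoverFs(cluster, conf);
      fs.enableErasureCodingPolicy("RS-3-2-1024k");
      Path dir = new Path("/ec");
      fs.mkdirs(dir);
      fs.setErasureCodingPolicy(dir, "RS-3-2-1024k");

      // Steps 1-2: write more than one stripe, then stop a datanode mid-write.
      // (A real test would pick a datanode that is actually in the block group.)
      Path file = new Path(dir, "file");
      FSDataOutputStream out = fs.create(file);
      byte[] data = new byte[4 * 1024 * 1024];   // > one full stripe for RS-3-2-1024k
      out.write(data);
      MiniDFSCluster.DataNodeProperties dnProps = cluster.stopDataNode(0);

      // Steps 3-4: keep writing with the remaining streamers, then close the file.
      out.write(data);
      out.close();

      // Step 5: transition the namenode to standby and back to active.
      cluster.transitionToStandby(0);
      cluster.transitionToActive(0);

      // Step 6: restart the datanode stopped in step 2 and trigger its block
      // report; the issue is that this report leaves the block marked CORRUPT.
      cluster.restartDataNode(dnProps, true);
      cluster.triggerBlockReports();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}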


