[ 
https://issues.apache.org/jira/browse/HDFS-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11553:
-------------------------------
    Labels: hdfs-ec-3.0-nice-to-have  (was: )

> Erasure Coding: Missing parity blocks in the block group are warned as 
> corrupt blocks
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-11553
>                 URL: https://issues.apache.org/jira/browse/HDFS-11553
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Manoj Govindassamy
>            Assignee: Manoj Govindassamy
>              Labels: hdfs-ec-3.0-nice-to-have
>
> Currently, {{DFSStripedOutputStream}} only verifies that the allocated block 
> locations cover at least numDataBlocks. That is, for the EC policy 
> RS-6-3-64K, even though a full EC block group needs 9 DNs in total, clients 
> can successfully create a DFSStripedOutputStream with just 6 DNs. Moreover, 
> an output stream created with fewer DNs will skip writing the parity blocks 
> entirely. HDFS-11552 is tracking the improvement needed to accommodate 
> parity blocks along with data blocks from the same block group.
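> As a rough illustration only (hypothetical method and parameter names, not 
> the actual {{DFSStripedOutputStream}} code), the check described above 
> accepts an allocation as soon as it covers the data blocks, leaving parity 
> indices without locations:
> {code}
> // Hypothetical sketch of the lenient check described above: the stream is
> // considered writable as long as the allocated locations cover the data
> // blocks, so parity blocks (indices 6..8 for RS-6-3-64K) may never get a DN.
> static boolean hasEnoughLocations(int allocatedLocations,
>                                   int numDataBlocks, int numParityBlocks) {
>   // Current behaviour per the description: only numDataBlocks is required.
>   // A full block group would need numDataBlocks + numParityBlocks (9) DNs.
>   return allocatedLocations >= numDataBlocks;
> }
> {code}
> With such an under-allocated block group, warnings like the following are 
> seen: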
> {code}
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=6
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=7
> [Thread-5] WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:allocateNewBlock(497)) - Failed to get block 
> location for parity block, index=8
> {code}
> In the above case, the following warning is logged on file stream close when 
> the parity blocks have not yet been written out. The warning claims that 
> there are 3 corrupt blocks, which is incorrect: the EC redundancy is merely 
> insufficient; the blocks are not corrupt or lost yet. The warning message 
> needs to be fixed for this use case (a rough sketch of a clearer message 
> follows the log below).
> {code}
> INFO  namenode.FSNamesystem (FSNamesystem.java:checkBlocksComplete(2726)) - 
> BLOCK* blk_-9223372036854775792_1002 is COMMITTED but not COMPLETE(numNodes= 
> 0 <  minimum = 6) in file /ec/test1
> INFO  hdfs.StateChange (FSNamesystem.java:completeFile(2679)) - DIR* 
> completeFile: /ec/test1 is closed by DFSClient_NONMAPREDUCE_-1900076771_17
> WARN  hdfs.DFSOutputStream 
> (DFSStripedOutputStream.java:logCorruptBlocks(1117)) - Block group <1> has 3 
> corrupt blocks. It's at high risk of losing data.
> {code}
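> As a rough sketch only (hypothetical names, not the actual 
> {{logCorruptBlocks}} implementation), the close-time warning could report 
> parity blocks that were never written separately from genuinely corrupt 
> blocks:
> {code}
> // Hypothetical illustration: keep missing parity blocks out of the
> // "corrupt" count and report them as reduced redundancy instead.
> static String describeBlockGroup(long groupId, int corruptBlocks,
>                                  int missingParityBlocks) {
>   return String.format(
>       "Block group <%d> has %d corrupt block(s) and %d missing parity "
>           + "block(s); redundancy is reduced until the parity blocks are "
>           + "written or reconstructed.",
>       groupId, corruptBlocks, missingParityBlocks);
> }
> {code}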



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
