[ 
https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HDFS-9373:
----------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.0.0
           Status: Resolved  (was: Patch Available)

Thanks Bo. +1 on the latest patch. I just committed it to trunk.

> Erasure coding: friendly log information for write operations with some 
> failed streamers
> ----------------------------------------------------------------------------------------
>
>                 Key: HDFS-9373
>                 URL: https://issues.apache.org/jira/browse/HDFS-9373
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: erasure-coding
>    Affects Versions: 3.0.0
>            Reporter: Li Bo
>            Assignee: Li Bo
>             Fix For: 3.0.0
>
>         Attachments: HDFS-9373-001.patch, HDFS-9373-002.patch, 
> HDFS-9373-003.patch
>
>
> When no more than PARITY_NUM streamers fail for a block group, the client 
> may still succeed in writing the data. However, several exceptions are 
> thrown to the user, who then has to investigate the causes. A friendlier 
> approach is to simply inform the user that some streamers failed while 
> writing a block group. There is no need to show the exception details, 
> because a small number of streamer failures is not critical to the client's 
> write.
> When only DATA_NUM streamers succeed, the block group is at high risk, 
> because the corruption of any block will cause the data of all six data 
> blocks to be lost. We should give the user an obvious warning when this 
> occurs.
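
For context, a minimal sketch of the intended logging behaviour described above (the class, method, and variable names here are hypothetical and are not taken from the attached patches), assuming the default RS-6-3 layout where DATA_NUM is 6 and PARITY_NUM is 3:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.List;

class StripedWriteFailureReporter {
  private static final Logger LOG =
      LoggerFactory.getLogger(StripedWriteFailureReporter.class);

  // Assumed RS-6-3 layout: 6 data blocks + 3 parity blocks per block group.
  private static final int DATA_NUM = 6;
  private static final int PARITY_NUM = 3;

  /**
   * Summarize failed streamers for one block group instead of surfacing
   * every raw exception to the user.
   */
  void report(List<Integer> failedStreamers, String blockGroupId) {
    int failed = failedStreamers.size();
    if (failed == 0) {
      return; // nothing to report
    }
    if (failed > PARITY_NUM) {
      // More failures than parity can tolerate: the write itself fails.
      throw new IllegalStateException("Cannot write block group " + blockGroupId
          + ": " + failed + " streamers failed, tolerance is " + PARITY_NUM);
    }
    if (failed == PARITY_NUM) {
      // Only DATA_NUM streamers succeeded: losing any one block would make
      // the block group's data unrecoverable, so warn loudly.
      LOG.warn("Block group {} was written with only {} healthy streamers; "
          + "the loss of any block will make its data unrecoverable.",
          blockGroupId, DATA_NUM);
    } else {
      // A small number of failures is tolerable; log a short summary rather
      // than showing the details of each exception.
      LOG.info("{} streamer(s) failed while writing block group {}; "
          + "the write still succeeded.", failed, blockGroupId);
    }
  }
}
{code}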



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
