[ https://issues.apache.org/jira/browse/HDFS-9373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062843#comment-15062843 ]

Hudson commented on HDFS-9373:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #8987 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8987/])
HDFS-9373. Erasure coding: friendly log information for write operations (zhz: 
rev 5104077e1f431ad3675d0b1c5c3cf53936902d8e)
* hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Erasure coding: friendly log information for write operations with some 
> failed streamers
> ----------------------------------------------------------------------------------------
>
>                 Key: HDFS-9373
>                 URL: https://issues.apache.org/jira/browse/HDFS-9373
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: erasure-coding
>    Affects Versions: 3.0.0
>            Reporter: Li Bo
>            Assignee: Li Bo
>             Fix For: 3.0.0
>
>         Attachments: HDFS-9373-001.patch, HDFS-9373-002.patch, 
> HDFS-9373-003.patch
>
>
> When no more than PARITY_NUM streamers fail for a block group, the 
> client can still write the data successfully. However, several 
> exceptions are thrown to the user, who then has to check each one. 
> The friendlier approach is simply to inform the user that some 
> streamers failed while writing the block group; the exception details 
> need not be shown, because a small number of streamer failures does 
> not endanger the write. When only DATA_NUM streamers succeed, the 
> block group is at high risk: the corruption of any one block will 
> lose the data of all six data blocks. We should give the user an 
> obvious warning when this occurs.
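
A minimal sketch of the logging policy described above, for illustration 
only: the class and method names here are hypothetical, not the actual 
DFSStripedOutputStream changes committed in the patch, and an RS(6,3) 
layout (DATA_NUM = 6, PARITY_NUM = 3) is assumed.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical illustration of the HDFS-9373 logging policy; not the
// actual patch code.
public class StripedWriteLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(StripedWriteLogging.class);

  // Assumed RS(6,3) layout: 6 data + 3 parity blocks per block group.
  private static final int DATA_NUM = 6;
  private static final int PARITY_NUM = 3;

  /**
   * Summarize streamer failures for one block group instead of
   * surfacing every streamer's exception to the caller.
   */
  static void logStreamerFailures(long blockGroupId, int failedStreamers) {
    if (failedStreamers == 0) {
      return; // all streamers healthy, nothing to report
    }
    int healthy = DATA_NUM + PARITY_NUM - failedStreamers;
    if (failedStreamers > PARITY_NUM) {
      // More failures than parity can cover: the write itself fails.
      LOG.error("Block group {} lost {} streamers, more than the parity "
          + "count {}; the write cannot succeed.",
          blockGroupId, failedStreamers, PARITY_NUM);
    } else if (healthy == DATA_NUM) {
      // Exactly DATA_NUM streamers left: losing any one more block makes
      // the whole group unrecoverable, so warn loudly.
      LOG.warn("Block group {} has only {} healthy streamers left; "
          + "corruption of any single block will lose the whole block "
          + "group's data.", blockGroupId, healthy);
    } else {
      // Tolerable failures: a friendly one-line note, no exception dumps.
      LOG.info("{} streamer(s) failed while writing block group {}; the "
          + "write can still succeed.", failedStreamers, blockGroupId);
    }
  }
}
{code}

The key design point is that only the DATA_NUM-survivors case escalates 
to WARN, since that is the boundary where one further block loss 
destroys the whole group.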



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
