[ https://issues.apache.org/jira/browse/HDFS-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen reassigned HDFS-12933:
--------------------------------

    Assignee: chencan

> Improve logging when DFSStripedOutputStream failed to read some blocks
> ----------------------------------------------------------------------
>
>                 Key: HDFS-12933
>                 URL: https://issues.apache.org/jira/browse/HDFS-12933
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: erasure-coding
>            Reporter: Xiao Chen
>            Assignee: chencan
>            Priority: Minor
>         Attachments: HDFS-12933.001.patch
>
>
> Currently, if there are fewer DataNodes than the erasure coding policy requires
> (# of data blocks + # of parity blocks), the client sees this:
> {noformat}
> 17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Cannot allocate parity block(index=13, policy=RS-10-4-1024k). Not enough datanodes? Exclude nodes=[]
> 17/12/14 09:18:24 WARN hdfs.DFSOutputStream: Block group <1> has 1 corrupt blocks.
> {noformat}
> The first line is fine. The second line may confuse end users, since the blocks
> were never written in the first place and are not actually corrupt. We should
> investigate the error and make the message more general and accurate, e.g.
> "failed to read x blocks" (see the sketch below).
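> As a rough sketch of the direction (this is not the attached patch; the class,
> method, and parameter names below are illustrative assumptions), the client-side
> warning could report how many blocks in the group failed instead of calling them
> corrupt:
> {code:java}
> // Illustrative sketch only; not taken from HDFS-12933.001.patch.
> // The class, method, and parameter names here are hypothetical.
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> class StripedBlockGroupLogging {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(StripedBlockGroupLogging.class);
>
>   /** Warn with the number of failed blocks rather than calling them corrupt. */
>   static void warnFailedBlocks(long blockGroupId, int numFailed,
>       int numDataBlocks, int numParityBlocks, String policyName) {
>     if (numFailed > 0) {
>       // Say what actually happened: some blocks in the group failed,
>       // most likely because there were not enough DataNodes.
>       LOG.warn("Block group <{}>: failed to read {} out of {} blocks "
>           + "(policy={}). Not enough datanodes?",
>           blockGroupId, numFailed, numDataBlocks + numParityBlocks,
>           policyName);
>     }
>   }
> }
> {code}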



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
