[ https://issues.apache.org/jira/browse/HDFS-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647223#comment-14647223 ]
Li Bo commented on HDFS-8704:
-----------------------------

Thanks for reporting this issue. This bug is really complex and I spent a lot of time fixing it. I have just uploaded a new patch (002), which I believe fixes the problem.

> Erasure Coding: client fails to write large file when one datanode fails
> ------------------------------------------------------------------------
>
>                 Key: HDFS-8704
>                 URL: https://issues.apache.org/jira/browse/HDFS-8704
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Li Bo
>            Assignee: Li Bo
>         Attachments: HDFS-8704-000.patch, HDFS-8704-HDFS-7285-002.patch
>
>
> I tested the current code on a 5-node cluster using RS(3,2). When a datanode
> is corrupt, the client succeeds in writing a file smaller than a block group
> but fails to write a larger one. {{TestDFSStripeOutputStreamWithFailure}}
> only tests files smaller than a block group; this jira will add more test
> situations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
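The "smaller/larger than a block group" boundary in the report can be sketched numerically. The class below is a hypothetical illustration, not code from the patch: it assumes the default 128 MB HDFS block size, under which one RS(3,2) block group (3 data blocks + 2 parity blocks) holds 3 × 128 MB = 384 MB of user data, so a "large" file is one spanning two or more block groups.

```java
// Hypothetical helper (not from the HDFS-8704 patch): block-group
// arithmetic for RS(3,2), assuming the default 128 MB dfs.blocksize.
public class BlockGroupMath {
    static final long BLOCK_SIZE = 128L * 1024 * 1024; // default dfs.blocksize
    static final int DATA_UNITS = 3;                   // RS(3,2): 3 data blocks
    static final int PARITY_UNITS = 2;                 // RS(3,2): 2 parity blocks

    /** User-data capacity of one block group: DATA_UNITS * BLOCK_SIZE. */
    static long blockGroupDataBytes() {
        return (long) DATA_UNITS * BLOCK_SIZE; // 384 MB
    }

    /** Number of block groups a file of fileSize bytes spans (ceiling division). */
    static long blockGroupsFor(long fileSize) {
        long group = blockGroupDataBytes();
        return (fileSize + group - 1) / group;
    }

    public static void main(String[] args) {
        long small = 100L * 1024 * 1024; // 100 MB: fits in one block group
        long large = 500L * 1024 * 1024; // 500 MB: spans two block groups
        System.out.println(blockGroupsFor(small)); // prints 1
        System.out.println(blockGroupsFor(large)); // prints 2
    }
}
```

Under these assumptions, {{TestDFSStripeOutputStreamWithFailure}} exercising only sub-384 MB files would never cross a block-group boundary, which is the gap this jira proposes to cover.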