[ https://issues.apache.org/jira/browse/HDFS-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15400752#comment-15400752 ]
Yongjun Zhang commented on HDFS-6937:
-------------------------------------

Hi [~brahmareddy],

Thanks for reporting the issue you ran into. If your problem is really a network issue, then your proposed solution sounds reasonable to me. However, it seems different from what HDFS-6937 intends to solve, and I think we can create a new jira for your issue. Here is why: HDFS-6937's scenario is that we keep replacing the third node during pipeline recovery without ever detecting that the middle node is corrupt, so adding a corruption check for the middle node solves that issue. In your case, even if we checked the middle node, it would appear not corrupt; the real problem is that we have no check for a network issue (and adding a network check may not be feasible here). On the other hand, if it is not a network issue, it could be caused by HDFS-4660, if you don't already have that fix. Hope my explanation makes sense.

> Another issue in handling checksum errors in write pipeline
> -----------------------------------------------------------
>
>                 Key: HDFS-6937
>                 URL: https://issues.apache.org/jira/browse/HDFS-6937
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, hdfs-client
>    Affects Versions: 2.5.0
>            Reporter: Yongjun Zhang
>            Assignee: Wei-Chiu Chuang
>         Attachments: HDFS-6937.001.patch, HDFS-6937.002.patch
>
>
> Given a write pipeline:
> DN1 -> DN2 -> DN3
> DN3 detects a checksum error and terminates; DN2 truncates its replica to the ACKed size. Then a new pipeline is attempted as
> DN1 -> DN2 -> DN4
> DN4 detects a checksum error again. Later, when DN4 is replaced with DN5 (and so on), the pipeline fails for the same reason. This leads to the observation that DN2's data is corrupted.
> Found that the software currently truncates DN2's replica to the ACKed size after DN3 terminates, but it does not check the correctness of the data already written to disk.
> So intuitively, a solution would be: when the downstream DN (DN3 here) finds a checksum error, it propagates this info back to the upstream DN (DN2 here); DN2 then checks the correctness of the data already written to disk and truncates the replica to MIN(correctDataSize, ACKedSize).
> Found this issue is similar to what was reported in HDFS-3875, and the truncation at DN2 was actually introduced as part of the HDFS-3875 solution. Filing this jira for the issue reported here. HDFS-3875 was filed by [~tlipcon], who proposed something similar there:
> {quote}
> if the tail node in the pipeline detects a checksum error, then it returns a special error code back up the pipeline indicating this (rather than just disconnecting)
> if a non-tail node receives this error code, then it immediately scans its own block on disk (from the beginning up through the last acked length). If it detects a corruption on its local copy, then it should assume that it is the faulty one, rather than the downstream neighbor. If it detects no corruption, then the faulty node is either the downstream mirror or the network link between the two, and the current behavior is reasonable.
> {quote}
> Thanks.
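For anyone following along, below is a minimal, self-contained sketch (plain Java, not actual DataNode code) of the upstream-DN handling proposed in the description: on receiving a checksum-error report from downstream, re-verify the locally written data up to the ACKed length and truncate the replica to MIN(correctDataSize, ACKedSize). The class and method names, the 512-byte chunk size, and the CRC32-per-chunk meta layout are assumptions made for illustration only; they are not the contents of HDFS-6937.001.patch or 002.patch.

{code:java}
// Hypothetical, simplified sketch of the upstream-DN side of the proposal.
// Names and the chunk/checksum layout are illustrative assumptions, not
// the actual HDFS DataNode code or the HDFS-6937 patch.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.zip.CRC32;

public class UpstreamChecksumRecoverySketch {

  static final int CHUNK_SIZE = 512;    // bytes covered by one checksum (assumed)
  static final int CHECKSUM_SIZE = 4;   // one CRC32 stored per chunk (assumed)

  /**
   * Called when the downstream DN reports a checksum error for this block.
   * Re-verifies the locally written data up to ackedSize and truncates the
   * replica to MIN(correctDataSize, ACKedSize).
   */
  static void handleDownstreamChecksumError(RandomAccessFile blockFile,
                                            RandomAccessFile metaFile,
                                            long ackedSize) throws IOException {
    long correctDataSize = verifiedPrefixLength(blockFile, metaFile, ackedSize);
    long newLength = Math.min(correctDataSize, ackedSize);
    blockFile.setLength(newLength);
    // Keep the checksum (meta) file consistent with the truncated data file.
    // A partial trailing chunk's checksum is simply dropped in this sketch.
    metaFile.setLength((newLength / CHUNK_SIZE) * CHECKSUM_SIZE);
  }

  /** Returns the length of the longest checksum-verified prefix, chunk by chunk. */
  static long verifiedPrefixLength(RandomAccessFile blockFile,
                                   RandomAccessFile metaFile,
                                   long limit) throws IOException {
    byte[] chunk = new byte[CHUNK_SIZE];
    long verified = 0;
    while (verified + CHUNK_SIZE <= limit) {
      blockFile.seek(verified);
      blockFile.readFully(chunk);
      CRC32 crc = new CRC32();
      crc.update(chunk, 0, CHUNK_SIZE);
      metaFile.seek((verified / CHUNK_SIZE) * CHECKSUM_SIZE);
      long stored = metaFile.readInt() & 0xFFFFFFFFL;  // stored CRC as unsigned
      if (stored != crc.getValue()) {
        return verified;  // corruption starts within this chunk
      }
      verified += CHUNK_SIZE;
    }
    return verified;
  }
}
{code}

In the real DataNode the verification would of course reuse the replica's existing metadata file format and configured checksum type rather than recomputing raw CRC32 as above; the sketch is only meant to show the MIN(correctDataSize, ACKedSize) truncation step.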