[ https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119030#comment-16119030 ]
Haohui Mai commented on HDFS-11160:
-----------------------------------

Our clusters are around ~2000 nodes, and disk failures are quite common at this scale. Based on the discussion on HDFS-12136, we are very concerned about running this patch in production because it puts I/O inside an exclusive lock. Is there any possibility of moving the I/O out of the lock? If that is not trivial, is it possible to defer this fix to 2.9 so that it is easier to get 2.8.2 out the door? Since the bug has been around for a while, we are okay with keeping it as-is for a little while longer. What do you think?

> VolumeScanner reports write-in-progress replicas as corrupt incorrectly
> -----------------------------------------------------------------------
>
>                 Key: HDFS-11160
>                 URL: https://issues.apache.org/jira/browse/HDFS-11160
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>         Environment: CDH5.7.4
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>             Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
>         Attachments: HDFS-11160.001.patch, HDFS-11160.002.patch, HDFS-11160.003.patch, HDFS-11160.004.patch, HDFS-11160.005.patch, HDFS-11160.006.patch, HDFS-11160.007.patch, HDFS-11160.008.patch, HDFS-11160.branch-2.patch, HDFS-11160.reproduce.patch
>
> Due to a race condition initially reported in HDFS-6804, VolumeScanner may erroneously report good replicas as corrupt. This is serious because in some cases it can result in data loss if all replicas are declared corrupt. The bug is especially prominent when there are many append requests via HttpFs/WebHDFS.
> We are investigating an incident that caused a very high block corruption rate in a relatively small cluster. Initially, we thought HDFS-11056 was to blame. However, after applying HDFS-11056, we are still seeing VolumeScanner report corrupt replicas.
> It turns out that if a replica is being appended to while VolumeScanner is scanning it, VolumeScanner may use the new checksum to compare against the old data, causing a checksum mismatch.
> I have a unit test that reproduces the error and will attach it later. A quick and simple fix is to hold the FsDatasetImpl lock while reading the checksum from disk.
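To make the trade-off discussed above concrete, here is a minimal, self-contained Java sketch contrasting the two patterns: reading the on-disk checksum while holding the dataset lock (the quick fix the description mentions) versus snapshotting replica state under the lock and doing the disk read outside it. This is an illustration only, not the actual patch or real HDFS code; every class and method name in it is hypothetical.

{code:java}
// Illustrative sketch only -- not the HDFS-11160 patch or real HDFS code.
// It contrasts (A) reading the on-disk checksum while holding the dataset
// lock, as the quick fix above describes, with (B) snapshotting replica
// state under the lock and performing the read outside it. Every name below
// (ScannerChecksumSketch, ReplicaSnapshot, readLastChunkChecksum,
// isStillConsistent) is hypothetical; the ReentrantLock merely stands in
// for the FsDatasetImpl lock.
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;

public class ScannerChecksumSketch {
  private final ReentrantLock datasetLock = new ReentrantLock();

  /** (A) Simple fix: the checksum read happens while the exclusive lock is held. */
  byte[] readChecksumUnderLock(ReplicaSnapshot replica) throws IOException {
    datasetLock.lock();
    try {
      // In a real implementation this is disk I/O, so a slow or failing
      // disk stalls every other thread waiting on the lock.
      return readLastChunkChecksum(replica);
    } finally {
      datasetLock.unlock();
    }
  }

  /** (B) Lock-minimizing variant: snapshot under the lock, read outside it, then re-validate. */
  byte[] readChecksumOutsideLock(ReplicaSnapshot replica) throws IOException {
    final long visibleLength;
    datasetLock.lock();
    try {
      visibleLength = replica.visibleLength();        // cheap in-memory read only
    } finally {
      datasetLock.unlock();
    }
    byte[] checksum = readLastChunkChecksum(replica); // disk I/O with no lock held
    // If a concurrent append changed the replica in the meantime, discard the
    // result rather than report a false corruption.
    return isStillConsistent(replica, visibleLength) ? checksum : null;
  }

  /** Hypothetical view of a replica's in-memory state. */
  interface ReplicaSnapshot {
    long visibleLength();
  }

  // Placeholder helpers so the sketch compiles; a real version would read the
  // replica's metadata file and re-check the recorded length, respectively.
  private byte[] readLastChunkChecksum(ReplicaSnapshot r) throws IOException {
    return new byte[0];
  }

  private boolean isStillConsistent(ReplicaSnapshot r, long expectedVisibleLength) {
    return r.visibleLength() == expectedVisibleLength;
  }
}
{code}

Variant (B) trades a second consistency check for shorter lock hold times; whether that re-check can be made race-free against concurrent appends is exactly the non-trivial part the comment above asks about.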