[ https://issues.apache.org/jira/browse/HDFS-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14707601#comment-14707601 ]
Hudson commented on HDFS-8809:
------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2221 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2221/])
HDFS-8809. HDFS fsck reports under construction blocks as CORRUPT. Contributed by Jing Zhao. (jing9: rev c8bca62718203a1dad9b70d164bdf10cc71b40cd)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

> HDFS fsck reports under construction blocks as "CORRUPT"
> --------------------------------------------------------
>
>                 Key: HDFS-8809
>                 URL: https://issues.apache.org/jira/browse/HDFS-8809
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: tools
>    Affects Versions: 2.7.0
>         Environment: Hadoop 2.7.1 and HBase 1.1.1, on SUSE11sp3 (other Linuxes not tested; probably not platform-dependent). This did NOT happen with Hadoop 2.4 and HBase 0.98.
>            Reporter: Sudhir Prakash
>            Assignee: Jing Zhao
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8809.000.patch
>
>
> Whenever HBase is running, "hdfs fsck /" reports four HBase-related files under "hbase/data/WALs/" as CORRUPT. Even after letting the cluster sit idle for a couple of hours, the files remain in the corrupt state. If HBase is shut down, the problem goes away; if HBase is then restarted, the problem recurs. This was observed with Hadoop 2.7.1 and HBase 1.1.1, and did NOT happen with Hadoop 2.4 and HBase 0.98.
> {code}
> hades1:/var/opt/teradata/packages # su hdfs
> hdfs@hades1:/var/opt/teradata/packages> hdfs fsck /
> Connecting to namenode via http://hades1.labs.teradata.com:50070/fsck?ugi=hdfs&path=%2F
> FSCK started by hdfs (auth:SIMPLE) from /39.0.8.2 for path / at Wed Jun 24 20:40:17 GMT 2015
> ...
> /apps/hbase/data/WALs/hades4.labs.teradata.com,16020,1435168292684/hades4.labs.teradata.com%2C16020%2C1435168292684.default.1435175500556: MISSING 1 blocks of total size 83 B.
> /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466..meta.1435175562144.meta: MISSING 1 blocks of total size 83 B.
> /apps/hbase/data/WALs/hades5.labs.teradata.com,16020,1435168290466/hades5.labs.teradata.com%2C16020%2C1435168290466.default.1435175498500: MISSING 1 blocks of total size 83 B.
> /apps/hbase/data/WALs/hades6.labs.teradata.com,16020,1435168292373/hades6.labs.teradata.com%2C16020%2C1435168292373.default.1435175500301: MISSING 1 blocks of total size 83 B.
> ..................................................................................................
> ....................................................................................................
> ....................................................................................................
> ........................................................................................Status: CORRUPT
>  Total size:    723977553 B (Total open files size: 332 B)
>  Total dirs:    79
>  Total files:   388
>  Total symlinks:                0 (Files currently being written: 5)
>  Total blocks (validated):      387 (avg. block size 1870743 B) (Total open file blocks (not validated): 4)
>   ********************************
>   UNDER MIN REPL'D BLOCKS:      4 (1.0335917 %)
>   dfs.namenode.replication.min: 1
>   CORRUPT FILES:        4
>   MISSING BLOCKS:       4
>   MISSING SIZE:         332 B
>   ********************************
>  Minimally replicated blocks:   387 (100.0 %)
>  Over-replicated blocks:        0 (0.0 %)
>  Under-replicated blocks:       0 (0.0 %)
>  Mis-replicated blocks:         0 (0.0 %)
>  Default replication factor:    3
>  Average block replication:     3.0
>  Corrupt blocks:                0
>  Missing replicas:              0 (0.0 %)
>  Number of data-nodes:          3
>  Number of racks:               1
> FSCK ended at Wed Jun 24 20:40:17 GMT 2015 in 7 milliseconds
>
> The filesystem under path '/' is CORRUPT
> hdfs@hades1:/var/opt/teradata/packages>
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
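For context on the committed change, the essence of the fix is that fsck should treat the last, still-being-written block of an under-construction file (such as an open HBase WAL) as "open" rather than "missing/corrupt". The following is a minimal, self-contained Java sketch of that idea under assumed, hypothetical names (Block, Verdict, check); it is not the actual NamenodeFsck code:

```java
// Illustrative sketch of the HDFS-8809 fix idea (hypothetical model,
// not the real NamenodeFsck logic).
public class FsckSketch {

    // Hypothetical minimal view of a block as fsck sees it.
    record Block(int liveReplicas, boolean isLastBlock) {}

    enum Verdict { HEALTHY, OPEN, MISSING }

    // A block with no live replicas is MISSING -- unless it is the last
    // block of a file still under construction, in which case no DataNode
    // may have finalized it yet and it should not flag the file as CORRUPT.
    static Verdict check(Block b, boolean fileUnderConstruction) {
        if (b.liveReplicas() > 0) {
            return Verdict.HEALTHY;
        }
        if (fileUnderConstruction && b.isLastBlock()) {
            return Verdict.OPEN;    // skip: the block is simply being written
        }
        return Verdict.MISSING;     // genuinely lost data in a closed file
    }

    public static void main(String[] args) {
        // Open WAL file: replica-less last block is merely open.
        System.out.println(check(new Block(0, true), true));   // prints OPEN
        // Same block state in a closed file is a real problem.
        System.out.println(check(new Block(0, true), false));  // prints MISSING
    }
}
```

Before the fix, fsck made no such distinction, which is why the four open WAL files above were counted toward CORRUPT even though no data was lost.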