[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
[ https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Junping Du updated HDFS-9220:
-----------------------------
    Fix Version/s: 2.8.0

> Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-9220
>                 URL: https://issues.apache.org/jira/browse/HDFS-9220
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Bogdan Raducanu
>            Assignee: Jing Zhao
>            Priority: Blocker
>             Fix For: 2.8.0, 2.7.2, 2.6.4, 3.0.0-alpha1
>
>         Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, HDFS-9220.002.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN DFSClient:1150 - fetchBlockByteRange(). Got a
> checksum exception for /tmp/file0.05355529331575182 at
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block:
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure happens when reading from the last block of a file and the last block has <= 512 bytes.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
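The symptom above is consistent with a stale per-chunk checksum: HDFS stores one CRC per checksum chunk (512 bytes by default), and if the CRC stored for a partial last chunk is not updated when an append grows that chunk, readers verifying the grown chunk fail. The sketch below is illustrative only (it uses plain `java.util.zip.CRC32` and made-up names, not HDFS internals or the attached test2.java) and shows why verifying a grown chunk against its old checksum mismatches:

```java
import java.util.zip.CRC32;

// Illustrative only: HDFS keeps one CRC per 512-byte checksum chunk.
// If the stored CRC still covers the chunk's old length after an append,
// verification of the grown chunk fails, as in this report.
public class StaleChunkChecksum {
    static final int BYTES_PER_CHECKSUM = 512;

    // CRC over the first len bytes of a chunk.
    static long crcOf(byte[] chunk, int len) {
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, len);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] chunk = new byte[BYTES_PER_CHECKSUM];
        for (int i = 0; i < 100; i++) chunk[i] = (byte) i;
        long storedCrc = crcOf(chunk, 100);   // checksum recorded at write time

        // An appender adds 50 more bytes to the same partial chunk.
        for (int i = 100; i < 150; i++) chunk[i] = (byte) (i * 7);

        // A reader verifying the 150-byte chunk against the stale
        // 100-byte checksum sees a mismatch -> ChecksumException.
        System.out.println(storedCrc != crcOf(chunk, 150));
    }
}
```

This also matches the observation that the data itself is fine once checksum verification is disabled: only the stored CRC, not the block data, is out of date.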
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Junping Du updated HDFS-9220:
-----------------------------
    Fix Version/s: 2.6.4
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Vinod Kumar Vavilapalli updated HDFS-9220:
------------------------------------------
    Fix Version/s: (was: 3.0.0)
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Junping Du updated HDFS-9220:
-----------------------------
    Target Version/s: 2.7.2, 2.6.4  (was: 2.7.2)
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Kihwal Lee updated HDFS-9220:
-----------------------------
    Fix Version/s: 2.7.2
                   3.0.0
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Kihwal Lee updated HDFS-9220:
-----------------------------
      Resolution: Fixed
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Jing Zhao updated HDFS-9220:
----------------------------
    Attachment: HDFS-9220.002.patch

Looks like most of the test failures were caused by the NULL checksum type. When the checksum is disabled, {{offset}} and {{end}} are both 0, but the assertion {{crcBytes != 0}} fails. Updated the patch to fix this.
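The comment above describes the edge case: with checksums disabled (NULL checksum type) there are no CRC bytes at all, so an unconditional {{crcBytes != 0}} assertion is wrong. A minimal sketch of that guard, with entirely made-up names ({{ChecksumType}}, {{crcBytesFor}}) rather than the actual HDFS-9220 patch code:

```java
// Hypothetical sketch of the guard described in the comment above;
// the names here are illustrative, not the real HDFS classes.
public class NullChecksumGuard {
    enum ChecksumType { NULL, CRC32, CRC32C }

    static final int BYTES_PER_CHECKSUM = 512;
    static final int CRC_WIDTH = 4; // one 4-byte CRC per 512-byte chunk

    // Number of checksum bytes covering the byte range [offset, end).
    static long crcBytesFor(ChecksumType type, long offset, long end) {
        if (type == ChecksumType.NULL) {
            // Checksums disabled: offset and end are both 0 and no CRC
            // bytes exist, so the crcBytes != 0 assertion must be skipped.
            return 0;
        }
        long chunks = (end - offset + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
        long crcBytes = chunks * CRC_WIDTH;
        assert crcBytes != 0;
        return crcBytes;
    }
}
```

The point of the design is simply that the "no checksum" case returns early instead of reaching an invariant that only holds when checksums are enabled.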
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Bogdan Raducanu updated HDFS-9220:
----------------------------------
    Description: Appended the sentence "More generally, the failure happens when reading from the last block of a file and the last block has <= 512 bytes."; the rest of the description is unchanged.
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Bogdan Raducanu updated HDFS-9220:
----------------------------------
    Summary: Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum  (was: ChecksumException after writing less than 512 bytes)
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Jing Zhao updated HDFS-9220:
----------------------------
    Priority: Critical  (was: Major)
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Jing Zhao updated HDFS-9220:
----------------------------
    Attachment: HDFS-9220.001.patch

Updated the patch to address [~templedf]'s comments.
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Jing Zhao updated HDFS-9220:
----------------------------
    Priority: Blocker  (was: Critical)
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Jing Zhao updated HDFS-9220:
----------------------------
    Target Version/s: 2.7.2
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Jing Zhao updated HDFS-9220:
----------------------------
    Attachment: HDFS-9220.000.patch

Submitted a patch including the above fix and also [~bograd]'s unit test.
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Jing Zhao updated HDFS-9220:
----------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HDFS-9220) Reading small file (< 512 bytes) that is open for append fails due to incorrect checksum
Walter Su updated HDFS-9220:
----------------------------
    Assignee: Jing Zhao  (was: Jagadesh Kiran N)