[ https://issues.apache.org/jira/browse/HDFS-15050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoqiao He updated HDFS-15050:
-------------------------------
    Attachment: HDFS-15050.001.patch
        Status: Patch Available  (was: Open)

Submit an initial patch with minor changes to the exception message.

> Optimize log information when DFSInputStream meets CannotObtainBlockLengthException
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-15050
>                 URL: https://issues.apache.org/jira/browse/HDFS-15050
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: dfsclient
>            Reporter: Xiaoqiao He
>            Assignee: Xiaoqiao He
>            Priority: Major
>         Attachments: HDFS-15050.001.patch
>
>
> It is hard to tell which file is affected when DFSInputStream meets 
> CannotObtainBlockLengthException, as shown in the exception log below. This issue 
> proposes logging the file path string when CannotObtainBlockLengthException is 
> thrown; a hypothetical sketch of the resulting message follows the trace.
> {code:java}
> Caused by: java.io.IOException: Cannot obtain block length for LocatedBlock{BP-***:blk_***_***; getBlockSize()=690504; corrupt=false; offset=1811939328; locs=[DatanodeInfoWithStorage[*:50010,DS-2bcadcc4-458a-45c6-a91b-8461bf7cdd71,DISK], DatanodeInfoWithStorage[*:50010,DS-8f2bb259-ecb2-4839-8769-4a0523360d58,DISK], DatanodeInfoWithStorage[*:50010,DS-69f4de6f-2428-42ff-9486-98c2544b1ada,DISK]]}
>       at org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:402)
>       at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
>       at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:280)
>       at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:272)
>       at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1664)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:300)
>       at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:300)
>       at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:161)
>       at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.open(ChRootedFileSystem.java:266)
>       at org.apache.hadoop.fs.viewfs.ViewFileSystem.open(ViewFileSystem.java:481)
>       at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:828)
>       at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>       at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>       at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
>       ... 16 more
> {code}
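>
> A minimal illustrative sketch of what a more informative message could look like. The class, constructor, and file path below are hypothetical placeholders, not the actual HDFS-15050 patch; the real exception class is org.apache.hadoop.hdfs.CannotObtainBlockLengthException.
> {code:java}
> import java.io.IOException;
>
> // Hypothetical stand-in for the real exception class, used only to show the
> // intended message shape: append the file path after the LocatedBlock info
> // so operators can tell which file the unreadable block belongs to.
> class BlockLengthExceptionSketch extends IOException {
>     BlockLengthExceptionSketch(String locatedBlockInfo, String filePath) {
>         super("Cannot obtain block length for " + locatedBlockInfo
>                 + " of " + filePath);
>     }
> }
>
> public class Demo {
>     public static void main(String[] args) {
>         // Placeholder values; a real message would carry the full
>         // LocatedBlock.toString() and the path passed to DFSClient.open().
>         IOException e = new BlockLengthExceptionSketch(
>                 "LocatedBlock{BP-***:blk_***_***; ...}",
>                 "/user/hive/warehouse/demo_table/part-00000");
>         // Prints the enriched message, ending with "of /user/hive/warehouse/demo_table/part-00000".
>         System.out.println(e.getMessage());
>     }
> }
> {code}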



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
