[ https://issues.apache.org/jira/browse/HADOOP-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12675517#action_12675517 ]

Hairong Kuang commented on HADOOP-5286:
---------------------------------------

Thanks, Raghu, for the clarification. I did not realize we have a read timeout 
too. So either a slow reader or a slow writer might cause a read failure. Still, 
the multiple retries and the slow writing by the datanode contributed to the 1.5 
hours spent reading. From the log, the retries took at least half an hour. The 
log does not show exactly how long the last successful read took, because not 
every failed read was logged.
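
To illustrate how this adds up, here is a minimal sketch (not the actual 
DFSClient code) of a per-attempt socket read timeout combined with retry passes 
over a block's replicas. The timeout value, retry count, and the readBlock/ 
readFully helpers are illustrative assumptions, not Hadoop defaults:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative sketch only -- not the actual DFSClient code paths.
public class BlockReadRetrySketch {
    static final int READ_TIMEOUT_MS = 60000; // assumed per-attempt socket timeout
    static final int MAX_RETRIES = 3;         // assumed retry passes over the replicas

    // Hypothetical helper: try each replica in turn, retrying the whole list on failure.
    static byte[] readBlock(InetSocketAddress[] replicas) throws IOException {
        IOException last = null;
        for (int pass = 0; pass < MAX_RETRIES; pass++) {
            for (InetSocketAddress dn : replicas) {
                try (Socket s = new Socket()) {
                    s.connect(dn, READ_TIMEOUT_MS);
                    s.setSoTimeout(READ_TIMEOUT_MS); // a slow writer or reader trips this
                    return readFully(s.getInputStream());
                } catch (IOException e) {
                    last = e; // each timed-out attempt costs up to READ_TIMEOUT_MS
                }
            }
        }
        // Worst case before giving up: MAX_RETRIES * replicas.length * READ_TIMEOUT_MS.
        throw last != null ? last : new IOException("no replicas to read from");
    }

    private static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
{code}

With these assumed numbers, one unreadable block can burn 3 passes x 3 replicas 
x 60 s = 9 minutes before failing; if the client then re-fetches block locations 
and starts over, half an hour of retries is easy to reach.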

> DFS client blocked for a long time reading blocks of a file on the JobTracker
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-5286
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5286
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.20.0
>            Reporter: Hemanth Yamijala
>         Attachments: jt-log-for-blocked-reads.txt
>
>
> On a large cluster, we've observed that the DFS client was blocked reading a 
> block of a file for almost one and a half hours. The file was being read by 
> the JobTracker of the cluster, and was a split file of a job. On the NameNode 
> logs, we observed the following message for the block:
> Inconsistent size for block blk_2044238107768440002_840946 reported from 
> <ip>:<port> current size is 195072 reported size is 1318567
> Details follow.
>  

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
