[
https://issues.apache.org/jira/browse/HADOOP-922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12467181
]
Raghu Angadi commented on HADOOP-922:
-------------------------------------
Did you mean to do this only for small seeks? The code does not enforce that:
even a 100MB seek would read 100MB of data that should simply be skipped.
> Optimize small reads and seeks
> ------------------------------
>
> Key: HADOOP-922
> URL: https://issues.apache.org/jira/browse/HADOOP-922
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.10.1
> Reporter: dhruba borthakur
> Assigned To: dhruba borthakur
> Attachments: smallreadseek2.patch
>
>
> A seek on a DFSInputStream causes the next read to re-open the socket
> connection to the datanode and fetch the remainder of the block all over
> again. This is not optimal.
> A small read followed by a small positive seek could re-utilize the data
> already fetched from the datanode as part of the previous read.
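The optimization described above could be sketched as follows. This is a hypothetical illustration, not the code in smallreadseek2.patch: a stream wrapper (here called `SkipAheadStream`, an invented name) that serves small forward seeks out of its existing read buffer, and only "re-opens the connection" (simulated here by skipping on the underlying stream) when the seek target falls outside the buffered window.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of the buffered-seek idea, NOT actual DFSInputStream
// code: small forward seeks advance a pointer within already-fetched data;
// anything else counts as a reconnect to the datanode.
class SkipAheadStream {
    private final InputStream in;        // stands in for the datanode socket
    private final byte[] buf = new byte[4096];
    private int bufPos = 0, bufLen = 0;  // consumed / filled extent of buf
    private long pos = 0;                // logical read position
    private long srcPos = 0;             // bytes consumed from `in` so far
    int reopens = 0;                     // counts simulated reconnects

    SkipAheadStream(InputStream in) { this.in = in; }

    int read() throws IOException {
        if (bufPos >= bufLen) {          // buffer exhausted: refill it
            bufLen = in.read(buf);
            bufPos = 0;
            if (bufLen < 0) return -1;
            srcPos += bufLen;
        }
        pos++;
        return buf[bufPos++] & 0xff;
    }

    void seek(long target) throws IOException {
        long diff = target - pos;
        if (diff >= 0 && diff <= bufLen - bufPos) {
            bufPos += (int) diff;        // small forward seek: reuse buffer
        } else {
            reopens++;                   // real code would reconnect here
            in.skip(target - srcPos);    // simplified; assumes forward seek
            srcPos = target;
            bufPos = bufLen = 0;         // discard the stale buffer
        }
        pos = target;
    }
}
```

This also makes Raghu's point concrete: with an unbounded `in.skip`, a 100MB seek still pulls or discards all the intervening data, so a real implementation would want to cap how far the buffer-reuse path reaches.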
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.