[ https://issues.apache.org/jira/browse/HDFS-16520?focusedWorklogId=760665&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-760665 ]

ASF GitHub Bot logged work on HDFS-16520:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 22/Apr/22 07:30
            Start Date: 22/Apr/22 07:30
    Worklog Time Spent: 10m 
      Work Description: cndaimin commented on PR #4104:
URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1106106812

   @tasanuma Thanks for your review.




Issue Time Tracking
-------------------

    Worklog Id:     (was: 760665)
    Time Spent: 2h 10m  (was: 2h)

> Improve EC pread: avoid potential reading whole block
> -----------------------------------------------------
>
>                 Key: HDFS-16520
>                 URL: https://issues.apache.org/jira/browse/HDFS-16520
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: dfsclient, ec
>    Affects Versions: 3.3.1, 3.3.2
>            Reporter: daimin
>            Assignee: daimin
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> HDFS client 'pread' stands for 'positional read': this kind of read only needs a 
> range of data, not the whole file/block. Through BlockReaderFactory#setLength, 
> the client tells the datanode how many bytes of the block to read from disk and 
> send back.
> For EC files this length is not set properly: both pread and sread (stateful 
> read) default to 'block.getBlockSize() - offsetInBlock'. As a result the datanode 
> reads and sends far more data than the client needs, then aborts when the client 
> closes the connection, which wastes a lot of resources.
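
A minimal sketch of the idea, using hypothetical names (PreadLengthSketch and preadLength are illustrative, not actual Hadoop code): it shows how the length requested from the datanode could be capped to the bytes the pread actually needs, instead of defaulting to blockSize - offsetInBlock.

    // Hedged sketch, not the real Hadoop patch. It only illustrates capping the
    // length passed to the block reader for a positional read (pread) to the
    // requested range, instead of asking for everything up to the end of the block.
    public final class PreadLengthSketch {

        /**
         * Length the datanode should be asked to read for a pread that wants
         * requestedLen bytes starting at offsetInBlock, on a block of blockSize
         * bytes. The result never exceeds the remaining bytes in the block and
         * never exceeds the requested range.
         */
        static long preadLength(long blockSize, long offsetInBlock, long requestedLen) {
            long remainingInBlock = blockSize - offsetInBlock;
            return Math.min(requestedLen, remainingInBlock);
        }

        public static void main(String[] args) {
            long blockSize = 128L * 1024 * 1024;   // 128 MiB block
            long offsetInBlock = 1024;             // start of the pread inside the block
            long requestedLen = 4096;              // bytes the caller actually wants

            // Old behaviour: request everything from offsetInBlock to the block end.
            long oldLen = blockSize - offsetInBlock;
            // Improved behaviour: request only the range the pread needs.
            long newLen = preadLength(blockSize, offsetInBlock, requestedLen);

            System.out.println("old length requested: " + oldLen);   // ~128 MiB
            System.out.println("new length requested: " + newLen);   // 4 KiB
        }
    }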



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

