[ https://issues.apache.org/jira/browse/HADOOP-18296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17957045#comment-17957045 ]

ASF GitHub Bot commented on HADOOP-18296:
-----------------------------------------

steveloughran opened a new pull request, #7732:
URL: https://github.com/apache/hadoop/pull/7732

   ### Description of PR
   
   * Lets you turn off checksumming in the local FS (but not HDFS!) with the option `file.verify-checksum` (see the first sketch after this list).
   * Has a copy of Parquet's `TrackingByteBufferAllocator`, modified for Hadoop APIs and named `TrackingByteBufferPool`; not yet used in tests (see the second sketch after this list).
   * New capability "fs.capability.vectoredio.sliced" to declare that the stream slices buffers (see the first sketch after this list).
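   
   A minimal sketch of how the first and third items might fit together, assuming the new option is read as a boolean configuration key for the local FS and the capability is probed through the stream's `StreamCapabilities` interface; the class name and file path here are illustrative only, not part of the patch:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   // Hypothetical demo class, not from the patch.
   public class SlicedCapabilityProbe {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Assumption: the new option is a boolean switch for the local FS only;
       // it has no effect on HDFS.
       conf.setBoolean("file.verify-checksum", false);
   
       FileSystem fs = FileSystem.getLocal(conf);
       try (FSDataInputStream in = fs.open(new Path("/tmp/data.bin"))) {
         // If the stream declares this capability, buffers returned from a
         // vectored read may be slices of a larger merged buffer, so callers
         // must not hand them back to a buffer pool as pool-owned buffers.
         boolean sliced = in.hasCapability("fs.capability.vectoredio.sliced");
         System.out.println("returns sliced buffers: " + sliced);
       }
     }
   }
   ```
   
   And a rough sketch of the kind of leak-checking wrapper the second item describes, built on the existing `org.apache.hadoop.io.ByteBufferPool` interface; the names below are made up for illustration and are not the PR's `TrackingByteBufferPool`:
   
   ```java
   import java.nio.ByteBuffer;
   import java.util.Collections;
   import java.util.IdentityHashMap;
   import java.util.Set;
   import org.apache.hadoop.io.ByteBufferPool;
   
   // Hypothetical wrapper: remembers every buffer handed out and fails if a
   // buffer is never returned, or if an unknown buffer comes back.
   public class LeakCheckingPool implements ByteBufferPool {
     private final ByteBufferPool inner;
     private final Set<ByteBuffer> outstanding =
         Collections.newSetFromMap(new IdentityHashMap<>());
   
     public LeakCheckingPool(ByteBufferPool inner) {
       this.inner = inner;
     }
   
     @Override
     public synchronized ByteBuffer getBuffer(boolean direct, int length) {
       ByteBuffer b = inner.getBuffer(direct, length);
       outstanding.add(b);          // track the exact instance handed out
       return b;
     }
   
     @Override
     public synchronized void putBuffer(ByteBuffer buffer) {
       if (!outstanding.remove(buffer)) {
         // A sliced or duplicated buffer is a different instance from the one
         // issued by getBuffer(), so it is rejected here.
         throw new IllegalStateException("buffer was not issued by this pool");
       }
       inner.putBuffer(buffer);
     }
   
     /** Call at the end of a test to detect leaked buffers. */
     public synchronized void assertAllReturned() {
       if (!outstanding.isEmpty()) {
         throw new IllegalStateException(outstanding.size() + " buffer(s) leaked");
       }
     }
   }
   ```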
   
   
   ### How was this patch tested?
   
   no tests yet.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
   
   




> Memory fragmentation in ChecksumFileSystem Vectored IO implementation.
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-18296
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18296
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: common
>    Affects Versions: 3.4.0
>            Reporter: Mukund Thakur
>            Assignee: Steve Loughran
>            Priority: Minor
>              Labels: fs
>
> Because we have implemented merging of ranges in the ChecksumFSInputChecker 
> implementation of the vectored IO API, it can lead to memory fragmentation. 
> Let me explain with an example.
>  
> Suppose a client requests 3 ranges: 0-500, 700-1000 and 1200-1500.
> Because of merging, these ranges are combined into one, so we allocate a 
> single byte buffer covering 0-1500 but return sliced byte buffers for the 
> requested ranges.
> Once the client has finished reading all the ranges, it can only free the 
> memory of the requested ranges; the memory of the gaps (here 500-700 and 
> 1000-1200) is never released.
>  
> Note that this only happens for direct byte buffers.
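
A small, self-contained illustration of the effect described above, using only JDK classes (this is not code from the patch): three requested ranges are served as slices of one merged direct buffer, so releasing the slices individually can never give back the gap bytes.

```java
import java.nio.ByteBuffer;

public class SliceFragmentationDemo {   // illustrative only
  /** Slice [start, end) out of a shared buffer, as a merged read would. */
  static ByteBuffer slice(ByteBuffer merged, int start, int end) {
    ByteBuffer dup = merged.duplicate();
    dup.position(start);
    dup.limit(end);
    return dup.slice();                 // shares merged's native memory
  }

  public static void main(String[] args) {
    // One direct allocation covering the whole merged range 0-1500.
    ByteBuffer merged = ByteBuffer.allocateDirect(1500);

    ByteBuffer r1 = slice(merged, 0, 500);       // requested range 0-500
    ByteBuffer r2 = slice(merged, 700, 1000);    // requested range 700-1000
    ByteBuffer r3 = slice(merged, 1200, 1500);   // requested range 1200-1500

    // Every slice pins the full 1500-byte native allocation. Dropping r1..r3
    // one by one never releases the gap bytes 500-700 and 1000-1200; the
    // memory is reclaimed only when all slices (and the parent buffer) are
    // unreachable and garbage collected.
    System.out.println(r1.capacity() + " " + r2.capacity() + " " + r3.capacity());
  }
}
```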


