[ https://issues.apache.org/jira/browse/HDFS-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14528210#comment-14528210 ]

Vinayakumar B commented on HDFS-8019:
-------------------------------------

For client-side erasure encoding, chunk buffers are obtained from the 
{{ByteArrayManager}} provided by the DFSClient. This allows the client to 
re-use (on-heap) byte arrays whose length is above a threshold \[128 by 
default\].

This was introduced in HDFS-7276. [~drankye] and [~szetszwo], do you think 
re-using {{ByteArrayManager}} for datanode-side erasure recovery would be 
sufficient here?
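
To make the idea concrete, here is a minimal sketch of a per-length recycling 
pool in the spirit of what the comment above describes. The {{ChunkBufferPool}} 
name, its methods, and the length-based threshold are hypothetical; this only 
mimics the re-use behaviour and is not the actual {{ByteArrayManager}} 
implementation.

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

/**
 * Hypothetical sketch only: recycle on-heap byte arrays per array length,
 * skipping arrays below a threshold (128 by default, per the comment above).
 * Not the actual ByteArrayManager implementation.
 */
public class ChunkBufferPool {
  private final int lengthThreshold;
  // One free-list per array length, so a returned buffer is only handed
  // back to callers asking for exactly the same size.
  private final ConcurrentMap<Integer, Queue<byte[]>> freeLists =
      new ConcurrentHashMap<>();

  public ChunkBufferPool(int lengthThreshold) {
    this.lengthThreshold = lengthThreshold;
  }

  /** Borrow a buffer of the given length, reusing a pooled one if possible. */
  public byte[] newByteArray(int length) {
    if (length < lengthThreshold) {
      return new byte[length];   // too small to be worth pooling
    }
    Queue<byte[]> q = freeLists.get(length);
    byte[] reused = (q == null) ? null : q.poll();
    return (reused != null) ? reused : new byte[length];
  }

  /** Return a buffer so a later recovery task can reuse it. */
  public void release(byte[] array) {
    if (array.length < lengthThreshold) {
      return;                    // let the GC reclaim it
    }
    freeLists.computeIfAbsent(array.length,
        k -> new ConcurrentLinkedQueue<>()).offer(array);
  }
}
{code}

A datanode recovery task would then borrow a buffer per coding chunk before 
decoding and release it once the recovered block data has been written out.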

> Erasure Coding: erasure coding chunk buffer allocation and management
> ---------------------------------------------------------------------
>
>                 Key: HDFS-8019
>                 URL: https://issues.apache.org/jira/browse/HDFS-8019
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>            Assignee: Vinayakumar B
>
> As a task of HDFS-7344, this is to come up with a chunk buffer pool for 
> allocating and managing coding chunk buffers, either on-heap or off-heap. 
> Note this assumes some DataNodes are powerful enough at computing to perform 
> EC coding work, so it is better to have this dedicated buffer pool and its 
> management.
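
For the off-heap option mentioned in the description, the same recycling idea 
could be sketched with direct ByteBuffers. The {{DirectChunkBufferPool}} name 
and its methods are hypothetical; this is an illustration, not existing HDFS 
code.

{code:java}
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

/**
 * Hypothetical sketch of the off-heap option: recycle direct ByteBuffers by
 * capacity, since allocating and freeing direct memory per coding task is
 * comparatively expensive.  Illustration only.
 */
public class DirectChunkBufferPool {
  private final ConcurrentMap<Integer, Queue<ByteBuffer>> freeLists =
      new ConcurrentHashMap<>();

  /** Borrow a direct buffer of the given capacity, cleared and ready to use. */
  public ByteBuffer acquire(int capacity) {
    Queue<ByteBuffer> q = freeLists.get(capacity);
    ByteBuffer buf = (q == null) ? null : q.poll();
    if (buf == null) {
      buf = ByteBuffer.allocateDirect(capacity);
    }
    buf.clear();
    return buf;
  }

  /** Return a buffer so a later coding task can reuse the direct memory. */
  public void release(ByteBuffer buf) {
    freeLists.computeIfAbsent(buf.capacity(),
        k -> new ConcurrentLinkedQueue<>()).offer(buf);
  }
}
{code}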



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)