[ https://issues.apache.org/jira/browse/ARROW-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17393423#comment-17393423 ]

Weston Pace commented on ARROW-3406:
------------------------------------

This is often done during packet processing.  Chapter 5 of
https://fast.dpdk.org/doc/pdf-guides-2.2/prog_guide-2.2.pdf explains how one
such pool works, although packet processing is a somewhat different field:
packets are usually very small (a few kB) and arrive at a very fast rate.  I
agree with Antoine that this would only be useful (i.e. not redundant with
jemalloc/mimalloc) if you are using fixed-size blocks, which is fairly common
with I/O.  For disk I/O, though, I would guess the I/O time will swamp any
allocation time.  For memory-mapped I/O this is a non-issue.  For network I/O
via Flight I think this kind of optimization would be covered by gRPC, and I
think the same is true of S3, where we receive the buffer from the AWS SDK.

Without any evidence that memory pressure/allocation is becoming a bottleneck,
I'm not sure it makes sense to work on this.
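
For reference, the fixed-size-block case boils down to a free list keyed to a
single block size, which is roughly what a DPDK mempool does (minus the ring
and per-core caches).  A minimal sketch, with class and method names made up
for illustration and not tied to arrow::MemoryPool:

#include <cstddef>
#include <cstdlib>
#include <mutex>
#include <vector>

// Hypothetical fixed-size block pool: freed blocks go onto a free list and
// are handed back on the next allocation instead of returning to the system.
class FixedBlockPool {
 public:
  explicit FixedBlockPool(std::size_t block_size) : block_size_(block_size) {}

  ~FixedBlockPool() {
    for (void* block : free_list_) std::free(block);
  }

  void* Allocate() {
    std::lock_guard<std::mutex> lock(mutex_);
    if (!free_list_.empty()) {
      void* block = free_list_.back();  // reuse a cached block
      free_list_.pop_back();
      return block;
    }
    return std::malloc(block_size_);    // cache miss: fall back to the system
  }

  void Free(void* block) {
    std::lock_guard<std::mutex> lock(mutex_);
    free_list_.push_back(block);        // recycle instead of freeing now
  }

 private:
  const std::size_t block_size_;
  std::mutex mutex_;
  std::vector<void*> free_list_;
};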

> [C++] Create a caching memory pool implementation
> -------------------------------------------------
>
>                 Key: ARROW-3406
>                 URL: https://issues.apache.org/jira/browse/ARROW-3406
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: C++
>    Affects Versions: 0.11.0
>            Reporter: Antoine Pitrou
>            Priority: Minor
>              Labels: query-engine
>
> A caching memory pool implementation would be able to recycle freed memory 
> blocks instead of returning them to the system immediately. Two different 
> policies may be chosen:
> * either an unbounded cache
> * or a size-limited cache, perhaps with some kind of LRU mechanism
> Such a feature might help e.g. for CSV parsing, when reading and parsing data 
> into temporary memory buffers.
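
To make the proposed policies concrete, a rough sketch of the size-limited
variant could cache freed blocks in per-size buckets, reuse them on later
allocations of the same size, and release the oldest cached blocks back to the
system once a byte budget is exceeded.  The names below are hypothetical and
this is not based on the actual arrow::MemoryPool interface:

#include <cstddef>
#include <cstdlib>
#include <deque>
#include <unordered_map>
#include <utility>

// Hypothetical size-limited caching pool (illustrative names only).
// Freed blocks are kept in per-size buckets; the oldest cached blocks are
// released to the system once the cache exceeds `capacity_bytes`.
class CachingPool {
 public:
  explicit CachingPool(std::size_t capacity_bytes)
      : capacity_bytes_(capacity_bytes) {}

  ~CachingPool() {
    for (auto& [size, blocks] : buckets_) {
      for (void* block : blocks) std::free(block);
    }
  }

  void* Allocate(std::size_t size) {
    auto it = buckets_.find(size);
    if (it != buckets_.end() && !it->second.empty()) {
      void* block = it->second.back();  // reuse the most recently freed block
      it->second.pop_back();
      cached_bytes_ -= size;
      RemoveFromLru(block);
      return block;
    }
    return std::malloc(size);           // cache miss: allocate from the system
  }

  void Free(void* block, std::size_t size) {
    buckets_[size].push_back(block);    // recycle instead of freeing now
    lru_.push_back({block, size});      // oldest-freed blocks evict first
    cached_bytes_ += size;
    Evict();
  }

 private:
  void Evict() {
    while (cached_bytes_ > capacity_bytes_ && !lru_.empty()) {
      auto [block, size] = lru_.front();
      lru_.pop_front();
      auto& bucket = buckets_[size];
      // drop the block from its bucket, then release it to the system
      for (auto it = bucket.begin(); it != bucket.end(); ++it) {
        if (*it == block) { bucket.erase(it); break; }
      }
      cached_bytes_ -= size;
      std::free(block);
    }
  }

  void RemoveFromLru(void* block) {
    for (auto it = lru_.begin(); it != lru_.end(); ++it) {
      if (it->first == block) { lru_.erase(it); break; }
    }
  }

  const std::size_t capacity_bytes_;
  std::size_t cached_bytes_ = 0;
  std::unordered_map<std::size_t, std::deque<void*>> buckets_;
  std::deque<std::pair<void*, std::size_t>> lru_;
};

The unbounded policy would be the same structure with the eviction step
removed (or an effectively infinite capacity).  In practice the pool would
also want to round request sizes up to a small set of bucket sizes so that
slightly different allocations can still hit the cache.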


