On May 8, 2014, at 9:42 AM, D'Alessandro, Luke K <[email protected]> wrote:
> On May 8, 2014, at 12:04 PM, Jason Evans <[email protected]> wrote:
>> On May 8, 2014, at 9:00 AM, D'Alessandro, Luke K <[email protected]> 
>> wrote:
>>> I’m in the market for a good concurrent allocator to manage a memory region 
>>> corresponding to pinned network memory for a multithreaded, distributed 
>>> HPC application. Basically, I’m going to want to do RDMA to objects that 
>>> are often malloc'd and freed. The pinning operation is expensive, so it is 
>>> important to amortize it over many uses. I’ve written a simple 
>>> thread-local caching allocator that lets me pin contiguous blocks when 
>>> they’re first allocated and then reuse the space via per-thread free 
>>> lists, but I don’t really have the resources to implement this in a 
>>> robust way.
>> 
>> This pending change may be relevant to your needs:
>> 
>>      https://github.com/jemalloc/jemalloc/pull/80
>> 
>> I’m imagining that you would implement a custom chunk allocator that pins 
>> entire chunks, and then specifically use that arena for allocations that you 
>> require to be pinned.  This approach has some shortcomings, but perhaps they 
>> don’t matter to your specific application.
> 
> This patch appears to address half of the problem, though I’m not 100% sure 
> how to implement the chunk allocator without calling back into jemalloc 
> recursively. I guess I either use mmap() directly, or jemalloc.h already 
> provides a way to get “raw” memory. chunk_alloc_core doesn’t look like a 
> name that’s going to be exposed, though, so maybe mmap() is the way to go; 
> not a big deal.

Yes, mmap() is the way to go.

> Based on the proposed patch, it looks like MALLOCX_ARENA(a) /will/ have an 
> effect for huge regions for both huge_palloc() and huge_dalloc() as well, 
> which is exactly what I need.
> 
> [...]
> 
> Do you have any sense of the likelihood that this patch will be accepted 
> going forward?

The patch will definitely be merged once it's cleaned up.  jemalloc 4.0.0 
probably won't be ready for release until late this year, though, so you'll 
need to use the dev branch in the meantime if you depend on this functionality.

Jason
_______________________________________________
jemalloc-discuss mailing list
[email protected]
http://www.canonware.com/mailman/listinfo/jemalloc-discuss