[ https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200435#comment-15200435 ]

Wang, Gang commented on PHOENIX-2405:
-------------------------------------

Thanks [~jamestaylor]. Yes, definitely. Apache Mnemonic provides a mechanism 
for client code to use different kinds of devices, e.g. off-heap memory, NVMe, 
or SSD, as additional memory space; note that performance depends on the 
underlying device. Furthermore, the memory allocator can be customized as a 
service for your specific application logic. 

It is not very difficult to leverage the Mnemonic infrastructure for this case. 
We can use the following code snippets to create a memory pool backed by a 
general-purpose allocator service.
{code:title=Main.java|borderStyle=solid}
/* pool on off-heap (system) memory */
SysMemAllocator sysAlloc = new SysMemAllocator(1024 * 1024 * 20 /*capacity*/, true);
/* pool on a volatile storage device, via the "vmem" allocator service */
BigDataMemAllocator volAlloc = new BigDataMemAllocator(
    Utils.getVolatileMemoryAllocatorService("vmem"),
    1024 * 1024 * 20 /*capacity*/, "." /*uri*/, true);
/* pool on a non-volatile storage device, via the "pmalloc" allocator service */
BigDataMemAllocator nvmAlloc = new BigDataMemAllocator(
    Utils.getVolatileMemoryAllocatorService("pmalloc"),
    1024 * 1024 * 20 /*capacity*/, "./example.dat" /*uri*/, true);
{code}
Then we can use createChunk(<...>) or createBuffer(<...>) to allocate memory 
resources from the pool; those external memory resources can be reclaimed 
automatically by the JVM GC or released manually by your code, as in the rough 
sketch below. 
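
A minimal sketch of that allocate/reclaim cycle, in the same snippet style as 
above (imports omitted), assuming the holder returned by createBuffer(<...>) 
exposes get() and destroy() as in the Mnemonic examples; variable names, the 
title, and sizes are illustrative only:
{code:title=AllocateExample.java|borderStyle=solid}
/* Illustrative sketch; verify class and method names against the Mnemonic release in use. */
SysMemAllocator alloc = new SysMemAllocator(1024 * 1024 * 20 /*capacity*/, true);

/* Allocate a 4 KB buffer from the off-heap pool. */
MemBufferHolder<SysMemAllocator> holder = alloc.createBuffer(1024 * 4);
ByteBuffer bb = holder.get();   /* ordinary ByteBuffer view over the external memory */
bb.putLong(0, 42L);             /* read and write through the buffer as usual */

/* Either let the holder be reclaimed along with JVM GC, or release it explicitly: */
holder.destroy();
{code}
Releasing the holder explicitly returns the space to the pool right away, 
which is likely preferable for long-running server-side sorts rather than 
waiting on a GC cycle.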
Please refer to the example and testcase code of Apache Mnemonic for details. 
Thanks.

> Improve stability of server side sort for ORDER BY
> --------------------------------------------------
>
>                 Key: PHOENIX-2405
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2405
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Maryann Xue
>              Labels: gsoc2016
>             Fix For: 4.8.0
>
>
> We currently use memory mapped files to buffer data as it's being sorted in 
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions 
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory mapped files are not cleaned up very well 
> in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search 
> around stack overflow for suggestions on what to do when your app (in this 
> case Phoenix) encounters this issue when using mapped buffers, the answers 
> tend toward manually cleaning up the mapped buffers or explicitly triggering 
> a full GC. See 
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
>  for example. There are apparently long standing JVM/JRE problems with 
> reclamation of mapped buffers. I think we may want to explore in Phoenix a 
> different way to achieve what the current code is doing.
> {quote}
> Instead of using memory mapped files, we could use heap memory, or perhaps 
> there are other mechanisms too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
