[ 
https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15200435#comment-15200435
 ] 

Wang, Gang edited comment on PHOENIX-2405 at 3/17/16 10:10 PM:
---------------------------------------------------------------

Thanks [~jamestaylor]. Yes, definitely, Apache Mnemonic provides a mechanism 
for client code to utilize different kinds of devices, e.g. off-heap memory, 
NVMe, and SSD, as additional memory space; note that performance depends on 
the underlying device. Furthermore, the memory allocator can be customized as 
a service for your specific application logic. 

We can use the following code snippets to create a memory pool along with a 
general-purpose allocator service:
{code:title=Main.java|borderStyle=solid}
/* on off-heap memory */
SysMemAllocator act1 = new SysMemAllocator(1024 * 1024 * 20 /*capacity*/, true);

/* on a volatile storage device */
BigDataMemAllocator act2 = new BigDataMemAllocator(
    Utils.getVolatileMemoryAllocatorService("vmem"),
    1024 * 1024 * 20 /*capacity*/, "." /*uri*/, true);

/* on a non-volatile storage device */
BigDataMemAllocator act3 = new BigDataMemAllocator(
    Utils.getVolatileMemoryAllocatorService("pmalloc"),
    1024 * 1024 * 20 /*capacity*/, "./example.dat" /*uri*/, true);
{code}
We can then use createChunk(<...>) or createBuffer(<...>) to allocate memory 
resources; those external memory resources can be reclaimed automatically by 
the JVM GC or manually by your code. 
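
For illustration only, a rough sketch of that allocate-and-reclaim flow. The 
holder types and the destroy()/registerAutoReclaim() calls are from memory of 
the early Mnemonic API and may differ from the current code; treat this as 
pseudocode rather than a working program:
{code:title=AllocSketch.java|borderStyle=solid}
SysMemAllocator act = new SysMemAllocator(1024 * 1024 * 20 /*capacity*/, true);

/* hypothetical holder types, as in the early Mnemonic examples */
MemChunkHolder<SysMemAllocator> chunk = act.createChunk(2048);
MemBufferHolder<SysMemAllocator> buf = act.createBuffer(4096);

buf.get().putLong(0, 42L);     /* get() exposes a plain ByteBuffer */

chunk.destroy();               /* reclaim manually, or ...          */
buf.registerAutoReclaim();     /* ... let the JVM GC reclaim it     */
{code}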

The above applies to applications using the volatile block memory mode. If 
the sorting operations introduce some huge object graphs, we can use the 
volatile object mode of Mnemonic instead; there are also two corresponding 
non-volatile modes, but I think those might not be helpful for this case.

Please refer to the example and test-case code of Apache Mnemonic for 
details. Thanks.


was (Author: qichfan):

> Improve stability of server side sort for ORDER BY
> --------------------------------------------------
>
>                 Key: PHOENIX-2405
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2405
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Maryann Xue
>              Labels: gsoc2016
>             Fix For: 4.8.0
>
>
> We currently use memory mapped files to buffer data as it's being sorted in 
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions 
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory mapped files are not cleaned up very well 
> in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search 
> around stack overflow for suggestions on what to do when your app (in this 
> case Phoenix) encounters this issue when using mapped buffers, the answers 
> tend toward manually cleaning up the mapped buffers or explicitly triggering 
> a full GC. See 
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
>  for example. There are apparently long standing JVM/JRE problems with 
> reclamation of mapped buffers. I think we may want to explore in Phoenix a 
> different way to achieve what the current code is doing.
> {quote}
> Instead of using memory mapped files, we could use heap memory, or perhaps 
> there are other mechanisms too.
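
For context, the failure mode above can be exercised with plain NIO. A minimal 
sketch (file name and sizes are arbitrary): closing the channel does not 
release the mapping, which is why code that maps aggressively can exhaust 
virtual address space and hit "Map failed".

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    /** Map a scratch file, write through the mapping, and read the value back. */
    static int mapWriteRead() throws Exception {
        File f = File.createTempFile("sort-buffer", ".tmp");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(0, 42);
            // Closing the channel below does NOT unmap the region; the address
            // space stays reserved until the MappedByteBuffer is garbage
            // collected, hence "OutOfMemoryError: Map failed" under pressure.
            return buf.getInt(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(mapWriteRead()); // prints 42
        System.gc(); // the commonly suggested (imperfect) workaround: nudge a GC
    }
}
```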



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
