[ 
https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15201870#comment-15201870
 ] 

Yanping Wang commented on PHOENIX-2405:
---------------------------------------

Hi, Stack. Your concern is certainly valid. Yes, the Mnemonic idea is new; in 
fact, the entire field of non-volatile computing for upcoming TB-sized 
persistent memory is new. That is why I said to add code, not to change code. 
"Add" means we can have a switch to use or not use the Mnemonic way of 
handling memory allocation, just as we did for Spark: we added a non-volatile 
RDD and tested the performance impact of the Mnemonic approach. It does not 
impact Spark if developers decide not to use it. 

We certainly need to add more documentation with use cases and examples to the 
Mnemonic project, and we will do that after the code is migrated in. 
We know that allocating system memory in the middle of an application, or 
spilling data to disk, are very high-cost operations. We can all work together 
to find a better solution, and we welcome all ideas and suggestions to improve 
the Mnemonic project.

> Improve performance and stability of server side sort for ORDER BY
> ------------------------------------------------------------------
>
>                 Key: PHOENIX-2405
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2405
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: James Taylor
>            Assignee: Maryann Xue
>              Labels: gsoc2016
>             Fix For: 4.8.0
>
>
> We currently use memory mapped files to buffer data as it's being sorted in 
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions 
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
>         at sun.nio.ch.FileChannelImpl.map0(Native Method)
>         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory mapped files are not cleaned up very 
> well in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search 
> around stack overflow for suggestions on what to do when your app (in this 
> case Phoenix) encounters this issue when using mapped buffers, the answers 
> tend toward manually cleaning up the mapped buffers or explicitly triggering 
> a full GC. See 
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
>  for example. There are apparently long standing JVM/JRE problems with 
> reclamation of mapped buffers. I think we may want to explore in Phoenix a 
> different way to achieve what the current code is doing.
> {quote}
> Instead of using memory mapped files, we could use heap memory, or perhaps 
> there are other mechanisms too.
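The manual cleanup workaround mentioned in the quoted Stack Overflow thread can be sketched as below. This is an illustrative example, not Phoenix code: the `unmap` helper is hypothetical, and it relies on JDK-internal classes (`sun.nio.ch.DirectBuffer` / `sun.misc.Cleaner` on JDK 8; JDK 9+ locks these down and offers `Unsafe.invokeCleaner` instead), which is exactly why it is a fragile, best-effort approach.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedBufferCleanup {

    // Best-effort, reflection-based unmap. On JDK 8 this reaches the
    // buffer's internal cleaner() and invokes clean(), releasing the
    // virtual address space immediately instead of waiting for GC.
    // On newer JDKs reflection may be blocked, in which case we simply
    // fall back to GC-driven reclamation.
    static void unmap(MappedByteBuffer buffer) {
        try {
            Method cleanerMethod = buffer.getClass().getMethod("cleaner");
            cleanerMethod.setAccessible(true);
            Object cleaner = cleanerMethod.invoke(buffer);
            Method cleanMethod = cleaner.getClass().getMethod("clean");
            cleanMethod.setAccessible(true);
            cleanMethod.invoke(cleaner);
        } catch (Exception e) {
            // Reflection blocked or internals changed: leave the mapping
            // for the garbage collector to reclaim eventually.
        }
    }

    public static void main(String[] args) throws Exception {
        File tmp = File.createTempFile("sort-spill", ".bin");
        tmp.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(tmp, "rw")) {
            MappedByteBuffer buf =
                raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(0, 42);   // use the mapping
            buf.force();         // flush to the backing file
            unmap(buf);          // eagerly release the address space
        }
        System.out.println("done");
    }
}
```

Eagerly unmapping after each spill buffer is retired reduces the chance of exhausting virtual address space ("Map failed"), but because it depends on JVM internals, switching to heap buffers as the description suggests is the more portable fix.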



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
