[ https://issues.apache.org/jira/browse/HBASE-10191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13918632#comment-13918632 ]

Matt Corgan commented on HBASE-10191:
-------------------------------------

{quote}"creating small in-memory HFiles..." – from a small CSLM that does the 
sort for us?{quote}yes, that is all i meant.  The CSLM would remain small 
because it gets flushed more often.  I don't doubt there are better ways to do 
it than the CSLM (like the deferred sorting you mention), but even just 
shrinking the size of the CSLM would be an improvement without having to 
re-think the memstore's concurrency mechanisms.

Let's say you have a 500MB memstore limit, and that encodes (not compresses) to 
100MB.  You could (a rough sketch in code follows this list):
* split it into 10 stripes, each with a ~50MB limit, and flush each of the 10 
stripes (to memory) individually
** you probably get a performance boost already, because ten 50MB CSLMs are 
better than one 500MB CSLM
* for a given stripe, flush the CSLM each time it reaches 25MB, which will spit 
out a ~5MB encoded "memory hfile" to off-heap storage
* optionally compact a stripe's "memory hfiles" in the background to increase 
read performance
* when a stripe has a 25MB CSLM + 5 encoded snapshots, flush/compact the whole 
thing to disk
* "release" the WAL entries for the stripe

On the WAL entries, I was just pointing out that you can no longer release the 
WAL entries when you flush the CSLM.  You have to hold on to the WAL entries 
until you flush the "memory hfiles" to disk.

> Move large arena storage off heap
> ---------------------------------
>
>                 Key: HBASE-10191
>                 URL: https://issues.apache.org/jira/browse/HBASE-10191
>             Project: HBase
>          Issue Type: Umbrella
>            Reporter: Andrew Purtell
>
> Even with the improved G1 GC in Java 7, Java processes that want to address
> large regions of memory while also providing low high-percentile latencies
> continue to be challenged. Fundamentally, a Java server process that has high
> data throughput and also tight latency SLAs will be stymied by the fact that
> the JVM does not provide a fully concurrent collector. There is simply not
> enough throughput to copy data during GC under safepoint (all application
> threads suspended) within available time bounds. This is increasingly an
> issue for HBase users operating under dual pressures: 1. tight response SLAs,
> 2. the increasing amount of RAM available in "commodity" server
> configurations, because GC load is roughly proportional to heap size.
>
> We can address this using parallel strategies. We should talk with the Java
> platform developer community about the possibility of a fully concurrent
> collector appearing in OpenJDK somehow. Set aside the question of whether
> this is too little too late; if one becomes available the benefit will be
> immediate, though subject to qualification for production, and transparent
> in terms of code changes. However, in the meantime we need an answer for
> Java versions already in production.
>
> This requires we move the large arena allocations off heap, those being the
> blockcache and memstore. On other JIRAs recently there has been related
> discussion about combining the blockcache and memstore (HBASE-9399) and
> about flushing memstore into blockcache (HBASE-5311), which is related work.
> We should build off-heap allocation for memstore and blockcache, perhaps a
> unified pool for both, and plumb zero-copy direct access to these
> allocations (via direct buffers) through the read and write I/O paths. This
> may require the construction of classes that provide object views over data
> contained within direct buffers. This is something else we could talk with
> the Java platform developer community about - it could be possible to
> provide language-level object views over off-heap memory, where on-heap
> objects could hold references to objects backed by off-heap memory but not
> vice versa, maybe facilitated by new intrinsics in Unsafe. Again, we need an
> answer for today also.
>
> We should investigate what existing libraries may be available in this
> regard. Key will be avoiding marshalling/unmarshalling costs. At most we
> should be copying primitives out of the direct buffers to register or stack
> locations until finally copying data to construct protobuf Messages. A
> related issue there is HBASE-9794, which proposes scatter-gather access to
> KeyValues when constructing RPC messages. We should see how far we can get
> with that, and also with zero-copy construction of protobuf Messages backed
> by direct buffer allocations. Some amount of native code may be required.
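
As a small illustration of the "object views over data contained within direct 
buffers" idea above, a hypothetical sketch follows.  The CellView class and its 
2-byte-key-length / 4-byte-value-length layout are invented for illustration; 
this is not the KeyValue format or any existing HBase/JDK API.

{code:java}
import java.nio.ByteBuffer;

/** Hypothetical read-only view over one cell serialized into a direct buffer. */
public final class CellView {
  private final ByteBuffer buf; // direct buffer owned by the off-heap arena
  private final int offset;     // where this cell starts inside the buffer

  public CellView(ByteBuffer buf, int offset) {
    this.buf = buf;
    this.offset = offset;
  }

  // Assumed layout: [short keyLen][int valueLen][key bytes][value bytes]
  public short keyLength()   { return buf.getShort(offset); }   // primitive read, no copy
  public int   valueLength() { return buf.getInt(offset + 2); }

  /** Copy only when a caller really needs an on-heap array (e.g. RPC marshalling). */
  public byte[] copyKey() {
    byte[] key = new byte[keyLength()];
    ByteBuffer dup = buf.duplicate(); // position/limit changes stay local to the view
    dup.position(offset + 6);
    dup.get(key);
    return key;
  }

  public static void main(String[] args) {
    ByteBuffer arena = ByteBuffer.allocateDirect(64);
    byte[] key = "row1".getBytes();
    arena.putShort((short) key.length).putInt(0).put(key);
    CellView cell = new CellView(arena, 0);
    System.out.println(cell.keyLength() + " / " + new String(cell.copyKey()));
  }
}
{code}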



--
This message was sent by Atlassian JIRA
(v6.2#6252)
