[ https://issues.apache.org/jira/browse/HBASE-4027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067993#comment-13067993 ]

stack commented on HBASE-4027:
------------------------------

Patch looks excellent!

Do you want that change to the surefire heap size in pom.xml?  Are you saying 
here that the tests won't pass unless you have 5G of RAM?

In the Apache code base, you cannot have '@author' tags.

WTF does DoubleBlockCache do? (smile).  Usually folks will fill in the class 
comment on the what/why of a class's existence.  Would be good here; e.g. why is 
it called DoubleBlockCache?  (Similar for the other new classes you've added.)
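
Even a couple of lines would do.  Here is a guess at what it might say -- I'm 
inferring the semantics from the fields below, so correct it if the actual 
policy differs:

{code}
/**
 * A "double" block cache that composes two caches: an on-heap
 * LruBlockCache and an off-heap, direct-ByteBuffer-backed SlabCache.
 * (My guess at the semantics -- fill in the real what/why here.)
 */
{code}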

I love this:

{code}
+  private LruBlockCache onHeapCache;
+  private SlabCache offHeapCache;
{code}

That's sweet.

Remove code rather than comment it out: e.g. 
{{+    // onHeapCache.cacheBlock(blockName, buf);}}

And when we cache a block, how do we know where to put it?  On-heap or 
off-heap?  How do you decide where to put it?  (Looks like you always put it 
off-heap here.)
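
If there is a policy, a comment or an explicit decision point would help.  
Purely a hypothetical sketch -- {{accepts}} is a made-up method, and I'm 
guessing at the signature from the commented-out line above -- but this is the 
kind of thing I mean:

{code}
// Hypothetical routing: blocks that fit a slab go off-heap, everything
// else stays on-heap. Not from the patch -- just illustrating an
// explicit, documented placement decision.
public void cacheBlock(String blockName, ByteBuffer buf) {
  if (offHeapCache.accepts(buf.capacity())) {   // made-up method
    offHeapCache.cacheBlock(blockName, buf);
  } else {
    onHeapCache.cacheBlock(blockName, buf);
  }
}
{code}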

Any chance of some off-heap stats?  (Is this getStats used?)

We only return the heap size of the on-heap cache.  Do you think we should not 
include the off-heap side?

Yeah, a bunch of these stats methods go unimplemented.  Can we do any of them?  
Or is it that they just don't make sense in an off-heap context?

Do the assignment when you declare the blockSize and slabs data members rather 
than waiting till the constructor runs.  Make them final while you are at it.  
Similar comment for SingleSizeCache, etc.
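
E.g. something like this -- the types and initializer values are hypothetical 
({{DEFAULT_BLOCK_SIZE}} is made up); the point is declaration-site assignment 
plus final:

{code}
// Assign at declaration and mark final so the compiler enforces
// single assignment. (Hypothetical types/values.)
private final int blockSize = DEFAULT_BLOCK_SIZE;  // made-up constant
private final List<SingleSizeCache> slabs = new ArrayList<SingleSizeCache>();
{code}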

What is this limit?  {{int blocksPerSlab = Integer.MAX_VALUE / blockSize;}}  
Integer.MAX_VALUE is 2^31 - 1 bytes, so that's a max of ~2G per slab, not 4G 
(a single ByteBuffer can't be bigger than that anyway).  Is that the intended 
cap?

Should the slab size just be hardcoded as Integer.MAX_VALUE?

The kung fu going on in the shutdown of MetaSlab needs a comment.  I think I 
know what's going on.  Explain what 'cleaner' is.
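
For anyone following along, I believe this is the usual trick for freeing a 
direct ByteBuffer eagerly instead of waiting on GC -- roughly the following 
(my guess at what the patch is doing; {{sun.nio.ch.DirectBuffer}} and 
{{sun.misc.Cleaner}} are internal, unsupported Sun APIs, which is exactly why 
the shutdown code deserves a comment):

{code}
import java.nio.ByteBuffer;
import sun.misc.Cleaner;
import sun.nio.ch.DirectBuffer;

public class DirectBufferFreer {
  // Eagerly release the native memory backing a direct ByteBuffer.
  public static void free(ByteBuffer buf) {
    if (buf != null && buf.isDirect()) {
      Cleaner cleaner = ((DirectBuffer) buf).cleaner();
      if (cleaner != null) {   // null for sliced/duplicated views
        cleaner.clean();       // frees the native memory now
      }
    }
  }
}
{code}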

Is there something up w/ the naming in MetaSlab or am I misreading?  We are 
iterating up to blocksPerSlab but we are creating slabs, not blocks.  I'd think 
we'd be making blocks in a slab, not slabs.

Do you think this is going to happen, or how could it happen?

{code}
+    if(backingMap.containsKey(blockName)){
+      throw new RuntimeException("Cached an already cached block");
+    }
{code}

What's going on here (I see this in a few places):

{code}
+    } catch (Exception e) {
+
+    }
{code}
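
An empty catch silently swallows the failure.  At a minimum log it, or let it 
propagate -- e.g. (sketch; {{LOG}} and {{blockName}} stand in for whatever is 
actually in scope):

{code}
} catch (Exception e) {
  // Don't swallow silently; record what failed and why.
  LOG.warn("Failed caching block " + blockName, e);
}
{code}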

The get from backingStore will never return a null here?

{code}
+      contentBlock = backingStore.get(key).getBlock(key, caching);
{code}
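
If it can return null (say the entry gets evicted between calls), this NPEs.  
A hedged sketch of a guard -- I'm guessing {{SingleSizeCache}} is the value 
type here:

{code}
SingleSizeCache slab = backingStore.get(key);  // guessed value type
contentBlock = (slab == null) ? null : slab.getBlock(key, caching);
{code}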

Is this a good name for the default blocksize config, 
"hbase.mapreduce.hfileoutputformat.blocksize"?  The 'outputformat' would seem 
to come from the mapreduce world (which seems to be where you found it if I 
grep src).  Should we be using DEFAULT_BLOCKSIZE from HFile here instead?

There are some copy-paste issues with the javadoc you have in your 
TestSingleSlabCache.

Hey Li, you get an F in documentation (smile), but the code is great.  Good 
stuff.





> Enable direct byte buffers LruBlockCache
> ----------------------------------------
>
>                 Key: HBASE-4027
>                 URL: https://issues.apache.org/jira/browse/HBASE-4027
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Jason Rutherglen
>            Assignee: Li Pi
>            Priority: Minor
>         Attachments: slabcachepatch.diff, slabcachepatchv2.diff
>
>
> Java offers the creation of direct byte buffers which are allocated outside 
> of the heap.
> They need to be manually freed, which can be accomplished using an 
> undocumented {{clean}} method.
> The feature will be optional.  After implementing, we can benchmark for 
> differences in speed and garbage collection behavior.
