Hi Jeff,

  You can definitely lower the memstore; the last time I looked, 0.1 was
the lowest the global memstore fraction could go. I would not recommend
ever disabling compactions, though: bad things will happen, and it can
end up hurting your read performance badly. I would recommend looking at
the Intel G1GC blog series
<https://software.intel.com/en-us/blogs/2014/06/18/part-1-tuning-java-garbage-collection-for-hbase>
to leverage a really large block cache, and then using the remaining
memory for off-heap caching.
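For reference, these are the knobs I mean. They normally live in
hbase-site.xml on the region servers (plus hbase-env.sh for the JVM
flags); the little Java sketch below just spells out the property names,
and the values are assumptions you would adjust for your hardware and
HBase version:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class HeapTuningSketch {
      public static Configuration sketch() {
        Configuration conf = HBaseConfiguration.create();
        // Shrink the memstore's share of the region server heap.
        conf.setFloat("hbase.regionserver.global.memstore.size", 0.10f);
        // Hand most of the on-heap memory to the block cache
        // (memstore + block cache fractions have to stay <= 0.8).
        conf.setFloat("hfile.block.cache.size", 0.60f);
        // Put the rest of the box's RAM into an off-heap bucket cache.
        conf.set("hbase.bucketcache.ioengine", "offheap");
        conf.set("hbase.bucketcache.size", "16384"); // MB; size to your hosts
        // The G1GC flags and -XX:MaxDirectMemorySize go in hbase-env.sh
        // (HBASE_REGIONSERVER_OPTS), as the Intel posts describe.
        return conf;
      }
    }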
You should make sure to turn on things like Snappy compression and
FAST_DIFF data block encoding, and with all the extra memory you will
have available it might be worth using ROWCOL (row plus column) bloom
filters, though you should have very few underlying HFiles anyway,
depending on how often you bulk load (see the table-creation sketch at
the end of this note for those settings).
I think short-circuit reads are on by default these days, but if they
are not already turned on they will greatly speed up read performance.
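A quick way to see what your client config is actually picking up (the
property names are the standard HDFS ones; the socket path is whatever
your datanodes are configured with):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ShortCircuitCheck {
      public static void main(String[] args) {
        // Reads whatever hbase-site.xml / hdfs-site.xml are on the classpath.
        Configuration conf = HBaseConfiguration.create();
        System.out.println("dfs.client.read.shortcircuit = "
            + conf.getBoolean("dfs.client.read.shortcircuit", false));
        System.out.println("dfs.domain.socket.path = "
            + conf.get("dfs.domain.socket.path"));
      }
    }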
From an upfront design standpoint, make sure you pre-split your tables
so your first few bulk loads don't cause split and compaction pains.
Hope this helps!
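
Here is the table-creation sketch I mentioned, using the 1.x Java admin
API. The table name, the "d" family, and the split points are just
placeholders; pick split keys that match your row key space:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.io.compress.Compression;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateReadOnlyTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          HColumnDescriptor cf = new HColumnDescriptor("d");
          cf.setCompressionType(Compression.Algorithm.SNAPPY);  // Snappy on disk
          cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // FAST_DIFF encoding
          cf.setBloomFilterType(BloomType.ROWCOL);              // row+column blooms

          HTableDescriptor table =
              new HTableDescriptor(TableName.valueOf("my_readonly_table"));
          table.addFamily(cf);

          // Pre-split so the first bulk loads don't trigger splits/compactions.
          byte[][] splits = new byte[][] {
              Bytes.toBytes("2"), Bytes.toBytes("4"),
              Bytes.toBytes("6"), Bytes.toBytes("8")
          };
          admin.createTable(table, splits);
        }
      }
    }

The split keys here assume a roughly uniform row key distribution; if
your keys are skewed, derive the splits from a sample of the data
instead.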

On Fri, Mar 17, 2017 at 1:32 PM, jeff saremi <jeffsar...@hotmail.com> wrote:

> We're creating a read-only database and would like to know the recommended
> optimizations we could do. We'd be loading data via direct writes to HFiles.
>
> One thing I could immediately think of is to eliminate the memory for the
> Memstore. What is the minimum we could get away with?
>
> How about disabling some regular operations to save CPU time? I think
> compaction is one of those we'd like to stop.
>
> thanks
>
> Jeff
>



-- 
Kevin O'Dell
Field Engineer
850-496-1298 | ke...@rocana.com
@kevinrodell
<http://www.rocana.com>
