From the HBase server perspective, we need to restrict memstore size +
block cache size to a combined max of 80% of heap.  And memstore size
alone can go down to 5%, if I am not wrong.
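As an illustration of that split, here is a sketch of hbase-site.xml settings for a read-heavy RegionServer. The property names are the current HBase ones (the memstore one replaced the older hbase.regionserver.global.memstore.upperLimit in HBase 1.0); the exact values are assumptions for illustration, not tuned advice:

```xml
<!-- Sketch only: shift most of the 80% heap budget to the block cache. -->
<property>
  <name>hbase.regionserver.global.memstore.size</name>
  <value>0.05</value> <!-- memstore down to 5% of heap -->
</property>
<property>
  <name>hfile.block.cache.size</name>
  <value>0.75</value> <!-- block cache + memstore = 80% total -->
</property>
```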

We need to be careful when using G1 and giving this 80%.  Since it will
be a read workload as you said, the cache will be mostly full, making
your working set large.  The default for G1's Initiating Heap Occupancy
Percent (IHOP) is 45%.  You can raise this, but whether going above 80%
is really advisable, I am not sure.
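For reference, IHOP is a plain JVM flag, typically set in hbase-env.sh. A sketch below; the 65% value is only one example of "upping it" and is not a tuned recommendation:

```shell
# Sketch: raise IHOP above the 45% default so G1 starts its concurrent
# marking cycle at a higher heap occupancy; value is illustrative only.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:+UseG1GC \
  -XX:InitiatingHeapOccupancyPercent=65"
```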


-Anoop-

On Fri, Mar 17, 2017 at 11:50 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> I'll go through these recommendations, Kevin. Thanks a lot
>
> ________________________________
> From: Kevin O'Dell <ke...@rocana.com>
> Sent: Friday, March 17, 2017 10:55:49 AM
> To: user@hbase.apache.org
> Subject: Re: Optimizations for a Read-only database
>
> Hi Jeff,
>
>   You can definitely lower the memstore; the last time I looked, the
> lowest it could be set to was .1. I would not recommend ever disabling
> compactions: bad things will occur, and it can end up greatly impacting
> your read performance.  I would recommend looking at the Intel G1GC
> <https://software.intel.com/en-us/blogs/2014/06/18/part-1-tuning-java-garbage-collection-for-hbase>
> blog series to leverage really large chunks of block cache, and then using
> the remaining memory for off-heap caching. You should make sure to turn on
> things like Snappy compression and FAST_DIFF data block encoding, and with
> all the extra memory you will have available it might be worth using
> ROW+COL bloom filters, though you should have very few underlying HFiles
> depending on how often you bulk load. I think short-circuit reads are on by
> default these days, but they will greatly speed up read performance if not
> already turned on. On the upfront design side, make sure you pre-split your
> tables so your first few bulk loads don't cause split and compaction
> pains.  Hope this helps!
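Kevin's table-level settings could be sketched in the HBase shell like this; the table name, column family name, and split points are hypothetical, while the option names (COMPRESSION, DATA_BLOCK_ENCODING, BLOOMFILTER, SPLITS) are standard HBase shell options:

```ruby
# Sketch: Snappy compression, FAST_DIFF encoding, ROW+COL blooms,
# and pre-splitting, all set at table-creation time.
create 'events',
  {NAME => 'd',
   COMPRESSION => 'SNAPPY',
   DATA_BLOCK_ENCODING => 'FAST_DIFF',
   BLOOMFILTER => 'ROWCOL'},
  SPLITS => ['b', 'f', 'k', 'p', 'u']
```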
>
> On Fri, Mar 17, 2017 at 1:32 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
>
>> We're creating a read-only database and would like to know the recommended
>> optimizations we could do. We'd be loading data via direct writes to HFiles.
>>
>> One thing I could immediately think of is to eliminate the memory for the
>> Memstore. What is the minimum that we could get away with?
>>
>> How about disabling some regular operations to save CPU time? I think
>> compaction is one of those we'd like to stop.
>>
>> thanks
>>
>> Jeff
>>
>
>
>
> --
> Kevin O'Dell
> Field Engineer
> 850-496-1298 | ke...@rocana.com
> @kevinrodell
> <http://www.rocana.com>
