For #1, please see the top two blog posts at https://blogs.apache.org/hbase/
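
In case it is useful, below is a minimal hbase-site.xml sketch for enabling
the offheap BucketCache. The sizes are placeholders to tune for your
hardware, not recommendations:

  <!-- hbase-site.xml -->
  <property>
    <!-- "offheap" backs the cache with direct ByteBuffers, outside the
         Java heap and therefore out of the garbage collector's reach -->
    <name>hbase.bucketcache.ioengine</name>
    <value>offheap</value>
  </property>
  <property>
    <!-- Cache capacity: a value >= 1.0 is read as megabytes, a value
         < 1.0 as a fraction of the heap. 8192 MB is purely illustrative. -->
    <name>hbase.bucketcache.size</name>
    <value>8192</value>
  </property>

You also need to reserve the direct memory that backs the cache, e.g. in
hbase-env.sh (again the size is illustrative; it must be at least as large
as hbase.bucketcache.size):

  export HBASE_OFFHEAPSIZE=10G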

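On #2, for reference, HDFS centralized cache management is driven by
dfs.datanode.max.locked.memory plus explicit cache directives. A sketch,
with a hypothetical pool name and table path:

  <!-- hdfs-site.xml: bytes each datanode may mlock for caching;
       4294967296 = 4 GB, purely illustrative. The datanode user also
       needs a matching memlock ulimit. -->
  <property>
    <name>dfs.datanode.max.locked.memory</name>
    <value>4294967296</value>
  </property>

  # Pin a table's blocks with the cacheadmin CLI (names hypothetical)
  hdfs cacheadmin -addPool hbase-pool
  hdfs cacheadmin -addDirective -path /hbase/data/default/MY_TABLE -pool hbase-pool

That said, your reasoning sounds right to me: with short-circuit reads the
region server already reads local block files directly, so memory is
usually better spent on HBase's own caches (heap or BucketCache) than on
pinning the same blocks at the HDFS layer.
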
FYI

On Wed, Mar 30, 2016 at 7:59 AM, Amit Shah <amits...@gmail.com> wrote:

> Hi,
>
> I am trying to configure my HBase (version 1.0) / Phoenix (version 4.6)
> cluster to use as much of the server memory as possible. We have an OLAP
> workload in which users perform interactive analysis over large data sets.
> While reading about HBase configuration I came across two options:
>
> 1. HBase BucketCache (off-heap):
> http://blog.asquareb.com/blog/2014/11/24/how-to-leverage-large-physical-memory-to-improve-hbase-read-performance
> This looks like a good option because it keeps cached data outside the
> Java heap and so avoids garbage-collection pressure.
> 2. Hadoop pinned HDFS blocks (max locked memory):
> http://blog.cloudera.com/blog/2014/08/new-in-cdh-5-1-hdfs-read-caching/
> This loads HDFS blocks into datanode memory, but given that HBase is
> configured with short-circuit reads I assume this config may not be of
> much help; it would instead be better to increase the region server heap.
> Is my understanding right?
>
> We use HBase with Phoenix.
> Kindly let me know your thoughts, or suggest any other options I should
> explore.
>
> Thanks,
> Amit.
>
