It depends on a lot of environment-related variables, and on the data as
well. But to give you a number after all:
One of our clusters is on EC2, 6 RS on m1.xlarge machines (network
performance 'high' according to AWS), and 90% of the time we do reads. Our
average data size is 2K, the block size is 20K, we average 100 rows per
scan, bloom filters are on at the ROW level, and 40% of the heap is
dedicated to the block cache (note that the heap also contains several
other bits and pieces). I would say our average latency for cached data
(~97% blockCacheHitCachingRatio) is 3-4 ms. Filesystem access is much,
much more painful, especially on EC2 m1.xlarge, where you really can't
tell what's going on, as far as I can tell. To tell you the truth, as I
see it this is an abuse of the HBase store (for our use case), and for
cache-like behavior I would recommend going with something like Redis.
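
For what it's worth, here is a minimal sketch of how one could measure that
kind of scan latency from the client side. The table name, row keys, and scan
bounds are hypothetical, and it assumes the 0.94-era Java client API:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ScanLatencyProbe {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // "mytable" is a placeholder; point this at your own table.
            HTable table = new HTable(conf, "mytable");

            // Bounded scan over ~100 rows; the row keys are hypothetical.
            Scan scan = new Scan(Bytes.toBytes("row-0000"),
                                 Bytes.toBytes("row-0100"));
            scan.setCaching(100);      // fetch the whole range in one RPC batch
            scan.setCacheBlocks(true); // let the RS keep these blocks cached

            long start = System.nanoTime();
            ResultScanner scanner = table.getScanner(scan);
            int rows = 0;
            for (Result r : scanner) {
                rows++;
            }
            scanner.close();
            long elapsedMs = (System.nanoTime() - start) / 1000000;

            System.out.println("scanned " + rows + " rows in " + elapsedMs + " ms");
            table.close();
        }
    }

Run it against a pre-warmed table: the first scan after a region server
restart will hit the filesystem rather than the block cache, so expect the
first measurement to be much slower than the steady state.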


On Mon, Jun 3, 2013 at 12:13 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:

> What is it that you are observing now?
>
> Regards
> Ram
>
>
> On Mon, Jun 3, 2013 at 2:00 PM, Liu, Raymond <raymond....@intel.com>
> wrote:
>
> > Hi
> >
> >         If all the data is already in the RS block cache, what's the
> > typical scan latency for scanning a few rows from a, say, several-GB
> > table (with dozens of regions) on a small cluster with, say, 4 RS?
> >
> >         A few ms? Tens of ms? Or more?
> >
> > Best Regards,
> > Raymond Liu
> >
>
