Obviously this sort of test will depend massively on the level of caching.
 I believe that the numbers Lohit is quoting were designed to defeat caching
and test the resulting performance.
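
If anyone wants to reproduce that kind of cache-defeating test, a minimal
sketch against the 0.90-era client API could look like the one below. The
table name "testtable" is made up, and the key file is assumed to hold one
existing row key per line; the point is just to issue Gets for uniformly
random existing keys so the working set never fits in the block cache:

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.util.ArrayList;
  import java.util.List;
  import java.util.Random;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.util.Bytes;

  public class RandomReadBench {
    public static void main(String[] args) throws Exception {
      // load a sample of row keys that are known to exist, one per line
      List<byte[]> keys = new ArrayList<byte[]>();
      BufferedReader in = new BufferedReader(new FileReader(args[0]));
      for (String line; (line = in.readLine()) != null; ) {
        keys.add(Bytes.toBytes(line));
      }
      in.close();

      Configuration conf = HBaseConfiguration.create();
      HTable table = new HTable(conf, "testtable");  // hypothetical table name
      Random rnd = new Random();
      int reads = 100000;
      long start = System.currentTimeMillis();
      for (int i = 0; i < reads; i++) {
        // a uniformly random key gives near-zero locality, so most
        // reads should miss the block cache and hit disk
        Get get = new Get(keys.get(rnd.nextInt(keys.size())));
        Result r = table.get(get);
        if (r.isEmpty()) throw new IllegalStateException("missing row");
      }
      long ms = System.currentTimeMillis() - start;
      System.out.println((reads * 1000L / ms) + " reads/sec");
      table.close();
    }
  }

This is single-threaded, so it mostly measures per-request latency; you
would run several of these in parallel to saturate the cluster.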

On Fri, Jun 24, 2011 at 1:41 PM, lohit <lohit.vijayar...@gmail.com> wrote:

> 2011/6/23 Sateesh Lakkarsu <lakka...@gmail.com>
>
> > We have been testing random reads from a 6 node cluster (1 NN, 5 DN,
> > 1 HM, 5 RS, each with 48G and 5 disks) and right now are seeing a
> > throughput of 1100 reads per sec per node. Most of the configs are
> > default, except 4G for the RS, *handler.count, and gc (
> > http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/
> > ).
> > The actual data is 300 byte records, each of which can have upwards of
> > 1000 versions, keyed on an id which is random/uuid based. Using LZO
> > compression. There are 300M records, distributed almost evenly across
> > the RS, which are pre-created and bulk loaded.
> >
>
> We have done similar tests but with 1K byte records and 1B rows, without
> compression, and see peaks of 400 ops/sec.
> Considering that, you might be able to push it a little more. How much
> memory is given to the DataNode and the RegionServer?
> Also, is your block cache set to the default of 20%?
> Can you tell us what numbers you see without compression?
>
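
For reference, the non-default knobs being discussed here map to a handful
of settings. A sketch of what they might look like (the handler count value
is illustrative, since the post doesn't give the actual number; the MSLAB
flag is from the Cloudera post linked above):

  # hbase-env.sh
  export HBASE_HEAPSIZE=4096   # the "4G for RS" mentioned above, in MB
  export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"

  <!-- hbase-site.xml -->
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>100</value>  <!-- illustrative value only -->
  </property>
  <property>
    <name>hbase.hregion.memstore.mslab.enabled</name>
    <value>true</value>  <!-- MSLAB, per the linked blog post -->
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.2</value>  <!-- the 20% default lohit is asking about -->
  </property>

Note that raising hfile.block.cache.size only helps if the hot set actually
fits in cache; for a genuinely random workload over 300M rows the benchmark
is mostly measuring the disks.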
