Hey Eran,

Usually this mailing list doesn't accept attachments (or when it does,
it's for voodoo reasons), so you'd be better off pastebin'ing them.

Some thoughts:

- Inserting into a new table without pre-splitting it is bound to give
you bad performance, since every write goes to a single region. Please
pre-split it with methods such as
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html#createTable(org.apache.hadoop.hbase.HTableDescriptor, byte[][])
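
For example, a minimal sketch using that call (the table name, column
family, and split keys are placeholders, pick splits that match your own
key space):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HTableDescriptor desc = new HTableDescriptor("testtable");
    desc.addFamily(new HColumnDescriptor("f"));

    // One split key per region boundary; the table starts out with
    // splits.length + 1 regions instead of a single one.
    byte[][] splits = new byte[][] {
        Bytes.toBytes("1"), Bytes.toBytes("2"), Bytes.toBytes("3"),
        Bytes.toBytes("4"), Bytes.toBytes("5"), Bytes.toBytes("6"),
        Bytes.toBytes("7"), Bytes.toBytes("8"), Bytes.toBytes("9")
    };
    admin.createTable(desc, splits);
  }
}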

- You have 1 Thrift server per slave, but how are you using them? Are
you pushing everything to just one, or do you pick a random server each
time you do a put?
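
In case it helps, here's a minimal sketch of the "random server" option,
assuming the Java bindings generated from Hbase.thrift and placeholder
slave host names:

import java.util.Arrays;
import java.util.List;
import java.util.Random;

import org.apache.hadoop.hbase.thrift.generated.Hbase;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class RandomThriftGateway {
  // One Thrift server per slave; host names are placeholders.
  private static final List<String> GATEWAYS =
      Arrays.asList("slave1", "slave2", "slave3");
  private static final Random RANDOM = new Random();

  // Open each new connection against a randomly chosen gateway so the
  // Thrift hop is spread over all slaves instead of funneled to one.
  // (A real client would also keep the transport around to close it.)
  public static Hbase.Client connect() throws Exception {
    String host = GATEWAYS.get(RANDOM.nextInt(GATEWAYS.size()));
    TTransport transport = new TSocket(host, 9090);
    transport.open();
    return new Hbase.Client(new TBinaryProtocol(transport));
  }
}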

- What you described, where all the region servers and clients are using
few resources, is often a sign that you are spending most of your time
waiting on network round trips. Are you pushing only one row at a time
or a batch of them? Getting high insertion rates is usually done with
batching, since each single-row RPC has to first go to the Thrift
server, then to the region server, and all the way back.
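
For example, if you happen to be using the Java bindings generated from
Hbase.thrift, the batch call is mutateRows. A rough sketch (gateway
host, table name, column, and row keys are placeholders; the exact
types, byte[] here vs ByteBuffer in newer bindings, depend on the
HBase/Thrift version your classes were generated with):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.thrift.generated.BatchMutation;
import org.apache.hadoop.hbase.thrift.generated.Hbase;
import org.apache.hadoop.hbase.thrift.generated.Mutation;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class BatchedThriftInsert {
  public static void main(String[] args) throws Exception {
    TTransport transport = new TSocket("slave1", 9090);  // placeholder host
    transport.open();
    Hbase.Client client = new Hbase.Client(new TBinaryProtocol(transport));

    // Build 1000 single-column rows client side, then ship them in one
    // RPC instead of 1000 round trips through the Thrift server.
    List<BatchMutation> batch = new ArrayList<BatchMutation>();
    for (int i = 0; i < 1000; i++) {
      List<Mutation> mutations = new ArrayList<Mutation>();
      mutations.add(new Mutation(false, Bytes.toBytes("f:q"),
          Bytes.toBytes("value-" + i)));
      batch.add(new BatchMutation(Bytes.toBytes("row-" + i), mutations));
    }
    client.mutateRows(Bytes.toBytes("testtable"), batch);

    transport.close();
  }
}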

- Which language are you using to talk to Thrift?

- When you added a second client, which key space was it using? Was it
trying to write to the same regions, or did you start with an empty
region again?

Thx,

J-D

> Running the client on more than one server doesn't change the overall
> results; the total number of requests just gets distributed across the
> two clients.
> I tried two things: inserting rows with one column each and inserting
> rows with 100 columns each. In both cases the data was 1K per column,
> so it adds up to 100K per row for the second test.
> I guess my config is more or less standard: I have two masters and a
> 3-server ZK ensemble. I have replication enabled, but not for the table
> I'm using for testing, and the other tables are not getting any
> requests during this test. The only non-standard thing I have is the
> new memory slab feature and the GC configuration recommended in the
> recent Cloudera blog posts.
> I've attached the jstack dump from one of the RS; it seems a lot of
> threads are either parked or in the "epollWait" state.
>
> Thanks for looking into it.
>
> -eran
