Thanks Stack for pointing us in the right direction. It was indeed the tcpnodelay settings. We set both of these to true:

ipc.server.tcpnodelay ==> true
hbase.ipc.client.tcpnodelay ==> true
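
For anyone finding this thread later, these correspond to hbase-site.xml
entries along these lines (ipc.server.tcpnodelay takes effect on the
server side, hbase.ipc.client.tcpnodelay on the client side):

  <!-- hbase-site.xml: disable Nagle's algorithm on the HBase RPC sockets -->
  <property>
    <name>ipc.server.tcpnodelay</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.ipc.client.tcpnodelay</name>
    <value>true</value>
  </property>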

All reads that previously carried the 40 ms overhead now complete in 2 to 3 ms, as we would expect.
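
For anyone curious why this helps: Nagle's algorithm holds back small
outgoing packets until the previous packet is ACKed, and combined with
delayed ACKs on the peer that can stall a small RPC for roughly 40 ms,
which lines up with the overhead we were seeing. The two settings above
disable Nagle on the HBase RPC sockets, the equivalent of calling
setTcpNoDelay(true) on a plain Java socket. A standalone sketch (the
host and port below are just placeholders, not HBase code itself):

  import java.io.IOException;
  import java.net.Socket;

  public class TcpNoDelayExample {
      public static void main(String[] args) throws IOException {
          // Placeholder host/port; an HBase 0.94 region server
          // listens on 60020 by default.
          try (Socket socket = new Socket("regionserver.example.com", 60020)) {
              // Disable Nagle's algorithm: small writes go out immediately
              // instead of being buffered until the prior packet is ACKed.
              socket.setTcpNoDelay(true);
              System.out.println("TCP_NODELAY enabled: " + socket.getTcpNoDelay());
          }
      }
  }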

Are these settings worth documenting in the HBase online book?

Thanks,
Jay

On 8/30/12 2:07 AM, Stack wrote:
On Wed, Aug 29, 2012 at 10:42 AM, Wayne <wav...@gmail.com> wrote:
This is basically a read bug/performance problem. The execution path
followed once the caching is used up is not consistent with the initial
execution path/performance. Can anyone help shed light on this? Were
there any changes in 0.94 that could have introduced it (we have not
tested other versions)? Any help or advice would be appreciated. As it
stands, we are looking at having to reverse engineer every aspect of a
read, in both the HBase client and server components, to find and fix this.

One additional lead is that not all rows behave like this. Only certain
small rows seem to do this consistently. Most of our rows are larger and do
not have this behavior.

Nagle's?  (https://issues.apache.org/jira/browse/HBASE-2125)
St.Ack
