w, or does it vary?
>
> Lastly, how big are the rows?
>
> Thanks.
>
> -- Lars
>
> From: James Johansville <james.johansvi...@gmail.com>
> To: user@hbase.apache.org
> Sent: Friday, March 25, 2016 12:23 PM
> Subject: Re: Inconsistent scan performance
On Fri, Mar 25, 2016 at 12:23 PM, James Johansville <
james.johansvi...@gmail.com> wrote:
Hello all,
I have 13 RegionServers and presplit into 13 regions (which motivated my
comment that I aligned my queries with the regionservers, which obviously
isn't accurate). I have been testing using a multiple of 13 for partitioned
scans.
Here is my current region setup -- I converted the row
On Fri, Mar 25, 2016 at 3:50 AM, Ted Yu wrote:
> James:
> Another experiment you can do is to enable region replica - HBASE-10070.
>
> This would bring down the read variance greatly.
>
>
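For context, the region replicas introduced by HBASE-10070 are enabled per table, and a client must opt in to possibly-stale reads from the secondaries on each Scan. A minimal sketch (table name and replication count are illustrative, not from this thread):

```java
// Assumes the table was altered in the shell first, e.g.:
//   alter 't1', {REGION_REPLICATION => 2}
Scan scan = new Scan();
scan.setConsistency(Consistency.TIMELINE); // allow reads from secondary replicas
```

Results read this way may lag the primary; that trade-off is why the suggestion drew pushback below.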
Suggest you NOT do this, James.
Let's figure your issue as-is rather than compound by
The read path is much more complex than the write one, so the response time
has much more variance.
The gap is so wide here that I would bet on Ted's or Stack's points, but
here are a few other sources of variance:
- hbase cache: as Anoop said, maybe the data is already in the hbase cache
I see you set cacheBlocks to be false on the Scan. By any chance, on
some other RS(s), is the data you are looking for already in cache?
(From any previous scan, or by cache-on-write.) And there are no concurrent
writes anyway, right? This much difference in time! One
possibility is blocks avail
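To illustrate the point above: `setCacheBlocks(false)` only stops a scan from *populating* the BlockCache; blocks already cached on a RegionServer (from earlier scans, or via cache-on-write) are still served from memory, which can skew per-RS timings. A minimal sketch (the caching value is illustrative):

```java
Scan scan = new Scan();
scan.setCacheBlocks(false); // don't pollute the cache with full-scan blocks
scan.setCaching(1000);      // rows fetched per RPC round-trip
```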
Crossing region boundaries which happen to be on different servers may be.
On Thu, Mar 24, 2016 at 5:49 PM, James Johansville <
james.johansvi...@gmail.com> wrote:
In theory they should be aligned with *regionserver* boundaries. Would
crossing multiple regions on the same regionserver result in the big
performance difference being seen here?
I am using Hortonworks HBase 1.1.2
On Thu, Mar 24, 2016 at 5:32 PM, Ted Yu wrote:
> I assume
I assume the partitions' boundaries don't align with region boundaries,
right ?
Meaning some partitions would cross region boundaries.
Which hbase release do you use?
Thanks
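One way to guarantee that partitions never cross region boundaries is to derive them from the region keys themselves, one Scan per region. A sketch against the HBase 1.x client API (assumes an already-open `Connection`; the table name is illustrative):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Pair;

public class RegionAlignedScans {
    // Build one Scan per region so no partition straddles a region boundary.
    static List<Scan> perRegionScans(Connection conn, String table) throws IOException {
        List<Scan> scans = new ArrayList<>();
        try (RegionLocator locator = conn.getRegionLocator(TableName.valueOf(table))) {
            Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
            for (int i = 0; i < keys.getFirst().length; i++) {
                Scan s = new Scan();
                s.setStartRow(keys.getFirst()[i]);   // region start key
                s.setStopRow(keys.getSecond()[i]);   // region end key (empty = table end)
                s.setCacheBlocks(false);
                scans.add(s);
            }
        }
        return scans;
    }
}
```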
On Thu, Mar 24, 2016 at 4:45 PM, James Johansville <
james.johansvi...@gmail.com> wrote:
Hello all,
So, I wrote a Java application for HBase that does a partitioned full-table
scan according to a set number of partitions. For example, if there are 20
partitions specified, then 20 separate full scans are launched that cover
an equal slice of the row identifier range.
The rows are
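The partitioning described above can be sketched as follows: split the row identifier range into N equal slices, each of which becomes the start/stop of one full scan. The class and method names are illustrative, not from the original application; note that unless the boundaries are computed from the actual region keys, slices will generally straddle regions.

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionedScan {
    // Returns n contiguous [start, end) slices covering [rangeStart, rangeEnd).
    static List<long[]> partitions(long rangeStart, long rangeEnd, int n) {
        List<long[]> slices = new ArrayList<>();
        long span = rangeEnd - rangeStart;
        for (int i = 0; i < n; i++) {
            long start = rangeStart + span * i / n;
            long end = rangeStart + span * (i + 1) / n;
            slices.add(new long[] {start, end});
        }
        return slices;
    }
}
```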
On Wed, Aug 29, 2012 at 10:42 AM, Wayne wav...@gmail.com wrote:
This is basically a read bug/performance problem. The execution path
followed when the caching is used up is not consistent with the initial
execution path/performance. Can anyone help shed light on this? Were there
any changes in 0.94 that introduced this (we have not tested on other versions)?
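For reference, the "caching" in this thread's subject is the Scan client-side row buffer: with caching set to 1, every `next()` call is a separate RPC, so steady-state performance is dominated by round-trip latency rather than read throughput. A minimal sketch (the larger value is illustrative):

```java
Scan scan = new Scan();
scan.setCaching(1);    // one row per RPC: latency-bound
// scan.setCaching(500);  // amortizes the round-trip over 500 rows
```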
Thanks Stack for pointing us in the right direction. Indeed it was the
tcpNodeDelay setting. We set these to be true.
ipc.server.tcpnodelay == true
hbase.ipc.client.tcpnodelay == true
All reads that previously had the 40ms overhead are now between 2 and 3
ms like we would expect them to be.
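The fixed ~40 ms overhead is consistent with Nagle's algorithm interacting with delayed ACKs on small RPCs, which is exactly what a scan with caching = 1 produces. A sketch of the fix as hbase-site.xml properties (to be set on both servers and clients):

```xml
<!-- hbase-site.xml: disable Nagle's algorithm on the HBase RPC path -->
<property>
  <name>ipc.server.tcpnodelay</name>
  <value>true</value>
</property>
<property>
  <name>hbase.ipc.client.tcpnodelay</name>
  <value>true</value>
</property>
```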
On Thu, Aug 30, 2012 at 9:15 PM, Ramkrishna.S.Vasudevan
ramkrishna.vasude...@huawei.com wrote:
Thanks Stack for giving a pointer to this. Yes, it does seem this property
is very important.
I moved the config up to the 'important configs' section, out of the
troubleshooting section.
St.Ack
requests sent back by the server. I hope here it is only one simple client.
Regards
Ram
-----Original Message-----
From: Jay T [mailto:jay.pyl...@gmail.com]
Sent: Wednesday, August 29, 2012 2:05 AM
To: user@hbase.apache.org
Subject: Inconsistent scan performance with caching set to 1
We