Thanks Kiru,

Scan is not an option for our use cases; our reads are pretty random.

Any other suggestions to bring down the latency?

Thanks,
Saurabh. 


On Aug 28, 2013, at 7:01 PM, Kiru Pakkirisamy <kirupakkiris...@yahoo.com> wrote:

> Saurabh, we are able to read 600K row x columns in 400 msec. We have 
> restructured what was a 40-million-row table into 400K rows and columns. We 
> Get about 100 of the rows from these 400K, do quite a bit of calculation in 
> the coprocessor (almost a group-by/order-by) and return within this time.
> Maybe you should consider replacing the MultiGets with a Scan with a Filter. 
> I like the FuzzyRowFilter, even though you might need to match against an 
> exact key; it works only with fixed-length keys.
> (I do have an issue right now, it is not scaling to multiple clients.)
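> 
> A minimal sketch of the Scan-with-FuzzyRowFilter approach suggested above 
> (0.94-era Java client API; the table name, key length and key layout are 
> illustrative assumptions, not details from this thread):
> 
>   import java.util.Arrays;
> 
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.hbase.HBaseConfiguration;
>   import org.apache.hadoop.hbase.client.HTable;
>   import org.apache.hadoop.hbase.client.Result;
>   import org.apache.hadoop.hbase.client.ResultScanner;
>   import org.apache.hadoop.hbase.client.Scan;
>   import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
>   import org.apache.hadoop.hbase.util.Bytes;
>   import org.apache.hadoop.hbase.util.Pair;
> 
>   public class FuzzyScanSketch {
>     public static void main(String[] args) throws Exception {
>       Configuration conf = HBaseConfiguration.create();
>       HTable table = new HTable(conf, "mytable");   // table name is illustrative
> 
>       // Fixed-length 8-byte key: 4 bytes that must match + 4 bytes we don't
>       // care about. In the fuzzy-info array, 0 = "must match", 1 = "any byte".
>       byte[] fuzzyKey  = Bytes.add(Bytes.toBytes("ABCD"), new byte[4]);
>       byte[] fuzzyInfo = {0, 0, 0, 0, 1, 1, 1, 1};
>       FuzzyRowFilter filter = new FuzzyRowFilter(
>           Arrays.asList(new Pair<byte[], byte[]>(fuzzyKey, fuzzyInfo)));
> 
>       Scan scan = new Scan();
>       scan.setFilter(filter);
>       scan.setCaching(100);                          // rows fetched per RPC
> 
>       ResultScanner scanner = table.getScanner(scan);
>       try {
>         for (Result r : scanner) {
>           // process r ...
>         }
>       } finally {
>         scanner.close();
>         table.close();
>       }
>     }
>   }
> 
> With a fixed-length key, one fuzzy key/info pair per key pattern can stand in 
> for many individual Gets within a single server-side scan.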
>  
> Regards,
> - kiru
> 
> 
> Kiru Pakkirisamy | webcloudtech.wordpress.com
> 
> 
> ________________________________
> From: Saurabh Yahoo <saurabh...@yahoo.com>
> To: "user@hbase.apache.org" <user@hbase.apache.org> 
> Cc: "user@hbase.apache.org" <user@hbase.apache.org> 
> Sent: Wednesday, August 28, 2013 3:20 PM
> Subject: Re: experiencing high latency for few reads in HBase 
> 
> 
> Thanks Kiru. We need less than 1 sec latency.  
> 
> We are using both multiGet and get. 
> 
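> A minimal sketch of the kind of batched multiGet call referred to above 
> (0.94-era Java client API; the table name and row keys are placeholders):
> 
>   import java.util.ArrayList;
>   import java.util.List;
> 
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.hbase.HBaseConfiguration;
>   import org.apache.hadoop.hbase.client.Get;
>   import org.apache.hadoop.hbase.client.HTable;
>   import org.apache.hadoop.hbase.client.Result;
>   import org.apache.hadoop.hbase.util.Bytes;
> 
>   public class MultiGetSketch {
>     public static void main(String[] args) throws Exception {
>       Configuration conf = HBaseConfiguration.create();
>       HTable table = new HTable(conf, "mytable");   // table name is a placeholder
> 
>       // Batch several random-key reads into one client call; the client
>       // still fans them out to the region servers region by region.
>       List<Get> gets = new ArrayList<Get>();
>       for (String key : new String[] {"row-000123", "row-987654"}) {
>         gets.add(new Get(Bytes.toBytes(key)));
>       }
> 
>       Result[] results = table.get(gets);           // multi-get
>       for (Result r : results) {
>         // process r ...
>       }
>       table.close();
>     }
>   }
> 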
> We have three concurrent clients running 10 threads each (that makes 30 
> concurrent clients in total).
> 
> Thanks,
> Saurabh.  
> 
> On Aug 28, 2013, at 4:30 PM, Kiru Pakkirisamy <kirupakkiris...@yahoo.com> 
> wrote:
> 
>> Right, 4 sec is good.  
>> @Saurabh - so your read is getting 20 out of 25 million rows? Is this a 
>> Get or a Scan?
>> BTW, in this stress test how many concurrent clients do you have? 
>>   
>> Regards,
>> - kiru
>> 
>> 
>> ________________________________
>> From: Vladimir Rodionov <vrodio...@carrieriq.com>
>> To: "user@hbase.apache.org" <user@hbase.apache.org> 
>> Sent: Wednesday, August 28, 2013 12:15 PM
>> Subject: RE: experiencing high latency for few reads in HBase 
>> 
>> 
>> 1. 4 sec max latency is not that bad taking into account the 12GB heap. It 
>> can be much larger. What is your SLA?
>> 2. Block evictions are the result of a poor cache hit rate and the root 
>> cause of the periodic stop-the-world GC pauses (the max latencies you have 
>> been observing in the test).
>> 3. The block cache consists of 3 parts (25% young generation, 50% tenured, 
>> 25% permanent). The permanent part is for CFs with IN_MEMORY = true (you can 
>> specify this when you create the CF). A block is first stored in the 'young 
>> gen' space, then gets promoted to the 'tenured gen' space (or gets evicted). 
>> Maybe your 'perm gen' space is underutilized? That is exactly 25% of 4GB 
>> (1GB). Although the HBase LruBlockCache should use all the space allocated 
>> for the block cache, there is no guarantee (as usual). If you don't have 
>> IN_MEMORY column families you may decrease the in-memory (permanent) portion 
>> of the block cache.
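>> 
>> A minimal sketch of creating a CF with IN_MEMORY = true, as described above 
>> (0.94-era Java admin API; table and family names are made-up examples):
>> 
>>   import org.apache.hadoop.conf.Configuration;
>>   import org.apache.hadoop.hbase.HBaseConfiguration;
>>   import org.apache.hadoop.hbase.HColumnDescriptor;
>>   import org.apache.hadoop.hbase.HTableDescriptor;
>>   import org.apache.hadoop.hbase.client.HBaseAdmin;
>> 
>>   public class InMemoryCfSketch {
>>     public static void main(String[] args) throws Exception {
>>       Configuration conf = HBaseConfiguration.create();
>>       HBaseAdmin admin = new HBaseAdmin(conf);
>> 
>>       HTableDescriptor desc = new HTableDescriptor("hot_table");  // made-up name
>>       HColumnDescriptor cf = new HColumnDescriptor("d");          // made-up family
>>       cf.setInMemory(true);  // keep this CF's blocks in the 'in-memory' 25% of the LRU cache
>>       desc.addFamily(cf);
>> 
>>       admin.createTable(desc);
>>       admin.close();
>>     }
>>   }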
>> 
>> 
>> 
>> Best regards,
>> Vladimir Rodionov
>> Principal Platform Engineer
>> Carrier IQ, www.carrieriq.com
>> e-mail: vrodio...@carrieriq.com
>> 
>> ________________________________________
>> From: Saurabh Yahoo [saurabh...@yahoo.com]
>> Sent: Wednesday, August 28, 2013 5:10 AM
>> To: user@hbase.apache.org
>> Subject: experiencing high latency for few reads in HBase
>> 
>> Hi,
>> 
>> We are running a stress test on our 5 node cluster and we are getting the 
>> expected mean latency of 10ms, but we are seeing around 20 reads out of 25 
>> million with a latency of more than 4 seconds. Can anyone provide insight 
>> into what we can do to meet a sub-second SLA for each and every read?
>> 
>> We observe the following things -
>> 
>> 1. Reads are evenly distributed among the 5 nodes. CPUs remain under 5% 
>> utilization.
>> 
>> 2. We have a 4gb block cache set up (30% block cache out of the 12gb heap). 
>> 3gb of the block cache got filled up but around 1gb remained free. There 
>> are a large number of cache evictions.
>> 
>> Questions to experts -
>> 
>> 1. If there is still 1gb of free block cache available, why is HBase 
>> evicting blocks from the cache?
>> 
>> 2. We are seeing memory go up to 10gb three times before dropping sharply 
>> to 5gb.
>> 
>> Any help is highly appreciated.
>> 
>> Thanks,
>> Saurabh.
>> 
