Re: Handlers being blocked during reads

2013-07-30 Thread lars hofhansl
Do you think we should change it to use a ConcurrentHashMap (or maybe a HashSet with locking) instead? Copy-on-write is good when changes are rare and the amount of data to be copied is small... just to state the obvious :) I guess in some setups that would be the case, but in others it wouldn't.
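
As a rough sketch of the trade-off being discussed (the class and key names below are invented for illustration, not HBase internals): a copy-on-write set clones its whole backing array on every mutation, while a ConcurrentHashMap-backed set mutates in place and stays cheap under churn.

    import java.util.Collections;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArraySet;

    public class SetChoice {
        public static void main(String[] args) {
            // Copy-on-write: every add/remove clones the backing array,
            // so it only pays off when mutations are rare and the set is small.
            Set<String> cow = new CopyOnWriteArraySet<String>();

            // ConcurrentHashMap-backed set: mutates in place with fine-grained
            // locking, so frequent adds/removes don't stall concurrent readers.
            Set<String> chm =
                Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

            cow.add("scanner-1");  // O(n) array copy on every mutation
            chm.add("scanner-1");  // cheap even under heavy churn
            System.out.println(chm.contains("scanner-1"));
        }
    }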

Re: Handlers being blocked during reads

2013-07-30 Thread Elliott Clark
On Mon, Jul 29, 2013 at 11:08 PM, lars hofhansl la...@apache.org wrote: "Do you think we should change it to use a ConcurrentHashMap" Yea, I think that would be great. I really just forgot to file the JIRA (my bad).

Re: Multiple region servers per physical node

2013-07-30 Thread Elliott Clark
G1 doesn't really make our write path much better if you have uneven region writes (a Zipfian distribution or the like). Lately I've been seeing the per-region memstore blocking size be a major factor. In fact I'm thinking of opening a JIRA to remove it by default. On Mon, Jul 29, 2013 at 4:12

Re: anyway to turn off per-region metrics?

2013-07-30 Thread Oliver Meyn (GBIF)
Thanks for the response Elliott, but I'm not sure how to use it. I tried adding the following to hbase-site.xml: <property><name>hbase.metrics.showTableName</name><value>false</value></property>, and then tried hbase.metrics.showTableName=false in hadoop-metrics.properties, but the metrics continue
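
For readability, here is the property from that message with its XML tags restored (the digest stripped the angle brackets):

    <property>
      <name>hbase.metrics.showTableName</name>
      <value>false</value>
    </property>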

Re: Handlers being blocked during reads

2013-07-30 Thread Pablo Medina
I've just created a JIRA to discuss this issue: https://issues.apache.org/jira/browse/HBASE-9087 Thanks! 2013/7/30 Elliott Clark ecl...@apache.org: On Mon, Jul 29, 2013 at 11:08 PM, lars hofhansl la...@apache.org wrote: "Do you think we should change it to use a ConcurrentHashMap" Yea, I

Re: Multiple region servers per physical node

2013-07-30 Thread Kevin O'dell
Elliott, can you elaborate on the blocking being a major factor? Are you referring to the default value of 7 slowing down writes? I don't think removing that feature is a great idea. Here are a couple of things it is helpful for: 1.) Slows down the write path so we are less likely to end
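
For context, the "default value of 7" most likely refers to hbase.hstore.blockingStoreFiles, while Elliott's "memstore blocking size" maps to hbase.hregion.memstore.block.multiplier. A hedged sketch of the two knobs, with what I believe were the 0.94-era defaults (verify against your release):

    <!-- Writes to a region block once it has this many store files awaiting compaction. -->
    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>7</value>
    </property>
    <!-- Writes block once a region's memstore reaches multiplier * flush size. -->
    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>2</value>
    </property>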

Re: Excessive .META scans

2013-07-30 Thread Varun Sharma
JD, it's a big problem. The region server holding .META. has 2X the network traffic and 2X the CPU load; I can easily spot the region server holding .META. by just looking at the Ganglia graphs of the region servers side by side - I don't need to go to the master console. So we can't scale up the

Re: Excessive .META scans

2013-07-30 Thread Stack
Try turning off http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#setRegionCachePrefetch(byte[], boolean) St.Ack On Tue, Jul 30, 2013 at 11:27 AM, Varun Sharma va...@pinterest.com wrote: JD, its a big problem. The region server holding .META has 2X the network
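
A minimal sketch of that suggestion against the 0.94-era client API, where setRegionCachePrefetch is a static method on HTable (the table name here is made up):

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;

    public class DisableMetaPrefetch {
        public static void main(String[] args) throws IOException {
            // Disable region-location prefetching for this table, so the client
            // no longer batch-scans .META. to warm its cache on each miss.
            HTable.setRegionCachePrefetch(Bytes.toBytes("my_table"), false);
        }
    }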

help on key design

2013-07-30 Thread Demian Berjman
Hi, I would like to explain our HBase use case, the row key design, and the problems we are having, in the hope that someone can help: The first thing we noticed is that our data set is small compared to other cases described on the list and in forums. We have a table containing 20 million keys

Re: help on key design

2013-07-30 Thread Dhaval Shah
If all your keys are grouped together, why don't you use a scan with start/end keys specified? A sequential scan can theoretically be faster than MultiGet lookups (assuming your grouping is tight; you can also use filters with the scan for better performance). How much memory do you have for
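
A minimal sketch of the scan-instead-of-MultiGet idea using the 0.94-era client API; the table name, key prefix, and caching value are illustrative, not from the thread:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RangeScan {
        public static void main(String[] args) throws IOException {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "my_table");
            try {
                // One sequential scan over a tight key range instead of many Gets.
                Scan scan = new Scan();
                scan.setStartRow(Bytes.toBytes("group-a:"));  // inclusive
                scan.setStopRow(Bytes.toBytes("group-a:~"));  // exclusive upper bound
                scan.setCaching(100);  // fetch rows in batches to cut RPC round trips
                ResultScanner scanner = table.getScanner(scan);
                try {
                    for (Result r : scanner) {
                        System.out.println(Bytes.toString(r.getRow()));
                    }
                } finally {
                    scanner.close();
                }
            } finally {
                table.close();
            }
        }
    }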

Re: help on key design

2013-07-30 Thread Ted Yu
Please also go over http://hbase.apache.org/book.html#perf.reading Cheers On Tue, Jul 30, 2013 at 3:40 PM, Dhaval Shah prince_mithi...@yahoo.co.in wrote: If all your keys are grouped together, why don't you use a scan with start/end key specified? A sequential scan can theoretically be faster

Recursive delete upon cleanup

2013-07-30 Thread Ron Echeverri
I've run into this problem: 2013-07-30 00:01:02,126 WARN org.apache.hadoop.hbase.master.cleaner.CleanerChore: Error while cleaning the logs java.io.IOException: Could not delete dir maprfs:/hbase-richpush/.archive/rich_push.alias_user, Error: Directory not empty, Try with recursive flag set to

Re: Recursive delete upon cleanup

2013-07-30 Thread Ted Yu
I searched the HBase 0.94 code base and the Hadoop 1 and Hadoop 2 code bases. I didn't find where 'Try with recursive flag' was logged. Mind giving us a bit more information on the Hadoop / HBase releases you are using? On Tue, Jul 30, 2013 at 5:32 PM, Ron Echeverri recheve...@maprtech.com wrote: I've

Can't solve the "Unable to load realm info from SCDynamicStore" error

2013-07-30 Thread Seth Edwards
I am somewhat new to HBase and was using it fine locally. At some point I started getting "Unable to load realm info from SCDynamicStore" when I would try to run HBase in standalone mode. I'm on Mac OS X 10.8.4. I have gone through many steps mentioned on Stack Overflow, changing configurations in
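
The workaround most often cited for this message on OS X (it comes from the Apple JVM's Kerberos realm lookup and is usually harmless) is to blank out the krb5 system properties, e.g. in conf/hbase-env.sh. Offered as a hedged suggestion, not a confirmed fix for this particular setup:

    # conf/hbase-env.sh: silence the Apple JVM's SCDynamicStore/Kerberos lookup
    export HBASE_OPTS="$HBASE_OPTS -Djava.security.krb5.realm= -Djava.security.krb5.kdc="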