Do you think we should change it to use a ConcurrentHashMap (or maybe a HashSet
with locking), instead?
Copy-on-write is good when changes are rare and the amount of data to be
copied is small... Just to state the obvious :)
I guess in some setups that would be the case, but in others it wouldn't.
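To make the trade-off concrete, here is a minimal JDK-only sketch (not HBase code; the class and entry names are illustrative). A copy-on-write set copies its whole backing array on every add, while a ConcurrentHashMap-backed set mutates only one hash bin, which is why the former suits rare changes over small data and the latter suits frequent writes:

```java
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class SetChoice {
    // Adds a few entries (with one duplicate) and returns the resulting size.
    static int fill(Set<String> s) {
        s.add("region-a");
        s.add("region-b");
        s.add("region-a"); // duplicate, ignored by both implementations
        return s.size();
    }

    public static void main(String[] args) {
        // Copy-on-write: every add() copies the whole backing array, so each
        // write is O(n) in the set size -- cheap reads, expensive mutation.
        Set<String> cow = new CopyOnWriteArraySet<String>();

        // ConcurrentHashMap-backed set (Java 6+ idiom): adds touch only one
        // hash bin, so frequent mutation stays cheap under concurrency.
        Set<String> chm =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

        System.out.println(fill(cow) + " " + fill(chm)); // prints "2 2"
    }
}
```

Both behave identically as sets; the difference is purely in the cost model under concurrent writes.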
On Mon, Jul 29, 2013 at 11:08 PM, lars hofhansl la...@apache.org wrote:
> Do you think we should change it to use a ConcurrentHashMap
Yea, I think that would be great. I really just forgot to file the
jira (my bad).
G1 doesn't really make our write path much better if you have uneven
region writes (zipfian distribution or the like).
Lately I've been seeing the memstore blocking size per region being a
major factor. In fact I'm thinking of opening a jira to remove it by
default.
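For readers following along: to my understanding, the per-region memstore blocking threshold being discussed is the product of two hbase-site.xml settings, so it can be relaxed by configuration even today. The values below are the 0.94-era defaults as I recall them, shown for illustration only:

```xml
<property>
  <!-- flush a region's memstore once it reaches this many bytes (128 MB) -->
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value>
</property>
<property>
  <!-- block updates to a region once its memstore reaches
       multiplier * flush.size; raising this relaxes the block -->
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>2</value>
</property>
```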
On Mon, Jul 29, 2013 at 4:12
Thanks for the response Elliott, but I'm not sure how to use it. I tried adding
the following:
<property>
  <name>hbase.metrics.showTableName</name>
  <value>false</value>
</property>
to hbase-site.xml and then tried
hbase.metrics.showTableName=false
in hadoop-metrics.properties, but the metrics continue
I've just created a Jira to discuss this issue:
https://issues.apache.org/jira/browse/HBASE-9087
Thanks!
2013/7/30 Elliott Clark ecl...@apache.org
> On Mon, Jul 29, 2013 at 11:08 PM, lars hofhansl la...@apache.org wrote:
>> Do you think we should change it to use a ConcurrentHashMap
> Yea, I
Elliott,
Can you elaborate on the blocking being a major factor? Are you
referring to the default value of 7 slowing down writes? I don't think
removing that feature is a great idea. Here are a couple of things it is
helpful for:
1.) Slows down the write path so we are less likely to end
JD, it's a big problem. The region server holding .META. has 2X the network
traffic and 2X the CPU load; I can easily spot the region server holding
.META. just by looking at the Ganglia graphs of the region servers side by
side - I don't need to go to the master console. So we can't scale up the
Try turning off
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html#setRegionCachePrefetch(byte[],
boolean)
St.Ack
On Tue, Jul 30, 2013 at 11:27 AM, Varun Sharma va...@pinterest.com wrote:
> JD, it's a big problem. The region server holding .META. has 2X the network
Hi,
I would like to explain our use case of HBase, the row key design and the
problems we are having so anyone can give us a help:
The first thing we noticed is that our data set is too small compared to
other cases we read in the list and forums. We have a table containing 20
million keys
If all your keys are grouped together, why don't you use a scan with start/end
keys specified? A sequential scan can theoretically be faster than MultiGet
lookups (assuming your grouping is tight; you can also use filters with the
scan to get better performance).
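The same idea in miniature, using a plain sorted map as a stand-in for a table whose row keys are stored in order (this is illustrative JDK-only code, not the HBase client API; the key layout `user#qualifier` is a made-up example):

```java
import java.util.TreeMap;

public class RangeScanDemo {
    // A sorted map plays the role of a table whose keys are kept in order.
    static TreeMap<String, String> table = new TreeMap<String, String>();

    // One range scan over [start, stop): a single ordered traversal,
    // analogous to a Scan with start and stop rows, instead of N point gets.
    static int scanCount(String start, String stop) {
        int n = 0;
        for (String key : table.subMap(start, stop).keySet()) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        table.put("user1#a", "v");
        table.put("user1#b", "v");
        table.put("user1#c", "v");
        table.put("user2#a", "v");
        // Keys for user1 are contiguous, so one scan over the range fetches
        // all of them, where a MultiGet would issue three separate lookups.
        System.out.println(scanCount("user1#", "user1#~")); // prints 3
    }
}
```

The tighter the key grouping, the shorter the scanned range, which is exactly why row-key design drives whether a scan beats MultiGet.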
How much memory do you have for
Please also go over http://hbase.apache.org/book.html#perf.reading
Cheers
On Tue, Jul 30, 2013 at 3:40 PM, Dhaval Shah prince_mithi...@yahoo.co.in wrote:
> If all your keys are grouped together, why don't you use a scan with
> start/end key specified? A sequential scan can theoretically be faster
I've run into this problem:
2013-07-30 00:01:02,126 WARN
org.apache.hadoop.hbase.master.cleaner.CleanerChore: Error while
cleaning the logs java.io.IOException: Could not delete dir
maprfs:/hbase-richpush/.archive/rich_push.alias_user, Error: Directory
not empty, Try with recursive flag set to
I searched the HBase 0.94 code base and the Hadoop 1 and Hadoop 2 code bases,
but didn't find where 'Try with recursive flag' is logged.
Mind giving us a bit more information on the Hadoop / HBase releases you
are using?
On Tue, Jul 30, 2013 at 5:32 PM, Ron Echeverri recheve...@maprtech.com wrote:
> I've
I am somewhat new to HBase and was using it fine locally. At some point I
started getting "Unable to load realm info from SCDynamicStore" when I would
try to run HBase in standalone mode. I'm on Mac OS X 10.8.4. I have gone
through many steps mentioned on Stack Overflow, changing configurations in