Re: HBase stability

2010-12-13 Thread Todd Lipcon
Hi Anze, In a word, yes - 0.20.4 is not that stable in my experience, and upgrading to the latest CDH3 beta (which includes HBase 0.89.20100924) should give you a huge improvement in stability. You'll still need to do a bit of tuning of settings, but once it's well tuned it should be able to hold up ...

Re: HBase stability

2010-12-13 Thread baggio liu
Hi Anze, Our production cluster runs HBase 0.20.6 on HDFS (CDH3b2), and we spent about a month working on stability. Some issues we have met may be helpful to you. HDFS: 1. HBase files have a shorter life cycle than MapReduce output; at times there are many blocks waiting to be deleted, so we had to tune ...

RE: HBase stability

2010-12-13 Thread Jonathan Gray
HBase is not designed or well tested for production use or stability on 2 nodes. It will work on 2 nodes, but do not expect good performance or stability. What is the hardware configuration and daemon setup on this cluster of 2 nodes? How many cores and spindles, how much RAM, what heap sizes, etc.? ...

RE: HBase stability

2010-12-13 Thread Jonathan Gray
... related to exception handling and DFS errors. The corresponding HDFS releases (CDH3 or 20-append) provide true durability. Thanks for sharing! JG -Original Message- From: baggio liu [mailto:baggi...@gmail.com] Sent: Monday, December 13, 2010 8:45 AM To: user@hbase.apache.org ...

Re: HBase stability

2010-12-13 Thread Stack
Some comments inline below. On Mon, Dec 13, 2010 at 8:45 AM, baggio liu wrote: > ... Thanks for writing back to the list ...

Re: HBase stability

2010-12-13 Thread Christian van der Leeden
Hi, does https://issues.apache.org/jira/browse/HBASE-3334 currently prevent me from using the 0.20-append trunk with 0.90-rc1? Do I need to build, or can I just replace the hadoop.jar and be done? Christian On Dec 13, 2010, at 6:44 PM, Stack wrote: >> Besides the above, in a production cluster, data loss is ...

Re: HBase stability

2010-12-13 Thread Stack
Just replace the jar. St.Ack On Mon, Dec 13, 2010 at 10:45 AM, Christian van der Leeden wrote: > ...
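
For anyone following along, the jar swap looks roughly like this. A minimal sketch, assuming default tarball layouts; the paths and jar names below are illustrative:

    # Sketch only: replace the Hadoop jar bundled under HBase's lib/ with the
    # append-branch jar your HDFS cluster is actually running. Names vary by build.
    cd $HBASE_HOME/lib
    mv hadoop-core-*.jar /tmp/                  # set the bundled jar aside
    cp $HADOOP_HOME/hadoop-core-*.jar .         # copy in the cluster's jar

The HBase documentation of this era is emphatic that the jar under lib/ must match the Hadoop version running on the cluster; restart HBase after the swap.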

RE: HBase stability

2010-12-13 Thread Geoff Hendrey
We were having no end of a "buffet" of errors and stability problems with 0.20.3 when we ran big MapReduce jobs to insert data. We upgraded to 0.20.6 last week and have not seen any instability since. Just my anecdotal experience. -geoff -Original Message- From: Anze [mailto:anzen...@volja.net] ...

Re: HBase stability

2010-12-14 Thread Anze
First of all, thank you all for the answers. I appreciate it! To recap: - 0.20.4 is known to be "fragile" - upgrading to 0.89 (cdh3b3) would improve stability - GC should be monitored and the system tuned if necessary (not sure how to do that - yet :) - memory should be at least 2GB, better 4GB+ (we ...
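
On the GC question in the recap: a common first step is simply turning on GC logging for the region servers. A minimal sketch for hbase-env.sh, assuming a stock install (the log path is illustrative):

    # Sketch: log GC activity so long pauses can be correlated with region
    # server session timeouts. Adjust the log path for your install.
    export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
        -XX:+PrintGCTimeStamps -Xloggc:/var/log/hbase/gc-hbase.log"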

Re: HBase stability

2010-12-14 Thread 陈加俊
Where should I download branch-0.20-append? I can't get a compiled jar from the following URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append . On Tue, Dec 14, 2010 at 1:44 AM, Stack wrote: > ...

Re: HBase stability

2010-12-14 Thread baggio liu
Please see comments inline. :D 2010/12/14 Stack > Some comments inline below. > On Mon, Dec 13, 2010 at 8:45 AM, baggio liu wrote: > > ...

Re: HBase stability

2010-12-14 Thread Stack
On Tue, Dec 14, 2010 at 1:46 AM, Anze wrote: > First of all, thank you all for the answers. I appreciate it! > To recap: > - 0.20.4 is known to be "fragile" Yes. It had a bug that would cause deadlock. > - upgrading to 0.89 (cdh3b3) would improve stability > - GC should be monitored and the system tuned ...

Re: HBase stability

2010-12-14 Thread Stack
On Tue, Dec 14, 2010 at 5:56 AM, 陈加俊 wrote: > Where should I download branch-0.20-append? I can't get a compiled jar from the following URL: > http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append . > The link points to the svn repository. The cited doc says you need to build ...
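
For 陈加俊's question, a sketch of the checkout-and-build, assuming the standard ASF svn layout and the ant build used by the 0.20 line (the exact target and output name may differ by checkout):

    # Sketch: check out and build branch-0.20-append; no compiled jar is published.
    svn checkout http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append
    cd branch-0.20-append
    ant jar      # output lands under build/ as a hadoop-*-core.jar (name varies)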

Re: HBase stability

2010-12-14 Thread Todd Lipcon
Hi Baggio, Sounds like you have some good experience with HDFS. Some comments inline below: On Tue, Dec 14, 2010 at 6:47 AM, baggio liu wrote: > In fact, we found the low invalidation speed is due to the datanode's invalidation limit per heartbeat. Many invalid blocks sit in the namenode and cannot be dispatched ...

Re: HBase stability

2010-12-14 Thread Stack
On Tue, Dec 14, 2010 at 6:47 AM, baggio liu wrote: >> This can be true. Yes. What are you suggesting here? What should we tune? >> In fact, we found the low invalidation speed is due to the datanode's invalidation limit per heartbeat. Many invalid blocks sit in the namenode and cannot be dispatched to datanodes ...

Re: HBase stability

2010-12-15 Thread baggio liu
Hi Todd, I very much appreciate your reply :D 1. HDFS-611 can improve the block invalidation rate, but it does not solve our problem. We found the bottleneck is the number of invalid blocks a datanode fetches per heartbeat. As Stack says, we just made BLOCK_INVALIDATE_CHUNK configurable and increased it ...
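
A hypothetical sketch of the kind of change baggio describes; the property name is illustrative, not an official 0.20 key (in stock 0.20-era HDFS the per-heartbeat limit derives from the hard-coded FSConstants.BLOCK_INVALIDATE_CHUNK):

    import org.apache.hadoop.conf.Configuration;

    // Sketch only: make the number of blocks a datanode is told to invalidate
    // per heartbeat configurable instead of hard-coded.
    public class InvalidateLimitSketch {
      static final int BLOCK_INVALIDATE_CHUNK = 100;  // 0.20-era hard-coded default

      static int blockInvalidateLimit(Configuration conf) {
        // "dfs.block.invalidate.limit" is an assumed key name for illustration
        return conf.getInt("dfs.block.invalidate.limit", BLOCK_INVALIDATE_CHUNK);
      }
    }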

Re: HBase Stability

2011-03-21 Thread Ted Dunning
Is there a reason you are not using a recent version of 0.90? On Mon, Mar 21, 2011 at 1:17 PM, Stuart Scott wrote: > We are using HBase 0.89.20100924+28, r ...

Re: HBase Stability

2011-03-21 Thread Ted Dunning
No, map-reduce is not really necessary to add so few rows. Our internal tests repeatedly load 10-100 million rows without much fuss, and that is on clusters ranging from 3 to 11 nodes. On Mon, Mar 21, 2011 at 1:17 PM, Stuart Scott wrote: > Is the only way to upload (say 1,000,000 rows) via MapReduce ...
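
To make the "no MapReduce needed" point concrete, a single-client loader against the 0.90-era client API is a sketch like the following; the table name, column family, and row format are invented for illustration:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SimpleLoader {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");    // hypothetical table
        table.setAutoFlush(false);                     // batch puts client-side
        table.setWriteBufferSize(12 * 1024 * 1024);    // 12MB buffer, tune to taste
        byte[] fam = Bytes.toBytes("f");               // hypothetical family
        for (long i = 0; i < 1000000; i++) {
          Put p = new Put(Bytes.toBytes(String.format("row-%010d", i)));
          p.add(fam, Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
          table.put(p);                                // buffered until flush
        }
        table.flushCommits();                          // push the final partial buffer
        table.close();
      }
    }

With auto-flush off, puts accumulate client-side and go over the wire in batches, which is usually the first thing to try before reaching for MapReduce.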

Re: HBase Stability

2011-03-21 Thread Ted Dunning
This rate is dramatically slower than I would expect. In our tests, a single insertion program has trouble inserting more than about 24,000 records per second, but that is because we are inserting kilobyte values and the network interfaces are saturated at that point. These tests are being done using ...
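
(Back-of-the-envelope, under Ted's stated assumptions: 24,000 rows/s at ~1 KB per value is roughly 24 MB/s leaving the client, and with 3x HDFS replication on the write path that multiplies into the range where gigabit links begin to saturate.)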

RE: HBase Stability

2011-03-21 Thread Stuart Scott
... Sent: 21 March 2011 20:20 To: user@hbase.apache.org Cc: Stuart Scott Subject: Re: HBase Stability No, map-reduce is not really necessary to add so few rows. ...

RE: HBase Stability

2011-03-21 Thread Buttler, David
... Cc: Stuart Scott Subject: Re: HBase Stability Is there a reason you are not using a recent version of 0.90? On Mon, Mar 21, 2011 at 1:17 PM, Stuart Scott wrote: > We are using HBase 0.89.20100924+28, r ...

RE: HBase Stability

2011-03-21 Thread Stuart Scott
... shouldn't collapse? Regards Stuart -Original Message- From: Buttler, David [mailto:buttl...@llnl.gov] Sent: 21 March 2011 20:46 To: user@hbase.apache.org Subject: RE: HBase Stability Have you seen Todd Lipcon's post on MSLABs? http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/ ...
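
For reference, the feature in that post is switched on with a single property in the 0.90 line. A sketch of the hbase-site.xml entry:

    <!-- Sketch: enable memstore-local allocation buffers to reduce old-gen
         heap fragmentation, per the Cloudera post linked above. -->
    <property>
      <name>hbase.hregion.memstore.mslab.enabled</name>
      <value>true</value>
    </property>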

Re: HBase Stability

2011-03-21 Thread M. C. Srivas
... [mailto:buttl...@llnl.gov] > Sent: 21 March 2011 20:46 > To: user@hbase.apache.org > Subject: RE: HBase Stability > Have you seen Todd Lipcon's post on MSLABs? > http://www.cloudera.com/blog/2011/02/avoiding-full-gcs-in-hbase-with-memstore-local-allocation-buffers-part-1/ ...

RE: HBase Stability

2011-03-21 Thread Andrew Purtell
table#setAutoFlush(false) ? --- On Mon, 3/21/11, Buttler, David wrote: > From: Buttler, David > Subject: RE: HBase Stability > To: "user@hbase.apache.org" > Date: Monday, March 21, 2011, 1:46 PM > Have you seen Todd Lipcon's post on MSLABs? ...