Does HTable.put() guarantee that data is actually written?

2010-08-02 Thread Vincent Barat
Hi, I have a simple Java program that writes data into a set of HBase tables using the HTable.put() call and an infinite number of retries (in order to block when HBase fails and resume when it is up again, and thus guarantee that my data is written sooner or later). My cluster is a test
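A minimal sketch of the pattern described in this thread, assuming the 0.20/0.90-era client API (HTable, Put); the table name, column family, and retry delay below are illustrative, not taken from the original message.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BlockingWriter {
    // Keep retrying a put until it succeeds, blocking while HBase is down.
    static void putWithInfiniteRetries(HTable table, Put put) throws InterruptedException {
        while (true) {
            try {
                table.put(put);      // with autoFlush on, returns after the region server accepted the write
                return;
            } catch (IOException e) {
                Thread.sleep(5000L); // back off, then retry forever
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table");   // hypothetical table name
        table.setAutoFlush(true);                      // send each put immediately instead of buffering
        Put put = new Put(Bytes.toBytes("row-1"));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
        putWithInfiniteRetries(table, put);
        table.close();
    }
}
```

Note that with autoFlush disabled the client buffers puts locally, so a successful return from put() does not by itself mean the data has reached a region server; with autoFlush on, put() only returns once the region server has accepted the write.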

Re: Does HTable.put() guarantee that data is actually written?

2010-08-02 Thread Vincent Barat
; hbase, you can use the latest HBase 0.89 (available on the website) along with a snapshot of Hadoop's 0.20-append branch. Alternatively, you can use Cloudera's CDH3b2, which has both (I don't work for them, but it's probably just easier to check out at the moment). J-D

NotServingRegionException after the loss of a region server

2011-06-17 Thread Vincent Barat
Hi! This morning, on our production system, we experienced very bad behavior from HBase 0.20.6: 1) one of our region servers crashed, 2) we restarted it successfully (no errors on the master or on the region servers), 3) but we discovered that our HBase clients were unable to recover from this situ

Re: Lots of SocketTimeoutException for gets and puts since HBase 0.92.1

2012-11-16 Thread Vincent Barat
On 16/11/12 01:56, Stack wrote: On Thu, Nov 15, 2012 at 5:21 AM, Guillaume Perrot wrote: It happens when several tables are being compacted and/or when there are several scanners running. It happens for a particular region? Anything you can tell about the server, looking at your cluster mon

Re: Lots of SocketTimeoutException for gets and puts since HBase 0.92.1

2012-11-16 Thread Vincent Barat
What's the value for hbase.regionserver.handler.count? I assume you keep the same value as with 0.90.3. Thanks. On Fri, Nov 16, 2012 at 8:14 AM, Vincent Barat wrote: On 16/11/12 01:56, Stack wrote: On Thu, Nov 15, 2012 at 5:21 AM, Guillaume Perrot

Re: Lots of SocketTimeoutException for gets and puts since HBase 0.92.1

2012-11-16 Thread Vincent Barat
Hi, Right now (and previously with 0.90.3) we were using the default value (10). We are now trying to increase it to 30 to see if it is better. Thanks for your concern. On 16/11/12 18:13, Ted Yu wrote: Vincent: What's the value for hbase.regionserver.handler.count? I assume you keep t
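For reference, hbase.regionserver.handler.count is read by each region server from its own configuration at startup, so on a real cluster it is set in hbase-site.xml and needs a region server restart; the hedged sketch below only shows the property being overridden programmatically, as one might do for an embedded or test setup.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HandlerCount {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Default is 10 in this HBase generation; the thread above experiments with 30.
        // On a production cluster this belongs in hbase-site.xml on every region server.
        conf.setInt("hbase.regionserver.handler.count", 30);
        System.out.println("handlers = " + conf.getInt("hbase.regionserver.handler.count", 10));
    }
}
```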

3x slowdown after moving from HBase 0.90.3 to HBase 0.92.1

2012-11-20 Thread Vincent Barat
down of batch put & random read response time ... despite the fact that our RS CPU load is really low (10%). Note: we have not (yet) activated MSLAB, nor direct reads on HDFS. Any idea, please? I'm really stuck on this issue. Best regards, On 16/11/12 20:55, Vincent Barat wrote: H
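The two knobs mentioned above are plain configuration properties; a hedged sketch of enabling them, assuming the property names of this HBase/Hadoop generation (the short-circuit-read key in particular varied between Hadoop versions), and keeping in mind that on a cluster they belong in hbase-site.xml / hdfs-site.xml on the servers rather than in client code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TuningFlags {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // MSLAB: reduces memstore heap fragmentation and thus long CMS GC pauses.
        conf.setBoolean("hbase.hregion.memstore.mslab.enabled", true);
        // "Direct" (short-circuit) HDFS reads; key used in the Hadoop 1.x era,
        // may differ on other Hadoop versions.
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        System.out.println("mslab = " + conf.getBoolean("hbase.hregion.memstore.mslab.enabled", false));
    }
}
```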

Re: High IPC Latency

2012-11-20 Thread Vincent Barat
Hi, We have faced a very similar issue since we upgraded from 0.90.3 to 0.92. I'll test the patch very soon, and I hope to see the RS CPU become the bottleneck as you did... I'll report on this tomorrow. Cheers. On 30/10/12 12:06, Yousuf Ahmad wrote: Hi Lars, Thank you very much. With this fixe

Re: HBase Tuning

2012-11-20 Thread Vincent Barat
Hi, It seems there is a potential contention point in the HBase client code (a useless synchronized method). You may try this patch: https://issues.apache.org/jira/browse/HBASE-7069 I have faced similar issues on my production cluster since I upgraded to HBase 0.92. I will test this patch tomorro

Re: 3x slowdown after moving from HBase 0.90.3 to HBase 0.92.1

2012-11-21 Thread Vincent Barat
The region servers run with a 16 GB heap (-Xmx16000M). With these settings, at peak we can handle ~2K concurrent clients. Alok On Tue, Nov 20, 2012 at 8:21 AM, Vincent Barat wrote: Hi, We have changed some parameters on our 16(!) region servers: 1 GB more -Xmx, more RPC ha

Re: 3x slowdown after moving from HBase 0.90.3 to HBase 0.92.1

2012-11-21 Thread Vincent Barat
e ~2K concurrent clients. Alok On Tue, Nov 20, 2012 at 8:21 AM, Vincent Barat wrote: Hi, We have changed some parameters on our 16(!) region servers: 1 GB more -Xmx, more RPC handlers (from 10 to 30), a longer timeout, but nothing seems to improve the response time: - Scans with HBase 0.92 are 3x S
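The "longer timeout" in this thread is presumably the client RPC timeout; a minimal sketch of raising it on the client side, assuming the 0.92-era hbase.rpc.timeout property (60 s by default) and an illustrative table name and value.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class LongerClientTimeout {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Raise the client-side RPC timeout from the 60 s default to 5 minutes,
        // so slow gets/puts surface a real error instead of a bare SocketTimeoutException.
        conf.setInt("hbase.rpc.timeout", 300000);
        HTable table = new HTable(conf, "my_table");  // hypothetical table name
        // ... gets/puts issued through this HTable now use the longer timeout
        table.close();
    }
}
```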

Re: 3x slowdown after moving from HBase 0.90.3 to HBase 0.92.1

2012-11-21 Thread Vincent Barat
On 21/11/12 06:05, Stack wrote: On Tue, Nov 20, 2012 at 8:21 AM, Vincent Barat wrote: We have changed some parameters on our 16(!) region servers: 1 GB more -Xmx, more RPC handlers (from 10 to 30), a longer timeout, but nothing seems to improve the response time: You have taken a look at the

Re: 3x slowdown after moving from HBase 0.90.3 to HBase 0.92.1

2012-11-21 Thread Vincent Barat
20, 2012 at 8:21 AM, Vincent Barat wrote: Hi, We have changed some parameters on our 16(!) region servers: 1 GB more -Xmx, more RPC handlers (from 10 to 30), a longer timeout, but nothing seems to improve the response time: - Scans with HBase 0.92 are 3x SLOWER than with HBase 0.90.3 - A lot of

Re: HBase Tuning

2012-11-21 Thread Vincent Barat
Forget about this: it does not help. On 20/11/12 19:54, Vincent Barat wrote: Hi, It seems there is a potential contention point in the HBase client code (a useless synchronized method). You may try this patch: https://issues.apache.org/jira/browse/HBASE-7069 I face similar issues on my

Re: 3x slowdown after moving from HBase 0.90.3 to HBase 0.92.1

2012-11-21 Thread Vincent Barat
On 21/11/12 18:39, Stack wrote: So Vincent, the servers are quiet? That would match your low CPU observation. Clients are unable to send them load for some reason? How many disks? What is your block cache hit number (see the regionserver log -- it gets printed every so often, or in the b

Re: HBase scanner LeaseException

2012-11-22 Thread Vincent Barat
09:23, Vincent Barat wrote: On 21/11/12 06:05, Stack wrote: On Tue, Nov 20, 2012 at 8:21 AM, Vincent Barat wrote: We have changed some parameters on our 16(!) region servers: 1 GB more -Xmx, more RPC handlers (from 10 to 30), a longer timeout, but nothing seems to improve the response time:

Re: Fixing badly distributed table manually.

2013-04-10 Thread Vincent Barat
know Python but I am interested in learning about your solution. It would be great if you could also share the logic for balancing the cluster. Thanks, Anil Gupta On Mon, Dec 24, 2012 at 9:53 AM, Mohit Anchlia wrote: On Mon, Dec 24, 2012 at 8:27 AM, Ivan Balashov wrote: Vincent Barat writes:

Re: Fixing badly distributed table manually.

2012-09-05 Thread Vincent Barat
or old releases of hbase (cdh2 I believe). There's no plan to upgrade it to newer releases. Cheers --- Guillaume -- Vincent Barat, CTO -- vba...@capptain.com -- www.capptain.com -- Cell: +33 6 15 41 15 18

Re: Fixing badly distributed table manually.

2012-09-05 Thread Vincent Barat
Hi, Balancing regions between RSs is correctly handled by HBase: I mean that your RSs always manage the same number of regions (the balancer takes care of it). Unfortunately, balancing all the regions of one particular table between the RSs of your cluster is not always easy, since HBase (as
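When the balancer keeps per-server region counts even but one table's regions still clump on a few servers, one crude workaround is to move that table's regions by hand; a hedged sketch using the HBaseAdmin.move() call available in this HBase generation, with an illustrative table name. Passing null as the destination lets the master pick a target server at random; a real rebalancing script would choose destinations explicitly instead of moving every region.

```java
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class SpreadTableRegions {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTable table = new HTable(conf, "my_table");  // hypothetical table name
        Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
        for (HRegionInfo region : regions.keySet()) {
            // null destination = let the master pick a region server;
            // crude, but spreads the table's regions around the cluster
            admin.move(Bytes.toBytes(region.getEncodedName()), null);
        }
        table.close();
    }
}
```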

Re: HBase as a transformation engine

2013-11-13 Thread Vincent Barat
Hi, We have done this kind of thing using HBase 0.92.1 + Pig, but we finally had to limit the size of the tables and move the biggest data to HDFS: loading data directly from HBase is much slower than from HDFS, and doing it with M/R overloads the HBase region servers, since several map jobs sc
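For the M/R-over-HBase part, the usual mitigations in this generation are a larger scanner caching value and disabling block caching for the scan, so full-table map jobs do not evict the region servers' block cache; a hedged sketch using TableMapReduceUtil, with an illustrative table name and a no-op mapper.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ScanTableJob {
    // Trivial mapper; a real transformation job would emit transformed records here.
    static class PassThroughMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context) {
            // transform / count / filter the row here
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Scan scan = new Scan();
        scan.setCaching(500);        // fetch rows in larger batches per RPC
        scan.setCacheBlocks(false);  // don't churn the region servers' block cache
        Job job = new Job(conf, "scan-my_table");  // hypothetical job/table names
        job.setJarByClass(ScanTableJob.class);
        TableMapReduceUtil.initTableMapperJob("my_table", scan,
                PassThroughMapper.class, ImmutableBytesWritable.class, Result.class, job);
        job.setOutputFormatClass(NullOutputFormat.class);
        job.setNumReduceTasks(0);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```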