Re: Client Error: "java.net.ConnectException: Connection refused: no further information" after updating HBase-0.20.6 to HBase-0.90.2

2011-04-11 Thread 陈加俊
Sorry, I had made a mistake with hbase.zookeeper.property.clientPort. On Tue, Apr 12, 2011 at 1:08 PM, 陈加俊 wrote: > WARN : 04-12 12:54:13 Session 0x0 for server null, unexpected error, > closing socket connection and attempting reconnect > java.net.ConnectException: Connection refused: no further in
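For anyone hitting the same "Connection refused" after an upgrade, a minimal client-side sketch (not from the thread, assuming the 0.90 Java client API) showing where hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort come into play; the host names, port, and table name below are placeholders that must match what the servers actually use:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class ClientPortCheck {
      public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath; the explicit overrides below are placeholders.
        Configuration conf = HBaseConfiguration.create();
        // If either of these does not match the server side, the client keeps seeing
        // "java.net.ConnectException: Connection refused" while trying to reach ZooKeeper.
        conf.set("hbase.zookeeper.quorum", "zkhost1,zkhost2,zkhost3");
        conf.set("hbase.zookeeper.property.clientPort", "2181"); // default ZK client port
        HTable table = new HTable(conf, "table1"); // fails if ZooKeeper is unreachable
        System.out.println("Connected through ZooKeeper and located the table");
        table.close();
      }
    }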

Re: The whole key space is not covered by the .META.

2011-04-11 Thread 茅旭峰
Anyway, it looks like the two regions below have some kind of overlap; does this make sense? 2011/4/12 茅旭峰 > From the output of scan '.META.' I pasted before, we can see there are two > key ranges > which might cover the put key 'LCgwzrx2XTFkB2Ymz9HeJWPY0Ok='. They are > > #1, 'LC3MILeAUy8HmRFgU5-E

Client Error: "java.net.ConnectException: Connection refused: no further information" after updating HBase-0.20.6 to HBase-0.90.2

2011-04-11 Thread 陈加俊
WARN : 04-12 12:54:13 Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused: no further information at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(Soc

Re: The whole key space is not covered by the .META.

2011-04-11 Thread 茅旭峰
The results of ./bin/hbase hbck show lots of 'inconsistent' statuses, like: ERROR: Region hdfs://cloud137:9000/hbase/table1/01c80f8b54523ad6c242c5f695544f16 on HDFS, but not listed in META or deployed on any region server. ERROR: Region hdfs://cloud137:9000/hbase/table1/01ce4e2f72baa0df51b7b2010

Re: too many regions cause OME ?

2011-04-11 Thread 陈加俊
I want to know why openRegion can cause the heap OOME, and how to calculate the size of heap space. On Tue, Apr 12, 2011 at 10:49 AM, Jean-Daniel Cryans wrote: > Were they opening the same region by any chance? > > On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 wrote: > > There is no big scan,and just norma loa
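A hedged back-of-envelope sketch of the kind of heap arithmetic the question asks about; all counts below are assumed for illustration (the actual region count is not in the thread), only maxHeap=3991 comes from the thread, and the property names are the usual 0.20/0.90 defaults:

    public class HeapEstimate {
      public static void main(String[] args) {
        long memstoreFlushSize = 64L << 20;  // hbase.hregion.memstore.flush.size default (64 MB)
        int families = 1;                    // assumed: one column family per region
        int regions = 1000;                  // assumed region count per server, for illustration
        double globalUpperLimit = 0.4;       // hbase.regionserver.global.memstore.upperLimit default
        long heap = 3991L << 20;             // maxHeap=3991 MB from the thread

        long worstCaseMemstores = (long) regions * families * memstoreFlushSize;
        long memstoreBudget = (long) (heap * globalUpperLimit);

        System.out.println("Worst-case memstore demand: " + (worstCaseMemstores >> 20) + " MB");
        System.out.println("Memstore budget on this heap: " + (memstoreBudget >> 20) + " MB");
        // When demand is far above budget, the server only survives because flushes are forced
        // early; block cache, store file indexes and RPC buffers still need the rest, which is
        // how "too many regions" on a small heap can end in OutOfMemoryError under load.
      }
    }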

Re: too many regions cause OME ?

2011-04-11 Thread 陈加俊
Yes, I always scan (or get or put) rows. On Tue, Apr 12, 2011 at 10:49 AM, Jean-Daniel Cryans wrote: > Were they opening the same region by any chance? > > On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 wrote: > > There is no big scan,and just norma load. Also strange is when one RS > exited > > then an

Re: can I start HBase in pseudo-distributed mode with an external zookeeper?

2011-04-11 Thread Stack
2011/4/11 : > Hi Harsh J, > > I thought it was that way, but according to the description of the " > hbase.cluster.distributed", for pseudo-distributed setup with managed > zookeeper, this value should be set to "false". > I think there's some more difference between the real-distributed mode an

Re: The whole key space is not covered by the .META.

2011-04-11 Thread Stack
Can you open the region again? (See shell commands for opening regions). What does hbck say: ./bin/hbase hbck. Add the -details flag. It might tell you a story about an offlined region. 0.90.2 has some fixes for issues in and around here (CDH3 release, out on the 14th, has most of them bundled)

Re: The whole key space is not covered by the .META.

2011-04-11 Thread Stack
2011/4/11 茅旭峰 : > We are using hadoop-CDH3B4 and hbase0.90.1-CDH3B4. I'll check the > issue further, but my understanding is the meta info and the root > region are saved by zookeeper, right? Do I need to check them there? > The .META. table is like any other and stored out on the cluster just as u

Hive integration with HBase

2011-04-11 Thread Marcos Ortiz
Regards to all. I was reading the guest post (http://www.cloudera.com/blog/2010/06/integrating-hive-and-hbase/) on the Cloudera Blog from John Sichi (http://people.apache.org/~jvs/) about the integration efforts from many HBase hackers from Cloudera, Facebook, StumbleUpon, Trend Micro and other

Re: hadoop branch-0.20-append Build error:build.xml:933: exec returned: 1

2011-04-11 Thread Marcos Ortiz
On 4/11/2011 10:45 PM, Alex Luya wrote: BUILD FAILED .../branch-0.20-append/build.xml:927: The following error occurred while executing this line: ../branch-0.20-append/build.xml:933: exec returned: 1 Total time: 1 minute 17 seconds + RESULT=1 + '[' 1 '!=' 0 ']' + echo 'Build Failed

Re: too many regions cause OME ?

2011-04-11 Thread Jean-Daniel Cryans
Were they opening the same region by any chance? On Mon, Apr 11, 2011 at 6:13 PM, 陈加俊 wrote: > There is no big scan,and just norma load. Also strange is when one RS exited > then another RS exited and others RS like that. > > On Tue, Apr 12, 2011 at 8:55 AM, Jean-Daniel Cryans > wrote: > >> Ok t

hadoop branch-0.20-append Build error:build.xml:933: exec returned: 1

2011-04-11 Thread Alex Luya
BUILD FAILED .../branch-0.20-append/build.xml:927: The following error occurred while executing this line: ../branch-0.20-append/build.xml:933: exec returned: 1 Total time: 1 minute 17 seconds + RESULT=1 + '[' 1 '!=' 0 ']' + echo 'Build Failed: 64-bit build not run' Build Failed: 64-bit

Re: The whole key space is not covered by the .META.

2011-04-11 Thread 茅旭峰
From the output of scan '.META.' I pasted before, we can see there are two key ranges which might cover the put key 'LCgwzrx2XTFkB2Ymz9HeJWPY0Ok='. They are #1, 'LC3MILeAUy8HmRFgU5-ESE-9T7w=' -> 'LD4jOJWFyt4m7A3KGFST6d-uj3A=' #2, 'LC_vN8JYweYYsnKaKbpOo67kUNA=' -> 'some end key' The output has le
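As a quick way to see whether the key space really has a hole, here is a small diagnostic sketch (not from the thread, assuming the 0.90 client API and the table name 'table1' used above): it pulls the region boundaries the client derives from .META. and checks that the first start key is empty, each end key equals the next start key, and the last end key is empty:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.Pair;

    public class CheckRegionCoverage {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "table1");
        Pair<byte[][], byte[][]> keys = table.getStartEndKeys(); // derived from .META.
        byte[][] starts = keys.getFirst();
        byte[][] ends = keys.getSecond();
        for (int i = 1; i < starts.length; i++) {
          if (Bytes.compareTo(ends[i - 1], starts[i]) != 0) {
            System.out.println("Hole or overlap between regions " + (i - 1) + " and " + i
                + ": end=" + Bytes.toStringBinary(ends[i - 1])
                + " next start=" + Bytes.toStringBinary(starts[i]));
          }
        }
        if (starts.length > 0
            && (starts[0].length != 0 || ends[ends.length - 1].length != 0)) {
          System.out.println("Key space is not fully covered at the table boundaries");
        }
        table.close();
      }
    }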

RE: can I start HBase in pseudo-distributed mode with an external zookeeper?

2011-04-11 Thread stanley.shi
Hi Harsh J, I thought it was that way, but according to the description of "hbase.cluster.distributed", for a pseudo-distributed setup with managed zookeeper, this value should be set to "false". I think there are some more differences between the real-distributed mode and the pseudo one. -sta

Re: too many regions cause OME ?

2011-04-11 Thread 陈加俊
There is no big scan, just normal load. Also strange is that when one RS exited, then another RS exited, and other RSs followed like that. On Tue, Apr 12, 2011 at 8:55 AM, Jean-Daniel Cryans wrote: > Ok that looks "fine", did the region server die under heavy load by > any chance? Or was it big scans? Or just

Re: too many regions cause OME ?

2011-04-11 Thread Jean-Daniel Cryans
Ok that looks "fine", did the region server die under heavy load by any chance? Or was it big scans? Or just normal load? J-D On Mon, Apr 11, 2011 at 5:50 PM, 陈加俊 wrote: > my configuration is follows: >   >                 hbase.client.write.buffer >                 2097152 >                  >

Re: too many regions cause OME ?

2011-04-11 Thread 陈加俊
My configuration is as follows: hbase.client.write.buffer = 2097152 (1024*1024*2 = 2097152), hbase.hstore.blockingStoreFiles = 14, hba

Re: too many regions cause OME ?

2011-04-11 Thread 陈加俊
There is one table that has 1.4T*3 (replication) of data. On Tue, Apr 12, 2011 at 8:38 AM, Doug Meil wrote: > > Re: " maxHeap=3991" > > Seems like an awful lot of data to put in a 4gb heap. > > -Original Message- > From: 陈加俊 [mailto:cjjvict...@gmail.com] > Sent: Monday, April 11, 2011 8:35 PM

Re: too many regions cause OME ?

2011-04-11 Thread Jean-Daniel Cryans
And where will they go? The issue isn't the number of regions per se, it's the amount of data being served by that region server. Also, I still don't know if that's really your issue or a configuration issue (which I have yet to see). J-D On Mon, Apr 11, 2011 at 5:41 PM, 陈加俊 wrote: > Can I

Re: too many regions cause OME ?

2011-04-11 Thread 陈加俊
Can I limit the number of regions on one RegionServer? On Tue, Apr 12, 2011 at 8:37 AM, Jean-Daniel Cryans wrote: > It's really a lot yes, but it could also be weird configurations or > too big values. > > J-D > > On Mon, Apr 11, 2011 at 5:35 PM, 陈加俊 wrote: > > Is it too many regions ? Is the
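There is no per-RegionServer cap on region count in these versions, and as J-D notes in this thread, the data still has to be served somewhere. Purely as an illustration of the knob that does influence the total region count, a hedged sketch (assuming the 0.90 admin API; 'table1' and the 4 GB figure are placeholders) that raises a table's max region size so it splits into fewer, larger regions going forward:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RaiseMaxFileSize {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        byte[] table = Bytes.toBytes("table1"); // placeholder table name

        admin.disableTable(table); // the table must be offline to alter it
        HTableDescriptor desc = admin.getTableDescriptor(table);
        // Allow 4 GB per region instead of the old 256 MB default, so far fewer regions
        // are created as the table grows; existing regions are not merged by this.
        desc.setMaxFileSize(4L * 1024 * 1024 * 1024);
        admin.modifyTable(table, desc);
        admin.enableTable(table);
      }
    }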

RE: too many regions cause OME ?

2011-04-11 Thread Doug Meil
Re: " maxHeap=3991" Seems like an awful lot of data to put in a 4gb heap. -Original Message- From: 陈加俊 [mailto:cjjvict...@gmail.com] Sent: Monday, April 11, 2011 8:35 PM To: hbase-u...@hadoop.apache.org Subject: too many regions cause OME ? Is it too many regions ? Is the memory enou

Re: too many regions cause OME ?

2011-04-11 Thread Jean-Daniel Cryans
It's really a lot yes, but it could also be weird configurations or too big values. J-D On Mon, Apr 11, 2011 at 5:35 PM, 陈加俊 wrote: > Is it too many regions ? Is the memory enough ? > HBase-0.20.6 > > 2011-04-12 00:16:31,844 FATAL > org.apache.hadoop.hbase.regionserver.HRegionServer: OutOfMemory

too many regions cause OME ?

2011-04-11 Thread 陈加俊
Is it too many regions? Is the memory enough? HBase-0.20.6 2011-04-12 00:16:31,844 FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: OutOfMemoryError, aborting. java.lang.OutOfMemoryError: Java heap space at java.io.BufferedInputStream.<init>(BufferedInputStream.java:178) at or

Re: Catching ZK ConnectionLoss with HTable

2011-04-11 Thread Jean-Daniel Cryans
I thought a lot more about this issue and it could be a bigger undertaking than I thought: basically any HTable operation can throw ZK-related errors, and I think they should be considered fatal. In the meantime, HBase could improve the situation a bit. You say it was spinning; do you know where

Re: Hadoop 0.20.3 Append branch?

2011-04-11 Thread Jason Rutherglen
I was confused by the reference in the pom.xml of HBase to the append jar, I neglected to mention that part. Thanks for the assistance. On Mon, Apr 11, 2011 at 4:56 PM, Andrew Purtell wrote: > Head of branch-0.20-append is "0.20.3-SNAPSHOT" > (http://svn.apache.org/viewvc/hadoop/common/branches

Re: Hadoop 0.20.3 Append branch?

2011-04-11 Thread Andrew Purtell
Head of branch-0.20-append is "0.20.3-SNAPSHOT" (http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/build.xml) - Andy > From: Jason Rutherglen > Subject: Re: Hadoop 0.20.3 Append branch? > To: apurt...@apache.org, hbase-u...@hadoop.apache.org > Date: Monday, April 11, 2011

Re: hbase architecture question

2011-04-11 Thread javamann
This is basically what I do, only I use a Java client to aggregate and place the data into HBase. I can process a log with a million rows in a little over 13 seconds. Writing the data to HBase takes around 40 seconds. Then we hit HBase via a thin client, a Spring WS. Seems to work pretty well. -P
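A rough sketch of that aggregate-then-write pattern (not the poster's actual code; assumes the 0.90 client API, and the table, family and qualifier names are placeholders). Turning auto-flush off so puts go through the client-side write buffer is usually what makes the load phase fast:

    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class LogLoader {
      public static void load(Map<String, Long> countsByKey) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "log_aggregates"); // placeholder table name
        table.setAutoFlush(false);                         // buffer puts on the client
        table.setWriteBufferSize(2 * 1024 * 1024);         // mirrors hbase.client.write.buffer

        byte[] family = Bytes.toBytes("d");                // placeholder family
        byte[] qualifier = Bytes.toBytes("count");
        for (Map.Entry<String, Long> e : countsByKey.entrySet()) {
          Put put = new Put(Bytes.toBytes(e.getKey()));
          put.add(family, qualifier, Bytes.toBytes(e.getValue().longValue()));
          table.put(put);
        }
        table.flushCommits();                              // push whatever is left in the buffer
        table.close();
      }
    }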

hbase architecture question

2011-04-11 Thread Prosperent
We're new to hbase, but somewhat familiar with the core concepts associated with it. We use mysql now, but have also used cassandra for portions of our code. We feel that hbase is a better fit because of the tight integration with mapreduce and the proven stability of the underlying hadoop system.

Re: Hadoop 0.20.3 Append branch?

2011-04-11 Thread Jean-Daniel Cryans
0.20 is the name of the branch; if you generate a tar from it you'll see that it's called 0.20.3-SNAPSHOT. J-D On Mon, Apr 11, 2011 at 2:16 PM, Jason Rutherglen wrote: > Well, just the difference in versions, the one in HBase is listed as > 0.20 whereas the latest is 0.20.3? > > On Mon, Apr 11, 2

Re: Issue starting HBase

2011-04-11 Thread Jean-Daniel Cryans
What does the log look like when you start hbase without starting your own zookeeper? The "Couldnt start ZK at requested address" message means that it does fall into that part of the code, but something must be blocking it from starting... The log should tell you. J-D On Mon, Apr 11, 2011 at 11:

Re: Hadoop 0.20.3 Append branch?

2011-04-11 Thread Jason Rutherglen
Well, just the difference in versions, the one in HBase is listed as 0.20 whereas the latest is 0.20.3? On Mon, Apr 11, 2011 at 12:11 PM, Andrew Purtell wrote: > Requires it in fact. > > You mean this, right: > http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/ > > ? > >  -

RE: Catching ZK ConnectionLoss with HTable

2011-04-11 Thread Sandy Pratt
Thanks J-D. I'll keep an eye on the Jira. > -Original Message- > From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean- > Daniel Cryans > Sent: Monday, April 11, 2011 11:52 > To: user@hbase.apache.org > Subject: Re: Catching ZK ConnectionLoss with HTable > > I'm cleaning

Re: Cluster crash

2011-04-11 Thread Jean-Daniel Cryans
Alright, so I was able to get the logs from Eran. The HDFS errors are a red herring; what followed in the region server log, and is really important, is: 2011-04-10 10:14:27,278 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 144490ms for sessionid 0x12e

RE: cpu profiling

2011-04-11 Thread Andrew Purtell
We use JProfiler and connect to the remote VM via SSH tunnel. (Our testing is done up in EC2.) - Andy > From: Peter Haidinyak > Subject: RE: cpu profiling > To: "user@hbase.apache.org" > Date: Monday, April 11, 2011, 8:51 AM > I've been using JProfiler for years > and have been very happy

Re: Hadoop 0.20.3 Append branch?

2011-04-11 Thread Andrew Purtell
Requires it in fact. You mean this, right: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.20-append/ ? - Andy --- On Mon, 4/11/11, Jason Rutherglen wrote: > From: Jason Rutherglen > Subject: Hadoop 0.20.3 Append branch? > To: hbase-u...@hadoop.apache.org > Date: Monday, Ap

Hadoop 0.20.3 Append branch?

2011-04-11 Thread Jason Rutherglen
In the HBase pom.xml, the Hadoop branch is 0.20. Will HBase work with the Hadoop 0.20.3 append branch?

Re: Catching ZK ConnectionLoss with HTable

2011-04-11 Thread Jean-Daniel Cryans
I'm cleaning this up in this jira https://issues.apache.org/jira/browse/HBASE-3755 But it's a failure case I haven't seen before, really interesting. There's an HTable that's created in the guts of HCM that will throw a ZookeeperConnectionException, but it will bubble up as an IOE. I'll try to addre

Re: ANN: HBase 0.90.2 is available for download

2011-04-11 Thread Ted Yu
Please take a look at HBASE-3750 and HBASE-3762 (which aren't in 0.90.2) if you use HTablePool, especially when you use maxSize lower than Integer.MAX_VALUE Thanks > > > > On Fri, Apr 8, 2011 at 8:39 PM, Stack wrote: > >> > >> The Apache HBase team is happy to announce the general availability
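For context, a minimal usage sketch of the HTablePool API the advisory refers to (0.90 client API; the table name, row key, and pool size are just examples). Pools created with a bounded maxSize like this one are the case that HBASE-3750/HBASE-3762 concern:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.HTablePool;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PoolExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTablePool pool = new HTablePool(conf, 10); // bounded maxSize, i.e. lower than Integer.MAX_VALUE

        HTableInterface table = pool.getTable("table1"); // placeholder table name
        try {
          Result r = table.get(new Get(Bytes.toBytes("some-row")));
          System.out.println("Columns returned: " + r.size());
        } finally {
          pool.putTable(table); // always hand the table back to the pool
        }
      }
    }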

Re: Cluster crash

2011-04-11 Thread Jean-Daniel Cryans
So my understanding is that this log file was opened at 7:29, and then something happened at 10:12:55 that triggered the recovery on that block. It triggered a recovery of the block, with the new name being blk_1213779416283711358_54249. It seems that that process was started by the DFS Clien

Re: TableInputFormat and number of mappers == number of regions

2011-04-11 Thread Avery Ching
I found the code still exists in this code base for the old mapred interfaces src/main/java/org/apache/hadoop/hbase/mapred/TableInputFormatBase.java I'll adapt it for my needs. Thanks! Avery On Apr 9, 2011, at 9:55 AM, Jean-Daniel Cryans wrote: > It's weird, I thought we already did something
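In the same spirit, here is a rough sketch (not the adapted class from this thread) of how one might coalesce region splits with the new-API TableInputFormat in 0.90, so a job gets roughly regions/N mappers instead of one per region; the grouping factor is an assumption:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.hadoop.hbase.mapreduce.TableSplit;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.JobContext;

    public class CoalescingTableInputFormat extends TableInputFormat {
      private static final int REGIONS_PER_SPLIT = 10; // assumed grouping factor

      @Override
      public List<InputSplit> getSplits(JobContext context) throws IOException {
        List<InputSplit> regionSplits = super.getSplits(context); // one split per region, in key order
        List<InputSplit> merged = new ArrayList<InputSplit>();
        for (int i = 0; i < regionSplits.size(); i += REGIONS_PER_SPLIT) {
          TableSplit first = (TableSplit) regionSplits.get(i);
          int last = Math.min(i + REGIONS_PER_SPLIT, regionSplits.size()) - 1;
          TableSplit lastSplit = (TableSplit) regionSplits.get(last);
          // The merged split spans from the first start row to the last end row; the
          // location hint is only approximate since the split now covers several regions.
          merged.add(new TableSplit(first.getTableName(), first.getStartRow(),
              lastSplit.getEndRow(), first.getRegionLocation()));
        }
        return merged;
      }
    }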

Re: TableInputFormat and number of mappers == number of regions

2011-04-11 Thread Vidhyashankar Venkataraman
Just so you guys know, the 150K regions was in a test cluster that we had let run amok. Our prod cluster has less than 50 regions per region server. Considering 700 nodes, that comes to around 22K regions! The job tracker could still potentially be overloaded with this number. The solution is i

Re: Cluster crash

2011-04-11 Thread Eran Kutner
There wasn't an attachment, I pasted all the lines from all the NN logs that contain that particular block number inline. As for CPU/IO, first there is nothing else running on those servers, second, CPU utilization on the slaves at peak load was around 40% and disk IO utilization less than 20%. Th

Re: Yet another bulk import question

2011-04-11 Thread Vivek Krishna
Is there a limiting factor/setting that limits/controls the bandwidth on HBase nodes? I know there is a number to be set on zoo.cfg to increase the number of incoming connections. Though I am using a 15 Gigabit ethernet card, I can see only 50-100MB/s of transfer per node (from clients) via gangli

Issue starting HBase

2011-04-11 Thread coolSK
I have a weird issue starting the HBase server. I am using HBase-0.90.2. I set my root.dir under $HBASE_HOME/conf/hbase-site.xml. Then when I try starting HBase using bin/start-hbase.sh, I get the following error message: 2011-04-11 13:57:56,578 INFO org.apache.zookeeper.ClientCnxn: Opening socket co

Re: The whole key space is not covered by the .META.

2011-04-11 Thread 茅旭峰
We are using hadoop-CDH3B4 and hbase0.90.1-CDH3B4. I'll check the issue further, but my understanding is that the meta info and the root region are saved by zookeeper, right? Do I need to check them there? m9suns On 2011-4-12, at 0:40, Jean-Daniel Cryans wrote: > It's possible under some bugs, which HBase ve

Catching ZK ConnectionLoss with HTable

2011-04-11 Thread Sandy Pratt
Hi all, I had an issue recently where a scan job I frequently run caught ConnectionLoss and subsequently failed to recover. The stack trace looks like this: 11/04/08 12:20:04 INFO zookeeper.ZooKeeper: Session: 0x12f2497b00d03d8 closed 11/04/08 12:20:04 WARN client.HConnectionManager$ClientZKWat
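Until the Jira J-D mentions above (HBASE-3755) is addressed, a hedged recovery sketch for a batch scan job (0.90 client API; the table name is a placeholder): since the ZooKeeperConnectionException surfaces from HTable calls as an IOException, the job can catch it while setting up the table and scanner, drop its HTable, back off, and rebuild rather than spin:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ZooKeeperConnectionException;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class ResilientScan {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        for (int attempt = 0; attempt < 3; attempt++) {
          HTable table = null;
          try {
            table = new HTable(conf, "table1"); // placeholder table name
            ResultScanner scanner = table.getScanner(new Scan());
            for (Result row : scanner) {
              // process the row here
            }
            scanner.close();
            return; // the scan finished cleanly
          } catch (ZooKeeperConnectionException zke) {
            System.err.println("Lost the ZooKeeper connection, will retry: " + zke);
          } catch (IOException ioe) {
            System.err.println("Scan setup failed, will retry: " + ioe);
          } finally {
            if (table != null) table.close();
          }
          Thread.sleep(10000); // back off before rebuilding the client
        }
      }
    }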

Re: Cluster crash

2011-04-11 Thread Stack
On Sun, Apr 10, 2011 at 11:30 PM, Eran Kutner wrote: > Hi St.Ack and J-D, > Thanks for looking into this. > > It can definitely be a configuration problem, but I seriously doubt it > is a network or infrastructure problem. It's our own operated > infrastructure (not a cloud)  and we have a lot of

Re: Reg:HBase Client

2011-04-11 Thread Jean-Daniel Cryans
Same for the issue where the master isn't shutting down: we should be shutting down region servers once checkFilesystem is called... at least until we can find a way to ride over NN restarts. Gets/puts are probably working for files that are already open, since HBase doesn't have to talk to the N

Re: can I start HBase in pseudo-distributed mode with an external zookeeper?

2011-04-11 Thread Jean-Daniel Cryans
I think it changed somewhere between 0.20 and 0.90, as I remember being able to use a separate ZK with a standalone HBase. So for the moment you can just set hbase.cluster.distributed to true, which will spawn the master and the region server as 2 separate processes, but it will still work without HDFS becau

Re: The whole key space is not covered by the .META.

2011-04-11 Thread Jean-Daniel Cryans
It's possible under some bugs, which HBase version are you using? J-D On Mon, Apr 11, 2011 at 4:50 AM, 茅旭峰 wrote: > Hi, > > Is it possible that some table cannot cover the whole key space. What I saw > was like > > > hbase(main):006:0> put 'table1', 'abc', 'cfEStore:dasd', '123' > > 0 row(s

RE: cpu profiling

2011-04-11 Thread Peter Haidinyak
I've been using JProfiler for years and have been very happy with it. -Pete -Original Message- From: Jack Levin [mailto:magn...@gmail.com] Sent: Sunday, April 10, 2011 9:09 PM To: user@hbase.apache.org Subject: cpu profiling Hi all, what is the best way to profile CPU on Region Server J

Re: ANN: HBase 0.90.2 is available for download

2011-04-11 Thread Stack
Lior and Joe: Sorry for the mvn lag. The mvn deploy system is smarter than me. The deploy requires four full builds of HBase running all tests. I inevitably get distracted and forget the process or else I am demented and answer one of the questions just off and then I have to start over. I'm n

Re: The whole key space is not covered by the .META.

2011-04-11 Thread 茅旭峰
More input from the hbase shell: I used scan '.META.' and got === table1,LC3MILeAUy8HmRFgU5-ESE-9T7w=,130 column=info:regioninfo, timestamp=1300519437064, value=REGION => {NAME => 'table1,LC3MILeAUy8HmRFgU5-ESE-9T7w=,13005 0519432575.0bdd3d8fa7fc710860a4ee51fc9c 19432575.0bdd3d8fa7fc710860a4ee51f

Re: The whole key space is not covered by the .META.

2011-04-11 Thread 茅旭峰
Or does this mean I've corrupted the .META. data? BTW, any way to recover the .META.? On Mon, Apr 11, 2011 at 7:50 PM, 茅旭峰 wrote: > Hi, > > Is it possible that some table cannot cover the whole key space. What I saw > was like > > > hbase(main):006:0> put 'table1', 'abc', 'cfEStore:dasd', '

The whole key space is not covered by the .META.

2011-04-11 Thread 茅旭峰
Hi, Is it possible that some table cannot cover the whole key space? What I saw was like: hbase(main):006:0> put 'table1', 'abc', 'cfEStore:dasd', '123' 0 row(s) in 0.3030 seconds hbase(main):007:0> put 'table1', 'LCgwzrx2XTFkB2Ymz9HeJWPY0Ok=', 'cfEStore:dasd', '123' ERROR: java.io.IOExcep

Re: can I start HBase in pseudo-distributed mode with an external zookeeper?

2011-04-11 Thread Harsh J
Hello Stanley, On Mon, Apr 11, 2011 at 4:56 PM, wrote: > hbase.cluster.distributed > default: false > The mode the cluster will be in. Possible values are false: standalone and > pseudo-distributed setups with managed Zookeeper true: fully-distributed with > unmanaged Zookeeper Quorum (see hba

can I start HBase in pseudo-distributed mode with an external zookeeper?

2011-04-11 Thread stanley.shi
Hi guys, I've been working on the configuration for hours and haven't got a clue. I want to configure HBase to run in pseudo-distributed mode with an external zookeeper. I have already configured export HBASE_MANAGES_ZK=false, but HBase still tries to start its own ZK, so I am wondering i

Reg:HBase Client

2011-04-11 Thread Ramkrishna S Vasudevan
Hi, after starting HMaster and HRegionServer, if we kill the NameNode and try to insert some data, the insertion is allowed and even get is working. But the RegionServer and Master throw errors saying they are unable to connect to the NameNode. Shouldn't the client also throw an error?

Fwd: ANN: HBase 0.90.2 is available for download

2011-04-11 Thread Lior Schachter
-- Forwarded message -- From: Lior Schachter Date: Sun, Apr 10, 2011 at 9:57 AM Subject: Re: ANN: HBase 0.90.2 is available for download To: Stack , gene...@hadoop.apache.org Cc: user@hbase.apache.org Hi Stack, We already have version 0.90.1 installed on our production cluster a