I think you could change the configuration of one regionserver and then
restart that regionserver:
stop
start
Like this, one by one.
I did this and it worked!
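The one-host-at-a-time loop described above could be sketched roughly like this. The hostnames, HBASE_HOME path, and passwordless ssh are all assumptions; this is a dry-run version that only prints the commands it would run:

```shell
# Dry-run sketch of the one-at-a-time restart described above.
# HBASE_HOME, the hostnames, and passwordless ssh are assumptions;
# drop the leading "echo" to actually execute the commands.
HBASE_HOME="${HBASE_HOME:-/usr/local/hbase}"

rolling_restart() {
  for host in "$@"; do
    # Stop, then start, each regionserver before moving on to the next.
    echo ssh "$host" "$HBASE_HOME/bin/hbase-daemon.sh stop regionserver"
    echo ssh "$host" "$HBASE_HOME/bin/hbase-daemon.sh start regionserver"
  done
}

rolling_restart rs1.example.com rs2.example.com rs3.example.com
```

If your HBase version ships a rolling-restart helper script in bin/, that would be preferable to a hand-rolled loop.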
On Wed, Dec 8, 2010 at 5:30 AM, Ted Yu yuzhih...@gmail.com wrote:
You need to restart the cluster.
See also
I have the master also serving as a regionserver. I'll also run ZK on 3 of the
regionservers. I don't have too much data (a few TBs only), so I guess it
would be fine?
On Thu, Dec 9, 2010 at 12:44 AM, Ted Dunning tdunn...@maprtech.com wrote:
Ahh... that is very much at the other end of the spectrum
Tried that, nope, didn't help much. I was opening a table and scanning in
the reducer. Now I am calling scanner.close() in each reducer and I have put
HTable.close() in the cleanup() function too. Still seeing those children
even after the job is killed :(
I am using
Just to follow up on this (so if someone searches and hits this post):
this is solved now. Basically, I found the hadoop dfs examples, found out
it was hadoop, and I DID need to fix /etc/hosts AND needed to move some
info from hdfs-site.xml to core-site.xml. Once the hadoop example
worked, the hbase
Ok, I finally answered one of my questions when running the rebalance
tool. One node is 180 gig and another 20 gig, so it is technically in balance
with a 10% threshold. When I ran it at 1%, it did rebalance. I guess I expected
hbase to be writing to both nodes but it was only writing to one. I need
to get to a
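An aside on the threshold semantics behind the 10% / 1% figures above: the HDFS balancer compares each node's utilization (used space as a percent of capacity) against the cluster average, and a node within the threshold, in percentage points, of that average counts as balanced. A toy illustration of that rule (this is not the balancer's actual code, and the ~2 TB per-node capacity is an assumption, not from the post — but with it, 180 GB is 9% used and 20 GB is 1%, both within 10 points of the 5% average):

```shell
# Toy model of the HDFS balancer's documented threshold rule: a node is
# balanced when its utilization (%) is within <threshold> percentage
# points of the cluster-average utilization.
balanced() {
  threshold=$1; avg=$2; shift 2
  for util in "$@"; do
    diff=$((util - avg))
    [ "$diff" -lt 0 ] && diff=$((-diff))
    [ "$diff" -gt "$threshold" ] && { echo "unbalanced"; return 1; }
  done
  echo "balanced"
}

# Assumed 2 TB capacity per node: 180 GB used = 9%, 20 GB used = 1%, avg 5%.
balanced 10 5 9 1          # prints "balanced"   (default threshold 10)
balanced 1 5 9 1 || true   # prints "unbalanced" (threshold 1 moves blocks)
```

That is why "hadoop balancer" with the default threshold did nothing here but a 1% threshold rebalanced.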
Hello guys,
I got a regionserver crash and am trying to find out why. I found:
* in the regionserver log [1]: ZK session expired, and before that slow hlog
edits;
* nothing in the DataNode log [2] or the HMaster log [4];
* some warns in the ZK log [3] with EndOfStreamException.
I wonder, can it be because of a long GC
Hey Alex,
It's either the client or the server box that wasn't responding;
either way they didn't talk for 52 seconds when your session timeout
was set at 40 seconds. I suggest you also take a look at the ZK log,
it should tell you exactly when the session expired.
J-D
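For reference, the timeout in question is HBase's zookeeper.session.timeout property (milliseconds, in hbase-site.xml); the values below are illustrative only. Note that the ZK server caps what a client may request at 20 x tickTime by default (40 s with the stock tickTime of 2000 ms), which is likely where the 40-second figure comes from, so the server-side cap has to be raised as well:

```xml
<!-- hbase-site.xml: ask for a longer ZK session timeout (value illustrative) -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>120000</value>  <!-- milliseconds -->
</property>

<!-- If HBase manages its own ZK, hbase.zookeeper.property.* entries are
     copied into zoo.cfg, so the server-side cap can be raised the same way. -->
<property>
  <name>hbase.zookeeper.property.maxSessionTimeout</name>
  <value>120000</value>
</property>
```

A longer timeout only papers over long GC pauses; tuning GC (or giving the regionserver less heap pressure) is the real fix.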
Regarding using a lot of families... They are currently partitioned in a
manner that reflects the various data groups that are likely to be read
together... We're doing a lot of big scans on the regions of only one of
those families, with scans of the full table being much shorter/rarer. By
Hi,
I'm using HBase .89 and Hadoop .20.2
I'm trying to create a connection to HBase from a remote Java Client. I am
using the following code...
final Configuration configuration = HBaseConfiguration.create();
configuration.set("hbase.zookeeper.quorum", "caiss01a");
Very often it's a version problem, make sure both the client and the
server have the same HBase and Hadoop version on their classpath.
J-D
On Thu, Dec 9, 2010 at 11:22 AM, Peter Haidinyak phaidin...@local.com wrote:
We have a 6 node cluster, 5 with region servers. 2 of the region servers have
been stable for days, but 3 of them keep crashing. Here are the logs from
around when the crash occurs. (btw, we are shoving approximately the twitter
firehose into hbase via flume) I'm an hbase newbie, but I have
Lance,
Both those lines indicate the problem:
IPC Server handler 13 on 60020 took 182416ms
Client session timed out, have not heard from server in 182936ms
It's very clear that your region servers are suffering from
pause-of-the-world garbage collection issues. Basically this one GC'ed
for 3
Take a thread dump on those child processes before killing them. Use jstack
for instance and take a thread dump, wait for 30 secs and take another one.
That should tell you what they are waiting on.
--Suraj
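A minimal sketch of that two-dump procedure follows. The jstack binary and the 30-second interval are made overridable purely so the sketch can be dry-run without a live JVM; with a real JVM, just call it with the child process's PID:

```shell
# Sketch of the two-dump comparison suggested above. JSTACK and INTERVAL
# default to the real tool and a 30 s wait; they are overridable only so
# the sketch can be exercised without a running JVM.
thread_dumps() {
  pid="$1"
  "${JSTACK:-jstack}" "$pid" > "dump-$pid-1.txt"
  sleep "${INTERVAL:-30}"
  "${JSTACK:-jstack}" "$pid" > "dump-$pid-2.txt"
  # Threads whose stacks are identical across both dumps are what the
  # process is stuck waiting on.
  diff "dump-$pid-1.txt" "dump-$pid-2.txt"
}

# usage: thread_dumps <pid-of-child-process>
```

An empty diff (both dumps identical) is itself informative: nothing moved in 30 seconds, so every thread in the dump is blocked.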
On Thu, Dec 9, 2010 at 1:53 AM, Hari Sreekumar hsreeku...@clickable.com wrote:
Turns out that even though I am using hadoop 0.20.2 on the server I needed to
use version .21.0 on my client.
-Pete
-Original Message-
From: Peter Haidinyak [mailto:phaidin...@local.com]
Sent: Thursday, December 09, 2010 1:55 PM
To: user@hbase.apache.org
Subject: RE: Connection to
Juhani:
You can also consider https://issues.apache.org/jira/browse/HBASE-1537 which
is not in
hbase-0.89.20100924+28 (http://archive.cloudera.com/cdh/3/hbase-0.89.20100924+28/).
You can apply Andrew's patch yourself.
On Thu, Dec 9, 2010 at 10:29 AM, Jean-Daniel Cryans jdcry...@apache.org wrote:
This is an example of a URL we use to get master's stats.
http://imageshack.com:60010/master.jsp
On one of our clusters, the browser can no longer retrieve stats; the
connection simply hangs, but the master log still reports activity and the
cluster is generally up. Does anyone know why this might
As far as I know hadoop 0.20.2 and 0.21.0 aren't wire compatible, so
there's really an issue here. Also HBase doesn't run on top of 0.21.0
J-D
On Thu, Dec 9, 2010 at 2:52 PM, Peter Haidinyak phaidin...@local.com wrote:
I checked the 'hadoop version' on each machine and it was .20.2+737
-Pete
-Original Message-
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-Daniel
Cryans
Sent: Thursday, December 09, 2010 3:36 PM
To: user@hbase.apache.org
Subject: Re: Connection to HBase seems to
Yep that version is CDH3b3 which, because it contains the security
patches, isn't compatible with apache's 0.20.2
J-D
On Thu, Dec 9, 2010 at 3:41 PM, Peter Haidinyak phaidin...@local.com wrote:
This is a windows system?
Is HBase for sure up and running and you've verified it so by
connecting with the shell?
St.Ack
On Thu, Dec 9, 2010 at 3:41 PM, Peter Haidinyak phaidin...@local.com wrote:
Nothing in the master log?
Drawing that page, it's going to scan .META., which could take a while if .META. is large.
You might verify that a scan of .META. works in the shell:
hbase> scan '.META.'
Does it have the same pause as the UI? Does the UI ever draw?
St.Ack
On Thu, Dec 9, 2010 at 3:01 PM, Jack Levin
On Thu, Dec 9, 2010 at 3:01 PM, Jack Levin
This could indicate swapping during GC.
On Thu, Dec 9, 2010 at 12:13 PM, Lance Riedel lancerie...@gmail.com wrote:
Seems reasonable, but I'm having trouble making sense of the GC logs I had
turned on. Basically, there was a full GC a minute before this happens
on that server that lasts less
My client is running on Windows XP and the HBase/Hadoop servers are on Linux.
I can connect to HBase on the server using its shell.
-Pete
-Original Message-
From: saint@gmail.com [mailto:saint@gmail.com] On Behalf Of Stack
Sent: Thursday, December 09, 2010 3:47 PM
To:
I'm not aware of anyone running HBase with OpenJDK.
Try using the Sun/Oracle JDK, any recent version, except for 1.6.0_18 which
has known issues.
On Tue, Dec 7, 2010 at 2:43 AM, 陈加俊 cjjvict...@gmail.com wrote:
One HBase regionserver crashed. What can I do to avoid this happening
again?
Hi
We are running Hadoop-0.20.2 without append (HDFS-200) on our production
environment.
Can I run HBase on this cluster?
If I can, which version of HBase should I use?
Because HBase is indispensable to us, should we change the version
of our cluster to Hadoop-0.20-append or Hadoop-0.21?
Thank you very much!
I will use Sun/Oracle JDK at every node.
jiajun
On Fri, Dec 10, 2010 at 8:37 AM, Gary Helmling ghelml...@gmail.com wrote:
I know that HBase-0.20.6 can't run on Hadoop-0.21. We are running
Hadoop-0.20.2 without append (HDFS-200) in our production
environment, and running HBase-0.20.6 on this cluster. It has worked well, but I'm
worried that there may be some potential problems, so I am watching this
matter carefully!
I think the issue was that the meta region was on a server that was
flaking out... this has been corrected and we are staying up with port
60010.
-Jack
On Thu, Dec 9, 2010 at 3:48 PM, Stack st...@duboce.net wrote:
Nobody is running Hadoop on Gentoo in production, either.
Did you tweak CFLAGS by chance?
Anyway, don't do it this way.
Run the Sun JVM on a stable version of CentOS/RedHat, Debian, or Ubuntu.
Best regards,
- Andy
--- On Thu, 12/9/10, Gary Helmling ghelml...@gmail.com wrote:
From:
Can you confirm that you are running this from within a *cygwin* terminal
and not a Windows command shell? This would be the bash shell you launch by
running C:\cygwin\root\cygwin.bat
Secondly, make sure you are able to navigate using cd
/usr/local/hbase-0.20.6 within cygwin.
Just wanted to make
Here is our story on Hadoop for 0.90.0, the coming next major HBase
release which is currently in release candidate 2:
http://people.apache.org/~stack/hbase-0.90.0-candidate-1/docs/notsoquick.html#hadoop.
What's stated herein applies to our current 0.89.x release also. If
our statement is not
Hi
One of my clusters is broken; the HMaster's log is here:
2010-12-10 12:48:17,320 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /192.168.5.153:50020. Already tried 0 time(s).
2010-12-10 12:48:18,321 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server:
Here are more logs:
2010-12-10 12:56:27,727 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: /192.168.5.153:50020. Already tried 6 time(s).
2010-12-10 12:56:27,889 WARN org.apache.hadoop.hdfs.DFSClient: Error
Recovery for block blk_2629551547112989428_266782 failed because recovery
Hi Stack,
I just wondered whether 0.90.0-RC1 could be deployed to a maven
repository, so that those of us who use maven could make use of it.
Thank you,
Imran
On Fri, Dec 10, 2010 at 10:50 AM, Stack st...@duboce.net wrote:
Here is our story on Hadoop for 0.90.0, the coming next major HBase
release which is
Not a problem, just wanted to know what the plan is, thanks for the info.
/Imran
On Fri, Dec 10, 2010 at 11:17 AM, Stack st...@duboce.net wrote:
We haven't done this work yet --
http://www.apache.org/dev/publishing-maven-artifacts.html#publish-snapshot.
We'll do it as part of RC3. Sorry for
Jiajun,
Hard to say whether you've lost data or not. Something looks wrong with HDFS.
What versions of HBase and HDFS are you running?
What's going on in the logs of the DataNodes and the NameNode when this is
happening? What about the dfs web ui?
Try running Hadoop fsck to see what's up
https://issues.apache.org/jira/browse/HBASE-3330
On Thu, Dec 9, 2010 at 9:22 PM, Imran M Yousuf imyou...@gmail.com wrote:
Hi all,
I met this exception when doing intensive insertions using YCSB. Can anybody
give me some clues on this? I use hbase 0.20.6.
com.yahoo.ycsb.DBException:
org.apache.hadoop.hbase.client.RetriesExhaustedException: Trying to contact
region server -- nothing found, no 'location' returned,
Hi Guys,
Wonder if anybody could shed some light on how to reduce the load on HBase
cluster when running a full scan.
The need is to dump everything I have in HBase and into a Hive table. The
HBase data size is around 500g.
The job creates 9000 mappers, after about 1000 maps things go south
What J-D was pointing out was that you need the *exact* same hbase and
hadoop jars on *both* client side and server side.
Your client java.class.path shows that you are using apache hadoop 0.20.2
whereas on the server side you are using CDH3 0.20.2+737
These are not compatible and with version
Suraj,
HBase works when I work with smaller clusters, so I don't think HBase is the
problem. But now I'm trying to include the conf directory in the classpath and try
again.
But please tell me this: I can't find any proper documentation for starting
HBase in fully distributed mode.
So please help me
Hi JG
thank you
The HDFS datanode and the HBase regionserver ran on the same computer that
unexpectedly halted, so something looks wrong with HDFS.
What versions of HBase and HDFS are you running?
HBase version is 0.20.6
HDFS version is 0.20.2.
What's going on in the logs of the DataNodes and the