Hi,
It's a ZK expiry on Sunday the 1st. Could the root cause be the leap second bug?
N.
On Thu, Jul 5, 2012 at 8:59 AM, lztaomin lztao...@163.com wrote:
Hi all,
My HBase cluster has a total of 3 machines, with Hadoop and HBase installed on
the same machines, using the ZooKeeper that ships with HBase. After 3 months of operation
Jay,
Have you tried the -metaOnly hbck option (possibly in conjunction with
-fixAssignments/-fix)? It could be that meta is out of whack which
prevents everything else from making progress.
If that doesn't work please share more logs -- it will help us figure out
where it got stuck.
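For reference, the checks suggested above could look like this from the shell (option names per the hbck of that era; verify with `hbase hbck -h` on your version):

```shell
# Check only the integrity of -ROOT- and .META.
hbase hbck -metaOnly

# If META is out of whack, try repairing region assignments
hbase hbck -fixAssignments
```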
Thanks,
Hi,
I can no longer start my cluster correctly and get messages like
http://pastebin.com/T56wrJxE (taken on one region server)
I suppose HBase is not designed for being fully stopped, only for having some nodes
go down? HDFS is not complaining; it's only HBase that can't start
correctly :(
I
Hi,
I haven't heard about the possibility of splitting by regex. Maybe somebody else will
post here if it's possible.
But you could maybe work around that by doing a mapping from regex to region in
your client code.
If that's not an option and it's too difficult to decide how to pre-split you
could
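If it helps, here is a minimal sketch of the client-side mapping idea: salt each row key into a fixed set of buckets so a pre-split table receives even write load. All names and the bucket count are illustrative, not from this thread.

```java
import java.nio.charset.StandardCharsets;

// Sketch: deterministic key-to-bucket mapping for a pre-split table.
public class SaltedKeys {
    static final int BUCKETS = 16;  // illustrative region count

    // Prefix the key with a two-hex-digit bucket derived from its hash,
    // so writes spread across regions and reads can recompute the prefix.
    static String salted(String rowKey) {
        int bucket = (rowKey.hashCode() & 0x7fffffff) % BUCKETS;
        return String.format("%02x-%s", bucket, rowKey);
    }

    // Split points to hand to HBaseAdmin.createTable(desc, splits):
    // one boundary per bucket edge ("01" .. "0f" for 16 buckets).
    static byte[][] splitPoints() {
        byte[][] splits = new byte[BUCKETS - 1][];
        for (int i = 1; i < BUCKETS; i++) {
            splits[i - 1] = String.format("%02x", i).getBytes(StandardCharsets.UTF_8);
        }
        return splits;
    }

    public static void main(String[] args) {
        System.out.println(salted("user42"));      // same key maps to the same bucket every time
        System.out.println(splitPoints().length);  // 15 boundaries for 16 regions
    }
}
```

The same `salted()` function has to be applied on reads as well, since the stored keys carry the prefix.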
From Lars' book:
The batch() calls currently do not support the Increment instance,
though this should change in near future.
Which version are you using? It's possible that it's still not there
even in recent versions.
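As a possible workaround until batch() supports it, the increment could be issued directly. A rough sketch against the 0.9x client API (table and column names are made up, and this assumes an HBase client and Configuration on hand):

```java
// Sketch only: call increment() directly instead of routing it through batch().
HTable table = new HTable(conf, "mytable");
Increment inc = new Increment(Bytes.toBytes("row1"));
inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("counter"), 1L);
Result result = table.increment(inc);  // applied atomically per row, server-side
```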
JM
2012/7/5, deanforwever2010 deanforwever2...@gmail.com:
my problem is
Hi,
Yesterday I stopped my cluster because of a storm. It did not come back up
well, so I formatted the Hadoop FS and restarted it.
Now, when I'm trying to re-create my schema, I'm facing some issues.
It's telling me that the table doesn't exist when I want to delete it,
but that the table exists
See https://issues.apache.org/jira/browse/HBASE-2947 for details.
On Jul 5, 2012, at 12:26 PM, Jean-Marc Spaggiari wrote:
From Lars' book:
The batch() calls currently do not support the Increment instance,
though this should change in near future.
Which version are you using? It's
Hi JM,
So you already wiped everything on the HDFS level? The only thing left is
ZooKeeper. It should not hold you back, but there could be an entry left in
/hbase/table? Could you try the ZK shell and do an ls on that znode?
In any case, if you wipe HDFS anyway, please also try wiping the ZK
Hi Lars,
Thanks for pointing me in the right direction.
I have already restarted ZK but I did not remove anything on its server.
Here is the output of the ZK ls. There are a few tables I already removed
(test3, work, etc.)...
[zk: cube(CONNECTED) 8] ls /hbase/table
[work, work_sent, .META., -ROOT-,
Hi JM,
You can simply remove the znodes if you like, that ought to do it.
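Concretely, that could look like the following in the ZK CLI (znode names taken from the earlier ls output; only remove entries for tables you have already dropped):

```shell
$ bin/zkCli.sh -server cube:2181
[zk: cube(CONNECTED) 0] rmr /hbase/table/test3
[zk: cube(CONNECTED) 1] rmr /hbase/table/work
```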
Lars
On Jul 5, 2012, at 2:23 PM, Jean-Marc Spaggiari wrote:
Hi Lars,
Thanks for pointing me in the right direction.
I have already restarted ZK but I did not remove anything on its server.
Here is the output of
Super, thanks.
Seems that everything is now working fine after I removed the entries.
I have formatted my HDFS because of this issue ;) Next time I will wait
a bit before doing that.
Thanks,
JM
2012/7/5, Lars George lars.geo...@gmail.com:
Hi JM,
You can simply remove the znodes if you
No, you need to know your key ranges for each split. If you don't and you guess
wrong, you may end up not seeing any benefits because your data may still end
up going to a single region...
(It's data dependent.)
I am personally not a fan of pre-splitting a table.
The way I look at it, you
-metaOnly returns a summary, which I presume is what the other flags are
meant to return
Summary:
-ROOT- is okay.
Number of regions: 1
Deployed on: datanode006.si.lan,60020,1341357895747
.META. is okay.
Number of regions: 1
Deployed on: datanode013.si.lan,60020,1341357896016
Jay,
What version are you on?
You may have hit this: (do you have 50 regions?)
https://issues.apache.org/jira/browse/HBASE-6018
At the moment this isn't in an apache release yet (we're working on it!),
but we were able to get it into cdh4.0.0. A tarball with it is here:
Hi,
My organization has been doing something zany to simulate atomic row operations
in HBase.
We have a converter-object model for the writables that are populated in an
HBase table, and one of the governing assumptions
is that if you are dealing with an Object record, you read all the columns
Take a look at HBASE-3584: Allow atomic put/delete in one call
It is in 0.94, meaning it is not even in cdh4
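For anyone reading along, the 0.94 API for this looks roughly as follows (a sketch per HBASE-3584; assumes an open HTable and made-up column names):

```java
// Atomic put + delete on one row via mutateRow (0.94+); both apply or neither does.
RowMutations mutations = new RowMutations(Bytes.toBytes("row1"));

Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("new_col"), Bytes.toBytes("value"));
mutations.add(put);

Delete delete = new Delete(Bytes.toBytes("row1"));
delete.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("old_col"));
mutations.add(delete);

table.mutateRow(mutations);
```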
Cheers
On Thu, Jul 5, 2012 at 11:19 AM, Keith Wyss keith.w...@explorys.com wrote:
Hi,
My organization has been doing something zany to simulate atomic row
operations in HBase.
We
Thanks for the info Ted,
Anyone tackled this problem before 0.94?
Keith
On 7/5/12 2:28 PM, Ted Yu yuzhih...@gmail.com wrote:
Take a look at HBASE-3584: Allow atomic put/delete in one call
It is in 0.94, meaning it is not even in cdh4
Cheers
On Thu, Jul 5, 2012 at 11:19 AM, Keith Wyss
Other than the lack of hbck, is there anything else it prevents from
working?
If not, it may be our best option to pull the data and make a new table.
On 05/07/2012 18:39, Jonathan Hsieh j...@cloudera.com wrote:
Jay,
What version are you on?
You may have hit this: (do you have 50 regions?)
Also see https://issues.apache.org/jira/browse/HBASE-6294
From: Jean-Marc Spaggiari jean-m...@spaggiari.org
To: user@hbase.apache.org
Sent: Thursday, July 5, 2012 7:20 AM
Subject: Re: Table exist and not exist at the same time.
Super, thanks.
Seems that
yes:
/hbase/.logs/hb-d12,60020,1341429679981-splitting/hb-d12%2C60020%2C1341429679981.134143064971
I did a fsck and here is the report :
Status: HEALTHY
Total size: 618827621255 B (Total open files size: 868 B)
Total dirs: 4801
Total files: 2825 (Files currently being written: 42)
I am having the same problem. I tried N different things but I cannot solve the
problem.
hadoop-0.20.noarch 0.20.2+923.256-1
hadoop-hbase.noarch 0.90.6+84.29-1
hadoop-zookeeper.noarch 3.3.5+19.1-1
I already set:
property
Interesting... Can you read the file? Try a hadoop dfs -cat on it
and see if it goes to the end of it.
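Something like this would do it (placeholder path; substitute the actual file from the -splitting directory):

```shell
hadoop dfs -cat /hbase/.logs/<server>-splitting/<logfile> > /dev/null \
  && echo "readable to the end"
```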
It could also be useful to see a bigger portion of the master log, for
all I know maybe it handles it somehow and there's a problem
elsewhere.
Finally, which Hadoop version are you using?
Thx,
Pablo, instead of CMSIncrementalMode try UseParNewGC. That seemed to be the
silver bullet when I was dealing with HBase region server crashes.
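In hbase-env.sh that suggestion would look something like this (flags only; heap sizing left out on purpose):

```shell
# Use CMS with the ParNew young-gen collector instead of incremental CMS
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC"
```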
Regards,
Dhaval
From: Pablo Musa pa...@psafe.com
To: user@hbase.apache.org user@hbase.apache.org
Sent: Thursday, 5
Did you check http://hbase.apache.org/book.html#perf.os.swap ?
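That section boils down to keeping the OS from swapping out the region server JVM; an illustrative setting (value and persistence mechanism vary by distro):

```shell
sysctl -w vm.swappiness=0
echo "vm.swappiness = 0" >> /etc/sysctl.conf
```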
-----Original Message-----
From: Pablo Musa [mailto:pa...@psafe.com]
Sent: July 6, 2012, 5:38
To: user@hbase.apache.org
Subject: RE: Hmaster and HRegionServer disappearance reason to ask
I am having the same problem. I tried N different things but I
Thanks Lars, my server version is 0.94 and my client jar is 0.92; I updated the
jar just now!
It works!
2012/7/5 Lars George lars.geo...@gmail.com
See https://issues.apache.org/jira/browse/HBASE-2947 for details.
On Jul 5, 2012, at 12:26 PM, Jean-Marc Spaggiari wrote:
From Lars' book:
The batch()
Hi all,
I've implemented an indexing system. It needs refinement of course,
but I have my own work to do, so I decided to open it up for everyone to
access and modify; any improvements or ideas would be great.
https://github.com/danix800/hbase-indexed
--
Best Regards!
Fei Ding
The following actions allow me to compile your code:
cp ~/hbase-indexed/src/org/apache/hadoop/hbase/regionserver/*.java
src/main/java/org/apache/hadoop/hbase/regionserver/
mkdir src/main/java/org/apache/hadoop/hbase/regionserver/indexed
cp
Finally my HMaster has stabilized and been running for 7 hours. I
believe my networking issues are behind me now. Thank you everyone for
the help.
New issue is my RSes continue to die after about 20 minutes. Again the
cluster is idle. No jobs are running and I get this on all of my RSes
at
On Thursday, July 5, 2012 at 8:25 PM, Jay Wilson wrote:
Finally my HMaster has stabilized and been running for 7 hours. I
believe my networking issues are behind me now. Thank you everyone for
the help.
Awesome.
Looks like the same issue is biting you with the RS too. The RS isn't
I don't see that in the RS logs. Would I see that in the ZK logs?
At this point there is no network. Just a switch. I reduced the number
of nodes to 40 and had all of them placed on the same switch with a
single vlan. I even had the network techs use a completely different
switch just to be
Sorry, I don't have much experience with unit tests. The performance may not
be good since this is my leisure-time work, nor do I have a good understanding
of consistency.
I really want to help, but for now it's maybe beyond my ability. I'm still
digging into the HBase code too... :-)
HBase is quite HUGE
The timeout can be configured using the session timeout configuration. The
default for that is 180s, but that means that if the RS doesn't heartbeat to ZK
for 180s, it's considered dead. Unless the machines are really loaded or GCs
are pausing the RS processes, I don't see any other reason
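For reference, the setting in question is zookeeper.session.timeout in hbase-site.xml (value in milliseconds; ZK's own maxSessionTimeout can cap the effective value):

```xml
<property>
  <name>zookeeper.session.timeout</name>
  <value>180000</value>
</property>
```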
Funny you mention that. I asked the techs to set it up that way.
I went to pull my ZK logs and found that 1 RS is still running. What is
interesting is that the RS is connected to ZK on devrackA-05. The 2 RSes
that died were connected to ZK on devrackA-03. devrackA-03 has ZK and
HMaster on it.
Is your ZK managed by HBase or are you managing it yourself?
BTW - All ZK nodes should be reachable by all nodes in the cluster.
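A quick way to verify that from each node is ZooKeeper's "ruok" four-letter command (2181 is the default client port; host name taken from this thread):

```shell
echo ruok | nc devrackA-03 2181   # a healthy ZK server answers "imok"
```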
The YouAreDeadException would be in RS logs if at all.
On Thursday, July 5, 2012 at 9:38 PM, Jay Wilson wrote:
Funny you mention that. I asked the techs to set it