Hi Stack,
I will delete everything and reconfigure HBase from scratch.
I will then attach logs in pastebin if I find the same issues again.
On Fri, Mar 21, 2014 at 11:20 AM, Stack st...@duboce.net wrote:
Put the full logs up in pastebin. The below seem like symptoms, not the
problem.
St.Ack
I do this *all* the time, and I have never seen an issue like this. So this is
interesting.
Is it possible that ZK happened to have picked an in-use ephemeral port? (But in
that case I would have expected this to fail only once and work the next time.)
-- Lars
Depends. If a machine fails hard and HDFS 2.x is set up correctly, I'd expect
this to be the ZK timeout (180s by default in 0.94, but it can be lowered) + a few
minutes to rejigger things.
-- Lars
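For reference, the session timeout Lars mentions lives in hbase-site.xml; a minimal fragment (180000 ms is the 0.94 default he cites; the lower value shown is just an illustration):

```xml
<!-- hbase-site.xml: lower the ZooKeeper session timeout so a hard
     machine failure is detected sooner (0.94 default is 180000 ms). -->
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>
```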
From: Demai Ni nid...@gmail.com
To: user@hbase.apache.org
Hi Stack/Lars,
I configured HBase again in pseudo-distributed mode.
Again I got some exceptions, which I have shared in pastebin:
zookeeper log - http://pastebin.com/6vgbm987
master log - http://pastebin.com/kcjpd1Zp
region log - http://pastebin.com/vmfE2HG2
There are no errors in any of the 3 Hadoop logs.
Hi,
I could not stop myself from restarting the HBase cluster. :(
My predictions were correct.
I restarted the cluster and now I am getting those exceptions.
Logs after restarting the HBase cluster:
Zookeeper - http://pastebin.com/UBbbiERk
Master - http://pastebin.com/6GZH5hZK
Region -
Is it possible to use webhdfs to export a snapshot to another cluster? If so,
what would the command look like?
TIA
Brian
ExportSnapshot uses the FileSystem API
so you'll probably be able to say: -copy-to webhdfs://host/path
Matteo
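A sketch of what Matteo suggests (the snapshot name, host, and path below are placeholders):

```shell
# Export an existing snapshot to another cluster over webhdfs.
# ExportSnapshot goes through the FileSystem API, so any supported
# filesystem scheme should work as the -copy-to target.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_snapshot \
  -copy-to webhdfs://remote-host:50070/hbase
```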
On Fri, Mar 21, 2014 at 12:09 PM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
Is it possible to use webhdfs to export a snapshot to another cluster? If
so,
what would the
Also, Matteo: just like distcp, could one advantage of this (using webhdfs while
copying) also be that even if the versions are not the same, we can still
copy?
Regards,
Shahab
On Fri, Mar 21, 2014 at 8:14 AM, Matteo Bertozzi theo.berto...@gmail.com wrote:
ExportSnapshot uses the FileSystem API
so
Exporting across versions was why I tried webhdfs. I have a cluster running
HBase 0.94 and wanted to export a
table to a different cluster running HBase 0.96. I got the export to work, but
attempting to
do a restore_snapshot results in:
TableInfoMissingException: No table descriptor file
The directory layout has changed, but it is easy to fix by hand.
0.94 path is /hbase/.archive/TABLE_NAME -> 0.96 path is
/hbase/archive/data/TABLE_NAME
0.94 table info is in .hbase-snapshot/NAME/.tableinfo.xyz -> 0.96 table info is
in .hbase-snapshot/NAME/.tableinfo/tableinfo.xyz
Matteo
On Fri, Mar 21, 2014
I'm new to HBase and I'm trying to load my first region observer
coprocessor. I'm working with Cloudera's HBase 0.96.1.1-cdh5.0.0-beta-2.
These are the basic steps I've tried. It's about as basic a process as you can get;
I'm hoping just to put some stuff in the log and prevent a row from going
into the table.
Sorry, forgot the namespace: the 0.96 path is
/hbase/archive/data/default/TABLE_NAME
Matteo
On Fri, Mar 21, 2014 at 4:38 PM, Matteo Bertozzi theo.berto...@gmail.com wrote:
The directory layout is changed, but easy to fix by hand.
94 path is /hbase/.archive/TABLE_NAME - 96 path is
Hi all,
When taking a full backup using the DistCp utility, does the cluster need to be
down? If yes, then what will happen if it is up and running?
Hi,
Your HDFS cluster needs to be up.
You can keep your HBase on, but if there are writes in the cluster, then you
might capture an inconsistent state of your HBase files...
JM
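A sketch of the DistCp invocation being discussed (namenode hosts and paths are placeholders; as noted above, files copied while HBase is taking writes may end up inconsistent):

```shell
# Copy the HBase root directory to another HDFS cluster.
# Safest with writes quiesced (e.g. tables disabled or flushed first).
hadoop distcp hdfs://src-nn:8020/hbase hdfs://dst-nn:8020/hbase-backup
```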
2014-03-21 6:53 GMT-04:00 prmdbaora pinkpantherp...@gmail.com:
Hi all,
By using Distcp utility full backup does
My paths were different, and it was .tabledesc rather than .tableinfo, but I
got past
the problem. Now the restore_snapshot seems to be hung, and I’m seeing many
warnings like the following in the logs:
master.RegionStates: Failed to open/close 24635905eba622ed4911a498b7848caa
on
If you look at the RS log, you can probably get us more information and the
full stack trace.
Matteo
On Fri, Mar 21, 2014 at 5:22 PM, Brian Jeltema
brian.jelt...@digitalenvoy.net wrote:
My paths were different, and it was .tabledesc rather than .tableinfo, but
I got past
the problem. Now
Thanks. Looks like I’ve botched the file layout, even though the hbase shell
seemed to
understand my snapshot. I’ll fight with it a bit.
Brian
On Mar 21, 2014, at 1:25 PM, Matteo Bertozzi theo.berto...@gmail.com wrote:
if you look at the RS log you probably can get us more information and the
HBASE-5258 dropped per-region coprocessor list from HServerLoad.
Have you tried specifying the namenode information in the shell command? e.g.
'coprocessor'='hdfs://example0:8020...'
Please also take a look at region server log around the time table was
enabled.
Cheers
On Fri, Mar 21, 2014 at 9:38
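A sketch of the table_att shell command Ted is describing (the jar path, class name, and priority are hypothetical; the attribute format is jar-path|class|priority|args):

```
hbase(main)> disable 'events'
hbase(main)> alter 'events', METHOD => 'table_att',
  'coprocessor' => 'hdfs://example0:8020/user/hbase/my-cp.jar|com.example.MyRegionObserver|1001|'
hbase(main)> enable 'events'
```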
Scan s = new Scan();
s.addColumn(cf1, cq1);
This will return you rows, but each row will contain only this column's value
(cf1:cq1).
I guess you want the entire row (all columns) for a query like
select * from table where c1 != null
Correct, Vimal? You will need a Filter then.
-Anoop-
On Thu, Mar 20,
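A sketch of the Filter Anoop mentions, assuming the goal is "all columns of rows where cf1:cq1 exists" (family/qualifier names are placeholders; API as of HBase 0.94):

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Rough equivalent of "select * where c1 != null": return whole rows,
// but only those rows where cf1:cq1 is actually present.
Scan s = new Scan();
SingleColumnValueFilter f = new SingleColumnValueFilter(
    Bytes.toBytes("cf1"), Bytes.toBytes("cq1"),
    CompareOp.GREATER_OR_EQUAL, Bytes.toBytes(""));  // always-true value check
f.setFilterIfMissing(true);  // drop rows that lack cf1:cq1 entirely
s.setFilter(f);
```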
A region observer CP does not have any separate connection with ZK. The CPs which
use ZK use the same connection established by the RS. Which CP do you want to
use?
-Anoop-
On Thu, Mar 20, 2014 at 6:51 AM, Jignesh Patel jigneshmpa...@gmail.com wrote:
How to configure coprocessor-region observer to use
As Anoop described, region observers don't use ZK directly. Can you
describe more of what you are trying to do in your coprocessor -- how / why
you are connecting to zookeeper, or even provide sample code from your
coprocessor implementation?
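For the earlier question about a basic region observer (log a Put and optionally reject it), here is a minimal sketch against the 0.96-era API; the class name and rejection policy are hypothetical:

```java
import java.io.IOException;
import java.util.Arrays;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

// Minimal region observer: log each Put and optionally veto it.
public class MyRegionObserver extends BaseRegionObserver {
  private static final Log LOG = LogFactory.getLog(MyRegionObserver.class);

  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Put put, WALEdit edit, Durability durability) throws IOException {
    LOG.info("prePut for row " + Arrays.toString(put.getRow()));
    if (shouldReject(put)) {
      ctx.bypass();  // skip default processing: the Put never lands
    }
  }

  private boolean shouldReject(Put put) {
    return false;  // placeholder policy
  }
}
```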
On Fri, Mar 21, 2014 at 10:43 AM, Anoop John
Your cluster seems fine.
Item #1 above should probably not be WARN. It is over-sharing on what zk
is up to during startup (#5 is related). #2 and #3 are just our logging
that root and meta are not where zk says they are so we reassign which
later in the log you'll see succeeds; this is just how
No entry in either log. I see canary checks of 'events', but no indication
of success/failure or even an attempt to load the coprocessor. It's like
it doesn't recognize either the table_att 'coprocessor' or 'COPROCESSOR'; I tried
both just to see if case mattered. I would love to see a failure of some
Todd:
Have you seen HBASE-9218 ?
Cheers
On Fri, Mar 21, 2014 at 2:57 PM, Todd Gruben tgru...@gmail.com wrote:
no entry in either log i see canary checks of 'events', but no indication
of success/failure or even trying to load the Coprocessor. Its like either
it doesn't recognize the
Hi Vimal,
did you rebuild HBase against Hadoop 1.2.1? The released tarballs are built
against Hadoop 1.0.4 and won't work with Hadoop 1.2.x or 2.x.y.
-- Lars
From: Vimal Jain vkj...@gmail.com
To: user@hbase.apache.org; lars hofhansl
On Fri, Mar 21, 2014 at 2:57 PM, Todd Gruben tgru...@gmail.com wrote:
no entry in either log i see canary checks of 'events', but no indication
of success/failure or even trying to load the Coprocessor. Its like either
it doesn't recognize the table_att 'coprocessor' or 'COPROCESSOR', I tried
Stack,
Thanks a lot for the analysis, so nothing to worry about?
Lars,
I did not build HBase from source.
I simply downloaded the tars from the HBase and Hadoop download pages (HBase from
the stable directory (i.e. 0.94.17) and Hadoop from the stable1 directory
(i.e. 1.2.1)).
Do I need to build from source?