I'm getting the following error while running the hbase shell.
Installation of hbase and hadoop went fine.
ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3
retries
12/10/04 18:11:22 WARN zookeeper.ZKUtil: hconnection Unable to set watcher
on znode /hbase/master
org.apache.zookee
Hi Bharadwaj,
Have you tried connecting to your ZooKeeper shell to see if you have
access to the node /hbase/master?
You can take a look there:
http://zookeeper.apache.org/doc/r3.2.2/zookeeperStarted.html to access
the shell.
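For reference, checking that znode from the ZooKeeper CLI might look like this (a sketch only: the host, port, and /hbase root path are the usual defaults and may differ in your setup):

```shell
# Connect to the ZooKeeper ensemble (default client port is 2181).
bin/zkCli.sh -server 127.0.0.1:2181

# Then, inside the interactive shell:
#   ls /hbase            # list HBase's znodes
#   get /hbase/master    # show the current master znode, if it exists
```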
JM
2012/10/4, Bharadwaj Yadati :
> I'm getting the following error whi
Hi Venkateswara,
What do you have on your master's logs? Do you have anything?
JM
2012/10/4, Venkateswara Rao Dokku :
> Hi,
>I configured a 2-node hbase cluster with hadoop-0.20.2 & hbase 0.92.1. The
> installation went fine. One is the namenode & the other will act as a
> datanode as well as r
I could only see this
2012-10-04 06:25:45,419 INFO org.apache.hadoop.hbase.master.ServerManager:
Waiting on regionserver(s) to checkin
2012-10-04 06:25:46,919 INFO org.apache.hadoop.hbase.master.ServerManager:
Waiting on regionserver(s) to checkin
2012-10-04 06:25:48,420 INFO org.apache.hadoop.hbas
hi, hbase users.
I am wondering how we can guarantee ordering when we put under multiple threads.
I mean that
the threads work like this:
thread1 puts A1 (rowkey)
thread2 puts A2
thread3 puts A3
Due to unpredictable scheduling,
thread1 may put earlier than thread2, and
thread3 may put earlier than thread1.
I'm not 100% sure, but it looks like your "master" is not really your master.
ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node
/hbase/master already exists and this is not a retry
And it's starting as a backup master:
Adding ZNode for
/hbase/backup-masters/oc-PowerEdge-R610,60
Silly question. Why do you care how your data is being stored?
Does it matter if the data is stored in rows where A1, A2, A3 are the order of
the keys, or
if it's A3, A1, A2?
If you say that you want to store the rows in order based on entry time, you're
going to also have to deal with a little
I am not sure I understood your question, but if the data is not stored as
you expect,
I guess it might be a problem with the row key.
As we all know, row keys are sorted in lexicographic order in HBase.
For example, 10 comes before 9. So if your row keys include 1 ... 10,
it is necessary to format the
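To make the lexicographic point concrete, here is a small plain-Java sketch (no HBase dependency; the key values are made up) showing why unpadded numeric keys sort "wrong" and how zero-padding fixes it:

```java
import java.util.Arrays;

public class PaddingDemo {
    public static void main(String[] args) {
        // Unpadded keys sort lexicographically: "10" comes before "9",
        // because '1' < '9' as characters.
        String[] raw = {"9", "10", "2"};
        Arrays.sort(raw);
        System.out.println(Arrays.toString(raw)); // [10, 2, 9]

        // Zero-padding every key to a fixed width restores numeric order.
        String[] padded = {"09", "10", "02"};
        Arrays.sort(padded);
        System.out.println(Arrays.toString(padded)); // [02, 09, 10]
    }
}
```

This is the same ordering HBase applies to row keys, since it compares them as raw bytes.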
I took it that the OP wants to store the rows A1->A3 in the order in which they
came in. So it could be A3,A1,A2 as an example.
So to do this you end up prefixing the rowkey with a timestamp or something.
This is not a good idea, and I was curious as to why the order of entry was
important t
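For illustration, a timestamp-prefixed row key might be built like this (a plain-Java sketch; the salt-bucket scheme is one common way to spread monotonically increasing keys across regions and avoid hotspotting, not something from the original mail):

```java
public class RowKeyDemo {
    // Prefix the key with a small salt bucket derived from the record id,
    // so sequential timestamps do not all land on the same region.
    static String rowKey(long timestampMillis, String id, int buckets) {
        int salt = Math.abs(id.hashCode()) % buckets;
        // Zero-pad both parts so the key sorts correctly as a byte string.
        return String.format("%02d-%013d-%s", salt, timestampMillis, id);
    }

    public static void main(String[] args) {
        // Example: record "A1" written at a fixed epoch-millis timestamp.
        System.out.println(rowKey(1349383545000L, "A1", 16));
    }
}
```

The trade-off the reply alludes to still applies: within a bucket, keys are monotonically increasing, so a single writer stream will keep hitting the same region.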
Kevin,
Not at all preachy. Thank you very much for all the good, useful info. I
think your information will benefit others in the community if they
encounter the same problem I had. I wish I could have seen your post a day earlier.
The cluster that I had that weird problem is a dev cluster and the issue
Hi Dalia,
I believe RowCounter (mapreduce) or AggregationClient (coprocessor) can
solve your problem.
Shumin
On Mon, Oct 1, 2012 at 10:02 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Dalia,
>
> If you are not maintaining an index for this table, then you will have
> to scan th
Yes, this is needed by our indexer for data.
I mean that HBase needs to store some kind of data list based on entry time, and
then the indexer will search for new data by a start key and a limit
count.
For easier understanding, if I used a timestamp row key:
tsdata
1 D1
2
It looks like the max version limit for a table or scanner is not applied
to disregard older versions, prior to counting columns within a
ColumnPaginationFilter or ColumnCountGetFilter. As a result, a Scan or Get
can ultimately retrieve fewer than the requested number of columns when
there is a suf
I am using the ZooKeeper that is provided by HBase. Do I need to
install ZooKeeper separately?
On Thu, Oct 4, 2012 at 8:48 PM, Jean-Marc Spaggiari wrote:
> I'm not 100% sure, but it looks like your "master" is not really your
> master.
>
> ERROR org.apache.hadoop.hbase.zookeeper.Recoverable
Jacques: I think you got me wrong on my statement. I was only requesting
you to think again about my questions assuming that I have seen the jive
video, since there are some differences in our case compared to jive. I
completely understand that all this is voluntary effort and my sincere
thanks for
Thanks Eugeny. We are currently running some experiments based on your
suggestions!
On Thu, Oct 4, 2012 at 2:20 AM, Eugeny Morozov wrote:
> I'd suggest to think about manual major compactions and splits. Using
> manual compactions and bulkload allows to split HFiles manually. Like if
> you would
Seems to be a bug to me. Can you file a JIRA on this?
Regards
Ram
> -Original Message-
> From: Andrew Olson [mailto:noslower...@gmail.com]
> Sent: Friday, October 05, 2012 2:04 AM
> To: user@hbase.apache.org
> Subject: Issue with column-counting filters accepting multiple versions
> of a
Filters are applied before the version counting is performed.
This is a frequent area of contention. If filters were applied after the
version counting, other folks would complain (and have complained - in the early
days filters were in fact evaluated after the version counting - which is why it
w
"Don't bother trying this in production" ;-)
1. Are you sure lookups by key are faster?
2. Updating Lucene files in a lock-free manner and ensuring good
concurrency can be a bit tricky.
3. AFAIK, Lucene files don't fit in HDFS, and thus another distributed
storage is required. Katta does not look as
hi, hbase users.
This question is about a row-key design pattern, I believe.
To always append data at the end of the table, which row-key structures are
recommended?
Multiple threads put many, many rows into the table.
Under this condition, I want to be sure that all of the data is appended at the
end of the tab