Hm, this is a production env, so upgrading it is not going to happen any time
soon. Good to know about this bug, thanks. Beats me why it used to work when
all I changed was the ZooKeeper of the target cluster. The znode stayed the same.
Thanks
--
Maybe it's related to this: https://issues.apache.org/jira/browse/HBASE-15393
Could you give 1.2.4 a try?
thanks,
esteban.
--
Cloudera, Inc.
On Wed, Feb 1, 2017 at 10:22 AM, marjana wrote:
They are not using the same ZooKeeper cluster, and they also have different
znode dirs. When they were on the same ZooKeeper, this used to work. But why
would that matter?
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/enable-table-replication-error-Found-no-peer-tp4086006
That's interesting. Are you using a non-standard configuration for
replication? Maybe a different parent ZK root znode?
thanks,
esteban.
--
Cloudera, Inc.
On Wed, Feb 1, 2017 at 9:18 AM, marjana wrote:
Hello,
I am using HBase 1.2.0. I have two clusters of the same version; one is the
master, the other the slave. I created a peer on the master and it shows as
enabled:
hbase(main):003:0> list_peers
 PEER_ID  CLUSTER_KEY                                                                 STATE    TABLE_CFS
 1        zookeeper1.adm01.com,zookeeper2.adm01.com,zookeeper3.adm01.com:2181:/hbase  ENABLED
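For reference, the cluster key shown above packs three settings of the target cluster into a single string: the ZooKeeper quorum (`hbase.zookeeper.quorum`), the client port (`hbase.zookeeper.property.clientPort`), and the parent znode (`zookeeper.znode.parent`). If the slave uses a non-default parent znode, that last field is where it has to appear. A minimal sketch of the format (my own helper for illustration, not an HBase API):

```java
// Sketch: split a replication cluster key of the form
// "quorum:port:znodeParent" into its three parts. The last two
// colon-separated fields are the client port and the parent znode;
// everything before them is the comma-separated quorum.
public class ClusterKey {
    public static String[] parse(String key) {
        int znodeIdx = key.lastIndexOf(':');
        int portIdx = key.lastIndexOf(':', znodeIdx - 1);
        String quorum = key.substring(0, portIdx);
        String port = key.substring(portIdx + 1, znodeIdx);
        String znodeParent = key.substring(znodeIdx + 1);
        return new String[] { quorum, port, znodeParent };
    }

    public static void main(String[] args) {
        String[] parts = parse(
            "zookeeper1.adm01.com,zookeeper2.adm01.com,zookeeper3.adm01.com:2181:/hbase");
        System.out.println("quorum       = " + parts[0]);
        System.out.println("port         = " + parts[1]);
        System.out.println("znode parent = " + parts[2]);
    }
}
```

If the two clusters disagree on the parent znode, a peer keyed with the wrong last field would point replication at an empty part of the target's ZooKeeper tree.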
Just to give an update: the lag and LogQueue did indeed go down after the
port was opened!
Now I have another issue/question, but I will open a new thread.
Thanks!
--
View this message in context:
http://apache-hbase.679495.n3.nabble.com/replication-concepts-enabling-peer-vs-enabling-table-replicat
Can you take a look at TestMasterCoprocessorExceptionWithRemove to see if
it covers your case?
If not, can it be modified to exhibit the behavior you described?
Cheers
On Wed, Feb 1, 2017 at 5:45 AM, Steen Manniche wrote:
I'm trying to specify some sanity checks in my coprocessor's start()
method, throwing exceptions if the checks fail. I have tried throwing
IllegalArgumentException and CoprocessorException from the start
method, but on the client side (a JUnit test), all I get is a
RuntimeException with no traces of the original exception.
I was apparently a bit too naïve in my approach to the exception
handling here. It turns out that the exception bubbling out of HBase
will be the RetriesExhaustedWithDetailsException, which contains one or
more originating exceptions. In this case, the
RetriesExhaustedWithDetailsException contained my original exception.
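That unwrap pattern can be sketched generically. The class below is a hypothetical stand-in for illustration only; in the real HBase client, the wrapper is RetriesExhaustedWithDetailsException and its getCauses() accessor returns the per-action originating exceptions:

```java
import java.util.Arrays;
import java.util.List;

public class UnwrapDemo {
    // Hypothetical stand-in for a retry wrapper that carries one or more
    // originating exceptions (modeled on the pattern described above).
    static class MultiCauseException extends RuntimeException {
        private final List<Throwable> causes;
        MultiCauseException(List<Throwable> causes) {
            super(causes.size() + " action(s) failed");
            this.causes = causes;
        }
        List<Throwable> getCauses() { return causes; }
    }

    // Pull out the type of the first originating exception, so a test or a
    // log line can see what actually failed inside the retry wrapper.
    static String firstCauseType(MultiCauseException e) {
        return e.getCauses().get(0).getClass().getSimpleName();
    }

    public static void main(String[] args) {
        MultiCauseException e = new MultiCauseException(
            Arrays.asList(new IllegalArgumentException("bad start() config")));
        // Iterate the wrapped causes instead of looking only at the wrapper.
        for (Throwable t : e.getCauses()) {
            System.out.println(t.getClass().getName() + ": " + t.getMessage());
        }
    }
}
```

The point is simply to iterate the wrapped causes on the client side rather than inspecting only the top-level RuntimeException.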
Any possible solution for this? As the number of clusters is increasing, we
are hitting the thread limit frequently. Any suggestions?
Regards,
Mukund Murrali
On Sat, Jul 25, 2015 at 4:39 PM, mukund murrali
wrote:
> Sorry for the delay. No it was not a client scanner call. It happens
> during firs