Thanks for your reply, Ted. I found what was wrong: it's an index I built
with Phoenix which lost its data on HDFS a few days ago, and HBase always
tries to read it.
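In case anyone hits the same thing: once the backing data is gone, one way
out may be to drop the orphaned index through the Phoenix JDBC driver and
recreate it, which rebuilds it from the data table. A minimal sketch (the
index and table names and the quorum address are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DropOrphanedIndex {
  public static void main(String[] args) throws Exception {
    // Phoenix JDBC URL points at the ZooKeeper quorum.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
         Statement stmt = conn.createStatement()) {
      // Remove the index whose HDFS data was lost; a subsequent
      // CREATE INDEX would rebuild it from the data table.
      stmt.execute("DROP INDEX IF EXISTS IDX_MY_TABLE ON MY_TABLE");
    }
  }
}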
2016-12-16
lk_hbase
From: Ted Yu
Sent: 2016-12-16 11:05
Subject: Re: too many connection to zookeeper
Hi Dima,
As suggested by you, I did not run the HDFS Balancer until running a major
compaction.
So, to recap what I did so far:
I added one node with the same configuration as the other nodes.
I did not run the HDFS Balancer; because of this I am getting an error on
Cloudera.
I have performed a major compaction.
I am still getting the error.
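For reference, a major compaction can be triggered from the Java client as
well as from the shell; a minimal sketch using the HBase 1.x Admin API (the
table name is a placeholder):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TriggerMajorCompaction {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Queues a major compaction; it runs asynchronously on the region servers.
      admin.majorCompact(TableName.valueOf("my_table"));
    }
  }
}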
Thank you, Ted, for your super fast reply.
What I am trying to determine is the timestamp [writeTime from the WALKey]
from the Master cluster that has been successfully replicated to the Slave.
My intention is to compare this time to a certain wall clock time of
interest to guarantee that all
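One thing worth checking before writing a coprocessor: 1.x releases expose
per-sink replication metrics through ClusterStatus, which may already be
close to this. A sketch, run against the slave cluster (the method names
below are from the 1.x API and worth verifying against your exact version):

import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.ServerLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.replication.ReplicationLoadSink;

public class SinkReplicationProgress {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ClusterStatus status = admin.getClusterStatus();
      for (ServerName sn : status.getServers()) {
        ServerLoad load = status.getLoad(sn);
        ReplicationLoadSink sink = load.getReplicationLoadSink();
        if (sink != null) {
          // Timestamp of the last edit this sink applied, taken from the
          // source WALKey; compare against the wall clock time of interest.
          System.out.println(sn + " lastAppliedOpTs=" + sink.getTimeStampsOfLastAppliedOp());
        }
      }
    }
  }
}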
Was dev1 hosting the hbase:meta table (before the stop)?
Looks like you embedded an image, which didn't go through.
Consider using a third-party site if text is not enough to convey the message.
On Thu, Dec 15, 2016 at 7:02 PM, lk_hbase wrote:
> hi,all:
> I'm using hbase 1.2.3
bq. preReplicateLogEntries and postReplicateLogEntries get called on the
slave cluster region server
This is by design.
These two hooks wrap ReplicationSinkService#replicateLogEntries().
ReplicationSinkService represents the sink.
Can you tell us what you need to know on the source side?
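For illustration, a minimal sketch of a sink-side observer built on these
two hooks (HBase 1.2 coprocessor API; the class name and the tracking logic
are illustrative, not a complete solution):

import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

import org.apache.hadoop.hbase.CellScanner;
import org.apache.hadoop.hbase.coprocessor.BaseRegionServerObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionServerCoprocessorEnvironment;
import org.apache.hadoop.hbase.protobuf.generated.AdminProtos.WALEntry;

// Runs on the slave (sink) region servers; tracks the highest source
// WALKey writeTime that has been applied by this server.
public class ReplicationProgressObserver extends BaseRegionServerObserver {

  private final AtomicLong lastAppliedWriteTime = new AtomicLong();

  @Override
  public void postReplicateLogEntries(ObserverContext<RegionServerCoprocessorEnvironment> ctx,
      List<WALEntry> entries, CellScanner cells) throws IOException {
    for (WALEntry entry : entries) {
      // writeTime carried over from the WALKey on the master cluster
      long writeTime = entry.getKey().getWriteTime();
      long prev = lastAppliedWriteTime.get();
      while (writeTime > prev && !lastAppliedWriteTime.compareAndSet(prev, writeTime)) {
        prev = lastAppliedWriteTime.get();
      }
    }
  }
}

It would be registered on the slave's region servers via
hbase.coprocessor.regionserver.classes in hbase-site.xml. Restricting it to
a specific set of tables would mean checking entry.getKey().getTableName()
inside the loop.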
hi, all:
I'm using HBase 1.2.3, ZooKeeper 3.4.9, and Hadoop 2.7.3 for testing. There
are some tables with little data.
I use Phoenix 4.9 for HBase 1.2 as the JDBC layer. Recently I got a "too
many connections" error, and I raised ZooKeeper's maxClientCnxns to 300,
but a few days later I got the error again.
And I found
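A side note in case it helps: raising maxClientCnxns usually just delays
this error. One common cause with Phoenix is JDBC connections that are
opened but never closed, since each underlying HBase connection keeps its
own ZooKeeper session. A sketch of the safe pattern, with a placeholder
table and quorum address:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixQueryExample {
  public static void main(String[] args) throws Exception {
    // try-with-resources guarantees the JDBC connection (and the
    // ZooKeeper session behind it) is released even if the query throws.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM MY_TABLE")) {
      while (rs.next()) {
        System.out.println("rows: " + rs.getLong(1));
      }
    }
  }
}

ZooKeeper's cons four-letter command (echo cons | nc <zk-host> 2181) shows
which client IPs hold the open connections, which helps find the leaking
process.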
We are trying to use the RegionServerObserver to track the current status of
replication [for a specific set of tables only]. Based on some experiments I
see that preReplicateLogEntries and postReplicateLogEntries get called
on the slave cluster region server but not on the Master. The same