I see your point. I won't run disable_peer right away after I add the new one. I'll
wait for the data to replicate and make sure there's no lag.
Thanks!
--
Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html
> I won't miss any data as long as I add_peer (step 1) before I
> disable_peer (step 2).
Yes, Wellington is right.
Consider the case:
1. hlog1 is enqueued for the ORIGINAL_ID peer;
2. the RS rolls to write hlog2;
3. add the new peer (your step #1) with NEW_ID; it will only enqueue
hlog2 for the NEW_ID peer;
4.
Hi Marjana,
I guess OpenInx's (very valid) point here is that between steps #2 and #3, you
need to make sure there's no lag for ORIGINAL_PEER_ID, because if it has a
huge lag, some of the edits pending in its queue may have come in
before you added NEW_PEER_ID in step #1. In that case, since
ORIG
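Before disabling the original peer, the lag can be inspected from the HBase shell. A sketch (the exact fields shown vary by HBase version; ORIGINAL_ID is the peer ID used in this thread):

```
hbase> status 'replication', 'source'
# For each region server, check the source metrics for peer ORIGINAL_ID:
# SizeOfLogQueue should be down to the current WAL only, and
# AgeOfLastShippedOp should be near 0, before running disable_peer.
```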
Hi OpenInx,
Correct, only ZK is being moved; the HBase slave cluster stays the same. I moved
it earlier effortlessly.
In order to move ZK, I will have to stop HBase. While it's down, hlogs will
accumulate for both the NEW_ID and ORIGINAL_ID peers. Once I start HBase, hlogs
for NEW_ID will start replicating. hlogs
Sorry, it seems I misunderstood your question. Only the ZK configs change for
your slave cluster; the HBase slave cluster stays the same.
Considering the steps you list:
1. add_peer NEW_ID 'newZK'
2. disable_peer ORIGINAL_ID 'originalZK'.
3. stop slave hbase. move ZK.
4. start slave hbase. Data starts coming in for NEW_ID peer.
Hi marjana. When you switch to the new replication peer, do you only want the
new replication data redirected to
the new slave cluster? What about the old data in the master cluster? Is
it necessary to migrate that to the
new slave cluster as well? In our XiaoMi clusters, when doing the migration
to a ne
You were thinking something like:
1. add_peer NEW_ID 'newZK'
2. disable_peer ORIGINAL_ID 'originalZK'.
3. stop slave hbase. move ZK.
4. start slave hbase. Data starts coming in for NEW_ID peer.
5. drop_peer ORIGINAL_ID
Not sure about drop_peer, whether I should do it at the end (in case something
goe
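The steps above would look something like this in the HBase shell. A sketch only: the ZK quorum string and peer IDs are placeholders, older releases take the cluster key as a bare string while 2.0+ uses CLUSTER_KEY =>, and note the shell command for step 5 is remove_peer (there is no drop_peer command):

```
# On the master (source) cluster:
hbase> add_peer 'NEW_ID', 'newzk1,newzk2,newzk3:2181:/hbase'  # step 1
hbase> disable_peer 'ORIGINAL_ID'                             # step 2
# Steps 3-4: stop the slave HBase cluster, move ZK, start it again.
hbase> remove_peer 'ORIGINAL_ID'                              # step 5, once NEW_ID has caught up
```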
Yep, that's a better, concise description of what I meant. You could even
do #2 after #4; it doesn't really matter, as long as the source cluster is already
trying to replicate to the new peer ID.
Em qua, 10 de jul de 2019 às 13:03, marjana escreveu:
> You were thinking something like:
>
> 1. add_peer NEW
How about adding it as a new peer, where you define a new peer ID for the
new ZK quorum? Until your new ZK quorum address is effective, replication
would accumulate edits; then, once you complete the ZK move, it will resume
replication to that one, and the original peer ID could be removed.
Em ter, 9 de
Hello,
I have master-slave replication configured. My slave cluster's ZK needs to
be moved. Is there a way to alter the peer on my master cluster so it points to
the new ZK?
If I disable_peer and then remove_peer, I am afraid my replication will stop and
all my tables will have replication disabled.
There