I deduced that it was one of the old WALs because, from the UI, I see that
these old WALs are not being replicated. However, I'll do another round of
checks to see if I can find something more. Would enabling debug help me find
more information?
Thanks again for your help.
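For reference, enabling DEBUG on the replication code is usually enough to see
which WAL each source is working on. On HBase 2.5, which ships log4j2, a sketch
of the change in conf/log4j2.properties (property names from memory; verify
against your own file):

  # Hedged sketch: turn on debug logging for the replication package only
  logger.replication.name = org.apache.hadoop.hbase.replication
  logger.replication.level = debug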
The stack trace you posted is garbled, so it is not easy to find out
which file actually blocks the replication progress...
Could you please double check the WAL file which blocks the
replication? Is it really one of these old WAL files?
Thanks.
Hamado Dene wrote on Monday, September 16, 2024 at 21:57:
> Thanks…
On Wednesday, September 11, 2024 at 16:26:55 CEST, Hamado Dene wrote:

Hi HBase Community,
We are currently facing an issue in our production environment with HBase
replication, and I would greatly appreciate any guidance or suggestions the
community may have.
We are running HBase version 2.5.8, and in the logs, we consistently encounter
the following warning:
2024-09-11T15:51:11,468 WARN
[RS_CLAIM_REPLICATION_QUEUE-regionserver/rzv-db09-hd:16…
Can anyone assist me in resolving this issue I'm facing?
Thank you in advance!
Hamado Dene
I have four HBase clusters (A, B, C, D) with replication between them. I've
configured each cluster to replicate to all other clusters in an attempt to
have a hot-hot, eventually-consistent-across-all-clusters setup. One nice
property of this configuration is that any individual cluster…
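For concreteness, a full mesh like that is wired up with add_peer on every
cluster; a sketch from cluster A's hbase shell (peer ids and ZK quorums here
are hypothetical), repeated symmetrically on B, C, and D:

  add_peer 'to_b', CLUSTER_KEY => "zk-b1,zk-b2,zk-b3:2181:/hbase"
  add_peer 'to_c', CLUSTER_KEY => "zk-c1,zk-c2,zk-c3:2181:/hbase"
  add_peer 'to_d', CLUSTER_KEY => "zk-d1,zk-d2,zk-d3:2181:/hbase"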
… wrote on Saturday, November 18 at 20:21:
Dear Team,
In one of the HBase clusters, some of the replication queues have not been
properly removed, though the concerned peerId has been removed from
list_peers.
Due to this, frequent region server restarts have been
occurring in the cluster where replication has to be written.
I…
Replication can also replicate the WAL file which is currently being
written, so the sizeOfLogQueue is usually at least 1, even if there are no
writes to the region server.
You can see how timeStampNextToReplicate is calculated on branch-1; I think
that is the correct way to fix the metrics issue.
Hi Duo
[image: Screenshot 2023-08-22 at 5.39.58 PM.png]
In the above metrics, sizeOfLogQueue is always 1 even though we don't
have any entry for the regionserver in the oldWALs folder.
https://github.com/apache/hbase/blob/rel/1.4.14/hbase-server/src/main/java/org/apache/hadoop/hbase/replic…
If it is just a metrics issue then HBASE-22784 won't help. I guess the
problem is that the replication lag is calculated by comparing the
current time and the time when we shipped the last edit, so if there is
no new edit, the replication lag will keep growing.
Looking at the current code…
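A toy model of that arithmetic (illustrative only, not the actual HBase
source): the last-shipped timestamp never advances when there are no new
edits, so the reported lag grows without bound.

  // Toy sketch of the lag computation described above; all names are
  // illustrative, not HBase's.
  public class ReplicationLagSketch {
      static long replicationLag(long lastShippedTsMillis, long nowMillis) {
          return Math.max(0, nowMillis - lastShippedTsMillis);
      }
      public static void main(String[] args) {
          long lastShipped = System.currentTimeMillis() - 3_600_000L; // an hour ago
          // With no new edits, every call reports a larger lag:
          System.out.println(replicationLag(lastShipped, System.currentTimeMillis()));
      }
  }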
Hi Duo Zhang
It's just metrics. In that cluster there is no active write, so we
don't have any data to replicate to the other cluster.
On Wed, 16 Aug 2023 at 08:01, 张铎(Duo Zhang) wrote:
> Is this just a metrics issue or is there an actual replication lag?
>
Is this just a metrics issue or is there an actual replication lag?
Valli wrote on Friday, August 11, 2023 at 22:51:
Hello HBase Community,
We recently upgraded our HBase cluster from version 1.2.6 to 1.4.14 and
have encountered an issue with replication lag in our Disaster Recovery
(DR) cluster. We have two clusters in our setup: an active write cluster
and a DR cluster that receives replication from the active cluster…
Thanks Duo
I will patch this and verify for the issue I mentioned above.
On Sun, Dec 12, 2021, 8:06 PM 张铎(Duo Zhang) wrote:
We have fixed several replication related issues which may cause data loss,
for example, this one
https://issues.apache.org/jira/browse/HBASE-26482
For serial replication, if we miss some wal files, it usually causes
replication to be stuck...
Mallikarjun wrote on Sunday, December 12, 2021 at 18:19:
SyncTable is to be run manually when you think there can be
inconsistencies between the two clusters, and only for a specific time period.
As soon as you disable serial replication, it should start replicating from
the time it was stuck. You can build dashboards from the JMX metrics generated
by the HMaster to…
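For anyone new to the tool: SyncTable is driven by a HashTable job on the
source followed by a SyncTable job against the target. A sketch of the usual
invocation (paths, table names, and the source ZK quorum are hypothetical;
--dryrun first is a good habit):

  hbase org.apache.hadoop.hbase.mapreduce.HashTable my_table /hashes/my_table
  hbase org.apache.hadoop.hbase.mapreduce.SyncTable --dryrun=true \
      --sourcezkcluster=src-zk1,src-zk2,src-zk3:2181:/hbase \
      hdfs://source-cluster/hashes/my_table my_table my_table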
le utility" what do you mean?I am new to Hbase,
> I am not yet familiar with all Hbase tools.
>
>
>
> Il domenica 12 dicembre 2021, 10:15:01 CET, Mallikarjun <
> mallik.v.ar...@gmail.com> ha scritto:
>
> We have faced issues with serial replication when one
We have faced issues with serial replication when one of the region servers
of either cluster goes into hardware failure, typically memory, from my
understanding. I could not spend enough time to reproduce it reliably and
identify the root cause, so I don't know why it happens.
Issue could be your se…
I'm using HBase 2.2.6 with Hadoop 2.8.5. Yes, my serial replication is
enabled. This is my peer configuration:

| Peer Id | Cluster Key | Endpoint | State | IsSerial | Bandwidth | ReplicateAll | Namespaces | Exclude Namespaces | Table Cfs | Exclude Table Cfs |
| replicav1 | acv-db10-h… |
Which version of hbase are you using? Is your replication serial enabled?
---
Mallikarjun
On Sun, Dec 12, 2021 at 1:54 PM Hamado Dene
wrote:
> Hi Hbase community,
>
> On our production installation we have two hbase clusters in two different
> datacenters. The primary datacenter re…
…replication scope to 1 on
the primary. The peer pointing to the ZK quorum of the secondary cluster is
configured on the primary.
Initially, replication worked fine and data was replicated. We have recently
noticed that some tables are empty in the secondary datacenter, so most likely
the data is no longer…
Yes, the enable_table_replication command will check whether the table exists
in the peer cluster and, if so, compare the CFs. You said it correctly: the
difference in the table descriptors results in failure of this command. You
can enable replication at the source using the alter table command. We can fix…
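For reference, the alter-based route is a one-liner in the hbase shell (table
and CF names hypothetical):

  alter 'my_table', {NAME => 'cf1', REPLICATION_SCOPE => '1'}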
We tested replication between Apache HBase 1.4 and Apache HBase 2.2. We found
that if you use the 'enable_table_replication' command to enable replication,
it compares table schemas before starting. But HBase 2 has more default
parameters than HBase 1, which leads to schema comparison failure…
Yes, replication interfaces are compatible between these two major
versions.
> So I created two clusters in AWS and tried to enable replication between
> HBase 1.4.13 and 2.2.5. But I got the error "table exists but descriptors
> are not the same" (I will put a screenshot in the attachment)…
We are thinking about a similar issue. Our clusters are much smaller - 4
clusters of 100 RS each - but we need to process data continuously too. So I
created two clusters in AWS and tried to enable replication between HBase
1.4.13 and 2.2.5. But I got the error "table exists but descriptors are not
the same" (…
…knowledge of any
incompatibilities in the replication layer between these 2 versions, as
that is not very well covered in the public docs afaict. I'm aware this
will likely be a multi-month or year+ long project for us, and am just
starting the investigation phase :) It honestly looks like it might be an
easi…
To: user@hbase.apache.org
Subject: Upgrading cdh5.16.2 to apache hbase 2.4 using replication
We are running about 40 HBase clusters, with over 5000 regionservers total.
These are all running cdh5.16.2. We also have thousands of clients (from APIs
to kafka workers to hadoop jobs, etc) hitting
…whether replication is compatible between these two
versions. If so, we probably would consider swapping onto upgraded clusters
using backup/restore + replication. If we were to go this route we'd
probably want to consider bi-directional replication so that we can roll
back to the old cluster if
Below is the exception in the RS related to the replication.
Exception in one cluster where active writing takes place:
2021-04-26 13:12:53,917 INFO
[regionserver//10.XX.235.XX:16020-SendThread(10.XX.212.XXX:2171)]
zookeeper.ClientCnxn: Socket connection established to
10.XX.212.XXX/10.XX.212.XXX:2171, …
Hi Roshan,
Are you seeing any replication-related exceptions in your RS logs?
On Tue, Apr 27, 2021 at 1:59 PM Roshan wrote:
Hi,
In the hbase-1.4.10, I have enabled replication for all tables and
configured the peer_id. list_peers provides the below result:
hbase(main):001:0> list_peers
 PEER_ID CLUSTER_KEY ENDPOINT_CLASSNAME STATE TABLE_CFS BANDWIDTH
 1 10.XX.221.XX,10.XX.234.XX,10.XX.212.XX:2171:/…
Replication from hbase1 to hbase2 works.
Your issue looks to be HBASE-24354. It is fixed in 2.3.0+ only.
S
On Fri, Jul 17, 2020 at 8:11 AM Vinu Raj wrote:
Is HBase WAL replication from a cluster with HBase 1.1.2 to a cluster with
HBase 2.0.2 supported? Tried a simple test where the following table was
created in both clusters:
create 'repl_test', { NAME => 'cf1', REPLICATION_SCOPE => '1'}
When enable_table_replication was issued…
…it's just that we don't have enough
bandwidth to do everything we're asked to do.
Specifically around releases, we're always looking for more people to
help drive the release process. Those who can corral Jira issues, do
testing, and stage release candidates are very welcome and desired to
help make our releases happen on a regular cadence. If you have the
time/resources to help out, let us know on the dev list.
- Josh
On 1/27/20 11:04 AM, Whitney Jackson wrote:
Hi,
I've been running 1.4.12 with replication and experiencing the "random
region server aborts" as described here:
https://issues.apache.org/jira/browse/HBASE-23169
The underlying problem and fix (woohoo!) seems to be here:
https://issues.apache.org/jira/browse/HBASE-23205
I…
Thanks Wellington.
On Mon, Jan 20, 2020 at 9:55 PM Wellington Chevreuil <
wellington.chevre...@gmail.com> wrote:
Data will be targeted to replication only once you have a replication peer
added and your related table column family REPLICATION_SCOPE is set to '1'.
If you had your table column family REPLICATION_SCOPE set to '1', but no
replication peer added, then no data inserted/modified…
If you are planning to enable it using the 'alter' command or the Java admin
API, no issues; replication will not be initiated until you add a replication
peer. If you try to do it using the hbase shell "enable_table_replication"
command, the command will fail.
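A minimal sequence matching that description (table, CF, peer id, and quorum
are hypothetical):

  # Setting the scope alone queues nothing for shipping until a peer exists:
  alter 'my_table', {NAME => 'cf1', REPLICATION_SCOPE => '1'}
  # Replication actually starts once a peer is added:
  add_peer 'dr', CLUSTER_KEY => "dr-zk1,dr-zk2,dr-zk3:2181:/hbase"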
I have a query regarding data replication in HBase, for the case when cluster
replication is enabled after data has been inserted into a table.
If we have an existing HBase table with replication enabled, and at a later
point in time HBase cluster replication is enabled, is only the data saved
after cluster replication was enabled…
Hi,
Will there be any issues if we enable table-level replication
(REPLICATION_SCOPE => 1) when cluster replication is disabled? Will there be
any issues with WAL rollover or accumulation?
Thanks,
Sudhir
…issue, with your logs.
If you are suffering the same issue, you can find many old WALs in the HDFS
oldWALs directory ({hbase.rootdir}/oldWALs) and in the zookeeper replication
queues ({znodeParent}/replication/rs/{rs}/{peer}/), and you can also detour
the issue by assigning a region of the tables being replicated to the region
server.
Hello all,
Sometimes we observe that replication is not working on HBase 1.4.10:
hbase07.prod.hbcluster:
SOURCE: PeerID=lp_analytics, AgeOfLastShippedOp=0, SizeOfLogQueue=1,
TimeStampsOfLastShippedOp=Thu Jan 01 03:00:00 MSK 1970, Replication
Lag=1573052815347
SINK: …
I see your point. I won't disable_peer right away after I add the new one. I
will wait for data to replicate and make sure there's no lag.
Thanks!
> I won't miss any data as long as I add_peer (step 1) before I
> disable_peer (step 2).
Yes, Wellington is right.
Consider the case:
1. hlog1 is enqueued in the ORIGINAL_ID peer;
2. the RS rolls to write hlog2;
3. add the new peer (your step #1) with NEW_ID; it will only enqueue
hlog2 to the NEW_ID peer;
4. so the edits still pending in hlog1 will only ship through the
ORIGINAL_ID peer…
Hi Marjana,
I guess OpenInx's (much valid) point here is that between step #2 and #3, you
need to make sure there's no lag for ORIGINAL_PEER_ID, because if it has
huge lags, it might be that some of the edits pending on its queue came
before you added NEW_PEER_ID in step #1. In that case, since
ORIGINAL_PEER_ID…
Hi OpenInx,
Correct, only ZK is being moved, hbase slave stays the same. I moved that
earlier effortlessly.
In order to move ZK, I will have to stop hbase. While it's down, hlogs will
accumulate for the NEW_ID and ORIGINAL_ID peers. Once I start hbase, hlogs
for NEW_ID will start replicating. hlogs…
…remove the peer right now.
Thanks.
On Thu, Jul 11, 2019 at 9:45 AM OpenInx wrote:
Hi marjana. When you alter to the new replication peer, do you only want the
new replication data redirected to the new slave cluster? How about the old
data in the master cluster? Is it necessary to migrate that to the new slave
cluster also? In our XiaoMi clusters, when doing the migration to a…
Yep, that's a better, concise description of what I meant. You could even
do #2 after #4; it doesn't really matter, as long as the source cluster is
already trying to replicate to the new peer id.
On Wed, Jul 10, 2019 at 13:03, marjana wrote:
You were thinking something like:
1. add_peer NEW_ID 'newZK'
2. disable_peer ORIGINAL_ID 'originalZK'.
3. stop slave hbase. move ZK.
4. start slave hbase. Data starts coming in for NEW_ID peer.
5. drop_peer ORIGINAL_ID
Not sure about drop_peer, if I should do it at the end (in case something
goes wrong)…
How about adding it as a new peer, where you define a new peerID for the
new ZK quorum? Until your new ZK quorum address is effective, replication
would accumulate edits, then once you complete the ZK move, it will resume
replication to that one, and the original peer id could be removed.
On Tue, Jul 9, 2019, marjana wrote:
Hello,
I have master-slave replication configured. My slave cluster's ZK needs to
be moved. Is there a way to alter peer on my master cluster so it points to
the new ZK?
If I disable_peer then remove_peer, I am afraid my replication will stop and
all my tables will have replication disabled.
On Fri, Jun 28, 2019 at 9:47 AM James Kebinger
wrote:
> We were thinking that meta replication would make a hot meta table on a
> busy cluster less hot rather than more hot.
Are you running read replicas, or replicating the meta to another cluster?
S
Hey Stack, thanks for your response
We were thinking that meta replication would make a hot meta table on a
busy cluster less hot rather than more hot. Is the meta replication feature
for uptime rather than performance if it, as you said, will "up the
temperature even more"?
I was surprised…
If early 2.0 or 2.1 versions, perhaps it is HBASE-21292 ?
S
On Wed, Jun 26, 2019 at 6:47 AM James Kebinger
wrote:
> Hello,
> We're experimenting with meta table replication
Replicating the meta table itself? What are you thinking?
…meta is heavily cached client-side, but this problem could also emerge if
misbehaving clients are rapidly cycling their hbase client.
--James
On Wed, Jun 26, 2019, 9:47 AM James Kebinger wrote:
Hello,
We're experimenting with meta table replication but have found that the
region servers hosting the replicas will get extremely high priority RPC
handler usage, sometimes all the way to 100% at which point clients start
to experience errors - the priority RPC handler usage is much higher…
Please do not cross-post lists. I've dropped dev@hbase.
This doesn't seem like a replication issue. As you have described it, it
reads more like a data-correctness issue. However, I'd guess that it's
more related to timestamps rather than being an issue on your cluster.
I…
> …synchronized to A. I
> configured replication both on cluster A and B for table T using 'add_peer'
> and 'enable_table_replication' in the HBase shell (firstly A to B, secondly
> B to A). Then I did a test in the HBase shell as below:
> 1. Put a record by typing "put 'T','…
Re: HBase Replication Between Two Secure Clusters With Different
Kerberos KDC's
Thanks, here it is:
1. On the source cluster; it will be identical on the target cluster as both
use the same Kerberos realm name, though each has its own cluster-specific
KDC:
$ more /etc/zookeeper/conf/ser
s.conf",
>
> 2. and using zkCli.sh to getAcl /hbase.
> 3. BTW, what was your login principal when executing "add_peer" in
> hbase shell.
>
From: Saad Mufti
Sent: 23 May 2018 01:48:17
To: user@hbase.apache.org
Subject: HBase Replication Between Two Secure Clusters With Different Kerberos
KDC's
Hi,
Here is my scenario: I have two secure/authenticated EMR-based HBase
clusters; both have their own cluster-dedicated KDC (using EMR support for
this, which means we get Kerberos support by just turning on a config flag).
Now we want to get replication going between them. For other application…
…PROD, because replication might be taking only the latest WALs over the time
period while replicating to COB.
In COB, if I run the RowCounter map-reduce job on a snapshot taken of the
table, it gathers records from offline regions too, leading to an incorrect
extra row count.
Regards,
CM
sudo -u hbase hbase \
  org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication peer1 table1
On Thu, Dec 28, 2017 at 5:00 AM, Sawant, Chandramohan <
chandramohan.saw...@citi.com.invalid> wrote:
Hi All,
Before enabling replication, a snapshot of the PROD table was taken and
restored in COB, and then replication was enabled; the region count matched
at that time.
However, after a few days, COB is showing more regions than PROD, where
replication is enabled one way from PROD to COB.
What is the reason for…
Hi all,
I'm running HBase replication on CDH 5.9.0 and am wondering if there are
known configurations/methods to decrease the replication lag/latency. I am
monitoring replication latency via two separate methods:
1) The JMX 'replication.source.ageOfLastShippedOp' exposed by the…
Hi Kahlil,
Your understanding is right: HBase replication is across data centres,
whereas HBase read replicas are more for providing faster availability of
reads.
>> …not be the proper tool to use here since it appears to have higher
>> replication latency and be more catered towards Di…
Hi All,
I have some questions about when to use HBase Replication vs. HBase Read
Replicas. They seem to accomplish similar-ish things, and I'm trying to
figure out which I should use.
I've read through the documentation, but I am confused on a few points. It
seems that HBase Replication…
Hi All,
I have set up HBase replication between two clusters containing 25
nodes each. The inter-datacenter network link has a capacity of 500
MBPS.
I have been running some tests to understand the speed of replication.
I am observing that the replication speed does not go above 5
MBPS.
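Two knobs commonly raised for shipping throughput are the per-batch limits; a
hedged hbase-site.xml sketch (the defaults quoted - 64 MB and 25000 entries -
are from memory, so verify for your version before changing anything):

  <property>
    <name>replication.source.size.capacity</name>
    <value>268435456</value> <!-- max bytes per shipped batch; default ~64 MB -->
  </property>
  <property>
    <name>replication.source.nb.capacity</name>
    <value>50000</value> <!-- max entries per shipped batch; default 25000 -->
  </property>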
Thx. Will try and see what I can find.
Saad
On Mon, May 1, 2017 at 5:41 AM Anoop John wrote:
At the server side (RS) as well as at the client side, put the config
"hbase.client.rpc.codec" with the value
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags. Then you will
be able to retrieve the tags back on the client side and check.
-Anoop-
On Mon, May 1, 2017 at 2:59 AM, Saad Mufti wrote:
Is there any facility to check what tags are on a Cell from a client side
program? I started writing some Java code to look at the tags on a Cell
retrieved via a simple Get, but then started reading around and it seems
tags are not even returned (not returned at all, or only in certain cases;
I'm not sure)…
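For what it's worth, a minimal sketch of that kind of client-side check,
assuming an HBase 1.x client, the codec setting Anoop describes above, and
hypothetical table/row names (in 2.x the tag accessors moved to
RawCell/PrivateCellUtil):

  import java.util.List;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.Cell;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.Tag;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ShowCellTags {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          // Needed on both client and server so tags survive the RPC:
          conf.set("hbase.client.rpc.codec",
              "org.apache.hadoop.hbase.codec.KeyValueCodecWithTags");
          try (Connection conn = ConnectionFactory.createConnection(conf);
               Table table = conn.getTable(TableName.valueOf("my_table"))) {
              Result r = table.get(new Get(Bytes.toBytes("row1")));
              for (Cell cell : r.rawCells()) {
                  // 1.x tag accessors; tags sit in the cell's backing array.
                  List<Tag> tags = Tag.asList(cell.getTagsArray(),
                      cell.getTagsOffset(), cell.getTagsLength());
                  for (Tag t : tags) {
                      System.out.println("tag type=" + t.getType());
                  }
              }
          }
      }
  }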
Thanks for the feedback, I have confirmed that in both the main and replica
cluster, hbase.replication.rpc.codec is set to:
org.apache.hadoop.hbase.codec.KeyValueCodecWithTags
I have also run a couple of tests and it looks like the TTL is not being
obeyed on the replica for any entry. Almost as if…
Ya, can you check whether the replica cluster is NOT removing ANY of the
TTL-expired cells (as per your expectation from the master cluster), or only
some? Is there too much clock time skew between the source RS and the peer
cluster RS? Just check.
BTW, can you see what the hbase.replication.rpc.codec configuration
value is…
Hi,
I have a main HBase 1.x cluster and some of the tables are being replicated
to a separate HBase cluster of the same version, and the table schemas are
identical. The column family being used has TTL set to "FOREVER", but we do
a per-Put TTL in every Put we issue on the main cluster.
Data is being…
…methods which will fetch you these metric values.
HTH
Regards,
Ashish
-Original Message-
From: Sreeram [mailto:sreera...@gmail.com]
Sent: 24 April 2017 14:01
To: user@hbase.apache.org
Subject: API to get HBase replication status
Hi,
I am trying to understand if the hbase shell commands to get the
replication status are based on any underlying API.
Specifically I am trying to fetch values of last shipped timestamp and
replication lag per regionserver. The ReplicationAdmin does not seem
to provide that information…
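One programmatic route that does not go through ReplicationAdmin is reading
the region server's replication metrics over JMX. A hedged sketch - the bean
name is from memory and the remote JMX port (10102 here) must be enabled on
the RS, so verify everything against the RS /jmx endpoint:

  import javax.management.MBeanAttributeInfo;
  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class ReplicationMetricsViaJmx {
      public static void main(String[] args) throws Exception {
          // "rs-host" and the port are hypothetical placeholders.
          JMXServiceURL url = new JMXServiceURL(
              "service:jmx:rmi:///jndi/rmi://rs-host:10102/jmxrmi");
          try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
              MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
              ObjectName rep = new ObjectName(
                  "Hadoop:service=HBase,name=RegionServer,sub=Replication");
              for (ObjectName name : mbsc.queryNames(rep, null)) {
                  for (MBeanAttributeInfo attr : mbsc.getMBeanInfo(name).getAttributes()) {
                      // Per-peer attributes look like source.<peerId>.ageOfLastShippedOp
                      if (attr.getName().contains("ageOfLastShippedOp")) {
                          System.out.println(attr.getName() + " = "
                              + mbsc.getAttribute(name, attr.getName()));
                      }
                  }
              }
          }
      }
  }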
Hello,
Did you check the RegionServer logs; was there any exception?
Regards,
Ashish
On Thu, Mar 30, 2017 at 10:14 PM, James Johansville <
james.johansvi...@gmail.com> wrote:
Hello,
I attempted HBase replication for the first time and am trying to
understand how it works.
I have three HBase clusters each with their own ZK ensemble, A, B, C. I
wanted to have complete acyclical replication between all 3 clusters, so I
added B as a peer of A, C as a peer of B, and A as a peer of C…
Just to give an update: the lag and LogQueue did indeed go down after the
port was open!
Now I have another issue/question but will open a new thread.
Thanks!
Will post here. But what puzzles me is why the lag and LogQueue keep going
up when all I did was add a peer. I had thought it only starts saving
logs once I enable replication for some column families.
So what happens when I enable_table_replication vs when I add a peer?
> …com:2181:/hbase
> ENABLED
> 1 row(s) in 0.0110 seconds
>
> Shows as ENABLED, but can't really access it until port is open. Should be
> soon. :)
Could you please try "list_peers" on the source cluster?
Also, just to confirm, was the above 'status' command run on the source cluster?
On Tue, Jan 31, 2017 at 1:19 PM, marjana wrote:
I am not able to run replication yet; waiting for a port to be open. But why
should I see SizeOfLogQueue and Replication Lag increase? I have not
enabled replication on any table - just added a peer. No tables/columns are
set to replicate yet.
Interesting that TimeStampsOfLastAppliedOp shows the…