I got it to work by bringing up a Kafka Connect cluster from which to launch MM2.
Silly question.
On Mon, 20 Mar 2023 at 23:09, Miguel Ángel Fernández Fernández (<
miguelangelprogramac...@gmail.com>) wrote:
I have two clusters up on the same machine with docker-compose
services:
  zookeeper-lab:
    image: "bitnami/zookeeper:3.8.1"
    restart: always
    environment:
      ZOO_PORT_NUMBER: 2183
      ALLOW_ANONYMOUS_LOGIN: "yes"
    ports:
      - "2183:2183"
      - "2886:2888"
      - "3886:3888"
  kafk
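For context, a matching broker service might look like the sketch below; the image, ports, and settings here are assumptions on my part, not taken from the original compose file:

```yaml
  kafka-lab:
    image: "bitnami/kafka:3.4.0"
    restart: always
    depends_on:
      - zookeeper-lab
    environment:
      # Point the broker at the zookeeper-lab service defined above
      KAFKA_CFG_ZOOKEEPER_CONNECT: "zookeeper-lab:2183"
      ALLOW_PLAINTEXT_LISTENER: "yes"
      # Advertise the host-mapped port so clients outside the compose network can connect
      KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://localhost:9094"
    ports:
      - "9094:9092"
```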
Hi Miguel,
How many nodes are you running MM2 with? Just one?
Separately, do you notice anything at ERROR level in the logs?
Cheers,
Chris
On Mon, Mar 20, 2023 at 5:35 PM Miguel Ángel Fernández Fernández <
miguelangelprogramac...@gmail.com> wrote:
Hello,
I'm doing some tests with MirrorMaker 2 but I'm stuck. I have a couple of
Kafka clusters, and I think everything is set up correctly. However, when I run
/bin/connect-mirror-maker /var/lib/kafka/data/mm2.properties
the result I get is the creation of the topics
mm2-configs.A.internal,
mm2-of
is a known issue (KAFKA-13255
> <https://issues.apache.org/jira/browse/KAFKA-13255>) that will be
> fixed in v3.2.0.
> I've asked if there is any workaround for this issue in the JIRA comments.
>
> Thank you.
> Luke
>
> On Wed, Dec 15, 2021 at 1:24 PM Jigar Shah
> wr
Hello,
I am trying to run MirrorMaker 2.0 on kafka version 3.0.0 on my source
cluster and my target kafka cluster is in kafka version 2.7.2.
I am facing issues with topic configuration to create topics on my target
cluster.
[2021-12-14 12:28:33,071] WARN [MirrorSourceConnector|worker] Could not
Hi,
We have a Kafka MirrorMaker 2 setup on AWS. The issue is that when MirrorMaker 2
starts, it initially copies the messages from cluster A to cluster B, but then
it stops replicating (while it is still up), so new messages in cluster A are not
getting to cluster B.
Please help me, let me know if y
yep!
On Wed, Jul 21, 2021, 3:18 AM Tomer Zeltzer
wrote:
Hi,
Can I use MirrorMaker2.0 from Kafka 2.8.0 with Kafka version 2.4.0?
Thanks,
Tomer Zeltzer
This email and the information contained herein is proprietary and confidential
and subject to the Amdocs Email Terms of Service, which you may review at
https://www.amdocs.com/about/email-terms-o
Hi All,
I am trying to replicate my Kafka cluster to another one using MirrorMaker
2. This is uni-directional replication. Both clusters are at
different locations.
Replication works fine when the load is low. However, when the load
increases I usually get timeout errors in my logs a
Hi Madhan,
try this article I found a while back, in case this also becomes my use case:
https://stackoverflow.com/questions/59390555/is-it-possible-to-replicate-kafka-topics-without-alias-prefix-with-mirrormaker2
On Thu, Apr 22, 2021 at 9:40 PM Dhanikachalam, Madhan (CORP)
wrote:
Hey Madhan,
The easiest way to get rid of aliases in the topic names is to add the
following to your config:
replication.policy.separator=
source.cluster.alias=
target.cluster.alias=
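As an aside (my addition, not from the thread): Kafka 3.0 and later also ship an IdentityReplicationPolicy (KIP-690) that keeps source topic names as-is, which avoids the empty-alias workaround, if I recall correctly:

```properties
replication.policy.class = org.apache.kafka.connect.mirror.IdentityReplicationPolicy
```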
On Thu, Apr 22, 2021 at 11:40 PM Dhanikachalam, Madhan (CORP)
wrote:
I am testing MM2. I got the connector working, but it is creating topics in the
downstream cluster like mm-poc-src.grp1-top1, which includes the
alias prefix. How can I make the downstream topic have the exact same name
as the source?
Also, do you provide commercial support for open source Kafka? Or direc
Hi Daniel, it is probably hard to figure out how to sync topics across two
Kafka Connect clusters.
In general, if implementing a solution requires a strong technical prerequisite
(e.g. Kafka Connect clusters being aware of each other's offsets), it may be better
to go with a simpler solution first, for examp
Hello and thank you for the reply!
My problem is not with consumption of messages because, as you said,
MirrorMaker2 knows how to deal with the consumer offsets. Rather, my problem is
with source connectors and the topic connect-offsets.
Because Kafka Connect manages where it stopped reading f
Hi Daniel, MirrorMaker2 creates its own "offsets" topic to track the progress of
consumption.
Just my 2 cents: if you already have two Kafka Connect clusters in two
different sites, it sounds practical to:
(1) use "cluster" mode instead of "dedicated" mode of MirrorMaker2
(2) add one "MirrorMak
> Where is the "at-least-once" delivery guarantee mentioned? Just for the record.
>
> Kind Regards,
>
> From: Ning Zhang
> Sent: Wednesday, 17 March 2021 22:39
> To: users@kafka.apache.org
> Subject: Re: Mirrormaker 2.0 - duplicates with idempotence enabled
>
> Hello Vang
Hi everyone,
I'm trying to create an active-active deployment of a kafka cluster between
two data centers using MirrorMaker2, but I'm facing a problem.
In my deployment I have Kafka Connect in both sites, each of which
connects to a different database using sink and source connectors (MongoDB
sour
Hello Vangelis,
By default, current MM 2.0 provides an "at-least-once" delivery guarantee,
meaning there will be duplicate messages under some failure scenarios.
If you prefer no message loss, there is a pending PR about MM 2.0:
https://issues.apache.org/jira/browse/KAFKA-10339
On 2021/03/10
Hi,
I have set up MirrorMaker2 (Kafka v2.6.0) on 2 clusters (CL1, CL2) and the
mirroring seems to work properly, except for an issue with duplicates in the
following scenario:
While both clusters are up and running, I simulate an incident, stopping the
brokers of the CL2 cluster one by one. Stopp
Hi Ryanne/Josh,
I'm working on active-active MirrorMaker and on translating consumer
offsets from source cluster A to dest cluster B. Any pointers would be helpful.
Cluster A
Cluster Name: A
Topic name: testA
Consumer group name: mm-testA-consumer
Cluster B
Cluster Name: B
Topic name: sou
Josh, make sure there is a consumer in cluster B subscribed to A.topic1.
Wait a few seconds for a checkpoint to appear upstream on cluster A, and
then translateOffsets() will give you the correct offsets.
By default MM2 will block consumers that look like kafka-console-consumer,
so make sure you sp
Thanks again Ryanne, I didn't realize that MM2 would handle that.
However, I'm unable to mirror the remote topic back to the source cluster
by adding it to the topic whitelist. I've also tried to update the topic
blacklist and remove ".*\.replica" (since the blacklists take precedence
over the whi
Josh, if you have two clusters with bidirectional replication, you only get
two copies of each record. MM2 won't replicate the data "upstream", cuz it
knows it's already there. In particular, MM2 knows not to create topics
like B.A.topic1 on cluster A, as this would be an unnecessary cycle.
> is
Sorry, correction -- I am realizing now it would be 3 copies of the same
topic data as A.topic1 has different data than B.topic1. However, that
would still be 3 copies as opposed to just 2 with something like topic1 and
A.topic1.
As well, if I were to explicitly replicate the remote topic back to
Thanks for the clarification Ryanne. In the context of active/active
clusters, does this mean there would be 6 copies of the same topic data?
A topics:
- topic1
- B.topic1
- B.A.topic1
B topics:
- topic1
- A.topic1
- A.B.topic1
Out of curiosity, is there a reason for MM2 not emitting checkpoint
Josh, yes it's possible to migrate the consumer group back to the source
topic, but you need to explicitly replicate the remote topic back to the
source cluster -- otherwise no checkpoints will flow "upstream":
A->B.topics=test1
B->A.topics=A.test1
After the first checkpoint is emitted upstream,
Hi there,
I'm currently exploring MM2 and having some trouble with the
RemoteClusterUtils.translateOffsets() method. I have been successful in
migrating a consumer group from the source cluster to the target cluster,
but was wondering how I could migrate this consumer group back to the
original so
From: Sönke Liebau
Sent: Wednesday, 18 March 2020 1:12 PM
To: users@kafka.apache.org
Subject: Re: Mirrormaker 2.0 and compacted topics
Hi Pirow,
Replicating records at the same offset as in the original topic is not possible for
non-com
> *From:* Sönke Liebau
> *Sent:* Wednesday, 18 March 2020 12:14 PM
> *To:* users@kafka.apache.org
> *Subject:* Re: Mirrormaker 2.0 and compacted topics
Hi Pirow,
as far as I understand, MirrorMaker 2.0 will not treat compacted topics any
differently than uncompacted topics.
What that means for your scenario is that your replication may miss some
messages in the case of a long unavailability, if those messages were
compacted in the meantime
Hello,
We're currently trying to evaluate Mirrormaker 2.0 for future inter-cluster
replication, replacing our bespoke replicator. I understand that Mirrormaker
2.0 documentation is only slated to be released in Kafka 2.5.0, but I was
hoping that someone would know whether Mirrormaker 2.0 c
Hello,
We're currently testing Mirrormaker 2.0 functionality for replication between
clusters. I have successfully run the Mirrormaker 2.0 script
(connect-mirror-maker.sh) using this config, replicating between two Kubernetes
Kafka broker instances:
Clusters = MC,DC
MC.bootstrap.se
Ok, I see. I almost started to work on it, but figured out that we do not
need it now.
Thanks for the help around this topic :)
Peter
On Tue, 21 Jan 2020 at 21:04, Ryanne Dolan wrote:
Peter, the LegacyReplicationPolicy class is described in the existing
KIP-382 and is a requirement for the deprecation of MM1. I was planning to
implement it but would love the help if you're interested.
Ryanne
On Tue, Jan 21, 2020, 8:25 AM Péter Sinóros-Szabó
wrote:
Ryanne,
I didn't do much work yet, just checked the Interface to see if it is easy
to implement or not.
> The PR for LegacyReplicationPolicy should include any relevant fixes to
get it to run without crashing
Do you mean that there is already a PR for LegacyReplicationPolicy? If
there is, please
Peter, KIP-382 includes LegacyReplicationPolicy for this purpose, but no,
it has not been implemented yet. If you are interested in writing the PR,
it would not require a separate KIP before merging. Looks like you are
already doing the work :)
It is possible, as you point out, that returning null
Hi Sebastian & Ryanne,
do you maybe have an implementation of this, or just some ideas about how to
implement a policy that does not rename topics?
I am checking the ReplicationPolicy interface and don't really know what
the impact will be if I implement this:
public String formatRemoteTopic(Str
Peter, that's right. So long as ReplicationPolicy is implemented with
proper semantics (i.e. the methods do what they say they should do) any
naming convention will work. You can also use something like double
underscore "__" as a separator with DefaultReplicationPolicy -- it doesn't
need to be a s
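To make the renaming semantics concrete, here is a minimal sketch of a prefix-free policy. Note: the interface below is a simplified local stand-in modeled on KIP-382's ReplicationPolicy method names, not Kafka's actual interface, so treat it as an illustration only:

```java
// Simplified stand-in for ReplicationPolicy (assumption: modeled on
// KIP-382's method names; Kafka's real interface differs in detail).
interface SimpleReplicationPolicy {
    String formatRemoteTopic(String sourceClusterAlias, String topic);
    String topicSource(String topic);   // which cluster a topic came from
    String upstreamTopic(String topic); // the topic's name on the source cluster
}

// A policy that never renames topics: remote topics keep their source names.
class IdentityPolicy implements SimpleReplicationPolicy {
    @Override
    public String formatRemoteTopic(String sourceClusterAlias, String topic) {
        return topic; // no "sourceAlias.topic" prefix
    }

    @Override
    public String topicSource(String topic) {
        // Without a prefix, the source cluster can no longer be inferred
        // from the topic name alone.
        return null;
    }

    @Override
    public String upstreamTopic(String topic) {
        return topic; // local and upstream names are identical
    }
}

public class PolicyDemo {
    public static void main(String[] args) {
        SimpleReplicationPolicy policy = new IdentityPolicy();
        System.out.println(policy.formatRemoteTopic("source", "replicateme")); // prints "replicateme"
    }
}
```

One caveat: as Ryanne notes, the methods must keep proper semantics, and with an identity mapping MM2 loses the ability to tell local topics from replicated ones by name alone.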
Hi Ryanne,
Am I right that as long as I implement ReplicationPolicy properly, those
features you just mentioned will work fine?
Asking because we already use dot (.), underscore (_), and even hyphen (-)
characters in non-replicated topics :D, so it seems that we will
need a more advanced renamin
Hello Ryanne,
thank you, that helps to get a better understanding.
We'll just wait until something better is available and until then use
the legacy-mode of MM2...
Best regards
Sebastian
On 30-Dec-19 7:04 PM, Ryanne Dolan wrote:
> Is there a way to prevent that from happening?
Unfortunately there is no tooling (yet?) to manipulate Connect's offsets,
so it's difficult to force MM2 to skip ahead, reset, etc.
One approach is to use Connect's Simple Message Transform feature. This
enables you to filter the messages being rep
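For what it's worth, here is a sketch of that approach using the Filter SMT with a predicate, available in later Kafka releases (2.6+, via KIP-585); the predicate name and topic pattern are made-up examples:

```properties
# Drop replicated records whose topic matches the predicate
transforms = dropNoisy
transforms.dropNoisy.type = org.apache.kafka.connect.transforms.Filter
transforms.dropNoisy.predicate = isNoisyTopic

predicates = isNoisyTopic
predicates.isNoisyTopic.type = org.apache.kafka.connect.transforms.predicates.TopicNameMatches
predicates.isNoisyTopic.pattern = noisy-.*
```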
Sebastian, you can drop in a custom jar in the "Connect plug-in path" and
MM2 will be able to load it. That enables you to implement your own
ReplicationPolicy (and other pluggable interfaces) without compiling
everything.
In an upcoming release we'll have a "LegacyReplicationPolicy" that does not
Hello,
I found that it's using the DefaultReplicationPolicy, which always returns
"sourceClusterAlias + separator + topic", with only the separator being
configurable in the configuration file with REPLICATION_POLICY_SEPARATOR.
It seems like I need a different ReplicationPolicy, like a
SimpleRe
Hello,
another thing I noticed, and didn't find any configuration for in the KIP yet,
is that if I have two clusters (source and target) and a topic
"replicateme" on the source cluster, it will get replicated to the
target cluster as "source.replicateme".
How can I stop it from adding the cluster-na
Hello Ryanne,
Are there any plans to implement an easy-to-use throttling option to be a
little kinder to the cluster that we start to replicate?
I guess it is possible to use the existing throttling in the source and
destination clusters, but it is not really easy to use.
Also maybe an option to st
Hello Ryanne,
Is there a way to prevent that from happening? We have two separate
clusters with some topics being replicated to the second one for
reporting. If we replicate everything again that reporting would
probably have some problems.
Yes, I wondered when the Networking-guys would come
Glad to hear you are replicating now :)
> it probably started mirroring the last seven days as there was no offset
for the new consumer-group.
That's correct -- MM2 will replicate the entire topic, as far back as the
retention period. However, technically there are no consumer groups in MM2!
Sebastian, there are multiple ways to run MM2. One way is to start the
individual Connectors (MirrorSourceConnector, MirrorCheckpointConnector,
and MirrorHeartbeatConnector) on an existing Connect cluster, if you have
one. Some of the configuration properties you've listed, e.g. "name" and
"connect
Hello again!
Some probably important configs I found out:
We need this to enable mirroring, as it seems to be disabled by default?
source->target.enabled = true
target->source.enabled = true
Also, the Client-IDs can be configured using:
source.client.id = my_cool_id
target.client.id = my_cooler_i
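Pulling the settings mentioned in this thread together, a minimal mm2.properties sketch (cluster names, servers, and topic patterns below are placeholders, not values from the original messages):

```properties
clusters = source, target
source.bootstrap.servers = source-broker:9092
target.bootstrap.servers = target-broker:9092

# Replication flows must be enabled explicitly
source->target.enabled = true
source->target.topics = .*

# Optional: enable the reverse direction for active-active setups
target->source.enabled = true
target->source.topics = .*

# Optional per-cluster client IDs
source.client.id = my_cool_id
```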
Hello,
I tried running this connect-mirror-config:
name = $MIRROR_NAME
clusters = source, target
source.bootstrap.servers = $SOURCE_SERVERS
target.bootstrap.servers = $TARGET_SERVERS
source->target.topics = $SOURCE_TARGET_TOPICS
target->source.topics = $TARGET_SOURCE_TOPICS
source->target.emit.
Hello Sebastian, please let us know what issues you are facing and we can
probably help. Which config from the KIP are you referencing? Also check
out the readme under ./connect/mirror for more examples.
Ryanne
On Mon, Dec 23, 2019, 12:58 PM Sebastian Schmitz <
sebastian.schm...@propellerhead.co.
I find the best resource is the README in the source. Look under the
connect/mirror directory, I believe.
Carl
On Mon, Dec 23, 2019, 13:57 Sebastian Schmitz <
sebastian.schm...@propellerhead.co.nz> wrote:
Hello,
I'm currently trying to implement the new Kafka 2.4.0 and the new MM2.
However, it looks like the only documentation available is the KIP-382,
and the documentation
(https://kafka.apache.org/documentation/#basic_ops_mirror_maker) for the
MM isn't yet updated, and the documentation in t
I can verify that the above did take (kicking myself). It should be the
same for these too?
b.producer.batch.size = 1048576
b.producer.linger.ms = 30
b.producer.acks = 1
etc etc...
I also see that the properties can be overridden, so this routine
* kill 1 MM2
* change the mm2.prope
> BTW any ideas when 2.4 is being released
Looks like there are a few blockers still.
On Mon, Nov 4, 2019 at 2:06 PM Vishal Santoshi
wrote:
I bet I have tested the 'b.producer.acks' route. I will test again and let
you know. Note that I resorted to hardcoding that value in the Sender and
that alleviated the throttle I was seeing on consumption. BTW any ideas
when 2.4 is being released ( I thought it was Oct 30th 2019 )...
On Mon, Nov
Vishal, b.producer.acks should work, as can be seen in the following unit
test with similar producer property "client.id":
https://github.com/apache/kafka/blob/6b905ade0cdc7a5f6f746727ecfe4e7a7463a200/connect/mirror/src/test/java/org/apache/kafka/connect/mirror/MirrorMakerConfigTest.java#L182
Kee
Hello folks,
Was doing stress tests and realized that for replication
to the target cluster, the KafkaProducer configuration has a
default acks of -1 (all), which was prohibitively expensive. It should
have been as simple as a->b.producer.acks = 1 (or b.producer.a
Jeremy, please see relevant changes documented here:
https://github.com/apache/kafka/blob/cae2a5e1f0779a0889f6cb43b523ebc8a812f4c2/connect/mirror/README.md#multicluster-environments
I've added a --clusters argument which makes XDCR a lot easier to manage,
obviating the configuration race issue.
Jeremy, thanks for double checking. I think you are right -- this is a
regression introduced here [1]. For context, we noticed that heartbeats
were only being sent to target clusters, whereas they should be sent to
every cluster regardless of replication topology. To get heartbeats running
everywhe
Apologies, copy/paste issue. Config should look like:
In DC1:
DC1->DC2.enabled = true
DC2->DC1.enabled = false
In DC2:
DC1->DC2.enabled = false
DC2->DC1.enabled = true
Running 1 mm2 node in DC1 / DC2 each. If I start up the DC1 node first,
then DC1 data is replicated to DC2. DC2 data does n
Hey Jeremy, it looks like you've got a typo or copy-paste artifact in the
configuration there -- you've got DC1->DC2 listed twice, but not the
reverse. That would result in the behavior you are seeing, as DC1 actually
has nothing enabled. Assuming this is just a mistake in the email, your
approach
I am attempting to setup a simple cross data center replication POC using the
new mirror maker branch. The behavior is not quite what I was expecting, so it
may be that I have made some assumptions in terms of deployment that are
incorrect or my setup is incorrect (see below). When I run the t