RE: Re: MirrorMaker 2 Reload Configuration

2022-01-13 Thread Praveen Sinha
Hi Peter,

I am running into a similar issue. Did you create a JIRA ticket for this,
or have you found any workaround for it?

Thanks and regards,
Praveen

On 2020/11/13 14:45:23 Péter Sinóros-Szabó wrote:
> Hi,
>
> I also tried stopping all instances of MM2, but that alone didn't help.
> I had to stop all MM2 instances, delete the mm2-config and mm2-status
> topics on the destination cluster, and start all MM2 instances up again.
>
> Peter
>


Re: MirrorMaker 2 Reload Configuration

2020-11-13 Thread Péter Sinóros-Szabó
Hi,

I also tried stopping all instances of MM2, but that alone didn't help.
I had to stop all MM2 instances, delete the mm2-config and mm2-status
topics on the destination cluster, and start all MM2 instances up again.

Peter


Re: MirrorMaker 2 Reload Configuration

2020-11-13 Thread Devaki, Srinivas
Hi All,

After inspecting a few internal topics and running a console consumer to
see the payloads in the mm2-configs topic, I confirmed that the properties
are indeed not getting refreshed. I assumed that MM2 was internally
rejoining the existing cluster, so to refresh the config I completely
stopped the MM2 cluster, i.e. reduced the MM2 deployment capacity to 0,
waited a couple of minutes, and then increased the capacity back to our
previous number.

With this approach MM2 started loading the configuration from
mm2.properties again.
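
For reference, inspecting what MM2 has stored looks roughly like this. A
sketch only: the internal topic name depends on the cluster alias (e.g.
mm2-configs.src-cluster.internal under the default naming), and the broker
address is taken from the config further down this thread, so list the
topics first and adjust.

```
# find the actual internal config topic name on the destination cluster
bin/kafka-topics.sh --bootstrap-server prod-online-v2-kafka-1.internal:9092 --list | grep mm2

# dump the connector configs MM2 has stored internally
bin/kafka-console-consumer.sh \
  --bootstrap-server prod-online-v2-kafka-1.internal:9092 \
  --topic mm2-configs.src-cluster.internal \
  --from-beginning
```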

Thanks


Re: MirrorMaker 2 Reload Configuration

2020-11-13 Thread Péter Sinóros-Szabó
Hi Ryanne,

I will open an issue in Jira.
I see mm2-config and mm2-status topics on both the source and destination
clusters.
Should I purge all of them? Or is it enough to purge just the destination
topics?

Thanks,
Peter


Re: MirrorMaker 2 Reload Configuration

2020-11-11 Thread Ryanne Dolan
Hey guys, this is because the configuration gets loaded into the internal
mm2-config topics, and these may get out of sync with the mm2.properties
file in some scenarios. I believe this occurs whenever an old/bad
configuration gets written to Kafka, which MM2 can read successfully but
which causes MM2 to get stuck before it can write any updates back to the
mm2-config topics. Just modifying the mm2.properties file does not resolve
the issue, since Workers read from the mm2-config topics, not the
mm2.properties file directly.

The fix is to truncate or delete the mm2-config and mm2-status topics. N.B.
do _not_ delete the mm2-offsets topics, as this would cause MM2 to
restart replication from offset 0.
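
A sketch of that procedure, assuming the default mm2-*.<alias>.internal
naming that a dedicated MM2 cluster uses for its internal topics; stop all
MM2 instances first, and confirm the actual topic names with --list before
deleting anything:

```
# list the internal topics on the destination cluster
bin/kafka-topics.sh --bootstrap-server prod-online-v2-kafka-1.internal:9092 --list | grep mm2

# delete only the config and status topics...
bin/kafka-topics.sh --bootstrap-server prod-online-v2-kafka-1.internal:9092 \
  --delete --topic mm2-configs.src-cluster.internal
bin/kafka-topics.sh --bootstrap-server prod-online-v2-kafka-1.internal:9092 \
  --delete --topic mm2-status.src-cluster.internal

# ...and leave the mm2-offsets topics alone, or replication restarts from offset 0
```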

I'm not sure why deleting these topics works, but it seems to cause Connect
to wait for the new configuration to be loaded from mm2.properties, rather
than reading the old configuration from mm2-config and getting stuck.

Can someone report the issue in jira?

Ryanne


Re: MirrorMaker 2 Reload Configuration

2020-11-11 Thread Péter Sinóros-Szabó
Hi,

I have a similar issue. I changed the source cluster bootstrap address and
MM2 picked it up only partially: some parts of it still use the old
address, some the new. The old and the new address lists route to the
same cluster and the same brokers, just over a different network path.

So is there any way to force the configuration update?

Cheers,
Peter


Re: MirrorMaker 2 Reload Configuration

2020-11-04 Thread Ning Zhang
If your new topics are not named "topic1" or "topic2", you may want to use
a regex so that more topics are considered by MM2:

# regex which defines which topics get replicated, e.g. "foo-.*"
src-cluster->dst-cluster.topics = topic1,topic2
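
For example, something like the following. A sketch: the topics filter
accepts a comma-separated list of topic names and regexes, and MM2's
default filter is ".*" (everything).

```
# widen the topic filter in mm2.properties; remove the old
# "topics = topic1,topic2" line first, then append e.g.:
cat >> mm2.properties <<'EOF'
src-cluster->dst-cluster.topics = foo-.*, topic1, topic2
EOF
```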


Re: MirrorMaker 2 Reload Configuration

2020-10-29 Thread Devaki, Srinivas
Hi Folks,

I've also run a console consumer on the `mm2-configs` Kafka topic created
by MirrorMaker and found that, even after restarting MirrorMaker 2 with
the new config, the configuration registered in the mm2-configs topic
still points to the legacy MirrorMaker configuration.

Thanks


MirrorMaker 2 Reload Configuration

2020-10-29 Thread Devaki, Srinivas
Hi Folks,

I'm running MirrorMaker as a dedicated cluster, as described in the
MirrorMaker 2 doc, but for some reason when I add new topics and
redeploy MirrorMaker it doesn't detect the new topics at all; even
the config dumps in the MirrorMaker startup logs don't show the newly
added topics.

I've attached the config that I'm using. Initially I assumed that
there might be some refresh configuration option in either Connect or
MirrorMaker, but the Connect REST API doesn't seem to be working in
this mode, and I couldn't find any refresh configuration option either.

Any ideas on this? Thank you in advance.

```
clusters = src-cluster, dst-cluster

# disable topic prefixes
src-cluster.replication.policy.separator =
dst-cluster.replication.policy.separator =
replication.policy.separator =
source.cluster.alias =
target.cluster.alias =


# enable idempotence
source.cluster.producer.enable.idempotence = true
target.cluster.producer.enable.idempotence = true

# connection information for each cluster
# This is a comma-separated list of host:port pairs for each cluster
# e.g. "A_host1:9092, A_host2:9092, A_host3:9092"
src-cluster.bootstrap.servers =
sng-kfnode1.internal:9092,sng-kfnode1.internal:9092,sng-kfnode1.internal:9092
dst-cluster.bootstrap.servers =
prod-online-v2-kafka-1.internal:9092,prod-online-v2-kafka-2.internal:9092,prod-online-v2-kafka-3.internal:9092,prod-online-v2-kafka-4.internal:9092,prod-online-v2-kafka-5.internal:9092

# regex which defines which topics get replicated, e.g. "foo-.*"
src-cluster->dst-cluster.topics = topic1,topic2

# client-id
src-cluster.client.id = prod-mm2-onlinev1-to-onlinev2-consumer-v0
dst-cluster.client.id = prod-mm2-onlinev1-to-onlinev2-producer-v0


# group.instance.id=_mirror_make_instance_1
# where the consumer starts when there are no committed offsets
src-cluster->dst-cluster.consumer.auto.offset.reset = earliest
src-cluster->dst-cluster.consumer.overrides.auto.offset.reset = earliest

# connector should periodically emit heartbeats
src-cluster->dst-cluster.emit.heartbeats.enabled = true

# frequency of heartbeats, default is 5 seconds
src-cluster->dst-cluster.emit.heartbeats.interval.seconds = 10

# connector should periodically emit consumer offset information
src-cluster->dst-cluster.emit.checkpoints.enabled = true

# frequency of checkpoints, default is 5 seconds
src-cluster->dst-cluster.emit.checkpoints.interval.seconds = 10

# whether to monitor source cluster ACLs for changes
src-cluster->dst-cluster.sync.topic.acls.enabled = false

# whether or not to monitor source cluster for configuration changes
src-cluster->dst-cluster.sync.topic.configs.enabled = true
# add retention.ms to the default list given in the DefaultConfigPropertyFilter:
# https://github.com/apache/kafka/blob/889fd31b207b86db6d059792131d14389639d9e4/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/DefaultConfigPropertyFilter.java#L33-L38
src-cluster->dst-cluster.config.properties.blacklist = follower\\.replication\\.throttled\\.replicas, \
    leader\\.replication\\.throttled\\.replicas, \
    message\\.timestamp\\.difference\\.max\\.ms, \
    message\\.timestamp\\.type, \
    unclean\\.leader\\.election\\.enable, \
    min\\.insync\\.replicas, \
    retention\\.ms

# connector should periodically check for new topics
src-cluster->dst-cluster.refresh.topics.enabled = true

# frequency to check source cluster for new topics, default is 5 seconds
src-cluster->dst-cluster.refresh.topics.interval.seconds = 300

# enable and configure individual replication flows
src-cluster->dst-cluster.enabled = true
dst-cluster->src-cluster.enabled = false


# Setting replication factor of newly created remote topics
# replication.factor=2

# Internal Topic Settings
#
# The replication factor for mm2 internal topics "heartbeats",
# "B.checkpoints.internal" and "mm2-offset-syncs.B.internal".
# For anything other than development testing, a value greater than 1,
# such as 3, is recommended to ensure availability.
checkpoints.topic.replication.factor=3
# 14 days
checkpoints.topic.retention.ms=1209600000
heartbeats.topic.replication.factor=3
offset-syncs.topic.replication.factor=3

# The replication factor for connect internal topics
# "mm2-configs.B.internal", "mm2-offsets.B.internal" and "mm2-status.B.internal".
# For anything other than development testing, a value greater than 1,
# such as 3, is recommended to ensure availability.
offset.storage.replication.factor=3
status.storage.replication.factor=3
config.storage.replication.factor=3

# customize as needed
# replication.policy.separator = _
# sync.topic.acls.enabled = false
# emit.heartbeats.interval.seconds = 5
```
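
For reference, a dedicated MM2 cluster node is started with the properties
file above, roughly as follows (a sketch assuming a standard Kafka
distribution, where the driver script ships in bin/):

```
# launch one dedicated-mode MirrorMaker 2 worker with this config
bin/connect-mirror-maker.sh mm2.properties
```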

Thanks