Dear Team,
We are currently running Kafka 0.10.1 and ZooKeeper 3.4.6 in our production
clusters, and we use the MirrorMaker tool to transfer data from the DC
cluster to the DR cluster.
Both environments have 3 brokers and 3 ZooKeeper nodes.
In the DC server.properties we set the maximum message size to 75 KB:
message.max.bytes=75000
In the DR server.properties we set it to 1.5 MB:
message.max.bytes=1500000
With the above configuration MirrorMaker runs and we get data from DC to DR
(source = DC, target = DR).
When we change server.properties in the DR environment to
message.max.bytes=75000, we stop getting data from DC to DR and MirrorMaker
shuts down forcefully.
When we change it back to the previous value, message.max.bytes=1500000, it
works fine again.
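For context on the forceful shutdown: MirrorMaker aborts on any send failure by default (its abort.on.send.failure option defaults to true), which matches the behavior we see. A sketch of launching it with that behavior disabled (the paths and topic whitelist here are assumptions, not our exact command; note that the oversized record is still dropped, so this trades a crash for silent data loss):

```shell
# Hypothetical MirrorMaker launch; config file names match our setup.
# --abort.on.send.failure false keeps MirrorMaker running past a failed send.
/opt/kafka/bin/kafka-mirror-maker.sh \
  --consumer.config source-cluster.config \
  --producer.config target-cluster.config \
  --whitelist ".*" \
  --abort.on.send.failure false
```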
message.max.bytes is the only configuration that differs between the two
server.properties files; the rest of the parameters are identical in DC and DR.
Please find our current config properties below.
DC-kafka::
broker.id=0
port=9092
delete.topic.enable=true
message.max.bytes=75000
listeners=SSL://198.168.10.1:9092
advertised.listeners=SSL://198.168.10.1:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/lotus/kafka-logs
num.partitions=3
default.replication.factor=3
auto.topic.creation.enable=false
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
ssl.keystore.location=/opt/kafka/certificates/kafka.keystore.jks
ssl.keystore.password=Sbi#123
ssl.key.password=Sbi#123
ssl.truststore.location=/opt/kafka/certificates/kafka.truststore.jks
ssl.truststore.password=Sbi#123
security.inter.broker.protocol=SSL
zookeeper.connect=198.168.10.1:2181,198.168.10.2:2181,198.168.10.3:2181
zookeeper.connection.timeout.ms=6000
DR-kafka::
broker.id=0
port=9092
delete.topic.enable=true
message.max.bytes=1500000
listeners=SSL://198.168.20.1:9092
advertised.listeners=SSL://198.168.20.1:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/lotus/kafka-logs
num.partitions=3
default.replication.factor=3
auto.topic.creation.enable=false
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
ssl.keystore.location=/opt/kafka/certificates/kafka.keystore.jks
ssl.keystore.password=Sbi#123
ssl.key.password=Sbi#123
ssl.truststore.location=/opt/kafka/certificates/kafka.truststore.jks
ssl.truststore.password=Sbi#123
security.inter.broker.protocol=SSL
zookeeper.connect=198.168.20.1:2181,198.168.20.2:2181,198.168.20.3:2181
zookeeper.connection.timeout.ms=6000
Mirror Maker (DC ----> DR)
Source:: (DC)
[kafka@digikafprodapp01 mirror-maker]$ cat source-cluster.config
bootstrap.servers=198.168.10.1:9092,198.168.10.2:9092,198.168.10.3:9092
group.id=mirror-maker-consumer
exclude.internal.topics=true
client.id=mirror_maker_consumer
security.protocol=SSL
ssl.truststore.location=/opt/kafka/certificates/client.truststore.jks
ssl.truststore.password=Sbi#123
auto.offset.reset=earliest
max.poll.records=100
Destination (DR):
[kafka@digikafprodapp01 mirror-maker]$ cat target-cluster.config
bootstrap.servers=198.168.20.1:9092,198.168.20.2:9092,198.168.20.3:9092
acks=1
client.id=mirror_maker_producer
security.protocol=SSL
ssl.truststore.location=/opt/kafka/certificates/client.truststore.jks
ssl.truststore.password=Sbi#123
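One thing we have considered (an assumption on our side, not something we have deployed): if the DR broker limit must stay at 75000, the MirrorMaker producer could be capped to the same size so oversized records fail on the client side instead of at the broker. A sketch of the extra lines in target-cluster.config:

```properties
# Hypothetical additions to target-cluster.config.
# max.request.size mirrors the DR broker's message.max.bytes, so the
# producer rejects oversized requests locally rather than receiving
# RecordTooLargeException from the broker. Records above this size
# still cannot be delivered.
max.request.size=75000
```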
*******
When we change message.max.bytes=75000 in server.properties in the DR
environment, we get exceptions like these:
[2019-03-11 18:34:31,906] ERROR Error when sending message to topic audit-logs
with key: null, value: 304134 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordTooLargeException: The request included a
message larger than the max message size the server will accept.
[2019-03-11 18:34:31,909] FATAL [mirrormaker-thread-15] Mirror maker thread
failure due to (kafka.tools.MirrorMaker$MirrorMakerThread)
java.lang.IllegalStateException: Cannot send after the producer is closed.
        at org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:185)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:474)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
        at kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:657)
        at kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
        at kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$6.apply(MirrorMaker.scala:434)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:434)
[2019-03-11 18:34:31,909] ERROR Error when sending message to topic audit-logs
with key: null, value: 1141 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
java.lang.IllegalStateException: Producer is closed forcefully.
        at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:513)
        at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:493)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:156)
        at java.lang.Thread.run(Thread.java:748)
[2019-03-11 18:34:31,909] FATAL [mirrormaker-thread-1] Mirror maker thread
failure due to (kafka.tools.MirrorMaker$MirrorMakerThread)
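The numbers in the log look consistent with the broker-side size check: the failed record is 304134 bytes, which exceeds 75000 but fits under 1500000. A minimal sketch of that comparison (the function name is ours, not Kafka's):

```python
def broker_accepts(record_size_bytes: int, message_max_bytes: int) -> bool:
    """Mimics the broker-side check behind RecordTooLargeException:
    a record is rejected when it exceeds message.max.bytes."""
    return record_size_bytes <= message_max_bytes

# The 304134-byte record from the log above:
print(broker_accepts(304134, 75000))    # rejected when message.max.bytes=75000
print(broker_accepts(304134, 1500000))  # accepted when message.max.bytes=1500000
```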
Please help us; we have been facing this issue for four months.