Hi Oleg,

have you configured your consumer and producer with the "no data loss" configuration
below?

For the consumer, set enable.auto.commit=false in consumer.properties.

For the producer, set in producer.properties:

  1.  max.in.flight.requests.per.connection=1
  2.  retries=2147483647 (Int.MaxValue)
  3.  acks=-1 (equivalent to acks=all)
  4.  block.on.buffer.full=true (older clients only; newer clients replaced this with max.block.ms)

As advised in this article:

https://community.hortonworks.com/articles/79891/kafka-mirror-maker-best-practices.html
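
To make that concrete, here is a minimal sketch of the two files (the broker
addresses and group.id are placeholders I'm assuming, not taken from your setup):

    # consumer.properties (sketch; broker address is a placeholder)
    bootstrap.servers=source-broker:9092
    group.id=mirror-maker-group
    enable.auto.commit=false

    # producer.properties (sketch; broker address is a placeholder)
    bootstrap.servers=target-broker:9092
    max.in.flight.requests.per.connection=1
    retries=2147483647
    acks=all
    # on older clients; newer clients removed this in favour of max.block.ms:
    # block.on.buffer.full=true
    max.block.ms=9223372036854775807

You would then launch MirrorMaker along the lines of:

    kafka-mirror-maker.sh --consumer.config consumer.properties \
        --producer.config producer.properties --whitelist ".*"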



My assumption is the following:

Your MirrorMaker instance can't reach the target cluster (the one hosting
rc.exchange.jpy, right?), so your producer retries indefinitely. MirrorMaker
also blocks on the producer buffer when it is full. With the buffer full, an
unlimited retry policy, blocking enabled, and acks=all all combined, records
may sit in the buffer longer than the producer allows before expiring them,
which would produce exactly the TimeoutException you see.
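
If that is what is happening, one knob worth checking (an illustration with
an assumed value, not a recommendation) is the producer's request.timeout.ms:
in the client versions of this era it also bounds how long a batch may sit in
the accumulator before being expired with this very error message.

    # producer.properties -- assumed value, tune for your environment
    # batches waiting longer than this in the buffer are expired
    # ("x ms has passed since last append")
    request.timeout.ms=300000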


Have you looked into this lead?

(It's just an idea)


I hope you solve your issue quickly.


Adrien

________________________________
From: Oleg Danilovich <oleg.danilov...@expcapital.com>
Sent: Saturday, March 3, 2018 15:42:47
To: users@kafka.apache.org
Subject: Mirror Maker Errors

Hello, I am running MirrorMaker to mirror data from one cluster to
another.

Now I get this error in the log:
Feb 25 22:38:56 ld4-27 MirrorMaker[54827]: [2018-02-25 22:38:56,914] ERROR
Error when sending message to topic rc.exchange.jpy with key: 29 bytes,
value: 153 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
Feb 25 22:38:56 ld4-27 MirrorMaker[54827]:
org.apache.kafka.common.errors.TimeoutException: Expiring 82 record(s) for
rc.exchange.jpy-0: 50381 ms has passed since last append

I use Int.MaxValue retries in the producer config.
Why did this message occur? I can't detect any issues.

--
Best regards,
Oleg Danilovich
