Michael Hornung created KAFKA-13683:
---------------------------------------

             Summary: Streams - Transactional Producer - Transaction with key 
xyz went wrong with exception: Timeout expired after 60000milliseconds while 
awaiting InitProducerId
                 Key: KAFKA-13683
                 URL: https://issues.apache.org/jira/browse/KAFKA-13683
             Project: Kafka
          Issue Type: Bug
          Components: streams
    Affects Versions: 3.0.0, 2.7.0, 2.6.0
            Reporter: Michael Hornung
             Fix For: 3.0.0, 2.7.0, 2.6.0
         Attachments: AkkaHttpRestServer.scala, timeoutException.png

We have an urgent issue with a customer who uses the Kafka transactional producer 
against a Kafka cluster with 3 or more nodes. We are running Confluent on Azure.

We see this exception regularly: "Transaction with key XYZ went wrong with 
exception: Timeout expired after 60000milliseconds while awaiting 
InitProducerId" (see attachment "timeoutException.png")

We assume the cause is a node that is down while the producer keeps sending 
requests to that down node.



We are using Kafka Streams 3.0.

*We expect that if a node is down, the Kafka producer is intelligent enough to 
stop sending messages to that node.*

*What is the solution for this issue? Is there any config we have to set?*
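
To make the question concrete, below is a rough sketch (not our actual 
configuration, which is in the attachment at line 126) of the Streams settings 
we believe are involved with exactly-once processing and this timeout. The 
application id, bootstrap servers, and values are placeholders, not a known fix.

{code:scala}
import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

object StreamsTimeoutConfigSketch {
  def streamsProps(): Properties = {
    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app")   // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092")
    // Exactly-once processing makes Streams use transactional producers internally.
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE)
    // Producer-level overrides are passed through with the "producer." prefix.
    props.put(StreamsConfig.producerPrefix("max.block.ms"), "60000")   // the 60000 ms in the error
    props.put(StreamsConfig.producerPrefix("transaction.timeout.ms"), "60000") // example value
    props
  }
}
{code}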

*This request is urgent because our customer will soon run into production 
issues.*

*Additional information*
 * send record --> see attachment “AkkaHttpRestServer.scala” – line 100
 * producer config --> see attachment “AkkaHttpRestServer.scala” – line 126



