Finbarr Naughton created KAFKA-7716:
---------------------------------------

             Summary: Unprocessed messages when Broker fails
                 Key: KAFKA-7716
                 URL: https://issues.apache.org/jira/browse/KAFKA-7716
             Project: Kafka
          Issue Type: Bug
          Components: core, streams
    Affects Versions: 2.0.1, 1.0.0
            Reporter: Finbarr Naughton


This occurs when running on Kubernetes on bare metal.

A Streams application with a single topology consumes two input topics, A and 
B. Topic A is read as a GlobalKTable, topic B as a KStream. The topology joins 
the stream to the GlobalKTable and writes an updated message back to topic A. 
The application is configured to use exactly_once processing.
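
For illustration, a minimal sketch of such a topology (the application id, the 
topic names "topic-A"/"topic-B", the bootstrap address, String serdes, and the 
value-update logic are assumptions here, not the actual application code):

{code:java}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class JoinApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-app");      // assumed id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // exactly_once processing, as configured in the failing application
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        StreamsBuilder builder = new StreamsBuilder();
        GlobalKTable<String, String> tableA = builder.globalTable("topic-A"); // A as GlobalKTable
        KStream<String, String> streamB = builder.stream("topic-B");          // B as KStream

        // Join the stream to the GlobalKTable on the record key and write the
        // updated message back to topic A.
        streamB.join(tableA,
                     (key, value) -> key,                       // key selector into the table
                     (bValue, aValue) -> aValue + "|" + bValue) // hypothetical update logic
               .to("topic-A");

        new KafkaStreams(builder.build(), props).start();
    }
}
{code}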

There are three worker nodes. Kafka brokers are deployed as a StatefulSet 
across the three nodes using the Helm chart from 
[https://github.com/helm/charts/tree/master/incubator/kafka]

The application has three instances spread across the three nodes.

During a test, topic A is pre-populated with 50k messages over 5 minutes. Then 
50k messages with the same key set are sent to topic B over 5 minutes. The 
expected behaviour is that topic A will contain 50k updated messages 
afterwards. While all brokers are available this holds, even when one of the 
application pods is deleted.
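
A rough sketch of the kind of test driver used to generate this load, throttled 
so that 50k messages take roughly 5 minutes (the topic name, bootstrap address, 
and payloads are assumptions; the real test harness is not included here):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TestLoad {
    public static void main(String[] args) throws Exception {
        // Run once against topic A to pre-populate, then against topic B
        // with the same key set.
        String topic = args.length > 0 ? args[0] : "topic-A";

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 50_000; i++) {
                producer.send(new ProducerRecord<>(topic, "key-" + i, topic + "-payload-" + i));
                Thread.sleep(6); // ~167 msg/s, so 50k messages span ~5 minutes
            }
        }
    }
}
{code}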

When a broker fails, however, a few of the expected updated messages fail to 
appear on topic A even though the corresponding messages are present on topic B.

A more complete description is available here: 
[https://stackoverflow.com/questions/53557247/some-events-unprocessed-by-kafka-streams-application-on-kubernetes-on-bare-metal]