[ 
https://issues.apache.org/jira/browse/KAFKA-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15677107#comment-15677107
 ] 

Conor Hughes edited comment on KAFKA-4373 at 11/18/16 4:42 PM:
---------------------------------------------------------------

I have experienced a similar issue using Kafka 0.10.1.0 and Kafka Streams 
0.10.1.0.

I have 2 topologies running:

Topology number 1: Reads messages from Kafka topic, processes them and outputs 
them to another Kafka topic.
Topology number 2: Reads messages from topology number 1's output topic and 
inserts them into a database.

I am tracking the number of messages topology number 2 has read, both inside 
and after the consumer's deserialiser, as well as the topology's offsets in 
Kafka.
These counts match incrementally for a number of minutes, then suddenly the 
offsets jump by up to 40,000 while the topology's counts keep incrementing 
normally, as if the jump never happened, so messages are being lost.
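One way to spot the jumps described above is to scan the stream of record offsets from a single partition and flag any forward step larger than 1. This is a minimal, hypothetical sketch (class and method names are illustrative, and the offset trace is made up, not taken from the report):

```java
import java.util.List;

public class OffsetGapCheck {
    // Scan consecutive record offsets from one partition and return the
    // largest forward jump; a healthy stream advances by 1 each time.
    static long largestGap(List<Long> offsets) {
        long largest = 0;
        for (int i = 1; i < offsets.size(); i++) {
            long gap = offsets.get(i) - offsets.get(i - 1);
            if (gap > largest) {
                largest = gap;
            }
        }
        return largest;
    }

    public static void main(String[] args) {
        // Hypothetical offset trace: advances by 1, then jumps by 40,000.
        List<Long> seen = List.of(100L, 101L, 102L, 40_102L, 40_103L);
        System.out.println("largest offset jump: " + largestGap(seen));
    }
}
```

In a real diagnosis the trace would come from `ConsumerRecord.offset()` per partition; any gap well above 1 corroborates that records were skipped rather than simply uncounted.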



> Kafka Consumer API jumping offsets
> ----------------------------------
>
>                 Key: KAFKA-4373
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4373
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>            Reporter: Srinivasan Venkatraman
>
> Hi,
> I am using Kafka version 0.10.0.1 and the Java consumer API to consume messages 
> from a topic. We are using a single-node Kafka and ZooKeeper setup. It is sometimes 
> observed that the consumer loses a bulk of messages. We have been unable to find 
> the exact cause or to reproduce the issue.
> The scenario is:
> Consumer polls the topic.
> Fetches the messages and hands them to a thread pool to handle them.
> Waits for the threads to return and then commits the offsets.
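The sequence above (poll, dispatch to a thread pool, commit) can lose messages whenever the batch's last offset is committed even though some handler tasks failed: the failed records sit below the committed offset and are never redelivered. This is a plain-Java simulation of that hazard using a fake in-memory batch instead of a real `KafkaConsumer` (all names and the even-offset failure rule are illustrative, not from the report):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CommitOrderingDemo {

    // Dispatch a "polled" batch of offsets 0..batchSize-1 to a thread pool
    // and return how many were actually processed. If failEvens is true,
    // the handler throws for even offsets -- yet the caller below still
    // commits the batch-end offset, so those records are skipped forever.
    static int processedCount(int batchSize, boolean failEvens) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger processed = new AtomicInteger();
        for (int offset = 0; offset < batchSize; offset++) {
            final int rec = offset;
            pool.submit(() -> {
                if (failEvens && rec % 2 == 0) {
                    // A failed handler; the exception is swallowed by the pool.
                    throw new RuntimeException("handler failed for offset " + rec);
                }
                processed.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }

    public static void main(String[] args) {
        int batch = 100;
        int ok = processedCount(batch, true);
        // Committing offset == batch here, despite failures, "loses" 50 records.
        System.out.println("committed offset " + batch
                + ", processed " + ok + ", lost " + (batch - ok));
    }
}
```

With a real consumer the safe pattern is the same shape: collect the per-record results, and only commit up to the highest offset whose record (and all records below it) completed successfully.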



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
