Hello,

I have a Kafka test cluster consisting of one broker, one producer, and one 
consumer, each on a separate node. The broker and producer are v1.0.0. I am 
able to use the ProducerPerformance benchmark to write records to the broker. 
However, when I try to read the records back using the v1.0.0 
ConsumerPerformance benchmark, it terminates prematurely with output 
suggesting there were no records to read, as shown below:


start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec

2018-02-23 16:13:51:748, 2018-02-23 16:13:52:904, 0.0000, 0.0000, 0, 0.0000, 24, 1132, 0.0000, 0.0000
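
For context, I invoked the two benchmarks roughly as follows (the broker host, 
topic name, and record count/size below are placeholders, not my exact values):

```shell
# Producer side (v1.0.0): write records to the broker
bin/kafka-producer-perf-test.sh \
  --topic test \
  --num-records 1000000 \
  --record-size 100 \
  --throughput -1 \
  --producer-props bootstrap.servers=broker-host:9092

# Consumer side (v1.0.0): read the same records back
# (this is the invocation that terminates prematurely)
bin/kafka-consumer-perf-test.sh \
  --broker-list broker-host:9092 \
  --topic test \
  --messages 1000000
```

Both commands need a running cluster, so they are shown only to make the 
setup reproducible.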


I tried to debug the issue further by observing which API requests the broker 
receives from the consumer; the sequence goes like this:


API_VERSIONS

METADATA

FIND_COORDINATOR

API_VERSIONS

JOIN_GROUP

SYNC_GROUP

OFFSET_FETCH

API_VERSIONS

LIST_OFFSETS

LIST_OFFSETS

OFFSET_COMMIT

LEAVE_GROUP


From what I understand, no FETCH requests are being sent by the consumer, and 
therefore no records are sent by the broker in response. For some odd reason, 
the consumer thinks that it does not need to read any records. However, when I 
run the same ConsumerPerformance benchmark with v0.11.1.0, I am able to read 
the data successfully. The sequence of API requests received at the broker end 
correctly indicates that a number of FETCH requests were sent by the consumer 
after it had already received the LIST_OFFSETS response from the broker, as 
shown by the API request sequence received at the broker end below:


API_VERSIONS

METADATA

FIND_COORDINATOR

API_VERSIONS

JOIN_GROUP

SYNC_GROUP

OFFSET_FETCH

API_VERSIONS

LIST_OFFSETS

FETCH

...

FETCH

OFFSET_COMMIT

LEAVE_GROUP


What could be causing this issue?


- Haseeb
