Hello,

during our resilience tests, the following error is logged:

2019-02-25 14:18:35.100+0000 [kafka-producer-network-thread | 
str1-StreamThread-1-producer] ERROR o.a.k.s.p.i.RecordCollectorImpl - task 
[0_0] Error sending record (key \x003\x9F\xF0 value [10, 7, 51, 51, 56, 51, 50, 
56, 48, 18, 28, 50, 48, 49, 50.... -110, 45, 32, -61, -56, 25] timestamp 
1551104224970) to topic price-publisher-price-events-changelog due to 
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for 
price-publisher-price-events-changelog-0: 30034 ms has passed since last 
attempt plus backoff time; No more records will be sent and no more offsets 
will be recorded for this task.

The message contains a huge list of numbers, which is cropped in the example
above. The TimeoutException itself is probably OK; the thing is that we don't
publish this record from our application code. Is this record published
internally by Kafka Streams?

Our application uses a Transformer with three distinct state stores and a
punctuator that may delete values from those stores, and it has exactly-once
processing enabled.
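For context, the relevant part of our setup looks roughly like the sketch below. This is a simplified illustration, not our actual code: the topic, store, and serde choices are hypothetical, except that the application id matches the "price-publisher" prefix and the "price-events" store name matches the changelog topic seen in the log.

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class PriceEventsTopologySketch {

    static Properties config() {
        Properties props = new Properties();
        // application id is the prefix of the changelog topic name in the log
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "price-publisher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // exactly-once processing, as mentioned above
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        return props;
    }

    static StreamsBuilder topology() {
        StreamsBuilder builder = new StreamsBuilder();

        // one of the three stores; for a store named "price-events" Kafka Streams
        // creates the changelog topic price-publisher-price-events-changelog
        StoreBuilder<KeyValueStore<String, String>> store =
            Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("price-events"),
                Serdes.String(), Serdes.String());
        builder.addStateStore(store);

        builder.<String, String>stream("input-topic")
            .transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
                private KeyValueStore<String, String> events;

                @Override
                @SuppressWarnings("unchecked")
                public void init(ProcessorContext context) {
                    events = (KeyValueStore<String, String>) context.getStateStore("price-events");
                    // punctuator that may delete values from the store; deletes
                    // are also written to the changelog topic as tombstones
                    context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME,
                        timestamp -> {
                            // events.delete(key) for expired keys (elided)
                        });
                }

                @Override
                public KeyValue<String, String> transform(String key, String value) {
                    events.put(key, value); // every put is replicated to the changelog
                    return KeyValue.pair(key, value);
                }

                @Override
                public void close() { }
            }, "price-events")
            .to("output-topic");

        return builder;
    }
}
```

If I understand correctly, every put/delete on those stores is replicated by Kafka Streams itself to the corresponding changelog topic, which would explain a producer send to price-publisher-price-events-changelog that does not appear anywhere in our code.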

Thank you for the help,
Yiannis
