I have a small test setup with a local ZooKeeper/Kafka server and a Kafka Streams app that loads sample data. The setup is usually up for a day or two before a new build goes out, at which point it's blown away and rebuilt from scratch.
Lately I've seen that after a few hours the Streams app stops processing and starts spamming the logs with:

    org.apache.kafka.clients.consumer.internals.Fetcher: Fetch Offset 0 is out of range for partition foo-0, resetting offset
    org.apache.kafka.clients.consumer.internals.Fetcher: Fetch Offset 0 is out of range for partition foo-0, resetting offset
    org.apache.kafka.clients.consumer.internals.Fetcher: Fetch Offset 0 is out of range for partition foo-0, resetting offset

It pretty much sinks a whole core into spamming the logs, and restarting the application puts it right back into the same broken state.

I thought it was caused by this: https://issues.apache.org/jira/browse/KAFKA-5510, so I set log.retention.hours=48 and offsets.retention.minutes=10081, which is huge compared to how long the data is actually kept. The same error still occurred. Any ideas?
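For reference, these are the retention settings I changed, set in server.properties on the single local broker (the comments just restate my reasoning above):

    # server.properties on the local test broker
    # keep log segments well past the 1-2 day lifetime of the whole test setup
    log.retention.hours=48
    # keep committed consumer offsets for ~7 days, far longer than the data itself
    offsets.retention.minutes=10081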