[
https://issues.apache.org/jira/browse/KAFKA-8922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Raman Gupta resolved KAFKA-8922.
--------------------------------
Resolution: Invalid
Closing as the error had nothing to do with Streams -- just general broker
unavailability, which the client reported with a poor error message. Still
don't know why the brokers were unavailable but, hey, that's Kafka!
> Failed to get end offsets for topic partitions of global store
> --------------------------------------------------------------
>
> Key: KAFKA-8922
> URL: https://issues.apache.org/jira/browse/KAFKA-8922
> Project: Kafka
> Issue Type: Bug
> Reporter: Raman Gupta
> Priority: Major
>
> I have a Kafka stream that fails with this error on startup every time:
> {code}
> org.apache.kafka.streams.errors.StreamsException: Failed to get end offsets for topic partitions of global store test-uiService-dlq-events-table-store after 0 retry attempts. You can increase the number of retries via configuration parameter `retries`.
> 	at org.apache.kafka.streams.processor.internals.GlobalStateManagerImpl.register(GlobalStateManagerImpl.java:186) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.processor.internals.AbstractProcessorContext.register(AbstractProcessorContext.java:101) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:207) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.state.internals.KeyValueToTimestampedKeyValueByteStoreAdapter.init(KeyValueToTimestampedKeyValueByteStoreAdapter.java:87) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:58) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.state.internals.WrappedStateStore.init(WrappedStateStore.java:48) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore.java:112) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.processor.internals.GlobalStateManagerImpl.initialize(GlobalStateManagerImpl.java:123) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.processor.internals.GlobalStateUpdateTask.initialize(GlobalStateUpdateTask.java:61) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.processor.internals.GlobalStreamThread$StateConsumer.initialize(GlobalStreamThread.java:229) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.processor.internals.GlobalStreamThread.initialize(GlobalStreamThread.java:345) ~[kafka-streams-2.3.0.jar:?]
> 	at org.apache.kafka.streams.processor.internals.GlobalStreamThread.run(GlobalStreamThread.java:270) ~[kafka-streams-2.3.0.jar:?]
> Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to get offsets by times in 30001ms
> {code}
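> As a side note on the error message's suggestion: below is a minimal sketch of
> raising the retry settings on the Streams configuration. The `retries` and
> `retry.backoff.ms` keys are the standard StreamsConfig properties in 2.3.0; the
> application id and bootstrap servers are placeholders, not taken from this issue.
> {code:java}
> import java.util.Properties;
>
> import org.apache.kafka.streams.StreamsConfig;
>
> public class StreamsRetryConfigSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         // Placeholder application id and bootstrap servers -- not from this issue.
>         props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
>         props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         // Allow more attempts (and a longer pause between them) when fetching the
>         // end offsets for global store restoration while brokers are flaky.
>         props.put(StreamsConfig.RETRIES_CONFIG, 10);
>         props.put(StreamsConfig.RETRY_BACKOFF_MS_CONFIG, 1000L);
>         // These props would then be passed to new KafkaStreams(topology, props).
>     }
> }
> {code}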
> The stream was working fine and then this started happening.
> The stream now throws this error on every start. I am now going to attempt to
> reset the stream and delete its local state.
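> For reference, a minimal sketch of the local-state cleanup part of that reset,
> assuming a standard KafkaStreams setup (application id, bootstrap servers, and
> topology are placeholders); resetting committed offsets and internal topics
> would still go through the kafka-streams-application-reset tool.
> {code:java}
> import java.util.Properties;
>
> import org.apache.kafka.streams.KafkaStreams;
> import org.apache.kafka.streams.StreamsBuilder;
> import org.apache.kafka.streams.StreamsConfig;
>
> public class StreamsLocalStateResetSketch {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
>         props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
>
>         StreamsBuilder builder = new StreamsBuilder();
>         builder.stream("input-topic"); // placeholder topology; real topology elided
>
>         KafkaStreams streams = new KafkaStreams(builder.build(), props);
>         // cleanUp() deletes this application's local state directory (RocksDB files,
>         // including global store state) and may only be called while the instance
>         // is not running, i.e. before start() or after close().
>         streams.cleanUp();
>         streams.start();
>     }
> }
> {code}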
> I hate to say it, but Kafka Streams sucks. It's problem after problem.
> UPDATE: Some more info: the brokers appear to have gotten into some kind of
> crazy state, for an unknown reason, and are now repeatedly shrinking and
> expanding ISRs. Still trying to figure out the root cause.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)