I'm getting an OffsetOutOfRangeException accompanied by the log message
"Updating global state failed. You can restart KafkaStreams to recover from
this error." But I've restarted the app several times and it's not
recovering; it keeps failing the same way.
Is this error message just wrong (and
Thanks Bruno, filed https://issues.apache.org/jira/browse/KAFKA-9127 .
On Wed, Oct 30, 2019 at 2:06 AM Bruno Cadonna wrote:
> Hi Chris,
>
> Thank you for the clarification. Now I see what you mean. If your
> topology works correctly, I would not file it as a bug but as a
> possible improvement.
Hi Matthias,
Some additional information: after I restarted the app, it went into
endless rebalancing. The join rate looks like the attachment below. It
basically rebalanced every 5 minutes. I checked each node's logs and
found the warning below:
On node A:
2019/10/31 10:13:46 | 2019-10-31 10:13:46,543
Thanks a lot for all the suggestions!
Streams API is not really what I need, but I looked into Streams sources
and found that it initializes transactional Producers in
ConsumerRebalanceListener.onPartitionsAssigned which is called in
KafkaConsumer.poll before fetching records.
Looks like this
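A minimal, self-contained sketch of the pattern described above — lazily creating one transactional producer per assigned partition from the rebalance callback. The types below are stand-ins I introduce for illustration; a real implementation would use `org.apache.kafka.clients.consumer.ConsumerRebalanceListener`, `org.apache.kafka.common.TopicPartition`, and a `KafkaProducer` configured with `transactional.id` on which `initTransactions()` has been called:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: models the rebalance-listener pattern with stand-in types.
// The real code would implement ConsumerRebalanceListener and create
// transactional KafkaProducer instances instead of TxnProducer.
public class TxnProducerPool {
    // Stand-in for org.apache.kafka.common.TopicPartition.
    record Partition(String topic, int id) {}

    // Stand-in for a transactional producer (transactional.id set,
    // initTransactions() already called).
    static class TxnProducer {
        final String transactionalId;
        TxnProducer(String transactionalId) { this.transactionalId = transactionalId; }
    }

    private final Map<Partition, TxnProducer> producers = new HashMap<>();

    // Analogue of ConsumerRebalanceListener.onPartitionsAssigned:
    // lazily create one producer per newly assigned partition.
    public void onPartitionsAssigned(List<Partition> assigned) {
        for (Partition p : assigned) {
            producers.computeIfAbsent(
                p, tp -> new TxnProducer(tp.topic() + "-" + tp.id()));
        }
    }

    // Analogue of onPartitionsRevoked: drop (and in real code, close)
    // producers for partitions we no longer own.
    public void onPartitionsRevoked(List<Partition> revoked) {
        for (Partition p : revoked) {
            producers.remove(p);
        }
    }

    public int size() { return producers.size(); }
}
```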
Hi, I'm sorry for the confusion.
My question should be: are there limitations on what you can put in this
configuration map?
Could you put objects with state in the map for instance? Will this map be
serialized ?
Thanks
On Thu, Oct 31, 2019, at 10:35, Matthias J. Sax wrote:
> Well, the
SOLVED
@Jose
>If so, then review the SSL conf related to that.
It turned out that in the SSL configuration file, the `extendedKeyUsage`
attribute was set to "serverAuth". So I extended it to "serverAuth,
clientAuth", which solved the problem. At the moment everything seems to
work as intended. The
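For reference, the relevant piece of an OpenSSL x509v3 extension section would look something like this (the section name is illustrative; what matters is that a certificate used for both server and client authentication carries both usages):

```
[ v3_req ]
# A cert presented by brokers AND used for client auth needs both:
extendedKeyUsage = serverAuth, clientAuth
```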
Hi Matthias,
When I redeploy the application with the same application id, it
causes a rebalance loop: partition revoked -> rebalance -> offset
reset to zero -> partition assigned -> partition revoked.
The app was running well before the redeployment, but once redeployed,
it will keep
Quite a project to test transactions...
The current system test suite is part of the code base:
https://github.com/apache/kafka/tree/trunk/tests/kafkatest/tests
There are of course also some unit/integration tests for transactions.
There is also a blog post that describes at a high level what
Well, the `configure()` method is there to configure the serde :)
Hence, it might be a good place to instantiate the state of the serde,
i.e., to create a `SchemaStore` instance.
For the passed-in map: the full `StreamsConfig` will be passed into it,
hence you can add any configuration you want
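A minimal sketch of that idea. The class below uses a stand-in for the real `org.apache.kafka.common.serialization.Serde` interface (whose `configure(Map<String, ?>, boolean)` receives the config map in-process), and the `SchemaStore` type and `schema.store.url` key are hypothetical names for illustration:

```java
import java.util.Map;

// Sketch only: the real interface is org.apache.kafka.common.serialization.Serde;
// only the configure(...) hook is modeled here.
public class SchemaStoreSerde {
    // Hypothetical stateful helper, created from configuration.
    static class SchemaStore {
        final String url;
        SchemaStore(String url) { this.url = url; }
    }

    private SchemaStore store;

    // Analogue of Serde#configure: a good place to instantiate the
    // serde's state from plain config values in the map.
    public void configure(Map<String, ?> configs, boolean isKey) {
        Object url = configs.get("schema.store.url"); // hypothetical key
        this.store = new SchemaStore(
            url == null ? "http://localhost:8081" : url.toString());
    }

    public SchemaStore store() { return store; }
}
```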
Bug fix releases are done "on demand" and there is no strict policy
against doing a bug fix release for older versions.
However, in practice bug fix releases for older versions don't happen
very often. The release plan wiki page gives a good overview of
past releases:
Btw: There is an example implementation of a custom daily window that
considers time zones. Maybe it helps:
https://github.com/confluentinc/kafka-streams-examples/blob/5.3.1-post/src/test/java/io/confluent/examples/streams/window/DailyTimeWindows.java
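The core of such a time-zone-aware daily window is mapping an event timestamp to the start and end of its day in a given zone. A minimal sketch of just that computation with `java.time`, independent of the Streams `Windows` API that the linked example implements:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Sketch only: computes the [start, end) bounds of the daily window
// containing a record timestamp, interpreted in a given time zone.
public class DailyWindowBounds {
    public static long windowStartMs(long timestampMs, ZoneId zone) {
        ZonedDateTime zdt = Instant.ofEpochMilli(timestampMs).atZone(zone);
        // Start of the local day; atStartOfDay(zone) also handles DST
        // days that are not exactly 24 hours long.
        return zdt.toLocalDate().atStartOfDay(zone).toInstant().toEpochMilli();
    }

    public static long windowEndMs(long timestampMs, ZoneId zone) {
        LocalDate day = Instant.ofEpochMilli(timestampMs).atZone(zone).toLocalDate();
        return day.plusDays(1).atStartOfDay(zone).toInstant().toEpochMilli();
    }
}
```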
-Matthias
On 10/23/19 10:02 PM, 董宗桢 wrote:
I would recommend reading the Kafka Streams KIP about EOS:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-129%3A+Streams+Exactly-Once+Semantics
Fencing is the most critical part in the implementation. Kafka Streams
basically uses a dedicated `transactional.id` per input topic-partition.
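The key property behind fencing is that the id is deterministic: whoever takes over a partition derives the same `transactional.id`, and calling `initTransactions()` on a producer with that id bumps the epoch and fences any zombie instance still using it. The helper below is an illustrative sketch of that mapping only; the exact naming scheme Streams uses internally (derived from the application id and task) is not reproduced here:

```java
// Sketch only: illustrates the "one transactional.id per input
// topic-partition" idea. The naming scheme is an assumption for
// illustration, not Kafka Streams' internal format.
public class TxnIds {
    public static String transactionalIdFor(String appId, String topic, int partition) {
        // Deterministic: the same partition always maps to the same id,
        // so after a rebalance the new owner reuses it and, via
        // producer.initTransactions(), fences the previous owner.
        return appId + "-" + topic + "-" + partition;
    }
}
```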