Thanks Ivan! LGTM
On Fri, Apr 12, 2024, 13:38 Ivan Yurchenko wrote:
> Hi Chris and all,
>
> Thank you for your feedback. Your proposals seem good to me. I made these
> changes to the KIP; please have a look at the change [1]
>
> Best,
> Ivan
>
> [1]
>
Hi Chris and all,
Thank you for your feedback. Your proposals seem good to me. I made these
changes to the KIP; please have a look at the change [1]
Best,
Ivan
[1]
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=240881396&selectedPageVersions=14&selectedPageVersions=12
On Thu, Apr 11, 2024, at 10:49, Chris
Hi Ivan,
I agree with Andrew that we can save cluster ID checking for later. This
feature is opt-in, and if necessary we can add a note advising users to
enable it only if they can be certain that the same cluster will always be
resolved by the bootstrap servers. This would apply regardless of
Hi Andrew and all,
I did the mentioned change.
As there are no more comments, I'm letting this sit for a week or so, and
then I will call for the vote.
Best,
Ivan
On Tue, Apr 9, 2024, at 23:51, Andrew Schofield wrote:
> Hi Ivan,
> I think you have to go one way or the other with the cluster
Hi Ivan,
I think you have to go one way or the other with the cluster ID, so I think
removing that from this KIP might be best. I think there’s another KIP
waiting to be written for ensuring
consistency of clusters, but
I think that wouldn’t conflict at all with this one.
Thanks,
Andrew
Hi Andrew and all,
I looked deeper into the code [1] and it seems the Metadata class is OK with
the cluster ID changing. So I'm thinking that rebootstrapping shouldn't
introduce a new failure mode here, and I should remove the mention of the
cluster ID check from the KIP.
Best,
Ivan
[1]
Hi Ivan,
Thanks for the KIP. I can see situations in which this would be helpful. I have
one question.
The KIP says the client checks the cluster ID when it re-bootstraps and that it
will fail if the
cluster ID doesn’t match the previously known one. How does it fail? Which
exception does
it
Hello!
I changed the KIP a bit, specifying that the clearest benefit goes to
consumers not participating in a group, but that other clients can benefit
as well in certain situations.
You can see the changes in the history [1]
Thank you!
Ivan
[1]
Hello!
I've made several changes to the KIP based on the comments:
1. Reduced the scope to producer and consumer clients only.
2. Added more details to the description of the rebootstrap process.
3. Documented the role of low values of reconnect.backoff.max.ms in
preventing rebootstrapping (see the sketch after this list).
4.
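To illustrate item 3: a hypothetical snippet of the relevant client settings
(the config names are real, the values are placeholders, not
recommendations). As I understand it, with a very low
reconnect.backoff.max.ms the client keeps retrying the known brokers so
quickly that the "none of the known nodes are available" condition that
triggers rebootstrapping may never be reached:

    # illustrative values only
    reconnect.backoff.ms=50
    # a very low cap here can effectively prevent rebootstrapping:
    reconnect.backoff.max.ms=1000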
Hi Chris and all,
> I believe the logic you've linked is only applicable for the producer and
> consumer clients; the admin client does something different (see [1]).
I see, thank you for the pointer. It seems the admin client is fairly
different from the producer and consumer. Probably it makes
Hi Ivan,
I believe the logic you've linked is only applicable for the producer and
consumer clients; the admin client does something different (see [1]).
Either way, it'd be nice to have a definition of when re-bootstrapping
would occur that doesn't rely on internal implementation details. What
Hi Chris,
Thank you for your question. As part of various lifecycle phases
(including node disconnect), NetworkClient can request a metadata update
eagerly (via the `Metadata.requestUpdate` method), which results in
`MetadataUpdater.maybeUpdate` being called during the next poll. Inside, it has
a way to
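For readers less familiar with these internals, here is a much-simplified
sketch of the eager-update pattern described above. This is not the actual
Kafka code; the class and field names are invented for illustration:

    // Invented names; a sketch of the pattern, not Kafka's internals.
    class SketchMetadata {
        private volatile boolean updateRequested = false;

        // Called eagerly, e.g. when a node disconnects.
        void requestUpdate() { updateRequested = true; }

        boolean updateRequested() { return updateRequested; }
    }

    class SketchMetadataUpdater {
        private final SketchMetadata metadata = new SketchMetadata();

        // Driven from the client's poll loop.
        void maybeUpdate(long nowMs) {
            if (metadata.updateRequested()) {
                // choose a node and send a MetadataRequest here
            }
        }
    }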
Hi Ivan,
I'm not very familiar with the client side of things, but the proposal
seems reasonable.
I like the flexibility of the "metadata.recovery.strategy" property as a
string instead of, e.g., a "rebootstrap.enabled" boolean. We may want to
adopt a different approach in the future, like the
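For what it's worth, a sketch of the two shapes being compared (the boolean
is hypothetical, shown only for contrast):

    # string-valued, extensible to future strategies:
    metadata.recovery.strategy=rebootstrap
    # vs. a hypothetical boolean, which would be harder to extend:
    # rebootstrap.enabled=true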
Hi!
There doesn't seem to be much more discussion going on, so I'm planning to
start the vote in a couple of days.
Thanks,
Ivan
On Wed, 18 Jan 2023 at 12:06, Ivan Yurchenko
wrote:
> Hello!
> I would like to start the discussion thread on KIP-899: Allow clients to
> rebootstrap.
> This KIP proposes
Hi Philip and all,
> if you don't use the client for a long time, why can't you just close the
client and re-instantiate a new one when needed? I'm not familiar with the
stream thread, so I don't know if that's possible.
Yes, it's always possible to recreate the client (I think it's the main
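For context, the recreate-the-client workaround looks roughly like this
(a minimal sketch; the broker addresses, group, and topic are placeholders):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class RecreateClientSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
            props.put("group.id", "example-group");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(List.of("example-topic"));
            consumer.poll(Duration.ofMillis(100));

            // After a long idle period: close and recreate. The new instance
            // resolves bootstrap.servers again, i.e. it re-bootstraps.
            consumer.close();
            consumer = new KafkaConsumer<>(props);
            consumer.subscribe(List.of("example-topic"));
            consumer.close();
        }
    }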
Hi Christo and all,
> Currently a long-running client refreshes its metadata from a set of
brokers obtained when first contacting the cluster. If it has been
“away” for too long, those brokers might have all changed, and upon trying to
refresh the metadata the client will fail because it
Hey Ivan,
Thanks for the KIP. Some questions for clarification: It seems like the
main problem is that if we don't poll frequently enough, the cluster
topology can change entirely before the metadata is refreshed, thus
causing stale clients. My question is: if you don't use the client for a
Hello!
Thank you for the KIP. I would like to summarise my understanding of the
problem in case I am wrong.
Currently a long-running client refreshes its metadata from a set of brokers
obtained when first contacting the cluster. If it has been “away” for too
long, those brokers might have
Hello!
I would like to start the discussion thread on KIP-899: Allow clients to
rebootstrap.
This KIP proposes to allow Kafka clients to repeat the bootstrap process
when fetching metadata if none of the known nodes are available.
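With the metadata.recovery.strategy property discussed elsewhere in this
thread, opting in would look roughly like this (a sketch; broker addresses
are placeholders):

    bootstrap.servers=broker1:9092,broker2:9092
    metadata.recovery.strategy=rebootstrap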