Hi,
Thanks for the KIP, Mayank.

I have a question about José’s comment (2). I can see that it’s possible for
multiple partitions to change leadership to the same broker/node, and it’s
wasteful to repeat all of the connection information for each topic-partition.
But I think it’s important to know which partitions are now led by which node,
so that information at least needs to be per-partition. I may have
misunderstood, but it sounded like your suggestion lost that relationship.
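
One way to reconcile the two: keep a small per-partition reference to the new
leader (its ID and epoch, say) and deduplicate the endpoint details into the
top-level array. A rough sketch, reusing the NodeEndpoints field names from
José’s suggestion below (the CurrentLeader shape here is illustrative, not
taken from the KIP):

    { "name": "CurrentLeader", "type": "LeaderIdAndEpoch",
      "versions": "15+", "taggedVersions": "15+", "tag": 1,
      "about": "The current leader of the partition.", "fields": [
        { "name": "LeaderId", "type": "int32", "versions": "15+",
          "about": "The replica ID of the current leader, matching a ReplicaId in NodeEndpoints." },
        { "name": "LeaderEpoch", "type": "int32", "versions": "15+",
          "about": "The epoch of the current leader." }
    ]},

That keeps the partition-to-leader mapping per partition, while the
host/port/rack information appears only once per node.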

I also have a comment.

6. I believe the version number of an RPC has sometimes been incremented
to reflect a change in behaviour even when there’s been no change in
structure. Given that this is likely to arrive in a new version of Kafka,
and returning this optional information is new behaviour, I wonder whether
it’s worthwhile to increment the version of the Fetch RPC anyway.
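
For illustration only (the version numbers here are hypothetical), a bump
would mean raising the message’s validVersions and gating the new tagged
field on the new version:

    "validVersions": "0-16",
    ...
    { "name": "NodeEndpoints", "type": "[]NodeEndpoint",
      "versions": "16+", "taggedVersions": "16+", "tag": 3, ... }

A tagged field doesn’t strictly require a bump, since older clients ignore
unknown tags, but bumping gives clients a definite way to know the broker
can return the new information.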

Thanks,
Andrew

> On 13 Jul 2023, at 16:20, José Armando García Sancio 
> <jsan...@confluent.io.INVALID> wrote:
> 
> Hi Mayank, thanks for the KIP. I look forward to this improvement for
> new clients.
> 
> Some comments below.
> 
> On Thu, Jul 13, 2023 at 7:15 AM Mayank Shekhar Narula
> <mayanks.nar...@gmail.com> wrote:
>> Following KIP is up for discussion. Thanks for your feedback
> 
> Regarding the FETCH response changes:
> 1. Tagged field 2 already exists. Looks like you can use tagged field 3.
> 
> 2. The CurrentLeaderBroker should not be within PartitionData. It is
> possible to have multiple partitions that would change leadership to
> the same broker/node. We should instead move that information to a
> top-level field that is an array of (replica id, host, port, rack).
> 
> 3. In the future, I may use this information in the KRaft/Metadata
> implementation of FETCH. In that implementation not all of the
> replicas are brokers. Do you mind removing all references to the word
> broker in the descriptions and field names? Maybe you can use the word
> replica instead. How about something like this:
>    { "name": "NodeEndpoints", "type": "[]NodeEndpoint", "versions":
> "15+", "taggedVersions": "15+", "tag": 3,
>      "about": "Endpoint information for all current leaders
> enumerated in PartitionData.", "fields": [
>          { "name": "ReplicaId", "type": "int32", "versions": "15+",
> "mapKey": true, "entityType": "brokerId",
>            "about": "The ID of the associated replica"},
>          { "name": "Host", "type": "string", "versions": "15+",
>            "about": "The replica's hostname." },
>          { "name": "Port", "type": "int32", "versions": "15+",
>            "about": "The replica's port." },
>          { "name": "Rack", "type": "string", "versions": "15+",
> "ignorable": true, "default": "null",
>            "about": "The rack of the replica, or null if it has not
> been assigned to a rack." }
>      ]},
> 
> Regarding the PRODUCE response changes:
> 4. Can we make changes similar to the ones mentioned in bullet points
> 2 and 3 above?
> 
> 5. If you make the changes enumerated in bullet point 4, you'll
> probably want to change the tags so that NodeEndpoint has tag 0 while
> CurrentLeader has tag 1.
> 
> Thanks!
> -- 
> -José
