Hi Andrew,

Good point.

Sorry to be a dim bulb, but I’m still not sure I understand the downsides of 
bumping the version. The broker and all client implementations would have to 
change to fully support this feature anyway, and down-version clients are 
already handled by brokers.
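To make my assumption concrete, here is a simplified model (illustrative only, not the actual broker code) of Kafka-style API version negotiation: each side advertises a supported range, the highest common version wins, and a down-version client simply never sees fields introduced at a higher version. The bumped version number 16 is an assumption for illustration.

```python
# Simplified model of Kafka-style API version negotiation (illustrative
# only; not the actual broker code). A down-version client negotiates a
# lower version, and the broker serializes the response without the new
# fields, so old clients keep working unchanged.

def negotiate_version(client_range, broker_range):
    """Pick the highest version supported by both sides, or None."""
    low = max(client_range[0], broker_range[0])
    high = min(client_range[1], broker_range[1])
    return high if low <= high else None

def build_fetch_response(version, leader_endpoints):
    """Only include the hypothetical new field at the bumped version."""
    response = {"version": version, "partitions": []}
    if version >= 16:  # assumed bumped FetchResponse version
        response["node_endpoints"] = leader_endpoints
    return response

# An up-to-date client gets the new field; an old client does not.
new_client = negotiate_version((0, 16), (0, 16))   # -> 16
old_client = negotiate_version((0, 12), (0, 16))   # -> 12
print(build_fetch_response(new_client, [{"node_id": 1}]))
print(build_fetch_response(old_client, [{"node_id": 1}]))
```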

Thanks,
Kirk

> On Jul 13, 2023, at 10:30 AM, Andrew Schofield 
> <andrew_schofield_j...@outlook.com> wrote:
> 
> Hi Mayank,
> If we bump the version, the broker can tell whether it’s worth providing the 
> leader
> endpoint information to the client when the leader has changed. That’s my 
> reasoning.
> 
> Thanks,
> Andrew
> 
>> On 13 Jul 2023, at 18:02, Mayank Shekhar Narula <mayanks.nar...@gmail.com> 
>> wrote:
>> 
>> Thanks both for looking into this.
>> 
>> Jose,
>> 
>> 1, 2, 4 (changes for PRODUCE), and 5 make sense; will follow.
>> 
>> 3. If I understood this correctly, certain replicas "aren't" brokers; what
>> are they then?
>> 
>> Also, how about replacing "Replica" with "Leader"? That is more readable
>> on the client side. So, how about this?
>>   { "name": "LeaderEndpoints", "type": "[]Leader", "versions": "15+",
>>     "taggedVersions": "15+", "tag": 3,
>>     "about": "Endpoints for all current leaders enumerated in PartitionData.",
>>     "fields": [
>>       { "name": "NodeId", "type": "int32", "versions": "15+",
>>         "mapKey": true, "entityType": "brokerId",
>>         "about": "The ID of the associated leader." },
>>       { "name": "Host", "type": "string", "versions": "15+",
>>         "about": "The leader's hostname." },
>>       { "name": "Port", "type": "int32", "versions": "15+",
>>         "about": "The leader's port." },
>>       { "name": "Rack", "type": "string", "versions": "15+",
>>         "ignorable": true, "default": "null",
>>         "about": "The rack of the leader, or null if it has not been assigned to a rack." }
>>   ]}
>> 
>> Andrew
>> 
>> 6. I wonder if non-Kafka clients might benefit from not bumping the
>> version. If versions are bumped, say for FetchResponse to 16, I believe
>> a client would have to support all versions up to 16 to fully utilise
>> this feature. Whereas, if not bumped, clients only need to support up to
>> version 12 (I will change this to version 12, which introduced tagged
>> fields), and non-AK clients can then implement this feature. What do you
>> think? I am inclined not to bump.
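The reason tagged fields avoid a version bump is that decoders skip tags they do not recognize. A toy model of that behavior (illustrative only; the real Kafka wire format is a compact binary encoding with varint tags and sizes, and the tag numbers here are assumptions):

```python
# Toy model of tagged-field decoding (illustrative; not the actual Kafka
# binary protocol). The point: a client that only understands version 12
# skips tags it does not know instead of failing, so new tagged fields
# such as a hypothetical LeaderEndpoints (tag 3) need no version bump.

KNOWN_TAGS = {0: "cluster_id"}  # tags this (older) client understands

def decode_tagged_fields(fields):
    """fields: list of (tag, value) pairs. Unknown tags are skipped."""
    decoded = {}
    for tag, value in fields:
        if tag in KNOWN_TAGS:
            decoded[KNOWN_TAGS[tag]] = value
        # Unknown tag (e.g. tag 3 = LeaderEndpoints): silently skipped.
    return decoded

wire = [(0, "my-cluster"),
        (3, [{"node_id": 1, "host": "b1", "port": 9092}])]
print(decode_tagged_fields(wire))  # -> {'cluster_id': 'my-cluster'}
```

A newer client would simply add tag 3 to its known set and start consuming the endpoint data, with no protocol version change on either side.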
>> 
>> On Thu, Jul 13, 2023 at 5:21 PM Andrew Schofield <
>> andrew_schofield_j...@outlook.com> wrote:
>> 
>>> Hi José,
>>> Thanks. Sounds good.
>>> 
>>> Andrew
>>> 
>>>> On 13 Jul 2023, at 16:45, José Armando García Sancio
>>> <jsan...@confluent.io.INVALID> wrote:
>>>> 
>>>> Hi Andrew,
>>>> 
>>>> On Thu, Jul 13, 2023 at 8:35 AM Andrew Schofield
>>>> <andrew_schofield_j...@outlook.com> wrote:
>>>>> I have a question about José’s comment (2). I can see that it’s
>>>>> possible for multiple partitions to change leadership to the same
>>>>> broker/node, and it’s wasteful to repeat all of the connection
>>>>> information for each topic-partition. But I think it’s important to
>>>>> know which partitions are now led by which node. That information
>>>>> at least needs to be per-partition, I think. I may have
>>>>> misunderstood, but it sounded like your suggestion lost that
>>>>> relationship.
>>>> 
>>>> Each partition in both the FETCH response and the PRODUCE response
>>>> will have a CurrentLeader: the tuple of leader id and leader epoch.
>>>> Clients can use this information to update their partition-to-leader
>>>> id and leader-epoch mapping.
>>>> 
>>>> They can also use the NodeEndpoints to update their mapping from
>>>> replica id to the tuple of host, port and rack, so that they can
>>>> connect to the correct node for future FETCH and PRODUCE requests.
>>>> 
>>>> Thanks,
>>>> --
>>>> -José
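The client-side bookkeeping José describes can be sketched roughly as follows. This is a simplified, illustrative model, not the actual Java client code; the field names CurrentLeader and NodeEndpoints come from the discussion above, while the class and method names are assumptions.

```python
# Simplified model of the client-side metadata updates described above
# (illustrative only; data structures and names are assumptions).

class ClientMetadata:
    def __init__(self):
        self.partition_leaders = {}  # (topic, partition) -> (leader_id, epoch)
        self.node_endpoints = {}     # node_id -> (host, port, rack)

    def on_response(self, partitions, node_endpoints):
        # 1) Per-partition CurrentLeader updates the leader mapping,
        #    ignoring stale updates with an older leader epoch.
        for (topic, part), (leader_id, epoch) in partitions.items():
            current = self.partition_leaders.get((topic, part))
            if current is None or epoch > current[1]:
                self.partition_leaders[(topic, part)] = (leader_id, epoch)
        # 2) NodeEndpoints updates the node-id -> endpoint mapping so the
        #    next FETCH/PRODUCE can go straight to the new leader.
        for node_id, endpoint in node_endpoints.items():
            self.node_endpoints[node_id] = endpoint

    def endpoint_for(self, topic, part):
        leader_id, _ = self.partition_leaders[(topic, part)]
        return self.node_endpoints.get(leader_id)

md = ClientMetadata()
md.on_response(
    partitions={("orders", 0): (2, 7)},
    node_endpoints={2: ("broker2.example", 9092, "rack-b")},
)
print(md.endpoint_for("orders", 0))  # -> ('broker2.example', 9092, 'rack-b')
```

Note that keeping the per-partition CurrentLeader separate from the shared endpoint map is what avoids repeating connection information for every topic-partition while preserving the partition-to-node relationship Andrew asked about.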
>>> 
>>> 
>> 
>> -- 
>> Regards,
>> Mayank Shekhar Narula
> 
