Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2160

2023-08-31 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 409765 lines...]

[Gradle test log truncated. Every test shown before the cutoff passed, including
the DefaultTaskManagerTest, RocksDBTimeOrderedKeyValueBytesStoreTest, and
RocksDBMetricsRecorderTest runs under :streams:test (Gradle Test Executor 86).]

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #14

2023-08-31 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2159

2023-08-31 Thread Apache Jenkins Server
See 




Re: Apache Kafka 3.6.0 release

2023-08-31 Thread Satish Duggana
Thanks Chris for bringing this issue here and filing the new JIRA for
3.6.0[1]. It seems to be a blocker for 3.6.0.

Please help review https://github.com/apache/kafka/pull/14314 as Chris
requested.

1. https://issues.apache.org/jira/browse/KAFKA-15425

~Satish.

On Fri, 1 Sept 2023 at 03:59, Chris Egerton  wrote:
>
> Hi all,
>
> Quick update: I've filed a separate ticket,
> https://issues.apache.org/jira/browse/KAFKA-15425, to track the behavior
> change in Admin::listOffsets. For the full history of the ticket, it's
> worth reading the comment thread on the old ticket at
> https://issues.apache.org/jira/browse/KAFKA-12879.
>
> I've also published https://github.com/apache/kafka/pull/14314 as a fairly
> lightweight PR to revert the behavior of Admin::listOffsets without also
> reverting the refactoring to use the internal admin driver API. Would
> appreciate a review on that if anyone can spare the cycles.
>
> Cheers,
>
> Chris
>
> On Wed, Aug 30, 2023 at 1:01 PM Chris Egerton  wrote:
>
> > Hi Satish,
> >
> > Wanted to let you know that KAFKA-12879 (
> > https://issues.apache.org/jira/browse/KAFKA-12879), a breaking change in
> > Admin::listOffsets, has been reintroduced into the code base. Since we
> > haven't yet published a release with this change (at least, not the more
> > recent instance of it), I was hoping we could treat it as a blocker for
> > 3.6.0. I'd also like to solicit the input of people familiar with the admin
> > client to weigh in on the Jira ticket about whether we should continue to
> > preserve the current behavior (if the consensus is that we should, I'm
> > happy to file a fix).
> >
> > Please let me know if you agree that this qualifies as a blocker. I plan
> > on publishing a potential fix sometime this week.
> >
> > Cheers,
> >
> > Chris
> >
> > On Wed, Aug 30, 2023 at 9:19 AM Satish Duggana 
> > wrote:
> >
> >> Hi,
> >> Please plan to continue merging pull requests associated with any
> >> outstanding minor features and stabilization changes to 3.6 branch
> >> before September 3rd. Kindly update the KIP's implementation status in
> >> the 3.6.0 release notes.
> >>
> >> Thanks,
> >> Satish.
> >>
> >> On Fri, 25 Aug 2023 at 21:37, Justine Olshan
> >>  wrote:
> >> >
> >> > Hey Satish,
> >> > Everything should be in 3.6, and I will update the release plan wiki.
> >> > Thanks!
> >> >
> >> > On Fri, Aug 25, 2023 at 4:08 AM Satish Duggana <
> >> satish.dugg...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi Justine,
> >> > > Adding KIP-890 part-1 to 3.6.0 seems reasonable to me. This part looks
> >> > > to be addressing a critical issue of consumers getting stuck. Please
> >> > > update the release plan wiki and merge all the required changes to 3.6
> >> > > branch.
> >> > >
> >> > > Thanks,
> >> > > Satish.
> >> > >
> >> > > On Thu, 24 Aug 2023 at 22:19, Justine Olshan
> >> > >  wrote:
> >> > > >
> >> > > > Hey Satish,
> >> > > > Does it make sense to include KIP-890 part 1? It prevents hanging
> >> > > > transactions for older clients. (An optimization and stronger EOS
> >> > > > guarantees will be included in part 2)
> >> > > >
> >> > > > Thanks,
> >> > > > Justine
> >> > > >
> >> > > > On Mon, Aug 21, 2023 at 3:29 AM Satish Duggana <
> >> satish.dugg...@gmail.com
> >> > > >
> >> > > > wrote:
> >> > > >
> >> > > > > Hi,
> >> > > > > 3.6 branch is created. Please make sure any PRs targeted for 3.6.0
> >> > > > > should be merged to 3.6 branch once those are merged to trunk.
> >> > > > >
> >> > > > > Thanks,
> >> > > > > Satish.
> >> > > > >
> >> > > > > On Wed, 16 Aug 2023 at 15:58, Satish Duggana <
> >> satish.dugg...@gmail.com
> >> > > >
> >> > > > > wrote:
> >> > > > > >
> >> > > > > > Hi,
> >> > > > > > Please plan to merge PRs(including the major features) targeted
> >> for
> >> > > > > > 3.6.0 by the end of Aug 20th UTC. Starting from August 21st,
> >> any pull
> >> > > > > > requests intended for the 3.6.0 release must include the changes
> >> > > > > > merged into the 3.6 branch as mentioned in the release plan.
> >> > > > > >
> >> > > > > > Thanks,
> >> > > > > > Satish.
> >> > > > > >
> >> > > > > > On Fri, 4 Aug 2023 at 18:39, Chris Egerton
> >> 
> >> > > > > wrote:
> >> > > > > > >
> >> > > > > > > Thanks for adding KIP-949, Satish!
> >> > > > > > >
> >> > > > > > > On Fri, Aug 4, 2023 at 7:06 AM Satish Duggana <
> >> > > > > satish.dugg...@gmail.com>
> >> > > > > > > wrote:
> >> > > > > > >
> >> > > > > > > > Hi,
> >> > > > > > > > Myself and Divij discussed and added the wiki for Kafka
> >> > > TieredStorage
> >> > > > > > > > Early Access Release[1]. If you have any comments or
> >> feedback,
> >> > > please
> >> > > > > > > > feel free to share them.
> >> > > > > > > >
> >> > > > > > > > 1.
> >> > > > > > > >
> >> > > > >
> >> > >
> >> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes
> >> > > > > > > >
> >> > > > > > > > Thanks,
> >> > > > > > > > Satish.
> >> > > > > > > >
> >> > > > > > > > On Fri, 4 Aug 2023 at 08:40, 

Re: [DISCUSS] KIP-966: Eligible Leader Replicas

2023-08-31 Thread Justine Olshan
Hey Calvin,

Thanks for the responses. I think I understood most of it, but I had a few
follow-up questions.

1. For the acks=1 case, I was wondering if there is any way to continue
with the current behavior (i.e., we only need one ack to produce to the log
and consider the request complete). My understanding is that we can also
consume from such topics at that point.
If users wanted this lower durability, could they set min.insync.replicas to
1?

2. For the case where we elect a leader that was unknowingly offline. Say
this replica was the only one in ELR. My understanding is that we would
promote it to ISR and remove it from ELR when it is the leader, but then we
would remove it from ISR and have no brokers in ISR or ELR. From there we
would need to do unclean recovery right?

3. Did we address the case where dynamically min isr is increased?

4. I think my comment was more about confusion on the KIP. It was not clear
to me that the section was describing points if one was done before the
other. But I now see the sentence explaining that. I think I skipped from
"delivery plan" to the bullet points.

Justine

On Thu, Aug 31, 2023 at 4:04 PM Calvin Liu 
wrote:

> Hi Justine
> Thanks for the questions!
>   *a. For my understanding, will we block replication? Or just the high
> watermark advancement?*
>   - The replication will not be blocked. The followers are free to
> replicate messages above the HWM. Only HWM advancement is blocked.
>
>   b. *Also in the acks=1 case, if folks want to continue the previous
> behavior, they also need to set min.insync.replicas to 1, correct?*
>   - If the clients only send acks=1 messages and minISR=2, the HWM behavior
> will only be different when there is 1 replica in the ISR. In this case,
> the min ISR does not do much in the current system. It is kind of a
> trade-off, but we think it is ok.
>
>   c. *The KIP seems to suggest that we remove from ELR when we start up
> again and notice we do not have the clean shutdown file. Is there a chance
> we have an offline broker in ELR that had an unclean shutdown that we elect
> as a leader before we get the chance to realize the shutdown was unclean?*
> *  - *The controller will only elect an unfenced(online) replica as the
> leader. If a broker has an unclean shutdown, it should register with the
> controller first (where it has to declare whether it was a clean/unclean
> shutdown) and then start to serve broker requests. So
>  1. If the broker has an unclean shutdown before the controller is
> aware that the replica is offline, then the broker can become the leader
> temporarily. But it can't serve any Fetch requests before it registers
> again, and that's when the controller will re-elect a leader.
>  2. If the controller knows the replica is offline(missing heartbeats
> from the broker for a while) before the broker re-registers, the broker
> can't be elected as a leader.
>
> d. *Would this be the case for strictly a smaller min ISR?*
> - Yes, only when we have a smaller min ISR. Once the leader is aware of the
> minISR change, the HWM can advance and make the current ELR obsolete. So
> the controller should clear the ELR if the ISR >= the new min ISR.
>
> e. *I thought we said the above "Last Leader” behavior can’t be maintained
> with an empty ISR and it should be removed."*
> - As the KIP is a big one, we have to consider delivering it in phases. If
> only the Unclean Recovery is delivered, we do not touch the ISR, and the
> ISR behavior will be the same as the current one. I am open to the proposal
> of directly starting unclean recovery if the last leader fails. Let's see
> if other folks hope to have more if Unclean Recovery delivers first.
>
> On Tue, Aug 29, 2023 at 4:53 PM Justine Olshan
> 
> wrote:
>
> > Hey Calvin,
> >
> > Thanks for the KIP. This will close some of the gaps in leader election!
> > I have a few questions:
> >
> > *>* *High Watermark can only advance if the ISR size is larger or equal
> > to min.insync.replicas*.
> >
> > For my understanding, will we block replication? Or just the high
> watermark
> > advancement?
> > Also in the acks=1 case, if folks want to continue the previous behavior,
> > they also need to set min.insync.replicas to 1, correct? It seems like
> this
> > change takes some control away from clients when it comes to durability
> vs
> > availability.
> >
> > *> *
> > *ELR + ISR size will not be dropped below the min ISR unless the
> controller
> > discovers an ELR member has an unclean shutdown. *
> > The KIP seems to suggest that we remove from ELR when we start up again
> and
> > notice we do not have the clean shutdown file. Is there a chance we have
> an
> > offline broker in ELR that had an unclean shutdown that we elect as a
> > leader before we get the chance to realize the shutdown was unclean?
> > This seems like it could cause some problems. I may have missed how we
> > avoid this scenario though.
> >
> > *> When updating the config **min.insync.replicas, *
> > *if the n

[jira] [Created] (KAFKA-15426) Process and persist directory assignments

2023-08-31 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-15426:
---

 Summary: Process and persist directory assignments
 Key: KAFKA-15426
 URL: https://issues.apache.org/jira/browse/KAFKA-15426
 Project: Kafka
  Issue Type: Sub-task
Reporter: Igor Soarez


* Handle AssignReplicasToDirsRequest
 * Persist metadata changes



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-966: Eligible Leader Replicas

2023-08-31 Thread Calvin Liu
Hi Justine
Thanks for the questions!
  *a. For my understanding, will we block replication? Or just the high
watermark advancement?*
  - The replication will not be blocked. The followers are free to
replicate messages above the HWM. Only HWM advancement is blocked.

  b. *Also in the acks=1 case, if folks want to continue the previous
behavior, they also need to set min.insync.replicas to 1, correct?*
  - If the clients only send acks=1 messages and minISR=2, the HWM behavior
will only be different when there is 1 replica in the ISR. In this case,
the min ISR does not do much in the current system. It is kind of a
trade-off, but we think it is ok.

  c. *The KIP seems to suggest that we remove from ELR when we start up
again and notice we do not have the clean shutdown file. Is there a chance
we have an offline broker in ELR that had an unclean shutdown that we elect
as a leader before we get the chance to realize the shutdown was unclean?*
*  - *The controller will only elect an unfenced(online) replica as the
leader. If a broker has an unclean shutdown, it should register with the
controller first (where it has to declare whether it was a clean/unclean
shutdown) and then start to serve broker requests. So
 1. If the broker has an unclean shutdown before the controller is
aware that the replica is offline, then the broker can become the leader
temporarily. But it can't serve any Fetch requests before it registers
again, and that's when the controller will re-elect a leader.
 2. If the controller knows the replica is offline(missing heartbeats
from the broker for a while) before the broker re-registers, the broker
can't be elected as a leader.

d. *Would this be the case for strictly a smaller min ISR?*
- Yes, only when we have a smaller min ISR. Once the leader is aware of the
minISR change, the HWM can advance and make the current ELR obsolete. So
the controller should clear the ELR if the ISR >= the new min ISR.

e. *I thought we said the above "Last Leader” behavior can’t be maintained
with an empty ISR and it should be removed."*
- As the KIP is a big one, we have to consider delivering it in phases. If
only the Unclean Recovery is delivered, we do not touch the ISR, and the
ISR behavior will be the same as the current one. I am open to the proposal
of directly starting unclean recovery if the last leader fails. Let's see
if other folks hope to have more if Unclean Recovery delivers first.
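The HWM rule from point (a) above can be sketched as follows. This is an illustrative model, not Kafka source; the class, method, and parameter names are all hypothetical:

```java
public class HwmRule {
    // Hypothetical sketch of point (a): the high watermark may only advance
    // when the ISR has at least min.insync.replicas members. Replication
    // itself is never blocked; followers keep fetching above the HWM.
    static long maybeAdvanceHwm(long currentHwm, long[] isrLogEndOffsets,
                                int minInsyncReplicas) {
        if (isrLogEndOffsets.length < minInsyncReplicas) {
            return currentHwm; // too few in-sync replicas: HWM stays put
        }
        long minLeo = Long.MAX_VALUE;
        for (long leo : isrLogEndOffsets) {
            minLeo = Math.min(minLeo, leo); // HWM is bounded by the slowest ISR member
        }
        return Math.max(currentHwm, minLeo);
    }

    public static void main(String[] args) {
        // ISR shrunk to one member with minISR=2: the HWM cannot advance.
        System.out.println(maybeAdvanceHwm(10, new long[] {15}, 2));     // 10
        // Two ISR members: the HWM advances to the smaller log end offset.
        System.out.println(maybeAdvanceHwm(10, new long[] {15, 12}, 2)); // 12
    }
}
```

With minISR=1 (the acks=1 question above), a single-member ISR still lets the HWM advance, which is why the behavioral difference only shows up when the ISR shrinks below the configured minimum.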

On Tue, Aug 29, 2023 at 4:53 PM Justine Olshan 
wrote:

> Hey Calvin,
>
> Thanks for the KIP. This will close some of the gaps in leader election! I
> have a few questions:
>
> *>* *High Watermark can only advance if the ISR size is larger or equal
> to min.insync.replicas*.
>
> For my understanding, will we block replication? Or just the high watermark
> advancement?
> Also in the acks=1 case, if folks want to continue the previous behavior,
> they also need to set min.insync.replicas to 1, correct? It seems like this
> change takes some control away from clients when it comes to durability vs
> availability.
>
> *> *
> *ELR + ISR size will not be dropped below the min ISR unless the controller
> discovers an ELR member has an unclean shutdown. *
> The KIP seems to suggest that we remove from ELR when we start up again and
> notice we do not have the clean shutdown file. Is there a chance we have an
> offline broker in ELR that had an unclean shutdown that we elect as a
> leader before we get the chance to realize the shutdown was unclean?
> This seems like it could cause some problems. I may have missed how we
> avoid this scenario though.
>
> *> When updating the config **min.insync.replicas, *
> *if the new min ISR <= current ISR, the ELR will be removed.*
> Would this be
> the case for strictly a smaller min ISR? I suppose if we increase the ISR,
> we can't reason about ELR. Can we reason about high water mark in this
> case--seems like we will have the broker out of ISR not in ISR or ELR?
> (Forgive me if we can't increase min ISR if the increase will put us under
> it)
>
> *> Unclean recovery. *
>
>- *The unclean leader election will be replaced by the unclean
> recovery.*
>- *unclean.leader.election.enable will only be replaced by
>the unclean.recovery.strategy after ELR is delivered.*
>- *As there is no change to the ISR, the "last known leader" behavior is
>maintained.*
>
> What does "last known leader behavior maintained" mean here? I thought we
> said *"*The above “*Last Leader” behavior can’t be maintained with an empty
> ISR and it should be removed." *My understanding is once metadata version
> is updated we will always take the more thoughtful unclean election process
> (ie, inspect the logs)
>
> Overall though, the general KIP is pretty solid. Looking at the rejected
> alternatives, it looks like a lot was considered, so it's nice to see the
> final proposal.
>
> Justine
>
> On Mon, Aug 14, 2023 at 8:50 AM Calvin Liu 
> wrote:
>
> >1. Yes, the new protocol requires 2 things to 

Re: Apache Kafka 3.6.0 release

2023-08-31 Thread Chris Egerton
Hi all,

Quick update: I've filed a separate ticket,
https://issues.apache.org/jira/browse/KAFKA-15425, to track the behavior
change in Admin::listOffsets. For the full history of the ticket, it's
worth reading the comment thread on the old ticket at
https://issues.apache.org/jira/browse/KAFKA-12879.

I've also published https://github.com/apache/kafka/pull/14314 as a fairly
lightweight PR to revert the behavior of Admin::listOffsets without also
reverting the refactoring to use the internal admin driver API. Would
appreciate a review on that if anyone can spare the cycles.

Cheers,

Chris

On Wed, Aug 30, 2023 at 1:01 PM Chris Egerton  wrote:

> Hi Satish,
>
> Wanted to let you know that KAFKA-12879 (
> https://issues.apache.org/jira/browse/KAFKA-12879), a breaking change in
> Admin::listOffsets, has been reintroduced into the code base. Since we
> haven't yet published a release with this change (at least, not the more
> recent instance of it), I was hoping we could treat it as a blocker for
> 3.6.0. I'd also like to solicit the input of people familiar with the admin
> client to weigh in on the Jira ticket about whether we should continue to
> preserve the current behavior (if the consensus is that we should, I'm
> happy to file a fix).
>
> Please let me know if you agree that this qualifies as a blocker. I plan
> on publishing a potential fix sometime this week.
>
> Cheers,
>
> Chris
>
> On Wed, Aug 30, 2023 at 9:19 AM Satish Duggana 
> wrote:
>
>> Hi,
>> Please plan to continue merging pull requests associated with any
>> outstanding minor features and stabilization changes to 3.6 branch
>> before September 3rd. Kindly update the KIP's implementation status in
>> the 3.6.0 release notes.
>>
>> Thanks,
>> Satish.
>>
>> On Fri, 25 Aug 2023 at 21:37, Justine Olshan
>>  wrote:
>> >
>> > Hey Satish,
>> > Everything should be in 3.6, and I will update the release plan wiki.
>> > Thanks!
>> >
>> > On Fri, Aug 25, 2023 at 4:08 AM Satish Duggana <
>> satish.dugg...@gmail.com>
>> > wrote:
>> >
>> > > Hi Justine,
>> > > Adding KIP-890 part-1 to 3.6.0 seems reasonable to me. This part looks
>> > > to be addressing a critical issue of consumers getting stuck. Please
>> > > update the release plan wiki and merge all the required changes to 3.6
>> > > branch.
>> > >
>> > > Thanks,
>> > > Satish.
>> > >
>> > > On Thu, 24 Aug 2023 at 22:19, Justine Olshan
>> > >  wrote:
>> > > >
>> > > > Hey Satish,
>> > > > Does it make sense to include KIP-890 part 1? It prevents hanging
>> > > > transactions for older clients. (An optimization and stronger EOS
>> > > > guarantees will be included in part 2)
>> > > >
>> > > > Thanks,
>> > > > Justine
>> > > >
>> > > > On Mon, Aug 21, 2023 at 3:29 AM Satish Duggana <
>> satish.dugg...@gmail.com
>> > > >
>> > > > wrote:
>> > > >
>> > > > > Hi,
>> > > > > 3.6 branch is created. Please make sure any PRs targeted for 3.6.0
>> > > > > should be merged to 3.6 branch once those are merged to trunk.
>> > > > >
>> > > > > Thanks,
>> > > > > Satish.
>> > > > >
>> > > > > On Wed, 16 Aug 2023 at 15:58, Satish Duggana <
>> satish.dugg...@gmail.com
>> > > >
>> > > > > wrote:
>> > > > > >
>> > > > > > Hi,
>> > > > > > Please plan to merge PRs(including the major features) targeted
>> for
>> > > > > > 3.6.0 by the end of Aug 20th UTC. Starting from August 21st,
>> any pull
>> > > > > > requests intended for the 3.6.0 release must include the changes
>> > > > > > merged into the 3.6 branch as mentioned in the release plan.
>> > > > > >
>> > > > > > Thanks,
>> > > > > > Satish.
>> > > > > >
>> > > > > > On Fri, 4 Aug 2023 at 18:39, Chris Egerton
>> 
>> > > > > wrote:
>> > > > > > >
>> > > > > > > Thanks for adding KIP-949, Satish!
>> > > > > > >
>> > > > > > > On Fri, Aug 4, 2023 at 7:06 AM Satish Duggana <
>> > > > > satish.dugg...@gmail.com>
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > > > Hi,
>> > > > > > > > Myself and Divij discussed and added the wiki for Kafka
>> > > TieredStorage
>> > > > > > > > Early Access Release[1]. If you have any comments or
>> feedback,
>> > > please
>> > > > > > > > feel free to share them.
>> > > > > > > >
>> > > > > > > > 1.
>> > > > > > > >
>> > > > >
>> > >
>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Tiered+Storage+Early+Access+Release+Notes
>> > > > > > > >
>> > > > > > > > Thanks,
>> > > > > > > > Satish.
>> > > > > > > >
>> > > > > > > > On Fri, 4 Aug 2023 at 08:40, Satish Duggana <
>> > > > > satish.dugg...@gmail.com>
>> > > > > > > > wrote:
>> > > > > > > > >
>> > > > > > > > > Hi Chris,
>> > > > > > > > > Thanks for the update. This looks to be a minor change
>> and is
>> > > also
>> > > > > > > > > useful for backward compatibility. I added it to the
>> release
>> > > plan
>> > > > > as
>> > > > > > > > > an exceptional case.
>> > > > > > > > >
>> > > > > > > > > ~Satish.
>> > > > > > > > >
>> > > > > > > > > On Thu, 3 Aug 2023 at 21:34, Chris Egerton
>> > > > > > > > >
>> > > > > > > > wrote:
>> > > > > > > > > >
>> > > > > > > > > > Hi

[jira] [Resolved] (KAFKA-12879) Compatibility break in Admin.listOffsets()

2023-08-31 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-12879.
---
Resolution: Fixed

> Compatibility break in Admin.listOffsets()
> --
>
> Key: KAFKA-12879
> URL: https://issues.apache.org/jira/browse/KAFKA-12879
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.8.0, 2.7.1, 2.6.2
>Reporter: Tom Bentley
>Assignee: Philip Nee
>Priority: Blocker
> Fix For: 2.5.2, 2.7.3, 2.6.4, 3.0.2, 3.1.1, 3.2.0, 2.8.2
>
>
> KAFKA-12339 incompatibly changed the semantics of Admin.listOffsets(). 
> Previously it would fail with {{UnknownTopicOrPartitionException}} when a 
> topic didn't exist. Now it will (eventually) fail with {{TimeoutException}}. 
> It seems this was more or less intentional, even though it would break code 
> which was expecting and handling the {{UnknownTopicOrPartitionException}}. A 
> workaround is to use {{retries=1}} and inspect the cause of the 
> {{TimeoutException}}, but this isn't really suitable for cases where the same 
> Admin client instance is being used for other calls where retries is 
> desirable.
> Furthermore as well as the intended effect on {{listOffsets()}} it seems that 
> the change could actually affect other methods of Admin.
> More generally, the Admin client API is vague about which exceptions can 
> propagate from which methods. This means that it's not possible to say, in 
> cases like this, whether the calling code _should_ have been relying on the 
> {{UnknownTopicOrPartitionException}} or not.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
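The workaround described in the ticket (retries=1, then inspecting the cause of the TimeoutException) amounts to walking the exception's cause chain. A minimal, self-contained sketch: the helper name is made up, and a local stand-in class replaces the real org.apache.kafka.common.errors.UnknownTopicOrPartitionException so the example runs without the Kafka client jar:

```java
public class CauseChainCheck {
    // Stand-in for org.apache.kafka.common.errors.UnknownTopicOrPartitionException,
    // used only so this sketch compiles without the Kafka client on the classpath.
    static class UnknownTopicOrPartitionException extends RuntimeException {}

    // Under the post-KAFKA-12339 semantics, a missing topic surfaces as a
    // TimeoutException wrapping the real error, so callers must inspect the
    // cause chain instead of catching UnknownTopicOrPartitionException directly.
    static boolean causedByMissingTopic(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof UnknownTopicOrPartitionException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Throwable timeoutStandIn = new RuntimeException(
                "TimeoutException stand-in", new UnknownTopicOrPartitionException());
        System.out.println(causedByMissingTopic(timeoutStandIn));      // true
        System.out.println(causedByMissingTopic(new RuntimeException())); // false
    }
}
```

As the ticket notes, this pattern forces retries=1 on the whole Admin instance, which is why it is unsuitable when the same client is shared with calls that do want retries.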


[jira] [Created] (KAFKA-15425) Compatibility break in Admin.listOffsets() (2)

2023-08-31 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-15425:
-

 Summary: Compatibility break in Admin.listOffsets() (2)
 Key: KAFKA-15425
 URL: https://issues.apache.org/jira/browse/KAFKA-15425
 Project: Kafka
  Issue Type: Test
  Components: admin
Affects Versions: 3.6.0
Reporter: Chris Egerton
Assignee: Chris Egerton
 Fix For: 3.6.0


The behavioral change that warrants this ticket is identical to the change 
noted in KAFKA-12879, but has a different root cause (KAFKA-14821 instead of 
KAFKA-12339).

In both this ticket and KAFKA-12339, the issue is that calls to 
{{Admin::listOffsets}} will now retry on the [UNKNOWN_TOPIC_OR_PARTITION 
error|https://github.com/apache/kafka/blob/16dc983ad67767ee8debd125a3f8b150a91c7acf/clients/src/main/java/org/apache/kafka/common/protocol/Errors.java#L165-L166]
 (and possibly eventually throw a {{{}TimeoutException{}}}), whereas before, 
they would fail immediately and throw an 
{{{}UnknownTopicOrPartitionException{}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2158

2023-08-31 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15424) Make verification a dynamic configuration

2023-08-31 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-15424:
--

 Summary: Make verification a dynamic configuration
 Key: KAFKA-15424
 URL: https://issues.apache.org/jira/browse/KAFKA-15424
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan
Assignee: Justine Olshan


It would be nice if we could dynamically disable the verification. This could 
avoid disruptive actions, like a roll, if the feature is causing issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-9565) Implementation of Tiered Storage SPI to integrate with S3

2023-08-31 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana resolved KAFKA-9565.
---
Resolution: Won't Fix

> Implementation of Tiered Storage SPI to integrate with S3
> -
>
> Key: KAFKA-9565
> URL: https://issues.apache.org/jira/browse/KAFKA-9565
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Alexandre Dupriez
>Assignee: Ivan Yurchenko
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12458) Implementation of Tiered Storage Integration with Azure Storage (ADLS + Blob Storage)

2023-08-31 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana resolved KAFKA-12458.

Resolution: Won't Do

> Implementation of Tiered Storage Integration with Azure Storage (ADLS + Blob 
> Storage)
> -
>
> Key: KAFKA-12458
> URL: https://issues.apache.org/jira/browse/KAFKA-12458
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Israel Ekpo
>Assignee: Israel Ekpo
>Priority: Major
>
> Task to cover integration support for Azure Storage
>  * Azure Blob Storage
>  * Azure Data Lake Store
> Will split task up later into distinct tracks and components



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15423) readUnsignedVarint implementation allows for negative numbers

2023-08-31 Thread Philip Warren (Jira)
Philip Warren created KAFKA-15423:
-

 Summary: readUnsignedVarint implementation allows for negative 
numbers
 Key: KAFKA-15423
 URL: https://issues.apache.org/jira/browse/KAFKA-15423
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 3.5.1
Reporter: Philip Warren


The current implementation of {{ByteUtils.readUnsignedVarint}} throws an 
IllegalArgumentException if the varint is encoded in more than 5 bytes, which 
avoids some invalid values; however, it still allows for 35 bits of precision 
instead of the 31 bits of the underlying int type.

To make the method safer for callers, it seems like it should ensure that only 
the 3 low bits of the 5th byte are set, as anything else will overflow a Java 
int.

I've audited the codebase: there are some cases where a negative unsigned 
varint will lead to calling {{new Object[length]}} (raising an exception), and 
a few potential values where reading a varint as a length (and subtracting 
one) causes a length of MIN_INT to wrap around and become equal to MAX_INT.

As the KIP specs refer to varints as 31-bit integers (e.g. 
[KIP-482|https://cwiki.apache.org/confluence/display/KAFKA/KIP-482]), it would 
be good if the methods decoding them also enforced this constraint.
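As a rough illustration of the proposed constraint (a sketch only, not the actual org.apache.kafka.common.utils.ByteUtils code), a decoder that rejects any encoding exceeding 31 bits could look like this:

```java
import java.nio.ByteBuffer;

// Sketch of a stricter unsigned-varint decoder. At the 5th byte, the
// continuation bit must be clear and only the low 3 bits may carry payload;
// anything else would exceed 31 bits (or set the sign bit of a Java int).
public class VarintCheck {
    public static int readUnsignedVarint(ByteBuffer buf) {
        int value = 0;
        for (int shift = 0; shift <= 28; shift += 7) {
            byte b = buf.get();
            if (shift == 28 && (b & 0xF8) != 0) {
                // Covers both a set continuation bit (6+ byte encodings)
                // and payload bits above bit 30 of the decoded value.
                throw new IllegalArgumentException("varint exceeds 31 bits");
            }
            value |= (b & 0x7F) << shift;
            if ((b & 0x80) == 0) {
                return value;
            }
        }
        // Unreachable: the shift == 28 check above already rejects a 6th byte.
        throw new IllegalArgumentException("varint encoded in more than 5 bytes");
    }
}
```

With this check, the 5-byte encoding of Integer.MAX_VALUE (0xFF 0xFF 0xFF 0xFF 0x07) still decodes, while 0xFF 0xFF 0xFF 0xFF 0x0F, which today wraps to a negative int, is rejected.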





[jira] [Created] (KAFKA-15422) Update documentation for Delegation Tokens in Kafka with KRaft

2023-08-31 Thread Proven Provenzano (Jira)
Proven Provenzano created KAFKA-15422:
-

 Summary: Update documentation for Delegation Tokens in Kafka with 
KRaft
 Key: KAFKA-15422
 URL: https://issues.apache.org/jira/browse/KAFKA-15422
 Project: Kafka
  Issue Type: Task
  Components: documentation
Reporter: Proven Provenzano
Assignee: Proven Provenzano
 Fix For: 3.6.0


Update the documentation to indicate that controllers need the same 
configuration as brokers for Delegation Tokens to work under Kafka with KRaft





[jira] [Resolved] (KAFKA-13872) Partitions are truncated when leader is replaced

2023-08-31 Thread Francois Visconte (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francois Visconte resolved KAFKA-13872.
---
Resolution: Won't Fix

Transitioning to Won't Fix, as this seems to be the expected behaviour.

> Partitions are truncated when leader is replaced
> 
>
> Key: KAFKA-13872
> URL: https://issues.apache.org/jira/browse/KAFKA-13872
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Francois Visconte
>Priority: Major
> Attachments: extract-2022-05-04T15_50_34.110Z.csv
>
>
> Sample setup:
>  * a topic with one partition and RF=3
>  * a producer using acks=1
>  * min.insync.replicas set to 1
>  * 3 brokers 1,2,3
>  * Preferred leader of the partition is brokerId 0
>  
> Steps to reproduce the issue:
>  * The producer keeps producing to the partition; the leader is brokerId=0
>  * At some point, replicas 1 and 2 fall behind and are removed from the ISR
>  * The leader broker 0 has a hardware failure
>  * The partition transitions to offline
>  * The leader is replaced with a new broker with an empty disk and the same 
> broker id 0
>  * The partition transitions from offline to online with leader 0, ISR = 0
>  * Followers see that the leader offset is 0 and truncate their partitions 
> to 0, ISR=0,1,2
>  * At this point all the topic data has been removed from all replicas, and 
> the partition size drops to 0 on all replicas
> Attached are some of the relevant logs; I can provide more if necessary.
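The setup described in this report can be recreated with commands along these lines (topic name and bootstrap address are illustrative; adjust tool paths to your installation):

```shell
# Topic with one partition, RF=3, and min.insync.replicas=1, as in the report.
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic repro --partitions 1 --replication-factor 3 \
  --config min.insync.replicas=1

# Producer acknowledged by the leader alone (acks=1). With these settings a
# leader that comes back with an empty disk can lose acknowledged records,
# which is why this was resolved as expected behaviour.
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic repro --producer-property acks=1
```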





[jira] [Reopened] (KAFKA-15399) Enable OffloadAndConsumeFromLeader test

2023-08-31 Thread Kamal Chandraprakash (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash reopened KAFKA-15399:
--

> Enable OffloadAndConsumeFromLeader test
> ---
>
> Key: KAFKA-15399
> URL: https://issues.apache.org/jira/browse/KAFKA-15399
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Kamal Chandraprakash
>Assignee: Kamal Chandraprakash
>Priority: Major
> Fix For: 3.6.0
>
>
> Build / JDK 17 and Scala 2.13 / initializationError – 
> org.apache.kafka.tiered.storage.integration.OffloadAndConsumeFromLeaderTest





[jira] [Created] (KAFKA-15421) Enable DynamicBrokerReconfigurationTest#testThreadPoolResize test

2023-08-31 Thread Kamal Chandraprakash (Jira)
Kamal Chandraprakash created KAFKA-15421:


 Summary: Enable 
DynamicBrokerReconfigurationTest#testThreadPoolResize test
 Key: KAFKA-15421
 URL: https://issues.apache.org/jira/browse/KAFKA-15421
 Project: Kafka
  Issue Type: Task
Reporter: Kamal Chandraprakash








[jira] [Created] (KAFKA-15420) Kafka Tiered Storage V1

2023-08-31 Thread Satish Duggana (Jira)
Satish Duggana created KAFKA-15420:
--

 Summary: Kafka Tiered Storage V1
 Key: KAFKA-15420
 URL: https://issues.apache.org/jira/browse/KAFKA-15420
 Project: Kafka
  Issue Type: Improvement
Reporter: Satish Duggana
Assignee: Satish Duggana
 Fix For: 3.7.0








[jira] [Resolved] (KAFKA-13097) Handle the requests gracefully to publish the events in TopicBasedRemoteLogMetadataManager when it is not yet initialized.

2023-08-31 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana resolved KAFKA-13097.

Resolution: Invalid

> Handle the requests gracefully to publish the events in 
> TopicBasedRemoteLogMetadataManager when it is not yet initialized.
> --
>
> Key: KAFKA-13097
> URL: https://issues.apache.org/jira/browse/KAFKA-13097
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Satish Duggana
>Assignee: Satish Duggana
>Priority: Major
>






Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #13

2023-08-31 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2157

2023-08-31 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 408031 lines...]
Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
onlyRemovePendingTaskToCloseCleanShouldRemoveTaskFromPendingUpdateActions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldDrainPendingTasksToCreate() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldDrainPendingTasksToCreate() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseDirty() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseDirty() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldKeepAddedTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > TasksTest > 
shouldKeepAddedTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTasksThatCanBeSystemTimePunctuated() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTasksThatCanBeSystemTimePunctuated() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotUnassignNotOwnedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotUnassignNotOwnedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotSetUncaughtExceptionsTwice() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotSetUncaughtExceptionsTwice() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > 
shouldNotAssignTasksForPunctuationIfPunctuationDisabled() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > 
shouldNotAssignTasksForPunctuationIfPunctuationDisabled() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAddTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAddTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotAssignAnyLockedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotAssignAnyLockedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldRemoveTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldRemoveTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveAssignedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveAssignedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTaskThatCanBeProcessed() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldAssignTaskThatCanBeProcessed() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveUnlockedTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNotRemoveUnlockedTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldReturnAndClearExceptionsOnDrainExceptions() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldReturnAndClearExceptionsOnDrainExceptions() 
PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldUnassignTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldUnassignTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > 
shouldNotAssignTasksForProcessingIfProcessingDisabled() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > 
shouldNotAssignTasksForProcessingIfProcessingDisabled() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 84 > 
DefaultTaskManagerTest > shouldNo

[jira] [Created] (KAFKA-15419) Flaky DescribeClusterRequestTest

2023-08-31 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-15419:


 Summary: Flaky DescribeClusterRequestTest
 Key: KAFKA-15419
 URL: https://issues.apache.org/jira/browse/KAFKA-15419
 Project: Kafka
  Issue Type: Test
Reporter: Divij Vaidya


Flakiness introduced since 
https://github.com/apache/kafka/pull/14294#issuecomment-1700704016





Re: FYI - CI failures due to Apache Infra (Issue with creating launcher for agent)

2023-08-31 Thread Divij Vaidya
This should be fixed now. Please comment on
https://issues.apache.org/jira/browse/INFRA-24927 if you find a case
where you still hit this problem.

--
Divij Vaidya

On Mon, Aug 28, 2023 at 12:05 PM Luke Chen  wrote:
>
> Thanks for the info, Divij!
>
> Luke
>
> On Mon, Aug 28, 2023 at 6:01 PM Divij Vaidya 
> wrote:
>
> > Hey folks
> >
> > During you CI runs, you may notice that some test pipelines fail to
> > start with messages such as:
> >
> > "ERROR: Issue with creating launcher for agent builds38. The agent is
> > being disconnected"
> > "Remote call on builds38 failed"
> >
> > This occurs due to bad hosts in the Apache infrastructure CI. We have
> > an ongoing ticket here -
> >
> > https://issues.apache.org/jira/browse/INFRA-24927?focusedCommentId=17759528&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17759528
> >
> > I will keep an eye on the ticket and reply to this thread when it is
> > fixed. Meanwhile, the workaround is to restart the tests.
> >
> > Cheers!
> >
> > --
> > Divij Vaidya
> >


Re: [VOTE] KIP-953: partition method to be overloaded to accept headers as well.

2023-08-31 Thread Jack Tomy
Hey everyone,

As I see devs favouring the current style of implementation, which is in line 
with the existing code, I would like to go ahead with the approach mentioned 
in the KIP. Can I get a few more votes so that I can take the KIP forward?

Thanks
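For context, a minimal sketch of the overload style being discussed (the interface shape, the `Cluster` stand-in, and the use of a plain Map for headers are illustrative assumptions, not the final KIP-953 API):

```java
import java.util.Map;

// Illustrative stand-in; the real type is org.apache.kafka.common.Cluster.
interface Cluster { }

interface Partitioner {
    // Existing method: operates on serialized keys and values.
    int partition(String topic, Object key, byte[] keyBytes,
                  Object value, byte[] valueBytes, Cluster cluster);

    // Proposed overload: headers passed as an extra argument. The default
    // implementation ignores headers and delegates to the existing method,
    // so current Partitioner implementations keep working unchanged.
    default int partition(String topic, Object key, byte[] keyBytes,
                          Object value, byte[] valueBytes, Cluster cluster,
                          Map<String, byte[]> headers) {
        return partition(topic, key, keyBytes, value, valueBytes, cluster);
    }
}
```

A header-aware partitioner would override only the seven-argument variant; every existing implementation inherits the delegating default.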



On Sun, Aug 27, 2023 at 1:38 PM Sagar  wrote:

> Hi Ismael,
>
> Thanks for pointing us towards the direction of a DTO based approach. The
> AdminClient examples seem very neat and extensible in that sense.
> Personally, I was trying to think only along the lines of how the current
> Partitioner interface has been designed, i.e having all requisite
> parameters as separate arguments (Topic, Key, Value etc).
>
> Regarding this question of yours:
>
> A more concrete question: did we consider having the method `partition`
> > take `ProduceRecord` as one of its parameters and `Cluster` as the other?
>
>
> No, I don't think it was brought up in the discussion thread, and as I said,
> that appears to be due to an attempt to keep the new method's signature
> similar to the existing one within Partitioner. If I understood the intent
> of the question correctly, are you hinting that `ProducerRecord` already
> contains all the arguments that the `partition` method accepts and also has
> a `headers` field within it? So, instead of adding another method for the
> `headers` field, why not create a new method taking ProducerRecord directly?
>
> If my understanding is correct, that seems like a very clean way of adding
> support for `headers`. In any case, the partition method within
> KafkaProducer already takes a ProducerRecord as an argument, which makes it
> easier. With that in mind, should the default implementation of this new
> method (which takes a ProducerRecord as input) invoke the existing method?
> One challenge I see there is that the existing partition method expects
> serialized keys and values, while ProducerRecord doesn't have access to
> those (it operates directly on K and V).
>
> Thanks!
> Sagar.
>
>
> On Sun, Aug 27, 2023 at 8:51 AM Ismael Juma  wrote:
>
> > A more concrete question: did we consider having the method `partition`
> > take `ProduceRecord` as one of its parameters and `Cluster` as the other?
> >
> > Ismael
> >
> > On Sat, Aug 26, 2023 at 12:50 PM Greg Harris
>  > >
> > wrote:
> >
> > > Hey Ismael,
> > >
> > > > The mention of "runtime" is specific to Connect. When it comes to
> > > clients,
> > > one typically compiles and runs with the same version or runs with a
> > newer
> > > version than the one used for compilation. This is standard practice in
> > > Java and not something specific to Kafka.
> > >
> > > When I said "older runtimes" I was being lazy; I should have said
> > > "older versions of clients at runtime". Thank you for figuring out
> > > what I meant.
> > >
> > > I don't know how common it is to compile a partitioner against one
> > > version of clients, and then distribute and run the partitioner with
> > > older versions of clients and expect graceful degradation of features.
> > > If you say that it is very uncommon and not something that we should
> > > optimize for, then I won't suggest otherwise.
> > >
> > > > With regards to the Admin APIs, they have been extended several times
> > > since introduction (naturally). One of them is:
> > > >
> > >
> >
> https://github.com/apache/kafka/commit/1d22b0d70686aef5689b775ea2ea7610a37f3e8c
> > >
> > > Thanks for the example. I also see that includes a migration from
> > > regular arguments to the DTO style, consistent with your
> > > recommendation here.
> > >
> > > I think the DTO style and the proposed additional argument style are
> > > both reasonable.
> > >
> > > Thanks,
> > > Greg
> > >
> > > On Sat, Aug 26, 2023 at 9:46 AM Ismael Juma  wrote:
> > > >
> > > > Hi Greg,
> > > >
> > > > The mention of "runtime" is specific to Connect. When it comes to
> > > clients,
> > > > one typically compiles and runs with the same version or runs with a
> > > newer
> > > > version than the one used for compilation. This is standard practice
> in
> > > > Java and not something specific to Kafka.
> > > >
> > > > With regards to the Admin APIs, they have been extended several times
> > > since
> > > > introduction (naturally). One of them is:
> > > >
> > > >
> > >
> >
> https://github.com/apache/kafka/commit/1d22b0d70686aef5689b775ea2ea7610a37f3e8c
> > > >
> > > > Ismael
> > > >
> > > > On Sat, Aug 26, 2023 at 8:29 AM Greg Harris
> >  > > >
> > > > wrote:
> > > >
> > > > > Hey Ismael,
> > > > >
> > > > > Thank you for clarifying where the DTO pattern is used already, I
> did
> > > > > not have the admin methods in mind.
> > > > >
> > > > > > With the DTO approach, you don't create a new DTO, you simply
> add a
> > > new
> > > > > overloaded constructor and accessor to the DTO.
> > > > >
> > > > > With this variant, partitioner implementations would receive a
> > > > > `NoSuchMethodException` when trying to access newer methods in
> older
> > > > > runtimes. Do we