Re: Apache Kafka 3.6.0 release

2023-09-04 Thread Justine Olshan
Thanks Satish. This is done 

Justine

On Mon, Sep 4, 2023 at 5:16 PM Satish Duggana 
wrote:

> Hey Justine,
> I went through KAFKA-15424 and the PR[1]. It seems there are no
> dependent changes missing in 3.6 branch. They seem to be low risk as
> you mentioned. Please merge it to the 3.6 branch as well.
>
> 1. https://github.com/apache/kafka/pull/14324.
>
> Thanks,
> Satish.
>
> On Tue, 5 Sept 2023 at 05:06, Justine Olshan
>  wrote:
> >
> > Sorry I meant to add the jira as well.
> > https://issues.apache.org/jira/browse/KAFKA-15424
> >
> > Justine
> >
> > On Mon, Sep 4, 2023 at 4:34 PM Justine Olshan 
> wrote:
> >
> > > Hey Satish,
> > >
> > > I was working on adding dynamic configuration for
> > > transaction verification. The PR is approved and ready to merge into
> trunk.
> > > I was thinking I could also add it to 3.6 since it is fairly low risk.
> > > What do you think?
> > >
> > > Justine
> > >
> > > On Sat, Sep 2, 2023 at 6:21 PM Sophie Blee-Goldman <
> ableegold...@gmail.com>
> > > wrote:
> > >
> > >> Thanks Satish! The fix has been merged and cherrypicked to 3.6
> > >>
> > >> On Sat, Sep 2, 2023 at 6:02 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi Sophie,
> > >> > Please feel free to add that to 3.6 branch as you say this is a
> minor
> > >> > change and will not cause any regressions.
> > >> >
> > >> > Thanks,
> > >> > Satish.
> > >> >
> > >> > On Sat, 2 Sept 2023 at 08:44, Sophie Blee-Goldman
> > >> >  wrote:
> > >> > >
> > >> > > Hey Satish, someone reported a minor bug in the Streams
> application
> > >> > > shutdown which was a recent regression, though not strictly a new
> one:
> > >> > was
> > >> > > introduced in 3.4 I believe.
> > >> > >
> > >> > > The fix seems to be super lightweight and low-risk so I was
> hoping to
> > >> > slip
> > >> > > it into 3.6 if that's ok with you? They plan to have the patch
> > >> tonight.
> > >> > >
> > >> > > https://issues.apache.org/jira/browse/KAFKA-15429
> > >> > >
> > >> > > On Thu, Aug 31, 2023 at 5:45 PM Satish Duggana <
> > >> satish.dugg...@gmail.com
> > >> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Thanks Chris for bringing this issue here and filing the new
> JIRA
> > >> for
> > >> > > > 3.6.0[1]. It seems to be a blocker for 3.6.0.
> > >> > > >
> > >> > > > Please help review https://github.com/apache/kafka/pull/14314
> as
> > >> Chris
> > >> > > > requested.
> > >> > > >
> > >> > > > 1. https://issues.apache.org/jira/browse/KAFKA-15425
> > >> > > >
> > >> > > > ~Satish.
> > >> > > >
> > >> > > > On Fri, 1 Sept 2023 at 03:59, Chris Egerton
>  > >> >
> > >> > > > wrote:
> > >> > > > >
> > >> > > > > Hi all,
> > >> > > > >
> > >> > > > > Quick update: I've filed a separate ticket,
> > >> > > > > https://issues.apache.org/jira/browse/KAFKA-15425, to track
> the
> > >> > behavior
> > >> > > > > change in Admin::listOffsets. For the full history of the
> ticket,
> > >> > it's
> > >> > > > > worth reading the comment thread on the old ticket at
> > >> > > > > https://issues.apache.org/jira/browse/KAFKA-12879.
> > >> > > > >
> > >> > > > > I've also published
> https://github.com/apache/kafka/pull/14314
> > >> as a
> > >> > > > fairly
> > >> > > > > lightweight PR to revert the behavior of Admin::listOffsets
> > >> without
> > >> > also
> > >> > > > > reverting the refactoring to use the internal admin driver
> API.
> > >> Would
> > >> > > > > appreciate a review on that if anyone can spare the cycles.
> > >> > > > >
> > >> > > > > Cheers,
> > >> > > > >
> > >> > > > > Chris
> > >> > > > >
> > >> > > > > On Wed, Aug 30, 2023 at 1:01 PM Chris Egerton <
> chr...@aiven.io>
> > >> > wrote:
> > >> > > > >
> > >> > > > > > Hi Satish,
> > >> > > > > >
> > >> > > > > > Wanted to let you know that KAFKA-12879 (
> > >> > > > > > https://issues.apache.org/jira/browse/KAFKA-12879), a
> breaking
> > >> > change
> > >> > > > in
> > >> > > > > > Admin::listOffsets, has been reintroduced into the code
> base.
> > >> > Since we
> > >> > > > > > haven't yet published a release with this change (at least,
> not
> > >> the
> > >> > > > more
> > >> > > > > > recent instance of it), I was hoping we could treat it as a
> > >> > blocker for
> > >> > > > > > 3.6.0. I'd also like to solicit the input of people familiar
> > >> with
> > >> > the
> > >> > > > admin
> > >> > > > > > client to weigh in on the Jira ticket about whether we
> should
> > >> > continue
> > >> > > > to
> > >> > > > > > preserve the current behavior (if the consensus is that we
> > >> should,
> > >> > I'm
> > >> > > > > > happy to file a fix).
> > >> > > > > >
> > >> > > > > > Please let me know if you agree that this qualifies as a
> > >> blocker. I
> > >> > > > plan
> > >> > > > > > on publishing a potential fix sometime this week.
> > >> > > > > >
> > >> > > > > > Cheers,
> > >> > > > > >
> > >> > > > > > Chris
> > >> > > > > >
> > >> > > > > > On Wed, Aug 30, 2023 at 9:19 AM Satish Duggana <
> > >> > > > satish.dugg...@gmail.com>
> > >> > > > > > wrote:
> > 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2171

2023-09-04 Thread Apache Jenkins Server
See 




Re: Apache Kafka 3.6.0 release

2023-09-04 Thread Satish Duggana
Hey Justine,
I went through KAFKA-15424 and the PR[1]. It seems there are no
dependent changes missing in 3.6 branch. They seem to be low risk as
you mentioned. Please merge it to the 3.6 branch as well.

1. https://github.com/apache/kafka/pull/14324.

Thanks,
Satish.

On Tue, 5 Sept 2023 at 05:06, Justine Olshan
 wrote:
>
> Sorry I meant to add the jira as well.
> https://issues.apache.org/jira/browse/KAFKA-15424
>
> Justine
>
> On Mon, Sep 4, 2023 at 4:34 PM Justine Olshan  wrote:
>
> > Hey Satish,
> >
> > I was working on adding dynamic configuration for
> > transaction verification. The PR is approved and ready to merge into trunk.
> > I was thinking I could also add it to 3.6 since it is fairly low risk.
> > What do you think?
> >
> > Justine
> >
> > On Sat, Sep 2, 2023 at 6:21 PM Sophie Blee-Goldman 
> > wrote:
> >
> >> Thanks Satish! The fix has been merged and cherrypicked to 3.6
> >>
> >> On Sat, Sep 2, 2023 at 6:02 AM Satish Duggana 
> >> wrote:
> >>
> >> > Hi Sophie,
> >> > Please feel free to add that to 3.6 branch as you say this is a minor
> >> > change and will not cause any regressions.
> >> >
> >> > Thanks,
> >> > Satish.
> >> >
> >> > On Sat, 2 Sept 2023 at 08:44, Sophie Blee-Goldman
> >> >  wrote:
> >> > >
> >> > > Hey Satish, someone reported a minor bug in the Streams application
> >> > > shutdown which was a recent regression, though not strictly a new one:
> >> > was
> >> > > introduced in 3.4 I believe.
> >> > >
> >> > > The fix seems to be super lightweight and low-risk so I was hoping to
> >> > slip
> >> > > it into 3.6 if that's ok with you? They plan to have the patch
> >> tonight.
> >> > >
> >> > > https://issues.apache.org/jira/browse/KAFKA-15429
> >> > >
> >> > > On Thu, Aug 31, 2023 at 5:45 PM Satish Duggana <
> >> satish.dugg...@gmail.com
> >> > >
> >> > > wrote:
> >> > >
> >> > > > Thanks Chris for bringing this issue here and filing the new JIRA
> >> for
> >> > > > 3.6.0[1]. It seems to be a blocker for 3.6.0.
> >> > > >
> >> > > > Please help review https://github.com/apache/kafka/pull/14314 as
> >> Chris
> >> > > > requested.
> >> > > >
> >> > > > 1. https://issues.apache.org/jira/browse/KAFKA-15425
> >> > > >
> >> > > > ~Satish.
> >> > > >
> >> > > > On Fri, 1 Sept 2023 at 03:59, Chris Egerton  >> >
> >> > > > wrote:
> >> > > > >
> >> > > > > Hi all,
> >> > > > >
> >> > > > > Quick update: I've filed a separate ticket,
> >> > > > > https://issues.apache.org/jira/browse/KAFKA-15425, to track the
> >> > behavior
> >> > > > > change in Admin::listOffsets. For the full history of the ticket,
> >> > it's
> >> > > > > worth reading the comment thread on the old ticket at
> >> > > > > https://issues.apache.org/jira/browse/KAFKA-12879.
> >> > > > >
> >> > > > > I've also published https://github.com/apache/kafka/pull/14314
> >> as a
> >> > > > fairly
> >> > > > > lightweight PR to revert the behavior of Admin::listOffsets
> >> without
> >> > also
> >> > > > > reverting the refactoring to use the internal admin driver API.
> >> Would
> >> > > > > appreciate a review on that if anyone can spare the cycles.
> >> > > > >
> >> > > > > Cheers,
> >> > > > >
> >> > > > > Chris
> >> > > > >
> >> > > > > On Wed, Aug 30, 2023 at 1:01 PM Chris Egerton 
> >> > wrote:
> >> > > > >
> >> > > > > > Hi Satish,
> >> > > > > >
> >> > > > > > Wanted to let you know that KAFKA-12879 (
> >> > > > > > https://issues.apache.org/jira/browse/KAFKA-12879), a breaking
> >> > change
> >> > > > in
> >> > > > > > Admin::listOffsets, has been reintroduced into the code base.
> >> > Since we
> >> > > > > > haven't yet published a release with this change (at least, not
> >> the
> >> > > > more
> >> > > > > > recent instance of it), I was hoping we could treat it as a
> >> > blocker for
> >> > > > > > 3.6.0. I'd also like to solicit the input of people familiar
> >> with
> >> > the
> >> > > > admin
> >> > > > > > client to weigh in on the Jira ticket about whether we should
> >> > continue
> >> > > > to
> >> > > > > > preserve the current behavior (if the consensus is that we
> >> should,
> >> > I'm
> >> > > > > > happy to file a fix).
> >> > > > > >
> >> > > > > > Please let me know if you agree that this qualifies as a
> >> blocker. I
> >> > > > plan
> >> > > > > > on publishing a potential fix sometime this week.
> >> > > > > >
> >> > > > > > Cheers,
> >> > > > > >
> >> > > > > > Chris
> >> > > > > >
> >> > > > > > On Wed, Aug 30, 2023 at 9:19 AM Satish Duggana <
> >> > > > satish.dugg...@gmail.com>
> >> > > > > > wrote:
> >> > > > > >
> >> > > > > >> Hi,
> >> > > > > >> Please plan to continue merging pull requests associated with
> >> any
> >> > > > > >> outstanding minor features and stabilization changes to 3.6
> >> branch
> >> > > > > >> before September 3rd. Kindly update the KIP's implementation
> >> > status in
> >> > > > > >> the 3.6.0 release notes.
> >> > > > > >>
> >> > > > > >> Thanks,
> >> > > > > >> Satish.
> >> > > > > >>
> >> > > > > >> On Fri, 25 Aug 2023 at 21:37, Justine Olshan
> >> > > 

Re: Apache Kafka 3.6.0 release

2023-09-04 Thread Justine Olshan
Sorry I meant to add the jira as well.
https://issues.apache.org/jira/browse/KAFKA-15424

Justine

On Mon, Sep 4, 2023 at 4:34 PM Justine Olshan  wrote:

> Hey Satish,
>
> I was working on adding dynamic configuration for
> transaction verification. The PR is approved and ready to merge into trunk.
> I was thinking I could also add it to 3.6 since it is fairly low risk.
> What do you think?
>
> Justine
>
> On Sat, Sep 2, 2023 at 6:21 PM Sophie Blee-Goldman 
> wrote:
>
>> Thanks Satish! The fix has been merged and cherrypicked to 3.6
>>
>> On Sat, Sep 2, 2023 at 6:02 AM Satish Duggana 
>> wrote:
>>
>> > Hi Sophie,
>> > Please feel free to add that to 3.6 branch as you say this is a minor
>> > change and will not cause any regressions.
>> >
>> > Thanks,
>> > Satish.
>> >
>> > On Sat, 2 Sept 2023 at 08:44, Sophie Blee-Goldman
>> >  wrote:
>> > >
>> > > Hey Satish, someone reported a minor bug in the Streams application
>> > > shutdown which was a recent regression, though not strictly a new one:
>> > was
>> > > introduced in 3.4 I believe.
>> > >
>> > > The fix seems to be super lightweight and low-risk so I was hoping to
>> > slip
>> > > it into 3.6 if that's ok with you? They plan to have the patch
>> tonight.
>> > >
>> > > https://issues.apache.org/jira/browse/KAFKA-15429
>> > >
>> > > On Thu, Aug 31, 2023 at 5:45 PM Satish Duggana <
>> satish.dugg...@gmail.com
>> > >
>> > > wrote:
>> > >
>> > > > Thanks Chris for bringing this issue here and filing the new JIRA
>> for
>> > > > 3.6.0[1]. It seems to be a blocker for 3.6.0.
>> > > >
>> > > > Please help review https://github.com/apache/kafka/pull/14314 as
>> Chris
>> > > > requested.
>> > > >
>> > > > 1. https://issues.apache.org/jira/browse/KAFKA-15425
>> > > >
>> > > > ~Satish.
>> > > >
>> > > > On Fri, 1 Sept 2023 at 03:59, Chris Egerton > >
>> > > > wrote:
>> > > > >
>> > > > > Hi all,
>> > > > >
>> > > > > Quick update: I've filed a separate ticket,
>> > > > > https://issues.apache.org/jira/browse/KAFKA-15425, to track the
>> > behavior
>> > > > > change in Admin::listOffsets. For the full history of the ticket,
>> > it's
>> > > > > worth reading the comment thread on the old ticket at
>> > > > > https://issues.apache.org/jira/browse/KAFKA-12879.
>> > > > >
>> > > > > I've also published https://github.com/apache/kafka/pull/14314
>> as a
>> > > > fairly
>> > > > > lightweight PR to revert the behavior of Admin::listOffsets
>> without
>> > also
>> > > > > reverting the refactoring to use the internal admin driver API.
>> Would
>> > > > > appreciate a review on that if anyone can spare the cycles.
>> > > > >
>> > > > > Cheers,
>> > > > >
>> > > > > Chris
>> > > > >
>> > > > > On Wed, Aug 30, 2023 at 1:01 PM Chris Egerton 
>> > wrote:
>> > > > >
>> > > > > > Hi Satish,
>> > > > > >
>> > > > > > Wanted to let you know that KAFKA-12879 (
>> > > > > > https://issues.apache.org/jira/browse/KAFKA-12879), a breaking
>> > change
>> > > > in
>> > > > > > Admin::listOffsets, has been reintroduced into the code base.
>> > Since we
>> > > > > > haven't yet published a release with this change (at least, not
>> the
>> > > > more
>> > > > > > recent instance of it), I was hoping we could treat it as a
>> > blocker for
>> > > > > > 3.6.0. I'd also like to solicit the input of people familiar
>> with
>> > the
>> > > > admin
>> > > > > > client to weigh in on the Jira ticket about whether we should
>> > continue
>> > > > to
>> > > > > > preserve the current behavior (if the consensus is that we
>> should,
>> > I'm
>> > > > > > happy to file a fix).
>> > > > > >
>> > > > > > Please let me know if you agree that this qualifies as a
>> blocker. I
>> > > > plan
>> > > > > > on publishing a potential fix sometime this week.
>> > > > > >
>> > > > > > Cheers,
>> > > > > >
>> > > > > > Chris
>> > > > > >
>> > > > > > On Wed, Aug 30, 2023 at 9:19 AM Satish Duggana <
>> > > > satish.dugg...@gmail.com>
>> > > > > > wrote:
>> > > > > >
>> > > > > >> Hi,
>> > > > > >> Please plan to continue merging pull requests associated with
>> any
>> > > > > >> outstanding minor features and stabilization changes to 3.6
>> branch
>> > > > > >> before September 3rd. Kindly update the KIP's implementation
>> > status in
>> > > > > >> the 3.6.0 release notes.
>> > > > > >>
>> > > > > >> Thanks,
>> > > > > >> Satish.
>> > > > > >>
>> > > > > >> On Fri, 25 Aug 2023 at 21:37, Justine Olshan
>> > > > > >>  wrote:
>> > > > > >> >
>> > > > > >> > Hey Satish,
>> > > > > >> > Everything should be in 3.6, and I will update the release
>> plan
>> > > > wiki.
>> > > > > >> > Thanks!
>> > > > > >> >
>> > > > > >> > On Fri, Aug 25, 2023 at 4:08 AM Satish Duggana <
>> > > > > >> satish.dugg...@gmail.com>
>> > > > > >> > wrote:
>> > > > > >> >
>> > > > > >> > > Hi Justine,
>> > > > > >> > > Adding KIP-890 part-1 to 3.6.0 seems reasonable to me. This
>> > part
>> > > > looks
>> > > > > >> > > to be addressing a critical issue of consumers getting
>> stuck.
>> > > > Please
>> > > > > >> > > update the release plan 

Re: Apache Kafka 3.6.0 release

2023-09-04 Thread Justine Olshan
Hey Satish,

I was working on adding dynamic configuration for transaction verification.
The PR is approved and ready to merge into trunk.
I was thinking I could also add it to 3.6 since it is fairly low risk. What
do you think?

Justine

On Sat, Sep 2, 2023 at 6:21 PM Sophie Blee-Goldman 
wrote:

> Thanks Satish! The fix has been merged and cherrypicked to 3.6
>
> On Sat, Sep 2, 2023 at 6:02 AM Satish Duggana 
> wrote:
>
> > Hi Sophie,
> > Please feel free to add that to 3.6 branch as you say this is a minor
> > change and will not cause any regressions.
> >
> > Thanks,
> > Satish.
> >
> > On Sat, 2 Sept 2023 at 08:44, Sophie Blee-Goldman
> >  wrote:
> > >
> > > Hey Satish, someone reported a minor bug in the Streams application
> > > shutdown which was a recent regression, though not strictly a new one:
> > was
> > > introduced in 3.4 I believe.
> > >
> > > The fix seems to be super lightweight and low-risk so I was hoping to
> > slip
> > > it into 3.6 if that's ok with you? They plan to have the patch tonight.
> > >
> > > https://issues.apache.org/jira/browse/KAFKA-15429
> > >
> > > On Thu, Aug 31, 2023 at 5:45 PM Satish Duggana <
> satish.dugg...@gmail.com
> > >
> > > wrote:
> > >
> > > > Thanks Chris for bringing this issue here and filing the new JIRA for
> > > > 3.6.0[1]. It seems to be a blocker for 3.6.0.
> > > >
> > > > Please help review https://github.com/apache/kafka/pull/14314 as
> Chris
> > > > requested.
> > > >
> > > > 1. https://issues.apache.org/jira/browse/KAFKA-15425
> > > >
> > > > ~Satish.
> > > >
> > > > On Fri, 1 Sept 2023 at 03:59, Chris Egerton  >
> > > > wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > Quick update: I've filed a separate ticket,
> > > > > https://issues.apache.org/jira/browse/KAFKA-15425, to track the
> > behavior
> > > > > change in Admin::listOffsets. For the full history of the ticket,
> > it's
> > > > > worth reading the comment thread on the old ticket at
> > > > > https://issues.apache.org/jira/browse/KAFKA-12879.
> > > > >
> > > > > I've also published https://github.com/apache/kafka/pull/14314 as
> a
> > > > fairly
> > > > > lightweight PR to revert the behavior of Admin::listOffsets without
> > also
> > > > > reverting the refactoring to use the internal admin driver API.
> Would
> > > > > appreciate a review on that if anyone can spare the cycles.
> > > > >
> > > > > Cheers,
> > > > >
> > > > > Chris
> > > > >
> > > > > On Wed, Aug 30, 2023 at 1:01 PM Chris Egerton 
> > wrote:
> > > > >
> > > > > > Hi Satish,
> > > > > >
> > > > > > Wanted to let you know that KAFKA-12879 (
> > > > > > https://issues.apache.org/jira/browse/KAFKA-12879), a breaking
> > change
> > > > in
> > > > > > Admin::listOffsets, has been reintroduced into the code base.
> > Since we
> > > > > > haven't yet published a release with this change (at least, not
> the
> > > > more
> > > > > > recent instance of it), I was hoping we could treat it as a
> > blocker for
> > > > > > 3.6.0. I'd also like to solicit the input of people familiar with
> > the
> > > > admin
> > > > > > client to weigh in on the Jira ticket about whether we should
> > continue
> > > > to
> > > > > > preserve the current behavior (if the consensus is that we
> should,
> > I'm
> > > > > > happy to file a fix).
> > > > > >
> > > > > > Please let me know if you agree that this qualifies as a
> blocker. I
> > > > plan
> > > > > > on publishing a potential fix sometime this week.
> > > > > >
> > > > > > Cheers,
> > > > > >
> > > > > > Chris
> > > > > >
> > > > > > On Wed, Aug 30, 2023 at 9:19 AM Satish Duggana <
> > > > satish.dugg...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > >> Hi,
> > > > > >> Please plan to continue merging pull requests associated with
> any
> > > > > >> outstanding minor features and stabilization changes to 3.6
> branch
> > > > > >> before September 3rd. Kindly update the KIP's implementation
> > status in
> > > > > >> the 3.6.0 release notes.
> > > > > >>
> > > > > >> Thanks,
> > > > > >> Satish.
> > > > > >>
> > > > > >> On Fri, 25 Aug 2023 at 21:37, Justine Olshan
> > > > > >>  wrote:
> > > > > >> >
> > > > > >> > Hey Satish,
> > > > > >> > Everything should be in 3.6, and I will update the release
> plan
> > > > wiki.
> > > > > >> > Thanks!
> > > > > >> >
> > > > > >> > On Fri, Aug 25, 2023 at 4:08 AM Satish Duggana <
> > > > > >> satish.dugg...@gmail.com>
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> > > Hi Justine,
> > > > > >> > > Adding KIP-890 part-1 to 3.6.0 seems reasonable to me. This
> > part
> > > > looks
> > > > > >> > > to be addressing a critical issue of consumers getting
> stuck.
> > > > Please
> > > > > >> > > update the release plan wiki and merge all the required
> > changes
> > > > to 3.6
> > > > > >> > > branch.
> > > > > >> > >
> > > > > >> > > Thanks,
> > > > > >> > > Satish.
> > > > > >> > >
> > > > > >> > > On Thu, 24 Aug 2023 at 22:19, Justine Olshan
> > > > > >> > >  wrote:
> > > > > >> > > >
> > > > > >> > > > Hey Satish,
> > > > > >> > > > 

Unable to start the Kafka with Kraft in Windows 11

2023-09-04 Thread Sumanshu Nankana
Hi *Team*,

I am following the steps mentioned here https://kafka.apache.org/quickstart
to install Kafka.

*Windows* 11
*Kafka Version*
https://www.apache.org/dyn/closer.cgi?path=/kafka/3.5.0/kafka_2.13-3.5.0.tgz
*64 Bit Operating System*


*Step1: Generate the Cluster UUID*

$KAFKA_CLUSTER_ID=.\bin\windows\kafka-storage.bat random-uuid

*Step2: Format Log Directories*

.\bin\windows\kafka-storage.bat format -t $KAFKA_CLUSTER_ID -c
.\config\kraft\server.properties

*Step3: Start the Kafka Server*

.\bin\windows\kafka-server-start.bat .\config\kraft\server.properties

I am getting an error; the logs are attached below.

Could you please help me sort out this error?

Kindly let me know if you need any more information.

-
Best
Sumanshu Nankana
[2023-09-04 22:16:25,770] INFO Initialized snapshots with IDs SortedSet() from 
C:\Users\suman\Downloads\kafka\.\data\kraft-combined-logs\__cluster_metadata-0 
(kafka.raft.KafkaMetadataLog$)
[2023-09-04 22:16:25,802] INFO [raft-expiration-reaper]: Starting 
(kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2023-09-04 22:16:25,973] ERROR [SharedServer id=1] Got exception while 
starting SharedServer (kafka.server.SharedServer)
java.io.UncheckedIOException: Error while writing the Quorum status from the 
file 
C:\Users\suman\Downloads\kafka\.\data\kraft-combined-logs\__cluster_metadata-0\quorum-state
at 
org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:155)
at 
org.apache.kafka.raft.FileBasedStateStore.writeElectionState(FileBasedStateStore.java:128)
at org.apache.kafka.raft.QuorumState.transitionTo(QuorumState.java:477)
at org.apache.kafka.raft.QuorumState.initialize(QuorumState.java:212)
at 
org.apache.kafka.raft.KafkaRaftClient.initialize(KafkaRaftClient.java:370)
at kafka.raft.KafkaRaftManager.buildRaftClient(RaftManager.scala:248)
at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:174)
at kafka.server.SharedServer.start(SharedServer.scala:247)
at kafka.server.SharedServer.startForController(SharedServer.scala:129)
at kafka.server.ControllerServer.startup(ControllerServer.scala:197)
at 
kafka.server.KafkaRaftServer.$anonfun$startup$1(KafkaRaftServer.scala:95)
at 
kafka.server.KafkaRaftServer.$anonfun$startup$1$adapted(KafkaRaftServer.scala:95)
at scala.Option.foreach(Option.scala:437)
at kafka.server.KafkaRaftServer.startup(KafkaRaftServer.scala:95)
at kafka.Kafka$.main(Kafka.scala:113)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.nio.file.FileSystemException: 
C:\Users\suman\Downloads\kafka\.\data\kraft-combined-logs\__cluster_metadata-0\quorum-state.tmp
 -> 
C:\Users\suman\Downloads\kafka\.\data\kraft-combined-logs\__cluster_metadata-0\quorum-state:
 The process cannot access the file because it is being used by another process
at 
java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
at 
java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:414)
at 
java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:291)
at java.base/java.nio.file.Files.move(Files.java:1429)
at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:950)
at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:933)
at 
org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:152)
... 15 more
Suppressed: java.nio.file.FileSystemException: 
C:\Users\suman\Downloads\kafka\.\data\kraft-combined-logs\__cluster_metadata-0\quorum-state.tmp
 -> 
C:\Users\suman\Downloads\kafka\.\data\kraft-combined-logs\__cluster_metadata-0\quorum-state:
 The process cannot access the file because it is being used by another process
at 
java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
at 
java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at 
java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:328)
at 
java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:291)
at java.base/java.nio.file.Files.move(Files.java:1429)
at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:947)
... 17 more
[2023-09-04 22:16:25,973] INFO [ControllerServer id=1] Waiting for controller 
quorum voters future (kafka.server.ControllerServer)
[2023-09-04 22:16:25,973] INFO [ControllerServer id=1] Finished waiting for 
controller quorum voters future (kafka.server.ControllerServer)
[2023-09-04 22:16:25,989] ERROR Encountered fatal fault: caught exception 
(org.apache.kafka.server.fault.ProcessTerminatingFaultHandler)

[jira] [Resolved] (KAFKA-14936) Add Grace Period To Stream Table Join

2023-09-04 Thread Walker Carlson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walker Carlson resolved KAFKA-14936.

Resolution: Done

> Add Grace Period To Stream Table Join
> -
>
> Key: KAFKA-14936
> URL: https://issues.apache.org/jira/browse/KAFKA-14936
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Walker Carlson
>Assignee: Walker Carlson
>Priority: Major
>  Labels: kip, streams
> Fix For: 3.6.0
>
>
> Include the grace period for stream-table joins as described in KIP-923.
> Also add a RocksDB time-based queueing implementation of 
> `TimeOrderedKeyValueBuffer`.
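
For context, a minimal sketch of what the DSL usage could look like, assuming the API lands as the Joined#withGracePeriod(Duration) option described in KIP-923 and that the table side is materialized with a versioned store; topic names and serdes below are illustrative, not part of the ticket:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class StreamTableJoinGraceSketch {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // Illustrative topics: "orders" is the stream side, "customers" the table side.
        KStream<String, String> orders =
                builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));

        // The table side is backed by a versioned store so buffered stream records
        // can be joined against the table version that matches their timestamp.
        KTable<String, String> customers = builder.table(
                "customers",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String>as(
                                Stores.persistentVersionedKeyValueStore("customers-store", Duration.ofMinutes(10)))
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.String()));

        // KIP-923: buffer stream-side records for up to the grace period (using the
        // time-ordered buffer mentioned in this ticket) before performing the join.
        KStream<String, String> enriched = orders.join(
                customers,
                (order, customer) -> order + "/" + customer,
                Joined.<String, String, String>with(Serdes.String(), Serdes.String(), Serdes.String())
                        .withGracePeriod(Duration.ofMinutes(5)));

        enriched.to("enriched-orders");
        return builder.build();
    }
}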



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2170

2023-09-04 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-15431) Add support to assert offloaded segment for already produced event in Tiered Storage Framework

2023-09-04 Thread Kamal Chandraprakash (Jira)
Kamal Chandraprakash created KAFKA-15431:


 Summary: Add support to assert offloaded segment for already 
produced event in Tiered Storage Framework
 Key: KAFKA-15431
 URL: https://issues.apache.org/jira/browse/KAFKA-15431
 Project: Kafka
  Issue Type: Task
Reporter: Kamal Chandraprakash


See [comment|https://github.com/apache/kafka/pull/14307#discussion_r1314943942]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #19

2023-09-04 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2169

2023-09-04 Thread Apache Jenkins Server
See 




Re: Re: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-09-04 Thread hudeqi
My approach is to create another thread that regularly requests and updates the end 
offset of each partition in the `keySet` of the `lastReplicatedSourceOffsets` 
collection mentioned by your KIP (if a partition has not been updated for a long 
time, it is removed from `lastReplicatedSourceOffsets`).
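A minimal sketch of what such a background refresher could look like (the `lastReplicatedSourceOffsets` map is the one discussed in the KIP; the dedicated metadata consumer, the 30-second interval, and the lag map below are illustrative assumptions, not an actual implementation):

import java.time.Duration;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

public class ReplicationOffsetLagRefresher {
    // Partition -> last source offset known to be replicated to the target cluster (from the KIP).
    private final Map<TopicPartition, Long> lastReplicatedSourceOffsets = new ConcurrentHashMap<>();
    // Partition -> latest computed replication-offset-lag, to be exposed as a metric.
    private final Map<TopicPartition, Long> replicationOffsetLag = new ConcurrentHashMap<>();

    // A dedicated consumer for metadata lookups, used only from the scheduler thread.
    private final Consumer<byte[], byte[]> sourceConsumer;
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public ReplicationOffsetLagRefresher(Consumer<byte[], byte[]> sourceConsumer) {
        this.sourceConsumer = sourceConsumer;
    }

    public void start() {
        // Periodically fetch the source cluster's end offsets (LEO) for the tracked partitions
        // and recompute lag = LEO - last replicated source offset.
        scheduler.scheduleAtFixedRate(this::refresh, 0, 30, TimeUnit.SECONDS);
    }

    private void refresh() {
        Set<TopicPartition> tracked = lastReplicatedSourceOffsets.keySet();
        if (tracked.isEmpty()) {
            return;
        }
        Map<TopicPartition, Long> endOffsets = sourceConsumer.endOffsets(tracked, Duration.ofSeconds(10));
        endOffsets.forEach((tp, leo) -> {
            Long replicated = lastReplicatedSourceOffsets.get(tp);
            if (replicated != null) {
                replicationOffsetLag.put(tp, Math.max(0L, leo - replicated));
            }
        });
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}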
Obviously, such processing makes the calculation of the partition offset lag less 
real-time and less accurate. But this still meets our needs, because we need the 
partition offset lag to analyze the replication performance of each task and to 
identify which task may have performance problems; if you want to monitor the 
overall offset lag of a topic, then the 
"kafka_consumer_consumer_fetch_manager_metrics_records_lag" metric will be more 
real-time and accurate.
This is just my suggestion, offered in the hope of sparking better ideas, so that 
we can come up with a better solution together.

best,
hudeqi

Elxan Eminov elxanemino...@gmail.com wrote:
> @hudeqi replying to your comment on the PR (
> https://github.com/apache/kafka/pull/14077#discussion_r1314592488), quote:
> 
> "I guess we have a disagreement about lag? My understanding of lag is: the
> real LEO of the source cluster partition minus the LEO that has been
> written to the target cluster. It seems that your definition of lag is: the
> lag between the mirror task getting data from consumption and writing it to
> the target cluster?"
> 
> Yes, this is the case. I've missed the fact that the consumer itself might
> be lagging behind the actual data in the partition.
> I believe your definition of the lag is more precise, but:
> Implementing it this way will come at the cost of an extra listOffsets
> request, introducing the overhead that you mentioned in your initial
> comment.
> 
> If you have enough insights about this, what would you say is the chances
> of the task consumer lagging behind the LEO of the partition?
> Are they big enough to justify the extra call to listOffsets?
> @Viktor,  any thoughts?
> 
> Thanks,
> Elkhan
> 
> On Mon, 4 Sept 2023 at 09:36, Elxan Eminov  wrote:
> 
> > I already have the PR for this so if it will make it easier to discuss,
> > feel free to take a look: https://github.com/apache/kafka/pull/14077
> >
> > On Mon, 4 Sept 2023 at 09:17, hudeqi <16120...@bjtu.edu.cn> wrote:
> >
> >> But does the offset of the last `ConsumerRecord` obtained in poll not
> >> only represent the offset of this record in the source cluster? It seems
> >> that it cannot represent the LEO of the source cluster for this partition.
> >> I understand that the offset lag introduced here should be the LEO of the
> >> source cluster minus the offset of the last record to be polled?
> >>
> >> best,
> >> hudeqi
> >>
> >>
> >>  -----Original Message-----
> >>  From: "Elxan Eminov" 
> >>  Sent: 2023-09-04 14:52:08 (Monday)
> >>  To: dev@kafka.apache.org
> >>  Cc:
> >>  Subject: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2
> >> metric
> >> 
> >> 
> >
> >


[jira] [Resolved] (KAFKA-15052) Fix flaky test QuorumControllerTest.testBalancePartitionLeaders()

2023-09-04 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-15052.
---
Resolution: Fixed

> Fix flaky test QuorumControllerTest.testBalancePartitionLeaders()
> -
>
> Key: KAFKA-15052
> URL: https://issues.apache.org/jira/browse/KAFKA-15052
> Project: Kafka
>  Issue Type: Test
>Reporter: Dimitar Dimitrov
>Assignee: Dimitar Dimitrov
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.6.0
>
>
> Test failed at 
> [https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/trunk/1892/tests/]
>  as well as in various local runs.
> The test creates a topic, fences a broker, notes partition imbalance due to 
> another broker taking over the partition the fenced broker lost, re-registers 
> and unfences the fenced broker, sends {{AlterPartition}} for the lost 
> partition adding the now unfenced broker back to its ISR, then waits for the 
> partition imbalance to disappear.
> The local failures seem to happen when the brokers (including the ones that 
> never get fenced by the test) accidentally get fenced by losing their session 
> due to reaching the (aggressively low for test purposes) session timeout.
> The Cloudbees failure quoted above also seems to indicate that this happened:
> {code:java}
> ...[truncated 738209 chars]...
> 23. (org.apache.kafka.controller.QuorumController:768)
> [2023-06-02 18:17:22,202] DEBUG [QuorumController id=0] Scheduling write 
> event for maybeBalancePartitionLeaders because scheduled (DEFERRED), 
> checkIntervalNs (OptionalLong[10]) and isImbalanced (true) 
> (org.apache.kafka.controller.QuorumController:1401)
> [2023-06-02 18:17:22,202] INFO [QuorumController id=0] Fencing broker 2 
> because its session has timed out. 
> (org.apache.kafka.controller.ReplicationControlManager:1459)
> [2023-06-02 18:17:22,203] DEBUG [QuorumController id=0] handleBrokerFenced: 
> changing partition(s): foo-0, foo-1, foo-2 
> (org.apache.kafka.controller.ReplicationControlManager:1750)
> [2023-06-02 18:17:22,203] DEBUG [QuorumController id=0] partition change for 
> foo-0 with topic ID 033_QSX7TfitL4SDzoeR4w: leader: 2 -> -1, leaderEpoch: 2 
> -> 3, partitionEpoch: 2 -> 3 
> (org.apache.kafka.controller.ReplicationControlManager:157)
> [2023-06-02 18:17:22,204] DEBUG [QuorumController id=0] partition change for 
> foo-1 with topic ID 033_QSX7TfitL4SDzoeR4w: isr: [2, 3] -> [3], leaderEpoch: 
> 3 -> 4, partitionEpoch: 4 -> 5 
> (org.apache.kafka.controller.ReplicationControlManager:157)
> [2023-06-02 18:17:22,204] DEBUG [QuorumController id=0] partition change for 
> foo-2 with topic ID 033_QSX7TfitL4SDzoeR4w: leader: 2 -> -1, leaderEpoch: 2 
> -> 3, partitionEpoch: 2 -> 3 
> (org.apache.kafka.controller.ReplicationControlManager:157)
> [2023-06-02 18:17:22,205] DEBUG append(batch=LocalRecordBatch(leaderEpoch=1, 
> appendTimestamp=240, 
> records=[ApiMessageAndVersion(PartitionChangeRecord(partitionId=0, 
> topicId=033_QSX7TfitL4SDzoeR4w, isr=null, leader=-1, replicas=null, 
> removingReplicas=null, addingReplicas=null, leaderRecoveryState=-1) at 
> version 0), ApiMessageAndVersion(PartitionChangeRecord(partitionId=1, 
> topicId=033_QSX7TfitL4SDzoeR4w, isr=[3], leader=3, replicas=null, 
> removingReplicas=null, addingReplicas=null, leaderRecoveryState=-1) at 
> version 0), ApiMessageAndVersion(PartitionChangeRecord(partitionId=2, 
> topicId=033_QSX7TfitL4SDzoeR4w, isr=null, leader=-1, replicas=null, 
> removingReplicas=null, addingReplicas=null, leaderRecoveryState=-1) at 
> version 0), ApiMessageAndVersion(BrokerRegistrationChangeRecord(brokerId=2, 
> brokerEpoch=3, fenced=1, inControlledShutdown=0) at version 0)]), 
> prevOffset=27) (org.apache.kafka.metalog.LocalLogManager$SharedLogData:253)
> [2023-06-02 18:17:22,205] DEBUG [QuorumController id=0] Creating in-memory 
> snapshot 27 (org.apache.kafka.timeline.SnapshotRegistry:197)
> [2023-06-02 18:17:22,205] DEBUG [LocalLogManager 0] Node 0: running log 
> check. (org.apache.kafka.metalog.LocalLogManager:512)
> [2023-06-02 18:17:22,205] DEBUG [QuorumController id=0] Read-write operation 
> maybeFenceReplicas(451616131) will be completed when the log reaches offset 
> 27. (org.apache.kafka.controller.QuorumController:768)
> [2023-06-02 18:17:22,208] INFO [QuorumController id=0] Fencing broker 3 
> because its session has timed out. 
> (org.apache.kafka.controller.ReplicationControlManager:1459)
> [2023-06-02 18:17:22,209] DEBUG [QuorumController id=0] handleBrokerFenced: 
> changing partition(s): foo-1 
> (org.apache.kafka.controller.ReplicationControlManager:1750)
> [2023-06-02 18:17:22,209] DEBUG [QuorumController id=0] partition change for 
> foo-1 with topic ID 033_QSX7TfitL4SDzoeR4w: leader: 3 -> -1, leaderEpoch: 4 
> -> 

Re: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-09-04 Thread Elxan Eminov
@hudeqi replying to your comment on the PR (
https://github.com/apache/kafka/pull/14077#discussion_r1314592488), quote:

"I guess we have a disagreement about lag? My understanding of lag is: the
real LEO of the source cluster partition minus the LEO that has been
written to the target cluster. It seems that your definition of lag is: the
lag between the mirror task getting data from consumption and writing it to
the target cluster?"

Yes, this is the case. I've missed the fact that the consumer itself might
be lagging behind the actual data in the partition.
I believe your definition of the lag is more precise, but:
Implementing it this way will come at the cost of an extra listOffsets
request, introducing the overhead that you mentioned in your initial
comment.

If you have enough insights about this, what would you say are the chances
of the task consumer lagging behind the LEO of the partition?
Are they big enough to justify the extra call to listOffsets?
@Viktor,  any thoughts?

Thanks,
Elkhan

On Mon, 4 Sept 2023 at 09:36, Elxan Eminov  wrote:

> I already have the PR for this so if it will make it easier to discuss,
> feel free to take a look: https://github.com/apache/kafka/pull/14077
>
> On Mon, 4 Sept 2023 at 09:17, hudeqi <16120...@bjtu.edu.cn> wrote:
>
>> But does the offset of the last `ConsumerRecord` obtained in poll not
>> only represent the offset of this record in the source cluster? It seems
>> that it cannot represent the LEO of the source cluster for this partition.
>> I understand that the offset lag introduced here should be the LEO of the
>> source cluster minus the offset of the last record to be polled?
>>
>> best,
>> hudeqi
>>
>>
>>  -原始邮件-
>>  发件人: "Elxan Eminov" 
>>  发送时间: 2023-09-04 14:52:08 (星期一)
>>  收件人: dev@kafka.apache.org
>>  抄送:
>>  主题: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2
>> metric
>> 
>> 
>
>


[jira] [Created] (KAFKA-15430) Kafka creates replica partition on controller node

2023-09-04 Thread Andrii Vysotskiy (Jira)
Andrii Vysotskiy created KAFKA-15430:


 Summary: Kafka creates replica partition on controller node
 Key: KAFKA-15430
 URL: https://issues.apache.org/jira/browse/KAFKA-15430
 Project: Kafka
  Issue Type: Test
  Components: kraft
Affects Versions: 3.5.1
Reporter: Andrii Vysotskiy


I have a configuration of 5 nodes with the following roles: 4 broker+controller and 1 
controller. I create a topic with replication factor 5; it is created, and 
describe shows that the topic partition has 5 replicas.

 

/opt/kafka/latest/bin/kafka-topics.sh --create 
--bootstrap-server=dc1-prod-kafka-001-vs:9092 --replication-factor 5 
--partitions 1 --topic test5

 

/opt/kafka/latest/bin/kafka-topics.sh --describe --topic test5 
--bootstrap-server=dc1-prod-kafka-001-vs:9092
Topic: test5 TopicId: amuqr8EgRmqeKryUHZwsMA PartitionCount: 1 
ReplicationFactor: 5 Configs: segment.bytes=1073741824
Topic: test5 Partition: 0 Leader: 3 Replicas: 3,4,1,2,5 Isr: 3,4,1,2

There are 5 replicas but only 4 in the ISR. Why does Kafka initially allow you to 
create a replica on the controller-only node, when in reality the replica is not 
created on that node and there are no topic files in its log directory?

Is this expected behavior or not? Thanks.

I want to understand whether such behavior is the norm for Kafka.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-09-04 Thread Elxan Eminov
I already have the PR for this so if it will make it easier to discuss,
feel free to take a look: https://github.com/apache/kafka/pull/14077

On Mon, 4 Sept 2023 at 09:17, hudeqi <16120...@bjtu.edu.cn> wrote:

> But does the offset of the last `ConsumerRecord` obtained in poll not only
> represent the offset of this record in the source cluster? It seems that it
> cannot represent the LEO of the source cluster for this partition. I
> understand that the offset lag introduced here should be the LEO of the
> source cluster minus the offset of the last record to be polled?
>
> best,
> hudeqi
>
>
>  -----Original Message-----
>  From: "Elxan Eminov" 
>  Sent: 2023-09-04 14:52:08 (Monday)
>  To: dev@kafka.apache.org
>  Cc:
>  Subject: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2
> metric
> 
> 


Re: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-09-04 Thread Elxan Eminov
The offset lag is the difference between the last end offset of the source
partition (LEO) and the last replicated source offset (LRO).
The offset of the last `ConsumerRecord` for a partition obtained in poll
represents the LEO in the source cluster, and the LRO is obtained in the producer
callback, where we have the source offset of the record being committed
available to us:
https://kafka.apache.org/26/javadoc/org/apache/kafka/connect/source/SourceTask.html#commitRecord-org.apache.kafka.connect.source.SourceRecord-org.apache.kafka.clients.producer.RecordMetadata-
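
To make this concrete, a minimal sketch of a task that tracks both values (class, method, and field names are illustrative; the only Connect API used is SourceTask#commitRecord(SourceRecord, RecordMetadata), and the "offset" key assumes the source offset was wrapped the way MirrorSourceTask's helper does):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public abstract class LagTrackingSourceTask extends SourceTask {
    // LEO side: highest source offset seen in the records returned by poll().
    private final Map<TopicPartition, Long> lastPolledSourceOffsets = new ConcurrentHashMap<>();
    // LRO side: last source offset acknowledged by the target-cluster producer.
    private final Map<TopicPartition, Long> lastReplicatedSourceOffsets = new ConcurrentHashMap<>();

    // Called from poll() with the offset of the last record returned for a source partition.
    protected void recordPolledOffset(TopicPartition sourcePartition, long lastOffset) {
        lastPolledSourceOffsets.put(sourcePartition, lastOffset);
    }

    @Override
    public void commitRecord(SourceRecord record, RecordMetadata metadata) {
        // Assumption: the source offset was stored under an "offset" key when the SourceRecord
        // was built; partitions are keyed here by the record's topic/partition for simplicity.
        Object offset = record.sourceOffset().get("offset");
        Integer partition = record.kafkaPartition();
        if (offset instanceof Long && partition != null) {
            lastReplicatedSourceOffsets.put(new TopicPartition(record.topic(), partition), (Long) offset);
        }
    }

    // replication-offset-lag = LEO (observed in poll) - LRO (observed in commitRecord).
    public long replicationOffsetLag(TopicPartition sourcePartition) {
        long leo = lastPolledSourceOffsets.getOrDefault(sourcePartition, 0L);
        long lro = lastReplicatedSourceOffsets.getOrDefault(sourcePartition, 0L);
        return Math.max(0L, leo - lro);
    }
}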

Does this make sense?
Thanks!
Elkhan

On Mon, 4 Sept 2023 at 09:17, hudeqi <16120...@bjtu.edu.cn> wrote:

> But does the offset of the last `ConsumerRecord` obtained in poll not only
> represent the offset of this record in the source cluster? It seems that it
> cannot represent the LEO of the source cluster for this partition. I
> understand that the offset lag introduced here should be the LEO of the
> source cluster minus the offset of the last record to be polled?
>
> best,
> hudeqi
>
>
>  -----Original Message-----
>  From: "Elxan Eminov" 
>  Sent: 2023-09-04 14:52:08 (Monday)
>  To: dev@kafka.apache.org
>  Cc:
>  Subject: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2
> metric
> 
> 


Re: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-09-04 Thread hudeqi
But doesn't the offset of the last `ConsumerRecord` obtained in poll only 
represent the offset of that record in the source cluster? It seems that it 
cannot represent the LEO of the source cluster for this partition. My understanding 
is that the offset lag introduced here should be the LEO of the source cluster 
minus the offset of the last polled record.

best,
hudeqi


 -----Original Message-----
 From: "Elxan Eminov" 
 Sent: 2023-09-04 14:52:08 (Monday)
 To: dev@kafka.apache.org
 Cc: 
 Subject: Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric
 


Re: KIP-976: Cluster-wide dynamic log adjustment for Kafka Connect

2023-09-04 Thread Federico Valeri
Hi Chris, thanks. This looks like a useful feature.

Due to the idempotent nature of PUT, I guess that the last_modified
timestamp won't change if the same request is repeated successively.
Should we add a unit test for that?

On Mon, Sep 4, 2023 at 6:17 AM Ashwin  wrote:
>
> Hi Chris,
>
> Thanks for thinking about this useful feature !
> I had a question regarding
>
> > Since cluster metadata is not required to handle these types of request,
> they will not be forwarded to the leader
>
> And later, we also mention about supporting more scope types in the future.
> Don't you foresee a future scope type which may require cluster metadata ?
> In that case, isn't it better to forward the requests to the leader in the
> initial implementation ?
>
> I would also recommend an additional system test for Standalone herder to
> ensure that the new scope parameter is honored and the response contains
> the last modified time.
>
> Thanks,
> Ashwin
>
> On Sat, Sep 2, 2023 at 5:12 AM Chris Egerton 
> wrote:
>
> > Hi all,
> >
> > Can't imagine a worse time to publish a new KIP (it's late on a Friday and
> > we're in the middle of the 3.6.0 release), but I wanted to put forth
> > KIP-976 for discussion:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-976%3A+Cluster-wide+dynamic+log+adjustment+for+Kafka+Connect
> >
> > TL;DR: The API to dynamically adjust log levels at runtime with Connect is
> > great, and I'd like to augment it with support to adjust log levels for
> > every worker in the cluster (instead of just the worker that receives the
> > REST request).
> >
> > I look forward to everyone's thoughts, but expect that this will probably
> > take a bump or two once the dust has settled on 3.6.0. Huge thanks to
> > everyone that's contributed to that release so far, especially our release
> > manager Satish!
> >
> > Cheers,
> >
> > Chris
> >


Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2023-09-04 Thread Elxan Eminov
Hi huqedi,

I've considered two solutions:
1)
https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#endOffsets-java.util.Collection-
This option requires an additional network call, so it is not preferable.
2) Manually getting the last element of the `ConsumerRecord` list in the
response of poll() (a rough sketch of this is below).
I believe this option doesn't introduce considerable overhead.
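
A minimal sketch of option 2, extracting the last polled offset per partition from the ConsumerRecords returned by poll() (the helper class, method names, and byte-array serde choice are illustrative):

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

public final class LastPolledOffsets {
    private LastPolledOffsets() { }

    // Returns, for each partition present in this poll, the offset of the last record returned.
    // No extra network call is needed, but we only ever see offsets the consumer has fetched,
    // which is the limitation discussed above.
    public static Map<TopicPartition, Long> lastOffsetsFrom(ConsumerRecords<byte[], byte[]> records) {
        Map<TopicPartition, Long> lastOffsets = new HashMap<>();
        for (TopicPartition tp : records.partitions()) {
            List<ConsumerRecord<byte[], byte[]>> partitionRecords = records.records(tp);
            if (!partitionRecords.isEmpty()) {
                lastOffsets.put(tp, partitionRecords.get(partitionRecords.size() - 1).offset());
            }
        }
        return lastOffsets;
    }

    // Illustrative use inside a poll loop (consumer construction elided):
    static void pollOnce(Consumer<byte[], byte[]> consumer) {
        ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
        lastOffsetsFrom(records).forEach((tp, offset) ->
                System.out.printf("last polled offset for %s is %d%n", tp, offset));
    }
}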

Let me know what you think,
Thanks,
Elkhan

On Mon, 4 Sept 2023 at 04:03, hudeqi <16120...@bjtu.edu.cn> wrote:

> Hi, Eminov.
> My doubt is: how do you get the LEO of the source cluster for the
> partition in `poll`?
>
> best,
> hudeqi
>
>
>  -----Original Message-----
>  From: "Elxan Eminov" 
>  Sent: 2023-09-02 19:18:23 (Saturday)
>  To: dev@kafka.apache.org
>  Cc:
>  Subject: Re: Re: [DISCUSS] KIP-971 Expose replication-offset-lag
> MirrorMaker2 metric
> 
> 


Re: KIP-976: Cluster-wide dynamic log adjustment for Kafka Connect

2023-09-04 Thread Yash Mayya
> If no modifications to a logging namespace have
> been made, won't the namespace itself be omitted
> from the response? It looks like we currently only
> return loggers that have non-null log levels in the
> *GET /admin/loggers* endpoint.

This can be ignored - I didn't account for the fact that at worker startup
we'll still have loggers with non-null log levels - the root logger and any
other named loggers which have explicit log levels configured in the log4j
properties.

On Mon, Sep 4, 2023 at 12:00 PM Yash Mayya  wrote:

> Hi Chris,
>
> Thanks for the KIP, this looks like a really useful addition to Kafka
> Connect's log level REST APIs! I have a few questions and comments:
>
> > If no modifications to the namespace have
> > been made since the worker was started,
> > they will be null
>
> If no modifications to a logging namespace have been made, won't the
> namespace itself be omitted from the response? It looks like we currently
> only return loggers that have non-null log levels in the *GET
> /admin/loggers* endpoint.
>
> > Last modified timestamp
>
> From the proposed changes section, it isn't very clear to me how we'll be
> tracking this last modified timestamp to be returned in responses for the *GET
> /admin/loggers* and *GET /admin/loggers/{logger}* endpoints. Could you
> please elaborate on this? Also, will we track the last modified timestamp
> even for worker scoped modifications where we won't write any records to
> the config topic and the requests will essentially be processed
> synchronously?
>
> > Record values will have the following format, where ${level} is the new
> logging level for the namespace:
>
> In the current synchronous implementation for the *PUT
> /admin/loggers/{logger} *endpoint, we return a 404 error if the level is
> invalid (i.e. not one of TRACE, DEBUG, WARN etc.). Since the new cluster
> scoped variant of the endpoint will be asynchronous, can we also add a
> validation to synchronously surface erroneous log levels to users?
>
> > Workers that have not yet completed startup
> > will ignore these records, including if the worker
> > reads one during the read-to-end of the config
> > topic that all workers perform during startup.
>
> I'm curious to know what the rationale here is? In KIP-745, the stated
> reasoning behind ignoring restart requests during worker startup was that
> the worker will anyway be starting all connectors and tasks assigned to it
> so the restart request is essentially meaningless. With the log level API
> however, wouldn't it make more sense to apply any cluster scoped
> modifications to new workers in the cluster too? This also applies to any
> workers that are restarted soon after a request is made to *PUT
> /admin/loggers/{logger}?scope=cluster *on another worker. Maybe we could
> stack up all the cluster scoped log level modification requests during the
> config topic read at worker startup and apply the latest ones per namespace
> (assuming older records haven't already been compacted) after we've
> finished reading to the end of the config topic?
>
> > if you're debugging the distributed herder, you
> > need all the help you can get
>
> 
>
> As an aside, thanks for the impressively thorough testing plan in the KIP!
>
>
> Hi Ashwin,
>
> > isn't it better to forward the requests to the
> > leader in the initial implementation ?
>
> Would there be any downside to only introducing leader forwarding for
> connector/task specific scope types in the future (rather than introducing
> it at the outset here where it isn't strictly required)?
>
> > I would also recommend an additional system
> > test for Standalone herder to ensure that the
> > new scope parameter is honored and the response
> > contains the last modified time.
>
> Can't this be sufficiently covered using unit and / or integration tests?
> System tests are fairly expensive to run in terms of overall test runtime
> and they are also not run on every PR or commit to trunk / feature branches
> (unlike unit tests and integration tests).
>
> Thanks,
> Yash
>
> On Sat, Sep 2, 2023 at 5:12 AM Chris Egerton 
> wrote:
>
>> Hi all,
>>
>> Can't imagine a worse time to publish a new KIP (it's late on a Friday and
>> we're in the middle of the 3.6.0 release), but I wanted to put forth
>> KIP-976 for discussion:
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-976%3A+Cluster-wide+dynamic+log+adjustment+for+Kafka+Connect
>>
>> TL;DR: The API to dynamically adjust log levels at runtime with Connect is
>> great, and I'd like to augment it with support to adjust log levels for
>> every worker in the cluster (instead of just the worker that receives the
>> REST request).
>>
>> I look forward to everyone's thoughts, but expect that this will probably
>> take a bump or two once the dust has settled on 3.6.0. Huge thanks to
>> everyone that's contributed to that release so far, especially our release
>> manager Satish!
>>
>> Cheers,
>>
>> Chris
>>
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #18

2023-09-04 Thread Apache Jenkins Server
See 




Re: KIP-976: Cluster-wide dynamic log adjustment for Kafka Connect

2023-09-04 Thread Yash Mayya
Hi Chris,

Thanks for the KIP, this looks like a really useful addition to Kafka
Connect's log level REST APIs! I have a few questions and comments:

> If no modifications to the namespace have
> been made since the worker was started,
> they will be null

If no modifications to a logging namespace have been made, won't the
namespace itself be omitted from the response? It looks like we currently
only return loggers that have non-null log levels in the *GET
/admin/loggers* endpoint.

> Last modified timestamp

From the proposed changes section, it isn't very clear to me how we'll be
tracking this last modified timestamp to be returned in responses for the *GET
/admin/loggers* and *GET /admin/loggers/{logger}* endpoints. Could you
please elaborate on this? Also, will we track the last modified timestamp
even for worker scoped modifications where we won't write any records to
the config topic and the requests will essentially be processed
synchronously?

> Record values will have the following format, where ${level} is the new
logging level for the namespace:

In the current synchronous implementation for the *PUT
/admin/loggers/{logger} *endpoint, we return a 404 error if the level is
invalid (i.e. not one of TRACE, DEBUG, WARN etc.). Since the new cluster
scoped variant of the endpoint will be asynchronous, can we also add a
validation to synchronously surface erroneous log levels to users?
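
For reference, a minimal sketch of the two request shapes being discussed, using java.net.http (the worker address and logger namespace are illustrative; the scope parameter is the one proposed in this KIP, not an existing API):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectLoggerLevelRequests {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String worker = "http://localhost:8083"; // illustrative worker address
        String logger = "org.apache.kafka.connect.runtime.distributed.DistributedHerder";
        String body = "{\"level\": \"DEBUG\"}";

        // Existing behavior: synchronously adjust the namespace on the worker receiving the request.
        HttpRequest workerScoped = HttpRequest.newBuilder()
                .uri(URI.create(worker + "/admin/loggers/" + logger))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Proposed in KIP-976: the same request with scope=cluster, propagated to every worker.
        HttpRequest clusterScoped = HttpRequest.newBuilder()
                .uri(URI.create(worker + "/admin/loggers/" + logger + "?scope=cluster"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        System.out.println(client.send(workerScoped, HttpResponse.BodyHandlers.ofString()).statusCode());
        System.out.println(client.send(clusterScoped, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}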

> Workers that have not yet completed startup
> will ignore these records, including if the worker
> reads one during the read-to-end of the config
> topic that all workers perform during startup.

I'm curious to know what the rationale here is? In KIP-745, the stated
reasoning behind ignoring restart requests during worker startup was that
the worker will anyway be starting all connectors and tasks assigned to it
so the restart request is essentially meaningless. With the log level API
however, wouldn't it make more sense to apply any cluster scoped
modifications to new workers in the cluster too? This also applies to any
workers that are restarted soon after a request is made to *PUT
/admin/loggers/{logger}?scope=cluster *on another worker. Maybe we could
stack up all the cluster scoped log level modification requests during the
config topic read at worker startup and apply the latest ones per namespace
(assuming older records haven't already been compacted) after we've
finished reading to the end of the config topic?

> if you're debugging the distributed herder, you
> need all the help you can get



As an aside, thanks for the impressively thorough testing plan in the KIP!


Hi Ashwin,

> isn't it better to forward the requests to the
> leader in the initial implementation ?

Would there be any downside to only introducing leader forwarding for
connector/task specific scope types in the future (rather than introducing
it at the outset here where it isn't strictly required)?

> I would also recommend an additional system
> test for Standalone herder to ensure that the
> new scope parameter is honored and the response
> contains the last modified time.

Can't this be sufficiently covered using unit and / or integration tests?
System tests are fairly expensive to run in terms of overall test runtime
and they are also not run on every PR or commit to trunk / feature branches
(unlike unit tests and integration tests).

Thanks,
Yash

On Sat, Sep 2, 2023 at 5:12 AM Chris Egerton 
wrote:

> Hi all,
>
> Can't imagine a worse time to publish a new KIP (it's late on a Friday and
> we're in the middle of the 3.6.0 release), but I wanted to put forth
> KIP-976 for discussion:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-976%3A+Cluster-wide+dynamic+log+adjustment+for+Kafka+Connect
>
> TL;DR: The API to dynamically adjust log levels at runtime with Connect is
> great, and I'd like to augment it with support to adjust log levels for
> every worker in the cluster (instead of just the worker that receives the
> REST request).
>
> I look forward to everyone's thoughts, but expect that this will probably
> take a bump or two once the dust has settled on 3.6.0. Huge thanks to
> everyone that's contributed to that release so far, especially our release
> manager Satish!
>
> Cheers,
>
> Chris
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2168

2023-09-04 Thread Apache Jenkins Server
See