Re: New release branch 3.9

2024-07-30 Thread Ismael Juma
I would recommend against large refactorings in trunk until the first RC
for 3.9 - that will reduce cherry-pick friction. Once we have the first RC,
subsequent changes to 3.9 should be limited in scope.

Ismael

On Tue, Jul 30, 2024 at 4:31 PM Colin McCabe  wrote:

> Yeah, please go ahead. I know a lot of people are waiting for 4.0.
>
> best,
> Colin
>
>
> On Tue, Jul 30, 2024, at 16:05, Matthias J. Sax wrote:
> > Thanks for clarifying Colin. So my assumptions were actually correct.
> >
> > We have a lot of contributors waiting to pick up 4.0 tickets, and I'll
> > go ahead and tell them that we are ready and they can start to pick them
> up.
> >
> > Thanks.
> >
> >
> > -Matthias
> >
> > On 7/30/24 3:51 PM, Colin McCabe wrote:
> >> Hi Chia-Ping Tsai,
> >>
> >> If you can get them done this week then I think we can merge them into
> 3.9. If not, then let's wait until 4.0, please.
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Tue, Jul 30, 2024, at 09:07, Chia-Ping Tsai wrote:
> >>> hi Colin,
> >>>
> >>> Could you please consider adding
> >>> https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
> >>>
> >>> The issue is used to deprecate the formatters in core module. Also, it
> >>> implements the replacements for them.
> >>>
> >>> In order to follow the deprecation rules, it would be nice to have
> >>> KAFKA-1 in 3.9.0
> >>>
> >>> If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0
> when
> >>> they get merged to trunk.
> >>>
> >>> Best,
> >>> Chia-Ping
> >>>
> >>>
> >>> José Armando García Sancio  於
> 2024年7月30日 週二
> >>> 下午11:59寫道:
> >>>
>  Thanks Colin.
> 
>  For KIP-853 (KRaft Controller Membership Changes), we still have the
>  following features that are in progress.
> 
>  1. UpdateVoter RPC and request handling
>  
>  2. Storage tool changes for KIP-853
>  
>  3. kafka-metadata-quorum describe changes for KIP-853
>  
>  4. kafka-metadata-quorum add voter and remove voter changes
>  
>  5. Sending UpdateVoter request and response handling
>  
> 
>  Can we cherry pick them to the release branch 3.9.0 when they get
> merged to
>  trunk? They have a small impact as they shouldn't affect the rest of
> Kafka
>  and only affect the kraft controller membership change feature. I
> expected
>  them to get merged to the trunk branch in the coming days.
> 
>  Thanks,
> 
>  On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe 
> wrote:
> 
> > Hi Kafka developers and friends,
> >
> > As promised, we now have a release branch for the upcoming 3.9.0
> release.
> > Trunk has been bumped to 4.0.0-SNAPSHOT.
> >
> > I'll be going over the JIRAs to move every non-blocker from this
> release
>  to
> > the next release.
> >
> >  From this point, most changes should go to trunk.
> > *Blockers (existing and new that we discover while testing the
> release)
> > will be double-committed. *Please discuss with your reviewer whether
> your
> > PR should go to trunk or to trunk+release so they can merge
> accordingly.
> >
> > *Please help us test the release! *
> >
> > best,
> > Colin
> >
> 
> 
>  --
>  -José
> 
>


[jira] [Resolved] (KAFKA-17205) Allow topic config validation in controller level in KRaft mode

2024-07-30 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-17205.
---
Resolution: Fixed

> Allow topic config validation in controller level in KRaft mode
> ---
>
> Key: KAFKA-17205
> URL: https://issues.apache.org/jira/browse/KAFKA-17205
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
> Fix For: 3.9.0
>
>
> Allow topic config validation at the controller level. This is required because 
> we need to fail an invalid config change before it is written into the metadata 
> log, especially for the tiered storage feature.
>  
> Note: This ticket only makes changes for KRaft mode.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1070: Deprecate MockProcessorContext

2024-07-30 Thread Matthias J. Sax

Hi,

Also a +1 (binding) from me.


I am closing this vote with the KIP being accepted with 4 binding +1s 
from Sophie, Bill, Lucas, and myself.


Thanks a lot!


-Matthias


On 7/26/24 10:52 AM, Matthias J. Sax wrote:

I updated the KIP accordingly.

If there is no further follow up, I'll close the vote shortly.


-Matthias

On 7/21/24 10:24 PM, Matthias J. Sax wrote:

I just realized that both

- ValueTransformerWithKey and
- ValueTransformerWithKeySupplier

are still used by non-deprecated `KTable#transformValues()`. So we 
cannot deprecate them with this KIP.


I filed https://issues.apache.org/jira/browse/KAFKA-17178 for tracking.

As an alternative, we could extend this KIP and include K17178 that I 
just filed, but it would be a larger change. I frankly just wanted to 
tie up a few loose ends w/o the need to do a larger KIP.


Hope you are all ok with keeping the KIP as-is and simple, and we wait 
for somebody else to pick up K17178.


Thoughts?


-Matthias

On 7/15/24 8:43 PM, Sophie Blee-Goldman wrote:

Makes sense to me -- seems like an oversight since we did correctly
deprecate the old Processor, ProcessorSupplier, etc (not to mention the
#transform, #transformValues methods). Still a +1 (binding) from me

On Fri, Jul 12, 2024 at 4:41 PM Matthias J. Sax  
wrote:


I just realized that there are more interfaces with a similar 
situation:


- Transformer
- TransformerSupplier
- ValueTransformer
- ValueTransformerSupplier
- ValueTransformerWithKey
- ValueTransformerWithKeySupplier

Given that `KStream#transform` and `KStream#transformValues` are
deprecated, it seems we should deprecate all of them, too?



-Matthias


On 7/12/24 1:06 AM, Lucas Brutschy wrote:

Sounds good to me!

+1 (binding)

On Fri, Jul 12, 2024 at 12:55 AM Bill Bejeck  
wrote:


+1 (binding)

On Thu, Jul 11, 2024 at 5:07 PM Sophie Blee-Goldman <

sop...@responsive.dev>

wrote:


Makes sense to me, +1 (binding)

On Thu, Jul 11, 2024 at 9:24 AM Matthias J. Sax 

wrote:



Hi,

I want to propose a very small KIP. Skipping the DISCUSS step, and
calling for a VOTE directly.






https://cwiki.apache.org/confluence/display/KAFKA/KIP-1070%3A+deprecate+MockProcessorContext



-Matthias









Re: [DISCUSS] KIP 1072 - Add @FunctionalInterface annotation to Kafka Streams SAM methods

2024-07-30 Thread Matthias J. Sax

Thanks a lot for the KIP Ray!

It seems to be a good improvement to make using KS with Clojure more 
seamless.


However, I am not 100% sure if all listed interfaces make sense?


(100) GlobalKTable: it's basically a sibling to the `KStream` and `KTable` 
interfaces, but users would never implement it, and thus I think it 
won't make much sense to add the annotation. Playing devil's advocate, it 
could even be "harmful" as it might send a wrong signal to users that 
this interface is intended to be implemented by them.



(200) NamedOperation: this is some helper interface that users also won't 
need to implement. I am less worried about it being harmful (as I am for 
GlobalKTable), but I also don't see much of an advantage.



(300) TransformerSupplier and ValueTransformerSupplier: both are going 
to be deprecated with 4.0, which is only a side cleanup anyway. Both can 
only be used with `KStream.transform()`, `.flatTransform()`, 
`.transformValues()` and `.flatTransformValues()`, and all four methods 
will be removed in 4.0, rendering both interfaces practically useless. -- No 
damage in including them, but also not useful.



(400) ProcessorSupplier and Processor: there are currently two interfaces 
with each name, the old and already deprecated interfaces 
`...processor.Processor[Supplier]` and the new 
`...processor.api.Processor[Supplier]`. The KIP should be explicit and 
say `api.Processor[Supplier]`, as only the new interfaces have the annotation 
already, but not the old ones. - The old interfaces also do not need to 
be updated IMHO, as we will remove both with 4.0, too.



(410) There are also two interfaces `...processor.ProcessorContext` and 
`...processor.api.ProcessorContext`. Both are still in use, and thus 
the KIP should mention both explicitly.



(500) I am not sure if the KIP needs to explicitly list all interfaces it 
does not update... It's a long list and it might be easier to read the 
KIP if omitted?



(600) There are some other interfaces/abstract classes which might benefit 
from the annotation, too:



 - org.apache.kafka.streams.errors:

   DeserializationExceptionHandler (does not qualify now, but we could 
include it in the KIP and file a follow up ticket for the future to add 
it, when we remove the deprecated method? -- This would avoid the need 
for another KIP in the future)

   StreamsUncaughtExceptionHandler


 - org.apache.kafka.streams.processor.api:

   ContextualFixedKeyProcessor
   ContextualProcessor


 - org.apache.kafka.streams.processor:

   CommitCallback
   Punctuator
   StateRestoreCallback
   StreamPartitioner (there is already an open PR to remove the 
deprecated method so it will qualify in the 4.0 release)

   TimestampExtractor
   TopicNameExtractor



-Matthias


On 7/29/24 12:47 AM, Ray McDermott wrote:

These annotations assist with clarifying the purpose of the interface, as well 
as assisting interop with non-Java JVM languages.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-1072%3A+Add+@FunctionalInterface+annotation+to+Kafka+Streams+SAM+methods

Comments welcome

Thanks

Ray
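
As a minimal illustration of what the proposed annotation does in practice, here
is a small, self-contained sketch. KeyMapper below is a hypothetical stand-in,
not one of the Kafka Streams interfaces listed in the KIP; the annotation adds no
runtime behaviour, it documents the single-abstract-method contract, makes the
compiler reject a second abstract method, and helps non-Java JVM languages
(Clojure, Kotlin, Scala) treat the type as a function.

    @FunctionalInterface
    interface KeyMapper<K, V> {
        // Single abstract method: the compiler enforces that no second one is added.
        String map(K key, V value);
    }

    public class FunctionalInterfaceDemo {
        public static void main(String[] args) {
            // Because the interface is a SAM type, a lambda can be passed wherever
            // a KeyMapper is expected.
            KeyMapper<String, Integer> mapper = (key, value) -> key + "-" + value;
            System.out.println(mapper.map("order", 42)); // prints "order-42"
        }
    }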


[jira] [Created] (KAFKA-17224) Make ForeachProcessor internal

2024-07-30 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-17224:
---

 Summary: Make ForeachProcessor internal
 Key: KAFKA-17224
 URL: https://issues.apache.org/jira/browse/KAFKA-17224
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


`ForeachProcessor` is in public package `org.apache.kafka.streams.kstream` but 
it's actually an internal class.

We should deprecate it and move it into an internal package in a future release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-512: make Record Headers available in onAcknowledgement

2024-07-30 Thread Matthias J. Sax

Thanks for updating the KIP.

(100) As stated previously, I personally don't think that adding Headers 
to RecordMetadata is the right thing to do. To me, Headers store 
"application metadata" of a record, but they are not "Kafka native" 
record metadata (i.e., metadata Kafka can reason about). Headers are a black 
box to Kafka, similar to key and value.


The JavaDocs of `RecordMetadata` might not be helpful, but I agree with 
Andrew and Lianet that its original purpose does not fit the idea of 
adding Headers to it.




(200) However, I don't understand the argument about `onComplete()`? Of 
course, if the interceptor's `onSend()` method modified the headers, 
`onComplete()` would see the modified record, not the original one. But 
the same should be true for `onAcknowledgement()` (no matter if we pass 
the Headers as a parameter or via `RecordMetadata`), right?


Thus, I would personally argue that extending `onComplete()` and passing 
in the Headers might be a good thing to do, too.
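
To make the overload idea concrete, here is a small sketch under stated
assumptions: the stand-in interface, the parameter order, and the delegating
default method are illustrative only, not the actual ProducerInterceptor change
or the final KIP-512 API.

    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.header.Headers;

    interface AcknowledgementListener {

        // Existing shape of the callback: metadata plus error.
        void onAcknowledgement(RecordMetadata metadata, Exception exception);

        // Possible new overload: headers passed alongside. The default method
        // delegates to the old overload, so existing implementations keep
        // compiling and behaving as before.
        default void onAcknowledgement(RecordMetadata metadata, Headers headers,
                                       Exception exception) {
            onAcknowledgement(metadata, exception);
        }
    }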




[side remark] I personally don't agree with the comment on the linked PR 
(even if this is not relevant to this KIP):



it is not recommended but allowable to create a new ProducerRecord in 
Interceptor.


In the end, `onSend()` has return type `ProducerRecord`, and while I 
agree that it should be used with care, it seems odd to call it an "edge 
case" and position it as something that is only "tolerated"... In the 
end, if we really want to consider modifying a record in `onSend()` as bad 
practice, we should rather change the interceptor interface and change 
the `onSend()` return type to `void`.




(300) About Lianet's idea to just track a "producer local timestamp" and 
pass it into the callback: I find it interesting, but I am not totally sure 
if we might make it too complicated?


In general, I believe that we should actually have a much larger change 
to the Kafka message format, and always store the producer provided 
"create timestamp" plus the broker side "log append timestamp". For this 
case, `RecordMetadata` would be extended to also provide both 
timestamps. Of course, changing the message format is totally 
out-of-scope for this KIP, and I don't propose to tackle it with this 
KIP. However, if we would want to be forward looking (and optimistically 
assume that we might change the message format at some point in the 
future accordingly), we could actually deprecate the existing 
`RecordMetadata#timestamp()` method, and add `#createTimestamp()` and 
`#logAppendTimestamp()` methods, and clearly document their semantics 
with regard to the corresponding "CreateTime" vs "AppendTime" topic config.


The disadvantage I see is that we do something that might never happen 
in the Kafka message format, and that we would limit the scope of this 
KIP to the "measure latency" use-case only.


The KIP also mentions tracing as a use-case which could benefit from access to 
Headers in the callback, which I find convincing. I don't see a good 
reason why we would want to exclude this use-case? -- And frankly, there 
could be even other use-cases which might benefit from easy access to 
Headers in the callbacks.




(400) If we really go with overloading `onAcknowledgement()` (and maybe 
also `onCompletion()`), I am wondering if we should deprecate the 
existing overloads? Given that this idea is currently a "rejected 
alternative" it is ok that the KIP is not specific about it; however, if we 
actually do it this way, we should consider it.




-Matthias



On 7/30/24 12:06 PM, Lianet M. wrote:

Hello Rich, thanks for resurrecting the KIP, seems to fill a gap indeed.

LM1. Specifically related to motivation#1. ProducerRecord already has a
timestamp, passed into the RecordMetadata, that represents the creation
time provided on new ProducerRecord, so couldn't we reuse it to avoid the
extra complexity of having to "include a timestamp in the header when the
message is sent" to be able to compute latency properly. The challenge of
course is that that timestamp may be overwritten (and this is the root
cause of the gap), but that could be resolved just by keeping the original
time and making it available.
RecordMetadata would keep a timestamp (passed from the record creation,
never mutated), and the "effectiveTimestamp" (the one it currently has,
updated with the broker result based on configs). Main advantage would be
not having to add a header for calculating latency. The user simply creates
the record with a timestamp (known existing concept), and we make that
value accessible in the RecordMetadata (where it exists already at some
point, but it's mutated). Thoughts?

LM2. Regardless of the point above, if we think having the headers
available on the onAcknowledgement would be helpful, I definitely see the
case for both alternatives (headers in RecordMetadata and as param). I
share Andrew's feeling because Headers are indeed part of the
ProducerRecord. But then headers will in practice simply contain info
related to the record, so it seems 

Re: [kafka-clients] [ANNOUNCE] Apache Kafka 3.8.0

2024-07-30 Thread Luke Chen
Thanks for running the release, and thanks everyone contributed to v3.8.0.



On Wed, Jul 31, 2024 at 6:09 AM Greg Harris 
wrote:

> Thank you to all of the Contributors, Committers, and our release manager
> Josep!
>
> Greg
>
> On Tue, Jul 30, 2024 at 1:34 PM Justine Olshan
> 
> wrote:
>
> > Thanks Josep for your hard work! And to everyone who contributed to this
> > release.
> >
> > Justine
> >
> > On Tue, Jul 30, 2024 at 8:04 AM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > Thanks for running the release!
> > >
> > > On Tue, Jul 30, 2024 at 4:33 AM Colin McCabe 
> wrote:
> > >
> > > > +1. Thanks, Josep!
> > > >
> > > > Colin
> > > >
> > > > On Mon, Jul 29, 2024, at 10:32, Chris Egerton wrote:
> > > > > Thanks for running the release, Josep!
> > > > >
> > > > >
> > > > > On Mon, Jul 29, 2024, 13:31 'Josep Prat' via kafka-clients <
> > > > kafka-clie...@googlegroups.com> wrote:
> > > > >> The Apache Kafka community is pleased to announce the release for
> > > > Apache
> > > > >> Kafka 3.8.0
> > > > >>
> > > > >> This is a minor release and it includes fixes and improvements
> from
> > > 456
> > > > >> JIRAs.
> > > > >>
> > > > >> All of the changes in this release can be found in the release
> > notes:
> > > > >> https://www.apache.org/dist/kafka/3.8.0/RELEASE_NOTES.html
> > > > >>
> > > > >> An overview of the release can be found in our announcement blog
> > post:
> > > > >>
> https://kafka.apache.org/blog#apache_kafka_380_release_announcement
> > > > >>
> > > > >> You can download the source and binary release (Scala 2.12 and
> Scala
> > > > >> 2.13) from:
> > > > >> https://kafka.apache.org/downloads#3.8.0
> > > > >>
> > > > >>
> > > >
> > >
> >
> ---
> > > > >>
> > > > >>
> > > > >> Apache Kafka is a distributed streaming platform with four core
> > APIs:
> > > > >>
> > > > >>
> > > > >> ** The Producer API allows an application to publish a stream of
> > > > records to
> > > > >> one or more Kafka topics.
> > > > >>
> > > > >> ** The Consumer API allows an application to subscribe to one or
> > more
> > > > >> topics and process the stream of records produced to them.
> > > > >>
> > > > >> ** The Streams API allows an application to act as a stream
> > processor,
> > > > >> consuming an input stream from one or more topics and producing an
> > > > >> output stream to one or more output topics, effectively
> transforming
> > > the
> > > > >> input streams to output streams.
> > > > >>
> > > > >> ** The Connector API allows building and running reusable
> producers
> > or
> > > > >> consumers that connect Kafka topics to existing applications or
> data
> > > > >> systems. For example, a connector to a relational database might
> > > > >> capture every change to a table.
> > > > >>
> > > > >>
> > > > >> With these APIs, Kafka can be used for two broad classes of
> > > application:
> > > > >>
> > > > >> ** Building real-time streaming data pipelines that reliably get
> > data
> > > > >> between systems or applications.
> > > > >>
> > > > >> ** Building real-time streaming applications that transform or
> react
> > > > >> to the streams of data.
> > > > >>
> > > > >>
> > > > >> Apache Kafka is in use at large and small companies worldwide,
> > > including
> > > > >> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > > Rabobank,
> > > > >> Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > > >>
> > > > >> A big thank you for the following 202 contributors to this
> release!
> > > > >> (Please report an unintended omission)
> > > > >>
> > > > >> Aadithya Chandra, Abhijeet Kumar, Abhinav Dixit, Adrian Preston,
> > > Afshin
> > > > >> Moazami, Ahmed Najiub, Ahmed Sobeh, Akhilesh Chaganti, Almog
> Gavra,
> > > > Alok
> > > > >> Thatikunta, Alyssa Huang, Anatoly Popov, Andras Katona, Andrew
> > > > >> Schofield, Anna Sophie Blee-Goldman, Antoine Pourchet, Anton
> > Agestam,
> > > > >> Anton Liauchuk, Anuj Sharma, Apoorv Mittal, Arnout Engelen, Arpit
> > > > Goyal,
> > > > >> Artem Livshits, Ashwin Pankaj, Ayoub Omari, Bruno Cadonna, Calvin
> > Liu,
> > > > >> Cameron Redpath, charliecheng630, Cheng-Kai, Zhang, Cheryl
> Simmons,
> > > > Chia
> > > > >> Chuan Yu, Chia-Ping Tsai, ChickenchickenLove, Chris Egerton, Chris
> > > > >> Holland, Christo Lolov, Christopher Webb, Colin P. McCabe, Colt
> > > > McNealy,
> > > > >> cooper.ts...@suse.com, Vedarth Sharma, Crispin Bernier, Daan
> > Gerits,
> > > > >> David Arthur, David Jacot, David Mao, dengziming, Divij Vaidya,
> > > DL1231,
> > > > >> Dmitry Werner, Dongnuo Lyu, Drawxy, Dung Ha, Edoardo Comar, Eduwer
> > > > >> Camacaro, Emanuele Sabellico, Erik van Oosten, Eugene Mitskevich,
> > Fan
> > > > >> Yang, Federico Valeri, Fiore Mario Vitale, flashmouse, Florin
> > > Akermann,
> > > > >> Frederik Rouleau, Gantigmaa Selenge, Gaurav Narula, ghostspiders,
> > > > >> gongxuanzhang, Greg Harris, Gyeongwon 

Re: New release branch 3.9

2024-07-30 Thread Luke Chen
Hi Colin and all,

If KIP-853 can be completed in v3.9.0 in time (or with a little delay), I agree we
should try to keep v3.9.0 as the last release before v4.0.
This way, all Kafka ecosystem projects will have a clear (and
certain) picture of what will happen in Apache Kafka.

Hi Colin,
For KIP-950 (KAFKA-15132)
to allow disabling tiered storage at the topic level, the PR is under review
and we should be able to merge it within this week.
For KIP-1005 (KAFKA-15857
) to expose remote
storage related offsets in kafka-get-offsets.sh, this KIP was reverted in
v3.8.0 because of an MV issue. We'd like to add it back, and it can be completed
within this week.

These 2 KIPs are important features for tiered storage; we hope they can be
added to v3.9.0.

Thank you.
Luke



On Wed, Jul 31, 2024 at 7:31 AM Colin McCabe  wrote:

> Yeah, please go ahead. I know a lot of people are waiting for 4.0.
>
> best,
> Colin
>
>
> On Tue, Jul 30, 2024, at 16:05, Matthias J. Sax wrote:
> > Thanks for clarifying Colin. So my assumptions were actually correct.
> >
> > We have a lot of contributors waiting to pick up 4.0 tickets, and I'll
> > go ahead and tell them that we are ready and they can start to pick them
> up.
> >
> > Thanks.
> >
> >
> > -Matthias
> >
> > On 7/30/24 3:51 PM, Colin McCabe wrote:
> >> Hi Chia-Ping Tsai,
> >>
> >> If you can get them done this week then I think we can merge them into
> 3.9. If not, then let's wait until 4.0, please.
> >>
> >> best,
> >> Colin
> >>
> >>
> >> On Tue, Jul 30, 2024, at 09:07, Chia-Ping Tsai wrote:
> >>> hi Colin,
> >>>
> >>> Could you please consider adding
> >>> https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
> >>>
> >>> The issue is used to deprecate the formatters in core module. Also, it
> >>> implements the replacements for them.
> >>>
> >>> In order to follow the deprecation rules, it would be nice to have
> >>> KAFKA-1 in 3.9.0
> >>>
> >>> If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0
> when
> >>> they get merged to trunk.
> >>>
> >>> Best,
> >>> Chia-Ping
> >>>
> >>>
> >>> José Armando García Sancio  於
> 2024年7月30日 週二
> >>> 下午11:59寫道:
> >>>
>  Thanks Colin.
> 
>  For KIP-853 (KRaft Controller Membership Changes), we still have the
>  following features that are in progress.
> 
>  1. UpdateVoter RPC and request handling
>  
>  2. Storage tool changes for KIP-853
>  
>  3. kafka-metadata-quorum describe changes for KIP-853
>  
>  4. kafka-metadata-quorum add voter and remove voter changes
>  
>  5. Sending UpdateVoter request and response handling
>  
> 
>  Can we cherry pick them to the release branch 3.9.0 when they get
> merged to
>  trunk? They have a small impact as they shouldn't affect the rest of
> Kafka
>  and only affect the kraft controller membership change feature. I
> expected
>  them to get merged to the trunk branch in the coming days.
> 
>  Thanks,
> 
>  On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe 
> wrote:
> 
> > Hi Kafka developers and friends,
> >
> > As promised, we now have a release branch for the upcoming 3.9.0
> release.
> > Trunk has been bumped to 4.0.0-SNAPSHOT.
> >
> > I'll be going over the JIRAs to move every non-blocker from this
> release
>  to
> > the next release.
> >
> >  From this point, most changes should go to trunk.
> > *Blockers (existing and new that we discover while testing the
> release)
> > will be double-committed. *Please discuss with your reviewer whether
> your
> > PR should go to trunk or to trunk+release so they can merge
> accordingly.
> >
> > *Please help us test the release! *
> >
> > best,
> > Colin
> >
> 
> 
>  --
>  -José
> 
>


[jira] [Resolved] (KAFKA-16346) Fix flaky MetricsTest.testMetrics

2024-07-30 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16346.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Fix flaky MetricsTest.testMetrics
> -
>
> Key: KAFKA-16346
> URL: https://issues.apache.org/jira/browse/KAFKA-16346
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Minor
> Fix For: 4.0.0
>
>
> {code}
> Gradle Test Run :core:test > Gradle Test Executor 1119 > MetricsTest > 
> testMetrics(boolean) > testMetrics with systemRemoteStorageEnabled: false 
> FAILED
> org.opentest4j.AssertionFailedError: Broker metric not recorded correctly 
> for 
> kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request=Produce
>  value 0.0 ==> expected:  but was: 
> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> at 
> app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> at 
> app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63)
> at 
> app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36)
> at 
> app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:214)
> at 
> app//kafka.api.MetricsTest.verifyBrokerMessageConversionMetrics(MetricsTest.scala:314)
> at app//kafka.api.MetricsTest.testMetrics(MetricsTest.scala:110)
> {code}
> The value used to update metrics is calculated by Math.round, so it could be 
> zero if you have a good machine :)
> We should verify the `count` instead of the `value`, since it is more convincing and 
> more stable.
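
A minimal sketch of the proposed assertion change; the class, method, and variable 
names below are hypothetical and do not mirror the actual MetricsTest code.

{code:java}
import static org.junit.jupiter.api.Assertions.assertTrue;

public class MetricAssertionSketch {
    // metricCount / metricValue stand in for the values read from the broker metric.
    static void verifyRecorded(long metricCount, double metricValue) {
        // Old, flaky check: the value comes from Math.round and can legitimately be 0
        // on a fast machine even though a sample was recorded.
        //   assertTrue(metricValue > 0.0, "Broker metric not recorded correctly");

        // Proposed check: the sample count increments once per recorded measurement,
        // so it is non-zero whenever the metric was updated at all.
        assertTrue(metricCount > 0, "Broker metric not recorded correctly");
    }
}
{code}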



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12432) Fix AdminClient timeout handling in the presence of badly behaved brokers

2024-07-30 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-12432.

Fix Version/s: 3.0.0
   Resolution: Fixed

> Fix AdminClient timeout handling in the presence of badly behaved brokers
> -
>
> Key: KAFKA-12432
> URL: https://issues.apache.org/jira/browse/KAFKA-12432
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 3.0.0
>
>
> If NetworkClient allows us to create a connection to a node, but we can't 
> send a single request, AdminClient will hang forever (or until the operation 
> times out, at least) rather than retrying a different node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17223) Retrying the call after encountering UnsupportedVersionException will cause ConcurrentModificationException

2024-07-30 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17223:
--

 Summary: Retrying the call after encountering 
UnsupportedVersionException will cause ConcurrentModificationException
 Key: KAFKA-17223
 URL: https://issues.apache.org/jira/browse/KAFKA-17223
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


{code:java}
[2024-07-31 07:11:03,928] ERROR Uncaught exception in thread 
'kafka-admin-client-thread | adminclient-1': 
(org.apache.kafka.common.utils.KafkaThread:51)
java.util.ConcurrentModificationException
at 
java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:1013)
at java.base/java.util.ArrayList$Itr.remove(ArrayList.java:981)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.maybeDrainPendingCalls(KafkaAdminClient.java:1207)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1510)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1464)
at java.base/java.lang.Thread.run(Thread.java:840)
{code}

The steps producing the above error are shown below.

1. maybeDrainPendingCalls[0] encounters an error when calling 
`call.nodeProvider.provide();`[1]
2. `runnable.pendingCalls.add(this)`[2] adds the call back to `pendingCalls`
3. `pendingIter.remove();` tries to remove an item from the modified array list.

IMHO, there are two solutions:

1. add the call back to `newCalls` rather than `pendingCalls`. This approach is to 
revert a part of KAFKA-12432
2. collect the toRemove calls and then remove them after the while-loop.

[0] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L1206
[1] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L1219
[2] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L927
[3] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L1219
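
A generic sketch of solution 2 (the names are illustrative and do not mirror the 
KafkaAdminClient code): the list being iterated is never mutated inside the loop; 
the removal, and any re-queuing of the drained calls, happens only after the 
iteration finishes.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class DrainAfterLoopSketch {
    // Returns the drained calls so the caller can process (and possibly re-queue)
    // them after the iteration, instead of mutating pendingCalls mid-loop.
    static <T> List<T> drainMatching(List<T> pendingCalls, Predicate<T> readyToDrain) {
        List<T> toRemove = new ArrayList<>();
        for (T call : pendingCalls) {          // read-only pass over the list
            if (readyToDrain.test(call)) {
                toRemove.add(call);
            }
        }
        pendingCalls.removeAll(toRemove);      // single bulk mutation after the loop
        return toRemove;
    }
}
{code}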





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3153

2024-07-30 Thread Apache Jenkins Server
See 




Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
Yeah, please go ahead. I know a lot of people are waiting for 4.0.

best,
Colin


On Tue, Jul 30, 2024, at 16:05, Matthias J. Sax wrote:
> Thanks for clarifying Colin. So my assumptions were actually correct.
>
> We have a lot of contributors waiting to pick up 4.0 tickets, and I'll 
> go ahead and tell them that we are ready and they can start to pick them up.
>
> Thanks.
>
>
> -Matthias
>
> On 7/30/24 3:51 PM, Colin McCabe wrote:
>> Hi Chia-Ping Tsai,
>> 
>> If you can get them done this week then I think we can merge them into 3.9. 
>> If not, then let's wait until 4.0, please.
>> 
>> best,
>> Colin
>> 
>> 
>> On Tue, Jul 30, 2024, at 09:07, Chia-Ping Tsai wrote:
>>> hi Colin,
>>>
>>> Could you please consider adding
>>> https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
>>>
>>> The issue is used to deprecate the formatters in core module. Also, it
>>> implements the replacements for them.
>>>
>>> In order to follow the deprecation rules, it would be nice to have
>>> KAFKA-1 in 3.9.0
>>>
>>> If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0 when
>>> they get merged to trunk.
>>>
>>> Best,
>>> Chia-Ping
>>>
>>>
>>> José Armando García Sancio  於 2024年7月30日 週二
>>> 下午11:59寫道:
>>>
 Thanks Colin.

 For KIP-853 (KRaft Controller Membership Changes), we still have the
 following features that are in progress.

 1. UpdateVoter RPC and request handling
 
 2. Storage tool changes for KIP-853
 
 3. kafka-metadata-quorum describe changes for KIP-853
 
 4. kafka-metadata-quorum add voter and remove voter changes
 
 5. Sending UpdateVoter request and response handling
 

 Can we cherry pick them to the release branch 3.9.0 when they get merged to
 trunk? They have a small impact as they shouldn't affect the rest of Kafka
 and only affect the kraft controller membership change feature. I expected
 them to get merged to the trunk branch in the coming days.

 Thanks,

 On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:

> Hi Kafka developers and friends,
>
> As promised, we now have a release branch for the upcoming 3.9.0 release.
> Trunk has been bumped to 4.0.0-SNAPSHOT.
>
> I'll be going over the JIRAs to move every non-blocker from this release
 to
> the next release.
>
>  From this point, most changes should go to trunk.
> *Blockers (existing and new that we discover while testing the release)
> will be double-committed. *Please discuss with your reviewer whether your
> PR should go to trunk or to trunk+release so they can merge accordingly.
>
> *Please help us test the release! *
>
> best,
> Colin
>


 --
 -José



[jira] [Created] (KAFKA-17222) Remove the subclass of KafkaMetricsGroup

2024-07-30 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-17222:
--

 Summary: Remove the subclass of KafkaMetricsGroup
 Key: KAFKA-17222
 URL: https://issues.apache.org/jira/browse/KAFKA-17222
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: bboyleonp


There are subclasses of KafkaMetricsGroup which override `metricName` 
[0][1][2]. They are used to keep metrics compatibility. Now, KafkaMetricsGroup 
has a new constructor which can define the package and class name, so we 
don't need to override `metricName` anymore.


[0] 
https://github.com/apache/kafka/blob/9e06767ffa80b26791c3bff6bc9b10b6612ce7d2/core/src/main/scala/kafka/log/UnifiedLog.scala#L116
[1] 
https://github.com/apache/kafka/blob/9e06767ffa80b26791c3bff6bc9b10b6612ce7d2/storage/src/main/java/org/apache/kafka/storage/internals/log/LogSegment.java#L77
[2] 
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaBroker.scala#L109
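
A simplified sketch of the cleanup; MetricsGroup below is a stand-in and does not 
mirror the real KafkaMetricsGroup API. Instead of subclassing just to override how 
the metric name is built, the legacy package and class name are passed to the 
constructor and the subclass goes away.

{code:java}
public class MetricsGroupDemo {

    // Stand-in for a metrics group whose reported package/class can be set
    // explicitly via the constructor, so no subclass needs to override metricName().
    static class MetricsGroup {
        private final String packageName;
        private final String simpleName;

        MetricsGroup(String packageName, String simpleName) {
            this.packageName = packageName;
            this.simpleName = simpleName;
        }

        String metricName(String name) {
            return packageName + ":type=" + simpleName + ",name=" + name;
        }
    }

    public static void main(String[] args) {
        // Legacy metric names stay stable by passing the legacy package/class
        // explicitly instead of overriding metricName() in a subclass.
        MetricsGroup group = new MetricsGroup("kafka.log", "Log");
        System.out.println(group.metricName("Size")); // kafka.log:type=Log,name=Size
    }
}
{code}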



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: New release branch 3.9

2024-07-30 Thread Matthias J. Sax

Thanks for clarifying Colin. So my assumptions were actually correct.

We have a lot of contributors waiting to pick up 4.0 tickets, and I'll 
go ahead and tell them that we are ready and they can start to pick them up.


Thanks.


-Matthias

On 7/30/24 3:51 PM, Colin McCabe wrote:

Hi Chia-Ping Tsai,

If you can get them done this week then I think we can merge them into 3.9. If 
not, then let's wait until 4.0, please.

best,
Colin


On Tue, Jul 30, 2024, at 09:07, Chia-Ping Tsai wrote:

hi Colin,

Could you please consider adding
https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0

The issue is used to deprecate the formatters in core module. Also, it
implements the replacements for them.

In order to follow the deprecation rules, it would be nice to have
KAFKA-1 in 3.9.0

If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0 when
they get merged to trunk.

Best,
Chia-Ping


José Armando García Sancio  於 2024年7月30日 週二
下午11:59寫道:


Thanks Colin.

For KIP-853 (KRaft Controller Membership Changes), we still have the
following features that are in progress.

1. UpdateVoter RPC and request handling

2. Storage tool changes for KIP-853

3. kafka-metadata-quorum describe changes for KIP-853

4. kafka-metadata-quorum add voter and remove voter changes

5. Sending UpdateVoter request and response handling


Can we cherry pick them to the release branch 3.9.0 when they get merged to
trunk? They have a small impact as they shouldn't affect the rest of Kafka
and only affect the kraft controller membership change feature. I expected
them to get merged to the trunk branch in the coming days.

Thanks,

On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:


Hi Kafka developers and friends,

As promised, we now have a release branch for the upcoming 3.9.0 release.
Trunk has been bumped to 4.0.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release

to

the next release.

 From this point, most changes should go to trunk.
*Blockers (existing and new that we discover while testing the release)
will be double-committed. *Please discuss with your reviewer whether your
PR should go to trunk or to trunk+release so they can merge accordingly.

*Please help us test the release! *

best,
Colin




--
-José



Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
Hi Igor,

Correct. The whole point of 3.9 is to complete KIP-853. We had many email 
threads on this :)

As release manager I have discretion to delay the release until this is 
complete. And I will do that. But I do not think the delay will be more than a 
few days, based on what we're looking at now.

If you have a feature that you've been waiting for 4.0 for, then you are 
unblocked. Please feel free to add it to trunk.

best,
Colin


On Tue, Jul 30, 2024, at 14:13, Igor Soarez wrote:
> My understanding was that the reason for the shorter cycle
> to the 3.9 release was based on the assumption that KIP-1012
> would be ready soon, so we could get to 4.0 quicker.
>
> If we can't move to 4.0 sooner, what's to gain with an early 3.9?
>
> --
> Igor


Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
Hi Chia-Ping Tsai,

If you can get them done this week then I think we can merge them into 3.9. If 
not, then let's wait until 4.0, please.

best,
Colin


On Tue, Jul 30, 2024, at 09:07, Chia-Ping Tsai wrote:
> hi Colin,
>
> Could you please consider adding
> https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
>
> The issue is used to deprecate the formatters in core module. Also, it
> implements the replacements for them.
>
> In order to follow the deprecation rules, it would be nice to have
> KAFKA-1 in 3.9.0
>
> If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0 when
> they get merged to trunk.
>
> Best,
> Chia-Ping
>
>
> José Armando García Sancio  於 2024年7月30日 週二
> 下午11:59寫道:
>
>> Thanks Colin.
>>
>> For KIP-853 (KRaft Controller Membership Changes), we still have the
>> following features that are in progress.
>>
>> 1. UpdateVoter RPC and request handling
>> 
>> 2. Storage tool changes for KIP-853
>> 
>> 3. kafka-metadata-quorum describe changes for KIP-853
>> 
>> 4. kafka-metadata-quorum add voter and remove voter changes
>> 
>> 5. Sending UpdateVoter request and response handling
>> 
>>
>> Can we cherry pick them to the release branch 3.9.0 when they get merged to
>> trunk? They have a small impact as they shouldn't affect the rest of Kafka
>> and only affect the kraft controller membership change feature. I expected
>> them to get merged to the trunk branch in the coming days.
>>
>> Thanks,
>>
>> On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:
>>
>> > Hi Kafka developers and friends,
>> >
>> > As promised, we now have a release branch for the upcoming 3.9.0 release.
>> > Trunk has been bumped to 4.0.0-SNAPSHOT.
>> >
>> > I'll be going over the JIRAs to move every non-blocker from this release
>> to
>> > the next release.
>> >
>> > From this point, most changes should go to trunk.
>> > *Blockers (existing and new that we discover while testing the release)
>> > will be double-committed. *Please discuss with your reviewer whether your
>> > PR should go to trunk or to trunk+release so they can merge accordingly.
>> >
>> > *Please help us test the release! *
>> >
>> > best,
>> > Colin
>> >
>>
>>
>> --
>> -José
>>


Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
On Tue, Jul 30, 2024, at 15:22, Matthias J. Sax wrote:
> Thanks Greg. Overall, this makes sense to me.
>
> The only assumption on my side was, that we are actually pretty sure 
> that we will hit case (1) or (2)...
>
> And I actually also thought, that completing the required KRaft work 
> would only take a few more weeks, and that is also why we have a 3.9 
> release branch already...
>
> If there is so much uncertainty about finishing KRaft work, why did we 
> cut a 3.9 branch now, and have a dedicated release plan for it? Colin did 
> propose the release plan with the goal in mind to quickly do a release 
> after 3.8, which would contain the missing KRaft things.
>
> If we might only release 3.9 in October/November, the current release 
> plan with kip/feature/code freeze deadlines does not make sense to me, 
> and we should not have a 3.9 release branch, and trunk should stay on 
> 3.9-SNAPSHOT for the time being...
>

Hi Matthias,

There is no uncertainty. We have been working hard on KIP-853 and there are 
only 3 or 4 PRs left to go before feature completeness.

Please let's not create confusion (I realize, unintentionally)

best,
Colin


>
> -Matthias
>
> On 7/30/24 3:03 PM, Greg Harris wrote:
>> Hi all,
>> 
>> I'd like to clarify my understanding of the path forward, the one I voted
>> for in KIP-1012 and what I understood to be the consensus in the 3.8.0
>> release thread.
>> 
>> 1. If KIP-853 is feature-complete before October, Kafka 3.9 can be released
>> ASAP with KIP-853. There will be no 3.10 release, and 4.0 will follow 4
>> months after 3.9, no later than February.
>> 2. If KIP-853 is feature complete in October, Kafka 3.9 should be released
>> in October as a normal release, with KIP-853. There will be no 3.10
>> release, and 4.0 will follow 4 months after 3.9, in February.
>> 3. If KIP-853 is not feature complete in October, Kafka 3.9 should be
>> released in October as a normal release, without KIP-853. There will be a
>> 3.10 release that may or may not contain KIP-853 no later than February.
>> 
>> As we are not sure which path will be taken, the most conservative strategy
>> is to bump to 3.10, and only after we know we're in case 1 or 2, bump the
>> version to 4.0 and skip 3.10.
>> If we leave the version bump to 4.0 in place, and later discover that we
>> are in case 3, it will be very damaging for the project, causing either a
>> big release delay, confusion for users, or unaddressed bugs.
>> 
>> Thanks,
>> Greg
>> 
>> On Tue, Jul 30, 2024 at 2:14 PM Igor Soarez  wrote:
>> 
>>> My understanding was that the reason for the shorter cycle
>>> to the 3.9 release was based on the assumption that KIP-1012
>>> would be ready soon, so we could get to 4.0 quicker.
>>>
>>> If we can't move to 4.0 sooner, what's to gain with an early 3.9?
>>>
>>> --
>>> Igor
>>>
>>


Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
On Tue, Jul 30, 2024, at 15:35, Greg Harris wrote:
> Hi Matthias,
>
> I agree with you.
>
>> The only assumption on my side was, that we are actually pretty sure
>> that we will hit case (1) or (2)...
>
> I want this to be the case, but we were pretty sure that KIP-853 was going
> to be in 3.8 up until it wasn't ready. I believe we need to plan for the
> worst case, while being flexible to implement case (1) or (2) if the
> conditions are met.
>
>> we should not have a 3.9 release branch, and trunk should stay on
>> 3.9-SNAPSHOT for the time being...
>
> I agree. I think that selectively enforcing the current feature freeze
> deadline for other KIPs but not KIP-853 may unnecessarily delay those other
> KIPs 3-4 months.
> Our time-based release schedule's primary goal is to get features out in a
> timely manner, and it looks like this feature freeze could prevent that.

Hi Greg,

The release manager always has discretion to selectively enforce the feature 
freeze. This is the way Kafka releases work.

In this case, the whole purpose of the 3.9 release is to have KIP-853. We had 
several long threads on the mailing list about it. So it certainly would make 
no sense to ship 3.9 without KIP-853.

If you have a specific feature you want in 3.9, please let me know and I will 
see if we can make it in.

best,
Colin

>
> In that case, a straightforward revert would be the way forward:
> https://github.com/apache/kafka/pull/16737
>
> Thanks,
> Greg
>
> On Tue, Jul 30, 2024 at 3:22 PM Matthias J. Sax  wrote:
>
>> Thanks Greg. Overall, this makes sense to me.
>>
>> The only assumption on my side was, that we are actually pretty sure
>> that we will hit case (1) or (2)...
>>
>> And I actually also thought, that completing the required KRaft work
>> would only take a few more weeks, and that is also why we have a 3.9
>> release branch already...
>>
>> If there is so much uncertainty about finishing KRaft work, why did we
>> cut a 3.9 branch now, and have a dedicated release plan for it? Colin did
>> propose the release plan with the goal in mind to quickly do a release
>> after 3.8, which would contain the missing KRaft things.
>>
>> If we might only release 3.9 in October/November, the current release
>> plan with kip/feature/code freeze deadlines does not make sense to me,
>> and we should not have a 3.9 release branch, and trunk should stay on
>> 3.9-SNAPSHOT for the time being...
>>
>>
>> -Matthias
>>
>> On 7/30/24 3:03 PM, Greg Harris wrote:
>> > Hi all,
>> >
>> > I'd like to clarify my understanding of the path forward, the one I voted
>> > for in KIP-1012 and what I understood to be the consensus in the 3.8.0
>> > release thread.
>> >
>> > 1. If KIP-853 is feature-complete before October, Kafka 3.9 can be
>> released
>> > ASAP with KIP-853. There will be no 3.10 release, and 4.0 will follow 4
>> > months after 3.9, no later than February.
>> > 2. If KIP-853 is feature complete in October, Kafka 3.9 should be
>> released
>> > in October as a normal release, with KIP-853. There will be no 3.10
>> > release, and 4.0 will follow 4 months after 3.9, in February.
>> > 3. If KIP-853 is not feature complete in October, Kafka 3.9 should be
>> > released in October as a normal release, without KIP-853. There will be a
>> > 3.10 release that may or may not contain KIP-853 no later than February.
>> >
>> > As we are not sure which path will be taken, the most conservative
>> strategy
>> > is to bump to 3.10, and only after we know we're in case 1 or 2, bump the
>> > version to 4.0 and skip 3.10.
>> > If we leave the version bump to 4.0 in place, and later discover that we
>> > are in case 3, it will be very damaging for the project, causing either a
>> > big release delay, confusion for users, or unaddressed bugs.
>> >
>> > Thanks,
>> > Greg
>> >
>> > On Tue, Jul 30, 2024 at 2:14 PM Igor Soarez  wrote:
>> >
>> >> My understanding was that the reason for the shorter cycle
>> >> to the 3.9 release was based on the assumption that KIP-1012
>> >> would be ready soon, so we could get to 4.0 quicker.
>> >>
>> >> If we can't move to 4.0 sooner, what's to gain with an early 3.9?
>> >>
>> >> --
>> >> Igor
>> >>
>> >
>>


Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
On Tue, Jul 30, 2024, at 09:52, Matthias J. Sax wrote:
> Thanks for cutting the release branch.
>
> It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted to 
> follow up on this:
>
> We have a bunch of tickets that we can only ship with 4.0 release, and 
> these tickets were blocked so far. I wanted to get confirmation that we 
> will stick with 4.0 coming after 3.9, and that we can start to work on 
> these tickets? Or is there any reason why we should still hold off to 
> pick them up? We don't want to delay them unnecessary to make sure we 
> can them all into 4.0 release, but of course also don't want to work on 
> them prematurely (to avoid that we have to revert them after merging).
>
>
> -Matthias

Hi Matthias,

If you have something you want in 4.0, please feel free to add it to trunk. 
Trunk is 4.0.

best,
Colin

>
> On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:
>> hi Colin,
>> 
>> Could you please consider adding
>> https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
>> 
>> The issue is used to deprecate the formatters in core module. Also, it
>> implements the replacements for them.
>> 
>> In order to follow the deprecation rules, it would be nice to have
>> KAFKA-1 in 3.9.0
>> 
>> If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0 when
>> they get merged to trunk.
>> 
>> Best,
>> Chia-Ping
>> 
>> 
>> José Armando García Sancio  於 2024年7月30日 週二
>> 下午11:59寫道:
>> 
>>> Thanks Colin.
>>>
>>> For KIP-853 (KRaft Controller Membership Changes), we still have the
>>> following features that are in progress.
>>>
>>> 1. UpdateVoter RPC and request handling
>>> 
>>> 2. Storage tool changes for KIP-853
>>> 
>>> 3. kafka-metadata-quorum describe changes for KIP-853
>>> 
>>> 4. kafka-metadata-quorum add voter and remove voter changes
>>> 
>>> 5. Sending UpdateVoter request and response handling
>>> 
>>>
>>> Can we cherry pick them to the release branch 3.9.0 when they get merged to
>>> trunk? They have a small impact as they shouldn't affect the rest of Kafka
>>> and only affect the kraft controller membership change feature. I expected
>>> them to get merged to the trunk branch in the coming days.
>>>
>>> Thanks,
>>>
>>> On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:
>>>
 Hi Kafka developers and friends,

 As promised, we now have a release branch for the upcoming 3.9.0 release.
 Trunk has been bumped to 4.0.0-SNAPSHOT.

 I'll be going over the JIRAs to move every non-blocker from this release
>>> to
 the next release.

  From this point, most changes should go to trunk.
 *Blockers (existing and new that we discover while testing the release)
 will be double-committed. *Please discuss with your reviewer whether your
 PR should go to trunk or to trunk+release so they can merge accordingly.

 *Please help us test the release! *

 best,
 Colin

>>>
>>>
>>> --
>>> -José
>>>
>>


Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
On Tue, Jul 30, 2024, at 08:59, José Armando García Sancio wrote:
> Thanks Colin.
>
> For KIP-853 (KRaft Controller Membership Changes), we still have the
> following features that are in progress.
>
> 1. UpdateVoter RPC and request handling
> 
> 2. Storage tool changes for KIP-853
> 
> 3. kafka-metadata-quorum describe changes for KIP-853
> 
> 4. kafka-metadata-quorum add voter and remove voter changes
> 
> 5. Sending UpdateVoter request and response handling
> 
>
> Can we cherry pick them to the release branch 3.9.0 when they get merged to
> trunk? They have a small impact as they shouldn't affect the rest of Kafka
> and only affect the kraft controller membership change feature. I expected
> them to get merged to the trunk branch in the coming days.
>
> Thanks,

Hi José,

Yes, we will cherry-pick those to 3.9 when they become available.

Looking forward to testing out KIP-853 this week.

best,
Colin


>
> On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:
>
>> Hi Kafka developers and friends,
>>
>> As promised, we now have a release branch for the upcoming 3.9.0 release.
>> Trunk has been bumped to 4.0.0-SNAPSHOT.
>>
>> I'll be going over the JIRAs to move every non-blocker from this release to
>> the next release.
>>
>> From this point, most changes should go to trunk.
>> *Blockers (existing and new that we discover while testing the release)
>> will be double-committed. *Please discuss with your reviewer whether your
>> PR should go to trunk or to trunk+release so they can merge accordingly.
>>
>> *Please help us test the release! *
>>
>> best,
>> Colin
>>
>
>
> -- 
> -José


Re: New release branch 3.9

2024-07-30 Thread Colin McCabe
Hi all,

KIP-853 is shipping in 3.9. If we have to delay 3.9 to accomplish this, we 
will, but that seems very unlikely at this point. We are mostly on schedule so 
far.

Trunk is 4.0.

best,
Colin


On Tue, Jul 30, 2024, at 15:35, Greg Harris wrote:
> Hi Matthias,
>
> I agree with you.
>
>> The only assumption on my side was, that we are actually pretty sure
>> that we will hit case (1) or (2)...
>
> I want this to be the case, but we were pretty sure that KIP-853 was going
> to be in 3.8 up until it wasn't ready. I believe we need to plan for the
> worst case, while being flexible to implement case (1) or (2) if the
> conditions are met.
>
>> we should not have a 3.9 release branch, and trunk should stay on
>> 3.9-SNAPSHOT for the time being...
>
> I agree. I think that selectively enforcing the current feature freeze
> deadline for other KIPs but not KIP-853 may unnecessarily delay those other
> KIPs 3-4 months.
> Our time-based release schedule's primary goal is to get features out in a
> timely manner, and it looks like this feature freeze could prevent that.
>
> In that case, a straightforward revert would be the way forward:
> https://github.com/apache/kafka/pull/16737
>
> Thanks,
> Greg
>
> On Tue, Jul 30, 2024 at 3:22 PM Matthias J. Sax  wrote:
>
>> Thanks Greg. Overall, this makes sense to me.
>>
>> The only assumption on my side was, that we are actually pretty sure
>> that we will hit case (1) or (2)...
>>
>> And I actually also thought, that completing the required KRaft work
>> would only take a few more weeks, and that is also why we have a 3.9
>> release branch already...
>>
>> If there is so much uncertainty about finishing KRaft work, why did we
>> cut a 3.9 branch now, and have a dedicated release plan for it? Colin did
>> propose the release plan with the goal in mind to quickly do a release
>> after 3.8, which would contain the missing KRaft things.
>>
>> If we might only release 3.9 in October/November, the current release
>> plan with kip/feature/code freeze deadlines does not make sense to me,
>> and we should not have a 3.9 release branch, and trunk should stay on
>> 3.9-SNAPSHOT for the time being...
>>
>>
>> -Matthias
>>
>> On 7/30/24 3:03 PM, Greg Harris wrote:
>> > Hi all,
>> >
>> > I'd like to clarify my understanding of the path forward, the one I voted
>> > for in KIP-1012 and what I understood to be the consensus in the 3.8.0
>> > release thread.
>> >
>> > 1. If KIP-853 is feature-complete before October, Kafka 3.9 can be
>> released
>> > ASAP with KIP-853. There will be no 3.10 release, and 4.0 will follow 4
>> > months after 3.9, no later than February.
>> > 2. If KIP-853 is feature complete in October, Kafka 3.9 should be
>> released
>> > in October as a normal release, with KIP-853. There will be no 3.10
>> > release, and 4.0 will follow 4 months after 3.9, in February.
>> > 3. If KIP-853 is not feature complete in October, Kafka 3.9 should be
>> > released in October as a normal release, without KIP-853. There will be a
>> > 3.10 release that may or may not contain KIP-853 no later than February.
>> >
>> > As we are not sure which path will be taken, the most conservative
>> strategy
>> > is to bump to 3.10, and only after we know we're in case 1 or 2, bump the
>> > version to 4.0 and skip 3.10.
>> > If we leave the version bump to 4.0 in place, and later discover that we
>> > are in case 3, it will be very damaging for the project, causing either a
>> > big release delay, confusion for users, or unaddressed bugs.
>> >
>> > Thanks,
>> > Greg
>> >
>> > On Tue, Jul 30, 2024 at 2:14 PM Igor Soarez  wrote:
>> >
>> >> My understanding was that the reason for the shorter cycle
>> >> to the 3.9 release was based on the assumption that KIP-1012
>> >> would be ready soon, so we could get to 4.0 quicker.
>> >>
>> >> If we can't move to 4.0 sooner, what's to gain with an early 3.9?
>> >>
>> >> --
>> >> Igor
>> >>
>> >
>>


Re: New release branch 3.9

2024-07-30 Thread Greg Harris
Hi Matthias,

I agree with you.

> The only assumption on my side was, that we are actually pretty sure
> that we will hit case (1) or (2)...

I want this to be the case, but we were pretty sure that KIP-853 was going
to be in 3.8 up until it wasn't ready. I believe we need to plan for the
worst case, while being flexible to implement case (1) or (2) if the
conditions are met.

> we should not have a 3.9 release branch, and trunk should stay on
> 3.9-SNAPSHOT for the time being...

I agree. I think that selectively enforcing the current feature freeze
deadline for other KIPs but not KIP-853 may unnecessarily delay those other
KIPs 3-4 months.
Our time-based release schedule's primary goal is to get features out in a
timely manner, and it looks like this feature freeze could prevent that.

In that case, a straightforward revert would be the way forward:
https://github.com/apache/kafka/pull/16737

Thanks,
Greg

On Tue, Jul 30, 2024 at 3:22 PM Matthias J. Sax  wrote:

> Thanks Greg. Overall, this makes sense to me.
>
> The only assumption on my side was, that we are actually pretty sure
> that we will hit case (1) or (2)...
>
> And I actually also thought, that completing the required KRaft work
> would only take a few more weeks, and that is also why we have a 3.9
> release branch already...
>
> If there is so much uncertainty about finishing KRaft work, why did we
> cut a 3.9 branch now, and have a dedicate release plan for it? Colin did
> propose the release plan with the goal in mind to quickly do a release
> after 3.8, which would contain the missing KRaft things.
>
> If we might only release 3.9 in October/November, the current release
> plan with kip/feature/code freeze deadlines does not make sense to me,
> and we should not have a 3.9 release branch, and trunk should stay on
> 3.9-SNAPSHOT for the time being...
>
>
> -Matthias
>
> On 7/30/24 3:03 PM, Greg Harris wrote:
> > Hi all,
> >
> > I'd like to clarify my understanding of the path forward, the one I voted
> > for in KIP-1012 and what I understood to be the consensus in the 3.8.0
> > release thread.
> >
> > 1. If KIP-853 is feature-complete before October, Kafka 3.9 can be
> released
> > ASAP with KIP-853. There will be no 3.10 release, and 4.0 will follow 4
> > months after 3.9, no later than February.
> > 2. If KIP-853 is feature complete in October, Kafka 3.9 should be
> released
> > in October as a normal release, with KIP-853. There will be no 3.10
> > release, and 4.0 will follow 4 months after 3.9, in February.
> > 3. If KIP-853 is not feature complete in October, Kafka 3.9 should be
> > released in October as a normal release, without KIP-853. There will be a
> > 3.10 release that may or may not contain KIP-853 no later than February.
> >
> > As we are not sure which path will be taken, the most conservative
> strategy
> > is to bump to 3.10, and only after we know we're in case 1 or 2, bump the
> > version to 4.0 and skip 3.10.
> > If we leave the version bump to 4.0 in place, and later discover that we
> > are in case 3, it will be very damaging for the project, causing either a
> > big release delay, confusion for users, or unaddressed bugs.
> >
> > Thanks,
> > Greg
> >
> > On Tue, Jul 30, 2024 at 2:14 PM Igor Soarez  wrote:
> >
> >> My understanding was that the reason for the shorter cycle
> >> to the 3.9 release was based on the assumption that KIP-1012
> >> would be ready soon, so we could get to 4.0 quicker.
> >>
> >> If we can't move to 4.0 sooner, what's to gain with an early 3.9?
> >>
> >> --
> >> Igor
> >>
> >
>


Re: New release branch 3.9

2024-07-30 Thread Matthias J. Sax

Thanks Greg. Overall, this makes sense to me.

The only assumption on my side was, that we are actually pretty sure 
that we will hit case (1) or (2)...


And I actually also thought, that completing the required KRaft work 
would only take a few more weeks, and that is also why we have a 3.9 
release branch already...


If there is so much uncertainty about finishing KRaft work, why did we 
cut a 3.9 branch now, and have a dedicated release plan for it? Colin did 
propose the release plan with the goal in mind to quickly do a release 
after 3.8, which would contain the missing KRaft things.


If we might only release 3.9 in October/November, the current release 
plan with kip/feature/code freeze deadlines does not make sense to me, 
and we should not have a 3.9 release branch, and trunk should stay on 
3.9-SNAPSHOT for the time being...



-Matthias

On 7/30/24 3:03 PM, Greg Harris wrote:

Hi all,

I'd like to clarify my understanding of the path forward, the one I voted
for in KIP-1012 and what I understood to be the consensus in the 3.8.0
release thread.

1. If KIP-853 is feature-complete before October, Kafka 3.9 can be released
ASAP with KIP-853. There will be no 3.10 release, and 4.0 will follow 4
months after 3.9, no later than February.
2. If KIP-853 is feature complete in October, Kafka 3.9 should be released
in October as a normal release, with KIP-853. There will be no 3.10
release, and 4.0 will follow 4 months after 3.9, in February.
3. If KIP-853 is not feature complete in October, Kafka 3.9 should be
released in October as a normal release, without KIP-853. There will be a
3.10 release that may or may not contain KIP-853 no later than February.

As we are not sure which path will be taken, the most conservative strategy
is to bump to 3.10, and only after we know we're in case 1 or 2, bump the
version to 4.0 and skip 3.10.
If we leave the version bump to 4.0 in place, and later discover that we
are in case 3, it will be very damaging for the project, causing either a
big release delay, confusion for users, or unaddressed bugs.

Thanks,
Greg

On Tue, Jul 30, 2024 at 2:14 PM Igor Soarez  wrote:


My understanding was that the reason for the shorter cycle
to the 3.9 release was based on the assumption that KIP-1012
would be ready soon, so we could get to 4.0 quicker.

If we can't move to 4.0 sooner, what's to gain with an early 3.9?

--
Igor





Re: [kafka-clients] [ANNOUNCE] Apache Kafka 3.8.0

2024-07-30 Thread Greg Harris
Thank you to all of the Contributors, Committers, and our release manager
Josep!

Greg

On Tue, Jul 30, 2024 at 1:34 PM Justine Olshan 
wrote:

> Thanks Josep for your hard work! And to everyone who contributed to this
> release.
>
> Justine
>
> On Tue, Jul 30, 2024 at 8:04 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > Thanks for running the release!
> >
> > On Tue, Jul 30, 2024 at 4:33 AM Colin McCabe  wrote:
> >
> > > +1. Thanks, Josep!
> > >
> > > Colin
> > >
> > > On Mon, Jul 29, 2024, at 10:32, Chris Egerton wrote:
> > > > Thanks for running the release, Josep!
> > > >
> > > >
> > > > On Mon, Jul 29, 2024, 13:31 'Josep Prat' via kafka-clients <
> > > kafka-clie...@googlegroups.com> wrote:
> > > >> The Apache Kafka community is pleased to announce the release for
> > > Apache
> > > >> Kafka 3.8.0
> > > >>
> > > >> This is a minor release and it includes fixes and improvements from
> > 456
> > > >> JIRAs.
> > > >>
> > > >> All of the changes in this release can be found in the release
> notes:
> > > >> https://www.apache.org/dist/kafka/3.8.0/RELEASE_NOTES.html
> > > >>
> > > >> An overview of the release can be found in our announcement blog
> post:
> > > >> https://kafka.apache.org/blog#apache_kafka_380_release_announcement
> > > >>
> > > >> You can download the source and binary release (Scala 2.12 and Scala
> > > >> 2.13) from:
> > > >> https://kafka.apache.org/downloads#3.8.0
> > > >>
> > > >>
> > >
> >
> ---
> > > >>
> > > >>
> > > >> Apache Kafka is a distributed streaming platform with four core
> APIs:
> > > >>
> > > >>
> > > >> ** The Producer API allows an application to publish a stream of
> > > records to
> > > >> one or more Kafka topics.
> > > >>
> > > >> ** The Consumer API allows an application to subscribe to one or
> more
> > > >> topics and process the stream of records produced to them.
> > > >>
> > > >> ** The Streams API allows an application to act as a stream
> processor,
> > > >> consuming an input stream from one or more topics and producing an
> > > >> output stream to one or more output topics, effectively transforming
> > the
> > > >> input streams to output streams.
> > > >>
> > > >> ** The Connector API allows building and running reusable producers
> or
> > > >> consumers that connect Kafka topics to existing applications or data
> > > >> systems. For example, a connector to a relational database might
> > > >> capture every change to a table.
> > > >>
> > > >>
> > > >> With these APIs, Kafka can be used for two broad classes of
> > application:
> > > >>
> > > >> ** Building real-time streaming data pipelines that reliably get
> data
> > > >> between systems or applications.
> > > >>
> > > >> ** Building real-time streaming applications that transform or react
> > > >> to the streams of data.
> > > >>
> > > >>
> > > >> Apache Kafka is in use at large and small companies worldwide,
> > including
> > > >> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > Rabobank,
> > > >> Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > >>
> > > >> A big thank you to the following 202 contributors to this release!
> > > >> (Please report an unintended omission)
> > > >>
> > > >> Aadithya Chandra, Abhijeet Kumar, Abhinav Dixit, Adrian Preston,
> > Afshin
> > > >> Moazami, Ahmed Najiub, Ahmed Sobeh, Akhilesh Chaganti, Almog Gavra,
> > > Alok
> > > >> Thatikunta, Alyssa Huang, Anatoly Popov, Andras Katona, Andrew
> > > >> Schofield, Anna Sophie Blee-Goldman, Antoine Pourchet, Anton
> Agestam,
> > > >> Anton Liauchuk, Anuj Sharma, Apoorv Mittal, Arnout Engelen, Arpit
> > > Goyal,
> > > >> Artem Livshits, Ashwin Pankaj, Ayoub Omari, Bruno Cadonna, Calvin
> Liu,
> > > >> Cameron Redpath, charliecheng630, Cheng-Kai, Zhang, Cheryl Simmons,
> > > Chia
> > > >> Chuan Yu, Chia-Ping Tsai, ChickenchickenLove, Chris Egerton, Chris
> > > >> Holland, Christo Lolov, Christopher Webb, Colin P. McCabe, Colt
> > > McNealy,
> > > >> cooper.ts...@suse.com, Vedarth Sharma, Crispin Bernier, Daan
> Gerits,
> > > >> David Arthur, David Jacot, David Mao, dengziming, Divij Vaidya,
> > DL1231,
> > > >> Dmitry Werner, Dongnuo Lyu, Drawxy, Dung Ha, Edoardo Comar, Eduwer
> > > >> Camacaro, Emanuele Sabellico, Erik van Oosten, Eugene Mitskevich,
> Fan
> > > >> Yang, Federico Valeri, Fiore Mario Vitale, flashmouse, Florin
> > Akermann,
> > > >> Frederik Rouleau, Gantigmaa Selenge, Gaurav Narula, ghostspiders,
> > > >> gongxuanzhang, Greg Harris, Gyeongwon Do, Hailey Ni, Hao Li, Hector
> > > >> Geraldino, highluck, hudeqi, Hy (하이), IBeyondy, Iblis Lin, Igor
> > Soarez,
> > > >> ilyazr, Ismael Juma, Ivan Vaskevych, Ivan Yurchenko, James Faulkner,
> > > >> Jamie Holmes, Jason Gustafson, Jeff Kim, jiangyuan, Jim Galasyn,
> > > Jinyong
> > > >> Choi, Joel Hamill, John Doe zh2725284...@gmail.com, John Roesler,
> > John
> > > >> Yu, Johnny Hsu, Jorge Esteban Quilcate 

Re: [VOTE] KIP-1022 Formatting and Updating Features

2024-07-30 Thread Jun Rao
Thanks for updating the KIP, Justine.

Jun

On Tue, Jul 30, 2024 at 1:37 PM Justine Olshan 
wrote:

> I added this update to the end of the section Colin added.
>
> Justine
>
> On Tue, Jul 30, 2024 at 11:01 AM Jun Rao  wrote:
>
> > Hi, Colin,
> >
> > Thanks for the update. We also excluded supported features with
> maxVersion
> > of 0 from both ApiVersionResponse and BrokerRegistrationRequest, and
> > excluded finalized features with version of 0 from ApiVersionResponse. It
> > would be useful to document those too.
> >
> > Jun
> >
> > On Mon, Jul 29, 2024 at 9:25 PM Colin McCabe  wrote:
> >
> > > Hi Jun,
> > >
> > > Just to close the loop on this... the KIP now mentions both
> > > ApiVersionResponse and BrokerRegistrationRequest.
> > >
> > > best,
> > > Colin
> > >
> > > On Mon, Jul 8, 2024, at 14:57, Jun Rao wrote:
> > > > Hi, Colin,
> > > >
> > > > Thanks for the update. Since the PR also introduces a new version of
> > > > BrokerRegistrationRequest, could we include that change in the KIP
> > update
> > > > too?
> > > >
> > > > Jun
> > > >
> > > > On Mon, Jul 8, 2024 at 11:08 AM Colin McCabe 
> > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> I've updated the approach in
> > https://github.com/apache/kafka/pull/16421
> > > >> so that we change the minVersion=0 to minVersion=1 in older
> > > >> ApiVersionsResponses.
> > > >>
> > > >> I hope we can get this in soon and unblock the features that are
> > waiting
> > > >> for it!
> > > >>
> > > >> best,
> > > >> Colin
> > > >>
> > > >> On Wed, Jul 3, 2024, at 10:55, Jun Rao wrote:
> > > >> > Hi, David,
> > > >> >
> > > >> > Thanks for the reply. In the common case, there is no difference
> > > between
> > > >> > omitting just v0 of the feature or omitting the feature
> completely.
> > > It's
> > > >> > just when an old client is used, there is some difference. To me,
> > > >> > omitting just v0 of the feature seems slightly better for the old
> > > client.
> > > >> >
> > > >> > Jun
> > > >> >
> > > >> > On Wed, Jul 3, 2024 at 9:45 AM David Jacot
> > > 
> > > >> > wrote:
> > > >> >
> > > >> >> Hi Jun, Colin,
> > > >> >>
> > > >> >> Thanks for your replies.
> > > >> >>
> > > >> >> If the FeatureCommand relies on version 0 too, my suggestion does
> > not
> > > >> work.
> > > >> >> Omitting the features for old clients as suggested by Colin seems
> > > fine
> > > >> for
> > > >> >> me. In practice, administrators will usually use a version of
> > > >> >> FeatureCommand matching the cluster version so the impact is not
> > too
> > > bad
> > > >> >> knowing that the first features will be introduced from 3.9 on.
> > > >> >>
> > > >> >> Best,
> > > >> >> David
> > > >> >>
> > > >> >> On Tue, Jul 2, 2024 at 2:15 AM Colin McCabe 
> > > wrote:
> > > >> >>
> > > >> >> > Hi David,
> > > >> >> >
> > > >> >> > In the ApiVersionsResponse, we really don't have an easy way of
> > > >> mapping
> > > >> >> > finalizedVersion = 1 to "off" in older releases such as 3.7.0.
> > For
> > > >> >> example,
> > > >> >> > if a 3.9.0 broker advertises that it has finalized
> group.version
> > =
> > > 1,
> > > >> >> that
> > > >> >> > will be treated by 3.7.0 as a brand new feature, not as
> "KIP-848
> > is
> > > >> off."
> > > >> >> > However, I suppose we could work around this by not setting a
> > > >> >> > finalizedVersion at all for group.version (or any other
> feature)
> > if
> > > >> its
> > > >> >> > finalized level was 1. We could also work around the "deletion
> =
> > > set
> > > >> to
> > > >> >> 0"
> > > >> >> > issue on the server side. The server can translate requests to
> > set
> > > the
> > > >> >> > finalized level to 0, into requests to set it to 1.
> > > >> >> >
> > > >> >> > So maybe this solution is worth considering, although it's
> > > >> unfortunate to
> > > >> >> > lose 0. I suppose we'd have to special case metadata.version
> > being
> > > >> set to
> > > >> >> > 1, since that was NOT equivalent to it being "off"
> > > >> >> >
> > > >> >> > best,
> > > >> >> > Colin
> > > >> >> >
> > > >> >> >
> > > >> >> > On Mon, Jul 1, 2024, at 10:11, Jun Rao wrote:
> > > >> >> > > Hi, David,
> > > >> >> > >
> > > >> >> > > Yes, that's another option. It probably has its own
> challenges.
> > > For
> > > >> >> > > example, the FeatureCommand tool currently treats disabling a
> > > >> feature
> > > >> >> as
> > > >> >> > > setting the version to 0. It would be useful to get Jose's
> > > opinion
> > > >> on
> > > >> >> > this
> > > >> >> > > since he introduced version 0 in the kraft.version feature.
> > > >> >> > >
> > > >> >> > > Thanks,
> > > >> >> > >
> > > >> >> > > Jun
> > > >> >> > >
> > > >> >> > > On Sun, Jun 30, 2024 at 11:48 PM David Jacot
> > > >> >>  > > >> >> > >
> > > >> >> > > wrote:
> > > >> >> > >
> > > >> >> > >> Hi Jun, Colin,
> > > >> >> > >>
> > > >> >> > >> Have we considered sticking with the range going from
> version
> > 1
> > > to
> > > >> N
> > > >> >> > where
> > > >> >> > >> version 1 would be the equivalent of "disabled"? In the
> > > >> 
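
To make the version-0 handling debated above easier to follow, here is a schematic sketch of the two ideas under discussion: leaving a feature whose finalized level is 0 out of the ApiVersionsResponse sent to old clients, and interpreting a request to set a feature to level 0 as disabling it. The class and the maps below are illustrative stand-ins, not actual broker code.

    import java.util.HashMap;
    import java.util.Map;

    public class FeatureVersionZeroHandling {

        // Idea 1: when answering an old client, leave out any feature whose
        // finalized level is 0, since level 0 simply means "feature is off".
        static Map<String, Short> finalizedFeaturesForResponse(Map<String, Short> finalized) {
            Map<String, Short> visible = new HashMap<>();
            for (Map.Entry<String, Short> entry : finalized.entrySet()) {
                if (entry.getValue() > 0) {
                    visible.put(entry.getKey(), entry.getValue());
                }
            }
            return visible;
        }

        // Idea 2: interpret a request to set a feature's finalized level to 0 as
        // disabling the feature, modeled here as removing it from the finalized map,
        // so that level 0 never has to be advertised on the wire.
        static void applyUpdate(Map<String, Short> finalized, String feature, short requestedLevel) {
            if (requestedLevel == 0) {
                finalized.remove(feature);
            } else {
                finalized.put(feature, requestedLevel);
            }
        }
    }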

Re: New release branch 3.9

2024-07-30 Thread Greg Harris
Hi all,

I'd like to clarify my understanding of the path forward, the one I voted
for in KIP-1012 and what I understood to be the consensus in the 3.8.0
release thread.

1. If KIP-853 is feature-complete before October, Kafka 3.9 can be released
ASAP with KIP-853. There will be no 3.10 release, and 4.0 will follow 4
months after 3.9, no later than February.
2. If KIP-853 is feature complete in October, Kafka 3.9 should be released
in October as a normal release, with KIP-853. There will be no 3.10
release, and 4.0 will follow 4 months after 3.9, in February.
3. If KIP-853 is not feature complete in October, Kafka 3.9 should be
released in October as a normal release, without KIP-853. There will be a
3.10 release that may or may not contain KIP-853 no later than February.

As we are not sure which path will be taken, the most conservative strategy
is to bump to 3.10, and only after we know we're in case 1 or 2, bump the
version to 4.0 and skip 3.10.
If we leave the version bump to 4.0 in place, and later discover that we
are in case 3, it will be very damaging for the project, causing either a
big release delay, confusion for users, or unaddressed bugs.

Thanks,
Greg

On Tue, Jul 30, 2024 at 2:14 PM Igor Soarez  wrote:

> My understanding was that the reason for the shorter cycle
> to the 3.9 release was based on the assumption that KIP-1012
> would be ready soon, so we could get to 4.0 quicker.
>
> If we can't move to 4.0 sooner, what's to gain with an early 3.9?
>
> --
> Igor
>


Re: New release branch 3.9

2024-07-30 Thread Igor Soarez
My understanding was that the reason for the shorter cycle
to the 3.9 release was based on the assumption that KIP-1012
would be ready soon, so we could get to 4.0 quicker.

If we can't move to 4.0 sooner, what's to gain with an early 3.9?

--
Igor


Re: [VOTE] KIP-1022 Formatting and Updating Features

2024-07-30 Thread Justine Olshan
I added this update to the end of the section Colin added.

Justine

On Tue, Jul 30, 2024 at 11:01 AM Jun Rao  wrote:

> Hi, Colin,
>
> Thanks for the update. We also excluded supported features with maxVersion
> of 0 from both ApiVersionResponse and BrokerRegistrationRequest, and
> excluded finalized features with version of 0 from ApiVersionResponse. It
> would be useful to document those too.
>
> Jun
>
> On Mon, Jul 29, 2024 at 9:25 PM Colin McCabe  wrote:
>
> > Hi Jun,
> >
> > Just to close the loop on this... the KIP now mentions both
> > ApiVersionResponse and BrokerRegistrationRequest.
> >
> > best,
> > Colin
> >
> > On Mon, Jul 8, 2024, at 14:57, Jun Rao wrote:
> > > Hi, Colin,
> > >
> > > Thanks for the update. Since the PR also introduces a new version of
> > > BrokerRegistrationRequest, could we include that change in the KIP
> update
> > > too?
> > >
> > > Jun
> > >
> > > On Mon, Jul 8, 2024 at 11:08 AM Colin McCabe 
> wrote:
> > >
> > >> Hi all,
> > >>
> > >> I've updated the approach in
> https://github.com/apache/kafka/pull/16421
> > >> so that we change the minVersion=0 to minVersion=1 in older
> > >> ApiVersionsResponses.
> > >>
> > >> I hope we can get this in soon and unblock the features that are
> waiting
> > >> for it!
> > >>
> > >> best,
> > >> Colin
> > >>
> > >> On Wed, Jul 3, 2024, at 10:55, Jun Rao wrote:
> > >> > Hi, David,
> > >> >
> > >> > Thanks for the reply. In the common case, there is no difference
> > between
> > >> > omitting just v0 of the feature or omitting the feature completely.
> > It's
> > >> > just when an old client is used, there is some difference. To me,
> > >> > omitting just v0 of the feature seems slightly better for the old
> > client.
> > >> >
> > >> > Jun
> > >> >
> > >> > On Wed, Jul 3, 2024 at 9:45 AM David Jacot
> > 
> > >> > wrote:
> > >> >
> > >> >> Hi Jun, Colin,
> > >> >>
> > >> >> Thanks for your replies.
> > >> >>
> > >> >> If the FeatureCommand relies on version 0 too, my suggestion does
> not
> > >> work.
> > >> >> Omitting the features for old clients as suggested by Colin seems
> > fine
> > >> for
> > >> >> me. In practice, administrators will usually use a version of
> > >> >> FeatureCommand matching the cluster version so the impact is not
> too
> > bad
> > >> >> knowing that the first features will be introduced from 3.9 on.
> > >> >>
> > >> >> Best,
> > >> >> David
> > >> >>
> > >> >> On Tue, Jul 2, 2024 at 2:15 AM Colin McCabe 
> > wrote:
> > >> >>
> > >> >> > Hi David,
> > >> >> >
> > >> >> > In the ApiVersionsResponse, we really don't have an easy way of
> > >> mapping
> > >> >> > finalizedVersion = 1 to "off" in older releases such as 3.7.0.
> For
> > >> >> example,
> > >> >> > if a 3.9.0 broker advertises that it has finalized group.version
> =
> > 1,
> > >> >> that
> > >> >> > will be treated by 3.7.0 as a brand new feature, not as "KIP-848
> is
> > >> off."
> > >> >> > However, I suppose we could work around this by not setting a
> > >> >> > finalizedVersion at all for group.version (or any other feature)
> if
> > >> its
> > >> >> > finalized level was 1. We could also work around the "deletion =
> > set
> > >> to
> > >> >> 0"
> > >> >> > issue on the server side. The server can translate requests to
> set
> > the
> > >> >> > finalized level to 0, into requests to set it to 1.
> > >> >> >
> > >> >> > So maybe this solution is worth considering, although it's
> > >> unfortunate to
> > >> >> > lose 0. I suppose we'd have to special case metadata.version
> being
> > >> set to
> > >> >> > 1, since that was NOT equivalent to it being "off"
> > >> >> >
> > >> >> > best,
> > >> >> > Colin
> > >> >> >
> > >> >> >
> > >> >> > On Mon, Jul 1, 2024, at 10:11, Jun Rao wrote:
> > >> >> > > Hi, David,
> > >> >> > >
> > >> >> > > Yes, that's another option. It probably has its own challenges.
> > For
> > >> >> > > example, the FeatureCommand tool currently treats disabling a
> > >> feature
> > >> >> as
> > >> >> > > setting the version to 0. It would be useful to get Jose's
> > opinion
> > >> on
> > >> >> > this
> > >> >> > > since he introduced version 0 in the kraft.version feature.
> > >> >> > >
> > >> >> > > Thanks,
> > >> >> > >
> > >> >> > > Jun
> > >> >> > >
> > >> >> > > On Sun, Jun 30, 2024 at 11:48 PM David Jacot
> > >> >>  > >> >> > >
> > >> >> > > wrote:
> > >> >> > >
> > >> >> > >> Hi Jun, Colin,
> > >> >> > >>
> > >> >> > >> Have we considered sticking with the range going from version
> 1
> > to
> > >> N
> > >> >> > where
> > >> >> > >> version 1 would be the equivalent of "disabled"? In the
> > >> group.version
> > >> >> > case,
> > >> >> > >> we could introduce group.version=1 that does basically nothing
> > and
> > >> >> > >> group.version=2 that enables the new protocol. I suppose that
> we
> > >> could
> > >> >> > do
> > >> >> > >> the same for the other features. I agree that it is less
> elegant
> > >> but
> > >> >> it
> > >> >> > >> would avoid all the backward compatibility issues.
> > >> >> > >>
> > >> >> > >> Best,
> 

Re: [kafka-clients] [ANNOUNCE] Apache Kafka 3.8.0

2024-07-30 Thread Justine Olshan
Thanks Josep for your hard work! And to everyone who contributed to this
release.

Justine

On Tue, Jul 30, 2024 at 8:04 AM Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> Thanks for running the release!
>
> On Tue, Jul 30, 2024 at 4:33 AM Colin McCabe  wrote:
>
> > +1. Thanks, Josep!
> >
> > Colin
> >
> > On Mon, Jul 29, 2024, at 10:32, Chris Egerton wrote:
> > > Thanks for running the release, Josep!
> > >
> > >
> > > On Mon, Jul 29, 2024, 13:31 'Josep Prat' via kafka-clients <
> > kafka-clie...@googlegroups.com> wrote:
> > >> The Apache Kafka community is pleased to announce the release for
> > Apache
> > >> Kafka 3.8.0
> > >>
> > >> This is a minor release and it includes fixes and improvements from
> 456
> > >> JIRAs.
> > >>
> > >> All of the changes in this release can be found in the release notes:
> > >> https://www.apache.org/dist/kafka/3.8.0/RELEASE_NOTES.html
> > >>
> > >> An overview of the release can be found in our announcement blog post:
> > >> https://kafka.apache.org/blog#apache_kafka_380_release_announcement
> > >>
> > >> You can download the source and binary release (Scala 2.12 and Scala
> > >> 2.13) from:
> > >> https://kafka.apache.org/downloads#3.8.0
> > >>
> > >>
> >
> ---
> > >>
> > >>
> > >> Apache Kafka is a distributed streaming platform with four core APIs:
> > >>
> > >>
> > >> ** The Producer API allows an application to publish a stream of
> > records to
> > >> one or more Kafka topics.
> > >>
> > >> ** The Consumer API allows an application to subscribe to one or more
> > >> topics and process the stream of records produced to them.
> > >>
> > >> ** The Streams API allows an application to act as a stream processor,
> > >> consuming an input stream from one or more topics and producing an
> > >> output stream to one or more output topics, effectively transforming
> the
> > >> input streams to output streams.
> > >>
> > >> ** The Connector API allows building and running reusable producers or
> > >> consumers that connect Kafka topics to existing applications or data
> > >> systems. For example, a connector to a relational database might
> > >> capture every change to a table.
> > >>
> > >>
> > >> With these APIs, Kafka can be used for two broad classes of
> application:
> > >>
> > >> ** Building real-time streaming data pipelines that reliably get data
> > >> between systems or applications.
> > >>
> > >> ** Building real-time streaming applications that transform or react
> > >> to the streams of data.
> > >>
> > >>
> > >> Apache Kafka is in use at large and small companies worldwide,
> including
> > >> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> Rabobank,
> > >> Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > >>
> > >> A big thank you to the following 202 contributors to this release!
> > >> (Please report an unintended omission)
> > >>
> > >> Aadithya Chandra, Abhijeet Kumar, Abhinav Dixit, Adrian Preston,
> Afshin
> > >> Moazami, Ahmed Najiub, Ahmed Sobeh, Akhilesh Chaganti, Almog Gavra,
> > Alok
> > >> Thatikunta, Alyssa Huang, Anatoly Popov, Andras Katona, Andrew
> > >> Schofield, Anna Sophie Blee-Goldman, Antoine Pourchet, Anton Agestam,
> > >> Anton Liauchuk, Anuj Sharma, Apoorv Mittal, Arnout Engelen, Arpit
> > Goyal,
> > >> Artem Livshits, Ashwin Pankaj, Ayoub Omari, Bruno Cadonna, Calvin Liu,
> > >> Cameron Redpath, charliecheng630, Cheng-Kai, Zhang, Cheryl Simmons,
> > Chia
> > >> Chuan Yu, Chia-Ping Tsai, ChickenchickenLove, Chris Egerton, Chris
> > >> Holland, Christo Lolov, Christopher Webb, Colin P. McCabe, Colt
> > McNealy,
> > >> cooper.ts...@suse.com, Vedarth Sharma, Crispin Bernier, Daan Gerits,
> > >> David Arthur, David Jacot, David Mao, dengziming, Divij Vaidya,
> DL1231,
> > >> Dmitry Werner, Dongnuo Lyu, Drawxy, Dung Ha, Edoardo Comar, Eduwer
> > >> Camacaro, Emanuele Sabellico, Erik van Oosten, Eugene Mitskevich, Fan
> > >> Yang, Federico Valeri, Fiore Mario Vitale, flashmouse, Florin
> Akermann,
> > >> Frederik Rouleau, Gantigmaa Selenge, Gaurav Narula, ghostspiders,
> > >> gongxuanzhang, Greg Harris, Gyeongwon Do, Hailey Ni, Hao Li, Hector
> > >> Geraldino, highluck, hudeqi, Hy (하이), IBeyondy, Iblis Lin, Igor
> Soarez,
> > >> ilyazr, Ismael Juma, Ivan Vaskevych, Ivan Yurchenko, James Faulkner,
> > >> Jamie Holmes, Jason Gustafson, Jeff Kim, jiangyuan, Jim Galasyn,
> > Jinyong
> > >> Choi, Joel Hamill, John Doe zh2725284...@gmail.com, John Roesler,
> John
> > >> Yu, Johnny Hsu, Jorge Esteban Quilcate Otoya, Josep Prat, José Armando
> > >> García Sancio, Jun Rao, Justine Olshan, Kalpesh Patel, Kamal
> > >> Chandraprakash, Ken Huang, Kirk True, Kohei Nozaki, Krishna Agarwal,
> > >> KrishVora01, Kuan-Po (Cooper) Tseng, Kvicii, Lee Dongjin, Leonardo
> > >> Silva, Lianet Magrans, LiangliangSui, Linu Shibu, lixinyang, Lokesh
> > >> Kumar, Loïc GREFFIER, Lucas Brutschy, Lucia Cerchie, Luke Chen,
> 

Re: New release branch 3.9

2024-07-30 Thread Josep Prat
Hi Matthias,
Note that it is KIP-853 that brings the feature parity with ZK.
KIP-1012 was the one where we agreed to stay on 3.x until feature parity.
The reason to have a 3.10 is to have a safe way to upgrade to KRaft Kafkas
while ZK is still around. We tried to explain this in KIP-1012

--
Josep Prat
Open Source Engineering Director, Aiven
josep.p...@aiven.io   |   +491715557497 | aiven.io
Aiven Deutschland GmbH
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
Anna Richardson, Kenneth Chen
Amtsgericht Charlottenburg, HRB 209739 B

On Tue, Jul 30, 2024, 22:10 Matthias J. Sax  wrote:

> Thanks for the input. However, I am wondering if releasing 3.9 makes
> sense if KIP-1012 won't make it?
>
> My understanding was we do 3.9 only to not delay 3.8 and not delay 4.0.
>
> If we would go with a 3.10, would it also be an "intermediate" release
> like 3.9? Or would it replace 4.0? For the first case, why not just
> delay 3.9 until everything is ready? And for the latter case, why not
> just do the 3.9 release in Nov and drop an intermediate release all
> together?
>
> What do we gain by a 3.10 release?
>
>
> -Matthias
>
> On 7/30/24 10:26 AM, Josep Prat wrote:
> > +1 Greg. I'd be really happy to bump trunk to 4.0.0, but only once we
> know
> > we can safely do so.
> >
> > On Tue, Jul 30, 2024 at 7:24 PM Greg Harris  >
> > wrote:
> >
> >> Hi all,
> >>
> >> I agree that we are not yet ready for breaking changes on trunk, so I
> >> opened a PR to bump to 3.10.0-SNAPSHOT:
> >> https://github.com/apache/kafka/pull/16732
> >>
> >> When KIP-853 is feature complete, we can bump to 4.0.0-SNAPSHOT.
> >>
> >> Thanks,
> >> Greg
> >>
> >> On Tue, Jul 30, 2024 at 10:01 AM Josep Prat  >
> >> wrote:
> >>
> >>> Hi all,
> >>> As per KIP-1012[1] we can't yet say if the next release will be 3.10.0
> or
> >>> 4.0.0. It will come down to the state of KIP-853 in 3.9.0.
> >>>
> >>> So, in my opinion we should still wait before committing breaking
> changes
> >>> on trunk until we know for sure that KIP-853 will make it.
> >>> Maybe Jose can share more about the chances of this.
> >>>
> >>> [1]
> >>>
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1012%3A+The+need+for+a+Kafka+3.8.x+release
> >>>
> >>> Best
> >>>
> >>>
> >>> --
> >>> Josep Prat
> >>> Open Source Engineering Director, Aiven
> >>> josep.p...@aiven.io   |   +491715557497 | aiven.io
> >>> Aiven Deutschland GmbH
> >>> Alexanderufer 3-7, 10117 Berlin
> >>> Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
> >>> Anna Richardson, Kenneth Chen
> >>> Amtsgericht Charlottenburg, HRB 209739 B
> >>>
> >>> On Tue, Jul 30, 2024, 18:52 Matthias J. Sax  wrote:
> >>>
>  Thanks for cutting the release branch.
> 
>  It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted
> to
>  follow up on this:
> 
>  We have a bunch of tickets that we can only ship with 4.0 release, and
>  these tickets were blocked so far. I wanted to get confirmation that
> we
>  will stick with 4.0 coming after 3.9, and that we can start to work on
>  these tickets? Or is there any reason why we should still hold off to
>  pick them up? We don't want to delay them unnecessarily to make sure we
>  can get them all into 4.0 release, but of course also don't want to work
> on
>  them prematurely (to avoid that we have to revert them after merging).
> 
> 
>  -Matthias
> 
>  On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:
> > hi Colin,
> >
> > Could you please consider adding
> > https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
> >
> > The issue is used to deprecate the formatters in core module. Also,
> >> it
> > implements the replacements for them.
> >
> > In order to follow the deprecation rules, it would be nice to have
> > KAFKA-1 in 3.9.0
> >
> > If you agree to have them in 3.9.0, I will cherry-pick them into
> >> 3.9.0
>  when
> > they get merged to trunk.
> >
> > Best,
> > Chia-Ping
> >
> >
> > José Armando García Sancio  於
> >> 2024年7月30日
>  週二
> > 下午11:59寫道:
> >
> >> Thanks Colin.
> >>
> >> For KIP-853 (KRaft Controller Membership Changes), we still have the
> >> following features that are in progress.
> >>
> >> 1. UpdateVoter RPC and request handling
> >> 
> >> 2. Storage tool changes for KIP-853
> >> 
> >> 3. kafka-metadata-quorum describe changes for KIP-853
> >> 
> >> 4. kafka-metadata-quorum add voter and remove voter changes
> >> 
> >> 5. Sending UpdateVoter request and response handling
> >> 
> >>
> >> Can we cherry pick them to the 

[jira] [Resolved] (KAFKA-17203) StreamThread leaking producer instances

2024-07-30 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17203.

Fix Version/s: 3.10.0
   Resolution: Fixed

> StreamThread leaking producer instances
> ---
>
> Key: KAFKA-17203
> URL: https://issues.apache.org/jira/browse/KAFKA-17203
> Project: Kafka
>  Issue Type: Test
>  Components: streams
>Affects Versions: 3.9.0
>Reporter: Greg Harris
>Assignee: PoAn Yang
>Priority: Minor
>  Labels: newbie
> Fix For: 3.10.0
>
>
> When running
> EosIntegrationTest.shouldCheckpointRestoredOffsetsWhenClosingCleanDuringRestoringStateUpdaterEnabled
> with the KAFKA-15845 leak testing extension, I observed that this test appears
> to consistently leak StreamsProducers. The producer is instantiated here:
> {noformat}
> This test contains a resource leak. Close the resources, or open a KAFKA 
> ticket and annotate this class with 
> @LeakTestingExtension.IgnoreAll("KAFKA-XYZ")
> org.opentest4j.AssertionFailedError: This test contains a resource leak. 
> Close the resources, or open a KAFKA ticket and annotate this class with 
> @LeakTestingExtension.IgnoreAll("KAFKA-XYZ")
>     at 
> org.apache.kafka.common.network.LeakTestingExtension.after(LeakTestingExtension.java:98)
>     at 
> org.apache.kafka.common.network.LeakTestingExtension$All.afterAll(LeakTestingExtension.java:123)
>     at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
> Caused by: org.opentest4j.AssertionFailedError: Leak check failed
>     at 
> org.apache.kafka.common.utils.LeakTester.lambda$combine$0(LeakTester.java:89)
>     at 
> org.apache.kafka.common.network.LeakTestingExtension.after(LeakTestingExtension.java:96)
>     ... 2 more
>     Suppressed: org.opentest4j.AssertionFailedError: AbstractSelector 
> instances left open
>         at 
> org.apache.kafka.common.utils.PredicateLeakTester.lambda$start$0(PredicateLeakTester.java:94)
>         at 
> org.apache.kafka.common.utils.LeakTester.lambda$combine$0(LeakTester.java:86)
>         ... 3 more
>         Suppressed: java.lang.Exception: Opened sun.nio.ch.KQueueSelectorImpl
>             at 
> org.apache.kafka.common.utils.PredicateLeakTester.open(PredicateLeakTester.java:63)
>             at 
> org.apache.kafka.common.network.NetworkContextLeakTester$RecordingSelectorProvider.openSelector(NetworkContextLeakTester.java:135)
>             at 
> org.apache.kafka.common.network.TestNetworkContext$SelectorProviderDecorator.openSelector(TestNetworkContext.java:166)
>             at 
> org.apache.kafka.common.network.Selector.(Selector.java:160)
>             at 
> org.apache.kafka.common.network.Selector.(Selector.java:213)
>             at 
> org.apache.kafka.common.network.Selector.(Selector.java:225)
>             at 
> org.apache.kafka.common.network.Selector.(Selector.java:229)
>             at 
> org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:225)
>             at 
> org.apache.kafka.clients.ClientUtils.createNetworkClient(ClientUtils.java:163)
>             at 
> org.apache.kafka.clients.producer.KafkaProducer.newSender(KafkaProducer.java:526)
>             at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:465)
>             at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:297)
>             at 
> org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier.getProducer(DefaultKafkaClientSupplier.java:39)
>             at 
> org.apache.kafka.streams.processor.internals.StreamsProducer.(StreamsProducer.java:142)
>             at 
> org.apache.kafka.streams.processor.internals.ActiveTaskCreator.createRecordCollector(ActiveTaskCreator.java:196)
>             at 
> org.apache.kafka.streams.processor.internals.ActiveTaskCreator.createActiveTask(ActiveTaskCreator.java:265)
>             at 
> org.apache.kafka.streams.processor.internals.ActiveTaskCreator.createTasks(ActiveTaskCreator.java:176)
>             at 
> org.apache.kafka.streams.processor.internals.TaskManager.createNewTasks(TaskManager.java:441)
>             at 
> org.apache.kafka.streams.processor.internals.TaskManager.handleAssignment(TaskManager.java:390)
>             at 
> org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.onAssignment(StreamsPartitionAssignor.java:1559)
>             at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokeOnAssignment(ConsumerCoordinator.java:327)
>             at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:416)
>             at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:504)
>             at 
> 

Re: New release branch 3.9

2024-07-30 Thread Matthias J. Sax
Thanks for the input. However, I am wondering if releasing 3.9 makes 
sense if KIP-1012 won't make it?


My understanding was we do 3.9 only to not delay 3.8 and not delay 4.0.

If we would go with a 3.10, would it also be an "intermediate" release 
like 3.9? Or would it replace 4.0? For the first case, why not just 
delay 3.9 until everything is ready? And for the latter case, why not 
just do the 3.9 release in Nov and drop an intermediate release all 
together?


What do we gain by a 3.10 release?


-Matthias

On 7/30/24 10:26 AM, Josep Prat wrote:

+1 Greg. I'd be really happy to bump trunk to 4.0.0, but only once we know
we can safely do so.

On Tue, Jul 30, 2024 at 7:24 PM Greg Harris 
wrote:


Hi all,

I agree that we are not yet ready for breaking changes on trunk, so I
opened a PR to bump to 3.10.0-SNAPSHOT:
https://github.com/apache/kafka/pull/16732

When KIP-853 is feature complete, we can bump to 4.0.0-SNAPSHOT.

Thanks,
Greg

On Tue, Jul 30, 2024 at 10:01 AM Josep Prat 
wrote:


Hi all,
As per KIP-1012[1] we can't yet say if the next release will be 3.10.0 or
4.0.0. It will come down to the state of KIP-853 in 3.9.0.

So, in my opinion we should still wait before committing breaking changes
on trunk until we know for sure that KIP-853 will make it.
Maybe Jose can share more about the chances of this.

[1]



https://cwiki.apache.org/confluence/display/KAFKA/KIP-1012%3A+The+need+for+a+Kafka+3.8.x+release


Best


--
Josep Prat
Open Source Engineering Director, Aiven
josep.p...@aiven.io   |   +491715557497 | aiven.io
Aiven Deutschland GmbH
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
Anna Richardson, Kenneth Chen
Amtsgericht Charlottenburg, HRB 209739 B

On Tue, Jul 30, 2024, 18:52 Matthias J. Sax  wrote:


Thanks for cutting the release branch.

It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted to
follow up on this:

We have a bunch of tickets that we can only ship with 4.0 release, and
these tickets were blocked so far. I wanted to get confirmation that we
will stick with 4.0 coming after 3.9, and that we can start to work on
these tickets? Or is there any reason why we should still hold off to
pick them up? We don't want to delay them unnecessarily to make sure we 
can get them all into 4.0 release, but of course also don't want to work on 
them prematurely (to avoid that we have to revert them after merging).


-Matthias

On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:

hi Colin,

Could you please consider adding
https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0

The issue is used to deprecate the formatters in core module. Also,

it

implements the replacements for them.

In order to follow the deprecation rules, it would be nice to have
KAFKA-1 in 3.9.0

If you agree to have them in 3.9.0, I will cherry-pick them into

3.9.0

when

they get merged to trunk.

Best,
Chia-Ping


José Armando García Sancio  於

2024年7月30日

週二

下午11:59寫道:


Thanks Colin.

For KIP-853 (KRaft Controller Membership Changes), we still have the
following features that are in progress.

1. UpdateVoter RPC and request handling

2. Storage tool changes for KIP-853

3. kafka-metadata-quorum describe changes for KIP-853

4. kafka-metadata-quorum add voter and remove voter changes

5. Sending UpdateVoter request and response handling


Can we cherry pick them to the release branch 3.9.0 when they get

merged to

trunk? They have a small impact as they shouldn't affect the rest of

Kafka

and only affect the kraft controller membership change feature. I

expected

them to get merged to the trunk branch in the coming days.

Thanks,

On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe 

wrote:



Hi Kafka developers and friends,

As promised, we now have a release branch for the upcoming 3.9.0

release.

Trunk has been bumped to 4.0.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this

release

to

the next release.

  From this point, most changes should go to trunk.
*Blockers (existing and new that we discover while testing the

release)

will be double-committed. *Please discuss with your reviewer

whether

your

PR should go to trunk or to trunk+release so they can merge

accordingly.


*Please help us test the release! *

best,
Colin




--
-José














Re: New release branch 3.9

2024-07-30 Thread Christopher Shannon
One option is to just have a separate 4.0 branch for development to not
block work on new features.

With Apache Accumulo  we have a similar
situation where we have a separate branch (ironically also 4.0) that we are
calling "elasticity" that is a long running branch with a ton of breaking
changes. Our main branch line is still 3.1.x.  Our workflow is that we just
continuously merge main forward into elasticity to keep things up to date.
When elasticity/4.0 is ready to be the next release we plan to merge it
into main.

So for Kafka, we could have a branch created that is for version 4.0.0 and
people could do work only related to 4.x in that branch. For example, I'm
going to open up a draft PR soon for KIP-1032 that only belongs there. Work
that can go into 3.9.0 or 3.10.0 can stay in trunk and periodically trunk
can be merged forward into the 4.0.0 branch. Once KIP-853 is ready we can
merge 4.0.0 back into trunk.

The downside is having to resolve merge conflicts that come up as things
diverge but the upside is allowing work to keep moving.

On Tue, Jul 30, 2024 at 1:27 PM Josep Prat 
wrote:

> +1 Greg. I'd be really happy to bump trunk to 4.0.0, but only once we know
> we can safely do so.
>
> On Tue, Jul 30, 2024 at 7:24 PM Greg Harris 
> wrote:
>
> > Hi all,
> >
> > I agree that we are not yet ready for breaking changes on trunk, so I
> > opened a PR to bump to 3.10.0-SNAPSHOT:
> > https://github.com/apache/kafka/pull/16732
> >
> > When KIP-853 is feature complete, we can bump to 4.0.0-SNAPSHOT.
> >
> > Thanks,
> > Greg
> >
> > On Tue, Jul 30, 2024 at 10:01 AM Josep Prat  >
> > wrote:
> >
> > > Hi all,
> > > As per KIP-1012[1] we can't yet say if the next release will be 3.10.0
> or
> > > 4.0.0. It will come down to the state of KIP-853 in 3.9.0.
> > >
> > > So, in my opinion we should still wait before committing breaking
> changes
> > > on trunk until we know for sure that KIP-853 will make it.
> > > Maybe Jose can share more about the chances of this.
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1012%3A+The+need+for+a+Kafka+3.8.x+release
> > >
> > > Best
> > >
> > >
> > > --
> > > Josep Prat
> > > Open Source Engineering Director, Aiven
> > > josep.p...@aiven.io   |   +491715557497 | aiven.io
> > > Aiven Deutschland GmbH
> > > Alexanderufer 3-7, 10117 Berlin
> > > Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
> > > Anna Richardson, Kenneth Chen
> > > Amtsgericht Charlottenburg, HRB 209739 B
> > >
> > > On Tue, Jul 30, 2024, 18:52 Matthias J. Sax  wrote:
> > >
> > > > Thanks for cutting the release branch.
> > > >
> > > > It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted
> to
> > > > follow up on this:
> > > >
> > > > We have a bunch of tickets that we can only ship with 4.0 release,
> and
> > > > these tickets were blocked so far. I wanted to get confirmation that
> we
> > > > will stick with 4.0 coming after 3.9, and that we can start to work
> on
> > > > these tickets? Or is there any reason why we should still hold off to
> > > > pick them up? We don't want to delay them unnecessarily to make sure we
> > > > can get them all into 4.0 release, but of course also don't want to work
> on
> > > > them prematurely (to avoid that we have to revert them after
> merging).
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:
> > > > > hi Colin,
> > > > >
> > > > > Could you please consider adding
> > > > > https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
> > > > >
> > > > > The issue is used to deprecate the formatters in core module. Also,
> > it
> > > > > implements the replacements for them.
> > > > >
> > > > > In order to follow the deprecation rules, it would be nice to have
> > > > > KAFKA-1 in 3.9.0
> > > > >
> > > > > If you agree to have them in 3.9.0, I will cherry-pick them into
> > 3.9.0
> > > > when
> > > > > they get merged to trunk.
> > > > >
> > > > > Best,
> > > > > Chia-Ping
> > > > >
> > > > >
> > > > > José Armando García Sancio  於
> > 2024年7月30日
> > > > 週二
> > > > > 下午11:59寫道:
> > > > >
> > > > >> Thanks Colin.
> > > > >>
> > > > >> For KIP-853 (KRaft Controller Membership Changes), we still have
> the
> > > > >> following features that are in progress.
> > > > >>
> > > > >> 1. UpdateVoter RPC and request handling
> > > > >> 
> > > > >> 2. Storage tool changes for KIP-853
> > > > >> 
> > > > >> 3. kafka-metadata-quorum describe changes for KIP-853
> > > > >> 
> > > > >> 4. kafka-metadata-quorum add voter and remove voter changes
> > > > >> 
> > > > >> 5. Sending UpdateVoter request and response handling
> > > > >> 
> > > > >>
> > > > >> Can we cherry pick them to the 

[jira] [Resolved] (KAFKA-16972) Move `BrokerTopicStats` and `BrokerTopicMetrics` to `org.apache.kafka.storage.log.metrics` (storage module)

2024-07-30 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-16972.

Fix Version/s: 3.10.0
   Resolution: Fixed

> Move `BrokerTopicStats` and `BrokerTopicMetrics` to 
> `org.apache.kafka.storage.log.metrics` (storage module)
> ---
>
> Key: KAFKA-16972
> URL: https://issues.apache.org/jira/browse/KAFKA-16972
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: PoAn Yang
>Priority: Minor
> Fix For: 3.10.0
>
>
> KAFKA-15852 says `kafka.server` should be moved to server module.
> However, `BrokerTopicMetrics` and `BrokerTopicStats` should be moved to 
> the storage module for the following reasons:
> 1. `RemoteLogManager` will be moved to storage module (KAFKA-14523)
> 2. `LogValidator` is already in storage module
> 3. `server` depends on `storage` 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-512: make Record Headers available in onAcknowledgement

2024-07-30 Thread Lianet M.
Hello Rich, thanks for resurrecting the KIP, seems to fill a gap indeed.

LM1. Specifically related to motivation#1. ProducerRecord already has a
timestamp, passed into the RecordMetadata, that represents the creation
time provided on new ProducerRecord, so couldn't we reuse it to avoid the
extra complexity of having to "include a timestamp in the header when the
message is sent" to be able to compute latency properly. The challenge of
course is that that timestamp may be overwritten (and this is the root
cause of the gap), but that could be resolved just by keeping the original
time and making it available.
RecordMetadata would keep a timestamp (passed from the record creation,
never mutated), and the "effectiveTimestamp" (the one it currently has,
updated with the broker result based on configs). Main advantage would be
not having to add a header for calculating latency. The user simply creates
the record with a timestamp (known existing concept), and we make that
value accessible in the RecordMetadata (where it exists already at some
point, but it's mutated). Thoughts?

LM2. Regardless of the point above, if we think having the headers
available on the onAcknowledgement would be helpful, I definitely see the
case for both alternatives (headers in RecordMetadata and as param). I
share Andrew's feeling because Headers are indeed part of the
ProducerRecord. But then headers will in practice simply contain info
related to the record, so it seems sensible to expect to find headers in
the RecordMetadata as you suggest, ok with me.

Thanks!

Lianet

On Mon, Jul 29, 2024 at 9:41 PM Rich C.  wrote:

> Hi Andrew,
>
> Thanks for the feedback. I have updated KIP-512 and addressed AS2, AS3, and
> AS4. For AS1, let's wait for further responses from the community.
>
>
> Regards,
> Rich
>
>
> On Mon, Jul 29, 2024 at 5:59 AM Andrew Schofield <
> andrew_schofi...@live.com>
> wrote:
>
> > Hi,
> > Thanks for adding the detail. It seems quite straightforward to
> > implement in the producer code.
> >
> > AS1: Personally, and of course this is a matter of taste and just one
> > opinion, I don’t like adding Headers to RecordMetadata. It seems to me
> > that RecordMetadata is information about the record that’s been produced
> > whereas the Headers are really part of the record itself. So, I prefer
> the
> > alternative which overloads ProducerInterceptor.onAcknowledgement.
> >
> > AS2: ProducerBatch and FutureRecordMetadata are both internal classes
> > and do not need to be documented in the KIP.
> >
> > > AS3: This KIP is adding rather than replacing the constructor for
> > RecordMetadata.
> > You should define the value for the Headers if an existing constructor
> > without headers is used.
> >
> > AS4: You should add a method `Headers headers()` to RecordMetadata.
> >
> >
> > I wonder what other community members think about whether it’s a good
> > idea to extend RecordMetadata with the headers.
> >
> > Thanks,
> > Andrew
> >
> > > On 29 Jul 2024, at 05:36, Rich C.  wrote:
> > >
> > > Hi all,
> > >
> > > Thank you for the positive feedback. I added proposal changes to
> KIP-512
> > > and included a FAQ section to address some concerns.
> > >
> > > Hi Andrew, yes, this KIP focuses on
> > > `ProducerInterceptor.onAcknowledgement`. I added FAQ#3 to explain that.
> > >
> > > Hi Matthias, for your question about "RecordMetadata being Kafka
> > metadata" in
> > > this thread
> > > <
> >
> https://lists.apache.org/list?dev@kafka.apache.org:lte=1M:make%20Record%20Headers%20available%20in%20onAcknowledgement
> > >,
> > > I added FAQ#2 to explain that. If I have missed any documentation
> > regarding
> > > the design of RecordMetadata, please let me know.
> > >
> > > Regards,
> > > Rich
> > >
> > >
> > > On Fri, Jul 26, 2024 at 4:00 PM Andrew Schofield <
> > andrew_schofi...@live.com>
> > > wrote:
> > >
> > >> Hi Rich,
> > >> Thanks for resurrecting this KIP. It seems like a useful idea to me
> and
> > >> I’d be interested in seeing the proposed public interfaces.
> > >>
> > >> I note that you specifically called out the
> > >> ProducerInterceptor.onAcknowledgement
> > >> method, as opposed to the producer Callback.onCompletion method.
> > >>
> > >> Thanks,
> > >> Andrew
> > >>
> > >>> On 26 Jul 2024, at 04:54, Rich C.  wrote:
> > >>>
> > >>> Hi Kevin,
> > >>>
> > >>> Thanks for your support.
> > >>>
> > >>> Hi Matthias,
> > >>>
> > >>> I apologize for the confusion. I've deleted the Public Interface
> > sections
> > >>> for now. I think we should focus on discussing its necessity with the
> > >>> community. I'll let it sit for a few more days, and if there are no
> > >>> objections, I will propose changes over the weekend and share them
> here
> > >>> again.
> > >>>
> > >>> Regards,
> > >>> Rich
> > >>>
> > >>>
> > >>> On Thu, Jul 25, 2024 at 5:51 PM Matthias J. Sax 
> > >> wrote:
> > >>>
> >  Rich,
> > 
> >  thanks for resurrecting this KIP. I was not part of the original
> >  discussion back in the day, but 
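
To make the two alternatives above concrete, here is a sketch of an interceptor that stamps a send timestamp into a header and computes acknowledgement latency from it. The three-argument onAcknowledgement overload is the addition this KIP proposes and is not part of the released ProducerInterceptor API; the header name is illustrative and the existing two-argument callback remains as it is today.

    import org.apache.kafka.clients.producer.ProducerInterceptor;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.apache.kafka.common.header.Headers;
    import java.nio.ByteBuffer;
    import java.util.Map;

    public class LatencyInterceptor implements ProducerInterceptor<String, String> {

        private static final String TS_HEADER = "x-send-ts"; // illustrative header name

        @Override
        public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
            // Stamp the wall-clock send time into a header before the record leaves the client.
            record.headers().add(TS_HEADER,
                ByteBuffer.allocate(Long.BYTES).putLong(System.currentTimeMillis()).array());
            return record;
        }

        // Existing callback: record headers are not available here today.
        @Override
        public void onAcknowledgement(RecordMetadata metadata, Exception exception) { }

        // Proposed KIP-512 overload (hypothetical until the KIP lands): the headers of the
        // acknowledged record are passed in, so end-to-end latency can be computed.
        public void onAcknowledgement(RecordMetadata metadata, Exception exception, Headers headers) {
            if (exception == null && headers.lastHeader(TS_HEADER) != null) {
                long sentAt = ByteBuffer.wrap(headers.lastHeader(TS_HEADER).value()).getLong();
                System.out.println("ack latency ms: " + (System.currentTimeMillis() - sentAt));
            }
        }

        @Override
        public void close() { }

        @Override
        public void configure(Map<String, ?> configs) { }
    }

With the other alternative (headers exposed on RecordMetadata, per AS4 above), the interceptor would instead read a headers() accessor on the metadata object inside the existing two-argument callback.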

Re: [DISCUSS] KIP-1074: Make the replication of internal topics configurable

2024-07-30 Thread Chris Egerton
Hi Patrick,

I share Greg's concerns with the feature as it's currently proposed. I
don't think I could vote for something unless it made replication of
genuinely internal topics and replication cycles impossible, or at least
significantly less likely.

Best,

Chris

On Tue, Jul 30, 2024, 14:51 Greg Harris 
wrote:

> Hi Patrik,
>
> Thanks for the KIP!
>
> Your motivation for this KIP is reasonable, because it is definitely
> possible for the ".internal" suffix to collide with real topics. It would
> have been nice if the original design included some mm2-specific namespace
> like "mm2.internal" to lessen the likelihood of a collision.
>
> However, this is a problem that has numerous existing workarounds:
> * Use a custom ReplicationPolicy and override the methods (for existing
> workloads/mirror makers)
> * Use non-conflicting user topic names (for new user topics)
> * Use the replication.policy.separator to use a non-conflicting separator
> character (for new mirror maker setups)
>
> And the feature as-described has significant risks attached:
> * May allow replication cycles and runaway replication
> * Mirrors internal topics that are unusable on the destination cluster
>
> While these risks can be accounted for if a user is attentive (e.g. when
> they're writing their own ReplicationPolicy) it is not a risk-free
> configuration that composes well with other out-of-the-box configurations.
> For example, someone may expect to take their existing configuration, turn
> on this new option, and expect reasonable behavior, which isn't always
> guaranteed.
>
> If you're still interested in this feature, please reference the existing
> workarounds and include them as rejected alternatives so we can know where
> the existing solutions fall short.
> We'd also have to figure out if and how the risks I mentioned could be
> mitigated.
>
> Thanks,
> Greg
>
> On Tue, Jul 30, 2024 at 5:49 AM Patrik Marton  >
> wrote:
>
> > Hi Team,
> >
> > I would like to start a discussion on KIP-1074: Make the replication of
> > internal topics configurable
> > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1074%3A+Make+the+replication+of+internal+topics+configurable
> > >
> >
> > The goal of this KIP is to make it easier to replicate topics that seem
> > internal (for example ending in .internal suffix), but are just normal
> > business topics created by the user.
> >
> > I appreciate any feedback and recommendations!
> >
> > Thanks!
> > Patrik
> >
>


Re: [DISCUSS] KIP-1074: Make the replication of internal topics configurable

2024-07-30 Thread Greg Harris
Hi Patrik,

Thanks for the KIP!

Your motivation for this KIP is reasonable, because it is definitely
possible for the ".internal" suffix to collide with real topics. It would
have been nice if the original design included some mm2-specific namespace
like "mm2.internal" to lessen the likelihood of a collision.

However, this is a problem that has numerous existing workarounds:
* Use a custom ReplicationPolicy and override the methods (for existing
workloads/mirror makers)
* Use non-conflicting user topic names (for new user topics)
* Use the replication.policy.separator to use a non-conflicting separator
character (for new mirror maker setups)

And the feature as-described has significant risks attached:
* May allow replication cycles and runaway replication
* Mirrors internal topics that are unusable on the destination cluster

While these risks can be accounted for if a user is attentive (e.g. when
they're writing their own ReplicationPolicy) it is not a risk-free
configuration that composes well with other out-of-the-box configurations.
For example, someone may expect to take their existing configuration, turn
on this new option, and expect reasonable behavior, which isn't always
guaranteed.

If you're still interested in this feature, please reference the existing
workarounds and include them as rejected alternatives so we can know where
the existing solutions fall short.
We'd also have to figure out if and how the risks I mentioned could be
mitigated.

Thanks,
Greg

On Tue, Jul 30, 2024 at 5:49 AM Patrik Marton 
wrote:

> Hi Team,
>
> I would like to start a discussion on KIP-1074: Make the replication of
> internal topics configurable
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1074%3A+Make+the+replication+of+internal+topics+configurable
> >
>
> The goal of this KIP is to make it easier to replicate topics that seem
> internal (for example ending in .internal suffix), but are just normal
> business topics created by the user.
>
> I appreciate any feedback and recommendations!
>
> Thanks!
> Patrik
>
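
A rough sketch of the first workaround listed above: a custom ReplicationPolicy that stops MirrorMaker 2 from treating user topics ending in ".internal" as internal. It assumes the isInternalTopic(String) hook that recent MM2 versions expose on ReplicationPolicy and should be read as an outline rather than a drop-in class, since the default behavior differs between versions.

    import org.apache.kafka.connect.mirror.DefaultReplicationPolicy;

    public class BusinessSuffixReplicationPolicy extends DefaultReplicationPolicy {

        @Override
        public boolean isInternalTopic(String topic) {
            // Keep Kafka's own "__"-prefixed topics and MM2's housekeeping topics
            // (offset-syncs, checkpoints) internal...
            boolean mm2Housekeeping = topic.endsWith(".internal")
                    && (topic.contains("offset-syncs") || topic.contains("checkpoints"));
            if (topic.startsWith("__") || mm2Housekeeping) {
                return true;
            }
            // ...but let ordinary business topics that merely end in ".internal"
            // be replicated like any other user topic.
            return false;
        }
    }

Such a policy would be selected with the replication.policy.class setting in the MirrorMaker 2 configuration, which is exactly where the risks Greg mentions (cycles, mirroring topics that are unusable downstream) have to be weighed by whoever writes it.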


[jira] [Resolved] (KAFKA-17185) Make sure a single logger instance is created

2024-07-30 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17185.

Fix Version/s: 3.10.0
   Resolution: Fixed

> Make sure a single logger instance is created 
> --
>
> Key: KAFKA-17185
> URL: https://issues.apache.org/jira/browse/KAFKA-17185
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Ming-Yen Chung
>Priority: Minor
> Fix For: 3.10.0
>
>
> the discussion: 
> https://github.com/apache/kafka/pull/16657#discussion_r1686938593
> In short, "private final logger" -> "private static final logger"



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
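
A minimal before/after sketch of the change KAFKA-17185 describes, assuming SLF4J (which Kafka uses for logging); the class name is illustrative.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class SomeComponent {
        // Before: a new Logger reference is held per object instance.
        // private final Logger log = LoggerFactory.getLogger(SomeComponent.class);

        // After: a single Logger instance is shared by all instances of the class.
        private static final Logger log = LoggerFactory.getLogger(SomeComponent.class);

        void doWork() {
            log.info("doing work");
        }
    }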


[jira] [Created] (KAFKA-17221) Flaky test DedicatedMirrorIntegrationTest::testMultiNodeCluster

2024-07-30 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-17221:
-

 Summary: Flaky test 
DedicatedMirrorIntegrationTest::testMultiNodeCluster
 Key: KAFKA-17221
 URL: https://issues.apache.org/jira/browse/KAFKA-17221
 Project: Kafka
  Issue Type: Test
  Components: mirrormaker
Reporter: Chris Egerton
Assignee: Chris Egerton


This test has failed at least once: 
[https://ge.apache.org/s/icsclaee3pdhg/tests/task/:connect:mirror:test/details/org.apache.kafka.connect.mirror.integration.DedicatedMirrorIntegrationTest/testMultiNodeCluster()?top-execution=1]

After examining the logs, it looks like no MM2 node ever attempted to write 
connector configs to the config topic ([this log 
message|https://github.com/apache/kafka/blob/1084d3b9c95aecccbe3c82e84ae4c8f406fc68e1/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorHerder.java#L55]
 is missing). It's possible that there is a bug in the logic introduced in 
[https://github.com/apache/kafka/pull/14293] for KAFKA-15372.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1022 Formatting and Updating Features

2024-07-30 Thread Jun Rao
Hi, Colin,

Thanks for the update. We also excluded supported features with maxVersion
of 0 from both ApiVersionResponse and BrokerRegistrationRequest, and
excluded finalized features with version of 0 from ApiVersionResponse. It
would be useful to document those too.

Jun

On Mon, Jul 29, 2024 at 9:25 PM Colin McCabe  wrote:

> Hi Jun,
>
> Just to close the loop on this... the KIP now mentions both
> ApiVersionResponse and BrokerRegistrationRequest.
>
> best,
> Colin
>
> On Mon, Jul 8, 2024, at 14:57, Jun Rao wrote:
> > Hi, Colin,
> >
> > Thanks for the update. Since the PR also introduces a new version of
> > BrokerRegistrationRequest, could we include that change in the KIP update
> > too?
> >
> > Jun
> >
> > On Mon, Jul 8, 2024 at 11:08 AM Colin McCabe  wrote:
> >
> >> Hi all,
> >>
> >> I've updated the approach in https://github.com/apache/kafka/pull/16421
> >> so that we change the minVersion=0 to minVersion=1 in older
> >> ApiVersionsResponses.
> >>
> >> I hope we can get this in soon and unblock the features that are waiting
> >> for it!
> >>
> >> best,
> >> Colin
> >>
> >> On Wed, Jul 3, 2024, at 10:55, Jun Rao wrote:
> >> > Hi, David,
> >> >
> >> > Thanks for the reply. In the common case, there is no difference
> between
> >> > omitting just v0 of the feature or omitting the feature completely.
> It's
> >> > just when an old client is used, there is some difference. To me,
> >> > omitting just v0 of the feature seems slightly better for the old
> client.
> >> >
> >> > Jun
> >> >
> >> > On Wed, Jul 3, 2024 at 9:45 AM David Jacot
> 
> >> > wrote:
> >> >
> >> >> Hi Jun, Colin,
> >> >>
> >> >> Thanks for your replies.
> >> >>
> >> >> If the FeatureCommand relies on version 0 too, my suggestion does not
> >> work.
> >> >> Omitting the features for old clients as suggested by Colin seems
> fine
> >> for
> >> >> me. In practice, administrators will usually use a version of
> >> >> FeatureCommand matching the cluster version so the impact is not too
> bad
> >> >> knowing that the first features will be introduced from 3.9 on.
> >> >>
> >> >> Best,
> >> >> David
> >> >>
> >> >> On Tue, Jul 2, 2024 at 2:15 AM Colin McCabe 
> wrote:
> >> >>
> >> >> > Hi David,
> >> >> >
> >> >> > In the ApiVersionsResponse, we really don't have an easy way of
> >> mapping
> >> >> > finalizedVersion = 1 to "off" in older releases such as 3.7.0. For
> >> >> example,
> >> >> > if a 3.9.0 broker advertises that it has finalized group.version =
> 1,
> >> >> that
> >> >> > will be treated by 3.7.0 as a brand new feature, not as "KIP-848 is
> >> off."
> >> >> > However, I suppose we could work around this by not setting a
> >> >> > finalizedVersion at all for group.version (or any other feature) if
> >> its
> >> >> > finalized level was 1. We could also work around the "deletion =
> set
> >> to
> >> >> 0"
> >> >> > issue on the server side. The server can translate requests to set
> the
> >> >> > finalized level to 0, into requests to set it to 1.
> >> >> >
> >> >> > So maybe this solution is worth considering, although it's
> >> unfortunate to
> >> >> > lose 0. I suppose we'd have to special case metadata.version being
> >> set to
> >> >> > 1, since that was NOT equivalent to it being "off"
> >> >> >
> >> >> > best,
> >> >> > Colin
> >> >> >
> >> >> >
> >> >> > On Mon, Jul 1, 2024, at 10:11, Jun Rao wrote:
> >> >> > > Hi, David,
> >> >> > >
> >> >> > > Yes, that's another option. It probably has its own challenges.
> For
> >> >> > > example, the FeatureCommand tool currently treats disabling a
> >> feature
> >> >> as
> >> >> > > setting the version to 0. It would be useful to get Jose's
> opinion
> >> on
> >> >> > this
> >> >> > > since he introduced version 0 in the kraft.version feature.
> >> >> > >
> >> >> > > Thanks,
> >> >> > >
> >> >> > > Jun
> >> >> > >
> >> >> > > On Sun, Jun 30, 2024 at 11:48 PM David Jacot
> >> >>  >> >> > >
> >> >> > > wrote:
> >> >> > >
> >> >> > >> Hi Jun, Colin,
> >> >> > >>
> >> >> > >> Have we considered sticking with the range going from version 1
> to
> >> N
> >> >> > where
> >> >> > >> version 1 would be the equivalent of "disabled"? In the
> >> group.version
> >> >> > case,
> >> >> > >> we could introduce group.version=1 that does basically nothing
> and
> >> >> > >> group.version=2 that enables the new protocol. I suppose that we
> >> could
> >> >> > do
> >> >> > >> the same for the other features. I agree that it is less elegant
> >> but
> >> >> it
> >> >> > >> would avoid all the backward compatibility issues.
> >> >> > >>
> >> >> > >> Best,
> >> >> > >> David
> >> >> > >>
> >> >> > >> On Fri, Jun 28, 2024 at 6:02 PM Jun Rao
> 
> >> >> > wrote:
> >> >> > >>
> >> >> > >> > Hi, Colin,
> >> >> > >> >
> >> >> > >> > Yes, #3 is the scenario that I was thinking about.
> >> >> > >> >
> >> >> > >> > In either approach, there will be some information missing in
> the
> >> >> old
> >> >> > >> > client. It seems that we should just pick the one that's less
> >> wrong.
> >> >> > In
> >> >> > >> the
> 

[jira] [Resolved] (KAFKA-17175) Remove interface `BrokerNode` and `ControllerNode`

2024-07-30 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17175.

Fix Version/s: 3.9.0
   Resolution: Fixed

> Remove interface `BrokerNode` and `ControllerNode`
> --
>
> Key: KAFKA-17175
> URL: https://issues.apache.org/jira/browse/KAFKA-17175
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: TengYao Chi
>Priority: Major
> Fix For: 3.9.0
>
>
> After KAFKA-16560, `BrokerNode` and `ControllerNode` are almost the same. 
> Hence, it is time to remove them.
> This issue should be composed of the following changes:
> 1. remove BrokerNode
> 2. remove ControllerNode
> 3. move the BrokerNode.Builder to TestKitNode
> 4. move the ControllerNode.Builder to TestKitNode
> 5. add a default `logDataDirectories` to TestKitNode. It can be implemented by 
> `initialMetaPropertiesEnsemble().logDirProps().keySet();` (see the sketch below)
> 6. remove `boolean combined();` from `TestKitNode` since it can be replaced 
> by `TestKitNodes.isCombined()`
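
A minimal sketch of item 5, based on the expression quoted in the description.
The interface shape, the Set<String> return type, and the
MetaPropertiesEnsemble import path are assumptions, not the final code:

    import java.util.Set;
    import org.apache.kafka.metadata.properties.MetaPropertiesEnsemble;

    public interface TestKitNode {
        MetaPropertiesEnsemble initialMetaPropertiesEnsemble();

        // Default implementation derived from the metadata ensemble, as
        // suggested in point 5 of the description above.
        default Set<String> logDataDirectories() {
            return initialMetaPropertiesEnsemble().logDirProps().keySet();
        }
    }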



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: New release branch 3.9

2024-07-30 Thread Josep Prat
+1 Greg. I'd be really happy to bump trunk to 4.0.0, but only once we know
we can safely do so.

On Tue, Jul 30, 2024 at 7:24 PM Greg Harris 
wrote:

> Hi all,
>
> I agree that we are not yet ready for breaking changes on trunk, so I
> opened a PR to bump to 3.10.0-SNAPSHOT:
> https://github.com/apache/kafka/pull/16732
>
> When KIP-853 is feature complete, we can bump to 4.0.0-SNAPSHOT.
>
> Thanks,
> Greg
>
> On Tue, Jul 30, 2024 at 10:01 AM Josep Prat 
> wrote:
>
> > Hi all,
> > As per KIP-1012[1] we can't yet say if the next release will be 3.10.0 or
> > 4.0.0. It will come down to the state of KIP-853 in 3.9.0.
> >
> > So, in my opinion we should still wait before committing breaking changes
> > on trunk until we know for sure that KIP-853 will make it.
> > Maybe Jose can share more about the chances of this.
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1012%3A+The+need+for+a+Kafka+3.8.x+release
> >
> > Best
> >
> >
> > --
> > Josep Prat
> > Open Source Engineering Director, Aiven
> > josep.p...@aiven.io   |   +491715557497 | aiven.io
> > Aiven Deutschland GmbH
> > Alexanderufer 3-7, 10117 Berlin
> > Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
> > Anna Richardson, Kenneth Chen
> > Amtsgericht Charlottenburg, HRB 209739 B
> >
> > On Tue, Jul 30, 2024, 18:52 Matthias J. Sax  wrote:
> >
> > > Thanks for cutting the release branch.
> > >
> > > It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted to
> > > follow up on this:
> > >
> > > We have a bunch of tickets that we can only ship with 4.0 release, and
> > > these tickets were blocked so far. I wanted to get confirmation that we
> > > will stick with 4.0 coming after 3.9, and that we can start to work on
> > > these tickets? Or is there any reason why we should still hold off on
> > > picking them up? We don't want to delay them unnecessarily just to make sure we
> > > can get them all into the 4.0 release, but of course we also don't want to work on
> > > them prematurely (to avoid having to revert them after merging).
> > >
> > >
> > > -Matthias
> > >
> > > On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:
> > > > hi Colin,
> > > >
> > > > Could you please consider adding
> > > > https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
> > > >
> > > > The issue is used to deprecate the formatters in core module. Also,
> it
> > > > implements the replacements for them.
> > > >
> > > > In order to follow the deprecation rules, it would be nice to have
> > > > KAFKA-1 in 3.9.0
> > > >
> > > > If you agree to have them in 3.9.0, I will cherry-pick them into
> 3.9.0
> > > when
> > > > they get merged to trunk.
> > > >
> > > > Best,
> > > > Chia-Ping
> > > >
> > > >
> > > > José Armando García Sancio  於
> 2024年7月30日
> > > 週二
> > > > 下午11:59寫道:
> > > >
> > > >> Thanks Colin.
> > > >>
> > > >> For KIP-853 (KRaft Controller Membership Changes), we still have the
> > > >> following features that are in progress.
> > > >>
> > > >> 1. UpdateVoter RPC and request handling
> > > >> 
> > > >> 2. Storage tool changes for KIP-853
> > > >> 
> > > >> 3. kafka-metadata-quorum describe changes for KIP-853
> > > >> 
> > > >> 4. kafka-metadata-quorum add voter and remove voter changes
> > > >> 
> > > >> 5. Sending UpdateVoter request and response handling
> > > >> 
> > > >>
> > > >> Can we cherry pick them to the release branch 3.9.0 when they get
> > > merged to
> > > >> trunk? They have a small impact as they shouldn't affect the rest of
> > > Kafka
> > > >> and only affect the kraft controller membership change feature. I
> > > expected
> > > >> them to get merged to the trunk branch in the coming days.
> > > >>
> > > >> Thanks,
> > > >>
> > > >> On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe 
> > > wrote:
> > > >>
> > > >>> Hi Kafka developers and friends,
> > > >>>
> > > >>> As promised, we now have a release branch for the upcoming 3.9.0
> > > release.
> > > >>> Trunk has been bumped to 4.0.0-SNAPSHOT.
> > > >>>
> > > >>> I'll be going over the JIRAs to move every non-blocker from this
> > > release
> > > >> to
> > > >>> the next release.
> > > >>>
> > > >>>  From this point, most changes should go to trunk.
> > > >>> *Blockers (existing and new that we discover while testing the
> > release)
> > > >>> will be double-committed. *Please discuss with your reviewer
> whether
> > > your
> > > >>> PR should go to trunk or to trunk+release so they can merge
> > > accordingly.
> > > >>>
> > > >>> *Please help us test the release! *
> > > >>>
> > > >>> best,
> > > >>> Colin
> > > >>>
> > > >>
> > > >>
> > > >> --
> > > >> -José
> > > >>
> > > >
> > >
> >
>


-- 
[image: Aiven] 

*Josep Prat*
Open Source Engineering Director, *Aiven*
josep.p...@aiven.io   

Re: New release branch 3.9

2024-07-30 Thread Greg Harris
Hi all,

I agree that we are not yet ready for breaking changes on trunk, so I
opened a PR to bump to 3.10.0-SNAPSHOT:
https://github.com/apache/kafka/pull/16732

When KIP-853 is feature complete, we can bump to 4.0.0-SNAPSHOT.

Thanks,
Greg

On Tue, Jul 30, 2024 at 10:01 AM Josep Prat 
wrote:

> Hi all,
> As per KIP-1012[1] we can't yet say if the next release will be 3.10.0 or
> 4.0.0. It will come down to the state of KIP-853 in 3.9.0.
>
> So, in my opinion we should still wait before committing breaking changes
> on trunk until we know for sure that KIP-853 will make it.
> Maybe Jose can share more about the chances of this.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1012%3A+The+need+for+a+Kafka+3.8.x+release
>
> Best
>
>
> --
> Josep Prat
> Open Source Engineering Director, Aiven
> josep.p...@aiven.io   |   +491715557497 | aiven.io
> Aiven Deutschland GmbH
> Alexanderufer 3-7, 10117 Berlin
> Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
> Anna Richardson, Kenneth Chen
> Amtsgericht Charlottenburg, HRB 209739 B
>
> On Tue, Jul 30, 2024, 18:52 Matthias J. Sax  wrote:
>
> > Thanks for cutting the release branch.
> >
> > It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted to
> > follow up on this:
> >
> > We have a bunch of tickets that we can only ship with 4.0 release, and
> > these tickets were blocked so far. I wanted to get confirmation that we
> > will stick with 4.0 coming after 3.9, and that we can start to work on
> > these tickets? Or is there any reason why we should still hold off on
> > picking them up? We don't want to delay them unnecessarily just to make sure we
> > can get them all into the 4.0 release, but of course we also don't want to work on
> > them prematurely (to avoid having to revert them after merging).
> >
> >
> > -Matthias
> >
> > On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:
> > > hi Colin,
> > >
> > > Could you please consider adding
> > > https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
> > >
> > > The issue is used to deprecate the formatters in core module. Also, it
> > > implements the replacements for them.
> > >
> > > In order to follow the deprecation rules, it would be nice to have
> > > KAFKA-1 in 3.9.0
> > >
> > > If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0
> > when
> > > they get merged to trunk.
> > >
> > > Best,
> > > Chia-Ping
> > >
> > >
> > > José Armando García Sancio  於 2024年7月30日
> > 週二
> > > 下午11:59寫道:
> > >
> > >> Thanks Colin.
> > >>
> > >> For KIP-853 (KRaft Controller Membership Changes), we still have the
> > >> following features that are in progress.
> > >>
> > >> 1. UpdateVoter RPC and request handling
> > >> 
> > >> 2. Storage tool changes for KIP-853
> > >> 
> > >> 3. kafka-metadata-quorum describe changes for KIP-853
> > >> 
> > >> 4. kafka-metadata-quorum add voter and remove voter changes
> > >> 
> > >> 5. Sending UpdateVoter request and response handling
> > >> 
> > >>
> > >> Can we cherry pick them to the release branch 3.9.0 when they get
> > merged to
> > >> trunk? They have a small impact as they shouldn't affect the rest of
> > Kafka
> > >> and only affect the kraft controller membership change feature. I
> > expected
> > >> them to get merged to the trunk branch in the coming days.
> > >>
> > >> Thanks,
> > >>
> > >> On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe 
> > wrote:
> > >>
> > >>> Hi Kafka developers and friends,
> > >>>
> > >>> As promised, we now have a release branch for the upcoming 3.9.0
> > release.
> > >>> Trunk has been bumped to 4.0.0-SNAPSHOT.
> > >>>
> > >>> I'll be going over the JIRAs to move every non-blocker from this
> > release
> > >> to
> > >>> the next release.
> > >>>
> > >>>  From this point, most changes should go to trunk.
> > >>> *Blockers (existing and new that we discover while testing the
> release)
> > >>> will be double-committed. *Please discuss with your reviewer whether
> > your
> > >>> PR should go to trunk or to trunk+release so they can merge
> > accordingly.
> > >>>
> > >>> *Please help us test the release! *
> > >>>
> > >>> best,
> > >>> Colin
> > >>>
> > >>
> > >>
> > >> --
> > >> -José
> > >>
> > >
> >
>


[jira] [Resolved] (KAFKA-17044) Connector deletion can lead to resource leak during a long running connector startup

2024-07-30 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-17044.
---
Resolution: Won't Fix

> Connector deletion can lead to resource leak during a long running connector 
> startup
> 
>
> Key: KAFKA-17044
> URL: https://issues.apache.org/jira/browse/KAFKA-17044
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>Reporter: Bhagyashree
>Priority: Major
>
> We have identified a gap in the shutdown flow for the connector worker. If 
> the connector is in 
> [INIT|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L403-L404]
>  state and still executing the 
> [WorkerConnector::doStart|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L207-L218]
>  method, a DELETE API call would invoke the 
> [WorkerConnector::shutdown|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L294-L298]
>  and [notify() 
> |https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L297] but
>  the connector worker would not shut down immediately. This happens because 
> [start()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L216]
>  is a blocking call and the control reaches 
> [wait()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L176]
>  in 
> [doRun()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L151]
>  after the 
> [start()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L216]
>  call has completed. This results in a gap in the delete flow where the 
> connector is not immediately shut down, leaving its resources running. 
> [start()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L216]
>  keeps running and only when the execution of 
> [start()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L216]
>  completes, we reach at the point of 
> [wait()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L176]
>  and then 
> [doShutdown()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L183]
>  of the connector worker is invoked.
> This seems similar to what has been identified for connector tasks as part of 
> https://issues.apache.org/jira/browse/KAFKA-14725.
> *Steps to repro*
> 1. Start a connector with a time-consuming operation in the 
> [connector.start()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L216]
>  call
> 2. Call DELETE API to delete this connector
> 3. The connector would be deleted only after the 
> [start()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L216]
>  completes.
> The issue was observed when a connector was configured to retry a db 
> connection for sometime. 
> *Current Behaviour*: The connector did not shut down until the 
> [start()|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L216]
>  method completed.
> *Expected Behaviour*: The connector should abort what it is doing and 
> shut down as requested by the DELETE call.
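
To make the described race easier to follow, here is a heavily simplified
sketch of the pattern, not the actual WorkerConnector code (see the links in
the description for the real class):

    // shutdown() only sets a flag and notifies, so the worker thread cannot
    // react until the blocking start() call returns.
    public class SimplifiedWorkerConnector {
        private volatile boolean stopping = false;

        public void doRun() throws InterruptedException {
            connectorStart();            // blocking; may retry a DB connection for a long time
            synchronized (this) {
                while (!stopping) {
                    wait();              // only reached after start() completes
                }
            }
            doShutdown();                // actual cleanup happens here
        }

        public synchronized void shutdown() {  // invoked by the DELETE request
            stopping = true;
            notifyAll();                 // has no effect while start() is still running
        }

        private void connectorStart() { /* e.g. retry a DB connection with backoff */ }

        private void doShutdown() { /* release resources */ }
    }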



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: New release branch 3.9

2024-07-30 Thread Josep Prat
Hi all,
As per KIP-1012[1] we can't yet say if the next release will be 3.10.0 or
4.0.0. It will come down to the state of KIP-853 in 3.9.0.

So, in my opinion we should still wait before committing breaking changes
on trunk until we know for sure that KIP-853 will make it.
Maybe Jose can share more about the chances of this.

[1]
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1012%3A+The+need+for+a+Kafka+3.8.x+release

Best


--
Josep Prat
Open Source Engineering Director, Aiven
josep.p...@aiven.io   |   +491715557497 | aiven.io
Aiven Deutschland GmbH
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
Anna Richardson, Kenneth Chen
Amtsgericht Charlottenburg, HRB 209739 B

On Tue, Jul 30, 2024, 18:52 Matthias J. Sax  wrote:

> Thanks for cutting the release branch.
>
> It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted to
> follow up on this:
>
> We have a bunch of tickets that we can only ship with 4.0 release, and
> these tickets were blocked so far. I wanted to get confirmation that we
> will stick with 4.0 coming after 3.9, and that we can start to work on
> these tickets? Or is there any reason why we should still hold off on
> picking them up? We don't want to delay them unnecessarily just to make sure we
> can get them all into the 4.0 release, but of course we also don't want to work on
> them prematurely (to avoid having to revert them after merging).
>
>
> -Matthias
>
> On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:
> > hi Colin,
> >
> > Could you please consider adding
> > https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0
> >
> > The issue is used to deprecate the formatters in core module. Also, it
> > implements the replacements for them.
> >
> > In order to follow the deprecation rules, it would be nice to have
> > KAFKA-1 in 3.9.0
> >
> > If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0
> when
> > they get merged to trunk.
> >
> > Best,
> > Chia-Ping
> >
> >
> > José Armando García Sancio  於 2024年7月30日
> 週二
> > 下午11:59寫道:
> >
> >> Thanks Colin.
> >>
> >> For KIP-853 (KRaft Controller Membership Changes), we still have the
> >> following features that are in progress.
> >>
> >> 1. UpdateVoter RPC and request handling
> >> 
> >> 2. Storage tool changes for KIP-853
> >> 
> >> 3. kafka-metadata-quorum describe changes for KIP-853
> >> 
> >> 4. kafka-metadata-quorum add voter and remove voter changes
> >> 
> >> 5. Sending UpdateVoter request and response handling
> >> 
> >>
> >> Can we cherry pick them to the release branch 3.9.0 when they get
> merged to
> >> trunk? They have a small impact as they shouldn't affect the rest of
> Kafka
> >> and only affect the kraft controller membership change feature. I
> expected
> >> them to get merged to the trunk branch in the coming days.
> >>
> >> Thanks,
> >>
> >> On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe 
> wrote:
> >>
> >>> Hi Kafka developers and friends,
> >>>
> >>> As promised, we now have a release branch for the upcoming 3.9.0
> release.
> >>> Trunk has been bumped to 4.0.0-SNAPSHOT.
> >>>
> >>> I'll be going over the JIRAs to move every non-blocker from this
> release
> >> to
> >>> the next release.
> >>>
> >>>  From this point, most changes should go to trunk.
> >>> *Blockers (existing and new that we discover while testing the release)
> >>> will be double-committed. *Please discuss with your reviewer whether
> your
> >>> PR should go to trunk or to trunk+release so they can merge
> accordingly.
> >>>
> >>> *Please help us test the release! *
> >>>
> >>> best,
> >>> Colin
> >>>
> >>
> >>
> >> --
> >> -José
> >>
> >
>


Re: New release branch 3.9

2024-07-30 Thread Matthias J. Sax

Thanks for cutting the release branch.

It's great to see `trunk` being bumped to 4.0-SNAPSHOT, and I wanted to 
follow up on this:


We have a bunch of tickets that we can only ship with 4.0 release, and 
these tickets were blocked so far. I wanted to get confirmation that we 
will stick with 4.0 coming after 3.9, and that we can start to work on 
these tickets? Or is there any reason why we should still hold off on 
picking them up? We don't want to delay them unnecessarily just to make sure we 
can get them all into the 4.0 release, but of course we also don't want to work on 
them prematurely (to avoid having to revert them after merging).



-Matthias

On 7/30/24 9:07 AM, Chia-Ping Tsai wrote:

hi Colin,

Could you please consider adding
https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0

The issue is used to deprecate the formatters in core module. Also, it
implements the replacements for them.

In order to follow the deprecation rules, it would be nice to have
KAFKA-1 in 3.9.0

If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0 when
they get merged to trunk.

Best,
Chia-Ping


José Armando García Sancio  於 2024年7月30日 週二
下午11:59寫道:


Thanks Colin.

For KIP-853 (KRaft Controller Membership Changes), we still have the
following features that are in progress.

1. UpdateVoter RPC and request handling

2. Storage tool changes for KIP-853

3. kafka-metadata-quorum describe changes for KIP-853

4. kafka-metadata-quorum add voter and remove voter changes

5. Sending UpdateVoter request and response handling


Can we cherry pick them to the release branch 3.9.0 when they get merged to
trunk? They have a small impact as they shouldn't affect the rest of Kafka
and only affect the kraft controller membership change feature. I expected
them to get merged to the trunk branch in the coming days.

Thanks,

On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:


Hi Kafka developers and friends,

As promised, we now have a release branch for the upcoming 3.9.0 release.
Trunk has been bumped to 4.0.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release

to

the next release.

 From this point, most changes should go to trunk.
*Blockers (existing and new that we discover while testing the release)
will be double-committed. *Please discuss with your reviewer whether your
PR should go to trunk or to trunk+release so they can merge accordingly.

*Please help us test the release! *

best,
Colin




--
-José





Re: New release branch 3.9

2024-07-30 Thread Chia-Ping Tsai
hi Colin,

Could you please consider adding
https://issues.apache.org/jira/browse/KAFKA-1 to 3.9.0

The issue is used to deprecate the formatters in core module. Also, it
implements the replacements for them.

In order to follow the deprecation rules, it would be nice to have
KAFKA-1 in 3.9.0

If you agree to have them in 3.9.0, I will cherry-pick them into 3.9.0 when
they get merged to trunk.

Best,
Chia-Ping


José Armando García Sancio  於 2024年7月30日 週二
下午11:59寫道:

> Thanks Colin.
>
> For KIP-853 (KRaft Controller Membership Changes), we still have the
> following features that are in progress.
>
> 1. UpdateVoter RPC and request handling
> 
> 2. Storage tool changes for KIP-853
> 
> 3. kafka-metadata-quorum describe changes for KIP-853
> 
> 4. kafka-metadata-quorum add voter and remove voter changes
> 
> 5. Sending UpdateVoter request and response handling
> 
>
> Can we cherry pick them to the release branch 3.9.0 when they get merged to
> trunk? They have a small impact as they shouldn't affect the rest of Kafka
> and only affect the kraft controller membership change feature. I expected
> them to get merged to the trunk branch in the coming days.
>
> Thanks,
>
> On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:
>
> > Hi Kafka developers and friends,
> >
> > As promised, we now have a release branch for the upcoming 3.9.0 release.
> > Trunk has been bumped to 4.0.0-SNAPSHOT.
> >
> > I'll be going over the JIRAs to move every non-blocker from this release
> to
> > the next release.
> >
> > From this point, most changes should go to trunk.
> > *Blockers (existing and new that we discover while testing the release)
> > will be double-committed. *Please discuss with your reviewer whether your
> > PR should go to trunk or to trunk+release so they can merge accordingly.
> >
> > *Please help us test the release! *
> >
> > best,
> > Colin
> >
>
>
> --
> -José
>


Re: New release branch 3.9

2024-07-30 Thread José Armando García Sancio
Thanks Colin.

For KIP-853 (KRaft Controller Membership Changes), we still have the
following features that are in progress.

1. UpdateVoter RPC and request handling

2. Storage tool changes for KIP-853

3. kafka-metadata-quorum describe changes for KIP-853

4. kafka-metadata-quorum add voter and remove voter changes

5. Sending UpdateVoter request and response handling


Can we cherry pick them to the release branch 3.9.0 when they get merged to
trunk? They have a small impact as they shouldn't affect the rest of Kafka
and only affect the kraft controller membership change feature. I expected
them to get merged to the trunk branch in the coming days.

Thanks,

On Mon, Jul 29, 2024 at 7:02 PM Colin McCabe  wrote:

> Hi Kafka developers and friends,
>
> As promised, we now have a release branch for the upcoming 3.9.0 release.
> Trunk has been bumped to 4.0.0-SNAPSHOT.
>
> I'll be going over the JIRAs to move every non-blocker from this release to
> the next release.
>
> From this point, most changes should go to trunk.
> *Blockers (existing and new that we discover while testing the release)
> will be double-committed. *Please discuss with your reviewer whether your
> PR should go to trunk or to trunk+release so they can merge accordingly.
>
> *Please help us test the release! *
>
> best,
> Colin
>


-- 
-José


[jira] [Created] (KAFKA-17220) Define new metrics for MirrorMaker2

2024-07-30 Thread Greg Harris (Jira)
Greg Harris created KAFKA-17220:
---

 Summary: Define new metrics for MirrorMaker2
 Key: KAFKA-17220
 URL: https://issues.apache.org/jira/browse/KAFKA-17220
 Project: Kafka
  Issue Type: New Feature
  Components: mirrormaker
Reporter: Greg Harris


MirrorMaker2 provides some observability into its operation, but lacks 
observability in the following areas:
 * Number of replicated topics/partitions
 * Number of consumer groups & consumer offsets
 * Background job success/failure/latency/duration
 * Checkpoint task startup time
 * Consumer group translation lag/expected redelivered data
 * Error metrics for failed offset translations

These concepts should be more clearly defined and included in a general 
observability KIP for MM2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.8 #76

2024-07-30 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 523078 lines...]
[2024-07-30T15:20:30.983Z] > Task :connect:json:publishToMavenLocal
[2024-07-30T15:20:30.983Z] > Task :server:compileTestJava
[2024-07-30T15:20:30.983Z] > Task :server:testClasses
[2024-07-30T15:20:33.712Z] > Task :server-common:compileTestJava
[2024-07-30T15:20:33.712Z] > Task :server-common:testClasses
[2024-07-30T15:20:37.807Z] > Task :raft:compileTestJava
[2024-07-30T15:20:37.807Z] > Task :raft:testClasses
[2024-07-30T15:20:40.717Z] 
[2024-07-30T15:20:40.717Z] > Task :clients:javadoc
[2024-07-30T15:20:40.717Z] 
/home/jenkins/workspace/Kafka_kafka_3.8/clients/src/main/java/org/apache/kafka/clients/admin/ScramMechanism.java:32:
 warning - Tag @see: missing final '>': "https://cwiki.apache.org/confluence/display/KAFKA/KIP-554%3A+Add+Broker-side+SCRAM+Config+API;>KIP-554:
 Add Broker-side SCRAM Config API
[2024-07-30T15:20:40.717Z] 
[2024-07-30T15:20:40.717Z]  This code is duplicated in 
org.apache.kafka.common.security.scram.internals.ScramMechanism.
[2024-07-30T15:20:40.717Z]  The type field in both files must match and must 
not change. The type field
[2024-07-30T15:20:40.717Z]  is used both for passing ScramCredentialUpsertion 
and for the internal
[2024-07-30T15:20:40.717Z]  UserScramCredentialRecord. Do not change the type 
field."
[2024-07-30T15:20:43.273Z] 
/home/jenkins/workspace/Kafka_kafka_3.8/clients/src/main/java/org/apache/kafka/common/security/oauthbearer/secured/package-info.java:21:
 warning - Tag @link: reference not found: 
org.apache.kafka.common.security.oauthbearer
[2024-07-30T15:20:43.273Z] 2 warnings
[2024-07-30T15:20:44.637Z] 
[2024-07-30T15:20:44.637Z] > Task :clients:javadocJar
[2024-07-30T15:20:46.168Z] > Task :group-coordinator:compileTestJava
[2024-07-30T15:20:46.168Z] > Task :group-coordinator:testClasses
[2024-07-30T15:20:47.699Z] > Task :metadata:compileTestJava
[2024-07-30T15:20:47.699Z] > Task :metadata:testClasses
[2024-07-30T15:20:49.058Z] > Task :clients:srcJar
[2024-07-30T15:20:50.499Z] > Task :clients:testJar
[2024-07-30T15:20:50.499Z] > Task :clients:testSrcJar
[2024-07-30T15:20:51.779Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2024-07-30T15:20:51.779Z] > Task :clients:publishToMavenLocal
[2024-07-30T15:20:51.779Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2024-07-30T15:20:51.779Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2024-07-30T15:20:51.779Z] > Task :connect:api:testClasses UP-TO-DATE
[2024-07-30T15:20:51.779Z] > Task :connect:api:testJar
[2024-07-30T15:20:51.779Z] > Task :connect:api:testSrcJar
[2024-07-30T15:20:51.779Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2024-07-30T15:20:51.779Z] > Task :connect:api:publishToMavenLocal
[2024-07-30T15:20:57.243Z] 
[2024-07-30T15:20:57.243Z] > Task :streams:javadoc
[2024-07-30T15:20:57.243Z] 
/home/jenkins/workspace/Kafka_kafka_3.8/streams/src/main/java/org/apache/kafka/streams/processor/assignment/TaskAssignor.java:80:
 warning - @param argument "assignment:" is not a parameter name.
[2024-07-30T15:20:57.243Z] 
/home/jenkins/workspace/Kafka_kafka_3.8/streams/src/main/java/org/apache/kafka/streams/processor/assignment/TaskAssignor.java:80:
 warning - @param argument "subscription:" is not a parameter name.
[2024-07-30T15:20:57.243Z] 
/home/jenkins/workspace/Kafka_kafka_3.8/streams/src/main/java/org/apache/kafka/streams/processor/assignment/TaskAssignor.java:80:
 warning - @param argument "error:" is not a parameter name.
[2024-07-30T15:20:59.980Z] 3 warnings
[2024-07-30T15:20:59.980Z] 
[2024-07-30T15:20:59.980Z] > Task :streams:javadocJar
[2024-07-30T15:21:02.706Z] > Task :streams:srcJar
[2024-07-30T15:21:02.706Z] > Task :streams:processTestResources UP-TO-DATE
[2024-07-30T15:21:32.234Z] > Task :core:classes
[2024-07-30T15:21:32.234Z] > Task :core:compileTestJava NO-SOURCE
[2024-07-30T15:22:02.427Z] > Task :core:compileTestScala
[2024-07-30T15:23:03.094Z] > Task :core:testClasses
[2024-07-30T15:23:33.480Z] > Task :streams:compileTestJava
[2024-07-30T15:25:18.105Z] > Task :streams:testClasses
[2024-07-30T15:25:18.105Z] > Task :streams:testJar
[2024-07-30T15:25:18.105Z] > Task :streams:testSrcJar
[2024-07-30T15:25:18.105Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2024-07-30T15:25:18.105Z] > Task :streams:publishToMavenLocal
[2024-07-30T15:25:18.105Z] 
[2024-07-30T15:25:18.105Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 9.0.
[2024-07-30T15:25:18.105Z] 
[2024-07-30T15:25:18.105Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2024-07-30T15:25:18.105Z] 
[2024-07-30T15:25:18.105Z] For more on this, please refer to 
https://docs.gradle.org/8.7/userguide/command_line_interface.html#sec:command_line_warnings
 in the Gradle documentation.

Re: [kafka-clients] [ANNOUNCE] Apache Kafka 3.8.0

2024-07-30 Thread Kamal Chandraprakash
Thanks for running the release!

On Tue, Jul 30, 2024 at 4:33 AM Colin McCabe  wrote:

> +1. Thanks, Josep!
>
> Colin
>
> On Mon, Jul 29, 2024, at 10:32, Chris Egerton wrote:
> > Thanks for running the release, Josep!
> >
> >
> > On Mon, Jul 29, 2024, 13:31 'Josep Prat' via kafka-clients <
> kafka-clie...@googlegroups.com> wrote:
> >> The Apache Kafka community is pleased to announce the release for
> Apache
> >> Kafka 3.8.0
> >>
> >> This is a minor release and it includes fixes and improvements from 456
> >> JIRAs.
> >>
> >> All of the changes in this release can be found in the release notes:
> >> https://www.apache.org/dist/kafka/3.8.0/RELEASE_NOTES.html
> >>
> >> An overview of the release can be found in our announcement blog post:
> >> https://kafka.apache.org/blog#apache_kafka_380_release_announcement
> >>
> >> You can download the source and binary release (Scala 2.12 and Scala
> >> 2.13) from:
> >> https://kafka.apache.org/downloads#3.8.0
> >>
> >>
> ---
> >>
> >>
> >> Apache Kafka is a distributed streaming platform with four core APIs:
> >>
> >>
> >> ** The Producer API allows an application to publish a stream of
> records to
> >> one or more Kafka topics.
> >>
> >> ** The Consumer API allows an application to subscribe to one or more
> >> topics and process the stream of records produced to them.
> >>
> >> ** The Streams API allows an application to act as a stream processor,
> >> consuming an input stream from one or more topics and producing an
> >> output stream to one or more output topics, effectively transforming the
> >> input streams to output streams.
> >>
> >> ** The Connector API allows building and running reusable producers or
> >> consumers that connect Kafka topics to existing applications or data
> >> systems. For example, a connector to a relational database might
> >> capture every change to a table.
> >>
> >>
> >> With these APIs, Kafka can be used for two broad classes of application:
> >>
> >> ** Building real-time streaming data pipelines that reliably get data
> >> between systems or applications.
> >>
> >> ** Building real-time streaming applications that transform or react
> >> to the streams of data.
> >>
> >>
> >> Apache Kafka is in use at large and small companies worldwide, including
> >> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> >> Target, The New York Times, Uber, Yelp, and Zalando, among others.
> >>
> >> A big thank you for the following 202 contributors to this release!
> >> (Please report an unintended omission)
> >>
> >> Aadithya Chandra, Abhijeet Kumar, Abhinav Dixit, Adrian Preston, Afshin
> >> Moazami, Ahmed Najiub, Ahmed Sobeh, Akhilesh Chaganti, Almog Gavra,
> Alok
> >> Thatikunta, Alyssa Huang, Anatoly Popov, Andras Katona, Andrew
> >> Schofield, Anna Sophie Blee-Goldman, Antoine Pourchet, Anton Agestam,
> >> Anton Liauchuk, Anuj Sharma, Apoorv Mittal, Arnout Engelen, Arpit
> Goyal,
> >> Artem Livshits, Ashwin Pankaj, Ayoub Omari, Bruno Cadonna, Calvin Liu,
> >> Cameron Redpath, charliecheng630, Cheng-Kai, Zhang, Cheryl Simmons,
> Chia
> >> Chuan Yu, Chia-Ping Tsai, ChickenchickenLove, Chris Egerton, Chris
> >> Holland, Christo Lolov, Christopher Webb, Colin P. McCabe, Colt
> McNealy,
> >> cooper.ts...@suse.com, Vedarth Sharma, Crispin Bernier, Daan Gerits,
> >> David Arthur, David Jacot, David Mao, dengziming, Divij Vaidya, DL1231,
> >> Dmitry Werner, Dongnuo Lyu, Drawxy, Dung Ha, Edoardo Comar, Eduwer
> >> Camacaro, Emanuele Sabellico, Erik van Oosten, Eugene Mitskevich, Fan
> >> Yang, Federico Valeri, Fiore Mario Vitale, flashmouse, Florin Akermann,
> >> Frederik Rouleau, Gantigmaa Selenge, Gaurav Narula, ghostspiders,
> >> gongxuanzhang, Greg Harris, Gyeongwon Do, Hailey Ni, Hao Li, Hector
> >> Geraldino, highluck, hudeqi, Hy (하이), IBeyondy, Iblis Lin, Igor Soarez,
> >> ilyazr, Ismael Juma, Ivan Vaskevych, Ivan Yurchenko, James Faulkner,
> >> Jamie Holmes, Jason Gustafson, Jeff Kim, jiangyuan, Jim Galasyn,
> Jinyong
> >> Choi, Joel Hamill, John Doe zh2725284...@gmail.com, John Roesler, John
> >> Yu, Johnny Hsu, Jorge Esteban Quilcate Otoya, Josep Prat, José Armando
> >> García Sancio, Jun Rao, Justine Olshan, Kalpesh Patel, Kamal
> >> Chandraprakash, Ken Huang, Kirk True, Kohei Nozaki, Krishna Agarwal,
> >> KrishVora01, Kuan-Po (Cooper) Tseng, Kvicii, Lee Dongjin, Leonardo
> >> Silva, Lianet Magrans, LiangliangSui, Linu Shibu, lixinyang, Lokesh
> >> Kumar, Loïc GREFFIER, Lucas Brutschy, Lucia Cerchie, Luke Chen,
> >> Manikumar Reddy, mannoopj, Manyanda Chitimbo, Mario Pareja, Matthew de
> >> Detrich, Matthias Berndt, Matthias J. Sax, Matthias Sax, Max Riedel,
> >> Mayank Shekhar Narula, Michael Edgar, Michael Westerby, Mickael Maison,
> >> Mike Lloyd, Minha, Jeong, Murali Basani, n.izhikov, Nick Telford,
> Nikhil
> >> Ramakrishnan, Nikolay, Octavian Ciubotaru, Okada Haruki, Omnia G.H
> >> Ibrahim, Ori Hoch, Owen Leung, 

Re: [DISCUSS] KIP-1062: Introduce Pagination for some requests used by Admin API

2024-07-30 Thread David Arthur
Omnia, thanks for the updates!

> I am happy to add a section for throttling in this KIP if it is a high concern,
or open a follow-up KIP for this once we already have the pagination in
place. Which one do you suggest?

I'm okay leaving throttling for a future KIP. It might be useful to see the
feature in action for a while before deciding if it's necessary or the best
way to approach it.
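
For readers skimming the thread, the cursor-driven paging under discussion has
roughly the following shape. Every name below (Page, PagedAdmin, listGroups,
the page size) is hypothetical, since KIP-1062 has not finalized any public
API; this only illustrates the looping pattern a caller would use.

    import java.util.List;

    // Hypothetical types for illustration only.
    interface Page<T> {
        List<T> items();
        String nextCursor();          // null once there are no further pages
    }

    interface PagedAdmin {
        Page<String> listGroups(String cursor, int pageSize);
    }

    public class PagingExample {
        // Drain all pages by following the cursor returned with each response.
        static void listAllGroups(PagedAdmin admin) {
            String cursor = null;
            do {
                Page<String> page = admin.listGroups(cursor, 500);
                page.items().forEach(System.out::println);
                cursor = page.nextCursor();
            } while (cursor != null);
        }
    }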

On Mon, Jul 22, 2024 at 9:23 AM Omnia Ibrahim 
wrote:

>
> Hi David, thanks for the feedback and sorry for taking long to respond as
> I was off for a week.
> > DA1: In "Public Interfaces" you say "max.request.pagination.size.limit"
> > controls the max items to return by default. It's not clear to me if this
> > is just a default, or if it is a hard limit. In KIP-966, this config
> serves
> > as a hard limit to prevent misconfigured or malicious clients from
> > requesting too many resources. Can you clarify this bit?
>
> `max.request.partition.size.limit` will be used in the same way as in KIP-966. I
> just meant that `max.request.partition.size.limit` will equal
> `max.request.pagination.size.limit` by default unless it is specified
> otherwise. I have clarified this in the KIP now.
>
> > DA2: Is "ItemsLeftToFetch" accounting for authorization? If not, it could
> > be considered a minor info leak.
>
> This is a good point. All of the requests will still be scoped to the ACLs
> and resources of the authorised user the client is using; the pagination
> will not affect this.
> In cases where the client is using a user with wildcard ACLs I am assuming this
> is okay and they have the right to see this info.
> However, I am rethinking this now as it might not be that useful, and we can
> just rely on whether there is a next cursor or not to simplify the approach,
> similar to KIP-966. I have updated the KIP to reflect this.
>
> > DA3: By splitting up results into pages, we are introducing the
> possibility
> > of inconsistency into these RPCs. For example, today MetadataRequest
> > returns results from the same MetadataImage, so the response is
> consistent.
> > With the paging approach, it is possible (likely even) that different
> > requests will be served from different MetadataImage-s, leading to
> > inconsistencies. This can be even worse if paged requests go to different
> > brokers that may be lagging behind in metadata propagation. BTW this
> issue
> > exists for KIP-966 as well. We don't necessarily need to solve this right
> > away, but I think it's worth mentioning in the KIP.
>
> I added a limitation section to the KIP to mention this. I also mentioned
> it in the top section of public interfaces.
>
> > DA4: Have we considered some generic throttling for paged requests? I
> > expect it might be an issue if clients want to get everything and just
> page
> > through all of the results as quickly as possible.
> I didn't consider throttling for pagination requests, for a few reasons.
> Right now the only throttling AdminClient knows is throttling
> TopicCreate/Delete, which is different from pagination and might need its
> own conversation and KIP.
> For example, in the case of throttling and retries exceeding timeouts, we
> should consider sending back what we fetched so far and allowing the operator
> to set the cursor next time. If this is the case then we need to include a
> cursor in all the Option classes for these requests. Also, the Admin API for
> DescribeTopicPartitionRequest in KIP-966 doesn't provide a Cursor as part of
> DescribeTopicsOptions.
> We would also need to decide whether to extend `controllerMutation` or to
> separate the paging throttling into its own quota.
> The only requests I think might be actively scraped are `OffsetFetchRequest`,
> `ListGroupsRequest`, `DescribeGroupsRequest` and
> `ConsumerGroupDescribeRequest`, to actively provide lag metrics/dashboards
> to consumers. So there might be too many pages.
> The rest of the requests are mostly used during maintenance of the cluster or
> incidents (especially the producer/txn requests), and the operator of the
> cluster needs them to take a decision. The pagination just provides them with
> a way to escape the timeout problem with large clusters. So I am not sure
> adding throttling during such times would be wise.
> I am happy to add a section for throttling in this KIP if it is a high
> concern, or open a follow-up KIP for this once we already have the pagination
> in place. Which one do you suggest?
>
> Thanks
> Omnia
>
> > On 12 Jul 2024, at 14:56, David Arthur  wrote:
> >
> > Hey Omnia, thanks for the KIP! I think this will be a really nice
> > improvement for operators.
> >
> > DA1: In "Public Interfaces" you say "max.request.pagination.size.limit"
> > controls the max items to return by default. It's not clear to me if this
> > is just a default, or if it is a hard limit. In KIP-966, this config
> serves
> > as a hard limit to prevent misconfigured or malicious clients from
> > requesting too many resources. Can you clarify this bit?
> >
> > DA2: Is "ItemsLeftToFetch" accounting for authorization? If not, it could
> > be considered a minor info leak.
> >
> > DA3: By 

Re: [PR] MINOR: Refresh of the docs [kafka-site]

2024-07-30 Thread via GitHub


mimaison merged PR #618:
URL: https://github.com/apache/kafka-site/pull/618


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [DISCUSS] KIP-1066: Mechanism to cordon brokers and log directories

2024-07-30 Thread David Arthur
DA1: I like Jun's suggestion of using a wildcard; this would also help with
the case I mentioned (cordon a whole broker, regardless of how many log
dirs).

DA2.1: Re: log dir names to UUID mapping -- do you mean a new CordonLogDirs
RPC would need to send the UUID instead of the log dir name?

DA2.2: In general, I don't really like the idea of using dynamic configs as
an API for modifying state. I know we do it a lot, but I think it's an
anti-pattern. It makes it difficult to evolve an API since you're stuck
with the key/value config system. Also, the dynamic config code is a lot
harder to navigate and understand than KafkaApis and MetadataImage (my
opinion is probably biased here).

DA3: I was thinking along the lines of this being a new feature, so we may
want to just disable the code paths entirely with a config. For example,
maybe we introduce a rare bug that breaks assignments, or broker
registration, or whatever -- it's nicer to have a config to turn off a
feature rather than having to downgrade. I guess this really depends on how
tightly integrated the new feature is with existing code. Let's leave this
out of the KIP for now and maybe revisit after the feature is done.

DA4: Ok, thanks. So eventually the operator would un-cordon the brokers to
allow normal placements to occur. Since we expect the operator to
eventually come back and un-cordon the brokers, this means it will
occasionally be forgotten (since it's a human task). WDYT about a new
broker metric to show the number of cordoned log dirs? This would help
operators set up monitors like "alert if cordoned count > 0 for 7 days" or
similar.

-David

On Wed, Jul 24, 2024 at 1:21 PM Jun Rao  wrote:

> Hi, Mickael,
>
> Thanks for the KIP.
>
> 10. A common case is only one log dir per broker. Could we support sth
> like --add-config cordoned.log.dirs=* to make it more convenient for this
> case?
>
> 11. Since we changed the metadata record format, should we gate the new
> configuration based on a new metadata version?
>
> Jun
>
> On Wed, Jul 17, 2024 at 9:18 AM Mickael Maison 
> wrote:
>
> > Hi Kamal,
> >
> > Good spot, yes this is a typo. The flexibleVersions stays as "0+". Fixed
> >
> > Thanks,
> > Mickael
> >
> > On Wed, Jul 17, 2024 at 6:14 PM Mickael Maison  >
> > wrote:
> > >
> > > Hi David,
> > >
> > > DA1: It's a point I considered, especially being able to cordon a
> > > whole broker. With the current proposal you just need to set
> > > cordoned.log.dirs to the same value as log.dirs. That does not seem
> > > excessively difficult.
> > >
> > > DA2: I did consider using a new metadata record like we do for
> > > fence/unfence. Since I don't expect cordoned log dirs to change very
> > > frequently, and the size should be small, I opted to reuse the
> > > BrokerRegistrationRecord metadata record. At the moment I guess it was
> > > mostly for the convenience while prototyping. Semantically it probably
> > > makes sense to have separate records. Your further points suggest
> > > design changes in the mechanism as a whole, so let's discuss these
> > > first and we can return to the exact metadata records after.
> > > I find the idea of having dedicated RPCs interesting. One of the
> > > reasons I piggybacked on the heartbeating process is for the mapping
> > > between log directory names and their UUIDs. Currently the mapping
> > > only exists on each broker instance. So if we wanted a dedicated RPC,
> > > we would first need to change how we manage the log directory to UUID
> > > mappings. I guess this could be done via the BrokerRegistration API.
> > > I'm not sure about storing additional metadata (reason, timestamp). We
> > > currently don't do that for any operations
> > > (AlterPartitionReassignments, UnregisterBroker). Typically these are
> > > stored in the tools used by the operators to drive these operations.
> > > You bring another interesting point about the ability to cordon
> > > brokers/log directories while they are offline. It's not something the
> > > current proposal supports. I'm not sure how useful this would turn out
> > > in practice. My experience is that brokers are mostly online, so I'd
> > > expect the need to do so to be relatively rare. This also kind of loops back
> > > to KAFKA-17094 [0] and the discussion Gantigmaa started on the dev
> > > list [1] about being able to identify the ids of offline (but still
> > > registered) brokers.
> > >
> > > DA3: With the current proposal, I don't see a reason why you would
> > > want to disable the new behavior. If you don't want to use it, you
> > > have nothing to do. It's opt-in as you need to set cordoned.log.dirs
> > > on some brokers to get the new behavior. If you don't want it anymore,
> > > you should unset cordoned.log.dirs. Can you explain why this would not
> > > work?
> > >
> > > DA4: Yes
> > >
> > > 0: https://issues.apache.org/jira/browse/KAFKA-17094
> > > 1: https://lists.apache.org/thread/1rrgbhk43d85wobcp0dqz6mhpn93j9yo
> > >
> > > Thanks,
> > > Mickael
> > >
> > >

Re: [PR] MINOR: Refresh of the docs [kafka-site]

2024-07-30 Thread via GitHub


jlprat commented on PR #618:
URL: https://github.com/apache/kafka-site/pull/618#issuecomment-2258469770

   This was the PR for trunk https://github.com/apache/kafka/pull/16654


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Refresh of the docs [kafka-site]

2024-07-30 Thread via GitHub


jlprat commented on PR #618:
URL: https://github.com/apache/kafka-site/pull/618#issuecomment-2258463153

   I thought I ported both there; maybe I overlooked a couple. I'll check later 
if nobody else does.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Refresh of the docs [kafka-site]

2024-07-30 Thread via GitHub


mimaison commented on PR #618:
URL: https://github.com/apache/kafka-site/pull/618#issuecomment-2258458423

   Thanks for the review. So it looks like some changes were only done in 
kafka-site and are not in 3.8. I pushed an update to fix these.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-17219) Adjust system test framework for new protocol consumer

2024-07-30 Thread Dongnuo Lyu (Jira)
Dongnuo Lyu created KAFKA-17219:
---

 Summary: Adjust system test framework for new protocol consumer
 Key: KAFKA-17219
 URL: https://issues.apache.org/jira/browse/KAFKA-17219
 Project: Kafka
  Issue Type: Task
  Components: clients, consumer, system tests
Reporter: Dongnuo Lyu


The current test framework doesn't work well with the existing tests using the 
new consumer protocol. There are two main issues.

We sometimes assume there is no rebalance triggered, for instance in 
{{consumer_test.py::test_consumer_failure}}:

# verify that there were no rebalances on failover
assert num_rebalances == consumer.num_rebalances(), "Broker failure should not cause a rebalance"

The current framework calculates {{num_rebalances}} by incrementing it by one 
every time a new assignment is received, so if a reconciliation happened during 
the failover, {{num_rebalances}} will also be incremented. For the new protocol 
we need a new way to update {{num_rebalances}}.

For the new protocol, we also need a way to make sure all members have joined 
*and stabilized*. Currently we only make sure all members have joined (the 
event handlers are all in the Joined state), while some partitions haven't been 
assigned and more time is needed for reconciliation. This can cause failures in 
assertions such as the timeout waiting for consumption and:

partition_owner = consumer.owner(partition)
assert partition_owner is not None

As a short-term solution, we can make the tests pass by adding {{time.sleep}} 
or by skipping the {{num_rebalances}} check. To truly fix them, we should 
adjust {{tools/src/main/java/org/apache/kafka/tools/VerifiableConsumer.java}} 
to work well with the new protocol.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Refresh of the docs [kafka-site]

2024-07-30 Thread via GitHub


jlprat commented on code in PR #618:
URL: https://github.com/apache/kafka-site/pull/618#discussion_r1696998174


##
38/upgrade.html:
##
@@ -19,7 +19,7 @@
 
 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3151

2024-07-30 Thread Apache Jenkins Server
See 




[PR] MINOR: Refresh of the docs [kafka-site]

2024-07-30 Thread via GitHub


mimaison opened a new pull request, #618:
URL: https://github.com/apache/kafka-site/pull/618

   Regenerated the docs from the 3.8 branch


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[DISCUSS] KIP-1074: Make the replication of internal topics configurable

2024-07-30 Thread Patrik Marton
Hi Team,

I would like to start a discussion on KIP-1074: Make the replication of
internal topics configurable


The goal of this KIP is to make it easier to replicate topics that seem
internal (for example ending in .internal suffix), but are just normal
business topics created by the user.

I appreciate any feedback and recommendations!

Thanks!
Patrik


[jira] [Resolved] (KAFKA-15469) Document built-in configuration providers

2024-07-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15469.

Fix Version/s: 4.0.0
   Resolution: Fixed

> Document built-in configuration providers
> -
>
> Key: KAFKA-15469
> URL: https://issues.apache.org/jira/browse/KAFKA-15469
> Project: Kafka
>  Issue Type: Task
>  Components: documentation
>Reporter: Mickael Maison
>Assignee: Paul Mellor
>Priority: Major
> Fix For: 4.0.0
>
>
> Kafka has 3 built-in ConfigProvider implementations:
> * DirectoryConfigProvider
> * EnvVarConfigProvider
> * FileConfigProvider
> These don't appear anywhere in the documentation. We should at least mention 
> them and probably even demonstrate how to use them.
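
For reference, usage looks roughly like this in a broker, client, or Connect
worker properties file. The provider class names are the real built-in ones;
the file path and key names are placeholders:

    # Register the built-in FileConfigProvider under the alias "file"
    config.providers=file
    config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

    # Resolve a secret at startup from the "keystore.password" key stored in
    # /etc/kafka/secrets.properties (placeholder path)
    ssl.keystore.password=${file:/etc/kafka/secrets.properties:keystore.password}

DirectoryConfigProvider and EnvVarConfigProvider follow the same
${provider:path:key} indirection pattern, reading from a directory of files or
from environment variables respectively.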



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17218) kafka-consumer-groups fails to describe all group if one group has been consuming from a deleted topic

2024-07-30 Thread Omnia Ibrahim (Jira)
Omnia Ibrahim created KAFKA-17218:
-

 Summary: kafka-consumer-groups fails to describe all group if one 
group has been consuming from a deleted topic
 Key: KAFKA-17218
 URL: https://issues.apache.org/jira/browse/KAFKA-17218
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.8.0, 3.7.0, 3.6.0
Reporter: Omnia Ibrahim


`kafka-consumer-groups.sh --describe --all-groups` fails with 
`org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.` if a single group has metadata for a topic 
that was recently deleted. 

As a result, this prevents admins from viewing the group metadata for the whole 
cluster. 

Instead we should either 
- follow the same approach we use when calculating the lag, which prints the 
placeholder `-` when the metadata cannot be computed, 
- or exclude entries for deleted topics. 

Currently I am leaning towards the first option, with `-` as the placeholder.
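
A rough sketch of the first option, using the public Admin API for illustration 
rather than the actual ConsumerGroupCommand internals (the class name and output 
format are made up):

{code:java}
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ExecutionException;

public class DescribeGroupsSafely {

    static void printGroupOffsets(Admin admin, String groupId) throws Exception {
        // Committed offsets for the group, keyed by topic-partition.
        Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets(groupId).partitionsToOffsetAndMetadata().get();

        for (Map.Entry<TopicPartition, OffsetAndMetadata> entry : committed.entrySet()) {
            TopicPartition tp = entry.getKey();
            String logEndOffset;
            try {
                ListOffsetsResult.ListOffsetsResultInfo info = admin
                        .listOffsets(Collections.singletonMap(tp, OffsetSpec.latest()))
                        .partitionResult(tp)
                        .get();
                logEndOffset = String.valueOf(info.offset());
            } catch (ExecutionException e) {
                if (e.getCause() instanceof UnknownTopicOrPartitionException) {
                    logEndOffset = "-"; // topic was deleted; keep describing the other groups
                } else {
                    throw e;
                }
            }
            System.out.printf("%s %s %d %s%n", groupId, tp, entry.getValue().offset(), logEndOffset);
        }
    }
}
{code}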



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] Add KIP-477 to blog [kafka-site]

2024-07-30 Thread via GitHub


jlprat merged PR #617:
URL: https://github.com/apache/kafka-site/pull/617


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Add KIP-477 to blog [kafka-site]

2024-07-30 Thread via GitHub


jlprat commented on code in PR #617:
URL: https://github.com/apache/kafka-site/pull/617#discussion_r1696798968


##
blog.html:
##
@@ -110,6 +110,9 @@ Kafka Streams
 
 Kafka Connect
 
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-477%3A+Add+PATCH+method+for+connector+config+in+Connect+REST+API;>KIP-477:
 Add PATCH method for connector config in Connect REST API:
+This KIP add the ability in Kafka Connect 
understand PATCH methods in the Connect REST API, allowing to 
provide partial updates for configuration.

Review Comment:
   Thanks @soarez, applied your suggestion.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Add KIP-477 to blog [kafka-site]

2024-07-30 Thread via GitHub


soarez commented on code in PR #617:
URL: https://github.com/apache/kafka-site/pull/617#discussion_r1696796779


##
blog.html:
##
@@ -110,6 +110,9 @@ Kafka Streams
 
 Kafka Connect
 
+https://cwiki.apache.org/confluence/display/KAFKA/KIP-477%3A+Add+PATCH+method+for+connector+config+in+Connect+REST+API;>KIP-477:
 Add PATCH method for connector config in Connect REST API:
+This KIP add the ability in Kafka Connect 
understand PATCH methods in the Connect REST API, allowing to 
provide partial updates for configuration.

Review Comment:
   ```suggestion
   This KIP adds the ability in Kafka Connect 
to understand PATCH methods in the Connect REST API, allowing 
partial configuration updates.
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.9 #2

2024-07-30 Thread Apache Jenkins Server
See 




[jira] [Reopened] (KAFKA-15863) Handle push telemetry throttling with quota manager

2024-07-30 Thread Apoorv Mittal (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apoorv Mittal reopened KAFKA-15863:
---

Reopening to introduce metric quota.

> Handle push telemetry throttling with quota manager
> ---
>
> Key: KAFKA-15863
> URL: https://issues.apache.org/jira/browse/KAFKA-15863
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apoorv Mittal
>Assignee: Apoorv Mittal
>Priority: Major
>
> Details: https://github.com/apache/kafka/pull/14699#discussion_r1399714279



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14614) Missing cluster tool script for Windows

2024-07-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14614.

Fix Version/s: 3.6.0
   Resolution: Fixed

> Missing cluster tool script for Windows
> ---
>
> Key: KAFKA-14614
> URL: https://issues.apache.org/jira/browse/KAFKA-14614
> Project: Kafka
>  Issue Type: Bug
>Reporter: Mickael Maison
>Priority: Major
> Fix For: 3.6.0
>
>
> We have the kafka-cluster.sh script to run ClusterTool but there's no 
> matching script for Windows.
> We should check if other scripts are missing too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3150

2024-07-30 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-17210) Broker fixes for smooth concurrent fetches on share partition

2024-07-30 Thread Abhinav Dixit (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhinav Dixit resolved KAFKA-17210.
---
Fix Version/s: 4.0.0
   Resolution: Fixed

> Broker fixes for smooth concurrent fetches on share partition
> -
>
> Key: KAFKA-17210
> URL: https://issues.apache.org/jira/browse/KAFKA-17210
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Abhinav Dixit
>Assignee: Abhinav Dixit
>Priority: Major
> Fix For: 4.0.0
>
>
> Identified a couple of reliability issues with broker code for share groups - 
>  # The broker seems to get stuck at times when using multiple share consumers, due 
> to a corner case where the second-to-last fetch request did not contain any 
> topic partition to fetch, as a result of which the broker could never complete 
> the last request. This results in a share fetch request getting stuck.
>  # Since the persister would not perform any business logic around sending state 
> batches for a share partition, there could be scenarios where it sends state 
> batches with no AVAILABLE records. This could cause a breach of the limit of 
> in-flight messages we have configured, and hence the broker would never be able 
> to complete the share fetch requests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-17217) Clients : Optimise batching of requests per node in ShareConsumeRequestManager

2024-07-30 Thread Shivsundar R (Jira)
Shivsundar R created KAFKA-17217:


 Summary: Clients : Optimise batching of requests per node in 
ShareConsumeRequestManager
 Key: KAFKA-17217
 URL: https://issues.apache.org/jira/browse/KAFKA-17217
 Project: Kafka
  Issue Type: Sub-task
Reporter: Shivsundar R
Assignee: Shivsundar R


In ShareConsumeRequestManager, every time we currently perform a commitSync or 
commitAsync, we create one ShareAcknowledge RPC for it. We can reduce the number 
of RPC calls by batching the acknowledgements per node until the next poll is 
invoked. 
This will ensure that between two calls the acknowledgements are accumulated into 
one request per node and then sent during poll, resulting in fewer RPC calls.
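
Conceptually, the batching could look something like the sketch below; the class 
and method names are invented for illustration and do not reflect the actual 
ShareConsumeRequestManager code:

{code:java}
import org.apache.kafka.common.TopicPartition;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PerNodeAckBatcher {
    // nodeId -> acknowledgements buffered since the last poll
    private final Map<Integer, List<TopicPartition>> pendingByNode = new HashMap<>();

    // Called from commitSync()/commitAsync(): buffer instead of sending an RPC immediately.
    public synchronized void enqueue(int nodeId, TopicPartition acknowledgedPartition) {
        pendingByNode.computeIfAbsent(nodeId, id -> new ArrayList<>()).add(acknowledgedPartition);
    }

    // Called once per poll(): drain everything, building at most one request per node.
    public synchronized Map<Integer, List<TopicPartition>> drain() {
        Map<Integer, List<TopicPartition>> batches = new HashMap<>(pendingByNode);
        pendingByNode.clear();
        return batches;
    }
}
{code}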



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] Add KIP-477 to blog [kafka-site]

2024-07-30 Thread via GitHub


jlprat opened a new pull request, #617:
URL: https://github.com/apache/kafka-site/pull/617

   KIP-477 was forgotten on the announcement blog, adding it now


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.9 #1

2024-07-29 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-1022 Formatting and Updating Features

2024-07-29 Thread Colin McCabe
Hi Jun,

Just to close the loop on this... the KIP now mentions both ApiVersionResponse 
and BrokerRegistrationRequest.

best,
Colin

On Mon, Jul 8, 2024, at 14:57, Jun Rao wrote:
> Hi, Colin,
>
> Thanks for the update. Since the PR also introduces a new version of
> BrokerRegistrationRequest, could we include that change in the KIP update
> too?
>
> Jun
>
> On Mon, Jul 8, 2024 at 11:08 AM Colin McCabe  wrote:
>
>> Hi all,
>>
>> I've updated the approach in https://github.com/apache/kafka/pull/16421
>> so that we change the minVersion=0 to minVersion=1 in older
>> ApiVersionsResponses.
>>
>> I hope we can get this in soon and unblock the features that are waiting
>> for it!
>>
>> best,
>> Colin
>>
>> On Wed, Jul 3, 2024, at 10:55, Jun Rao wrote:
>> > Hi, David,
>> >
>> > Thanks for the reply. In the common case, there is no difference between
>> > omitting just v0 of the feature or omitting the feature completely. It's
>> > just when an old client is used, there is some difference. To me,
>> > omitting just v0 of the feature seems slightly better for the old client.
>> >
>> > Jun
>> >
>> > On Wed, Jul 3, 2024 at 9:45 AM David Jacot 
>> > wrote:
>> >
>> >> Hi Jun, Colin,
>> >>
>> >> Thanks for your replies.
>> >>
>> >> If the FeatureCommand relies on version 0 too, my suggestion does not
>> work.
>> >> Omitting the features for old clients as suggested by Colin seems fine
>> for
>> >> me. In practice, administrators will usually use a version of
>> >> FeatureCommand matching the cluster version so the impact is not too bad
>> >> knowing that the first features will be introduced from 3.9 on.
>> >>
>> >> Best,
>> >> David
>> >>
>> >> On Tue, Jul 2, 2024 at 2:15 AM Colin McCabe  wrote:
>> >>
>> >> > Hi David,
>> >> >
>> >> > In the ApiVersionsResponse, we really don't have an easy way of
>> mapping
>> >> > finalizedVersion = 1 to "off" in older releases such as 3.7.0. For
>> >> example,
>> >> > if a 3.9.0 broker advertises that it has finalized group.version = 1,
>> >> that
>> >> > will be treated by 3.7.0 as a brand new feature, not as "KIP-848 is
>> off."
>> >> > However, I suppose we could work around this by not setting a
>> >> > finalizedVersion at all for group.version (or any other feature) if
>> its
>> >> > finalized level was 1. We could also work around the "deletion = set
>> to
>> >> 0"
>> >> > issue on the server side. The server can translate requests to set the
>> >> > finalized level to 0, into requests to set it to 1.
>> >> >
>> >> > So maybe this solution is worth considering, although it's
>> unfortunate to
>> >> > lose 0. I suppose we'd have to special case metadata.version being
>> set to
>> >> > 1, since that was NOT equivalent to it being "off"
>> >> >
>> >> > best,
>> >> > Colin
>> >> >
>> >> >
>> >> > On Mon, Jul 1, 2024, at 10:11, Jun Rao wrote:
>> >> > > Hi, David,
>> >> > >
>> >> > > Yes, that's another option. It probably has its own challenges. For
>> >> > > example, the FeatureCommand tool currently treats disabling a
>> feature
>> >> as
>> >> > > setting the version to 0. It would be useful to get Jose's opinion
>> on
>> >> > this
>> >> > > since he introduced version 0 in the kraft.version feature.
>> >> > >
>> >> > > Thanks,
>> >> > >
>> >> > > Jun
>> >> > >
>> >> > > On Sun, Jun 30, 2024 at 11:48 PM David Jacot
>> >> > >> > >
>> >> > > wrote:
>> >> > >
>> >> > >> Hi Jun, Colin,
>> >> > >>
>> >> > >> Have we considered sticking with the range going from version 1 to
>> N
>> >> > where
>> >> > >> version 1 would be the equivalent of "disabled"? In the
>> group.version
>> >> > case,
>> >> > >> we could introduce group.version=1 that does basically nothing and
>> >> > >> group.version=2 that enables the new protocol. I suppose that we
>> could
>> >> > do
>> >> > >> the same for the other features. I agree that it is less elegant
>> but
>> >> it
>> >> > >> would avoid all the backward compatibility issues.
>> >> > >>
>> >> > >> Best,
>> >> > >> David
>> >> > >>
>> >> > >> On Fri, Jun 28, 2024 at 6:02 PM Jun Rao 
>> >> > wrote:
>> >> > >>
>> >> > >> > Hi, Colin,
>> >> > >> >
>> >> > >> > Yes, #3 is the scenario that I was thinking about.
>> >> > >> >
>> >> > >> > In either approach, there will be some information missing in the
>> >> old
>> >> > >> > client. It seems that we should just pick the one that's less
>> wrong.
>> >> > In
>> >> > >> the
>> >> > >> > more common case when a feature is finalized on the server,
>> >> > presenting a
>> >> > >> > supported feature with a range of 1-1 seems less wrong than
>> omitting
>> >> > it
>> >> > >> in
>> >> > >> > the output of "kafka-features describe".
>> >> > >> >
>> >> > >> > Thanks,
>> >> > >> >
>> >> > >> > Jun
>> >> > >> >
>> >> > >> > On Thu, Jun 27, 2024 at 9:52 PM Colin McCabe > >
>> >> > wrote:
>> >> > >> >
>> >> > >> > > Hi Jun,
>> >> > >> > >
>> >> > >> > > This is a fair question. I think there's a few different
>> scenarios
>> >> > to
>> >> > >> > > consider:
>> >> > >> > >
>> >> > >> > > 1. mixed server software 

[jira] [Created] (KAFKA-17216) StreamsConfig STATE_DIR_CONFIG

2024-07-29 Thread raphaelauv (Jira)
raphaelauv created KAFKA-17216:
--

 Summary: StreamsConfig STATE_DIR_CONFIG
 Key: KAFKA-17216
 URL: https://issues.apache.org/jira/browse/KAFKA-17216
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.8.0
Reporter: raphaelauv


I can't use the class StreamsConfig.

It fails with: Caused by: java.lang.ExceptionInInitializerError at 
StreamsConfig.java:866

The problem is not present in 3.7.0.
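
A minimal snippet along these lines should be enough to reproduce the report 
(the application id and bootstrap servers are placeholders):

{code:java}
import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class StreamsConfigRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "repro-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // With kafka-streams 3.8.0 on the classpath this reportedly fails while
        // initializing the StreamsConfig class; with 3.7.0 it constructs fine.
        new StreamsConfig(props);
    }
}
{code}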



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #3149

2024-07-29 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-512: make Record Headers available in onAcknowledgement

2024-07-29 Thread Rich C.
Hi Andrew,

Thanks for the feedback. I have updated KIP-512 and addressed AS2, AS3, and
AS4. For AS1, let's wait for further responses from the community.


Regards,
Rich
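
(For context, a rough sketch of the overloaded-callback alternative mentioned in
AS1 below could look like the following. The three-argument onAcknowledgement
overload is hypothetical, taken from this discussion rather than from the current
ProducerInterceptor API, and the "send-ts" header name is only an example.)

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.Headers;

import java.nio.charset.StandardCharsets;
import java.util.Map;

public class TracingProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Stamp the send time into a header so it can be read back on acknowledgement.
        record.headers().add("send-ts",
                Long.toString(System.currentTimeMillis()).getBytes(StandardCharsets.UTF_8));
        return record;
    }

    // Hypothetical overload proposed in the discussion; not part of the current API.
    public void onAcknowledgement(RecordMetadata metadata, Exception exception, Headers headers) {
        Header sendTs = headers.lastHeader("send-ts");
        if (exception == null && sendTs != null) {
            long latencyMs = System.currentTimeMillis()
                    - Long.parseLong(new String(sendTs.value(), StandardCharsets.UTF_8));
            // report latencyMs to a metrics/tracing backend
        }
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Today only metadata and exception are available here.
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}

With the current API only the two-argument onAcknowledgement exists, so headers set
in onSend are not visible on the acknowledgement path; that gap is what the KIP is
trying to close.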


On Mon, Jul 29, 2024 at 5:59 AM Andrew Schofield 
wrote:

> Hi,
> Thanks for adding the detail. It seems quite straightforward to
> implement in the producer code.
>
> AS1: Personally, and of course this is a matter of taste and just one
> opinion, I don’t like adding Headers to RecordMetadata. It seems to me
> that RecordMetadata is information about the record that’s been produced
> whereas the Headers are really part of the record itself. So, I prefer the
> alternative which overloads ProducerInterceptor.onAcknowledgement.
>
> AS2: ProducerBatch and FutureRecordMetadata are both internal classes
> and do not need to be documented in the KIP.
>
> AS3: This KIP is adding rather than replacing the constructor for
> RecordMetadata.
> You should define the value for the Headers if an existing constructor
> without headers is used.
>
> AS4: You should add a method `Headers headers()` to RecordMetadata.
>
>
> I wonder what other community members think about whether it’s a good
> idea to extend RecordMetadata with the headers.
>
> Thanks,
> Andrew
>
> > On 29 Jul 2024, at 05:36, Rich C.  wrote:
> >
> > Hi all,
> >
> > Thank you for the positive feedback. I added proposal changes to KIP-512
> > and included a FAQ section to address some concerns.
> >
> > Hi Andrew, yes, this KIP focuses on
> > `ProducerInterceptor.onAcknowledgement`. I added FAQ#3 to explain that.
> >
> > Hi Matthias, for your question about "RecordMetadata being Kafka
> metadata" in
> > this thread
> > <
> https://lists.apache.org/list?dev@kafka.apache.org:lte=1M:make%20Record%20Headers%20available%20in%20onAcknowledgement
> >,
> > I added FAQ#2 to explain that. If I have missed any documentation
> regarding
> > the design of RecordMetadata, please let me know.
> >
> > Regards,
> > Rich
> >
> >
> > On Fri, Jul 26, 2024 at 4:00 PM Andrew Schofield <
> andrew_schofi...@live.com>
> > wrote:
> >
> >> Hi Rich,
> >> Thanks for resurrecting this KIP. It seems like a useful idea to me and
> >> I’d be interested in seeing the proposed public interfaces.
> >>
> >> I note that you specifically called out the
> >> ProducerInterceptor.onAcknowledgement
> >> method, as opposed to the producer Callback.onCompletion method.
> >>
> >> Thanks,
> >> Andrew
> >>
> >>> On 26 Jul 2024, at 04:54, Rich C.  wrote:
> >>>
> >>> Hi Kevin,
> >>>
> >>> Thanks for your support.
> >>>
> >>> Hi Matthias,
> >>>
> >>> I apologize for the confusion. I've deleted the Public Interface
> sections
> >>> for now. I think we should focus on discussing its necessity with the
> >>> community. I'll let it sit for a few more days, and if there are no
> >>> objections, I will propose changes over the weekend and share them here
> >>> again.
> >>>
> >>> Regards,
> >>> Rich
> >>>
> >>>
> >>> On Thu, Jul 25, 2024 at 5:51 PM Matthias J. Sax 
> >> wrote:
> >>>
>  Rich,
> 
>  thanks for resurrecting this KIP. I was not part of the original
>  discussion back in the day, but personally agree with your assessment
>  that making headers available in the callbacks would make developer's
>  life much simpler.
> 
>  For the KIP itself, starting with "Public Interface" section,
> everything
>  is formatted as "strike through". Can you fix this? It's confusing as
>  it's apparently not correctly formatted, but unclear which (if any)
>  parts should be formatted like this. In general, wiki pages have
>  history, so strike-through should be used rather rarely but the wiki
>  page should just contain the latest proposal. (If one want to see the
>  history, it's there anyway).
> 
> 
>  -Matthias
> 
>  On 7/23/24 6:36 AM, Kevin Lam wrote:
> > Hi,
> >
> > Thanks for starting the discussion. Latency Measurement and Tracing
> > Completeness are both good reasons to support this feature, and would
> >> be
> > interested to see this move forward.
> >
> > On Mon, Jul 22, 2024 at 11:15 PM Rich C. 
> >> wrote:
> >
> >> Hi Everyone,
> >>
> >> I hope this email finds you well.
> >>
> >> I would like to start a discussion on KIP-512. The initial version
> of
> >> KIP-512 was created in 2019, and I have resurrected it in 2024 with
> >> more
> >> details about the motivation behind it.
> >>
> >> You can view the current version of the KIP here: KIP-512: Make
> Record
> >> Headers Available in onAcknowledgement.
> >> <
> >>
> 
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3A+make+Record+Headers+available+in+onAcknowledgement
> >>>
> >>
> >> Let's focus on discussing the necessity of this feature first. If we
>  agree
> >> on its importance, we can then move on to discussing the proposed
>  changes.
> >>
> >> Looking forward to your feedback.
> 

[jira] [Resolved] (KAFKA-17186) Cannot receive message after stopping Source Mirror Maker 2

2024-07-29 Thread George Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

George Yang resolved KAFKA-17186.
-
Resolution: Not A Problem

This was caused by configuration issues, which have been fixed.

> Cannot receive message after stopping Source Mirror Maker 2
> ---
>
> Key: KAFKA-17186
> URL: https://issues.apache.org/jira/browse/KAFKA-17186
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 3.7.1
> Environment: Source Kafka Cluster per Node:
> CPU(s): 32
> Memory: 32G/1.1G free
> Target Kafka Cluster standalone Node:
> CPU(s): 24
> Memory: 30G/909M free
> Kafka Version 3.7
> Mirrormaker Version 3.7.1
>Reporter: George Yang
>Priority: Major
> Attachments: image-2024-07-25-14-14-21-327.png, mirrorMaker.out
>
>
> Deploy nodes 1, 2, and 3 in Data Center A, with MM2 service deployed on node 
> 1. Deploy node 1 in Data Center B, with MM2 service also deployed on node 1. 
> Currently, a service on node 1 in Data Center A acts as a producer sending 
> messages to the `myTest` topic. A service in Data Center B acts as a consumer 
> listening to `A.myTest`. 
> The issue arises when MM2 on node 1 in Data Center A is stopped: the consumer 
> in Data Center B ceases to receive messages. Even after I restarting MM2 in 
> Data Center A, the consumer in Data Center B still does not receive messages 
> until approximately 5 minutes later when a rebalance occurs, at which point 
> it begins receiving messages again.
>  
> [Logs From Consumer on Data Center B]
> [2024-07-23 17:29:17,270] INFO [MirrorCheckpointConnector|worker] refreshing 
> consumer groups took 185 ms (org.apache.kafka.connect.mirror.Scheduler:95)
> [2024-07-23 17:29:19,189] INFO [MirrorCheckpointConnector|worker] refreshing 
> consumer groups took 365 ms (org.apache.kafka.connect.mirror.Scheduler:95)
> [2024-07-23 17:29:22,271] INFO [MirrorCheckpointConnector|worker] refreshing 
> consumer groups took 186 ms (org.apache.kafka.connect.mirror.Scheduler:95)
> [2024-07-23 17:29:24,193] INFO [MirrorCheckpointConnector|worker] refreshing 
> consumer groups took 369 ms (org.apache.kafka.connect.mirror.Scheduler:95)
> [2024-07-23 17:29:25,377] INFO [Worker clientId=B->A, groupId=B-mm2] 
> Rebalance started 
> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:242)
> [2024-07-23 17:29:25,377] INFO [Worker clientId=B->A, groupId=B-mm2] 
> (Re-)joining group 
> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:604)
> [2024-07-23 17:29:25,386] INFO [Worker clientId=B->A, groupId=B-mm2] 
> Successfully joined group with generation Generation\{generationId=52, 
> memberId='B->A-adc19038-a8b6-40fb-9bf6-249f866944ab', protocol='sessioned'} 
> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:665)
> [2024-07-23 17:29:25,390] INFO [Worker clientId=B->A, groupId=B-mm2] 
> Successfully synced group in generation Generation\{generationId=52, 
> memberId='B->A-adc19038-a8b6-40fb-9bf6-249f866944ab', protocol='sessioned'} 
> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:842)
> [2024-07-23 17:29:25,390] INFO [Worker clientId=B->A, groupId=B-mm2] Joined 
> group at generation 52 with protocol version 2 and got assignment: 
> Assignment\{error=0, leader='B->A-adc19038-a8b6-40fb-9bf6-249f866944ab', 
> leaderUrl='NOTUSED', offset=1360, connectorIds=[MirrorCheckpointConnector], 
> taskIds=[MirrorCheckpointConnector-0, MirrorCheckpointConnector-1, 
> MirrorCheckpointConnector-2], revokedConnectorIds=[], revokedTaskIds=[], 
> delay=0} with rebalance delay: 0 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:2580)
> [2024-07-23 17:29:25,390] INFO [Worker clientId=B->A, groupId=B-mm2] Starting 
> connectors and tasks using config offset 1360 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1921)
> [2024-07-23 17:29:25,390] INFO [Worker clientId=B->A, groupId=B-mm2] Finished 
> starting connectors and tasks 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:1950)
> [2024-07-23 17:29:26,883] INFO [Worker clientId=A->B, groupId=A-mm2] 
> Rebalance started 
> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:242)
> [2024-07-23 17:29:26,883] INFO [Worker clientId=A->B, groupId=A-mm2] 
> (Re-)joining group 
> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:604)
> [2024-07-23 17:29:26,890] INFO [Worker clientId=A->B, groupId=A-mm2] 
> Successfully joined group with generation Generation\{generationId=143, 
> memberId='A->B-0d04e6c1-f12a-4121-89af-e9992a167a01', protocol='sessioned'} 
> (org.apache.kafka.connect.runtime.distributed.WorkerCoordinator:665)
> [2024-07-23 17:29:26,893] INFO [Worker clientId=A->B, groupId=A-mm2] 
> Successfully synced group in generation Generation\{generationId=143, 
> 

[jira] [Created] (KAFKA-17215) Remove get-prefix for all getters

2024-07-29 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-17215:
---

 Summary: Remove get-prefix for all getters
 Key: KAFKA-17215
 URL: https://issues.apache.org/jira/browse/KAFKA-17215
 Project: Kafka
  Issue Type: Improvement
  Components: streams, streams-test-utils
Reporter: Matthias J. Sax


Kafka traditionally does not use a `get` prefix for getter methods. However, 
for multiple public interfaces, we don't follow this common pattern, but 
actually have a get-prefix.

We might want to clean this up. The upcoming 4.0 release might be a good 
opportunity to deprecate existing methods and add them back with the "correct" 
name.

We should maybe also do multiple smaller KIPs instead of just one big KIP. We 
do know of the following
 * StreamsConfig (getMainConsumerConfigs, getRestoreConsumerConfigs, 
getGlobalConsumerConfigs, getProducerConfigs, getAdminConfigs, getClientTags, 
getKafkaClientSupplier – for some of these, we might even consider removing 
them; it's questionable if it makes sense to have them in the public API (cf 
[https://github.com/apache/kafka/pull/14548)] – we should also consider 
https://issues.apache.org/jira/browse/KAFKA-16945 for this work)
 * TopologyConfig (getTaskConfig)
 * KafkaClientSupplier (getAdmin, getProducer, getConsumer, getRestoreConsumer, 
getGlobalConsumer)
 * Contexts (maybe not worth it... we might deprecate the whole class soon):
 ** ProcessorContext (getStateStore)
 ** MockProcessorContext (getStateStore)
 ** api.ProcessingContext (getStateStore)
 ** api.FixedKeyProcessorContext (getStateStore)
 ** api.MockProcessorContext (getStateStore)
 * StateStore (getPosition)

 
 * IQv2: officially an evolving API (maybe we can rename in 4.0 directly w/o 
deprecation period, but might be nasty...)
 ** KeyQuery (getKey)
 ** Position (getTopics, getPartitionPositions)
 ** QueryResult (getExecutionInfo, getPosition, getFailureReason, 
getFailureMessage, getResult)
 ** RangeQuery (getLowerBound, getUpperBound)
 ** StateQueryRequest (getStoreName, getPositionBound, getQuery, getPartitions)
 ** StateQueryResult (getPartitionResults, getOnlyPartitionResult, 
getGlobalResult, getPosition)
 ** WindowKeyQuery (getKey, getTimeFrom, getTimeTo)
 ** WindowRangeQuery (getKey, getTimeFrom, getTimeTo)

 
 * TopologyTestDriver (getAllStateStores, getStateStore, getKeyValueStore, 
getTimestampedKeyValueStore, getVersionedKeyValueStore, getWindowStore, 
getTimestampedWindowStore, getSessionStore)
 * TestOutputTopic (getQueueSize)
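
As a rough illustration of the deprecate-and-add-back approach mentioned above 
(the class and method here are invented; the real scope would be defined by the 
KIP or KIPs):

{code:java}
public class QueueSizeExample {

    private final long queueSize = 0L;

    /**
     * @deprecated Since 4.0. Use {@link #queueSize()} instead.
     */
    @Deprecated
    public long getQueueSize() {
        return queueSize();
    }

    // New accessor without the get-prefix, following the usual Kafka naming convention.
    public long queueSize() {
        return queueSize;
    }
}
{code}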



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [kafka-clients] [ANNOUNCE] Apache Kafka 3.8.0

2024-07-29 Thread Colin McCabe
+1. Thanks, Josep!

Colin

On Mon, Jul 29, 2024, at 10:32, Chris Egerton wrote:
> Thanks for running the release, Josep!
> 
> 
> On Mon, Jul 29, 2024, 13:31 'Josep Prat' via kafka-clients 
>  wrote:
>> The Apache Kafka community is pleased to announce the release for Apache 
>> Kafka 3.8.0
>> 
>> This is a minor release and it includes fixes and improvements from 456 
>> JIRAs.
>> 
>> All of the changes in this release can be found in the release notes:
>> https://www.apache.org/dist/kafka/3.8.0/RELEASE_NOTES.html
>> 
>> An overview of the release can be found in our announcement blog post:
>> https://kafka.apache.org/blog#apache_kafka_380_release_announcement
>> 
>> You can download the source and binary release (Scala 2.12 and Scala 
>> 2.13) from:
>> https://kafka.apache.org/downloads#3.8.0
>> 
>> ---
>> 
>> 
>> Apache Kafka is a distributed streaming platform with four core APIs:
>> 
>> 
>> ** The Producer API allows an application to publish a stream of records to
>> one or more Kafka topics.
>> 
>> ** The Consumer API allows an application to subscribe to one or more
>> topics and process the stream of records produced to them.
>> 
>> ** The Streams API allows an application to act as a stream processor,
>> consuming an input stream from one or more topics and producing an
>> output stream to one or more output topics, effectively transforming the
>> input streams to output streams.
>> 
>> ** The Connector API allows building and running reusable producers or
>> consumers that connect Kafka topics to existing applications or data
>> systems. For example, a connector to a relational database might
>> capture every change to a table.
>> 
>> 
>> With these APIs, Kafka can be used for two broad classes of application:
>> 
>> ** Building real-time streaming data pipelines that reliably get data
>> between systems or applications.
>> 
>> ** Building real-time streaming applications that transform or react
>> to the streams of data.
>> 
>> 
>> Apache Kafka is in use at large and small companies worldwide, including
>> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
>> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>> 
>> A big thank you for the following 202 contributors to this release! 
>> (Please report an unintended omission)
>> 
>> Aadithya Chandra, Abhijeet Kumar, Abhinav Dixit, Adrian Preston, Afshin 
>> Moazami, Ahmed Najiub, Ahmed Sobeh, Akhilesh Chaganti, Almog Gavra, Alok 
>> Thatikunta, Alyssa Huang, Anatoly Popov, Andras Katona, Andrew 
>> Schofield, Anna Sophie Blee-Goldman, Antoine Pourchet, Anton Agestam, 
>> Anton Liauchuk, Anuj Sharma, Apoorv Mittal, Arnout Engelen, Arpit Goyal, 
>> Artem Livshits, Ashwin Pankaj, Ayoub Omari, Bruno Cadonna, Calvin Liu, 
>> Cameron Redpath, charliecheng630, Cheng-Kai, Zhang, Cheryl Simmons, Chia 
>> Chuan Yu, Chia-Ping Tsai, ChickenchickenLove, Chris Egerton, Chris 
>> Holland, Christo Lolov, Christopher Webb, Colin P. McCabe, Colt McNealy, 
>> cooper.ts...@suse.com, Vedarth Sharma, Crispin Bernier, Daan Gerits, 
>> David Arthur, David Jacot, David Mao, dengziming, Divij Vaidya, DL1231, 
>> Dmitry Werner, Dongnuo Lyu, Drawxy, Dung Ha, Edoardo Comar, Eduwer 
>> Camacaro, Emanuele Sabellico, Erik van Oosten, Eugene Mitskevich, Fan 
>> Yang, Federico Valeri, Fiore Mario Vitale, flashmouse, Florin Akermann, 
>> Frederik Rouleau, Gantigmaa Selenge, Gaurav Narula, ghostspiders, 
>> gongxuanzhang, Greg Harris, Gyeongwon Do, Hailey Ni, Hao Li, Hector 
>> Geraldino, highluck, hudeqi, Hy (하이), IBeyondy, Iblis Lin, Igor Soarez, 
>> ilyazr, Ismael Juma, Ivan Vaskevych, Ivan Yurchenko, James Faulkner, 
>> Jamie Holmes, Jason Gustafson, Jeff Kim, jiangyuan, Jim Galasyn, Jinyong 
>> Choi, Joel Hamill, John Doe zh2725284...@gmail.com, John Roesler, John 
>> Yu, Johnny Hsu, Jorge Esteban Quilcate Otoya, Josep Prat, José Armando 
>> García Sancio, Jun Rao, Justine Olshan, Kalpesh Patel, Kamal 
>> Chandraprakash, Ken Huang, Kirk True, Kohei Nozaki, Krishna Agarwal, 
>> KrishVora01, Kuan-Po (Cooper) Tseng, Kvicii, Lee Dongjin, Leonardo 
>> Silva, Lianet Magrans, LiangliangSui, Linu Shibu, lixinyang, Lokesh 
>> Kumar, Loïc GREFFIER, Lucas Brutschy, Lucia Cerchie, Luke Chen, 
>> Manikumar Reddy, mannoopj, Manyanda Chitimbo, Mario Pareja, Matthew de 
>> Detrich, Matthias Berndt, Matthias J. Sax, Matthias Sax, Max Riedel, 
>> Mayank Shekhar Narula, Michael Edgar, Michael Westerby, Mickael Maison, 
>> Mike Lloyd, Minha, Jeong, Murali Basani, n.izhikov, Nick Telford, Nikhil 
>> Ramakrishnan, Nikolay, Octavian Ciubotaru, Okada Haruki, Omnia G.H 
>> Ibrahim, Ori Hoch, Owen Leung, Paolo Patierno, Philip Nee, 
>> Phuc-Hong-Tran, PoAn Yang, Proven Provenzano, Qichao Chu, Ramin Gharib, 
>> Ritika Reddy, Rittika Adhikari, Rohan, Ron Dagostino, runom, rykovsi, 
>> Sagar Rao, Said Boudjelda, sanepal, Sanskar Jhajharia, Satish Duggana, 
>> Sean Quah, 

New release branch 3.9

2024-07-29 Thread Colin McCabe
Hi Kafka developers and friends,

As promised, we now have a release branch for the upcoming 3.9.0 release.
Trunk has been bumped to 4.0.0-SNAPSHOT.

I'll be going over the JIRAs to move every non-blocker from this release to
the next release.

From this point, most changes should go to trunk.
*Blockers (existing and new that we discover while testing the release)
will be double-committed. *Please discuss with your reviewer whether your
PR should go to trunk or to trunk+release so they can merge accordingly.

*Please help us test the release! *

best,
Colin


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #3148

2024-07-29 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-15522) Flaky test org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOneWayReplicationWithFrequentOffsetSyncs

2024-07-29 Thread Chris Egerton (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Egerton resolved KAFKA-15522.
---
Resolution: Fixed

> Flaky test 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.testOneWayReplicationWithFrequentOffsetSyncs
> --
>
> Key: KAFKA-15522
> URL: https://issues.apache.org/jira/browse/KAFKA-15522
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.6.0, 3.5.1
>Reporter: Josep Prat
>Priority: Major
>  Labels: flaky, flaky-test
>
> h3. Last seen: 
> https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14458/3/testReport/junit/org.apache.kafka.connect.mirror.integration/MirrorConnectorsIntegrationExactlyOnceTest/Build___JDK_17_and_Scala_2_13___testOneWayReplicationWithFrequentOffsetSyncs__/
> h3. Error Message
> {code:java}
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: The request timed out.{code}
> h3. Stacktrace
> {code:java}
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: The request timed out. at 
> org.apache.kafka.connect.util.clusters.EmbeddedKafkaCluster.createTopic(EmbeddedKafkaCluster.java:427)
>  at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.createTopics(MirrorConnectorsIntegrationBaseTest.java:1276)
>  at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:235)
>  at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationBaseTest.startClusters(MirrorConnectorsIntegrationBaseTest.java:149)
>  at 
> org.apache.kafka.connect.mirror.integration.MirrorConnectorsIntegrationExactlyOnceTest.startClusters(MirrorConnectorsIntegrationExactlyOnceTest.java:51)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
> org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:728)
>  at 
> org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
>  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
>  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
>  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:128)
>  at 
> org.junit.jupiter.engine.extension.TimeoutExtension.interceptBeforeEachMethod(TimeoutExtension.java:78)
>  at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
>  at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
>  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
>  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
>  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
>  at 
> org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
>  at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
>  at 
> org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
>  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeMethodInExtensionContext(ClassBasedTestDescriptor.java:521)
>  at 
> org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$synthesizeBeforeEachMethodAdapter$23(ClassBasedTestDescriptor.java:506)
>  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeEachMethods$3(TestMethodTestDescriptor.java:175)
>  at 
> org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeBeforeMethodsOrCallbacksUntilExceptionOccurs$6(TestMethodTestDescriptor.java:203)
>  at 
> org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
>  at 
> 

[jira] [Created] (KAFKA-17214) Add 3.8.0 Streams and Core to system tests

2024-07-29 Thread Josep Prat (Jira)
Josep Prat created KAFKA-17214:
--

 Summary: Add 3.8.0 Streams and Core to system tests
 Key: KAFKA-17214
 URL: https://issues.apache.org/jira/browse/KAFKA-17214
 Project: Kafka
  Issue Type: Bug
Reporter: Josep Prat


As per the Release Instructions, we should add the 3.8.0 version to the system tests. 
Example PRs:
 * Broker and clients: [https://github.com/apache/kafka/pull/12210]
 * Streams: [https://github.com/apache/kafka/pull/12209]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [kafka-clients] [ANNOUNCE] Apache Kafka 3.8.0

2024-07-29 Thread Chris Egerton
Thanks for running the release, Josep!

On Mon, Jul 29, 2024, 13:31 'Josep Prat' via kafka-clients <
kafka-clie...@googlegroups.com> wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 3.8.0
>
> This is a minor release and it includes fixes and improvements from 456
> JIRAs.
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/3.8.0/RELEASE_NOTES.html
>
> An overview of the release can be found in our announcement blog post:
> https://kafka.apache.org/blog#apache_kafka_380_release_announcement
>
> You can download the source and binary release (Scala 2.12 and Scala
> 2.13) from:
> https://kafka.apache.org/downloads#3.8.0
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you for the following 202 contributors to this release!
> (Please report an unintended omission)
>
> Aadithya Chandra, Abhijeet Kumar, Abhinav Dixit, Adrian Preston, Afshin
> Moazami, Ahmed Najiub, Ahmed Sobeh, Akhilesh Chaganti, Almog Gavra, Alok
> Thatikunta, Alyssa Huang, Anatoly Popov, Andras Katona, Andrew
> Schofield, Anna Sophie Blee-Goldman, Antoine Pourchet, Anton Agestam,
> Anton Liauchuk, Anuj Sharma, Apoorv Mittal, Arnout Engelen, Arpit Goyal,
> Artem Livshits, Ashwin Pankaj, Ayoub Omari, Bruno Cadonna, Calvin Liu,
> Cameron Redpath, charliecheng630, Cheng-Kai, Zhang, Cheryl Simmons, Chia
> Chuan Yu, Chia-Ping Tsai, ChickenchickenLove, Chris Egerton, Chris
> Holland, Christo Lolov, Christopher Webb, Colin P. McCabe, Colt McNealy,
> cooper.ts...@suse.com, Vedarth Sharma, Crispin Bernier, Daan Gerits,
> David Arthur, David Jacot, David Mao, dengziming, Divij Vaidya, DL1231,
> Dmitry Werner, Dongnuo Lyu, Drawxy, Dung Ha, Edoardo Comar, Eduwer
> Camacaro, Emanuele Sabellico, Erik van Oosten, Eugene Mitskevich, Fan
> Yang, Federico Valeri, Fiore Mario Vitale, flashmouse, Florin Akermann,
> Frederik Rouleau, Gantigmaa Selenge, Gaurav Narula, ghostspiders,
> gongxuanzhang, Greg Harris, Gyeongwon Do, Hailey Ni, Hao Li, Hector
> Geraldino, highluck, hudeqi, Hy (하이), IBeyondy, Iblis Lin, Igor Soarez,
> ilyazr, Ismael Juma, Ivan Vaskevych, Ivan Yurchenko, James Faulkner,
> Jamie Holmes, Jason Gustafson, Jeff Kim, jiangyuan, Jim Galasyn, Jinyong
> Choi, Joel Hamill, John Doe zh2725284...@gmail.com, John Roesler, John
> Yu, Johnny Hsu, Jorge Esteban Quilcate Otoya, Josep Prat, José Armando
> García Sancio, Jun Rao, Justine Olshan, Kalpesh Patel, Kamal
> Chandraprakash, Ken Huang, Kirk True, Kohei Nozaki, Krishna Agarwal,
> KrishVora01, Kuan-Po (Cooper) Tseng, Kvicii, Lee Dongjin, Leonardo
> Silva, Lianet Magrans, LiangliangSui, Linu Shibu, lixinyang, Lokesh
> Kumar, Loïc GREFFIER, Lucas Brutschy, Lucia Cerchie, Luke Chen,
> Manikumar Reddy, mannoopj, Manyanda Chitimbo, Mario Pareja, Matthew de
> Detrich, Matthias Berndt, Matthias J. Sax, Matthias Sax, Max Riedel,
> Mayank Shekhar Narula, Michael Edgar, Michael Westerby, Mickael Maison,
> Mike Lloyd, Minha, Jeong, Murali Basani, n.izhikov, Nick Telford, Nikhil
> Ramakrishnan, Nikolay, Octavian Ciubotaru, Okada Haruki, Omnia G.H
> Ibrahim, Ori Hoch, Owen Leung, Paolo Patierno, Philip Nee,
> Phuc-Hong-Tran, PoAn Yang, Proven Provenzano, Qichao Chu, Ramin Gharib,
> Ritika Reddy, Rittika Adhikari, Rohan, Ron Dagostino, runom, rykovsi,
> Sagar Rao, Said Boudjelda, sanepal, Sanskar Jhajharia, Satish Duggana,
> Sean Quah, Sebastian Marsching, Sebastien Viale, Sergio Troiano, Sid
> Yagnik, Stanislav Kozlovski, Stig Døssing, Sudesh Wasnik, TaiJuWu,
> TapDang, testn, TingIāu Ting Kì, vamossagar12, Vedarth
> Sharma, Victor van den 

[ANNOUNCE] Apache Kafka 3.8.0

2024-07-29 Thread Josep Prat
The Apache Kafka community is pleased to announce the release for Apache 
Kafka 3.8.0


This is a minor release and it includes fixes and improvements from 456 
JIRAs.


All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/3.8.0/RELEASE_NOTES.html

An overview of the release can be found in our announcement blog post:
https://kafka.apache.org/blog#apache_kafka_380_release_announcement

You can download the source and binary release (Scala 2.12 and Scala 
2.13) from:

https://kafka.apache.org/downloads#3.8.0

---


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you for the following 202 contributors to this release! 
(Please report an unintended omission)


Aadithya Chandra, Abhijeet Kumar, Abhinav Dixit, Adrian Preston, Afshin 
Moazami, Ahmed Najiub, Ahmed Sobeh, Akhilesh Chaganti, Almog Gavra, Alok 
Thatikunta, Alyssa Huang, Anatoly Popov, Andras Katona, Andrew 
Schofield, Anna Sophie Blee-Goldman, Antoine Pourchet, Anton Agestam, 
Anton Liauchuk, Anuj Sharma, Apoorv Mittal, Arnout Engelen, Arpit Goyal, 
Artem Livshits, Ashwin Pankaj, Ayoub Omari, Bruno Cadonna, Calvin Liu, 
Cameron Redpath, charliecheng630, Cheng-Kai, Zhang, Cheryl Simmons, Chia 
Chuan Yu, Chia-Ping Tsai, ChickenchickenLove, Chris Egerton, Chris 
Holland, Christo Lolov, Christopher Webb, Colin P. McCabe, Colt McNealy, 
cooper.ts...@suse.com, Vedarth Sharma, Crispin Bernier, Daan Gerits, 
David Arthur, David Jacot, David Mao, dengziming, Divij Vaidya, DL1231, 
Dmitry Werner, Dongnuo Lyu, Drawxy, Dung Ha, Edoardo Comar, Eduwer 
Camacaro, Emanuele Sabellico, Erik van Oosten, Eugene Mitskevich, Fan 
Yang, Federico Valeri, Fiore Mario Vitale, flashmouse, Florin Akermann, 
Frederik Rouleau, Gantigmaa Selenge, Gaurav Narula, ghostspiders, 
gongxuanzhang, Greg Harris, Gyeongwon Do, Hailey Ni, Hao Li, Hector 
Geraldino, highluck, hudeqi, Hy (하이), IBeyondy, Iblis Lin, Igor Soarez, 
ilyazr, Ismael Juma, Ivan Vaskevych, Ivan Yurchenko, James Faulkner, 
Jamie Holmes, Jason Gustafson, Jeff Kim, jiangyuan, Jim Galasyn, Jinyong 
Choi, Joel Hamill, John Doe zh2725284...@gmail.com, John Roesler, John 
Yu, Johnny Hsu, Jorge Esteban Quilcate Otoya, Josep Prat, José Armando 
García Sancio, Jun Rao, Justine Olshan, Kalpesh Patel, Kamal 
Chandraprakash, Ken Huang, Kirk True, Kohei Nozaki, Krishna Agarwal, 
KrishVora01, Kuan-Po (Cooper) Tseng, Kvicii, Lee Dongjin, Leonardo 
Silva, Lianet Magrans, LiangliangSui, Linu Shibu, lixinyang, Lokesh 
Kumar, Loïc GREFFIER, Lucas Brutschy, Lucia Cerchie, Luke Chen, 
Manikumar Reddy, mannoopj, Manyanda Chitimbo, Mario Pareja, Matthew de 
Detrich, Matthias Berndt, Matthias J. Sax, Matthias Sax, Max Riedel, 
Mayank Shekhar Narula, Michael Edgar, Michael Westerby, Mickael Maison, 
Mike Lloyd, Minha, Jeong, Murali Basani, n.izhikov, Nick Telford, Nikhil 
Ramakrishnan, Nikolay, Octavian Ciubotaru, Okada Haruki, Omnia G.H 
Ibrahim, Ori Hoch, Owen Leung, Paolo Patierno, Philip Nee, 
Phuc-Hong-Tran, PoAn Yang, Proven Provenzano, Qichao Chu, Ramin Gharib, 
Ritika Reddy, Rittika Adhikari, Rohan, Ron Dagostino, runom, rykovsi, 
Sagar Rao, Said Boudjelda, sanepal, Sanskar Jhajharia, Satish Duggana, 
Sean Quah, Sebastian Marsching, Sebastien Viale, Sergio Troiano, Sid 
Yagnik, Stanislav Kozlovski, Stig Døssing, Sudesh Wasnik, TaiJuWu, 
TapDang, testn, TingIāu Ting Kì, vamossagar12, Vedarth 
Sharma, Victor van den Hoven, Vikas Balani, Viktor Somogyi-Vass, Vincent 
Rose, Walker Carlson, wernerdv, Yang Yu, Yash Mayya, yicheny, Yu-Chen 
Lai, yuz10, Zhifeng Chen, Zihao Lin, Ziming Deng, 谭九鼎


We welcome your help and feedback. For more information on how to
report problems, and 

Re: [PR] Add blog post for 3.8 release [kafka-site]

2024-07-29 Thread via GitHub


jlprat merged PR #614:
URL: https://github.com/apache/kafka-site/pull/614


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Minor update website 38 release [kafka-site]

2024-07-29 Thread via GitHub


jlprat merged PR #616:
URL: https://github.com/apache/kafka-site/pull/616


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Add blog post for 3.8 release [kafka-site]

2024-07-29 Thread via GitHub


kirktrue commented on code in PR #614:
URL: https://github.com/apache/kafka-site/pull/614#discussion_r1695573298


##
blog.html:
##
@@ -22,6 +22,106 @@
 
 
 Blog
+
+
+
+Apache 
Kafka 3.8.0 Release Announcement
+
+26 July 2024 - Josep Prat (https://twitter.com/jlprat;>@jlprat)
+We are proud to announce the release of Apache Kafka 3.8.0. 
This release contains many new features and improvements. This blog post will 
highlight some of the more prominent features. For a full list of changes, be 
sure to check the https://downloads.apache.org/kafka/3.8.0/RELEASE_NOTES.html;>release 
notes.
+See the https://kafka.apache.org/documentation.html#upgrade_3_8_0;>Upgrading to 
3.8.0 from any version 0.8.x through 3.7.x section in the documentation for 
the list of notable changes and detailed upgrade steps.
+
+In a previous release, 3.6,
+https://kafka.apache.org/38/documentation.html#tiered_storage;>tiered 
storage was released as early access feature.
+In this release, Tiered Storage now supports clusters 
configured with multiple log directories (i.e. JBOD feature). This feature 
still remains as early access.
+
+
+In the last release, 3.7, https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft;>KIP-858
+was released in early access. Since this version, JBOD in 
KRaft is no longer considered an early access feature.
+
+
+Up until now, only the default compression level was used 
by Apache Kafka. From this version on, a configuration mechanism to specify 
compression level is included. See https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Support+Compression+Level;>KIP-390
 for more details.
+
+
+In the last release, 3.7, https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol;>KIP-848
 The Next Generation of the Consumer Rebalance Protocol was made available 
as early access. This version includes numerous bug
+fixes and the community is encouraged to test and provide 
feedback. https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes;>See
 the early access release notes for more information.

Review Comment:
   Yes. This seems fine. Thank you!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-17202) EosIntegrationTest.verifyChangelogMaxRecordOffsetMatchesCheckpointedOffset leaks consumers

2024-07-29 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-17202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-17202.

Fix Version/s: 3.9.0
   Resolution: Fixed

> EosIntegrationTest.verifyChangelogMaxRecordOffsetMatchesCheckpointedOffset 
> leaks consumers
> --
>
> Key: KAFKA-17202
> URL: https://issues.apache.org/jira/browse/KAFKA-17202
> Project: Kafka
>  Issue Type: Test
>  Components: streams
>Affects Versions: 3.9.0
>Reporter: Greg Harris
>Assignee: TengYao Chi
>Priority: Minor
>  Labels: newbie
> Fix For: 3.9.0
>
>
> This method creates a KafkaConsumer, but does not close it.
> We can use a try-with-resources to ensure the consumer is closed prior to 
> returning or throwing from this function.
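
A minimal sketch of the suggested shape, with placeholder configuration rather 
than the actual test code:

{code:java}
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

import java.util.Properties;

public class ConsumerCleanupSketch {

    static long readChangelogEndOffset(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

        // The consumer is closed even if the body throws, avoiding the leak.
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // ... use the consumer to read the changelog end offset ...
            return 0L; // placeholder result
        }
    }
}
{code}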



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Help for contributing

2024-07-29 Thread meisam jafari
Thanks 

On Mon, Jul 29, 2024, 7:57 PM Josep Prat 
wrote:

> Oops!
> Funny enough, both links work for me...
>
> On Mon, Jul 29, 2024 at 6:25 PM Chia-Ping Tsai  wrote:
>
> > It seems the link offered by Josep has a small typo. Let's me fix it:
> >
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20Open%20AND%20labels%20%3D%20newbie%20AND%20assignee%20in%20(EMPTY)
> >
> > Best,
> > Chia-Ping
> >
> > Josep Prat  於 2024年7月29日 週一 下午11:01寫道:
> >
> > > Hello there,
> > > You can also take a look at this filter in Jira:
> > >
> > >
> >
> https://issues.apache.org/jira/browse/KAFKA-17201?jql=project+%3D+KAFKA+AND+status+%3D+Open+AND+labels+%3D+newbie+AND+assignee+in+%28EMPTY%29
> > > This shows you all unassigned issues in Kafka that have the label
> > "newbie".
> > > Issues under this label are usually good first issues.
> > >
> > > Best,
> > >
> > > On Sun, Jul 28, 2024 at 3:27 PM jiang dou 
> wrote:
> > >
> > > > Hello:
> > > >
> > > > You can find out how to contribute here:
> > > > https://kafka.apache.org/contributing
> > > >
> > > > Thank you
> > > >
> > > > meisam jafari  于2024年7月28日周日 13:50写道:
> > > >
> > > > > Hello there,
> > > > >
> > > > > I am very enthusiastic to contributing to the kafka, recently I
> read
> > > the
> > > > > kafka definitive guide book and got insights how kafka works, then
> I
> > > got
> > > > > kafka source code and built it and ran some unit tests. Now I want
> to
> > > > work
> > > > > on some real issues to start my gurney of becoming a kafka
> comitter.
> > > > Could
> > > > > you please help me how to take issues in jira to get started?.
> > > > >
> > > > > Thanks
> > > > >
> > > >
> > >
> > >
> > > --
> > > [image: Aiven] 
> > >
> > > *Josep Prat*
> > > Open Source Engineering Director, *Aiven*
> > > josep.p...@aiven.io   |   +491715557497
> > > aiven.io    |   <
> > https://www.facebook.com/aivencloud
> > > >
> > >      <
> > > https://twitter.com/aiven_io>
> > > *Aiven Deutschland GmbH*
> > > Alexanderufer 3-7, 10117 Berlin
> > > Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
> > > Anna Richardson, Kenneth Chen
> > > Amtsgericht Charlottenburg, HRB 209739 B
> > >
> >
>
>
> --
> [image: Aiven] 
>
> *Josep Prat*
> Open Source Engineering Director, *Aiven*
> josep.p...@aiven.io   |   +491715557497
> aiven.io    |    >
>      <
> https://twitter.com/aiven_io>
> *Aiven Deutschland GmbH*
> Alexanderufer 3-7, 10117 Berlin
> Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
> Anna Richardson, Kenneth Chen
> Amtsgericht Charlottenburg, HRB 209739 B
>


Re: Help for contributing

2024-07-29 Thread Josep Prat
Oops!
Funny enough, both links work for me...

On Mon, Jul 29, 2024 at 6:25 PM Chia-Ping Tsai  wrote:

> It seems the link offered by Josep has a small typo. Let's me fix it:
>
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20Open%20AND%20labels%20%3D%20newbie%20AND%20assignee%20in%20(EMPTY)
>
> Best,
> Chia-Ping
>
> Josep Prat  於 2024年7月29日 週一 下午11:01寫道:
>
> > Hello there,
> > You can also take a look at this filter in Jira:
> >
> >
> https://issues.apache.org/jira/browse/KAFKA-17201?jql=project+%3D+KAFKA+AND+status+%3D+Open+AND+labels+%3D+newbie+AND+assignee+in+%28EMPTY%29
> > This shows you all unassigned issues in Kafka that have the label
> "newbie".
> > Issues under this label are usually good first issues.
> >
> > Best,
> >
> > On Sun, Jul 28, 2024 at 3:27 PM jiang dou  wrote:
> >
> > > Hello:
> > >
> > > You can find out how to contribute here:
> > > https://kafka.apache.org/contributing
> > >
> > > Thank you
> > >
> > > meisam jafari  于2024年7月28日周日 13:50写道:
> > >
> > > > Hello there,
> > > >
> > > > I am very enthusiastic to contributing to the kafka, recently I read
> > the
> > > > kafka definitive guide book and got insights how kafka works, then I
> > got
> > > > kafka source code and built it and ran some unit tests. Now I want to
> > > work
> > > > on some real issues to start my gurney of becoming a kafka comitter.
> > > Could
> > > > you please help me how to take issues in jira to get started?.
> > > >
> > > > Thanks
> > > >
> > >
> >
> >
> > --
> > [image: Aiven] 
> >
> > *Josep Prat*
> > Open Source Engineering Director, *Aiven*
> > josep.p...@aiven.io   |   +491715557497
> > aiven.io    |   <
> https://www.facebook.com/aivencloud
> > >
> >      <
> > https://twitter.com/aiven_io>
> > *Aiven Deutschland GmbH*
> > Alexanderufer 3-7, 10117 Berlin
> > Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
> > Anna Richardson, Kenneth Chen
> > Amtsgericht Charlottenburg, HRB 209739 B
> >
>


-- 
[image: Aiven] 

*Josep Prat*
Open Source Engineering Director, *Aiven*
josep.p...@aiven.io   |   +491715557497
aiven.io    |   
     
*Aiven Deutschland GmbH*
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
Anna Richardson, Kenneth Chen
Amtsgericht Charlottenburg, HRB 209739 B


Re: Help for contributing

2024-07-29 Thread Chia-Ping Tsai
It seems the link offered by Josep has a small typo. Let me fix it:

https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20Open%20AND%20labels%20%3D%20newbie%20AND%20assignee%20in%20(EMPTY)
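
For anyone who prefers to script the search, here is a minimal, untested sketch
that runs the same JQL query through Jira's REST search endpoint using only the
JDK's built-in HTTP client. It assumes the standard /rest/api/2/search API is
reachable anonymously on issues.apache.org, and the class name is made up:

```
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class NewbieIssueSearch {
    public static void main(String[] args) throws Exception {
        // Same filter as the link above: open, unassigned KAFKA issues labelled "newbie".
        String jql = "project = KAFKA AND status = Open AND labels = newbie AND assignee in (EMPTY)";
        String url = "https://issues.apache.org/jira/rest/api/2/search?maxResults=20&fields=key,summary&jql="
                + URLEncoder.encode(jql, StandardCharsets.UTF_8);

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // Prints the raw JSON; issue keys and summaries can be pulled out with any JSON library.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```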

Best,
Chia-Ping

Josep Prat  wrote on Mon, Jul 29, 2024 at 11:01 PM:

> Hello there,
> You can also take a look at this filter in Jira:
>
> https://issues.apache.org/jira/browse/KAFKA-17201?jql=project+%3D+KAFKA+AND+status+%3D+Open+AND+labels+%3D+newbie+AND+assignee+in+%28EMPTY%29
> This shows you all unassigned issues in Kafka that have the label "newbie".
> Issues under this label are usually good first issues.
>
> Best,
>
> On Sun, Jul 28, 2024 at 3:27 PM jiang dou  wrote:
>
> > Hello:
> >
> > You can find out how to contribute here:
> > https://kafka.apache.org/contributing
> >
> > Thank you
> >
> > meisam jafari  wrote on Sun, Jul 28, 2024 at 13:50:
> >
> > > Hello there,
> > >
> > > I am very enthusiastic about contributing to Kafka. Recently I read the
> > > Kafka definitive guide book and got insights into how Kafka works; then I
> > > got the Kafka source code, built it, and ran some unit tests. Now I want to
> > > work on some real issues to start my journey of becoming a Kafka committer.
> > > Could you please help me figure out how to pick up issues in Jira to get
> > > started?
> > >
> > > Thanks
> > >
> >
>
>


Re: Help for contributing

2024-07-29 Thread Josep Prat
Hello there,
You can also take a look at this filter in Jira:
https://issues.apache.org/jira/browse/KAFKA-17201?jql=project+%3D+KAFKA+AND+status+%3D+Open+AND+labels+%3D+newbie+AND+assignee+in+%28EMPTY%29
This shows you all unassigned issues in Kafka that have the label "newbie".
Issues under this label are usually good first issues.

Best,

On Sun, Jul 28, 2024 at 3:27 PM jiang dou  wrote:

> Hello:
>
> You can find out how to contribute here:
> https://kafka.apache.org/contributing
>
> Thank you
>
> meisam jafari  wrote on Sun, Jul 28, 2024 at 13:50:
>
> > Hello there,
> >
> > I am very enthusiastic about contributing to Kafka. Recently I read the
> > Kafka definitive guide book and got insights into how Kafka works; then I got
> > the Kafka source code, built it, and ran some unit tests. Now I want to
> > work on some real issues to start my journey of becoming a Kafka committer.
> > Could you please help me figure out how to pick up issues in Jira to get
> > started?
> >
> > Thanks
> >
>


-- 
*Josep Prat*
Open Source Engineering Director, *Aiven*
josep.p...@aiven.io   |   +491715557497
aiven.io
*Aiven Deutschland GmbH*
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa, Hannu Valtonen,
Anna Richardson, Kenneth Chen
Amtsgericht Charlottenburg, HRB 209739 B


Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2024-07-29 Thread Mickael Maison
Hi,

> My thinking is that any partition could go stale if there are no records
> being produced into it.

1. Why would the value become stale if there are no new records? The
lag should stay the same, no?

> If enough of such partitions are present and are owned by a single MM task, 
> an OOM could happen.

2. We already have a dozen metrics per partition [0]. Why do you
think adding a few more would cause OutOfMemory errors?
Each task should only emit metrics for partitions it owns.

> Regarding the scenario where the TTL value is lower than the refresh interval 
> - I believe that this is an edge case that we need to document and guard
> against, for example either failing to start on such a combination or
> resorting to a default value that would satisfy the constraint and logging an 
> error.

3. Can you add the behavior you propose to the KIP?

Thanks,
Mickael

0: 
https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceMetrics.java#L71-L106
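
To make the cache discussion concrete, here is a rough, hypothetical sketch of a
per-partition last-replicated-offset (LRO) cache with TTL-based eviction and a
fail-fast check for the TTL-vs-refresh-interval edge case. The class, method
names, and behavior are invented for illustration only and are not taken from
the KIP or from the existing MirrorSourceMetrics code:

```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.config.ConfigException;

// Hypothetical helper: tracks the last replicated offset per partition owned by
// a task, and drops entries that have not been updated within a TTL so that
// idle partitions cannot accumulate entries forever.
public class LastReplicatedOffsetCache {

    private static class Entry {
        final long offset;
        final long updatedAtMs;

        Entry(long offset, long updatedAtMs) {
            this.offset = offset;
            this.updatedAtMs = updatedAtMs;
        }
    }

    private final Map<TopicPartition, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMs;

    public LastReplicatedOffsetCache(long ttlMs, long refreshIntervalMs) {
        // The edge case raised in this thread: refuse to start if the TTL is
        // shorter than the metric refresh interval, since every entry would be
        // evicted before it could ever be read.
        if (ttlMs < refreshIntervalMs) {
            throw new ConfigException("LRO TTL (" + ttlMs + " ms) must not be smaller "
                    + "than the metric refresh interval (" + refreshIntervalMs + " ms)");
        }
        this.ttlMs = ttlMs;
    }

    public void record(TopicPartition partition, long offset, long nowMs) {
        cache.put(partition, new Entry(offset, nowMs));
    }

    // Returns the LRO, or null if the partition is unknown or its entry expired,
    // in which case no lag metric would be emitted for that partition.
    public Long lastReplicatedOffset(TopicPartition partition, long nowMs) {
        Entry entry = cache.get(partition);
        if (entry == null || nowMs - entry.updatedAtMs > ttlMs) {
            cache.remove(partition);
            return null;
        }
        return entry.offset;
    }
}
```

Whether the expiry check belongs in the metric refresh path or in the cache
itself is exactly the kind of detail it would help to spell out in the KIP.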

On Wed, May 22, 2024 at 9:18 PM Elxan Eminov  wrote:
>
> Hey Mickael,
> Just checking to see if you have any thoughts on this.
> thanks!
>
> On Thu, 11 Apr 2024 at 15:11, Elxan Eminov  wrote:
>
> > Hi Mickael!
> > Any thoughts on this?
> > Thanks!
> >
> > On Wed, 3 Apr 2024 at 13:21, Elxan Eminov  wrote:
> >
> >> Hi Mickael,
> >> Thanks for your response and apologies for a huge delay in mine.
> >>
> >> My thinking is that any partition could go stale if there are no records
> >> being produced into it. If enough of such partitions are present and are
> >> owned by a single MM task, an OOM could happen.
> >>
> >> Regarding the scenario where the TTL value is lower than the refresh
> >> interval - I believe that this is an edge case that we need to document and
> >> guard against, for example either failing to start on such a combination
> >> or resorting to a default value that would satisfy the constraint and
> >> logging an error.
> >>
> >> Thanks,
> >> Elkhan
> >>
> >> On Thu, 8 Feb 2024 at 14:17, Mickael Maison 
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> Thanks for the updates.
> >>> I'm wondering whether we really need the ttl eviction mechanism. The
> >>> motivation is to "avoid storing stale LRO entries which can cause an
> >>> eventual OOM error". How could it contain stale entries? I would
> >>> expect its cache to only contain entries for partitions assigned to
> >>> the task that owns it. Also what is the expected behavior if there's
> >>> no available LRO in the cache? If we keep this mechanism what happens
> >>> if its value is lower than
> >>> replication.record.lag.metric.refresh.interval?
> >>>
> >>> Thanks,
> >>> Mickael
> >>>
> >>> On Mon, Feb 5, 2024 at 5:23 PM Elxan Eminov 
> >>> wrote:
> >>> >
> >>> > Hi Mickael!
> >>> > Any further thoughts on this?
> >>> >
> >>> > Thanks,
> >>> > Elkhan
> >>> >
> >>> > On Thu, 18 Jan 2024 at 11:53, Mickael Maison
> >>> > wrote:
> >>> >
> >>> > > Hi Elxan,
> >>> > >
> >>> > > Thanks for the updates.
> >>> > >
> >>> > > We use dots to separate words in configuration names, so I think
> >>> > > replication.offset.lag.metric.last-replicated-offset.ttl should be
> >>> > > named replication.offset.lag.metric.last.replicated.offset.ttl
> >>> > > instead.
> >>> > >
> >>> > > About the names of the metrics, fair enough if you prefer keeping the
> >>> > > replication prefix. Out of the alternatives you mentioned, I think I
> >>> > > prefer replication-record-lag. I think the metrics and configuration
> >>> > > names should match too. Let's see what the others think about it.
> >>> > >
> >>> > > Thanks,
> >>> > > Mickael
> >>> > >
> >>> > > On Mon, Jan 15, 2024 at 9:50 PM Elxan Eminov <
> >>> elxanemino...@gmail.com>
> >>> > > wrote:
> >>> > > >
> >>> > > > Apologies, forgot to reply on your last comment about the metric
> >>> name.
> >>> > > > I believe both replication-lag and record-lag are a little too
> >>> abstract -
> >>> > > > what do you think about either leaving it as
> >>> replication-offset-lag or
> >>> > > > renaming to replication-record-lag?
> >>> > > >
> >>> > > > Thanks
> >>> > > >
> >>> > > > On Wed, 10 Jan 2024 at 15:31, Mickael Maison <
> >>> mickael.mai...@gmail.com>
> >>> > > > wrote:
> >>> > > >
> >>> > > > > Hi Elxan,
> >>> > > > >
> >>> > > > > Thanks for the KIP, it looks like a useful addition.
> >>> > > > >
> >>> > > > > Can you add to the KIP the default value you propose for
> >>> > > > > replication.lag.metric.refresh.interval? In MirrorMaker most
> >>> interval
> >>> > > > > configs can be set to -1 to disable them, will it be the case
> >>> for this
> >>> > > > > new feature or will this setting only accept positive values?
> >>> > > > > I also wonder if replication-lag, or record-lag would be clearer
> >>> names
> >>> > > > > instead of replication-offset-lag, WDYT?
> >>> > > > >
> >>> > > > > Thanks,
> >>> > > > > Mickael
> >>> > > > >
> >>> > > > > On Wed, Jan 3, 2024 at 6:15 PM Elxan Eminov <
> >>> 

Re: [DISCUSS] KIP-802: Validation Support for Kafka Connect SMT Options

2024-07-29 Thread Mickael Maison
Hi,

I've not received a reply from Gunnar since last month, so I'll pick
this KIP up.

Thanks,
Mickael




On Tue, Jun 18, 2024 at 6:01 PM Mickael Maison  wrote:
>
> Hi Gunnar,
>
> I think this KIP would be a great addition to Kafka Connect but it
> looks like it's been abandoned.
>
> Are you still interested in working on this? If you need some time or
> help, that's fine, just let us know.
> If not, no worries, I'm happy to pick it up if needed.
>
> Thanks,
> Mickael
>
> On Wed, Dec 22, 2021 at 11:21 AM Tom Bentley  wrote:
> >
> > Hi Gunnar,
> >
> > Thanks for the KIP, especially the careful reasoning about compatibility. I
> > think this would be a useful improvement. I have a few observations, which
> > are all about how we effectively communicate the contract to implementers:
> >
> > 1. I think it would be good for the Javadoc to give a bit more of a hint
> > about what the validate(Map) method is supposed to do: At least call
> > ConfigDef.validate(Map) with the provided configs (for implementers that
> > can be achieved via super.validate()), and optionally apply extra
> > validation for constraints that ConfigDef (and ConfigDef.Validator) cannot
> > check. I think typically that would be where there's a dependency between
> > two config parameters, e.g. if 'foo' is present that 'bar' must be too, or
> > 'baz' and 'qux' cannot have the same value.
> > 2. Can the Javadoc give a bit more detail about the return value of these
> > new methods? I'm not sure that the implementer of a Transformation would
> > necessarily know how the Config returned from validate(Map) might be
> > "updated", or that updating ConfigValue's errorMessages is the right way to
> > report config-specific errors. The KIP should be clear on how we expect
> > implementers to report errors due to dependencies between multiple config
> > parameters (must they be tied to a config parameter, or should the method
> > throw, for example?). I think this is a bit awkward, actually, since the
> > ConfigInfo structure used for the JSON REST response doesn't seem to have a
> > nice way to represent errors which are not associated with a config
> > parameter.
> > 3. It might also be worth calling out that the expectation is that a
> > successful return from the new validate() method should imply that
> > configure(Map) will succeed (to do otherwise undermines the value of the
> > validate endpoint). This makes me wonder about implementers, who might
> > defensively program their configure(Map) method to implement the same
> > checks. Therefore the contract should make clear that the Connect runtime
> > guarantees that validate(Map) will be called before configure(Map).
> >
> > I don't really like the idea of implementing more-or-less the same default
> > multiple times. Since these Transformation, Predicate etc will have a
> > common contract wrt validate() and configure(), I wondered whether there
> > was benefit in a common interface which Transformation etc could extend.
> > It's a bit tricky because Connector and Converter are not Configurable.
> > This was the best I could manage:
> >
> > ```
> > interface ConfigValidatable {
> >
> >     /**
> >      * Validate the given configuration values against the given configuration
> >      * definitions. This method will be called prior to the invocation of any
> >      * initializer method, such as {@link Connector#initialize(ConnectorContext)}
> >      * or {@link Configurable#configure(Map)}, and should report any errors in the
> >      * given configuration values using the errorMessages of the ConfigValues in
> >      * the returned Config. If the Config returned by this method has no errors,
> >      * then the initializer method should not throw due to bad configuration.
> >      *
> >      * @param configDef the configuration definition, which may be null.
> >      * @param configs the provided configuration values.
> >      * @return the updated configuration information given the current
> >      *         configuration values
> >      *
> >      * @since 3.2
> >      */
> >     default Config validate(ConfigDef configDef, Map<String, String> configs) {
> >         List<ConfigValue> configValues = configDef.validate(configs);
> >         return new Config(configValues);
> >     }
> > }
> > ```
> >
> > Note that the configDef is passed in, leaving it to the runtime to call
> > `thing.config()` to get the ConfigDef instance and validate whether it is
> > allowed to be null or not. The subinterfaces could override validate() to
> > define what the "initializer method" is in their case, and to indicate
> > whether configDef can actually be null.
> >
> > To be honest, I'm not really sure this is better, but I thought I'd suggest
> > it to see what others thought.
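> >
> > For illustration, a hypothetical SMT could combine the default above with a
> > cross-option check of the 'foo'/'bar' kind mentioned earlier. The option names
> > are invented, and the class implements ConfigValidatable directly purely for
> > the sake of this sketch:
> >
> > ```
> > import java.util.Map;
> >
> > import org.apache.kafka.common.config.Config;
> > import org.apache.kafka.common.config.ConfigDef;
> > import org.apache.kafka.common.config.ConfigValue;
> > import org.apache.kafka.connect.connector.ConnectRecord;
> > import org.apache.kafka.connect.transforms.Transformation;
> >
> > public class MyTransformation<R extends ConnectRecord<R>> implements Transformation<R>, ConfigValidatable {
> >
> >     private static final ConfigDef CONFIG_DEF = new ConfigDef()
> >             .define("foo", ConfigDef.Type.STRING, null, ConfigDef.Importance.MEDIUM, "made-up option")
> >             .define("bar", ConfigDef.Type.STRING, null, ConfigDef.Importance.MEDIUM, "made-up option");
> >
> >     @Override
> >     public Config validate(ConfigDef configDef, Map<String, String> configs) {
> >         // Run the regular per-option validation first (the default above).
> >         Config config = ConfigValidatable.super.validate(configDef, configs);
> >         // Then apply a constraint that ConfigDef alone cannot express:
> >         // if "foo" is set, "bar" must be set too.
> >         if (configs.containsKey("foo") && !configs.containsKey("bar")) {
> >             for (ConfigValue value : config.configValues()) {
> >                 if (value.name().equals("bar")) {
> >                     value.addErrorMessage("'bar' must be set when 'foo' is set");
> >                 }
> >             }
> >         }
> >         return config;
> >     }
> >
> >     @Override
> >     public void configure(Map<String, ?> configs) { }
> >
> >     @Override
> >     public R apply(R record) {
> >         return record;
> >     }
> >
> >     @Override
> >     public ConfigDef config() {
> >         return CONFIG_DEF;
> >     }
> >
> >     @Override
> >     public void close() { }
> > }
> > ```
> >
> > (Whether errors like this should instead be reported at a non-option level is
> > the awkward part I mentioned in point 2.)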
> >
> > Kind regards,
> >
> > Tom
> >
> > On Tue, Dec 21, 2021 at 6:46 PM Chris Egerton 
> > wrote:
> >
> > > Hi Gunnar,
> > >
> > > Thanks, this looks great. I'm ready to cast a non-binding on the vote
> > > thread when it comes.
> > >
> > > One small non-blocking nit: I like that you call out 

[jira] [Created] (KAFKA-17213) Make

2024-07-29 Thread xiaochen.zhou (Jira)
xiaochen.zhou created KAFKA-17213:
-

 Summary: Make 
 Key: KAFKA-17213
 URL: https://issues.apache.org/jira/browse/KAFKA-17213
 Project: Kafka
  Issue Type: New Feature
Reporter: xiaochen.zhou







