[jira] [Created] (KAFKA-16881) InitialState type leaks into the Connect REST API OpenAPI spec

2024-06-03 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16881:
--

 Summary: InitialState type leaks into the Connect REST API OpenAPI 
spec
 Key: KAFKA-16881
 URL: https://issues.apache.org/jira/browse/KAFKA-16881
 Project: Kafka
  Issue Type: Task
  Components: connect
Affects Versions: 3.7.0
Reporter: Mickael Maison


In our [OpenAPI spec 
file|https://kafka.apache.org/37/generated/connect_rest.yaml] we have the 
following:
{noformat}
CreateConnectorRequest:
      type: object
      properties:
        config:
          type: object
          additionalProperties:
            type: string
        initialState:
          type: string
          enum:
          - RUNNING
          - PAUSED
          - STOPPED
        initial_state:
          type: string
          enum:
          - RUNNING
          - PAUSED
          - STOPPED
          writeOnly: true
        name:
          type: string{noformat}
Only initial_state is a valid field; the initialState property should not be present.
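
For context, one plausible way this kind of duplication arises (an illustrative Jackson sketch, not the actual Connect source) is a request class whose creator parameter is explicitly bound to initial_state while a typed accessor is introspected under its implicit initialState name:
{code:java}
import java.util.Map;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

// Hypothetical sketch: the creator parameter is named "initial_state" (write-only),
// but the accessor below is picked up under its implicit name "initialState",
// so a spec generator driven by Jackson introspection can emit both properties.
public class CreateConnectorRequestSketch {
    public enum InitialState { RUNNING, PAUSED, STOPPED }

    private final String name;
    private final Map<String, String> config;
    private final InitialState initialState;

    @JsonCreator
    public CreateConnectorRequestSketch(@JsonProperty("name") String name,
                                        @JsonProperty("config") Map<String, String> config,
                                        @JsonProperty("initial_state") InitialState initialState) {
        this.name = name;
        this.config = config;
        this.initialState = initialState;
    }

    @JsonProperty
    public String name() { return name; }

    @JsonProperty
    public Map<String, String> config() { return config; }

    // Giving this accessor the explicit name "initial_state" (or hiding it from the
    // spec generator) would leave only the valid initial_state property.
    @JsonProperty
    public InitialState initialState() { return initialState; }
}
{code}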

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Request for Authorization to Create KIP

2024-06-03 Thread Mickael Maison
Hi,

I've granted you permissions. Thanks for your interest in Apache Kafka!

Thanks,
Mickael

On Mon, Jun 3, 2024 at 12:17 PM TingIāu Kì  wrote:
>
> Hello folks,
> I am writing to request authorization to create a KIP.
> Currently, I don’t have the necessary permission to access the “Create KIP” 
> function.
> Following is my JIRA ID and Confluence ID:
>
> JIRA: frankvicky
> Confluence: frankvicky
>
> Could you please grant me the required permission to create a KIP?
> Thank you very much for your time and assistance.
>
> Best regards,
> TingIāu


Re: [DISCUSS] KIP-877: Mechanism for plugins and connectors to register metrics

2024-05-31 Thread Mickael Maison
Bumping this thread.

If there are no other comments, I'll restart a vote in the next couple of weeks.

Thanks,
Mickael

On Thu, Apr 25, 2024 at 3:28 PM Mickael Maison  wrote:
>
> Hi Greg,
>
> Thanks for taking a close look at the KIP.
>
> 1/2) I understand your concern about leaking resources. I've played a
> bit more with the code and I think we should be able to handle the
> closing of the metrics internally rather than delegating it to the
> user code. I built a small PoC inspired by your MonitorablePlugin
> class example and that looked fine. I think we can even keep that
> class internal. I updated the KIP accordingly.
>
> 3) An earlier version of the proposal used connector and task contexts
> to allow them to retrieve their PluginMetrics instance. In a previous
> comment Chris suggested switching to implementing Monitorable for
> consistency. I think both approaches have pros and cons. I agree with
> you that implementing Monitorable will cause compatibility issues with
> older Connect runtimes. For that reason, I'm leaning towards
> reintroducing the context mechanism. However we would still have this
> issue with Converters/Transformations/Predicates. I think it's
> typically a bit less problematic with these plugins but it's worth
> considering the different approaches. If we can't agree on an approach
> we can exclude Connect from this proposal and revisit it at a later
> point.
>
> 4) If this KIP is accepted, I plan to follow up with another KIP to
> make MirrorMaker use this mechanism instead of the custom metrics
> logic it currently uses.
>
> Thanks,
> Mickael
>
>
>
>
> On Wed, Apr 24, 2024 at 9:03 PM Mickael Maison  
> wrote:
> >
> > Hi Matthias,
> >
> > I'm not sure making the Monitorable interface Closeable really solves the 
> > issue.
> > Ultimately you need to understand the lifecycle of a plugin to
> > determine when it makes sense to close it and which part of the code is
> > responsible for doing it. I'd rather have this described properly in
> > the interface of the plugin itself than it being a side effect of
> > implementing Monitorable.
> >
> > Looking at Streams, as far as I can tell the only pluggable interfaces
> > that are Closeable today are the Serdes. It seems Streams can accept
> > Serdes instances created by the user [0]. In that case, I think it's
> > probably best to ignore Streams in this KIP. Nothing should prevent
> > Streams from adopting it, in a way that makes sense for Streams, in a
> > future KIP if needed.
> >
> > 0: 
> > https://github.com/apache/kafka/blob/trunk/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java#L84
> >
> > Thanks,
> > Mickael
> >
> >
> >
> >
> >
> > On Fri, Feb 9, 2024 at 1:15 AM Greg Harris  
> > wrote:
> > >
> > > Hi Mickael,
> > >
> > > Thanks for the KIP, this looks like a great change!
> > >
> > > 1. I see that one of my concerns was already discussed, and appears to
> > > have been concluded with:
> > >
> > > > I considered Chris' idea of automatically removing metrics but decided 
> > > > to leave that responsibility to the plugins.
> > >
> > > After chasing resource leaks for the last few years, I've internalized
> > > that preventing leaks through careful implementation is always
> > > inadequate, and that leaks need to be prevented by design.
> > > If a leak is possible in a design, then we should count on it
> > > happening somewhere as a certainty, and should be prepared for the
> > > behavior afterwards.
> > >
> > > Chris already brought up one of the negative behaviors: Connect
> > > plugins which are cancelled may keep their metrics open past the point
> > > that a replacement plugin is instantiated.
> > > This will have the effect of showing incorrect metrics, which is
> > > harmful and confusing for operators.
> > > If you are constantly skeptical of the accuracy of your metrics, and
> > > there is no "source of truth" to verify against, then what use are the
> > > metrics?
> > >
> > > I think that managing the lifecycle of the PluginMetrics on the
> > > framework side would be acceptable if we had an internal class like
> > > the following, to keep a reference to the metrics adjacent to the
> > > plugin:
> > > class MonitorablePlugin<T> implements Supplier<T>, Closeable {
> > > MonitorablePlugin(T plugin, PluginMetrics metrics);
> > > }
> > > I already be

Re: [DISCUSS] Apache Kafka 3.8.0 release

2024-05-30 Thread Mickael Maison
Hi Calvin,

What's not clear from your reply is whether "KIP-966 Part 1" includes
the ability to perform unclean leader elections with KRaft.
Hopefully we have committers already looking at these features. If you
need additional help, please shout (well, ping!)

Thanks,
Mickael

On Thu, May 30, 2024 at 6:05 AM Ismael Juma  wrote:
>
> Sounds good, thanks Josep!
>
> Ismael
>
> On Wed, May 29, 2024 at 7:51 AM Josep Prat 
> wrote:
>
> > Hi Ismael,
> >
> > I think your proposal makes more sense than mine. The end goal is to try to
> > get these 2 KIPs in 3.8.0 if possible. I think we can also achieve this by
> > not delaying the general feature freeze, but rather by cherry picking the
> > future commits on these features to the 3.8 branch.
> >
> > So I would propose to leave the deadlines as they are and manually cherry
> > pick the commits related to KIP-853 and KIP-966.
> >
> > Best,
> >
> > On Wed, May 29, 2024 at 3:48 PM Ismael Juma  wrote:
> >
> > > Hi Josep,
> > >
> > > It's generally a bad idea to push these dates because the scope keeps
> > > increasing then. If there are features that need more time and we believe
> > > they are essential for 3.8 due to its special nature as the last release
> > > before 4.0, we should allow them to be cherry-picked to the release
> > branch
> > > versus delaying the feature freeze and code freeze for everything.
> > >
> > > Ismael
> > >
> > > On Wed, May 29, 2024 at 2:38 AM Josep Prat 
> > > wrote:
> > >
> > > > Hi Kafka developers,
> > > >
> > > > Given the fact we have a couple of KIPs that are halfway through their
> > > > implementation and it seems it's a matter of days (1 or 2 weeks) to
> > have
> > > > them completed. What would you think if we delay feature freeze and
> > code
> > > > freeze by 2 weeks? Let me know your thoughts.
> > > >
> > > > Best,
> > > >
> > > > On Tue, May 28, 2024 at 8:47 AM Josep Prat 
> > wrote:
> > > >
> > > > > Hi Kafka developers,
> > > > >
> > > > > This is a reminder about the upcoming deadlines:
> > > > > - Feature freeze is on May 29th
> > > > > - Code freeze is June 12th
> > > > >
> > > > > I'll cut the new branch during morning hours (CEST) on May 30th.
> > > > >
> > > > > Thanks all!
> > > > >
> > > > > On Thu, May 16, 2024 at 8:34 AM Josep Prat 
> > > wrote:
> > > > >
> > > > >> Hi all,
> > > > >>
> > > > >> We are now officially past the KIP freeze deadline. KIPs that are
> > > > >> approved after this point in time shouldn't be adopted in the 3.8.x
> > > > release
> > > > >> (except the 2 already mentioned KIPS 989 and 1028 assuming no vetoes
> > > > occur).
> > > > >>
> > > > >> Reminder of the upcoming deadlines:
> > > > >> - Feature freeze is on May 29th
> > > > >> - Code freeze is June 12th
> > > > >>
> > > > >> If you have an approved KIP that you know already you won't be able
> > to
> > > > >> complete before the feature freeze deadline, please update the
> > Release
> > > > >> column in the
> > > > >>
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > > > >> page.
> > > > >>
> > > > >> Thanks all,
> > > > >>
> > > > >> On Wed, May 15, 2024 at 8:53 PM Josep Prat 
> > > wrote:
> > > > >>
> > > > >>> Hi Nick,
> > > > >>>
> > > > >>> If nobody comes up with concerns or pushback until the time of
> > > closing
> > > > >>> the vote, I think we can take it for 3.8.
> > > > >>>
> > > > >>> Best,
> > > > >>>
> > > > >>> -
> > > > >>>
> > > > >>> Josep Prat
> > > > >>> Open Source Engineering Director, Aiven
> > > > >>> josep.p...@aiven.io | +491715557497 | aiven.io
> > > > >>> Aiven Deutschland GmbH
> > > > >>> Alexanderufer 3-7, 10117 Berlin
> > > > >>> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > > > >>> Amtsgericht Charlottenburg, HRB 209739 B
> > > > >>>
> > > > >>> On Wed, May 15, 2024, 20:48 Nick Telford 
> > > > wrote:
> > > > >>>
> > > >  Hi Josep,
> > > > 
> > > >  Would it be possible to sneak KIP-989 into 3.8? Just as with 1028,
> > > > it's
> > > >  currently being voted on and has already received the requisite
> > > votes.
> > > >  The
> > > >  only thing holding it back is the 72 hour voting window.
> > > > 
> > > >  Vote thread here:
> > > >  https://lists.apache.org/thread/nhr65h4784z49jbsyt5nx8ys81q90k6s
> > > > 
> > > >  Regards,
> > > > 
> > > >  Nick
> > > > 
> > > >  On Wed, 15 May 2024 at 17:47, Josep Prat
> > >  > > > >
> > > >  wrote:
> > > > 
> > > >  > And my maths are wrong! I added 24 hours more to all the numbers
> > > in
> > > >  there.
> > > >  > If after 72 hours no vetoes appear, I have no objections on
> > adding
> > > >  this
> > > >  > specific KIP as it shouldn't have a big blast radius of
> > > affectation.
> > > >  >
> > > >  > Best,
> > > >  >
> > > >  > On Wed, May 15, 2024 at 6:44 PM Josep Prat  > >
> > > >  wrote:
> > > >  >
> > > >  > > Ah, I see Chris was faster writing this than me.
> > > >  > 

[jira] [Created] (KAFKA-16865) Admin.describeTopics behavior change after KIP-966

2024-05-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16865:
--

 Summary: Admin.describeTopics behavior change after KIP-966
 Key: KAFKA-16865
 URL: https://issues.apache.org/jira/browse/KAFKA-16865
 Project: Kafka
  Issue Type: Task
  Components: admin, clients
Reporter: Mickael Maison


Running the following code produces different behavior between ZooKeeper and 
KRaft:


{code:java}
DescribeTopicsOptions options = new 
DescribeTopicsOptions().includeAuthorizedOperations(false);
TopicCollection topics = 
TopicCollection.ofTopicNames(Collections.singletonList(topic));
DescribeTopicsResult describeTopicsResult = admin.describeTopics(topics, 
options);
TopicDescription topicDescription = 
describeTopicsResult.topicNameValues().get(topic).get();
System.out.println(topicDescription.authorizedOperations());
{code}

With ZooKeeper this prints null, and with KRaft it prints [ALTER, READ, DELETE, 
ALTER_CONFIGS, CREATE, DESCRIBE_CONFIGS, WRITE, DESCRIBE].

The Admin.getTopicDescriptionFromDescribeTopicsResponseTopic does not take into 
account the options provided to describeTopics() and always populates the 
authorizedOperations field.
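
A possible direction for a fix, sketched with a hypothetical helper (this is not the actual KafkaAdminClient code, and it assumes the DescribeTopicsOptions#includeAuthorizedOperations() accessor):

{code:java}
// Hypothetical helper: keep authorizedOperations null unless the caller explicitly
// requested them, which matches the ZooKeeper-based behavior.
static Set<AclOperation> maybeAuthorizedOperations(DescribeTopicsOptions options,
                                                   Set<AclOperation> fromResponse) {
    return options.includeAuthorizedOperations() ? fromResponse : null;
}
{code}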




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1040: Improve handling of nullable values in InsertField, ExtractField, and other transformations

2024-05-29 Thread Mickael Maison
Hi Mario,

+1 (binding)
Thanks for the KIP!

Mickael

On Mon, May 27, 2024 at 12:06 PM Mario Fiore Vitale  wrote:
>
> After 7 days I received only one vote. Should I suppose this will not be
> approved?
>
> On Mon, May 20, 2024 at 4:14 PM Chris Egerton 
> wrote:
>
> > Thanks for the KIP! +1 (binding)
> >
> > On Mon, May 20, 2024 at 4:22 AM Mario Fiore Vitale 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > I'd like to call a vote on KIP-1040 which aims to improve handling of
> > > nullable values in InsertField, ExtractField, and other transformations
> > >
> > > KIP -
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=303794677
> > >
> > > Discussion thread -
> > > https://lists.apache.org/thread/ggqqqjbg6ccpz8g6ztyj7oxr80q5184n
> > >
> > > Thanks and regards,
> > > Mario
> > >
> >
>
>
> --
>
> Mario Fiore Vitale
>
> Senior Software Engineer
>
> Red Hat 
> 


Re: [DISCUSS] Apache Kafka 3.8.0 release

2024-05-29 Thread Mickael Maison
Hi Josep,

The point of the 3.8.0 release was to bridge feature gaps we
identified between ZooKeeper-based and KRaft-based Kafka clusters.
In KIP-1012, we had identified 2 features we needed before 4.0:
KIP-853 and unclean leader elections (KIP-966?).

Can we have a status update on both of these features?
If we think we can deliver both shortly (or if these are already
done), I think it makes sense to slightly adjust the dates. Otherwise
we'll need to either make another release or reconsider our
time-based release plan.

Thanks,
Mickael





On Wed, May 29, 2024 at 1:33 PM Josep Prat  wrote:
>
> A correction on the dates as they should be:
> - Feature freeze is on June 12th (you wrote May before)
> - code freeze is on June 26th
>
> So these are the new proposed deadlines.
>
> Best,
>
> On Wed, May 29, 2024 at 12:48 PM Luke Chen  wrote:
>
> > Hi Josep,
> >
> > Thanks for raising this.
> > I'm +1 for delaying some time to have features completed.
> >
> > But I think we might need to make it clear, what's the updated feature
> > freeze date/code freeze date?
> > Is this correct?
> > - Feature freeze is on May 12th
> > - Code freeze is June 26th
> >
> >
> > Thanks.
> > Luke
> >
> > On Wed, May 29, 2024 at 5:38 PM Josep Prat 
> > wrote:
> >
> > > Hi Kafka developers,
> > >
> > > Given the fact we have a couple of KIPs that are halfway through their
> > > implementation and it seems it's a matter of days (1 or 2 weeks) to have
> > > them completed. What would you think if we delay feature freeze and code
> > > freeze by 2 weeks? Let me know your thoughts.
> > >
> > > Best,
> > >
> > > On Tue, May 28, 2024 at 8:47 AM Josep Prat  wrote:
> > >
> > > > Hi Kafka developers,
> > > >
> > > > This is a reminder about the upcoming deadlines:
> > > > - Feature freeze is on May 29th
> > > > - Code freeze is June 12th
> > > >
> > > > I'll cut the new branch during morning hours (CEST) on May 30th.
> > > >
> > > > Thanks all!
> > > >
> > > > On Thu, May 16, 2024 at 8:34 AM Josep Prat 
> > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> We are now officially past the KIP freeze deadline. KIPs that are
> > > >> approved after this point in time shouldn't be adopted in the 3.8.x
> > > release
> > > >> (except the 2 already mentioned KIPS 989 and 1028 assuming no vetoes
> > > occur).
> > > >>
> > > >> Reminder of the upcoming deadlines:
> > > >> - Feature freeze is on May 29th
> > > >> - Code freeze is June 12th
> > > >>
> > > >> If you have an approved KIP that you know already you won't be able to
> > > >> complete before the feature freeze deadline, please update the Release
> > > >> column in the
> > > >>
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > > >> page.
> > > >>
> > > >> Thanks all,
> > > >>
> > > >> On Wed, May 15, 2024 at 8:53 PM Josep Prat 
> > wrote:
> > > >>
> > > >>> Hi Nick,
> > > >>>
> > > >>> If nobody comes up with concerns or pushback until the time of
> > closing
> > > >>> the vote, I think we can take it for 3.8.
> > > >>>
> > > >>> Best,
> > > >>>
> > > >>> -
> > > >>>
> > > >>> Josep Prat
> > > >>> Open Source Engineering Director, Aiven
> > > >>> josep.p...@aiven.io | +491715557497 | aiven.io
> > > >>> Aiven Deutschland GmbH
> > > >>> Alexanderufer 3-7, 10117 Berlin
> > > >>> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > > >>> Amtsgericht Charlottenburg, HRB 209739 B
> > > >>>
> > > >>> On Wed, May 15, 2024, 20:48 Nick Telford 
> > > wrote:
> > > >>>
> > >  Hi Josep,
> > > 
> > >  Would it be possible to sneak KIP-989 into 3.8? Just as with 1028,
> > > it's
> > >  currently being voted on and has already received the requisite
> > votes.
> > >  The
> > >  only thing holding it back is the 72 hour voting window.
> > > 
> > >  Vote thread here:
> > >  https://lists.apache.org/thread/nhr65h4784z49jbsyt5nx8ys81q90k6s
> > > 
> > >  Regards,
> > > 
> > >  Nick
> > > 
> > >  On Wed, 15 May 2024 at 17:47, Josep Prat
> >  > > >
> > >  wrote:
> > > 
> > >  > And my maths are wrong! I added 24 hours more to all the numbers
> > in
> > >  there.
> > >  > If after 72 hours no vetoes appear, I have no objections on adding
> > >  this
> > >  > specific KIP as it shouldn't have a big blast radius of
> > affectation.
> > >  >
> > >  > Best,
> > >  >
> > >  > On Wed, May 15, 2024 at 6:44 PM Josep Prat 
> > >  wrote:
> > >  >
> > >  > > Ah, I see Chris was faster writing this than me.
> > >  > >
> > >  > > On Wed, May 15, 2024 at 6:43 PM Josep Prat  > >
> > >  wrote:
> > >  > >
> > >  > >> Hi all,
> > >  > >> You still have the full day of today (independently for the
> > >  timezone) to
> > >  > >> get KIPs approved. Tomorrow morning (CEST timezone) I'll send
> > >  another
> > >  > email
> > >  > >> asking developers to assign future approved KIPs to another
> > > version
> > >  > that 

[jira] [Created] (KAFKA-16859) Cleanup check if tiered storage is enabled

2024-05-28 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16859:
--

 Summary: Cleanup check if tiered storage is enabled
 Key: KAFKA-16859
 URL: https://issues.apache.org/jira/browse/KAFKA-16859
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison


We have 2 ways to detect whether tiered storage is enabled:
- KafkaConfig.isRemoteLogStorageSystemEnabled
- KafkaConfig.remoteLogManagerConfig().enableRemoteStorageSystem()

We use both in various files. We should stick with one way to do it.
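
One option, sketched below purely for illustration (this is not the actual KafkaConfig code), is to keep a single delegating accessor and migrate all call sites to it:

{code:java}
// Illustrative sketch: a single accessor everyone uses, delegating to the
// RemoteLogManagerConfig flag so the two existing checks cannot diverge.
public boolean isRemoteLogStorageSystemEnabled() {
    return remoteLogManagerConfig().enableRemoteStorageSystem();
}
{code}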



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-950: Tiered Storage Disablement

2024-05-28 Thread Mickael Maison
Hi,

I agree with Chia-Ping, I think we could drop the ZK variant
altogether, especially if this is not going to make it in 3.8.0.
Even if we end up needing a 3.9.0 release, I wouldn't write a bunch of
new ZooKeeper-related code in that release to delete it all right
after in 4.0.

Thanks,
Mickael

On Fri, May 24, 2024 at 5:03 PM Christo Lolov  wrote:
>
> Hello!
>
> I am closing this vote as ACCEPTED with 3 binding +1 (Luke, Chia-Ping and
> Satish) and 1 non-binding +1 (Kamal) - thank you for the reviews!
>
> Realistically, I don't think I have the bandwidth to get this in 3.8.0.
> Due to this, I will mark tentatively the Zookeeper part for 3.9 if the
> community decides that they do in fact want one more 3.x release.
> I will mark the KRaft part as ready to be started and aiming for either 4.0
> or 3.9.
>
> Best,
> Christo


Re: [VOTE] KIP 1047 - Introduce new org.apache.kafka.tools.api.Decoder to replace kafka.serializer.Decoder

2024-05-24 Thread Mickael Maison
+1 (binding)

Thanks,
Mickael

On Fri, May 24, 2024 at 11:39 AM Andrew Schofield
 wrote:
>
> Thanks for the KIP.
>
> +1 (non-binding)
>
> Thanks,
> Andrew
>
> > On 23 May 2024, at 18:48, Chia-Ping Tsai  wrote:
> >
> >
> > +1
> >
> > Thanks for Yang to take over this!
> >
> >> On May 24, 2024, at 12:27 AM, Frank Yang wrote:
> >>
> >> Hi all,
> >>
> >> I would like to start a vote on KIP-1047: Introduce new
> >> org.apache.kafka.tools.api.Decoder to replace kafka.serializer.Decoder.
> >>
> >> KIP: 
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1047+Introduce+new+org.apache.kafka.tools.api.Decoder+to+replace+kafka.serializer.Decoder
> >>
> >> Discussion thread: 
> >> https://lists.apache.org/thread/n3k6vb4vddl1s5nopcyglnddtvzp4j63
> >>
> >> Thanks and regards,
> >> PoAn
>


[jira] [Resolved] (KAFKA-16825) CVE vulnerabilities in Jetty and netty

2024-05-23 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16825.

Fix Version/s: 3.8.0
   Resolution: Fixed

> CVE vulnerabilities in Jetty and netty
> --
>
> Key: KAFKA-16825
> URL: https://issues.apache.org/jira/browse/KAFKA-16825
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 3.7.0
>Reporter: mooner
>    Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> There is a vulnerability (CVE-2024-29025) in the transitive dependency 
> Netty used by Kafka, which has been fixed in version 4.1.108.Final.
> There is also a vulnerability (CVE-2024-22201) in the transitive dependency 
> Jetty, which has been fixed in version 9.4.54.v20240208.
> When will Kafka upgrade the versions of Netty and Jetty to fix these two 
> vulnerabilities?
> Reference website:
> https://nvd.nist.gov/vuln/detail/CVE-2024-29025
> https://nvd.nist.gov/vuln/detail/CVE-2024-22201



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-12399) Deprecate Log4J Appender KIP-719

2024-05-22 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12399.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Deprecate Log4J Appender KIP-719
> 
>
> Key: KAFKA-12399
> URL: https://issues.apache.org/jira/browse/KAFKA-12399
> Project: Kafka
>  Issue Type: Improvement
>  Components: logging
>Reporter: Dongjin Lee
>    Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> As a follow-up to KAFKA-9366, we have to entirely remove the log4j 1.2.7 
> dependency from the classpath by removing dependencies on log4j-appender.
> KIP-719: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Deprecate+Log4J+Appender



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP 1047 - Introduce new org.apache.kafka.tools.api.Decoder to replace kafka.serializer.Decoder

2024-05-22 Thread Mickael Maison
Hi,

Thanks for the KIP. Sorting this out in 3.8.0 would be really nice as
it would allow us to migrate this tool in 4.0.0. We're unfortunately
past the KIP deadline but maybe this is small enough to have an
exception.

I'm wondering whether we need to introduce a new Decoder interface at
all, or whether we could reuse Deserializer instead. We could deprecate the
key-decoder-class and value-decoder-class flags and introduce new
flags like key-deserializer-class and value-deserializer-class. One
benefit is that we already have many existing deserializer
implementations. WDYT?

One issue I also noted is that some of the existing Decoder
implementations (StringDecoder for example) can accept configurations,
but currently DumpLogSegments does not provide a way to pass any
configurations; it creates an empty VerifiableProperties object each
time it instantiates a Decoder instance. If we were to use
Deserializer we would also need a way to provide configurations.
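
To make this a bit more concrete, here is a rough sketch (names are illustrative, not a proposal for the exact flags or helpers) of how the tool could load a Deserializer and pass it user-supplied configurations:

// Illustrative sketch: load a Deserializer by class name, configure it with
// user-supplied properties, and use it to render a record payload.
static String render(String deserializerClass, Map<String, ?> configs,
                     String topic, byte[] payload) throws Exception {
    try (Deserializer<?> deserializer =
             (Deserializer<?>) Class.forName(deserializerClass)
                 .getDeclaredConstructor().newInstance()) {
        deserializer.configure(configs, false); // false = this is a value deserializer
        return String.valueOf(deserializer.deserialize(topic, payload));
    }
}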

Thanks,
Mickael

On Wed, May 22, 2024 at 4:12 PM Chia-Ping Tsai  wrote:
>
> Dear all,
>
> We know that the KIP freeze for 3.8.0 has already passed, but this is a small 
> KIP and we need to ship it in 3.8.0 so that we can remove the deprecated Scala 
> interface in 4.0.
>
> Best,
> Chia-Ping
>
> On 2024/05/22 14:05:16 Frank Yang wrote:
> > Hi team,
> >
> > Chia-Ping Tsai and I would like to propose KIP-1047 to migrate 
> > kafka.serializer.Decoder from core module (scala) to tools module (java).
> >
> > Feedback and comments are welcome.
> >
> > KIP-1047: 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-1047+Introduce+new+org.apache.kafka.tools.api.Decoder+to+replace+kafka.serializer.Decoder
> > JIRA: https://issues.apache.org/jira/browse/KAFKA-16796
> >
> > Thank you.
> > PoAn


[jira] [Resolved] (KAFKA-7632) Support Compression Level

2024-05-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7632.
---
Fix Version/s: 3.8.0
 Assignee: Mickael Maison  (was: Dongjin Lee)
   Resolution: Fixed

> Support Compression Level
> -
>
> Key: KAFKA-7632
> URL: https://issues.apache.org/jira/browse/KAFKA-7632
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.1.0
> Environment: all
>Reporter: Dave Waters
>Assignee: Mickael Maison
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.8.0
>
>
> The compression level for ZSTD is currently set to use the default level (3), 
> which is a conservative setting that in some use cases eliminates the value 
> that ZSTD provides with improved compression. Each use case will vary, so 
> exposing the level as a producer, broker, and topic configuration setting 
> will allow the user to adjust the level.
> Since the same applies to the other compression codecs, we should add the same 
> functionality to them.
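
For reference, with per-codec level settings a producer can be tuned roughly like this (the level property name follows the per-codec pattern and may differ from the final implementation):

{code:java}
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");
// Assumed per-codec level property; higher levels trade CPU for better compression.
props.put("compression.zstd.level", "10");
Producer<String, String> producer =
    new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
{code}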



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Request for filing KIP

2024-05-21 Thread Mickael Maison
Hi,

I granted you permissions in the wiki. You should now be able to create a KIP.

Thanks,
Mickael

On Tue, May 21, 2024 at 5:11 AM Harsh Panchal  wrote:
>
> Dear Apache Kafka Team,
>
> As instructed, I would like to write a KIP for PR -
> https://github.com/apache/kafka/pull/15905.
>
> I see that I don't have access to the "Create KIP" button on confluence. I
> kindly request you to grant access to write up KIP. My user name is: bootmgr
>
> Best Regards,
> Harsh Panchal


Re: [VOTE] KIP-1025: Optionally URL-encode clientID and clientSecret in authorization header

2024-05-15 Thread Mickael Maison
Hi,

+1 (binding)
Thanks for the KIP!

Mickael

On Sun, Apr 21, 2024 at 7:12 PM Nelson B.  wrote:
>
> Hi all,
>
> Just a kind reminder. I would really appreciate if we could get two more
> binding +1 votes.
>
> Thanks
>
> On Mon, Apr 8, 2024, 2:08 PM Manikumar  wrote:
>
> > Thanks for the KIP.
> >
> > +1 (binding)
> >
> >
> >
> >
> > On Mon, Apr 8, 2024 at 9:49 AM Kirk True  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > Apologies. I thought I’d already voted :(
> > >
> > > > On Apr 7, 2024, at 10:48 AM, Nelson B. 
> > wrote:
> > > >
> > > > Hi all,
> > > >
> > > > Just wanted to bump up this thread for visibility.
> > > >
> > > > Thanks!
> > > >
> > > > On Thu, Mar 28, 2024 at 3:40 AM Doğuşcan Namal <
> > namal.dogus...@gmail.com>
> > > > wrote:
> > > >
> > > >> Thanks for checking it out Nelson. Yeah I think it makes sense to
> > leave it
> > > >> for the users who want to use it for testing.
> > > >>
> > > >> On Mon, 25 Mar 2024 at 20:44, Nelson B. 
> > wrote:
> > > >>
> > > >>> Hi Doğuşcan,
> > > >>>
> > > >>> Thanks for your vote!
> > > >>>
> > > >>> Currently, the usage of TLS depends on the protocol used by the
> > > >>> authorization server which is configured
> > > >>> through the "sasl.oauthbearer.token.endpoint.url" option. So, if the
> > > >>> URL address uses simple http (not https)
> > > >>> then secrets will be transmitted in plaintext. I think it's possible
> > to
> > > >>> enforce using only https but I think any
> > > >>> production-grade authorization server uses https anyway and maybe
> > users
> > > >> may
> > > >>> want to test using http in the dev environment.
> > > >>>
> > > >>> Thanks,
> > > >>>
> > > >>> On Thu, Mar 21, 2024 at 3:56 PM Doğuşcan Namal <
> > namal.dogus...@gmail.com
> > > >>>
> > > >>> wrote:
> > > >>>
> > >  Hi Nelson, thanks for the KIP.
> > > 
> > >  From the RFC:
> > >  ```
> > >  The authorization server MUST require the use of TLS as described in
> > >    Section 1.6 when sending requests using password authentication.
> > >  ```
> > > 
> > >  I believe we already have an enforcement for OAuth to be enabled
> > only
> > > >> in
> > >  SSLChannel but would be good to double check. Sending secrets over
> > >  plaintext is a security bad practice :)
> > > 
> > >  +1 (non-binding) from me.
> > > 
> > >  On Tue, 19 Mar 2024 at 16:00, Nelson B. 
> > > >> wrote:
> > > 
> > > > Hi all,
> > > >
> > > > I would like to start a vote on KIP-1025
> > > > <
> > > >
> > > 
> > > >>>
> > > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-1025%3A+Optionally+URL-encode+clientID+and+clientSecret+in+authorization+header
> > > >> ,
> > > > which would optionally URL-encode clientID and clientSecret in the
> > > > authorization header.
> > > >
> > > > I feel like all possible issues have been addressed in the
> > discussion
> > > > thread.
> > > >
> > > > Thanks,
> > > >
> > > 
> > > >>>
> > > >>
> > >
> >


Re: [DISCUSS] Apache Kafka 3.7.1 release

2024-05-15 Thread Mickael Maison
Hi Igor,

Thanks for volunteering, +1

Mickael

On Thu, Apr 25, 2024 at 11:09 AM Igor Soarez  wrote:
>
> Hi everyone,
>
> I'd like to volunteer to be the release manager for a 3.7.1 release.
>
> Please keep in mind, this would be my first release, so I might have some 
> questions,
> and it might also take me a bit longer to work through the release process.
> So I'm thinking a good target would be toward the end of May.
>
> Please let me know your thoughts and if there are any objections or concerns.
>
> Thanks,
>
> --
> Igor


[jira] [Created] (KAFKA-16771) First log directory printed twice when formatting storage

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16771:
--

 Summary: First log directory printed twice when formatting storage
 Key: KAFKA-16771
 URL: https://issues.apache.org/jira/browse/KAFKA-16771
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 3.7.0
Reporter: Mickael Maison


If multiple log directories are set, when running bin/kafka-storage.sh format, 
the first directory is printed twice. For example:

{noformat}
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties --release-version 3.6
metaPropertiesEnsemble=MetaPropertiesEnsemble(metadataLogDir=Optional.empty, 
dirs={/tmp/kraft-combined-logs: EMPTY, /tmp/kraft-combined-logs2: EMPTY})
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs with metadata.version 3.6-IV2.
Formatting /tmp/kraft-combined-logs2 with metadata.version 3.6-IV2.
{noformat}
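
A hedged guess at the kind of fix (I have not confirmed this against the actual StorageTool code): collect the directories into an order-preserving set before formatting, so a directory listed both as metadata log dir and in log.dirs is only handled once. For example:

{code:java}
// Illustrative sketch only; metadataLogDir and logDirs are hypothetical variables.
Set<String> directories = new LinkedHashSet<>();
metadataLogDir.ifPresent(directories::add); // Optional<String>
directories.addAll(logDirs);                // List<String>
for (String dir : directories) {
    System.out.println("Formatting " + dir + " with metadata.version " + metadataVersion + ".");
}
{code}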






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16769) Delete deprecated add.source.alias.to.metrics configuration

2024-05-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16769:
--

 Summary: Delete deprecated add.source.alias.to.metrics 
configuration
 Key: KAFKA-16769
 URL: https://issues.apache.org/jira/browse/KAFKA-16769
 Project: Kafka
  Issue Type: Task
  Components: mirrormaker
Reporter: Mickael Maison
Assignee: Mickael Maison
 Fix For: 4.0.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16646) Consider only running the CVE scanner action on apache/kafka and not in forks

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16646:
--

 Summary: Consider only running the CVE scanner action on 
apache/kafka and not in forks
 Key: KAFKA-16646
 URL: https://issues.apache.org/jira/browse/KAFKA-16646
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison


Currently the CVE scanner action is failing due to CVEs in the base image. It 
seems that anybody that has a fork is getting daily emails about it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16645) CVEs in 3.7.0 docker image

2024-04-30 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16645:
--

 Summary: CVEs in 3.7.0 docker image
 Key: KAFKA-16645
 URL: https://issues.apache.org/jira/browse/KAFKA-16645
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.7.0
Reporter: Mickael Maison


Our Docker Image CVE Scanner GitHub action reports 2 high CVEs in our base 
image:

apache/kafka:3.7.0 (alpine 3.19.1)
==
Total: 2 (HIGH: 2, CRITICAL: 0)

Library: libexpat (installed version 2.5.0-r2)
- CVE-2023-52425 | HIGH | fixed in 2.6.0-r0 | expat: parsing large tokens can trigger a denial of service | https://avd.aquasec.com/nvd/cve-2023-52425
- CVE-2024-28757 | HIGH | fixed in 2.6.2-r0 | expat: XML Entity Expansion | https://avd.aquasec.com/nvd/cve-2024-28757

Looking at the 
[KIP|https://cwiki.apache.org/confluence/display/KAFKA/KIP-975%3A+Docker+Image+for+Apache+Kafka#KIP975:DockerImageforApacheKafka-WhatifweobserveabugoracriticalCVEinthereleasedApacheKafkaDockerImage?]
 that introduced the docker images, it seems we should release a bugfix when 
high CVEs are detected. It would be good to investigate and assess whether 
Kafka is impacted or not.




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-877: Mechanism for plugins and connectors to register metrics

2024-04-25 Thread Mickael Maison
Hi Greg,

Thanks for taking a close look at the KIP.

1/2) I understand your concern about leaking resources. I've played a
bit more with the code and I think we should be able to handle the
closing of the metrics internally rather than delegating it to the
user code. I built a small PoC inspired by your MonitorablePlugin
class example and that looked fine. I think we can even keep that
class internal. I updated the KIP accordingly.
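
To make the idea concrete, here is a rough sketch of the kind of internal wrapper I have in mind (names and the exact close() semantics of PluginMetrics are assumptions, not the final API):

class MonitorablePlugin<T> implements Supplier<T>, AutoCloseable {
    private final T plugin;
    private final PluginMetrics metrics; // assumed to expose close() that removes its metrics

    MonitorablePlugin(T plugin, PluginMetrics metrics) {
        this.plugin = plugin;
        this.metrics = metrics;
    }

    @Override
    public T get() {
        return plugin;
    }

    @Override
    public void close() throws Exception {
        // Always remove the plugin's metrics, then close the plugin itself if it supports it.
        metrics.close();
        if (plugin instanceof AutoCloseable) {
            ((AutoCloseable) plugin).close();
        }
    }
}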

3) An earlier version of the proposal used connector and task contexts
to allow them to retrieve their PluginMetrics instance. In a previous
comment Chris suggested switching to implementing Monitorable for
consistency. I think both approaches have pros and cons. I agree with
you that implementing Monitorable will cause compatibility issues with
older Connect runtimes. For that reason, I'm leaning towards
reintroducing the context mechanism. However we would still have this
issue with Converters/Transformations/Predicates. I think it's
typically a bit less problematic with these plugins but it's worth
considering the different approaches. If we can't agree on an approach
we can exclude Connect from this proposal and revisit it at a later
point.

4) If this KIP is accepted, I plan to follow up with another KIP to
make MirrorMaker use this mechanism instead of the custom metrics
logic it currently uses.

Thanks,
Mickael




On Wed, Apr 24, 2024 at 9:03 PM Mickael Maison  wrote:
>
> Hi Matthias,
>
> I'm not sure making the Monitorable interface Closeable really solves the 
> issue.
> Ultimately you need to understand the lifecycle of a plugin to
> determine when it makes sense to close it and which part of the code is
> responsible for doing it. I'd rather have this described properly in
> the interface of the plugin itself than it being a side effect of
> implementing Monitorable.
>
> Looking at Streams, as far as I can tell the only pluggable interfaces
> that are Closeable today are the Serdes. It seems Streams can accept
> Serdes instances created by the user [0]. In that case, I think it's
> probably best to ignore Streams in this KIP. Nothing should prevent
> Streams from adopting it, in a way that makes sense for Streams, in a
> future KIP if needed.
>
> 0: 
> https://github.com/apache/kafka/blob/trunk/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java#L84
>
> Thanks,
> Mickael
>
>
>
>
>
> On Fri, Feb 9, 2024 at 1:15 AM Greg Harris  
> wrote:
> >
> > Hi Mickael,
> >
> > Thanks for the KIP, this looks like a great change!
> >
> > 1. I see that one of my concerns was already discussed, and appears to
> > have been concluded with:
> >
> > > I considered Chris' idea of automatically removing metrics but decided to 
> > > leave that responsibility to the plugins.
> >
> > After chasing resource leaks for the last few years, I've internalized
> > that preventing leaks through careful implementation is always
> > inadequate, and that leaks need to be prevented by design.
> > If a leak is possible in a design, then we should count on it
> > happening somewhere as a certainty, and should be prepared for the
> > behavior afterwards.
> >
> > Chris already brought up one of the negative behaviors: Connect
> > plugins which are cancelled may keep their metrics open past the point
> > that a replacement plugin is instantiated.
> > This will have the effect of showing incorrect metrics, which is
> > harmful and confusing for operators.
> > If you are constantly skeptical of the accuracy of your metrics, and
> > there is no "source of truth" to verify against, then what use are the
> > metrics?
> >
> > I think that managing the lifecycle of the PluginMetrics on the
> > framework side would be acceptable if we had an internal class like
> > the following, to keep a reference to the metrics adjacent to the
> > plugin:
> > class MonitorablePlugin<T> implements Supplier<T>, Closeable {
> > MonitorablePlugin(T plugin, PluginMetrics metrics);
> > }
> > I already believe that we need similar wrapper classes in Connect [1]
> > to manage classloader swapping & exception safety, and this simpler
> > interface could be applied to non-connect call-sites that don't need
> > to swap the classloader.
> >
> > 2. Your "MyInterceptor" class doesn't have a "metrics" field, and
> > doesn't perform a null-check on the field in close().
> > Keeping the PluginMetrics as an non-final instance variable in every
> > plugin implementation is another burden on the plugin implementations,
> > as they will need to perform null checks in-case the metrics are never
> > initialized, suc

Re: Permissions to contribute to Apache Kafka

2024-04-25 Thread Mickael Maison
Hi,

I've granted you contributor permissions in Jira and Confluence.
Thanks for your interest in Kafka!

Mickael

On Thu, Apr 25, 2024 at 5:47 AM Rajdeep Sahoo
 wrote:
>
> Hi team ,
> Please find my wiki id and jira id mentioned below. Requesting you to grant
> access so that I will be able to contribute to apache kafka.
>
> *wiki id*: rajdeepsahoo2012
> *jira id*: rajdeep_sahoo
>
> Thanks ,
> Rajdeep sahoo


Re: [DISCUSS] KIP-877: Mechanism for plugins and connectors to register metrics

2024-04-24 Thread Mickael Maison
1d395629c2aa00bd9/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorCheckpointConfig.java#L114
>
> On Thu, Feb 8, 2024 at 11:49 AM Matthias J. Sax  wrote:
> >
> > Still need to digest the KIP, but one thing coming to mind:
> >
> > Instead of requiring existing interfaces to implement `Closable`, would
> > it make sense to make `Monitorable extends Closable` to sidestep this issue?
> >
> >
> > -Matthias
> >
> > On 1/25/24 9:03 AM, Mickael Maison wrote:
> > > Hi Luke,
> > >
> > > The reasons vary for each plugin; I've added details to most plugins in
> > > the table.
> > > The plugins without an explanation are all from Streams. I admit I
> > > don't know these interfaces enough to decide if it makes sense making
> > > them closeable and instrumenting them. It would be nice to get some
> > > input from Streams contributors to know.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > On Thu, Jan 25, 2024 at 5:50 PM Mickael Maison  
> > > wrote:
> > >>
> > >> Hi Tom,
> > >>
> > >> Thanks for taking a look at the KIP!
> > >>
> > >> 1. Yes I considered several names (see the previous messages in the
> > >> discussion). KIP-608, which this KIP supersedes, used "monitor()" for
> > >> the method name. I find "withMetrics()" to be nicer due to the way the
> > >> method should be used. That said, I'm not attached to the name so if
> > >> more people prefer "monitor()", or can come up with a better name, I'm
> > >> happy to make the change. I updated the javadoc to clarify the usage
> > >> and mention when to close the PluginMetrics instance.
> > >>
> > >> 2. Yes I added a note to the PluginMetrics interface
> > >>
> > >> 3. I used this exception to follow the behavior of Metrics.addMetric()
> > >> which throws IllegalArgumentException if a metric with the same name
> > >> already exists.
> > >>
> > >> 4. I added details to the javadoc
> > >>
> > >> Thanks,
> > >> Mickael
> > >>
> > >>
> > >> On Thu, Jan 25, 2024 at 10:32 AM Luke Chen  wrote:
> > >>>
> > >>> Hi Mickael,
> > >>>
> > >>> Thanks for the KIP.
> > >>> The motivation and solution makes sense to me.
> > >>>
> > >>> Just one question:
> > >>> If we could extend `closable` for Converter plugin, why don't we do that
> > >>> for the "Unsupported Plugins" without close method?
> > >>> I don't say we must do that in this KIP, but maybe you could add the 
> > >>> reason
> > >>> in the "rejected alternatives".
> > >>>
> > >>> Thanks.
> > >>> Luke
> > >>>
> > >>> On Thu, Jan 25, 2024 at 3:46 PM Slathia p  
> > >>> wrote:
> > >>>
> > >>>> Hi Team,
> > >>>>
> > >>>>
> > >>>>
> > >>>> Greetings,
> > >>>>
> > >>>>
> > >>>>
> > >>>> Apologies for the delay in reply as I was down with flu.
> > >>>>
> > >>>>
> > >>>>
> > >>>> We actually reached out to you for IT/ SAP/ Oracle/ Infor / Microsoft
> > >>>> “VOTEC IT SERVICE PARTNERSHIP”  “IT SERVICE OUTSOURCING” “ “PARTNER 
> > >>>> SERVICE
> > >>>> SUBCONTRACTING”
> > >>>>
> > >>>>
> > >>>>
> > >>>> We have very attractive newly introduce reasonably price PARTNER IT
> > >>>> SERVICE ODC SUBCONTRACTING MODEL in USA, Philippines, India and 
> > >>>> Singapore
> > >>>> etc with White Label Model.
> > >>>>
> > >>>>
> > >>>>
> > >>>> Our LOW COST IT SERVICE ODC MODEL eliminate the cost of expensive 
> > >>>> employee
> > >>>> payroll, Help partner to get profit more than 50% on each project.. 
> > >>>> ..We
> > >>>> really mean it.
> > >>>>
> > >>>>
> > >>>>
> > >>>> We are already working with platinum partner like NTT DATA, NEC 
> > >>>> Singapore,
> > >>

Re: [ANNOUNCE] New committer: Igor Soarez

2024-04-24 Thread Mickael Maison
Congratulations Igor!

On Wed, Apr 24, 2024 at 8:06 PM Colin McCabe  wrote:
>
> Hi all,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer, Igor 
> Soarez.
>
> Igor has been a Kafka contributor since 2019. In addition to being a regular 
> contributor and reviewer, he has made significant contributions to improving 
> Kafka's JBOD support in KRaft mode. He has also contributed to discussing and 
> reviewing many KIPs such as KIP-690, KIP-554, KIP-866, and KIP-938.
>
> Congratulations, Igor!
>
> Thanks,
>
> Colin (on behalf of the Apache Kafka PMC)


Re: [ANNOUNCE] New Kafka PMC Member: Greg Harris

2024-04-13 Thread Mickael Maison
Congratulations Greg!

On Sat, Apr 13, 2024 at 8:42 PM Chris Egerton  wrote:
>
> Hi all,
>
> Greg Harris has been a Kafka committer since July 2023. He has remained
> very active and instructive in the community since becoming a committer.
> It's my pleasure to announce that Greg is now a member of Kafka PMC.
>
> Congratulations, Greg!
>
> Chris, on behalf of the Apache Kafka PMC


Re: [VOTE] KIP-1031: Control offset translation in MirrorSourceConnector

2024-04-12 Thread Mickael Maison
Hi Omnia,

+1 (binding), thanks for the KIP!

Mickael

On Fri, Apr 12, 2024 at 9:01 AM Omnia Ibrahim  wrote:
>
> Hi everyone, I would like to start a voting thread for KIP-1031: Control 
> offset translation in MirrorSourceConnector 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1031%3A+Control+offset+translation+in+MirrorSourceConnector
>
> For comments or feedback please check the discussion thread here 
> https://lists.apache.org/thread/ym6zr0wrhglft5c000x9c8ych098s7h6
>
> Thanks
> Omnia
>


Re: [DISCUSS] KIP-1006: Remove SecurityManager Support

2024-04-10 Thread Mickael Maison
Hi,

It looks like some of the SecurityManager APIs are starting to be
removed in JDK 23, see
- https://bugs.openjdk.org/browse/JDK-8296244
- https://github.com/quarkusio/quarkus/issues/39634

JDK 23 is currently planned for September 2024.
Considering the timelines and that we only drop support for Java
versions in major Kafka releases, I think the proposed approach of
detecting the APIs to use makes sense.
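
For illustration, a rough sketch of what such detection could look like (not the KIP's actual implementation; names are made up):

private static final java.lang.reflect.Method GET_SECURITY_MANAGER = findGetSecurityManager();

private static java.lang.reflect.Method findGetSecurityManager() {
    try {
        // Present on JDKs that still ship the SecurityManager API.
        return System.class.getMethod("getSecurityManager");
    } catch (NoSuchMethodException e) {
        return null; // running on a JDK where the API has been removed
    }
}

static Object currentSecurityManagerOrNull() {
    if (GET_SECURITY_MANAGER == null) {
        return null;
    }
    try {
        return GET_SECURITY_MANAGER.invoke(null); // static method, no receiver
    } catch (ReflectiveOperationException e) {
        return null;
    }
}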

Thanks,
Mickael

On Tue, Nov 21, 2023 at 8:38 AM Greg Harris
 wrote:
>
> Hey Ashwin,
>
> Thanks for your question!
>
> I believe we have only removed support for two Java versions:
> 7: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-118%3A+Drop+Support+for+Java+7
> in 2.0
> 8: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181308223
> in 4.0
>
> In both cases, we changed the gradle sourceCompatibility and
> targetCompatibility at the same time, which I believe changes the
> "-target" option in javac.
>
> We have no plans currently for dropping support for 11 or 17, but I
> presume they would work in much the same way.
>
> Hope this helps!
> Greg
>
> On Mon, Nov 20, 2023 at 11:19 PM Ashwin  wrote:
> >
> > Hi Greg,
> >
> > Thanks for writing this KIP.
> > I agree with you that handling this now will help us react to the
> > deprecation of SecurityManager, whenever it happens.
> >
> > I had a question regarding how we deprecate JDKs supported by Apache Kafka.
> > When we drop support for JDK 17, will we set the “-target” option of Javac
> > such that the resulting JARs will not load in JVMs which are lesser than or
> > equal to that version ?
> >
> > Thanks,
> > Ashwin
> >
> >
> > On Tue, Nov 21, 2023 at 6:18 AM Greg Harris 
> > wrote:
> >
> > > Hi all,
> > >
> > > I'd like to invite you all to discuss removing SecurityManager support
> > > from Kafka. This affects the client and server SASL mechanism, Tiered
> > > Storage, and Connect classloading.
> > >
> > > Find the KIP here:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-1006%3A+Remove+SecurityManager+Support
> > >
> > > I think this is a "code higiene" effort that doesn't need to be dealt
> > > with urgently, but it would prevent a lot of headache later when Java
> > > does decide to remove support.
> > >
> > > If you are currently using the SecurityManager with Kafka, I'd really
> > > appreciate hearing how you're using it, and how you're planning around
> > > its removal.
> > >
> > > Thanks!
> > > Greg Harris
> > >


[jira] [Resolved] (KAFKA-16478) Links for Kafka 3.5.2 release are broken

2024-04-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16478.

Resolution: Fixed

> Links for Kafka 3.5.2 release are broken
> 
>
> Key: KAFKA-16478
> URL: https://issues.apache.org/jira/browse/KAFKA-16478
> Project: Kafka
>  Issue Type: Bug
>  Components: website
>Affects Versions: 3.5.2
>Reporter: Philipp Trulson
>    Assignee: Mickael Maison
>Priority: Major
>
> While trying to update our setup, I noticed that the download links for the 
> 3.5.2 release are broken. They all point to a different host and also contain 
> an additional `/kafka` in their URL. Compare:
> not working:
> [https://downloads.apache.org/kafka/kafka/3.5.2/RELEASE_NOTES.html]
> working:
> [https://archive.apache.org/dist/kafka/3.5.1/RELEASE_NOTES.html]
> [https://downloads.apache.org/kafka/3.6.2/RELEASE_NOTES.html]
> This goes for all links in the release - archives, checksums, signatures.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New committer: Christo Lolov

2024-03-26 Thread Mickael Maison
Congratulations Christo!

On Tue, Mar 26, 2024 at 2:26 PM Chia-Ping Tsai  wrote:
>
> Congrats Christo!
>
> Chia-Ping


[jira] [Resolved] (KAFKA-15882) Scheduled nightly github actions workflow for CVE reports on published docker images

2024-03-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15882.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Scheduled nightly github actions workflow for CVE reports on published docker 
> images
> 
>
> Key: KAFKA-15882
> URL: https://issues.apache.org/jira/browse/KAFKA-15882
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Vedarth Sharma
>Assignee: Vedarth Sharma
>Priority: Major
> Fix For: 3.8.0
>
>
> This scheduled github actions workflow will check supported published docker 
> images for CVEs and generate reports.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16206) KRaftMigrationZkWriter tries to delete deleted topic configs twice

2024-03-21 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16206.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaftMigrationZkWriter tries to delete deleted topic configs twice
> --
>
> Key: KAFKA-16206
> URL: https://issues.apache.org/jira/browse/KAFKA-16206
> Project: Kafka
>  Issue Type: Bug
>  Components: kraft, migration
>Reporter: David Arthur
>Assignee: Alyssa Huang
>Priority: Minor
> Fix For: 3.8.0
>
>
> When deleting a topic, we see spurious ERROR logs from 
> kafka.zk.migration.ZkConfigMigrationClient:
>  
> {code:java}
> Did not delete ConfigResource(type=TOPIC, name='xxx') since the node did not 
> exist. {code}
> This seems to happen because ZkTopicMigrationClient#deleteTopic is deleting 
> the topic, partitions, and config ZNodes in one shot. Subsequent calls from 
> KRaftMigrationZkWriter to delete the config encounter a NO_NODE since the 
> ZNode is already gone.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-981: Manage Connect topics with custom implementation of Admin

2024-03-14 Thread Mickael Maison
Hi Omnia,

+1 (binding), thanks for the KIP

Mickael

On Tue, Mar 5, 2024 at 10:46 AM Omnia Ibrahim  wrote:
>
> Hi everyone, I would like to start the vote on KIP-981: Manage Connect topics 
> with custom implementation of Admin 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-981%3A+Manage+Connect+topics+with+custom+implementation+of+Admin
>
> Thanks
> Omnia


[DISCUSS] Personal branches under apache/kafka

2024-03-13 Thread Mickael Maison
Hi,

We have accumulated a number of personal branches in the github
repository: https://github.com/apache/kafka/branches/all

All these branches have been created by committers for various
reasons, bugfix, tests.

I wonder if we should avoid creating branches in the apache repository
(always use your own fork like regular contributors) and in the rare
cases this is necessary ensure we delete them once done? This way we
would only have branches for the various releases (3.7, 3.6, etc).

What do you think?

Thanks,
Mickael


[jira] [Created] (KAFKA-16355) ConcurrentModificationException in InMemoryTimeOrderedKeyValueBuffer.evictWhile

2024-03-08 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16355:
--

 Summary: ConcurrentModificationException in 
InMemoryTimeOrderedKeyValueBuffer.evictWhile
 Key: KAFKA-16355
 URL: https://issues.apache.org/jira/browse/KAFKA-16355
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 3.5.1
Reporter: Mickael Maison


While a Streams application was restoring its state after an outage, it hit the 
following:

org.apache.kafka.streams.errors.StreamsException: Exception caught in process. 
taskId=0_16, processor=KSTREAM-SOURCE-00, topic=, partition=16, 
offset=454875695, stacktrace=java.util.ConcurrentModificationException
at java.base/java.util.TreeMap$PrivateEntryIterator.remove(TreeMap.java:1507)
at 
org.apache.kafka.streams.state.internals.InMemoryTimeOrderedKeyValueBuffer.evictWhile(InMemoryTimeOrderedKeyValueBuffer.java:423)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.enforceConstraints(KTableSuppressProcessorSupplier.java:178)
at 
org.apache.kafka.streams.kstream.internals.suppress.KTableSuppressProcessorSupplier$KTableSuppressProcessor.process(KTableSuppressProcessorSupplier.java:165)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.kstream.internals.TimestampedCacheFlushListener.apply(TimestampedCacheFlushListener.java:45)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$setFlushListener$4(MeteredWindowStore.java:181)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.putAndMaybeForward(CachingWindowStore.java:124)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.lambda$initInternal$0(CachingWindowStore.java:99)
at 
org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:158)
at 
org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:252)
at 
org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:302)
at 
org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:179)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:173)
at 
org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:47)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.lambda$put$5(MeteredWindowStore.java:201)
at 
org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:872)
at 
org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:200)
at 
org.apache.kafka.streams.processor.internals.AbstractReadWriteDecorator$WindowStoreReadWriteDecorator.put(AbstractReadWriteDecorator.java:201)
at 
org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:138)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:157)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamPeek$KStreamPeekProcessor.process(KStreamPeek.java:42)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:228)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:215)
at 
org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
at 
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:159)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forwardInternal(ProcessorContextImpl.java:290)
at 
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:269

[jira] [Created] (KAFKA-16347) Bump ZooKeeper to 3.8.4

2024-03-06 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16347:
--

 Summary: Bump ZooKeeper to 3.8.4
 Key: KAFKA-16347
 URL: https://issues.apache.org/jira/browse/KAFKA-16347
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.6.1, 3.7.0
Reporter: Mickael Maison
Assignee: Mickael Maison


ZooKeeper 3.8.4 was released and contains a few CVE fixes: 
https://zookeeper.apache.org/doc/r3.8.4/releasenotes.html

We should update 3.6, 3.7 and trunk to use this new ZooKeeper release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Requesting permissions to contribute to Apache Kafka.

2024-03-01 Thread Mickael Maison
Hi Damien,

I've granted you permissions in both Jira and Confluence.

Thanks,
Mickael

On Fri, Mar 1, 2024 at 2:43 PM Damien Gasparina  wrote:
>
> Hi team,
>
> I would like permission to contribute to Kafka.
> My wiki ID is "d.gasparina" and my Jira ID is "Dabz".
>
> I would like to propose a KIP to improve Kafka Streams error and exception
> handling.
>
> Cheers,
> Damien


[jira] [Created] (KAFKA-16318) Add javadoc to KafkaMetric

2024-03-01 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16318:
--

 Summary: Add javadoc to KafkaMetric
 Key: KAFKA-16318
 URL: https://issues.apache.org/jira/browse/KAFKA-16318
 Project: Kafka
  Issue Type: Bug
  Components: docs
Reporter: Mickael Maison


KafkaMetric is part of the public API but it's missing javadoc describing the 
class and several of its methods.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Apache Kafka 3.7.0

2024-02-27 Thread Mickael Maison
Thanks to all the contributors and thank you Stanislav for running the release!


On Tue, Feb 27, 2024 at 7:03 PM Stanislav Kozlovski
 wrote:
>
> The Apache Kafka community is pleased to announce the release of
> Apache Kafka 3.7.0
>
> This is a minor release that includes new features, fixes, and
> improvements from 296 JIRAs
>
> An overview of the release and its notable changes can be found in the
> release blog post:
> https://kafka.apache.org/blog#apache_kafka_370_release_announcement
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/3.7.0/RELEASE_NOTES.html
>
> You can download the source and binary release (Scala 2.12, 2.13) from:
> https://kafka.apache.org/downloads#3.7.0
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you to the following 146 contributors to this release!
> (Please report an unintended omission)
>
> Abhijeet Kumar, Akhilesh Chaganti, Alieh, Alieh Saeedi, Almog Gavra,
> Alok Thatikunta, Alyssa Huang, Aman Singh, Andras Katona, Andrew
> Schofield, Anna Sophie Blee-Goldman, Anton Agestam, Apoorv Mittal,
> Arnout Engelen, Arpit Goyal, Artem Livshits, Ashwin Pankaj,
> ashwinpankaj, atu-sharm, bachmanity1, Bob Barrett, Bruno Cadonna,
> Calvin Liu, Cerchie, chern, Chris Egerton, Christo Lolov, Colin
> Patrick McCabe, Colt McNealy, Crispin Bernier, David Arthur, David
> Jacot, David Mao, Deqi Hu, Dimitar Dimitrov, Divij Vaidya, Dongnuo
> Lyu, Eaugene Thomas, Eduwer Camacaro, Eike Thaden, Federico Valeri,
> Florin Akermann, Gantigmaa Selenge, Gaurav Narula, gongzhongqiang,
> Greg Harris, Guozhang Wang, Gyeongwon, Do, Hailey Ni, Hanyu Zheng, Hao
> Li, Hector Geraldino, hudeqi, Ian McDonald, Iblis Lin, Igor Soarez,
> iit2009060, Ismael Juma, Jakub Scholz, James Cheng, Jason Gustafson,
> Jay Wang, Jeff Kim, Jim Galasyn, John Roesler, Jorge Esteban Quilcate
> Otoya, Josep Prat, José Armando García Sancio, Jotaniya Jeel, Jouni
> Tenhunen, Jun Rao, Justine Olshan, Kamal Chandraprakash, Kirk True,
> kpatelatwork, kumarpritam863, Laglangyue, Levani Kokhreidze, Lianet
> Magrans, Liu Zeyu, Lucas Brutschy, Lucia Cerchie, Luke Chen, maniekes,
> Manikumar Reddy, mannoopj, Maros Orsak, Matthew de Detrich, Matthias
> J. Sax, Max Riedel, Mayank Shekhar Narula, Mehari Beyene, Michael
> Westerby, Mickael Maison, Nick Telford, Nikhil Ramakrishnan, Nikolay,
> Okada Haruki, olalamichelle, Omnia G.H Ibrahim, Owen Leung, Paolo
> Patierno, Philip Nee, Phuc-Hong-Tran, Proven Provenzano, Purshotam
> Chauhan, Qichao Chu, Matthias J. Sax, Rajini Sivaram, Renaldo Baur
> Filho, Ritika Reddy, Robert Wagner, Rohan, Ron Dagostino, Roon, runom,
> Ruslan Krivoshein, rykovsi, Sagar Rao, Said Boudjelda, Satish Duggana,
> shuoer86, Stanislav Kozlovski, Taher Ghaleb, Tang Yunzi, TapDang,
> Taras Ledkov, tkuramoto33, Tyler Bertrand, vamossagar12, Vedarth
> Sharma, Viktor Somogyi-Vass, Vincent Jiang, Walker Carlson,
> Wuzhengyu97, Xavier Léauté, Xiaobing Fang, yangy, Ritika Reddy,
> Yanming Zhou, Yash Mayya, yuyli, zhaohaidao, Zihao Lin, Ziming Deng
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
>
> Stanislav Kozlovski
> Release Manager for Apache Kafka 3.7.0
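
For readers new to the APIs described in the announcement above, a minimal sketch of the Producer and Consumer APIs; the topic name, group id and bootstrap address are placeholders, not taken from the announcement:

{code}
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class QuickstartSketch {
    public static void main(String[] args) {
        // Producer API: publish a record to a topic
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("demo-topic", "key", "value"));
        }

        // Consumer API: subscribe to the topic and process the records produced to it
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("demo-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}
{code}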


Re: [VOTE] 3.7.0 RC4

2024-02-25 Thread Mickael Maison
ivij Vaidya
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Feb 20, 2024 at 3:02 PM Stanislav Kozlovski
> > > > >  wrote:
> > > > >
> > > > > > Thanks for testing the release! And thanks for the review on the
> > > > > > documentation. Good catch on the license too.
> > > > > >
> > > > > > I have addressed the comments in the blog PR, and opened a few
> > other
> > > > PRs
> > > > > to
> > > > > > the website in relation to the release.
> > > > > >
> > > > > > - 37: Add download section for the latest 3.7 release
> > > > > > <https://github.com/apache/kafka-site/pull/583/files>
> > > > > > - 37: Update default docs to point to the 3.7.0 release docs
> > > > > > <https://github.com/apache/kafka-site/pull/582>
> > > > > > - 3.7: Add blog post for Kafka 3.7
> > > > > > <https://github.com/apache/kafka-site/pull/578>
> > > > > > - MINOR: Update stale upgrade_3_6_0 header links in documentation
> > > > > > <https://github.com/apache/kafka-site/pull/580>
> > > > > > - 37: Add upgrade notes for the 3.7.0 release
> > > > > > <https://github.com/apache/kafka-site/pull/581>
> > > > > >
> > > > > > I am a bit unclear on the precise process regarding what parts of
> > > this
> > > > > get
> > > > > > merged at what time, and whether the release first needs to be done
> > > or
> > > > > not.
> > > > > >
> > > > > > Best,
> > > > > > Stanislav
> > > > > >
> > > > > > On Mon, Feb 19, 2024 at 8:34 PM Divij Vaidya <
> > > divijvaidy...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Great. In that case we can fix the license issue
> > retrospectively. I
> > > > > have
> > > > > > > created a JIRA for it
> > > > > https://issues.apache.org/jira/browse/KAFKA-16278
> > > > > > > and
> > > > > > > also updated the release process (which redirects to
> > > > > > > https://issues.apache.org/jira/browse/KAFKA-12622) to check for
> > > the
> > > > > > > correct
> > > > > > > license in both the kafka binaries.
> > > > > > >
> > > > > > > I am +1 (binding) assuming Mickael's concerns about update notes
> > to
> > > > 3.7
> > > > > > are
> > > > > > > addressed before release.
> > > > > > >
> > > > > > > --
> > > > > > > Divij Vaidya
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Feb 19, 2024 at 6:08 PM Mickael Maison <
> > > > > mickael.mai...@gmail.com
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I agree with Josep, I don't think it's worth making a new RC
> > just
> > > > for
> > > > > > > this.
> > > > > > > >
> > > > > > > > Thanks Stanislav for sharing the test results. The last thing
> > > > holding
> > > > > > > > me from casting my vote is the missing upgrade notes for 3.7.0.
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Mickael
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On Mon, Feb 19, 2024 at 4:28 PM Josep Prat
> > > > >  > > > > > >
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > I think I remember finding a similar problem (NOTICE_binary)
> > > and
> > > > it
> > > > > > > > didn't
> > > > > > > > > qualify for an extra RC
> > > > > > > > >
> > > > > > > > > Best,
> > > > >

Re: [VOTE] KIP-390: Support Compression Level (rebooted)

2024-02-21 Thread Mickael Maison
Hi Jun,

Good catch!
The new configuration is indeed compression.zstd.level instead of
compression.snappy.level. I've updated the KIP.

Thanks,
Mickael

On Wed, Feb 21, 2024 at 7:38 PM Jun Rao  wrote:
>
> Hi, Mickael,
>
> Thanks for the updated KIP.
>
> There is a typo. The KIP says that it adds a new option
> compression.snappy.level,
> but later says that Snappy is excluded.
>
> Otherwise, the changes look good to me.
>
> Jun
>
>
> On Wed, Feb 7, 2024 at 6:40 AM Mickael Maison 
> wrote:
>
> > Hi Divij,
> >
> > Thanks for bringing that point. After reading KIP-984, I don't think
> > it supersedes KIP-390/KIP-780. Being able to tune the built-in codecs
> > would directly benefit many users. It may also cover some scenarios
> > that motivated KIP-984 without requiring users to write a custom
> > codec.
> > I've not commented in the KIP-984 thread yet but at the moment it
> > seems very light on details (no proposed API for codecs, no
> > explanations of error scenarios if clients or brokers don't have
> > compatible codecs), including the motivation which is important when
> > exposing new APIs. On the other hand, KIP-390/KIP-780 have much more
> > details with benchmarks to support the motivation.
> >
> > In my opinion starting with the compression level (KIP-390) is a good
> > first step and I think we should focus on that and deliver it. I
> > believe one of the reasons KIP-780 wasn't voted is because we never
> > delivered KIP-390 and nobody was keen on building a KIP on top of
> > another undelivered KIP.
> >
> > Thanks,
> > Mickael
> >
> >
> > On Wed, Feb 7, 2024 at 12:27 PM Divij Vaidya 
> > wrote:
> > >
> > > Hey Mickael
> > >
> > > Since this KIP was written, we have a new proposal to make the
> > compression
> > > completely pluggable
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-984%3A+Add+pluggable+compression+interface+to+Kafka
> > .
> > > If we implement that KIP, would it supersede the need for adding fine
> > grain
> > > compression controls in Kafka?
> > >
> > > It might be beneficial to have a joint proposal of these two KIPs which
> > may
> > > satisfy both use cases.
> > >
> > > --
> > > Divij Vaidya
> > >
> > >
> > >
> > > On Wed, Feb 7, 2024 at 11:14 AM Mickael Maison  > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I'm resurrecting this old thread as this KIP would be a nice
> > > > improvement and almost 3 years later the PR for this KIP has still not
> > > > been merged!
> > > >
> > > > The reason is that during reviews we noticed the proposed
> > > > configuration, compression.level, was not easy to use as each codec
> > > > has its own valid range of levels [0].
> > > >
> > > > As proposed by Jun in the PR [1], I updated the KIP to use
> > > > compression.<codec>.level configurations instead of a single
> > > > compression.level setting. This syntax would also line up with the
> > > > proposal to add per-codec configuration options from KIP-780 [2]
> > > > (still to be voted). I moved the original proposal to the rejected
> > > > section.
> > > >
> > > > I've put the original voters and KIP author on CC. Let me know if you
> > > > have any feedback.
> > > >
> > > > 0: https://github.com/apache/kafka/pull/10826
> > > > 1: https://github.com/apache/kafka/pull/10826#issuecomment-1795952612
> > > > 2:
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > >
> > > > On Fri, Jun 11, 2021 at 10:00 AM Dongjin Lee 
> > wrote:
> > > > >
> > > > > This KIP is now passed with:
> > > > >
> > > > > - binding: +3 (Ismael, Tom, Konstantine)
> > > > > - non-binding: +1 (Ryanne)
> > > > >
> > > > > Thanks again to all the supporters. I also updated the KIP by moving
> > the
> > > > > compression buffer option into the 'Future Works' section, as Ismael
> > > > > proposed.
> > > > >
> > > > > Best,
> > > > > Dongjin
> > > > >
> > > > >
> > > > >
> > > >
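
As a concrete illustration of the per-codec setting discussed in this thread, a sketch of a producer opting into a specific zstd level; compression.zstd.level is the name confirmed above, while the chosen level, serializers and bootstrap address are just example values:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ZstdLevelSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Pick the codec with the existing compression.type setting...
        props.put("compression.type", "zstd");
        // ...and tune its level with the per-codec option from KIP-390
        // (the valid range, and the value 6 used here, depend on the codec)
        props.put("compression.zstd.level", "6");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // produce as usual; batches are now compressed with zstd at the chosen level
        }
    }
}
{code}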

[jira] [Created] (KAFKA-16292) Revamp upgrade.html page

2024-02-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16292:
--

 Summary: Revamp upgrade.html page 
 Key: KAFKA-16292
 URL: https://issues.apache.org/jira/browse/KAFKA-16292
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Mickael Maison


At the moment we keep adding to this page for each release. The upgrade.html 
file is now over 2000 lines long. It still contains steps for upgrading from 0.8 
to 0.9! These steps are already in the docs for each version which can be 
accessed via /<version>/documentation.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] 3.7.0 RC4

2024-02-19 Thread Mickael Maison
ch-builder/6057
> > > >>
> > > >> *kafkatest.tests.core.upgrade_test.TestUpgrade#test_upgradeArguments:{
> > > >> "compression_types": [ "zstd" ], "from_kafka_version": "2.4.1",
> > > >> "to_message_format_version": null}*
> > > >> Fails with the same error of
> > > >> *`TimeoutError('Producer failed to produce messages for 20s.')`*
> > > >> *kafkatest.tests.core.upgrade_test.TestUpgrade#test_upgradeArguments:{
> > > >> "compression_types": [ "lz4" ], "from_kafka_version": "3.0.2",
> > > >> "to_message_format_version": null}*
> > > >> Fails with the same error of *`TimeoutError('Producer failed to
> > produce
> > > >> messages for 20s.')`*
> > > >>
> > > >> I have scheduled a re-run of this test here ->
> > > >>
> > https://jenkins.confluent.io/job/system-test-kafka-branch-builder/6058/
> > > >>
> > > >> On Fri, Feb 16, 2024 at 12:15 PM Vedarth Sharma <
> > > vedarth.sha...@gmail.com>
> > > >> wrote:
> > > >>
> > > >>> Hey Stanislav,
> > > >>>
> > > >>> Thanks for the release candidate.
> > > >>>
> > > >>> +1 (non-binding)
> > > >>>
> > > >>> I tested and verified the docker image artifact
> > > apache/kafka:3.7.0-rc4:-
> > > >>> - verified create topic, produce messages and consume messages flow
> > > when
> > > >>> running the docker image with
> > > >>> - default configs
> > > >>> - configs provided via env variables
> > > >>> - configs provided via file input
> > > >>> - verified the html documentation for docker image.
> > > >>> - ran the example docker compose files successfully.
> > > >>>
> > > >>> All looks good for the docker image artifact!
> > > >>>
> > > >>> Thanks and regards,
> > > >>> Vedarth
> > > >>>
> > > >>>
> > > >>> On Thu, Feb 15, 2024 at 10:58 PM Mickael Maison <
> > > >>> mickael.mai...@gmail.com>
> > > >>> wrote:
> > > >>>
> > > >>> > Hi Stanislav,
> > > >>> >
> > > >>> > Thanks for running the release.
> > > >>> >
> > > >>> > I did the following testing:
> > > >>> > - verified the check sums and signatures
> > > >>> > - ran ZooKeeper and KRaft quickstarts with Scala 2.13 binaries
> > > >>> > - ran a successful migration from ZooKeeper to KRaft
> > > >>> >
> > > >>> > We seem to be missing the upgrade notes for 3.7.0 in the docs. See
> > > >>> > https://kafka.apache.org/37/documentation.html#upgrade that still
> > > >>> > points to 3.6.0
> > > >>> > Before voting I'd like to see results from the system tests too.
> > > >>> >
> > > >>> > Thanks,
> > > >>> > Mickael
> > > >>> >
> > > >>> > On Thu, Feb 15, 2024 at 6:06 PM Andrew Schofield
> > > >>> >  wrote:
> > > >>> > >
> > > >>> > > +1 (non-binding). I used the staged binaries with Scala 2.13. I
> > > tried
> > > >>> > the new group coordinator
> > > >>> > > and consumer group protocol which is included with the Early
> > Access
> > > >>> > release of KIP-848.
> > > >>> > > Also verified the availability of the new APIs. All working as
> > > >>> expected.
> > > >>> > >
> > > >>> > > Thanks,
> > > >>> > > Andrew
> > > >>> > >
> > > >>> > > > On 15 Feb 2024, at 05:07, Paolo Patierno <
> > > paolo.patie...@gmail.com
> > > >>> >
> > > >>> > wrote:
> > > >>> > > >
> > > >>> > > > +1 (non-binding). I used the staged binaries with Scala 2.13
> > and
> > > >>> mostly
> > > >>> > > > focused on the ZooKeeper to KRaft migration with multipl

[jira] [Resolved] (KAFKA-13566) producer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13566.

Resolution: Duplicate

> producer exponential backoff implementation
> ---
>
> Key: KAFKA-13566
> URL: https://issues.apache.org/jira/browse/KAFKA-13566
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13567) adminClient exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13567.

Resolution: Duplicate

> adminClient exponential backoff implementation
> --
>
> Key: KAFKA-13567
> URL: https://issues.apache.org/jira/browse/KAFKA-13567
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13565) consumer exponential backoff implementation

2024-02-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-13565.

Fix Version/s: 3.7.0
   Resolution: Duplicate

> consumer exponential backoff implementation
> ---
>
> Key: KAFKA-13565
> URL: https://issues.apache.org/jira/browse/KAFKA-13565
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Luke Chen
>Priority: Major
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] 3.7.0 RC4

2024-02-15 Thread Mickael Maison
Hi Stanislav,

Thanks for running the release.

I did the following testing:
- verified the check sums and signatures
- ran ZooKeeper and KRaft quickstarts with Scala 2.13 binaries
- ran a successful migration from ZooKeeper to KRaft

We seem to be missing the upgrade notes for 3.7.0 in the docs. See
https://kafka.apache.org/37/documentation.html#upgrade that still
points to 3.6.0
Before voting I'd like to see results from the system tests too.

Thanks,
Mickael

On Thu, Feb 15, 2024 at 6:06 PM Andrew Schofield
 wrote:
>
> +1 (non-binding). I used the staged binaries with Scala 2.13. I tried the new 
> group coordinator
> and consumer group protocol which is included with the Early Access release 
> of KIP-848.
> Also verified the availability of the new APIs. All working as expected.
>
> Thanks,
> Andrew
>
> > On 15 Feb 2024, at 05:07, Paolo Patierno  wrote:
> >
> > +1 (non-binding). I used the staged binaries with Scala 2.13 and mostly
> > focused on the ZooKeeper to KRaft migration with multiple tests. Everything
> > works fine.
> >
> > Thanks
> > Paolo
> >
> > On Mon, 12 Feb 2024, 22:06 Jakub Scholz,  wrote:
> >
> >> +1 (non-binding). I used the staged binaries with Scala 2.13 and the staged
> >> Maven artifacts to run my tests. All seems to work fine. Thanks.
> >>
> >> Jakub
> >>
> >> On Fri, Feb 9, 2024 at 4:20 PM Stanislav Kozlovski
> >>  wrote:
> >>
> >>> Hello Kafka users, developers and client-developers,
> >>>
> >>> This is the second candidate we are considering for release of Apache
> >> Kafka
> >>> 3.7.0.
> >>>
> >>> Major changes include:
> >>> - Early Access to KIP-848 - the next generation of the consumer rebalance
> >>> protocol
> >>> - Early Access to KIP-858: Adding JBOD support to KRaft
> >>> - KIP-714: Observability into Client metrics via a standardized interface
> >>>
> >>> Release notes for the 3.7.0 release:
> >>>
> >>>
> >> https://home.apache.org/~stanislavkozlovski/kafka-3.7.0-rc4/RELEASE_NOTES.html
> >>>
> >>> *** Please download, test and vote by Thursday, February 15th, 9AM PST
> >> ***
> >>>
> >>> Kafka's KEYS file containing PGP keys we use to sign the release:
> >>> https://kafka.apache.org/KEYS
> >>>
> >>> * Release artifacts to be voted upon (source and binary):
> >>> https://home.apache.org/~stanislavkozlovski/kafka-3.7.0-rc4/
> >>>
> >>> * Docker release artifact to be voted upon:
> >>> apache/kafka:3.7.0-rc4
> >>>
> >>> * Maven artifacts to be voted upon:
> >>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>>
> >>> * Javadoc:
> >>> https://home.apache.org/~stanislavkozlovski/kafka-3.7.0-rc4/javadoc/
> >>>
> >>> * Tag to be voted upon (off 3.7 branch) is the 3.7.0 tag:
> >>> https://github.com/apache/kafka/releases/tag/3.7.0-rc4
> >>>
> >>> * Documentation:
> >>> https://kafka.apache.org/37/documentation.html
> >>>
> >>> * Protocol:
> >>> https://kafka.apache.org/37/protocol.html
> >>>
> >>> * Successful Jenkins builds for the 3.7 branch:
> >>>
> >>> Unit/integration tests: I am in the process of running and analyzing
> >> these.
> >>> System tests: I am in the process of running these.
> >>>
> >>> Expect a follow-up over the weekend
> >>>
> >>> * Successful Docker Image Github Actions Pipeline for 3.7 branch:
> >>> Docker Build Test Pipeline:
> >>> https://github.com/apache/kafka/actions/runs/7845614846
> >>>
> >>> /**
> >>>
> >>> Best,
> >>> Stanislav
> >>>
> >>
>


[jira] [Resolved] (KAFKA-14576) Move ConsoleConsumer to tools

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14576.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Move ConsoleConsumer to tools
> -
>
> Key: KAFKA-14576
> URL: https://issues.apache.org/jira/browse/KAFKA-14576
> Project: Kafka
>  Issue Type: Sub-task
>    Reporter: Mickael Maison
>        Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14822) Allow restricting File and Directory ConfigProviders to specific paths

2024-02-13 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-14822.

Fix Version/s: 3.8.0
 Assignee: Gantigmaa Selenge  (was: Mickael Maison)
   Resolution: Fixed

> Allow restricting File and Directory ConfigProviders to specific paths
> --
>
> Key: KAFKA-14822
> URL: https://issues.apache.org/jira/browse/KAFKA-14822
> Project: Kafka
>  Issue Type: Improvement
>    Reporter: Mickael Maison
>Assignee: Gantigmaa Selenge
>Priority: Major
>  Labels: need-kip
> Fix For: 3.8.0
>
>
> In sensitive environments, it would be interesting to be able to restrict the 
> files that can be accessed by the built-in configuration providers.
> For example:
> config.providers=directory
> config.providers.directory.class=org.apache.kafka.connect.configs.DirectoryConfigProvider
> config.providers.directory.path=/var/run
> Then if a caller tries to access another path, for example
> ssl.keystore.password=${directory:/etc/passwd:keystore-password}
> it would be rejected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
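
To make the behaviour described in the ticket above concrete, a sketch using the DirectoryConfigProvider API directly; the "allowed.paths" property name and the rejection outcome are assumptions for illustration, since the ticket only describes the intent:

{code}
import java.util.Map;
import java.util.Set;
import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.DirectoryConfigProvider;

public class RestrictedProviderSketch {
    public static void main(String[] args) {
        DirectoryConfigProvider provider = new DirectoryConfigProvider();
        // Hypothetical restriction: only paths under /var/run may be read
        provider.configure(Map.of("allowed.paths", "/var/run"));

        // Allowed: resolves the file /var/run/secrets/keystore-password as a config value
        ConfigData allowed = provider.get("/var/run/secrets", Set.of("keystore-password"));

        // Expected to be rejected once the restriction described in the ticket is in place
        ConfigData rejected = provider.get("/etc", Set.of("passwd"));
    }
}
{code}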


[jira] [Created] (KAFKA-16246) Cleanups in ConsoleConsumer

2024-02-13 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16246:
--

 Summary: Cleanups in ConsoleConsumer
 Key: KAFKA-16246
 URL: https://issues.apache.org/jira/browse/KAFKA-16246
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Mickael Maison


When rewriting ConsoleConsumer in Java, in order to keep the conversion and 
review process simple we mimicked the logic flow and types used in the Scala 
implementation.

Once the rewrite is merged, we should refactor some of the logic to make it 
more Java-like. This includes removing Optional where it makes sense and moving 
all the argument checking logic into ConsoleConsumerOptions.


See https://github.com/apache/kafka/pull/15274 for pointers.

  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16238) ConnectRestApiTest broken after KIP-1004

2024-02-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16238.

Fix Version/s: 3.8.0
   Resolution: Fixed

> ConnectRestApiTest broken after KIP-1004
> 
>
> Key: KAFKA-16238
> URL: https://issues.apache.org/jira/browse/KAFKA-16238
> Project: Kafka
>  Issue Type: Improvement
>  Components: connect
>        Reporter: Mickael Maison
>    Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> KIP-1004 introduced a new configuration for connectors: 'tasks.max.enforce'.
> The ConnectRestApiTest system test needs to be updated to expect the new 
> configuration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16238) ConnectRestApiTest broken after KIP-1004

2024-02-09 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16238:
--

 Summary: ConnectRestApiTest broken after KIP-1004
 Key: KAFKA-16238
 URL: https://issues.apache.org/jira/browse/KAFKA-16238
 Project: Kafka
  Issue Type: Improvement
  Components: connect
Reporter: Mickael Maison
Assignee: Mickael Maison


KIP-1004 introduced a new configuration for connectors: 'tasks.max.enforce'.

The ConnectRestApiTest system test needs to be updated to expect the new 
configuration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2024-02-08 Thread Mickael Maison
Hi,

Thanks for the updates.
I'm wondering whether we really need the ttl eviction mechanism. The
motivation is to "avoid storing stale LRO entries which can cause an
eventual OOM error". How could it contain stake entries? I would
expect its cache to only contain entries for partitions assigned to
the task that owns it. Also what is the expected behavior if there's
no available LRO in the cache? If we keep this mechanism what happens
if its value is lower than
replication.record.lag.metric.refresh.interval?

Thanks,
Mickael

On Mon, Feb 5, 2024 at 5:23 PM Elxan Eminov  wrote:
>
> Hi Mickael!
> Any further thoughts on this?
>
> Thanks,
> Elkhan
>
> On Thu, 18 Jan 2024 at 11:53, Mickael Maison 
> wrote:
>
> > Hi Elxan,
> >
> > Thanks for the updates.
> >
> > We used dots to separate words in configuration names, so I think
> > replication.offset.lag.metric.last-replicated-offset.ttl should be
> > named replication.offset.lag.metric.last.replicated.offset.ttl
> > instead.
> >
> > About the names of the metrics, fair enough if you prefer keeping the
> > replication prefix. Out of the alternatives you mentioned, I think I
> > prefer replication-record-lag. I think the metrics and configuration
> > names should match too. Let's see what the others think about it.
> >
> > Thanks,
> > Mickael
> >
> > On Mon, Jan 15, 2024 at 9:50 PM Elxan Eminov 
> > wrote:
> > >
> > > Apologies, forgot to reply on your last comment about the metric name.
> > > I believe both replication-lag and record-lag are a little too abstract -
> > > what do you think about either leaving it as replication-offset-lag or
> > > renaming to replication-record-lag?
> > >
> > > Thanks
> > >
> > > On Wed, 10 Jan 2024 at 15:31, Mickael Maison 
> > > wrote:
> > >
> > > > Hi Elxan,
> > > >
> > > > Thanks for the KIP, it looks like a useful addition.
> > > >
> > > > Can you add to the KIP the default value you propose for
> > > > replication.lag.metric.refresh.interval? In MirrorMaker most interval
> > > > configs can be set to -1 to disable them, will it be the case for this
> > > > new feature or will this setting only accept positive values?
> > > > I also wonder if replication-lag, or record-lag would be clearer names
> > > > instead of replication-offset-lag, WDYT?
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > > On Wed, Jan 3, 2024 at 6:15 PM Elxan Eminov 
> > > > wrote:
> > > > >
> > > > > Hi all,
> > > > > Here is the vote thread:
> > > > > https://lists.apache.org/thread/ftlnolcrh858dry89sjg06mdcdj9mrqv
> > > > >
> > > > > Cheers!
> > > > >
> > > > > On Wed, 27 Dec 2023 at 11:23, Elxan Eminov 
> > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > > I've updated the KIP with the details we discussed in this thread.
> > > > > > I'll call in a vote after the holidays if everything looks good.
> > > > > > Thanks!
> > > > > >
> > > > > > On Sat, 26 Aug 2023 at 15:49, Elxan Eminov <
> > elxanemino...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > >> Relatively minor change with a new metric for MM2
> > > > > >>
> > > > > >>
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-971%3A+Expose+replication-offset-lag+MirrorMaker2+metric
> > > > > >>
> > > > > >
> > > >
> >
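
To make the questions above concrete, a rough sketch of the kind of per-task cache under discussion; the class shape, names and the -1 sentinel are hypothetical, not taken from the KIP:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.common.TopicPartition;

// Hypothetical cache of last replicated offsets (LRO), one entry per assigned partition
class LastReplicatedOffsetCache {
    private record Entry(long offset, long recordedAtMs) {}
    private final Map<TopicPartition, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMs;

    LastReplicatedOffsetCache(long ttlMs) {
        this.ttlMs = ttlMs;
    }

    void record(TopicPartition tp, long offset) {
        cache.put(tp, new Entry(offset, System.currentTimeMillis()));
    }

    // Returns -1 when there is no (non-expired) LRO, which is exactly the case
    // the thread asks about: what should the metric report then?
    long lastReplicatedOffset(TopicPartition tp) {
        Entry entry = cache.get(tp);
        if (entry == null || System.currentTimeMillis() - entry.recordedAtMs > ttlMs) {
            cache.remove(tp);
            return -1L;
        }
        return entry.offset;
    }
}
{code}

With a shape like this, a ttl shorter than replication.record.lag.metric.refresh.interval could evict an entry before the metric is next computed, which is the interaction the questions above point at.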


[jira] [Resolved] (KAFKA-12937) Mirrormaker2 can only start from the beginning of a topic

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12937.

Resolution: Duplicate

> Mirrormaker2  can only start from the beginning of a topic
> --
>
> Key: KAFKA-12937
> URL: https://issues.apache.org/jira/browse/KAFKA-12937
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.8.0
> Environment: Dockerized environment
>Reporter: Daan Bosch
>Priority: Major
>
> *Goal*:
>  I want to replace Mirrormaker version 1 with Mirrormaker2.
>  To do this I want to:
>  start Mirrormaker2 from the latest offset of every topic
>  stop Mirrormaker1 
>  There should only be a couple of double messages.
> What happened:
>  Mirrormaker2 starts replicating from the start of all topics
> *How to reproduce:*
>  Start two Kafka clusters, A and B
> I produce 3000 messages to cluster A on a topic (TOPIC1)
>  Kafka Connect is running and connected to cluster B
>  Start a Mirrormaker2 task in connect to replicate messages from cluster A. 
> With the option:
>  consumer auto.offset.reset to latest
>  Produce another 3000 messages to cluster A on the same topic (TOPIC1)
> *Expected result:*
>  Cluster B will only contain the messages produced the second time (3000 in 
> total) on TOPIC1
> Actual result:
>  The mirror picks up all messages from the start (6000 in total) and 
> replicates them to cluster B
> *Additional logs:*
>  Logs from the consumer of the Mirrormaker task:
> mirrormaker.log:7581:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO [Consumer 
> clientId=consumer-null-4, groupId=null] Seeking to offset 0 for partition 
> perf-test-8 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7583:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-3 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7585:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-2 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7587:mirrormaker_1 | [2021-06-11 09:31:40,403] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-1 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7589:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-0 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7591:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-7 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7593:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-6 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7595:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-5 (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
>  mirrormaker.log:7597:mirrormaker_1 | [2021-06-11 09:31:40,404] INFO 
> [Consumer clientId=consumer-null-4, groupId=null] Seeking to offset 0 for 
> partition perf-test-4 
> (org.apache.kafka.clients.consumer.KafkaConsumer:1582)
> You can see they are 
> trying to seek to a position and thus overriding the latest offset
>  
> You can see it is doing a seek to position 0 for every partition. which is 
> not what I expected



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
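
For context on the behaviour in the report above, a minimal standalone consumer sketch showing why an explicit seek wins over auto.offset.reset=latest, which matches the "Seeking to offset 0" lines in the logs; the topic, partition and bootstrap address are placeholders:

{code}
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class SeekOverridesAutoOffsetReset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", ByteArrayDeserializer.class.getName());
        props.put("value.deserializer", ByteArrayDeserializer.class.getName());
        // Same setting the MM2 consumer was given in the report
        props.put("auto.offset.reset", "latest");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(List.of(new TopicPartition("perf-test", 0)));
            // auto.offset.reset only applies when the consumer has no position at all;
            // an explicit seek, like the ones visible in the MM2 logs, always wins
            // and replays the partition from the beginning.
            consumer.seekToBeginning(consumer.assignment());
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}
{code}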


[jira] [Resolved] (KAFKA-8259) Build RPM for Kafka

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-8259.
---
Resolution: Won't Do

> Build RPM for Kafka
> ---
>
> Key: KAFKA-8259
> URL: https://issues.apache.org/jira/browse/KAFKA-8259
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Patrick Dignan
>Priority: Minor
>
> RPM packaging eases the installation and deployment of Kafka to make it much 
> more standard.
> I noticed in https://issues.apache.org/jira/browse/KAFKA-1324 [~jkreps] 
> closed the issue because other sources provide packaging.  I think it's 
> worthwhile for the standard, open source project to provide this as a base to 
> reduce redundant work and provide this functionality for users.  Other 
> similar open source software like Elasticsearch create an RPM 
> [https://github.com/elastic/elasticsearch/blob/0ad3d90a36529bf369813ea6253f305e11aff2e9/distribution/packages/build.gradle].
>   This also makes forking internally more maintainable by reducing the amount 
> of work to be done for each version upgrade.
> I have a patch to add this functionality that I will clean up and PR on 
> Github.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-9094) Validate the replicas for partition reassignments triggered through the /admin/reassign_partitions zNode

2024-02-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-9094.
---
Resolution: Won't Do

> Validate the replicas for partition reassignments triggered through the 
> /admin/reassign_partitions zNode
> 
>
> Key: KAFKA-9094
> URL: https://issues.apache.org/jira/browse/KAFKA-9094
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
>
> As was mentioned by [~jsancio] in 
> [https://github.com/apache/kafka/pull/7574#discussion_r337621762] , it would 
> make sense to apply the same replica validation we apply to the KIP-455 
> reassignments API.
> Namely, validate that the replicas:
> * are not empty (e.g [])
> * are not negative (e.g [1,2,-1])
> * are not brokers that are not part of the cluster or otherwise unhealthy 
> (e.g not in /brokers zNode)
> The last liveness validation is subject to comments. We are re-evaluating 
> whether we want to enforce it for the API



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
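
A sketch of the checks listed in the ticket above as a standalone helper; the method and class names are hypothetical, not the actual controller code:

{code}
import java.util.List;
import java.util.Set;

final class ReassignmentValidation {
    // Validate a proposed replica list against the three rules listed in the ticket.
    static void validateReplicas(List<Integer> replicas, Set<Integer> liveBrokers) {
        if (replicas.isEmpty()) {
            throw new IllegalArgumentException("Replica list must not be empty");
        }
        for (int brokerId : replicas) {
            if (brokerId < 0) {
                throw new IllegalArgumentException("Invalid broker id " + brokerId);
            }
            // Liveness check, subject to the open question at the end of the ticket
            if (!liveBrokers.contains(brokerId)) {
                throw new IllegalArgumentException("Broker " + brokerId + " is not a live broker");
            }
        }
    }
}
{code}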


Re: [VOTE] KIP-390: Support Compression Level (rebooted)

2024-02-07 Thread Mickael Maison
Hi Divij,

Thanks for bringing that point. After reading KIP-984, I don't think
it supersedes KIP-390/KIP-780. Being able to tune the built-in codecs
would directly benefit many users. It may also cover some scenarios
that motivated KIP-984 without requiring users to write a custom
codec.
I've not commented in the KIP-984 thread yet but at the moment it
seems very light on details (no proposed API for codecs, no
explanations of error scenarios if clients or brokers don't have
compatible codecs), including the motivation which is important when
exposing new APIs. On the other hand, KIP-390/KIP-780 have much more
details with benchmarks to support the motivation.

In my opinion starting with the compression level (KIP-390) is a good
first step and I think we should focus on that and deliver it. I
believe one of the reasons KIP-780 wasn't voted is because we never
delivered KIP-390 and nobody was keen on building a KIP on top of
another undelivered KIP.

Thanks,
Mickael


On Wed, Feb 7, 2024 at 12:27 PM Divij Vaidya  wrote:
>
> Hey Mickael
>
> Since this KIP was written, we have a new proposal to make the compression
> completely pluggable
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-984%3A+Add+pluggable+compression+interface+to+Kafka.
> If we implement that KIP, would it supersede the need for adding fine grain
> compression controls in Kafka?
>
> It might be beneficial to have a joint proposal of these two KIPs which may
> satisfy both use cases.
>
> --
> Divij Vaidya
>
>
>
> On Wed, Feb 7, 2024 at 11:14 AM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > I'm resurrecting this old thread as this KIP would be a nice
> > improvement and almost 3 years later the PR for this KIP has still not
> > been merged!
> >
> > The reason is that during reviews we noticed the proposed
> > configuration, compression.level, was not easy to use as each codec
> > has its own valid range of levels [0].
> >
> > As proposed by Jun in the PR [1], I updated the KIP to use
> > compression.<codec>.level configurations instead of a single
> > compression.level setting. This syntax would also line up with the
> > proposal to add per-codec configuration options from KIP-780 [2]
> > (still to be voted). I moved the original proposal to the rejected
> > section.
> >
> > I've put the original voters and KIP author on CC. Let me know if you
> > have any feedback.
> >
> > 0: https://github.com/apache/kafka/pull/10826
> > 1: https://github.com/apache/kafka/pull/10826#issuecomment-1795952612
> > 2:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options
> >
> > Thanks,
> > Mickael
> >
> >
> > On Fri, Jun 11, 2021 at 10:00 AM Dongjin Lee  wrote:
> > >
> > > This KIP is now passed with:
> > >
> > > - binding: +3 (Ismael, Tom, Konstantine)
> > > - non-binding: +1 (Ryanne)
> > >
> > > Thanks again to all the supporters. I also updated the KIP by moving the
> > > compression buffer option into the 'Future Works' section, as Ismael
> > > proposed.
> > >
> > > Best,
> > > Dongjin
> > >
> > >
> > >
> > > On Fri, Jun 11, 2021 at 3:03 AM Konstantine Karantasis
> > >  wrote:
> > >
> > > > Makes sense. Looks like a good improvement. Thanks for including the
> > > > evaluation in the proposal Dongjin.
> > > >
> > > > +1 (binding)
> > > >
> > > > Konstantine
> > > >
> > > > On Wed, Jun 9, 2021 at 6:59 PM Dongjin Lee  wrote:
> > > >
> > > > > Thanks Ismel, Tom and Ryanne,
> > > > >
> > > > > I am now updating the KIP about the further works. Sure, You won't be
> > > > > disappointed.
> > > > >
> > > > > As of Present:
> > > > >
> > > > > - binding: +2 (Ismael, Tom)
> > > > > - non-binding: +1 (Ryanne)
> > > > >
> > > > > Anyone else?
> > > > >
> > > > > Best,
> > > > > Dongjin
> > > > >
> > > > > On Thu, Jun 10, 2021 at 2:03 AM Tom Bentley 
> > wrote:
> > > > >
> > > > > > Hi Dongjin,
> > > > > >
> > > > > > Thanks for the KIP, +1 (binding).
> > > > > >
> > > > > > Kind regards,
> > > > > >
> > > > > > Tom
> > > > > >
> > > > > > On Wed, Jun 9, 2021 at 5:16 PM Ismael Juma 
> > wrote:
&

Re: [VOTE] KIP-390: Support Compression Level (rebooted)

2024-02-07 Thread Mickael Maison
Hi,

I'm resurrecting this old thread as this KIP would be a nice
improvement and almost 3 years later the PR for this KIP has still not
been merged!

The reason is that during reviews we noticed the proposed
configuration, compression.level, was not easy to use as each codec
has its own valid range of levels [0].

As proposed by Jun in the PR [1], I updated the KIP to use
compression.<codec>.level configurations instead of a single
compression.level setting. This syntax would also line up with the
proposal to add per-codec configuration options from KIP-780 [2]
(still to be voted). I moved the original proposal to the rejected
section.

I've put the original voters and KIP author on CC. Let me know if you
have any feedback.

0: https://github.com/apache/kafka/pull/10826
1: https://github.com/apache/kafka/pull/10826#issuecomment-1795952612
2: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-780%3A+Support+fine-grained+compression+options

Thanks,
Mickael


On Fri, Jun 11, 2021 at 10:00 AM Dongjin Lee  wrote:
>
> This KIP is now passed with:
>
> - binding: +3 (Ismael, Tom, Konstantine)
> - non-binding: +1 (Ryanne)
>
> Thanks again to all the supporters. I also updated the KIP by moving the
> compression buffer option into the 'Future Works' section, as Ismael
> proposed.
>
> Best,
> Dongjin
>
>
>
> On Fri, Jun 11, 2021 at 3:03 AM Konstantine Karantasis
>  wrote:
>
> > Makes sense. Looks like a good improvement. Thanks for including the
> > evaluation in the proposal Dongjin.
> >
> > +1 (binding)
> >
> > Konstantine
> >
> > On Wed, Jun 9, 2021 at 6:59 PM Dongjin Lee  wrote:
> >
> > > Thanks Ismel, Tom and Ryanne,
> > >
> > > I am now updating the KIP about the further works. Sure, You won't be
> > > disappointed.
> > >
> > > As of Present:
> > >
> > > - binding: +2 (Ismael, Tom)
> > > - non-binding: +1 (Ryanne)
> > >
> > > Anyone else?
> > >
> > > Best,
> > > Dongjin
> > >
> > > On Thu, Jun 10, 2021 at 2:03 AM Tom Bentley  wrote:
> > >
> > > > Hi Dongjin,
> > > >
> > > > Thanks for the KIP, +1 (binding).
> > > >
> > > > Kind regards,
> > > >
> > > > Tom
> > > >
> > > > On Wed, Jun 9, 2021 at 5:16 PM Ismael Juma  wrote:
> > > >
> > > > > I'm +1 on the proposed change. As I stated in the discuss thread, I
> > > don't
> > > > > think we should rule out the buffer size config, but we could list
> > that
> > > > as
> > > > > future work vs rejected alternatives.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Sat, Jun 5, 2021 at 2:37 PM Dongjin Lee 
> > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to open a voting thread for KIP-390: Support Compression
> > > Level
> > > > > > (rebooted):
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-390%3A+Support+Compression+Level
> > > > > >
> > > > > > Best,
> > > > > > Dongjin
> > > > > >
> > > > > > --
> > > > > > *Dongjin Lee*
> > > > > >
> > > > > > *A hitchhiker in the mathematical world.*
> > > > > >
> > > > > >
> > > > > >
> > > > > > *github:  github.com/dongjinleekr
> > > > > > keybase:
> > > > > https://keybase.io/dongjinleekr
> > > > > > linkedin:
> > > > > kr.linkedin.com/in/dongjinleekr
> > > > > > speakerdeck:
> > > > > > speakerdeck.com/dongjin
> > > > > > *
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > >
> > >
> > >
> > > *github:  github.com/dongjinleekr
> > > keybase:
> > https://keybase.io/dongjinleekr
> > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > speakerdeck:
> > > speakerdeck.com/dongjin
> > > *
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin
> *


[jira] [Resolved] (KAFKA-15717) KRaft support in LeaderEpochIntegrationTest

2024-02-05 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15717.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in LeaderEpochIntegrationTest
> ---
>
> Key: KAFKA-15717
> URL: https://issues.apache.org/jira/browse/KAFKA-15717
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in LeaderEpochIntegrationTest in 
> core/src/test/scala/unit/kafka/server/epoch/LeaderEpochIntegrationTest.scala 
> need to be updated to support KRaft
> 67 : def shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader(): 
> Unit = {
> 99 : def shouldSendLeaderEpochRequestAndGetAResponse(): Unit = {
> 144 : def shouldIncreaseLeaderEpochBetweenLeaderRestarts(): Unit = {
> Scanned 305 lines. Found 0 KRaft tests out of 3 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] 3.7.0 RC2

2024-02-02 Thread Mickael Maison
Hi Stanislav,

I merged https://github.com/apache/kafka/pull/15308 in trunk. I let
you cherry-pick it to 3.7.

I think fixing the absolute show stoppers and calling JBOD support in
KRaft early access in 3.7.0 is probably the right call. Even without
the bugs we found, there's still quite a bit of JBOD follow-up work to do
(KAFKA-16061) + system tests and documentation updates.

Thanks,
Mickael

On Fri, Feb 2, 2024 at 4:49 PM Stanislav Kozlovski
 wrote:
>
> Thanks for the work everybody. Providing a status update at the end of the
> week:
>
> - docs change explaining migration
>  was merged
> - the blocker KAFKA-16162  was
> merged
> - the blocker KAFKA-14616  was
> merged
> - a small blocker problem with the shadow jar plugin
> 
> - the blockers KAFKA-16157 & KAFKA-16195 aren't merged
> - the good-to-have KAFKA-16082 isn't merged
>
> I think we should prioritize merging KAFKA-16195 and *call JBOD EA*. I
> question whether we may find more blocker bugs in the next RC.
> The release is late by approximately a month so far, so I do want to scope
> down aggressively to meet the time-based goal.
>
> Best,
> Stanislav
>
> On Mon, Jan 29, 2024 at 5:46 PM Omnia Ibrahim 
> wrote:
>
> > Hi Stan and Gaurav,
> > Just to clarify some points mentioned here before
> >  KAFKA-14616: I raised a year ago so it's not related to JBOD work. It is
> > rather a blocker bug for KRAFT in general. The PR from Colin should fix
> > this. Am not sure if it is a blocker for 3.7 per-say as it was a major bug
> > since 3.3 and got missed from all other releases.
> >
> > Regarding the JBOD's work:
> > KAFKA-16082:  Is not a blocker for 3.7 instead it's nice fix. The pr
> > https://github.com/apache/kafka/pull/15136 is quite a small one and was
> > approved by Proven and I but it is waiting for a committer's approval.
> > KAFKA-16162: This is a blocker for 3.7.  Same it’s a small pr
> > https://github.com/apache/kafka/pull/15270 and it is approved Proven and
> > I and the PR is waiting for committer's approval.
> > KAFKA-16157: This is a blocker for 3.7. There is one small suggestion for
> > the pr https://github.com/apache/kafka/pull/15263 but I don't think any
> > of the current feedback is blocking the pr from getting approved. Assuming
> > we get a committer's approval on it.
> > KAFKA-16195:  Same it's a blocker but it has approval from Proven and I
> > and we are waiting for committer's approval on the pr
> > https://github.com/apache/kafka/pull/15262.
> >
> > If we can’t get a committer approval for KAFKA-16162, KAFKA-16157 and
> > KAFKA-16195  in time for 3.7 then we can mark JBOD as early release
> > assuming we merge at least KAFKA-16195.
> >
> > Regards,
> > Omnia
> >
> > > On 26 Jan 2024, at 15:39, ka...@gnarula.com wrote:
> > >
> > > Apologies, I duplicated KAFKA-16157 twice in my previous message. I
> > intended to mention KAFKA-16195
> > > with the PR at https://github.com/apache/kafka/pull/15262 as the second
> > JIRA.
> > >
> > > Thanks,
> > > Gaurav
> > >
> > >> On 26 Jan 2024, at 15:34, ka...@gnarula.com wrote:
> > >>
> > >> Hi Stan,
> > >>
> > >> I wanted to share some updates about the bugs you shared earlier.
> > >>
> > >> - KAFKA-14616: I've reviewed and tested the PR from Colin and have
> > observed
> > >> the fix works as intended.
> > >> - KAFKA-16162: I reviewed Proven's PR and found some gaps in the
> > proposed fix. I've
> > >> therefore raised https://github.com/apache/kafka/pull/15270 following
> > a discussion with Luke in JIRA.
> > >> - KAFKA-16082: I don't think this is marked as a blocker anymore. I'm
> > awaiting
> > >> feedback/reviews at https://github.com/apache/kafka/pull/15136
> > >>
> > >> In addition to the above, there are 2 JIRAs I'd like to bring
> > everyone's attention to:
> > >>
> > >> - KAFKA-16157: This is similar to KAFKA-14616 and is marked as a
> > blocker. I've raised
> > >> https://github.com/apache/kafka/pull/15263 and am awaiting reviews on
> > it.
> > >> - KAFKA-16157: I raised this yesterday and have addressed feedback from
> > Luke. This should
> > >> hopefully get merged soon.
> > >>
> > >> Regards,
> > >> Gaurav
> > >>
> > >>
> > >>> On 24 Jan 2024, at 11:51, ka...@gnarula.com wrote:
> > >>>
> > >>> Hi Stanislav,
> > >>>
> > >>> Thanks for bringing these JIRAs/PRs up.
> > >>>
> > >>> I'll be testing the open PRs for KAFKA-14616 and KAFKA-16162 this week
> > and I hope to have some feedback
> > >>> by Friday. I gather the latter JIRA is marked as a WIP by Proven and
> > he's away. I'll try to build on his work in the meantime.
> > >>>
> > >>> As for KAFKA-16082, we haven't been able to deduce a data loss
> > scenario. There's a PR open
> > >>> by me for promoting an abandoned future replica with approvals from
> > Omnia and Proven,
> > >>> so I'd appreciate a committer reviewing it.
> > >>>
> > 

[jira] [Resolved] (KAFKA-15728) KRaft support in DescribeUserScramCredentialsRequestNotAuthorizedTest

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15728.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DescribeUserScramCredentialsRequestNotAuthorizedTest
> -
>
> Key: KAFKA-15728
> URL: https://issues.apache.org/jira/browse/KAFKA-15728
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DescribeUserScramCredentialsRequestNotAuthorizedTest 
> in 
> core/src/test/scala/unit/kafka/server/DescribeUserScramCredentialsRequestNotAuthorizedTest.scala
>  need to be updated to support KRaft
> 39 : def testDescribeNotAuthorized(): Unit = {
> Scanned 52 lines. Found 0 KRaft tests out of 1 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-10047) Unnecessary widening of (int to long) scope in FloatSerializer

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10047.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Unnecessary widening of (int to long) scope in FloatSerializer
> --
>
> Key: KAFKA-10047
> URL: https://issues.apache.org/jira/browse/KAFKA-10047
> Project: Kafka
>  Issue Type: Task
>  Components: clients
>Reporter: Guru Tahasildar
>Priority: Trivial
> Fix For: 3.8.0
>
>
> The following code is present in FloatSerializer:
> {code}
> long bits = Float.floatToRawIntBits(data);
> return new byte[] {
> (byte) (bits >>> 24),
> (byte) (bits >>> 16),
> (byte) (bits >>> 8),
> (byte) bits
> };
> {code}
> {{Float.floatToRawIntBits()}} returns an {{int}}, but the result is assigned 
> to a {{long}}, so there is a widening of scope. This is not needed for any 
> subsequent operations, hence it can be changed to use {{int}}.
> I would like to volunteer to make this change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
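
The change being volunteered above is simply to keep the bits in an int; a sketch of what the adjusted serialize method could look like (the class name and the null handling are added here for completeness, not taken from the ticket):

{code}
import org.apache.kafka.common.serialization.Serializer;

public class FloatSerializerSketch implements Serializer<Float> {
    @Override
    public byte[] serialize(String topic, Float data) {
        if (data == null) {
            return null;
        }
        int bits = Float.floatToRawIntBits(data); // int is sufficient, no widening to long
        return new byte[] {
            (byte) (bits >>> 24),
            (byte) (bits >>> 16),
            (byte) (bits >>> 8),
            (byte) bits
        };
    }
}
{code}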


[jira] [Resolved] (KAFKA-5561) Java based TopicCommand to use the Admin client

2024-02-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-5561.
---
Resolution: Duplicate

> Java based TopicCommand to use the Admin client
> ---
>
> Key: KAFKA-5561
> URL: https://issues.apache.org/jira/browse/KAFKA-5561
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Paolo Patierno
>Priority: Major
>
> Hi, 
> as suggested in the https://issues.apache.org/jira/browse/KAFKA-3331, it 
> could be great to have the TopicCommand using the new Admin client instead of 
> the way it works today.
> As pushed by [~gwenshap] in the above JIRA, I'm going to work on it.
> Thanks,
> Paolo



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16204) Stray file core/00000000000000000001.snapshot created when running core tests

2024-01-30 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16204.

Fix Version/s: 3.8.0
   Resolution: Fixed

> Stray file core/00000000000000000001.snapshot created when running core tests
> -
>
> Key: KAFKA-16204
> URL: https://issues.apache.org/jira/browse/KAFKA-16204
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, unit tests
>        Reporter: Mickael Maison
>Assignee: Gaurav Narula
>Priority: Major
>  Labels: newbie, newbie++
> Fix For: 3.8.0
>
>
> When running the core tests I often get a file called 
> core/00000000000000000001.snapshot created in my kafka folder. It looks like 
> one of the tests does not clean its resources properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16204) Stray file core/00000000000000000001.snapshot created when running core tests

2024-01-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16204:
--

 Summary: Stray file core/00000000000000000001.snapshot created 
when running core tests
 Key: KAFKA-16204
 URL: https://issues.apache.org/jira/browse/KAFKA-16204
 Project: Kafka
  Issue Type: Improvement
  Components: core, unit tests
Reporter: Mickael Maison


When running the core tests I often get a file called 
core/00000000000000000001.snapshot created in my kafka folder. It looks like 
one of the tests does not clean its resources properly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16202) Extra dot in error message in producer

2024-01-29 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16202:
--

 Summary: Extra dot in error message in producer
 Key: KAFKA-16202
 URL: https://issues.apache.org/jira/browse/KAFKA-16202
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison


If the broker hits a StorageException while handling a record from the 
producer, the producer prints the following warning:

[2024-01-29 15:33:30,722] WARN [Producer clientId=console-producer] Received 
invalid metadata error in produce request on partition topic1-0 due to 
org.apache.kafka.common.errors.KafkaStorageException: Disk error when trying to 
access log file on the disk.. Going to request metadata update now 
(org.apache.kafka.clients.producer.internals.Sender)

There's an extra dot between disk and Going.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-877: Mechanism for plugins and connectors to register metrics

2024-01-25 Thread Mickael Maison
Hi Luke,

The reasons vary for each plugin; I've added details for most plugins in
the table.
The plugins without an explanation are all from Streams. I admit I
don't know these interfaces well enough to decide whether it makes sense
to make them closeable and instrument them. It would be nice to get some
input from Streams contributors on this.

Thanks,
Mickael

On Thu, Jan 25, 2024 at 5:50 PM Mickael Maison  wrote:
>
> Hi Tom,
>
> Thanks for taking a look at the KIP!
>
> 1. Yes I considered several names (see the previous messages in the
> discussion). KIP-608, which this KIP supersedes, used "monitor()" for
> the method name. I find "withMetrics()" to be nicer due to the way the
> method should be used. That said, I'm not attached to the name so if
> more people prefer "monitor()", or can come up with a better name, I'm
> happy to make the change. I updated the javadoc to clarify the usage
> and mention when to close the PluginMetrics instance.
>
> 2. Yes I added a note to the PluginMetrics interface
>
> 3. I used this exception to follow the behavior of Metrics.addMetric()
> which throws IllegalArgumentException if a metric with the same name
> already exists.
>
> 4. I added details to the javadoc
>
> Thanks,
> Mickael
>
>
> On Thu, Jan 25, 2024 at 10:32 AM Luke Chen  wrote:
> >
> > Hi Mickael,
> >
> > Thanks for the KIP.
> > The motivation and solution makes sense to me.
> >
> > Just one question:
> > If we could extend `closable` for Converter plugin, why don't we do that
> > for the "Unsupported Plugins" without close method?
> > I don't say we must do that in this KIP, but maybe you could add the reason
> > in the "rejected alternatives".
> >
> > Thanks.
> > Luke
> >
> > On Thu, Jan 25, 2024 at 3:46 PM Slathia p  wrote:
> >
> > > > On 01/25/2024 9:56 AM +08 Tom Bentley  wrote:
> > > >
> > > >
> > > > Hi Mickael,
> > > >
> > > > Thanks for the KIP! I can tell a lot of thought went into this. I have a
> > > > few comments, but they're all pretty trivial and aimed at making the
> > > > correct use of this API clearer to implementors.
> > > >
> > > > 1. Configurable and Reconfigurable both use a verb in the imperative 
> > > > mood
> > > > for their method name. Monitorable doesn't, which initially seemed a bit
> > > > inconsistent to me, but I think your intention is to allow plugins to
> > > > merely retain a reference to the PluginMetrics, and allow registering
> > > > metrics at any later point? If that's the case you could add something
> > > like
> > > > "Plugins can register and unregister metrics using the given
> > > PluginMetrics
> > > > at any point in their lifecycle prior to their close method being 
> > > > called"
> > > > to the javadoc to make clear how this can be used.
> > > > 2. I assume PluginMetrics will be thread-safe? We should document that 
> > > > as
> > > > part of the contract.
> > > > 3. I don't think IAE is quite right for duplicate metrics. In this case
> > >

Re: [DISCUSS] KIP-877: Mechanism for plugins and connectors to register metrics

2024-01-25 Thread Mickael Maison
Hi Tom,

Thanks for taking a look at the KIP!

1. Yes I considered several names (see the previous messages in the
discussion). KIP-608, which this KIP supersedes, used "monitor()" for
the method name. I find "withMetrics()" to be nicer due to the way the
method should be used. That said, I'm not attached to the name so if
more people prefer "monitor()", or can come up with a better name, I'm
happy to make the change. I updated the javadoc to clarify the usage
and mention when to close the PluginMetrics instance.

2. Yes I added a note to the PluginMetrics interface

3. I used this exception to follow the behavior of Metrics.addMetric()
which throws IllegalArgumentException if a metric with the same name
already exists.

4. I added details to the javadoc

Thanks,
Mickael
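
As a rough illustration of the intended usage (hypothetical plugin; only the
names discussed in this thread - Monitorable, withMetrics(), PluginMetrics,
close() - are used, and the exact signatures are the ones defined in the KIP,
not here):

    // Hypothetical sketch only; see KIP-877 for the real interfaces.
    public class MyConverter implements Converter, Monitorable, AutoCloseable {
        private PluginMetrics metrics;

        @Override
        public void withMetrics(PluginMetrics metrics) {
            // Keep the instance and register metrics with it at any point before close().
            // Registering two metrics with the same name is rejected with an exception.
            this.metrics = metrics;
        }

        @Override
        public void close() {
            // The plugin is responsible for removing its own metrics.
            if (metrics != null) {
                metrics.close();
            }
        }

        // configure(), fromConnectData() and toConnectData() omitted for brevity
    }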


On Thu, Jan 25, 2024 at 10:32 AM Luke Chen  wrote:
>
> Hi Mickael,
>
> Thanks for the KIP.
> The motivation and solution makes sense to me.
>
> Just one question:
> If we could extend `closable` for Converter plugin, why don't we do that
> for the "Unsupported Plugins" without close method?
> I don't say we must do that in this KIP, but maybe you could add the reason
> in the "rejected alternatives".
>
> Thanks.
> Luke
>
> On Thu, Jan 25, 2024 at 3:46 PM Slathia p  wrote:
>
> > > On 01/25/2024 9:56 AM +08 Tom Bentley  wrote:
> > >
> > >
> > > Hi Mickael,
> > >
> > > Thanks for the KIP! I can tell a lot of thought went into this. I have a
> > > few comments, but they're all pretty trivial and aimed at making the
> > > correct use of this API clearer to implementors.
> > >
> > > 1. Configurable and Reconfigurable both use a verb in the imperative mood
> > > for their method name. Monitorable doesn't, which initially seemed a bit
> > > inconsistent to me, but I think your intention is to allow plugins to
> > > merely retain a reference to the PluginMetrics, and allow registering
> > > metrics at any later point? If that's the case you could add something
> > like
> > > "Plugins can register and unregister metrics using the given
> > PluginMetrics
> > > at any point in their lifecycle prior to their close method being called"
> > > to the javadoc to make clear how this can be used.
> > > 2. I assume PluginMetrics will be thread-safe? We should document that as
> > > part of the contract.
> > > 3. I don't think IAE is quite right for duplicate metrics. In this case
> > the
> > > arguments themselves are fine, it's the current state of the
> > PluginMetrics
> > > which causes the problem. If the earlier point about plugins being
> > allowed
> > > to register and unregister metrics at any point is correct then this
> > > exception could be thrown after configuration time. That being the case I
> > > think a new exception type might be clearer.
> > > 4. You define some semantics for PluginMetrics.close(): It might be a
> > good
> > > idea to override the inherited method and add that as javadoc.
> > > 5. You say "It will be the responsibility of the plugin that creates
> > > metrics to call close() of the PluginMetrics instance they were given to
> > > remove their metrics." But you don't provide any guidance to users about
> > > when they need to do this. I guess that they should be doing this in
> > their
> > > plugin's close metho

[jira] [Resolved] (KAFKA-16003) The znode /config/topics is not updated during KRaft migration in "dual-write" mode

2024-01-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16003.

Fix Version/s: 3.8.0
   Resolution: Fixed

> The znode /config/topics is not updated during KRaft migration in 
> "dual-write" mode
> ---
>
> Key: KAFKA-16003
> URL: https://issues.apache.org/jira/browse/KAFKA-16003
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 3.6.1
>Reporter: Paolo Patierno
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 3.8.0
>
>
> I tried the following scenario ...
> I have a ZooKeeper-based cluster and create a my-topic-1 topic (without 
> specifying any specific configuration for it). The correct znodes are created 
> under /config/topics and /brokers/topics.
> I start a migration to KRaft but do not move forward from "dual write" mode. 
> While in this mode, I create a new my-topic-2 topic (still without any 
> specific config). I see that a new znode is created under /brokers/topics but 
> NOT under /config/topics. It seems that the KRaft controller is not updating 
> this information in ZooKeeper during the dual write. The controller log shows 
> that the write to ZooKeeper was done, but not everything, I would say:
> {code:java}
> 2023-12-13 10:23:26,229 TRACE [KRaftMigrationDriver id=3] Create Topic 
> my-topic-2, ID Macbp8BvQUKpzmq2vG_8dA. Transitioned migration state from 
> ZkMigrationLeadershipState{kraftControllerId=3, kraftControllerEpoch=7, 
> kraftMetadataOffset=445, kraftMetadataEpoch=7, 
> lastUpdatedTimeMs=1702462785587, migrationZkVersion=236, controllerZkEpoch=3, 
> controllerZkVersion=3} to ZkMigrationLeadershipState{kraftControllerId=3, 
> kraftControllerEpoch=7, kraftMetadataOffset=445, kraftMetadataEpoch=7, 
> lastUpdatedTimeMs=1702462785587, migrationZkVersion=237, controllerZkEpoch=3, 
> controllerZkVersion=3} 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-3-migration-driver-event-handler]
> 2023-12-13 10:23:26,229 DEBUG [KRaftMigrationDriver id=3] Made the following 
> ZK writes when handling KRaft delta: {CreateTopic=1} 
> (org.apache.kafka.metadata.migration.KRaftMigrationDriver) 
> [controller-3-migration-driver-event-handler] {code}
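
A quick way to check the dual-write result from the ZooKeeper side (assuming
ZooKeeper listens on localhost:2181; the paths are the ones from the report):
{noformat}
bin/zookeeper-shell.sh localhost:2181 ls /config/topics
bin/zookeeper-shell.sh localhost:2181 get /config/topics/my-topic-2
{noformat}
With the bug present, the first command lists my-topic-1 but not my-topic-2,
and the second one reports that the znode does not exist.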



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-7957) Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate

2024-01-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-7957.
---
Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testMetricsReporterUpdate
> -
>
> Key: KAFKA-7957
> URL: https://issues.apache.org/jira/browse/KAFKA-7957
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>    Assignee: Mickael Maison
>Priority: Blocker
>  Labels: flaky-test
> Fix For: 3.8.0
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/18/]
> {quote}java.lang.AssertionError: Messages not sent at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:356) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:766) at 
> kafka.server.DynamicBrokerReconfigurationTest.startProduceConsume(DynamicBrokerReconfigurationTest.scala:1270)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testMetricsReporterUpdate(DynamicBrokerReconfigurationTest.scala:650){quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16188) Delete deprecated kafka.common.MessageReader

2024-01-24 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16188:
--

 Summary: Delete deprecated kafka.common.MessageReader
 Key: KAFKA-16188
 URL: https://issues.apache.org/jira/browse/KAFKA-16188
 Project: Kafka
  Issue Type: Task
Reporter: Mickael Maison
Assignee: Mickael Maison
 Fix For: 4.0.0


[KIP-641|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158866569]
 introduced org.apache.kafka.tools.api.RecordReader and deprecated 
kafka.common.MessageReader in Kafka 3.5.0.

We should delete kafka.common.MessageReader in Kafka 4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16170) Continuous never ending logs observed when running single node kafka in kraft mode with default KRaft properties in 3.7.0 RC2

2024-01-19 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16170.

Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/KAFKA-16144

> Continuous never ending logs observed when running single node kafka in kraft 
> mode with default KRaft properties in 3.7.0 RC2
> -
>
> Key: KAFKA-16170
> URL: https://issues.apache.org/jira/browse/KAFKA-16170
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.7.0
>Reporter: Vedarth Sharma
>Priority: Major
> Attachments: kafka_logs.txt
>
>
> After kafka server startup, endless logs are observed, even when the server is 
> sitting idle. This behaviour was not observed in previous versions.
> It is easy to reproduce this issue:
>  * Download the RC tarball for 3.7.0
>  * Follow the [quickstart guide|https://kafka.apache.org/quickstart] to run 
> kafka in KRaft mode i.e. execute following commands
>  ** KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
>  ** bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
> config/kraft/server.properties
>  ** bin/kafka-server-start.sh config/kraft/server.properties
>  * Once kafka server is started wait for a few seconds and you should see 
> endless logs coming in.
> I have attached a small section of the logs in the ticket just after kafka 
> startup line, just to showcase the nature of endless logs observed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16163) Constant resignation/reelection of controller when starting a single node in combined mode

2024-01-18 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16163:
--

 Summary: Constant resignation/reelection of controller when 
starting a single node in combined mode
 Key: KAFKA-16163
 URL: https://issues.apache.org/jira/browse/KAFKA-16163
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.7.0
Reporter: Mickael Maison


When starting a single node in combined mode:
{noformat}
$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
$ bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c 
config/kraft/server.properties
$ bin/kafka-server-start.sh config/kraft/server.properties{noformat}
 

it's constantly spamming the logs with:
{noformat}
[2024-01-18 17:37:09,065] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:11,967] INFO [RaftManager id=1] Did not receive fetch request 
from the majority of the voters within 3000ms. Current fetched voters are []. 
(org.apache.kafka.raft.LeaderState)
[2024-01-18 17:37:11,967] INFO [RaftManager id=1] Completed transition to 
ResignedState(localId=1, epoch=138, voters=[1], electionTimeoutMs=1864, 
unackedVoters=[], preferredSuccessors=[]) from Leader(localId=1, epoch=138, 
epochStartOffset=829, highWatermark=Optional[LogOffsetMetadata(offset=835, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=62788)])], 
voterStates={1=ReplicaState(nodeId=1, 
endOffset=Optional[LogOffsetMetadata(offset=835, 
metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=62788)])], 
lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) 
(org.apache.kafka.raft.QuorumState)
[2024-01-18 17:37:13,072] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,072] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,123] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,124] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,124] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,175] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,176] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,176] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,227] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,229] INFO [NodeToControllerChannelManager id=1 
name=heartbeat] Client requested disconnect from node 1 
(org.apache.kafka.clients.NetworkClient)
[2024-01-18 17:37:13,229] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread)
[2024-01-18 17:37:13,279] INFO 
[broker-1-to-controller-heartbeat-channel-manager]: Recorded new controller, 
from now on will use node localhost:9093 (id: 1 rack: null) 
(kafka.server.NodeToControllerRequestThread){noformat}
This did not happen in 3.6.
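
For anyone reproducing this, the constant resignation/re-election can also be
observed from a second terminal with the metadata quorum tool (assuming the
default listener on localhost:9092); the leader epoch keeps increasing:
{noformat}
bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
{noformat}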



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2024-01-18 Thread Mickael Maison
Hi Elxan,

Thanks for the updates.

We use dots to separate words in configuration names, so I think
replication.offset.lag.metric.last-replicated-offset.ttl should be
named replication.offset.lag.metric.last.replicated.offset.ttl
instead.

About the names of the metrics, fair enough if you prefer keeping the
replication prefix. Out of the alternatives you mentioned, I think I
prefer replication-record-lag. I think the metrics and configuration
names should match too. Let's see what the others think about it.

Thanks,
Mickael
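
For illustration, a hypothetical mm2.properties snippet using the
dot-separated form (the property name is the one proposed above, the value
is made up):

    clusters = primary, backup
    primary->backup.enabled = true
    replication.offset.lag.metric.last.replicated.offset.ttl = 60000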

On Mon, Jan 15, 2024 at 9:50 PM Elxan Eminov  wrote:
>
> Apologies, forgot to reply on your last comment about the metric name.
> I believe both replication-lag and record-lag are a little too abstract -
> what do you think about either leaving it as replication-offset-lag or
> renaming to replication-record-lag?
>
> Thanks
>
> On Wed, 10 Jan 2024 at 15:31, Mickael Maison 
> wrote:
>
> > Hi Elxan,
> >
> > Thanks for the KIP, it looks like a useful addition.
> >
> > Can you add to the KIP the default value you propose for
> > replication.lag.metric.refresh.interval? In MirrorMaker most interval
> > configs can be set to -1 to disable them, will it be the case for this
> > new feature or will this setting only accept positive values?
> > I also wonder if replication-lag, or record-lag would be clearer names
> > instead of replication-offset-lag, WDYT?
> >
> > Thanks,
> > Mickael
> >
> > On Wed, Jan 3, 2024 at 6:15 PM Elxan Eminov 
> > wrote:
> > >
> > > Hi all,
> > > Here is the vote thread:
> > > https://lists.apache.org/thread/ftlnolcrh858dry89sjg06mdcdj9mrqv
> > >
> > > Cheers!
> > >
> > > On Wed, 27 Dec 2023 at 11:23, Elxan Eminov 
> > wrote:
> > >
> > > > Hi all,
> > > > I've updated the KIP with the details we discussed in this thread.
> > > > I'll call in a vote after the holidays if everything looks good.
> > > > Thanks!
> > > >
> > > > On Sat, 26 Aug 2023 at 15:49, Elxan Eminov 
> > > > wrote:
> > > >
> > > >> Relatively minor change with a new metric for MM2
> > > >>
> > > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-971%3A+Expose+replication-offset-lag+MirrorMaker2+metric
> > > >>
> > > >
> >


[jira] [Created] (KAFKA-16153) kraft_upgrade_test system test is broken

2024-01-17 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16153:
--

 Summary: kraft_upgrade_test system test is broken
 Key: KAFKA-16153
 URL: https://issues.apache.org/jira/browse/KAFKA-16153
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Reporter: Mickael Maison


I get the following failure from all `from_kafka_version` versions:


Command '/opt/kafka-dev/bin/kafka-features.sh --bootstrap-server 
ducker05:9092,ducker06:9092,ducker07:9092 upgrade --metadata 3.8' returned 
non-zero exit status 1. Remote error message: b'SLF4J: Class path contains 
multiple SLF4J bindings.\nSLF4J: Found binding in 
[jar:file:/opt/kafka-dev/tools/build/dependant-libs-2.13.12/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 Found binding in 
[jar:file:/opt/kafka-dev/trogdor/build/dependant-libs-2.13.12/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.\nSLF4J: Actual binding is of type 
[org.slf4j.impl.Reload4jLoggerFactory]\nUnsupported metadata version 3.8. 
Supported metadata versions are 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 
3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 
3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0\n'
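
For reference, the accepted values can be checked directly with the tool, and
passing the fully qualified metadata version listed in the error message should
be accepted (commands for illustration, using the same bootstrap servers as the
failing run):
{noformat}
bin/kafka-features.sh --bootstrap-server ducker05:9092 describe
bin/kafka-features.sh --bootstrap-server ducker05:9092 upgrade --metadata 3.8-IV0
{noformat}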



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15740) KRaft support in DeleteOffsetsConsumerGroupCommandIntegrationTest

2024-01-15 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15740.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DeleteOffsetsConsumerGroupCommandIntegrationTest
> -
>
> Key: KAFKA-15740
> URL: https://issues.apache.org/jira/browse/KAFKA-15740
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DeleteOffsetsConsumerGroupCommandIntegrationTest in 
> core/src/test/scala/unit/kafka/admin/DeleteOffsetsConsumerGroupCommandIntegrationTest.scala
>  need to be updated to support KRaft
> 49 : def testDeleteOffsetsNonExistingGroup(): Unit = {
> 59 : def testDeleteOffsetsOfStableConsumerGroupWithTopicPartition(): Unit = {
> 64 : def testDeleteOffsetsOfStableConsumerGroupWithTopicOnly(): Unit = {
> 69 : def testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicPartition(): 
> Unit = {
> 74 : def testDeleteOffsetsOfStableConsumerGroupWithUnknownTopicOnly(): Unit = 
> {
> 79 : def testDeleteOffsetsOfEmptyConsumerGroupWithTopicPartition(): Unit = {
> 84 : def testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly(): Unit = {
> 89 : def testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicPartition(): 
> Unit = {
> 94 : def testDeleteOffsetsOfEmptyConsumerGroupWithUnknownTopicOnly(): Unit = {
> Scanned 198 lines. Found 0 KRaft tests out of 9 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16130) Test migration rollback

2024-01-15 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16130:
--

 Summary: Test migration rollback
 Key: KAFKA-16130
 URL: https://issues.apache.org/jira/browse/KAFKA-16130
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
 Fix For: 3.8.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16119) kraft_upgrade_test system test is broken

2024-01-12 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-16119.

Resolution: Invalid

After rebuilding my env from scratch I don't see this error anymore

> kraft_upgrade_test system test is broken
> 
>
> Key: KAFKA-16119
> URL: https://issues.apache.org/jira/browse/KAFKA-16119
> Project: Kafka
>  Issue Type: New Feature
>Affects Versions: 3.6.0, 3.7.0, 3.6.1
>        Reporter: Mickael Maison
>Priority: Major
>
> When the test attempts to restart brokers after the upgrade, brokers fail 
> with:
> [2024-01-12 13:43:40,144] ERROR Exiting Kafka due to fatal exception 
> (kafka.Kafka$)
> java.lang.NoClassDefFoundError: 
> org/apache/kafka/image/loader/MetadataLoaderMetrics
> at kafka.server.KafkaRaftServer.(KafkaRaftServer.scala:68)
> at kafka.Kafka$.buildServer(Kafka.scala:83)
> at kafka.Kafka$.main(Kafka.scala:91)
> at kafka.Kafka.main(Kafka.scala)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.image.loader.MetadataLoaderMetrics
> at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> ... 4 more
> MetadataLoaderMetrics was moved from org.apache.kafka.image.loader to 
> org.apache.kafka.image.loader.metrics in 
> https://github.com/apache/kafka/commit/c7de30f38bfd6e2d62a0b5c09b5dc9707e58096b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16119) kraft_upgrade_test system test is broken

2024-01-12 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-16119:
--

 Summary: kraft_upgrade_test system test is broken
 Key: KAFKA-16119
 URL: https://issues.apache.org/jira/browse/KAFKA-16119
 Project: Kafka
  Issue Type: New Feature
Affects Versions: 3.6.1, 3.6.0, 3.7.0
Reporter: Mickael Maison


When the test attempts to restart brokers after the upgrade, brokers fail with:

[2024-01-12 13:43:40,144] ERROR Exiting Kafka due to fatal exception 
(kafka.Kafka$)
java.lang.NoClassDefFoundError: 
org/apache/kafka/image/loader/MetadataLoaderMetrics
at kafka.server.KafkaRaftServer.(KafkaRaftServer.scala:68)
at kafka.Kafka$.buildServer(Kafka.scala:83)
at kafka.Kafka$.main(Kafka.scala:91)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.image.loader.MetadataLoaderMetrics
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 4 more

MetadataLoaderMetrics was moved from org.apache.kafka.image.loader to 
org.apache.kafka.image.loader.metrics in 
https://github.com/apache/kafka/commit/c7de30f38bfd6e2d62a0b5c09b5dc9707e58096b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: KIP-991: Allow DropHeaders SMT to drop headers by wildcard/regexp

2024-01-11 Thread Mickael Maison
Hi Roman,

Thanks for the updates, this looks much better.

Just a couple of small comments:
- The type of the field is listed as "boolean". I think it should be
string (or list)
- Should the field be named "headers.patterns" instead of
"headers.pattern" since it accepts a list of patterns?

Thanks,
Mickael
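
For illustration, a connector configuration using the proposed field could
then look like this (assuming the KIP settles on "headers.patterns"; the
transform name and patterns are made up):

    transforms = dropInternalHeaders
    transforms.dropInternalHeaders.type = org.apache.kafka.connect.transforms.DropHeaders
    transforms.dropInternalHeaders.headers.patterns = internal\..*, trace-.*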

On Thu, Jan 11, 2024 at 12:56 PM Roman Schmitz  wrote:
>
> Hi Mickael,
> Hi all,
>
> Thanks for the feedback!
> I have adapted the KIP description - actually much shorter and just
> reflecting the general functionality and interface/configuration changes.
>
> Kindly let me know if you have any comments, questions, or suggestions for
> this KIP!
>
> Thanks,
> Roman
>
> Am Fr., 5. Jan. 2024 um 17:36 Uhr schrieb Mickael Maison <
> mickael.mai...@gmail.com>:
>
> > Hi Roman,
> >
> > Thanks for the KIP! This would be a useful improvement.
> >
> > Ideally you want to make a concrete proposal in the KIP instead of
> > listing a series of options. Currently the KIP seems to list two
> > alternatives.
> >
> > Also a KIP focuses on the API changes rather than on the pure
> > implementation. It seems you're proposing adding a configuration to
> > the DropHeaders SMT. It would be good to describe that new
> > configuration. For example see KIP-911 which also added a
> > configuration.
> >
> > Thanks,
> > Mickael
> >
> > On Mon, Oct 16, 2023 at 9:50 AM Roman Schmitz 
> > wrote:
> > >
> > > Hi Andrew,
> > >
> > > Ok, thanks for the feedback! I added a few more details and code examples
> > > to explain the proposed changes.
> > >
> > > Thanks,
> > > Roman
> > >
> > > Am So., 15. Okt. 2023 um 22:12 Uhr schrieb Andrew Schofield <
> > > andrew_schofield_j...@outlook.com>:
> > >
> > > > Hi Roman,
> > > > Thanks for the KIP. I think it’s an interesting idea, but I think the
> > KIP
> > > > document needs some
> > > > more details added before it’s ready for review. For example, here’s a
> > KIP
> > > > in the same
> > > > area which was delivered in an earlier version of Kafka. I think this
> > is a
> > > > good KIP to copy
> > > > for a suitable level of detail and description (
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Filter+and+Conditional+SMTs
> > > > ).
> > > >
> > > > Hope this helps.
> > > >
> > > > Thanks,
> > > > Andrew
> > > >
> > > > > On 15 Oct 2023, at 21:02, Roman Schmitz 
> > wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > While working with different customers I came across the case several
> > > > times
> > > > > that we'd like to not only explicitly remove headers by name but by
> > > > pattern
> > > > > / regexp. Here is a KIP for this feature!
> > > > >
> > > > > Please let me know if you have any comments, questions, or
> > suggestions!
> > > > >
> > > > > https://cwiki.apache.org/confluence/x/oYtEE
> > > > >
> > > > > Thanks,
> > > > > Roman
> > > >
> > > >
> >


Re: requesting permissions to contribute to Apache Kafka

2024-01-11 Thread Mickael Maison
Hi,

I've granted you permissions in both Jira and the wiki.

Thanks,
Mickael

On Thu, Jan 11, 2024 at 2:40 PM Szymon Scharmach
 wrote:
>
> Hi,
>
> based on:
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> 
> I'd like to request permission to contribute to Apache Kafka.
> wiki ID: theszym
> jira ID: theszym
>
>
>
> Szymon Scharmach


Re: [VOTE] KIP-995: Allow users to specify initial offsets while creating connectors

2024-01-11 Thread Mickael Maison
Hi Ashwin,

+1 (binding), thanks for the KIP

Mickael

On Tue, Jan 9, 2024 at 4:54 PM Chris Egerton  wrote:
>
> Thanks for the KIP! +1 (binding)
>
> On Mon, Jan 8, 2024 at 9:35 AM Ashwin  wrote:
>
> > Hi All,
> >
> > I would like to start  a vote on KIP-995.
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-995%3A+Allow+users+to+specify+initial+offsets+while+creating+connectors
> >
> > Discussion thread -
> > https://lists.apache.org/thread/msorbr63scglf4484yq764v7klsj7c4j
> >
> > Thanks!
> >
> > Ashwin
> >


Re: Logging in Kafka

2024-01-10 Thread Mickael Maison
Hi,

I think the only thing that would need to be done in 3.8 is the
deprecation of the log4j appender (KIP-719). This was a prerequisite for
migrating to log4j2 due to conflicts when having both log4j and log4j2
in the classpath. I don't know if that's still the case with reload4j,
but I think we should take the opportunity to deprecate it before
4.0 regardless, to avoid relying on multiple logging libraries.

Thanks,
Mickael

On Wed, Jan 10, 2024 at 7:58 PM Colin McCabe  wrote:
>
> Hi Mickael,
>
> Thanks for bringing this up.
>
> If we move to log4j2 in 4.0, is there any work that needs to be done in 3.8? 
> That's probably what we should focus on.
>
> P.S. My assumption is that if the log4j2 work misses the train, we'll stick 
> with reload4j in 4.0. Hopefully this won't happen.
>
> best,
> Colin
>
>
> On Wed, Jan 10, 2024, at 09:13, Ismael Juma wrote:
> > Hi Viktor,
> >
> > A logging library that requires Java 17 is a deal breaker since we need to
> > log from modules that will only require Java 11 in Apache Kafka 4.0.
> >
> > Ismael
> >
> > On Wed, Jan 10, 2024 at 6:43 PM Viktor Somogyi-Vass
> >  wrote:
> >
> >> Hi Mickael,
> >>
> >> Reacting to your points:
> >> 1. I think it's somewhat unfortunate that we provide an appender tied to a
> >> chosen logger implementation. I think that this shouldn't be part of the
> >> project in its current form. However, there is the sl4fj2 Fluent API which
> >> may solve our problem and turn KafkaLog4jAppender into a generic
> >> implementation that doesn't depend on a specific library given that we can
> >> upgrade to slf4j2. That is worth considering.
> >> 2. Since KIP-1013 we'd move to Java17 anyways by 4.0, so I don't feel it's
> >> a problem if there's a specific dependency that has Java17 as the minimum
> >> supported version. As I read though from your email thread with the log4j2
> >> folks, it'll be supported for years to come and log4j3 isn't yet stable.
> >> Since we already use log4j2 in our fork, I'm happy to contribute to this,
> >> review PRs or drive it if needed.
> >>
> >> Thanks,
> >> Viktor
> >>
> >> On Wed, Jan 10, 2024 at 3:58 PM Mickael Maison 
> >> wrote:
> >>
> >> > I asked for details about the future of log4j2 on the logging user list:
> >> > https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
> >> >
> >> > Let's see what they say.
> >> >
> >> > Thanks,
> >> > Mickael
> >> >
> >> > On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
> >> > >
> >> > > Hi Mickael,
> >> > >
> >> > > Thanks for starting the discussion and for summarizing the state of
> >> > play. I
> >> > > agree with you that it would be important to understand how long log4j2
> >> > > will be supported for. An alternative would be slf4j 2.x and logback.
> >> > >
> >> > > Ismael
> >> > >
> >> > > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison <
> >> mickael.mai...@gmail.com
> >> > >
> >> > > wrote:
> >> > >
> >> > > > Hi,
> >> > > >
> >> > > > Starting a new thread to discuss the current logging situation in
> >> > > > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> >> > > > Kafka 4.0 if you are interested in what has already been said. [0]
> >> > > >
> >> > > > Currently Kafka uses SLF4J and reload4j as the logging backend. We
> >> had
> >> > > > to adopt reload4j in 3.2.0 as log4j was end of life and has a few
> >> > > > security issues.
> >> > > >
> >> > > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> >> > > > incompatibilities in the configuration mechanism with log4j/reload4j
> >> > > > we decided to delay the upgrade to the next major release, Kafka 4.0.
> >> > > >
> >> > > > Kafka also currently provides a log4j appender. In 2022, we adopted
> >> > > > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> >> > > > time Apache Logging also had a Kafka appender that worked with
> >> log4j2.
> >> > > > They since deprecated that appender in log4j2 and it is not part of
> >> > > > log4j3. [1]
> >> > > >

Re: [VOTE] KIP-877: Mechanism for plugins and connectors to register metrics

2024-01-10 Thread Mickael Maison
Bumping this thread since I've not seen any feedback.

Thanks,
Mickael

On Tue, Dec 19, 2023 at 10:03 AM Mickael Maison
 wrote:
>
> Hi,
>
> I'd like to start a vote on KIP-877:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-877%3A+Mechanism+for+plugins+and+connectors+to+register+metrics
>
> Let me know if you have any feedback.
>
> Thanks,
> Mickael


[jira] [Resolved] (KAFKA-15747) KRaft support in DynamicConnectionQuotaTest

2024-01-10 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15747.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DynamicConnectionQuotaTest
> ---
>
> Key: KAFKA-15747
> URL: https://issues.apache.org/jira/browse/KAFKA-15747
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DynamicConnectionQuotaTest in 
> core/src/test/scala/integration/kafka/network/DynamicConnectionQuotaTest.scala
>  need to be updated to support KRaft
> 77 : def testDynamicConnectionQuota(): Unit = {
> 104 : def testDynamicListenerConnectionQuota(): Unit = {
> 175 : def testDynamicListenerConnectionCreationRateQuota(): Unit = {
> 237 : def testDynamicIpConnectionRateQuota(): Unit = {
> Scanned 416 lines. Found 0 KRaft tests out of 4 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Logging in Kafka

2024-01-10 Thread Mickael Maison
Hi,

A couple of PMC members from Apache Logging replied and they said they
plan to keep supporting log4j2 for several years.
https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl

Thanks,
Mickael

On Wed, Jan 10, 2024 at 3:57 PM Mickael Maison  wrote:
>
> I asked for details about the future of log4j2 on the logging user list:
> https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl
>
> Let's see what they say.
>
> Thanks,
> Mickael
>
> On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
> >
> > Hi Mickael,
> >
> > Thanks for starting the discussion and for summarizing the state of play. I
> > agree with you that it would be important to understand how long log4j2
> > will be supported for. An alternative would be slf4j 2.x and logback.
> >
> > Ismael
> >
> > On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison 
> > wrote:
> >
> > > Hi,
> > >
> > > Starting a new thread to discuss the current logging situation in
> > > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> > > Kafka 4.0 if you are interested in what has already been said. [0]
> > >
> > > Currently Kafka uses SLF4J and reload4j as the logging backend. We had
> > > to adopt reload4j in 3.2.0 as log4j was end of life and has a few
> > > security issues.
> > >
> > > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> > > incompatibilities in the configuration mechanism with log4j/reload4j
> > > we decided to delay the upgrade to the next major release, Kafka 4.0.
> > >
> > > Kafka also currently provides a log4j appender. In 2022, we adopted
> > > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> > > time Apache Logging also had a Kafka appender that worked with log4j2.
> > > They since deprecated that appender in log4j2 and it is not part of
> > > log4j3. [1]
> > >
> > > Log4j3 is also nearing release but it seems it will require Java 17.
> > > The website states Java 11 [2] but the artifacts from the latest 3.0.0
> > > beta are built for Java 17. I was not able to find clear maintenance
> > > statement about log4j2 once log4j3 gets released.
> > >
> > > The question is where do we go from here?
> > > We can stick with our plans:
> > > 1. Deprecate the appender in the next 3.x release and plan to remove it in
> > > 4.0
> > > 2. Do the necessary work to switch to log4j2 in 4.0
> > > If so we need people to drive these work items. We have PRs for these
> > > with hopefully the bulk of the code but they need
> > > rebasing/completing/reviewing.
> > >
> > > Otherwise we can reconsider KIP-653 and/or KIP-719.
> > >
> > > Assuming log4j2 does not go end of life in the near future (We can
> > > reach out to Apache Logging to clarify that point.), I think it still
> > > makes sense to adopt it. I would also go ahead and deprecate our
> > > appender.
> > >
> > > Thanks,
> > > Mickael
> > >
> > > 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
> > > 1: https://github.com/apache/logging-log4j2/issues/1951
> > > 2: https://logging.apache.org/log4j/3.x/#requirements
> > >


Re: Logging in Kafka

2024-01-10 Thread Mickael Maison
I asked for details about the future of log4j2 on the logging user list:
https://lists.apache.org/thread/6n6bkgwj8tglgdgzz8wxhkx1p1xpwodl

Let's see what they say.

Thanks,
Mickael

On Wed, Jan 10, 2024 at 3:23 PM Ismael Juma  wrote:
>
> Hi Mickael,
>
> Thanks for starting the discussion and for summarizing the state of play. I
> agree with you that it would be important to understand how long log4j2
> will be supported for. An alternative would be sl4fj 2.x and logback.
>
> Ismael
>
> On Wed, Jan 10, 2024 at 2:17 PM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > Starting a new thread to discuss the current logging situation in
> > Kafka. I'll restate everything we know but see the [DISCUSS] Road to
> > Kafka 4.0 if you are interested in what has already been said. [0]
> >
> > Currently Kafka uses SLF4J and reload4j as the logging backend. We had
> > to adopt reload4j in 3.2.0 as log4j was end of life and has a few
> > security issues.
> >
> > In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
> > incompatibilities in the configuration mechanism with log4j/reload4j
> > we decided to delay the upgrade to the next major release, Kafka 4.0.
> >
> > Kafka also currently provides a log4j appender. In 2022, we adopted
> > KIP-719 to deprecate it since we wanted to switch to log4j2. At the
> > time Apache Logging also had a Kafka appender that worked with log4j2.
> > They since deprecated that appender in log4j2 and it is not part of
> > log4j3. [1]
> >
> > Log4j3 is also nearing release but it seems it will require Java 17.
> > The website states Java 11 [2] but the artifacts from the latest 3.0.0
> > beta are built for Java 17. I was not able to find clear maintenance
> > statement about log4j2 once log4j3 gets released.
> >
> > The question is where do we go from here?
> > We can stick with our plans:
> > 1. Deprecate the appender in the next 3.x release and plan to remove it in
> > 4.0
> > 2. Do the necessary work to switch to log4j2 in 4.0
> > If so we need people to drive these work items. We have PRs for these
> > with hopefully the bulk of the code but they need
> > rebasing/completing/reviewing.
> >
> > Otherwise we can reconsider KIP-653 and/or KIP-719.
> >
> > Assuming log4j2 does not go end of life in the near future (We can
> > reach out to Apache Logging to clarify that point.), I think it still
> > makes sense to adopt it. I would also go ahead and deprecate our
> > appender.
> >
> > Thanks,
> > Mickael
> >
> > 0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
> > 1: https://github.com/apache/logging-log4j2/issues/1951
> > 2: https://logging.apache.org/log4j/3.x/#requirements
> >


Re: [DISCUSS] KIP-971 Expose replication-offset-lag MirrorMaker2 metric

2024-01-10 Thread Mickael Maison
Hi Elxan,

Thanks for the KIP, it looks like a useful addition.

Can you add to the KIP the default value you propose for
replication.lag.metric.refresh.interval? In MirrorMaker most interval
configs can be set to -1 to disable them; will that be the case for this
new feature, or will this setting only accept positive values?
I also wonder if replication-lag or record-lag would be clearer names
than replication-offset-lag, WDYT?

Thanks,
Mickael

On Wed, Jan 3, 2024 at 6:15 PM Elxan Eminov  wrote:
>
> Hi all,
> Here is the vote thread:
> https://lists.apache.org/thread/ftlnolcrh858dry89sjg06mdcdj9mrqv
>
> Cheers!
>
> On Wed, 27 Dec 2023 at 11:23, Elxan Eminov  wrote:
>
> > Hi all,
> > I've updated the KIP with the details we discussed in this thread.
> > I'll call in a vote after the holidays if everything looks good.
> > Thanks!
> >
> > On Sat, 26 Aug 2023 at 15:49, Elxan Eminov 
> > wrote:
> >
> >> Relatively minor change with a new metric for MM2
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-971%3A+Expose+replication-offset-lag+MirrorMaker2+metric
> >>
> >


Logging in Kafka

2024-01-10 Thread Mickael Maison
Hi,

Starting a new thread to discuss the current logging situation in
Kafka. I'll restate everything we know but see the [DISCUSS] Road to
Kafka 4.0 if you are interested in what has already been said. [0]

Currently Kafka uses SLF4J and reload4j as the logging backend. We had
to adopt reload4j in 3.2.0 as log4j was end of life and has a few
security issues.

In 2020 we adopted KIP-653 to upgrade to log4j2. Due to
incompatibilities in the configuration mechanism with log4j/reload4j
we decided to delay the upgrade to the next major release, Kafka 4.0.
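To illustrate the incompatibility, the same console appender is configured
quite differently in the two formats (simplified excerpts, not the exact
files we ship):

    # log4j 1.x / reload4j (log4j.properties)
    log4j.rootLogger=INFO, stdout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

    # log4j2 (log4j2.properties)
    appender.stdout.type=Console
    appender.stdout.name=STDOUT
    appender.stdout.layout.type=PatternLayout
    appender.stdout.layout.pattern=[%d] %p %m (%c)%n
    rootLogger.level=INFO
    rootLogger.appenderRef.stdout.ref=STDOUT

so existing log4j.properties files will generally not work as-is after the switch.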

Kafka also currently provides a log4j appender. In 2022, we adopted
KIP-719 to deprecate it since we wanted to switch to log4j2. At the
time Apache Logging also had a Kafka appender that worked with log4j2.
They since deprecated that appender in log4j2 and it is not part of
log4j3. [1]

Log4j3 is also nearing release but it seems it will require Java 17.
The website states Java 11 [2] but the artifacts from the latest 3.0.0
beta are built for Java 17. I was not able to find a clear maintenance
statement about log4j2 once log4j3 gets released.

The question is where do we go from here?
We can stick with our plans:
1. Deprecate the appender in the next 3.x release and plan to remove it in 4.0
2. Do the necessary work to switch to log4j2 in 4.0
If so, we need people to drive these work items. We have PRs for these
with hopefully the bulk of the code, but they need
rebasing/completing/reviewing.

Otherwise we can reconsider KIP-653 and/or KIP-719.

Assuming log4j2 does not go end of life in the near future (we can
reach out to Apache Logging to clarify that point), I think it still
makes sense to adopt it. I would also go ahead and deprecate our
appender.

Thanks,
Mickael

0: https://lists.apache.org/thread/q0sz910o1y9mhq159oy16w31d6dzh79f
1: https://github.com/apache/logging-log4j2/issues/1951
2: https://logging.apache.org/log4j/3.x/#requirements


Re: [DISCUSS] Road to Kafka 4.0

2024-01-10 Thread Mickael Maison
Hi Colin,

Regarding KIP-719, I think we need it to land in 3.8 if we want to remove
the appender in 4.0. I also just noticed that log4j2's KafkaAppender is
being deprecated and will not be part of log4j3. [0]

For KIP-653, as I said, my point was to gauge interest in getting it
done. While it may not be a "must-do" to keep Kafka working, we can
only do this type of change in major releases. So if we don't do it
now, it won't happen for a few more years.

Regarding log4j3, even though the website states it requires Java 11
[1], it seems the latest beta release requires Java 17 so it's not
something we'll be able to adopt now.

0: https://github.com/apache/logging-log4j2/issues/1951
1: https://logging.apache.org/log4j/3.x/#requirements

Thanks,
Mickael

On Fri, Jan 5, 2024 at 12:18 AM Colin McCabe  wrote:
>
> Hi Mickael,
>
> Thanks for bringing this up.
>
> The main motivation given in KIP-653 for moving to log4j 2.x is that log4j 
> 1.x is no longer supported. But since we moved to reload4j, which is still 
> supported, that isn't a concern any longer.
>
> To be clear, I'm not saying we shouldn't upgrade, but I'm just trying to 
> explain why I think there hasn't been as much interest in this lately. I see 
> this as a "cool feature" rather than as a must-do.
>
> If we still want to do this for 4.0, it would be good to understand whether 
> there's any work that has to land in 3.8. Do we have to get KIP-719 into 3.8 
> so that we have a reasonable deprecation period?
>
> Also, if we do upgrade, I agree with Ismael that we should consider going to 
> log4j3. Assuming they have a non-beta release by the time 4.0 is ready.
>
> best,
> Colin
>
> On Thu, Jan 4, 2024, at 03:08, Mickael Maison wrote:
> > Hi Ismael,
> >
> > Yes both KIPs have been voted.
> > My point, which admittedly wasn't clear, was to gauge the interest in
> > getting them done and if so identifying people to drive these tasks.
> >
> > KIP-719 shouldn't require too much more work to complete. There's a PR
> > [0] which is relatively straightforward. I pinged Lee Dongjin.
> > KIP-653 is more involved and depends on KIP-719. There's also a PR [1]
> > which is pretty large.
> >
> > Yes log4j3 was on my mind as it's expected to be compatible with
> > log4j2 and bring significant improvements.
> >
> > 0: https://github.com/apache/kafka/pull/10244
> > 1: https://github.com/apache/kafka/pull/7898
> >
> > Thanks,
> > Mickael
> >
> > On Thu, Jan 4, 2024 at 11:34 AM Ismael Juma  wrote:
> >>
> >> Hi Mickael,
> >>
> >> Given that KIP-653 was accepted, the current position is that we would move
> >> to log4j2 - provided that someone is available to drive that. It's also
> >> worth noting that log4j3 is now a thing (but not yet final):
> >>
> >> https://logging.apache.org/log4j/3.x/
> >>
> >> Ismael
> >>
> >> On Thu, Jan 4, 2024 at 2:15 AM Mickael Maison 
> >> wrote:
> >>
> >> > Hi,
> >> >
> >> > I've not seen replies about log4j2.
> > > The plan was to deprecate the appender (KIP-719) and switch to log4j2
> >> > (KIP-653).
> >> >
> >> > While reload4j works well, I'd still be in favor of switching to
> >> > log4j2 in Kafka 4.0.
> >> >
> >> > Thanks,
> >> > Mickael
> >> >
> >> > On Fri, Dec 29, 2023 at 2:19 AM Colin McCabe  wrote:
> >> > >
> >> > > Hi all,
> >> > >
> >> > > Let's continue this dicsussion on the "[DISCUSS] KIP-1012: The need for
> >> > a Kafka 3.8.x release" email thread.
> >> > >
> >> > > Colin
> >> > >
> >> > >
> >> > > On Tue, Dec 26, 2023, at 12:50, José Armando García Sancio wrote:
> >> > > > Hi Divij,
> >> > > >
> >> > > > Thanks for the feedback. I agree that having a 3.8 release is
> >> > > > beneficial but some of the comments in this message are inaccurate 
> >> > > > and
> >> > > > could mislead the community and users.
> >> > > >
> >> > > > On Thu, Dec 21, 2023 at 7:00 AM Divij Vaidya 
> >> > > > 
> >> > wrote:
> >> > > >> 1\ Durability/availability bugs in kraft - Even though kraft has 
> >> > > >> been
> >> > > >> around for a while, we keep finding bugs that impact availability 
> >> > > >> and
> >> > data

[jira] [Resolved] (KAFKA-15741) KRaft support in DescribeConsumerGroupTest

2024-01-09 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15741.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in DescribeConsumerGroupTest
> --
>
> Key: KAFKA-15741
> URL: https://issues.apache.org/jira/browse/KAFKA-15741
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in DescribeConsumerGroupTest in 
> core/src/test/scala/unit/kafka/admin/DescribeConsumerGroupTest.scala need to 
> be updated to support KRaft
> 39 : def testDescribeNonExistingGroup(): Unit = {
> 55 : def testDescribeWithMultipleSubActions(): Unit = {
> 76 : def testDescribeWithStateValue(): Unit = {
> 97 : def testDescribeOffsetsOfNonExistingGroup(): Unit = {
> 113 : def testDescribeMembersOfNonExistingGroup(): Unit = {
> 133 : def testDescribeStateOfNonExistingGroup(): Unit = {
> 151 : def testDescribeExistingGroup(): Unit = {
> 169 : def testDescribeExistingGroups(): Unit = {
> 194 : def testDescribeAllExistingGroups(): Unit = {
> 218 : def testDescribeOffsetsOfExistingGroup(): Unit = {
> 239 : def testDescribeMembersOfExistingGroup(): Unit = {
> 272 : def testDescribeStateOfExistingGroup(): Unit = {
> 291 : def testDescribeStateOfExistingGroupWithRoundRobinAssignor(): Unit = {
> 310 : def testDescribeExistingGroupWithNoMembers(): Unit = {
> 334 : def testDescribeOffsetsOfExistingGroupWithNoMembers(): Unit = {
> 366 : def testDescribeMembersOfExistingGroupWithNoMembers(): Unit = {
> 390 : def testDescribeStateOfExistingGroupWithNoMembers(): Unit = {
> 417 : def testDescribeWithConsumersWithoutAssignedPartitions(): Unit = {
> 436 : def testDescribeOffsetsWithConsumersWithoutAssignedPartitions(): Unit = 
> {
> 455 : def testDescribeMembersWithConsumersWithoutAssignedPartitions(): Unit = 
> {
> 480 : def testDescribeStateWithConsumersWithoutAssignedPartitions(): Unit = {
> 496 : def testDescribeWithMultiPartitionTopicAndMultipleConsumers(): Unit = {
> 517 : def testDescribeOffsetsWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 539 : def testDescribeMembersWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 565 : def testDescribeStateWithMultiPartitionTopicAndMultipleConsumers(): 
> Unit = {
> 583 : def testDescribeSimpleConsumerGroup(): Unit = {
> 601 : def testDescribeGroupWithShortInitializationTimeout(): Unit = {
> 618 : def testDescribeGroupOffsetsWithShortInitializationTimeout(): Unit = {
> 634 : def testDescribeGroupMembersWithShortInitializationTimeout(): Unit = {
> 652 : def testDescribeGroupStateWithShortInitializationTimeout(): Unit = {
> 668 : def testDescribeWithUnrecognizedNewConsumerOption(): Unit = {
> 674 : def testDescribeNonOffsetCommitGroup(): Unit = {
> Scanned 699 lines. Found 0 KRaft tests out of 32 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15719) KRaft support in OffsetsForLeaderEpochRequestTest

2024-01-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15719.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in OffsetsForLeaderEpochRequestTest
> -
>
> Key: KAFKA-15719
> URL: https://issues.apache.org/jira/browse/KAFKA-15719
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Zihao Lin
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in OffsetsForLeaderEpochRequestTest in 
> core/src/test/scala/unit/kafka/server/OffsetsForLeaderEpochRequestTest.scala 
> need to be updated to support KRaft
> 37 : def testOffsetsForLeaderEpochErrorCodes(): Unit = {
> 60 : def testCurrentEpochValidation(): Unit = {
> Scanned 127 lines. Found 0 KRaft tests out of 2 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15725) KRaft support in FetchRequestTest

2024-01-08 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-15725.

Fix Version/s: 3.7.0
   Resolution: Fixed

> KRaft support in FetchRequestTest
> -
>
> Key: KAFKA-15725
> URL: https://issues.apache.org/jira/browse/KAFKA-15725
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.7.0
>
>
> The following tests in FetchRequestTest in 
> core/src/test/scala/unit/kafka/server/FetchRequestTest.scala need to be 
> updated to support KRaft
> 45 : def testBrokerRespectsPartitionsOrderAndSizeLimits(): Unit = {
> 147 : def testFetchRequestV4WithReadCommitted(): Unit = {
> 165 : def testFetchRequestToNonReplica(): Unit = {
> 195 : def testLastFetchedEpochValidation(): Unit = {
> 200 : def testLastFetchedEpochValidationV12(): Unit = {
> 247 : def testCurrentEpochValidation(): Unit = {
> 252 : def testCurrentEpochValidationV12(): Unit = {
> 295 : def testEpochValidationWithinFetchSession(): Unit = {
> 300 : def testEpochValidationWithinFetchSessionV12(): Unit = {
> 361 : def testDownConversionWithConnectionFailure(): Unit = {
> 428 : def testDownConversionFromBatchedToUnbatchedRespectsOffset(): Unit = {
> 509 : def testCreateIncrementalFetchWithPartitionsInErrorV12(): Unit = {
> 568 : def testFetchWithPartitionsWithIdError(): Unit = {
> 610 : def testZStdCompressedTopic(): Unit = {
> 657 : def testZStdCompressedRecords(): Unit = {
> Scanned 783 lines. Found 0 KRaft tests out of 15 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: KIP-991: Allow DropHeaders SMT to drop headers by wildcard/regexp

2024-01-05 Thread Mickael Maison
Hi Roman,

Thanks for the KIP! This would be a useful improvement.

Ideally you want to make a concrete proposal in the KIP instead of
listing a series of options. Currently the KIP seems to list two
alternatives.

Also, a KIP focuses on the API changes rather than on the pure
implementation. It seems you're proposing adding a configuration to
the DropHeaders SMT, so it would be good to describe that new
configuration in the KIP. For example, see KIP-911 which also added a
configuration.

Thanks,
Mickael

On Mon, Oct 16, 2023 at 9:50 AM Roman Schmitz  wrote:
>
> Hi Andrew,
>
> Ok, thanks for the feedback! I added a few more details and code examples
> to explain the proposed changes.
>
> Thanks,
> Roman
>
> Am So., 15. Okt. 2023 um 22:12 Uhr schrieb Andrew Schofield <
> andrew_schofield_j...@outlook.com>:
>
> > Hi Roman,
> > Thanks for the KIP. I think it’s an interesting idea, but I think the KIP
> > document needs some
> > more details added before it’s ready for review. For example, here’s a KIP
> > in the same
> > area which was delivered in an earlier version of Kafka. I think this is a
> > good KIP to copy
> > for a suitable level of detail and description (
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-585%3A+Filter+and+Conditional+SMTs
> > ).
> >
> > Hope this helps.
> >
> > Thanks,
> > Andrew
> >
> > > On 15 Oct 2023, at 21:02, Roman Schmitz  wrote:
> > >
> > > Hi all,
> > >
> > > While working with different customers I came across the case several
> > times
> > > that we'd like to not only explicitly remove headers by name but by
> > pattern
> > > / regexp. Here is a KIP for this feature!
> > >
> > > Please let me know if you have any comments, questions, or suggestions!
> > >
> > > https://cwiki.apache.org/confluence/x/oYtEE
> > >
> > > Thanks,
> > > Roman
> >
> >
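
To make the configuration discussed in the thread above more tangible:
the existing DropHeaders SMT takes an explicit list of header names via
its headers property, and the KIP proposes adding pattern matching on
top of that. The pattern property name below is purely hypothetical
(the KIP had not settled on names at this point) and is only meant to
show the shape of such a connector configuration.

    # Existing behaviour: drop headers by exact name
    transforms=dropInternal
    transforms.dropInternal.type=org.apache.kafka.connect.transforms.DropHeaders
    transforms.dropInternal.headers=internal-trace-id,internal-span-id

    # Proposed addition (hypothetical property name): also drop any
    # header whose name matches a regular expression
    transforms.dropInternal.headers.pattern=internal-.*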


Re: [DISCUSS] Road to Kafka 4.0

2024-01-04 Thread Mickael Maison
Hi Ismael,

Yes both KIPs have been voted.
My point, which admittedly wasn't clear, was to gauge the interest in
getting them done and, if so, to identify people to drive these tasks.

KIP-719 shouldn't require too much more work to complete. There's a PR
[0] which is relatively straightforward. I pinged Lee Dongjin.
KIP-653 is more involved and depends on KIP-719. There's also a PR [1]
which is pretty large.

Yes, log4j3 was on my mind as it's expected to be compatible with
log4j2 and to bring significant improvements.

0: https://github.com/apache/kafka/pull/10244
1: https://github.com/apache/kafka/pull/7898

Thanks,
Mickael

On Thu, Jan 4, 2024 at 11:34 AM Ismael Juma  wrote:
>
> Hi Mickael,
>
> Given that KIP-653 was accepted, the current position is that we would move
> to log4j2 - provided that someone is available to drive that. It's also
> worth noting that log4j3 is now a thing (but not yet final):
>
> https://logging.apache.org/log4j/3.x/
>
> Ismael
>
> On Thu, Jan 4, 2024 at 2:15 AM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > I've not seen replies about log4j2.
> > The plan was to deprecate the appender (KIP-719) and switch to log4j2
> > (KIP-653).
> >
> > While reload4j works well, I'd still be in favor of switching to
> > log4j2 in Kafka 4.0.
> >
> > Thanks,
> > Mickael
> >
> > On Fri, Dec 29, 2023 at 2:19 AM Colin McCabe  wrote:
> > >
> > > Hi all,
> > >
> > > Let's continue this discussion on the "[DISCUSS] KIP-1012: The need for
> > a Kafka 3.8.x release" email thread.
> > >
> > > Colin
> > >
> > >
> > > On Tue, Dec 26, 2023, at 12:50, José Armando García Sancio wrote:
> > > > Hi Divij,
> > > >
> > > > Thanks for the feedback. I agree that having a 3.8 release is
> > > > beneficial but some of the comments in this message are inaccurate and
> > > > could mislead the community and users.
> > > >
> > > > On Thu, Dec 21, 2023 at 7:00 AM Divij Vaidya 
> > wrote:
> > > >> 1\ Durability/availability bugs in kraft - Even though kraft has been
> > > >> around for a while, we keep finding bugs that impact availability and
> > data
> > > >> durability in it almost with every release [1] [2]. It's a complex
> > feature
> > > >> and such bugs are expected during the stabilization phase. But we
> > can't
> > > >> remove the alternative until we see stabilization in kraft i.e. no new
> > > >> stability/durability bugs for at least 2 releases.
> > > >
> > > > I took a look at both of these issues and neither of them are bugs
> > > > that affect KRaft's durability and availability.
> > > >
> > > >> [1] https://issues.apache.org/jira/browse/KAFKA-15495
> > > >
> > > > This issue is not specific to KRaft and has been an issue in Apache
> > > > Kafka since the ISR leader election and replication algorithm was
> > > > added to Apache Kafka. I acknowledge that this misunderstanding is
> > > > partially due to the Jira description which insinuates that this only
> > > > applies to KRaft which is not true.
> > > >
> > > >> [2] https://issues.apache.org/jira/browse/KAFKA-15489
> > > >
> > > > First, technically this issue was not first discovered in some recent
> > > > release. This issue was identified by me back in January of 2022:
> > > > https://issues.apache.org/jira/browse/KAFKA-13621. I decided to lower
> > > > the priority as it requires a very specific network partition where
> > > > the controllers are partitioned from the current leader but the
> > > > brokers are not.
> > > >
> > > > This is not a durability bug as the KRaft cluster metadata partition
> > > > leader will not be able to advance the HWM and hence commit records.
> > > >
> > > > Regarding availability, The KRaft's cluster metadata partition favors
> > > > consistency and partition tolerance versus availability from CAP. This
> > > > is by design and not a bug in the protocol or implementation.
> > > >
> > > >> 2\ Parity with Zk - There are also pending bugs [3] which are in the
> > > >> category of Zk parity. Removing Zk from Kafka without having full
> > feature
> > > >> parity with Zk will leave some Kafka users with no upgrade path.
> > > >> 3\ Test coverage - We also don't have sufficient test coverage for
> > kraft

Re: [DISCUSS] Road to Kafka 4.0

2024-01-04 Thread Mickael Maison
Hi,

I've not seen replies about log4j2.
The plan was to deprecate the appender (KIP-719) and switch to log4j2
(KIP-653).

While reload4j works well, I'd still be in favor of switching to
log4j2 in Kafka 4.0.

Thanks,
Mickael

On Fri, Dec 29, 2023 at 2:19 AM Colin McCabe  wrote:
>
> Hi all,
>
> Let's continue this discussion on the "[DISCUSS] KIP-1012: The need for a
> Kafka 3.8.x release" email thread.
>
> Colin
>
>
> On Tue, Dec 26, 2023, at 12:50, José Armando García Sancio wrote:
> > Hi Divij,
> >
> > Thanks for the feedback. I agree that having a 3.8 release is
> > beneficial but some of the comments in this message are inaccurate and
> > could mislead the community and users.
> >
> > On Thu, Dec 21, 2023 at 7:00 AM Divij Vaidya  
> > wrote:
> >> 1\ Durability/availability bugs in kraft - Even though kraft has been
> >> around for a while, we keep finding bugs that impact availability and data
> >> durability in it almost with every release [1] [2]. It's a complex feature
> >> and such bugs are expected during the stabilization phase. But we can't
> >> remove the alternative until we see stabilization in kraft i.e. no new
> >> stability/durability bugs for at least 2 releases.
> >
> > I took a look at both of these issues and neither of them are bugs
> > that affect KRaft's durability and availability.
> >
> >> [1] https://issues.apache.org/jira/browse/KAFKA-15495
> >
> > This issue is not specific to KRaft and has been an issue in Apache
> > Kafka since the ISR leader election and replication algorithm was
> > added to Apache Kafka. I acknowledge that this misunderstanding is
> > partially due to the Jira description which insinuates that this only
> > applies to KRaft which is not true.
> >
> >> [2] https://issues.apache.org/jira/browse/KAFKA-15489
> >
> > First, technically this issue was not first discovered in some recent
> > release. This issue was identified by me back in January of 2022:
> > https://issues.apache.org/jira/browse/KAFKA-13621. I decided to lower
> > the priority as it requires a very specific network partition where
> > the controllers are partitioned from the current leader but the
> > brokers are not.
> >
> > This is not a durability bug as the KRaft cluster metadata partition
> > leader will not be able to advance the HWM and hence commit records.
> >
> > Regarding availability, The KRaft's cluster metadata partition favors
> > consistency and partition tolerance versus availability from CAP. This
> > is by design and not a bug in the protocol or implementation.
> >
> >> 2\ Parity with Zk - There are also pending bugs [3] which are in the
> >> category of Zk parity. Removing Zk from Kafka without having full feature
> >> parity with Zk will leave some Kafka users with no upgrade path.
> >> 3\ Test coverage - We also don't have sufficient test coverage for kraft
> >> since quite a few tests are Zk only at this stage.
> >>
> >> Given these concerns, I believe we need to reach 100% Zk parity and allow
> >> new feature stabilisation (such as scram, JBOD) for at least 1 version
> >> (maybe more if we find bugs in that feature) before we remove Zk. I also
> >> agree with the point of view that we can't delay 4.0 indefinitely and we
> >> need a clear cut line.
> >
> > There seems to be some misunderstanding regarding Apache Kafka
> > versioning scheme. Minor versions (e.g. 3.x) are needed for feature
> > releases like new RPCs and configurations. They are not needed for bug
> > fixes. Bug fixes can and should be done in patch releases (e.g.
> > 3.7.x).
> >
> > This means that you don't need a 3.8 or 3.9 release to fix a bug in Kafka.
> >
> > Thanks!
> > --
> > -José
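
For readers wondering what the log4j2 switch discussed at the top of
this message would mean in practice: brokers currently ship a log4j 1.x
style config/log4j.properties, and after KIP-653 the equivalent file
would use log4j2 syntax (log4j2 also accepts a properties format). The
snippet below is only a rough sketch of what such a file could look
like, not the configuration the project would actually ship.

    status = warn
    name = KafkaBrokerLogging

    appender.stdout.type = Console
    appender.stdout.name = STDOUT
    appender.stdout.layout.type = PatternLayout
    appender.stdout.layout.pattern = [%d] %p %m (%c)%n

    rootLogger.level = info
    rootLogger.appenderRef.stdout.ref = STDOUT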


Re: [VOTE] KIP-1004: Enforce tasks.max property in Kafka Connect

2024-01-03 Thread Mickael Maison
Hi Chris,

+1 (binding), thanks for the KIP

Mickael

On Tue, Jan 2, 2024 at 8:55 PM Hector Geraldino (BLOOMBERG/ 919 3RD A)
 wrote:
>
> +1 (non-binding)
>
> Thanks Chris!
>
> From: dev@kafka.apache.org At: 01/02/24 11:49:18 UTC-5:00To:  
> dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-1004: Enforce tasks.max property in Kafka Connect
>
> Hi all,
>
> Happy New Year! Wanted to give this a bump now that the holidays are over
> for a lot of us. Looking forward to people's thoughts!
>
> Cheers,
>
> Chris
>
> On Mon, Dec 4, 2023 at 10:36 AM Chris Egerton  wrote:
>
> > Hi all,
> >
> > I'd like to call for a vote on KIP-1004, which adds enforcement for the
> > tasks.max connector property in Kafka Connect.
> >
> > The KIP:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1004%3A+Enforce+tasks.max+
> property+in+Kafka+Connect
> >
> > The discussion thread:
> > https://lists.apache.org/thread/scx75cjwm19jyt19wxky41q9smf5nx6d
> >
> > Cheers,
> >
> > Chris
> >
>
>


Re: [VOTE] KIP-1013: Drop broker and tools support for Java 11 in Kafka 4.0 (deprecate in 3.7)

2024-01-03 Thread Mickael Maison
Hi Ismael,

I'm +1 (binding) too.

One small typo: the KIP states "The remaining modules (clients,
streams, connect, tools, etc.) will continue to support Java 11." I
think we want to remove support for Java 11 in the tools module, so it
shouldn't be listed there.

Thanks,
Mickael

On Wed, Jan 3, 2024 at 11:09 AM Divij Vaidya  wrote:
>
> +1 (binding)
>
> --
> Divij Vaidya
>
>
>
> On Wed, Jan 3, 2024 at 11:06 AM Viktor Somogyi-Vass
>  wrote:
>
> > Hi Ismael,
> >
> > I think it's important to make this change, the youtube video you posted on
> > the discussion thread makes very good arguments and so does the KIP. Java 8
> > is almost a liability and Java 11 already has smaller (and decreasing)
> > adoption than 17. It's a +1 (binding) from me.
> >
> > Thanks,
> > Viktor
> >
> > On Wed, Jan 3, 2024 at 7:00 AM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > +1 (non-binding).
> > >
> > > On Wed, Jan 3, 2024 at 8:01 AM Satish Duggana 
> > > wrote:
> > >
> > > > Thanks Ismael for the proposal.
> > > >
> > > > Adopting JDK 17 enhances developer productivity and has reached a
> > > > level of maturity that has led to its adoption by several other major
> > > > projects, signifying its reliability and effectiveness.
> > > >
> > > > +1 (binding)
> > > >
> > > >
> > > > ~Satish.
> > > >
> > > > On Wed, 3 Jan 2024 at 06:59, Justine Olshan
> > > >  wrote:
> > > > >
> > > > > Thanks for driving this.
> > > > >
> > > > > +1 (binding) from me.
> > > > >
> > > > > Justine
> > > > >
> > > > > On Tue, Jan 2, 2024 at 4:30 PM Ismael Juma 
> > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I would like to start a vote on KIP-1013.
> > > > > >
> > > > > > As stated in the discussion thread, this KIP was proposed after the
> > > KIP
> > > > > > freeze for Apache Kafka 3.7, but it is purely a documentation
> > update
> > > > (if we
> > > > > > decide to adopt it) and I believe it would serve our users best if
> > we
> > > > > > communicate the deprecation for removal sooner (i.e. 3.7) rather
> > than
> > > > later
> > > > > > (i.e. 3.8).
> > > > > >
> > > > > > Please take a look and cast your vote.
> > > > > >
> > > > > > Link:
> > > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510
> > > > > >
> > > > > > Ismael
> > > > > >
> > > >
> > >
> >


Re: [ANNOUNCE] New Kafka PMC Member: Divij Vaidya

2023-12-27 Thread Mickael Maison
Congratulations Divij!

On Wed, Dec 27, 2023 at 1:05 PM Sagar  wrote:
>
> Congrats Divij! Absolutely well deserved !
>
> Thanks!
> Sagar.
>
> On Wed, Dec 27, 2023 at 5:15 PM Luke Chen  wrote:
>
> > Hi, Everyone,
> >
> > Divij has been a Kafka committer since June, 2023. He has remained very
> > active and instructive in the community since becoming a committer. It's my
> > pleasure to announce that Divij is now a member of Kafka PMC.
> >
> > Congratulations Divij!
> >
> > Luke
> > on behalf of Apache Kafka PMC
> >


Re: [DISCUSS] KIP-975 Docker Image for Apache Kafka

2023-12-20 Thread Mickael Maison
Hi,

Yes, changes have to be merged by a committer, but for this kind of
decision it's best if it's seen by more than one.

> Hmm, is this a blocker? I don't see why. It would be nice to include it in 
> 3.7 and we have time, so I'm fine with that.
Sure, it's not a blocker in the usual sense. But if we ship this Go
binary it's possible users extending our images will start depending
on it. Since we want to get rid of it, I'd prefer if we never shipped
it.

Thanks,
Mickael


On Wed, Dec 20, 2023 at 4:28 PM Ismael Juma  wrote:
>
> Hi Mickael,
>
> A couple of comments inline.
>
> On Wed, Dec 20, 2023 at 3:34 AM Mickael Maison 
> wrote:
>
> > When you say, "we have opted to take a different approach", who is
> > "we"? I think this decision should be made by the committers.
> >
>
> Changes can only be merged by committers, so I think it's implicit that at
> least one committer would have to agree. :) I think Vedarth was simply
> saying that the group working on the KIP had a new proposal that addressed
> all the goals in a better way than the original proposal.
>
> I marked the Jira (https://issues.apache.org/jira/browse/KAFKA-16016)
> > as a blocker for 3.7 as I think we need to make this decision before
> > releasing the docker images.
> >
>
> Hmm, is this a blocker? I don't see why. It would be nice to include it in
> 3.7 and we have time, so I'm fine with that.
>
> Ismael

