Re: [VOTE] 3.3.1 RC0

2022-10-03 Thread Federico Valeri
Hi, I did the following to validate the release:

- Checksums and signatures ok
- Build from source ok
- Unit and integration tests ok
- Quickstart in both ZK and KRaft modes ok
- Test Java app with staging Maven artifacts ok
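
The checksum step above can be scripted; a minimal sketch using only the JDK (the artifact path and expected value are placeholders, not the actual release files):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ChecksumCheck {
    // Compute the SHA-512 digest of a file and render it as lowercase hex,
    // for comparison against the published .sha512 file.
    static String sha512Hex(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path artifact = Path.of(args[0]);  // e.g. the downloaded source tarball
        String expected = args[1].trim();  // value copied from the .sha512 file
        String actual = sha512Hex(artifact);
        System.out.println(actual.equals(expected) ? "Checksum OK" : "Checksum MISMATCH");
    }
}
```

Signature verification additionally needs the KEYS file and `gpg --verify`; the sketch covers only the digest comparison.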

+1 (non-binding)

Thanks
Fede

On Sun, Oct 2, 2022 at 7:47 PM José Armando García Sancio
 wrote:
>
> Hi all,
>
> All of the system tests for 3.3 passed.
>
> http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/3.3/2022-09-30--001.system-test-kafka-3.3--1664605767--confluentinc--3.3--eefe867118/report.html
>
> This build ran all of the tests and there were two failures:
> kafkatest.tests.core.delegation_token_test and
> kafkatest.tests.streams.streams_broker_compatibility_test.
>
> I ran the kafkatest.tests.streams.streams_broker_compatibility_test
> module by itself and it passed:
> http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1664643010--apache--3.3--cdb25e10dc/2022-10-01--001./2022-10-01--001./report.html
>
> For kafkatest.tests.core.delegation_token_test there was an issue with
> the test that David Arthur fixed. I ran that module with that fix and
> it passed: 
> http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1664678563--apache--3.3--c2d7984c8e/2022-10-02--001./2022-10-02--001./report.html
>
> Thank you,
> --
> -José


KIP - Permissions

2022-10-03 Thread Mathieu Amblard
Hello,

I would like to create a KIP, so I am following the procedure to create one:


   1. Send an email to the dev mailing list (dev@kafka.apache.org) containing
   your wiki ID and Jira ID requesting permissions to contribute to Apache
   Kafka.

Here is the necessary information:
Wiki ID: mathieu.amblard
Jira ID: mathieu.amblard

Thank you in advance,
Best Regards
Mathieu Amblard


Re: KIP - Permissions

2022-10-03 Thread Mickael Maison
Hi Mathieu,

I've granted you permissions to both Jira and the wiki.

Thanks for your interest in Apache Kafka!

Mickael

On Mon, Oct 3, 2022 at 4:02 PM Mathieu Amblard
 wrote:
>
> Hello,
>
> I would like to create a KIP, I am doing the procedure to create one :
>
>
>1. Send an email to the dev mailing list (dev@kafka.apache.org) containing
>your wiki ID and Jira ID requesting permissions to contribute to Apache
>Kafka.
>
> Here are the necessary information :
> Wiki ID : mathieu.amblard
> Jira ID : mathieu.amblard
>
> Thank you in advance,
> Best Regards
> Mathieu Amblard


[ANNOUNCE] Apache Kafka 3.3.1

2022-10-03 Thread José Armando García Sancio
The Apache Kafka community is pleased to announce the release for
Apache Kafka 3.3.1.

Kafka 3.3.1 includes a number of significant new features. Here is a
summary of some notable changes:

KIP-833: Mark KRaft as Production Ready
KIP-778: KRaft to KRaft upgrades
KIP-835: Monitor KRaft Controller Quorum health
KIP-794: Strictly Uniform Sticky Partitioner
KIP-834: Pause/resume KafkaStreams topologies
KIP-618: Exactly-Once support for source connectors

All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/3.3.1/RELEASE_NOTES.html
https://archive.apache.org/dist/kafka/3.3.0/RELEASE_NOTES.html

You can download the source and binary release (Scala 2.12 and 2.13) from:
https://kafka.apache.org/downloads#3.3.1

---

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of
records to one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming
the input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.

With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.

Apache Kafka is in use at large and small companies worldwide,
including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
Zalando, among others.

A big thank you to the following 115 contributors to this release!

Akhilesh C, Akhilesh Chaganti, Alan Sheinberg, Aleksandr Sorokoumov,
Alex Sorokoumov, Alok Nikhil, Alyssa Huang, Aman Singh, Amir M. Saeid,
Anastasia Vela, András Csáki, Andrew Borley, Andrew Dean, andymg3,
Aneesh Garg, Artem Livshits, A. Sophie Blee-Goldman, Bill Bejeck,
Bounkong Khamphousone, bozhao12, Bruno Cadonna, Chase Thomas, chern,
Chris Egerton, Christo Lolov, Christopher L. Shannon, CHUN-HAO TANG,
Clara Fang, Clay Johnson, Colin Patrick McCabe, David Arthur, David
Jacot, David Mao, Dejan Maric, dengziming, Derek Troy-West, Divij
Vaidya, Edoardo Comar, Edwin, Eugene Tolbakov, Federico Valeri,
Guozhang Wang, Hao Li, Hongten, Idan Kamara, Ismael Juma, Jacklee,
James Hughes, Jason Gustafson, JK-Wang, jnewhouse, Joel Hamill, John
Roesler, Jorge Esteban Quilcate Otoya, José Armando García Sancio,
jparag, Justine Olshan, K8sCat, Kirk True, Konstantine Karantasis,
Kvicii, Lee Dongjin, Levani Kokhreidze, Liam Clarke-Hutchinson, Lucas
Bradstreet, Lucas Wang, Luke Chen, Manikumar Reddy, Marco Aurelio
Lotz, Matthew de Detrich, Matthias J. Sax, Mickael Maison, Mike
Lothian, Mike Tobola, Milind Mantri, nicolasguyomar, Niket, Niket
Goel, Nikolay, Okada Haruki, Philip Nee, Prashanth Joseph Babu, Rajani
Karuturi, Rajini Sivaram, Randall Hauch, Richard Joerger, Rittika
Adhikari, RivenSun, Rohan, Ron Dagostino, ruanliang, runom, Sanjana
Kaundinya, Sayantanu Dey, SC, sciclon2, Shawn, sunshujie1990, Thomas
Cooper, Tim Patterson, Tom Bentley, Tom Kaszuba, Tomonari Yamashita,
vamossagar12, Viktor Somogyi-Vass, Walker Carlson, Xavier Léauté,
Xiaobing Fang, Xiaoyue Xue, xjin-Confluent, xuexiaoyue, Yang Yu, Yash
Mayya, Yu, yun-yun

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!
José


[jira] [Resolved] (KAFKA-14247) Implement EventHandler interface and DefaultEventHandler

2022-10-03 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-14247.
-
Resolution: Fixed

> Implement EventHandler interface and DefaultEventHandler
> 
>
> Key: KAFKA-14247
> URL: https://issues.apache.org/jira/browse/KAFKA-14247
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Philip Nee
>Assignee: Philip Nee
>Priority: Major
>
> The polling thread uses events to communicate with the background thread.
> The events sent to the background thread are the _Requests_, and the
> events sent from the background thread to the polling thread are the
> _Responses_.
>
> Here we have an EventHandler interface and a DefaultEventHandler
> implementation.  The implementation uses two blocking queues to send events
> both ways.  The two methods, add and poll, allow the client, i.e., the
> polling thread, to retrieve and add events to the handler.
>  
> PR: https://github.com/apache/kafka/pull/12663
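
A minimal sketch of the two-queue design described above (class and event names are illustrative, not the actual Kafka implementation):

```java
import java.util.Optional;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DefaultEventHandler {
    // Requests flow from the polling thread to the background thread;
    // responses flow in the opposite direction.
    private final BlockingQueue<String> requestQueue = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> responseQueue = new LinkedBlockingQueue<>();

    // Called by the polling thread to hand a request to the background thread.
    public void add(String request) {
        requestQueue.offer(request);
    }

    // Called by the polling thread to retrieve a response, if one is ready.
    public Optional<String> poll() {
        return Optional.ofNullable(responseQueue.poll());
    }

    // One iteration of what the background thread would do in its loop:
    // drain a request, process it, and enqueue the response.
    public void runBackgroundIterationOnce() {
        String request = requestQueue.poll();
        if (request != null) {
            responseQueue.offer("response-to-" + request);
        }
    }
}
```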



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-793: Sink Connectors: Support topic-mutating SMTs for async connectors (preCommit users)

2022-10-03 Thread Yash Mayya
Hi Randall,

Thanks for elaborating. I think these are all very good points and I see
why the overloaded `SinkTask::put` method is a cleaner solution overall.

> public void put(Collection<SinkRecord> records, Map<SinkRecord, TopicPartition> updatedTopicPartitions)

I think this should be

`public void put(Collection<SinkRecord> records, Map<SinkRecord, TopicPartition> originalTopicPartitions)`

instead because the sink records themselves have the updated topic
partitions (i.e. after all transformations have been applied) and the KIP
is proposing a way for the tasks to be able to access the original topic
partition (i.e. before transformations have been applied).

> Of course, if the developer does not need separate methods, they can
easily have the older `put` method simply delegate to the newer method.

If the developer does not need separate methods (i.e. they don't need to
use this new addition), they can simply continue implementing just the
older `put` method right?

> Finally, this gives us a roadmap for *eventually* deprecating the older
method, once the Connect runtime versions without this change are old
enough.

I'm not sure we'd ever want to deprecate the older method. Most common sink
connector implementations do not do their own offset tracking with
asynchronous processing and will probably never have a need for the
additional parameter `Map<SinkRecord, TopicPartition>
originalTopicPartitions` in the proposed new `put` method. These connectors
can continue implementing only the existing `SinkTask::put` method which
will be called by the default implementation of the newer overloaded `put`
method.

> the pre-commit methods use the same `Map<TopicPartition, OffsetAndMetadata> currentOffsets` data structure I'm suggesting be used.

The data structure you're suggesting be used is a `Map<SinkRecord, TopicPartition>` which will map `SinkRecord` objects to the original topic
partition of the corresponding `ConsumerRecord` right? To clarify, this is
a new data structure that will need to be managed in the `WorkerSinkTask`.
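
For illustration, the overload shape being discussed could look like the following sketch. The record types here are hypothetical stand-ins for the real Connect API classes, included only to make the sketch self-contained; the default-delegation detail is exactly the behavior being agreed on in this thread:

```java
import java.util.Collection;
import java.util.Map;

// Hypothetical stand-ins for the Connect API types.
record TopicPartition(String topic, int partition) {}
record SinkRecord(String topic, int partition, Object value) {}

abstract class SinkTask {
    // Existing method: all connectors implement this today.
    public abstract void put(Collection<SinkRecord> records);

    // Proposed overload: newer runtimes call this, passing each record's
    // original (pre-transformation) topic partition. The default
    // implementation delegates to the old method, so connectors that do
    // not need the new parameter keep implementing only put(records).
    public void put(Collection<SinkRecord> records,
                    Map<SinkRecord, TopicPartition> originalTopicPartitions) {
        put(records);
    }
}
```

A connector that does its own offset tracking would override the two-argument variant; all other connectors are untouched.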


Thanks,
Yash

On Mon, Oct 3, 2022 at 1:20 AM Randall Hauch  wrote:

> Hi, Yash.
>
> I'm not sure I quite understand why it would be "easier" for connector
> > developers to account for implementing two different overloaded `put`
> > methods (assuming that they want to use this new feature) versus using a
> > try-catch block around `SinkRecord` access methods?
>
>
> Using a try-catch around an API method that *might* be there is a
> very unusual thing for most developers. Unfortunately, we've had to resort
> to this atypical approach with Connect in places when there was no good
> alternative. We seem to be relying upon this pattern because it's easier for us,
> not because it offers a better experience for Connector developers. IMO, if
> there's a practical alternative that uses normal development practices and
> techniques, then we should use that alternative. IIUC, there is at least
> one practical alternative for this KIP that would not require developers to
> use the unusual try-catch to handle the case where methods are not found.
>
> I also think having two `put` methods is easier when the Connector has to
> do different things for different Connect runtimes, too. One of those
> methods is called by newer Connect runtimes with the new behavior, and the
> other method is called by an older Connect runtime. Of course, if the
> developer does not need separate methods, they can easily have the older
> `put` method simply delegate to the newer method.
>
> Finally, this gives us a roadmap for *eventually* deprecating the older
> method, once the Connect runtime versions without this change are old
> enough.
>
> I think the advantage of going with the
> > proposed approach in the KIP is that it wouldn't require extra
> book-keeping
> > (the Map<SinkRecord, TopicPartition> in `WorkerSinkTask` in your proposed approach)
> >
>
> The connector does have to do some of this bookkeeping in how they track
> the topic partition offsets used in the `preCommit`, and the pre-commit
> methods use the same `Map<TopicPartition, OffsetAndMetadata>
> currentOffsets`
> data structure I'm suggesting be used.
>
> I hope that helps.
>
> Best regards,
>
> Randall
>
> On Mon, Sep 26, 2022 at 9:38 AM Yash Mayya  wrote:
>
> > Hi Randall,
> >
> > Thanks for reviewing the KIP!
> >
> > > That latter logic can get quite ugly.
> >
> > I'm not sure I quite understand why it would be "easier" for connector
> > developers to account for implementing two different overloaded `put`
> > methods (assuming that they want to use this new feature) versus using a
> > try-catch block around `SinkRecord` access methods? In both cases, a
> > connector developer would need to write additional code in order to
> ensure
> > that their connector continues working with older Connect runtimes.
> > Furthermore, we would probably need to carefully document how the
> > implementation for the older `put` method should look like for connectors
> > that want to use this new feature. I think the advantage of going with
> the
> > proposed approach in the KIP is that it wouldn't require extra
> book-keeping
> > (the Map > TopicPartition> in `WorkerSi

Requesting permissions to contribute to Apache Kafka

2022-10-03 Thread Vimal K
wiki id: *vimalinfo10*
Jira id : *vimalinfo10*

Thanks & Regards
Vimal Krishnamoorthy


Re: [ANNOUNCE] Apache Kafka 3.3.1

2022-10-03 Thread Mickael Maison
Congratulations to all the contributors!

Thanks José and David for running this release.



On Mon, Oct 3, 2022 at 6:22 PM José Armando García Sancio
 wrote:
>
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.3.1.
>
> Kafka 3.3.1 includes a number of significant new features. Here is a
> summary of some notable changes:
>
> KIP-833: Mark KRaft as Production Ready
> KIP-778: KRaft to KRaft upgrades
> KIP-835: Monitor KRaft Controller Quorum health
> KIP-794: Strictly Uniform Sticky Partitioner
> KIP-834: Pause/resume KafkaStreams topologies
> KIP-618: Exactly-Once support for source connectors
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/3.3.1/RELEASE_NOTES.html
> https://archive.apache.org/dist/kafka/3.3.0/RELEASE_NOTES.html
>
> You can download the source and binary release (Scala 2.12 and 2.13) from:
> https://kafka.apache.org/downloads#3.3.1
>
> ---
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of
> records to one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming
> the input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
> Apache Kafka is in use at large and small companies worldwide,
> including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
> Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
> Zalando, among others.
>
> A big thank you for the following 115 contributors to this release!
>
> Akhilesh C, Akhilesh Chaganti, Alan Sheinberg, Aleksandr Sorokoumov,
> Alex Sorokoumov, Alok Nikhil, Alyssa Huang, Aman Singh, Amir M. Saeid,
> Anastasia Vela, András Csáki, Andrew Borley, Andrew Dean, andymg3,
> Aneesh Garg, Artem Livshits, A. Sophie Blee-Goldman, Bill Bejeck,
> Bounkong Khamphousone, bozhao12, Bruno Cadonna, Chase Thomas, chern,
> Chris Egerton, Christo Lolov, Christopher L. Shannon, CHUN-HAO TANG,
> Clara Fang, Clay Johnson, Colin Patrick McCabe, David Arthur, David
> Jacot, David Mao, Dejan Maric, dengziming, Derek Troy-West, Divij
> Vaidya, Edoardo Comar, Edwin, Eugene Tolbakov, Federico Valeri,
> Guozhang Wang, Hao Li, Hongten, Idan Kamara, Ismael Juma, Jacklee,
> James Hughes, Jason Gustafson, JK-Wang, jnewhouse, Joel Hamill, John
> Roesler, Jorge Esteban Quilcate Otoya, José Armando García Sancio,
> jparag, Justine Olshan, K8sCat, Kirk True, Konstantine Karantasis,
> Kvicii, Lee Dongjin, Levani Kokhreidze, Liam Clarke-Hutchinson, Lucas
> Bradstreet, Lucas Wang, Luke Chen, Manikumar Reddy, Marco Aurelio
> Lotz, Matthew de Detrich, Matthias J. Sax, Mickael Maison, Mike
> Lothian, Mike Tobola, Milind Mantri, nicolasguyomar, Niket, Niket
> Goel, Nikolay, Okada Haruki, Philip Nee, Prashanth Joseph Babu, Rajani
> Karuturi, Rajini Sivaram, Randall Hauch, Richard Joerger, Rittika
> Adhikari, RivenSun, Rohan, Ron Dagostino, ruanliang, runom, Sanjana
> Kaundinya, Sayantanu Dey, SC, sciclon2, Shawn, sunshujie1990, Thomas
> Cooper, Tim Patterson, Tom Bentley, Tom Kaszuba, Tomonari Yamashita,
> vamossagar12, Viktor Somogyi-Vass, Walker Carlson, Xavier Léauté,
> Xiaobing Fang, Xiaoyue Xue, xjin-Confluent, xuexiaoyue, Yang Yu, Yash
> Mayya, Yu, yun-yun
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
> José


[jira] [Created] (KAFKA-14273) Kafka doesn't start with KRaft on Windows

2022-10-03 Thread Kedar Joshi (Jira)
Kedar Joshi created KAFKA-14273:
---

 Summary: Kafka doesn't start with KRaft on Windows
 Key: KAFKA-14273
 URL: https://issues.apache.org/jira/browse/KAFKA-14273
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.3.1
Reporter: Kedar Joshi


Basic cluster setup doesn't work on Windows 10.

*Steps*
 * Initialize cluster with -

    bin\windows\kafka-storage.bat random-uuid
    bin\windows\kafka-storage.bat format -t %cluster_id% -c .\config\kraft\server.properties

 * Start Kafka with -

    bin\windows\kafka-server-start.bat .\config\kraft\server.properties

*Stacktrace*

Kafka fails to start with the following exception -

D:\LocationGuru\Servers\Kafka-3.3>bin\windows\kafka-server-start.bat .\config\kraft\server.properties
[2022-10-03 23:14:20,089] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2022-10-03 23:14:20,375] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2022-10-03 23:14:20,594] INFO [LogLoader partition=__cluster_metadata-0, dir=D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2022-10-03 23:14:20,594] INFO [LogLoader partition=__cluster_metadata-0, dir=D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2022-10-03 23:14:20,594] INFO [LogLoader partition=__cluster_metadata-0, dir=D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2022-10-03 23:14:20,640] INFO Initialized snapshots with IDs SortedSet() from D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2022-10-03 23:14:20,734] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2022-10-03 23:14:20,900] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.io.UncheckedIOException: Error while writing the Quorum status from the file D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0\quorum-state
        at org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:155)
        at org.apache.kafka.raft.FileBasedStateStore.writeElectionState(FileBasedStateStore.java:128)
        at org.apache.kafka.raft.QuorumState.transitionTo(QuorumState.java:477)
        at org.apache.kafka.raft.QuorumState.initialize(QuorumState.java:212)
        at org.apache.kafka.raft.KafkaRaftClient.initialize(KafkaRaftClient.java:369)
        at kafka.raft.KafkaRaftManager.buildRaftClient(RaftManager.scala:200)
        at kafka.raft.KafkaRaftManager.<init>(RaftManager.scala:127)
        at kafka.server.KafkaRaftServer.<init>(KafkaRaftServer.scala:83)
        at kafka.Kafka$.buildServer(Kafka.scala:79)
        at kafka.Kafka$.main(Kafka.scala:87)
        at kafka.Kafka.main(Kafka.scala)
Caused by: java.nio.file.FileSystemException: D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0\quorum-state.tmp -> D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0\quorum-state: The process cannot access the file because it is being used by another process
        at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
        at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
        at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:403)
        at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:293)
        at java.base/java.nio.file.Files.move(Files.java:1430)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:935)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:918)
        at org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:152)
        ... 10 more
        Suppressed: java.nio.file.FileSystemException: D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0\quorum-state.tmp -> D:\LocationGuru\Servers\Kafka-3.3\tmp\kraft-combined-logs__cluster_metadata-0\quorum-state: The process cannot access the file because it is being used by another process
                at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
                at java.base/sun.nio.fs.
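
For context, the failing `Utils.atomicMoveWithFallback` call implements a common write-temp-then-rename pattern, roughly like this sketch (a simplification, not Kafka's actual code). On Windows the rename fails when another handle still holds the destination file open:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicWrite {
    // Write the new state to a temporary file, then rename it over the
    // target. Prefer an atomic move; fall back to a plain replace if the
    // atomic move fails (e.g. the file system does not support it).
    public static void writeAtomically(Path target, byte[] data) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
        Files.write(tmp, data);
        try {
            Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException e) {
            Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```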

[jira] [Created] (KAFKA-14274) Implement fetching logic

2022-10-03 Thread Philip Nee (Jira)
Philip Nee created KAFKA-14274:
--

 Summary: Implement fetching logic
 Key: KAFKA-14274
 URL: https://issues.apache.org/jira/browse/KAFKA-14274
 Project: Kafka
  Issue Type: Sub-task
  Components: consumer
Reporter: Philip Nee


The fetch request and fetch processing should happen asynchronously.  More
specifically, the background thread sends fetch requests autonomously and
relays the responses back to the polling thread.  The polling thread collects
these fetch responses and returns the ConsumerRecords.
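
The split described above can be sketched with a queue between the two threads (illustrative only; the real consumer hands off completed network fetches, not strings):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncFetcher {
    // Completed fetch responses handed from the background thread to the
    // polling thread.
    private final BlockingQueue<String> completedFetches = new LinkedBlockingQueue<>();

    // Background thread: issue a fetch request and enqueue the response.
    public void backgroundFetchOnce(int fetchId) {
        // In the real consumer this would be a network round trip.
        completedFetches.offer("record-batch-" + fetchId);
    }

    // Polling thread: drain whatever fetches completed so far and return
    // them as the records for this poll() call.
    public List<String> pollRecords() {
        List<String> records = new ArrayList<>();
        completedFetches.drainTo(records);
        return records;
    }
}
```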



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: KIP - Permissions

2022-10-03 Thread Mathieu Amblard
Hi Mickael,

Thank you very much !

Cheers,
Mathieu

On Mon, Oct 3, 2022 at 4:51 PM Mickael Maison  wrote:

> Hi Mathieu,
>
> I've granted you permissions to both Jira and the wiki.
>
> Thanks for your interest in Apache Kafka!
>
> Mickael
>
> On Mon, Oct 3, 2022 at 4:02 PM Mathieu Amblard
>  wrote:
> >
> > Hello,
> >
> > I would like to create a KIP, I am doing the procedure to create one :
> >
> >
> >1. Send an email to the dev mailing list (dev@kafka.apache.org)
> containing
> >your wiki ID and Jira ID requesting permissions to contribute to
> Apache
> >Kafka.
> >
> > Here are the necessary information :
> > Wiki ID : mathieu.amblard
> > Jira ID : mathieu.amblard
> >
> > Thank you in advance,
> > Best Regards
> > Mathieu Amblard
>


Re: [DISCUSS] KIP-793: Sink Connectors: Support topic-mutating SMTs for async connectors (preCommit users)

2022-10-03 Thread Randall Hauch
On Mon, Oct 3, 2022 at 11:45 AM Yash Mayya  wrote:

> Hi Randall,
>
> Thanks for elaborating. I think these are all very good points and I see
> why the overloaded `SinkTask::put` method is a cleaner solution overall.
>
> > public void put(Collection<SinkRecord> records, Map<SinkRecord, TopicPartition> updatedTopicPartitions)
>
> I think this should be
>
> `public void put(Collection<SinkRecord> records, Map<SinkRecord, TopicPartition> originalTopicPartitions)`
>
> instead because the sink records themselves have the updated topic
> partitions (i.e. after all transformations have been applied) and the KIP
> is proposing a way for the tasks to be able to access the original topic
> partition (i.e. before transformations have been applied).
>

Sounds good.


>
> > Of course, if the developer does not need separate methods, they can
> easily have the older `put` method simply delegate to the newer method.
>
> If the developer does not need separate methods (i.e. they don't need to
> use this new addition), they can simply continue implementing just the
> older `put` method right?
>

Correct. We should update the JavaDoc of both methods to make this clear,
and in general how the two methods are used and should be
implemented. That can be part of the PR, and the KIP doesn't need this
wording.

>
> > Finally, this gives us a roadmap for *eventually* deprecating the older
> method, once the Connect runtime versions without this change are old
> enough.
>
> I'm not sure we'd ever want to deprecate the older method. Most common sink
> connector implementations do not do their own offset tracking with
> asynchronous processing and will probably never have a need for the
> additional parameter `Map<SinkRecord, TopicPartition>
> originalTopicPartitions` in the proposed new `put` method. These connectors
> can continue implementing only the existing `SinkTask::put` method which
> will be called by the default implementation of the newer overloaded `put`
> method.
>

+1


>
> > the pre-commit methods use the same `Map<TopicPartition, OffsetAndMetadata> currentOffsets` data structure I'm suggesting be used.
>
> The data structure you're suggesting be used is a `Map<SinkRecord, TopicPartition>` which will map `SinkRecord` objects to the original topic
> partition of the corresponding `ConsumerRecord` right? To clarify, this is
> a new data structure that will need to be managed in the `WorkerSinkTask`.
>

Ah, you're right. Thanks for the correction.

Best regards,
Randall


> Thanks,
> Yash


> On Mon, Oct 3, 2022 at 1:20 AM Randall Hauch  wrote:
>
> > Hi, Yash.
> >
> > I'm not sure I quite understand why it would be "easier" for connector
> > > developers to account for implementing two different overloaded `put`
> > > methods (assuming that they want to use this new feature) versus using
> a
> > > try-catch block around `SinkRecord` access methods?
> >
> >
> > Using a try-catch around an API method that *might* be there is a
> > very unusual thing for most developers. Unfortunately, we've had to
> resort
> > to this atypical approach with Connect in places when there was no good
> > alternative. We seem to be relying upon this pattern because it's easier for us,
> > not because it offers a better experience for Connector developers. IMO,
> if
> > there's a practical alternative that uses normal development practices
> and
> > techniques, then we should use that alternative. IIUC, there is at least
> > one practical alternative for this KIP that would not require developers
> to
> > use the unusual try-catch to handle the case where methods are not found.
> >
> > I also think having two `put` methods is easier when the Connector has to
> > do different things for different Connect runtimes, too. One of those
> > methods is called by newer Connect runtimes with the new behavior, and
> the
> > other method is called by an older Connect runtime. Of course, if the
> > developer does not need separate methods, they can easily have the older
> > `put` method simply delegate to the newer method.
> >
> > Finally, this gives us a roadmap for *eventually* deprecating the older
> > method, once the Connect runtime versions without this change are old
> > enough.
> >
> > I think the advantage of going with the
> > > proposed approach in the KIP is that it wouldn't require extra
> > book-keeping
> > > (the Map<SinkRecord, TopicPartition> in `WorkerSinkTask` in your proposed approach)
> > >
> >
> > The connector does have to do some of this bookkeeping in how they track
> > the topic partition offsets used in the `preCommit`, and the pre-commit
> > methods use the same `Map<TopicPartition, OffsetAndMetadata>
> > currentOffsets`
> > data structure I'm suggesting be used.
> >
> > I hope that helps.
> >
> > Best regards,
> >
> > Randall
> >
> > On Mon, Sep 26, 2022 at 9:38 AM Yash Mayya  wrote:
> >
> > > Hi Randall,
> > >
> > > Thanks for reviewing the KIP!
> > >
> > > > That latter logic can get quite ugly.
> > >
> > > I'm not sure I quite understand why it would be "easier" for connector
> > > developers to account for implementing two different overloaded `put`
> > > metho

Re: [ANNOUNCE] Apache Kafka 3.3.1

2022-10-03 Thread Randall Hauch
Thanks, Jose and David, for running this patch release. And congratulations
to all the contributors!

On Mon, Oct 3, 2022 at 12:12 PM Mickael Maison 
wrote:

> Congratulations to all the contributors!
>
> Thanks José and David for running this release.
>
>
>
> On Mon, Oct 3, 2022 at 6:22 PM José Armando García Sancio
>  wrote:
> >
> > The Apache Kafka community is pleased to announce the release for
> > Apache Kafka 3.3.1.
> >
> > Kafka 3.3.1 includes a number of significant new features. Here is a
> > summary of some notable changes:
> >
> > KIP-833: Mark KRaft as Production Ready
> > KIP-778: KRaft to KRaft upgrades
> > KIP-835: Monitor KRaft Controller Quorum health
> > KIP-794: Strictly Uniform Sticky Partitioner
> > KIP-834: Pause/resume KafkaStreams topologies
> > KIP-618: Exactly-Once support for source connectors
> >
> > All of the changes in this release can be found in the release notes:
> > https://www.apache.org/dist/kafka/3.3.1/RELEASE_NOTES.html
> > https://archive.apache.org/dist/kafka/3.3.0/RELEASE_NOTES.html
> >
> > You can download the source and binary release (Scala 2.12 and 2.13)
> from:
> > https://kafka.apache.org/downloads#3.3.1
> >
> >
> ---
> >
> > Apache Kafka is a distributed streaming platform with four core APIs:
> >
> > ** The Producer API allows an application to publish a stream of
> > records to one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming an input stream from one or more topics and producing an
> > output stream to one or more output topics, effectively transforming
> > the input streams to output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers that connect Kafka topics to existing applications or data
> > systems. For example, a connector to a relational database might
> > capture every change to a table.
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between systems or applications.
> >
> > ** Building real-time streaming applications that transform or react
> > to the streams of data.
> >
> > Apache Kafka is in use at large and small companies worldwide,
> > including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
> > Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
> > Zalando, among others.
> >
> > A big thank you for the following 115 contributors to this release!
> >
> > Akhilesh C, Akhilesh Chaganti, Alan Sheinberg, Aleksandr Sorokoumov,
> > Alex Sorokoumov, Alok Nikhil, Alyssa Huang, Aman Singh, Amir M. Saeid,
> > Anastasia Vela, András Csáki, Andrew Borley, Andrew Dean, andymg3,
> > Aneesh Garg, Artem Livshits, A. Sophie Blee-Goldman, Bill Bejeck,
> > Bounkong Khamphousone, bozhao12, Bruno Cadonna, Chase Thomas, chern,
> > Chris Egerton, Christo Lolov, Christopher L. Shannon, CHUN-HAO TANG,
> > Clara Fang, Clay Johnson, Colin Patrick McCabe, David Arthur, David
> > Jacot, David Mao, Dejan Maric, dengziming, Derek Troy-West, Divij
> > Vaidya, Edoardo Comar, Edwin, Eugene Tolbakov, Federico Valeri,
> > Guozhang Wang, Hao Li, Hongten, Idan Kamara, Ismael Juma, Jacklee,
> > James Hughes, Jason Gustafson, JK-Wang, jnewhouse, Joel Hamill, John
> > Roesler, Jorge Esteban Quilcate Otoya, José Armando García Sancio,
> > jparag, Justine Olshan, K8sCat, Kirk True, Konstantine Karantasis,
> > Kvicii, Lee Dongjin, Levani Kokhreidze, Liam Clarke-Hutchinson, Lucas
> > Bradstreet, Lucas Wang, Luke Chen, Manikumar Reddy, Marco Aurelio
> > Lotz, Matthew de Detrich, Matthias J. Sax, Mickael Maison, Mike
> > Lothian, Mike Tobola, Milind Mantri, nicolasguyomar, Niket, Niket
> > Goel, Nikolay, Okada Haruki, Philip Nee, Prashanth Joseph Babu, Rajani
> > Karuturi, Rajini Sivaram, Randall Hauch, Richard Joerger, Rittika
> > Adhikari, RivenSun, Rohan, Ron Dagostino, ruanliang, runom, Sanjana
> > Kaundinya, Sayantanu Dey, SC, sciclon2, Shawn, sunshujie1990, Thomas
> > Cooper, Tim Patterson, Tom Bentley, Tom Kaszuba, Tomonari Yamashita,
> > vamossagar12, Viktor Somogyi-Vass, Walker Carlson, Xavier Léauté,
> > Xiaobing Fang, Xiaoyue Xue, xjin-Confluent, xuexiaoyue, Yang Yu, Yash
> > Mayya, Yu, yun-yun
> >
> > We welcome your help and feedback. For more information on how to
> > report problems, and to get involved, visit the project website at
> > https://kafka.apache.org/
> >
> > Thank you!
> > José
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1269

2022-10-03 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] Apache Kafka 3.3.1

2022-10-03 Thread Igor Soarez
Thanks Jose and David for running this patch release. Congratulations to all!

I don't see the tag or the usual commit sequence in the 3.3 branch for this
release. I'd expect a `3.3.1` tag and a commit moving the version to
3.3.2-SNAPSHOT. The latest commit in the 3.3 branch still has
`version=3.3.1-SNAPSHOT` in gradle.properties. Sorry if it's a silly question
and maybe I'm missing something, but is this normal, or has something gone
wrong with the process?

Thanks,
--
Igor

On Mon, Oct 3, 2022, at 2:09 PM, Randall Hauch wrote:
> Thanks, Jose and David, for running this patch release. And congratulations
> to all the contributors!
>
> On Mon, Oct 3, 2022 at 12:12 PM Mickael Maison 
> wrote:
>
>> Congratulations to all the contributors!
>>
>> Thanks José and David for running this release.
>>
>>
>>
>> On Mon, Oct 3, 2022 at 6:22 PM José Armando García Sancio
>>  wrote:
>> >
>> > The Apache Kafka community is pleased to announce the release for
>> > Apache Kafka 3.3.1.
>> >
>> > Kafka 3.3.1 includes a number of significant new features. Here is a
>> > summary of some notable changes:
>> >
>> > KIP-833: Mark KRaft as Production Ready
>> > KIP-778: KRaft to KRaft upgrades
>> > KIP-835: Monitor KRaft Controller Quorum health
>> > KIP-794: Strictly Uniform Sticky Partitioner
>> > KIP-834: Pause/resume KafkaStreams topologies
>> > KIP-618: Exactly-Once support for source connectors
>> >
>> > All of the changes in this release can be found in the release notes:
>> > https://www.apache.org/dist/kafka/3.3.1/RELEASE_NOTES.html
>> > https://archive.apache.org/dist/kafka/3.3.0/RELEASE_NOTES.html
>> >
>> > You can download the source and binary release (Scala 2.12 and 2.13)
>> from:
>> > https://kafka.apache.org/downloads#3.3.1

Re: [ANNOUNCE] Apache Kafka 3.3.1

2022-10-03 Thread José Armando García Sancio
On Mon, Oct 3, 2022 at 2:00 PM Igor Soarez  wrote:
>
> Thanks Jose and David for running this patch release. Congratulations to all!
>
> I don't see the tag or the usual commit sequence in the 3.3 branch for this
> release. I'd expect a `3.3.1` tag and a commit moving the version to
> 3.3.2-SNAPSHOT. The latest commit in the 3.3 branch still has
> `version=3.3.1-SNAPSHOT` in gradle.properties. Sorry if it's a silly question
> and maybe I'm missing something, but is this normal, or has something gone
> wrong with the process?

Not a silly question. I haven't finished all of the release steps
after the announcement. Give me a couple of days to get that in
order. Right now the best tag is 3.3.1-rc0.

Thanks!
-- 
-José


[jira] [Created] (KAFKA-14275) KRaft Controllers should crash after failing to apply any metadata record

2022-10-03 Thread Niket Goel (Jira)
Niket Goel created KAFKA-14275:
--

 Summary: KRaft Controllers should crash after failing to apply any 
metadata record 
 Key: KAFKA-14275
 URL: https://issues.apache.org/jira/browse/KAFKA-14275
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.3.1
Reporter: Niket Goel


When replaying records on a standby controller, any error encountered will halt
further processing of that batch. Currently we log an error and allow the
controller to continue normal operation. In contrast, a similar error on the
active controller causes it to halt and exit the JVM. This is inconsistent
behavior, as nothing prevents a standby from eventually becoming the active
controller (even when it has skipped over a record batch). We should halt the
process in the case of a standby controller as well.
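[Not part of the ticket: a hedged sketch of the proposed rule. The names
(`Role`, `shouldHalt`, `replay`) are illustrative, not actual Kafka internals,
and the `return` stands in for halting the JVM via something like `Exit.halt`.]

```java
import java.util.List;

public class ReplayFaultSketch {
    enum Role { ACTIVE, STANDBY }

    /** Proposed rule: a replay failure halts the process regardless of role. */
    static boolean shouldHalt(Role role) {
        // Pre-fix behavior (sketched) would be: return role == Role.ACTIVE;
        return true;
    }

    /** Replays records; returns the number applied before a (simulated) halt. */
    static int replay(Role role, List<String> records) {
        int applied = 0;
        for (String record : records) {
            if (record.startsWith("bad")) { // simulated un-applicable record
                if (shouldHalt(role)) {
                    return applied;          // stand-in for halting the JVM
                }
                continue;                    // old standby behavior: log and skip
            }
            applied++;
        }
        return applied;
    }

    public static void main(String[] args) {
        // With the proposed rule, a standby stops at the bad record like the active:
        System.out.println(replay(Role.STANDBY, List.of("ok1", "bad2", "ok3"))); // prints 1
    }
}
```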



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #96

2022-10-03 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 496880 lines...]
[2022-10-04T00:21:47.673Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient STARTED
[2022-10-04T00:21:47.673Z] 
[2022-10-04T00:21:47.673Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorManyThreadsPerClient PASSED
[2022-10-04T00:21:47.673Z] 
[2022-10-04T00:21:47.673Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient STARTED
[2022-10-04T00:21:48.915Z] 
[2022-10-04T00:21:48.915Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyThreadsPerClient PASSED
[2022-10-04T00:21:48.915Z] 
[2022-10-04T00:21:48.915Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount STARTED
[2022-10-04T00:22:16.312Z] 
[2022-10-04T00:22:16.312Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount PASSED
[2022-10-04T00:22:16.312Z] 
[2022-10-04T00:22:16.312Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2022-10-04T00:22:43.632Z] 
[2022-10-04T00:22:43.632Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-10-04T00:22:43.632Z] 
[2022-10-04T00:22:43.632Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-10-04T00:22:47.095Z] 
[2022-10-04T00:22:47.095Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-10-04T00:22:47.095Z] 
[2022-10-04T00:22:47.095Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2022-10-04T00:23:11.578Z] 
[2022-10-04T00:23:11.578Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-10-04T00:23:11.578Z] 
[2022-10-04T00:23:11.578Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-10-04T00:23:12.800Z] 
[2022-10-04T00:23:12.800Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-10-04T00:23:12.800Z] 
[2022-10-04T00:23:12.800Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-10-04T00:23:14.372Z] 
[2022-10-04T00:23:14.372Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-10-04T00:23:14.372Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-10-04T00:23:14.372Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-10-04T00:23:14.372Z] streams-6: SMOKE-TEST-CLIENT-CLOSED
[2022-10-04T00:23:14.372Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2022-10-04T00:23:14.372Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-10-04T00:23:14.372Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-10-04T00:23:14.372Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2022-10-04T00:23:19.454Z] 
[2022-10-04T00:23:19.454Z] BUILD SUCCESSFUL in 2h 30m 36s
[2022-10-04T00:23:19.454Z] 212 actionable tasks: 115 executed, 97 up-to-date
[2022-10-04T00:23:19.454Z] 
[2022-10-04T00:23:19.454Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_3.3_2/build/reports/profile/profile-2022-10-03-21-52-46.html
[2022-10-04T00:23:19.454Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] junit
[2022-10-04T00:23:20.503Z] Recording test results
[2022-10-04T00:23:30.829Z] [Checks API] No suitable checks publisher found.
[Pipeline] echo
[2022-10-04T00:23:30.831Z] Verify that Kafka Streams archetype compiles
[Pipeline] sh
[2022-10-04T00:23:32.839Z] + ./gradlew streams:publishToMavenLocal 
clients:publishToMavenLocal connect:json:publishToMavenLocal 
connect:api:publishToMavenLocal
[2022-10-04T00:23:33.789Z] To honour the JVM settings for this build a 
single-use Daemon process will be forked. See 
https://docs.gradle.org/7.4.2/userguide/gradle_daemon.html#sec:disabling_the_daemon.
[2022-10-04T00:23:34.741Z] Daemon will be stopped at the end of the build 
[2022-10-04T00:23:43.428Z] 
[2022-10-04T00:23:43.428Z] > Configure project :
[2022-10-04T00:23:43.428Z] Starting build with version 3.3.1 (commit id 
8c98308c) using Gradle 7.4.2, Java 1.8 and Scala 2.13.8
[2022-10-04T00:23:43.428Z] Build properties: maxParallelForks=24, 
maxScalacThreads=8, maxTestRetries=0
[2022-10-04T00:23:49.273Z] 
[2022-10

Re: Requesting permissions to contribute to Apache Kafka

2022-10-03 Thread Luke Chen
Hi Vimal,

You are all set.
Thanks for the interest in Apache Kafka.

Luke

On Tue, Oct 4, 2022 at 12:50 AM Vimal K  wrote:

> wiki id: *vimalinfo10*
> Jira id : *vimalinfo10*
>
> Thanks & Regards
> Vimal Krishnamoorthy
>


Request to add to Kafka dev list

2022-10-03 Thread abhishek yadav
Hi Team,

I want to contribute to the Kafka project. Can you please add me to the dev list?


Thanks,
Abhishek Yadav


Re: Request to add to Kafka dev list

2022-10-03 Thread Luke Chen
hello Abhishek,

What's your JIRA ID here?
Also, the wiki ID here?

Please provide me these info so that I can grant your access.

Thank you.
Luke

On Tue, Oct 4, 2022 at 10:12 AM abhishek yadav  wrote:

> Hi Team,
>
> I want to contribute to kafka project, can you please add me to dev list
> DL.
>
>
> Thanks,
> Abhishek Yadav
>