[GitHub] [kafka-site] jlprat merged pull request #467: MINOR: Add Viktor as a committer

2022-12-21 Thread GitBox


jlprat merged PR #467:
URL: https://github.com/apache/kafka-site/pull/467


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [ANNOUNCE] New committer: Josep Prat

2022-12-21 Thread Tom Bentley
Congratulations!

On Wed, 21 Dec 2022 at 03:05, John Roesler  wrote:

> Congratulations, Josep!
> -John
>
> On Tue, Dec 20, 2022, at 20:02, Luke Chen wrote:
> > Congratulations, Josep!
> >
> > Luke
> >
> > On Wed, Dec 21, 2022 at 6:26 AM Viktor Somogyi-Vass
> >  wrote:
> >
> >> Congrats Josep!
> >>
> >> On Tue, Dec 20, 2022, 21:56 Matthias J. Sax  wrote:
> >>
> >> > Congrats!
> >> >
> >> > On 12/20/22 12:01 PM, Josep Prat wrote:
> >> > > Thank you all!
> >> > >
> >> > > ———
> >> > > Josep Prat
> >> > >
> >> > > Aiven Deutschland GmbH
> >> > >
> >> > > Immanuelkirchstraße 26, 10405 Berlin
> >> > >
> >> > > Amtsgericht Charlottenburg, HRB 209739 B
> >> > >
> >> > > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> >> > >
> >> > > m: +491715557497
> >> > >
> >> > > w: aiven.io
> >> > >
> >> > > e: josep.p...@aiven.io
> >> > >
> >> > > On Tue, Dec 20, 2022, 20:42 Bill Bejeck  wrote:
> >> > >
> >> > >> Congratulations Josep!
> >> > >>
> >> > >> -Bill
> >> > >>
> >> > >> On Tue, Dec 20, 2022 at 1:11 PM Mickael Maison <
> >> > mickael.mai...@gmail.com>
> >> > >> wrote:
> >> > >>
> >> > >>> Congratulations Josep!
> >> > >>>
> >> > >>> On Tue, Dec 20, 2022 at 6:55 PM Bruno Cadonna  >
> >> > >> wrote:
> >> > 
> >> >  Congrats, Josep!
> >> > 
> >> >  Well deserved!
> >> > 
> >> >  Best,
> >> >  Bruno
> >> > 
> >> >  On 20.12.22 18:40, Kirk True wrote:
> >> > > Congrats Josep!
> >> > >
> >> > > On Tue, Dec 20, 2022, at 9:33 AM, Jorge Esteban Quilcate Otoya
> >> wrote:
> >> > >> Congrats Josep!!
> >> > >>
> >> > >> On Tue, 20 Dec 2022, 17:31 Greg Harris,
> >> > >>  >> > 
> >> > >> wrote:
> >> > >>
> >> > >>> Congratulations Josep!
> >> > >>>
> >> > >>> On Tue, Dec 20, 2022 at 9:29 AM Chris Egerton <
> >> > >>> fearthecel...@gmail.com>
> >> > >>> wrote:
> >> > >>>
> >> >  Congrats Josep! Well-earned.
> >> > 
> >> >  On Tue, Dec 20, 2022, 12:26 Jun Rao  >
> >> > >>> wrote:
> >> > 
> >> > > Hi, Everyone,
> >> > >
> >> > > The PMC of Apache Kafka is pleased to announce a new Kafka
> >> > > committer, Josep Prat.
> >> > >
> >> > > Josep has been contributing to Kafka since May 2021. He contributed
> >> > > 20 PRs, including the following 2 KIPs.
> >> > >
> >> > > KIP-773: Differentiate metric latency measured in ms and ns
> >> > > KIP-744: Migrate TaskMetadata and ThreadMetadata to an interface
> >> > > with internal implementation
> >> > >
> >> > > Congratulations, Josep!
> >> > >
> >> > > Thanks,
> >> > >
> >> > > Jun (on behalf of the Apache Kafka PMC)
> >> > >
> >> > 
> >> > >>>
> >> > >>
> >> > >
> >> > >>>
> >> > >>
> >> > >
> >> >
> >>
>
>


[GitHub] [kafka-site] jlprat opened a new pull request, #468: MINOR: Add Josep as a committer

2022-12-21 Thread GitBox


jlprat opened a new pull request, #468:
URL: https://github.com/apache/kafka-site/pull/468

   Add Josep Prat as a committer





[jira] [Resolved] (KAFKA-14532) Correctly handle failed fetch when partitions unassigned

2022-12-21 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14532.
-
Resolution: Fixed

> Correctly handle failed fetch when partitions unassigned
> 
>
> Key: KAFKA-14532
> URL: https://issues.apache.org/jira/browse/KAFKA-14532
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Lucas Brutschy
>Assignee: Lucas Brutschy
>Priority: Blocker
> Fix For: 3.4.0, 3.3.2
>
>
> On master, all our long-running test jobs are running into this exception:
> {code:java}
> java.lang.IllegalStateException: No current assignment for partition stream-soak-test-KSTREAM-OUTERTHIS-86-store-changelog-1
>     at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:370)
>     at org.apache.kafka.clients.consumer.internals.SubscriptionState.clearPreferredReadReplica(SubscriptionState.java:623)
>     at java.util.LinkedHashMap$LinkedKeySet.forEach(LinkedHashMap.java:559)
>     at org.apache.kafka.clients.consumer.internals.Fetcher$1.onFailure(Fetcher.java:349)
>     at org.apache.kafka.clients.consumer.internals.RequestFuture.fireFailure(RequestFuture.java:179)
>     at org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:149)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:613)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:427)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:251)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1307)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1243)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216)
>     at org.apache.kafka.streams.processor.internals.StoreChangelogReader.restore(StoreChangelogReader.java:450)
>     at org.apache.kafka.streams.processor.internals.StreamThread.initializeAndRestorePhase(StreamThread.java:910)
>     at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:773)
>     at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:613)
>     at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:575)
> [2022-12-13 04:01:59,024] ERROR [i-016cf5d2c1889c316-StreamThread-1] stream-client [i-016cf5d2c1889c316] Encountered the following exception during processing and sent shutdown request for the entire application. (org.apache.kafka.streams.KafkaStreams)
> org.apache.kafka.streams.errors.StreamsException: java.lang.IllegalStateException: No current assignment for partition stream-soak-test-KSTREAM-OUTERTHIS-86-store-changelog-1
>     at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:653)
>     at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:575)
> Caused by: java.lang.IllegalStateException: No current assignment for partition stream-soak-test-KSTREAM-OUTERTHIS-86-store-changelog-1
>     at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:370)
>     at org.apache.kafka.clients.consumer.internals.SubscriptionState.clearPreferredReadReplica(SubscriptionState.java:623)
>     at java.util.LinkedHashMap$LinkedKeySet.forEach(LinkedHashMap.java:559)
>     at org.apache.kafka.clients.consumer.internals.Fetcher$1.onFailure(Fetcher.java:349)
>     at org.apache.kafka.clients.consumer.internals.RequestFuture.fireFailure(RequestFuture.java:179)
>     at org.apache.kafka.clients.consumer.internals.RequestFuture.raise(RequestFuture.java:149)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:613)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:427)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:251)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1307)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.j

Re: [DISCUSS] Apache Kafka 3.4.0 release

2022-12-21 Thread David Jacot
Hi Sophie,

I have merged https://issues.apache.org/jira/browse/KAFKA-14532.

Best,
David

On Tue, Dec 20, 2022 at 4:31 PM David Arthur
 wrote:
>
> Hey Sophie,
>
> I found a KRaft blocker for 3.4
> https://issues.apache.org/jira/browse/KAFKA-14531. The fix is committed to
> trunk and is quite small. If you agree, I'll merge the fix to the 3.4
> branch.
>
> Thanks!
> David
>
> On Tue, Dec 20, 2022 at 7:53 AM David Jacot 
> wrote:
>
> > Hi Sophie,
> >
> > We just found a blocker for 3.4.0:
> > https://issues.apache.org/jira/browse/KAFKA-14532. The PR is on the
> > way.
> >
> > Best,
> > David
> >
> > On Sat, Dec 17, 2022 at 1:08 AM Sophie Blee-Goldman
> >  wrote:
> > >
> > > Thanks Jose & Kirk. I agree both those fixes should be included in the
> > 3.4
> > > release
> > >
> > > On Fri, Dec 16, 2022 at 12:30 PM José Armando García Sancio
> > >  wrote:
> > >
> > > > Hi Sophie,
> > > >
> > > > I am interested in including a bug fix for
> > > > https://issues.apache.org/jira/browse/KAFKA-14457 in the 3.4.0
> > > > release. The fix is here: https://github.com/apache/kafka/pull/12994.
> > > >
> > > > I think it is important to include this fix because some of the
> > > > controller metrics are inaccurate without this fix. This could impact
> > > > some users' ability to monitor the cluster.
> > > >
> > > > What do you think?
> > > > --
> > > > -José
> > > >
> >
>
>
> --
> -David


[GitHub] [kafka-site] jlprat merged pull request #468: MINOR: Add Josep as a committer

2022-12-21 Thread GitBox


jlprat merged PR #468:
URL: https://github.com/apache/kafka-site/pull/468





[GitHub] [kafka-site] jlprat commented on pull request #468: MINOR: Add Josep as a committer

2022-12-21 Thread GitBox


jlprat commented on PR #468:
URL: https://github.com/apache/kafka-site/pull/468#issuecomment-1361022550

   Thanks David!





Re: [VOTE] KIP-837 Allow MultiCasting a Result Record.

2022-12-21 Thread Sagar
Hi All,

Just as an update, the changes described here:

```
Changes at a high level are:

1) KeyQueryMetadata enhanced to have a new method called partitions().
2) Lifting the restriction of a single partition for IQ. Now the
restriction holds only for FK Join.
```

are reverted. As things stand, KeyQueryMetadata exposes only the
partition() method, and the restriction to a single partition is added back
for IQ. This has been done based on the points raised by Matthias above.

The KIP has been updated accordingly.

Thanks!
Sagar.
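For readers following along, KIP-837's multicast hook lets a partitioner name a set of target partitions rather than a single one. Below is a minimal standalone sketch of that idea — it deliberately does not use Kafka's actual StreamPartitioner interface; the class name, the "broadcast" key convention, and the method shapes are illustrative assumptions only:

```java
import java.util.Optional;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical stand-in for a KIP-837-style partitioner: partition()
// keeps the classic single-partition contract, while partitions() may
// multicast a record to several partitions.
public class MulticastSketch {

    // Classic behavior: exactly one partition per key.
    static int partition(String key, int numPartitions) {
        return Math.abs(key.hashCode() % numPartitions);
    }

    // Multicast behavior: keys tagged "broadcast" go to every partition;
    // all other keys fall back to the single-partition rule.
    static Optional<Set<Integer>> partitions(String key, int numPartitions) {
        Set<Integer> targets = new TreeSet<>();
        if (key.startsWith("broadcast")) {
            for (int p = 0; p < numPartitions; p++) {
                targets.add(p);
            }
        } else {
            targets.add(partition(key, numPartitions));
        }
        return Optional.of(targets);
    }

    public static void main(String[] args) {
        // A broadcast key fans out to all partitions; a normal key maps to one.
        System.out.println(partitions("broadcast-config", 3));
        System.out.println(partitions("order-42", 3));
    }
}
```

This also illustrates why the restriction above matters: once a record can live in several partitions, a lookup that consults only one partition (as FK-join dispatch does) can miss copies, which is why the single-partition rule is kept there.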

On Sat, Dec 10, 2022 at 12:09 AM Sagar  wrote:

> Hey Matthias,
>
> Actually I had shared the PR link for any potential issues that might have
> gone missing. I guess it didn't come out that way in my response. Apologies
> for that!
>
> I am more than happy to incorporate any feedback/changes or address any
> concerns that are still present around this at this point as well.
>
> And I would keep in mind the feedback to provide more time in such a
> scenario.
>
> Thanks!
> Sagar.
>
> On Fri, Dec 9, 2022 at 11:41 PM Matthias J. Sax  wrote:
>
>> It is what it is.
>>
>> > we did have internal discussions on this
>>
>> We sometimes have the case that a KIP needs adjustment as stuff is
>> discovered during coding. And having a discussion on the PR about it is
>> fine. -- However, before the PR gets merged, the KIP change should be
>> announced to verify that nobody has objections to the change, before we
>> carry forward.
>>
>> It's up to the committer who reviews/merges the PR to ensure that this
>> process is followed IMHO. I hope we can do better next time.
>>
>> (And yes, there was the 3.4 release KIP deadline that might explain it,
>> but it seems important that we give enough time to make "tricky" changes
>> and not rush into stuff IMHO.)
>>
>>
>> -Matthias
>>
>>
>> On 12/8/22 7:04 PM, Sagar wrote:
>> > Thanks Matthias,
>> >
>> > Well, as things stand, we did have internal discussions on this and it
>> > seemed ok to open it up for IQ but not ok to have it
>> > opened up for FK-Join. Also, the PR for this is already
>> > merged and some of these things came up during that. Here's the PR link:
>> > https://github.com/apache/kafka/pull/12803.
>> >
>> > Thanks!
>> > Sagar.
>> >
>> >
>> > On Fri, Dec 9, 2022 at 5:15 AM Matthias J. Sax 
>> wrote:
>> >
>> >> Ah. Missed it as it does not have a nice "code block" similar to
>> >> `StreamPartitioner` changes.
>> >>
>> >> I understand the motivation, but I am wondering if we might head in a
>> >> tricky direction? State stores (at least the built-in ones) and IQ are
>> >> kinda built with the idea of having sharded data, and a multi-cast of
>> >> keys is an anti-pattern?
>> >>
>> >> Maybe it's fine, but I also don't want to open Pandora's Box. Are we
>> >> sure that generalizing the concepts does not cause issues in the
>> future?
>> >>
>> >> Ie, should we claim that the multi-cast feature should be used for
>> >> KStreams only, but not for KTables?
>> >>
>> >> Just want to double check that we are not doing something we regret
>> later.
>> >>
>> >>
>> >> -Matthias
>> >>
>> >>
>> >> On 12/7/22 6:45 PM, Sagar wrote:
>> >>> Hi Mathias,
>> >>>
>> >>> I did save it. The changes are added under Public Interfaces (Pt#2
>> about
>> >>> enhancing KeyQueryMetadata with partitions method) and
>> >>> throwing IllegalArgumentException when StreamPartitioner#partitions
>> >> method
>> >>> returns multiple partitions for just FK-join instead of the earlier
>> >> decided
>> >>> FK-Join and IQ.
>> >>>
> >> >>> The background is that for IQ, if the users have multicast records
> >> >>> to multiple partitions during ingestion but the fetch returns only
> >> >>> a single partition, then it would be wrong. That's why the
> >> >>> restriction was lifted for IQ, and that's the reason
> >> >>> KeyQueryMetadata now has another partitions() method to signify
> >> >>> the same.
>> >>>
>> >>> FK-Join also has a similar case, but while reviewing it was felt that
>> >>> FK-Join on it's own is fairly complicated and we don't need this
>> feature
>> >>> right away so the restriction still exists.
>> >>>
>> >>> Thanks!
>> >>> Sagar.
>> >>>
>> >>>
>> >>> On Wed, Dec 7, 2022 at 9:42 PM Matthias J. Sax 
>> wrote:
>> >>>
>>  I don't see any update on the wiki about it. Did you forget to hit
>> >> "save"?
>> 
>>  Can you also provide some background? I am not sure right now if I
>>  understand the proposed changes?
>> 
>> 
>>  -Matthias
>> 
>>  On 12/6/22 6:36 PM, Sophie Blee-Goldman wrote:
>> > Thanks Sagar, this makes sense to me -- we clearly need additional
>>  changes
>> > to
>> > avoid breaking IQ when using this feature, but I agree with
>> continuing
>> >> to
>> > restrict
>> > FKJ since they wouldn't stop working without it, and would become
>> much
>> > harder
>> > to reason about (than they already are) if we did enable them to use
>> >> it.
>> >
>> > And of

Re: [DISCUSS] Apache Kafka 3.4.0 release

2022-12-21 Thread Luke Chen
Hi Sophie and devs,

KAFKA-14540 was reported (by Michael Marshall, thanks!): we might write
corrupted data to the output stream due to a wrong buffer position set in
DataOutputStreamWritable#writeByteBuffer.
I searched the project, and it looks like nothing uses this method
directly.
But I'd like to make sure I didn't miss anything and that we don't make
things worse after we release it.

Thank you.
Luke
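For context, this class of buffer-position bug usually looks like writing a heap buffer's backing array from index 0 while ignoring its current position and arrayOffset. A self-contained illustration — this is not the actual DataOutputStreamWritable code; the method names are assumptions for the sketch:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class BufferPositionSketch {

    // Buggy pattern: dumps the backing array starting at index 0,
    // ignoring position()/arrayOffset(), so already-consumed or
    // sliced-off bytes leak into the output.
    static byte[] writeIgnoringPosition(ByteBuffer buf) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(buf.array(), 0, buf.remaining());
        return out.toByteArray();
    }

    // Correct pattern: start at arrayOffset() + position() and write
    // exactly remaining() bytes.
    static byte[] writeRespectingPosition(ByteBuffer buf) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(buf.array(), buf.arrayOffset() + buf.position(), buf.remaining());
        return out.toByteArray();
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        buf.position(2); // bytes 3 and 4 are the ones still to be written
        // Buggy version emits [1, 2] (corrupted); correct version emits [3, 4].
        System.out.println(java.util.Arrays.toString(writeIgnoringPosition(buf)));
        System.out.println(java.util.Arrays.toString(writeRespectingPosition(buf)));
    }
}
```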

On Wed, Dec 21, 2022 at 4:36 PM David Jacot 
wrote:

> Hi Sophie,
>
> I have merged https://issues.apache.org/jira/browse/KAFKA-14532.
>
> Best,
> David
>
> On Tue, Dec 20, 2022 at 4:31 PM David Arthur
>  wrote:
> >
> > Hey Sophie,
> >
> > I found a KRaft blocker for 3.4
> > https://issues.apache.org/jira/browse/KAFKA-14531. The fix is committed
> to
> > trunk and is quite small. If you agree, I'll merge the fix to the 3.4
> > branch.
> >
> > Thanks!
> > David
> >
> > On Tue, Dec 20, 2022 at 7:53 AM David Jacot  >
> > wrote:
> >
> > > Hi Sophie,
> > >
> > > We just found a blocker for 3.4.0:
> > > https://issues.apache.org/jira/browse/KAFKA-14532. The PR is on the
> > > way.
> > >
> > > Best,
> > > David
> > >
> > > On Sat, Dec 17, 2022 at 1:08 AM Sophie Blee-Goldman
> > >  wrote:
> > > >
> > > > Thanks Jose & Kirk. I agree both those fixes should be included in
> the
> > > 3.4
> > > > release
> > > >
> > > > On Fri, Dec 16, 2022 at 12:30 PM José Armando García Sancio
> > > >  wrote:
> > > >
> > > > > Hi Sophie,
> > > > >
> > > > > I am interested in including a bug fix for
> > > > > https://issues.apache.org/jira/browse/KAFKA-14457 in the 3.4.0
> > > > > release. The fix is here:
> https://github.com/apache/kafka/pull/12994.
> > > > >
> > > > > I think it is important to include this fix because some of the
> > > > > controller metrics are inaccurate without this fix. This could
> impact
> > > > > some users' ability to monitor the cluster.
> > > > >
> > > > > What do you think?
> > > > > --
> > > > > -José
> > > > >
> > >
> >
> >
> > --
> > -David
>


[jira] [Resolved] (KAFKA-14461) StoreQueryIntegrationTest#shouldQuerySpecificActivePartitionStores logic to check for active partitions seems brittle.

2022-12-21 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-14461.
---
Resolution: Fixed

> StoreQueryIntegrationTest#shouldQuerySpecificActivePartitionStores logic to 
> check for active partitions seems brittle.
> --
>
> Key: KAFKA-14461
> URL: https://issues.apache.org/jira/browse/KAFKA-14461
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Sagar Rao
>Assignee: Sagar Rao
>Priority: Major
>
> {noformat}
> StoreQueryIntegrationTest#shouldQuerySpecificActivePartitionStores {noformat}
> has a logic to figure out active partitions:
>  
>  
> {code:java}
> final boolean kafkaStreams1IsActive = (keyQueryMetadata.activeHost().port() % 
> 2) == 1;{code}
>  
>  
> This is very brittle: when a new test gets added, this check would need to
> be changed to `== 0`. It's a hassle to change it every time a new test is
> added. We should look to improve this.
> Also, this test relies on JUnit 4 annotations, which can be migrated to
> JUnit 5 so that we can use @BeforeAll to set up and @AfterAll to shut down
> the cluster instead of the current way where it's done before/after every
> test.
>  
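A sketch of the sturdier check the ticket hints at: instead of inferring activity from port parity, compare the active host reported by the metadata against each instance's own configured endpoint. The HostInfo class below is a simplified stand-in for Kafka's org.apache.kafka.streams.state.HostInfo, introduced here only for illustration:

```java
import java.util.Objects;

public class ActiveHostCheckSketch {

    // Simplified stand-in for org.apache.kafka.streams.state.HostInfo.
    static final class HostInfo {
        final String host;
        final int port;
        HostInfo(String host, int port) { this.host = host; this.port = port; }
        @Override public boolean equals(Object o) {
            return o instanceof HostInfo
                && ((HostInfo) o).host.equals(host)
                && ((HostInfo) o).port == port;
        }
        @Override public int hashCode() { return Objects.hash(host, port); }
    }

    // Robust check: an instance is active for the key iff the metadata's
    // active host equals that instance's own application.server endpoint,
    // independent of how many tests (and thus ports) exist.
    static boolean isActive(HostInfo activeHostFromMetadata, HostInfo ownEndpoint) {
        return activeHostFromMetadata.equals(ownEndpoint);
    }

    public static void main(String[] args) {
        HostInfo active = new HostInfo("localhost", 8081);
        System.out.println(isActive(active, new HostInfo("localhost", 8081))); // true
        System.out.println(isActive(active, new HostInfo("localhost", 8082))); // false
    }
}
```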



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1456

2022-12-21 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.4 #17

2022-12-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #140

2022-12-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 418851 lines...]
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:854:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:890:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:919:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/KStream.java:939:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:84:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:136:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Produced.java:147:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:101:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/kstream/Repartitioned.java:167:
 warning - Tag @link: reference not found: DefaultPartitioner
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: missing '#': "org.apache.kafka.streams.StreamsBuilder()"
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyConfig.java:58:
 warning - Tag @link: can't find org.apache.kafka.streams.StreamsBuilder() in 
org.apache.kafka.streams.TopologyConfig
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java:38:
 warning - Tag @link: reference not found: ProcessorContext#forward(Object, 
Object) forwards
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/Position.java:44:
 warning - Tag @link: can't find query(Query,
[2022-12-21T12:53:11.261Z]  PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:44:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:36:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:57:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:74:
 warning - Tag @link: can't find query(Query, PositionBound, boolean) in 
org.apache.kafka.streams.processor.StateStore
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:110:
 warning - Tag @link: reference not found: this#getResult()
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/query/QueryResult.java:117:
 warning - Tag @link: reference not found: this#getFailureReason()
[2022-12-21T12:53:11.261Z] 
/home/jenkins/workspace/Kafka_kafka_3.3/streams/src/main/java/org/apache/kafka/streams/quer

Re: [ANNOUNCE] New committer: Josep Prat

2022-12-21 Thread Ismael Juma
Congratulations Josep!

Ismael

On Tue, Dec 20, 2022 at 9:26 AM Jun Rao  wrote:

> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer, Josep
> Prat.
>
> Josep has been contributing to Kafka since May 2021. He contributed 20 PRs
> including the following 2 KIPs.
>
> KIP-773 Differentiate metric latency measured in ms and ns
> KIP-744: Migrate TaskMetadata and ThreadMetadata to an interface with
> internal implementation
>
> Congratulations, Josep!
>
> Thanks,
>
> Jun (on behalf of the Apache Kafka PMC)
>


Re: [ANNOUNCE] New committer: Viktor Somogyi-Vass

2022-12-21 Thread Ismael Juma
Congratulations Viktor!

Ismael

On Wed, Dec 14, 2022 at 3:10 PM Jun Rao  wrote:

> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer Viktor
> Somogyi-Vass.
>
> Viktor has been contributing to Kafka since 2018. He contributed 35 PRs and
> reviewed 49 PRs.
>
> Congratulations, Viktor!
>
> Thanks,
>
> Jun (on behalf of the Apache Kafka PMC)
>


Re: [DISCUSS] Apache Kafka 3.4.0 release

2022-12-21 Thread Ismael Juma
Hi Luke,

Is this a recent change? If not, that lends further credence to the idea
that we don't use this method today.

Ismael

On Wed, Dec 21, 2022 at 2:22 AM Luke Chen  wrote:

> Hi Sophie and devs,
>
> KAFKA-14540  is
> reported
> (from Michael Marshall, thanks!) that we might write corrupted data to the
> output stream due to the wrong buffer position set in
> DataOutputStreamWritable#writeByteBuffer.
> Had a search in the project, it looks like there is no place using this
> method directly.
> But I'd like to make sure I didn't miss anything and make things worse
> after we release it.
>
> Thank you.
> Luke
>
> On Wed, Dec 21, 2022 at 4:36 PM David Jacot 
> wrote:
>
> > Hi Sophie,
> >
> > I have merged https://issues.apache.org/jira/browse/KAFKA-14532.
> >
> > Best,
> > David
> >
> > On Tue, Dec 20, 2022 at 4:31 PM David Arthur
> >  wrote:
> > >
> > > Hey Sophie,
> > >
> > > I found a KRaft blocker for 3.4
> > > https://issues.apache.org/jira/browse/KAFKA-14531. The fix is
> committed
> > to
> > > trunk and is quite small. If you agree, I'll merge the fix to the 3.4
> > > branch.
> > >
> > > Thanks!
> > > David
> > >
> > > On Tue, Dec 20, 2022 at 7:53 AM David Jacot
>  > >
> > > wrote:
> > >
> > > > Hi Sophie,
> > > >
> > > > We just found a blocker for 3.4.0:
> > > > https://issues.apache.org/jira/browse/KAFKA-14532. The PR is on the
> > > > way.
> > > >
> > > > Best,
> > > > David
> > > >
> > > > On Sat, Dec 17, 2022 at 1:08 AM Sophie Blee-Goldman
> > > >  wrote:
> > > > >
> > > > > Thanks Jose & Kirk. I agree both those fixes should be included in
> > the
> > > > 3.4
> > > > > release
> > > > >
> > > > > On Fri, Dec 16, 2022 at 12:30 PM José Armando García Sancio
> > > > >  wrote:
> > > > >
> > > > > > Hi Sophie,
> > > > > >
> > > > > > I am interested in including a bug fix for
> > > > > > https://issues.apache.org/jira/browse/KAFKA-14457 in the 3.4.0
> > > > > > release. The fix is here:
> > https://github.com/apache/kafka/pull/12994.
> > > > > >
> > > > > > I think it is important to include this fix because some of the
> > > > > > controller metrics are inaccurate without this fix. This could
> > impact
> > > > > > some users' ability to monitor the cluster.
> > > > > >
> > > > > > What do you think?
> > > > > > --
> > > > > > -José
> > > > > >
> > > >
> > >
> > >
> > > --
> > > -David
> >
>


[jira] [Resolved] (KAFKA-14479) Move CleanerConfig to storage module

2022-12-21 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-14479.
-
Resolution: Duplicate

Doing it as part of KAFKA-14478 instead.

> Move CleanerConfig to storage module
> 
>
> Key: KAFKA-14479
> URL: https://issues.apache.org/jira/browse/KAFKA-14479
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
>






[jira] [Created] (KAFKA-14541) Profile produce workload for Apache Kafka

2022-12-21 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-14541:


 Summary: Profile produce workload for Apache Kafka
 Key: KAFKA-14541
 URL: https://issues.apache.org/jira/browse/KAFKA-14541
 Project: Kafka
  Issue Type: Improvement
Reporter: Divij Vaidya
Assignee: Divij Vaidya
 Attachments: 
flamegraph-openjdk11nodebug-cpu-withoutobjectserializer.html

I have been profiling Kafka (3.4.0 / trunk right now) for a produce-only 
workload using the [OpenMessaging|https://openmessaging.cloud/docs/benchmarks/] 
benchmarks. The goal is to get a better understanding of the CPU usage profile 
for Kafka and eliminate potential overheads to reduce CPU consumption.
h2. *Setup*

R6i.16xl (64 cores)
OS: Amazon Linux 2

Single broker, One topic, One partition

Plaintext

Prometheus Java agent attached

 
{code:java}
-XX:+PreserveFramePointer
-XX:+UnlockDiagnosticVMOptions
-XX:+DebugNonSafepoints{code}
 

 
{code:java}
queued.max.requests=1
num.network.threads=32
num.io.threads=128
socket.request.max.bytes=104857600{code}
 
h3. Producer setup:
{code:java}
batch.size=9000
buffer.memory=33554432
enable.idempotence=false
linger.ms=0
receive.buffer.bytes=-1
send.buffer.bytes=-1
max.in.flight.requests.per.connection=10{code}
 
h3. Profiler setup:


[async-profiler|https://github.com/jvm-profiling-tools/async-profiler] (this 
profiler plus -XX:+DebugNonSafepoints ensures that profiling doesn't suffer from 
safepoint bias). Note that the flamegraph can be generated in "cpu" mode, "wall" 
mode (wall-clock time), or "cycles" mode (used for a better kernel call stack).
h2. Observations
 # Processor.run > Processor#updateRequestMetrics() is a very expensive call. 
We should revisit whether we want to pre-compute histograms or not. Maybe 
upgrading to the latest Dropwizard will improve this?
 # (Processor > Selector.poll), (Processor > Selector.write) and many other 
places - the cumulative cost of Sensor#recordInternal is high.
 # Processor threads are consuming more CPU than handler threads?!! Perhaps 
because handler threads spend a lot of time waiting for the partition lock in 
UnifiedLog.scala.
 # HandleProduceRequest > RequestSizeInBytes - an unnecessary call to calculate 
size in bytes here. A low-hanging opportunity to improve CPU utilisation for a 
request-heavy workload.
 # UnifiedLog#append > HeapByteBuffer.duplicate() - Why do we duplicate the 
buffer here? Ideally we shouldn't be making copies of buffer during the produce 
workflow. We should be using the same buffer after reading from socket to 
writing in a file.
 # Processor > Selector.select - Why is epoll consuming CPU cycles? It should 
have the thread in a timed_waiting state and hence, shouldn't consume CPU at 
all.
 # In a produce workload, writing to the socket is more CPU-intensive than 
reading from it. This is surprising because reads carry the record data 
(produce records) whereas writes only carry the response, which doesn't contain 
record data.
 # RequestChannel#sendResponse > wakeup - This is the call which wakes up the 
selector by writing to a file descriptor. Why is this so expensive?

I am still analysing the flamegraph (cpu mode attached here). Please feel free 
to comment on any of the observations or add your own observations here.
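Regarding the HeapByteBuffer.duplicate() observation above: duplicate() does not copy the underlying bytes — it allocates a small view object sharing the backing array, with independent position/limit. So the cost seen in the profile is per-call object churn, not a data copy. A minimal, self-contained sketch (not Kafka code) illustrating this:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class DuplicateDemo {
    public static void main(String[] args) {
        ByteBuffer original = ByteBuffer.wrap("record-batch".getBytes(StandardCharsets.UTF_8));
        ByteBuffer view = original.duplicate();

        // The view shares the same backing array: no byte copy happens.
        if (view.array() != original.array()) throw new AssertionError("backing array should be shared");

        // But position/limit are independent: advancing the view leaves the original untouched.
        view.position(7);
        if (original.position() != 0) throw new AssertionError("original position must be unchanged");

        byte[] tail = new byte[view.remaining()];
        view.get(tail);
        System.out.println(new String(tail, StandardCharsets.UTF_8)); // prints "batch"
    }
}
```

This suggests the duplicate() cost here is allocation pressure rather than copying, which changes what an optimisation would look like.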



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Kafka issue

2022-12-21 Thread Amoli Tandon
Hi Team,

I am working on a POC and I am facing issues while producing messages to a
Kafka cluster.

I am getting a 'topic doesn't exist in the metadata after 6ms' error when
trying to produce messages using 500 threads. The topic has 500 partitions.

The code is working fine with 100 threads and 100 partition topic.

Please suggest the resolution steps.

Regards,
Amoli.


[jira] [Created] (KAFKA-14542) Deprecate OffsetFetch/Commit version 0 and remove them in 4.0

2022-12-21 Thread David Jacot (Jira)
David Jacot created KAFKA-14542:
---

 Summary: Deprecate OffsetFetch/Commit version 0 and remove them in 
4.0
 Key: KAFKA-14542
 URL: https://issues.apache.org/jira/browse/KAFKA-14542
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot
Assignee: David Jacot


We should deprecate version 0 of the OffsetFetch/Commit APIs and remove it in 
AK 4.0. Version 0 of those two APIs is used by old clients to write offsets to 
and read offsets from ZK.

We need a small KIP for this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-889 Versioned State Stores

2022-12-21 Thread Victoria Xia
Hi everyone,

We have 3 binding and 1 non-binding vote in favor of this KIP (and no
objections) so KIP-889 is now accepted.

Thanks for voting, and for your excellent comments in the KIP discussion
thread!

Happy holidays,
Victoria

On Tue, Dec 20, 2022 at 12:24 PM Sagar  wrote:

> Hi Victoria,
>
> +1 (non-binding).
>
> Thanks!
> Sagar.
>
> On Tue, Dec 20, 2022 at 1:39 PM Bruno Cadonna  wrote:
>
> > Hi Victoria,
> >
> > Thanks for the KIP!
> >
> > +1 (binding)
> >
> > Best,
> > Bruno
> >
> > On 19.12.22 20:03, Matthias J. Sax wrote:
> > > +1 (binding)
> > >
> > > On 12/15/22 1:27 PM, John Roesler wrote:
> > >> Thanks for the thorough KIP, Victoria!
> > >>
> > >> I'm +1 (binding)
> > >>
> > >> -John
> > >>
> > >> On 2022/12/15 19:56:21 Victoria Xia wrote:
> > >>> Hi all,
> > >>>
> > >>> I'd like to start a vote on KIP-889 for introducing versioned
> key-value
> > >>> state stores to Kafka Streams:
> > >>>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-889%3A+Versioned+State+Stores
> > >>>
> > >>> The discussion thread has been open for a few weeks now and has
> > >>> converged
> > >>> among the current participants.
> > >>>
> > >>> Thanks,
> > >>> Victoria
> > >>>
> >
>


Re: [ANNOUNCE] New committer: Josep Prat

2022-12-21 Thread Yash Mayya
Congratulations Josep!

On Tue, Dec 20, 2022, 22:56 Jun Rao  wrote:

> Hi, Everyone,
>
> The PMC of Apache Kafka is pleased to announce a new Kafka committer Josep
>  Prat.
>
> Josep has been contributing to Kafka since May 2021. He contributed 20 PRs
> including the following 2 KIPs.
>
> KIP-773 Differentiate metric latency measured in ms and ns
> KIP-744: Migrate TaskMetadata and ThreadMetadata to an interface with
> internal implementation
>
> Congratulations, Josep!
>
> Thanks,
>
> Jun (on behalf of the Apache Kafka PMC)
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.3 #141

2022-12-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14543) Move LogOffsetMetadata to storage module

2022-12-21 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-14543:
--

 Summary: Move LogOffsetMetadata to storage module
 Key: KAFKA-14543
 URL: https://issues.apache.org/jira/browse/KAFKA-14543
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.4 #18

2022-12-21 Thread Apache Jenkins Server
See 




Re: [VOTE] 3.3.2 RC0

2022-12-21 Thread Chris Egerton
Hi all,

Thank you to Federico for testing out and voting on the release!

Due to a number of recently-raised-and-fixed issues, we have decided to
abandon this release candidate and publish a new one with some extra fixes
included.

I am cancelling this vote thread and aim to open a new one for RC1 sometime
today or tomorrow.

Cheers,

Chris

On Fri, Dec 16, 2022 at 8:26 AM Federico Valeri 
wrote:

> Hi, I did the following to validate the release:
>
> - Checksums and signatures ok
> - Build from source using Java 17 and Scala 2.13 ok
> - Unit and integration tests ok
> - Quickstart in both ZK and KRaft modes ok
> - Test app with staging Maven artifacts ok
>
> Documentation still has 3.3.1 version references, but I guess this
> will be updated later.
>
> +1 (non binding)
>
> Thanks
> Fede
>
>
> On Fri, Dec 16, 2022 at 11:51 AM jacob bogers  wrote:
> >
> > Hi, I have tried several times to unsub from dev@kafka.apache.org, but
> it
> > just isnt working
> >
> > Can someone help me?
> >
> > cheers
> > Jacob
> >
> > On Thu, Dec 15, 2022 at 5:37 PM Chris Egerton 
> > wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the first candidate for release of Apache Kafka 3.3.2.
> > >
> > > This is a bugfix release with several fixes since the release of
> 3.3.1. A
> > > few of the major issues include:
> > >
> > > * KAFKA-14358 Users should not be able to create a regular topic name
> > > __cluster_metadata
> > > KAFKA-14379 Consumer should refresh preferred read replica on update
> > > metadata
> > > * KAFKA-13586 Prevent exception thrown during connector update from
> > > crashing distributed herder
> > >
> > >
> > > Release notes for the 3.3.2 release:
> > > https://home.apache.org/~cegerton/kafka-3.3.2-rc0/RELEASE_NOTES.html
> > >
> > >
> > >
> > > *** Please download, test and vote by Tuesday, December 20, 10pm UTC
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~cegerton/kafka-3.3.2-rc0/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~cegerton/kafka-3.3.2-rc0/javadoc/
> > >
> > > * Tag to be voted upon (off 3.3 branch) is the 3.3.2 tag:
> > > https://github.com/apache/kafka/releases/tag/3.3.2-rc0
> > >
> > > * Documentation:
> > > https://kafka.apache.org/33/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/33/protocol.html
> > >
> > > The most recent build has had test failures. These all appear to be
> due to
> > > flakiness, but it would be nice if someone more familiar with the
> failed
> > > tests could confirm this. I may update this thread with passing build
> links
> > > if I can get one, or start a new release vote thread if test failures
> must
> > > be addressed beyond re-running builds until they pass.
> > >
> > > Unit/integration tests:
> > >
> https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.3/135/testReport/
> > >
> > > System tests:
> > >
> > >
> http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1670984851--apache--3.3--22af3f29ce/2022-12-13--001./2022-12-13--001./report.html
> > > (initial with three flaky failures)
> > > Follow-up system tests:
> > >
> https://home.apache.org/~cegerton/system_tests/2022-12-14--015/report.html
> > > ,
> > >
> https://home.apache.org/~cegerton/system_tests/2022-12-14--016/report.html
> > > ,
> > >
> > >
> http://confluent-kafka-branch-builder-system-test-results.s3-us-west-2.amazonaws.com/system-test-kafka-branch-builder--1671061000--apache--3.3--69fbaf2457/2022-12-14--001./2022-12-14--001./report.html
> > >
> > > (Note that the exact commit used for some of the system test runs will
> not
> > > precisely match the commit for the release candidate, but that all
> > > differences between those two commits should have no effect on the
> > > relevance or accuracy of the test results.)
> > >
> > > Thanks,
> > >
> > > Chris
> > >
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1457

2022-12-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14544) The "is-future" should be removed from metrics tags after future log becomes current log

2022-12-21 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-14544:
--

 Summary: The "is-future" should be removed from metrics tags after 
future log becomes current log
 Key: KAFKA-14544
 URL: https://issues.apache.org/jira/browse/KAFKA-14544
 Project: Kafka
  Issue Type: Bug
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


We don't remove the "is-future=true" tag from a future log's metrics after the 
future log becomes the "current" log. This causes two potential issues:
 # metrics monitors can't get the Log metrics if they don't also track the 
"is-future=true" tag.
 # all Log metrics of a specific partition get removed if the partition is 
moved to another folder again.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14545) MirrorCheckpointTask throws NullPointerException when group hasn't consumed from some partitions

2022-12-21 Thread Chris Solidum (Jira)
Chris Solidum created KAFKA-14545:
-

 Summary: MirrorCheckpointTask throws NullPointerException when 
group hasn't consumed from some partitions
 Key: KAFKA-14545
 URL: https://issues.apache.org/jira/browse/KAFKA-14545
 Project: Kafka
  Issue Type: Bug
  Components: mirrormaker
Affects Versions: 3.3.0
Reporter: Chris Solidum
Assignee: Chris Solidum


MirrorCheckpointTask appears to throw a NullPointerException when a consumer 
group hasn't consumed from all partitions of a topic. This blocks the syncing 
of consumer group offsets to the target cluster. The stack trace and error 
message are as follows:
{code:java}
WARN Failure polling consumer state for checkpoints. 
(org.apache.kafka.connect.mirror.MirrorCheckpointTask)
at java.base/java.lang.Thread.run(Thread.java:829)Dec 20
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at 
org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:72)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:244)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at 
org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:346)
at 
org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.poll(AbstractWorkerSourceTask.java:452)
at 
org.apache.kafka.connect.mirror.MirrorCheckpointTask.poll(MirrorCheckpointTask.java:142)
at 
org.apache.kafka.connect.mirror.MirrorCheckpointTask.sourceRecordsForGroup(MirrorCheckpointTask.java:160)
at 
org.apache.kafka.connect.mirror.MirrorCheckpointTask.checkpointsForGroup(MirrorCheckpointTask.java:177)
at 
java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
at 
java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
at 
java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
at 
java.base/java.util.HashMap$EntrySpliterator.forEachRemaining(HashMap.java:1764)
at 
java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
at 
java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
at 
org.apache.kafka.connect.mirror.MirrorCheckpointTask.lambda$checkpointsForGroup$2(MirrorCheckpointTask.java:174)
at 
org.apache.kafka.connect.mirror.MirrorCheckpointTask.checkpoint(MirrorCheckpointTask.java:191)
java.lang.NullPointerException
 {code}
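A likely fix shape is to skip partitions for which the group has no committed offset instead of dereferencing a null map entry. The following is a hypothetical, dependency-free sketch (string keys stand in for TopicPartition; this is not the actual MirrorMaker code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CheckpointSketch {
    // groupOffsets: hypothetical stand-in for TopicPartition -> committed offset.
    static List<String> checkpointsForGroup(Map<String, Long> groupOffsets, List<String> partitions) {
        List<String> checkpoints = new ArrayList<>();
        for (String tp : partitions) {
            Long offset = groupOffsets.get(tp);
            if (offset == null) {
                // Group never consumed from this partition; skip it instead of throwing an NPE.
                continue;
            }
            checkpoints.add(tp + "@" + offset);
        }
        return checkpoints;
    }

    public static void main(String[] args) {
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("topic-0", 42L);
        // "topic-1" intentionally has no committed offset for this group.
        List<String> result = checkpointsForGroup(offsets, List.of("topic-0", "topic-1"));
        if (!result.equals(List.of("topic-0@42"))) throw new AssertionError(result);
        System.out.println(result); // prints [topic-0@42]
    }
}
```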



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #142

2022-12-21 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 511628 lines...]
[2022-12-21T21:29:39.150Z] at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
[2022-12-21T21:29:39.150Z] at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
[2022-12-21T21:29:39.150Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
[2022-12-21T21:29:39.150Z] at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
[2022-12-21T21:29:39.150Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
[2022-12-21T21:29:39.151Z] at 
java.util.ArrayList.forEach(ArrayList.java:1259)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
[2022-12-21T21:29:39.151Z] at 
java.util.ArrayList.forEach(ArrayList.java:1259)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
[2022-12-21T21:29:39.151Z] at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
[2022-12-2

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1458

2022-12-21 Thread Apache Jenkins Server
See 




[VOTE] 3.3.2 RC1

2022-12-21 Thread Chris Egerton
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 3.3.2.

This is a bugfix release with several fixes since the release of 3.3.1. A
few of the major issues include:

* KAFKA-14358 Users should not be able to create a regular topic name
__cluster_metadata
KAFKA-14379 Consumer should refresh preferred read replica on update
metadata
* KAFKA-13586 Prevent exception thrown during connector update from
crashing distributed herder


Release notes for the 3.3.2 release:
https://home.apache.org/~cegerton/kafka-3.3.2-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Friday, January 6, 2023, 10pm UTC
(this date is chosen to accommodate the various upcoming holidays that
members of the community will be taking and give everyone enough time to
test out the release candidate, without unduly delaying the release)

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~cegerton/kafka-3.3.2-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~cegerton/kafka-3.3.2-rc1/javadoc/

* Tag to be voted upon (off 3.3 branch) is the 3.3.2 tag:
https://github.com/apache/kafka/releases/tag/3.3.2-rc1

* Documentation:
https://kafka.apache.org/33/documentation.html

* Protocol:
https://kafka.apache.org/33/protocol.html

The most recent build has had test failures. These all appear to be due to
flakiness, but it would be nice if someone more familiar with the failed
tests could confirm this. I may update this thread with passing build links
if I can get one, or start a new release vote thread if test failures must
be addressed beyond re-running builds until they pass.

Unit/integration tests:
https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.3/142/testReport/

José, would it be possible to re-run the system tests for 3.3 on the latest
commit for 3.3 (e3212f2), and share the results on this thread?

Cheers,

Chris


[jira] [Created] (KAFKA-14546) Allow Partitioner to return -1 to indicate default partitioning

2022-12-21 Thread James Olsen (Jira)
James Olsen created KAFKA-14546:
---

 Summary: Allow Partitioner to return -1 to indicate default 
partitioning
 Key: KAFKA-14546
 URL: https://issues.apache.org/jira/browse/KAFKA-14546
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Affects Versions: 3.3.1
Reporter: James Olsen


Prior to KIP-794 it was possible to create a custom Partitioner that could 
delegate to the DefaultPartitioner. DefaultPartitioner has been deprecated, so 
we can now only delegate to BuiltInPartitioner.partitionForKey, which does not 
handle non-keyed messages. Hence there is now no mechanism for a custom 
Partitioner to fall back to default partitioning, e.g. for the non-keyed sticky 
case.

I would like to propose that KafkaProducer.partition(...) not throw 
IllegalArgumentException if the Partitioner returns 
RecordMetadata.UNKNOWN_PARTITION and instead continue with the default 
behaviour.  Maybe with a configuration flag to enable this behaviour so as not 
to break existing expectations?

Why was Partitioner delegation with default fallback useful?
 # A single Producer can be used to write to multiple Topics where each Topic 
may have different partitioning requirements.  The Producer can only have a 
single Partitioner so the Partitioner needs to be able to switch behaviour 
based on the Topic, including the need to fallback to default behaviour if a 
given Topic does not have a custom requirement.
 # Multiple services may need to produce to the same Topic and these services 
may be authored by different teams.  A single custom Partitioner that 
encapsulates all Topic specific partitioning logic can be used by all teams at 
all times for all Topics ensuring that mistakes are not made.
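The proposed behaviour can be illustrated with a simplified, dependency-free sketch. The partition method below is a stand-in for a custom org.apache.kafka.clients.producer.Partitioner implementation (not the real interface), and UNKNOWN_PARTITION mirrors RecordMetadata.UNKNOWN_PARTITION (-1): a topic-aware partitioner returns -1 for topics with no custom rule, signalling the producer to apply its built-in default:

```java
public class DelegatingPartitionerSketch {
    static final int UNKNOWN_PARTITION = -1; // mirrors RecordMetadata.UNKNOWN_PARTITION

    // Stand-in for a custom Partitioner#partition implementation.
    static int partition(String topic, byte[] key, int numPartitions) {
        if (topic.startsWith("orders-")) {
            // Hypothetical custom rule: route this topic family to partition 0.
            return 0;
        }
        // No custom rule for this topic: signal "use default partitioning".
        return UNKNOWN_PARTITION;
    }

    public static void main(String[] args) {
        int p1 = partition("orders-eu", null, 500);
        int p2 = partition("metrics", null, 500);
        if (p1 != 0) throw new AssertionError();
        if (p2 != UNKNOWN_PARTITION) throw new AssertionError();
        System.out.println(p1 + " " + p2); // prints 0 -1
    }
}
```

Today KafkaProducer.partition(...) rejects the -1 return value with IllegalArgumentException; the proposal is to interpret it as "fall back to the default" instead.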



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-858: Handle JBOD broker disk failure in KRaft

2022-12-21 Thread Jun Rao
Hi, Igor,

Thanks for the reply.

11. Yes, your proposal could work. Once the broker receives confirmation of
the metadata change, I guess it needs to briefly block appends to the old
replica, make sure the future log fully catches up and then make the switch?

13 (b). The kafka-storage.sh is only required in KIP-631 for a brand new
KRaft cluster. If the cluster already exists and one just wants to add a
log dir, it seems inconvenient to have to run the kafka-storage.sh tool
again.

Thanks,

Jun


On Thu, Dec 1, 2022 at 10:19 AM Igor Soarez 
wrote:

>
> Hi Jun,
>
> Thank you for reviewing the KIP. Please find my replies to
> your comments below.
>
> 10. Thanks for pointing out this typo; it has been corrected.
>
>
> 11. I agree that the additional delay in switching to the
> future replica is undesirable, however I see a couple of
> issues if we forward the request to the controller
> as you describe:
>
>a) If the controller persists the change to the log directory
>assignment before the future replica has caught up and there
>is a failure in the original log directory then if the broker
>is a leader for the partition there will be no failover
>and the partition will become unavailable. It is not safe to
>call AssignReplicasToDirectories before the replica exists
>in the designated log directory.
>
>b) An existing behavior we'd like to maintain if possible is
>the ability to move partitions between log directories when a
>broker is offline, as it can be very useful to manage storage
>space. ZK brokers will load and accept logs in the new
>location after startup.
>Maintaining this behavior requires that the broker be able to
>override/correct assignments that are seen in the cluster metadata.
>i.e. the broker is the authority on log directory placement in case
>of mismatch.
>If we want to keep this feature and have the controller send log
>directory reassignments, we'll need a way to distinguish between
>mismatch due to offline movement and mismatch due to controller
>triggered reassignment.
>
> To keep the delay low, instead of sending AlterReplicaLogDirs
> within the lock the RPC can be queued elsewhere when the future
> replica first catches up. ReplicaAlterLogDirsThread can keep
> going and not switch the replicas yet.
> Once the broker receives confirmation of the metadata change
> it can then briefly block appends to the old replica and make the switch.
> In the unlikely event that source log directory fails between the moment
> AssignReplicasToDirectories is acknowledged by the controller and
> before the broker is able to make the switch, then the broker
> can issue AssignReplicasToDirectories of that replica back to the offline
> log directory to let the controller know that the replica is actually
> offline.
> What do you think?
>
>
> 12. Indeed, the metadata.log.dir, if explicitly defined to a separate
> directory should not be included in the directory UUID list sent
> to the controller in broker registration and heartbeat requests.
> I have updated the KIP to make this explicit.
>
>
> 13. Thank you for making this suggestion.
> Let's address the different scenarios you enumerated:
>
>  a) When enabling JBOD for an existing KRaft cluster
>
>  In this scenario, the broker finds a single log directory configured
>  in `log.dirs`, with an already existing `meta.properties`, which is
>  simply missing `directory.id ` and `directory.ids`.
>
>  It is safe to have the broker automatically generate the log dir
>  UUID and update the `meta.properties` file. This removes any need
>  to have extra steps in upgrading and enabling of this feature,
>  so it is quite useful.
>
>
>  b) Adding a log dir to an existing JBOD KRaft cluster
>
>  In this scenario, the broker finds a shorter list of `directory.ids`
>  in `meta.properties` than what is configured in `log.dirs`.
>
>  KIP-631 introduced the requirement to run the command kafka-storage.sh
>  to format storage directories.
>
>  Currently, if the broker in KRaft mode cannot find `meta.properties`
>  in each log directory it will fail at startup. KIP-785 proposes
>  removing the need to run the format storage command, but it is
>  still open. If new log directories are being added the storage
>  command must be run anyway. So I don't think there will be any
>  benefit in this case.
>
>
>  c) Removing a log dir from an existing JBOD KRaft cluster
>
>  In this scenario the broker finds a larger list of `directory.ids`
>  in `meta.properties` than what is configured in `log.dirs`.
>
>  The removal of log directories requires an explicit update to
>  the `log.dirs` configuration, so it is also safe to have the
>  broker automatically update `directory.ids` in `meta.properties`
>  to remove the extra UUIDs. It's also useful to drop the requirement
>  to run the storage command after removal of log directories from
>  configuration, as it reduces operational burden.
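The reconciliation described in scenarios (a)-(c) above can be sketched roughly as follows. This is a hypothetical illustration of the idea, not the KIP's implementation (in particular, the real meta.properties schema differs): compare the configured log.dirs against the directory ids recorded in meta.properties, generating fresh UUIDs for new dirs and dropping ids for removed ones:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class DirectoryIdReconciler {
    // dirToId: directory ids already recorded in meta.properties, keyed by log dir path.
    // configuredDirs: the current log.dirs configuration.
    static Map<String, UUID> reconcile(Map<String, UUID> dirToId, List<String> configuredDirs) {
        Map<String, UUID> result = new LinkedHashMap<>();
        for (String dir : configuredDirs) {
            // Scenarios (a)/(b): a configured dir with no recorded id gets a fresh UUID.
            result.put(dir, dirToId.getOrDefault(dir, UUID.randomUUID()));
        }
        // Scenario (c): ids for dirs no longer in log.dirs are simply dropped.
        return result;
    }

    public static void main(String[] args) {
        Map<String, UUID> recorded = new LinkedHashMap<>();
        UUID existing = UUID.randomUUID();
        recorded.put("/data/d1", existing);
        recorded.put("/data/old", UUID.randomUUID()); // removed from config

        Map<String, UUID> out = reconcile(recorded, List.of("/data/d1", "/data/d2"));
        if (!out.get("/data/d1").equals(existing)) throw new AssertionError();
        if (out.get("/data/d2") == null) throw new AssertionError();
        if (out.containsKey("/data/old")) throw new AssertionError();
        System.out.println(out.keySet()); // prints [/data/d1, /data/d2]
    }
}
```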

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1459

2022-12-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 3.4.0 release

2022-12-21 Thread Luke Chen
Hi Ismael,

Good point! This file hasn't been changed since Oct 2020,
so I think this is not a blocker for v3.4.0.

Thank you.
Luke

On Wed, Dec 21, 2022 at 9:49 PM Ismael Juma  wrote:

> Hi Luke,
>
> Is this a recent change? If not, that gives further credence that we don't
> use this method today.
>
> Ismael
>
> On Wed, Dec 21, 2022 at 2:22 AM Luke Chen  wrote:
>
> > Hi Sophie and devs,
> >
> > KAFKA-14540  is
> > reported
> > (from Michael Marshall, thanks!) that we might write corrupted data to
> the
> > output stream due to the wrong buffer position set in
> > DataOutputStreamWritable#writeByteBuffer.
> > Had a search in the project, it looks like there is no place using this
> > method directly.
> > But I'd like to make sure I didn't miss anything and make things worse
> > after we release it.
> >
> > Thank you.
> > Luke
> >
> > On Wed, Dec 21, 2022 at 4:36 PM David Jacot  >
> > wrote:
> >
> > > Hi Sophie,
> > >
> > > I have merged https://issues.apache.org/jira/browse/KAFKA-14532.
> > >
> > > Best,
> > > David
> > >
> > > On Tue, Dec 20, 2022 at 4:31 PM David Arthur
> > >  wrote:
> > > >
> > > > Hey Sophie,
> > > >
> > > > I found a KRaft blocker for 3.4
> > > > https://issues.apache.org/jira/browse/KAFKA-14531. The fix is
> > committed
> > > to
> > > > trunk and is quite small. If you agree, I'll merge the fix to the 3.4
> > > > branch.
> > > >
> > > > Thanks!
> > > > David
> > > >
> > > > On Tue, Dec 20, 2022 at 7:53 AM David Jacot
> >  > > >
> > > > wrote:
> > > >
> > > > > Hi Sophie,
> > > > >
> > > > > We just found a blocker for 3.4.0:
> > > > > https://issues.apache.org/jira/browse/KAFKA-14532. The PR is on
> the
> > > > > way.
> > > > >
> > > > > Best,
> > > > > David
> > > > >
> > > > > On Sat, Dec 17, 2022 at 1:08 AM Sophie Blee-Goldman
> > > > >  wrote:
> > > > > >
> > > > > > Thanks Jose & Kirk. I agree both those fixes should be included
> in
> > > the
> > > > > 3.4
> > > > > > release
> > > > > >
> > > > > > On Fri, Dec 16, 2022 at 12:30 PM José Armando García Sancio
> > > > > >  wrote:
> > > > > >
> > > > > > > Hi Sophie,
> > > > > > >
> > > > > > > I am interested in including a bug fix for
> > > > > > > https://issues.apache.org/jira/browse/KAFKA-14457 in the 3.4.0
> > > > > > > release. The fix is here:
> > > https://github.com/apache/kafka/pull/12994.
> > > > > > >
> > > > > > > I think it is important to include this fix because some of the
> > > > > > > controller metrics are inaccurate without this fix. This could
> > > impact
> > > > > > > some users' ability to monitor the cluster.
> > > > > > >
> > > > > > > What do you think?
> > > > > > > --
> > > > > > > -José
> > > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > -David
> > >
> >
>
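The class of bug referenced above in KAFKA-14540 (writing a ByteBuffer with the wrong position) is easy to reproduce in isolation. A hedged sketch of the correct behaviour — write only the bytes between position and limit, honouring arrayOffset; writeRemaining is a hypothetical helper, not the Kafka method:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class WriteByteBufferDemo {
    // Hypothetical helper: write only the remaining bytes [position, limit) of the buffer.
    static void writeRemaining(OutputStream out, ByteBuffer buf) throws IOException {
        if (buf.hasArray()) {
            // A buggy version might start at array index 0, re-emitting already-consumed bytes.
            out.write(buf.array(), buf.arrayOffset() + buf.position(), buf.remaining());
            buf.position(buf.limit());
        } else {
            byte[] tmp = new byte[buf.remaining()];
            buf.get(tmp);
            out.write(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap("HEADERpayload".getBytes(StandardCharsets.UTF_8));
        buf.position(6); // pretend a 6-byte header has already been consumed
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeRemaining(out, buf);
        if (!out.toString(StandardCharsets.UTF_8).equals("payload")) throw new AssertionError(out);
        System.out.println(out.toString(StandardCharsets.UTF_8)); // prints "payload"
    }
}
```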


Re: Kafka issue

2022-12-21 Thread Luke Chen
Hi Amoli,

> The code is working fine with 100 threads and 100 partition topic.
It's good to know you have a workable solution.

For 500 threads and 500 partitions, could you first make sure the topic was
created successfully?
If the topic cannot be created, no records will be sent even if you have only
1 thread.
As long as the topic can be created successfully, 500 threads shouldn't be a
problem, provided your machine is powerful enough.

Thank you.
Luke


On Wed, Dec 21, 2022 at 11:12 PM Amoli Tandon  wrote:

> Hi Team,
>
> I am working on a POC and i am facing issues while producing messages to
> Kafka cluster.
>
> I am getting 'topic doesn't exist in the metadata after 6ms' error when
> trying to produce the message using 500 threads. Topic has a partition of
> 500.
>
> The code is working fine with 100 threads and 100 partition topic.
>
> Please suggest the resolution steps.
>
> Regards,
> Amoli.
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1460

2022-12-21 Thread Apache Jenkins Server
See 




Re: [VOTE] 3.3.2 RC1

2022-12-21 Thread Yash Mayya
Hi Chris,

I did the following release validations -

- Verified the MD5 / SHA-1 / SHA-512 checksums and the PGP signatures
- Built from source using Java 8 and Scala 2.13
- Ran all the unit tests successfully
- Ran all the integration tests successfully (couple of flaky failures that
passed on a rerun - `TopicCommandIntegrationTest.
testDeleteInternalTopic(String).quorum=kraft` and
`SaslScramSslEndToEndAuthorizationTest.
testNoConsumeWithoutDescribeAclViaSubscribe()`)
- Quickstart for Kafka and Kafka Connect with both ZooKeeper and KRaft

I'm +1 (non-binding) assuming that the system test results look good.

Thanks,
Yash

On Thu, Dec 22, 2022 at 3:52 AM Chris Egerton 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 3.3.2.
>
> This is a bugfix release with several fixes since the release of 3.3.1. A
> few of the major issues include:
>
> * KAFKA-14358 Users should not be able to create a regular topic name
> __cluster_metadata
> KAFKA-14379 Consumer should refresh preferred read replica on update
> metadata
> * KAFKA-13586 Prevent exception thrown during connector update from
> crashing distributed herder
>
>
> Release notes for the 3.3.2 release:
> https://home.apache.org/~cegerton/kafka-3.3.2-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, January 6, 2023, 10pm UTC
> (this date is chosen to accommodate the various upcoming holidays that
> members of the community will be taking and give everyone enough time to
> test out the release candidate, without unduly delaying the release)
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~cegerton/kafka-3.3.2-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~cegerton/kafka-3.3.2-rc1/javadoc/
>
> * Tag to be voted upon (off 3.3 branch) is the 3.3.2 tag:
> https://github.com/apache/kafka/releases/tag/3.3.2-rc1
>
> * Documentation:
> https://kafka.apache.org/33/documentation.html
>
> * Protocol:
> https://kafka.apache.org/33/protocol.html
>
> The most recent build has had test failures. These all appear to be due to
> flakiness, but it would be nice if someone more familiar with the failed
> tests could confirm this. I may update this thread with passing build links
> if I can get one, or start a new release vote thread if test failures must
> be addressed beyond re-running builds until they pass.
>
> Unit/integration tests:
> https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.3/142/testReport/
>
> José, would it be possible to re-run the system tests for 3.3 on the latest
> commit for 3.3 (e3212f2), and share the results on this thread?
>
> Cheers,
>
> Chris
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1461

2022-12-21 Thread Apache Jenkins Server
See