[jira] [Created] (KAFKA-13535) Workaround for mitigating CVE-2021-44228 in Kafka

2021-12-10 Thread Akansh Shandilya (Jira)
Akansh Shandilya created KAFKA-13535:


 Summary: Workaround for mitigating CVE-2021-44228 in Kafka 
 Key: KAFKA-13535
 URL: https://issues.apache.org/jira/browse/KAFKA-13535
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.8.1
Reporter: Akansh Shandilya


Kafka v2.8.1 uses log4j v1.x. Please review the following information:

 

Is Kafka v2.8.1 impacted by CVE-2021-44228?

If yes, is there any workaround/recommendation available for Kafka v2.8.1 to 
mitigate CVE-2021-44228?
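
For context: log4j 1.x does not contain the JNDI lookup feature that
CVE-2021-44228 exploits, so the lookup-based attack does not apply to the
log4j version Kafka ships. The closest risk in log4j 1.x is the JMSAppender,
which is only exploitable if it has been explicitly configured. A commonly
circulated hardening step is to remove that class from the jar. This is a
sketch, assuming the stock jar in Kafka's libs directory (adjust the path and
version to your installation):

    # Strip JMSAppender from the log4j 1.x jar (requires Info-ZIP's zip)
    zip -q -d /opt/kafka/libs/log4j-1.2.17.jar org/apache/log4j/net/JMSAppender.class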



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #577

2021-12-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 506467 lines...]
[2021-12-11T03:40:48.741Z] [INFO] Changes detected - recompiling the module!
[2021-12-11T03:40:48.741Z] [INFO] Compiling 3 source files to /home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart/test-streams-archetype/streams.examples/target/classes
[2021-12-11T03:40:48.765Z] 
[2021-12-11T03:40:48.765Z] PlaintextConsumerTest > testFetchHonoursFetchSizeIfLargeRecordNotFirst() PASSED
[2021-12-11T03:40:48.765Z] 
[2021-12-11T03:40:48.765Z] PlaintextConsumerTest > testSeek() STARTED
[2021-12-11T03:40:49.697Z] [INFO] 

[2021-12-11T03:40:49.697Z] [INFO] BUILD SUCCESS
[2021-12-11T03:40:49.697Z] [INFO] 

[2021-12-11T03:40:49.697Z] [INFO] Total time:  1.859 s
[2021-12-11T03:40:49.697Z] [INFO] Finished at: 2021-12-11T03:40:49Z
[2021-12-11T03:40:49.697Z] [INFO] 

[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2021-12-11T03:40:55.796Z] 
[2021-12-11T03:40:55.796Z] PlaintextConsumerTest > testSeek() PASSED
[2021-12-11T03:40:55.796Z] 
[2021-12-11T03:40:55.796Z] PlaintextConsumerTest > testConsumingWithNullGroupId() STARTED
[2021-12-11T03:41:04.243Z] 
[2021-12-11T03:41:04.243Z] PlaintextConsumerTest > testConsumingWithNullGroupId() PASSED
[2021-12-11T03:41:04.243Z] 
[2021-12-11T03:41:04.243Z] PlaintextConsumerTest > testPositionAndCommit() STARTED
[2021-12-11T03:41:07.831Z] 
[2021-12-11T03:41:07.831Z] PlaintextConsumerTest > testPositionAndCommit() PASSED
[2021-12-11T03:41:07.831Z] 
[2021-12-11T03:41:07.831Z] PlaintextConsumerTest > testFetchRecordLargerThanMaxPartitionFetchBytes() STARTED
[2021-12-11T03:41:12.459Z] 
[2021-12-11T03:41:12.459Z] PlaintextConsumerTest > testFetchRecordLargerThanMaxPartitionFetchBytes() PASSED
[2021-12-11T03:41:12.459Z] 
[2021-12-11T03:41:12.459Z] PlaintextConsumerTest > testUnsubscribeTopic() STARTED
[2021-12-11T03:41:17.084Z] 
[2021-12-11T03:41:17.084Z] PlaintextConsumerTest > testUnsubscribeTopic() PASSED
[2021-12-11T03:41:17.084Z] 
[2021-12-11T03:41:17.084Z] PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose() STARTED
[2021-12-11T03:41:27.117Z] 
[2021-12-11T03:41:27.117Z] PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose() PASSED
[2021-12-11T03:41:27.117Z] 
[2021-12-11T03:41:27.117Z] PlaintextConsumerTest > testMultiConsumerStickyAssignor() STARTED
[2021-12-11T03:41:39.011Z] 
[2021-12-11T03:41:39.011Z] PlaintextConsumerTest > testMultiConsumerStickyAssignor() PASSED
[2021-12-11T03:41:39.011Z] 
[2021-12-11T03:41:39.011Z] PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes() STARTED
[2021-12-11T03:41:42.637Z] 
[2021-12-11T03:41:42.637Z] PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes() PASSED
[2021-12-11T03:41:42.637Z] 
[2021-12-11T03:41:42.637Z] PlaintextConsumerTest > testAutoCommitOnClose() STARTED
[2021-12-11T03:41:49.724Z] 
[2021-12-11T03:41:49.724Z] PlaintextConsumerTest > testAutoCommitOnClose() PASSED
[2021-12-11T03:41:49.724Z] 
[2021-12-11T03:41:49.724Z] PlaintextConsumerTest > testListTopics() STARTED
[2021-12-11T03:41:53.326Z] 
[2021-12-11T03:41:53.326Z] PlaintextConsumerTest > testListTopics() PASSED
[2021-12-11T03:41:53.326Z] 
[2021-12-11T03:41:53.326Z] PlaintextConsumerTest > testExpandingTopicSubscriptions() STARTED
[2021-12-11T03:41:57.954Z] 
[2021-12-11T03:41:57.954Z] PlaintextConsumerTest > testExpandingTopicSubscriptions() PASSED
[2021-12-11T03:41:57.954Z] 
[2021-12-11T03:41:57.954Z] PlaintextConsumerTest > testMultiConsumerDefaultAssignor() STARTED
[2021-12-11T03:42:09.799Z] 
[2021-12-11T03:42:09.799Z] PlaintextConsumerTest > testMultiConsumerDefaultAssignor() PASSED
[2021-12-11T03:42:09.799Z] 
[2021-12-11T03:42:09.799Z] PlaintextConsumerTest > testInterceptors() STARTED
[2021-12-11T03:42:15.564Z] 
[2021-12-11T03:42:15.564Z] PlaintextConsumerTest > testInterceptors() PASSED
[2021-12-11T03:42:15.564Z] 
[2021-12-11T03:42:15.564Z] PlaintextConsumerTest > testConsumingWithEmptyGroupId() STARTED
[2021-12-11T03:42:19.323Z] 
[2021-12-11T03:42:19.323Z] PlaintextConsumerTest > testConsumingWithEmptyGroupId() PASSED
[2021-12-11T03:42:19.323Z] 
[2021-12-11T03:42:19.323Z] PlaintextConsumerTest > testPatternUnsubscription() STARTED
[2021-12-11T03:42:26.354Z] 
[2021-12-11T03:42:26.354Z] PlaintextConsumerTest > testPatternUnsubscription() PASSED
[2021-12-11T03:42:26.354Z] 
[2021-12-11T03:42:26.354Z] PlaintextConsumerTest > testG

Re: [VOTE] KIP-778 KRaft upgrades

2021-12-10 Thread deng ziming
Hi, David

Looking forward to this feature.

+1 (non-binding)

Thanks!

Ziming Deng

> On Dec 11, 2021, at 4:49 AM, David Arthur  wrote:
> 
> Hey everyone, I'd like to start a vote for KIP-778 which adds support for
> KRaft to KRaft upgrades.
> 
> Notably in this KIP is the first use case of KIP-584 feature flags. As
> such, there are some addendums to KIP-584 included.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-778%3A+KRaft+Upgrades
> 
> Thanks!
> David



Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #576

2021-12-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-782: Expandable batch size in producer

2021-12-10 Thread Lucas Bradstreet
Hi Jun,

One difference compared to increasing the default batch size is that users
may actually prefer smaller batches but it makes much less sense to
accumulate many small batches if a batch is already sending.

For example, imagine a user that prefers 16KB batches with a 5ms linger.
Everything is functioning normally and 16KB batches are being sent. Then
there's a 500ms blip for that broker. Do we want to continue to accumulate
16KB batches, each of which requires a round trip, or would we prefer to
accumulate larger batches while sending is blocked?

I'm not hugely against increasing the default batch.size in general, but
batch.max.size does seem to have some nice properties.
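
For concreteness, the knobs under discussion would be set roughly like this.
This is a sketch: batch.max.size is the config proposed in KIP-782, not an
existing producer setting, and the values are just the ones from the example
above:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class BatchSettingsSketch {
        public static Properties props() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
            // Soft limit: a batch becomes eligible to send at 16KB.
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16 * 1024);
            // Proposed hard limit: while sending is blocked (e.g. a broker
            // blip), a batch may keep growing to 256KB instead of splitting
            // into many 16KB batches that each cost a round trip.
            props.put("batch.max.size", 256 * 1024);
            return props;
        }
    }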

Thanks,

Lucas

On Fri, Dec 10, 2021 at 9:42 AM Jun Rao  wrote:

> Hi, Artem, Luke,
>
> Thanks for the reply.
>
> 11. If we get rid of batch.max.size and increase the default batch.size,
> it's true the behavior is slightly different than before. However, does
> that difference matter to most of our users? In your example, if a user
> sets linger.ms to 100ms and thinks 256KB is good for throughput, does it
> matter to deliver any batch smaller than 256KB before 100ms? I also find it
> a bit hard to explain to our users these 3 different settings related to
> batch size.
>
> Thanks,
>
> Jun
>
> On Thu, Dec 9, 2021 at 5:47 AM Luke Chen  wrote:
>
> > Hi Jun,
> >
> > 11. In addition to Artem's comment, I think the reason to have an additional
> > "batch.max.size" is to give users more flexibility.
> > For example:
> > With linger.ms=100ms, batch.size=16KB, now, we have 20KB of data coming to
> > a partition within 50ms. Now, the sender is ready to pick up the batch to send.
> > In the current design, we send 16KB of data to the broker, and keep the
> > remaining 4KB in the producer, to keep accumulating data.
> > But after this KIP, the user can send the whole 20KB of data together. That is,
> > users can decide if they want to accumulate more data before the sender is
> > ready, and send it together, to get higher throughput. The
> > "batch.size=16KB" in the proposal is more like a soft limit (and
> > "batch.max.size" is like a hard limit), or it's like a switch to enable the
> > batch to become ready. Before the sender is ready, we can still accumulate
> > more data and wrap it together to send to the broker.
> >
> > Users can increase "batch.size" to 20KB to achieve the same goal in the
> > current design, of course. But you can imagine, if the data within 100ms is
> > just 18KB, then the batch of data will wait for the full 100ms to pass before
> > being sent out. This "batch.max.size" config will allow more flexibility in
> > user config.
> >
> > Does that make sense?
> >
> > Thank you.
> > Luke
> >
> > On Thu, Dec 9, 2021 at 7:53 AM Artem Livshits
> >  wrote:
> >
> > > Hi Jun,
> > >
> > > 11. That was my initial thinking as well, but in a discussion some people
> > > pointed out the change of behavior in some scenarios.  E.g. if someone for
> > > some reason really wants batches to be at least 16KB and sets large
> > > linger.ms, and most of the time the batches are filled quickly enough and
> > > they observe a certain latency.  Then they upgrade their client with a
> > > default size 256KB and the latency increases.  This could be seen as a
> > > regression.  It could be fixed by just reducing linger.ms to specify the
> > > expected latency, but still could be seen as a disruption by some users.
> > > The other reason to have 2 sizes is to avoid allocating large buffers
> > > upfront.
> > >
> > > -Artem
> > >
> > > On Wed, Dec 8, 2021 at 3:07 PM Jun Rao 
> wrote:
> > >
> > > > Hi, Artem,
> > > >
> > > > Thanks for the reply.
> > > >
> > > > 11. Got it. To me, batch.size is really used for throughput and not for
> > > > latency guarantees. There is no guarantee when 16KB will be accumulated.
> > > > So, if users want any latency guarantee, they will need to specify
> > > > linger.ms accordingly.
> > > > Then, batch.size can just be used to tune for throughput.
> > > >
> > > > 20. Could we also describe the unit of compression? Is
> > > > it batch.initial.size, batch.size or batch.max.size?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Wed, Dec 8, 2021 at 9:58 AM Artem Livshits
> > > >  wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > 10. My understanding is that MemoryRecords would under the covers be
> > > > > allocated in chunks, so logically it still would be one MemoryRecords
> > > > > object, it's just instead of allocating one large chunk upfront,
> > > > > smaller chunks are allocated as needed to grow the batch and linked
> > > > > into a list.
> > > > >
> > > > > 11. The reason for 2 sizes is to avoid change of behavior when
> > > > > triggering batch send with large linger.ms.  Currently, a batch send
> > > > > is triggered once the batch reaches 16KB by default; if we just raise
> > > > > the default to 256KB, then the batch send will be delayed.  Using a
> > > > > separate value would

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #33

2021-12-10 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 500227 lines...]
[2021-12-10T23:46:54.079Z] 
[2021-12-10T23:46:54.079Z] GetOffsetShellTest > testTopicPartitionsNotFoundForNonMatchingTopicPartitionPattern() PASSED
[2021-12-10T23:46:54.079Z] 
[2021-12-10T23:46:54.079Z] GetOffsetShellTest > testTopicPartitionsNotFoundForExcludedInternalTopic() STARTED
[2021-12-10T23:46:59.941Z] 
[2021-12-10T23:46:59.941Z] GetOffsetShellTest > testTopicPartitionsNotFoundForExcludedInternalTopic() PASSED
[2021-12-10T23:46:59.941Z] 
[2021-12-10T23:46:59.941Z] DeleteTopicTest > testDeleteTopicWithCleaner() STARTED
[2021-12-10T23:47:44.277Z] 
[2021-12-10T23:47:44.277Z] DeleteTopicTest > testDeleteTopicWithCleaner() PASSED
[2021-12-10T23:47:44.277Z] 
[2021-12-10T23:47:44.277Z] DeleteTopicTest > testResumeDeleteTopicOnControllerFailover() STARTED
[2021-12-10T23:47:46.965Z] 
[2021-12-10T23:47:46.965Z] DeleteTopicTest > testResumeDeleteTopicOnControllerFailover() PASSED
[2021-12-10T23:47:46.965Z] 
[2021-12-10T23:47:46.965Z] DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower() STARTED
[2021-12-10T23:47:54.106Z] 
[2021-12-10T23:47:54.106Z] DeleteTopicTest > testResumeDeleteTopicWithRecoveredFollower() PASSED
[2021-12-10T23:47:54.106Z] 
[2021-12-10T23:47:54.106Z] DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted() STARTED
[2021-12-10T23:47:58.811Z] 
[2021-12-10T23:47:58.811Z] DeleteTopicTest > testDeleteTopicAlreadyMarkedAsDeleted() PASSED
[2021-12-10T23:47:58.811Z] 
[2021-12-10T23:47:58.811Z] DeleteTopicTest > testIncreasePartitionCountDuringDeleteTopic() STARTED
[2021-12-10T23:48:08.989Z] 
[2021-12-10T23:48:08.989Z] DeleteTopicTest > testIncreasePartitionCountDuringDeleteTopic() PASSED
[2021-12-10T23:48:08.989Z] 
[2021-12-10T23:48:08.989Z] DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic() STARTED
[2021-12-10T23:48:14.851Z] 
[2021-12-10T23:48:14.851Z] DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic() PASSED
[2021-12-10T23:48:14.851Z] 
[2021-12-10T23:48:14.851Z] DeleteTopicTest > testDeleteNonExistingTopic() STARTED
[2021-12-10T23:48:18.506Z] 
[2021-12-10T23:48:18.506Z] DeleteTopicTest > testDeleteNonExistingTopic() PASSED
[2021-12-10T23:48:18.506Z] 
[2021-12-10T23:48:18.506Z] DeleteTopicTest > testRecreateTopicAfterDeletion() STARTED
[2021-12-10T23:48:22.159Z] 
[2021-12-10T23:48:22.159Z] DeleteTopicTest > testRecreateTopicAfterDeletion() PASSED
[2021-12-10T23:48:22.159Z] 
[2021-12-10T23:48:22.159Z] DeleteTopicTest > testDisableDeleteTopic() STARTED
[2021-12-10T23:48:24.847Z] 
[2021-12-10T23:48:24.847Z] DeleteTopicTest > testDisableDeleteTopic() PASSED
[2021-12-10T23:48:24.847Z] 
[2021-12-10T23:48:24.847Z] DeleteTopicTest > testAddPartitionDuringDeleteTopic() STARTED
[2021-12-10T23:48:28.502Z] 
[2021-12-10T23:48:28.502Z] DeleteTopicTest > testAddPartitionDuringDeleteTopic() PASSED
[2021-12-10T23:48:28.502Z] 
[2021-12-10T23:48:28.502Z] DeleteTopicTest > testDeleteTopicWithAllAliveReplicas() STARTED
[2021-12-10T23:48:32.156Z] 
[2021-12-10T23:48:32.156Z] DeleteTopicTest > testDeleteTopicWithAllAliveReplicas() PASSED
[2021-12-10T23:48:32.156Z] 
[2021-12-10T23:48:32.156Z] DeleteTopicTest > testDeleteTopicDuringAddPartition() STARTED
[2021-12-10T23:48:39.295Z] 
[2021-12-10T23:48:39.295Z] DeleteTopicTest > testDeleteTopicDuringAddPartition() PASSED
[2021-12-10T23:48:39.295Z] 
[2021-12-10T23:48:39.295Z] DeleteTopicTest > testDeletingPartiallyDeletedTopic() STARTED
[2021-12-10T23:48:53.551Z] 
[2021-12-10T23:48:53.551Z] DeleteTopicTest > testDeletingPartiallyDeletedTopic() PASSED
[2021-12-10T23:48:53.551Z] 
[2021-12-10T23:48:53.551Z] AddPartitionsTest > testReplicaPlacementAllServers() STARTED
[2021-12-10T23:48:58.256Z] 
[2021-12-10T23:48:58.256Z] AddPartitionsTest > testReplicaPlacementAllServers() PASSED
[2021-12-10T23:48:58.256Z] 
[2021-12-10T23:48:58.256Z] AddPartitionsTest > testMissingPartition0() STARTED
[2021-12-10T23:49:04.121Z] 
[2021-12-10T23:49:04.121Z] AddPartitionsTest > testMissingPartition0() PASSED
[2021-12-10T23:49:04.121Z] 
[2021-12-10T23:49:04.121Z] AddPartitionsTest > testWrongReplicaCount() STARTED
[2021-12-10T23:49:07.776Z] 
[2021-12-10T23:49:07.776Z] AddPartitionsTest > testWrongReplicaCount() PASSED
[2021-12-10T23:49:07.776Z] 
[2021-12-10T23:49:07.776Z] AddPartitionsTest > testReplicaPlacementPartialServers() STARTED
[2021-12-10T23:49:12.483Z] 
[2021-12-10T23:49:12.483Z] AddPartitionsTest > testReplicaPlacementPartialServers() PASSED
[2021-12-10T23:49:12.483Z] 
[2021-12-10T23:49:12.483Z] AddPartitionsTest > testIncrementPartitions() STARTED
[2021-12-10T23:49:17.189Z] 
[2021-12-10T23:49:17.189Z] AddPartitionsTest > testIncrementPartitions() PASSED
[2021-12-10T23:49:17.189Z] 
[2021-12-10T23:49:17.189Z] AddPartitionsTest > testManualAssignmentOfReplicas() STARTED
[2021-12-10T23:49:23.050Z] 
[2021-12-10T23:49:23.050Z] AddPartitionsTest > testManualAssig

Re: [VOTE] KIP-806: Add session and window query over KV-store in IQv2

2021-12-10 Thread Guozhang Wang
Hi Patrick,

I made a pass on the KIP and have a few comments below:

1. The `WindowRangeQuery` has a private constructor while the
`WindowKeyQuery` does not; is that intentional?

2. The `WindowRangeQuery` seems not to allow ranging over both window and
key, but only over the window with a fixed key. In that case it seems pretty
much the same as the other query (ignoring the constructor): since we know we
would only have a single `key` value in the returned iterator, returning in
the form of `WindowStoreIterator` is also fine, as the key is fixed and there
is no need to maintain it in the returned iterator. I'm wondering whether we
should actually support ranging over keys as well in `WindowRangeQuery`?

3. The KIP title mentions both session and window stores, but the APIs only
involve window stores. Moreover, the return type `WindowStoreIterator` is
only for window stores, not session stores, so I feel we would still have
some differences in the session window query interface?
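
For reference, issuing the proposed window query through IQv2 might look
roughly like the sketch below. The store name and type parameters are made
up, and the factory-method name is taken from the KIP text, so it may differ
in the final API:

    import java.time.Instant;
    import org.apache.kafka.streams.query.StateQueryRequest;
    import org.apache.kafka.streams.query.StateQueryResult;
    import org.apache.kafka.streams.query.WindowKeyQuery;
    import org.apache.kafka.streams.state.WindowStoreIterator;

    // Assumes `streams` is a running KafkaStreams instance with a window
    // store named "my-window-store". Fetches all windows for one key whose
    // window start falls in [from, to].
    Instant from = Instant.parse("2021-12-10T00:00:00Z");
    Instant to = Instant.parse("2021-12-11T00:00:00Z");
    WindowKeyQuery<String, Long> query =
        WindowKeyQuery.withKeyAndWindowStartRange("key-1", from, to);
    StateQueryResult<WindowStoreIterator<Long>> result =
        streams.query(StateQueryRequest.inStore("my-window-store").withQuery(query));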


Guozhang

On Fri, Dec 10, 2021 at 1:32 PM Patrick Stuedi 
wrote:

> Hi everyone,
>
> I would like to start the vote for KIP-806 that adds window and session
> query support to query KV-stores using IQv2.
>
> The KIP can be found here:
> https://cwiki.apache.org/confluence/x/LJaqCw
>
> Skipping the discussion phase as this KIP is following the same pattern as
> the previously submitted KIP-805 (KIP:
> https://cwiki.apache.org/confluence/x/85OqCw, Discussion:
> https://tinyurl.com/msp5mcb2). Of course concerns/comments can still be
> brought up in this thread.
>
> -Patrick
>


-- 
-- Guozhang


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #575

2021-12-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Please review the 3.1.0 blog post

2021-12-10 Thread David Jacot
I have put the wrong link in my previous email. Here is the public one:

https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache7

Best,
David

On Fri, Dec 10, 2021 at 10:35 PM David Jacot  wrote:
>
> Hello all,
>
> I have prepared a draft of the release announcement post for the
> Apache Kafka 3.1.0 release:
>
> https://blogs.apache.org/roller-ui/authoring/preview/kafka/?previewEntry=what-s-new-in-apache7
>
> I would greatly appreciate your reviews if you have a moment.
>
> Thanks,
> David


[DISCUSS] Please review the 3.1.0 blog post

2021-12-10 Thread David Jacot
Hello all,

I have prepared a draft of the release announcement post for the
Apache Kafka 3.1.0 release:

https://blogs.apache.org/roller-ui/authoring/preview/kafka/?previewEntry=what-s-new-in-apache7

I would greatly appreciate your reviews if you have a moment.

Thanks,
David


[VOTE] KIP-806: Add session and window query over KV-store in IQv2

2021-12-10 Thread Patrick Stuedi
Hi everyone,

I would like to start the vote for KIP-806 that adds window and session
query support to query KV-stores using IQv2.

The KIP can be found here:
https://cwiki.apache.org/confluence/x/LJaqCw

Skipping the discussion phase as this KIP is following the same pattern as
the previously submitted KIP-805 (KIP:
https://cwiki.apache.org/confluence/x/85OqCw, Discussion:
https://tinyurl.com/msp5mcb2). Of course concerns/comments can still be
brought up in this thread.

-Patrick


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #574

2021-12-10 Thread Apache Jenkins Server
See 




[VOTE] KIP-778 KRaft upgrades

2021-12-10 Thread David Arthur
Hey everyone, I'd like to start a vote for KIP-778 which adds support for
KRaft to KRaft upgrades.

Notably in this KIP is the first use case of KIP-584 feature flags. As
such, there are some addendums to KIP-584 included.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-778%3A+KRaft+Upgrades

Thanks!
David


[jira] [Created] (KAFKA-13534) Upgrade Log4j to 2.15.0 - CVE-2021-44228

2021-12-10 Thread Sai Kiran Vudutala (Jira)
Sai Kiran Vudutala created KAFKA-13534:
--

 Summary: Upgrade Log4j to 2.15.0 - CVE-2021-44228
 Key: KAFKA-13534
 URL: https://issues.apache.org/jira/browse/KAFKA-13534
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 3.0.0, 2.8.0, 2.7.0
Reporter: Sai Kiran Vudutala


Log4j has an RCE vulnerability, see 
[https://www.lunasec.io/docs/blog/log4j-zero-day/]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [VOTE] KIP-805: Add range and scan query support in IQ v2

2021-12-10 Thread Guozhang Wang
Thanks Vicky,

I'd suggest we change the KIP title to "add range and scan query over
kv-store in IQv2" just for clarification; otherwise I'm +1.

Guozhang

On Wed, Dec 8, 2021 at 4:18 PM Matthias J. Sax  wrote:

> Thanks for the KIP.
>
> +1 (binding)
>
> On 12/5/21 7:03 PM, Luke Chen wrote:
> > Hi Vasiliki,
> >
> > Thanks for the KIP!
> > It makes sense to have the range and scan query in IQv2, as in IQv1.
> >
> > +1 (non-binding)
> >
> > Thank you.
> > Luke
> >
> > On Thu, Dec 2, 2021 at 5:41 AM John Roesler  wrote:
> >
> >> Thanks for the KIP, Vicky!
> >>
> >> I’m +1 (binding)
> >>
> >> -John
> >>
> >> On Tue, Nov 30, 2021, at 14:51, Vasiliki Papavasileiou wrote:
> >>> Hello everyone,
> >>>
> >>> I would like to start a vote for KIP-805 that adds range and scan
> >> KeyValue
> >>> queries in IQ2.
> >>>
> >>> The KIP can be found here:
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-805%3A+Add+range+and+scan+query+support+in+IQ+v2
> >>>
> >>> Cheers!
> >>> Vicky
> >>
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-805: Add range and scan query support in IQ v2

2021-12-10 Thread Vasiliki Papavasileiou
Hey Guozhang,

Thank you for looking into the KIP.

Windowed stores are addressed in another KIP.

FYI, I made a change to the KIP and removed the `RawRangeQuery`. After some
more thought, it doesn't provide us with much benefit (we save on one cast),
which doesn't justify the cost of adding an extra query to the public
interface.
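
With the raw query removed, the typed query is the entire public surface. A
rough sketch of the intended usage, based on the KIP as written (store name
and types are made up; exact names could still change):

    import org.apache.kafka.streams.query.RangeQuery;
    import org.apache.kafka.streams.query.StateQueryRequest;
    import org.apache.kafka.streams.query.StateQueryResult;
    import org.apache.kafka.streams.state.KeyValueIterator;

    // Assumes `streams` is a running KafkaStreams instance with a kv-store
    // named "my-kv-store". The MeteredStore layer applies the serdes, so the
    // caller only ever sees typed keys and values.
    RangeQuery<String, Long> query = RangeQuery.withRange("a", "f");
    StateQueryResult<KeyValueIterator<String, Long>> result =
        streams.query(StateQueryRequest.inStore("my-kv-store").withQuery(query));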

On Thu, Dec 9, 2021 at 9:50 PM Guozhang Wang  wrote:

> Hi Vicky,
>
> Thanks for the KIP. Just for a bit more clarification, could you elaborate
> with an example for windowed stores, beyond a key-value store (I think the
> `myStore` is for kv-store right?). Otherwise LGTM.
>
>
> Guozhang
>
> On Wed, Dec 8, 2021 at 4:18 PM Matthias J. Sax  wrote:
>
> > Thanks for the details!
> >
> > I also chatted with John about it, and he filed
> > https://issues.apache.org/jira/browse/KAFKA-13526 to incorporate some
> > feedback as follow up work.
> >
> > IMHO, the hard coded query translation is not ideal and should be
> > plugable. But for a v1 of IQv2 (pun intended) the hardcoded translation
> > seems to be good enough.
> >
> >
> > -Matthias
> >
> > On 12/8/21 9:37 AM, Vasiliki Papavasileiou wrote:
> > > Hey Matthias,
> > >
> > > Thank you for looking into the KIP!
> > >
> > > We are adding raw versions of typed queries, like `RawRangeQuery`,
> > > because it simplifies internal query handling, since the bytes stores
> > > only support raw queries. A typed RangeQuery is handled by the
> > > `MeteredStore`, which creates a new `RawRangeQuery` to pass down to the
> > > wrapped stores. When it gets the result back, it deserializes the data
> > > and creates a typed query result to return to the user. So, the store's
> > > key serde is used to translate typed `RangeQueries` into
> > > `RawRangeQueries`, and its value serde is used to translate the result
> > > of the query on the way back. This allows users to provide their own
> > > queries even if the MeteredStore has no knowledge of them.
> > >
> > > I hope this answers your question. Let me know if you have any other
> > > questions.
> > >
> > > Best,
> > > Vicky
> > >
> > >
> > > On Tue, Dec 7, 2021 at 12:46 AM Matthias J. Sax 
> > wrote:
> > >
> > >> Thanks for the KIP. Overall, make sense.
> > >>
> > >> One question: What is the purpose to `RawRangeQuery`? Seems not very
> > >> user friendly.
> > >>
> > >> -Matthias
> > >>
> > >>
> > >> On 11/30/21 12:48 PM, Vasiliki Papavasileiou wrote:
> > >>> Thank you John! Yes, that was a typo from copying and I fixed it.
> > >>>
> > >>> Since there have been no more comments, I will start the vote.
> > >>>
> > >>> Best,
> > >>> Vicky
> > >>>
> > >>> On Tue, Nov 30, 2021 at 5:22 AM John Roesler 
> > >> wrote:
> > >>>
> >  Thanks for the KIP, Vicky!
> > 
> >  This KIP will help fill in the parity gap between IQ and
> >  IQv2.
> > 
> >  One thing I noticed, which looks like just a typo is that
> >  the value type of the proposed RangeQuery should probably be
> >  KeyValueIterator, right?
> > 
> >  Otherwise, it looks good to me!
> > 
> >  Thanks,
> >  -John
> > 
> >  On Mon, 2021-11-29 at 12:20 +, Vasiliki Papavasileiou
> >  wrote:
> > > Hello everyone,
> > >
> > > I would like to start the discussion for KIP-805: Add range and
> scan
> >  query
> > > support in IQ v2
> > >
> > > The KIP can be found here:
> > >
> > 
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-805%3A+Add+range+and+scan+query+support+in+IQ+v2
> > >
> > > Any suggestions are more than welcome.
> > >
> > > Many thanks,
> > > Vicky
> > 
> > 
> > >>>
> > >>
> > >
> >
>
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-782: Expandable batch size in producer

2021-12-10 Thread Jun Rao
Hi, Artem, Luke,

Thanks for the reply.

11. If we get rid of batch.max.size and increase the default batch.size,
it's true the behavior is slightly different than before. However, does
that difference matter to most of our users? In your example, if a user
sets linger.ms to 100ms and thinks 256KB is good for throughput, does it
matter to deliver any batch smaller than 256KB before 100ms? I also find it
a bit hard to explain to our users these 3 different settings related to
batch size.

Thanks,

Jun

On Thu, Dec 9, 2021 at 5:47 AM Luke Chen  wrote:

> Hi Jun,
>
> 11. In addition to Artem's comment, I think the reason to have an additional
> "batch.max.size" is to give users more flexibility.
> For example:
> With linger.ms=100ms, batch.size=16KB, now, we have 20KB of data coming to
> a partition within 50ms. Now, the sender is ready to pick up the batch to send.
> In the current design, we send 16KB of data to the broker, and keep the
> remaining 4KB in the producer, to keep accumulating data.
> But after this KIP, the user can send the whole 20KB of data together. That is,
> users can decide if they want to accumulate more data before the sender is
> ready, and send it together, to get higher throughput. The
> "batch.size=16KB" in the proposal is more like a soft limit (and
> "batch.max.size" is like a hard limit), or it's like a switch to enable the
> batch to become ready. Before the sender is ready, we can still accumulate
> more data and wrap it together to send to the broker.
>
> Users can increase "batch.size" to 20KB to achieve the same goal in the
> current design, of course. But you can imagine, if the data within 100ms is
> just 18KB, then the batch of data will wait for the full 100ms to pass before
> being sent out. This "batch.max.size" config will allow more flexibility in
> user config.
>
> Does that make sense?
>
> Thank you.
> Luke
>
> On Thu, Dec 9, 2021 at 7:53 AM Artem Livshits
>  wrote:
>
> > Hi Jun,
> >
> > 11. That was my initial thinking as well, but in a discussion some people
> > pointed out the change of behavior in some scenarios.  E.g. if someone for
> > some reason really wants batches to be at least 16KB and sets large
> > linger.ms, and most of the time the batches are filled quickly enough and
> > they observe a certain latency.  Then they upgrade their client with a
> > default size 256KB and the latency increases.  This could be seen as a
> > regression.  It could be fixed by just reducing linger.ms to specify the
> > expected latency, but still could be seen as a disruption by some users.
> > The other reason to have 2 sizes is to avoid allocating large buffers
> > upfront.
> >
> > -Artem
> >
> > On Wed, Dec 8, 2021 at 3:07 PM Jun Rao  wrote:
> >
> > > Hi, Artem,
> > >
> > > Thanks for the reply.
> > >
> > > 11. Got it. To me, batch.size is really used for throughput and not for
> > > latency guarantees. There is no guarantee when 16KB will be
> accumulated.
> > > So, if users want any latency guarantee, they will need to specify
> > > linger.ms accordingly.
> > > Then, batch.size can just be used to tune for throughput.
> > >
> > > 20. Could we also describe the unit of compression? Is
> > > it batch.initial.size, batch.size or batch.max.size?
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Dec 8, 2021 at 9:58 AM Artem Livshits
> > >  wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > 10. My understanding is that MemoryRecords would under the covers be
> > > > allocated in chunks, so logically it still would be one MemoryRecords
> > > > object, it's just instead of allocating one large chunk upfront, smaller
> > > > chunks are allocated as needed to grow the batch and linked into a list.
> > > >
> > > > 11. The reason for 2 sizes is to avoid change of behavior when
> > > > triggering batch send with large linger.ms.  Currently, a batch send is
> > > > triggered once the batch reaches 16KB by default; if we just raise the
> > > > default to 256KB, then the batch send will be delayed.  Using a separate
> > > > value would allow keeping the current behavior when sending the batch
> > > > out, but provide better throughput with high latency + high bandwidth
> > > > channels.
> > > >
> > > > -Artem
> > > >
> > > > On Tue, Dec 7, 2021 at 5:29 PM Jun Rao 
> > wrote:
> > > >
> > > > > Hi, Luke,
> > > > >
> > > > > Thanks for the KIP.  A few comments below.
> > > > >
> > > > > 10. Accumulating small batches could improve memory usage. Will that
> > > > > introduce extra copying when generating a produce request? Currently, a
> > > > > produce request takes a single MemoryRecords per partition.
> > > > > 11. Do we need to introduce a new config batch.max.size? Could we just
> > > > > increase the default of batch.size? We probably need to have KIP-794
> > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-794%3A+Strictly+Uniform+Sticky+Partitioner>
> > > > > resolved before increasing the default batch size since

[jira] [Created] (KAFKA-13533) Task resources are not cleaned up if task initialization fails

2021-12-10 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-13533:
-

 Summary: Task resources are not cleaned up if task initialization 
fails
 Key: KAFKA-13533
 URL: https://issues.apache.org/jira/browse/KAFKA-13533
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Chris Egerton
Assignee: Chris Egerton


Many closeable resources are instantiated in the [Worker::buildWorkerTask 
method|https://github.com/apache/kafka/blob/d5eb3c10ecd394015336868f948348c62c0e4e77/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L571-L636],
 including but not limited to:
 * Producers (for source tasks, and for sink tasks configured to write to a DLQ 
topic)
 * Consumers (for sink tasks)
 * Admin clients (for source tasks with topic creation enabled)
 * Transformation and Predicate instances

These resources are all cleaned up correctly if the worker is able to 
successfully instantiate the task (see 
[WorkerSourceTask::close|https://github.com/apache/kafka/blob/d5eb3c10ecd394015336868f948348c62c0e4e77/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L163-L184]
 and 
[WorkerSinkTask::close|https://github.com/apache/kafka/blob/d5eb3c10ecd394015336868f948348c62c0e4e77/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L166-L179]
 for how most task resources are cleaned up at the end of their lifetime).

However, if anything fails during {{Worker::buildWorkerTask}}, no attempt is 
made to clean up any resources that were allocated before that failure.
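
A likely shape for a fix is the usual build-or-close pattern: track what has
been allocated so far and close it if anything later in the construction path
throws. A minimal sketch (the buildXxx helpers are hypothetical stand-ins,
not the actual Worker internals):

    // Sketch only: close partially-allocated resources if task construction fails.
    Producer<byte[], byte[]> producer = null;
    Consumer<byte[], byte[]> consumer = null;
    Admin admin = null;
    try {
        producer = buildProducer(config);   // hypothetical helper
        consumer = buildConsumer(config);   // hypothetical helper
        admin = buildAdmin(config);         // hypothetical helper
        return buildTask(producer, consumer, admin);
    } catch (Throwable t) {
        // Utils.closeQuietly (org.apache.kafka.common.utils.Utils) is
        // null-tolerant, so resources that were never allocated are skipped.
        Utils.closeQuietly(producer, "task producer");
        Utils.closeQuietly(consumer, "task consumer");
        Utils.closeQuietly(admin, "task admin client");
        throw t;
    }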



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] KIP-779: Allow Source Tasks to Handle Producer Exceptions

2021-12-10 Thread Knowles Atchison Jr
Good morning,

Any additional feedback and/or review on the PR for this change would be
greatly appreciated:

https://github.com/apache/kafka/pull/11382

Knowles

On Tue, Nov 16, 2021 at 4:02 PM Knowles Atchison Jr 
wrote:

> Thank you all for the feedback, the KIP has been updated.
>
> On Tue, Nov 16, 2021 at 10:46 AM Arjun Satish 
> wrote:
>
>> One more nit: the RetryWithToleranceOperator class is not a public
>> interface, so we do not have to call out the changes to it in the Public
>> Interfaces section.
>>
>>
>> On Tue, Nov 16, 2021 at 10:42 AM Arjun Satish 
>> wrote:
>>
>> > Chris' point about upgrades is valid. An existing configuration will now
>> > have additional behavior. We should clearly call this out in the kip,
>> and
>> > whenever they are prepared -- the release notes. It's a bit crummy when
>> > upgrading, but I do think it's better than introducing a new
>> configuration
>> > in the long term.
>> >
>> > On Mon, Nov 15, 2021 at 2:52 PM Knowles Atchison Jr <
>> katchiso...@gmail.com>
>> > wrote:
>> >
>> >> Chris,
>> >>
>> >> Thank you for the feedback. I can certainly update the KIP to state that
>> >> once exactly-once support is in place, the task would be failed even if
>> >> errors.tolerance were set to all. Programmatically it would still require
>> >> PRs to be merged to build on top of. I also liked my original
>> >> implementation of the hook as it gave the connector writers the most
>> >> flexibility in handling producer errors. I changed the original
>> >> implementation as the progression/changes still supported my use case and
>> >> I thought it would move this process along faster.
>> >>
>> >> Knowles
>> >>
>> >> On Thu, Nov 11, 2021 at 3:43 PM Chris Egerton
>> >> wrote:
>> >>
>> >> > Hi Knowles,
>> >> >
>> >> > I think this looks good for the most part but I'd still like to see an
>> >> > explicit mention in the KIP (and proposed doc/Javadoc changes) that
>> >> > states that, with exactly-once support enabled, producer exceptions that
>> >> > result from failures related to exactly-once support (including but not
>> >> > limited to ProducerFencedException instances
>> >> > (https://kafka.apache.org/30/javadoc/org/apache/kafka/common/errors/ProducerFencedException.html))
>> >> > will not be skipped even with "errors.tolerance" set to "all", and will
>> >> > instead unconditionally cause the task to fail. Your proposal that
>> >> > "WorkerSourceTask could check the configuration before handing off the
>> >> > records and exception to this function" seems great as long as we update
>> >> > "handing off the records and exceptions to this function" to the
>> >> > newly-proposed behavior of "logging the exception and continuing to poll
>> >> > the task for data".
>> >> >
>> >> > I'm also a little bit wary of updating the existing "errors.tolerance"
>> >> > configuration to have new behavior that users can't opt out of without
>> >> > also opting out of the current behavior they get with "errors.tolerance"
>> >> > set to "all", but I think I've found a decent argument in favor of it.
>> >> > One thought that came to mind is whether this use case was originally
>> >> > considered when KIP-298 was being discussed. However, it appears that
>> >> > KAFKA-8586 (https://issues.apache.org/jira/browse/KAFKA-8586), the fix
>> >> > for which caused tasks to fail on non-retriable, asynchronous producer
>> >> > exceptions instead of logging them and continuing, was discovered over a
>> >> > full year after the changes for KIP-298
>> >> > (https://github.com/apache/kafka/pull/5065) were merged. I suspect that
>> >> > the current proposal aligns nicely with the original design intent of
>> >> > KIP-298, and that if KAFKA-8586 were discovered before or during
>> >> > discussion for KIP-298, non-retriable, asynchronous producer exceptions
>> >> > would have been included in its scope. With that in mind, although it
>> >> > may cause issues for some niche use cases, I think that this is a valid
>> >> > change and would be worth the tradeoff of potentially complicating life
>> >> > for a small number of users. I'd be interested in Arjun's thoughts on
>> >> > this though (as he designed and implemented KIP-298), and if this
>> >> > analysis is agreeable, we may want to document that information in the
>> >> > KIP as well to strengthen our case for not introducing a new
>> >> > configuration property and instead making this behavior tied to the
>> >> > existing "errors.tolerance" property with no opt-out besides using a new
>> >> > value for that property.
>> >> >
>> >> > My last thought is that, although it may be outside the scope of this
>> >> > KIP, I believe your original proposal of giving tasks a hook to handle
>> >> > downstream exceptions is actually quite valid. The DLQ feature for sink
>> >> > connectors is an

[jira] [Created] (KAFKA-13532) Flaky test KafkaStreamsTest.testInitializesAndDestroysMetricsReporters

2021-12-10 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-13532:
---

 Summary: Flaky test 
KafkaStreamsTest.testInitializesAndDestroysMetricsReporters
 Key: KAFKA-13532
 URL: https://issues.apache.org/jira/browse/KAFKA-13532
 Project: Kafka
  Issue Type: Test
  Components: streams, unit tests
Reporter: Matthias J. Sax


{quote}java.lang.AssertionError: expected:<26> but was:<27>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at org.apache.kafka.streams.KafkaStreamsTest.testInitializesAndDestroysMetricsReporters(KafkaStreamsTest.java:556){quote}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13531) Flaky test NamedTopologyIntegrationTest

2021-12-10 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-13531:
---

 Summary: Flaky test NamedTopologyIntegrationTest
 Key: KAFKA-13531
 URL: https://issues.apache.org/jira/browse/KAFKA-13531
 Project: Kafka
  Issue Type: Test
  Components: streams, unit tests
Reporter: Matthias J. Sax


org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveNamedTopologyToRunningApplicationWithMultipleNodesAndResetsOffsets
{quote}java.lang.AssertionError: Did not receive all 3 records from topic output-stream-2 within 6 ms, currently accumulated data is []
Expected: is a value equal to or greater than <3>
but: <0> was less than <3>
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:648)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:368)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:336)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:644)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:617)
at org.apache.kafka.streams.integration.NamedTopologyIntegrationTest.shouldRemoveNamedTopologyToRunningApplicationWithMultipleNodesAndResetsOffsets(NamedTopologyIntegrationTest.java:439){quote}
STDERR
{quote}java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.GroupSubscribedToTopicException: Deleting offsets of a topic is forbidden while the consumer group is actively subscribed to it.
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at org.apache.kafka.streams.processor.internals.namedtopology.KafkaStreamsNamedTopologyWrapper.lambda$removeNamedTopology$3(KafkaStreamsNamedTopologyWrapper.java:213)
at org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at org.apache.kafka.common.internals.KafkaCompletableFuture.kafkaComplete(KafkaCompletableFuture.java:39)
at org.apache.kafka.common.internals.KafkaFutureImpl.complete(KafkaFutureImpl.java:122)
at org.apache.kafka.streams.processor.internals.TopologyMetadata.maybeNotifyTopologyVersionWaiters(TopologyMetadata.java:154)
at org.apache.kafka.streams.processor.internals.StreamThread.checkForTopologyUpdates(StreamThread.java:916)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:598)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:575)
Caused by: org.apache.kafka.common.errors.GroupSubscribedToTopicException: Deleting offsets of a topic is forbidden while the consumer group is actively subscribed to it.
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.GroupSubscribedToTopicException: Deleting offsets of a topic is forbidden while the consumer group is actively subscribed to it.
at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at org.apache.kafka.streams.processor.internals.namedtopology.KafkaStreamsNamedTopologyWrapper.lambda$removeNamedTopology$3(KafkaStreamsNamedTopologyWrapper.java:213)
at org.apache.kafka.common.internals.KafkaFutureImpl.lambda$whenComplete$2(KafkaFutureImpl.java:107)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at org.apache.kafka.common.internals.KafkaCompletableFuture.kafkaComplete(KafkaCompletableFuture.java:39)
at org.apache.kafka.common.internals.KafkaFutureImpl.complete(KafkaFutureImpl.java:122)
at org.apache.kafka.streams.processor.internals.TopologyMetadata.maybeNotifyTopologyVersionWaiters(TopologyMetadata.java:154)
at org.apache.kafka.streams.processor.internals.StreamThread.checkForTopologyUpdates(StreamThread.java:916)
at org.apache.kafka.strea

[jira] [Resolved] (KAFKA-12933) Flaky test ReassignPartitionsIntegrationTest.testReassignmentWithAlterIsrDisabled

2021-12-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12933.
-
Resolution: Fixed

> Flaky test 
> ReassignPartitionsIntegrationTest.testReassignmentWithAlterIsrDisabled
> -
>
> Key: KAFKA-12933
> URL: https://issues.apache.org/jira/browse/KAFKA-12933
> Project: Kafka
>  Issue Type: Test
>  Components: admin
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> {quote}org.opentest4j.AssertionFailedError: expected:  but was: 
> at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
> at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
> at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:35)
> at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:162)
> at kafka.admin.ReassignPartitionsIntegrationTest.executeAndVerifyReassignment(ReassignPartitionsIntegrationTest.scala:130)
> at kafka.admin.ReassignPartitionsIntegrationTest.testReassignmentWithAlterIsrDisabled(ReassignPartitionsIntegrationTest.scala:74){quote}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13530) Flaky test ReplicaManagerTest

2021-12-10 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-13530:
---

 Summary: Flaky test ReplicaManagerTest
 Key: KAFKA-13530
 URL: https://issues.apache.org/jira/browse/KAFKA-13530
 Project: Kafka
  Issue Type: Test
  Components: core, unit tests
Reporter: Matthias J. Sax


kafka.server.ReplicaManagerTest.[1] usesTopicIds=true
{quote}org.opentest4j.AssertionFailedError: expected:  but was: 
at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40)
at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:35)
at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:162)
at kafka.server.ReplicaManagerTest.assertFetcherHasTopicId(ReplicaManagerTest.scala:3502)
at kafka.server.ReplicaManagerTest.testReplicaAlterLogDirsWithAndWithoutIds(ReplicaManagerTest.scala:3572){quote}
STDOUT
{quote}[2021-12-07 16:19:35,906] ERROR Error while reading checkpoint file /tmp/kafka-6310287969113820536/replication-offset-checkpoint (kafka.server.LogDirFailureChannel:76)
java.nio.file.NoSuchFileException: /tmp/kafka-6310287969113820536/replication-offset-checkpoint
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at java.nio.file.Files.newBufferedReader(Files.java:2784)
at java.nio.file.Files.newBufferedReader(Files.java:2816)
at org.apache.kafka.server.common.CheckpointFile.read(CheckpointFile.java:104)
at kafka.server.checkpoints.CheckpointFileWithFailureHandler.read(CheckpointFileWithFailureHandler.scala:48)
at kafka.server.checkpoints.OffsetCheckpointFile.read(OffsetCheckpointFile.scala:70)
at kafka.server.checkpoints.LazyOffsetCheckpointMap.offsets$lzycompute(OffsetCheckpointFile.scala:94)
at kafka.server.checkpoints.LazyOffsetCheckpointMap.offsets(OffsetCheckpointFile.scala:94)
at kafka.server.checkpoints.LazyOffsetCheckpointMap.fetch(OffsetCheckpointFile.scala:97)
at kafka.server.checkpoints.LazyOffsetCheckpoints.fetch(OffsetCheckpointFile.scala:89)
at kafka.cluster.Partition.updateHighWatermark$1(Partition.scala:348)
at kafka.cluster.Partition.createLog(Partition.scala:361)
at kafka.cluster.Partition.maybeCreate$1(Partition.scala:334)
at kafka.cluster.Partition.createLogIfNotExists(Partition.scala:341)
at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:546)
at kafka.cluster.Partition.makeLeader(Partition.scala:530)
at kafka.server.ReplicaManager.$anonfun$applyLocalLeadersDelta$3(ReplicaManager.scala:2163)
at scala.Option.foreach(Option.scala:437)
at kafka.server.ReplicaManager.$anonfun$applyLocalLeadersDelta$2(ReplicaManager.scala:2160)
at kafka.server.ReplicaManager.$anonfun$applyLocalLeadersDelta$2$adapted(ReplicaManager.scala:2159)
at kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)
at scala.collection.convert.JavaCollectionWrappers$JMapWrapperLike.foreachEntry(JavaCollectionWrappers.scala:359)
at scala.collection.convert.JavaCollectionWrappers$JMapWrapperLike.foreachEntry$(JavaCollectionWrappers.scala:355)
at scala.collection.convert.JavaCollectionWrappers$AbstractJMapWrapper.foreachEntry(JavaCollectionWrappers.scala:309)
at kafka.server.ReplicaManager.applyLocalLeadersDelta(ReplicaManager.scala:2159)
at kafka.server.ReplicaManager.applyDelta(ReplicaManager.scala:2136)
at kafka.server.ReplicaManagerTest.testDeltaToLeaderOrFollowerMarksPartitionOfflineIfLogCantBeCreated(ReplicaManagerTest.scala:3349){quote}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)