[jira] [Created] (KAFKA-14181) Does Kafka counter brute-force attacks?

2022-08-24 Thread wooo (Jira)
wooo created KAFKA-14181:


 Summary: Does Kafka counter brute-force attacks?
 Key: KAFKA-14181
 URL: https://issues.apache.org/jira/browse/KAFKA-14181
 Project: Kafka
  Issue Type: Improvement
Reporter: wooo


Does Kafka counter brute-force attacks?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1172

2022-08-24 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #1171

2022-08-24 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-14180) Help, I've been changed into a monstrous, verminous bug

2022-08-24 Thread Gregor Samsa (Jira)
Gregor Samsa created KAFKA-14180:


 Summary: Help, I've been changed into a monstrous, verminous bug
 Key: KAFKA-14180
 URL: https://issues.apache.org/jira/browse/KAFKA-14180
 Project: Kafka
  Issue Type: Bug
Reporter: Gregor Samsa






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-837 Allow MultiCasting a Result Record.

2022-08-24 Thread Sagar
Thank you Bruno/Matthew for your comments.

I agree that using null does seem error-prone. However, I think using a
singleton list of [-1] might be better in terms of usability. I am saying
this because the KIP also has a provision to return an empty list to signal
dropping the record, so an empty Optional and an empty list would have
totally different meanings, which could get confusing.

Let me know what you think.

Thanks!
Sagar.
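The three-way convention Sagar describes can be made concrete with a small sketch. This is hypothetical illustration code, not the KIP-837 API itself; the class and method names are made up:

```java
import java.util.List;

// Hypothetical sketch (not the actual KIP-837 API) of the convention
// discussed above: partitions() returns a list of partition ids, where
// the singleton [-1] means "broadcast to all partitions" and an empty
// list means "drop the record".
public class PartitionsResult {
    enum Action { BROADCAST, DROP, MULTICAST }

    static Action interpret(List<Integer> partitions) {
        if (partitions.equals(List.of(-1))) return Action.BROADCAST;
        if (partitions.isEmpty()) return Action.DROP;
        return Action.MULTICAST; // send to each listed partition
    }

    public static void main(String[] args) {
        System.out.println(interpret(List.of(-1)));      // BROADCAST
        System.out.println(interpret(List.of()));        // DROP
        System.out.println(interpret(List.of(0, 2, 5))); // MULTICAST
    }
}
```

The sentinel avoids the empty-Optional vs. empty-list confusion, at the cost of overloading the partition-id domain with a magic value.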


On Wed, Aug 24, 2022 at 7:30 PM Matthew Benedict de Detrich
 wrote:

> I also concur with this, having an Optional in the type makes it very
> clear what’s going on and better signifies an absence of value (or in this
> case the broadcast value).
>
> --
> Matthew de Detrich
> Aiven Deutschland GmbH
> Immanuelkirchstraße 26, 10405 Berlin
> Amtsgericht Charlottenburg, HRB 209739 B
>
> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> m: +491603708037
> w: aiven.io e: matthew.dedetr...@aiven.io
> On 24. Aug 2022, 14:19 +0200, dev@kafka.apache.org, wrote:
> >
> > 2.
> > I would prefer changing the return type of partitions() to
> > Optional<List<Integer>> and using Optional.empty() as the broadcast
> > value. IMO, the chances that an implementation returns null due to a bug
> > are much higher than that an implementation returns an empty Optional due
> > to a bug.
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.3 #43

2022-08-24 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 490383 lines...]
[2022-08-24T17:14:45.340Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-08-24T17:14:45.340Z] 
[2022-08-24T17:14:45.340Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-08-24T17:14:45.340Z] 
[2022-08-24T17:14:45.340Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-08-24T17:14:45.340Z] 
[2022-08-24T17:14:45.340Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-08-24T17:14:45.340Z] 
[2022-08-24T17:14:45.340Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-08-24T17:14:46.258Z] streams-6: SMOKE-TEST-CLIENT-CLOSED
[2022-08-24T17:14:46.258Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2022-08-24T17:14:46.258Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2022-08-24T17:14:46.258Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2022-08-24T17:14:46.258Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2022-08-24T17:14:46.258Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2022-08-24T17:14:46.258Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2022-08-24T17:14:50.357Z] 
[2022-08-24T17:14:50.357Z] BUILD SUCCESSFUL in 2h 38m 53s
[2022-08-24T17:14:50.357Z] 212 actionable tasks: 115 executed, 97 up-to-date
[2022-08-24T17:14:50.357Z] 
[2022-08-24T17:14:50.357Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_3.3/build/reports/profile/profile-2022-08-24-14-36-01.html
[2022-08-24T17:14:50.357Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] junit
[2022-08-24T17:14:51.357Z] Recording test results
[2022-08-24T17:15:05.403Z] [Checks API] No suitable checks publisher found.
[Pipeline] echo
[2022-08-24T17:15:05.404Z] Skipping Kafka Streams archetype test for Java 11
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2022-08-24T17:15:22.461Z] 
[2022-08-24T17:15:22.461Z] 
org.apache.kafka.streams.integration.SuppressionDurabilityIntegrationTest > 
shouldRecoverBufferAfterShutdown[exactly_once_v2] PASSED
[2022-08-24T17:15:22.461Z] 
[2022-08-24T17:15:22.461Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] STARTED
[2022-08-24T17:15:25.132Z] 
[2022-08-24T17:15:25.132Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] PASSED
[2022-08-24T17:15:25.132Z] 
[2022-08-24T17:15:25.132Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED
[2022-08-24T17:15:31.163Z] 
[2022-08-24T17:15:31.163Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] PASSED
[2022-08-24T17:15:31.163Z] 
[2022-08-24T17:15:31.163Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED
[2022-08-24T17:15:36.031Z] 
[2022-08-24T17:15:36.031Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] PASSED
[2022-08-24T17:15:36.031Z] 
[2022-08-24T17:15:36.031Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED
[2022-08-24T17:15:41.706Z] 
[2022-08-24T17:15:41.707Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] PASSED
[2022-08-24T17:15:41.707Z] 
[2022-08-24T17:15:41.707Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED
[2022-08-24T17:15:46.689Z] 
[2022-08-24T17:15:46.689Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] PASSED
[2022-08-24T17:15:46.689Z] 
[2022-08-24T17:15:46.689Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED
[2022-08-24T17:15:52.527Z] 
[2022-08-24T17:15:52.528Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] PASSED
[2022-08-24T17:15:52.528Z] 
[2022-08-24T17:15:52.528Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED
[2022-08-24T17:15:58.201Z] 
[2022-08-24T17:15:58.201Z] 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] PASSED

[jira] [Resolved] (KAFKA-14178) NoOpRecord incorrectly causes high controller queue time metric

2022-08-24 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-14178.
--
Resolution: Fixed

> NoOpRecord incorrectly causes high controller queue time metric
> ---
>
> Key: KAFKA-14178
> URL: https://issues.apache.org/jira/browse/KAFKA-14178
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, kraft, metrics
>Reporter: David Arthur
>Assignee: David Arthur
>Priority: Minor
> Fix For: 3.3.0
>
>
> When a deferred event is added to the queue in ControllerQuorum, we include 
> the total time it sat in the queue as part of the "EventQueueTimeMs" metric 
> in QuorumControllerMetrics.
> With the introduction of NoOpRecords, the p99 value for this metric is equal 
> to the interval at which we schedule the no-op records. E.g., if no-op records 
> are scheduled every 5 seconds, we will see a p99 EventQueueTimeMs of 5 seconds.
> This makes it difficult (if not impossible) to see if there is some delay in the 
> event processing on the controller.
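The effect described in the ticket can be reproduced with a toy model (this is illustration code, not Kafka's implementation): a handful of deferred no-op samples recorded at the full schedule interval dominates the tail percentile even when every real event is processed almost instantly.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy model (not Kafka code): mix instantly-processed events with
// deferred no-op events that sat in the queue for the full 5 s
// schedule interval, then compute the p99 of the combined samples.
public class QueueTimeP99 {
    static long p99(List<Long> samples) {
        List<Long> sorted = new ArrayList<>(samples);
        Collections.sort(sorted);
        // Nearest-rank 99th percentile.
        int idx = (int) Math.ceil(0.99 * sorted.size()) - 1;
        return sorted.get(idx);
    }

    public static void main(String[] args) {
        List<Long> queueTimesMs = new ArrayList<>();
        // 990 "real" events, each spending ~1 ms in the queue.
        for (int i = 0; i < 990; i++) queueTimesMs.add(1L);
        // 12 deferred no-op events, each recorded with the full 5000 ms
        // they waited for their scheduled time.
        for (int i = 0; i < 12; i++) queueTimesMs.add(5000L);
        // The few no-op samples dominate the tail percentile, masking
        // the healthy 1 ms queue time of real events.
        System.out.println("p99 = " + p99(queueTimesMs) + " ms"); // prints: p99 = 5000 ms
    }
}
```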



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14179) Improve docs/upgrade.html to talk about metadata.version upgrades

2022-08-24 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-14179:
--

 Summary: Improve docs/upgrade.html to talk about metadata.version 
upgrades
 Key: KAFKA-14179
 URL: https://issues.apache.org/jira/browse/KAFKA-14179
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Jose Armando Garcia Sancio
 Fix For: 3.3.0


The rolling upgrade documentation for 3.3.0 only talks about software and IBP 
upgrades. It doesn't talk about metadata.version upgrades.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1170

2022-08-24 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-10360) Disabling JmxReporter registration

2022-08-24 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10360.

Fix Version/s: 3.4.0
 Assignee: Mickael Maison
   Resolution: Fixed

> Disabling JmxReporter registration 
> ---
>
> Key: KAFKA-10360
> URL: https://issues.apache.org/jira/browse/KAFKA-10360
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Romain Quinio
>Assignee: Mickael Maison
>Priority: Minor
> Fix For: 3.4.0
>
>
> In Kafka client applications, JMX usage is often being replaced in favor of 
> frameworks like micrometer or microprofile-metrics.
> It would be nice to be able to disable the JmxReporter that is built in today 
> with KafkaProducer/KafkaConsumer/KafkaStreams:
> [https://github.com/apache/kafka/blob/783a6451f5f8c50dbe151caf5e76b74917690364/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L355-L357]
> [https://github.com/apache/kafka/blob/ffdec02e25bb3be52ee5c06fe76d388303f6ea43/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L869-L871]
> [https://github.com/apache/kafka/blob/42f46abb34a2b29993b1a8e6333a400a00227e30/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java#L685-L687]
> Example of issue in Quarkus: https://github.com/quarkusio/quarkus/issues/9799
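For reference, the resolution that shipped in 3.4.0 (via KIP-830, which this ticket motivated) added an opt-out client configuration. A minimal sketch, assuming the `auto.include.jmx.reporter` property from that KIP:

```java
import java.util.Properties;

// Sketch of how the 3.4.0 fix is meant to be used (per KIP-830):
// setting auto.include.jmx.reporter to false keeps the client from
// instantiating the built-in JmxReporter, so a framework reporter
// (e.g. Micrometer) can be plugged in via metric.reporters instead.
public class DisableJmxReporter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        // Opt out of the built-in JmxReporter (the default is true).
        props.put("auto.include.jmx.reporter", "false");
        // These properties would then be passed to e.g. new KafkaConsumer<>(props).
        System.out.println(props.getProperty("auto.include.jmx.reporter"));
    }
}
```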



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


RE: Re: [DISCUSS] KIP-710: Full support for distributed mode in dedicated MirrorMaker 2.0 clusters

2022-08-24 Thread Chris Egerton
Hi Daniel,

I'd like to resurface this KIP in case you're still interested in pursuing
it. I know it's been a while since you published it, and it hasn't received
much attention, but I'm hoping we can give it a try now and finally put
this long-standing bug to rest. To that end, I have some thoughts about the
proposal. This isn't a complete review, but I wanted to give enough to get
the ball rolling:

1. Some environments with firewalls or strict security policies may not be
able to bring up a REST server for each MM2 node. If we decide that we'd
like to use the Connect REST API (or even just parts of it) to address this
bug with MM2, it does make sense to eventually make the availability of the
REST API a hard requirement for running MM2, but it might be a bit too
abrupt to do that all in a single release. What do you think about making
the REST API optional for now, but noting that it will become required in a
later release (probably 4.0.0 or, if that's not enough time, 5.0.0)? We
could choose not to bring up the REST server for any node whose configuration
doesn't explicitly opt into one, and maybe log a warning message on startup
if none is configured. In effect, we'd be marking the current mode (no REST
server) as deprecated.

2. I'm not sure that we should count out the "Creating an internal-only
derivation of the Connect REST API" rejected alternative. Right now, the
single source of truth for the configuration of a MM2 cluster (assuming
it's being run in dedicated mode, and not as a connector in a vanilla
Connect cluster) is the configuration file used for the process. By
bringing up the REST API, we'd expose endpoints to modify connector
configurations, which would not only add complexity to the operation of a
MM2 cluster, but even qualify as an attack vector for malicious entities.
Thanks to KIP-507 we have some amount of security around the internal-only
endpoints used by the Connect framework, but for any public endpoints, the
Connect REST API doesn't come with any security out of the box.

3. Small point, but with support for exactly-once source connectors coming
out in 3.3.0, it's also worth noting that that's another feature that won't
work properly with multi-node MM2 clusters without adding a REST server for
each node (or some substitute that accomplishes the same goal). I don't
think this will affect the direction of the design discussion too much, but
it does help strengthen the motivation.

Cheers,

Chris

On 2021/02/18 15:57:36 Dániel Urbán wrote:
> Hello everyone,
>
> * Sorry, I meant KIP-710.
>
> Right now the MirrorMaker cluster is somewhat unreliable and does not
> properly support running in a cluster. I'd say that fixing this would
> be a nice addition.
> Does anyone have some input on this?
>
> Thanks in advance
> Daniel
>
> Dániel Urbán  ezt írta (időpont: 2021. jan. 26., K,
> 15:56):
>
> > Hello everyone,
> >
> > I would like to start a discussion on KIP-709, which addresses some
> > missing features in MM2 dedicated mode.
> >
> >
https://cwiki.apache.org/confluence/display/KAFKA/KIP-710%3A+Full+support+for+distributed+mode+in+dedicated+MirrorMaker+2.0+clusters
> >
> > Currently, the dedicated mode of MM2 does not fully support running in a
> > cluster. The core issue is that the Connect REST Server is not included
> > in the dedicated mode, which makes follower->leader communication
> > impossible. In some cases, this results in the cluster not being able to
> > react to dynamic configuration changes (e.g. dynamic topic filter changes).
> > Another smaller detail is that MM2 dedicated mode eagerly resolves config
> > provider references in the Connector configurations, which is undesirable
> > and a breaking change compared to vanilla Connect. This can cause an issue,
> > for example, when there is an environment variable reference which contains
> > some host-specific information, like a file path. The leader resolves the
> > reference eagerly, and the resolved value is propagated to other MM2 nodes
> > instead of the reference being resolved locally, separately on each node.
> >
> > The KIP addresses these by adding the Connect REST Server to the MM2
> > dedicated mode for each replication flow, and postponing the config
> > provider reference resolution.
> >
> > Please discuss, I know this is a major change, but also an important
> > feature for MM2 users.
> >
> > Daniel
> >
>
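The eager vs. local config-provider resolution problem described in the quoted message can be illustrated with a toy sketch. This is not MM2 or Connect code; the resolver and variable names are made up for illustration:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration (not MM2 code) of why eager resolution on the
// leader breaks host-specific references: each node has its own value
// for the same variable, so the reference must be resolved locally.
public class ConfigResolution {
    static final Pattern REF = Pattern.compile("\\$\\{env:([^}]+)\\}");

    // Resolve ${env:NAME} references against a node's own environment.
    static String resolve(String config, Map<String, String> nodeEnv) {
        Matcher m = REF.matcher(config);
        StringBuilder sb = new StringBuilder();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(nodeEnv.get(m.group(1))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String connectorConfig = "ssl.keystore.location=${env:KEYSTORE_PATH}";
        Map<String, String> leaderEnv = Map.of("KEYSTORE_PATH", "/data/leader/ks.jks");
        Map<String, String> followerEnv = Map.of("KEYSTORE_PATH", "/data/follower/ks.jks");

        // Eager (current MM2 behavior): the leader resolves the reference
        // and propagates its own path, which is wrong on other hosts.
        System.out.println("propagated eagerly:   " + resolve(connectorConfig, leaderEnv));

        // Lazy (vanilla Connect behavior): the raw reference is propagated
        // and each node resolves it against its local environment.
        System.out.println("resolved on follower: " + resolve(connectorConfig, followerEnv));
    }
}
```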


Re: [VOTE] KIP-837 Allow MultiCasting a Result Record.

2022-08-24 Thread Matthew Benedict de Detrich
I also concur with this, having an Optional in the type makes it very clear 
what’s going on and better signifies an absence of value (or in this case the 
broadcast value).

--
Matthew de Detrich
Aiven Deutschland GmbH
Immanuelkirchstraße 26, 10405 Berlin
Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
m: +491603708037
w: aiven.io e: matthew.dedetr...@aiven.io
On 24. Aug 2022, 14:19 +0200, dev@kafka.apache.org, wrote:
>
> 2.
> I would prefer changing the return type of partitions() to
> Optional<List<Integer>> and using Optional.empty() as the broadcast
> value. IMO, the chances that an implementation returns null due to a bug
> are much higher than that an implementation returns an empty Optional due
> to a bug.


Re: [VOTE] KIP-837 Allow MultiCasting a Result Record.

2022-08-24 Thread Bruno Cadonna

Hi Sagar,

Thank you for the KIP and sorry for being late to the party!

1.
The java docs for partitions() say:

"Note that returning a single valued list with value -1 is a shorthand 
for broadcasting the record to all the partitions of the topic."


I guess that is not true anymore since the code in the "Proposed 
Changes" section checks for null to decide about broadcasting.


2.
I would prefer changing the return type of partitions() to 
Optional<List<Integer>> and using Optional.empty() as the broadcast 
value. IMO, the chances that an implementation returns null due to a bug 
are much higher than that an implementation returns an empty Optional due 
to a bug. I would also be fine with a singleton list with a -1 as you 
describe in the java docs.
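Bruno's preference can be sketched as follows. The method and class names here are hypothetical (the real API would live on StreamPartitioner); the point is that with an Optional return type, broadcast is an explicit Optional.empty() and a null return is clearly a bug rather than a signal:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of the Optional<List<Integer>> encoding:
// Optional.empty() = broadcast, empty list = drop, non-empty list =
// multicast to the listed partitions, null = implementation bug.
public class BroadcastEncoding {
    static String route(Optional<List<Integer>> partitions) {
        if (partitions == null) {
            // A null Optional is almost certainly a partitioner bug,
            // never an intentional signal.
            throw new IllegalStateException("partitions() returned null");
        }
        if (partitions.isEmpty()) return "broadcast to all partitions"; // empty Optional
        if (partitions.get().isEmpty()) return "drop record";           // empty list
        return "send to " + partitions.get();
    }

    public static void main(String[] args) {
        System.out.println(route(Optional.empty()));           // prints: broadcast to all partitions
        System.out.println(route(Optional.of(List.of(1, 3)))); // prints: send to [1, 3]
    }
}
```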


3.
Recently a "Test Plan" section was added to the KIP template. Your KIP 
is missing this section.



Best,
Bruno

On 24.08.22 00:22, Sophie Blee-Goldman wrote:

Thanks for the updates, it reads much more clearly to me now. Looks great

+1 (binding)

Cheers,
Sophie

On Fri, Aug 19, 2022 at 1:42 AM Sagar  wrote:


Thanks Sophie for the review. I see the confusion. As you pointed out, the
problem the KIP is trying to solve is not the avoidance of a custom
partitioner. Instead, it is the process of sending or replicating the message
N times and then having the record wired through via a custom partitioner
for every replication. That's what I tried to convey in the motivation
section. I updated the motivation slightly, let me know if that sounds ok.

Also, yes the dropping of records using a custom partitioner is an added
benefit that we get. I think the custom partitioner bit is important as one
can always filter the records out initially.

Let me know if this looks ok?

Thanks!
Sagar.

On Fri, Aug 19, 2022 at 10:17 AM Sophie Blee-Goldman
 wrote:


Thanks Sagar -- one thing I'm still confused about, and sorry to keep
pushing on this, but the example you gave for how this works in today's
world seems not to correspond to the method described in the text of the
Motivation section, i.e.

Currently, if a user wants to replicate a message into N partitions, the
only way of doing that is to replicate the message N times and then plug in
a custom partitioner to write the message N times into N different
partitions. This seems a cumbersome way to broadcast. Also, there seems to
be no way of dropping a record within the partitioner. This KIP aims to
make this process simpler in Kafka Streams.



It sounds like you're saying the problem this KIP is fixing is that the
only way to do this is by implementing a custom partitioner and that this
is cumbersome, but that's actually exactly what this KIP is doing:
providing a method of multi-casting via implementing a custom partitioner
(as seen in the example usage you provided). Thanks to your examples I
think I now understand better what the KIP is doing, and assume what's
written in the motivation section is just a typo/mistake -- can you
confirm?

That said, the claim about having "no way of dropping a record within the
partitioner" does actually seem to be correct, that is, you couldn't do it
with a custom partitioner prior to this KIP and now you can. I would
consider that a secondary/additional improvement that these changes
provide, but it's not strictly speaking related to multi-casting, right?
(Just checking my understanding, not challenging anything about this.)

Cheers,
Sophie
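For a sense of what the usage example Sophie asks for might look like, here is a hedged sketch. The interface name and signature are illustrative only, not the exact KIP-837 API (which extends StreamPartitioner):

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of multi-casting one record to several
// partitions with a KIP-837-style partitioner. The interface below is
// illustrative; the real API is on Kafka Streams' StreamPartitioner.
public class MulticastExample {
    interface MultiCastPartitioner<K, V> {
        Optional<List<Integer>> partitions(String topic, K key, V value, int numPartitions);
    }

    // Route every record both to partition 0 (imagine an audit copy)
    // and to a "primary" partition chosen by key hash among the rest.
    static final MultiCastPartitioner<String, String> PARTITIONER =
        (topic, key, value, numPartitions) ->
            Optional.of(List.of(0, 1 + Math.abs(key.hashCode()) % (numPartitions - 1)));

    public static void main(String[] args) {
        // One input record fans out to two partitions of a 6-partition topic.
        System.out.println(PARTITIONER.partitions("events", "user-42", "payload", 6));
    }
}
```

Before the KIP, achieving the same fan-out required emitting the record N times upstream and steering each copy with a separate custom partitioner invocation.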

On Thu, Aug 18, 2022 at 7:27 AM Sagar  wrote:


Hello Sophie,

Thanks for your feedback. I have made all the suggested changes.

One note: on how users can accomplish this in today's world, I have made
up this example and have never tried it myself before. But I am assuming
it will work.

Let me know what you think.

Thanks!
Sagar.


On Thu, Aug 18, 2022 at 7:17 AM Sophie Blee-Goldman
 wrote:


Hey Sagar, thanks for the KIP!

Just some cosmetic points to make it absolutely clear what this KIP is
doing:
1) could you clarify up front in the Motivation section that this is
focused on Kafka Streams applications, and not the plain Producer client?
2) you included the entire implementation of the `#send` method to
demonstrate the change in logic, but can you either remove the parts of
the implementation that aren't being touched here or at least highlight
in some way the specific lines that have changed?
3) In general the implementation is, well, an implementation detail that
doesn't need to be included in the KIP, but it's ok -- always nice to get
a sense of how things will work internally. But what I think would be more
useful to show in the KIP is how things will work with the new public
interface -- i.e., can you provide a brief example of how a user would go
about taking advantage of this new interface? Even better, include an
example of what it takes for a user to accomplish this behavior before
this KIP. It would help showcase the concrete benefit this KIP is bringing
and anchor the motivation section a bit 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #1169

2022-08-24 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 506704 lines...]
[2022-08-24T09:46:52.305Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.testFallbackPriorTaskAssignorLargePartitionCount
 failed, log available in 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/build/reports/testOutput/org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.testFallbackPriorTaskAssignorLargePartitionCount.test.stdout
[2022-08-24T09:46:52.305Z] 
[2022-08-24T09:46:52.305Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargePartitionCount FAILED
[2022-08-24T09:46:52.305Z] java.lang.AssertionError: The first assignment 
took too long to complete at 65492ms.
[2022-08-24T09:46:52.305Z] at 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.completeLargeAssignment(StreamsAssignmentScaleTest.java:216)
[2022-08-24T09:46:52.305Z] at 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest.testFallbackPriorTaskAssignorLargePartitionCount(StreamsAssignmentScaleTest.java:120)
[2022-08-24T09:46:52.305Z] 
[2022-08-24T09:46:52.305Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount STARTED
[2022-08-24T09:47:37.946Z] 
[2022-08-24T09:47:37.946Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2022-08-24T09:47:37.946Z] 
[2022-08-24T09:47:37.946Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2022-08-24T09:47:46.275Z] 
[2022-08-24T09:47:46.275Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2022-08-24T09:47:46.275Z] 
[2022-08-24T09:47:46.275Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2022-08-24T09:48:07.659Z] 
[2022-08-24T09:48:07.659Z] StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2022-08-24T09:48:07.659Z] 
[2022-08-24T09:48:07.659Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2022-08-24T09:48:09.510Z] 
[2022-08-24T09:48:09.510Z] StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2022-08-24T09:48:09.510Z] 
[2022-08-24T09:48:09.510Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2022-08-24T09:48:11.268Z] 
[2022-08-24T09:48:11.268Z] StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2022-08-24T09:48:12.471Z] 
[2022-08-24T09:48:12.471Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() STARTED
[2022-08-24T09:48:14.229Z] 
[2022-08-24T09:48:14.229Z] AdjustStreamThreadCountTest > 
testConcurrentlyAccessThreads() PASSED
[2022-08-24T09:48:14.229Z] 
[2022-08-24T09:48:14.229Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() STARTED
[2022-08-24T09:48:19.005Z] 
[2022-08-24T09:48:19.005Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadReplacement() PASSED
[2022-08-24T09:48:19.005Z] 
[2022-08-24T09:48:19.005Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() STARTED
[2022-08-24T09:48:26.535Z] 
[2022-08-24T09:48:26.535Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveThreadsMultipleTimes() PASSED
[2022-08-24T09:48:26.535Z] 
[2022-08-24T09:48:26.535Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() STARTED
[2022-08-24T09:48:31.388Z] 
[2022-08-24T09:48:31.388Z] AdjustStreamThreadCountTest > 
shouldnNotRemoveStreamThreadWithinTimeout() PASSED
[2022-08-24T09:48:31.388Z] 
[2022-08-24T09:48:31.388Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() STARTED
[2022-08-24T09:48:52.431Z] 
[2022-08-24T09:48:52.431Z] AdjustStreamThreadCountTest > 
shouldAddAndRemoveStreamThreadsWhileKeepingNamesCorrect() PASSED
[2022-08-24T09:48:52.431Z] 
[2022-08-24T09:48:52.431Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() STARTED
[2022-08-24T09:48:55.701Z] 
[2022-08-24T09:48:55.701Z] AdjustStreamThreadCountTest > 
shouldAddStreamThread() PASSED
[2022-08-24T09:48:55.701Z] 
[2022-08-24T09:48:55.701Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() STARTED
[2022-08-24T09:48:59.829Z] 
[2022-08-24T09:48:59.829Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThreadWithStaticMembership() PASSED
[2022-08-24T09:48:59.829Z] 
[2022-08-24T09:48:59.829Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() STARTED
[2022-08-24T09:49:07.081Z] 
[2022-08-24T09:49:07.081Z] AdjustStreamThreadCountTest > 
shouldRemoveStreamThread() PASSED
[2022-08-24T09:49:07.081Z] 
[2022-08-24T09:49:07.081Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() STARTED
[2022-08-24T09:49:08.190Z] 
[2022-08-24T09:49:08.190Z] AdjustStreamThreadCountTest > 
shouldResizeCacheAfterThreadRemovalTimesOut() PASSED
[2022-08-24T09:49:11.862Z] 

[jira] [Resolved] (KAFKA-14168) Constant memory usage increase

2022-08-24 Thread zhangdong7 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangdong7 resolved KAFKA-14168.

Resolution: Invalid

> Constant memory usage increase
> --
>
> Key: KAFKA-14168
> URL: https://issues.apache.org/jira/browse/KAFKA-14168
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.8.0
>Reporter: zhangdong7
>Priority: Blocker
> Attachments: image-2022-08-16-17-16-53-039.png
>
>
> The number of producer threads grows on demand and is not reduced
> !image-2022-08-16-17-16-53-039.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [kafka-site] mimaison merged pull request #435: MINOR:Clean up images

2022-08-24 Thread GitBox


mimaison merged PR #435:
URL: https://github.com/apache/kafka-site/pull/435


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] sadatrafsan commented on pull request #431: Brain Station 23 adopted Kafka

2022-08-24 Thread GitBox


sadatrafsan commented on PR #431:
URL: https://github.com/apache/kafka-site/pull/431#issuecomment-1225247466

   image added to the instructed folder


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [kafka-site] sadatrafsan commented on pull request #431: Brain Station 23 adopted Kafka

2022-08-24 Thread GitBox


sadatrafsan commented on PR #431:
URL: https://github.com/apache/kafka-site/pull/431#issuecomment-1225242243

   > Hi @sadatrafsan, thanks for the PR - can you add the image `bs-23.png` to 
the `images/powered-by` directory?
   
   remote: Permission to apache/kafka-site.git denied to sadatrafsan.
   fatal: unable to access 'https://github.com/apache/kafka-site.git/': The 
requested URL returned error: 403
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org