Re: [DISCUSS] KIP-680: TopologyTestDriver should not require a Properties argument

2020-11-06 Thread Rohit Deshpande
Thanks John, I will go ahead and update the KIP with a randomized
application id requirement.
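For illustration, a minimal sketch of how such a randomized default could look inside the test driver when no application id is supplied (the helper class, method name, and id format here are hypothetical, not the final implementation):

import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.streams.StreamsConfig;

// Hypothetical helper: defaults a TopologyTestDriver could fall back to when the
// caller passes no Properties.
final class TestDriverDefaults {
    static Properties randomizedConfig() {
        final Properties props = new Properties();
        // A random application id keeps state directories and internal names of
        // concurrently running TopologyTestDriver tests from colliding.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG,
                  "topology-test-driver-" + UUID.randomUUID());
        // The test driver never connects to a real cluster, so a dummy bootstrap
        // server is sufficient.
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        return props;
    }
}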

On Fri, Nov 6, 2020 at 3:12 PM John Roesler  wrote:

> Hi Rohit,
>
> Ah, indeed, that was my mistake. I made a bad assumption about the code.
>
> Since we are already cleaning up, then I’d suggest only that we might
> generate a randomized application id so that concurrent tests won’t
> interfere with each other. But this is sounding like a minor implementation
> note, not a concern for the KIP.
>
> The proposal looks good to me.
>
> Thanks again,
> John
>
> On Fri, Nov 6, 2020, at 16:54, Rohit Deshpande wrote:
> > Hi John,
> > Thank you for your review and the feedback.
> >
> > In the existing TTD.close() method, stateDirectory.clean()
> > (https://github.com/apache/kafka/blob/trunk/streams/test-utils/src/main/java/org/apache/kafka/streams/TopologyTestDriver.java#L1193)
> > is called, and it cleans up the task and global state directories
> > (https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StateDirectory.java#L285-L291).
> > If RocksDB directories are not getting cleaned up by that close method, I
> > would like to hear how we can clean them up there.
> > Currently the default value of the state directory config is
> > /tmp/kafka-streams
> > (https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/StreamsConfig.java#L605),
> > so I am not setting its value explicitly in the proposed no-argument
> > constructor.
> > Does the directory have to be unique in each test? If yes, then I agree
> > that we can tackle RocksDB directory cleanup and unique directory creation
> > in a separate KIP.
> >
> > Thanks,
> > Rohit
> >
> >
> > On Fri, Nov 6, 2020 at 7:12 AM John Roesler  wrote:
> >
> > > Hello Rohit,
> > >
> > > Thanks for picking this up! I think your KIP looks good.
> > >
> > > While I was doing some cleanup of our tests before, one thing I
> > > encountered is that, while most tests don’t semantically need to
> specify
> > > any configs, many tests actually do set the state directory config.
> They
> > > set it specifically so that they can delete it at the end of the test.
> > > Otherwise, the tests would leave RocksDB directories behind.
> > >
> > > I’m wondering if we should address this issue as part of your KIP. What
> > > I’m thinking is this: if no state directory is specified, then we
> create a
> > > new, unique temp directory and register it for cleanup when the JVM
> exits.
> > > Additionally, we would set a flag and clean up the state dir when
> > > TTD.close() is called.
> > >
> > > That way, TTD tests would be by default independent and tidy.
> > >
> > > Admittedly, this is outside the current scope of your KIP, so please
> feel
> > > free to reject this idea, in which case I can file a separate ticket
> for
> > > it.
> > >
> > > Thanks!
> > > -John
> > >
> > > On Tue, Nov 3, 2020, at 18:59, Rohit Deshpande wrote:
> > > > Hi Matthias,
> > > > Thank you for the review and the suggestion.
> > > > Considering that the TopologyTestDriver constructor takes at most 3
> > > > parameters, with the topology being the only required one, we can
> > > > definitely add a new constructor `TopologyTestDriver(Topology, Instant)`.
> > > > Right now, I see one test where we can use this constructor:
> > > >
> > >
> https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamTransformTest.java#L80-L83
> > > > Also we can get rid of this method in TestDriver trait:
> > > >
> > >
> https://github.com/apache/kafka/blob/trunk/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/utils/TestDriver.scala#L32-L35
> > > > which is used in multiple test classes and seems redundant. I agree
> with
> > > > your suggestion.
> > > > Thanks,
> > > > Rohit
> > > >
> > > > On Tue, Nov 3, 2020 at 3:19 PM Matthias J. Sax 
> wrote:
> > > >
> > > > > Thanks for the KIP Rohit.
> > > > >
> > > > > Wondering if we should also add `TopologyTestDriver(Topology,
> > > > > Instant)`? Not totally sure, as having too many overloads could also
> > > > > be annoying.
> > > > >
> > > > >
> > > > > -Matthias
> > > > >
> > > > > On 11/3/20 10:02 AM, Rohit Deshpande wrote:
> > > > > > Hello all,
> > > > > > I have created KIP-680: TopologyTestDriver should not require a
> > > > > Properties
> > > > > > argument.
> > > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-680%3A+TopologyTestDriver+should+not+require+a+Properties+argument
> > > > > >
> > > > > > Jira for the KIP:
> > > > > > https://issues.apache.org/jira/browse/KAFKA-10629
> > > > > >
> > > > > > If we end up making changes, they will look like this:
> > > > > >
> https://github.com/apache/kafka/compare/trunk...rohitrmd:KAFKA-10629
> > > > > >
> > > > > > Please have a look and let me know what you think.
> > > > > >
> > > > > > Thanks,
> > > > > > Rohit
> > > > > >
> > 
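For readers skimming the quoted discussion above, a rough sketch of how the constructors proposed in KIP-680 would be used in a test (the signatures follow the proposal in the thread; the final API is whatever the KIP ends up specifying):

import java.time.Instant;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;

public class Kip680UsageSketch {
    public static void main(String[] args) {
        final Topology topology = new StreamsBuilder().build();

        // Proposed: no Properties argument needed for the common case.
        try (TopologyTestDriver driver = new TopologyTestDriver(topology)) {
            // pipe input records, read output records ...
        }

        // Overload discussed with Matthias: topology plus initial wall-clock time.
        try (TopologyTestDriver driver =
                 new TopologyTestDriver(topology, Instant.ofEpochMilli(0L))) {
            // ...
        }
    }
}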

Re: [DISCUSS] KIP-680: TopologyTestDriver should not require a Properties argument

2020-11-06 Thread John Roesler
Hi Rohit,

Ah, indeed, that was my mistake. I made a bad assumption about the code.

Since we are already cleaning up, then I’d suggest only that we might generate 
a randomized application id so that concurrent tests won’t interfere with each 
other. But this is sounding like a minor implementation note, not a concern for 
the KIP. 

The proposal looks good to me. 

Thanks again,
John

On Fri, Nov 6, 2020, at 16:54, Rohit Deshpande wrote:
> Hi John,
> Thank you for your review and the feedback.
> 
> In the existing TTD.close() method, stateDirectory.clean() is called, and it
> cleans up the task and global state directories.
> If RocksDB directories are not getting cleaned up by that close method, I
> would like to hear how we can clean them up there.
> Currently the default value of the state directory config is
> /tmp/kafka-streams, so I am not setting its value explicitly in the proposed
> no-argument constructor.
> Does the directory have to be unique in each test? If yes, then I agree
> that we can tackle RocksDB directory cleanup and unique directory creation
> in a separate KIP.
> 
> Thanks,
> Rohit
> 
> 
> On Fri, Nov 6, 2020 at 7:12 AM John Roesler  wrote:
> 
> > Hello Rohit,
> >
> > Thanks for picking this up! I think your KIP looks good.
> >
> > While I was doing some cleanup of our tests before, one thing I
> > encountered is that, while most tests don’t semantically need to specify
> > any configs, many tests actually do set the state directory config. They
> > set it specifically so that they can delete it at the end of the test.
> > Otherwise, the tests would leave RocksDB directories behind.
> >
> > I’m wondering if we should address this issue as part of your KIP. What
> > I’m thinking is this: if no state directory is specified, then we create a
> > new, unique temp directory and register it for cleanup when the JVM exits.
> > Additionally, we would set a flag and clean up the state dir when
> > TTD.close() is called.
> >
> > That way, TTD tests would be by default independent and tidy.
> >
> > Admittedly, this is outside the current scope of your KIP, so please feel
> > free to reject this idea, in which case I can file a separate ticket for
> > it.
> >
> > Thanks!
> > -John
> >
> > On Tue, Nov 3, 2020, at 18:59, Rohit Deshpande wrote:
> > > Hi Matthias,
> > > Thank you for the review and the suggestion.
> > > Considering that the TopologyTestDriver constructor takes at most 3
> > > parameters, with the topology being the only required one, we can
> > > definitely add a new constructor `TopologyTestDriver(Topology, Instant)`.
> > > Right now, I see one test where we can use this constructor:
> > >
> > https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamTransformTest.java#L80-L83
> > > Also we can get rid of this method in TestDriver trait:
> > >
> > https://github.com/apache/kafka/blob/trunk/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/utils/TestDriver.scala#L32-L35
> > > which is used in multiple test classes and seems redundant. I agree with
> > > your suggestion.
> > > Thanks,
> > > Rohit
> > >
> > > On Tue, Nov 3, 2020 at 3:19 PM Matthias J. Sax  wrote:
> > >
> > > > Thanks for the KIP Rohit.
> > > >
> > > > Wondering if we should also add `TopologyTestDriver(Topology,
> > > > Instant)`? Not totally sure, as having too many overloads could also be
> > > > annoying.
> > > >
> > > >
> > > > -Matthias
> > > >
> > > > On 11/3/20 10:02 AM, Rohit Deshpande wrote:
> > > > > Hello all,
> > > > > I have created KIP-680: TopologyTestDriver should not require a
> > > > Properties
> > > > > argument.
> > > > >
> > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-680%3A+TopologyTestDriver+should+not+require+a+Properties+argument
> > > > >
> > > > > Jira for the KIP:
> > > > > https://issues.apache.org/jira/browse/KAFKA-10629
> > > > >
> > > > > If we end up making changes, they will look like this:
> > > > > https://github.com/apache/kafka/compare/trunk...rohitrmd:KAFKA-10629
> > > > >
> > > > > Please have a look and let me know what you think.
> > > > >
> > > > > Thanks,
> > > > > Rohit
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] KIP-612: Ability to Limit Connection Creation Rate on Brokers

2020-11-06 Thread David Mao
Hi all,

I updated the KIP with more details on per-IP connection rate limiting.
Notable changes are the addition of metrics tracking the per-IP connection
acceptance rate and the throttling applied by per-IP connection rate limiting.
In addition, I fleshed out details around the API we will use to describe and
alter per-IP quotas. Let me know if there are any questions/concerns.

Thanks
David (Mao)
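The thread below also touches on the token-bucket MeasurableStat from KIP-599 that the connection-rate quota builds on. As a rough, standalone illustration of the mechanism (not Kafka's actual implementation), a token bucket for connection-rate limiting behaves roughly like this:

// Minimal token-bucket sketch: allows short bursts up to `capacity` connections
// while enforcing a long-run average of `ratePerSec` connections per second.
public final class TokenBucket {
    private final double ratePerSec;   // refill rate, e.g. max connection creation rate
    private final double capacity;     // maximum burst size
    private double tokens;
    private long lastNanos;

    public TokenBucket(double ratePerSec, double capacity) {
        this.ratePerSec = ratePerSec;
        this.capacity = capacity;
        this.tokens = capacity;
        this.lastNanos = System.nanoTime();
    }

    /** Returns true if a new connection may be accepted now. */
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Refill tokens proportionally to the elapsed time, capped at capacity.
        tokens = Math.min(capacity, tokens + (now - lastNanos) / 1e9 * ratePerSec);
        lastNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}

Bursts up to the bucket capacity are admitted immediately, while sustained load is limited to ratePerSec, which is what makes this approach friendlier to bursty workloads than a plain windowed rate.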

On 2020/11/03 21:04:37, Anna Povzner  wrote:
> Hi David,
>
> You are right, using a token bucket exposes the metric for the number of
> remaining tokens in the bucket. Since we also want to evaluate using a
> token bucket for bandwidth & request throttling, it would be better to have
> this discussion separately in a separate KIP. This KIP does not implement
> token bucket throttling, and I made sure that the KIP wiki does not mention
> it either. Hopefully, I haven't added any confusion.
>
> As David Mao is implementing the per-IP part of this KIP, he will update the
> KIP wiki and notify this thread with a couple more details that were not
> mentioned in the KIP wiki (e.g., per-IP rate metrics exposed when
> throttling).
>
> Thanks,
> Anna
>
> On Wed, Sep 2, 2020 at 1:06 PM David Jacot  wrote:
>
> > Hi Anna,
> >
> > Thanks for the update.
> >
> > If you use Token Bucket, it will expose another metric which reports the
> > number of remaining tokens in the bucket, in addition to the current rate
> > metric. It would be great to add it in the metrics section of the KIP as
> > well for completeness.
> >
> > Best,
> > David
> >
> > On Tue, Aug 11, 2020 at 4:28 AM Anna Povzner  wrote:
> >
> > > Hi All,
> > >
> > > I wanted to let everyone know that we would like to make the following
> > > changes to the KIP:
> > >
> > > 1. Expose connection acceptance rate metrics (broker-wide and
> > >    per-listener) and per-listener average throttle time metrics for
> > >    better observability and debugging.
> > >
> > > 2. KIP-599 introduced a new implementation of MeasurableStat that
> > >    implements a token bucket, which improves rate throttling for bursty
> > >    workloads (KAFKA-10162). We would like to use this same mechanism for
> > >    connection accept rate throttling.
> > >
> > > I updated the KIP to reflect these changes.
> > >
> > > Let me know if you have any concerns.
> > >
> > > Thanks,
> > >
> > > Anna
> > >
> > > On Thu, May 21, 2020 at 5:42 PM Anna Povzner  wrote:
> > >
> > > > The vote for KIP-612 has passed with 3 binding and 3 non-binding +1s,
> > > > and no objections.
> > > >
> > > > Thanks everyone for reviews and feedback,
> > > >
> > > > Anna
> > > >
> > > > On Tue, May 19, 2020 at 2:41 AM Rajini Sivaram <rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > >> +1 (binding)
> > > >>
> > > >> Thanks for the KIP, Anna!
> > > >>
> > > >> Regards,
> > > >>
> > > >> Rajini
> > > >>
> > > >>
> > > >> On Tue, May 19, 2020 at 9:32 AM Alexandre Dupriez <
> > > >> alexandre.dupr...@gmail.com> wrote:
> > > >>
> > > >> > +1 (non-binding)
> > > >> >
> > > >> > Thank you for the KIP!
> > > >> >
> > > >> >
> > > >> > Le mar. 19 mai 2020 à 07:57, David Jacot  a écrit :
> > > >> > >
> > > >> > > +1 (non-binding)
> > > >> > >
> > > >> > > Thanks for the KIP, Anna!
> > > >> > >
> > > >> > > On Tue, May 19, 2020 at 7:12 AM Satish Duggana <
> > > >> > > satish.dugg...@gmail.com> wrote:
> > > >> > >
> > > >> > > > +1 (non-binding)
> > > >> > > > Thanks Anna for the nice feature to control the connection
> > > >> > > > creation rate from the clients.
> > > >> > > >
> > > >> > > > On Tue, May 19, 2020 at 8:16 AM Gwen Shapira  wrote:
> > > >> > > >
> > > >> > > > > +1 (binding)
> > > >> > > > >
> > > >> > > > > Thank you for driving this, Anna
> > > >> > > > >
> > > >> > > > > On Mon, May 18, 2020 at 4:55 PM Anna Povzner <a...@confluent.io>
> > > >> > > > > wrote:
> > > >> > > > >
> > > >> > > > > > Hi All,
> > > >> > > > > >
> > > >> > > > > > I would like to start the vote on KIP-612: Ability to limit
> > > >> > > > > > connection creation rate on brokers.
> > > >> > > > > >
> > > >> > > > > > For reference, here is the KIP wiki:
> > > >> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-612%3A+Ability+to+Limit+Connection+Creation+Rate+on+Brokers
> > > >> > > > > >
> > > >> > > > > > And discussion thread:
> > > >> > > > > >
> > > >> > > > > >
> > > >> > > > >
> > > >> > > >
> > > >> >
> > > >>
> > >
> > 

Re: [DISCUSS] KIP-680: TopologyTestDriver should not require a Properties argument

2020-11-06 Thread Rohit Deshpande
Hi John,
Thank you for your review and the feedback.

In the existing TTD.close() method, stateDirectory.clean() is called, and it
cleans up the task and global state directories.
If RocksDB directories are not getting cleaned up by that close method, I
would like to hear how we can clean them up there.
Currently the default value of the state directory config is
/tmp/kafka-streams, so I am not setting its value explicitly in the proposed
no-argument constructor.
Does the directory have to be unique in each test? If yes, then I agree
that we can tackle RocksDB directory cleanup and unique directory creation
in a separate KIP.

Thanks,
Rohit


On Fri, Nov 6, 2020 at 7:12 AM John Roesler  wrote:

> Hello Rohit,
>
> Thanks for picking this up! I think your KIP looks good.
>
> While I was doing some cleanup of our tests before, one thing I
> encountered is that, while most tests don’t semantically need to specify
> any configs, many tests actually do set the state directory config. They
> set it specifically so that they can delete it at the end of the test.
> Otherwise, the tests would leave RocksDB directories behind.
>
> I’m wondering if we should address this issue as part of your KIP. What
> I’m thinking is this: if no state directory is specified, then we create a
> new, unique temp directory and register it for cleanup when the JVM exits.
> Additionally, we would set a flag and clean up the state dir when
> TTD.close() is called.
>
> That way, TTD tests would be by default independent and tidy.
>
> Admittedly, this is outside the current scope of your KIP, so please feel
> free to reject this idea, in which case I can file a separate ticket for
> it.
>
> Thanks!
> -John
>
> On Tue, Nov 3, 2020, at 18:59, Rohit Deshpande wrote:
> > Hi Matthias,
> > Thank you for the review and the suggestion.
> > Considering that the TopologyTestDriver constructor takes at most 3
> > parameters, with the topology being the only required one, we can definitely
> > add a new constructor `TopologyTestDriver(Topology, Instant)`.
> > Right now, I see one test where we can use this constructor:
> >
> https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamTransformTest.java#L80-L83
> > Also we can get rid of this method in TestDriver trait:
> >
> https://github.com/apache/kafka/blob/trunk/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/utils/TestDriver.scala#L32-L35
> > which is used in multiple test classes and seems redundant. I agree with
> > your suggestion.
> > Thanks,
> > Rohit
> >
> > On Tue, Nov 3, 2020 at 3:19 PM Matthias J. Sax  wrote:
> >
> > > Thanks for the KIP Rohit.
> > >
> > > Wondering if we should also add `TopologyTestDriver(Topology,
> > > Instant)`? Not totally sure, as having too many overloads could also be
> > > annoying.
> > >
> > >
> > > -Matthias
> > >
> > > On 11/3/20 10:02 AM, Rohit Deshpande wrote:
> > > > Hello all,
> > > > I have created KIP-680: TopologyTestDriver should not require a
> > > Properties
> > > > argument.
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-680%3A+TopologyTestDriver+should+not+require+a+Properties+argument
> > > >
> > > > Jira for the KIP:
> > > > https://issues.apache.org/jira/browse/KAFKA-10629
> > > >
> > > > If we end up making changes, they will look like this:
> > > > https://github.com/apache/kafka/compare/trunk...rohitrmd:KAFKA-10629
> > > >
> > > > Please have a look and let me know what you think.
> > > >
> > > > Thanks,
> > > > Rohit
> > > >
> > >
> >
>


Preview blog post for the Apache 2.7.0 release

2020-11-06 Thread Bill Bejeck
All,

I've written an initial blog post about the upcoming Apache Kafka 2.7.0
release.

Please take a look and let me know about any additions/modifications on
this thread.

https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache4

Thanks,
Bill


[jira] [Created] (KAFKA-10693) Tests instantiate QuotaManagers without closing the managers in teardown

2020-11-06 Thread David Mao (Jira)
David Mao created KAFKA-10693:
-

 Summary: Tests instantiate QuotaManagers without closing the 
managers in teardown
 Key: KAFKA-10693
 URL: https://issues.apache.org/jira/browse/KAFKA-10693
 Project: Kafka
  Issue Type: Bug
Reporter: David Mao
Assignee: David Mao






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Will the cache get invalidated if I read lots of messages using random offsets and partitions

2020-11-06 Thread Rupesh Kumar
Hi Team,

I understand that Kafka is meant for sequential access, but I have a use case
of accessing messages from Kafka based on random offsets and partitions.

For example:

There is a topic called "topic-A" and some consumers are listening to this
topic. Processing of these messages may fail inside a consumer, and in that
case we will have to reprocess those messages.
We store the offset and partition details for the failed messages.

Another consumer will handle these failed messages and put them back on the
topic. To reprocess those messages, we will fetch them from the respective
topics based on the stored offsets and partitions. So in this case we will have
lots of random offsets to access across different topics (our application
should support 1000 messages/second including all topics).


So my doubt is this:
Will accessing messages using random offsets and partitions invalidate the
cache for a topic that was being used for sequential access (other consumers
were reading sequentially)?

I want to understand the impact of this random offset/partition access on
Kafka:


  1.  Will it slow down Kafka?
  2.  Will it read messages from the cache or from disk?
  3.  If a message is not in the cache, will it be loaded from disk and stored
in the cache? In that case, what will happen to the existing data that was in
the cache and was being used for sequential access?

I went through the Kafka documentation but couldn't find answers to all my questions.
Need your help here.

Please let me know if my question is not clear or you need more info.

Regards
Rupesh
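For reference, fetching a specific record back by partition and offset is done with assign() and seek() on the Java consumer; a minimal sketch with placeholder topic, partition, and offset values:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class FetchByOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Placeholder values: the stored partition and offset of a failed message.
            TopicPartition tp = new TopicPartition("topic-A", 3);
            long storedOffset = 42L;

            // assign() + seek() bypasses group subscription and reads exactly from
            // the requested offset.
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, storedOffset);

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                // The first record returned for tp starts at storedOffset.
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}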



Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #55

2020-11-06 Thread Apache Jenkins Server
See 


Changes:

[Bill Bejeck] MINOR: Add back section taken out by mistake (#9544)


--
[...truncated 6.87 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: Confluence edit permissions

2020-11-06 Thread Jun Rao
Hi, David,

Thanks for your interest. Just gave you the wiki permissions.

Jun

On Fri, Nov 6, 2020 at 11:03 AM David Mao  wrote:

> Hi all,
>
> I'd like to make an update to a KIP, can I get edit permissions for
> Confluence?
>
> Thanks,
> David
>


Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #47

2020-11-06 Thread Apache Jenkins Server
See 




Confluence edit permissions

2020-11-06 Thread David Mao
Hi all,

I'd like to make an update to a KIP, can I get edit permissions for
Confluence?

Thanks,
David


[jira] [Resolved] (KAFKA-10393) Message for fetch snapshot and fetch

2020-11-06 Thread Jose Armando Garcia Sancio (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Armando Garcia Sancio resolved KAFKA-10393.

Resolution: Duplicate

> Message for fetch snapshot and fetch
> 
>
> Key: KAFKA-10393
> URL: https://issues.apache.org/jira/browse/KAFKA-10393
> Project: Kafka
>  Issue Type: Sub-task
>  Components: replication
>Reporter: Jose Armando Garcia Sancio
>Assignee: Jose Armando Garcia Sancio
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-06 Thread Kowshik Prakasam
Hi Satish,

Thanks for your response.

5015. That makes sense, good point.

5019 and 5020. My 2 cents is that while you are implementing it, it will be
useful to update the KIP with details about the RocksDB-based design that
you envision. This will facilitate the discussions.


Cheers,
Kowshik



On Fri, Nov 6, 2020 at 5:45 AM Satish Duggana 
wrote:

> Hi Kowshik,
> Thanks for your comments.
>
> 5012. In the RemoteStorageManager interface, there is an API defined for
> each file type. For example, fetchOffsetIndex, fetchTimestampIndex etc. To
> avoid the duplication, I'd suggest we can instead have a FileType enum and
> a common get API based on the FileType.
>
> That is a good point. We can make the suggested changes.
>
>
> 5014. There are some TODO sections in the KIP. Would these be filled up in
> future iterations?
>
> Right.
>
> 5015. Under "Topic deletion lifecycle", I'm trying to understand why do we
> need delete_partition_marked as well as the delete_partition_started
> messages. I couldn't spot a drawback if supposing we simplified the design
> such that the controller would only write delete_partition_started message,
> and RemoteLogCleaner (RLC) instance picks it up for processing. What am I
> Missing?
>
> Having the delete_partition_marked event does not add any complexity, but
> it gives an audit trail of the source of the respective action. IMHO, removing
> it does not make the design simpler.
>
> 5016. Under "Topic deletion lifecycle", step (4) is mentioned as "RLC gets
> all the remote log segments for the partition and each of these remote log
> segments is deleted with the next steps.". Since the RLC instance runs on
> each tier topic partition leader, how does the RLC then get the list of
> remote log segments to be deleted? It will be useful to add that detail to
> the KIP.
>
> Sure, we will address that in the KIP.
>
> 5017. Under "Public Interfaces -> Configs", there is a line mentioning "We
> will support flipping remote.log.storage.enable in next versions." It will
> be useful to mention this in the "Future Work" section of the KIP too.
>
> That makes sense. Will add that in future work items.
>
> 5018. The KIP introduces a number of configuration parameters. It will be
> useful to mention in the KIP if the user should assume these as static
> configuration in the server.properties file, or dynamic configuration which
> can be modified without restarting the broker.
>
> As discussed earlier, we will update with the config types.
>
> 5019.  Maybe this is planned as a future update to the KIP, but I thought
> I'd mention it here. Could you please add details to the KIP on why RocksDB
> was chosen as the default cache implementation of RLMM, and how it is going
> to be used? Were alternatives compared/considered? For example, it would be
> useful to explain/evaluate the following: 1) debuggability of the RocksDB
> JNI interface, 2) performance, 3) portability across platforms and 4)
> interface parity of RocksDB’s JNI api with it's underlying C/C++ api.
>
> RocksDB is widely used in Kafka Streams. We were evaluating RocksDB and a
> custom file store. A custom file store adds a lot of complexity in maintaining
> and compacting the files, etc.; RocksDB already provides the required features
> and is simple to use. We are working on the RocksDB implementation with a
> couple of approaches and will update the results once we are done.
>
> 5020. Following up on (5019), for the RocksDB cache, it will be useful to
> explain the relationship/mapping between the following in the KIP: 1) # of
> tiered partitions, 2) # of partitions of metadata topic
> __remote_log_metadata and 3) # of RocksDB instances. i.e. is the plan to
> have a RocksDB instance per tiered partition, or per metadata topic
> partition, or just 1 for per broker?
>
> We are exploring having no more than 2 RocksDB instances per broker.
>
> 5021. I was looking at the implementation prototype (PR link:
> https://github.com/apache/kafka/pull/7561). It seems that a boolean
> attribute is being introduced into the Log layer to check if remote log
> capability is enabled. While the boolean footprint is small at the moment,
> this can easily grow in the future and become harder to
> test/maintain, considering that the Log layer is already pretty complex. We
> should start thinking about how to manage such changes to the Log layer
> (for the purpose of improved testability, better separation of concerns and
> readability). One proposal I have is to take a step back and define a
> higher level Log interface. Then, the Broker code can be changed to use
> this interface. It can be changed such that only a handle to the interface
> is exposed to other components (such as LogCleaner, ReplicaManager etc.)
> and not the underlying Log object. This approach keeps the user of the Log
> layer agnostic of the whereabouts of the data. Underneath the interface,
> the implementing classes can completely separate local log capabilities
> from the remote log. For example, 

[DISCUSS] KIP-681: Rename master key in delegation token feature

2020-11-06 Thread Tom Bentley
Hi,

I'd like to start discussion on KIP-681 which proposes to rename a broker
config to use a more racially neutral term:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-681%3A+Rename+master+key+in+delegation+token+feature

As always, I'd be grateful for any feedback.

Kind regards,

Tom


[jira] [Created] (KAFKA-10692) Rename broker master key config for KIP 681

2020-11-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10692:
---

 Summary: Rename broker master key config for KIP 681
 Key: KAFKA-10692
 URL: https://issues.apache.org/jira/browse/KAFKA-10692
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-680: TopologyTestDriver should not require a Properties argument

2020-11-06 Thread John Roesler
Hello Rohit,

Thanks for picking this up! I think your KIP looks good.

While I was doing some cleanup of our tests before, one thing I encountered is 
that, while most tests don’t semantically need to specify any configs, many 
tests actually do set the state directory config. They set it specifically so 
that they can delete it at the end of the test. Otherwise, the tests would 
leave RocksDB directories behind.

I’m wondering if we should address this issue as part of your KIP. What I’m 
thinking is this: if no state directory is specified, then we create a new, 
unique temp directory and register it for cleanup when the JVM exits. 
Additionally, we would set a flag and clean up the state dir when TTD.close() 
is called.
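A minimal sketch of that idea (hypothetical helper, not the actual implementation):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

// Hypothetical sketch: create a unique per-driver state directory and make sure
// it is removed either from close() or, as a fallback, when the JVM exits.
final class TempStateDir {
    static Path createAndRegisterForCleanup() {
        try {
            final Path dir = Files.createTempDirectory("topology-test-driver-state-");
            Runtime.getRuntime().addShutdownHook(new Thread(() -> delete(dir)));
            return dir;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Also called from TTD.close() when the driver created the directory itself.
    static void delete(Path dir) {
        try {
            if (Files.exists(dir)) {
                Files.walk(dir)
                     .sorted(Comparator.reverseOrder())   // delete children before parents
                     .forEach(p -> p.toFile().delete());
            }
        } catch (IOException e) {
            // best effort on shutdown
        }
    }
}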

That way, TTD tests would be by default independent and tidy.

Admittedly, this is outside the current scope of your KIP, so please feel free 
to reject this idea, in which case I can file a separate ticket for it. 

Thanks!
-John

On Tue, Nov 3, 2020, at 18:59, Rohit Deshpande wrote:
> Hi Matthias,
> Thank you for the review and the suggestion.
> Considering at most 3 parameters to the constructor of 
> TopologyTestDriver
> and topology being required parameter, we can definitely add a new
> constructor `TopologyTestDriver(Topology, Instant)` .
> Right now, I see one test where we can use this constructor:
> https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/kstream/internals/KStreamTransformTest.java#L80-L83
> Also we can get rid of this method in TestDriver trait:
> https://github.com/apache/kafka/blob/trunk/streams/streams-scala/src/test/scala/org/apache/kafka/streams/scala/utils/TestDriver.scala#L32-L35
> which is used in multiple test classes and seems redundant. I agree with
> your suggestion.
> Thanks,
> Rohit
> 
> On Tue, Nov 3, 2020 at 3:19 PM Matthias J. Sax  wrote:
> 
> > Thanks for the KIP Rohit.
> >
> > Wondering, if we should also add `TopologyTestDriver(Topology,
> > Instant)`? Not totally sure, as having too many overload could also be
> > annoying.
> >
> >
> > -Matthias
> >
> > On 11/3/20 10:02 AM, Rohit Deshpande wrote:
> > > Hello all,
> > > I have created KIP-680: TopologyTestDriver should not require a
> > Properties
> > > argument.
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-680%3A+TopologyTestDriver+should+not+require+a+Properties+argument
> > >
> > > Jira for the KIP:
> > > https://issues.apache.org/jira/browse/KAFKA-10629
> > >
> > > If we end up making changes, they will look like this:
> > > https://github.com/apache/kafka/compare/trunk...rohitrmd:KAFKA-10629
> > >
> > > Please have a look and let me know what you think.
> > >
> > > Thanks,
> > > Rohit
> > >
> >
>


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-11-06 Thread Satish Duggana
Hi Kowshik,
Thanks for your comments.

5012. In the RemoteStorageManager interface, there is an API defined for
each file type. For example, fetchOffsetIndex, fetchTimestampIndex etc. To
avoid the duplication, I'd suggest we can instead have a FileType enum and
a common get API based on the FileType.

That is a good point. We can make the suggested changes.
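For illustration, the consolidation suggested in 5012 could look roughly like the sketch below (RemoteLogSegmentMetadata and RemoteStorageException are the types already defined in the KIP and are referenced without being redeclared; the enum and method names here are only a guess, not the final interface):

// Sketch of the suggested consolidation: a single fetch method keyed by a
// file/index type enum instead of one method per file type
// (fetchOffsetIndex, fetchTimestampIndex, ...).
enum RemoteLogSegmentFileType {
    OFFSET_INDEX, TIMESTAMP_INDEX, PRODUCER_SNAPSHOT, TRANSACTION_INDEX,
    LEADER_EPOCH_CHECKPOINT, SEGMENT
}

interface RemoteStorageFetcher {
    // Returns a stream over the requested file of the given remote log segment.
    java.io.InputStream fetch(RemoteLogSegmentMetadata metadata,
                              RemoteLogSegmentFileType fileType)
        throws RemoteStorageException;
}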


5014. There are some TODO sections in the KIP. Would these be filled up in
future iterations?

Right.

5015. Under "Topic deletion lifecycle", I'm trying to understand why do we
need delete_partition_marked as well as the delete_partition_started
messages. I couldn't spot a drawback if supposing we simplified the design
such that the controller would only write delete_partition_started message,
and RemoteLogCleaner (RLC) instance picks it up for processing. What am I
Missing?

Having the delete_partition_marked event does not add any complexity, but
it gives an audit trail of the source of the respective action. IMHO, removing
it does not make the design simpler.

5016. Under "Topic deletion lifecycle", step (4) is mentioned as "RLC gets
all the remote log segments for the partition and each of these remote log
segments is deleted with the next steps.". Since the RLC instance runs on
each tier topic partition leader, how does the RLC then get the list of
remote log segments to be deleted? It will be useful to add that detail to
the KIP.

Sure, we will address that in the KIP.

5017. Under "Public Interfaces -> Configs", there is a line mentioning "We
will support flipping remote.log.storage.enable in next versions." It will
be useful to mention this in the "Future Work" section of the KIP too.

That makes sense. Will add that in future work items.

5018. The KIP introduces a number of configuration parameters. It will be
useful to mention in the KIP if the user should assume these as static
configuration in the server.properties file, or dynamic configuration which
can be modified without restarting the broker.

As discussed earlier, we will update with the config types.

5019.  Maybe this is planned as a future update to the KIP, but I thought
I'd mention it here. Could you please add details to the KIP on why RocksDB
was chosen as the default cache implementation of RLMM, and how it is going
to be used? Were alternatives compared/considered? For example, it would be
useful to explain/evaluate the following: 1) debuggability of the RocksDB
JNI interface, 2) performance, 3) portability across platforms and 4)
interface parity of RocksDB’s JNI api with it's underlying C/C++ api.

RocksDB is widely used in Kafka Streams. We were evaluating RocksDB and a
custom file store. A custom file store adds a lot of complexity in maintaining
and compacting the files, etc.; RocksDB already provides the required features
and is simple to use. We are working on the RocksDB implementation with a
couple of approaches and will update the results once we are done.

5020. Following up on (5019), for the RocksDB cache, it will be useful to
explain the relationship/mapping between the following in the KIP: 1) # of
tiered partitions, 2) # of partitions of metadata topic
__remote_log_metadata and 3) # of RocksDB instances. i.e. is the plan to
have a RocksDB instance per tiered partition, or per metadata topic
partition, or just 1 for per broker?

We are exploring having no more than 2 RocksDB instances per broker.

5021. I was looking at the implementation prototype (PR link:
https://github.com/apache/kafka/pull/7561). It seems that a boolean
attribute is being introduced into the Log layer to check if remote log
capability is enabled. While the boolean footprint is small at the moment,
this can easily grow in the future and become harder to
test/maintain, considering that the Log layer is already pretty complex. We
should start thinking about how to manage such changes to the Log layer
(for the purpose of improved testability, better separation of concerns and
readability). One proposal I have is to take a step back and define a
higher level Log interface. Then, the Broker code can be changed to use
this interface. It can be changed such that only a handle to the interface
is exposed to other components (such as LogCleaner, ReplicaManager etc.)
and not the underlying Log object. This approach keeps the user of the Log
layer agnostic of the whereabouts of the data. Underneath the interface,
the implementing classes can completely separate local log capabilities
from the remote log. For example, the Log class can be simplified to only
manage logic surrounding local log segments and metadata. Additionally, a
wrapper class can be provided (implementing the higher level Log interface)
which will contain any/all logic surrounding tiered data. The wrapper
class will wrap around an instance of the Log class delegating the local
log logic to it. Finally, a handle to the wrapper class can be exposed to
the other components wherever they need a handle to the higher level Log
interface.

It is still a draft 
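To make the higher-level Log interface proposal in (5021) concrete, a rough sketch of the shape being described (the interface and method names here are illustrative, not from the KIP or the prototype PR):

import org.apache.kafka.common.record.MemoryRecords;
import org.apache.kafka.common.record.Records;

// Illustrative sketch: components such as ReplicaManager or LogCleaner would hold
// a handle to this interface and stay agnostic of whether a given offset range
// lives in local segments or in remote (tiered) storage.
public interface UnifiedLog {
    long logStartOffset();
    long logEndOffset();

    // Reads may be served from local segments or, in a tiered implementation,
    // fetched back from remote storage.
    Records read(long startOffset, int maxBytes);

    // Appends always go to the local log; tiering happens asynchronously behind
    // the interface.
    void append(MemoryRecords records);
}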

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #207

2020-11-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #216

2020-11-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Always return partitions with diverging epochs in fetch 
response (#9567)


--
[...truncated 3.46 MB...]

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c905d41, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@618229bb, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@618229bb, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c8ef130, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c8ef130, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5cf2fdff, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5cf2fdff, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@62b6c975, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@62b6c975, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1209eb69, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1209eb69, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@49558f69, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@49558f69, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2038eb9, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2038eb9, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@56aa02ed, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@56aa02ed, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@b25e9f2, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@b25e9f2, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@6e6e0138, 
timestamped = false, caching = false, logging = 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #215

2020-11-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10673: Cache inter broker listener name used in connection 
quotas (#9555)


--
[...truncated 3.46 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@73934268,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@73934268,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@690e467f,
 timestamped = true, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@690e467f,
 timestamped = true, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5aeda1ce,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5aeda1ce,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@195d4936,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@195d4936,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7034d877,
 timestamped = true, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7034d877,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6b9255b0,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6b9255b0,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5c467add,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5c467add,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2f218c9a,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@2f218c9a,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@79000e69,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@79000e69,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@46eb2c, 
timestamped = true, caching = false, logging = false] STARTED


[jira] [Resolved] (KAFKA-10673) ConnectionQuotas should cache interbroker listener name

2020-11-06 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-10673.
-
Resolution: Fixed

> ConnectionQuotas should cache interbroker listener name
> ---
>
> Key: KAFKA-10673
> URL: https://issues.apache.org/jira/browse/KAFKA-10673
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.7.0
>Reporter: David Mao
>Assignee: David Mao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: Screen Shot 2020-11-03 at 8.34.48 PM.png
>
>
> ConnectionQuotas.protectedListener calls config.interBrokerListenerName. This 
> is a surprisingly expensive call that creates a copy of all properties set on 
> the config. Given that this method is called multiple times per connection 
> created, this is not really ideal.
>  
> Profile attached showing allocations in protectedListener()
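(For illustration only, not the actual patch: the remedy amounts to reading the listener name once and reusing it, roughly as below; the class and method names are stand-ins.)

// Illustrative sketch: read the expensive config value once and reuse it for every
// new connection, instead of calling config.interBrokerListenerName (which copies
// all properties) on each connection.
final class ConnectionQuotasSketch {
    private final String interBrokerListenerName;

    ConnectionQuotasSketch(KafkaConfigLike config) {
        // Cached once at construction time.
        this.interBrokerListenerName = config.interBrokerListenerName();
    }

    boolean isProtectedListener(String listenerName) {
        return listenerName.equals(interBrokerListenerName);
    }
}

// Stand-in for the real KafkaConfig, only to keep this sketch self-contained.
interface KafkaConfigLike {
    String interBrokerListenerName();
}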
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)