Re: Add a customized logo for Kafka Streams

2020-03-07 Thread Sophie Blee-Goldman
Matthias makes a good point about being careful not to position Streams as
outside of Apache Kafka. One obvious thing we could do is just include the
Kafka logo as-is in the Streams logo, somehow.

I have some unqualified opinions on what that might look like:
A good logo is simple and clean, so incorporating the Kafka logo as a minor
detail within a more complicated image is probably not the best way to get
the quick and easy comprehension/recognition that we're going for.

That said, I'd throw out the idea of just attaching something to the Kafka
logo, perhaps a stream-dwelling animal, perhaps a (river) otter? It could be
"swimming" left of the Kafka logo, with its head touching the upper circle
and its tail touching the bottom one. Like Streams, it starts with Kafka and
ends with Kafka (ie reading input topics and writing to output topics).

Without further ado, here's my very rough prototype for the Kafka Streams
logo:

[image: image.png]
Obviously the real thing would be colored and presumably done by someone
with actual artistic talent/experience (or at least Photoshop ability).

Thoughts?

On Sat, Mar 7, 2020, 1:08 PM Matthias J. Sax  wrote:

> Boyang,
>
> thanks for starting this discussion. I like the idea in general
> however we need to be a little careful IMHO -- as you mentioned Kafka
> is one project and thus we should avoid the impression that Kafka
> Streams is not part of Apache Kafka.
>
> Besides this, many projects use animals that are often very adorable.
> Maybe we could find a cute Streams related mascot? :)
>
> I would love to hear opinions especially from the PMC if having a logo
> for Kafka Streams is a viable thing to do.
>
>
> -Matthias
>
> On 3/3/20 1:01 AM, Patrik Kleindl wrote:
> > Hi Boyang,
> > Great idea, that would help in some discussions. To throw in a first
> > idea: https://imgur.com/a/UowXaMk
> > best regards
> > Patrik
> >
> > On Mon, 2 Mar 2020 at 18:23, Boyang Chen
> >  wrote:
> >
> >> Hey Apache Kafka committers and community folks,
> >>
> >> over the years Kafka Streams has been widely adopted, and tons of
> >> blog posts and tech talks have been trying to introduce it to
> >> people with a need for stream processing. As it is part of the Apache
> >> Kafka project, there is always an awkward situation where Kafka
> >> Streams cannot be promoted as a standalone streaming engine,
> >> which makes people confused about its relation to Kafka.
> >>
> >> So, do we want to introduce a customized logo just for Streams?
> >> The immediate benefit is that when people are making technical
> >> decisions, we could list Streams as a logo just like Flink and
> >> Spark Streaming, instead of putting the Kafka logo there, as it is not
> >> really a legitimate comparison between a processing framework
> >> and a messaging system. Should we do a KIP for this?
> >>
> >> Boyang
> >>
> >


Re: [VOTE] 2.4.1 RC0

2020-03-07 Thread Vahid Hashemian
+1 (binding)

Verified signature, built from source, and ran quickstart
successfully (using openjdk version "11.0.6").
I also ran unit tests locally which resulted in a few flaky tests for which
there are already open Jiras:

ReassignPartitionsClusterTest.shouldMoveSinglePartitionWithinBroker
ConsumerBounceTest.testCloseDuringRebalance

ConsumerBounceTest.testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize
PlaintextEndToEndAuthorizationTest.testNoConsumeWithDescribeAclViaAssign

SaslClientsWithInvalidCredentialsTest.testManualAssignmentConsumerWithAuthenticationFailure
SaslMultiMechanismConsumerTest.testCoordinatorFailover

Thanks for running the release, Bill.

Regards,
--Vahid

On Fri, Mar 6, 2020 at 9:20 AM Colin McCabe  wrote:

> +1 (binding)
>
> Checked the git hash and branch, looked at the docs a bit.  Ran quickstart
> (although not the connect or streams parts).  Looks good.
>
> best,
> Colin
>
>
> On Fri, Mar 6, 2020, at 07:31, David Arthur wrote:
> > +1 (binding)
> >
> > Downloaded kafka_2.13-2.4.1 and verified signature, ran quickstart,
> > everything looks good.
> >
> > Thanks for running this release, Bill!
> >
> > -David
> >
> >
> >
> > On Wed, Mar 4, 2020 at 6:06 AM Eno Thereska 
> wrote:
> >
> > > Hi Bill,
> > >
> > > I built from source and ran unit and integration tests. They passed.
> > > There was a large number of skipped tests, but I'm assuming that is
> > > intentional.
> > >
> > > Cheers
> > > Eno
> > >
> > > On Tue, Mar 3, 2020 at 8:42 PM Eric Lalonde  wrote:
> > > >
> > > > Hi,
> > > >
> > > > I ran:
> > > > $ verify-kafka-rc.sh 2.4.1 https://home.apache.org/~bbejeck/kafka-2.4.1-rc0
> > > > (script: https://github.com/elalonde/kafka/blob/master/bin/verify-kafka-rc.sh)
> > > >
> > > > All checksums and signatures are good and all unit and integration
> tests
> > > that were executed passed successfully.
> > > >
> > > > - Eric
> > > >
> > > > > On Mar 2, 2020, at 6:39 PM, Bill Bejeck  wrote:
> > > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the first candidate for release of Apache Kafka 2.4.1.
> > > > >
> > > > > This is a bug fix release and it includes fixes and improvements
> from
> > > 38
> > > > > JIRAs, including a few critical bugs.
> > > > >
> > > > > Release notes for the 2.4.1 release:
> > > > >
> https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/RELEASE_NOTES.html
> > > > >
> > > > > *Please download, test and vote by Thursday, March 5, 9 am PT*
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > https://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > >
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > >
> > > > > * Javadoc:
> > > > > https://home.apache.org/~bbejeck/kafka-2.4.1-rc0/javadoc/
> > > > >
> > > > > * Tag to be voted upon (off 2.4 branch) is the 2.4.1 tag:
> > > > > https://github.com/apache/kafka/releases/tag/2.4.1-rc0
> > > > >
> > > > > * Documentation:
> > > > > https://kafka.apache.org/24/documentation.html
> > > > >
> > > > > * Protocol:
> > > > > https://kafka.apache.org/24/protocol.html
> > > > >
> > > > > * Successful Jenkins builds for the 2.4 branch:
> > > > > Unit/integration tests: Links to successful unit/integration test
> > > build to
> > > > > follow
> > > > > System tests:
> > > > > https://jenkins.confluent.io/job/system-test-kafka/job/2.4/152/
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Bill Bejeck
> > > >
> > >
> >
> >
> > --
> > David Arthur
> >
>


-- 

Thanks!
--Vahid


Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-03-07 Thread John Roesler
Thanks Matthias,

Good idea. I've changed the ticket name and added a note
clarifying that this ticket is not the same as
https://issues.apache.org/jira/browse/KAFKA-7224

Incidentally, I learned that I never documented my reasons
for abandoning my work on KAFKA-7224 ! I've now updated
that ticket, too, so your question had an unexpected side-benefit.

Thanks,
-John

On Sat, Mar 7, 2020, at 18:01, Matthias J. Sax wrote:
> Thanks for clarification.
> 
> Can you maybe update the Jira ticket? Do we have a ticket for
> spill-to-disk? Maybe link to it and explain that it's two different
> things? Maybe even rename the ticket to something clearer, ie,
> "make suppress result queryable" or similar?
> 
> 
> -Matthias
> 
> On 3/7/20 1:58 PM, John Roesler wrote:
> > Hey Matthias,
> >
> > I’m sorry if the ticket was poorly stated. The ticket is to add a
> DSL overload to pass a Materialized argument to suppress. As a result,
> the result of the suppression would be queriable.
> >
> > This is unrelated to “persistent buffer” aka “spill-to-disk”.
> >
> > There was some confusion before about whether this ticket could be
> implemented as “query the buffer”. Maybe it can, but not trivially.
> The obvious way is just to add a new state store which we write the
> results into just before we forward. I.e., it’s exactly like the
> materialized variant of any stateless KTable operation.
> >
> > Thanks, John
> >
> > On Sat, Mar 7, 2020, at 15:32, Matthias J. Sax wrote: Thanks for
> > the KIP Dongjin,
> >
> > I am still not sure if I can follow, what might also be caused by
> > the backing JIRA ticket (maybe John can clarify the intent of the
> > ticket as he created it):
> >
> > Currently, suppress() only uses an in-memory buffer and my
> > understanding of the Jira is, to add the ability to use a
> > persistent buffer (ie, spill to disk backed by RocksDB).
> >
> > Adding a persistent buffer is completely unrelated to allow
> > querying the buffer. In fact, one could query an in-memory buffer,
> > too. However, querying the buffer does not really seem to be useful
> > as pointed out by John, as you can always query the upstream KTable
> > store.
> >
> > Also note that for the emit-on-window-close case the result is
> > deleted from the buffer when it is emitted, and thus cannot be
> > queried any longer.
> >
> >
> > Can you please clarify if you intend to allow spilling to disk or
> > if you intend to enable IQ (even if I don't see why querying makes
> > sense, as the data is either upstream or deleted). Also, if you
> > want to enable IQ, why do we need all those new interfaces? The
> > result of a suppress() is a KTable that is the same as any other
> > key-value/windowed/sessions store?
> >
> > We should also have corresponding Jira tickets for different cases
> > to avoid the confusion I am in atm :)
> >
> >
> > -Matthias
> >
> >
> > On 2/27/20 8:21 AM, John Roesler wrote:
>  Hi Dongjin,
> 
>  No problem; glad we got it sorted out.
> 
>  Thanks again for picking this up! -John
> 
>  On Wed, Feb 26, 2020, at 09:24, Dongjin Lee wrote:
> >> I was under the impression that you wanted to expand the
> >> scope of the KIP
> > to additionally allow querying the internal buffer, not
> > just the result. Can you clarify whether you are proposing
> > to allow querying the state of the internal buffer, the
> > result, or both?
> >
> > Sorry for the confusion. As we already talked with, we only
> > need to query the suppressed output, not the internal
> > buffer. The current implementation is wrong. After refining
> > the KIP and implementation accordingly I will notify you -
> > I must be confused, also.
> >
> > Thanks, Dongjin
> >
> > On Tue, Feb 25, 2020 at 12:17 AM John Roesler
> >  wrote:
> >
> >> Hi Dongjin,
> >>
> >> Ah, I think I may have been confused. I 100% agree that
> >> we need a materialized variant for suppress(). Then, you
> >> could do: ...suppress(...,
> >> Materialized.as(“final-count”))
> >>
> >> If that’s your proposal, then we are on the same page.
> >>
> >> I was under the impression that you wanted to expand the
> >> scope of the KIP to additionally allow querying the
> >> internal buffer, not just the result. Can you clarify
> >> whether you are proposing to allow querying the state of
> >> the internal buffer, the result, or both?
> >>
> >> Thanks, John
> >>
> >> On Thu, Feb 20, 2020, at 08:41, Dongjin Lee wrote:
> >>> Hi John, Thanks for your kind explanation with an
> >>> example.
> >>>
>  But it feels like you're saying you're trying to do
>  something different
> >>> than just query the windowed key and get back the
> >>> current count?
> >>>
> >>> Yes, for example, what if we need to retrieve the (all
> >>> or range) keys
> >

[jira] [Created] (KAFKA-9682) Flaky Test KafkaBasedLogTest#testSendAndReadToEnd

2020-03-07 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-9682:
--

 Summary: Flaky Test KafkaBasedLogTest#testSendAndReadToEnd
 Key: KAFKA-9682
 URL: https://issues.apache.org/jira/browse/KAFKA-9682
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, unit tests
Reporter: Matthias J. Sax


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/1048/testReport/org.apache.kafka.connect.util/KafkaBasedLogTest/testSendAndReadToEnd/]
{quote}java.lang.AssertionError: expected:<2> but was:<0> at 
org.junit.Assert.fail(Assert.java:89) at 
org.junit.Assert.failNotEquals(Assert.java:835) at 
org.junit.Assert.assertEquals(Assert.java:647) at 
org.junit.Assert.assertEquals(Assert.java:633) at 
org.apache.kafka.connect.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:355){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-03-07 Thread Matthias J. Sax

Thanks for clarification.

Can you maybe update the Jira ticket? Do we have a ticket for
spill-to-disk? Maybe link to it and explain that it's two different
things? Maybe even rename the ticket to something clearer, ie,
"make suppress result queryable" or similar?


-Matthias

On 3/7/20 1:58 PM, John Roesler wrote:
> Hey Matthias,
>
> I’m sorry if the ticket was poorly stated. The ticket is to add a
DSL overload to pass a Materialized argument to suppress. As a result,
the result of the suppression would be queriable.
>
> This is unrelated to “persistent buffer” aka “spill-to-disk”.
>
> There was some confusion before about whether this ticket could be
implemented as “query the buffer”. Maybe it can, but not trivially.
The obvious way is just to add a new state store which we write the
results into just before we forward. I.e., it’s exactly like the
materialized variant of any stateless KTable operation.
>
> Thanks, John
>
> On Sat, Mar 7, 2020, at 15:32, Matthias J. Sax wrote: Thanks for
> the KIP Dongjin,
>
> I am still not sure if I can follow, what might also be caused by
> the backing JIRA ticket (maybe John can clarify the intent of the
> ticket as he created it):
>
> Currently, suppress() only uses an in-memory buffer and my
> understanding of the Jira is, to add the ability to use a
> persistent buffer (ie, spill to disk backed by RocksDB).
>
> Adding a persistent buffer is completely unrelated to allow
> querying the buffer. In fact, one could query an in-memory buffer,
> too. However, querying the buffer does not really seem to be useful
> as pointed out by John, as you can always query the upstream KTable
> store.
>
> Also note that for the emit-on-window-close case the result is
> deleted from the buffer when it is emitted, and thus cannot be
> queried any longer.
>
>
> Can you please clarify if you intend to allow spilling to disk or
> if you intend to enable IQ (even if I don't see why querying makes
> sense, as the data is either upstream or deleted). Also, if you
> want to enable IQ, why do we need all those new interfaces? The
> result of a suppress() is a KTable that is the same as any other
> key-value/windowed/sessions store?
>
> We should also have corresponding Jira tickets for different cases
> to avoid the confusion I am in atm :)
>
>
> -Matthias
>
>
> On 2/27/20 8:21 AM, John Roesler wrote:
 Hi Dongjin,

 No problem; glad we got it sorted out.

 Thanks again for picking this up! -John

 On Wed, Feb 26, 2020, at 09:24, Dongjin Lee wrote:
>> I was under the impression that you wanted to expand the
>> scope of the KIP
> to additionally allow querying the internal buffer, not
> just the result. Can you clarify whether you are proposing
> to allow querying the state of the internal buffer, the
> result, or both?
>
> Sorry for the confusion. As we already talked with, we only
> need to query the suppressed output, not the internal
> buffer. The current implementation is wrong. After refining
> the KIP and implementation accordingly I will notify you -
> I must be confused, also.
>
> Thanks, Dongjin
>
> On Tue, Feb 25, 2020 at 12:17 AM John Roesler
>  wrote:
>
>> Hi Dongjin,
>>
>> Ah, I think I may have been confused. I 100% agree that
>> we need a materialized variant for suppress(). Then, you
>> could do: ...suppress(...,
>> Materialized.as(“final-count”))
>>
>> If that’s your proposal, then we are on the same page.
>>
>> I was under the impression that you wanted to expand the
>> scope of the KIP to additionally allow querying the
>> internal buffer, not just the result. Can you clarify
>> whether you are proposing to allow querying the state of
>> the internal buffer, the result, or both?
>>
>> Thanks, John
>>
>> On Thu, Feb 20, 2020, at 08:41, Dongjin Lee wrote:
>>> Hi John, Thanks for your kind explanation with an
>>> example.
>>>
 But it feels like you're saying you're trying to do
 something different
>>> than just query the windowed key and get back the
>>> current count?
>>>
>>> Yes, for example, what if we need to retrieve the (all
>>> or range) keys
>> with
>>> a closed window? In this example, let's imagine we need
>>> to retrieve only (key=A, window=10), not (key=A,
>>> window=20).
>>>
>>> Of course, the value accompanied by a flushed key is
>>> exactly the same to the one in the upstream KTable;
>>> However, if our intention is not pointing out a
>>> specific key but retrieving a group of unspecified
>>> keys, we stuck
>> in
>>> trouble - since we can't be sure which key is flushed
>>> out beforehand.
>>>
>>> One workaround would be materializing it with
>>> `suppressed.filter(e -> true,
>>> Materialized.as("final-count"))`. But I think providing a
>>> materialized variant for suppress method is better than this
>>> workaround.

Re: KIP-560 Discuss

2020-03-07 Thread Matthias J. Sax
Thanks for the KIP Sang!

I have a couple of more comments about the wiki page:

(1) The "Public Interface" section should only list the new stuff. This
KIP does not change anything with regard to the existing options
`--input-topic` or `--intermediate-topic` and thus it's just "noise" to
have them in this section. Only list the new option `allInputTopicsOption`.

(2) Don't post code, ie, the implementation of private methods. KIPs
should only describe public interface changes.

(3) The KIP should describe that we intend to use
`describeConsumerGroups` calls to discover the topic names -- atm, it's
unclear from the KIP how the new feature actually works (a sketch of the
idea follows below this message).

(4) If the new flag is used, we will discover input and intermediate
topics. Hence, the name is misleading. We could call it
`--all-user-topics` and explain in the description that "user topics"
are input and intermediate topics for this case (in general, also output
topics are "user topics" but there is nothing to be done for output
topics). Thoughts?


-Matthias
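
For illustration, a minimal sketch of the `describeConsumerGroups` idea from
point (3), using the Java Admin client (Sophie describes the approach in the
quoted mail below). This is a sketch only, not the reset tool's actual
implementation; `adminProps` and `applicationId` are assumed, and error
handling is omitted:

    import java.util.Collections;
    import java.util.Set;
    import java.util.stream.Collectors;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.ConsumerGroupDescription;
    import org.apache.kafka.common.TopicPartition;

    Admin admin = Admin.create(adminProps);

    // The Streams application.id doubles as the consumer group id, so
    // describing the group yields every member's assigned topic-partitions.
    ConsumerGroupDescription group = admin
        .describeConsumerGroups(Collections.singletonList(applicationId))
        .describedGroups().get(applicationId).get();

    // Collect every assigned topic: these are the input and intermediate
    // ("user") topics the application reads.
    Set<String> userTopics = group.members().stream()
        .flatMap(m -> m.assignment().topicPartitions().stream())
        .map(TopicPartition::topic)
        .collect(Collectors.toSet());

As Sophie notes, group.partitionAssignor() could additionally be checked to
verify that the group really belongs to a Streams application.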

On 1/27/20 6:35 AM, Sang wn Lee wrote:
> thank you John Roesler
> 
> It is a good idea
> "—all-input-topics"
> 
> I agree with you
> 
> I'll update right away
> 
> 
> On 2020/01/24 14:14:17, "John Roesler"  wrote: 
>> Hi all, thanks for the explanation. I was also not sure how the kip would be 
>> possible to implement. 
>>
>> Now that it does seem plausible, my only feedback is that the command line 
>> option could align better with the existing one. That is, the existing 
>> option is called "--input-topics", so it seems like the new one should be 
>> called "--all-input-topics". 
>>
>> Thanks,
>> John
>>
>> On Fri, Jan 24, 2020, at 01:42, Boyang Chen wrote:
>>> Thanks Sophie for the explanation! I read Sang's PR and basically he did
>>> exactly what you proposed (check it here
>>>  in case I'm wrong).
>>>
>>> I think Sophie's response answers Gwen's question already, while in the
>>> meantime for a KIP itself we are not required to mention all the internal
>>> details about how to make the changes happen (like how to actually get the
>>> external topics), considering the change scope is pretty small as well. But
>>> again, it would do no harm if we mention it inside Proposed Change session
>>> specifically so that people won't get confused about how.
>>>
>>>
>>> On Thu, Jan 23, 2020 at 8:26 PM Sophie Blee-Goldman 
>>> wrote:
>>>
 Hi all,

 I think what Gwen is trying to ask (correct me if I'm wrong) is how we can
 infer which topics are associated with
 Streams from the admin client's topic list. I agree that this doesn't seem
 possible, since as she pointed out the
 topics list (or even description) lacks the specific information we need.

 What we could do instead is use the admin client's
 `describeConsumerGroups` API to get the information
 on the Streams app's consumer group specifically -- note that the Streams
 application.id config is also used
 as the consumer group id, so each app forms a group to read from the input
 topics. We could compile a list
 of these topics just by looking at each member's assignment (and even check
 for a StreamsPartitionAssignor
 to verify that this is indeed a Streams app group, if we're being
 paranoid).

 The reset tool actually already gets the consumer group description, in
 order to validate there are no active
 consumers in the group. We may as well grab the list of topics from it
 while it's there. Or did you have something
 else in mind?

 On Sat, Jan 18, 2020 at 6:17 PM Sang wn Lee  wrote:

> Thank you
>
> I understand you
>
> 1. admin client has topic list
> 2. applicationId can only have one stream, so it won't be a problem!
> 3. For example, --input-topic [reg]
> Allowing reg solves some inconvenience
>
>
> On 2020/01/18 18:15:23, Gwen Shapira  wrote:
>> I am not sure I follow. Afaik:
>>
>> 1. Topics don't include client ID information
>> 2. Even if you did, the same ID could be used for topics that are not
> Kafka
>> Streams input
>>
>> The regex idea sounds doable, but I'm not sure it solves much?
>>
>>
>> On Sat, Jan 18, 2020, 7:12 AM Sang wn Lee 
 wrote:
>>
>>> Thank you
>>> Gwen Shapira!
>>> We'll add a flag to clear all topics by clientId
>>> It is ‘reset-all-external-topics’
>>>
>>> I also want to use regex on the input topic flag to clear all
 matching
>>> topics.
>>>
>>> On 2020/01/17 19:29:09, Gwen Shapira  wrote:
 Seems like a very nice improvement to me. But I have to admit that I
 don't understand how this will work - how could you infer the input
 topics?

 On Thu, Jan 16, 2020 at 10:03 AM Sang wn Lee 
>>> wrote:
>
> Hello,
>
> Starting this thread to discuss KIP-5

Re: [VOTE] KIP-557: Add emit on change support for Kafka Streams

2020-03-07 Thread Richard Yu
Hi Matthias,

Oh, I see. Next time, I will take that into account.
It looked like at the time there wasn't much contention over the major
points of the proposal, so I thought I could pass it.

I will also make some last modifications to the KIP.

Thanks for your vote!

Best,
Richard


On Sat, Mar 7, 2020 at 1:00 PM Matthias J. Sax  wrote:

> Richard,
>
> you cannot close a KIP as accepted with 2 binding votes. (cf
> https://cwiki.apache.org/confluence/display/KAFKA/Bylaws)
>
> You could only discard the KIP as long as it's not accepted :D
>
> However, I am +1 (binding) and thus you can close the VOTE as accepted.
>
>
> Just three minor follow-up comments:
>
> (1) In "Reporting Strategies" you mention in point (2) "Emit on update
> / non-empty content" -- I am not sure what "empty content" would be.
> This is a little bit confusing. Maybe just remove it?
>
>
> (2) "Design Reasoning"
>
> > we have decided that we will forward aggregation results if and
> > only if the timestamp and the value had not changed
>
> This sounds incorrect. If both value and timestamp have not changed,
> we would skip the update from my understanding?
>
> Ie, to phrase it differently: for a table-operation we only consider
> the value to make a comparison and if the value does not change, we
> don't emit anything (even if the timestamp changed).
>
> For windowed aggregations however, even if the value does not change,
> but the timestamp advances, we emit, ie, a changing timestamp is not
> considered idempotent for this case. (Note, that the timestamp can
> never go backward for this case, because it's computed as maximum over
> all input records for the window).
>
>
> (3) The discussion about stream time is very interesting. I agree that
> it's an orthogonal concern to this KIP.
>
>
>
> -Matthias
>
>
> On 3/6/20 1:52 PM, Richard Yu wrote:
> > Hi all,
> >
> > I have decided to pass this KIP with 2 binding votes and 3
> > non-binding votes (including mine). I will update KIP status
> > shortly after this.
> >
> > Best, Richard
> >
> > On Thu, Mar 5, 2020 at 3:45 PM Richard Yu
> >  wrote:
> >
> >> Hi all,
> >>
> >> Just polling for some last changes on the name. I think that
> >> since there doesn't seem to be much objection to any major
> >> changes in the KIP, I will pass it this Friday.
> >>
> >> If you feel that we still need some more discussion, please let
> >> me know. :)
> >>
> >> Best, Richard
> >>
> >> P.S. Will start working on a PR for this one soon.
> >>
> >> On Wed, Mar 4, 2020 at 1:30 PM Guozhang Wang 
> >> wrote:
> >>
> >>> Regarding the metric name, I was actually trying to be
> >>> consistent with the node-level `suppression-emit` as I feel
> >>> this one's characteristics are closer to that. If other folks
> >>> feel it's better to align with the task-level "dropped-records" I
> >>> think I can be convinced too.
> >>>
> >>>
> >>> Guozhang
> >>>
> >>> On Wed, Mar 4, 2020 at 12:09 AM Bruno Cadonna
> >>>  wrote:
> >>>
>  Hi all,
> 
>  may I make a non-binding proposal for the metric name? I
>  would prefer "skipped-idempotent-updates" to be consistent
>  with the "dropped-records".
> 
>  Best, Bruno
> 
>  On Tue, Mar 3, 2020 at 11:57 PM Richard Yu
>   wrote:
> >
> > Hi all,
> >
> > Thanks for the discussion!
> >
> > @Guozhang, I will make the corresponding changes to the KIP
> > (i.e.
>  renaming
> > the sensor and adding some notes). With the current state
> > of things, we are very close. Just need that
> >>> one
> > last binding vote.
> >
> > @Matthias J. Sax   It would be ideal
> > if we can
>  also
> > get your last two cents on this as well. Other than that,
> > we are good.
> >
> > Best, Richard
> >
> >
> > On Tue, Mar 3, 2020 at 10:46 AM Guozhang Wang
> > 
>  wrote:
> >
> >> Hi Bruno, John:
> >>
> >> 1) That makes sense. If we consider them to be
> >> node-specific metrics
>  that
> >> only applies to a subset of built-in processor nodes that
> >> are
>  irrelevant to
> >> alert-relevant metrics (just like suppression-emit (rate
> >> | total)),
>  they'd
> >> better be per-node instead of per-task and we would not
> >> associate
> >>> such
> >> events with warning. With that in mind, I'd suggest we
> >> consider
>  renaming
> >> the metric without the `dropped` keyword to distinguish
> >> it with the per-task level sensor. How about
> >> "idempotent-update-skip (rate |
>  total)"?
> >>
> >> Also a minor suggestion: we should clarify in the KIP /
> >> javadocs
> >>> which
> >> built-in processor nodes would have this metric while
> >> others don't.
> >>
> >> 2) About stream time tracking, there are multiple known
> >> issues that
> >>> we
> >> should close to improve our consistency semantics:
> >>
> >

Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-03-07 Thread John Roesler
Hey Matthias,

I’m sorry if the ticket was poorly stated. The ticket is to add a DSL overload 
to pass a Materialized argument to suppress. As a result, the result of the 
suppression would be queriable.

This is unrelated to “persistent buffer” aka “spill-to-disk”.

There was some confusion before about whether this ticket could be implemented 
as “query the buffer”. Maybe it can, but not trivially. The obvious way is just 
to add a new state store which we write the results into just before we 
forward. I.e., it’s exactly like the materialized variant of any stateless 
KTable operation. 

Thanks,
John
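
For illustration, a rough sketch of how the overload described above might
look in use, following the Materialized.as("final-count") shape discussed
downthread. The two-argument suppress() is this KIP's proposal, not the
current API, and the surrounding topology is made up:

    import java.time.Duration;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.*;
    import org.apache.kafka.streams.state.WindowStore;

    StreamsBuilder builder = new StreamsBuilder();
    KTable<Windowed<String>, Long> finalCounts = builder
        .<String, String>stream("input")
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMillis(10)).grace(Duration.ZERO))
        .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("count"))
        // proposed overload -- today suppress() takes only a Suppressed argument:
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()),
                  Materialized.as("final-count"));

The workaround mentioned downthread, suppressed.filter(e -> true,
Materialized.as("final-count")), materializes the same result today at the
cost of an extra no-op filter node.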

On Sat, Mar 7, 2020, at 15:32, Matthias J. Sax wrote:
> Thanks for the KIP Dongjin,
> 
> I am still not sure if I can follow, what might also be caused by the
> backing JIRA ticket (maybe John can clarify the intent of the ticket
> as he created it):
> 
> Currently, suppress() only uses an in-memory buffer and my
> understanding of the Jira is, to add the ability to use a persistent
> buffer (ie, spill to disk backed by RocksDB).
> 
> Adding a persistent buffer is completely unrelated to allow querying
> the buffer. In fact, one could query an in-memory buffer, too.
> However, querying the buffer does not really seem to be useful as
> pointed out by John, as you can always query the upstream KTable store.
> 
> Also note that for the emit-on-window-close case the result is deleted
> from the buffer when it is emitted, and thus cannot be queried any longer.
> 
> 
> Can you please clarify if you intend to allow spilling to disk or if
> you intend to enable IQ (even if I don't see why querying makes sense,
> as the data is either upstream or deleted). Also, if you want to
> enable IQ, why do we need all those new interfaces? The result of a
> suppress() is a KTable that is the same as any other
> key-value/windowed/sessions store?
> 
> We should also have corresponding Jira tickets for different cases to
> avoid the confusion I am in atm :)
> 
> 
> -Matthias
> 
> 
> On 2/27/20 8:21 AM, John Roesler wrote:
> > Hi Dongjin,
> >
> > No problem; glad we got it sorted out.
> >
> > Thanks again for picking this up! -John
> >
> > On Wed, Feb 26, 2020, at 09:24, Dongjin Lee wrote:
> >>> I was under the impression that you wanted to expand the scope
> >>> of the KIP
> >> to additionally allow querying the internal buffer, not just the
> >> result. Can you clarify whether you are proposing to allow
> >> querying the state of the internal buffer, the result, or both?
> >>
> >> Sorry for the confusion. As we already talked with, we only need
> >> to query the suppressed output, not the internal buffer. The
> >> current implementation is wrong. After refining the KIP and
> >> implementation accordingly I will notify you - I must be
> >> confused, also.
> >>
> >> Thanks, Dongjin
> >>
> >> On Tue, Feb 25, 2020 at 12:17 AM John Roesler
> >>  wrote:
> >>
> >>> Hi Dongjin,
> >>>
> >>> Ah, I think I may have been confused. I 100% agree that we need
> >>> a materialized variant for suppress(). Then, you could do:
> >>> ...suppress(..., Materialized.as(“final-count”))
> >>>
> >>> If that’s your proposal, then we are on the same page.
> >>>
> >>> I was under the impression that you wanted to expand the scope
> >>> of the KIP to additionally allow querying the internal buffer,
> >>> not just the result. Can you clarify whether you are proposing
> >>> to allow querying the state of the internal buffer, the result,
> >>> or both?
> >>>
> >>> Thanks, John
> >>>
> >>> On Thu, Feb 20, 2020, at 08:41, Dongjin Lee wrote:
>  Hi John, Thanks for your kind explanation with an example.
> 
> > But it feels like you're saying you're trying to do
> > something different
>  than just query the windowed key and get back the current
>  count?
> 
>  Yes, for example, what if we need to retrieve the (all or
>  range) keys
> >>> with
>  a closed window? In this example, let's imagine we need to
>  retrieve only (key=A, window=10), not (key=A, window=20).
> 
>  Of course, the value accompanied by a flushed key is exactly
>  the same to the one in the upstream KTable; However, if our
>  intention is not pointing out a specific key but retrieving a
>  group of unspecified keys, we stuck
> >>> in
>  trouble - since we can't be sure which key is flushed out
>  beforehand.
> 
>  One workaround would be materializing it with
>  `suppressed.filter(e ->
> >>> true,
>  Materialized.as("final-count"))`. But I think providing a
>  materialized variant for suppress method is better than this
>  workaround.
> 
>  Thanks, Dongjin
> 
>  On Thu, Feb 20, 2020 at 1:26 AM John Roesler
>  
> >>> wrote:
> 
> > Thanks for the response, Dongjin,
> >
> > I'm sorry, but I'm still not following. It seems like the view you
> > would get on the "current state of the buffer" would always be
> > equivalent to the view of the upstream table.

Suppress PyCharm Inspection warning

2020-03-07 Thread Keefe, Daniel
I get an inspection warning in PyCharm because "a protected member is accessed 
outside the class where it's defined or in a module." I import TopicPartition 
in order to be able to create TopicPartition objects for assignment to the 
consumer.

The following change in __init__ in the kafka directory remedies this 
situation. [inline image attachment not preserved]
Daniel Keefe
Distinguished Member Technical Staff
Dell EMC | Integrated Offerings



Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-03-07 Thread Matthias J. Sax

Thanks for the KIP Dongjin,

I am still not sure if I can follow, what might also be caused by the
backing JIRA ticket (maybe John can clarify the intent of the ticket
as he created it):

Currently, suppress() only uses an in-memory buffer and my
understanding of the Jira is, to add the ability to use a persistent
buffer (ie, spill to disk backed by RocksDB).

Adding a persistent buffer is completely unrelated to allow querying
the buffer. In fact, one could query an in-memory buffer, too.
However, querying the buffer does not really seem to be useful as
pointed out by John, as you can always query the upstream KTable store.

Also note that for the emit-on-window-close case the result is deleted
from the buffer when it is emitted, and thus cannot be queried any longer.


Can you please clarify if you intend to allow spilling to disk or if
you intend to enable IQ (even if I don't see why querying makes sense,
as the data is either upstream or deleted). Also, if you want to
enable IQ, why do we need all those new interfaces? The result of a
suppress() is a KTable that is the same as any other
key-value/windowed/sessions store?

We should also have corresponding Jira tickets for different cases to
avoid the confusion I am in atm :)


-Matthias


On 2/27/20 8:21 AM, John Roesler wrote:
> Hi Dongjin,
>
> No problem; glad we got it sorted out.
>
> Thanks again for picking this up! -John
>
> On Wed, Feb 26, 2020, at 09:24, Dongjin Lee wrote:
>>> I was under the impression that you wanted to expand the scope
>>> of the KIP
>> to additionally allow querying the internal buffer, not just the
>> result. Can you clarify whether you are proposing to allow
>> querying the state of the internal buffer, the result, or both?
>>
>> Sorry for the confusion. As we already talked with, we only need
>> to query the suppressed output, not the internal buffer. The
>> current implementation is wrong. After refining the KIP and
>> implementation accordingly I will notify you - I must be
>> confused, also.
>>
>> Thanks, Dongjin
>>
>> On Tue, Feb 25, 2020 at 12:17 AM John Roesler
>>  wrote:
>>
>>> Hi Dongjin,
>>>
>>> Ah, I think I may have been confused. I 100% agree that we need
>>> a materialized variant for suppress(). Then, you could do:
>>> ...suppress(..., Materialized.as(“final-count”))
>>>
>>> If that’s your proposal, then we are on the same page.
>>>
>>> I was under the impression that you wanted to expand the scope
>>> of the KIP to additionally allow querying the internal buffer,
>>> not just the result. Can you clarify whether you are proposing
>>> to allow querying the state of the internal buffer, the result,
>>> or both?
>>>
>>> Thanks, John
>>>
>>> On Thu, Feb 20, 2020, at 08:41, Dongjin Lee wrote:
 Hi John, Thanks for your kind explanation with an example.

> But it feels like you're saying you're trying to do
> something different
 than just query the windowed key and get back the current
 count?

 Yes, for example, what if we need to retrieve the (all or
 range) keys
>>> with
 a closed window? In this example, let's imagine we need to
 retrieve only (key=A, window=10), not (key=A, window=20).

 Of course, the value accompanied by a flushed key is exactly
 the same to the one in the upstream KTable; However, if our
 intention is not pointing out a specific key but retrieving a
 group of unspecified keys, we stuck
>>> in
 trouble - since we can't be sure which key is flushed out
 beforehand.

 One workaround would be materializing it with
 `suppressed.filter(e ->
>>> true,
 Materialized.as("final-count"))`. But I think providing a
 materialized variant for suppress method is better than this
 workaround.

 Thanks, Dongjin

 On Thu, Feb 20, 2020 at 1:26 AM John Roesler
 
>>> wrote:

> Thanks for the response, Dongjin,
>
> I'm sorry, but I'm still not following. It seems like the view you
> would get on the "current state of the buffer" would always be
> equivalent to the view of the upstream table.
>
> Let me try an example, and maybe you can point out the flaw
> in my reasoning.
>
> Let's say we're doing 10 ms windows with a grace period of
> zero. Let's also say we're computing a windowed count, and
> that we have a "final results" suppression after the count.
> Let's  materialize the count as "Count" and the suppressed
> result as "Final Count".
>
> Suppose we get an input event: (time=10, key=A, value=...)
>
> Then, Count will look like:
>
> | window | key | value |
> | 10     | A   | 1     |
>
> The (internal) suppression buffer will contain:
>
> | window | key | value |
> | 10     | A   | 1     |
>
> The record is still buffered because the window isn't
> closed yet. Final Count is an empty table:
>
> | window | key | value |

Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-03-07 Thread Matthias J. Sax

What is the status of this KIP?


-Matthias

On 2/17/20 2:41 PM, John Roesler wrote:
> Thanks Matthias,
>
> I got the impression this was considered and rejected in
> KAFKA-7509, but I'm not sure why. Maybe it was never really
> considered at all, just proposed and not-noticed? Perhaps Randall
> or Sönke can comment. See:
https://issues.apache.org/jira/browse/KAFKA-7509?focusedCommentId=16660868&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16660868
>
> It would be good to know why that proposal didn't move forward.
>
> Thanks, -John
>
>
>
> On Mon, Feb 17, 2020, at 12:17, Matthias J. Sax wrote: I am just
> getting aware of this KIP (not sure why I missed it).
>
> In Kafka Streams we have nested clients and need to "forward"
> configs from outer layer to inner layers -- hence, we prefix some
> configs to be able to know which inner nested clients needs this
> config.
>
> I think the simplest approach is to add a prefix (like
> "userconfig."). All these configs would be skipped in the
> validation step to avoid the WARN log.
>
> When forwarding configs to inner classes (like nested clients in
> KS, serializers etc) we would remove this prefix.
>
> Using a `RecordingMap` seems rather heavyweight and complex?
>
> Thoughts?
>
> -Matthias
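
For illustration, a minimal sketch of the prefix idea above; "userconfig."
is the example prefix from the quoted mail, and the helper itself is
hypothetical, not an existing Kafka API:

    import java.util.Properties;

    static Properties extractPrefixed(final Properties all, final String prefix) {
        final Properties inner = new Properties();
        for (final String name : all.stringPropertyNames()) {
            if (name.startsWith(prefix)) {
                // strip the prefix before handing the config to the nested client
                inner.setProperty(name.substring(prefix.length()),
                                  all.getProperty(name));
            }
        }
        return inner;
    }

    // e.g. "userconfig.my.custom.setting" becomes "my.custom.setting"
    Properties forConfigSetter = extractPrefixed(streamsProps, "userconfig.");

This mirrors what AbstractConfig#originalsWithPrefix() already does for the
built-in prefixes.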
>
> On 2/17/20 9:09 AM, John Roesler wrote:
 Thanks Patrik,

 This seems to be a long and wandering issue. It seems that
 KAFKA-7509 has followed a similar trajectory to
 KAFKA-6793/KIP-552 , and 7509 is just recently closed in
 favor of whatever we decide to do in KAFKA-6793.

 Considering (what I hope is) the whole history of this issue,
 a few things emerge:

 1. It's useful to get warned when you pass an invalid
 configuration 2. It's not possible for the "top layer"
 (Streams, Connect, etc.) to know up front which
 configurations are applicable to pass down to the "second"
 layer (Clients, RocksDB) because those layers themselves are
 extensible (see below) 3. We should propose a change that
 fixes this issue for the whole Kafka ecosystem at once.

 Elaboration on point 2: Users of Kafka libraries need to
 register extra components like Processors, Interceptors,
 RocksDBConfigSetters, RebalanceListeners, etc. They need to
 pass configurations into these self-registered components.
 Therefore, the outermost component (the one that you directly
 pass a Properties to, and that instantiates other
 Configurable components) _cannot_ know which configurations
 are needed by the "extra" components inside the Configurable
 components. Therefore, no approach that involves filtering
 only the "needed" configurations up front, before
 constructing a Configurable component, could work.

 Randall made an aside in this comment:
 https://issues.apache.org/jira/browse/KAFKA-7509?focusedCommentId=16673834&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16673834


> which I think is the most promising path right now.
 Namely, to use RecordingMap (or a similar approach) when
 configuring internal components and finally warn when
 _everything_ has been wired up if some configuration value
 wasn't used by _any_ component.

 It seems like this approach would satisfy all three of the
 above points, but it needs some design/discovery work to see
 what gaps exist in the current code base to achieve the goal.
 It also might be a fair amount of work (which is why we
 didn't follow that approach in KAFKA-7509), but I don't think
 there have been any other suggestions that satisfy both point
 1 and point 2.

 Thoughts? -John
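
For illustration, a sketch of the "warn at the end" idea, assuming the
tracking that AbstractConfig already does (it records which keys were read
and exposes the remainder via unused()); where exactly such a check would be
wired in is the open design question:

    import java.util.Set;
    import org.apache.kafka.common.config.AbstractConfig;

    // after all Configurable components (clients, interceptors,
    // RocksDBConfigSetters, ...) have been instantiated and configured:
    final Set<String> neverUsed = config.unused();  // config is an AbstractConfig
    if (!neverUsed.isEmpty()) {
        log.warn("Configs supplied but not used by any component: {}", neverUsed);
    }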

 On Wed, Feb 12, 2020, at 02:07, Patrik Kleindl wrote:
> Hi John
>
> Regarding Kafka Streams this can probably be fixed easily,
> but it does not handle the underlying issue that other
> custom prefixes are not supported. Seems I even did a short
> analysis several months ago and forgot about it, see
> https://issues.apache.org/jira/browse/KAFKA-6793?focusedCommentId=16870899&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870899
>
>
>
> We have used custom prefixes to pass properties to the
> RocksDBConfigSetter
> and it seems people are doing something similar in Connect,
> see https://issues.apache.org/jira/browse/KAFKA-7509
>
> This KIP just seeks to avoid the false positives and
> setting it to debug was preferred over implementing the
> custom prefixes.
>
> best regards
>
> Patrik
>
> On Tue, 11 Feb 2020 at 18:21, John Roesler
>  wrote:
>
>> Ah... I've just looked at some integration tests in
>> Streams, and see the same thing.
>>
>> I need to apologize to everyone in the thread for my lack

Re: Add a customized logo for Kafka Streams

2020-03-07 Thread Matthias J. Sax

Boyang,

thanks for starting this discussion. I like the idea in general
however we need to be a little careful IMHO -- as you mentioned Kafka
is one project and thus we should avoid the impression that Kafka
Streams is not part of Apache Kafka.

Besides this, many projects use animals that are often very adorable.
Maybe we could find a cute Streams related mascot? :)

I would love to hear opinions especially from the PMC if having a logo
for Kafka Streams is a viable thing to do.


-Matthias

On 3/3/20 1:01 AM, Patrik Kleindl wrote:
> Hi Boyang,
> Great idea, that would help in some discussions. To throw in a first
> idea: https://imgur.com/a/UowXaMk
> best regards
> Patrik
>
> On Mon, 2 Mar 2020 at 18:23, Boyang Chen
>  wrote:
>
>> Hey Apache Kafka committers and community folks,
>>
>> over the years Kafka Streams has been widely adopted, and tons of
>> blog posts and tech talks have been trying to introduce it to
>> people with a need for stream processing. As it is part of the Apache
>> Kafka project, there is always an awkward situation where Kafka
>> Streams cannot be promoted as a standalone streaming engine,
>> which makes people confused about its relation to Kafka.
>>
>> So, do we want to introduce a customized logo just for Streams?
>> The immediate benefit is that when people are making technical
>> decisions, we could list Streams as a logo just like Flink and
>> Spark Streaming, instead of putting the Kafka logo there, as it is not
>> really a legitimate comparison between a processing framework
>> and a messaging system. Should we do a KIP for this?
>>
>> Boyang
>>
>


Re: [VOTE] KIP-557: Add emit on change support for Kafka Streams

2020-03-07 Thread Matthias J. Sax

Richard,

you cannot close a KIP as accepted with 2 binding votes. (cf
https://cwiki.apache.org/confluence/display/KAFKA/Bylaws)

You could only discard the KIP as long as it's not accepted :D

However, I am +1 (binding) and thus you can close the VOTE as accepted.


Just three minor follow-up comments:

(1) In "Reporting Strategies" you mention in point (2) "Emit on update
/ non-empty content" -- I am not sure what "empty content" would be.
This is a little bit confusing. Maybe just remove it?


(2) "Design Reasoning"

> we have decided that we will forward aggregation results if and
> only if the timestamp and the value had not changed

This sounds incorrect. If both value and timestamp have not changed,
we would skip the update from my understanding?

Ie, to phrase it differently: for a table-operation we only consider
the value to make a comparison and if the value does not change, we
don't emit anything (even if the timestamp changed).

For windowed aggregations however, even if the value does not change,
but the timestamp advances, we emit, ie, a changing timestamp is not
considered idempotent for this case. (Note, that the timestamp can
never go backward for this case, because it's computed as maximum over
all input records for the window). A sketch of these rules follows below.


(3) The discussion about stream time is very interesting. I agree that
it's an orthogonal concern to this KIP.



-Matthias
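
For illustration, a hypothetical helper capturing the two rules sketched in
point (2) above (this is not Streams' actual code; all names are made up):

    import java.util.Objects;

    // Plain KTable operations compare only the value; windowed aggregations
    // additionally forward when the window's timestamp advances.
    static <V> boolean shouldForward(final V oldValue, final V newValue,
                                     final long oldTs, final long newTs,
                                     final boolean windowedAggregation) {
        final boolean valueChanged = !Objects.equals(oldValue, newValue);
        if (windowedAggregation) {
            return valueChanged || newTs > oldTs;
        }
        return valueChanged;  // a timestamp-only change is treated as idempotent
    }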


On 3/6/20 1:52 PM, Richard Yu wrote:
> Hi all,
>
> I have decided to pass this KIP with 2 binding votes and 3
> non-binding votes (including mine). I will update KIP status
> shortly after this.
>
> Best, Richard
>
> On Thu, Mar 5, 2020 at 3:45 PM Richard Yu
>  wrote:
>
>> Hi all,
>>
>> Just polling for some last changes on the name. I think that
>> since there doesn't seem to be much objection to any major
>> changes in the KIP, I will pass it this Friday.
>>
>> If you feel that we still need some more discussion, please let
>> me know. :)
>>
>> Best, Richard
>>
>> P.S. Will start working on a PR for this one soon.
>>
>> On Wed, Mar 4, 2020 at 1:30 PM Guozhang Wang 
>> wrote:
>>
>>> Regarding the metric name, I was actually trying to be
>>> consistent with the node-level `suppression-emit` as I feel
>>> this one's characteristics are closer to that. If other folks
>>> feel it's better to align with the task-level "dropped-records" I
>>> think I can be convinced too.
>>>
>>>
>>> Guozhang
>>>
>>> On Wed, Mar 4, 2020 at 12:09 AM Bruno Cadonna
>>>  wrote:
>>>
 Hi all,

 may I make a non-binding proposal for the metric name? I
 would prefer "skipped-idempotent-updates" to be consistent
 with the "dropped-records".

 Best, Bruno

 On Tue, Mar 3, 2020 at 11:57 PM Richard Yu
  wrote:
>
> Hi all,
>
> Thanks for the discussion!
>
> @Guozhang, I will make the corresponding changes to the KIP
> (i.e.
 renaming
> the sensor and adding some notes). With the current state
> of things, we are very close. Just need that
>>> one
> last binding vote.
>
> @Matthias J. Sax   It would be ideal
> if we can
 also
> get your last two cents on this as well. Other than that,
> we are good.
>
> Best, Richard
>
>
> On Tue, Mar 3, 2020 at 10:46 AM Guozhang Wang
> 
 wrote:
>
>> Hi Bruno, John:
>>
>> 1) That makes sense. If we consider them to be
>> node-specific metrics
 that
>> only applies to a subset of built-in processor nodes that
>> are
 irrelevant to
>> alert-relevant metrics (just like suppression-emit (rate
>> | total)),
 they'd
>> better be per-node instead of per-task and we would not
>> associate
>>> such
>> events with warning. With that in mind, I'd suggest we
>> consider
 renaming
>> the metric without the `dropped` keyword to distinguish
>> it with the per-task level sensor. How about
>> "idempotent-update-skip (rate |
 total)"?
>>
>> Also a minor suggestion: we should clarify in the KIP /
>> javadocs
>>> which
>> built-in processor nodes would have this metric while
>> others don't.
>>
>> 2) About stream time tracking, there are multiple known
>> issues that
>>> we
>> should close to improve our consistency semantics:
>>
>> a. preserve stream time of active tasks across rebalances
>> where
>>> they
 may
>> be migrated. This is what KAFKA-9368
>>  meant
>> for. b. preserve stream time of standby tasks to be
>> aligned with the
>>> active
>> tasks, via the changelog topics.
>>
>> And what I'm more concerning is b) here. For example:
>> let's say we
 have a
>> topology of `source -> A -> repartition -> B` where both
>> A and B
>>> have
>> states along with changelogs, and both of them have
>> standbys. If a
 record
>> is piped from t

Build failed in Jenkins: kafka-trunk-jdk11 #1222

2020-03-07 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Break up StreamsPartitionAssignor's gargantuan #assign (#8245)


--
[...truncated 2.91 MB...]

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task 

Re: KIP-560 Discuss

2020-03-07 Thread Matthias J. Sax

Thanks for the KIP Sang!

I have a couple of more comments about the wiki page:

(1) The "Public Interface" section should only list the new stuff. This
KIP does not change anything with regard to the existing options
`--input-topic` or `--intermediate-topic` and thus it's just "noise" to
have them in this section. Only list the new option
`allInputTopicsOption`.

(2) Don't post code, ie, the implementation of private methods. KIPs
should only describe public interface changes.

(3) The KIP should describe that we intend to use
`describeConsumerGroups` calls to discover the topic names -- atm, it's
unclear from the KIP how the new feature actually works.

(4) If the new flag is used, we will discover input and intermediate
topics. Hence, the name is misleading. We could call it
`--all-user-topics` and explain in the description that "user topics"
are input and intermediate topics for this case (in general, also output
topics are "user topics" but there is nothing to be done for output
topics). Thoughts?


-Matthias


On 1/27/20 6:35 AM, Sang wn Lee wrote:
> thank you John Roesler
>
> It is a good idea "--all-input-topics"
>
> I agree with you
>
> I'll update right away
>
>
> On 2020/01/24 14:14:17, "John Roesler" 
> wrote:
>
>> Hi all, thanks for the explanation. I was also not sure how the
>> kip would be possible to implement.
>>
>> Now that it does seem plausible, my only feedback is that the
>> command line option could align better with the existing one.
>> That is, the existing option is called "--input-topics", so it
>> seems like the new one should be called "--all-input-topics".
>>
>> Thanks, John
>>
>> On Fri, Jan 24, 2020, at 01:42, Boyang Chen wrote:
>>> Thanks Sophie for the explanation! I read Sang's PR and
>>> basically he did exactly what you proposed (check it here
>>>  in case I'm
>>> wrong).
>>>
>>> I think Sophie's response answers Gwen's question already,
>>> while for the KIP itself we are not required to
>>> mention all the internal details about how to make the changes
>>> happen (like how to actually get the external topics),
>>> considering the change scope is pretty small as well. But
>>> again, it would do no harm to mention it in the Proposed
>>> Changes section specifically so that people won't get confused
>>> about the how.
>>>
>>>
>>> On Thu, Jan 23, 2020 at 8:26 PM Sophie Blee-Goldman
>>>  wrote:
>>>
 Hi all,

 I think what Gwen is trying to ask (correct me if I'm wrong)
 is how we can infer which topics are associated with Streams
 from the admin client's topic list. I agree that this
 doesn't seem possible, since as she pointed out the topics
 list (or even description) lacks the specific information we
 need.

 What we could do instead is use the admin client's
 `describeConsumerGroups` API to get the information on the
 Streams app's consumer group specifically -- note that the
 Streams application.id config is also used as the consumer
 group id, so each app forms a group to read from the input
 topics. We could compile a list of these topics just by
 looking at each member's assignment (and even check for a
 StreamsPartitionAssignor to verify that this is indeed a
 Streams app group, if we're being paranoid).

 The reset tool actually already gets the consumer group
 description, in order to validate there are no active
 consumers in the group. We may as well grab the list of
 topics from it while it's there. Or did you have something
 else in mind?
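
 For illustration, a minimal sketch of that lookup against a plain Admin
 client (the class and helper names below are hypothetical, not the
 actual reset-tool code):

 import java.util.Collections;
 import java.util.Set;
 import java.util.stream.Collectors;

 import org.apache.kafka.clients.admin.Admin;
 import org.apache.kafka.clients.admin.ConsumerGroupDescription;

 public class StreamsTopicDiscovery {

   // The Streams application.id doubles as the consumer group id, so
   // describing that group yields the topics the app actually reads.
   public static Set<String> discoverTopics(final Admin admin,
                                            final String applicationId) throws Exception {
     final ConsumerGroupDescription group = admin
         .describeConsumerGroups(Collections.singletonList(applicationId))
         .describedGroups()
         .get(applicationId)
         .get();
     // (If we're being paranoid, group.partitionAssignor() could also be
     // checked here to verify this is really a Streams app's group.)
     // Collect every topic that appears in any member's assignment.
     return group.members().stream()
         .flatMap(member -> member.assignment().topicPartitions().stream())
         .map(tp -> tp.topic())
         .collect(Collectors.toSet());
   }
 }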

 On Sat, Jan 18, 2020 at 6:17 PM Sang wn Lee
  wrote:

> Thank you
>
> I understand you
>
> 1. The admin client has the topic list.
> 2. An applicationId can only have one stream, so it won't be a problem!
> 3. For example, --input-topic [regex]: allowing a regex solves some
> inconvenience.
>
>
> On 2020/01/18 18:15:23, Gwen Shapira 
> wrote:
>> I am not sure I follow. Afaik:
>>
>> 1. Topics don't include client ID information.
>> 2. Even if they did, the same ID could be used for topics that are
>> not Kafka Streams input.
>>
>> The regex idea sounds doable, but I'm not sure it solves much?
>>
>>
>> On Sat, Jan 18, 2020, 7:12 AM Sang wn Lee
>> 
 wrote:
>>
>>> Thank you Gwen Shapira! We'll add a flag to clear all
>>> topics by clientId It is ‘reset-all-external-topics’
>>>
>>> I also want to use regex on the input topic flag to
>>> clear all
 matching
>>> topics.
>>>
>>> On 2020/01/17 19:29:09, Gwen Shapira
>>>  wrote:
 Seems like a very nice improvement to me. But I have
 to admit that I don't understand how this will work -
 how could you infer the input topics?

 On Thu, Jan 16, 2020 at 10:03 AM Sang wn Lee
 
>>> wrote

Build failed in Jenkins: kafka-trunk-jdk8 #4303

2020-03-07 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Reset `streamTime` on clear (#8250)

[github] MINOR: Break up StreamsPartitionAssignor's gargantuan #assign (#8245)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :stre

Build failed in Jenkins: kafka-trunk-jdk11 #1221

2020-03-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9645: Fallback to unsubscribe during Task Migrated (#8220)

[github] MINOR: Reset `streamTime` on clear (#8250)


--
[...truncated 2.91 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-

Build failed in Jenkins: kafka-trunk-jdk8 #4302

2020-03-07 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9645: Fallback to unsubscribe during Task Migrated (#8220)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> T

[jira] [Resolved] (KAFKA-9645) Records could not find corresponding partition/task

2020-03-07 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9645.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Records could not find corresponding partition/task
> ---
>
> Key: KAFKA-9645
> URL: https://issues.apache.org/jira/browse/KAFKA-9645
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.6.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.6.0
>
>
> We could be hitting the illegal state when Streams kicks off a rebalance with 
> all tasks closed:
> ```
> [2020-03-03T18:36:09-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,105] WARN 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> stream-thread 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> Detected that the thread is being fenced. This implies that this thread 
> missed a rebalance and dropped out of the consumer group. Will close out all 
> assigned tasks and rejoin the consumer group. 
> (org.apache.kafka.streams.processor.internals.StreamThread)
> [2020-03-03T18:36:09-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,105] INFO 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> [Consumer 
> clientId=stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1-restore-consumer,
>  groupId=null] Subscribed to partition(s): 
> stream-soak-test-KSTREAM-AGGREGATE-STATE-STORE-49-changelog-2 
> (org.apache.kafka.clients.consumer.KafkaConsumer)
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,286] INFO 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> [Producer 
> clientId=stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1-1_1-producer,
>  transactionalId=stream-soak-test-1_1] Closing the Kafka producer with 
> timeoutMillis = 9223372036854775807 ms. 
> (org.apache.kafka.clients.producer.KafkaProducer)
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,287] INFO 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> stream-thread 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] task 
> [1_1] Closed dirty (org.apache.kafka.streams.processor.internals.StreamTask)
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,287] INFO 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> [Consumer 
> clientId=stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1-restore-consumer,
>  groupId=null] Unsubscribed all topics or patterns and assigned partitions 
> (org.apache.kafka.clients.consumer.KafkaConsumer)
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,290] INFO 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> [Producer 
> clientId=stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1-3_2-producer,
>  transactionalId=stream-soak-test-3_2] Closing the Kafka producer with 
> timeoutMillis = 9223372036854775807 ms. 
> (org.apache.kafka.clients.producer.KafkaProducer)
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,292] INFO 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> stream-thread 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] task 
> [3_2] Closed dirty (org.apache.kafka.streams.processor.internals.StreamTask)
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,293] ERROR 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> stream-thread 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] Unable 
> to locate active task for received-record partition node-name-repartition-1. 
> Current tasks: TaskManager
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) >      
> MetadataState:
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) >      Tasks:
>  (org.apache.kafka.streams.processor.internals.StreamThread)
> [2020-03-03T18:36:10-08:00] 
> (streams-soak-trunk-eos_soak_i-050294ea2392cf355_streamslog) [2020-03-04 
> 02:36:09,293] ERROR 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd63db-StreamThread-1] 
> stream-thread 
> [stream-soak-test-e639a3a1-bd9d-49e0-896e-df9fe0cd6

[jira] [Created] (KAFKA-9681) Change whitelist/blacklist terms

2020-03-07 Thread Jira
Gérald Quintana created KAFKA-9681:
--

 Summary: Change whitelist/blacklist terms
 Key: KAFKA-9681
 URL: https://issues.apache.org/jira/browse/KAFKA-9681
 Project: Kafka
  Issue Type: Wish
  Components: KafkaConnect, mirrormaker
Affects Versions: 2.4.0
Reporter: Gérald Quintana


The whitelist/blacklist terms are not very inclusive and can be perceived as 
racist.

Using allow/deny or include/exclude, for example, would be more neutral.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk8 #4301

2020-03-07 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9680) Negative consumer lag

2020-03-07 Thread Sayed Mohammad Hossein Torabi (Jira)
Sayed Mohammad Hossein Torabi created KAFKA-9680:


 Summary: Negative consumer lag
 Key: KAFKA-9680
 URL: https://issues.apache.org/jira/browse/KAFKA-9680
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.3.1
Reporter: Sayed Mohammad Hossein Torabi


I'm using Kafka 2.3.1 and got a negative consumer lag on it. Here is the result 
of describing my consumer group:
{code:java}
// PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG     CONSUMER-ID                                                           HOST           CLIENT-ID
   1          18985           8576            -10409  connector-consumer-hdfs-sink-1-20e86f06-cf92-4b68-a67a-32e1876118f7  /172.16.2.220  connector-consumer-hdfs-sink-1
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4300

2020-03-07 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: fix StateManagerUtilTest and StreamTaskTest failures (#8247)


--
[...truncated 5.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE

Jenkins build is back to normal : kafka-trunk-jdk11 #1219

2020-03-07 Thread Apache Jenkins Server
See 




Re: Re: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-03-07 Thread feyman2009
Hi, Matthias
Thanks, I updated the KIP to mention the deprecated and newly added methods.

1. What happens if `groupInstanceId` is used for a dynamic group? What
happens if both parameters are specified? What happens if `memberId`
is specified for a static group?

=> My understanding is that dynamic/static membership is at the member level 
rather than the group level, and I think the questions above are answered by 
the "Leave Group Logic Change" section in KIP-345: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances;
 this KIP stays consistent with KIP-345.

2. About the `--force` option. Currently, StreamsResetter fails with an
error if the consumer group is not empty. You state in your KIP that:

> without --force, we need to wait for session timeout.

Is this an intended behavior change if `--force` is not used or is the
KIP description incorrect?

=> This is the intended behavior. For this part, I think there are two ways to 
go:
1) (the implicit way) Do not introduce the new "--force" option; with this KIP, 
StreamsResetter will by default remove active members (with a long session 
timeout configured) on the broker side.
2) (the explicit way) Introduce the new "--force" option; users need to 
explicitly specify --force to remove the active members. If --force is not 
specified, the StreamsResetter behaviour is the same as in previous versions.

I think the two alternatives above are both feasible; personally I prefer way 2.

3. For my own understanding: with the `--force` option, we intend to get
all `memberIds` and send a "remove member" request for each with the
corresponding `memberId` to remove the member from the group
(independent of whether the group is static or dynamic)?

=> Yes. One minor thing to mention: we will send the "remove member" request 
for each member (whether dynamic or static) to remove them from the group.
For static members, both "group.instance.id" and "member.id" will be specified;
for dynamic members, only "member.id" will be specified.
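
For illustration, a rough sketch of that removal flow against the Admin
client (withMemberId()/withGroupInstanceId() are the methods proposed by
this KIP and not yet released; the surrounding helper class is
hypothetical):

import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.MemberDescription;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class ForceRemoveSketch {

  public static void forceRemoveAll(final Admin admin, final String groupId) throws Exception {
    // Discover all current members of the group.
    final List<MemberToRemove> toRemove = admin
        .describeConsumerGroups(Collections.singletonList(groupId))
        .describedGroups().get(groupId).get()
        .members().stream()
        .map(ForceRemoveSketch::asMemberToRemove)
        .collect(Collectors.toList());
    // One batch request removes static and dynamic members alike.
    admin.removeMembersFromConsumerGroup(
        groupId, new RemoveMembersFromConsumerGroupOptions(toRemove)).all().get();
  }

  private static MemberToRemove asMemberToRemove(final MemberDescription member) {
    // Static members carry a group.instance.id; dynamic members only a member.id.
    final MemberToRemove m = new MemberToRemove().withMemberId(member.consumerId());
    return member.groupInstanceId().map(m::withGroupInstanceId).orElse(m);
  }
}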

4. I am really wondering if, for a static group, we should allow users to
remove individual members? For a dynamic group this feature would not
make much sense IMHO, because the `memberId` is not known by the user.

=> KIP-345 introduced the batch removal feature for both static and dynamic 
members; my understanding is that allowing users to
remove individual members could be useful for rolling bounces and scale-downs 
according to KIP-345. KafkaAdminClient currently only supports static member 
removal, and this KIP-571 enables dynamic member removal for it, which is also 
consistent with the broker-side logic. Users can get the member.id (and 
group.instance.id for static members) via adminClient.describeConsumerGroups.

Furthermore, I don't assume that a consumer group should contain 
only static or only dynamic members, and I think KIP-345 and this KIP don't 
need to be based on this assumption.
You could correct me if I have the wrong understanding :)

Thanks!
Feyman







--
From: Matthias J. Sax 
Sent: Friday, March 6, 2020, 02:20
To: dev 
Subject: Re: Re: Re: [Vote] KIP-571: Add option to force remove members in 
StreamsResetter

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Feyman,

thanks a lot for the KIP. Overall LGTM. I have a few more comments and
questions (sorry for the late reply):


The KIP mentions that some constructors will be deprecated. Those should
be listed explicitly. For example:


public class MemberToRemove {

  // deprecated methods

  @Deprecated
  public MemberToRemove(String groupInstanceId);

  // new methods

  public MemberToRemove();

  public MemberToRemove withGroupInstanceId(String groupInstanceId);

  public MemberToRemove withMemberId(String memberId);
}
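
A hypothetical usage of this proposed builder-style API (assuming the KIP
is adopted as listed above; the ids here are placeholders) would then be:

// Static member: identified by its group.instance.id.
MemberToRemove staticMember = new MemberToRemove().withGroupInstanceId("instance-1");
// Dynamic member: identified by its broker-assigned member.id.
MemberToRemove dynamicMember = new MemberToRemove().withMemberId("consumer-1-abc123");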

What happens if `groupInstanceId` is used for a dynamic group? What
happens if both parameters are specified? What happens if `memberId`
is specified for a static group?


About the `--force` option. Currently, StreamsResetter fails with an
error if the consumer group is not empty. You state in your KIP that:

> without --force, we need to wait for session timeout.

Is this an intended behavior change if `--force` is not used or is the
KIP description incorrect?

For my own understanding: with the `--force` option, we intend to get
all `memberIds` and send a "remove member" request for each with the
corresponding `memberId` to remove the member from the group
(independent of whether the group is static or dynamic)?

I am really wondering if, for a static group, we should allow users to
remove individual members? For a dynamic group this feature would not
make much sense IMHO, because the `memberId` is not known by the user.



- -Matthias


On 3/5/20 7:15 AM, Bill Bejeck wrote:
> Thanks for the KIP.  +1 (binding).
>
> -Bill
>
>
>
> On Wed, Mar 4, 2020 at 12:40 AM Guozhang Wang 
> wrote:
>
>> Thanks, +1 from me (binding).
>>
>> On Tue, Mar 3, 2020 at 9:39 P