Build failed in Jenkins: kafka-trunk-jdk11 #1347

2020-04-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-3720: Change TimeoutException to BufferExhaustedException when no


--
[...truncated 6.24 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

[DISCUSS] KIP-592: MirrorMaker should replicate topics from earliest

2020-04-10 Thread Jeff Widman
https://cwiki.apache.org/confluence/display/KAFKA/KIP-592%3A+MirrorMaker+should+replicate+topics+from+earliest

It's a relatively minor change, only one line of code. :-D
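For readers outside the thread: in consumer terms the behavior KIP-592 asks for corresponds to starting from the earliest available offset instead of the latest. The sketch below is illustrative only (the class and method are invented for this example; the KIP's actual one-line change is in the MirrorMaker source), but it shows the consumer setting that effectively flips:

```java
import java.util.Properties;

// Illustrative sketch only: MirrorConsumerConfig is a hypothetical helper,
// not part of MirrorMaker. It shows the consumer-side setting the KIP's
// behavior change corresponds to.
public class MirrorConsumerConfig {
    public static Properties sourceConsumerProps(String bootstrap, String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("group.id", groupId);
        // Replicate topics from the beginning rather than only new records;
        // the consumer default is "latest".
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        Properties props = sourceConsumerProps("localhost:9092", "mm");
        System.out.println(props.getProperty("auto.offset.reset"));
    }
}
```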



-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


Jenkins build is back to normal : kafka-trunk-jdk11 #1346

2020-04-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #4425

2020-04-10 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-588: Allow producers to recover gracefully from transaction timeouts

2020-04-10 Thread Guozhang Wang
Thanks Boyang, the newly added example looks good to me.

On Thu, Apr 9, 2020 at 2:47 PM Boyang Chen 
wrote:

> Hey Guozhang,
>
> I have added an example of the producer API usage under new improvements.
> Let me know if this looks good to you.
>
> Boyang
>
> On Wed, Apr 8, 2020 at 1:38 PM Boyang Chen 
> wrote:
>
> > That's a good suggestion Jason. Adding a dedicated PRODUCER_FENCED error
> > should help distinguish exceptions and could safely mark
> > INVALID_PRODUCER_EPOCH exception as non-fatal in the new code. Updated
> the
> > KIP.
> >
> > Boyang
> >
> > On Wed, Apr 8, 2020 at 12:18 PM Jason Gustafson 
> > wrote:
> >
> >> Hey Boyang,
> >>
> >> Thanks for the KIP. I think the main problem we've identified here is
> that
> >> the current errors conflate transaction timeouts with producer fencing.
> >> The
> >> first of these ought to be recoverable, but we cannot distinguish it.
> The
> >> suggestion to add a new error code makes sense to me, but it leaves this
> >> bit of awkwardness:
> >>
> >> > One extra issue that needs to be addressed is how to handle
> >> `ProducerFenced` from Produce requests.
> >>
> >> In fact, the underlying error code here is INVALID_PRODUCER_EPOCH. It's
> >> just that the code treats this as equivalent to `ProducerFenced`. One
> >> thought I had is maybe PRODUCER_FENCED needs to be a separate error code
> >> as
> >> well. After all, only the transaction coordinator knows whether a
> producer
> >> has been fenced or not. So maybe the handling could be something like
> the
> >> following:
> >>
> >> 1. Produce requests may return INVALID_PRODUCER_EPOCH. The producer
> >> recovers by following KIP-360 logic to see whether the epoch can be
> >> bumped.
> >> If it cannot because the broker version is too old, we fail.
> >> 2. Transactional APIs may return either TRANSACTION_TIMEOUT or
> >> PRODUCER_FENCED. In the first case, we do the same as above. We try to
> >> recover by bumping the epoch. If the error is PRODUCER_FENCED, it is
> >> fatal.
> >> 3. Older brokers may return INVALID_PRODUCER_EPOCH as well from
> >> transactional APIs. We treat this the same as 1.
> >>
> >> What do you think?
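[Editorial note: Jason's three cases above amount to client-side dispatch on the error code. The sketch below is a hypothetical illustration of that dispatch; the enum, class, and method names are invented for this example and are not Kafka's actual producer internals.]

```java
// Hypothetical sketch of the handling proposed in the thread. Cases 1 and 3:
// INVALID_PRODUCER_EPOCH is ambiguous, so attempt a KIP-360 epoch bump and
// fail only if the broker is too old. Case 2: TRANSACTION_TIMED_OUT is
// recoverable the same way, while PRODUCER_FENCED (which only the transaction
// coordinator can determine) is fatal.
public class TxnErrorHandling {
    enum ApiError { INVALID_PRODUCER_EPOCH, TRANSACTION_TIMED_OUT, PRODUCER_FENCED }
    enum Action { BUMP_EPOCH_OR_FAIL, FATAL }

    static Action handle(ApiError error) {
        switch (error) {
            case INVALID_PRODUCER_EPOCH: // ambiguous: try KIP-360 epoch bump first
            case TRANSACTION_TIMED_OUT:  // recoverable: try KIP-360 epoch bump
                return Action.BUMP_EPOCH_OR_FAIL;
            case PRODUCER_FENCED:        // coordinator confirmed we were fenced
            default:
                return Action.FATAL;
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(ApiError.TRANSACTION_TIMED_OUT));
    }
}
```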
> >>
> >> Thanks,
> >> Jason
> >>
> >> On Mon, Apr 6, 2020 at 3:41 PM Boyang Chen 
> >> wrote:
> >>
> >> > Yep, updated the KIP, thanks!
> >> >
> >> > On Mon, Apr 6, 2020 at 3:11 PM Guozhang Wang 
> >> wrote:
> >> >
> >> > > Regarding 2), sounds good, I saw UNKNOWN_PRODUCER_ID is properly
> >> handled
> >> > > today in produce / add-partitions-to-txn / add-offsets-to-txn /
> >> end-txn
> >> > > responses, so that should be well covered.
> >> > >
> >> > > Could you reflect this in the wiki page that the broker should be
> >> > > responsible for using different error codes given client request
> >> versions
> >> > > as well?
> >> > >
> >> > >
> >> > >
> >> > > Guozhang
> >> > >
> >> > > On Mon, Apr 6, 2020 at 9:20 AM Boyang Chen <
> >> reluctanthero...@gmail.com>
> >> > > wrote:
> >> > >
> >> > > > Thanks Guozhang for the review!
> >> > > >
> >> > > > On Sun, Apr 5, 2020 at 5:47 PM Guozhang Wang 
> >> > wrote:
> >> > > >
> >> > > > > Hello Boyang,
> >> > > > >
> >> > > > > Thank you for the proposed KIP. Just some minor comments below:
> >> > > > >
> >> > > > > 1. Could you also describe which producer APIs could potentially
> >> > throw
> >> > > > the
> >> > > > > new TransactionTimedOutException, and also how should callers
> >> handle
> >> > > them
> >> > > > > differently (i.e. just to make your description more concrete as
> >> > > > javadocs).
> >> > > > >
> >> > > > > Good point, I will add example java doc changes.
> >> > > >
> >> > > >
> >> > > > > 2. It's straight-forward if client is on newer version while
> >> broker's
> >> > > on
> >> > > > > older version; however, if the client is on older version while
> >> > broker's
> >> > > > on
> >> > > > > newer version, today would the internal module of producers
> treat
> >> it
> >> > > as a
> >> > > > > general fatal error or not? If not, should the broker set a
> >> different
> >> > > > error
> >> > > > > code upon detecting older request versions?
> >> > > > >
> >> > > > > That's a good suggestion, my understanding is that the
> >> prerequisite
> >> > of
> >> > > > this change is the new KIP-360 API which is going out with 2.5,
> >> > > > so we could just return UNKNOWN_PRODUCER_ID instead of
> >> PRODUCER_FENCED
> >> > as
> >> > > > it could be interpreted as an abortable error
> >> > > > by the 2.5 producer and retried. Producers older than 2.5 will not be
> >> > covered.
> >> > > > WDYT?
> >> > > >
> >> > > > >
> >> > > > > Guozhang
> >> > > > >
> >> > > > > On Thu, Apr 2, 2020 at 1:40 PM Boyang Chen <
> >> > reluctanthero...@gmail.com
> >> > > >
> >> > > > > wrote:
> >> > > > >
> >> > > > > > Hey there,
> >> > > > > >
> >> > > > > > I would like to start discussion for KIP-588:
> >> > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> 

[jira] [Created] (KAFKA-9852) Lower block duration in BufferPoolTest to cut down on overall test runtime

2020-04-10 Thread Jira
Sönke Liebau created KAFKA-9852:
---

 Summary: Lower block duration in BufferPoolTest to cut down on 
overall test runtime
 Key: KAFKA-9852
 URL: https://issues.apache.org/jira/browse/KAFKA-9852
 Project: Kafka
  Issue Type: Improvement
  Components: unit tests
Reporter: Sönke Liebau
Assignee: Sönke Liebau


In BufferPoolTest we use a global setting of 2000ms for the maximum duration that 
calls can block (max.block.ms):
https://github.com/apache/kafka/blob/e032a360708cec2284f714e4cae388066064d61c/clients/src/test/java/org/apache/kafka/clients/producer/internals/BufferPoolTest.java#L54

Since this is wall-clock time that might actually be waited on, and could 
potentially come into play multiple times while this class is executed, this is 
a very long timeout for testing.

We should reduce this timeout to a much lower value to cut back on test 
runtimes.
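[Editorial note: the snippet below is an illustrative demonstration, not the BufferPool API. A blocked allocation that times out waits the full max.block.ms in wall-clock time, so with several such waits per test class, 2000ms each adds seconds while 10ms exercises the same code path quickly.]

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: BlockTimeoutDemo is a hypothetical stand-in for a
// BufferPool-style blocking call. Nobody ever signals the condition, so the
// wait runs out the full timeout, just as an allocation would when the pool
// stays exhausted for the whole max.block.ms.
public class BlockTimeoutDemo {
    static long blockedWaitMillis(long maxBlockMs) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        Condition memoryFreed = lock.newCondition();
        long start = System.nanoTime();
        lock.lock();
        try {
            // Waits up to maxBlockMs of wall-clock time; no signal ever comes.
            memoryFreed.await(maxBlockMs, TimeUnit.MILLISECONDS);
        } finally {
            lock.unlock();
        }
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("10ms timeout waited ~" + blockedWaitMillis(10) + " ms");
    }
}
```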



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-584: Versioning scheme for features

2020-04-10 Thread Jun Rao
Hi, Kowshik,

Thanks for the reply. A few more comments.

110. Keeping the feature version as int is probably fine. I just felt that
for some of the common user interactions, it's more convenient to
relate that to a release version. For example, if a user wants to downgrade
to release 2.5, it's easier for the user to use the tool like "tool
--downgrade 2.5" instead of "tool --downgrade --feature X --version 6".
Similarly, if the client library finds a feature mismatch with the broker,
the client likely needs to log some error message for the user to take some
actions. It's much more actionable if the error message is "upgrade the
broker to release version 2.6" than just "upgrade the broker to feature
version 7".
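[Editorial note: the mapping Jun describes can be sketched as below. This is a hypothetical illustration; the class, the method, and the particular feature-level-to-release numbers are invented for this example and are not defined by KIP-584.]

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: translate an internal feature version level into an
// actionable release version in user-facing error messages. The level/release
// pairs below are invented for illustration.
public class FeatureVersionMessages {
    // Lowest release whose brokers support at least each feature version level.
    private static final TreeMap<Integer, String> RELEASE_FOR_LEVEL =
            new TreeMap<>(Map.of(6, "2.5", 7, "2.6"));

    static String upgradeHint(String feature, int requiredLevel) {
        Map.Entry<Integer, String> e = RELEASE_FOR_LEVEL.ceilingEntry(requiredLevel);
        if (e == null) {
            // No known release mapping: fall back to the raw feature version.
            return "upgrade the broker to feature version " + requiredLevel + " of " + feature;
        }
        return "upgrade the broker to release version " + e.getValue();
    }

    public static void main(String[] args) {
        System.out.println(upgradeHint("X", 7));
    }
}
```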

111. Sounds good.

120. When should a developer bump up the version of a feature?

Jun

On Tue, Apr 7, 2020 at 12:26 AM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> I have updated the KIP for the item 111.
> I'm in the process of addressing 100.6, and will provide an update soon.
> I think item 110 is still under discussion given we are now providing a way
> to finalize
> all features to their latest version levels. In any case, please let us
> know
> how you feel in response to Colin's comments on this topic.
>
> > 111. To put this in context, when we had IBP, the default value is the
> > current released version. So, if you are a brand new user, you don't need
> > to configure IBP and all new features will be immediately available in
> the
> > new cluster. If you are upgrading from an old version, you do need to
> > understand and configure IBP. I see a similar pattern here for
> > features. From the ease of use perspective, ideally, we shouldn't require
> a
> > new user to have an extra step such as running a bootstrap script unless
> > it's truly necessary. If someone has a special need (all the cases you
> > mentioned seem special cases?), they can configure a mode such that
> > features are enabled/disabled manually.
>
> (Kowshik): That makes sense, thanks for the idea! Sorry if I didn't
> understand
> this need earlier. I have updated the KIP with the approach that whenever
> the '/features' node is absent, the controller by default will bootstrap
> the node
> to contain the latest feature levels. Here is the new section in the KIP
> describing
> the same:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Controller:ZKnodebootstrapwithdefaultvalues
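[Editorial note: the bootstrapped '/features' ZK node content could look roughly like the fragment below. The field names here are approximations for illustration only; the KIP page linked above defines the exact schema.]

```json
{
  "version": 0,
  "features": {
    "group_coordinator": {
      "min_version_level": 1,
      "max_version_level": 3
    }
  }
}
```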
>
> Next, as I explained in my response to Colin's suggestions, we are now
> providing a `--finalize-latest-features` flag with the tooling. This lets
> the sysadmin finalize all features known to the controller to their latest
> version
> levels. Please look at this section (point #3 and the tooling example
> later):
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP-584:Versioningschemeforfeatures-Toolingsupport
>
>
> Do you feel this addresses your comment/concern?
>
>
> Cheers,
> Kowshik
>
> On Mon, Apr 6, 2020 at 12:06 PM Jun Rao  wrote:
>
> > Hi, Kowshik,
> >
> > Thanks for the reply. A few more replies below.
> >
> > 100.6 You can look for the sentence "This operation requires ALTER on
> > CLUSTER." in KIP-455. Also, you can check its usage in
> > KafkaApis.authorize().
> >
> > 110. From the external client/tooling perspective, it's more natural to
> use
> > the release version for features. If we can use the same release version
> > for internal representation, it seems simpler (easier to understand, no
> > mapping overhead, etc). Is there a benefit with separate external and
> > internal versioning schemes?
> >
> > 111. To put this in context, when we had IBP, the default value is the
> > current released version. So, if you are a brand new user, you don't need
> > to configure IBP and all new features will be immediately available in
> the
> > new cluster. If you are upgrading from an old version, you do need to
> > understand and configure IBP. I see a similar pattern here for
> > features. From the ease of use perspective, ideally, we shouldn't
> require a
> > new user to have an extra step such as running a bootstrap script unless
> > it's truly necessary. If someone has a special need (all the cases you
> > mentioned seem special cases?), they can configure a mode such that
> > features are enabled/disabled manually.
> >
> > Jun
> >
> > On Fri, Apr 3, 2020 at 5:54 PM Kowshik Prakasam 
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the feedback and suggestions. Please find my response below.
> > >
> > > > 100.6 For every new request, the admin needs to control who is
> allowed
> > to
> > > > issue that request if security is enabled. So, we need to assign the
> > new
> > > > request a ResourceType and possible AclOperations. See
> > > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> > > > as an example.
> > >
> > > 

Re: [kafka-clients] Re: [VOTE] 2.5.0 RC3

2020-04-10 Thread Israel Ekpo
+1 (non-binding)

Used the following environment in my validation of the release artifacts:
Ubuntu 18.04, OpenJDK 11, Scala 2.13.1, Gradle 5.6.2

Verified GPG Signatures for all release artifacts
Verified md5 sha1 sha512 checksums for each artifact
Checked Scala and Java Docs
Ran Unit and Integration Tests successfully
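[Editorial note: the checksum half of the validation above is normally done from the shell with `gpg --verify` and `sha512sum -c`; the sketch below shows the equivalent digest computation in Java for illustration. The class and method are invented for this example.]

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Minimal sketch: compute a SHA-512 digest of release-artifact bytes and
// compare the hex string with the value in the published .sha512 file.
public class ChecksumCheck {
    static String sha512Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            hex.append(String.format("%02x", b)); // two lowercase hex chars per byte
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Standard test vector input "abc"; compare the printed digest against
        // a known reference to sanity-check the helper.
        System.out.println(sha512Hex("abc".getBytes(StandardCharsets.UTF_8)));
    }
}
```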

One comment I have is that the following release artifacts were initially not 
reachable directly for validation, unlike the other release artifacts.

* Documentation:
https://kafka.apache.org/25/documentation.html

* Protocol:
https://kafka.apache.org/25/protocol.html

If we can improve that in future releases that would be great.

Thanks for running the release, David.

On Fri, Apr 10, 2020 at 1:55 PM Manikumar  wrote:

> Hi David,
>
> +1 (binding)
>
> - Verified signatures, artifacts, release notes
> - Built from sources, ran tests
> - Ran core/streams quick start for Scala 2.13 binary, ran a few manual tests
> - Verified docs
>
> As discussed offline, we need to add upgrade instructions to 2.5 docs.
>
> Thanks for driving the release.
>
> Thanks,
> Manikumar
>
> On Fri, Apr 10, 2020 at 7:53 PM Bill Bejeck  wrote:
>
>> Hi David,
>>
>> +1 (non-binding) Verified signatures, built jars from source, ran unit and
>> integration tests, all passed.
>>
>> Thanks for running the release.
>>
>> -Bill
>>
>> On Wed, Apr 8, 2020 at 10:10 AM David Arthur  wrote:
>>
>> > Passing Jenkins build on 2.5 branch:
>> > https://builds.apache.org/job/kafka-2.5-jdk8/90/
>> >
>> > On Wed, Apr 8, 2020 at 12:03 AM David Arthur  wrote:
>> >
>> >> Hello Kafka users, developers and client-developers,
>> >>
>> >> This is the fourth candidate for release of Apache Kafka 2.5.0.
>> >>
>> >> * TLS 1.3 support (1.2 is now the default)
>> >> * Co-groups for Kafka Streams
>> >> * Incremental rebalance for Kafka Consumer
>> >> * New metrics for better operational insight
>> >> * Upgrade Zookeeper to 3.5.7
>> >> * Deprecate support for Scala 2.11
>> >>
>> >> Release notes for the 2.5.0 release:
>> >>
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>> >>
>> >> *** Please download, test and vote by Friday April 10th 5pm PT
>> >>
>> >> Kafka's KEYS file containing PGP keys we use to sign the release:
>> >> https://kafka.apache.org/KEYS
>> >>
>> >> * Release artifacts to be voted upon (source and binary):
>> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>> >>
>> >> * Maven artifacts to be voted upon:
>> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> >>
>> >> * Javadoc:
>> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>> >>
>> >> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
>> >> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>> >>
>> >> * Documentation:
>> >> https://kafka.apache.org/25/documentation.html
>> >>
>> >> * Protocol:
>> >> https://kafka.apache.org/25/protocol.html
>> >>
>> >> Successful Jenkins builds to follow
>> >>
>> >> Thanks!
>> >> David
>> >>
>> >
>> >
>> > --
>> > David Arthur
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> Groups
>> > "kafka-clients" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an
>> > email to kafka-clients+unsubscr...@googlegroups.com.
>> > To view this discussion on the web visit
>> >
>> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6oy4_Vw6B4M%3DoFtLvfk0OZAnioQW2u1xjgqe9r%3D3sC%2B5A%40mail.gmail.com
>> > <
>> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6oy4_Vw6B4M%3DoFtLvfk0OZAnioQW2u1xjgqe9r%3D3sC%2B5A%40mail.gmail.com?utm_medium=email_source=footer
>> >
>> > .
>> >
>>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAMVt_AwDkY5_%3DPUL9rEYN5o%3DxvXd%2Bq9sTt_602RzgXoKfT_sbA%40mail.gmail.com
> 
> .
>


Re: Want to be Part of Kafka Committer - Siva

2020-04-10 Thread siva deva
Thank you.

On Fri, Apr 10, 2020 at 2:37 PM Matthias J. Sax  wrote:

> Added you.
>
>
> On 4/10/20 12:45 PM, siva deva wrote:
> > Sure.
> >
> > My username is SivagurunathanV. Please let me know if this works.
> >
> > Thanks,
> > Siva
> >
> > On Fri, Apr 10, 2020 at 12:31 PM Matthias J. Sax 
> wrote:
> >
> >> Siva,
> >>
> >> I could not find you in the system. Is the user-id you provided
> >> something internal? If yes, I would need the "user name" instead, ie,
> >> whatever you use to log in.
> >>
> >> Sorry for the confusion.
> >>
> >>
> >> -Matthias
> >>
> >> On 4/9/20 10:25 PM, siva deva wrote:
> >>> Hi
> >>>
> >>> Can someone add me to the contributor's list? user-id `3854148`
> >>>
> >>>
> >>> Thanks in advance,
> >>> Siva
> >>>
> >>> On Wed, Apr 8, 2020 at 4:08 PM siva deva 
> wrote:
> >>>
>  Sure. Here is my user-id `3854148`
> 
>  Thanks,
>  Siva
> 
> 
>  On Wed, Apr 8, 2020 at 3:58 PM Matthias J. Sax 
> >> wrote:
> 
> > Yes, I meant JIRA. You can just create a JIRA account (self service)
> >> and
> > share you account ID here -- than we can add you to the JIRA
> >> contributor
> > list -- this allows you to self-assign tickets.
> >
> >
> > -Matthias
> >
> > On 4/8/20 3:29 PM, siva deva wrote:
> >> Sorry, which account ID do you mean? Can you please guide me here? Where
> > should I
> >> create an account (in JIRA) ?
> >>
> >> On Wed, Apr 8, 2020 at 2:27 PM Matthias J. Sax 
> > wrote:
> >>
> >>> Did you create an account already? What is your account ID (we need
> >> it
> >>> to add you to the list of contributors).
> >>>
> >>> -Matthias
> >>>
> >>>
> >>>
> >>> On 4/8/20 9:41 AM, siva deva wrote:
>  It’s mentioned that I need to be added to the contributor list for
> > assigning a
>  JIRA.
> 
>  Please contact us to be added to the contributor list. After that you
> >> can
>  assign yourself to the JIRA ticket you have started working on so
> > others
>  will notice.
> 
>  On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen <
> > reluctanthero...@gmail.com>
>  wrote:
> 
> > Have you checked out https://kafka.apache.org/contributing.html?
> >
> > On Wed, Apr 8, 2020 at 8:14 AM siva deva 
> > wrote:
> >
> >> Hi,
> >>
> >> I am a newbie to Kafka Project. Can you please provide JIRA
> access
> > to
> >> starter/newbie tickets, so that I can start working on it.
> >>
> >> --
> >> Best,
> >> Siva
> >>
> >
> >>>
> >>>
> >>
> >
> >
> 
>  --
>  Best,
>  Siva
> 
> >>>
> >>>
> >>
> >>
> >
>
>

-- 
Best,
Siva


Build failed in Jenkins: kafka-trunk-jdk11 #1345

2020-04-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6145: KIP-441: avoid unnecessary movement of standbys (#8436)

[github] KAFKA-9832: Extend Streams system tests for EOS-beta (#8443)


--
[...truncated 3.03 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED


Re: Want to be Part of Kafka Committer - Siva

2020-04-10 Thread Matthias J. Sax
Added you.


On 4/10/20 12:45 PM, siva deva wrote:
> Sure.
> 
> My username is SivagurunathanV. Please let me know if this works.
> 
> Thanks,
> Siva
> 
> On Fri, Apr 10, 2020 at 12:31 PM Matthias J. Sax  wrote:
> 
>> Siva,
>>
>> I could not find you in the system. Is the user-id you provided
>> something internal? If yes, I would need the "user name" instead, ie,
>> whatever you use to log in.
>>
>> Sorry for the confusion.
>>
>>
>> -Matthias
>>
>> On 4/9/20 10:25 PM, siva deva wrote:
>>> Hi
>>>
>>> Can someone add me to the contributor's list? user-id `3854148`
>>>
>>>
>>> Thanks in advance,
>>> Siva
>>>
>>> On Wed, Apr 8, 2020 at 4:08 PM siva deva  wrote:
>>>
 Sure. Here is my user-id `3854148`

 Thanks,
 Siva


 On Wed, Apr 8, 2020 at 3:58 PM Matthias J. Sax 
>> wrote:

> Yes, I meant JIRA. You can just create a JIRA account (self service)
>> and
> share your account ID here -- then we can add you to the JIRA
>> contributor
> list -- this allows you to self-assign tickets.
>
>
> -Matthias
>
> On 4/8/20 3:29 PM, siva deva wrote:
>> Sorry, which account ID do you mean? Can you please guide me here? Where
> should I
>> create an account (in JIRA) ?
>>
>> On Wed, Apr 8, 2020 at 2:27 PM Matthias J. Sax 
> wrote:
>>
>>> Did you create an account already? What is your account ID (we need
>> it
>>> to add you to the list of contributors).
>>>
>>> -Matthias
>>>
>>>
>>>
>>> On 4/8/20 9:41 AM, siva deva wrote:
 It’s mentioned that I need to be added to the contributor list for
> assigning a
 JIRA.

 Please contact us to be added to the contributor list. After that you
>> can
 assign yourself to the JIRA ticket you have started working on so
> others
 will notice.

 On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen <
> reluctanthero...@gmail.com>
 wrote:

> Have you checked out https://kafka.apache.org/contributing.html?
>
> On Wed, Apr 8, 2020 at 8:14 AM siva deva 
> wrote:
>
>> Hi,
>>
>> I am a newbie to Kafka Project. Can you please provide JIRA access
> to
>> starter/newbie tickets, so that I can start working on it.
>>
>> --
>> Best,
>> Siva
>>
>
>>>
>>>
>>
>
>

 --
 Best,
 Siva

>>>
>>>
>>
>>
> 





Build failed in Jenkins: kafka-trunk-jdk8 #4424

2020-04-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6145: KIP-441: avoid unnecessary movement of standbys (#8436)

[github] KAFKA-9832: Extend Streams system tests for EOS-beta (#8443)


--
[...truncated 3.02 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes 

Re: Want to be Part of Kafka Committer - Siva

2020-04-10 Thread siva deva
Sure.

My username is SivagurunathanV. Please let me know if this works.

Thanks,
Siva

On Fri, Apr 10, 2020 at 12:31 PM Matthias J. Sax  wrote:

> Siva,
>
> I could not find you in the system. Is the user-id you provided
> something internal? If yes, I would need the "user name" instead, i.e.,
> whatever you use to log in.
>
> Sorry for the confusion.
>
>
> -Matthias
>
> On 4/9/20 10:25 PM, siva deva wrote:
> > Hi
> >
> > Can someone add me to the contributor's list? user-id `3854148`
> >
> >
> > Thanks in advance,
> > Siva
> >
> > On Wed, Apr 8, 2020 at 4:08 PM siva deva  wrote:
> >
> >> Sure. Here is my user-id `3854148`
> >>
> >> Thanks,
> >> Siva
> >>
> >>
> >> On Wed, Apr 8, 2020 at 3:58 PM Matthias J. Sax 
> wrote:
> >>
> >>> Yes, I meant JIRA. You can just create a JIRA account (self service)
> and
> >>> share your account ID here -- then we can add you to the JIRA
> contributor
> >>> list -- this allows you to self-assign tickets.
> >>>
> >>>
> >>> -Matthias
> >>>
> >>> On 4/8/20 3:29 PM, siva deva wrote:
>  Sorry, which account ID do you mean? Can you please guide me here. Where
> >>> should I
>  create an account (in JIRA)?
> 
>  On Wed, Apr 8, 2020 at 2:27 PM Matthias J. Sax 
> >>> wrote:
> 
> > Did you create an account already? What is your account ID (we need
> it
> > to add you to the list of contributors).
> >
> > -Matthias
> >
> >
> >
> > On 4/8/20 9:41 AM, siva deva wrote:
> >> It’s mentioned that I need to be added to the contributor list for
> >>> assigning a
> >> JIRA.
> >>
> >> Please contact us to be added to the contributor list. After that you
> can
> >> assign yourself to the JIRA ticket you have started working on so
> >>> others
> >> will notice.
> >>
> >> On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen <
> >>> reluctanthero...@gmail.com>
> >> wrote:
> >>
> >>> Have you checked out https://kafka.apache.org/contributing.html?
> >>>
> >>> On Wed, Apr 8, 2020 at 8:14 AM siva deva 
> >>> wrote:
> >>>
>  Hi,
> 
>  I am a newbie to Kafka Project. Can you please provide JIRA access
> >>> to
>  starter/newbie tickets, so that I can start working on it.
> 
>  --
>  Best,
>  Siva
> 
> >>>
> >
> >
> 
> >>>
> >>>
> >>
> >> --
> >> Best,
> >> Siva
> >>
> >
> >
>
>

-- 
Best,
Siva


Re: Want to be Part of Kafka Committer - Siva

2020-04-10 Thread Matthias J. Sax
Siva,

I could not find you in the system. Is the user-id you provided
something internal? If yes, I would need the "user name" instead, i.e.,
whatever you use to log in.

Sorry for the confusion.


-Matthias

On 4/9/20 10:25 PM, siva deva wrote:
> Hi
> 
> Can someone add me to the contributor's list? user-id `3854148`
> 
> 
> Thanks in advance,
> Siva
> 
> On Wed, Apr 8, 2020 at 4:08 PM siva deva  wrote:
> 
>> Sure. Here is my user-id `3854148`
>>
>> Thanks,
>> Siva
>>
>>
>> On Wed, Apr 8, 2020 at 3:58 PM Matthias J. Sax  wrote:
>>
>>> Yes, I meant JIRA. You can just create a JIRA account (self service) and
>>> share your account ID here -- then we can add you to the JIRA contributor
>>> list -- this allows you to self-assign tickets.
>>>
>>>
>>> -Matthias
>>>
>>> On 4/8/20 3:29 PM, siva deva wrote:
 Sorry, which account ID do you mean? Can you please guide me here. Where
>>> should I
 create an account (in JIRA)?

 On Wed, Apr 8, 2020 at 2:27 PM Matthias J. Sax 
>>> wrote:

> Did you create an account already? What is your account ID (we need it
> to add you to the list of contributors).
>
> -Matthias
>
>
>
> On 4/8/20 9:41 AM, siva deva wrote:
>> It’s mentioned that I need to be added to the contributor list for
>>> assigning a
>> JIRA.
>>
>> Please contact us to be added to the contributor list. After that you can
>> assign yourself to the JIRA ticket you have started working on so
>>> others
>> will notice.
>>
>> On Wed, Apr 8, 2020 at 8:58 AM Boyang Chen <
>>> reluctanthero...@gmail.com>
>> wrote:
>>
>>> Have you checked out https://kafka.apache.org/contributing.html?
>>>
>>> On Wed, Apr 8, 2020 at 8:14 AM siva deva 
>>> wrote:
>>>
 Hi,

 I am a newbie to Kafka Project. Can you please provide JIRA access
>>> to
 starter/newbie tickets, so that I can start working on it.

 --
 Best,
 Siva

>>>
>
>

>>>
>>>
>>
>> --
>> Best,
>> Siva
>>
> 
> 





Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-04-10 Thread Guozhang Wang
Thanks Feyman,

I've looked at the update that you incorporated from Matthias and that LGTM
too. I'm still +1 :)

Guozhang

On Fri, Apr 10, 2020 at 11:18 AM John Roesler  wrote:

> Hey Feyman,
>
> Just to remove any ambiguity, I've been casually following the discussion,
> I've just looked at the KIP document again, and I'm still +1 (binding).
>
> Thanks,
> -John
>
> On Fri, Apr 10, 2020, at 01:44, feyman2009 wrote:
> > Hi, all
> > KIP-571 has already collected 4 binding +1s (John, Guozhang, Bill,
> > Matthias) and 3 non-binding +1s (Boyang, Sophie), I will mark it as
> > approved and create a PR shortly.
> > Thanks!
> >
> > Feyman
> > --
> > From: feyman2009 
> > Sent: Wednesday, April 8, 2020, 14:21
> > To: dev ; Boyang Chen 
> > Subject: Re: [Vote] KIP-571: Add option to force remove
> > members in StreamsResetter
> >
> > Hi Boyang,
> > Thanks for reminding me of that!
> > I'm not sure about the convention, I thought it would need to
> > re-collect votes if the KIP has changed~
> > Let's leave the vote thread here for 2 days, if no objection, I
> > will take it as approved and update the PR accordingly.
> >
> > Thanks!
> > Feyman
> >
> >
> >
> > --
> > From: Boyang Chen 
> > Sent: Wednesday, April 8, 2020, 12:42
> > To: dev ; feyman2009 
> > Subject: Re: [Vote] KIP-571: Add option to force remove
> > members in StreamsResetter
> >
> > You should already get enough votes if I'm counting correctly
> > (Guozhang, John, Matthias)
> > On Tue, Apr 7, 2020 at 6:59 PM feyman2009
> >  wrote:
> > Hi, Boyang
> >  I think Matthias's proposal makes sense, but we can use the admin
> > tool for this scenario as Boyang mentioned or follow up later, so I
> > prefer to keep this KIP unchanged to minimize the scope.
> >  Calling for vote ~
> >
> >  Thanks!
> >  Feyman
> >
> >  --
> >  From: Boyang Chen 
> >  Sent: Wednesday, April 8, 2020, 02:15
> >  To: dev 
> >  Subject: Re: [Vote] KIP-571: Add option to force remove
> > members in StreamsResetter
> >
> >  Hey Feyman,
> >
> >  I think Matthias' suggestion is optional, and we could just use admin
> tool
> >  to remove single static members as well.
> >
> >  Boyang
> >
> >  On Tue, Apr 7, 2020 at 11:00 AM Matthias J. Sax 
> wrote:
> >
> >  > > Would you mind elaborating why we still need that
> >  >
> >  > Sure.
> >  >
> >  > For static membership, the session timeout is usually set quite high.
> >  > This makes scaling in an application tricky: if you shut down one
> >  > instance, no rebalance would happen until `session.timeout.ms` hits.
> >  > This is specific to Kafka Streams, because when a Kafka Streams
> > client is
> >  > closed, it does _not_ send a `LeaveGroupRequest`. Hence, the
> >  > corresponding partitions would not be processed for a long time and
> >  > thus fall behind.
> >  >
> >  > Given that each instance will have a unique `instance.id` provided by
> >  > the user, we could allow users to remove the instance they want to
> >  > decommission from the consumer group without the need to wait for
> >  > `session.timeout.ms`.
> >  >
> >  > Hence, it's not an application reset scenario for which one wants to
> >  > remove all members, but a scaling-in scenario. For dynamic
> > membership,
> >  > this issue usually does not occur because the `session.timeout.ms` is
> >  > set to a fairly low value and a rebalance would happen quickly after
> > an
> >  > instance is decommissioned.
> >  >
> >  > Does this make sense?
> >  >
> >  > As said before, we may or may not include this in this KIP. It's up
> > to
> >  > you if you want to address it or not.
> >  >
> >  >
> >  > -Matthias
> >  >
> >  >
> >  >
> >  > On 4/7/20 7:12 AM, feyman2009 wrote:
> >  > > Hi, Matthias
> >  > > Thanks a lot!
> >  > > So you do not plan to support removing a _single static_
> > member via
> >  > `StreamsResetter`?
> >  > > =>
> >  > > Would you mind elaborating why we still need that if we
> > are
> >  > able to batch remove active members with adminClient?
> >  > >
> >  > > Thanks!
> >  > >
> >  > > Feyman
> >  > >  --
> >  > > From: Matthias J. Sax 
> >  > > Sent: Tuesday, April 7, 2020, 08:25
> >  > > To: dev 
> >  > > Subject: Re: [Vote] KIP-571: Add option to force remove
> > members
> >  > in StreamsResetter
> >  > >
> >  > > Overall LGTM.
> >  > >
> >  > > +1 (binding)
> >  > >
> >  > > So you do not plan to support removing a _single static_ member via
> >  > > `StreamsResetter`? We can of course still add this as a follow up
> > but it
> >  > > might be nice to just add it to this KIP right away. Up to you if
> > you
> >  > > want to include it or not.
> >  > >
> >  > >
> >  > > -Matthias
> >  > >
> >  > >
> >  > >
> >  > > On 3/30/20 8:16 AM, feyman2009 wrote:
> >  > >> Hi, Boyang
> >  > >> 
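The admin-tool route discussed in the thread above (removing a decommissioned static member right away instead of waiting for `session.timeout.ms` to expire) maps onto the Admin API added by KIP-345. The following is an illustrative sketch, not part of the KIP itself: it assumes a `kafka-clients` 2.4+ dependency and a reachable broker, and the bootstrap server, group id, and `group.instance.id` values are made-up placeholders.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class RemoveStaticMemberSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Remove one decommissioned instance by its group.instance.id,
            // triggering a rebalance without waiting for session.timeout.ms.
            MemberToRemove member = new MemberToRemove("streams-instance-1");
            admin.removeMembersFromConsumerGroup(
                "my-streams-app",
                new RemoveMembersFromConsumerGroupOptions(
                    Collections.singleton(member))
            ).all().get();
        }
    }
}
```

Passing several `MemberToRemove` objects in the collection removes a batch of static members in one call, which is the building block a forced reset can rely on.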

Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-04-10 Thread John Roesler
Hey Feyman,

Just to remove any ambiguity, I've been casually following the discussion, I've 
just looked at the KIP document again, and I'm still +1 (binding).

Thanks,
-John

On Fri, Apr 10, 2020, at 01:44, feyman2009 wrote:
> Hi, all
> KIP-571 has already collected 4 binding +1s (John, Guozhang, Bill, 
> Matthias) and 3 non-binding +1s (Boyang, Sophie), I will mark it as 
> approved and create a PR shortly.
> Thanks!
> 
> Feyman
> --
> From: feyman2009 
> Sent: Wednesday, April 8, 2020, 14:21
> To: dev ; Boyang Chen 
> Subject: Re: [Vote] KIP-571: Add option to force remove 
> members in StreamsResetter
> 
> Hi Boyang,
> Thanks for reminding me of that!
> I'm not sure about the convention, I thought it would need to 
> re-collect votes if the KIP has changed~
> Let's leave the vote thread here for 2 days, if no objection, I 
> will take it as approved and update the PR accordingly.
> 
> Thanks!
> Feyman
> 
> 
> 
> --
> From: Boyang Chen 
> Sent: Wednesday, April 8, 2020, 12:42
> To: dev ; feyman2009 
> Subject: Re: [Vote] KIP-571: Add option to force remove 
> members in StreamsResetter
> 
> You should already get enough votes if I'm counting correctly 
> (Guozhang, John, Matthias)
> On Tue, Apr 7, 2020 at 6:59 PM feyman2009 
>  wrote:
> Hi, Boyang
>  I think Matthias's proposal makes sense, but we can use the admin 
> tool for this scenario as Boyang mentioned or follow up later, so I 
> prefer to keep this KIP unchanged to minimize the scope.
>  Calling for vote ~
> 
>  Thanks!
>  Feyman
> 
>  --
>  From: Boyang Chen 
>  Sent: Wednesday, April 8, 2020, 02:15
>  To: dev 
>  Subject: Re: [Vote] KIP-571: Add option to force remove 
> members in StreamsResetter
> 
>  Hey Feyman,
> 
>  I think Matthias' suggestion is optional, and we could just use admin tool
>  to remove single static members as well.
> 
>  Boyang
> 
>  On Tue, Apr 7, 2020 at 11:00 AM Matthias J. Sax  wrote:
> 
>  > > Would you mind elaborating why we still need that
>  >
>  > Sure.
>  >
>  > For static membership, the session timeout is usually set quite high.
>  > This makes scaling in an application tricky: if you shut down one
>  > instance, no rebalance would happen until `session.timeout.ms` hits.
>  > This is specific to Kafka Streams, because when a Kafka Streams 
> client is
>  > closed, it does _not_ send a `LeaveGroupRequest`. Hence, the
>  > corresponding partitions would not be processed for a long time and
>  > thus fall behind.
>  >
>  > Given that each instance will have a unique `instance.id` provided by
>  > the user, we could allow users to remove the instance they want to
>  > decommission from the consumer group without the need to wait for
>  > `session.timeout.ms`.
>  >
>  > Hence, it's not an application reset scenario for which one wants to
>  > remove all members, but a scaling-in scenario. For dynamic 
> membership,
>  > this issue usually does not occur because the `session.timeout.ms` is
>  > set to a fairly low value and a rebalance would happen quickly after 
> an
>  > instance is decommissioned.
>  >
>  > Does this make sense?
>  >
>  > As said before, we may or may not include this in this KIP. It's up 
> to
>  > you if you want to address it or not.
>  >
>  >
>  > -Matthias
>  >
>  >
>  >
>  > On 4/7/20 7:12 AM, feyman2009 wrote:
>  > > Hi, Matthias
>  > > Thanks a lot!
>  > > So you do not plan to support removing a _single static_ 
> member via
>  > `StreamsResetter`?
>  > > =>
>  > > Would you mind elaborating why we still need that if we 
> are
>  > able to batch remove active members with adminClient?
>  > >
>  > > Thanks!
>  > >
>  > > Feyman
>  > >  --
>  > > From: Matthias J. Sax 
>  > > Sent: Tuesday, April 7, 2020, 08:25
>  > > To: dev 
>  > > Subject: Re: [Vote] KIP-571: Add option to force remove 
> members
>  > in StreamsResetter
>  > >
>  > > Overall LGTM.
>  > >
>  > > +1 (binding)
>  > >
>  > > So you do not plan to support removing a _single static_ member via
>  > > `StreamsResetter`? We can of course still add this as a follow up 
> but it
>  > > might be nice to just add it to this KIP right away. Up to you if 
> you
>  > > want to include it or not.
>  > >
>  > >
>  > > -Matthias
>  > >
>  > >
>  > >
>  > > On 3/30/20 8:16 AM, feyman2009 wrote:
>  > >> Hi, Boyang
>  > >> Thanks a lot, that makes sense, we should not expose member.id 
> in
>  > the MemberToRemove anymore, I have fixed it in the KIP.
>  > >>
>  > >>
>  > >> Feyman
>  > >> --
>  > >> From: Boyang Chen 
>  > >> Sent: Sunday, March 29, 2020, 01:45
>  > >> To: dev ; feyman2009 
>  > >> Subject: Re: [Vote] KIP-571: Add option to force remove 
> members in
>  > StreamsResetter
>  > >>
>  > >> 

Re: [kafka-clients] Re: [VOTE] 2.5.0 RC3

2020-04-10 Thread Manikumar
Hi David,

+1 (binding)

- Verified signatures, artifacts, and release notes
- Built from source, ran tests
- Ran the core/streams quick starts for the Scala 2.13 binary, ran a few manual tests
- Verified docs

As discussed offline, we need to add upgrade instructions to 2.5 docs.

Thanks for driving the release.

Thanks,
Manikumar

On Fri, Apr 10, 2020 at 7:53 PM Bill Bejeck  wrote:

> Hi David,
>
> +1 (non-binding) Verified signatures, built jars from source, ran unit and
> integration tests, all passed.
>
> Thanks for running the release.
>
> -Bill
>
> On Wed, Apr 8, 2020 at 10:10 AM David Arthur  wrote:
>
> > Passing Jenkins build on 2.5 branch:
> > https://builds.apache.org/job/kafka-2.5-jdk8/90/
> >
> > On Wed, Apr 8, 2020 at 12:03 AM David Arthur  wrote:
> >
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the fourth candidate for release of Apache Kafka 2.5.0.
> >>
> >> * TLS 1.3 support (1.2 is now the default)
> >> * Co-groups for Kafka Streams
> >> * Incremental rebalance for Kafka Consumer
> >> * New metrics for better operational insight
> >> * Upgrade Zookeeper to 3.5.7
> >> * Deprecate support for Scala 2.11
> >>
> >> Release notes for the 2.5.0 release:
> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
> >>
> >> *** Please download, test and vote by Friday April 10th 5pm PT
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> https://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>
> >> * Javadoc:
> >> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
> >>
> >> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
> >> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
> >>
> >> * Documentation:
> >> https://kafka.apache.org/25/documentation.html
> >>
> >> * Protocol:
> >> https://kafka.apache.org/25/protocol.html
> >>
> >> Successful Jenkins builds to follow
> >>
> >> Thanks!
> >> David
> >>
> >
> >
> > --
> > David Arthur
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To view this discussion on the web visit
> >
> https://groups.google.com/d/msgid/kafka-clients/CA%2B0Ze6oy4_Vw6B4M%3DoFtLvfk0OZAnioQW2u1xjgqe9r%3D3sC%2B5A%40mail.gmail.com
> > .
> >
>


[jira] [Created] (KAFKA-9851) Revoking Connect tasks due to connectivity issues should also clear running assignment

2020-04-10 Thread Konstantine Karantasis (Jira)
Konstantine Karantasis created KAFKA-9851:
-

 Summary: Revoking Connect tasks due to connectivity issues should 
also clear running assignment
 Key: KAFKA-9851
 URL: https://issues.apache.org/jira/browse/KAFKA-9851
 Project: Kafka
  Issue Type: Bug
Reporter: Konstantine Karantasis
Assignee: Konstantine Karantasis


https://issues.apache.org/jira/browse/KAFKA-9184 fixed an issue with workers 
continuing to run tasks even after they had lost connectivity with the broker 
coordinator and had detected that they were out of the group. 

 

However, because the revocation of tasks in this case is voluntary and does not 
come with an explicit assignment (containing revoked tasks) from the leader 
worker, the worker that quits running its tasks due to connectivity issues 
needs to also clear its running task assignment snapshot. 

This will allow for proper restart of the stopped tasks after the worker 
rejoins the group when connectivity returns and gets assigned the same 
connectors or tasks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9850) Move KStream#repartition operator validation during Topology build process

2020-04-10 Thread Levani Kokhreidze (Jira)
Levani Kokhreidze created KAFKA-9850:


 Summary: Move KStream#repartition operator validation during 
Topology build process 
 Key: KAFKA-9850
 URL: https://issues.apache.org/jira/browse/KAFKA-9850
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Levani Kokhreidze


The `KStream#repartition` operation performs most of its validation regarding 
joining, co-partitioning, etc., after the Kafka Streams instance has started. 
Some of this validation can be performed much earlier, specifically during 
topology `build()`.
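For context, here is a sketch of the kind of topology this affects: two repartitioned streams with mismatched partition counts, which breaks co-partitioning but is currently only detected after `KafkaStreams#start()` rather than during `build()`. This is illustrative only; it assumes the `KStream#repartition` API from KIP-221, and the topic names and partition counts are made-up placeholders.

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Repartitioned;

public class RepartitionValidationSketch {
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> left =
            builder.<String, String>stream("left-topic")
                   .repartition(Repartitioned.<String, String>numberOfPartitions(4));
        KStream<String, String> right =
            builder.<String, String>stream("right-topic")
                   .repartition(Repartitioned.<String, String>numberOfPartitions(8));
        // Mismatched partition counts break co-partitioning of the join,
        // but today this only fails at runtime, not during build().
        left.join(right,
                  (l, r) -> l + r,
                  JoinWindows.of(Duration.ofMinutes(5)))
            .to("joined-topic");
        return builder.build();
    }
}
```

Catching the mismatch inside `builder.build()` would fail fast at topology construction instead of after deployment.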



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6910) Ability to specify a default state store type or factory

2020-04-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6910.

Resolution: Duplicate

> Ability to specify a default state store type or factory
> 
>
> Key: KAFKA-6910
> URL: https://issues.apache.org/jira/browse/KAFKA-6910
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 1.1.0, 1.1.1
>Reporter: Antony Stubbs
>Priority: Major
>
> For large projects, it's a huge pain and not really practical at all to use 
> a custom state store everywhere just to use in-memory stores or avoid RocksDB, for 
> example for running a test suite on Windows.
> It would be great to be able to set a global config for KS so that it uses a 
> different state store implementation everywhere.
> Blocked by KAFKA-4730 - Streams does not have an in-memory windowed store. 
> Also blocked by not having an in-memory session store implementation. A 
> simple in-memory window and session store that's not suitable for 
> production would still be very useful for running test suites.
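Until such a global default exists, the in-memory store choice has to be repeated at every stateful operator, which is exactly the pain point this ticket describes. A sketch of that per-store workaround under the 2.x Streams DSL (the topic and store names are placeholders):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class InMemoryStoreSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // The in-memory store supplier must be spelled out per operator;
        // a default state store config would make this a one-liner globally.
        builder.table(
            "input-topic",
            Materialized.<String, String>as(Stores.inMemoryKeyValueStore("my-store"))
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));
        builder.build();
    }
}
```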



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [kafka-clients] Re: [VOTE] 2.5.0 RC3

2020-04-10 Thread Bill Bejeck
Hi David,

+1 (non-binding) Verified signatures, built jars from source, ran unit and
integration tests, all passed.

Thanks for running the release.

-Bill

On Wed, Apr 8, 2020 at 10:10 AM David Arthur  wrote:

> Passing Jenkins build on 2.5 branch:
> https://builds.apache.org/job/kafka-2.5-jdk8/90/
>
> On Wed, Apr 8, 2020 at 12:03 AM David Arthur  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the fourth candidate for release of Apache Kafka 2.5.0.
>>
>> * TLS 1.3 support (1.2 is now the default)
>> * Co-groups for Kafka Streams
>> * Incremental rebalance for Kafka Consumer
>> * New metrics for better operational insight
>> * Upgrade Zookeeper to 3.5.7
>> * Deprecate support for Scala 2.11
>>
>> Release notes for the 2.5.0 release:
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/RELEASE_NOTES.html
>>
>> *** Please download, test and vote by Friday April 10th 5pm PT
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> https://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>>
>> * Javadoc:
>> https://home.apache.org/~davidarthur/kafka-2.5.0-rc3/javadoc/
>>
>> * Tag to be voted upon (off 2.5 branch) is the 2.5.0 tag:
>> https://github.com/apache/kafka/releases/tag/2.5.0-rc3
>>
>> * Documentation:
>> https://kafka.apache.org/25/documentation.html
>>
>> * Protocol:
>> https://kafka.apache.org/25/protocol.html
>>
>> Successful Jenkins builds to follow
>>
>> Thanks!
>> David
>>
>
>
> --
> David Arthur
>


Build failed in Jenkins: kafka-2.1-jdk8 #264

2020-04-10 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9654; Update epoch in `ReplicaAlterLogDirsThread` after new

[jason] KAFKA-9750; Fix race condition with log dir reassign completion (#8412)


--
[...truncated 925.54 KB...]
kafka.log.LogCleanerTest > testSegmentGroupingFollowingLoadOfZeroIndex STARTED

kafka.log.LogCleanerTest > testSegmentGroupingFollowingLoadOfZeroIndex PASSED

kafka.log.LogCleanerTest > testLogToCleanWithUncleanableSection STARTED

kafka.log.LogCleanerTest > testLogToCleanWithUncleanableSection PASSED

kafka.log.LogCleanerTest > testBuildPartialOffsetMap STARTED

kafka.log.LogCleanerTest > testBuildPartialOffsetMap PASSED

kafka.log.LogCleanerTest > testCleaningWithUnkeyedMessages STARTED

kafka.log.LogCleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.LogCleanerTest > testSegmentWithOffsetOverflow STARTED

kafka.log.LogCleanerTest > testSegmentWithOffsetOverflow PASSED

kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion 
STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #1344

2020-04-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix JavaDocs markup (#8459)

[github] MINOR: Only start log dir fetcher after LeaderAndIsr epoch validation


--
[...truncated 3.03 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task 

Build failed in Jenkins: kafka-2.2-jdk8 #44

2020-04-10 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9654; Update epoch in `ReplicaAlterLogDirsThread` after new

[jason] KAFKA-9750; Fix race condition with log dir reassign completion (#8412)


--
[...truncated 2.59 MB...]

kafka.utils.CommandLineUtilsTest > testParseArgs STARTED

kafka.utils.CommandLineUtilsTest > testParseArgs PASSED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters STARTED

kafka.utils.CommandLineUtilsTest > testParseArgsWithMultipleDelimiters PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultValueIfNotExist PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgWithNoDelimiter PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsDefaultOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED


Jenkins build is back to normal : kafka-trunk-jdk8 #4422

2020-04-10 Thread Apache Jenkins Server
See 




Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

2020-04-10 Thread feyman2009
Hi, all
KIP-571 has already collected 4 binding +1s (John, Guozhang, Bill, Matthias) 
and 3 non-binding +1s (Boyang, Sophie), so I will mark it as approved and create a 
PR shortly.
Thanks!

Feyman
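For readers following the thread from the archive: once the KIP-571 change lands, the new force-removal behavior would be invoked through the Streams application reset tool roughly as below. This is a hedged sketch based on the KIP discussion, not the merged implementation; the exact flag name, script arguments, and topic/application names are assumptions.

```shell
# Hypothetical invocation after KIP-571 is merged: force-remove all active
# members of the application's consumer group before resetting offsets.
# --force is the option the KIP proposes; the other flags already exist
# on the reset tool, but values here (app id, broker, topic) are examples.
bin/kafka-streams-application-reset.sh \
  --application-id my-streams-app \
  --bootstrap-servers localhost:9092 \
  --input-topics my-input-topic \
  --force
```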
--
From: feyman2009
Sent: Wednesday, April 8, 2020, 14:21
To: dev; Boyang Chen
Subject: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

Hi Boyang,
Thanks for reminding me of that!
I'm not sure about the convention; I thought votes would need to be re-collected 
if the KIP has changed.
Let's leave the vote thread open for 2 days; if there is no objection, I will take 
it as approved and update the PR accordingly.

Thanks!
Feyman



--
From: Boyang Chen
Sent: Wednesday, April 8, 2020, 12:42
To: dev; feyman2009
Subject: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

You should already have enough votes if I'm counting correctly (Guozhang, John, 
Matthias).
On Tue, Apr 7, 2020 at 6:59 PM feyman2009  wrote:
Hi, Boyang
 I think Matthias's proposal makes sense, but we can use the admin tool for 
this scenario, as Boyang mentioned, or follow up later, so I prefer to keep this 
KIP unchanged to minimize the scope.
 Calling for votes ~

 Thanks!
 Feyman

 --
 From: Boyang Chen
 Sent: Wednesday, April 8, 2020, 02:15
 To: dev
 Subject: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter

 Hey Feyman,

 I think Matthias' suggestion is optional, and we could just use the admin tool
 to remove single static members as well.

 Boyang

 On Tue, Apr 7, 2020 at 11:00 AM Matthias J. Sax  wrote:

 > > Would you mind elaborating on why we still need that
 >
 > Sure.
 >
 > For static membership, the session timeout is usually set quite high.
 > This makes scaling in an application tricky: if you shut down one
 > instance, no rebalance would happen until `session.timeout.ms` hits.
 > This is specific to Kafka Streams, because when a Kafka Streams client is
 > closed, it does _not_ send a `LeaveGroupRequest`. Hence, the
 > corresponding partitions would not be processed for a long time and
 > thus, fall behind.
 >
 > Given that each instance will have a unique `instance.id` provided by
 > the user, we could allow users to remove the instance they want to
 > decommission from the consumer group without the need to wait for
 > `session.timeout.ms`.
 >
 > Hence, it's not an application reset scenario for which one wants to
 > remove all members, but a scaling-in scenario. For dynamic membership,
 > this issue usually does not occur because the `session.timeout.ms` is
 > set to a fairly low value and a rebalance would happen quickly after an
 > instance is decommissioned.
 >
 > Does this make sense?
 >
 > As said before, we may or may not include this in this KIP. It's up to
 > you if you want to address it or not.
 >
 >
 > -Matthias
 >
 >
 >
 > On 4/7/20 7:12 AM, feyman2009 wrote:
 > > Hi, Matthias
 > > Thanks a lot!
 > > So you do not plan to support removing a _single static_ member via
 > > `StreamsResetter`?
 > > =>
 > > Would you mind elaborating on why we still need that if we are
 > > able to batch remove active members with adminClient?
 > >
 > > Thanks!
 > >
 > > Feyman
 > >  --
 > > From: Matthias J. Sax
 > > Sent: Tuesday, April 7, 2020, 08:25
 > > To: dev
 > > Subject: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter
 > >
 > > Overall LGTM.
 > >
 > > +1 (binding)
 > >
 > > So you do not plan to support removing a _single static_ member via
 > > `StreamsResetter`? We can of course still add this as a follow up but it
 > > might be nice to just add it to this KIP right away. Up to you if you
 > > want to include it or not.
 > >
 > >
 > > -Matthias
 > >
 > >
 > >
 > > On 3/30/20 8:16 AM, feyman2009 wrote:
 > >> Hi, Boyang
 > >> Thanks a lot, that makes sense; we should not expose member.id in
 > >> MemberToRemove anymore. I have fixed it in the KIP.
 > >>
 > >>
 > >> Feyman
 > >> --
 > >> From: Boyang Chen
 > >> Sent: Sunday, March 29, 2020, 01:45
 > >> To: dev; feyman2009
 > >> Subject: Re: [Vote] KIP-571: Add option to force remove members in StreamsResetter
 > >>
 > >> Hey Feyman,
 > >>
 > >> thanks for the update. I assume we would rely entirely on the internal
 > >> changes for `removeMemberFromGroup` by sending a DescribeGroup request
 > >> first. With that in mind, I don't think we need to add member.id to
 > >> MemberToRemove anymore, as it is public-facing and users will only
 > >> configure group.instance.id?
 > >> On Fri, Mar 27, 2020 at 5:04 PM feyman2009 wrote:
 > >> Bump, can anyone kindly take a look at the updated KIP-571? Thanks!
 > >>
 > >>
 > >>
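The AdminClient path discussed throughout this thread (removing a static member by its `group.instance.id` instead of waiting for `session.timeout.ms`) can be sketched as below. This assumes the Kafka 2.4+ admin API (`MemberToRemove`, `removeMembersFromConsumerGroup`); the group id, instance id, and broker address are placeholder values, and a real run requires a reachable broker, so treat it as illustrative only.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class RemoveStaticMemberSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Remove the instance identified by its group.instance.id without
            // waiting for session.timeout.ms to expire (scaling-in scenario).
            RemoveMembersFromConsumerGroupOptions options =
                new RemoveMembersFromConsumerGroupOptions(
                    Collections.singleton(new MemberToRemove("instance-1")));
            admin.removeMembersFromConsumerGroup("my-streams-app", options)
                 .all()
                 .get(); // blocks until the broker confirms the removal
        }
    }
}
```

Note that the thread's conclusion is reflected here: only the public `group.instance.id` is exposed through `MemberToRemove`; the internal `member.id` lookup happens broker-side.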