Jenkins build is back to normal : kafka-2.2-jdk8 #17

2019-02-14 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread James Cheng
Congrats, Randall! Well deserved!

-James

Sent from my iPhone

> On Feb 14, 2019, at 6:16 PM, Guozhang Wang  wrote:
> 
> Hello all,
> 
> The PMC of Apache Kafka is happy to announce another new committer joining
> the project today: we have invited Randall Hauch as a project committer and
> he has accepted.
> 
> Randall has been participating in the Kafka community for the past 3 years,
> and is well known as the founder of the Debezium project, a popular project
> for database change-capture streams using Kafka (https://debezium.io). More
> recently he has become the main person keeping Kafka Connect moving
> forward, participated in nearly all KIP discussions and QAs on the mailing
> list. He's authored 6 KIPs and authored 50 pull requests and conducted over
> a hundred reviews around Kafka Connect, and has also been evangelizing
> Kafka Connect at several Kafka Summit venues.
> 
> 
> Thank you very much for your contributions to the Connect community Randall
> ! And looking forward to many more :)
> 
> 
> Guozhang, on behalf of the Apache Kafka PMC


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread Gurudatt Kulkarni
Congratulations Randall!

On Friday, February 15, 2019, Vahid Hashemian 
wrote:
> Congrats Randall!
>
> On Thu, Feb 14, 2019, 19:44 Ismael Juma  wrote:
>
>> Congratulations Randall!
>>
>> On Thu, Feb 14, 2019, 6:16 PM Guozhang Wang >
>> > Hello all,
>> >
>> > The PMC of Apache Kafka is happy to announce another new committer
>> joining
>> > the project today: we have invited Randall Hauch as a project committer
>> and
>> > he has accepted.
>> >
>> > Randall has been participating in the Kafka community for the past 3
>> years,
>> > and is well known as the founder of the Debezium project, a popular
>> project
>> > for database change-capture streams using Kafka (https://debezium.io).
>> > More
>> > recently he has become the main person keeping Kafka Connect moving
>> > forward, participated in nearly all KIP discussions and QAs on the
>> mailing
>> > list. He's authored 6 KIPs and authored 50 pull requests and conducted
>> over
>> > a hundred reviews around Kafka Connect, and has also been evangelizing
>> > Kafka Connect at several Kafka Summit venues.
>> >
>> >
>> > Thank you very much for your contributions to the Connect community
>> Randall
>> > ! And looking forward to many more :)
>> >
>> >
>> > Guozhang, on behalf of the Apache Kafka PMC
>> >
>>
>


Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-14 Thread Jason Gustafson
Ran the quickstart against the 2.11 artifact and checked the release notes.
For some reason, KAFKA-7897 is not included in the notes, though I
definitely see it in the tagged version. The RC was probably created before
the JIRA was resolved. I think we can regenerate without another RC, so +1
from me.

Thanks Colin!

On Thu, Feb 14, 2019 at 3:32 PM Jun Rao  wrote:

> Hi, Colin,
>
> Thanks for running the release. Verified the quickstart for 2.12 binary. +1
> from me.
>
> Jun
>
> On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:
>
> > Hi all,
> >
> > This is the third candidate for release of Apache Kafka 2.1.1.  This
> > release includes many bug fixes for Apache Kafka 2.1.
> >
> > Compared to rc1, this release includes the following changes:
> > * MINOR: release.py: fix some compatibility problems.
> > * KAFKA-7897; Disable leader epoch cache when older message formats are
> > used
> > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
> > login fails
> > * MINOR: Fix more places where the version should be bumped from 2.1.0 ->
> > 2.1.1
> > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if the
> > hostname of the broker changes.
> > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
> > * MINOR: Correctly set dev version in version.py
> >
> > Check out the release notes here:
> > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
> >
> > The vote will go until Wednesday, February 13th.
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
> >
> > * Jenkins builds for the 2.1 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
> >
> > Thanks to everyone who tested the earlier RCs.
> >
> > cheers,
> > Colin
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit
> >
> https://groups.google.com/d/msgid/kafka-clients/ea314ca1-d23a-47c4-8fc7-83b9b1c792db%40www.fastmail.com
> > .
> > For more options, visit https://groups.google.com/d/optout.
> >
>


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread Vahid Hashemian
Congrats Randall!

On Thu, Feb 14, 2019, 19:44 Ismael Juma  wrote:

> Congratulations Randall!
>
> On Thu, Feb 14, 2019, 6:16 PM Guozhang Wang 
> > Hello all,
> >
> > The PMC of Apache Kafka is happy to announce another new committer
> joining
> > the project today: we have invited Randall Hauch as a project committer
> and
> > he has accepted.
> >
> > Randall has been participating in the Kafka community for the past 3
> years,
> > and is well known as the founder of the Debezium project, a popular
> project
> > for database change-capture streams using Kafka (https://debezium.io).
> > More
> > recently he has become the main person keeping Kafka Connect moving
> > forward, participated in nearly all KIP discussions and QAs on the
> mailing
> > list. He's authored 6 KIPs and authored 50 pull requests and conducted
> over
> > a hundred reviews around Kafka Connect, and has also been evangelizing
> > Kafka Connect at several Kafka Summit venues.
> >
> >
> > Thank you very much for your contributions to the Connect community
> Randall
> > ! And looking forward to many more :)
> >
> >
> > Guozhang, on behalf of the Apache Kafka PMC
> >
>


Re: OfflinePartitionLeaderElection improvement?

2019-02-14 Thread hacker win7
It seems like this proposal sits somewhere between unclean and clean leader
election; maybe we need to add a new election policy for it?


— hackerwin7
— hackersw...@gmail.com

> On Feb 15, 2019, at 07:50, Ming Liu  wrote:
> 
> Hi Kafka community,
>   I would like to propose a small change related to
> OfflinePartitionLeaderElectionStrategy.
>   In our system, we usually have RF = 3, Min_ISR = 2,
> unclean.leader.election = false, and clients usually set acks=all when
> publishing. We have observed that occasionally, when a disk goes bad, a
> partition goes offline and stays in the offline state, which of course
> causes an availability issue, and we have to manually set
> unclean.leader.election = true to bring the partition back online.
>   Partitions going offline due to disk failures have become a huge
> operational pain for us.
> 
>   Looking into it, the sequence of events is:
>   1. First, the ISR for that partition drops to 1 (maybe the bad disk
> causes the broker to respond to fetches more slowly; note that a dead disk
> doesn't cause this every time, but occasionally)
>   2. Then the disk completely gives up, and the failure takes the leader
> replica offline
>   3. Because the ISR is 1, OfflinePartitionLeaderElectionStrategy won't
> choose a leader if unclean.leader.election = false.
> 
>   The observation here is that, in this case, even though the last failed
> replica is not in the ISR, it should still have the same HW as the failed
> leader replica. So OfflinePartitionLeaderElectionStrategy should select the
> last failed replica as the leader, especially if it has the same HW.
> 
>   So the proposal is:
>   1. Choose a replica as the leader if it has the same HW (even if it is
> not in the ISR)
>   2. Further, when unclean.leader.election = true, choose the replica with
> the highest HW as the leader.
> 
>   Let me know if this makes sense or if you have any suggestions. If yes,
> I will create a JIRA and work on it.
> 
>   Thanks!
>   Ming
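
For reference, a minimal Java sketch of the setup described above (RF = 3,
min.insync.replicas = 2, unclean leader election disabled, and an acks=all
producer). The broker address, topic name, and record contents are
placeholders rather than anything from the thread:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.TopicConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinIsrSetupSketch {
    public static void main(String[] args) throws Exception {
        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(adminProps)) {
            // RF = 3; with acks=all, writes are rejected if fewer than 2 replicas are in sync.
            NewTopic topic = new NewTopic("events", 12, (short) 3);
            Map<String, String> configs = new HashMap<>();
            configs.put(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2");
            configs.put(TopicConfig.UNCLEAN_LEADER_ELECTION_ENABLE_CONFIG, "false");
            topic.configs(configs);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        producerProps.put(ProducerConfig.ACKS_CONFIG, "all");  // wait for the full min ISR
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (Producer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("events", "key", "value")).get();
        }
    }
}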



Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread Ismael Juma
Congratulations Randall!

On Thu, Feb 14, 2019, 6:16 PM Guozhang Wang wrote:

> Hello all,
>
> The PMC of Apache Kafka is happy to announce another new committer joining
> the project today: we have invited Randall Hauch as a project committer and
> he has accepted.
>
> Randall has been participating in the Kafka community for the past 3 years,
> and is well known as the founder of the Debezium project, a popular project
> for database change-capture streams using Kafka (https://debezium.io).
> More
> recently he has become the main person keeping Kafka Connect moving
> forward, participated in nearly all KIP discussions and QAs on the mailing
> list. He's authored 6 KIPs and authored 50 pull requests and conducted over
> a hundred reviews around Kafka Connect, and has also been evangelizing
> Kafka Connect at several Kafka Summit venues.
>
>
> Thank you very much for your contributions to the Connect community Randall
> ! And looking forward to many more :)
>
>
> Guozhang, on behalf of the Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread Bill Bejeck
Congrats Randall!

-Bill

On Thu, Feb 14, 2019 at 9:17 PM Guozhang Wang  wrote:

> Hello all,
>
> The PMC of Apache Kafka is happy to announce another new committer joining
> the project today: we have invited Randall Hauch as a project committer and
> he has accepted.
>
> Randall has been participating in the Kafka community for the past 3 years,
> and is well known as the founder of the Debezium project, a popular project
> for database change-capture streams using Kafka (https://debezium.io).
> More
> recently he has become the main person keeping Kafka Connect moving
> forward, participated in nearly all KIP discussions and QAs on the mailing
> list. He's authored 6 KIPs and authored 50 pull requests and conducted over
> a hundred reviews around Kafka Connect, and has also been evangelizing
> Kafka Connect at several Kafka Summit venues.
>
>
> Thank you very much for your contributions to the Connect community Randall
> ! And looking forward to many more :)
>
>
> Guozhang, on behalf of the Apache Kafka PMC
>


[ANNOUNCE] New Committer: Randall Hauch

2019-02-14 Thread Guozhang Wang
Hello all,

The PMC of Apache Kafka is happy to announce another new committer joining
the project today: we have invited Randall Hauch as a project committer and
he has accepted.

Randall has been participating in the Kafka community for the past 3 years,
and is well known as the founder of the Debezium project, a popular project
for database change-capture streams using Kafka (https://debezium.io). More
recently he has become the main person keeping Kafka Connect moving
forward, participating in nearly all KIP discussions and Q&As on the mailing
list. He has authored 6 KIPs, contributed 50 pull requests, and conducted
over a hundred reviews around Kafka Connect, and has also been evangelizing
Kafka Connect at several Kafka Summit venues.


Thank you very much for your contributions to the Connect community, Randall!
And looking forward to many more :)


Guozhang, on behalf of the Apache Kafka PMC


Re: [VOTE] KIP-428: Add in-memory window store

2019-02-14 Thread Guozhang Wang
+1 (binding).

On Thu, Feb 14, 2019 at 4:07 PM Matthias J. Sax 
wrote:

> +1 (binding)
>
>
> -Matthias
>
> On 2/14/19 3:36 PM, Sophie Blee-Goldman wrote:
> > Hi all,
> >
> > I would like to call for a vote on KIP-428 regarding adding an in-memory
> > version of the window store.
> >
> > The KIP can be found here:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-428%3A+Add+in-memory+window+store
> >
> > Cheers,
> > Sophie
> >
>
>

-- 
-- Guozhang


Build failed in Jenkins: kafka-2.2-jdk8 #16

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: drop dbAccessor reference on close (#6254)

[jason] MINOR: Make MockClient#poll() more thread-safe (#5942)

--
[...truncated 2.49 MB...]
kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
STARTED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted STARTED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerDetectsBouncedBrokers STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerDetectsBouncedBrokers PASSED

kafka.controller.ControllerIntegrationTest > testControlledShutdown STARTED

kafka.controller.ControllerIntegrationTest > testControlledShutdown PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown STARTED

kafka.controller.ControllerIntegrationTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #3387

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[mjsax] KAFKA-6474: Rewrite tests to use new public TopologyTestDriver [part 4]

--
[...truncated 2.30 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] KIP-428: Add in-memory window store

2019-02-14 Thread Matthias J. Sax
+1 (binding)


-Matthias

On 2/14/19 3:36 PM, Sophie Blee-Goldman wrote:
> Hi all,
> 
> I would like to call for a vote on KIP-428 regarding adding an in-memory
> version of the window store.
> 
> The KIP can be found here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-428%3A+Add+in-memory+window+store
> 
> Cheers,
> Sophie
> 



signature.asc
Description: OpenPGP digital signature


Re: [DISCUSS] KIP-428: Add in-memory window store

2019-02-14 Thread Matthias J. Sax
Thanks for the KIP Sophie. Overall LGTM.

One typo: "that have not yet that their iterator closed"

I also added the KIP to
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams


-Matthias

On 2/14/19 3:29 PM, Guozhang Wang wrote:
> Made another pass over the KIP page, lgtm!
> 
> On Thu, Feb 14, 2019 at 3:05 PM Sophie Blee-Goldman 
> wrote:
> 
>> Cleaned up the KIP, please take another look and if all seems good will
>> call for a vote since there seem to be no strong opinions against.
>>
>> On Wed, Feb 13, 2019 at 11:45 PM Guozhang Wang  wrote:
>>
>>> Hi Sophie,
>>>
>>> Thanks for the KIP write-up, I made a pass over the wiki and the PR as
>>> well, here's some comments:
>>>
>>> 1. the proposed API seems to be inconsistent from the PR, should it be:
>>>
>>> public static WindowBytesStoreSupplier inMemoryWindowStore(final String
>>> name,
>>>
>>>final Duration retentionPeriod,
>>>
>>>final Duration windowSize,
>>> +
>>>final boolean retainDuplicates) throws
>>> IllegalArgumentException ...
>>> -
>>> final Duration gracePeriod
>>>
>>> 2. As Boyang mentioned, we usually do not need to elaborate on the
>> internal
>>> implementation in the KIP, unless it has some user-facing implications.
>> As
>>> for this specific KIP, I think it make more sense to talk about what
>> memory
>>> footprint users would expect to have with the implementation: should they
>>> be expecting exact number of bytes used for key-value pairs only, or
>> should
>>> they expect some additional memory used for maintaining the window data
>>> structures.
>>>
>>>
>>>
>>> Guozhang
>>>
>>>
>>>
>>>
>>> On Fri, Feb 8, 2019 at 4:21 AM Dongjin Lee  wrote:
>>>
 Thanks for the KIP, Sophie. I added your KIP into the 'Under
>> Discussion'
 section here
 <

>>>
>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
 .

 I am +1 for this proposal for reaching parity between key-value store
>> and
 windowed store.

 Thanks,
 Dongjin

 On Fri, Feb 8, 2019 at 1:41 PM Boyang Chen 
>> wrote:

> Thanks Sophie for proposing this new feature! In-memory window store
>> is
> very useful in long term. One meta comment is that we don't need to
 include
> implementation details in the public interface section, and those
> validation steps are pretty trivial.
>
> Boyang
>
> 
> From: Sophie Blee-Goldman 
> Sent: Friday, February 8, 2019 9:56 AM
> To: dev@kafka.apache.org
> Subject: [DISCUSS] KIP-428: Add in-memory window store
>
> Streams currently only has support for a RocksDB window store, but
>>> users
> have been requesting an in-memory version. This KIP introduces a
>> design
 for
> an in-memory window store implementation.
>
>
>

>>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-428%3A+Add+in-memory+window+store
>


 --
 *Dongjin Lee*

 *A hitchhiker in the mathematical world.*
 *github:  github.com/dongjinleekr
 linkedin:
>>> kr.linkedin.com/in/dongjinleekr
 speakerdeck:
 speakerdeck.com/dongjin
 *

>>>
>>>
>>> --
>>> -- Guozhang
>>>
>>
> 
> 



signature.asc
Description: OpenPGP digital signature
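
To make the proposed API above concrete, here is a small sketch of how the
supplier from the KIP could be handed to a windowed aggregation through
Materialized.as(WindowBytesStoreSupplier), which is existing Streams API.
Stores.inMemoryWindowStore is the factory proposed by the KIP (not released
at the time of this thread), and the topic name, store name, and durations
are placeholders:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowBytesStoreSupplier;

public class InMemoryWindowStoreSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Factory proposed by KIP-428: an in-memory (non-RocksDB) window store supplier.
        WindowBytesStoreSupplier supplier = Stores.inMemoryWindowStore(
                "clicks-per-minute",          // store name
                Duration.ofHours(1),          // retention period
                Duration.ofMinutes(1),        // window size
                false);                       // retainDuplicates

        builder.<String, String>stream("clicks")
               .groupByKey()
               .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
               .count(Materialized.<String, Long>as(supplier)
                                  .withKeySerde(Serdes.String())
                                  .withValueSerde(Serdes.Long()));

        // builder.build() would then be passed to new KafkaStreams(...) as usual.
    }
}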


OfflinePartitionLeaderElection improvement?

2019-02-14 Thread Ming Liu
Hi Kafka community,
   I would like to propose a small change related to
OfflinePartitionLeaderElectionStrategy.
   In our system, we usually have RF = 3, Min_ISR = 2,
unclean.leader.election = false, and clients usually set acks=all when
publishing. We have observed that occasionally, when a disk goes bad, a
partition goes offline and stays in the offline state, which of course
causes an availability issue, and we have to manually set
unclean.leader.election = true to bring the partition back online.
   Partitions going offline due to disk failures have become a huge
operational pain for us.

   Looking into it, the sequence of events is:
   1. First, the ISR for that partition drops to 1 (maybe the bad disk
causes the broker to respond to fetches more slowly; note that a dead disk
doesn't cause this every time, but occasionally)
   2. Then the disk completely gives up, and the failure takes the leader
replica offline
   3. Because the ISR is 1, OfflinePartitionLeaderElectionStrategy won't
choose a leader if unclean.leader.election = false.

   The observation here is that, in this case, even though the last failed
replica is not in the ISR, it should still have the same HW as the failed
leader replica. So OfflinePartitionLeaderElectionStrategy should select the
last failed replica as the leader, especially if it has the same HW.

   So the proposal is:
   1. Choose a replica as the leader if it has the same HW (even if it is
not in the ISR)
   2. Further, when unclean.leader.election = true, choose the replica with
the highest HW as the leader.

   Let me know if this makes sense or if you have any suggestions. If yes,
I will create a JIRA and work on it.

   Thanks!
   Ming
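
A rough, illustrative Java sketch of the selection rule proposed above. This
is not the actual controller code; the ReplicaState type, the lastLeaderHw
value, and electLeader itself are hypothetical stand-ins for whatever the
controller actually tracks:

import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.Set;

final class OfflineLeaderElectionSketch {

    static final class ReplicaState {          // hypothetical helper type
        final int brokerId;
        final long hw;                         // this replica's high watermark
        final boolean alive;
        ReplicaState(int brokerId, long hw, boolean alive) {
            this.brokerId = brokerId; this.hw = hw; this.alive = alive;
        }
    }

    static Optional<Integer> electLeader(List<ReplicaState> assignment,
                                         Set<Integer> isr,
                                         long lastLeaderHw,
                                         boolean uncleanElectionEnabled) {
        // 1. Prefer a live replica that is still in the ISR (today's clean election).
        for (ReplicaState r : assignment) {
            if (r.alive && isr.contains(r.brokerId)) return Optional.of(r.brokerId);
        }
        // 2. Proposed addition: a live replica outside the ISR whose HW matches the
        //    failed leader's HW has not lost data, so allow it to become leader.
        for (ReplicaState r : assignment) {
            if (r.alive && r.hw == lastLeaderHw) return Optional.of(r.brokerId);
        }
        // 3. Proposed refinement for unclean election: pick the live replica with the
        //    highest HW instead of an arbitrary live replica.
        if (uncleanElectionEnabled) {
            return assignment.stream()
                    .filter(r -> r.alive)
                    .max(Comparator.comparingLong(r -> r.hw))
                    .map(r -> r.brokerId);
        }
        return Optional.empty();               // partition stays offline
    }
}

The key idea is step 2: a live replica whose high watermark matches the failed
leader's is treated as safe, so availability is restored without the data-loss
risk that blanket unclean election carries.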


[VOTE] KIP-428: Add in-memory window store

2019-02-14 Thread Sophie Blee-Goldman
Hi all,

I would like to call for a vote on KIP-428 regarding adding an in-memory
version of the window store.

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-428%3A+Add+in-memory+window+store

Cheers,
Sophie


Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-14 Thread Jun Rao
Hi, Colin,

Thanks for running the release. Verified the quickstart for 2.12 binary. +1
from me.

Jun

On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:

> Hi all,
>
> This is the third candidate for release of Apache Kafka 2.1.1.  This
> release includes many bug fixes for Apache Kafka 2.1.
>
> Compared to rc1, this release includes the following changes:
> * MINOR: release.py: fix some compatibility problems.
> * KAFKA-7897; Disable leader epoch cache when older message formats are
> used
> * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
> login fails
> * MINOR: Fix more places where the version should be bumped from 2.1.0 ->
> 2.1.1
> * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if the
> hostname of the broker changes.
> * KAFKA-7873; Always seek to beginning in KafkaBasedLog
> * MINOR: Correctly set dev version in version.py
>
> Check out the release notes here:
> http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
>
> The vote will go until Wednesday, February 13th.
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
>
> * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
> https://github.com/apache/kafka/releases/tag/2.1.1-rc2
>
> * Jenkins builds for the 2.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
>
> Thanks to everyone who tested the earlier RCs.
>
> cheers,
> Colin
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/ea314ca1-d23a-47c4-8fc7-83b9b1c792db%40www.fastmail.com
> .
> For more options, visit https://groups.google.com/d/optout.
>


Re: [DISCUSS] KIP-428: Add in-memory window store

2019-02-14 Thread Guozhang Wang
Made another pass over the KIP page, lgtm!

On Thu, Feb 14, 2019 at 3:05 PM Sophie Blee-Goldman 
wrote:

> Cleaned up the KIP, please take another look and if all seems good will
> call for a vote since there seem to be no strong opinions against.
>
> On Wed, Feb 13, 2019 at 11:45 PM Guozhang Wang  wrote:
>
> > Hi Sophie,
> >
> > Thanks for the KIP write-up, I made a pass over the wiki and the PR as
> > well, here's some comments:
> >
> > 1. the proposed API seems to be inconsistent from the PR, should it be:
> >
> > public static WindowBytesStoreSupplier inMemoryWindowStore(final String
> > name,
> >
> >final Duration retentionPeriod,
> >
> >final Duration windowSize,
> > +
> >final boolean retainDuplicates) throws
> > IllegalArgumentException ...
> > -
> > final Duration gracePeriod
> >
> > 2. As Boyang mentioned, we usually do not need to elaborate on the
> internal
> > implementation in the KIP, unless it has some user-facing implications.
> As
> > for this specific KIP, I think it make more sense to talk about what
> memory
> > footprint users would expect to have with the implementation: should they
> > be expecting exact number of bytes used for key-value pairs only, or
> should
> > they expect some additional memory used for maintaining the window data
> > structures.
> >
> >
> >
> > Guozhang
> >
> >
> >
> >
> > On Fri, Feb 8, 2019 at 4:21 AM Dongjin Lee  wrote:
> >
> > > Thanks for the KIP, Sophie. I added your KIP into the 'Under
> Discussion'
> > > section here
> > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > > >
> > > .
> > >
> > > I am +1 for this proposal for reaching parity between key-value store
> and
> > > windowed store.
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > On Fri, Feb 8, 2019 at 1:41 PM Boyang Chen 
> wrote:
> > >
> > > > Thanks Sophie for proposing this new feature! In-memory window store
> is
> > > > very useful in long term. One meta comment is that we don't need to
> > > include
> > > > implementation details in the public interface section, and those
> > > > validation steps are pretty trivial.
> > > >
> > > > Boyang
> > > >
> > > > 
> > > > From: Sophie Blee-Goldman 
> > > > Sent: Friday, February 8, 2019 9:56 AM
> > > > To: dev@kafka.apache.org
> > > > Subject: [DISCUSS] KIP-428: Add in-memory window store
> > > >
> > > > Streams currently only has support for a RocksDB window store, but
> > users
> > > > have been requesting an in-memory version. This KIP introduces a
> design
> > > for
> > > > an in-memory window store implementation.
> > > >
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-428%3A+Add+in-memory+window+store
> > > >
> > >
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > > *github:  github.com/dongjinleekr
> > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > speakerdeck:
> > > speakerdeck.com/dongjin
> > > *
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang


Build failed in Jenkins: kafka-2.1-jdk8 #127

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Make MockClient#poll() more thread-safe (#5942)

--
[...truncated 2.84 MB...]
org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testRestartTask STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testRestartTask PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedCustomValidation PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStartAndStopConnector STARTED

org.apache.kafka.connect.runtime.WorkerTest > testStartAndStopConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStartConnectorFailure STARTED

org.apache.kafka.connect.runtime.WorkerTest > testStartConnectorFailure PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias STARTED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByAlias PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByShortAlias 
STARTED

org.apache.kafka.connect.runtime.WorkerTest > testAddConnectorByShortAlias 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector STARTED

org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
STARTED

org.apache.kafka.connect.runtime.WorkerTest > testReconfigureConnectorTasks 
PASSED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask STARTED

org.apache.kafka.connect.runtime.WorkerTest > testAddRemoveTask PASSED

org.apache.kafka.connect.runtime.WorkerTest > testStartTaskFailure STARTED

org.apache.kafka.connect.runtime.WorkerTest > testStartTaskFailure PASSED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop STARTED

org.apache.kafka.connect.runtime.WorkerTest > testCleanupTasksOnStop PASSED

org.apache.kafka.connect.runtime.WorkerTest > testConverterOverrides STARTED

org.apache.kafka.connect.runtime.WorkerTest > testConverterOverrides PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithNullVersion STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithNullVersion PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescComparison STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescComparison PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testRegularPluginDesc STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testRegularPluginDesc PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescEquality STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescEquality PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithSystemClassLoader STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithSystemClassLoader PASSED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testWhiteListedManifestResources STARTED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testWhiteListedManifestResources PASSED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testOtherResources STARTED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testOtherResources PASSED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConnectRestExtension STARTED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConnectRestExtension PASSED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConverters STARTED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConverters PASSED


Build failed in Jenkins: kafka-2.2-jdk8 #15

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: remove unused imports

--
[...truncated 2.50 MB...]
kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupUnknownConsumerNewGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.group.GroupCoordinatorTest > testValidJoinGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > testValidJoinGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldDelayRebalanceUptoRebalanceTimeout STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldDelayRebalanceUptoRebalanceTimeout PASSED

kafka.coordinator.group.GroupCoordinatorTest > testFetchOffsets STARTED

kafka.coordinator.group.GroupCoordinatorTest > testFetchOffsets PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testSessionTimeoutDuringRebalance STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testSessionTimeoutDuringRebalance PASSED

kafka.coordinator.group.GroupCoordinatorTest > testNewMemberJoinExpiration 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testNewMemberJoinExpiration 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testFetchTxnOffsetsWithAbort 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testFetchTxnOffsetsWithAbort 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupLeaderAfterFollower 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupLeaderAfterFollower 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupFromUnknownMember 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupFromUnknownMember 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testValidLeaveGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > testValidLeaveGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > testDescribeGroupInactiveGroup 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testDescribeGroupInactiveGroup 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testFetchTxnOffsetsIgnoreSpuriousCommit STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testFetchTxnOffsetsIgnoreSpuriousCommit PASSED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupNotCoordinator 
STARTED

kafka.coordinator.group.GroupCoordinatorTest > testSyncGroupNotCoordinator 
PASSED

kafka.coordinator.group.GroupCoordinatorTest > testBasicFetchTxnOffsets STARTED

kafka.coordinator.group.GroupCoordinatorTest > testBasicFetchTxnOffsets PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldResetRebalanceDelayWhenNewMemberJoinsGroupInInitialRebalance STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
shouldResetRebalanceDelayWhenNewMemberJoinsGroupInInitialRebalance PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testHeartbeatUnknownConsumerExistingGroup STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testHeartbeatUnknownConsumerExistingGroup PASSED

kafka.coordinator.group.GroupCoordinatorTest > testValidHeartbeat STARTED

kafka.coordinator.group.GroupCoordinatorTest > testValidHeartbeat PASSED

kafka.coordinator.group.GroupCoordinatorTest > 
testRequestHandlingWhileLoadingInProgress STARTED

kafka.coordinator.group.GroupCoordinatorTest > 
testRequestHandlingWhileLoadingInProgress PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED


Re: [DISCUSS] KIP-428: Add in-memory window store

2019-02-14 Thread Sophie Blee-Goldman
Cleaned up the KIP, please take another look and if all seems good will
call for a vote since there seem to be no strong opinions against.

On Wed, Feb 13, 2019 at 11:45 PM Guozhang Wang  wrote:

> Hi Sophie,
>
> Thanks for the KIP write-up, I made a pass over the wiki and the PR as
> well, here's some comments:
>
> 1. the proposed API seems to be inconsistent with the PR, should it be:
>
> public static WindowBytesStoreSupplier inMemoryWindowStore(final String
> name,
>
>final Duration retentionPeriod,
>
>final Duration windowSize,
> +
>final boolean retainDuplicates) throws
> IllegalArgumentException ...
> -
> final Duration gracePeriod
>
> 2. As Boyang mentioned, we usually do not need to elaborate on the internal
> implementation in the KIP, unless it has some user-facing implications. As
> for this specific KIP, I think it makes more sense to talk about what memory
> footprint users would expect to have with the implementation: should they
> be expecting exact number of bytes used for key-value pairs only, or should
> they expect some additional memory used for maintaining the window data
> structures.
>
>
>
> Guozhang
>
>
>
>
> On Fri, Feb 8, 2019 at 4:21 AM Dongjin Lee  wrote:
>
> > Thanks for the KIP, Sophie. I added your KIP into the 'Under Discussion'
> > section here
> > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
> > >
> > .
> >
> > I am +1 for this proposal for reaching parity between key-value store and
> > windowed store.
> >
> > Thanks,
> > Dongjin
> >
> > On Fri, Feb 8, 2019 at 1:41 PM Boyang Chen  wrote:
> >
> > > Thanks Sophie for proposing this new feature! In-memory window store is
> > > very useful in long term. One meta comment is that we don't need to
> > include
> > > implementation details in the public interface section, and those
> > > validation steps are pretty trivial.
> > >
> > > Boyang
> > >
> > > 
> > > From: Sophie Blee-Goldman 
> > > Sent: Friday, February 8, 2019 9:56 AM
> > > To: dev@kafka.apache.org
> > > Subject: [DISCUSS] KIP-428: Add in-memory window store
> > >
> > > Streams currently only has support for a RocksDB window store, but
> users
> > > have been requesting an in-memory version. This KIP introduces a design
> > for
> > > an in-memory window store implementation.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-428%3A+Add+in-memory+window+store
> > >
> >
> >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> > *github:  github.com/dongjinleekr
> > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > speakerdeck:
> > speakerdeck.com/dongjin
> > *
> >
>
>
> --
> -- Guozhang
>
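
As a complementary sketch, the same proposed supplier could also be wrapped in
a StoreBuilder for Processor API use. Stores.windowStoreBuilder and
StreamsBuilder#addStateStore are existing API, while Stores.inMemoryWindowStore
is the factory proposed by the KIP; the store name, durations, and serdes are
placeholders:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.WindowStore;

public class InMemoryWindowStoreBuilderSketch {
    public static void main(String[] args) {
        // Wrap the proposed in-memory supplier in a StoreBuilder with explicit serdes.
        StoreBuilder<WindowStore<String, Long>> storeBuilder =
                Stores.windowStoreBuilder(
                        Stores.inMemoryWindowStore("page-views",
                                                   Duration.ofHours(6),   // retention
                                                   Duration.ofMinutes(5), // window size
                                                   false),                // retainDuplicates
                        Serdes.String(),
                        Serdes.Long());

        StreamsBuilder builder = new StreamsBuilder();
        builder.addStateStore(storeBuilder);
        // A transformer/processor attached downstream can then look the store up
        // by name ("page-views") from its ProcessorContext.
    }
}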


Jenkins build is back to normal : kafka-trunk-jdk8 #3386

2019-02-14 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.0-jdk8 #226

2019-02-14 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-1.1-jdk7 #244

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Make MockClient#poll() more thread-safe (#5942)

--
[...truncated 427.61 KB...]
kafka.security.token.delegation.DelegationTokenManagerTest > 
testTokenRequestsWithDelegationTokenDisabled PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testDescribeToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testCreateToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testExpireToken 
PASSED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
STARTED

kafka.security.token.delegation.DelegationTokenManagerTest > testRenewToken 
PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls STARTED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 

[jira] [Resolved] (KAFKA-7670) Fix flaky test - KafkaAdminClientTest.testUnreachableBootstrapServer

2019-02-14 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7670.

Resolution: Fixed

> Fix flaky test - KafkaAdminClientTest.testUnreachableBootstrapServer
> 
>
> Key: KAFKA-7670
> URL: https://issues.apache.org/jira/browse/KAFKA-7670
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Minor
>
> It fails around once every 30 runs locally with
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node 
> assignment.
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:262)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClientTest.testUnreachableBootstrapServer(KafkaAdminClientTest.java:277)
> at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting 
> for a node assignment.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7932) Streams needs to handle new Producer exceptions

2019-02-14 Thread John Roesler (JIRA)
John Roesler created KAFKA-7932:
---

 Summary: Streams needs to handle new Producer exceptions
 Key: KAFKA-7932
 URL: https://issues.apache.org/jira/browse/KAFKA-7932
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


Following on KAFKA-7763, Streams needs to handle the new behavior.

See also https://github.com/apache/kafka/pull/6066

Streams code (StreamTask.java) needs to be modified to handle the new exception.

Also, from another upstream change, `initTxn` can also throw TimeoutException 
now: default `MAX_BLOCK_MS_CONFIG` in producer is 60 seconds, so I think just 
wrapping it as StreamsException should be reasonable, similar to what we do for 
`producer#send`'s TimeoutException 
([https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L220-L225]
 ).

 

Note we need to handle in three functions: init/commit/abortTxn.
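
For illustration, a minimal sketch of the wrapping described above. This is not
the actual StreamTask/RecordCollector code; the class, field names, and error
message are placeholders:

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.streams.errors.StreamsException;

class TxnInitSketch {
    private final Producer<byte[], byte[]> producer;
    private final String taskId;

    TxnInitSketch(Producer<byte[], byte[]> producer, String taskId) {
        this.producer = producer;
        this.taskId = taskId;
    }

    void initializeTransactions() {
        try {
            producer.initTransactions();   // may block up to max.block.ms
        } catch (final TimeoutException e) {
            // Wrap the client-level timeout in a StreamsException, mirroring how
            // producer#send timeouts are surfaced to the application.
            throw new StreamsException(
                "Task " + taskId + " timed out while initializing transactions; " +
                "consider increasing max.block.ms on the producer", e);
        }
    }
}

commitTransaction and abortTransaction would get the same treatment, per the
three functions listed above.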



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3385

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7916: Unify store wrapping code for clarity (#6255)

--
[...truncated 2.54 MB...]
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable 

[jira] [Resolved] (KAFKA-7811) Avoid unnecessary lock acquire when KafkaConsumer commits offsets

2019-02-14 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-7811.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Avoid unnecessary lock acquire when KafkaConsumer commits offsets
> -
>
> Key: KAFKA-7811
> URL: https://issues.apache.org/jira/browse/KAFKA-7811
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.2.2, 0.11.0.3, 1.0.2, 1.1.1, 2.0.1, 2.1.0
>Reporter: lambdaliu
>Assignee: lambdaliu
>Priority: Major
> Fix For: 2.3.0
>
>
> In KafkaConsumer#commitAsync that does not take offset parameters, we have 
> the following logic:
> {code:java}
> public void commitAsync(OffsetCommitCallback callback) {
>     acquireAndEnsureOpen();
>     try {
>         commitAsync(subscriptions.allConsumed(), callback);
>     } finally {
>         release();
>     }
> }
> {code}
> This overload calls the other commitAsync with the default (all consumed) 
> offsets, and that overload also calls `acquireAndEnsureOpen`, so the acquire 
> happens twice unnecessarily.
>  
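[Editorial aside] A toy sketch of the intended shape: the parameterless overload delegates to a private helper so that acquireAndEnsureOpen/release runs only once per call. This is not the real KafkaConsumer and not necessarily the merged fix; the class and the helper name doCommitAsync are made up.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Toy illustration only: the public overloads acquire once, then call a private
// helper that contains the commit logic and performs no further acquire.
final class ToyConsumer {
    private final Map<String, Long> allConsumed = new HashMap<>();

    public void commitAsync(Runnable callback) {
        acquireAndEnsureOpen();
        try {
            doCommitAsync(allConsumed, callback);   // no second acquire here
        } finally {
            release();
        }
    }

    public void commitAsync(Map<String, Long> offsets, Runnable callback) {
        acquireAndEnsureOpen();
        try {
            doCommitAsync(offsets, callback);
        } finally {
            release();
        }
    }

    private void doCommitAsync(Map<String, Long> offsets, Runnable callback) {
        // real commit logic would go here; invoke the callback on completion
        if (callback != null) {
            callback.run();
        }
    }

    private void acquireAndEnsureOpen() { /* light-weight single-access check in the real consumer */ }

    private void release() { /* counterpart of acquireAndEnsureOpen */ }
}
{code}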



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Colin McCabe
Congratulations, Bill!

best,
Colin

On Thu, Feb 14, 2019, at 09:17, Viktor Somogyi-Vass wrote:
> Congrats Bill! :)
> 
> On Thu, Feb 14, 2019 at 4:50 PM Dongjin Lee  wrote:
> 
> > Congrats Bill, and thank you again for your great book and contributions on
> > Kafka Streams!
> >
> > Best,
> > Dongjin
> >
> > On Thu, 14 Feb 2019 at 7:31 PM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > Congratulations Bill!
> > >
> > > On Thu, Feb 14, 2019 at 3:55 PM Ivan Ponomarev
> >  > > >
> > > wrote:
> > >
> > > > Congratulations, Bill!
> > > >
> > > > Your 'Kafka Streams in Action' is a great book. These months it is
> > > > always travelling with me in my backpack with my laptop ))
> > > >
> > > > Regards,
> > > >
> > > > Ivan
> > > >
> > > > 14.02.2019 3:56, Guozhang Wang wrote:
> > > > > Hello all,
> > > > >
> > > > > The PMC of Apache Kafka is happy to announce that we've added Bill
> > > Bejeck
> > > > > as our newest project committer.
> > > > >
> > > > > Bill has been active in the Kafka community since 2015. He has made
> > > > > significant contributions to the Kafka Streams project with more than
> > > 100
> > > > > PRs and 4 authored KIPs, including the streams topology optimization
> > > > > framework. Bill's also very keen on tightening Kafka's unit test /
> > > system
> > > > > tests coverage, which is a great value to our project codebase.
> > > > >
> > > > > In addition, Bill has been very active in evangelizing Kafka for
> > stream
> > > > > processing in the community. He has given several Kafka meetup talks
> > in
> > > > the
> > > > > past year, including a presentation at Kafka Summit SF. He's also
> > > > authored
> > > > > a book about Kafka Streams (
> > > > > https://www.manning.com/books/kafka-streams-in-action), as well as
> > > > various
> > > > > of posts in public venues like DZone as well as his personal blog (
> > > > > http://codingjunkie.net/).
> > > > >
> > > > > We really appreciate the contributions and are looking forward to see
> > > > more
> > > > > from him. Congratulations, Bill !
> > > > >
> > > > >
> > > > > Guozhang, on behalf of the Apache Kafka PMC
> > > > >
> > > >
> > > >
> > >
> > --
> > *Dongjin Lee*
> >
> > *A hitchhiker in the mathematical world.*
> > *github:  github.com/dongjinleekr
> > linkedin: kr.linkedin.com/in/dongjinleekr
> > speakerdeck:
> > speakerdeck.com/dongjin
> > *
> >
>


[jira] [Resolved] (KAFKA-7882) StateStores are frequently closed during the 'transform' method

2019-02-14 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7882.

Resolution: Duplicate

> StateStores are frequently closed during the 'transform' method
> ---
>
> Key: KAFKA-7882
> URL: https://issues.apache.org/jira/browse/KAFKA-7882
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Mateusz Owczarek
>Priority: Major
>
> Hello, I have a problem with the state store being closed frequently while 
> transforming incoming records. To ensure that only one record per key and 
> window reaches the aggregate, I have created a custom Transformer (I know 
> something similar is going to be introduced with the suppress method on KTable 
> in the future, but my implementation is quite simple and IMO should work 
> correctly) with the following implementation:
> {code:java}
> override def transform(key: Windowed[K], value: V): (Windowed[K], V) = {
>   val partition = context.partition()
>   if (partition != -1)
>     store.put(key.key(), (value, partition), key.window().start())
>   else
>     logger.warn(s"-1 partition")
>   null // Ensuring no 1:1 forwarding; context.forward and commit logic live in the punctuator callback
> }
> {code}
>  
> What I do get is the following error:
> {code:java}
> Store MyStore is currently closed{code}
> This problem appears only when the number of stream threads (or input topic 
> partitions) is greater than 1, even if I am just writing to the store and have 
> turned punctuation off.
> If punctuation is enabled, however, I sometimes get -1 as the partition value 
> in the transform method. I am familiar with the basic docs, but I have not 
> found anything that could help me here.
> I build my state store like this:
> {code:java}
> val stateStore = Stores.windowStoreBuilder(
>   Stores.persistentWindowStore(
>     stateStoreName,
>     timeWindows.maintainMs() + timeWindows.sizeMs + TimeUnit.DAYS.toMillis(1),
>     timeWindows.segments,
>     timeWindows.sizeMs,
>     false
>   ),
>   serde[K],
>   serde[(V, Int)]
> )
> {code}
> and include it in a DSL API like this:
> {code:java}
> builder.addStateStore(stateStore)
> (...).transform(new MyTransformer(...), "MyStore")
> {code}
> INB4: I don't close any state stores manually, and I gave them retention times 
> as long as possible for the debugging stage. I tried to hotfix this by retrying 
> in the transform method; the state stores do reopen eventually and the `put` 
> method works, but this approach is questionable and I am not sure it actually 
> works correctly.
> Edit:
> Might this be because {code:java}StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG{code} 
> is set to a low value? If I understand correctly, spilling to disk then happens 
> more frequently; could that cause the store to be closed?
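[Editorial aside] This ticket was resolved as a duplicate, so the sketch below is not necessarily the cause here; but a common source of "Store ... is currently closed" with multiple stream threads is a TransformerSupplier that hands the same Transformer instance to every task. A minimal Java sketch with made-up names (MyTransformer, "MyStore"), assuming a Kafka Streams version whose Transformer interface is init/transform/close:

{code:java}
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.kstream.TransformerSupplier;
import org.apache.kafka.streams.processor.ProcessorContext;

// The key point: get() must return a NEW Transformer per call so that each
// task/thread works against its own store handle obtained in init().
final class MyTransformerSupplier implements TransformerSupplier<String, String, KeyValue<String, String>> {
    @Override
    public Transformer<String, String, KeyValue<String, String>> get() {
        return new MyTransformer();   // never return a shared instance
    }
}

final class MyTransformer implements Transformer<String, String, KeyValue<String, String>> {
    private ProcessorContext context;

    @Override
    public void init(final ProcessorContext context) {
        this.context = context;       // look up the store here, e.g. context.getStateStore("MyStore")
    }

    @Override
    public KeyValue<String, String> transform(final String key, final String value) {
        return null;                  // forwarding is done from a punctuator in the original design
    }

    @Override
    public void close() {}
}
{code}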



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Viktor Somogyi-Vass
Congrats Bill! :)

On Thu, Feb 14, 2019 at 4:50 PM Dongjin Lee  wrote:

> Congrats Bill, and thank you again for your great book and contributions on
> Kafka Streams!
>
> Best,
> Dongjin
>
> On Thu, 14 Feb 2019 at 7:31 PM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > Congratulations Bill!
> >
> > On Thu, Feb 14, 2019 at 3:55 PM Ivan Ponomarev
>  > >
> > wrote:
> >
> > > Congratulations, Bill!
> > >
> > > Your 'Kafka Streams in Action' is a great book. These months it is
> > > always travelling with me in my backpack with my laptop ))
> > >
> > > Regards,
> > >
> > > Ivan
> > >
> > > 14.02.2019 3:56, Guozhang Wang wrote:
> > > > Hello all,
> > > >
> > > > The PMC of Apache Kafka is happy to announce that we've added Bill
> > Bejeck
> > > > as our newest project committer.
> > > >
> > > > Bill has been active in the Kafka community since 2015. He has made
> > > > significant contributions to the Kafka Streams project with more than
> > 100
> > > > PRs and 4 authored KIPs, including the streams topology optimization
> > > > framework. Bill's also very keen on tightening Kafka's unit test /
> > system
> > > > tests coverage, which is a great value to our project codebase.
> > > >
> > > > In addition, Bill has been very active in evangelizing Kafka for
> stream
> > > > processing in the community. He has given several Kafka meetup talks
> in
> > > the
> > > > past year, including a presentation at Kafka Summit SF. He's also
> > > authored
> > > > a book about Kafka Streams (
> > > > https://www.manning.com/books/kafka-streams-in-action), as well as
> > > various
> > > > of posts in public venues like DZone as well as his personal blog (
> > > > http://codingjunkie.net/).
> > > >
> > > > We really appreciate the contributions and are looking forward to see
> > > more
> > > > from him. Congratulations, Bill !
> > > >
> > > >
> > > > Guozhang, on behalf of the Apache Kafka PMC
> > > >
> > >
> > >
> >
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
> *github:  github.com/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>


Build failed in Jenkins: kafka-2.2-jdk8 #14

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7916: Unify store wrapping code for clarity (#6255)

--
[...truncated 37.69 KB...]
  ^
:795:
 value maxNumOffsets in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
  maxNumOffsets = partitionData.maxNumOffsets,
^
:798:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
(topicPartition, new ListOffsetResponse.PartitionData(Errors.NONE, 
offsets.map(JLong.valueOf).asJava))
 ^
:807:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e), List[JLong]().asJava))
   ^
:810:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e), List[JLong]().asJava))
   ^
:231:
 value DEFAULT_SASL_ENABLED_MECHANISMS in object SaslConfigs is deprecated: see 
corresponding Javadoc for more information.
  val SaslEnabledMechanisms = SaslConfigs.DEFAULT_SASL_ENABLED_MECHANISMS
  ^
:231:
 value offsets in class PartitionData is deprecated: see corresponding Javadoc 
for more information.
  responsePartitionData.offsets.get(0)
^
:573:
 method checksum in class ConsumerRecord is deprecated: see corresponding 
Javadoc for more information.
output.println(topicStr + "checksum:" + consumerRecord.checksum)
   ^
:197:
 class BaseConsumerRecord in package consumer is deprecated: This class has 
been deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.ConsumerRecord instead.
private def toBaseConsumerRecord(record: ConsumerRecord[Array[Byte], 
Array[Byte]]): BaseConsumerRecord =

^
:390:
 method close in trait Producer is deprecated: see corresponding Javadoc for 
more information.
  this.producer.close(timeout, TimeUnit.MILLISECONDS)
^
:417:
 class BaseConsumerRecord in package consumer is deprecated: This class has 
been deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.ConsumerRecord instead.
def handle(record: BaseConsumerRecord): 
util.List[ProducerRecord[Array[Byte], Array[Byte]]]
   ^
:421:
 class BaseConsumerRecord in package consumer is deprecated: This class has 
been deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.ConsumerRecord instead.
override def handle(record: BaseConsumerRecord): 
util.List[ProducerRecord[Array[Byte], Array[Byte]]] = {
^
20 warnings found
Note: 

 uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

> Task :kafka-2.2-jdk8:core:processResources NO-SOURCE
> Task :kafka-2.2-jdk8:core:classes
> Task :kafka-2.2-jdk8:core:checkstyleMain
> Task :kafka-2.2-jdk8:core:compileTestJava NO-SOURCE

> Task :kafka-2.2-jdk8:core:compileTestScala
Pruning sources from previous analysis, due to incompatible CompileSetup.
:1344:
 enclosing method testElectPreferredLeaders has 

[jira] [Resolved] (KAFKA-7916) Streams store cleanup: unify wrapping

2019-02-14 Thread John Roesler (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-7916.
-
Resolution: Fixed

> Streams store cleanup: unify wrapping
> -
>
> Key: KAFKA-7916
> URL: https://issues.apache.org/jira/browse/KAFKA-7916
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> The internal store handling in Streams has become quite complex, with many 
> layers of wrapping for different bookkeeping operations.
> The first thing we can do about this is to at least unify the wrapping 
> strategy, such that *all* store wrappers extend WrappedStateStore. This would 
> make the code easier to understand, since all wrappers would have the same 
> basic shape.
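[Editorial aside] A purely illustrative sketch of that unified shape, using simplified stand-in types rather than the actual Kafka Streams internals: every bookkeeping wrapper extends one WrappedStateStore-style base class that holds and delegates to the inner store.

{code:java}
// Simplified stand-ins, not the real org.apache.kafka.streams interfaces.
interface StateStore {
    String name();
    void close();
}

abstract class WrappedStateStore<S extends StateStore> implements StateStore {
    protected final S wrapped;

    WrappedStateStore(final S wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public String name() {
        return wrapped.name();
    }

    @Override
    public void close() {
        wrapped.close();
    }
}

// One wrapper per bookkeeping concern; here a trivial example that counts closes.
final class CloseCountingStore<S extends StateStore> extends WrappedStateStore<S> {
    private int closeCalls = 0;

    CloseCountingStore(final S inner) {
        super(inner);
    }

    @Override
    public void close() {
        closeCalls++;
        super.close();
    }
}
{code}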



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Dongjin Lee
Congrats Bill, and thank you again for your great book and contributions on
Kafka Streams!

Best,
Dongjin

On Thu, 14 Feb 2019 at 7:31 PM Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> Congratulations Bill!
>
> On Thu, Feb 14, 2019 at 3:55 PM Ivan Ponomarev  >
> wrote:
>
> > Congratulations, Bill!
> >
> > Your 'Kafka Streams in Action' is a great book. These months it is
> > always travelling with me in my backpack with my laptop ))
> >
> > Regards,
> >
> > Ivan
> >
> > 14.02.2019 3:56, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > The PMC of Apache Kafka is happy to announce that we've added Bill
> Bejeck
> > > as our newest project committer.
> > >
> > > Bill has been active in the Kafka community since 2015. He has made
> > > significant contributions to the Kafka Streams project with more than
> 100
> > > PRs and 4 authored KIPs, including the streams topology optimization
> > > framework. Bill's also very keen on tightening Kafka's unit test /
> system
> > > tests coverage, which is a great value to our project codebase.
> > >
> > > In addition, Bill has been very active in evangelizing Kafka for stream
> > > processing in the community. He has given several Kafka meetup talks in
> > the
> > > past year, including a presentation at Kafka Summit SF. He's also
> > authored
> > > a book about Kafka Streams (
> > > https://www.manning.com/books/kafka-streams-in-action), as well as
> > various
> > > of posts in public venues like DZone as well as his personal blog (
> > > http://codingjunkie.net/).
> > >
> > > We really appreciate the contributions and are looking forward to see
> > more
> > > from him. Congratulations, Bill !
> > >
> > >
> > > Guozhang, on behalf of the Apache Kafka PMC
> > >
> >
> >
>
-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*
*github:  github.com/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


Build failed in Jenkins: kafka-trunk-jdk8 #3384

2019-02-14 Thread Apache Jenkins Server
See 


Changes:

[manikumar] MINOR: Add missing Alter Operation to Topic supported operations 
list in

--
[...truncated 4.62 MB...]
org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > longToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > 

Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Kamal Chandraprakash
Congratulations Bill!

On Thu, Feb 14, 2019 at 3:55 PM Ivan Ponomarev 
wrote:

> Congratulations, Bill!
>
> Your 'Kafka Streams in Action' is a great book. These months it is
> always travelling with me in my backpack with my laptop ))
>
> Regards,
>
> Ivan
>
> 14.02.2019 3:56, Guozhang Wang wrote:
> > Hello all,
> >
> > The PMC of Apache Kafka is happy to announce that we've added Bill Bejeck
> > as our newest project committer.
> >
> > Bill has been active in the Kafka community since 2015. He has made
> > significant contributions to the Kafka Streams project with more than 100
> > PRs and 4 authored KIPs, including the streams topology optimization
> > framework. Bill's also very keen on tightening Kafka's unit test / system
> > tests coverage, which is a great value to our project codebase.
> >
> > In addition, Bill has been very active in evangelizing Kafka for stream
> > processing in the community. He has given several Kafka meetup talks in
> the
> > past year, including a presentation at Kafka Summit SF. He's also
> authored
> > a book about Kafka Streams (
> > https://www.manning.com/books/kafka-streams-in-action), as well as
> various
> > of posts in public venues like DZone as well as his personal blog (
> > http://codingjunkie.net/).
> >
> > We really appreciate the contributions and are looking forward to see
> more
> > from him. Congratulations, Bill !
> >
> >
> > Guozhang, on behalf of the Apache Kafka PMC
> >
>
>


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Ivan Ponomarev

Congratulations, Bill!

Your 'Kafka Streams in Action' is a great book. These months it is 
always travelling with me in my backpack with my laptop ))


Regards,

Ivan

14.02.2019 3:56, Guozhang Wang wrote:

Hello all,

The PMC of Apache Kafka is happy to announce that we've added Bill Bejeck
as our newest project committer.

Bill has been active in the Kafka community since 2015. He has made
significant contributions to the Kafka Streams project with more than 100
PRs and 4 authored KIPs, including the streams topology optimization
framework. Bill's also very keen on tightening Kafka's unit test / system
tests coverage, which is a great value to our project codebase.

In addition, Bill has been very active in evangelizing Kafka for stream
processing in the community. He has given several Kafka meetup talks in the
past year, including a presentation at Kafka Summit SF. He's also authored
a book about Kafka Streams (
https://www.manning.com/books/kafka-streams-in-action), as well as various
of posts in public venues like DZone as well as his personal blog (
http://codingjunkie.net/).

We really appreciate the contributions and are looking forward to see more
from him. Congratulations, Bill !


Guozhang, on behalf of the Apache Kafka PMC





Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Edoardo Comar
Well done Bill!
--

Edoardo Comar

IBM Event Streams
IBM UK Ltd, Hursley Park, SO21 2JN




From:   Rajini Sivaram 
To: dev 
Date:   14/02/2019 09:25
Subject:Re: [ANNOUNCE] New Committer: Bill Bejeck



Congratulations, Bill!

On Thu, Feb 14, 2019 at 9:04 AM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Congratulations Bill!
>
> On Thu, 14 Feb 2019, 09:29 Mickael Maison, 
> wrote:
>
> > Congratulations Bill!
> >
> > On Thu, Feb 14, 2019 at 7:52 AM Gurudatt Kulkarni 

> > wrote:
> >
> > > Congratulations Bill!
> > >
> > > On Thursday, February 14, 2019, Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > > > Congrats Bill!
> > > >
> > > > -Konstantine
> > > >
> > > > On Wed, Feb 13, 2019 at 8:42 PM Srinivas Reddy <
> > > srinivas96all...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> Congratulations Bill 
> > > >>
> > > >> Well deserved!!
> > > >>
> > > >> -
> > > >> Srinivas
> > > >>
> > > >> - Typed on tiny keys. pls ignore typos.{mobile app}
> > > >>
> > > >> On Thu, 14 Feb, 2019, 11:21 Ismael Juma  > > >>
> > > >> > Congratulations Bill!
> > > >> >
> > > >> > On Wed, Feb 13, 2019, 5:03 PM Guozhang Wang  > > wrote:
> > > >> >
> > > >> > > Hello all,
> > > >> > >
> > > >> > > The PMC of Apache Kafka is happy to announce that we've added Bill Bejeck
> > > >> > > as our newest project committer.
> > > >> > >
> > > >> > > Bill has been active in the Kafka community since 2015. He has made
> > > >> > > significant contributions to the Kafka Streams project with more than 100
> > > >> > > PRs and 4 authored KIPs, including the streams topology optimization
> > > >> > > framework. Bill's also very keen on tightening Kafka's unit test / system
> > > >> > > tests coverage, which is a great value to our project codebase.
> > > >> > >
> > > >> > > In addition, Bill has been very active in evangelizing Kafka for stream
> > > >> > > processing in the community. He has given several Kafka meetup talks in the
> > > >> > > past year, including a presentation at Kafka Summit SF. He's also authored
> > > >> > > a book about Kafka Streams (
> > > >> > > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.manning.com_books_kafka-2Dstreams-2Din-2Daction=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ=KJZMhaqmHaB06mnORSUk3ZCMhhs-Q-KMRty3OPPS28k=KQXXkpCoIhSnbCiL1As-0nEdq8oHZGCcqYUZGOq118E=
> > > >> > > ), as well as various
> > > >> > > of posts in public venues like DZone as well as his personal blog (
> > > >> > > https://urldefense.proofpoint.com/v2/url?u=http-3A__codingjunkie.net_=DwIFaQ=jf_iaSHvJObTbx-siA1ZOg=EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ=KJZMhaqmHaB06mnORSUk3ZCMhhs-Q-KMRty3OPPS28k=K4jgRN4mNUsjGag4cb7mdSZXOV4oVbbwO48t0OxB4b0=
> > > >> > > ).
> > > >> > >
> > > >> > > We really appreciate the contributions and are looking forward to see more
> > > >> > > from him. Congratulations, Bill !
> > > >> > >
> > > >> > >
> > > >> > > Guozhang, on behalf of the Apache Kafka PMC
> > > >> > >
> > > >> >
> > > >>
> > > >
> > >
> >
>



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU



Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Rajini Sivaram
Congratulations, Bill!

On Thu, Feb 14, 2019 at 9:04 AM Jorge Esteban Quilcate Otoya <
quilcate.jo...@gmail.com> wrote:

> Congratulations Bill!
>
> On Thu, 14 Feb 2019, 09:29 Mickael Maison, 
> wrote:
>
> > Congratulations Bill!
> >
> > On Thu, Feb 14, 2019 at 7:52 AM Gurudatt Kulkarni 
> > wrote:
> >
> > > Congratulations Bill!
> > >
> > > On Thursday, February 14, 2019, Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > > > Congrats Bill!
> > > >
> > > > -Konstantine
> > > >
> > > > On Wed, Feb 13, 2019 at 8:42 PM Srinivas Reddy <
> > > srinivas96all...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> Congratulations Bill 
> > > >>
> > > >> Well deserved!!
> > > >>
> > > >> -
> > > >> Srinivas
> > > >>
> > > >> - Typed on tiny keys. pls ignore typos.{mobile app}
> > > >>
> > > >> On Thu, 14 Feb, 2019, 11:21 Ismael Juma  > > >>
> > > >> > Congratulations Bill!
> > > >> >
> > > >> > On Wed, Feb 13, 2019, 5:03 PM Guozhang Wang  > > wrote:
> > > >> >
> > > >> > > Hello all,
> > > >> > >
> > > >> > > The PMC of Apache Kafka is happy to announce that we've added
> Bill
> > > >> Bejeck
> > > >> > > as our newest project committer.
> > > >> > >
> > > >> > > Bill has been active in the Kafka community since 2015. He has
> > made
> > > >> > > significant contributions to the Kafka Streams project with more
> > > than
> > > >> 100
> > > >> > > PRs and 4 authored KIPs, including the streams topology
> > optimization
> > > >> > > framework. Bill's also very keen on tightening Kafka's unit
> test /
> > > >> system
> > > >> > > tests coverage, which is a great value to our project codebase.
> > > >> > >
> > > >> > > In addition, Bill has been very active in evangelizing Kafka for
> > > stream
> > > >> > > processing in the community. He has given several Kafka meetup
> > talks
> > > in
> > > >> > the
> > > >> > > past year, including a presentation at Kafka Summit SF. He's
> also
> > > >> > authored
> > > >> > > a book about Kafka Streams (
> > > >> > > https://www.manning.com/books/kafka-streams-in-action), as well
> > as
> > > >> > various
> > > >> > > of posts in public venues like DZone as well as his personal
> blog
> > (
> > > >> > > http://codingjunkie.net/).
> > > >> > >
> > > >> > > We really appreciate the contributions and are looking forward
> to
> > > see
> > > >> > more
> > > >> > > from him. Congratulations, Bill !
> > > >> > >
> > > >> > >
> > > >> > > Guozhang, on behalf of the Apache Kafka PMC
> > > >> > >
> > > >> >
> > > >>
> > > >
> > >
> >
>


Kafka connect tasks rebalancing

2019-02-14 Thread Федор Чернилин
Hello. I have a question about the following.
The Kafka Connect documentation states that
"When a worker fails, tasks are rebalanced across the active workers. When
a task fails, no rebalance is triggered as a task failure is considered an
exceptional case. As such, failed tasks are not automatically restarted by
the framework and should be restarted via the REST API.",
but when one of my tasks failed, I got the following:
[image: Снимок.PNG]
I.e., a rebalance happened. Why? And can I control this? Thanks
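
[Editorial note] On the quoted documentation point that failed tasks should be restarted via the REST API: a minimal sketch of issuing that restart from Java. The worker URL, connector name, and task id below are placeholders for your deployment.

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class RestartFailedTask {
    public static void main(String[] args) throws Exception {
        // Placeholder worker URL, connector name, and task id -- adjust as needed.
        URL url = new URL("http://localhost:8083/connectors/my-connector/tasks/0/restart");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        int status = conn.getResponseCode();   // a 2xx status means the restart request was accepted
        System.out.println("Task restart returned HTTP " + status);
        conn.disconnect();
    }
}
{code}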


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Jorge Esteban Quilcate Otoya
Congratulations Bill!

On Thu, 14 Feb 2019, 09:29 Mickael Maison,  wrote:

> Congratulations Bill!
>
> On Thu, Feb 14, 2019 at 7:52 AM Gurudatt Kulkarni 
> wrote:
>
> > Congratulations Bill!
> >
> > On Thursday, February 14, 2019, Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> > > Congrats Bill!
> > >
> > > -Konstantine
> > >
> > > On Wed, Feb 13, 2019 at 8:42 PM Srinivas Reddy <
> > srinivas96all...@gmail.com
> > >
> > > wrote:
> > >
> > >> Congratulations Bill 
> > >>
> > >> Well deserved!!
> > >>
> > >> -
> > >> Srinivas
> > >>
> > >> - Typed on tiny keys. pls ignore typos.{mobile app}
> > >>
> > >> On Thu, 14 Feb, 2019, 11:21 Ismael Juma  > >>
> > >> > Congratulations Bill!
> > >> >
> > >> > On Wed, Feb 13, 2019, 5:03 PM Guozhang Wang  > wrote:
> > >> >
> > >> > > Hello all,
> > >> > >
> > >> > > The PMC of Apache Kafka is happy to announce that we've added Bill
> > >> Bejeck
> > >> > > as our newest project committer.
> > >> > >
> > >> > > Bill has been active in the Kafka community since 2015. He has
> made
> > >> > > significant contributions to the Kafka Streams project with more
> > than
> > >> 100
> > >> > > PRs and 4 authored KIPs, including the streams topology
> optimization
> > >> > > framework. Bill's also very keen on tightening Kafka's unit test /
> > >> system
> > >> > > tests coverage, which is a great value to our project codebase.
> > >> > >
> > >> > > In addition, Bill has been very active in evangelizing Kafka for
> > stream
> > >> > > processing in the community. He has given several Kafka meetup
> talks
> > in
> > >> > the
> > >> > > past year, including a presentation at Kafka Summit SF. He's also
> > >> > authored
> > >> > > a book about Kafka Streams (
> > >> > > https://www.manning.com/books/kafka-streams-in-action), as well
> as
> > >> > various
> > >> > > of posts in public venues like DZone as well as his personal blog
> (
> > >> > > http://codingjunkie.net/).
> > >> > >
> > >> > > We really appreciate the contributions and are looking forward to
> > see
> > >> > more
> > >> > > from him. Congratulations, Bill !
> > >> > >
> > >> > >
> > >> > > Guozhang, on behalf of the Apache Kafka PMC
> > >> > >
> > >> >
> > >>
> > >
> >
>


Re: [DISCUSS] Kafka 2.2.0 in February 2018

2019-02-14 Thread Matthias J. Sax
Thanks for pointing that out. It was actually scheduled for Thursday 2/14.

I will move it to Friday 2/15 (to not cut off a day from anybody...)


-Matthias


On 2/14/19 12:17 AM, James Cheng wrote:
> Matthias, 
> 
> You said “Friday 2/14”, but 2/14 is this Thursday. 
> 
> -James
> 
> Sent from my iPhone
> 
>> On Feb 11, 2019, at 2:31 PM, Matthias J. Sax  wrote:
>>
>> Hello,
>>
>> this is a short reminder, that feature freeze for AK 2.2 release is end
>> of this week, Friday 2/14.
>>
>> Currently, there are two blocker issues
>>
>> - https://issues.apache.org/jira/browse/KAFKA-7909
>> - https://issues.apache.org/jira/browse/KAFKA-7481
>>
>> and five critical issues
>>
>> - https://issues.apache.org/jira/browse/KAFKA-7915
>> - https://issues.apache.org/jira/browse/KAFKA-7565
>> - https://issues.apache.org/jira/browse/KAFKA-7556
>> - https://issues.apache.org/jira/browse/KAFKA-7304
>> - https://issues.apache.org/jira/browse/KAFKA-3955
>>
>> marked with "fixed version" 2.2. Please let me know, if I missed any
>> other blocker/critical issue that is relevant for 2.2 release.
>>
>> I will start to move out all other non-closed Jiras out of the release
>> after code freeze and check again on the critical issues.
>>
>> After code freeze, only blocker issues can be merged to 2.2 branch.
>>
>>
>> Thanks a lot!
>>
>> -Matthias
>>
>>> On 1/19/19 11:09 AM, Matthias J. Sax wrote:
>>> Thank you all!
>>>
>>> Added 291, 379, 389, and 420 for tracking.
>>>
>>>
>>> -Matthias
>>>
>>>
 On 1/19/19 6:32 AM, Dongjin Lee wrote:
 Hi Matthias,

 Thank you for taking the lead. KIP-389[^1] was accepted last week[^2], so
 it seems it should be included.

 Thanks,
 Dongjin

 [^1]:
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-389%3A+Introduce+a+configurable+consumer+group+size+limit
 [^2]:
 https://lists.apache.org/thread.html/53b84cc35c93eddbc67c8d0dd86aedb93050e45016dfe0fc7b82caaa@%3Cdev.kafka.apache.org%3E

> On Sat, Jan 19, 2019 at 9:04 PM Alex D  wrote:
>
> KIP-379?
>
>> On Fri, 18 Jan 2019, 22:33 Matthias J. Sax >
>> Just a quick update on the release.
>>
>>
>> We have 22 KIP atm:
>>
>> 81, 207, 258, 289, 313, 328, 331, 339, 341, 351, 359, 361, 367, 368,
>> 371, 376, 377, 380, 386, 393, 394, 414
>>
>> Let me know if I missed any KIP that is targeted for 2.2 release.
>>
>> 21 of those KIPs are accepted, and the vote for the last one is open and
>> can be closed on time.
>>
>> The KIP deadline is Jan 24, so if any late KIPs are coming in, the vote
>> must be started latest next Monday Jan 21, to be open for at least 72h
>> and to meet the deadline.
>>
>> Also keep the feature freeze deadline in mind (31 Jan).
>>
>>
>> Besides this, there are 91 open tickets and 41 tickets in progress. I
>> will start to go through those tickets soon to see what will make it
>> into 2.2 and what we need to defer. If you have any tickets assigned to
>> yourself that are targeted for 2.2 and you know you cannot make it, I
>> would appreciate it if you could update those tickets yourself to help
>> streamline the release process. Thanks a lot for your support!
>>
>>
>> -Matthias
>>
>>
>>> On 1/8/19 7:27 PM, Ismael Juma wrote:
>>> Thanks for volunteering Matthias! The plan sounds good to me.
>>>
>>> Ismael
>>>
 On Tue, Jan 8, 2019, 1:07 PM Matthias J. Sax >> wrote:
>>>
 Hi all,

 I would like to propose a release plan (with me being release manager)
 for the next time-based feature release 2.2.0 in February.

 The recent Kafka release history can be found at
 https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> .
 The release plan (with open issues and planned KIPs) for 2.2.0 can be
 found at
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
 .


 Here are the suggested dates for Apache Kafka 2.2.0 release:

 1) KIP Freeze: Jan 24, 2019.

 A KIP must be accepted by this date in order to be considered for this
 release)

 2) Feature Freeze: Jan 31, 2019

 Major features merged & working on stabilization, minor features have
 PR, release branch cut; anything not in this state will be
> automatically
 moved to the next release in JIRA.

 3) Code Freeze: Feb 14, 2019

 The KIP and feature freeze date is about 2-3 weeks from now. Please
> plan
 accordingly for the features you want to push into Apache Kafka 2.2.0
>> release.

 4) Release Date: Feb 28, 2019 (tentative)


 -Matthias
>>



signature.asc
Description: OpenPGP digital signature


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-14 Thread Mickael Maison
Congratulations Bill!

On Thu, Feb 14, 2019 at 7:52 AM Gurudatt Kulkarni 
wrote:

> Congratulations Bill!
>
> On Thursday, February 14, 2019, Konstantine Karantasis <
> konstant...@confluent.io> wrote:
> > Congrats Bill!
> >
> > -Konstantine
> >
> > On Wed, Feb 13, 2019 at 8:42 PM Srinivas Reddy <
> srinivas96all...@gmail.com
> >
> > wrote:
> >
> >> Congratulations Bill 
> >>
> >> Well deserved!!
> >>
> >> -
> >> Srinivas
> >>
> >> - Typed on tiny keys. pls ignore typos.{mobile app}
> >>
> >> On Thu, 14 Feb, 2019, 11:21 Ismael Juma  >>
> >> > Congratulations Bill!
> >> >
> >> > On Wed, Feb 13, 2019, 5:03 PM Guozhang Wang  wrote:
> >> >
> >> > > Hello all,
> >> > >
> >> > > The PMC of Apache Kafka is happy to announce that we've added Bill
> >> Bejeck
> >> > > as our newest project committer.
> >> > >
> >> > > Bill has been active in the Kafka community since 2015. He has made
> >> > > significant contributions to the Kafka Streams project with more
> than
> >> 100
> >> > > PRs and 4 authored KIPs, including the streams topology optimization
> >> > > framework. Bill's also very keen on tightening Kafka's unit test /
> >> system
> >> > > tests coverage, which is a great value to our project codebase.
> >> > >
> >> > > In addition, Bill has been very active in evangelizing Kafka for
> stream
> >> > > processing in the community. He has given several Kafka meetup talks
> in
> >> > the
> >> > > past year, including a presentation at Kafka Summit SF. He's also
> >> > authored
> >> > > a book about Kafka Streams (
> >> > > https://www.manning.com/books/kafka-streams-in-action), as well as
> >> > various
> >> > > of posts in public venues like DZone as well as his personal blog (
> >> > > http://codingjunkie.net/).
> >> > >
> >> > > We really appreciate the contributions and are looking forward to
> see
> >> > more
> >> > > from him. Congratulations, Bill !
> >> > >
> >> > >
> >> > > Guozhang, on behalf of the Apache Kafka PMC
> >> > >
> >> >
> >>
> >
>


Re: [DISCUSS] Kafka 2.2.0 in February 2018

2019-02-14 Thread James Cheng
Matthias, 

You said “Friday 2/14”, but 2/14 is this Thursday. 

-James

Sent from my iPhone

> On Feb 11, 2019, at 2:31 PM, Matthias J. Sax  wrote:
> 
> Hello,
> 
> this is a short reminder, that feature freeze for AK 2.2 release is end
> of this week, Friday 2/14.
> 
> Currently, there are two blocker issues
> 
> - https://issues.apache.org/jira/browse/KAFKA-7909
> - https://issues.apache.org/jira/browse/KAFKA-7481
> 
> and five critical issues
> 
> - https://issues.apache.org/jira/browse/KAFKA-7915
> - https://issues.apache.org/jira/browse/KAFKA-7565
> - https://issues.apache.org/jira/browse/KAFKA-7556
> - https://issues.apache.org/jira/browse/KAFKA-7304
> - https://issues.apache.org/jira/browse/KAFKA-3955
> 
> marked with "fixed version" 2.2. Please let me know, if I missed any
> other blocker/critical issue that is relevant for 2.2 release.
> 
> I will start to move out all other non-closed Jiras out of the release
> after code freeze and check again on the critical issues.
> 
> After code freeze, only blocker issues can be merged to 2.2 branch.
> 
> 
> Thanks a lot!
> 
> -Matthias
> 
>> On 1/19/19 11:09 AM, Matthias J. Sax wrote:
>> Thank you all!
>> 
>> Added 291, 379, 389, and 420 for tracking.
>> 
>> 
>> -Matthias
>> 
>> 
>>> On 1/19/19 6:32 AM, Dongjin Lee wrote:
>>> Hi Matthias,
>>> 
>>> Thank you for taking the lead. KIP-389[^1] was accepted last week[^2], so
>>> it seems it should be included.
>>> 
>>> Thanks,
>>> Dongjin
>>> 
>>> [^1]:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-389%3A+Introduce+a+configurable+consumer+group+size+limit
>>> [^2]:
>>> https://lists.apache.org/thread.html/53b84cc35c93eddbc67c8d0dd86aedb93050e45016dfe0fc7b82caaa@%3Cdev.kafka.apache.org%3E
>>> 
 On Sat, Jan 19, 2019 at 9:04 PM Alex D  wrote:
 
 KIP-379?
 
> On Fri, 18 Jan 2019, 22:33 Matthias J. Sax  
> Just a quick update on the release.
> 
> 
> We have 22 KIP atm:
> 
> 81, 207, 258, 289, 313, 328, 331, 339, 341, 351, 359, 361, 367, 368,
> 371, 376, 377, 380, 386, 393, 394, 414
> 
> Let me know if I missed any KIP that is targeted for 2.2 release.
> 
> 21 of those KIPs are accepted, and the vote for the last one is open and
> can be closed on time.
> 
> The KIP deadline is Jan 24, so if any late KIPs are coming in, the vote
> must be started latest next Monday Jan 21, to be open for at least 72h
> and to meet the deadline.
> 
> Also keep the feature freeze deadline in mind (31 Jan).
> 
> 
> Besides this, there are 91 open tickets and 41 tickets in progress. I
> will start to go through those tickets soon to see what will make it
> into 2.2 and what we need to defer. If you have any tickets assigned to
> yourself that are targeted for 2.2 and you know you cannot make it, I
> would appreciate it if you could update those tickets yourself to help
> streamline the release process. Thanks a lot for your support!
> 
> 
> -Matthias
> 
> 
>> On 1/8/19 7:27 PM, Ismael Juma wrote:
>> Thanks for volunteering Matthias! The plan sounds good to me.
>> 
>> Ismael
>> 
>>> On Tue, Jan 8, 2019, 1:07 PM Matthias J. Sax > wrote:
>> 
>>> Hi all,
>>> 
>>> I would like to propose a release plan (with me being release manager)
>>> for the next time-based feature release 2.2.0 in February.
>>> 
>>> The recent Kafka release history can be found at
>>> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
 .
>>> The release plan (with open issues and planned KIPs) for 2.2.0 can be
>>> found at
 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
>>> .
>>> 
>>> 
>>> Here are the suggested dates for Apache Kafka 2.2.0 release:
>>> 
>>> 1) KIP Freeze: Jan 24, 2019.
>>> 
>>> A KIP must be accepted by this date in order to be considered for this
>>> release)
>>> 
>>> 2) Feature Freeze: Jan 31, 2019
>>> 
>>> Major features merged & working on stabilization, minor features have
>>> PR, release branch cut; anything not in this state will be
 automatically
>>> moved to the next release in JIRA.
>>> 
>>> 3) Code Freeze: Feb 14, 2019
>>> 
>>> The KIP and feature freeze date is about 2-3 weeks from now. Please
 plan
>>> accordingly for the features you want to push into Apache Kafka 2.2.0
> release.
>>> 
>>> 4) Release Date: Feb 28, 2019 (tentative)
>>> 
>>> 
>>> -Matthias
>