Build failed in Jenkins: kafka-trunk-jdk8 #3391

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7884; Docs for message.format.version should display valid values

--
[...truncated 4.00 MB...]

kafka.api.PlaintextConsumerTest > testMultiConsumerStickyAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerStickyAssignment PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance STARTED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst STARTED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst PASSED

kafka.api.PlaintextConsumerTest > testSeek STARTED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testConsumingWithNullGroupId STARTED

kafka.api.PlaintextConsumerTest > testConsumingWithNullGroupId PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit STARTED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes STARTED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes STARTED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testListTopics STARTED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors STARTED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testConsumingWithEmptyGroupId STARTED

kafka.api.PlaintextConsumerTest > testConsumingWithEmptyGroupId PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption STARTED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor STARTED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue STARTED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLeadWithMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testHeaders STARTED

kafka.api.PlaintextConsumerTest > testHeaders PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInAssignment PASSED

kafka.api.PlaintextConsumerTest > testHeadersSerializerDeserializer STARTED

kafka.api.PlaintextConsumerTest > testHeadersSerializerDeserializer PASSED

kafka.api.PlaintextConsumerTest > testDeprecatedPollBlocksForAssignment STARTED

kafka.api.PlaintextConsumerTest > testDeprecatedPollBlocksForAssignment PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume STARTED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured STARTED

kafka.api.PlaintextConsumerTest > 
testQuotaMetricsNotCreatedIfNoQuotasConfigured PASSED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe STARTED

kafka.api.PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithSubscribe PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime STARTED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted 
STARTED

kafka.api.PlaintextConsumerTest > testPerPartitionLagMetricsWhenReadCommitted 
PASSED


Build failed in Jenkins: kafka-trunk-jdk11 #291

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7884; Docs for message.format.version should display valid values

--
[...truncated 2.31 MB...]
org.apache.kafka.connect.json.JsonConverterTest > timestampToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > longToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 

Build failed in Jenkins: kafka-2.2-jdk8 #18

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7884; Docs for message.format.version should display valid values

--
[...truncated 2.72 MB...]

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted STARTED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 

[jira] [Resolved] (KAFKA-7884) Docs for message.format.version and log.message.format.version show invalid (corrupt?) "valid values"

2019-02-15 Thread Jason Gustafson (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-7884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gustafson resolved KAFKA-7884.

Resolution: Fixed

> Docs for message.format.version and log.message.format.version show invalid 
> (corrupt?) "valid values"
> -
>
> Key: KAFKA-7884
> URL: https://issues.apache.org/jira/browse/KAFKA-7884
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: James Cheng
>Assignee: Lee Dongjin
>Priority: Major
> Fix For: 2.2.0
>
>
> In the docs for message.format.version and log.message.format.version, the 
> list of valid values is
>  
> {code:java}
> kafka.api.ApiVersionValidator$@56aac163 
> {code}
>  
> It appears it's simply doing a .toString on the class/instance.
> At a minimum, we should remove this Java-y-ness.
> Even better, it should show all of the valid values.
>  
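The "Valid Values" column in the generated broker config docs is filled in from the
validator's toString(), which is why the default Object.toString() of
kafka.api.ApiVersionValidator$ leaks into the page. A minimal Java sketch of the fix
direction described above (hypothetical: the class name and the hard-coded version
list are illustrative only, not the actual KAFKA-7884 patch):

{code:java}
// Hypothetical sketch: a ConfigDef.Validator whose toString() enumerates the
// accepted values, so the docs show a readable list instead of an identity hash.
import java.util.Arrays;
import java.util.List;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

public class MessageFormatVersionValidator implements ConfigDef.Validator {

    // Illustrative subset; a real validator would derive these from ApiVersion.
    private static final List<String> VALID_VERSIONS = Arrays.asList(
            "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2");

    @Override
    public void ensureValid(String name, Object value) {
        if (!(value instanceof String) || !VALID_VERSIONS.contains(value)) {
            throw new ConfigException(name, value,
                    "Version must be one of " + VALID_VERSIONS);
        }
    }

    // ConfigDef's doc generation prints validator.toString() in the
    // "Valid Values" column, so overriding it is what fixes the rendered docs.
    @Override
    public String toString() {
        return String.join(", ", VALID_VERSIONS);
    }
}
{code}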





Jenkins build is back to normal : kafka-trunk-jdk11 #290

2019-02-15 Thread Apache Jenkins Server
See 




Re: [VOTE] - KIP-213 (new vote) - Simplified and revised.

2019-02-15 Thread Matthias J. Sax
I think he needs to re-cast his vote.

-Matthias

On 2/15/19 5:49 AM, Adam Bellemare wrote:
> Hi all
> 
> Since Bill is now a committer, the vote is changed to 3 binding and 3 
> non-binding (unless I am somehow mistaken - please let me know!). In this 
> case, I believe the vote passes.
> 
> Thanks
> 
> Adam 
> 
>> On Jan 24, 2019, at 7:28 PM, Adam Bellemare  wrote:
>>
>> Bumping this vote because I don't want it to languish. It is very unlikely 
>> to go into 2.2 at this point, but I would like to avoid resurrecting a dead 
>> thread in 30 days time.
>>
>>> On Tue, Jan 15, 2019 at 5:07 PM Adam Bellemare  
>>> wrote:
>>> All good Matthias. If it doesn’t get in for 2.2 I’ll just do it for the 
>>> next release. 
>>>
>>> Thanks
>>>
 On Jan 15, 2019, at 12:25 PM, Matthias J. Sax  
 wrote:

 I'll try to review the KIP before the deadline, but as I am acting as
 release manager and also working on KIP-258, I cannot promise. Even if
 we make the voting deadline, it might also be tight to review the PR, as
 it seems to be big and complicated.

 I'll try my very best to get it into 2.2...


 -Matthias

> On 1/15/19 3:27 AM, Adam Bellemare wrote:
> If I can get one more binding vote in here, I may be able to get this out
> for the 2.2 release in February.
>
> Currently at:
> 2 binding, 4 non-binding.
>
>
>
>> On Sun, Jan 13, 2019 at 2:41 PM Patrik Kleindl  
>> wrote:
>>
>> +1 (non-binding)
>> I have followed the discussion too and think this feature will be very
>> helpful.
>> Thanks Adam for staying on this.
>> Best regards
>> Patrik
>>
>>> Am 13.01.2019 um 19:55 schrieb Paul Whalen :
>>>
>>> +1 non binding.  I haven't contributed at all to discussion but have
>>> followed since Adam reinvigorated it a few months ago and am very 
>>> excited
>>> about it.  It would be a huge help on the project I'm working on.
>>>
>>> On Fri, Jan 11, 2019 at 9:05 AM Adam Bellemare >>
>>> wrote:
>>>
 Thanks all -

 So far that's +2 Binding, +2 non-binding

 If we get a few more votes I can likely get this out as part of the
>> Kafka
 2.2 release, as the KIP Freeze is Jan 24, 2019. The current PR I have
>> could
 be modified to match the PR in short order.

 Adam


> On Fri, Jan 11, 2019 at 7:11 AM Damian Guy 
>> wrote:
>
> +1 binding
>
>> On Thu, 10 Jan 2019 at 16:57, Bill Bejeck  wrote:
>>
>> +1 from me.  Great job on the KIP.
>>
>> -Bill
>>
>> On Thu, Jan 10, 2019 at 11:35 AM John Roesler 
 wrote:
>>
>>> It's a +1 (nonbinding) from me as well.
>>>
>>> Thanks for sticking with this, Adam!
>>> -John
>>>
>>> On Wed, Jan 9, 2019 at 6:22 PM Guozhang Wang 
> wrote:
>>>
 Hello Adam,

 I'm +1 on the current proposal, thanks!


 Guozhang

 On Mon, Jan 7, 2019 at 6:13 AM Adam Bellemare <
>> adam.bellem...@gmail.com>
 wrote:

> Hi All
>
> I would like to call a new vote on KIP-213. The design has
 changed
> substantially. Perhaps more importantly, the KIP and associated
> documentation has been greatly simplified. I know this KIP has
 been
>> on
 the
> mailing list for a long time, but the help from John Roesler and
>>> Guozhang
> Wang have helped put it into a much better state. I would
> appreciate
>>> any
> feedback or votes.
>
>
>

>>>
>>
>

>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable
>
>
>
> Thank you
>
> Adam Bellemare
>


 --
 -- Guozhang

>>>
>>
>

>>
>

> 





Build failed in Jenkins: kafka-trunk-jdk8 #3390

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Save failed test output to build output directory

--
[...truncated 2.30 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #289

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] MINOR: add test for StreamsSmokeTestDriver (#6231)

--
[...truncated 2.31 MB...]
org.apache.kafka.connect.json.JsonConverterTest > timestampToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
timestampToConnectWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectOptional PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnectWithDefaultValue 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullValueToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringHeaderToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > longToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToConnectNonStringKeys 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED


Re: [VOTE] - KIP-213 (new vote) - Simplified and revised.

2019-02-15 Thread Adam Bellemare
Hi Bill 

Now that you are a committer, does your vote count as a binding +1? Can you 
recast it if you believe this is a sound decision? I am eager to finally finish 
up this KIP, which has been open for so long.

Thanks

Adam 

> On Feb 15, 2019, at 12:50 PM, Matthias J. Sax  wrote:
> 
> I think he needs to re-cast his vote.
> 
> -Matthias
> 
>> On 2/15/19 5:49 AM, Adam Bellemare wrote:
>> Hi all
>> 
>> Since Bill is now a committer, the vote is changed to 3 binding and 3 
>> non-binding (unless I am somehow mistaken - please let me know!). In this 
>> case, I believe the vote passes.
>> 
>> Thanks
>> 
>> Adam 
>> 
>>> On Jan 24, 2019, at 7:28 PM, Adam Bellemare  
>>> wrote:
>>> 
>>> Bumping this vote because I don't want it to languish. It is very unlikely 
>>> to go into 2.2 at this point, but I would like to avoid resurrecting a dead 
>>> thread in 30 days time.
>>> 
 On Tue, Jan 15, 2019 at 5:07 PM Adam Bellemare  
 wrote:
 All good Matthias. If it doesn’t get in for 2.2 I’ll just do it for the 
 next release. 
 
 Thanks
 
> On Jan 15, 2019, at 12:25 PM, Matthias J. Sax  
> wrote:
> 
> I'll try to review the KIP before the deadline, but as I am acting as
> release manager and also working on KIP-258, I cannot promise. Even if
> we make the voting deadline, it might also be tight to review the PR, as
> it seems to be big and complicated.
> 
> I'll try my very best to get it into 2.2...
> 
> 
> -Matthias
> 
>> On 1/15/19 3:27 AM, Adam Bellemare wrote:
>> If I can get one more binding vote in here, I may be able to get this out
>> for the 2.2 release in February.
>> 
>> Currently at:
>> 2 binding, 4 non-binding.
>> 
>> 
>> 
>>> On Sun, Jan 13, 2019 at 2:41 PM Patrik Kleindl  
>>> wrote:
>>> 
>>> +1 (non-binding)
>>> I have followed the discussion too and think this feature will be very
>>> helpful.
>>> Thanks Adam for staying on this.
>>> Best regards
>>> Patrik
>>> 
 Am 13.01.2019 um 19:55 schrieb Paul Whalen :
 
 +1 non binding.  I haven't contributed at all to discussion but have
 followed since Adam reinvigorated it a few months ago and am very 
 excited
 about it.  It would be a huge help on the project I'm working on.
 
 On Fri, Jan 11, 2019 at 9:05 AM Adam Bellemare 
 >>> 
 wrote:
 
> Thanks all -
> 
> So far that's +2 Binding, +2 non-binding
> 
> If we get a few more votes I can likely get this out as part of the
>>> Kafka
> 2.2 release, as the KIP Freeze is Jan 24, 2019. The current PR I have
>>> could
> be modified to match the PR in short order.
> 
> Adam
> 
> 
>> On Fri, Jan 11, 2019 at 7:11 AM Damian Guy 
>>> wrote:
>> 
>> +1 binding
>> 
>>> On Thu, 10 Jan 2019 at 16:57, Bill Bejeck  wrote:
>>> 
>>> +1 from me.  Great job on the KIP.
>>> 
>>> -Bill
>>> 
>>> On Thu, Jan 10, 2019 at 11:35 AM John Roesler 
> wrote:
>>> 
 It's a +1 (nonbinding) from me as well.
 
 Thanks for sticking with this, Adam!
 -John
 
 On Wed, Jan 9, 2019 at 6:22 PM Guozhang Wang 
>> wrote:
 
> Hello Adam,
> 
> I'm +1 on the current proposal, thanks!
> 
> 
> Guozhang
> 
> On Mon, Jan 7, 2019 at 6:13 AM Adam Bellemare <
>>> adam.bellem...@gmail.com>
> wrote:
> 
>> Hi All
>> 
>> I would like to call a new vote on KIP-213. The design has
> changed
>> substantially. Perhaps more importantly, the KIP and associated
>> documentation has been greatly simplified. I know this KIP has
> been
>>> on
> the
>> mailing list for a long time, but the help from John Roesler and
 Guozhang
>> Wang have helped put it into a much better state. I would
>> appreciate
 any
>> feedback or votes.
>> 
>> 
>> 
> 
 
>>> 
>> 
> 
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable
>> 
>> 
>> 
>> Thank you
>> 
>> Adam Bellemare
>> 
> 
> 
> --
> -- Guozhang
> 
 
>>> 
>> 
> 
>>> 
>> 
> 
>> 
> 


Build failed in Jenkins: kafka-trunk-jdk8 #3389

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] MINOR: add test for StreamsSmokeTestDriver (#6231)

--
[...truncated 2.30 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Matthias J. Sax
Congrats Randall!


-Matthias

On 2/14/19 6:16 PM, Guozhang Wang wrote:
> Hello all,
> 
> The PMC of Apache Kafka is happy to announce another new committer joining
> the project today: we have invited Randall Hauch as a project committer and
> he has accepted.
> 
> Randall has been participating in the Kafka community for the past 3 years,
> and is well known as the founder of the Debezium project, a popular project
> for database change-capture streams using Kafka (https://debezium.io). More
> recently he has become the main person keeping Kafka Connect moving
> forward, participated in nearly all KIP discussions and QAs on the mailing
> list. He's authored 6 KIPs and authored 50 pull requests and conducted over
> a hundred reviews around Kafka Connect, and has also been evangelizing
> Kafka Connect at several Kafka Summit venues.
> 
> 
> Thank you very much for your contributions to the Connect community Randall
> ! And looking forward to many more :)
> 
> 
> Guozhang, on behalf of the Apache Kafka PMC
> 





Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Dong Lin
Congratulations Randall!!

On Thu, Feb 14, 2019 at 6:17 PM Guozhang Wang  wrote:

> Hello all,
>
> The PMC of Apache Kafka is happy to announce another new committer joining
> the project today: we have invited Randall Hauch as a project committer and
> he has accepted.
>
> Randall has been participating in the Kafka community for the past 3 years,
> and is well known as the founder of the Debezium project, a popular project
> for database change-capture streams using Kafka (https://debezium.io).
> More
> recently he has become the main person keeping Kafka Connect moving
> forward, participated in nearly all KIP discussions and QAs on the mailing
> list. He's authored 6 KIPs and authored 50 pull requests and conducted over
> a hundred reviews around Kafka Connect, and has also been evangelizing
> Kafka Connect at several Kafka Summit venues.
>
>
> Thank you very much for your contributions to the Connect community Randall
> ! And looking forward to many more :)
>
>
> Guozhang, on behalf of the Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-15 Thread Dong Lin
Congratulations Bill!!

On Wed, Feb 13, 2019 at 5:03 PM Guozhang Wang  wrote:

> Hello all,
>
> The PMC of Apache Kafka is happy to announce that we've added Bill Bejeck
> as our newest project committer.
>
> Bill has been active in the Kafka community since 2015. He has made
> significant contributions to the Kafka Streams project with more than 100
> PRs and 4 authored KIPs, including the streams topology optimization
> framework. Bill's also very keen on tightening Kafka's unit test / system
> tests coverage, which is a great value to our project codebase.
>
> In addition, Bill has been very active in evangelizing Kafka for stream
> processing in the community. He has given several Kafka meetup talks in the
> past year, including a presentation at Kafka Summit SF. He's also authored
> a book about Kafka Streams (
> https://www.manning.com/books/kafka-streams-in-action), as well as various
> posts in public venues like DZone and on his personal blog (
> http://codingjunkie.net/).
>
> We really appreciate the contributions and are looking forward to see more
> from him. Congratulations, Bill !
>
>
> Guozhang, on behalf of the Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Daniel Hanley
Congratulations Randall!

On Fri, Feb 15, 2019 at 9:35 AM Viktor Somogyi-Vass 
wrote:

> Congrats Randall! :)
>
> On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana 
> wrote:
>
> > Congratulations Randall!
> >
> > On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison  >
> > wrote:
> > >
> > > Congrats Randall!
> > >
> > > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> > wrote:
> > > >
> > > > Congrats, Randall! Well deserved!
> > > >
> > > > -James
> > > >
> > > > Sent from my iPhone
> > > >
> > > > > On Feb 14, 2019, at 6:16 PM, Guozhang Wang 
> > wrote:
> > > > >
> > > > > Hello all,
> > > > >
> > > > > The PMC of Apache Kafka is happy to announce another new committer
> > joining
> > > > > the project today: we have invited Randall Hauch as a project
> > committer and
> > > > > he has accepted.
> > > > >
> > > > > Randall has been participating in the Kafka community for the past
> 3
> > years,
> > > > > and is well known as the founder of the Debezium project, a popular
> > project
> > > > > for database change-capture streams using Kafka (
> https://debezium.io).
> > More
> > > > > recently he has become the main person keeping Kafka Connect moving
> > > > > forward, participated in nearly all KIP discussions and QAs on the
> > mailing
> > > > > list. He's authored 6 KIPs and authored 50 pull requests and
> > conducted over
> > > > > a hundred reviews around Kafka Connect, and has also been
> > evangelizing
> > > > > Kafka Connect at several Kafka Summit venues.
> > > > >
> > > > >
> > > > > Thank you very much for your contributions to the Connect community
> > Randall
> > > > > ! And looking forward to many more :)
> > > > >
> > > > >
> > > > > Guozhang, on behalf of the Apache Kafka PMC
> >
>


[jira] [Resolved] (KAFKA-7886) Some partitions are fully truncated during recovery when log.message.format = 0.10.2 & inter.broker.protocol >= 0.11

2019-02-15 Thread Jason Gustafson (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gustafson resolved KAFKA-7886.

Resolution: Duplicate

Resolving this as a duplicate of KAFKA-7897. This will be included in 2.1.1, 
which is set to release this week. Please reopen if the issue still persists.

> Some partitions are fully truncated during recovery when log.message.format = 
> 0.10.2 & inter.broker.protocol >= 0.11
> 
>
> Key: KAFKA-7886
> URL: https://issues.apache.org/jira/browse/KAFKA-7886
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.0, 2.0.1, 2.1.0
> Environment: centos 7 
>Reporter: Hervé RIVIERE
>Priority: Major
> Attachments: broker.log
>
>
> On a Kafka 2.0.1 cluster, with brokers configured with
>  * inter.broker.protocol.version = 2.0
>  * log.message.format.version = 0.10.2
>  
> With this configuration, when a broker is restarted (clean shutdown), the 
> recovery process, for some partitions, does not take the high watermark into 
> account and truncates and re-downloads the full partition.
> Typically, for brokers with 500 partitions each / 5 TB of disk usage, the 
> recovery process with this configuration takes up to 1 hour, whereas it 
> usually takes less than 10 min on the same broker when 
> inter.broker.protocol.version = log.message.format.version.
> Which partitions get re-downloaded does not seem predictable: after several 
> restarts of the same broker, the re-downloaded partitions are not always the same.
> Broker log, filtered for one specific partition that was re-downloaded (the 
> truncation offset 12878451349 corresponds to the log start offset):
>  
> {code:java}
> 2019-01-31 09:23:34,703 INFO [ProducerStateManager partition=my_topic-11] 
> Writing producer snapshot at offset 13132373966 
> (kafka.log.ProducerStateManager)
> 2019-01-31 09:25:15,245 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
> Loading producer state till offset 13132373966 with message format version 1 
> (kafka.log.Log)
> 2019-01-31 09:25:15,245 INFO [ProducerStateManager partition=my_topic-11] 
> Writing producer snapshot at offset 13130789408 
> (kafka.log.ProducerStateManager)
> 2019-01-31 09:25:15,249 INFO [ProducerStateManager partition=my_topic-11] 
> Writing producer snapshot at offset 13131829288 
> (kafka.log.ProducerStateManager)
> 2019-01-31 09:25:15,388 INFO [ProducerStateManager partition=my_topic-11] 
> Writing producer snapshot at offset 13132373966 
> (kafka.log.ProducerStateManager)
> 2019-01-31 09:25:15,388 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
> Completed load of log with 243 segments, log start offset 12878451349 and log 
> end offset 13132373966 in 46273 ms (kafka.log.Log)
> 2019-01-31 09:28:38,226 INFO Replica loaded for partition my_topic-11 with 
> initial high watermark 13132373966 (kafka.cluster.Replica)
> 2019-01-31 09:28:38,226 INFO Replica loaded for partition my_topic-11 with 
> initial high watermark 0 (kafka.cluster.Replica)
> 2019-01-31 09:28:38,226 INFO Replica loaded for partition my_topic-11 with 
> initial high watermark 0 (kafka.cluster.Replica)
> 2019-01-31 09:28:42,132 INFO The cleaning for partition my_topic-11 is 
> aborted and paused (kafka.log.LogCleaner)
> 2019-01-31 09:28:42,133 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
> Truncating to offset 12878451349 (kafka.log.Log)
> 2019-01-31 09:28:42,135 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
> Scheduling log segment [baseOffset 12879521312, size 536869342] for deletion. 
> (kafka.log.Log)
> (...)
> 2019-01-31 09:28:42,521 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
> Scheduling log segment [baseOffset 13131829288, size 280543535] for deletion. 
> (kafka.log.Log)
> 2019-01-31 09:28:43,870 WARN [ReplicaFetcher replicaId=11, leaderId=13, 
> fetcherId=1] Truncating my_topic-11 to offset 12878451349 below high 
> watermark 13132373966 (kafka.server.ReplicaFetcherThread)
> 2019-01-31 09:29:03,703 INFO [Log partition=my_topic-11, dir=/var/lib/kafka] 
> Found deletable segments with base offsets [12878451349] due to retention 
> time 25920ms breach (kafka.log.Log)
> 2019-01-31 09:28:42,550 INFO Compaction for partition my_topic-11 is resumed 
> (kafka.log.LogManager)
> {code}
>  
> We were able to reproduce the same bug with Kafka 0.11, 2.0.1 & 2.1.0.
>  
> The same issue appears when we do a rolling restart that switches 
> log.message.format.version to 2.0.
> The issue disappears once all brokers run with log.message.format.version = 2.0 
> & inter.broker.protocol.version = 2.0.
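For reference, the broker configuration combination reported to trigger the
re-download, and the aligned combination under which the issue disappears, sketched
as server.properties excerpts (values taken from the report above; all other
settings omitted):

{noformat}
# Reported to trigger full truncation and re-download of some partitions on restart
inter.broker.protocol.version=2.0
log.message.format.version=0.10.2

# Reported as unaffected: message format aligned with the inter-broker protocol
# inter.broker.protocol.version=2.0
# log.message.format.version=2.0
{noformat}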





[jira] [Created] (KAFKA-7933) KTableKTableLeftJoinTest takes an hour to finish

2019-02-15 Thread Viktor Somogyi (JIRA)
Viktor Somogyi created KAFKA-7933:
-

 Summary: KTableKTableLeftJoinTest takes an hour to finish
 Key: KAFKA-7933
 URL: https://issues.apache.org/jira/browse/KAFKA-7933
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.2.0
Reporter: Viktor Somogyi
 Attachments: jenkins-output-one-hour-test.log

PRs might time out as 
{{KTableKTableLeftJoinTest.shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions}}
 took one hour to complete.

{noformat}
11:57:45 org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > 
shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions STARTED
12:53:35 
12:53:35 org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > 
shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions PASSED
{noformat}





Re: [VOTE] - KIP-213 (new vote) - Simplified and revised.

2019-02-15 Thread Adam Bellemare
Hi all

Since Bill is now a committer, the vote is changed to 3 binding and 3 
non-binding (unless I am somehow mistaken - please let me know!). In this case, 
I believe the vote passes.

Thanks

Adam 

> On Jan 24, 2019, at 7:28 PM, Adam Bellemare  wrote:
> 
> Bumping this vote because I don't want it to languish. It is very unlikely to 
> go into 2.2 at this point, but I would like to avoid resurrecting a dead 
> thread in 30 days time.
> 
>> On Tue, Jan 15, 2019 at 5:07 PM Adam Bellemare  
>> wrote:
>> All good Matthias. If it doesn’t get in for 2.2 I’ll just do it for the next 
>> release. 
>> 
>> Thanks
>> 
>> > On Jan 15, 2019, at 12:25 PM, Matthias J. Sax  
>> > wrote:
>> > 
>> > I'll try to review the KIP before the deadline, but as I am acting as
>> > release manager and also working on KIP-258, I cannot promise. Even if
>> > we make the voting deadline, it might also be tight to review the PR, as
>> > it seems to be big and complicated.
>> > 
>> > I'll try my very best to get it into 2.2...
>> > 
>> > 
>> > -Matthias
>> > 
>> >> On 1/15/19 3:27 AM, Adam Bellemare wrote:
>> >> If I can get one more binding vote in here, I may be able to get this out
>> >> for the 2.2 release in February.
>> >> 
>> >> Currently at:
>> >> 2 binding, 4 non-binding.
>> >> 
>> >> 
>> >> 
>> >>> On Sun, Jan 13, 2019 at 2:41 PM Patrik Kleindl  
>> >>> wrote:
>> >>> 
>> >>> +1 (non-binding)
>> >>> I have followed the discussion too and think this feature will be very
>> >>> helpful.
>> >>> Thanks Adam for staying on this.
>> >>> Best regards
>> >>> Patrik
>> >>> 
>>  Am 13.01.2019 um 19:55 schrieb Paul Whalen :
>>  
>>  +1 non binding.  I haven't contributed at all to discussion but have
>>  followed since Adam reinvigorated it a few months ago and am very 
>>  excited
>>  about it.  It would be a huge help on the project I'm working on.
>>  
>>  On Fri, Jan 11, 2019 at 9:05 AM Adam Bellemare >  
>>  wrote:
>>  
>> > Thanks all -
>> > 
>> > So far that's +2 Binding, +2 non-binding
>> > 
>> > If we get a few more votes I can likely get this out as part of the
>> >>> Kafka
>> > 2.2 release, as the KIP Freeze is Jan 24, 2019. The current PR I have
>> >>> could
>> > be modified to match the PR in short order.
>> > 
>> > Adam
>> > 
>> > 
>> >> On Fri, Jan 11, 2019 at 7:11 AM Damian Guy 
>> >>> wrote:
>> >> 
>> >> +1 binding
>> >> 
>> >>> On Thu, 10 Jan 2019 at 16:57, Bill Bejeck  wrote:
>> >>> 
>> >>> +1 from me.  Great job on the KIP.
>> >>> 
>> >>> -Bill
>> >>> 
>> >>> On Thu, Jan 10, 2019 at 11:35 AM John Roesler 
>> > wrote:
>> >>> 
>>  It's a +1 (nonbinding) from me as well.
>>  
>>  Thanks for sticking with this, Adam!
>>  -John
>>  
>>  On Wed, Jan 9, 2019 at 6:22 PM Guozhang Wang 
>> >> wrote:
>>  
>> > Hello Adam,
>> > 
>> > I'm +1 on the current proposal, thanks!
>> > 
>> > 
>> > Guozhang
>> > 
>> > On Mon, Jan 7, 2019 at 6:13 AM Adam Bellemare <
>> >>> adam.bellem...@gmail.com>
>> > wrote:
>> > 
>> >> Hi All
>> >> 
>> >> I would like to call a new vote on KIP-213. The design has
>> > changed
>> >> substantially. Perhaps more importantly, the KIP and associated
>> >> documentation has been greatly simplified. I know this KIP has
>> > been
>> >>> on
>> > the
>> >> mailing list for a long time, but the help from John Roesler and
>>  Guozhang
>> >> Wang have helped put it into a much better state. I would
>> >> appreciate
>>  any
>> >> feedback or votes.
>> >> 
>> >> 
>> >> 
>> > 
>>  
>> >>> 
>> >> 
>> > 
>> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable
>> >> 
>> >> 
>> >> 
>> >> Thank you
>> >> 
>> >> Adam Bellemare
>> >> 
>> > 
>> > 
>> > --
>> > -- Guozhang
>> > 
>>  
>> >>> 
>> >> 
>> > 
>> >>> 
>> >> 
>> > 


Re: [ANNOUNCE] New Committer: Bill Bejeck

2019-02-15 Thread Adam Bellemare
Great work Bill! Well deserved! 

> On Feb 14, 2019, at 3:55 AM, Edoardo Comar  wrote:
> 
> Well done Bill!
> --
> 
> Edoardo Comar
> 
> IBM Event Streams
> IBM UK Ltd, Hursley Park, SO21 2JN
> 
> 
> 
> 
> From:   Rajini Sivaram 
> To: dev 
> Date:   14/02/2019 09:25
> Subject:Re: [ANNOUNCE] New Committer: Bill Bejeck
> 
> 
> 
> Congratulations, Bill!
> 
> On Thu, Feb 14, 2019 at 9:04 AM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
> 
>> Congratulations Bill!
>> 
>> On Thu, 14 Feb 2019, 09:29 Mickael Maison, 
>> wrote:
>> 
>>> Congratulations Bill!
>>> 
>>> On Thu, Feb 14, 2019 at 7:52 AM Gurudatt Kulkarni 
> 
>>> wrote:
>>> 
 Congratulations Bill!
 
 On Thursday, February 14, 2019, Konstantine Karantasis <
 konstant...@confluent.io> wrote:
> Congrats Bill!
> 
> -Konstantine
> 
> On Wed, Feb 13, 2019 at 8:42 PM Srinivas Reddy <
 srinivas96all...@gmail.com
> 
> wrote:
> 
>> Congratulations Bill 
>> 
>> Well deserved!!
>> 
>> -
>> Srinivas
>> 
>> - Typed on tiny keys. pls ignore typos.{mobile app}
>> 
>>> On Thu, 14 Feb, 2019, 11:21 Ismael Juma >> 
>>> Congratulations Bill!
>>> 
>>> On Wed, Feb 13, 2019, 5:03 PM Guozhang Wang >>> wrote:
>>> 
 Hello all,
 
 The PMC of Apache Kafka is happy to announce that we've added
>> Bill
>> Bejeck
 as our newest project committer.
 
 Bill has been active in the Kafka community since 2015. He 
> has
>>> made
 significant contributions to the Kafka Streams project with 
> more
 than
>> 100
 PRs and 4 authored KIPs, including the streams topology
>>> optimization
 framework. Bill's also very keen on tightening Kafka's unit
>> test /
>> system
 tests coverage, which is a great value to our project 
> codebase.
 
 In addition, Bill has been very active in evangelizing Kafka 
> for
 stream
 processing in the community. He has given several Kafka 
> meetup
>>> talks
 in
>>> the
 past year, including a presentation at Kafka Summit SF. He's
>> also
>>> authored
 a book about Kafka Streams (
 
> https://www.manning.com/books/kafka-streams-in-action
> ), as well
>>> as
>>> various
 of posts in public venues like DZone as well as his personal
>> blog
>>> (
 
> http://codingjunkie.net/
> ).
 
 We really appreciate the contributions and are looking 
> forward
>> to
 see
>>> more
 from him. Congratulations, Bill !
 
 
 Guozhang, on behalf of the Apache Kafka PMC
 
>>> 
>> 
> 
 
>>> 
>> 
> 
> 
> 
> 


Build failed in Jenkins: kafka-2.1-jdk8 #128

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] Bump version to 2.1.1

[cmccabe] Update versions to 2.1.2-SNAPSHOT

--
[...truncated 1.06 MB...]
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testListConnectorsNotSynced PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorExists PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNameTrimWhitespaces STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNameTrimWhitespaces PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNameAllWhitespaces STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNameAllWhitespaces PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNoName STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnectorNoName PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotLeader PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testDeleteConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector STARTED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.rest.resources.RootResourceTest > testRootGet 
STARTED

org.apache.kafka.connect.runtime.rest.resources.RootResourceTest > testRootGet 
PASSED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testWhiteListedManifestResources STARTED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testWhiteListedManifestResources PASSED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testOtherResources STARTED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testOtherResources PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithNullVersion STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithNullVersion PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescComparison STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescComparison PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testRegularPluginDesc STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testRegularPluginDesc PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescEquality STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescEquality PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithSystemClassLoader STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithSystemClassLoader PASSED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConnectRestExtension STARTED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConnectRestExtension PASSED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConverters STARTED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureConverters PASSED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 
shouldInstantiateAndConfigureInternalConverters STARTED

org.apache.kafka.connect.runtime.isolation.PluginsTest > 

Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Rajini Sivaram
Congratulations, Randall!

On Fri, Feb 15, 2019 at 11:56 AM Daniel Hanley  wrote:

> Congratulations Randall!
>
> On Fri, Feb 15, 2019 at 9:35 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > Congrats Randall! :)
> >
> > On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Congratulations Randall!
> > >
> > > On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > > >
> > > > Congrats Randall!
> > > >
> > > > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> > > wrote:
> > > > >
> > > > > Congrats, Randall! Well deserved!
> > > > >
> > > > > -James
> > > > >
> > > > > Sent from my iPhone
> > > > >
> > > > > > On Feb 14, 2019, at 6:16 PM, Guozhang Wang 
> > > wrote:
> > > > > >
> > > > > > Hello all,
> > > > > >
> > > > > > The PMC of Apache Kafka is happy to announce another new
> committer
> > > joining
> > > > > > the project today: we have invited Randall Hauch as a project
> > > committer and
> > > > > > he has accepted.
> > > > > >
> > > > > > Randall has been participating in the Kafka community for the
> past
> > 3
> > > years,
> > > > > > and is well known as the founder of the Debezium project, a
> popular
> > > project
> > > > > > for database change-capture streams using Kafka (
> > https://debezium.io).
> > > More
> > > > > > recently he has become the main person keeping Kafka Connect
> moving
> > > > > > forward, participated in nearly all KIP discussions and QAs on
> the
> > > mailing
> > > > > > list. He's authored 6 KIPs and authored 50 pull requests and
> > > conducted over
> > > > > > a hundred reviews around Kafka Connect, and has also been
> > > evangelizing
> > > > > > Kafka Connect at several Kafka Summit venues.
> > > > > >
> > > > > >
> > > > > > Thank you very much for your contributions to the Connect
> community
> > > Randall
> > > > > > ! And looking forward to many more :)
> > > > > >
> > > > > >
> > > > > > Guozhang, on behalf of the Apache Kafka PMC
> > >
> >
>


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Dongjin Lee
Congratulations, Randall! You deserve it!

Best,
Dongjin

On Fri, Feb 15, 2019, 6:35 PM Viktor Somogyi-Vass  wrote:

> Congrats Randall! :)
>
> On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana 
> wrote:
>
> > Congratulations Randall!
> >
> > On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison  >
> > wrote:
> > >
> > > Congrats Randall!
> > >
> > > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> > wrote:
> > > >
> > > > Congrats, Randall! Well deserved!
> > > >
> > > > -James
> > > >
> > > > Sent from my iPhone
> > > >
> > > > > On Feb 14, 2019, at 6:16 PM, Guozhang Wang 
> > wrote:
> > > > >
> > > > > Hello all,
> > > > >
> > > > > The PMC of Apache Kafka is happy to announce another new committer
> > joining
> > > > > the project today: we have invited Randall Hauch as a project
> > committer and
> > > > > he has accepted.
> > > > >
> > > > > Randall has been participating in the Kafka community for the past
> 3
> > years,
> > > > > and is well known as the founder of the Debezium project, a popular
> > project
> > > > > for database change-capture streams using Kafka (
> https://debezium.io).
> > More
> > > > > recently he has become the main person keeping Kafka Connect moving
> > > > > forward, participated in nearly all KIP discussions and QAs on the
> > mailing
> > > > > list. He's authored 6 KIPs and authored 50 pull requests and
> > conducted over
> > > > > a hundred reviews around Kafka Connect, and has also been
> > evangelizing
> > > > > Kafka Connect at several Kafka Summit venues.
> > > > >
> > > > >
> > > > > Thank you very much for your contributions to the Connect community
> > Randall
> > > > > ! And looking forward to many more :)
> > > > >
> > > > >
> > > > > Guozhang, on behalf of the Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Viktor Somogyi-Vass
Congrats Randall! :)

On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana 
wrote:

> Congratulations Randall!
>
> On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison 
> wrote:
> >
> > Congrats Randall!
> >
> > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> wrote:
> > >
> > > Congrats, Randall! Well deserved!
> > >
> > > -James
> > >
> > > Sent from my iPhone
> > >
> > > > On Feb 14, 2019, at 6:16 PM, Guozhang Wang 
> wrote:
> > > >
> > > > Hello all,
> > > >
> > > > The PMC of Apache Kafka is happy to announce another new committer
> joining
> > > > the project today: we have invited Randall Hauch as a project
> committer and
> > > > he has accepted.
> > > >
> > > > Randall has been participating in the Kafka community for the past 3
> years,
> > > > and is well known as the founder of the Debezium project, a popular
> project
> > > > for database change-capture streams using Kafka (https://debezium.io).
> More
> > > > recently he has become the main person keeping Kafka Connect moving
> > > > forward, participated in nearly all KIP discussions and QAs on the
> mailing
> > > > list. He's authored 6 KIPs and authored 50 pull requests and
> conducted over
> > > > a hundred reviews around Kafka Connect, and has also been
> evangelizing
> > > > Kafka Connect at several Kafka Summit venues.
> > > >
> > > >
> > > > Thank you very much for your contributions to the Connect community
> Randall
> > > > ! And looking forward to many more :)
> > > >
> > > >
> > > > Guozhang, on behalf of the Apache Kafka PMC
>


Re: [VOTE] #2 KIP-248: Create New ConfigCommand That Uses The New AdminClient

2019-02-15 Thread Viktor Somogyi-Vass
Hi Everyone,

Sorry for dropping the ball on this. I'd like to discard this KIP: there has
been some overlapping work in the meantime, and I think some of the design
decisions could now be made differently. I'll try to revamp it, take the parts
that are not implemented yet, and compile them into smaller KIPs.

Viktor

On Thu, Jul 5, 2018 at 6:28 PM Rajini Sivaram 
wrote:

> Hi Magnus,
>
> Thanks for pointing that out. I haven't been keeping up-to-date with all
> the changes and I have no idea how we got here. I had requested the change
> to use SET/ADD/DELETE way back in February (
>
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201802.mbox/%3cCAOJcB39amNDgf+8EO5fyA0JvpcUEQZBHu=ujhasj4zhjf-b...@mail.gmail.com%3e
> )
> and I thought the KIP was updated. Non-incremental updates are pretty much
> useless for broker config updates, and the KIP as-is doesn't work for
> broker configs until ADD/DELETE is reintroduced. DescribeConfigs doesn't
> return sensitive configs, so apart from atomicity, any sequence of changes
> that requires non-incremental updates simply doesn't work for broker
> configs.
>
> Hi Viktor, Are you still working on this KIP?
>
> Regards,
>
> Rajini
>
> On Wed, Jul 4, 2018 at 12:14 PM, Magnus Edenhill 
> wrote:
>
> > There are some concerns about the incremental option that need to be
> > discussed further.
> >
> > I believe everyone agrees on the need for incremental updates, allowing a
> > client
> > to only alter the configuration it provides in an atomic fashion.
> > The proposal adds a request-level incremental bool for this purpose,
> which
> > is good.
> >
> > But I suspect this might not be enough, and thus suggest that we should
> > extend
> > the per-config-entry struct with a mode field that tells the broker how
> to
> > alter
> > the given configuration entry:
> >  - append - append value to entry (if no previous value it acts like set)
> >  - set - overwrite value
> >  - delete - delete configuration entry / revert to default.
> >
> > If we don't do this, the incremental mode can only be used in "append"
> > mode,
> > and a client that wishes to overwrite property A, delete B, and append to
> > C,
> > will need to issue three requests:
> >  - 1. DescribeConfigs to get the current config.
> >  - 2. AlterConfigs(incremental=False) to overwrite config property A and
> > delete B.
> >  - 3. AlterConfigs(incremental=True) to append to config property C.
> >
> > This makes the configuration update non-atomic, which is exactly what the
> > incremental mode set out to fix: any configuration changes made by another
> > client between steps 1 and 2 would be lost at step 2.
> >
> >
> > This also needs to be exposed in the Admin API to make the user's intention
> > clear: ConfigEntry should be extended with a new constructor that takes a
> > mode parameter (append, set, or delete).
> > The existing constructor should default to set/overwrite (as in the
> > existing pre-incremental case).
> > If an application issues an AlterConfigs() request with append or delete
> > ConfigEntrys and the broker does not support KIP-248,
> > the request should fail locally in the client.
> >
> > For reference, this is how it is exposed in the corresponding C API:
> > https://github.com/edenhill/librdkafka/blob/master/src/rdkafka.h#L5200
> >
> >
> >
> > 2018-07-04 11:28 GMT+02:00 Rajini Sivaram :
> >
> > > Hi Viktor,
> > >
> > > Where are we with this KIP? Is it just waiting for votes? We should try
> > and
> > > get this in earlier in the release cycle this time.
> > >
> > > Thank you,
> > >
> > > Rajini
> > >
> > > On Mon, May 21, 2018 at 7:44 AM, Viktor Somogyi <
> viktorsomo...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I'd like to ask the community to please vote for this as the KIP
> > > > freeze is tomorrow.
> > > >
> > > > Thank you very much,
> > > > Viktor
> > > >
> > > > On Mon, May 21, 2018 at 9:39 AM, Viktor Somogyi <
> > viktorsomo...@gmail.com
> > > >
> > > > wrote:
> > > > > Hi Colin,
> > > > >
> > > > > Sure, I'll add a note.
> > > > > Thanks for your vote.
> > > > >
> > > > > Viktor
> > > > >
> > > > > On Sat, May 19, 2018 at 1:01 AM, Colin McCabe 
> > > > wrote:
> > > > >> Hi Viktor,
> > > > >>
> > > > >> Thanks, this looks good.
> > > > >>
> > > > >> The boolean should default to false if not set, to ensure that
> > > existing
> > > > clients continue to work as-is, right?  Might be good to add a note
> > > > specifying that.
> > > > >>
> > > > >> +1 (non-binding)
> > > > >>
> > > > >> best,
> > > > >> Colin
> > > > >>
> > > > >> On Fri, May 18, 2018, at 08:16, Viktor Somogyi wrote:
> > > > >>> Updated KIP-248:
> > > > >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 248+-+Create+New+ConfigCommand+That+Uses+The+New+AdminClient
> > > > >>>
> > > > >>> I'd like to ask project members, committers and contributors to
> > vote
> > > > >>> as this would be a useful improvement in Kafka.
> > > > >>>
> > > > >>> Sections changed:
> > > > >>> - Public interfaces: added the 
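
To make the per-entry mode proposed by Magnus earlier in this thread concrete
(an append/set/delete operation attached to every config entry, rather than a
single request-level incremental flag), here is a minimal, self-contained Java
sketch. The AlterMode enum and the three-argument ConfigEntry below are
illustrative only: they follow the description in the thread and are not the
API that KIP-248 (since discarded) or any shipped AdminClient defines.

    import java.util.Arrays;
    import java.util.List;

    public class IncrementalAlterSketch {

        // Hypothetical per-entry operation, mirroring the append/set/delete
        // modes proposed in the thread.
        enum AlterMode { SET, APPEND, DELETE }

        // Hypothetical config entry that carries its own operation mode.
        static final class ConfigEntry {
            final String name;
            final String value;      // ignored for DELETE
            final AlterMode mode;

            ConfigEntry(String name, String value, AlterMode mode) {
                this.name = name;
                this.value = value;
                this.mode = mode;
            }
        }

        public static void main(String[] args) {
            // With a mode on every entry, a single atomic request can overwrite
            // one property, delete another and append to a third, instead of
            // the describe/overwrite/append three-request sequence described in
            // the thread, which can lose concurrent changes made in between.
            List<ConfigEntry> request = Arrays.asList(
                new ConfigEntry("log.retention.ms", "604800000", AlterMode.SET),
                new ConfigEntry("log.cleanup.policy", null, AlterMode.DELETE),
                new ConfigEntry("listeners", "PLAINTEXT://host2:9092", AlterMode.APPEND));

            for (ConfigEntry e : request) {
                System.out.printf("%s %s%s%n", e.mode, e.name,
                        e.value == null ? "" : "=" + e.value);
            }
        }
    }

This mirrors the shape in which the librdkafka C API referenced by Magnus
exposes the operation per configuration entry.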

Re: [DISCUSSION] KIP-422: Add support for user/client configuration in the Kafka Admin Client

2019-02-15 Thread Viktor Somogyi-Vass
Hi Guys,

I wanted to reject that KIP, split it up, and revamp it, since some overlapping
work appeared in the meantime; I just didn't get to it due to other,
higher-priority work.
One of the split-out KIPs would have been the quota part, and I'd be happy for
that to live in this KIP if Yaodong thinks it's worth incorporating. I'd also
be happy to rebase that wire protocol work and contribute it to this KIP.

Viktor

On Wed, Feb 13, 2019 at 7:14 PM Jun Rao  wrote:

> Hi, Yaodong,
>
> Thanks for the KIP. As Stan mentioned earlier, it seems that this is
> mostly covered by KIP-248, which was originally proposed by Victor.
>
> Hi, Victor,
>
> Do you still plan to work on KIP-248? It seems that you already got pretty
> far on that. If not, would you mind letting Yaodong take over this?
>
> For both KIP-248 and KIP-422, one thing that I found missing is the
> support for customized quota (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-257+-+Configurable+Quota+Management).
> With KIP-257, it's possible for one to construct a customized quota defined
> through a map of metric tags. It would be useful to support that in the
> AdminClient API and the wire protocol.
>
> Hi, Sonke,
>
> I think the proposal is to support the user/clientId level quota through
> an AdminClient API. The user can be obtained from any of the existing
> authentication mechanisms.
>
> Thanks,
>
> Jun
>
> On Thu, Feb 7, 2019 at 5:59 AM Sönke Liebau
>  wrote:
>
>> Hi Yaodong,
>>
>> thanks for the KIP!
>>
>> If I understand your intentions correctly then this KIP would only
>> address a fairly specific use case, namely SASL-PLAIN with users
>> defined in Zookeeper. For all other authentication mechanisms like
>> SSL, SASL-GSSAPI or SASL-PLAIN with users defined in jaas files I
>> don't see how the AdminClient could directly create new users.
>> Is this correct, or am I missing something?
>>
>> Best regards,
>> Sönke
>>
>> On Thu, Feb 7, 2019 at 2:47 PM Stanislav Kozlovski
>>  wrote:
>> >
>> > This KIP seems to duplicate some of the functionality proposed in
>> KIP-248
>> > <
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-248+-+Create+New+ConfigCommand+That+Uses+The+New+AdminClient
>> >.
>> > KIP-248 has been stuck in a vote thread since July 2018.
>> >
>> > Viktor, do you plan on working on the KIP?
>> >
>> > On Thu, Feb 7, 2019 at 1:27 PM Stanislav Kozlovski <
>> stanis...@confluent.io>
>> > wrote:
>> >
>> > > Hey there Yaodong, thanks for the KIP!
>> > >
>> > > I'm not too familiar with the user/client configurations we currently
>> > > allow, is there a KIP describing the initial feature? If there is, it
>> would
>> > > be useful to include in KIP-422.
>> > >
>> > > I also didn't see any authorization in the PR, have we thought about
>> > > needing to authorize the alter/describe requests per the user/client?
>> > >
>> > > Thanks,
>> > > Stanislav
>> > >
>> > > On Fri, Jan 25, 2019 at 5:47 PM Yaodong Yang > >
>> > > wrote:
>> > >
>> > >> Hi folks,
>> > >>
>> > >> I've published KIP-422 which is about adding support for user/client
>> > >> configurations in the Kafka Admin Client.
>> > >>
>> > >> Basically the story here is to allow KafkaAdminClient to configure
>> the
>> > >> user
>> > >> or client configurations for users, instead of requiring users to
>> directly
>> > >> talk to ZK.
>> > >>
>> > >> The link for this KIP is
>> > >> following:
>> > >>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97555704
>> > >>
>> > >> I'd be happy to receive some feedback about the KIP I published.
>> > >>
>> > >> --
>> > >> Best,
>> > >> Yaodong Yang
>> > >>
>> > >
>> > >
>> > > --
>> > > Best,
>> > > Stanislav
>> > >
>> >
>> >
>> > --
>> > Best,
>> > Stanislav
>>
>>
>>
>> --
>> Sönke Liebau
>> Partner
>> Tel. +49 179 7940878
>> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>>
>
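
For context on the quotas being discussed: per-user and per-client quotas are
stored in ZooKeeper today (under /config/users/<user> and, for a user/client
pair, /config/users/<user>/clients/<client-id>) and are manipulated with the
kafka-configs.sh tool; KIP-422 proposes exposing the same operation through the
AdminClient and the wire protocol. The sketch below only models the
entity/config shape involved; UserClientQuotaRequest is a hypothetical,
illustrative type, not part of any shipped AdminClient API.

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class UserClientQuotaSketch {

        // Hypothetical request shape: the same (entity, configs) pair that
        // kafka-configs.sh writes to ZooKeeper today, expressed as an
        // AdminClient-style call.
        static final class UserClientQuotaRequest {
            final String user;
            final String clientId;    // null: applies to all clients of the user
            final Map<String, String> configs;

            UserClientQuotaRequest(String user, String clientId, Map<String, String> configs) {
                this.user = user;
                this.clientId = clientId;
                this.configs = configs;
            }
        }

        public static void main(String[] args) {
            Map<String, String> quota = new LinkedHashMap<>();
            quota.put("producer_byte_rate", "1048576");   // 1 MiB/s produce quota
            quota.put("consumer_byte_rate", "2097152");   // 2 MiB/s fetch quota

            UserClientQuotaRequest req =
                new UserClientQuotaRequest("alice", "app-1", quota);

            // A KIP-422-style AdminClient would send this to a broker over the
            // wire protocol instead of requiring the caller to talk to
            // ZooKeeper directly.
            System.out.printf("alter configs for user=%s client-id=%s -> %s%n",
                    req.user, req.clientId, req.configs);
        }
    }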


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Satish Duggana
Congratulations Randall!

On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison  wrote:
>
> Congrats Randall!
>
> On Fri, Feb 15, 2019 at 6:37 AM James Cheng  wrote:
> >
> > Congrats, Randall! Well deserved!
> >
> > -James
> >
> > Sent from my iPhone
> >
> > > On Feb 14, 2019, at 6:16 PM, Guozhang Wang  wrote:
> > >
> > > Hello all,
> > >
> > > The PMC of Apache Kafka is happy to announce another new committer joining
> > > the project today: we have invited Randall Hauch as a project committer 
> > > and
> > > he has accepted.
> > >
> > > Randall has been participating in the Kafka community for the past 3 
> > > years,
> > > and is well known as the founder of the Debezium project, a popular 
> > > project
> > > for database change-capture streams using Kafka (https://debezium.io). 
> > > More
> > > recently he has become the main person keeping Kafka Connect moving
> > > forward, participated in nearly all KIP discussions and QAs on the mailing
> > > list. He's authored 6 KIPs and authored 50 pull requests and conducted 
> > > over
> > > a hundred reviews around Kafka Connect, and has also been evangelizing
> > > Kafka Connect at several Kafka Summit venues.
> > >
> > >
> > > Thank you very much for your contributions to the Connect community 
> > > Randall
> > > ! And looking forward to many more :)
> > >
> > >
> > > Guozhang, on behalf of the Apache Kafka PMC


Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-15 Thread Colin McCabe
P.S. I have added KAFKA-7897 to the release notes. Good catch, Jason.

best,
Colin

On Fri, Feb 15, 2019, at 00:49, Colin McCabe wrote:
> Hi all,
> 
> With 7 non-binding +1 votes, 3 binding +1 votes, no +0 votes, and no -1 
> votes, the vote passes.
> 
> Thanks, all!
> 
> cheers,
> Colin
> 
> 
> On Fri, Feb 15, 2019, at 00:07, Jonathan Santilli wrote:
>> 
>> 
>> Hello,
>> 
>> I have downloaded the source and executed integration and unit tests 
>> successfully.
>> Ran kafka-monitor for about 1 hour without any issues.
>> 
>> +1
>> 
>> Thanks for the release Colin.
>> --
>> Jonathan Santilli
>> 
>> 
>> 
>> On Fri, Feb 15, 2019 at 6:16 AM Jason Gustafson  wrote:
>>> Ran the quickstart against the 2.11 artifact and checked the release notes.
>>> For some reason, KAFKA-7897 is not included in the notes, though I
>>> definitely see it in the tagged version. The RC was probably created before
>>> the JIRA was resolved. I think we can regenerate without another RC, so +1
>>> from me.
>>> 
>>> Thanks Colin!
>>> 
>>> On Thu, Feb 14, 2019 at 3:32 PM Jun Rao  wrote:
>>> 
>>> > Hi, Colin,
>>> >
>>> > Thanks for running the release. Verified the quickstart for 2.12 binary. 
>>> > +1
>>> > from me.
>>> >
>>> > Jun
>>> >
>>> > On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:
>>> >
>>> > > Hi all,
>>> > >
>>> > > This is the third candidate for release of Apache Kafka 2.1.1. This
>>> > > release includes many bug fixes for Apache Kafka 2.1.
>>> > >
>>> > > Compared to rc1, this release includes the following changes:
>>> > > * MINOR: release.py: fix some compatibility problems.
>>> > > * KAFKA-7897; Disable leader epoch cache when older message formats are
>>> > > used
>>> > > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
>>> > > login fails
>>> > > * MINOR: Fix more places where the version should be bumped from 2.1.0 
>>> > > ->
>>> > > 2.1.1
>>> > > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if 
>>> > > the
>>> > > hostname of the broker changes.
>>> > > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
>>> > > * MINOR: Correctly set dev version in version.py
>>> > >
>>> > > Check out the release notes here:
>>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
>>> > >
>>> > > The vote will go until Wednesday, February 13th.
>>> > >
>>> > > * Release artifacts to be voted upon (source and binary):
>>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
>>> > >
>>> > > * Maven artifacts to be voted upon:
>>> > > https://repository.apache.org/content/groups/staging/
>>> > >
>>> > > * Javadoc:
>>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
>>> > >
>>> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
>>> > > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
>>> > >
>>> > > * Jenkins builds for the 2.1 branch:
>>> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
>>> > >
>>> > > Thanks to everyone who tested the earlier RCs.
>>> > >
>>> > > cheers,
>>> > > Colin
>>> > >
>>> > >
>>> >
>> 
>> 
>> -- 
>> Santilli Jonathan
> 
> 




Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-15 Thread Colin McCabe
Hi all,

With 7 non-binding +1 votes, 3 binding +1 votes, no +0 votes, and no -1 votes, 
the vote passes.

Thanks, all!

cheers,
Colin


On Fri, Feb 15, 2019, at 00:07, Jonathan Santilli wrote:
> 
> 
> Hello,
> 
> I have downloaded the source and executed integration and unit tests 
> successfully.
> Ran kafka-monitor for about 1 hour without any issues.
> 
> +1
> 
> Thanks for the release Colin.
> --
> Jonathan Santilli
> 
> 
> 
> On Fri, Feb 15, 2019 at 6:16 AM Jason Gustafson  wrote:
>> Ran the quickstart against the 2.11 artifact and checked the release notes.
>> For some reason, KAFKA-7897 is not included in the notes, though I
>> definitely see it in the tagged version. The RC was probably created before
>> the JIRA was resolved. I think we can regenerate without another RC, so +1
>> from me.
>>  
>> Thanks Colin!
>>  
>> On Thu, Feb 14, 2019 at 3:32 PM Jun Rao  wrote:
>>  
>> > Hi, Colin,
>> >
>> > Thanks for running the release. Verified the quickstart for 2.12 binary. +1
>> > from me.
>> >
>> > Jun
>> >
>> > On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:
>> >
>> > > Hi all,
>> > >
>> > > This is the third candidate for release of Apache Kafka 2.1.1. This
>> > > release includes many bug fixes for Apache Kafka 2.1.
>> > >
>> > > Compared to rc1, this release includes the following changes:
>> > > * MINOR: release.py: fix some compatibility problems.
>> > > * KAFKA-7897; Disable leader epoch cache when older message formats are
>> > > used
>> > > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
>> > > login fails
>> > > * MINOR: Fix more places where the version should be bumped from 2.1.0 ->
>> > > 2.1.1
>> > > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if the
>> > > hostname of the broker changes.
>> > > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
>> > > * MINOR: Correctly set dev version in version.py
>> > >
>> > > Check out the release notes here:
>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
>> > >
>> > > The vote will go until Wednesday, February 13th.
>> > >
>> > > * Release artifacts to be voted upon (source and binary):
>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
>> > >
>> > > * Maven artifacts to be voted upon:
>> > > https://repository.apache.org/content/groups/staging/
>> > >
>> > > * Javadoc:
>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
>> > >
>> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
>> > > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
>> > >
>> > > * Jenkins builds for the 2.1 branch:
>> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
>> > >
>> > > Thanks to everyone who tested the earlier RCs.
>> > >
>> > > cheers,
>> > > Colin
>> > >
>> > >
>> >
> 
> 
> -- 
> Santilli Jonathan


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Mickael Maison
Congrats Randall!

On Fri, Feb 15, 2019 at 6:37 AM James Cheng  wrote:
>
> Congrats, Randall! Well deserved!
>
> -James
>
> Sent from my iPhone
>
> > On Feb 14, 2019, at 6:16 PM, Guozhang Wang  wrote:
> >
> > Hello all,
> >
> > The PMC of Apache Kafka is happy to announce another new committer joining
> > the project today: we have invited Randall Hauch as a project committer and
> > he has accepted.
> >
> > Randall has been participating in the Kafka community for the past 3 years,
> > and is well known as the founder of the Debezium project, a popular project
> > for database change-capture streams using Kafka (https://debezium.io). More
> > recently he has become the main person keeping Kafka Connect moving
> > forward, participated in nearly all KIP discussions and QAs on the mailing
> > list. He's authored 6 KIPs and authored 50 pull requests and conducted over
> > a hundred reviews around Kafka Connect, and has also been evangelizing
> > Kafka Connect at several Kafka Summit venues.
> >
> >
> > Thank you very much for your contributions to the Connect community Randall
> > ! And looking forward to many more :)
> >
> >
> > Guozhang, on behalf of the Apache Kafka PMC


Build failed in Jenkins: kafka-trunk-jdk8 #3388

2019-02-15 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix bugs identified by compiler warnings (#6258)

--
[...truncated 4.31 MB...]
kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero STARTED

kafka.api.PlaintextProducerSendTest > testBatchSizeZero PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer STARTED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testClose STARTED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush STARTED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition STARTED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > testSendBeforeAndAfterPartitionExpansion 
STARTED

kafka.api.PlaintextProducerSendTest > testSendBeforeAndAfterPartitionExpansion 
PASSED

kafka.api.TransactionsBounceTest > testBrokerFailure STARTED

kafka.api.TransactionsBounceTest > testBrokerFailure PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic STARTED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList STARTED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas STARTED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testResponseTooLargeForReplicationWithAckAll PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic STARTED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition STARTED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed STARTED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero STARTED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll STARTED

kafka.api.ProducerFailureHandlingTest > 
testPartitionTooLargeForReplicationWithAckAll PASSED


Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-15 Thread Jonathan Santilli
Hello,

I have downloaded the source and executed integration and unit tests
successfully.
Ran kafka-monitor for about 1 hour without any issues.

+1

Thanks for the release Colin.
--
Jonathan Santilli



On Fri, Feb 15, 2019 at 6:16 AM Jason Gustafson  wrote:

> Ran the quickstart against the 2.11 artifact and checked the release notes.
> For some reason, KAFKA-7897 is not included in the notes, though I
> definitely see it in the tagged version. The RC was probably created before
> the JIRA was resolved. I think we can regenerate without another RC, so +1
> from me.
>
> Thanks Colin!
>
> On Thu, Feb 14, 2019 at 3:32 PM Jun Rao  wrote:
>
> > Hi, Colin,
> >
> > Thanks for running the release. Verified the quickstart for 2.12 binary.
> +1
> > from me.
> >
> > Jun
> >
> > On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > This is the third candidate for release of Apache Kafka 2.1.1.  This
> > > release includes many bug fixes for Apache Kafka 2.1.
> > >
> > > Compared to rc1, this release includes the following changes:
> > > * MINOR: release.py: fix some compatibility problems.
> > > * KAFKA-7897; Disable leader epoch cache when older message formats are
> > > used
> > > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
> > > login fails
> > > * MINOR: Fix more places where the version should be bumped from 2.1.0
> ->
> > > 2.1.1
> > > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if
> the
> > > hostname of the broker changes.
> > > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
> > > * MINOR: Correctly set dev version in version.py
> > >
> > > Check out the release notes here:
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
> > >
> > > The vote will go until Wednesday, February 13th.
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
> > > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
> > >
> > > * Jenkins builds for the 2.1 branch:
> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
> > >
> > > Thanks to everyone who tested the earlier RCs.
> > >
> > > cheers,
> > > Colin
> > >
> > >
> >
>


-- 
Santilli Jonathan