Build failed in Jenkins: kafka-2.4-jdk8 #48

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8972 (2.4 blocker): TaskManager state should always be updated


--
[...truncated 5.08 MB...]

kafka.server.KafkaConfigTest > testListenerAndAdvertisedListenerNames PASSED

kafka.server.KafkaConfigTest > testNonroutableAdvertisedListeners STARTED

kafka.server.KafkaConfigTest > testNonroutableAdvertisedListeners PASSED

kafka.server.KafkaConfigTest > 
testInterBrokerListenerNameAndSecurityProtocolSet STARTED

kafka.server.KafkaConfigTest > 
testInterBrokerListenerNameAndSecurityProtocolSet PASSED

kafka.server.KafkaConfigTest > testFromPropsInvalid STARTED

kafka.server.KafkaConfigTest > testFromPropsInvalid PASSED

kafka.server.KafkaConfigTest > testInvalidCompressionType STARTED

kafka.server.KafkaConfigTest > testInvalidCompressionType PASSED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault STARTED

kafka.server.KafkaConfigTest > testAdvertiseHostNameDefault PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided STARTED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testValidCompressionType STARTED

kafka.server.KafkaConfigTest > testValidCompressionType PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid STARTED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.KafkaConfigTest > testListenerNamesWithAdvertisedListenerUnset 
STARTED

kafka.server.KafkaConfigTest > testListenerNamesWithAdvertisedListenerUnset 
PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
STARTED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndMsProvided 
PASSED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided STARTED

kafka.server.KafkaConfigTest > testLogRollTimeMsProvided PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault STARTED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol STARTED

kafka.server.KafkaConfigTest > testInvalidAdvertisedListenersProtocol PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled STARTED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testInterBrokerVersionMessageFormatCompatibility 
STARTED

kafka.server.KafkaConfigTest > testInterBrokerVersionMessageFormatCompatibility 
PASSED

kafka.server.KafkaConfigTest > testAdvertisePortDefault STARTED

kafka.server.KafkaConfigTest > testAdvertisePortDefault PASSED

kafka.server.KafkaConfigTest > testVersionConfiguration STARTED

kafka.server.KafkaConfigTest > testVersionConfiguration PASSED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol STARTED

kafka.server.KafkaConfigTest > testEqualAdvertisedListenersProtocol PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForCaughtUpFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForCaughtUpFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade STARTED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade PASSED

kafka.server.LogDirFailureTest > testIOExceptionDuringLogRoll STARTED

kafka.server.LogDirFailureTest > testIOExceptionDuringLogRoll PASSED

kafka.server.LogDirFailureTest > testIOExceptionDuringCheckpoint STARTED

kafka.server.LogDirFailureTest > testIOExceptionDuringCheckpoint PASSED

kafka.server.LogDirFailureTest > testProduceErrorFromFailureOnCheckpoint STARTED

kafka.server.LogDirFailureTest > testProduceErrorFromFailureOnCheckpoint PASSED

kafka.server.LogDirFailureTest > 
brokerWithOldInterBrokerProtocolShouldHaltOnLogDirFailure STARTED

kafka.server.LogDirFailureTest > 
brokerWithOldInterBrokerProtocolShouldHaltOnLogDirFailure PASSED

kafka.server.LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower STARTED

kafka.server.LogDirFailureTest > 
testReplicaFetcherThreadAfterLogDirFailureOnFollower PASSED

kafka.server.LogDirFailureTest > testProduceErrorFromFailureOnLogRoll STARTED

kafka.server.LogDirFailureTest > testProduceErrorFromFailureOnLogRoll PASSED

kafka.server.OffsetsForLeaderEpochRequestTest > 
testOffsetsForLeaderEpochErrorCodes STARTED

kafka.server.OffsetsForLeaderEpochRequestTest > 
testOffsetsForLeaderEpochErrorCodes PASSED

kafka.server.OffsetsForLeaderEpochRequestTest > testCurrentEpochValidation 
STARTED

kafka.server.OffsetsForLeaderEpochRequestTest > testCurrentEpochValidation 
PASSED

kafka.server.ReplicaAlterLogDirsThreadTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #928

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Replace some Java 7 style code with Java 8 style 
(#7623)

[wangguoz] MINOR: Fix sensor retrieval in standby task's constructor (#7632)

[bbejeck] MINOR: Fix Kafka Streams JavaDocs with regard to new StreamJoined 
class


--
[...truncated 5.55 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED


Re: [VOTE] KIP-545 support automated consumer offset sync across clusters in MM 2.0

2019-11-01 Thread Xu Jianhai
I think this KIP will be implemented as a task in SinkTask, right?

On Sat, Nov 2, 2019 at 1:06 AM Ryanne Dolan  wrote:

> Hey y'all, Ning Zhang and I would like to start the vote for the following
> small KIP:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
>
> This is an elegant way to automatically write consumer group offsets to
> downstream clusters without breaking existing use cases. Currently, we rely
> on external tooling based on RemoteClusterUtils and kafka-consumer-groups
> command to write offsets. This KIP bakes this functionality into MM2
> itself, reducing the effort required to failover/failback workloads between
> clusters.
>
> Thanks for the votes!
>
> Ryanne
>
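A rough sketch of the external-tooling flow mentioned above, which this KIP
would fold into MM2 itself (RemoteClusterUtils is the existing MM2 client
utility; the cluster alias, group name, and properties are illustrative):

```
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class TranslateOffsetsExample {
    public static void main(String[] args) throws Exception {
        // Consumer-style properties pointing at the target cluster, where
        // MM2 writes its checkpoint topic (address illustrative).
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "target-cluster:9092");

        // Translate the group's offsets from the "source" cluster alias into
        // offsets that are valid on the target cluster.
        Map<TopicPartition, OffsetAndMetadata> downstreamOffsets =
            RemoteClusterUtils.translateOffsets(props, "source", "my-group",
                Duration.ofMinutes(1));

        // These would then be committed on the target cluster, e.g. via the
        // kafka-consumer-groups command, to complete a failover.
        downstreamOffsets.forEach((tp, oam) ->
            System.out.println(tp + " -> " + oam.offset()));
    }
}
```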


Build failed in Jenkins: kafka-trunk-jdk8 #4014

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] MINOR: Fix Kafka Streams JavaDocs with regard to new StreamJoined 
class

[wangguoz] KAFKA-8972 (2.4 blocker): TaskManager state should always be updated


--
[...truncated 2.74 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED


[jira] [Resolved] (KAFKA-9080) System Test Failure: MessageFormatChangeTest.testCompatibilty

2019-11-01 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-9080.
--
Resolution: Fixed

> System Test Failure: MessageFormatChangeTest.testCompatibilty
> -
>
> Key: KAFKA-9080
> URL: https://issues.apache.org/jira/browse/KAFKA-9080
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Manikumar
>Assignee: Tu Tran
>Priority: Blocker
> Fix For: 2.4.0
>
>
> MessageFormatChangeTest tests are failing on 2.4 and trunk for the 0.9.0.1 
> version.
> http://confluent-kafka-2-4-system-test-results.s3-us-west-2.amazonaws.com/2019-10-21--001.1571716233--confluentinc--2.4--cb4944f/report.html
> {code}
> Module: kafkatest.tests.client.message_format_change_test
> Class:  MessageFormatChangeTest
> Method: test_compatibility
> Arguments:
> {
>   "consumer_version": "0.9.0.1",
>   "producer_version": "0.9.0.1"
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-541: Create a fetch.max.bytes configuration for the broker

2019-11-01 Thread Colin McCabe
Hi all,

With binding +1 votes from Ismael Juma, David Arthur, and Colin McCabe, and 
non-binding +1 votes from Tom Bentley and Stanislav Kozlovski, the vote passes.

thanks, all.
Colin

On Fri, Nov 1, 2019, at 09:41, Colin McCabe wrote:
> I will add my +1, binding
> 
> best,
> Colin
> 
> On Fri, Nov 1, 2019, at 01:57, Stanislav Kozlovski wrote:
> > +1 (non-binding).
> > Thanks!
> > Stanislav
> > 
> > On Fri, Oct 25, 2019 at 2:29 PM David Arthur  wrote:
> > 
> > > +1 binding, this will be a nice improvement. Thanks, Colin!
> > >
> > > -David
> > >
> > > On Fri, Oct 25, 2019 at 4:33 AM Tom Bentley  wrote:
> > >
> > > > +1 nb. Thanks!
> > > >
> > > > On Fri, Oct 25, 2019 at 7:43 AM Ismael Juma  wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > On Thu, Oct 24, 2019, 4:56 PM Colin McCabe  wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to start the vote on KIP-541: Create a fetch.max.bytes
> > > > > > configuration for the broker.
> > > > > >
> > > > > > KIP: https://cwiki.apache.org/confluence/x/4g73Bw
> > > > > >
> > > > > > Discussion thread:
> > > > > >
> > > > >
> > > >
> > > https://lists.apache.org/thread.html/9d9dde93a07e1f1fc8d9f182f94f4bda9d016c5e9f3c8541cdc6f53b@%3Cdev.kafka.apache.org%3E
> > > > > >
> > > > > > cheers,
> > > > > > Colin
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > David Arthur
> > >
> > 
> > 
> > -- 
> > Best,
> > Stanislav
> >
>
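A minimal sketch of checking the new broker-side cap once it lands; the config
name comes from the KIP, while the broker id and address are illustrative:

```
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class FetchMaxBytesCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092");  // illustrative
        try (Admin admin = Admin.create(props)) {
            // Read broker 0's configs and print the new fetch cap, if set.
            ConfigResource broker =
                new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(
                Collections.singleton(broker)).all().get().get(broker);
            System.out.println(config.get("fetch.max.bytes"));
        }
    }
}
```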


Build failed in Jenkins: kafka-trunk-jdk8 #4013

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Replace some Java 7 style code with Java 8 style 
(#7623)

[wangguoz] MINOR: Fix sensor retrieval in standby task's constructor (#7632)


--
[...truncated 2.74 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED


[jira] [Resolved] (KAFKA-8972) KafkaConsumer.unsubscribe could leave inconsistent user rebalance callback state

2019-11-01 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8972.
--
Resolution: Fixed

> KafkaConsumer.unsubscribe could leave inconsistent user rebalance callback 
> state
> 
>
> Key: KAFKA-8972
> URL: https://issues.apache.org/jira/browse/KAFKA-8972
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Blocker
> Fix For: 2.4.0
>
>
> Our current implementation ordering of {{KafkaConsumer.unsubscribe}} is the 
> following:
> {code}
> this.subscriptions.unsubscribe();
> this.coordinator.onLeavePrepare();
> this.coordinator.maybeLeaveGroup("the consumer unsubscribed from all topics");
> {code}
> And inside {{onLeavePrepare}} we would look into the assignment and try to 
> revoke them and notify users via {{RebalanceListener#onPartitionsRevoked}}, 
> and then clear the assignment.
> However, the subscription's assignment is already cleared in 
> {{this.subscriptions.unsubscribe();}} which means user's rebalance listener 
> would never be triggered. In other words, from consumer client's pov nothing 
> is owned after unsubscribe, but from the user caller's pov the partitions are 
> not revoked yet. For callers like Kafka Streams which rely on the rebalance 
> listener to maintain their internal state, this leads to inconsistent state 
> management and failure cases.
> Before KIP-429 this issue was hidden, since every time the consumer 
> re-joined the group later it would still revoke everything anyway, regardless 
> of the passed-in parameters of the rebalance listener; with KIP-429 it is 
> easier to reproduce.
> I think we can summarize our fix as follows (see the sketch after this list):
> • Inside `unsubscribe`, first do `onLeavePrepare` / `maybeLeaveGroup` and then 
> `subscription.unsubscribe`. This way we are guaranteed that the streams' tasks 
> are all closed as revoked by then.
> • [Optimization] If the generation is reset due to a fatal error from the join / 
> heartbeat response etc., then we know that all partitions are lost, and we 
> should not trigger `onPartitionsRevoked`, but instead just `onPartitionsLost` 
> inside `onLeavePrepare`.
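> A minimal sketch of the reordering described in the first bullet, reusing the 
> field names from the snippet above (illustrative only, not the actual patch):
> {code}
> // Leave the group first, while the assignment is still populated, so the
> // RebalanceListener callbacks (onPartitionsRevoked / onPartitionsLost) fire:
> this.coordinator.onLeavePrepare();
> this.coordinator.maybeLeaveGroup("the consumer unsubscribed from all topics");
> // Only then clear the subscription state:
> this.subscriptions.unsubscribe();
> {code}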



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-535: Allow state stores to serve stale reads during rebalance

2019-11-01 Thread Sophie Blee-Goldman
Adding on to John's response to 3), can you clarify when and why exactly we
cannot convert between taskIds and partitions? If that's really the case, I
don't feel confident that the StreamsPartitionAssignor is not full of bugs...

It seems like it currently just encodes a list of all partitions (the
assignment) and also a list of the corresponding task ids, duplicated to
ensure each partition has the corresponding taskId at the same offset into
the list. Why is that problematic?


On Fri, Nov 1, 2019 at 12:39 PM John Roesler  wrote:

> Thanks, all, for considering the points!
>
> 3. Interesting. I have a vague recollection of that... Still, though,
> it seems a little fishy. After all, we return the assignments
> themselves as task ids, and the members have to map these to topic
> partitions in order to configure themselves properly. If it's too
> complicated to get this right, then how do we know that Streams is
> computing the correct partitions at all?
>
> 4. How about just checking the log-end timestamp when you call the
> method? Then, when you get an answer, it's as fresh as it could
> possibly be. And as a user you have just one, obvious, "knob" to
> configure how much overhead you want to devote to checking... If you
> want to call the broker API less frequently, you just call the Streams
> API less frequently. And you don't have to worry about the
> relationship between your invocations of that method and the config
> setting (e.g., you'll never get a negative number, which you could if
> you check the log-end timestamp less frequently than you check the
> lag).
>
> Thanks,
> -John
>
> On Thu, Oct 31, 2019 at 11:52 PM Navinder Brar
>  wrote:
> >
> > Thanks John for going through this.
> >
> >- +1, makes sense
> >- +1, no issues there
> >- Yeah, the initial patch I had submitted for K-7149
> > (https://github.com/apache/kafka/pull/6935) to reduce the assignmentInfo
> > object used taskIds, but the merged PR had a similar size according to Vinoth
> > and was simpler, so if the end result is of the same size it would not make
> > sense to pivot from the dictionary and move back to taskIDs.
> >- Not sure about what a good default would be if we don't have a
> > configurable setting. This gives the users the flexibility to serve their
> > requirements, as at the end of the day it would take CPU cycles. I am ok
> > with starting with a default and seeing how it goes based upon feedback.
> >
> > Thanks,
> > Navinder
> > On Friday, 1 November, 2019, 03:46:42 am IST, Vinoth Chandar <
> vchan...@confluent.io> wrote:
> >
> >  1. Was trying to spell them out separately, but it makes sense for
> > readability. Done.
> >
> > 2. No I immediately agree :) .. makes sense. @navinder?
> >
> > 3. I actually attempted only sending taskIds while working on KAFKA-7149.
> > It's non-trivial to handle edge cases resulting from newly added topic
> > partitions and wildcarded topic entries. I ended up simplifying it to just
> > dictionary encoding the topic names to reduce size. We can apply the same
> > technique here for this map. Additionally, we could also dictionary encode
> > HostInfo, given it's now repeated twice. I think this would save more space
> > than having a flag per topic partition entry. Lmk if you are okay with
> > this.
> >
> > 4. This opens up a good discussion. Given we support time lag estimates
> > also, we need to read the tail record of the changelog periodically (unlike
> > offset lag, where we can potentially piggyback on metadata in
> > ConsumerRecord IIUC), so we thought we should have a config that controls
> > how often this read happens. Let me know if there is a simple way to get
> > the timestamp value of the tail record that we are missing.
> >
> > On Thu, Oct 31, 2019 at 12:58 PM John Roesler  wrote:
> >
> > > Hey Navinder,
> > >
> > > Thanks for updating the KIP, it's a lot easier to see the current
> > > state of the proposal now.
> > >
> > > A few remarks:
> > > 1. I'm sure it was just an artifact of revisions, but you have two
> > > separate sections where you list additions to the KafkaStreams
> > > interface. Can you consolidate those so we can see all the additions
> > > at once?
> > >
> > > 2. For messageLagEstimate, can I suggest "offsetLagEstimate" instead,
> > > to be clearer that we're specifically measuring a number of offsets?
> > > If you don't immediately agree, then I'd at least point out that we
> > > usually refer to elements of Kafka topics as "records", not
> > > "messages", so "recordLagEstimate" might be more appropriate.
> > >
> > > 3. The proposal mentions adding a map of the standby _partitions_ for
> > > each host to AssignmentInfo. I assume this is designed to mirror the
> > > existing "partitionsByHost" map. To keep the size of these metadata
> > > messages down, maybe we can consider making two changes:
> > > (a) for both actives and standbys, encode the _task ids_ instead of
> > > _partitions_. Every member of the cluster has a copy of the topology,
> > > 

Build failed in Jenkins: kafka-trunk-jdk11 #927

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8868: Generate SubscriptionInfo protocol message (#7248)


--
[...truncated 2.75 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #4012

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-8868: Generate SubscriptionInfo protocol message (#7248)


--
[...truncated 2.74 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

[jira] [Created] (KAFKA-9132) Refactor StreamThread to take advantage of new ConsumerRebalanceListener exception handling

2019-11-01 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9132:
--

 Summary: Refactor StreamThread to take advantage of new 
ConsumerRebalanceListener exception handling
 Key: KAFKA-9132
 URL: https://issues.apache.org/jira/browse/KAFKA-9132
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.4.0
Reporter: Sophie Blee-Goldman


As part of KIP-429 we solved the long-standing issue where exceptions thrown 
during the ConsumerRebalanceListener's callbacks were swallowed, and changed 
the behavior so that these exceptions are now bubbled all the way up to the 
Consumer#poll call.

Because of the original behavior, any exceptions thrown during task creation, 
suspension, closure, etc. had to be caught by the rebalance listener and passed 
on to the calling StreamThread by setting a "rebalanceException" field. This 
then has to be checked after every polling loop.

We should refactor this in light of the new & fixed behavior, so that we can 
simply catch rebalance exceptions thrown from poll rather than check for them 
explicitly after every call. This has the additional benefit of being able to 
react to it immediately (whereas currently we have to go through the remainder 
of the entire `StreamThread#runOnce` loop before we notice the exception).
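A rough sketch of the refactored shape (names here are hypothetical; the point 
is that the exception now surfaces at the poll call itself):

{code}
// Before (described above): the rebalance listener stored the error and
// StreamThread#runOnce checked a rebalanceException field every iteration.
// After: the exception bubbles out of poll() and can be handled at once.
try {
    final ConsumerRecords<byte[], byte[]> records = mainConsumer.poll(pollTime);
    process(records);  // hypothetical helper for the rest of the loop
} catch (final RuntimeException rebalanceException) {
    // react immediately instead of finishing the remainder of runOnce
    handleRebalanceFailure(rebalanceException);  // hypothetical handler
}
{code}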



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-535: Allow state stores to serve stale reads during rebalance

2019-11-01 Thread John Roesler
Thanks, all, for considering the points!

3. Interesting. I have a vague recollection of that... Still, though,
it seems a little fishy. After all, we return the assignments
themselves as task ids, and the members have to map these to topic
partitions in order to configure themselves properly. If it's too
complicated to get this right, then how do we know that Streams is
computing the correct partitions at all?

4. How about just checking the log-end timestamp when you call the
method? Then, when you get an answer, it's as fresh as it could
possibly be. And as a user you have just one, obvious, "knob" to
configure how much overhead you want to devote to checking... If you
want to call the broker API less frequently, you just call the Streams
API less frequently. And you don't have to worry about the
relationship between your invocations of that method and the config
setting (e.g., you'll never get a negative number, which you could if
you check the log-end timestamp less frequently than you check the
lag).

Thanks,
-John

On Thu, Oct 31, 2019 at 11:52 PM Navinder Brar
 wrote:
>
> Thanks John for going through this.
>
>- +1, makes sense
>- +1, no issues there
>- Yeah, the initial patch I had submitted for 
> K-7149 (https://github.com/apache/kafka/pull/6935) to reduce the assignmentInfo 
> object used taskIds, but the merged PR had a similar size according to Vinoth and 
> was simpler, so if the end result is of the same size it would not make sense 
> to pivot from the dictionary and move back to taskIDs.
>- Not sure about what a good default would be if we don't have a 
> configurable setting. This gives the users the flexibility to serve their 
> requirements, as at the end of the day it would take CPU cycles. I 
> am ok with starting with a default and seeing how it goes based upon feedback.
>
> Thanks,
> Navinder
> On Friday, 1 November, 2019, 03:46:42 am IST, Vinoth Chandar 
>  wrote:
>
>  1. Was trying to spell them out separately, but it makes sense for
> readability. Done.
>
> 2. No I immediately agree :) .. makes sense. @navinder?
>
> 3. I actually attempted only sending taskIds while working on KAFKA-7149.
> It's non-trivial to handle edge cases resulting from newly added topic
> partitions and wildcarded topic entries. I ended up simplifying it to just
> dictionary encoding the topic names to reduce size. We can apply the same
> technique here for this map. Additionally, we could also dictionary encode
> HostInfo, given it's now repeated twice. I think this would save more space
> than having a flag per topic partition entry. Lmk if you are okay with
> this.
>
> 4. This opens up a good discussion. Given we support time lag estimates
> also, we need to read the tail record of the changelog periodically (unlike
> offset lag, where we can potentially piggyback on metadata in
> ConsumerRecord IIUC), so we thought we should have a config that controls how
> often this read happens. Let me know if there is a simple way to get the
> timestamp value of the tail record that we are missing.
>
> On Thu, Oct 31, 2019 at 12:58 PM John Roesler  wrote:
>
> > Hey Navinder,
> >
> > Thanks for updating the KIP, it's a lot easier to see the current
> > state of the proposal now.
> >
> > A few remarks:
> > 1. I'm sure it was just an artifact of revisions, but you have two
> > separate sections where you list additions to the KafkaStreams
> > interface. Can you consolidate those so we can see all the additions
> > at once?
> >
> > 2. For messageLagEstimate, can I suggest "offsetLagEstimate" instead,
> > to be clearer that we're specifically measuring a number of offsets?
> > If you don't immediately agree, then I'd at least point out that we
> > usually refer to elements of Kafka topics as "records", not
> > "messages", so "recordLagEstimate" might be more appropriate.
> >
> > 3. The proposal mentions adding a map of the standby _partitions_ for
> > each host to AssignmentInfo. I assume this is designed to mirror the
> > existing "partitionsByHost" map. To keep the size of these metadata
> > messages down, maybe we can consider making two changes:
> > (a) for both actives and standbys, encode the _task ids_ instead of
> > _partitions_. Every member of the cluster has a copy of the topology,
> > so they can convert task ids into specific partitions on their own,
> > and task ids are only (usually) three characters.
> > (b) instead of encoding two maps (hostinfo -> actives AND hostinfo ->
> > standbys), which requires serializing all the hostinfos twice, maybe
> > we can pack them together in one map with a structured value (hostinfo
> > -> [actives,standbys]).
> > Both of these ideas still require bumping the protocol version to 6,
> > and they basically mean we drop the existing `PartitionsByHost` field
> > and add a new `TasksByHost` field with the structured value I
> > mentioned.
> >
> > 4. Can we avoid adding the new "lag refresh" config? The lags would
> > necessarily be approximate anyway, so adding the 
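To make the encoding ideas in point 3 concrete, here is a rough sketch of the
structured value from 3(b), carrying task ids per 3(a). TaskId and HostInfo are
existing Streams classes, but the class shape and wire format are illustrative:

```
import java.util.Set;
import org.apache.kafka.streams.processor.TaskId;
import org.apache.kafka.streams.state.HostInfo;

// 3(a): ship task ids instead of partitions; each member rebuilds the
// partitions from its own copy of the topology.
// 3(b): one map with a structured value, so each HostInfo is encoded once.
class HostTasks {
    final Set<TaskId> activeTasks;
    final Set<TaskId> standbyTasks;

    HostTasks(final Set<TaskId> activeTasks, final Set<TaskId> standbyTasks) {
        this.activeTasks = activeTasks;
        this.standbyTasks = standbyTasks;
    }
}

// would replace the two hostinfo-keyed maps discussed above:
// Map<HostInfo, HostTasks> tasksByHost;
```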

Re: question about offsetSync

2019-11-01 Thread Xu Jianhai
In my opinion:
condition 4 is downstream data loss because the master went down without syncing
in time, so the downstream producer gets a smaller downstreamOffset;
condition 3 means an upstream broker went down without syncing in time;
condition 1 is the init state;
so condition 2 might be written like this: `downstreamTargetOffset -
lastSyncDownstreamOffset >= maxOffsetLag`, meaning offsets have not been synced
for a long time. Should we rewrite it?

On Sat, Nov 2, 2019 at 12:41 AM Xu Jianhai  wrote:

> Hi:
> I am an engineer from China. I have been reviewing the latest Kafka
> MirrorMaker implementation, but I cannot figure out why PartitionState
> updates the way it does:
> ```
>
> // true if we should emit an offset sync
> boolean update(long upstreamOffset, long downstreamOffset) {
>     boolean shouldSyncOffsets = false;
>     long upstreamStep = upstreamOffset - lastSyncUpstreamOffset;
>     long downstreamTargetOffset = lastSyncDownstreamOffset + upstreamStep;
>     if (lastSyncDownstreamOffset == -1L
>             || downstreamOffset - downstreamTargetOffset >= maxOffsetLag
>             || upstreamOffset - previousUpstreamOffset != 1L
>             || downstreamOffset < previousDownstreamOffset) {
>         lastSyncUpstreamOffset = upstreamOffset;
>         lastSyncDownstreamOffset = downstreamOffset;
>         shouldSyncOffsets = true;
>     }
>     previousUpstreamOffset = upstreamOffset;
>     previousDownstreamOffset = downstreamOffset;
>     return shouldSyncOffsets;
> }
> }
>
> ```
> I cannot figure out why the conditions are like this:
> 1. lastSyncDownstreamOffset == -1L: we have never synced, so emit a sync
> 2. downstreamOffset - downstreamTargetOffset >= maxOffsetLag: the offset is
> not accurate, so sync. But why use maxOffsetLag? Why not > 0, meaning not
> accurate?
> 3. upstreamOffset - previousUpstreamOffset != 1L: what does this mean?
> 4. downstreamOffset < previousDownstreamOffset: what does this mean?
>
>
>
>
>
>
>


[jira] [Created] (KAFKA-9131) failed producer metadata updates result in the unrelated error message

2019-11-01 Thread Ask Zuckerberg (Jira)
Ask Zuckerberg created KAFKA-9131:
-

 Summary: failed producer metadata updates result in the unrelated 
error message
 Key: KAFKA-9131
 URL: https://issues.apache.org/jira/browse/KAFKA-9131
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.0
Reporter: Ask Zuckerberg


{{Producer Metadata TimeoutException}} is processed as a generic 
RetriableException in RecordCollectorImpl.recordSendError. This results in an 
irrelevant error message.

We were supposed to see this

"Timeout exception caught when sending record to topic %s. " +
 "This might happen if the producer cannot send data to the Kafka cluster and 
thus, " +
 "its internal buffer fills up. " +
 "This can also happen if the broker is slow to respond, if the network 
connection to " +
 "the broker was interrupted, or if similar circumstances arise. " +
 "You can increase producer parameter `max.block.ms` to increase this timeout."

but got this:

"You can increase the producer configs `delivery.timeout.ms` and/or " +
 "`retries` to avoid this error. Note that `retries` is set to infinite by 
default."

These params are not applicable to metadata updates.

Technical details:

(1) Lines 221 - 236 in 
kafka/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java
are dead code. They are never executed because {{producer.send}} never throws 
TimeoutException, but returns a failed future. You can see it in lines 948-955 
in 
kafka/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java

(2) The exception is then processed in a callback function in the method 
{{recordSendError}} on line 202. The DefaultProductionExceptionHandler is used.

(3) In {{recordSendError}} in the same class the timeout exception is processed 
as a RetriableException at lines 133-136. The error message is simply wrong 
because tweaking {{delivery.timeout.ms}} and {{retries}} has nothing to do with 
the issue in this case.

Proposed solution:

(1) Remove the unreachable catch (final TimeoutException e) in 
RecordCollectorImpl.java, as Producer does not throw ApiExceptions.

(2) Move the aforementioned catch clause to the recordSendError method.

(3) Process TimeoutException separately from RetriableException (see the sketch 
below).

(4) Implement a unit test to cover this corner case
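A rough sketch of steps (2)-(3): branch on TimeoutException before the generic 
RetriableException handling. (In Kafka, TimeoutException extends 
RetriableException, so the order of the checks matters.) Class and method 
shapes are illustrative, not the actual RecordCollectorImpl code:

{code}
import org.apache.kafka.common.errors.RetriableException;
import org.apache.kafka.common.errors.TimeoutException;

class SendErrorMessages {
    static String messageFor(final Exception exception, final String topic) {
        // TimeoutException first: it is a RetriableException subtype, so the
        // generic branch would otherwise swallow it with the wrong hint.
        if (exception instanceof TimeoutException) {
            return "Timeout exception caught when sending record to topic " +
                topic + ". You can increase producer parameter `max.block.ms` " +
                "to increase this timeout.";
        }
        if (exception instanceof RetriableException) {
            return "You can increase the producer configs `delivery.timeout.ms` " +
                "and/or `retries` to avoid this error.";
        }
        return exception.getMessage();
    }
}
{code}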

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


question about offsetSync

2019-11-01 Thread Xu Jianhai
Hi:
I am an engineer from China. I have been reviewing the latest Kafka
MirrorMaker implementation, but I cannot figure out why PartitionState
updates the way it does:
```

// true if we should emit an offset sync
boolean update(long upstreamOffset, long downstreamOffset) {
    boolean shouldSyncOffsets = false;
    long upstreamStep = upstreamOffset - lastSyncUpstreamOffset;
    // where downstream would be if it had kept exact pace with upstream
    long downstreamTargetOffset = lastSyncDownstreamOffset + upstreamStep;
    if (lastSyncDownstreamOffset == -1L                                   // 1. never synced before
            || downstreamOffset - downstreamTargetOffset >= maxOffsetLag  // 2. drifted >= maxOffsetLag past target
            || upstreamOffset - previousUpstreamOffset != 1L              // 3. upstream offsets not consecutive
            || downstreamOffset < previousDownstreamOffset) {             // 4. downstream offset moved backwards
        lastSyncUpstreamOffset = upstreamOffset;
        lastSyncDownstreamOffset = downstreamOffset;
        shouldSyncOffsets = true;
    }
    previousUpstreamOffset = upstreamOffset;
    previousDownstreamOffset = downstreamOffset;
    return shouldSyncOffsets;
}
}

```
I cannot figure out why the conditions are like this:
1. lastSyncDownstreamOffset == -1L: we have never synced, so emit a sync
2. downstreamOffset - downstreamTargetOffset >= maxOffsetLag: the offset is not
accurate, so sync. But why use maxOffsetLag? Why not > 0, meaning not
accurate?
3. upstreamOffset - previousUpstreamOffset != 1L: what does this mean?
4. downstreamOffset < previousDownstreamOffset: what does this mean?


Re: [VOTE] KIP-541: Create a fetch.max.bytes configuration for the broker

2019-11-01 Thread Colin McCabe
I will add my +1, binding

best,
Colin

On Fri, Nov 1, 2019, at 01:57, Stanislav Kozlovski wrote:
> +1 (non-binding).
> Thanks!
> Stanislav
> 
> On Fri, Oct 25, 2019 at 2:29 PM David Arthur  wrote:
> 
> > +1 binding, this will be a nice improvement. Thanks, Colin!
> >
> > -David
> >
> > On Fri, Oct 25, 2019 at 4:33 AM Tom Bentley  wrote:
> >
> > > +1 nb. Thanks!
> > >
> > > On Fri, Oct 25, 2019 at 7:43 AM Ismael Juma  wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > On Thu, Oct 24, 2019, 4:56 PM Colin McCabe  wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to start the vote on KIP-541: Create a fetch.max.bytes
> > > > > configuration for the broker.
> > > > >
> > > > > KIP: https://cwiki.apache.org/confluence/x/4g73Bw
> > > > >
> > > > > Discussion thread:
> > > > >
> > > >
> > >
> > https://lists.apache.org/thread.html/9d9dde93a07e1f1fc8d9f182f94f4bda9d016c5e9f3c8541cdc6f53b@%3Cdev.kafka.apache.org%3E
> > > > >
> > > > > cheers,
> > > > > Colin
> > > > >
> > > >
> > >
> >
> >
> > --
> > David Arthur
> >
> 
> 
> -- 
> Best,
> Stanislav
>


Re: [DISCUSS] KIP-518: Allow listing consumer groups per state

2019-11-01 Thread Mickael Maison
Hi Tom,

Thanks for taking a look at the KIP!
You are right, even if we serialize the field as a String, we should
use ConsumerGroupState in the API.
As suggested, I've also updated the API so a list of states is specified.

Regards,


On Tue, Oct 22, 2019 at 10:03 AM Tom Bentley  wrote:
>
> Hi Mickael,
>
> Thanks for the KIP.
>
> The use of String to represent the desired state in the API seems less
> typesafe than would be ideal. Is there a reason not to use the existing
> ConsumerGroupState enum (even if the state is serialized as a String)?
>
> While you say that the list-of-names result from listConsumerGroups is a
> reason not to support supplying a set of desired states I don't find that
> argument entirely convincing. Sure, if the results are going to be shown to
> a user then it would be ambiguous and multiple queries would be needed. But
> it seems quite possible that the returned list of groups will immediately
> be used in a describeConsumerGroups query (for example, to show a user
> additional information about the groups of interest). In that
> case the grouping by state could be done on the descriptions, and some RPCs
> could be avoided. It would also avoid the race inherent in making multiple
> listConsumerGroups requests. So supporting a set of states isn't entirely
> worthless and it wouldn't really add very much complexity.
>
> Kind regards,
>
> Tom
>
> On Mon, Oct 21, 2019 at 5:54 PM Mickael Maison 
> wrote:
>
> > Bump
> > Now that the rush for 2.4.0 is ending I hope to get some feedback
> >
> > Thanks
> >
> > On Mon, Sep 9, 2019 at 5:44 PM Mickael Maison 
> > wrote:
> > >
> > > Hi,
> > >
> > > I have created a KIP to allow listing groups per state:
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-518%3A+Allow+listing+consumer+groups+per+state
> > >
> > > Have a look and let me know what you think.
> > > Thank you
> >
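A rough sketch of how the updated API could be used, per the discussion above. 
The inStates option follows the KIP draft and was not a released API at the 
time of this thread; everything else uses the existing Admin client:

```
import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListConsumerGroupsOptions;
import org.apache.kafka.common.ConsumerGroupState;

public class ListGroupsByState {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Filter by a set of states, as suggested in the discussion.
            ListConsumerGroupsOptions options = new ListConsumerGroupsOptions()
                .inStates(new HashSet<>(Arrays.asList(
                    ConsumerGroupState.STABLE, ConsumerGroupState.EMPTY)));
            admin.listConsumerGroups(options).all().get()
                .forEach(listing -> System.out.println(listing.groupId()));
        }
    }
}
```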


Build failed in Jenkins: kafka-trunk-jdk11 #926

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[manikumar] MINOR: Correctly mark offset expiry in GroupMetadataManager's


--
[...truncated 5.55 MB...]
org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureApplicationAndRecordMetadata PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 

[jira] [Created] (KAFKA-9130) Allow listing consumer groups per state

2019-11-01 Thread Mickael Maison (Jira)
Mickael Maison created KAFKA-9130:
-

 Summary: Allow listing consumer groups per state
 Key: KAFKA-9130
 URL: https://issues.apache.org/jira/browse/KAFKA-9130
 Project: Kafka
  Issue Type: Improvement
Reporter: Mickael Maison
Assignee: Mickael Maison


Ticket for KIP-518: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-518%3A+Allow+listing+consumer+groups+per+state



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.4.0 release

2019-11-01 Thread Manikumar
Hi All,

We still have a couple of blockers to close. PRs are available for both
blockers.

https://issues.apache.org/jira/browse/KAFKA-8972
https://issues.apache.org/jira/browse/KAFKA-9080


Thanks,


On Fri, Oct 25, 2019 at 10:48 PM Manikumar 
wrote:

> Hi all,
>
> Quick update on the 2.4 release. We still have one blocker to close.
> I will create the first RC after closing the blocker.
>
> https://issues.apache.org/jira/browse/KAFKA-8972
>
> Thank you!
>
> On Fri, Oct 18, 2019 at 12:51 AM Matthias J. Sax 
> wrote:
>
>> Just FYI:
>>
>> There was also https://issues.apache.org/jira/browse/KAFKA-9058 that I
>> just merged.
>>
>>
>> -Matthias
>>
>> On 10/17/19 7:59 AM, Manikumar wrote:
>> > Hi all,
>> >
>> > The code freeze deadline has now passed and at this point only blockers
>> > will be allowed.
>> > We have three blockers for 2.4.0. I will move out most of the JIRAs that
>> > aren't currently
>> > being worked on. If you think any of the other JIRAs are critical to
>> > include in 2.4.0,
>> > please update the fix version, mark as blocker and ensure a PR is ready
>> to
>> > merge.
>> > I will create the first RC as soon as we close the blockers.
>> > Please help to close out the 2.4.0 JIRAs.
>> >
>> > current blockers:
>> > https://issues.apache.org/jira/browse/KAFKA-8943
>> > https://issues.apache.org/jira/browse/KAFKA-8992
>> > https://issues.apache.org/jira/browse/KAFKA-8972
>> >
>> > Thank you!
>> >
>> > On Tue, Oct 8, 2019 at 8:27 PM Manikumar 
>> wrote:
>> >
>> >> Thanks Bruno. We will mark KIP-471 as complete.
>> >>
>> >> On Tue, Oct 8, 2019 at 2:39 PM Bruno Cadonna 
>> wrote:
>> >>
>> >>> Hi Manikumar,
>> >>>
>> >>> It is technically true that KIP-471 is not complete, but the only
>> >>> things missing are two metrics that I could not add due to the RocksDB
>> >>> version currently used in Streams. Adding those two metrics once the
>> >>> RocksDB version has been upgraded will be a minor effort. So, I would
>> >>> consider KIP-471 complete, with those two metrics blocked.
>> >>>
>> >>> Best,
>> >>> Bruno
>> >>>
>> >>> On Mon, Oct 7, 2019 at 8:44 PM Manikumar 
>> >>> wrote:
>> 
>>  Hi all,
>> 
>>  I have moved a couple of accepted KIPs without a PR to the next release.
>>  We still have quite a few KIPs with PRs that are being reviewed but
>>  haven't yet been merged. I have left all of these in, assuming these PRs
>>  are ready and not risky to merge. Please update your assigned KIPs/JIRAs
>>  if they are not ready and if you know they cannot make it to 2.4.0.
>> 
>>  Please ensure that all KIPs for 2.4.0 have been merged by Oct 16th.
>>  Any remaining KIPs will be moved to the next release.
>> 
>>  The KIPs still in progress are:
>> 
>>  - KIP-517: Add consumer metrics to observe user poll behavior
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-517%3A+Add+consumer+metrics+to+observe+user+poll+behavior>
>> 
>>  - KIP-511: Collect and Expose Client's Name and Version in the Brokers
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers>
>> 
>>  - KIP-474: To deprecate WindowStore#put(key, value)
>> 
>>  - KIP-471: Expose RocksDB Metrics in Kafka Streams
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-471%3A+Expose+RocksDB+Metrics+in+Kafka+Streams>
>> 
>>  - KIP-466: Add support for List serialization and deserialization
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List+serialization+and+deserialization>
>> 
>>  - KIP-455: Create an Administrative API for Replica Reassignment
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment>
>> 
>>  - KIP-446: Add changelog topic configuration to KTable suppress
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-446%3A+Add+changelog+topic+configuration+to+KTable+suppress>
>> 
>>  - KIP-444: Augment metrics for Kafka Streams
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams>
>> 
>>  - KIP-434: Add Replica Fetcher and Log Cleaner Count Metrics
>>    <https://cwiki.apache.org/confluence/display/KAFKA/KIP-434%3A+Add+Replica+Fetcher+and+Log+Cleaner+Count+Metrics>
>> 
>>  - KIP-401: TransformerSupplier/ProcessorSupplier StateStore connecting
>>    <https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97553756>
>> 
>>  - KIP-396: Add Reset/List 

Build failed in Jenkins: kafka-2.4-jdk8 #47

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[manikumar] MINOR: Correctly mark offset expiry in GroupMetadataManager's


--
[...truncated 5.06 MB...]
kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegmentSize 
PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic STARTED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets STARTED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegments STARTED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegments PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow STARTED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.LogOffsetTest > testGetOffsetsAfterDeleteRecords STARTED

kafka.server.LogOffsetTest > testGetOffsetsAfterDeleteRecords PASSED

kafka.server.DynamicConfigTest > shouldFailFollowerConfigsWithInvalidValues 
STARTED

kafka.server.DynamicConfigTest > shouldFailFollowerConfigsWithInvalidValues 
PASSED

kafka.server.DynamicConfigTest > shouldFailWhenChangingUserUnknownConfig STARTED

kafka.server.DynamicConfigTest > shouldFailWhenChangingUserUnknownConfig PASSED

kafka.server.DynamicConfigTest > shouldFailLeaderConfigsWithInvalidValues 
STARTED

kafka.server.DynamicConfigTest > shouldFailLeaderConfigsWithInvalidValues PASSED

kafka.server.DynamicConfigTest > shouldFailWhenChangingClientIdUnknownConfig 
STARTED

kafka.server.DynamicConfigTest > shouldFailWhenChangingClientIdUnknownConfig 
PASSED

kafka.server.ProduceRequestTest > testSimpleProduceRequest STARTED

kafka.server.ProduceRequestTest > testSimpleProduceRequest PASSED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest STARTED

kafka.server.ProduceRequestTest > testCorruptLz4ProduceRequest PASSED

kafka.server.ProduceRequestTest > testProduceToNonReplica STARTED

kafka.server.ProduceRequestTest > testProduceToNonReplica PASSED

kafka.server.ProduceRequestTest > testZSTDProduceRequest STARTED

kafka.server.ProduceRequestTest > testZSTDProduceRequest PASSED

kafka.server.ProduceRequestTest > testProduceWithInvalidTimestamp STARTED

kafka.server.ProduceRequestTest > testProduceWithInvalidTimestamp PASSED

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride PASSED

kafka.server.DelayedOperationTest > testTryCompleteLockContention STARTED

kafka.server.DelayedOperationTest > testTryCompleteLockContention PASSED

kafka.server.DelayedOperationTest > testTryCompleteWithMultipleThreads STARTED

kafka.server.DelayedOperationTest > testTryCompleteWithMultipleThreads PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testDelayedFuture STARTED

kafka.server.DelayedOperationTest > testDelayedFuture PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.server.DynamicBrokerReconfigurationTest > testDefaultTopicConfig STARTED

kafka.server.DynamicBrokerReconfigurationTest > testDefaultTopicConfig PASSED

kafka.server.DynamicBrokerReconfigurationTest > testMetricsReporterUpdate 
STARTED

kafka.server.DynamicBrokerReconfigurationTest > testMetricsReporterUpdate PASSED

kafka.server.DynamicBrokerReconfigurationTest > testAdvertisedListenerUpdate 
STARTED

kafka.server.DynamicBrokerReconfigurationTest > testAdvertisedListenerUpdate 
PASSED

kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize STARTED
ERROR: Could not install GRADLE_4_10_3_HOME
java.lang.NullPointerException

kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize PASSED

kafka.server.DynamicBrokerReconfigurationTest > testAddRemoveSaslListeners 
STARTED
Build timed out (after 360 

[jira] [Created] (KAFKA-9129) Add Thread ID to the InternalProcessorContext

2019-11-01 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-9129:


 Summary: Add Thread ID to the InternalProcessorContext
 Key: KAFKA-9129
 URL: https://issues.apache.org/jira/browse/KAFKA-9129
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bruno Cadonna


When we added client metrics we had to move the {{StreamsMetricsImpl}} object 
to the client level. That means that instead of having one 
{{StreamsMetricsImpl}} object per thread, we now have one per client. That also 
means that we can no longer store the thread ID in the {{StreamsMetricsImpl}}. 
Currently, we get the thread ID from {{Thread.currentThread().getName()}} when 
we need to create a sensor. However, that is not robust against code 
refactoring, because we need to ensure that the thread that creates the sensor 
is also the one that records the metrics. To be more flexible, we should expose 
the ID of the thread that executes a processor in the 
{{InternalProcessorContext}}, like it already exposes the task ID.
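A minimal sketch of the shape this could take, mirroring how the task ID is
already exposed; the method name is an assumption, not a committed interface:

{code:java}
import org.apache.kafka.streams.processor.TaskId;

// Hypothetical sketch only; the real interface has many more members.
public interface InternalProcessorContext {
    TaskId taskId();          // already exposed today

    String currentThreadId(); // proposed: the ID of the thread executing this
                              // processor, replacing ad-hoc
                              // Thread.currentThread().getName() lookups
}
{code}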



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4011

2019-11-01 Thread Apache Jenkins Server
See 


Changes:

[manikumar] MINOR: Correctly mark offset expiry in GroupMetadataManager's


--
[...truncated 2.74 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2019-11-01 Thread Satish Duggana
Hi Jun,
Thanks for looking into the updated KIP and clarifying our earlier queries.

>20. It's fine to keep the HDFS binding temporarily in the PR. We just need
to remove it before it's merged to trunk. As Viktor mentioned, we can
provide a reference implementation based on a mocked version of remote
storage.

Sure, sounds good.

>21. I am not sure that I understood the need for RemoteLogIndexEntry and
its relationship with RemoteLogSegmentInfo. It seems
that RemoteLogIndexEntry are offset index entries pointing to record
batches inside a segment. That seems to be the same as the .index file?

That is a good point. `RemoteLogManager` does not require a
`RemoteStorageManager` (RSM) to maintain positions in the remote segment
that match those of the local segments, or to keep a correlation between
local segment positions and remote segment positions. The RSM returns the
respective entries for a given log segment, and the `RemoteLogManager`
then calls the RSM to fetch the data by passing the respective entry.
This gives the RSM better control in managing the given log segments.

Thanks,
Satish.
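
For reference, a minimal sketch of the broker-side configs discussed further
down this thread; the manager class name is a placeholder, and as noted below
the enable flag may be dropped from the proposal:

{code:java}
import java.util.Properties;

public class RemoteStorageConfigSketch {
    public static Properties brokerProps() {
        Properties props = new Properties();
        // possibly removed per the discussion below, in favor of simply
        // declaring the manager class
        props.put("remote.log.storage.enable", "true");
        // placeholder class; any RemoteStorageManager implementation would go here
        props.put("remote.log.storage.manager.class.name",
                "com.example.storage.MyRemoteStorageManager");
        return props;
    }
}
{code}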

On Fri, Nov 1, 2019 at 2:28 AM Jun Rao  wrote:
>
> Hi, Harsha,
>
> I am still looking at the KIP and the PR. A couple of quick
> comments/questions.
>
> 20. It's fine to keep the HDFS binding temporarily in the PR. We just need
> to remove it before it's merged to trunk. As Viktor mentioned, we can
> provide a reference implementation based on a mocked version of remote
> storage.
>
> 21. I am not sure that I understood the need for RemoteLogIndexEntry and
> its relationship with RemoteLogSegmentInfo. It seems
> that RemoteLogIndexEntry are offset index entries pointing to record
> batches inside a segment. That seems to be the same as the .index file?
>
> Thanks,
>
> Jun
>
> On Mon, Oct 28, 2019 at 9:11 PM Satish Duggana 
> wrote:
>
> > Hi Viktor,
> > >1. Can we allow RLM Followers to serve read requests? After all segments
> > on
> > the cold storage are closed ones, no modification is allowed. Besides
> > KIP-392 (
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica
> > )
> > would introduce follower fetching too, so I think it would be nice to
> > prepare RLM for this as well.
> >
> > That is a good point. We plan to support fetching remote storage from
> > followers too. The current code in the PR works fine for this scenario,
> > though there may be some edge cases to be handled. We have not yet
> > tested this scenario.
> >
> > >2. I think the remote.log.storage.enable config is redundant. By
> > specifying
> > remote.log.storage.manager.class.name one already declares that they want
> > to use remote storage. Would it make sense to remove
> > the remote.log.storage.enable config?
> >
> > I do not think it is really needed. The `remote.log.storage.enable`
> > property can be removed.
> >
> > Thanks,
> > Satish.
> >
> >
> > On Thu, Oct 24, 2019 at 2:46 PM Viktor Somogyi-Vass
> >  wrote:
> > >
> > > Hi Harsha,
> > >
> > > A couple more questions:
> > > 1. Can we allow RLM Followers to serve read requests? After all segments
> > on
> > > the cold storage are closed ones, no modification is allowed. Besides
> > > KIP-392 (
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-392%3A+Allow+consumers+to+fetch+from+closest+replica
> > )
> > > would introduce follower fetching too, so I think it would be nice to
> > > prepare RLM for this as well.
> > > 2. I think the remote.log.storage.enable config is redundant. By
> > specifying
> > > remote.log.storage.manager.class.name one already declares that they
> > want
> > > to use remote storage. Would it make sense to remove
> > > the remote.log.storage.enable config?
> > >
> > > Thanks,
> > > Viktor
> > >
> > >
> > > On Thu, Oct 24, 2019 at 10:37 AM Viktor Somogyi-Vass <
> > > viktorsomo...@gmail.com> wrote:
> > >
> > > > Hi Jun & Harsha,
> > > >
> > > > I think it would be beneficial to at least provide one simple reference
> > > > implementation (file system based?) as we do with Connect too.
> > > > That would serve as a simple example and would help plugin developers
> > > > to better understand the concept and the interfaces.
> > > >
> > > > Best,
> > > > Viktor
> > > >
> > > > On Wed, Oct 23, 2019 at 8:49 PM Jun Rao  wrote:
> > > >
> > > >> Hi, Harsha,
> > > >>
> > > >> Regarding feature branch, if the goal is faster collaboration, it
> > seems
> > > >> that doing the development on your own fork is better since
> > non-committers
> > > >> can push changes there.
> > > >>
> > > >> Regarding the dependencies, this is an important thing to clarify. My
> > > >> understanding for this KIP is that in Apache Kafka, we won't provide
> > any
> > > >> specific implementation for a particular block storage. There are many
> > > >> block storage systems out there (HDFS, S3, Google storage, Azure
> > storage,
> > > >> Ceph, etc). We don't want to drag in all those dependencies in Apache
> > > >> Kafka, even if they are in a separate module. 

Re: [DISCUSS] KIP-536: Propagate broker timestamp to Admin API

2019-11-01 Thread Stanislav Kozlovski
Hey Noa,

KIP-436 added a JMX metric in Kafka for this exact use-case, called
`start-time-ms`. Perhaps it would be useful to name this public interface
in the same way for consistency.

Could you update the KIP to include the specific RPC changes regarding the
metadata request/responses? Here is a recent example of how to portray the
changes -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-525+-+Return+topic+metadata+and+configs+in+CreateTopics+response

Thanks,
Stanislav!

On Mon, Oct 14, 2019 at 2:46 PM Noa Resare  wrote:

> We are in the process of migrating the pieces of automation that currently
> reads and modifies zookeeper state to use the Admin API.
>
> One of the things that we lose in doing this is access to the start time of
> brokers in a cluster, which is used by our automation doing rolling
> restarts. We currently read this from the timestamp field of the
> ephemeral broker znodes in zookeeper. To address this limitation, I have
> authored KIP-536, which proposes adding a timestamp field to the Node class
> that the AdminClient.describeCluster() method returns.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-536%3A+Propagate+broker+timestamp+to+Admin+API
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-536:+Propagate+broker+timestamp+to+Admin+API
> >
>
> Any and all feedback is most welcome
>
> cheers
> noa



-- 
Best,
Stanislav
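
For illustration, a minimal sketch of how automation could consume the proposed
field; the startTimeMs accessor name is an assumption chosen to mirror the
start-time-ms metric, since the KIP has not finalized naming:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import org.apache.kafka.common.Node;

public class BrokerStartTimes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            for (Node node : cluster.nodes().get()) {
                // startTimeMs() is the hypothetical accessor the KIP proposes
                System.out.printf("broker %d started at %d%n",
                        node.id(), node.startTimeMs());
            }
        }
    }
}
{code}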


Re: [VOTE] KIP-541: Create a fetch.max.bytes configuration for the broker

2019-11-01 Thread Stanislav Kozlovski
+1 (non-binding).
Thanks!
Stanislav

On Fri, Oct 25, 2019 at 2:29 PM David Arthur  wrote:

> +1 binding, this will be a nice improvement. Thanks, Colin!
>
> -David
>
> On Fri, Oct 25, 2019 at 4:33 AM Tom Bentley  wrote:
>
> > +1 nb. Thanks!
> >
> > On Fri, Oct 25, 2019 at 7:43 AM Ismael Juma  wrote:
> >
> > > +1 (binding)
> > >
> > > On Thu, Oct 24, 2019, 4:56 PM Colin McCabe  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to start the vote on KIP-541: Create a fetch.max.bytes
> > > > configuration for the broker.
> > > >
> > > > KIP: https://cwiki.apache.org/confluence/x/4g73Bw
> > > >
> > > > Discussion thread:
> > > > https://lists.apache.org/thread.html/9d9dde93a07e1f1fc8d9f182f94f4bda9d016c5e9f3c8541cdc6f53b@%3Cdev.kafka.apache.org%3E
> > > > cheers,
> > > > Colin
> > > >
> > >
> >
>
>
> --
> David Arthur
>


-- 
Best,
Stanislav


Re: [DISCUSS] KIP-542: Partition Reassignment Throttling

2019-11-01 Thread Stanislav Kozlovski
Hey Viktor. Thanks for the KIP!

> We will introduce two new configs in order to eventually replace
*.replication.throttled.rate.
Just to clarify, you mean to replace said config in the context of
reassignment throttling, right? We are not planning to remove that config.

And also to clarify, *.throttled.replicas will not apply to the new
*reassignment* configs, correct? We will throttle all reassigning replicas.
(I am +1 on this, I believe it is easier to reason about. We could always
add a new config later)

I have one comment about backwards-compatibility - should we ensure that
the old `*.replication.throttled.rate` and `*.throttled.replicas` still
apply to reassigning traffic if set? We could have the new config take
precedence, but still preserve backwards compatibility.

Thanks,
Stanislav
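
For reference, today's throttle can already be adjusted dynamically through the
admin client; a minimal sketch using the existing config name, since the
proposed reassignment-specific names are not yet final:

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetReplicationThrottle {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Dynamically set the leader-side throttle on broker 1 to 10 MiB/s.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            AlterConfigOp setRate = new AlterConfigOp(
                    new ConfigEntry("leader.replication.throttled.rate", "10485760"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(broker, List.of(setRate))).all().get();
        }
    }
}
{code}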

On Thu, Oct 24, 2019 at 1:38 PM Viktor Somogyi-Vass 
wrote:

> Hi People,
>
> I've created a KIP to improve replication quotas by handling reassignment
> related throttling as a separate case with its own configurable limits and
> change the kafka-reassign-partitions tool to use these new configs going
> forward.
> Please have a look, I'd be happy to receive any feedback and answer
> all your questions.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-542%3A+Partition+Reassignment+Throttling
>
> Thanks,
> Viktor
>


-- 
Best,
Stanislav


Jenkins build is back to normal : kafka-trunk-jdk8 #4010

2019-11-01 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9128) ConsumerCoordinator ignores exceptions from ConsumerRebalanceListener

2019-11-01 Thread Oleg Muravskiy (Jira)
Oleg Muravskiy created KAFKA-9128:
-

 Summary: ConsumerCoordinator ignores exceptions from 
ConsumerRebalanceListener
 Key: KAFKA-9128
 URL: https://issues.apache.org/jira/browse/KAFKA-9128
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.3.0
Reporter: Oleg Muravskiy


I'm using a custom ConsumerRebalanceListener with a plain old Kafka consumer to 
manage offsets in an external storage as described in 
ConsumerRebalanceListener's javadoc.

When that storage is not available, I'm throwing an exception from my listener. 
However, the exception is simply logged and ignored by the ConsumerCoordinator, 
as can be seen in these two code snippets from it:


{code:java}
try {
    listener.onPartitionsRevoked(revoked);
} catch (WakeupException | InterruptException e) {
    throw e;
} catch (Exception e) {
    log.error("User provided listener {} failed on partition revocation",
        listener.getClass().getName(), e);
}{code}

and 


{code:java}
try {
    listener.onPartitionsAssigned(assignedPartitions);
} catch (WakeupException | InterruptException e) {
    throw e;
} catch (Exception e) {
    log.error("User provided listener {} failed on partition assignment",
        listener.getClass().getName(), e);
}{code}
The consumption continues as if nothing has happened.
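
A possible workaround under these semantics is to capture the failure inside
the listener and surface it from the poll loop, since anything other than
WakeupException or InterruptException thrown from the callback is swallowed.
A minimal sketch, with the external-storage calls elided as placeholders:

{code:java}
import java.util.Collection;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// Sketch of a workaround: remember the failure and check it after poll(),
// because ConsumerCoordinator swallows exceptions thrown from the callbacks.
public class FailureRecordingListener implements ConsumerRebalanceListener {
    private final AtomicReference<RuntimeException> failure = new AtomicReference<>();

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        try {
            saveOffsetsToExternalStorage(partitions); // user-specific logic
        } catch (RuntimeException e) {
            failure.compareAndSet(null, e); // keep the first failure
        }
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // seek to externally stored offsets; record failures the same way
    }

    /** Call after each poll(); rethrows any failure captured during a rebalance. */
    public void checkFailure() {
        RuntimeException e = failure.getAndSet(null);
        if (e != null) throw e;
    }

    private void saveOffsetsToExternalStorage(Collection<TopicPartition> partitions) {
        // external-storage logic elided in this sketch
    }
}
{code}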



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk11 #925

2019-11-01 Thread Apache Jenkins Server
See