Build failed in Jenkins: kafka-trunk-jdk11 #1150

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9503: Fix TopologyTestDriver output order (#8065)


--
[...truncated 2.86 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 

Build failed in Jenkins: kafka-trunk-jdk8 #4226

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9503: Fix TopologyTestDriver output order (#8065)


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED


Build failed in Jenkins: kafka-2.2-jdk8 #26

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Start using Response and replace IOException in


--
[...truncated 2.55 MB...]
kafka.utils.CommandLineUtilsTest > 
testMaybeMergeOptionsDefaultOverwriteExisting PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid STARTED

kafka.utils.CommandLineUtilsTest > testParseEmptyArgAsValid PASSED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
STARTED

kafka.utils.CommandLineUtilsTest > testMaybeMergeOptionsNotOverwriteExisting 
PASSED

kafka.utils.JsonTest > testParseToWithInvalidJson STARTED

kafka.utils.JsonTest > testParseToWithInvalidJson PASSED

kafka.utils.JsonTest > testParseTo STARTED

kafka.utils.JsonTest > testParseTo PASSED

kafka.utils.JsonTest > testJsonParse STARTED

kafka.utils.JsonTest > testJsonParse PASSED

kafka.utils.JsonTest > testLegacyEncodeAsString STARTED

kafka.utils.JsonTest > testLegacyEncodeAsString PASSED

kafka.utils.JsonTest > testEncodeAsBytes STARTED

kafka.utils.JsonTest > testEncodeAsBytes PASSED

kafka.utils.JsonTest > testEncodeAsString STARTED

kafka.utils.JsonTest > testEncodeAsString PASSED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr STARTED

kafka.utils.ReplicationUtilsTest > testUpdateLeaderAndIsr PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange STARTED

kafka.utils.PasswordEncoderTest > testEncoderConfigChange PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecodeAlgorithms PASSED

kafka.utils.PasswordEncoderTest > testEncodeDecode STARTED

kafka.utils.PasswordEncoderTest > testEncodeDecode PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
STARTED

kafka.utils.ShutdownableThreadTest > testShutdownWhenCalledAfterThreadStart 
PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask STARTED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED


[jira] [Resolved] (KAFKA-9500) Foreign-Key Join creates an invalid topology

2020-02-11 Thread John Roesler (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-9500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Roesler resolved KAFKA-9500.
-
Resolution: Fixed

> Foreign-Key Join creates an invalid topology
> 
>
> Key: KAFKA-9500
> URL: https://issues.apache.org/jira/browse/KAFKA-9500
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.5.0, 2.4.1
>
>
> Foreign-Key Join results are not required to be materialized by default, but 
> they might be needed if downstream operators perform lookups on the result 
> (such as when the join result participates in an equi-join).
> Currently, if the result is explicitly materialized (via Materialized), this 
> works correctly, but if the result is _not_ materialized explicitly, yet _is_ 
> needed, the topology builder throws an exception stating that the result 
> store was never added to the topology. This was an oversight in testing and 
> review and needs to be fixed ASAP.
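
Until the fix lands, explicitly materializing the foreign-key join result sidesteps the missing-store exception. Below is a minimal Java sketch (not code from the patch; the table names, String types, and the "fk-join-result" store name are hypothetical, assuming the 2.4 foreign-key join overload that accepts a Materialized):

{noformat}
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class FkJoinWorkaround {
    // Hypothetical topology: table A's value carries the foreign key into B.
    static KTable<String, String> build(StreamsBuilder builder) {
        KTable<String, String> a = builder.table("A");
        KTable<String, String> b = builder.table("B");
        // Materializing the FK join result explicitly gives downstream
        // operators (e.g. a later equi-join on the result) a store to query,
        // avoiding the missing-store exception described above.
        return a.join(
            b,
            leftValue -> leftValue,                             // FK extractor (example)
            (leftValue, rightValue) -> leftValue + rightValue,  // joiner (example)
            Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("fk-join-result"));
    }
}
{noformat}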



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.5-jdk8 #16

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Start using Response and replace IOException in

[vvcephei] KAFKA-9390: Make serde pseudo-topics unique (#8054)

[wangguoz] KAFKA-9355: Fix bug that removed RocksDB metrics after failure in EOS


--
[...truncated 2.85 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] 

Build failed in Jenkins: kafka-trunk-jdk8 #4225

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9355: Fix bug that removed RocksDB metrics after failure in EOS


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain 

Build failed in Jenkins: kafka-trunk-jdk11 #1149

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9390: Make serde pseudo-topics unique (#8054)

[github] KAFKA-9355: Fix bug that removed RocksDB metrics after failure in EOS


--
[...truncated 2.85 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task 

Build failed in Jenkins: kafka-2.3-jdk8 #173

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7052 Avoiding NPE in ExtractField SMT in case of non-existent


--
[...truncated 2.95 MB...]
kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion 
STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV2 STARTED

kafka.log.LogValidatorTest > 

[jira] [Resolved] (KAFKA-9503) TopologyTestDriver processes intermediate results in the wrong order

2020-02-11 Thread John Roesler (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Roesler resolved KAFKA-9503.
-
Resolution: Fixed

> TopologyTestDriver processes intermediate results in the wrong order
> 
>
> Key: KAFKA-9503
> URL: https://issues.apache.org/jira/browse/KAFKA-9503
> Project: Kafka
>  Issue Type: Bug
>  Components: streams-test-utils
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
>
> TopologyTestDriver has the feature that it processes each input 
> synchronously, resolving one of the most significant challenges with 
> verifying the correctness of streaming applications.
> When processing an input, it feeds that record to the source node, and the 
> record then gets passed synchronously (it's always synchronous within a task) 
> through the subtopology via Context#forward calls. Ultimately, outputs from 
> that input are forwarded into the RecordCollector, which converts them into 
> Producer.send calls. In TopologyTestDriver, this Producer is a special one 
> that simply captures the records.
> Some output topics from one subtopology are inputs to another subtopology, 
> for example repartition topics. Immediately after the synchronous 
> subtopology process() invocation, TopologyTestDriver iterates over the 
> collected outputs from the special Producer. If they are purely output 
> records, it just enqueues them for later retrieval by testing code. If they 
> are records for internal topics, though, TopologyTestDriver immediately 
> processes them as inputs for the relevant subtopology.
> The problem, and this is very subtle, is that TopologyTestDriver does this 
> recursively, which with some (apparently rare) programs can cause the output 
> to be observed in an invalid order.
> One such program is the one I wrote to test the fix for KAFKA-9487. It 
> involves a foreign-key join whose result is joined back to one of its inputs.
> {noformat}
> Here's a simplified version:
> // foreign key join
> J = A.join(B, (extractor) a -> a.b, (joiner) (a,b) -> new Pair(a, b))
> // equi-join
> OUT = A.join(J, (joiner) (a, j) -> new Pair(a, j))
> Let's say we have the following initial condition:
> A:
> a1 = {v: X, b: b1}
> B:
> b1 = {v: Y}
> J:
> a1 = Pair({v: X, b: b1}, {v: Y})
> OUT:
> a1 = Pair({v: X, b: b1}, Pair({v: X, b: b1}, {v: Y}))
> Now, piping an update:
> a1: {v: Z, b: b1}
> results immediately in two buffered results in the Producer:
> (FK join subscription): b1: {a1}
> (OUT): a1 = Pair({v: Z, b: b1}, Pair({v: X, b: b1}, {v: Y}))
> Note that the FK join result isn't updated synchronously, since it's an async 
> operation, so the RHS lookup is temporarily incorrect, yielding the nonsense 
> intermediate result where the outer pair has the updated value for a1, but 
> the inner (fk result) one still has the old value for a1.
> However! We don't buffer that output record for consumption by testing code 
> yet; we leave it in the internal Producer while we process the first 
> intermediate record (the FK subscription).
> Processing that internal record means that we have a new internal record to 
> process:
> (FK join subscription response): a1: {b1: {v: Y}}
> so right now, our internal-records-to-process stack looks like:
> (FK join subscription response): a1: {b1: {v: Y}}
> (OUT) a1: Pair({v: Z, b: b1}, Pair({v: X, b: b1}, {v: Y}))
> Again, we start by processing the first thing, the FK join response, which 
> results in an updated FK join result:
> (J) a1: Pair({v: Z, b: b1}, {v: Y})
> and output:
> (OUT) a1: Pair({v: Z, b: b1}, Pair({v: Z, b: b1}, {v: Y}))
> and, since we still haven't handled the earlier output, our 
> internal-records-to-process stack now looks like:
> (J) a1: Pair({v: Z, b: b1}, {v: Y})
> (OUT) a1: Pair({v: Z, b: b1}, Pair({v: Z, b: b1}, {v: Y}))
> (OUT) a1: Pair({v: Z, b: b1}, Pair({v: X, b: b1}, {v: Y}))
> At this point, there's nothing else to process in internal topics, so we just 
> copy the records one by one to the "output" collection for later handling by 
> testing code, but this yields the wrong final state of:
> (OUT) a1: Pair({v: Z, b: b1}, Pair({v: X, b: b1}, {v: Y}))
> That was an incorrect intermediate result, but because we're processing 
> internal records recursively (as a stack), it winds up emitted at the end 
> instead of in the middle.
> If we change the processing model from a stack to a queue, the correct order 
> is preserved, and the final state is:
> (OUT) a1: Pair({v: Z, b: b1}, Pair({v: Z, b: b1}, {v: Y}))
> {noformat}
> This is what I did in https://github.com/apache/kafka/pull/8015
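
To make the stack-vs-queue distinction concrete, here is a minimal, self-contained Java model of the drain loop described above; CapturedRecord, the "internal-" topic-name check, and the subtopology callback are hypothetical stand-ins, not the driver's actual internals:

{noformat}
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

public class OrderingSketch {

    // Hypothetical stand-in for a record captured by the driver's Producer.
    static final class CapturedRecord {
        final String topic;
        final String value;
        CapturedRecord(String topic, String value) {
            this.topic = topic;
            this.value = value;
        }
    }

    // Drain captured records, feeding internal-topic records back through the
    // subtopology (which may capture further records). With fifo = false,
    // newly captured records are processed ahead of older pending output (the
    // recursive, stack-like behavior), so a stale intermediate OUT record can
    // surface last. With fifo = true, capture order is preserved.
    static List<CapturedRecord> drain(Deque<CapturedRecord> pending,
                                      boolean fifo,
                                      Function<CapturedRecord, List<CapturedRecord>> subtopology) {
        List<CapturedRecord> outputs = new ArrayList<>();
        while (!pending.isEmpty()) {
            CapturedRecord r = pending.pollFirst();
            if (r.topic.startsWith("internal-")) {           // e.g. subscription/repartition topics
                List<CapturedRecord> captured = subtopology.apply(r);
                if (fifo) {
                    captured.forEach(pending::addLast);      // queue: keep capture order
                } else {
                    for (int i = captured.size() - 1; i >= 0; i--) {
                        pending.addFirst(captured.get(i));   // stack: jump ahead of older output
                    }
                }
            } else {
                outputs.add(r);                              // user-visible output record
            }
        }
        return outputs;
    }
}
{noformat}

Run against the scenario above, fifo = true emits the stale intermediate (OUT) record before the corrected one, so the final observed state is the correct one.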



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9541) Flaky Test DescribeConsumerGroupTest#testDescribeGroupWithShortInitializationTimeout

2020-02-11 Thread huxihx (Jira)
huxihx created KAFKA-9541:
-

 Summary: Flaky Test 
DescribeConsumerGroupTest#testDescribeGroupWithShortInitializationTimeout
 Key: KAFKA-9541
 URL: https://issues.apache.org/jira/browse/KAFKA-9541
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.4.0
Reporter: huxihx
Assignee: huxihx


h3. Error Message

java.lang.AssertionError: assertion failed
h3. Stacktrace

java.lang.AssertionError: assertion failed
  at scala.Predef$.assert(Predef.scala:267)
  at kafka.admin.DescribeConsumerGroupTest.testDescribeGroupMembersWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:630)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
  at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
  at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
  at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
  at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
  at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
  at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
  at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
  at jdk.internal.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
  at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
  at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
  at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
  at jdk.internal.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
  at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
  at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
  at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
  at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:412)
  at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
  at

[jira] [Resolved] (KAFKA-9487) Followup : KAFKA-9445(Allow fetching a key from a single partition); addressing code review comments

2020-02-11 Thread Navinder Brar (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Navinder Brar resolved KAFKA-9487.
--
Resolution: Fixed

> Followup : KAFKA-9445(Allow fetching a key from a single partition); 
> addressing code review comments
> 
>
> Key: KAFKA-9487
> URL: https://issues.apache.org/jira/browse/KAFKA-9487
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Navinder Brar
>Assignee: Navinder Brar
>Priority: Blocker
> Fix For: 2.5.0
>
>
> A few code review comments are left to be addressed from KAFKA-9445, which I 
> will be addressing in this PR.
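
For context, here is a minimal sketch of the single-partition fetch that KAFKA-9445 (KIP-562) introduced and this followup refines; the "counts-store" name and the String/Long types are hypothetical examples, assuming the 2.5.0 StoreQueryParameters API:

{noformat}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class SinglePartitionFetch {
    // Query only the partition that hosts the key, instead of wrapping all
    // local partitions of the store in one composite view.
    static Long fetch(KafkaStreams streams, String key, int partition) {
        ReadOnlyKeyValueStore<String, Long> store = streams.store(
            StoreQueryParameters
                .fromNameAndType("counts-store",
                                 QueryableStoreTypes.<String, Long>keyValueStore())
                .withPartition(partition));   // restrict the lookup to one partition
        return store.get(key);
    }
}
{noformat}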



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #139

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7052 Avoiding NPE in ExtractField SMT in case of non-existent


--
[...truncated 5.50 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-2.2-jdk8 #25

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7052 Avoiding NPE in ExtractField SMT in case of non-existent


--
[...truncated 1.82 MB...]
org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > selectKey a KStream should 
select a new key STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > selectKey a KStream should 
select a new key PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreams should 
join correctly records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > join 2 KStreams should 
join correctly records PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes PASSED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name STARTED

org.apache.kafka.streams.scala.kstream.JoinedTest > Create a Joined should 
create a Joined with Serdes and repartition topic name PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 

Build failed in Jenkins: kafka-trunk-jdk8 #4224

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix and improve StreamsConfig JavaDocs (#8086)

[github] KAFKA-9483: Add Scala KStream#toTable to the Streams DSL (#8024)

[github] MINOR: improve error reporting in DescribeConsumerGroupTest (#8080)

[github] KAFKA-9390: Make serde pseudo-topics unique (#8054)


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava 

Jenkins build is back to normal : kafka-trunk-jdk11 #1148

2020-02-11 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-2.5-jdk8 #15

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6607: Commit correct offsets for transactional input data (#8040)

[matthias] MINOR: fix and improve StreamsConfig JavaDocs (#8086)

[matthias] KAFKA-9483: Add Scala KStream#toTable to the Streams DSL (#8024)


--
[...truncated 2.85 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 

Build failed in Jenkins: kafka-2.2-jdk8-old #207

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Start using Response and replace IOException in


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H38 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 5dc15db523189949f5cc7999da18f53a9fd2aabd 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5dc15db523189949f5cc7999da18f53a9fd2aabd
Commit message: "MINOR: Start using Response and replace IOException in 
EmbeddedConnectCluster for failures (#8055)"
 > git rev-list --no-walk 75ba599de50192c3a1d11ae9f1814fb2798ae6fe # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins2305048494557374230.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins2305048494557374230.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=5dc15db523189949f5cc7999da18f53a9fd2aabd, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io


Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Guozhang Wang
Hello Sophie, thanks for bringing up this KIP, and for the great write-up
summarizing the motivations of the proposal. Here are some comments:

Minor:

1. If we want to make it a blocking call (I have some thoughts about this
below :), to be consistent we need to consider having two overloaded
functions, one without the timeout, which then relies on
`DEFAULT_API_TIMEOUT_MS_CONFIG`.

2. Also I'd suggest that, again for API consistency, we a) throw
TimeoutException if the operation cannot be completed within the timeout
value, and b) return false immediately if we cannot trigger a rebalance,
e.g. because the coordinator is unknown.
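
To illustrate, a minimal sketch of what such a pair of overloads could look
like (the names, boolean return type, and exact semantics here are only
assumptions for the sake of discussion, not the final API):

    // blocks for at most the given timeout
    boolean enforceRebalance(Duration timeout);

    // no-timeout variant, falling back to DEFAULT_API_TIMEOUT_MS_CONFIG
    boolean enforceRebalance();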

Meta:

3. I'm not sure if we have a concrete scenario in KIP-441 / KIP-268 where we
want to wait until the rebalance is completed, rather than calling
"consumer.enforceRebalance(); consumer.poll()" consecutively and executing
the rebalance in the poll call. If there are no valid motivations I'm
still a bit inclined to make it non-blocking (i.e. just setting a bit and
then executing the process in the later poll call), similar to our `seek`
functions. By doing this we can also make this function simpler, as it would
never throw RebalanceInProgressException, TimeoutException, or even
KafkaException.

4. Re: the case "when a rebalance is already in progress", this may be
related to 3) above. I think we can simplify this case as well by just not
triggering a new rebalance and letting the caller handle it: for example in
KIP-441, in each iteration of the stream thread, we can check if a standby
task is ready, and if so call `enforceRebalance`. If there's already a
rebalance in progress (whether with the new subscription metadata or not)
this call would be a no-op, and then in the next iteration we would just
call that function again; eventually we would trigger the rebalance
with the new subscription metadata, and the previous calls would be no-ops
and hence incur no cost anyway. I feel this would be simpler than letting
the caller capture RebalanceInProgressException:


void mainProcessingLoop() {
    if (needsRebalance) {
        // non-blocking: just flags that the next poll() should rebalance
        consumer.enforceRebalance();
    }

    ConsumerRecords<K, V> records = consumer.poll(pollTimeout);
    // ... do some processing
}

// registered via consumer.subscribe(topics, listener)
class Listener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        if (rebalanceGoalAchieved()) {
            needsRebalance = false;
        }
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
}
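
(Note: in a sketch like the above, `needsRebalance` can be a plain field with
no extra synchronization, since ConsumerRebalanceListener callbacks run
inside poll() on the same thread that drives the processing loop.)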


WDYT?




On Tue, Feb 11, 2020 at 3:59 PM Sophie Blee-Goldman 
wrote:

> Hey Boyang,
>
> Originally I had it as a non-blocking call, but decided to change it to
> blocking with a timeout parameter. I'm not sure a future makes sense to
> return here, because the rebalance either does or does not complete within
> the timeout: if it does not, you will have to call poll again to complete
> it (as is the case with any other rebalance). I'll call this out in the
> javadocs as well.
>
> I also added an example demonstrating how/when to use this new API.
>
> Thanks!
>
> On Tue, Feb 11, 2020 at 1:49 PM Boyang Chen 
> wrote:
>
> > Hey Sophie,
> >
> > is the `enforceRebalance` a blocking call? Could we add a code sample to
> > the KIP on how this API should be used?
> >
> > Returning a future instead of a boolean might be easier as we are
> > allowing consumers to make progress during rebalance after KIP-429, IMHO.
> >
> > Boyang
> >
> >
> > On Tue, Feb 11, 2020 at 1:17 PM Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > Thanks for the quick turnaround Sophie. My points have been addressed.
> > > I think the intended use is quite clear now.
> > >
> > > Best,
> > > Konstantine
> > >
> > >
> > > On Tue, Feb 11, 2020 at 12:57 PM Sophie Blee-Goldman <
> > sop...@confluent.io>
> > > wrote:
> > >
> > > > Konstantine,
> > > > Thanks for the feedback! I've updated the sections with your
> > > suggestions. I
> > > > agree
> > > > in particular that it's really important to make sure users don't
> call
> > > this
> > > > unnecessarily,
> > > >  or for the wrong reasons: to that end I also extended the javadocs
> to
> > > > specify that this
> > > > API is for when changes to the subscription userdata occur. Hopefully
> > > that
> > > > should make
> > > > its intended usage quite clear.
> > > >
> > > > Bill,
> > > > The rebalance triggered by this new API will be a "normal" rebalance,
> > and
> > > > therefore
> > > > follow the existing listener semantics. For example a cooperative
> > > rebalance
> > > > will always
> > > > call onPartitionsAssigned, even if no partitions are actually moved.
> > > > An eager rebalance will still revoke all partitions first anyway.
> > > >
> > > > Thanks for the feedback!
> > > > Sophie
> > > >
> > > > On Tue, Feb 11, 2020 at 9:52 AM Bill Bejeck 
> wrote:
> > > >
> > > > > Hi Sophie,
> > > > >
> > > > > Thanks for the KIP, makes sense to me.
> > > > >
> > > > > One quick question, I'm not sure if it's relevant or not.
> > > > >
> > > > > If a user provides a `ConsumerRebalanceListener` and a rebalance is
> > > > > triggered from an `enforceRebalance`  call,
> > > > > it seems possible the listener won't get called since 

Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Guozhang Wang
Hi Sophie,

Thanks for the KIP, I left some comments on the DISCUSS thread.


Guozhang


On Tue, Feb 11, 2020 at 3:25 PM Bill Bejeck  wrote:

> Thanks for the KIP Sophie.
>
> It's a +1 (binding) for me.
>
> -Bill
>
> On Tue, Feb 11, 2020 at 4:21 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > The KIP reads quite well for me now and I think this feature will enable
> > even more efficient load balancing for specific use cases.
> >
> > I'm also +1 (non-binding)
> >
> > - Konstantine
> >
> > On Tue, Feb 11, 2020 at 9:35 AM Navinder Brar
> >  wrote:
> >
> > > Thanks Sophie, much required.
> > > +1 non-binding.
> > >
> > >
> > > Sent from Yahoo Mail for iPhone
> > >
> > >
> > > On Tuesday, February 11, 2020, 10:33 PM, John Roesler <
> > vvcep...@apache.org>
> > > wrote:
> > >
> > > Thanks Sophie,
> > >
> > > I'm +1 (binding)
> > >
> > > -John
> > >
> > > On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
> > > > Hey all,
> > > >
> > > > I'd like to start the voting on KIP-568. It proposes the new
> > > > Consumer#enforceRebalance API to facilitate triggering efficient
> > > rebalances.
> > > >
> > > > For reference, here is the KIP link again:
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> > > >
> > > > Thanks!
> > > > Sophie
> > > >
> > >
> > >
> > >
> > >
> >
>


-- 
-- Guozhang


Jenkins build is back to normal : kafka-trunk-jdk8 #4223

2020-02-11 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-02-11 Thread Guozhang Wang
Boyang,

Thanks for the update. This change makes sense to me.

Guozhang
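
For reference, a minimal sketch of pinning this on the broker side (assuming
a plain java.util.Properties based broker configuration; the change itself is
described in the quoted message below):

    Properties brokerProps = new Properties();
    // KIP-447 lowers the default from 60000 ms (1 min) to 10000 ms (10 s);
    // an explicit override like this would restore the previous behavior.
    brokerProps.put(
        "transaction.abort.timed.out.transaction.cleanup.interval.ms",
        "60000");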

On Tue, Feb 11, 2020 at 11:37 AM Boyang Chen 
wrote:

> Hey there,
>
> we are adding a small change to the KIP-447 public API. The default value
> of `transaction.abort.timed.out.transaction.cleanup.interval.ms` shall be
> changed from 1 minute to 10 seconds. The goal here is to trigger expiration
> of timed-out transactions more frequently, in order to reduce the
> consumers' pending offset-fetch wait time.
>
> Let me know if you have further questions, thanks!
>
>
> On Wed, Jan 8, 2020 at 3:44 PM Boyang Chen 
> wrote:
>
> > Thanks Guozhang for another review! I have addressed all the javadoc
> > changes for PendingTransactionException in the KIP. For
> FENCED_INSTANCE_ID
> > the only thrown place would be on the new send offsets API, which is also
> > addressed.
> >
> > Thanks Matthias for the vote! As we have 3 binding votes (Guozhang,
> Jason,
> > and Matthias), the KIP is officially accepted and prepared to ship in
> 2.5.
> >
> > Still feel free to put more thoughts on either discussion or voting
> thread
> > to refine the KIP!
> >
> >
> > On Wed, Jan 8, 2020 at 3:15 PM Matthias J. Sax 
> > wrote:
> >
> >> I just re-read the KIP. Overall I am +1 as well.
> >>
> >
> >> Some minor comments (also apply to the Google design doc):
> >>
> >> 1) As 2.4 was released, references should be updated to 2.5.
> >>
> > Addressed
> >
> >>
> >>
> >>>  2) About the upgrade path, the KIP says:
> >>
> >> 2a)
> >>
> >> > Broker must be upgraded to 2.4 first. This means the
> >> > `inter.broker.protocol.version` (IBP) has to be set to the latest. Any
> >> > produce request with a higher version will automatically get fenced
> >> > because it is not supported.
> >>
> >> From my understanding, this is not correct? After a broker is updated to
> >> the new binaries, it should accept new requests, even if IBP was not
> >> bumped yet?
> >>
> > Your understanding was correct; after some offline discussion, we should
> > not worry about IBP in this case.
> >
> >> 2b)
> >>
> >> About the two rolling bounces for KS apps and the statement
> >>
> >> > one should never allow task producer and thread producer under the
> >> > same application group
> >>
> >> In the second rolling bounce, we might actually mix both (ie, per-task
> >> and per-thread producers) but this is fine as explained in the KIP. The
> >> only case we cannot allow is, old per-task producers (without consumer
> >> generation fencing) to be mixed with per-thread producers (that rely
> >> solely on consumer generation fencing).
> >>
> >> Does this sound correct?
> >>
> > Correct, that's the purpose of doing 2 rolling bounces, where the first
> > one is to guarantee everyone's opt-in to generation fencing.
> >
> >>
> >> 3) We should also document how users can use KS 2.5 applications against
> >> older brokers -- for this case, we need to stay on per-task producers
> >> and cannot use the new fencing mechanism. Currently, the KIP only
> >> describe a single way how users could make this work: by setting (and
> >> keeping) UPGRADE_FROM config to 2.4 (what might not be an ideal solution
> >> and might also not be clear by itself that people would need to do
> this)?
> >>
> >>
> > Yes exactly, at the moment we are actively working on a plan to acquire
> > the broker's IBP during streams start-up and initialize based on that
> > information, so that users don't need to keep UPGRADE_FROM set simply to
> > work with old brokers.
> >
> >>
> >> -Matthias
> >>
> >>
> >>
> >> On 9/18/19 4:41 PM, Boyang Chen wrote:
> >> > Bump this thread to see if someone could also review!
> >> >
> >> > On Mon, Sep 9, 2019 at 5:00 PM Boyang Chen <
> reluctanthero...@gmail.com>
> >> > wrote:
> >> >
> >> >> Thank you Jason! Addressed the comments.
> >> >>
> >> >> Thank you Guozhang for explaining. I will document the timeout
> setting
> >> >> reasoning in the design doc.
> >> >>
> >> >>
> >> >> On Mon, Sep 9, 2019 at 1:49 PM Guozhang Wang 
> >> wrote:
> >> >>
> >> >>> On Fri, Sep 6, 2019 at 6:33 PM Boyang Chen <
> >> reluctanthero...@gmail.com>
> >> >>> wrote:
> >> >>>
> >>  Thanks Guozhang, I have polished the design doc to make it sync
> with
> >>  current KIP. As for overriding default timeout values, I guess it's
> >> >>> already
> >>  stated in the KIP to set txn timeout to 10s, are you suggesting we
> >> >>> should
> >>  also put down this recommendation on the KIP for non-stream EOS
> >> users?
> >> 
> >>  My comment is not for changing any produce / consumer default
> config
> >> >>> values, but for the Streams configs, to make sure that our
> >> >>> overridden config values respect the above rules. That is, we check
> >> the
> >> >>> actual value used in the config if they are ever overridden by
> users,
> >> and
> >> >>> if the above were not true we can log a warning that it may be risky
> >> to
> >> >>> encounter some unnecessary rebalances.
> >> >>>
> >> >>> Again, this is not something we need to include in 

[jira] [Resolved] (KAFKA-9390) Non-key joining of KTable not compatible with confluent avro serdes

2020-02-11 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-9390.
-
Resolution: Fixed

> Non-key joining of KTable not compatible with confluent avro serdes
> ---
>
> Key: KAFKA-9390
> URL: https://issues.apache.org/jira/browse/KAFKA-9390
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Andy Bryant
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.5.0, 2.4.1
>
>
> I was trying out the new one-to-many KTable joins against some CDC data in 
> Avro format and kept getting serialisation errors.
>  
> {code:java}
> org.apache.kafka.common.errors.SerializationException: Error registering Avro 
> schema: 
> {"type":"record","name":"Key","namespace":"dbserver1.inventory.orders","fields":[
> {"name":"order_number","type":"int"}
> ],"connect.name":"dbserver1.inventory.orders.Key"}
>  Caused by: 
> io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: 
> Schema being registered is incompatible with an earlier schema; error code: 
> 409
>   
> {code}
> Both tables have avro keys of different types (one is an order key, the other 
> a customer key).
> This looks like it will cause issues.
> [https://github.com/apache/kafka/blob/2.4/streams/src/main/java/org/apache/kafka/streams/kstream/internals/foreignkeyjoin/CombinedKeySchema.java#L57-L60]
>  They will both attempt to register schemas with the same subject to the 
> schema registry which will fail a backward compatibility check.
> I also noticed in the schema registry there were some subjects that didn't 
> have the application id prefix. This is probably caused by this...
>  
> [https://github.com/apache/kafka/blob/2.4/streams/src/main/java/org/apache/kafka/streams/kstream/internals/foreignkeyjoin/ForeignJoinSubscriptionSendProcessorSupplier.java#L88]
> Where here {{repartitionTopicName}} doesn't have the application prefix.
>  
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [KAFKA-557] Add emit on change support for Kafka Streams

2020-02-11 Thread Richard Yu
Hi all,

Bumping this. If you feel that this KIP is not too urgent, then let me
know. :)

Cheers,
Richard

On Thu, Feb 6, 2020 at 4:55 PM Richard Yu 
wrote:

> Hi all,
>
> I've had just a few thoughts regarding the forwarding of <old value, new
> value> changes. As Matthias already mentioned, there are two
> separate priorities by which we can judge this KIP:
>
> 1. An optimization perspective: In this case, the user would prefer the
> impact of this KIP to be as minimal as possible. By such logic, having
> stateless operations performed twice could prove unacceptable to them
> (since those operations can be expensive).
>
> 2. Semantics correctness perspective: Unlike the optimization approach, we
> are more concerned with all KTable operations obeying the same emission
> policy, i.e. emit-on-change. In this case, a discrepancy would not be
> tolerated, even though an extra performance cost would be incurred.
> Therefore, we would follow Matthias's approach and perform the
> operation once on the old value and once on the new.
>
> The issue here, I think, is more black and white than in between. The second
> option in particular would be favorable for users with inexpensive
> stateless operations, while the former option suits those dealing
> with more expensive ones. So the simplest solution is probably to allow the
> user to choose one of the behaviors, via a config which can switch
> between them.
>
> It's the simplest compromise I can come up with at the moment, but if you
> think you have a better plan which could better balance the tradeoffs, then
> please let us know. :)
>
> Best,
> Richard
>
> On Wed, Feb 5, 2020 at 5:12 PM John Roesler  wrote:
>
>> Hi all,
>>
>> Thanks for the thoughtful comments!
>>
>> I need more time to reflect on your thoughts, but just wanted to offer
>> a quick clarification about equals().
>>
>> I only meant that we can't be sure if a class's equals() implementation
>> returns true for two semantically identical instances. I.e., if a class
>> doesn't
>> override the default equals() implementation, then we would see behavior
>> like:
>>
>> new MyPair("A", 1).equals(new MyPair("A", 1)) returns false
>>
>> In that case, I would still like to catch no-op updates by comparing the
>> serialized form of the records when we happen to have it serialized anyway
>> (such as when the operation is stateful, or when we're sending to a
>> repartition topic and we have both the "new" and "old" value from
>> upstream).
>>
>> I didn't mean to suggest we'd try to use reflection to detect whether
>> equals
>> is implemented, although that is a neat trick. I was thinking more of a
>> belt-and-suspenders algorithm where we do the check for no-ops based on
>> equals() and then _also_ check the serialized bytes for equality.
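>>
>> To make that concrete, here is a minimal sketch of the default-equals
>> behavior (MyPair being a hypothetical class that does not override
>> equals()):
>>
>>     class MyPair {
>>         final String key;
>>         final int value;
>>         MyPair(String key, int value) { this.key = key; this.value = value; }
>>     }
>>
>>     // falls back to Object's reference equality, so this prints false:
>>     System.out.println(new MyPair("A", 1).equals(new MyPair("A", 1)));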
>>
>> Thanks,
>> -John
>>
>> On Wed, Feb 5, 2020, at 15:31, Ted Yu wrote:
>> > Thanks for the comments, Matthias.
>> >
>> > w.r.t. the requirement of an `equals()` implementation, each template type
>> > would have an equals() method. We can use the following code to determine
>> > whether it is provided by the JVM or by the user.
>> >
>> > boolean customEquals = false;
>> > try {
>> >     // use Class<?> instead of the raw type; getDeclaringClass() tells us
>> >     // whether equals(Object) comes from Object or from a subclass
>> >     Class<?> cls = value.getClass()
>> >         .getMethod("equals", Object.class)
>> >         .getDeclaringClass();
>> >     if (!Object.class.equals(cls)) {
>> >         customEquals = true; // equals() was overridden somewhere
>> >     }
>> > } catch (NoSuchMethodException nsme) {
>> >     // unreachable: equals(Object) is always defined
>> > }
>> >
>> > The next question is: what if the user doesn't provide an equals() method?
>> > Would we automatically fall back to emit-on-update?
>> >
>> > Cheers
>> >
>> > On Tue, Feb 4, 2020 at 1:37 PM Matthias J. Sax 
>> wrote:
>> >
>> > > -BEGIN PGP SIGNED MESSAGE-
>> > > Hash: SHA512
>> > >
>> > > First a high level comment:
>> > >
>> > > Overall, I would like to take one step back, and make sure we are
>> > > discussing on the same level. Originally, I understood this KIP as a
>> > > proposed change of _semantics_; however, given the latest discussion
>> > > it seems it's actually not -- it's more an _optimization_ proposal.
>> > > Hence, we only need to make sure that this optimization does not break
>> > > existing semantics. Is this the right way to think about it?
>> > >
>> > > If yes, then it might actually be ok to have different behavior
>> > > depending on whether there is a materialized KTable or not.
>> > > defined a public contract about our emit strategy and it seems this
>> > > KIP does not define one either.
>> > >
>> > > Hence, I don't have as strong of an opinion about sending oldValues
>> > > for example any longer. I guess the question is really, what can we
>> > > implement in a reasonable way.
>> > >
>> > >
>> > >
>> > > Other comments:
>> > >
>> > >
>> > > @Richard:
>> > >
>> > > Can you please add the KIP to the KIP overview table: It's missing
>> > > (
>> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Pro
>> > > posals).
>> > >
>> > >
>> > > @Bruno:
>> > >
>> > > 

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Sophie Blee-Goldman
Hey Boyang,

Originally I had it as a non-blocking call, but decided to change it to
blocking with a timeout parameter. I'm not sure a future makes sense to
return here, because the rebalance either does or does not complete within
the timeout: if it does not, you will have to call poll again to complete it
(as is the case with any other rebalance). I'll call this out in the javadocs
as well.

I also added an example demonstrating how/when to use this new API.

Thanks!
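
Roughly, the pattern would look something like this (a sketch only -- the
exact signature is still under discussion, so the timeout parameter and the
boolean return here are assumptions):

    boolean completed = consumer.enforceRebalance(Duration.ofSeconds(5));
    if (!completed) {
        // The rebalance did not finish within the timeout; as with any other
        // rebalance, a later poll() will complete it.
        consumer.poll(Duration.ofMillis(100));
    }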

On Tue, Feb 11, 2020 at 1:49 PM Boyang Chen 
wrote:

> Hey Sophie,
>
> is the `enforceRebalance` a blocking call? Could we add a code sample to
> the KIP on how this API should be used?
>
> Returning a future instead of a boolean might be easier as we are allowing
> consumers to make progress during rebalance after KIP-429, IMHO.
>
> Boyang
>
>
> On Tue, Feb 11, 2020 at 1:17 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Thanks for the quick turnaround Sophie. My points have been addressed.
> > I think the intended use is quite clear now.
> >
> > Best,
> > Konstantine
> >
> >
> > On Tue, Feb 11, 2020 at 12:57 PM Sophie Blee-Goldman <
> sop...@confluent.io>
> > wrote:
> >
> > > Konstantine,
> > > Thanks for the feedback! I've updated the sections with your
> > suggestions. I
> > > agree
> > > in particular that it's really important to make sure users don't call
> > this
> > > unnecessarily,
> > >  or for the wrong reasons: to that end I also extended the javadocs to
> > > specify that this
> > > API is for when changes to the subscription userdata occur. Hopefully
> > that
> > > should make
> > > its intended usage quite clear.
> > >
> > > Bill,
> > > The rebalance triggered by this new API will be a "normal" rebalance,
> and
> > > therefore
> > > follow the existing listener semantics. For example a cooperative
> > rebalance
> > > will always
> > > call onPartitionsAssigned, even if no partitions are actually moved.
> > > An eager rebalance will still revoke all partitions first anyway.
> > >
> > > Thanks for the feedback!
> > > Sophie
> > >
> > > On Tue, Feb 11, 2020 at 9:52 AM Bill Bejeck  wrote:
> > >
> > > > Hi Sophie,
> > > >
> > > > Thanks for the KIP, makes sense to me.
> > > >
> > > > One quick question, I'm not sure if it's relevant or not.
> > > >
> > > > If a user provides a `ConsumerRebalanceListener` and a rebalance is
> > > > triggered from an `enforceRebalance`  call,
> > > > it seems possible the listener won't get called since partition
> > > assignments
> > > > might not change.
> > > > If that is the case, do we want to possibly consider adding a method
> to
> > > the
> > > > `ConsumerRebalanceListener` for callbacks on `enforceRebalance`
> > actions?
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > >
> > > >
> > > > On Tue, Feb 11, 2020 at 12:11 PM Konstantine Karantasis <
> > > > konstant...@confluent.io> wrote:
> > > >
> > > > > Hi Sophie.
> > > > >
> > > > > Thanks for the KIP. I liked how focused the proposal is. Also, its
> > > > > motivation is clear after carefully reading the KIP and its
> > references.
> > > > >
> > > > > Yet, I think it'd be a good idea to call out explicitly on the
> > Rejected
> > > > > Alternatives section that an automatic and periodic triggering of
> > > > > rebalances that would not require exposing this capability through
> > the
> > > > > Consumer interface does not cover your specific use cases and
> > therefore
> > > > is
> > > > > not chosen as a desired approach. Maybe, even consider mentioning
> > again
> > > > > here that this method is expected to be used to respond to system
> > > changes
> > > > > external to the consumer and its membership logic and is not
> proposed
> > > as
> > > > a
> > > > > way to resolve temporary imbalances due to membership changes that
> > > should
> > > > > inherently be resolved by the assignor logic itself with one or
> more
> > > > > consecutive rebalances.
> > > > >
> > > > > Also, in your javadoc I'd add some context similar to what someone
> > can
> > > > read
> > > > > on the KIP. Specifically where you say: "for example if some
> > condition
> > > > has
> > > > > changed that has implications for the partition assignment." I'd
> > rather
> > > > add
> > > > > something like "for example, if some condition external and
> invisible
> > > to
> > > > > the Consumer and its group membership has changed in ways that
> would
> > > > > justify a new partition assignment". That's just an example, feel
> > free
> > > to
> > > > > reword, but I believe that saying explicitly that this condition is
> > not
> > > > > visible to the consumer is useful to understand that this is not
> > > > necessary
> > > > > under normal circumstances.
> > > > >
> > > > > In Compatibility, Deprecation, and Migration Plan section I think
> > it's
> > > > > worth mentioning that this is a new feature that affects new
> > > > > implementations of the Consumer interface and any such new
> > > implementation
> > > > > should override the new method. 

Request for adding wiki edit permission for creating KIP

2020-02-11 Thread Xue LIU
Hi guys,

I am Xue; I just joined this mailing list. Being new to the Kafka developer
community, I am very glad to join, and any help/guidance is greatly
appreciated!

I am working on this JIRA: https://issues.apache.org/jira/browse/KAFKA-9440
I would need to create a new KIP for this; could the administrators give me
permission to edit the wiki?

Wiki ID: *xuel1*

Thanks!
Xue Liu


Build failed in Jenkins: kafka-2.3-jdk8 #172

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9181; Maintain clean separation between local and group


--
[...truncated 2.94 MB...]

kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion 
STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 PASSED


Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Bill Bejeck
Thanks for the KIP Sophie.

It's a +1 (binding) for me.

-Bill

On Tue, Feb 11, 2020 at 4:21 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> The KIP reads quite well for me now and I think this feature will enable
> even more efficient load balancing for specific use cases.
>
> I'm also +1 (non-binding)
>
> - Konstantine
>
> On Tue, Feb 11, 2020 at 9:35 AM Navinder Brar
>  wrote:
>
> > Thanks Sophie, much required.
> > +1 non-binding.
> >
> >
> > Sent from Yahoo Mail for iPhone
> >
> >
> > On Tuesday, February 11, 2020, 10:33 PM, John Roesler <
> vvcep...@apache.org>
> > wrote:
> >
> > Thanks Sophie,
> >
> > I'm +1 (binding)
> >
> > -John
> >
> > On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
> > > Hey all,
> > >
> > > I'd like to start the voting on KIP-568. It proposes the new
> > > Consumer#enforceRebalance API to facilitate triggering efficient
> > rebalances.
> > >
> > > For reference, here is the KIP link again:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> > >
> > > Thanks!
> > > Sophie
> > >
> >
> >
> >
> >
>


Build failed in Jenkins: kafka-2.5-jdk8 #14

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7052 Avoiding NPE in ExtractField SMT in case of non-existent


--
[...truncated 2.86 MB...]

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task 

[jira] [Resolved] (KAFKA-9483) Add Scala KStream#toTable to the Streams DSL

2020-02-11 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-9483.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Add Scala KStream#toTable to the Streams DSL
> 
>
> Key: KAFKA-9483
> URL: https://issues.apache.org/jira/browse/KAFKA-9483
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: highluck
>Assignee: highluck
>Priority: Minor
> Fix For: 2.5.0
>
>
> [KIP-523: Add KStream#toTable to the Streams 
> DSL|https://cwiki.apache.org/confluence/display/KAFKA/KIP-523%3A+Add+KStream%23toTable+to+the+Streams+DSL?src=jira]
>  
> I am trying to add the same function to scala



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1147

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7052 Avoiding NPE in ExtractField SMT in case of non-existent


--
[...truncated 2.85 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED


[jira] [Created] (KAFKA-9540) Application getting "Could not find the standby task 0_4 while closing it" error

2020-02-11 Thread Badai Aqrandista (Jira)
Badai Aqrandista created KAFKA-9540:
---

 Summary: Application getting "Could not find the standby task 0_4 
while closing it" error
 Key: KAFKA-9540
 URL: https://issues.apache.org/jira/browse/KAFKA-9540
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Badai Aqrandista


Because of the following line, there is a possibility that some standby 
tasks might not be created:

https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L436

This then causes the following line to not add the task to the standby task list:

https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamThread.java#L299

But this line assumes that all standby tasks have been created and adds them to 
the standby list:

https://github.com/apache/kafka/blob/2.4.0/streams/src/main/java/org/apache/kafka/streams/processor/internals/TaskManager.java#L168

This results in the user getting this error message on the next 
PARTITION_ASSIGNMENT state:

{noformat}
Could not find the standby task 0_4 while closing it 
(org.apache.kafka.streams.processor.internals.AssignedStandbyTasks:74)
{noformat}

But the harm caused by this issue is minimal: no standby task for some 
partitions, and they are recreated on the next rebalance anyway. So I suggest 
lowering this message to WARN, or perhaps only logging a WARN when a standby 
task could not be created.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4222

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7052 Avoiding NPE in ExtractField SMT in case of non-existent


--
[...truncated 2.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task 

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Boyang Chen
Hey Sophie,

Is `enforceRebalance` a blocking call? Could we add a code sample to
the KIP on how this API should be used?

Returning a future instead of a boolean might be easier, as we are allowing
the consumer to make progress during a rebalance after KIP-429, IMHO.
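
For illustration, a rough sketch of how it might be used under the current
proposal (a blocking call that takes a timeout and returns whether the
rebalance completed; the method is the one proposed in the KIP, not an
existing Consumer API):

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;

// Sketch only: trigger a rebalance when some condition external to the
// consumer changes, e.g. the subscription userdata.
void maybeTriggerRebalance(final Consumer<byte[], byte[]> consumer,
                           final boolean subscriptionUserdataChanged) {
    if (subscriptionUserdataChanged) {
        // Blocks up to the timeout; a "zero" timeout makes it nonblocking.
        final boolean completed = consumer.enforceRebalance(Duration.ofSeconds(5));
        if (!completed) {
            // keep polling; the rebalance can finish on a later poll
        }
    }
}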

Boyang


On Tue, Feb 11, 2020 at 1:17 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Thanks for the quick turnaround Sophie. My points have been addressed.
> I think the intended use is quite clear now.
>
> Best,
> Konstantine
>
>
> On Tue, Feb 11, 2020 at 12:57 PM Sophie Blee-Goldman 
> wrote:
>
> > Konstantine,
> > Thanks for the feedback! I've updated the sections with your
> suggestions. I
> > agree
> > in particular that it's really important to make sure users don't call
> this
> > unnecessarily,
> >  or for the wrong reasons: to that end I also extended the javadocs to
> > specify that this
> > API is for when changes to the subscription userdata occur. Hopefully
> that
> > should make
> > its intended usage quite clear.
> >
> > Bill,
> > The rebalance triggered by this new API will be a "normal" rebalance, and
> > therefore
> > follow the existing listener semantics. For example a cooperative
> rebalance
> > will always
> > call onPartitionsAssigned, even if no partitions are actually moved.
> > An eager rebalance will still revoke all partitions first anyway.
> >
> > Thanks for the feedback!
> > Sophie
> >
> > On Tue, Feb 11, 2020 at 9:52 AM Bill Bejeck  wrote:
> >
> > > Hi Sophie,
> > >
> > > Thanks for the KIP, makes sense to me.
> > >
> > > One quick question, I'm not sure if it's relevant or not.
> > >
> > > If a user provides a `ConsumerRebalanceListener` and a rebalance is
> > > triggered from an `enforceRebalance`  call,
> > > it seems possible the listener won't get called since partition
> > assignments
> > > might not change.
> > > If that is the case, do we want to possibly consider adding a method to
> > the
> > > `ConsumerRebalanceListener` for callbacks on `enforceRebalance`
> actions?
> > >
> > > Thanks,
> > > Bill
> > >
> > >
> > >
> > > On Tue, Feb 11, 2020 at 12:11 PM Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > > > Hi Sophie.
> > > >
> > > > Thanks for the KIP. I liked how focused the proposal is. Also, its
> > > > motivation is clear after carefully reading the KIP and its
> references.
> > > >
> > > > Yet, I think it'd be a good idea to call out explicitly on the
> Rejected
> > > > Alternatives section that an automatic and periodic triggering of
> > > > rebalances that would not require exposing this capability through
> the
> > > > Consumer interface does not cover your specific use cases and
> therefore
> > > is
> > > > not chosen as a desired approach. Maybe, even consider mentioning
> again
> > > > here that this method is expected to be used to respond to system
> > changes
> > > > external to the consumer and its membership logic and is not proposed
> > as
> > > a
> > > > way to resolve temporary imbalances due to membership changes that
> > should
> > > > inherently be resolved by the assignor logic itself with one or more
> > > > consecutive rebalances.
> > > >
> > > > Also, in your javadoc I'd add some context similar to what someone
> can
> > > read
> > > > on the KIP. Specifically where you say: "for example if some
> condition
> > > has
> > > > changed that has implications for the partition assignment." I'd
> rather
> > > add
> > > > something like "for example, if some condition external and invisible
> > to
> > > > the Consumer and its group membership has changed in ways that would
> > > > justify a new partition assignment". That's just an example, feel
> free
> > to
> > > > reword, but I believe that saying explicitly that this condition is
> not
> > > > visible to the consumer is useful to understand that this is not
> > > necessary
> > > > under normal circumstances.
> > > >
> > > > In Compatibility, Deprecation, and Migration Plan section I think
> it's
> > > > worth mentioning that this is a new feature that affects new
> > > > implementations of the Consumer interface and any such new
> > implementation
> > > > should override the new method. Implementations that wish to upgrade
> > to a
> > > > newer version should be extended and recompiled, since no default
> > > > implementation will be provided.
> > > >
> > > > Naming is hard here, if someone wants to emphasize the ad hoc and
> > > irregular
> > > > nature of this call. After some thought I'm fine with
> > 'enforceRebalance'
> > > > even if it could potentially be confused to a method that is supposed
> > to
> > > be
> > > > called to remediate one or more previously unsuccessful rebalances
> > (which
> > > > is partly what StreamThread#enforceRebalance is used for). The best I
> > > could
> > > > think of was 'onRequestRebalance' but that's not perfect either.
> > > >
> > > > Best,
> > > > Konstantine
> > > >
> > > >
> > > > On Mon, Feb 10, 2020 at 5:18 PM Sophie Blee-Goldman <
> > 

Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Konstantine Karantasis
The KIP reads quite well for me now and I think this feature will enable
even more efficient load balancing for specific use cases.

I'm also +1 (non-binding)

- Konstantine

On Tue, Feb 11, 2020 at 9:35 AM Navinder Brar
 wrote:

> Thanks Sophie, much required.
> +1 non-binding.
>
>
> Sent from Yahoo Mail for iPhone
>
>
> On Tuesday, February 11, 2020, 10:33 PM, John Roesler 
> wrote:
>
> Thanks Sophie,
>
> I'm +1 (binding)
>
> -John
>
> On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
> > Hey all,
> >
> > I'd like to start the voting on KIP-568. It proposes the new
> > Consumer#enforceRebalance API to facilitate triggering efficient
> rebalances.
> >
> > For reference, here is the KIP link again:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> >
> > Thanks!
> > Sophie
> >
>
>
>
>


Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Konstantine Karantasis
Thanks for the quick turnaround Sophie. My points have been addressed.
I think the intended use is quite clear now.

Best,
Konstantine


On Tue, Feb 11, 2020 at 12:57 PM Sophie Blee-Goldman 
wrote:

> Konstantine,
> Thanks for the feedback! I've updated the sections with your suggestions. I
> agree
> in particular that it's really important to make sure users don't call this
> unnecessarily,
>  or for the wrong reasons: to that end I also extended the javadocs to
> specify that this
> API is for when changes to the subscription userdata occur. Hopefully that
> should make
> its intended usage quite clear.
>
> Bill,
> The rebalance triggered by this new API will be a "normal" rebalance, and
> therefore
> follow the existing listener semantics. For example a cooperative rebalance
> will always
> call onPartitionsAssigned, even if no partitions are actually moved.
> An eager rebalance will still revoke all partitions first anyway.
>
> Thanks for the feedback!
> Sophie
>
> On Tue, Feb 11, 2020 at 9:52 AM Bill Bejeck  wrote:
>
> > Hi Sophie,
> >
> > Thanks for the KIP, makes sense to me.
> >
> > One quick question, I'm not sure if it's relevant or not.
> >
> > If a user provides a `ConsumerRebalanceListener` and a rebalance is
> > triggered from an `enforceRebalance`  call,
> > it seems possible the listener won't get called since partition
> assignments
> > might not change.
> > If that is the case, do we want to possibly consider adding a method to
> the
> > `ConsumerRebalanceListener` for callbacks on `enforceRebalance` actions?
> >
> > Thanks,
> > Bill
> >
> >
> >
> > On Tue, Feb 11, 2020 at 12:11 PM Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> > > Hi Sophie.
> > >
> > > Thanks for the KIP. I liked how focused the proposal is. Also, its
> > > motivation is clear after carefully reading the KIP and its references.
> > >
> > > Yet, I think it'd be a good idea to call out explicitly on the Rejected
> > > Alternatives section that an automatic and periodic triggering of
> > > rebalances that would not require exposing this capability through the
> > > Consumer interface does not cover your specific use cases and therefore
> > is
> > > not chosen as a desired approach. Maybe, even consider mentioning again
> > > here that this method is expected to be used to respond to system
> changes
> > > external to the consumer and its membership logic and is not proposed
> as
> > a
> > > way to resolve temporary imbalances due to membership changes that
> should
> > > inherently be resolved by the assignor logic itself with one or more
> > > consecutive rebalances.
> > >
> > > Also, in your javadoc I'd add some context similar to what someone can
> > read
> > > on the KIP. Specifically where you say: "for example if some condition
> > has
> > > changed that has implications for the partition assignment." I'd rather
> > add
> > > something like "for example, if some condition external and invisible
> to
> > > the Consumer and its group membership has changed in ways that would
> > > justify a new partition assignment". That's just an example, feel free
> to
> > > reword, but I believe that saying explicitly that this condition is not
> > > visible to the consumer is useful to understand that this is not
> > necessary
> > > under normal circumstances.
> > >
> > > In Compatibility, Deprecation, and Migration Plan section I think it's
> > > worth mentioning that this is a new feature that affects new
> > > implementations of the Consumer interface and any such new
> implementation
> > > should override the new method. Implementations that wish to upgrade
> to a
> > > newer version should be extended and recompiled, since no default
> > > implementation will be provided.
> > >
> > > Naming is hard here, if someone wants to emphasize the ad hoc and
> > irregular
> > > nature of this call. After some thought I'm fine with
> 'enforceRebalance'
> > > even if it could potentially be confused to a method that is supposed
> to
> > be
> > > called to remediate one or more previously unsuccessful rebalances
> (which
> > > is partly what StreamThread#enforceRebalance is used for). The best I
> > could
> > > think of was 'onRequestRebalance' but that's not perfect either.
> > >
> > > Best,
> > > Konstantine
> > >
> > >
> > > On Mon, Feb 10, 2020 at 5:18 PM Sophie Blee-Goldman <
> sop...@confluent.io
> > >
> > > wrote:
> > >
> > > > Thanks John. I took out the KafkaConsumer method and moved the
> javadocs
> > > > to the Consumer#enforceRebalance in the KIP -- hope you're happy :P
> > > >
> > > > Also, I wanted to point out one minor change to the current proposal:
> > > make
> > > > this
> > > > a blocking call, which accepts a timeout and returns whether the
> > > rebalance
> > > > completed within the timeout. It will still reduce to a nonblocking
> > call
> > > if
> > > > a "zero"
> > > > timeout is supplied. I've updated the KIP accordingly.
> > > >
> > > > Let me know if there are any further concerns, else 

[DISCUSS] KIP-569: DescribeConfigsResponse - Update the schema to include datatype of the field

2020-02-11 Thread Shailesh Panwar
Hi all,
We would like to extend the DescribeConfigsResponse schema to include the
data-type of the fields.

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-569%3A+DescribeConfigsResponse+-+Update+the+schema+to+include+datatype+of+the+field

Thanks
Shailesh


Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Sophie Blee-Goldman
Konstantine,
Thanks for the feedback! I've updated the sections with your suggestions. I
agree
in particular that it's really important to make sure users don't call this
unnecessarily,
 or for the wrong reasons: to that end I also extended the javadocs to
specify that this
API is for when changes to the subscription userdata occur. Hopefully that
should make
its intended usage quite clear.

Bill,
The rebalance triggered by this new API will be a "normal" rebalance, and
therefore
follow the existing listener semantics. For example a cooperative rebalance
will always
call onPartitionsAssigned, even if no partitions are actually moved.
An eager rebalance will still revoke all partitions first anyway.
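
To illustrate (sketch only, using the existing listener API): with cooperative
rebalancing, the onPartitionsAssigned callback fires after every rebalance,
even when the assigned set did not change.

import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

class LoggingRebalanceListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsRevoked(final Collection<TopicPartition> partitions) {
        System.out.println("revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(final Collection<TopicPartition> partitions) {
        // Invoked after every rebalance completes, even if no partitions
        // actually moved, so it also observes enforced rebalances.
        System.out.println("assigned: " + partitions);
    }
}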

Thanks for the feedback!
Sophie

On Tue, Feb 11, 2020 at 9:52 AM Bill Bejeck  wrote:

> Hi Sophie,
>
> Thanks for the KIP, makes sense to me.
>
> One quick question, I'm not sure if it's relevant or not.
>
> If a user provides a `ConsumerRebalanceListener` and a rebalance is
> triggered from an `enforceRebalance`  call,
> it seems possible the listener won't get called since partition assignments
> might not change.
> If that is the case, do we want to possibly consider adding a method to the
> `ConsumerRebalanceListener` for callbacks on `enforceRebalance` actions?
>
> Thanks,
> Bill
>
>
>
> On Tue, Feb 11, 2020 at 12:11 PM Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Hi Sophie.
> >
> > Thanks for the KIP. I liked how focused the proposal is. Also, its
> > motivation is clear after carefully reading the KIP and its references.
> >
> > Yet, I think it'd be a good idea to call out explicitly on the Rejected
> > Alternatives section that an automatic and periodic triggering of
> > rebalances that would not require exposing this capability through the
> > Consumer interface does not cover your specific use cases and therefore
> is
> > not chosen as a desired approach. Maybe, even consider mentioning again
> > here that this method is expected to be used to respond to system changes
> > external to the consumer and its membership logic and is not proposed as
> a
> > way to resolve temporary imbalances due to membership changes that should
> > inherently be resolved by the assignor logic itself with one or more
> > consecutive rebalances.
> >
> > Also, in your javadoc I'd add some context similar to what someone can
> read
> > on the KIP. Specifically where you say: "for example if some condition
> has
> > changed that has implications for the partition assignment." I'd rather
> add
> > something like "for example, if some condition external and invisible to
> > the Consumer and its group membership has changed in ways that would
> > justify a new partition assignment". That's just an example, feel free to
> > reword, but I believe that saying explicitly that this condition is not
> > visible to the consumer is useful to understand that this is not
> necessary
> > under normal circumstances.
> >
> > In Compatibility, Deprecation, and Migration Plan section I think it's
> > worth mentioning that this is a new feature that affects new
> > implementations of the Consumer interface and any such new implementation
> > should override the new method. Implementations that wish to upgrade to a
> > newer version should be extended and recompiled, since no default
> > implementation will be provided.
> >
> > Naming is hard here, if someone wants to emphasize the ad hoc and
> irregular
> > nature of this call. After some thought I'm fine with 'enforceRebalance'
> > even if it could potentially be confused to a method that is supposed to
> be
> > called to remediate one or more previously unsuccessful rebalances (which
> > is partly what StreamThread#enforceRebalance is used for). The best I
> could
> > think of was 'onRequestRebalance' but that's not perfect either.
> >
> > Best,
> > Konstantine
> >
> >
> > On Mon, Feb 10, 2020 at 5:18 PM Sophie Blee-Goldman  >
> > wrote:
> >
> > > Thanks John. I took out the KafkaConsumer method and moved the javadocs
> > > to the Consumer#enforceRebalance in the KIP -- hope you're happy :P
> > >
> > > Also, I wanted to point out one minor change to the current proposal:
> > make
> > > this
> > > a blocking call, which accepts a timeout and returns whether the
> > rebalance
> > > completed within the timeout. It will still reduce to a nonblocking
> call
> > if
> > > a "zero"
> > > timeout is supplied. I've updated the KIP accordingly.
> > >
> > > Let me know if there are any further concerns, else I'll call for a
> vote.
> > >
> > > Cheers!
> > > Sophie
> > >
> > > On Mon, Feb 10, 2020 at 12:47 PM John Roesler 
> > wrote:
> > >
> > > > Thanks Sophie,
> > > >
> > > > Sorry I didn't respond. I think your new method name sounds good.
> > > >
> > > > Regarding the interface vs implementation, I agree it's confusing.
> It's
> > > > always bothered me that the interface redirects you to an
> > implementation
> > > > JavaDocs, but never enough for me to stop what I'm doing to 

Build failed in Jenkins: kafka-2.2-jdk8-old #206

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-7052 Avoiding NPE in ExtractField SMT in case of non-existent


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H38 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 75ba599de50192c3a1d11ae9f1814fb2798ae6fe 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 75ba599de50192c3a1d11ae9f1814fb2798ae6fe
Commit message: "KAFKA-7052 Avoiding NPE in ExtractField SMT in case of 
non-existent fields (#8059)"
 > git rev-list --no-walk 6462c2a7feabb6cc1251e303606ef30d35846706 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins4645503843235427188.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins4645503843235427188.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=75ba599de50192c3a1d11ae9f1814fb2798ae6fe, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io


[DISCUSS] KIP-570: Add leader epoch in StopReplicaRequest

2020-02-11 Thread David Jacot
Hi all,

I've put together a very small KIP which proposes to add the leader epoch
in the
StopReplicaRequest in order to make it robust to reordering:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-570%3A+Add+leader+epoch+in+StopReplicaRequest

Please take a look at the KIP and let me know what you think.

Best,
David


[jira] [Created] (KAFKA-9539) Add leader epoch in StopReplicaRequest

2020-02-11 Thread David Jacot (Jira)
David Jacot created KAFKA-9539:
--

 Summary: Add leader epoch in StopReplicaRequest
 Key: KAFKA-9539
 URL: https://issues.apache.org/jira/browse/KAFKA-9539
 Project: Kafka
  Issue Type: Improvement
Reporter: David Jacot
Assignee: David Jacot


Unlike the LeaderAndIsrRequest, the StopReplicaRequest does not include the 
leader epoch, which makes it vulnerable to reordering. This KIP proposes to add 
the leader epoch for each partition in the StopReplicaRequest, and the broker 
will verify the epoch before proceeding with the StopReplicaRequest.
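
For illustration, a sketch of the verification (not the actual broker code):

{code:java}
// A reordered (stale) StopReplicaRequest carries an older leader epoch
// than the one the replica already knows about, and is rejected.
class StopReplicaEpochCheck {
    static boolean shouldProcess(final int requestLeaderEpoch,
                                 final int currentLeaderEpoch) {
        return requestLeaderEpoch >= currentLeaderEpoch;
    }
}
{code}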



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7052) ExtractField SMT throws NPE - needs clearer error message

2020-02-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7052.
--
Fix Version/s: 2.4.1
   2.3.2
   2.5.0
   2.2.3
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk`, `2.5`, `2.4`, `2.3` and `2.2` branches.

> ExtractField SMT throws NPE - needs clearer error message
> -
>
> Key: KAFKA-7052
> URL: https://issues.apache.org/jira/browse/KAFKA-7052
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Robin Moffatt
>Priority: Major
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> With the following Single Message Transform: 
> {code:java}
> "transforms.ExtractId.type":"org.apache.kafka.connect.transforms.ExtractField$Key",
> "transforms.ExtractId.field":"id"{code}
> Kafka Connect errors with : 
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.transforms.ExtractField.apply(ExtractField.java:61)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38){code}
> There should be a better error message here, identifying the reason for the 
> NPE.
> Version: Confluent Platform 4.1.1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-447: Producer scalability for exactly once semantics

2020-02-11 Thread Boyang Chen
Hey there,

we are adding a small change to the KIP-447 public API. The default value
of `transaction.abort.timed.out.transaction.cleanup.interval.ms` shall be
changed from 1 minute to 10 seconds. The goal here is to trigger the
expired-transaction cleanup more frequently in order to reduce the consumer's
pending offset fetch wait time.
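
For reference, the new default is equivalent to setting the broker config
explicitly, e.g. in server.properties:

transaction.abort.timed.out.transaction.cleanup.interval.ms=10000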

Let me know if you have further questions, thanks!


On Wed, Jan 8, 2020 at 3:44 PM Boyang Chen 
wrote:

> Thanks Guozhang for another review! I have addressed all the javadoc
> changes for PendingTransactionException in the KIP. For FENCED_INSTANCE_ID
> the only thrown place would be on the new send offsets API, which is also
> addressed.
>
> Thanks Matthias for the vote! As we have 3 binding votes (Guozhang, Jason,
> and Matthias), the KIP is officially accepted and prepared to ship in 2.5.
>
> Still feel free to put more thoughts on either discussion or voting thread
> to refine the KIP!
>
>
> On Wed, Jan 8, 2020 at 3:15 PM Matthias J. Sax 
> wrote:
>
>> I just re-read the KIP. Overall I am +1 as well.
>>
>
>> Some minor comments (also apply to the Google design doc):
>>
>> 1) As 2.4 was release, references should be updated to 2.5.
>>
>>  Addressed
>
>>
>>
>>>  2) About the upgrade path, the KIP says:
>>
>> 2a)
>>
>> > Broker must be upgraded to 2.4 first. This means the
>> `inter.broker.protocol.version` (IBP) has to be set to the latest. Any
>> produce request with higher version will automatically get fenced because
>> of no support.
>>
>> From my understanding, this is not correct? After a broker is updated to
>> the new binaries, it should accept new requests, even if IBP was not
>> bumped yet?
>>
>> Your understanding was correct, after some offline discussion we should
> not worry about IBP in this case.
>
>> 2b)
>>
>> About the two rolling bounces for KS apps and the statement
>>
>> > one should never allow task producer and thread producer under the same
>> application group
>>
>> In the second rolling bounce, we might actually mix both (ie, per-task
>> and per-thread producers) but this is fine as explained in the KIP. The
>> only case we cannot allow is, old per-task producers (without consumer
>> generation fencing) to be mixed with per-thread producers (that rely
>> solely on consumer generation fencing).
>>
>> Does this sound correct?
>>
>> Correct, that's the purpose of doing 2 rolling bounce, where the first
> one is to guarantee everyone's opt-in for generation fencing.
>
>>
>> 3) We should also document how users can use KS 2.5 applications against
>> older brokers -- for this case, we need to stay on per-task producers
>> and cannot use the new fencing mechanism. Currently, the KIP only
>> describe a single way how users could make this work: by setting (and
>> keeping) UPGRADE_FROM config to 2.4 (what might not be an ideal solution
>> and might also not be clear by itself that people would need to do this)?
>>
>>
>> Yes exactly, at the moment we are actively working on a plan to acquire
> broker's IBP during stream start-up and initialize based off that
> information,
> so that user doesn't need to keep UPGRADE_FROM simply for working with old
> brokers.
>
>>
>> -Matthias
>>
>>
>>
>> On 9/18/19 4:41 PM, Boyang Chen wrote:
>> > Bump this thread to see if someone could also review!
>> >
>> > On Mon, Sep 9, 2019 at 5:00 PM Boyang Chen 
>> > wrote:
>> >
>> >> Thank you Jason! Addressed the comments.
>> >>
>> >> Thank you Guozhang for explaining. I will document the timeout setting
>> >> reasoning in the design doc.
>> >>
>> >>
>> >> On Mon, Sep 9, 2019 at 1:49 PM Guozhang Wang 
>> wrote:
>> >>
>> >>> On Fri, Sep 6, 2019 at 6:33 PM Boyang Chen <
>> reluctanthero...@gmail.com>
>> >>> wrote:
>> >>>
>>  Thanks Guozhang, I have polished the design doc to make it sync with
>>  current KIP. As for overriding default timeout values, I guess it's
>> >>> already
>>  stated in the KIP to set txn timeout to 10s, are you suggesting we
>> >>> should
>>  also put down this recommendation on the KIP for non-stream EOS
>> users?
>> 
>>  My comment is not for changing any produce / consumer default config
>> >>> values, but for the Streams configs, to make sure that our
>> >>> overridden config values respect the above rules. That is, we check
>> the
>> >>> actual value used in the config if they are ever overridden by users,
>> and
>> >>> if the above were not true we can log a warning that it may be risky
>> to
>> >>> encounter some unnecessary rebalances.
>> >>>
>> >>> Again, this is not something we need to include in the KIP since it
>> is not
>> >>> part of public APIs, just to emphasize that the internal
>> implementation
>> >>> can
>> >>> have some safety guard like this.
>> >>>
>> >>> Guozhang
>> >>>
>> >>>
>> >>>
>>  Boyang
>> 
>>  On Thu, Sep 5, 2019 at 8:43 PM Guozhang Wang 
>> >>> wrote:
>> 
>> > Hello Boyang,
>> >
>> > Just realized one thing about timeout configurations that we should
>> > consider including 

[jira] [Created] (KAFKA-9538) Flaky Test `testResetOffsetsExportImportPlan`

2020-02-11 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9538:
--

 Summary: Flaky Test `testResetOffsetsExportImportPlan`
 Key: KAFKA-9538
 URL: https://issues.apache.org/jira/browse/KAFKA-9538
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


{code}
19:44:41 
19:44:41 kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsExportImportPlan FAILED
19:44:41 java.lang.AssertionError: expected: 2, bar2-1 -> 2)> 
but was:
19:44:41 at org.junit.Assert.fail(Assert.java:89)
19:44:41 at org.junit.Assert.failNotEquals(Assert.java:835)
19:44:41 at org.junit.Assert.assertEquals(Assert.java:120)
19:44:41 at org.junit.Assert.assertEquals(Assert.java:146)
19:44:41 at 
kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan(ResetConsumerGroupOffsetTest.scala:429)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8211) Flaky Test: ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan

2020-02-11 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8211.

Resolution: Fixed

Closing this one since new failures do not match the stacktrace. I've opened 
https://issues.apache.org/jira/browse/KAFKA-9538 instead.

> Flaky Test: ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan
> -
>
> Key: KAFKA-8211
> URL: https://issues.apache.org/jira/browse/KAFKA-8211
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.3.0
>Reporter: Bill Bejeck
>Assignee: huxihx
>Priority: Major
> Fix For: 2.5.0
>
>
> Failed in build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20778/]
>  
> {noformat}
> Error Message
> java.lang.AssertionError: Expected that consumer group has consumed all 
> messages from topic/partition.
> Stacktrace
> java.lang.AssertionError: Expected that consumer group has consumed all 
> messages from topic/partition.
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
>   at 
> kafka.admin.ResetConsumerGroupOffsetTest.awaitConsumerProgress(ResetConsumerGroupOffsetTest.scala:364)
>   at 
> kafka.admin.ResetConsumerGroupOffsetTest.produceConsumeAndShutdown(ResetConsumerGroupOffsetTest.scala:359)
>   at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan(ResetConsumerGroupOffsetTest.scala:323)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> 

[jira] [Resolved] (KAFKA-8616) Replace ApiVersionsRequest request/response with automated protocol

2020-02-11 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-8616.

Resolution: Duplicate

> Replace ApiVersionsRequest request/response with automated protocol
> ---
>
> Key: KAFKA-8616
> URL: https://issues.apache.org/jira/browse/KAFKA-8616
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-501 Avoid out-of-sync or offline partitions when follower fetch requests not processed in time

2020-02-11 Thread Harsha Chintalapani
Hi Lucas,
   Yes, the case you mentioned is true. I do understand KIP-501
might not fully solve this particular use case where there might be blocked
fetch requests. But the issue we noticed multiple times and continue to
notice is:
  1. Fetch request comes from the follower.
  2. Leader tries to fetch data from disk, which takes longer than
     replica.lag.time.max.ms.
  3. Async thread on the leader side which checks the ISR marks the
     follower who sent the fetch request as out of the ISR.
  4. Leader dies during this request due to disk errors, and now we have
     offline partitions because the leader kicked healthy followers out of
     the ISR.

Instead of considering this as a disk issue, let's look at how we maintain
the ISR:

   1. Currently we do not consider a follower healthy even when it's able
   to send fetch requests.
   2. ISR membership is based on how healthy a broker is, i.e. if it takes
   longer than replica.lag.time.max.ms we mark followers out of sync instead
   of relinquishing the leadership.


What we are proposing in this KIP is that we should look at the time when a
follower sends a fetch request and keep that as the basis for marking a
follower out of the ISR or keeping it in the ISR, leaving the disk read time
on the leader side out of this.
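
For illustration, a minimal sketch of the proposed check (hypothetical names,
not the actual broker code):

// Base the ISR decision on when the follower last *sent* a fetch request,
// so slow disk reads on the leader do not evict healthy followers.
boolean isFollowerCaughtUp(final long lastFetchRequestReceivedMs,
                           final long nowMs,
                           final long replicaLagTimeMaxMs) {
    return nowMs - lastFetchRequestReceivedMs <= replicaLagTimeMaxMs;
}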

Thanks,
Harsha



On Mon, Feb 10, 2020 at 9:26 PM, Lucas Bradstreet 
wrote:

> Hi Harsha,
>
> Is the problem you'd like addressed the following?
>
> Assume 3 replicas, L and F1 and F2.
>
> 1. F1 and F2 are alive and sending fetch requests to L.
> 2. L starts encountering disk issues, any requests being processed by the
> request handler threads become blocked.
> 3. L's zookeeper connection is still alive so it remains the leader for
> the partition.
> 4. Given that F1 and F2 have not successfully fetched, L shrinks the ISR
> to itself.
>
> While KIP-501 may help prevent a shrink in partitions where a replica
> fetch request has started processing, any fetch requests in the request
> queue will have no effect. Generally when these slow/failing disk issues
> occur, all of the request handler threads end up blocked and requests queue
> up in the request queue. For example, all of the request handler threads
> may end up stuck in
> KafkaApis.handleProduceRequest handling produce requests, at which point
> all of the replica fetcher fetch requests remain queued in the request
> queue. If this happens, there will be no tracked fetch requests to prevent
> a shrink.
>
> Solving this shrinking issue is tricky. It would be better if L resigns
> leadership when it enters a degraded state rather than avoiding a shrink.
> If L is no longer the leader in this situation, it will eventually become
> blocked fetching from the new leader and the new leader will shrink the
> ISR, kicking out L.
>
> Cheers,
>
> Lucas
>


[jira] [Created] (KAFKA-9537) Abstract transformations in configurations cause unfriendly error message.

2020-02-11 Thread Jeremy Custenborder (Jira)
Jeremy Custenborder created KAFKA-9537:
--

 Summary: Abstract transformations in configurations cause 
unfriendly error message.
 Key: KAFKA-9537
 URL: https://issues.apache.org/jira/browse/KAFKA-9537
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.4.0
Reporter: Jeremy Custenborder
Assignee: Jeremy Custenborder


I was working with a coworker who had a bash script posting a config to Connect 
with
{code:java}org.apache.kafka.connect.transforms.ExtractField.$Key{code} in the 
script. Bash removed the $Key because it wasn't escaped properly, so
{code:java}
org.apache.kafka.connect.transforms.ExtractField.{code}
is what made it to the REST interface. A Class was created for the abstract 
implementation of ExtractField and passed to getConfigDefFromTransformation. It 
tried to call newInstance, which threw an exception. The following gets returned 
via the REST interface. 

{code}
{
  "error_code": 400,
  "message": "Connector configuration is invalid and contains the following 1 
error(s):\nInvalid value class org.apache.kafka.connect.transforms.ExtractField 
for configuration transforms.extractString.type: Error getting config 
definition from Transformation: null\nYou can also find the above list of 
errors at the endpoint `/{connectorType}/config/validate`"
}
{code}

It would be a much better user experience if we returned something like 
{code}
{
  "error_code": 400,
  "message": "Connector configuration is invalid and contains the following 1 
error(s):\nInvalid value class org.apache.kafka.connect.transforms.ExtractField 
for configuration transforms.extractString.type: Error getting config 
definition from Transformation: Transformation is abstract and cannot be 
created.\nYou can also find the above list of errors at the endpoint 
`/{connectorType}/config/validate`"
}
{code}

or
{code}
{
  "error_code": 400,
  "message": "Connector configuration is invalid and contains the following 1 
error(s):\nInvalid value class org.apache.kafka.connect.transforms.ExtractField 
for configuration transforms.extractString.type: Error getting config 
definition from Transformation: Transformation is abstract and cannot be 
created. Did you mean ExtractField$Key, ExtractField$Value?\nYou can also find 
the above list of errors at the endpoint `/{connectorType}/config/validate`"
}
{code}
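
For illustration, a sketch of how the abstract class could be detected up 
front (hypothetical helper, not the actual Connect code):

{code:java}
import java.lang.reflect.Modifier;

class TransformationValidator {
    // Returns the friendlier error message, or null if the class is concrete.
    static String abstractClassError(final Class<?> transformation) {
        if (Modifier.isAbstract(transformation.getModifiers())) {
            final String name = transformation.getSimpleName();
            return "Transformation is abstract and cannot be created. "
                    + "Did you mean " + name + "$Key, " + name + "$Value?";
        }
        return null;
    }
}
{code}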





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Bill Bejeck
Hi Sophie,

Thanks for the KIP, makes sense to me.

One quick question, I'm not sure if it's relevant or not.

If a user provides a `ConsumerRebalanceListener` and a rebalance is
triggered from an `enforceRebalance`  call,
it seems possible the listener won't get called since partition assignments
might not change.
If that is the case, do we want to possibly consider adding a method to the
`ConsumerRebalanceListener` for callbacks on `enforceRebalance` actions?

Thanks,
Bill



On Tue, Feb 11, 2020 at 12:11 PM Konstantine Karantasis <
konstant...@confluent.io> wrote:

> Hi Sophie.
>
> Thanks for the KIP. I liked how focused the proposal is. Also, its
> motivation is clear after carefully reading the KIP and its references.
>
> Yet, I think it'd be a good idea to call out explicitly on the Rejected
> Alternatives section that an automatic and periodic triggering of
> rebalances that would not require exposing this capability through the
> Consumer interface does not cover your specific use cases and therefore is
> not chosen as a desired approach. Maybe, even consider mentioning again
> here that this method is expected to be used to respond to system changes
> external to the consumer and its membership logic and is not proposed as a
> way to resolve temporary imbalances due to membership changes that should
> inherently be resolved by the assignor logic itself with one or more
> consecutive rebalances.
>
> Also, in your javadoc I'd add some context similar to what someone can read
> on the KIP. Specifically where you say: "for example if some condition has
> changed that has implications for the partition assignment." I'd rather add
> something like "for example, if some condition external and invisible to
> the Consumer and its group membership has changed in ways that would
> justify a new partition assignment". That's just an example, feel free to
> reword, but I believe that saying explicitly that this condition is not
> visible to the consumer is useful to understand that this is not necessary
> under normal circumstances.
>
> In Compatibility, Deprecation, and Migration Plan section I think it's
> worth mentioning that this is a new feature that affects new
> implementations of the Consumer interface and any such new implementation
> should override the new method. Implementations that wish to upgrade to a
> newer version should be extended and recompiled, since no default
> implementation will be provided.
>
> Naming is hard here, if someone wants to emphasize the ad hoc and irregular
> nature of this call. After some thought I'm fine with 'enforceRebalance'
> even if it could potentially be confused to a method that is supposed to be
> called to remediate one or more previously unsuccessful rebalances (which
> is partly what StreamThread#enforceRebalance is used for). The best I could
> think of was 'onRequestRebalance' but that's not perfect either.
>
> Best,
> Konstantine
>
>
> On Mon, Feb 10, 2020 at 5:18 PM Sophie Blee-Goldman 
> wrote:
>
> > Thanks John. I took out the KafkaConsumer method and moved the javadocs
> > to the Consumer#enforceRebalance in the KIP -- hope you're happy :P
> >
> > Also, I wanted to point out one minor change to the current proposal:
> make
> > this
> > a blocking call, which accepts a timeout and returns whether the
> rebalance
> > completed within the timeout. It will still reduce to a nonblocking call
> if
> > a "zero"
> > timeout is supplied. I've updated the KIP accordingly.
> >
> > Let me know if there are any further concerns, else I'll call for a vote.
> >
> > Cheers!
> > Sophie
> >
> > On Mon, Feb 10, 2020 at 12:47 PM John Roesler 
> wrote:
> >
> > > Thanks Sophie,
> > >
> > > Sorry I didn't respond. I think your new method name sounds good.
> > >
> > > Regarding the interface vs implementation, I agree it's confusing. It's
> > > always bothered me that the interface redirects you to an
> implementation
> > > JavaDocs, but never enough for me to stop what I'm doing to fix it.
> > > It's not a big deal either way, I just thought it was strange to
> propose
> > a
> > > "public interface" change, but not in terms of the actual interface
> > class.
> > >
> > > It _is_ true that KafkaConsumer is also part of the public API, but
> only
> > > really
> > > for the constructor. Any proposal to define a new "consumer client" API
> > > should be on the Consumer interface (which you said you plan to do
> > anyway).
> > > I guess I brought it up because proposing an addition to Consumer
> implies
> > > it would be added to KafkaConsumer, but proposing an addition to
> > > KafkaConsumer does not necessarily imply it would also be added to
> > > Consumer. Does that make sense?
> > >
> > > Anyway, thanks for updating the KIP.
> > >
> > > Thanks,
> > > -John
> > >
> > >
> > > On Mon, Feb 10, 2020, at 14:38, Sophie Blee-Goldman wrote:
> > > > Since this doesn't seem too controversial, I'll probably call for a
> > vote
> > > by
> > > > end of day.
> > > > If there any further 

Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Navinder Brar
Thanks Sophie, much required.
+1 non-binding.


Sent from Yahoo Mail for iPhone


On Tuesday, February 11, 2020, 10:33 PM, John Roesler  
wrote:

Thanks Sophie,

I'm +1 (binding)

-John

On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
> Hey all,
> 
> I'd like to start the voting on KIP-568. It proposes the new
> Consumer#enforceRebalance API to facilitate triggering efficient rebalances.
> 
> For reference, here is the KIP link again:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> 
> Thanks!
> Sophie
>





Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-02-11 Thread John Roesler
Ah... I've just looked at some integration tests in Streams, and see the same 
thing.

I need to apologize to everyone in the thread for my lack of understanding, and 
to
thank Gwen for her skepticism. Looking back at the KIP itself, I see that Artur 
specifically
listed log messages caused by Streams itself, which I failed to realize 
shouldn't be
there at all.

It now seems that we should not have a KIP at all, and also shouldn't make any 
changes
to log levels or loggers. Instead we should treat KAFKA-6793 as a normal bug 
whose
cause is that Streams does not correctly construct the client configurations 
when initializing
the clients. It is leaving in the prefixed version of the client configs, but 
it should remove them.
We should also add a test that we can specify all kinds of client 
configurations to Streams and
that no WARN logs result during startup.
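
For illustration, the kind of fix I have in mind (hypothetical helper, not the 
actual StreamsConfig code):

import java.util.HashMap;
import java.util.Map;

// Drop the prefixed variants (e.g. "admin.", "main.consumer.") before handing
// the config map to a client, so the client does not warn about keys it does
// not recognize.
static Map<String, Object> stripPrefixedConfigs(final Map<String, Object> props,
                                                final String prefix) {
    final Map<String, Object> cleaned = new HashMap<>(props);
    cleaned.keySet().removeIf(key -> key.startsWith(prefix));
    return cleaned;
}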

Artur, what do you think about cancelling KIP-552 and instead just implementing 
a fix?

Again, I'm really sorry for not realizing this sooner. And again, thanks to 
Gwen for chiming in.

-John

On Mon, Feb 10, 2020, at 02:19, Patrik Kleindl wrote:
> Hi John
> Starting an empty streams instance
> 
> import java.util.Properties;
> import org.apache.kafka.streams.KafkaStreams;
> import org.apache.kafka.streams.StreamsBuilder;
> import org.apache.kafka.streams.StreamsConfig;
> 
> final String bootstrapServers = "broker0:9092";
> final Properties streamsConfiguration = new Properties();
> streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "configDemo");
> streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,
> bootstrapServers);
> final StreamsBuilder builder = new StreamsBuilder();
> final KafkaStreams streams = new KafkaStreams(builder.build(),
> streamsConfiguration);
> streams.start();
> 
> results in:
> 
> stream-thread 
> [configDemo-bcaf82b4-324d-4956-a2a8-1dea0a8e3a2e-StreamThread-1]
> Creating consumer client
> ConsumerConfig values:
> ...
> stream-thread 
> [configDemo-bcaf82b4-324d-4956-a2a8-1dea0a8e3a2e-StreamThread-1-consumer]
> Cooperative rebalancing enabled now
> The configuration 'admin.retries' was supplied but isn't a known config.
> The configuration 'admin.retry.backoff.ms' was supplied but isn't a
> known config.
> Kafka version: 2.4.0
> 
> when the normal consumer is created, but not for admin client /
> producer / restore consumer.
> 
> StreamsConfig seems to include this on purpose:
> 
> final AdminClientConfig adminClientDefaultConfig = new
> AdminClientConfig(getClientPropsWithPrefix(ADMIN_CLIENT_PREFIX,
> AdminClientConfig.configNames()));
> consumerProps.put(adminClientPrefix(AdminClientConfig.RETRIES_CONFIG),
> adminClientDefaultConfig.getInt(AdminClientConfig.RETRIES_CONFIG));
> consumerProps.put(adminClientPrefix(AdminClientConfig.RETRY_BACKOFF_MS_CONFIG),
> adminClientDefaultConfig.getLong(AdminClientConfig.RETRY_BACKOFF_MS_CONFIG));
> 
> If I add
> 
> streamsConfiguration.put(StreamsConfig.restoreConsumerPrefix(ConsumerConfig.RECEIVE_BUFFER_CONFIG),
> 65536);
> streamsConfiguration.put(StreamsConfig.mainConsumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG),
> 100);
> 
> then the warnings
> 
> The configuration 'main.consumer.max.poll.records' was supplied but
> isn't a known config.
> The configuration 'restore.consumer.receive.buffer.bytes' was supplied
> but isn't a known config.
> 
> are shown for all clients, not only the last consumer.
> 
> Streams provides these prefixes so maybe they are not handled
> correctly regarding the log message.
> 
> Maybe this helps to pinpoint the source of this in KS at least
> 
> best regards
> 
> Patrik
> 
> 
> On Sat, 8 Feb 2020 at 05:11, John Roesler  wrote:
> 
> > Looking at where the log message comes from:
> > org.apache.kafka.common.config.AbstractConfig#logUnused
> > it seems like maybe the warning just happens when you pass
> > extra configs to a client that it has no knowledge of (and therefore
> > doesn't "use").
> >
> > I'm now suspicious if Streams is actually sending extra configs to the
> > clients, although it seems like we _don't_ see these warnings in other
> > cases.
> >
> > Maybe some of the folks who actually see these messages can try to pinpoint
> > where exactly the rogue configs are coming from?
> >
> > I might have overlooked a message at some point, but it wasn't clear to
> > me that we were talking about warnings that were actually caused by
> > Streams.
> > I thought the unknown configs were something user-specified.
> >
> > Thanks,
> > -John
> >
> > On Fri, Feb 7, 2020, at 13:10, Gwen Shapira wrote:
> > > Ah, got it! I am indeed curious why they do this :)
> > >
> > > Maybe John can shed more light. But if we can't find a better fix,
> > > perhaps the nice thing to do is really a separate logger, so users who
> > > are not worried about shooting themselves in the foot can make those
> > > warnings go away.
> > >
> > > Gwen Shapira
> > > Engineering Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> > > On Fri, Feb 07, 2020 at 4:13 AM, Patrik Kleindl < pklei...@gmail.com >
> > wrote:
> > >
> > > >
> > > >
> > > >
> > > > Hi Gwen
> > > >
> > > >
> > > >
> > > > Kafka Streams is not 

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread Konstantine Karantasis
Hi Sophie.

Thanks for the KIP. I liked how focused the proposal is. Also, its
motivation is clear after carefully reading the KIP and its references.

Yet, I think it'd be a good idea to call out explicitly on the Rejected
Alternatives section that an automatic and periodic triggering of
rebalances that would not require exposing this capability through the
Consumer interface does not cover your specific use cases and therefore is
not chosen as a desired approach. Maybe, even consider mentioning again
here that this method is expected to be used to respond to system changes
external to the consumer and its membership logic and is not proposed as a
way to resolve temporary imbalances due to membership changes that should
inherently be resolved by the assignor logic itself with one or more
consecutive rebalances.

Also, in your javadoc I'd add some context similar to what someone can read
on the KIP. Specifically where you say: "for example if some condition has
changed that has implications for the partition assignment." I'd rather add
something like "for example, if some condition external and invisible to
the Consumer and its group membership has changed in ways that would
justify a new partition assignment". That's just an example, feel free to
reword, but I believe that saying explicitly that this condition is not
visible to the consumer is useful to understand that this is not necessary
under normal circumstances.

In Compatibility, Deprecation, and Migration Plan section I think it's
worth mentioning that this is a new feature that affects new
implementations of the Consumer interface and any such new implementation
should override the new method. Implementations that wish to upgrade to a
newer version should be extended and recompiled, since no default
implementation will be provided.

Naming is hard here, if someone wants to emphasize the ad hoc and irregular
nature of this call. After some thought I'm fine with 'enforceRebalance'
even if it could potentially be confused to a method that is supposed to be
called to remediate one or more previously unsuccessful rebalances (which
is partly what StreamThread#enforceRebalance is used for). The best I could
think of was 'onRequestRebalance' but that's not perfect either.

Best,
Konstantine


On Mon, Feb 10, 2020 at 5:18 PM Sophie Blee-Goldman 
wrote:

> Thanks John. I took out the KafkaConsumer method and moved the javadocs
> to the Consumer#enforceRebalance in the KIP -- hope you're happy :P
>
> Also, I wanted to point out one minor change to the current proposal: make
> this
> a blocking call, which accepts a timeout and returns whether the rebalance
> completed within the timeout. It will still reduce to a nonblocking call if
> a "zero"
> timeout is supplied. I've updated the KIP accordingly.
>
> Let me know if there are any further concerns, else I'll call for a vote.
>
> Cheers!
> Sophie
>
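
(A hypothetical caller-side sketch of the blocking variant Sophie describes
above, assuming java.time.Duration and the in-discussion signature; the
condition check and logger below are made-up placeholders, not part of any
real API.)

    // Blocking: wait up to 30 seconds for the triggered rebalance to finish.
    if (externalConditionChanged()) {   // placeholder for an app-specific check
        boolean completed = consumer.enforceRebalance(Duration.ofSeconds(30));
        if (!completed) {
            log.warn("Rebalance did not complete within the timeout");
        }
    }

    // Nonblocking: a "zero" timeout only requests the rebalance and returns.
    consumer.enforceRebalance(Duration.ZERO);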
> On Mon, Feb 10, 2020 at 12:47 PM John Roesler  wrote:
>
> > Thanks Sophie,
> >
> > Sorry I didn't respond. I think your new method name sounds good.
> >
> > Regarding the interface vs implementation, I agree it's confusing. It's
> > always bothered me that the interface redirects you to an implementation
> > JavaDocs, but never enough for me to stop what I'm doing to fix it.
> > It's not a big deal either way, I just thought it was strange to propose a
> > "public interface" change, but not in terms of the actual interface class.
> >
> > It _is_ true that KafkaConsumer is also part of the public API, but only
> > really for the constructor. Any proposal to define a new "consumer client"
> > API should be on the Consumer interface (which you said you plan to do
> > anyway).
> > I guess I brought it up because proposing an addition to Consumer implies
> > it would be added to KafkaConsumer, but proposing an addition to
> > KafkaConsumer does not necessarily imply it would also be added to
> > Consumer. Does that make sense?
> >
> > Anyway, thanks for updating the KIP.
> >
> > Thanks,
> > -John
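
(To illustrate John's point: most application code holds the client by its
interface type, so a method is only reachable there if it is declared on
Consumer itself. A minimal sketch, with configuration elided and using the
signature under discussion:)

    // Constructed as the implementation, but used through the interface.
    Consumer<String, String> consumer = new KafkaConsumer<>(props);

    // Compiles only if the method is declared on the Consumer interface;
    // declaring it solely on KafkaConsumer would not make it reachable here.
    consumer.enforceRebalance(Duration.ZERO);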
> >
> >
> > On Mon, Feb 10, 2020, at 14:38, Sophie Blee-Goldman wrote:
> > > Since this doesn't seem too controversial, I'll probably call for a
> > > vote by end of day.
> > > If there are any further comments/questions/concerns, please let me know!
> > >
> > > Thanks,
> > > Sophie
> > >
> > > On Sat, Feb 8, 2020 at 12:19 AM Sophie Blee-Goldman
> > > <sop...@confluent.io> wrote:
> > >
> > > > Thanks for the feedback! That's a good point about trying to prevent
> > > > users from thinking they should use this API during normal processing
> > > > and clarifying when/why you might need it -- regardless of the method
> > > > name, we should explicitly call this out in the javadocs.
> > > >
> > > > As for the method name, on reflection I agree that "rejoinGroup" does
> > > > not seem to be appropriate. Of course that's what the consumer will
> > > > actually be doing, but that's just an implementation 

Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-02-11 Thread John Roesler
Thanks, Bruno!

I meant to do it, but got side-tracked.

-John

On Tue, Feb 11, 2020, at 03:55, Bruno Cadonna wrote:
> Hi all,
> 
> I am fine with StoreQueryParameters, but then we should also change
> the DSL grammar accordingly. Since the PR was merged, I suppose
> everybody agrees on the new name and I changed the DSL grammar.
> 
> Best,
> Bruno
> 
> On Thu, Feb 6, 2020 at 1:07 PM Navinder Brar
>  wrote:
> >
> > Hi,
> >
> > While implementing 562, we have decided to rename StoreQueryParams ->
> > StoreQueryParameters. I have updated the PR and Confluence. Please share
> > any feedback on it.
> >
> > Thanks & Regards,
> > Navinder Pal Singh Brar
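
(For reference, a short sketch of how the renamed class is used to fetch a
key from a single partition. Method names follow this thread and the merged
PR but could still shift; the store name, key, and the running KafkaStreams
instance `streams` are illustrative assumptions.)

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StoreQueryParameters;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    // Query only partition 0 of the "counts" store on this instance, and
    // allow serving from standby or restoring stores as well.
    StoreQueryParameters<ReadOnlyKeyValueStore<String, Long>> params =
        StoreQueryParameters
            .fromNameAndType("counts", QueryableStoreTypes.<String, Long>keyValueStore())
            .withPartition(0)
            .enableStaleStores();
    ReadOnlyKeyValueStore<String, Long> store = streams.store(params);
    Long count = store.get("some-key");  // null if the key is absent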
> >
> > On Friday, 24 January, 2020, 08:45:15 am IST, Navinder Brar 
> >  wrote:
> >
> >  Hi John,
> >
> > Thanks for the responses. I will make the below changes as I had suggested 
> > earlier, and then close the vote in a few hours.
> >
> > includeStaleStores -> staleStores
> > withIncludeStaleStores() -> enableStaleStores()
> > includeStaleStores() -> staleStoresEnabled()
> >
> > Thanks,
> > Navinder
> >
> > Sent from Yahoo Mail for iPhone
> >
> >
> > On Friday, January 24, 2020, 5:36 AM, John Roesler  
> > wrote:
> >
> > Hi Bruno,
> >
> > Thanks for your question; it's a very reasonable response to
> > what I said before.
> >
> > I didn't mean "field" as in an instance variable, just as in a specific
> > property or attribute. It's hard to talk about because all the words
> > for this abstract concept are also words that people commonly use
> > for instance variables.
> >
> > The principle is that these classes should be simple data containers.
> > It's not so much that the methods match the field name, or that the
> > field name matches the methods, but that all three bear a simple
> > and direct relationship to each other. Or maybe I should say that
> > the getters, setters, and fields are all directly named after a property.
> >
> > The point is that we should make sure we're not "playing games" in
> > these classes but simply setting a property and offering a transparent
> > way to get exactly what you just set.
> >
> > I actually do think that the instance variable itself should have the
> > same name as the "field" or "property" that the getters and setters
> > are named for. This is not a violation of encapsulation because those
> > instance variables are required to be private.
> >
> > I guess you can think of this rule as more of a style guide than a
> > grammar, but whatever. As a maintainer, I think we should discourage
> > these particular classes from having different instance variables than
> > method names. Otherwise, it's just silly. Either "includeStaleStores"
> > or "staleStoresEnabled" is a fine name, but not both. There's no
> > reason at all to name all the accessors one of them and the instance
> > variable they access the other name.
> >
> > Thanks,
> > -John
> >
> >
> > On Thu, Jan 23, 2020, at 17:27, Bruno Cadonna wrote:
> > > Hi John,
> > >
> > > One question: Why must the field name be involved in the naming? It
> > > somehow contradicts encapsulation. Field name `includeStaleStores` (or
> > > `staleStoresEnabled`) sounds perfectly fine to me. IMO, we should
> > > decouple the parameter name from the actual field name.
> > >
> > > Bruno
> > >
> > > On Thu, Jan 23, 2020 at 3:02 PM John Roesler  wrote:
> > > >
> > > > Hi all,
> > > >
> > > > Thanks for the discussion!
> > > >
> > > > The basic idea I used in the original draft of the grammar was to avoid
> > > > "fancy code" and just write "normal java". That's why I favored the
> > > > Java bean spec over Kafka code traditions.
> > > >
> > > > According to the bean spec, setters always start with "set" and getters
> > > > always start with "get" (or "is" for booleans). This often yields absurd
> > > > or awkward readability. On the other hand, the "Kafka" idiom seems to
> > > > descend from the Scala idiom of naming getters and setters exactly the
> > > > same as the field they get and set. This plays to a language feature of
> > > > Scala (getter referential transparency) that is not present in Java. My
> > > > feeling is that we probably keep this idiom around precisely to avoid
> > > > the absurd phrasing that the bean spec leads to.
> > > >
> > > > On the other hand, adopting the Kafka/Scala idiom brings in an
> > > > additional burden I was trying to avoid: you have to pick a good
> > > > name. Basically I was trying to avoid exactly this conversation ;)
> > > > i.e., "X sounds weird, how about Y", "well, Y also sounds weird,
> > > > what about Z", "Z sounds good, but then the setter sounds weird",
> > > > etc.
> > > >
> > > > Maybe avoiding discussion was too ambitious, and I can't deny that
> > > > bean spec names probably result in no one being happy, so I'm on
> > > > board with the current proposal:
> > > >
> > > > setters:
> > > > set{FieldName}(value)
> > > > {enable/disable}{FieldName}()
> > > >
> > > > getters:
> > > > {fieldName}()
> > > > 

Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread John Roesler
Thanks Sophie,

I'm +1 (binding)

-John

On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
> Hey all,
> 
> I'd like to start the voting on KIP-568. It proposes the new
> Consumer#enforceRebalance API to facilitate triggering efficient rebalances.
> 
> For reference, here is the KIP link again:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> 
> Thanks!
> Sophie
>


Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-11 Thread John Roesler
Sounds perfect. Thanks!
-John

On Mon, Feb 10, 2020, at 19:18, Sophie Blee-Goldman wrote:
> Thanks John. I took out the KafkaConsumer method and moved the javadocs
> to the Consumer#enforceRebalance in the KIP -- hope you're happy :P
> 
> Also, I wanted to point out one minor change to the current proposal: make
> this a blocking call, which accepts a timeout and returns whether the
> rebalance completed within the timeout. It will still reduce to a
> nonblocking call if a "zero" timeout is supplied. I've updated the KIP
> accordingly.
> 
> Let me know if there are any further concerns, else I'll call for a vote.
> 
> Cheers!
> Sophie
> 
> On Mon, Feb 10, 2020 at 12:47 PM John Roesler  wrote:
> 
> > Thanks Sophie,
> >
> > Sorry I didn't respond. I think your new method name sounds good.
> >
> > Regarding the interface vs implementation, I agree it's confusing. It's
> > always bothered me that the interface redirects you to an implementation
> > JavaDocs, but never enough for me to stop what I'm doing to fix it.
> > It's not a big deal either way, I just thought it was strange to propose a
> > "public interface" change, but not in terms of the actual interface class.
> >
> > It _is_ true that KafkaConsumer is also part of the public API, but only
> > really for the constructor. Any proposal to define a new "consumer client"
> > API should be on the Consumer interface (which you said you plan to do
> > anyway).
> > I guess I brought it up because proposing an addition to Consumer implies
> > it would be added to KafkaConsumer, but proposing an addition to
> > KafkaConsumer does not necessarily imply it would also be added to
> > Consumer. Does that make sense?
> >
> > Anyway, thanks for updating the KIP.
> >
> > Thanks,
> > -John
> >
> >
> > On Mon, Feb 10, 2020, at 14:38, Sophie Blee-Goldman wrote:
> > > Since this doesn't seem too controversial, I'll probably call for a
> > > vote by end of day.
> > > If there are any further comments/questions/concerns, please let me know!
> > >
> > > Thanks,
> > > Sophie
> > >
> > > On Sat, Feb 8, 2020 at 12:19 AM Sophie Blee-Goldman
> > > <sop...@confluent.io> wrote:
> > >
> > > > Thanks for the feedback! That's a good point about trying to prevent
> > > > users from thinking they should use this API during normal processing
> > > > and clarifying when/why you might need it -- regardless of the method
> > > > name, we should explicitly call this out in the javadocs.
> > > >
> > > > As for the method name, on reflection I agree that "rejoinGroup" does
> > > > not seem to be appropriate. Of course that's what the consumer will
> > > > actually be doing, but that's just an implementation detail -- the
> > > > name should reflect what the API is doing, not how it does it (which
> > > > can always change).
> > > >
> > > > How about "enforceRebalance"? This is stolen from the StreamThread
> > > > method that does exactly this (by unsubscribing), so it seems to fit.
> > > > I'll update the KIP with this unless anyone has another suggestion.
> > > >
> > > > Regarding the Consumer vs KafkaConsumer matter, I included the
> > > > KafkaConsumer method because that's where all the javadocs redirect to
> > > > in the Consumer interface. Also, FWIW I'm pretty sure KafkaConsumer is
> > > > also part of the public API -- we would be adding a new method to both.
> > > >
> > > > On Fri, Feb 7, 2020 at 7:42 PM John Roesler  wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> Thanks for the well-motivated KIP, Sophie. I had some alternatives in
> > > >> mind, which I won't even bother to relate because I feel like the
> > > >> motivation made a compelling argument for the API as proposed.
> > > >>
> > > >> One very minor point you might as well fix is that the API change is
> > > >> targeted at KafkaConsumer (the implementation), but should be targeted
> > > >> at Consumer (the interface).
> > > >>
> > > >> I agree with your discomfort about the name. Adding a "rejoin" method
> > > >> seems strange since there's no "join" method. Instead, the way you
> > > >> join the group the first time is just by calling "subscribe". But
> > > >> "resubscribe" seems too indirect from what we're really trying to do,
> > > >> which is to trigger a rebalance by sending a new JoinGroup request.
> > > >>
> > > >> Another angle is that we don't want the method to sound like
> > > >> something you should be calling in normal circumstances, or people
> > > >> will be "tricked" into calling it unnecessarily.
> > > >>
> > > >> So, I think "rejoinGroup" is fine, although a person _might_ be
> > > >> forgiven for thinking they need to call it periodically or something.
> > > >> Did you consider "triggerRebalance", which sounds pretty advanced-ish,
> > > >> and accurately describes what happens when you call it?
> > > >>
> > > >> All in all, the 

KAFKA-9308: Request for Review of documentation

2020-02-11 Thread Sönke Liebau
Hi everybody,

I've reworked the SSL part of the documentation a little in order to fix
(among other things) KAFKA-9308[1] and would love some feedback if someone
can spare a few minutes.

Pull request: https://github.com/apache/kafka/pull/8009

[1] https://issues.apache.org/jira/browse/KAFKA-9308

Best regards,
Sönke


Build failed in Jenkins: kafka-2.5-jdk8 #13

2020-02-11 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9505: Only loop over topics-to-validate in retries (#8039)

[wangguoz] KAFKA-9480: Fix bug that prevented to measure task-level process-rate

[wangguoz] KAFKA-9523: Migrate 
BranchedMultiLevelRepartitionConnectedTopologyTest

[vvcephei] KAFKA-9487: Follow-up PR of Kafka-9445 (#8033)

[vvcephei] KAFKA-9517: Fix default serdes with FK join (#8061)


--
[...truncated 1.54 MB...]

kafka.tools.MirrorMakerIntegrationTest > testCommitOffsetsThrowTimeoutException 
STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommitOffsetsThrowTimeoutException 
PASSED

kafka.tools.DumpLogSegmentsTest > testPrintDataLog STARTED

kafka.tools.DumpLogSegmentsTest > testPrintDataLog PASSED

kafka.tools.DumpLogSegmentsTest > testDumpIndexMismatches STARTED

kafka.tools.DumpLogSegmentsTest > testDumpIndexMismatches PASSED

kafka.tools.DumpLogSegmentsTest > testDumpTimeIndexErrors STARTED

kafka.tools.DumpLogSegmentsTest > testDumpTimeIndexErrors PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigs STARTED

kafka.tools.ConsoleProducerTest > testValidConfigs PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption STARTED

kafka.tools.ConsumerPerformanceTest > testConfigWithUnrecognizedOption PASSED

kafka.tools.ConsumerPerformanceTest > testConfig STARTED

kafka.tools.ConsumerPerformanceTest > testConfig PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
STARTED

kafka.tools.ConsoleConsumerTest > shouldParseGroupIdFromBeginningGivenTogether 
PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnOffsetWithoutPartition PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning STARTED

kafka.tools.ConsoleConsumerTest > 
shouldExitOnInvalidConfigWithAutoOffsetResetAndConflictingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit STARTED

kafka.tools.ConsoleConsumerTest > shouldResetUnConsumedOffsetsBeforeExit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetLatest PASSED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
STARTED

kafka.tools.ConsoleConsumerTest > groupIdsProvidedInDifferentPlacesMustMatch 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetAndMatchingFromBeginning 
STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetAndMatchingFromBeginning PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnGroupIdAndPartitionGivenTogether 
STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnGroupIdAndPartitionGivenTogether 
PASSED

kafka.tools.ConsoleConsumerTest > shouldExitOnUnrecognizedNewConsumerOption 
STARTED

kafka.tools.ConsoleConsumerTest > shouldExitOnUnrecognizedNewConsumerOption 
PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidConsumerConfigWithAutoOffsetResetEarliest PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod STARTED

kafka.tools.ConsoleConsumerTest > 
testCustomPropertyShouldBePassedToConfigureMethod PASSED

kafka.tools.ConsoleConsumerTest > 

Re: [DISCUSS] : KIP-562: Allow fetching a key from a single partition rather than iterating over all the stores on an instance

2020-02-11 Thread Bruno Cadonna
Hi all,

I am fine with StoreQueryParameters, but then we should also change
the DSL grammar accordingly. Since the PR was merged, I suppose
everybody agrees on the new name and I changed the DSL grammar.

Best,
Bruno

On Thu, Feb 6, 2020 at 1:07 PM Navinder Brar
 wrote:
>
> Hi,
>
> While implementing 562, we have decided to rename StoreQueryParams ->
> StoreQueryParameters. I have updated the PR and Confluence. Please share
> any feedback on it.
>
> Thanks & Regards,
> Navinder Pal Singh Brar
>
> On Friday, 24 January, 2020, 08:45:15 am IST, Navinder Brar 
>  wrote:
>
>  Hi John,
>
> Thanks for the responses. I will make the below changes as I had suggested 
> earlier, and then close the vote in a few hours.
>
> includeStaleStores -> staleStores
> withIncludeStaleStores() -> enableStaleStores()
> includeStaleStores() -> staleStoresEnabled()
>
> Thanks,
> Navinder
>
> Sent from Yahoo Mail for iPhone
>
>
> On Friday, January 24, 2020, 5:36 AM, John Roesler  
> wrote:
>
> Hi Bruno,
>
> Thanks for your question; it's a very reasonable response to
> what I said before.
>
> I didn't mean "field" as in an instance variable, just as in a specific
> property or attribute. It's hard to talk about because all the words
> for this abstract concept are also words that people commonly use
> for instance variables.
>
> The principle is that these classes should be simple data containers.
> It's not so much that the methods match the field name, or that the
> field name matches the methods, but that all three bear a simple
> and direct relationship to each other. Or maybe I should say that
> the getters, setters, and fields are all directly named after a property.
>
> The point is that we should make sure we're not "playing games" in
> these classes but simply setting a property and offering a transparent
> way to get exactly what you just set.
>
> I actually do think that the instance variable itself should have the
> same name as the "field" or "property" that the getters and setters
> are named for. This is not a violation of encapsulation because those
> instance variables are required to be private.
>
> I guess you can think of this rule as more of a style guide than a
> grammar, but whatever. As a maintainer, I think we should discourage
> these particular classes from having different instance variables than
> method names. Otherwise, it's just silly. Either "includeStaleStores"
> or "staleStoresEnabled" is a fine name, but not both. There's no
> reason at all to name all the accessors one of them and the instance
> variable they access the other name.
>
> Thanks,
> -John
>
>
> On Thu, Jan 23, 2020, at 17:27, Bruno Cadonna wrote:
> > Hi John,
> >
> > One question: Why must the field name be involved in the naming? It
> > somehow contradicts encapsulation. Field name `includeStaleStores` (or
> > `staleStoresEnabled`) sounds perfectly fine to me. IMO, we should
> > decouple the parameter name from the actual field name.
> >
> > Bruno
> >
> > On Thu, Jan 23, 2020 at 3:02 PM John Roesler  wrote:
> > >
> > > Hi all,
> > >
> > > Thanks for the discussion!
> > >
> > > The basic idea I used in the original draft of the grammar was to avoid
> > > "fancy code" and just write "normal java". That's why I favored the
> > > Java bean spec over Kafka code traditions.
> > >
> > > According to the bean spec, setters always start with "set" and getters
> > > always start with "get" (or "is" for booleans). This often yields absurd
> > > or awkward readability. On the other hand, the "Kafka" idiom seems to
> > > descend from the Scala idiom of naming getters and setters exactly the
> > > same as the field they get and set. This plays to a language feature of
> > > Scala (getter referential transparency) that is not present in Java. My
> > > feeling is that we probably keep this idiom around precisely to avoid
> > > the absurd phrasing that the bean spec leads to.
> > >
> > > On the other hand, adopting the Kafka/Scala idiom brings in an
> > > additional burden I was trying to avoid: you have to pick a good
> > > name. Basically I was trying to avoid exactly this conversation ;)
> > > i.e., "X sounds weird, how about Y", "well, Y also sounds weird,
> > > what about Z", "Z sounds good, but then the setter sounds weird",
> > > etc.
> > >
> > > Maybe avoiding discussion was too ambitious, and I can't deny that
> > > bean spec names probably result in no one being happy, so I'm on
> > > board with the current proposal:
> > >
> > > setters:
> > > set{FieldName}(value)
> > > {enable/disable}{FieldName}()
> > >
> > > getters:
> > > {fieldName}()
> > > {fieldName}{Enabled/Disabled}()
> > >
> > > Probably, we'll find cases that are silly under that formula too,
> > > but we'll cross that bridge when we get to it.
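
(As a hedged illustration of the grammar proposed above, next to the
bean-spec alternative it replaces; the class and property here are
hypothetical, not an actual Kafka class.)

    // Hypothetical config class following the proposed grammar for a
    // boolean property named "staleStores".
    public class ExampleQueryConfig {
        private boolean staleStores = false;  // instance variable named after the property

        // Setter in the {enable/disable}{FieldName}() form:
        public ExampleQueryConfig enableStaleStores() {
            this.staleStores = true;
            return this;
        }

        public ExampleQueryConfig disableStaleStores() {
            this.staleStores = false;
            return this;
        }

        // Getter in the {fieldName}{Enabled/Disabled}() form:
        public boolean staleStoresEnabled() {
            return staleStores;
        }
    }

    // The bean-spec equivalents would be setStaleStores(boolean) and
    // isStaleStores(), which reads awkwardly -- the motivation above.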
> > >
> > > I'll update the grammar when I get the chance.
> > >
> > > Thanks!
> > > -John
> > >
> > > On Thu, Jan 23, 2020, at 12:37, Navinder Brar wrote:
> > > > Thanks Bruno, for the comments.
> > > > 1) Fixed.
> > > >
> >