Build failed in Jenkins: kafka-2.5-jdk8 #17

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[vvcephei] KAKFA-9503: Fix TopologyTestDriver output order (#8065)

[vvcephei] KAFKA-9500: Fix FK Join Topology (#8015)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-

Re: [DISCUSS] KIP-552: Add interface to handle unused config

2020-02-12 Thread Patrik Kleindl
Hi John

Regarding Kafka Streams, this can probably be fixed easily, but that does not
address the underlying issue that other custom prefixes are not supported.
It seems I even did a short analysis several months ago and forgot about it; see
https://issues.apache.org/jira/browse/KAFKA-6793?focusedCommentId=16870899&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16870899

We have used custom prefixes to pass properties to the RocksDBConfigSetter
and it seems people are doing something similar in Connect, see
https://issues.apache.org/jira/browse/KAFKA-7509
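
For illustration, a minimal sketch of that pattern (the "rocksdb." prefix and
the key name are hypothetical, not part of Streams):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Sketch only: reads a custom-prefixed property that was passed through the
// StreamsConfig properties and applies it to the RocksDB options.
public class PrefixedRocksDBConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // All properties handed to KafkaStreams are visible here, so a
        // custom-prefixed entry like "rocksdb.write.buffer.size" can be read.
        final Object bufferSize = configs.get("rocksdb.write.buffer.size");
        if (bufferSize != null) {
            options.setWriteBufferSize(Long.parseLong(bufferSize.toString()));
        }
    }
}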

This KIP just seeks to avoid the false positives; setting the log level to
debug was preferred over implementing the custom prefixes.

best regards

Patrik

On Tue, 11 Feb 2020 at 18:21, John Roesler  wrote:

> Ah... I've just looked at some integration tests in Streams, and see the
> same thing.
>
> I need to apologize to everyone in the thread for my lack of
> understanding, and to
> thank Gwen for her skepticism. Looking back at the KIP itself, I see that
> Artur specifically
> listed log messages caused by Streams itself, which I failed to realize
> shouldn't be
> there at all.
>
> It now seems that we should not have a KIP at all, and also shouldn't make
> any changes
> to log levels or loggers. Instead we should treat KAFKA-6793 as a normal
> bug whose
> cause is that Streams does not correctly construct the client
> configurations when initializing
> the clients. It is leaving in the prefixed version of the client configs,
> but it should remove them.
> We should also add a test that we can specify all kinds of client
> configurations to Streams and
> that no WARN logs result during startup.
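>
> For illustration, the kind of fix I have in mind, as a rough sketch (the
> helper name is made up, not actual Streams code; imports java.util.HashMap
> and java.util.Map):
>
> // Sketch: after applying the prefixed overrides, drop the raw prefixed
> // keys so they are not forwarded to the client, which logs "supplied but
> // isn't a known config" for keys it doesn't recognize.
> private static Map<String, Object> withoutPrefixed(final Map<String, Object> props,
>                                                    final String prefix) {
>     final Map<String, Object> cleaned = new HashMap<>(props);
>     cleaned.keySet().removeIf(key -> key.startsWith(prefix));
>     return cleaned;
> }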
>
> Artur, what do you think about cancelling KIP-552 and instead just
> implementing a fix?
>
> Again, I'm really sorry for not realizing this sooner. And again, thanks
> to Gwen for chiming in.
>
> -John
>
> On Mon, Feb 10, 2020, at 02:19, Patrik Kleindl wrote:
> > Hi John
> > Starting an empty streams instance
> >
> > final String bootstrapServers = "broker0:9092";
> > Properties streamsConfiguration = new Properties();
> > streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "configDemo");
> > streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
> > StreamsBuilder builder = new StreamsBuilder();
> > final KafkaStreams streams = new KafkaStreams(builder.build(), streamsConfiguration);
> > streams.start();
> >
> > results in:
> >
> > stream-thread [configDemo-bcaf82b4-324d-4956-a2a8-1dea0a8e3a2e-StreamThread-1] Creating consumer client
> > ConsumerConfig values:
> > ...
> > stream-thread [configDemo-bcaf82b4-324d-4956-a2a8-1dea0a8e3a2e-StreamThread-1-consumer] Cooperative rebalancing enabled now
> > The configuration 'admin.retries' was supplied but isn't a known config.
> > The configuration 'admin.retry.backoff.ms' was supplied but isn't a known config.
> > Kafka version: 2.4.0
> >
> > when the normal consumer is created, but not for admin client /
> > producer / restore consumer.
> >
> > StreamsConfig seems to include this on purpose:
> >
> > final AdminClientConfig adminClientDefaultConfig = new AdminClientConfig(getClientPropsWithPrefix(ADMIN_CLIENT_PREFIX, AdminClientConfig.configNames()));
> > consumerProps.put(adminClientPrefix(AdminClientConfig.RETRIES_CONFIG), adminClientDefaultConfig.getInt(AdminClientConfig.RETRIES_CONFIG));
> > consumerProps.put(adminClientPrefix(AdminClientConfig.RETRY_BACKOFF_MS_CONFIG), adminClientDefaultConfig.getLong(AdminClientConfig.RETRY_BACKOFF_MS_CONFIG));
> >
> > If I add
> >
> > streamsConfiguration.put(StreamsConfig.restoreConsumerPrefix(ConsumerConfig.RECEIVE_BUFFER_CONFIG), 65536);
> > streamsConfiguration.put(StreamsConfig.mainConsumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 100);
> >
> > then the warnings
> >
> > The configuration 'main.consumer.max.poll.records' was supplied but isn't a known config.
> > The configuration 'restore.consumer.receive.buffer.bytes' was supplied but isn't a known config.
> >
> > are shown for all clients, not only the last consumer.
> >
> > Streams provides these prefixes itself, so maybe they are not handled
> > correctly when the log message is generated.
> >
> > Maybe this helps to pinpoint the source of this in Kafka Streams, at least.
> >
> > best regards
> >
> > Patrik
> >
> >
> > On Sat, 8 Feb 2020 at 05:11, John Roesler  wrote:
> >
> > > Looking at where the log message comes from:
> > > org.apache.kafka.common.config.AbstractConfig#logUnused
> > > it seems like maybe the warning just happens when you pass
> > > extra configs to a client that it has no knowledge of (and therefore
> > > doesn't "use").
> > >
> > > I'm now suspicious that Streams is actually sending extra configs to the
> > > clients, although it seems like we _don't_ see these warnings in other
> > > cases.
> > >
> > > Maybe some of the folks who actually see these messages can try to
> pinpoint
> > > where 

Build failed in Jenkins: kafka-trunk-jdk8 #4227

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9500: Fix FK Join Topology (#8015)


--
[...truncated 2.87 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :stre

Build failed in Jenkins: kafka-trunk-jdk11 #1151

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9500: Fix FK Join Topology (#8015)


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen PASSED

org.apache.kafka.streams.inter

Build failed in Jenkins: kafka-2.3-jdk8 #174

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Start using Response and replace IOException in


--
[...truncated 2.96 MB...]

kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed PASSED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed STARTED

kafka.log.LogValidatorTest > testControlRecordsNotCompressed PASSED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testInvalidCreateTimeNonCompressedV1 PASSED

kafka.log.LogValidatorT

Build failed in Jenkins: kafka-2.4-jdk8 #140

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9540: Move "Could not find the standby task while closing it" log

[rhauch] MINOR: Start using Response and replace IOException in

[vvcephei] KAFKA-9390: Make serde pseudo-topics unique (#8054)


--
[...truncated 5.51 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithPro

Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-02-12 Thread Satish Duggana
Hi Jun,
Thanks for your comments on the separation of remote log metadata
storage and remote log storage.
We have had a few discussions since early Jan on how to support eventually
consistent stores like S3 by decoupling remote log segment metadata from
remote log storage. It is written up in detail in the doc here [1].
Below is a brief summary of the discussion from that doc.

The current approach consists of pulling the remote log segment
metadata from remote log storage APIs. It worked fine for storages
like HDFS. But one of the problems of relying on the remote storage to
maintain metadata is that tiered storage needs it to be strongly
consistent, with an impact not only on the metadata (e.g. LIST in S3)
but also on the segment data (e.g. GET after a DELETE in S3). The cost
of maintaining metadata in remote storage also needs to be factored in;
in the case of S3, LIST APIs incur huge costs, as you raised earlier.
So, it is good to separate the remote storage from the remote log
metadata store. We refactored the existing RemoteStorageManager and
introduced RemoteLogMetadataManager. The remote log metadata store should
give strong consistency semantics, but the remote log storage can be
eventually consistent.
We can have a default implementation of RemoteLogMetadataManager
which uses an internal topic (as mentioned in one of our earlier
emails) as storage. But users can always plug in their own
RemoteLogMetadataManager implementation based on their environment.
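
To make the split concrete, here is a rough sketch of the idea (the method
names and signatures are illustrative only, not the KIP's actual API):

import java.util.List;

// Strongly consistent store for the authoritative list of remote segments.
interface RemoteLogMetadataManager {
    void putSegmentMetadata(String topicPartition, long baseOffset, String segmentId);
    List<String> listSegmentIds(String topicPartition);
}

// The byte store; it may be eventually consistent (e.g. S3), because readers
// never LIST it directly and only GET segment ids obtained from the metadata
// manager above.
interface RemoteStorageManager {
    void copySegment(String segmentId, byte[] segmentBytes);
    byte[] fetchSegment(String segmentId);
}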

Please go through the updated KIP and let us know your comments. We
have started refactoring for the changes mentioned in the KIP and
there may be a few more updates to the APIs.

[1] 
https://docs.google.com/document/d/1qfkBCWL1e7ZWkHU7brxKDBebq4ie9yK20XJnKbgAlew/edit?ts=5e208ec7#

On Fri, Dec 27, 2019 at 5:43 PM Ivan Yurchenko  wrote:
>
> Hi all,
>
>
> Jun:
> > (a) Cost: S3 list object requests cost $0.005 per 1000 requests. If you
> > have 100,000 partitions and want to pull the metadata for each partition
> > at the rate of 1/sec, it can cost $0.5/sec, which is roughly $40K per day.
>
> I want to note here that no reasonably durable storage will be cheap
> at 100k RPS. For example, DynamoDB might give the same ballpark figures.
> If we want to keep the pull-based approach, we can try to reduce this number
> in several ways: doing listings less frequently (as Satish mentioned,
> with the current defaults it's ~3.33k RPS for your example), or
> batching listing operations in some way (depending on the storage;
> it might require a change to RSM's interface).
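>
> (For the record, the arithmetic behind these figures: 100,000 requests/sec
> x $0.005 per 1,000 requests = $0.5/sec, i.e. roughly $43K per day, while
> ~3.33k RPS works out to about $0.017/sec, or roughly $1.4K per day.)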
>
>
> > There are different ways for doing push based metadata propagation. Some
> > object stores may support that already. For example, S3 supports events
> > notification
> This sounds interesting. However, I see a couple of issues using it:
>   1. As I understand the documentation, notification delivery is not
> guaranteed, and it's recommended to periodically do LIST to fill the gaps,
> which brings us back to the same LIST consistency guarantees issue.
>   2. The same goes for broker start: to get the current state, we need to LIST.
>   3. The dynamic set of multiple consumers (RSMs): AFAIK, SQS and SNS aren't
> designed for such a case.
>
>
> Alexandre:
> > A.1 As commented on PR 7561, S3 consistency model [1][2] implies RSM
> cannot
> > relies solely on S3 APIs to guarantee the expected strong consistency. The
> > proposed implementation [3] would need to be updated to take this into
> > account. Let’s talk more about this.
>
> Thank you for the feedback. I clearly see the need for changing the S3
> implementation to provide stronger consistency guarantees. As I see from
> this thread, there are several possible approaches to this. Let's discuss
> RemoteLogManager's contract and behavior (like pull vs push model) further
> before picking one (or several?) of them.
> I'm going to do some evaluation of DynamoDB for the pull-based approach,
> to see whether it can be applied at a reasonable cost, and also of the
> push-based approach with a Kafka topic as the medium.
>
>
> > A.2.3 Atomicity – what does an implementation of RSM need to provide with
> > respect to atomicity of the APIs copyLogSegment, cleanupLogUntil and
> > deleteTopicPartition? If a partial failure happens in any of those (e.g.
> in
> > the S3 implementation, if one of the multiple uploads fails [4]),
>
> The S3 implementation is going to change, but it's worth clarifying anyway.
> The segment log file is uploaded only after S3 has acked the upload of all
> other files associated with the segment, and only after this does the whole
> segment file set become visible remotely for operations like
> listRemoteSegments [1].
> In case of an upload failure, the files that have been successfully uploaded
> stay as invisible garbage that is collected by cleanupLogUntil (or
> overwritten successfully later).
> The opposite happens during deletion: log files are deleted first.
> This approach should generally wo

[jira] [Created] (KAFKA-9542) ZSTD Compression Not Working

2020-02-12 Thread Prashant (Jira)
Prashant created KAFKA-9542:
---

 Summary: ZSTD Compression Not Working
 Key: KAFKA-9542
 URL: https://issues.apache.org/jira/browse/KAFKA-9542
 Project: Kafka
  Issue Type: Bug
  Components: compression
Affects Versions: 2.3.0
 Environment: Linux, CentOS
Reporter: Prashant


I enabled zstd compression at the producer by adding "compression.type=zstd" to
the producer config. When I try to run it, the producer fails with

"org.apache.kafka.common.errors.UnknownServerException: The server experienced
an unexpected error when processing the request"
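
For reference, a minimal sketch of the producer setup described above (the
broker address and serializers are assumptions; only compression.type comes
from this report):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");                       // assumption
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());    // assumption
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());  // assumption
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");                                  // "compression.type=zstd"
KafkaProducer<String, String> producer = new KafkaProducer<>(props);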

In Broker Logs, I could find following exception:

 

[2020-02-12 11:48:04,623] ERROR [ReplicaManager broker=1] Error processing append operation on partition load_logPlPts-6 (kafka.server.ReplicaManager)
org.apache.kafka.common.KafkaException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.kafka.common.record.CompressionType$ZstdConstructors
       at org.apache.kafka.common.record.CompressionType$5.wrapForInput(CompressionType.java:133)
       at org.apache.kafka.common.record.DefaultRecordBatch.compressedIterator(DefaultRecordBatch.java:257)
       at org.apache.kafka.common.record.DefaultRecordBatch.iterator(DefaultRecordBatch.java:324)
       at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:54)
       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
       at kafka.log.LogValidator$$anonfun$validateMessagesAndAssignOffsetsCompressed$1.apply(LogValidator.scala:269)
       at kafka.log.LogValidator$$anonfun$validateMessagesAndAssignOffsetsCompressed$1.apply(LogValidator.scala:261)
       at scala.collection.Iterator$class.foreach(Iterator.scala:891)
       at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
       at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
       at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
       at kafka.log.LogValidator$.validateMessagesAndAssignOffsetsCompressed(LogValidator.scala:261)
       at kafka.log.LogValidator$.validateMessagesAndAssignOffsets(LogValidator.scala:72)
       at kafka.log.Log$$anonfun$append$2.liftedTree1$1(Log.scala:869)
       at kafka.log.Log$$anonfun$append$2.apply(Log.scala:868)
       at kafka.log.Log$$anonfun$append$2.apply(Log.scala:850)
       at kafka.log.Log.maybeHandleIOException(Log.scala:2065)
       at kafka.log.Log.append(Log.scala:850)
       at kafka.log.Log.appendAsLeader(Log.scala:819)
       at kafka.cluster.Partition$$anonfun$14.apply(Partition.scala:771)
       at kafka.cluster.Partition$$anonfun$14.apply(Partition.scala:759)
       at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)

 

This is a fresh broker installed on CentOS Linux 7. This doesn't seem to be a
classpath issue, as the same package is working on macOS.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9543) Consumer offset reset after new segment rolling

2020-02-12 Thread Jira
Rafał Boniecki created KAFKA-9543:
-

 Summary: Consumer offset reset after new segment rolling
 Key: KAFKA-9543
 URL: https://issues.apache.org/jira/browse/KAFKA-9543
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Rafał Boniecki
 Attachments: Untitled.png

After upgrading from Kafka 2.1.1 to 2.4.0, I'm experiencing unexpected consumer
offset resets.

Consumer:
{code:java}
2020-02-12T11:12:58.402+01:00 hostname 4a2a39a35a02 
[2020-02-12T11:12:58,402][INFO 
][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer 
clientId=logstash-1, groupId=logstash] Fetch offset 1632750575 is out of range 
for partition stats-5, resetting offset
{code}
Broker:
{code:java}
2020-02-12 11:12:58:400 CET INFO  
[data-plane-kafka-request-handler-1][kafka.log.Log] [Log partition=stats-5, 
dir=/kafka4/data] Rolled new log segment at offset 1632750565 in 2 ms.{code}
All resets are perfectly correlated with rolling new segments at the broker:
the segment is rolled first, then, a couple of ms later, the reset on the
consumer occurs. Attached is a Grafana graph with consumer lag per partition.
All sudden spikes in lag are offset resets due to this bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.4-jdk8 #141

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[vvcephei] KAKFA-9503: Fix TopologyTestDriver output order (#8065)

[vvcephei] KAFKA-9500: Fix FK Join Topology (#8015)


--
[...truncated 5.56 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > sho

Build failed in Jenkins: kafka-2.5-jdk8 #18

2020-02-12 Thread Apache Jenkins Server
See 

Changes:


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOU

[jira] [Resolved] (KAFKA-9499) Improve deletion process by batching more aggressively

2020-02-12 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9499.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Improve deletion process by batching more aggressively
> --
>
> Key: KAFKA-9499
> URL: https://issues.apache.org/jira/browse/KAFKA-9499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.5.0
>
>
> I have noticed that the topics deletion process could be improved by batching
> topics here:
> [https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/controller/TopicDeletionManager.scala#L354]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6607) Kafka Streams lag not zero when input topic transactional

2020-02-12 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6607.

Fix Version/s: 2.5.0
   Resolution: Fixed

> Kafka Streams lag not zero when input topic transactional
> -
>
> Key: KAFKA-6607
> URL: https://issues.apache.org/jira/browse/KAFKA-6607
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
> Fix For: 2.5.0
>
>
> When an input topic for a Kafka Streams application is written using
> transactions, Kafka Streams commits an "incorrect" offset, ie, it commits
> "lastProcessedMessageOffset + 1", which is smaller than "endOffset" when it
> reaches the end of the topic. The reason is the commit marker that is the last
> "message" in the topic; Streams does not take commit markers into account
> when committing.
> This is not a correctness issue, but when one inspects the consumer lag via
> {{bin/kafka-consumer-groups.sh}} the lag is shown as 1 instead of 0, which is
> correct from the consumer-group tool's point of view.
> Note that all applications using a plain consumer may face the same issue if
> they use `KafkaConsumer#commitSync(Map<TopicPartition, OffsetAndMetadata>
> offsets)`: to address the issue, the correct pattern is to either commit
> "nextRecord.offset()" (if the next record is available already, ie, was
> returned by `poll()`), or use `consumer.position()`, which takes the commit
> marker into account and would "step over it".
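>
> For illustration, a minimal sketch of the recommended pattern (assuming a
> consumer assigned to a single TopicPartition "tp"; imports java.time.Duration,
> java.util.Collections, and org.apache.kafka.clients.consumer.OffsetAndMetadata):
>
> // Commit via position(), which steps over commit markers, instead of
> // committing lastRecord.offset() + 1.
> consumer.poll(Duration.ofMillis(100));
> final long next = consumer.position(tp);
> consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(next)));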



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.5-jdk8 #19

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[mickael.maison] KAFKA-9423: Refine layout of configuration options on website 
and make


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kaf

[jira] [Resolved] (KAFKA-9417) Integration test for new EOS model with vanilla Producer and Consumer

2020-02-12 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9417.
--
Resolution: Fixed

> Integration test for new EOS model with vanilla Producer and Consumer
> -
>
> Key: KAFKA-9417
> URL: https://issues.apache.org/jira/browse/KAFKA-9417
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> We would like to extend the `TransactionMessageCopier` to use the new 
> subscription-mode consumer and build a system test on top of it in order to 
> verify that the new semantics actually work.
> We also want to make sure backward compatibility is maintained by using the 
> group metadata API in existing tests as well.
> 
> A minor public change is also included within this PR: the broker-side 
> default of `transaction.abort.timed.out.transaction.cleanup.interval.ms` is 
> set to 10000 ms (10 seconds).

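> For illustration, that setting would look like this in the broker's 
> server.properties (value inferred from the "10 seconds" above; see the PR 
> for the authoritative default):
> {code}
> transaction.abort.timed.out.transaction.cleanup.interval.ms=10000
> {code}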


--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4228

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix unnecessary metadata fetch before group assignment (#8095)

[github] KAFKA-9499; Improve deletion process by batching more aggressively


--
[...truncated 2.87 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-12 Thread Sophie Blee-Goldman
Hey Guozhang, thanks for the thorough reply!

I definitely went back and forth on whether to make it a blocking call,
and ultimately went with blocking just to leave it open to potential future
use cases (in particular non-Streams apps). But on second (or third)
thought I think I agree that any such use case would be equally well covered
by just calling poll() immediately after enforceRebalance(). It seems best to
leave all rebalancing action within the scope of poll alone and avoid
introducing unnecessary complexity -- happy to revert this then.

I think that ends up addressing most of your other concerns, although
there's one I would push back on: this method should still explicitly
call out whether a rebalance is already in progress and the call is thus
a no-op. If throwing a RebalanceInProgressException seems too
heavy maybe we can just return a boolean indicating whether a new
rebalance was triggered or not.

The snippet you included does work around this, by checking the
condition again in the rebalance listener. But I would argue that a) many
applications don't use a rebalance listener, and shouldn't be forced to
implement one just to fully use this new API (especially since the
assignor's onAssignment method can probably achieve the same thing),
and b) it adds unnecessary complexity -- as we've seen in Streams,
the interactions between the rebalance callbacks and the main consumer
can already get quite ugly.

For simplicity's sake then, I'll propose to just return the boolean rather
than throw the exception, and change the signature to

/**
 * @return Whether a new rebalance was triggered (false if a rebalance
 *         was already in progress)
 * @throws java.lang.IllegalStateException if the consumer does not use
 *         group subscription
 */
boolean enforceRebalance();

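For concreteness, here is a minimal usage sketch (hypothetical names; the
standby-task condition just borrows the KIP-441 example, and this is a
sketch of the proposal rather than final code):

while (running) {
    // hypothetical application-level trigger, e.g. a warmed-up standby task
    if (standbyTaskReady && !consumer.enforceRebalance()) {
        // false: a rebalance was already in progress; try again next loop
    }
    ConsumerRecords<byte[], byte[]> records =
        consumer.poll(Duration.ofMillis(100));
    // ... process records ...
}
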
Thoughts?

On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang  wrote:

> Hello Sophie, thanks for bringing up this KIP, and the great write-up
> summarizing the motivations of the proposal. Here are some comments:
>
> Minor:
>
> 1. If we want to make it a blocking call (I have some thoughts about this
> below :), to be consistent we need to consider having two overloaded
> functions, one without the timeout which then relies on `
> DEFAULT_API_TIMEOUT_MS_CONFIG`.
>
> 2. Also I'd suggest that, again for API consistency, we a) throw
> TimeoutException if the operation cannot be completed within the timeout
> value, b) return false immediately if we cannot trigger a rebalance, e.g.
> because the coordinator is unknown.
>
> Meta:
>
> 3. I'm not sure if we have a concrete scenario where we want to wait until
> the rebalance is completed in KIP-441 / 268, rather than calling
> "consumer.enforceRebalance(); consumer.poll()" consecutively and trying to
> execute the rebalance in the poll call. If there's no valid motivation I'm
> still a bit inclined to make it non-blocking (i.e. just setting a bit and
> then executing the process in the later poll call), similar to our `seek`
> functions. By doing this we can also make this function simpler as it would
> never throw RebalanceInProgress or Timeout or even KafkaExceptions.
>
> 4. Re: the case "when a rebalance is already in progress", this may be
> related to 3) above. I think we can simplify this case as well by just not
> triggering a new rebalance and letting the caller handle it: for example in
> KIP-441, in each iteration of the stream thread, we can check if a standby
> task is ready, and if yes we call `enforceRebalance`; if there's already a
> rebalance in progress (whether with the new subscription metadata, or not)
> this call would be a no-op, and then in the next iteration we would just
> call that function again, and eventually we would trigger the rebalance
> with the new subscription metadata, the previous calls being no-ops and
> hence incurring no cost anyways. I feel this would be simpler than letting
> the caller capture RebalanceInProgressException:
>
>
> mainProcessingLoop() {
>     if (needsRebalance) {
>         consumer.enforceRebalance();
>     }
>
>     records = consumer.poll();
>     ...
>     // do some processing
> }
>
> RebalanceListener {
>     onPartitionsAssigned(...) {
>         if (rebalanceGoalAchieved()) {
>             needsRebalance = false;
>         }
>     }
> }
>
>
> WDYT?
>
>
>
>
> On Tue, Feb 11, 2020 at 3:59 PM Sophie Blee-Goldman 
> wrote:
>
> > Hey Boyang,
> >
> > Originally I had it as a nonblocking call, but decided to change it to
> > blocking with a timeout parameter. I'm not sure it makes sense to return
> > a future here, because the rebalance either does or does not complete
> > within the timeout: if it does not, you will have to call poll again to
> > complete it (as is the case with any other rebalance). I'll call this
> > out in the javadocs as well.
> >
> > I also added an example demonstrating how/when to use this new API.
> >
> > Thanks!
> >
> > On Tue, Feb 11, 2020 at 1:49 PM Boyang Chen 
> > wrote:
> >
> > > Hey Sophie,
> > >
> > > is the `enforceRebalance` a blocking call? Could we add 

[jira] [Resolved] (KAFKA-9192) NullPointerException if field in schema not present in value

2020-02-12 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9192.
--
Fix Version/s: 2.4.1
   2.3.2
   2.5.0
   2.2.3
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk`, `2.5`, `2.4`, `2.3`, and `2.2` branches.

> NullPointerException if field in schema not present in value
> 
>
> Key: KAFKA-9192
> URL: https://issues.apache.org/jira/browse/KAFKA-9192
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
>Reporter: Mark Tinsley
>Priority: Major
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> Given a message:
> {code:java}
> {
>"schema":{
>   "type":"struct",
>   "fields":[
>  {
> "type":"string",
> "optional":true,
> "field":"abc"
>  }
>   ],
>   "optional":false,
>   "name":"foobar"
>},
>"payload":{
>}
> }
> {code}
> I would expect, given the field is optional, for the JsonConverter to still 
> process this value. 
> What happens is I get a null pointer exception, the stacktrace points to this 
> line: 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L701
>  called by 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L181
> Issue seems to be that we need to check and see if the jsonValue is null 
> before checking if the jsonValue has a null value.
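> A minimal sketch of the guard being described (illustrative only, not the 
> merged patch):
> {code:java}
> // guard against the missing node before asking Jackson whether it is a
> // JSON null; an absent optional field yields jsonValue == null here
> if (jsonValue == null || jsonValue.isNull()) {
>     if (schema != null && schema.defaultValue() != null)
>         return schema.defaultValue();
>     return null; // acceptable when the schema is optional
> }
> {code}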



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1152

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix unnecessary metadata fetch before group assignment (#8095)

[github] KAFKA-9499; Improve deletion process by batching more aggressively

[github] KAFKA-6607: Commit correct offsets for transactional input data (#8091)


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED


[jira] [Created] (KAFKA-9544) Flaky Test `KafkaAdminClientTest.testDefaultApiTimeoutOverride`

2020-02-12 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9544:
--

 Summary: Flaky Test 
`KafkaAdminClientTest.testDefaultApiTimeoutOverride`
 Key: KAFKA-9544
 URL: https://issues.apache.org/jira/browse/KAFKA-9544
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


{code}
org.junit.runners.model.TestTimedOutException: test timed out after 12 
milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1260)
at 
org.apache.kafka.clients.admin.KafkaAdminClient.close(KafkaAdminClient.java:594)
at org.apache.kafka.clients.admin.Admin.close(Admin.java:98)
at org.apache.kafka.clients.admin.Admin.close(Admin.java:81)
at 
org.apache.kafka.clients.admin.AdminClientUnitTestEnv.close(AdminClientUnitTestEnv.java:116)
at 
org.apache.kafka.clients.admin.KafkaAdminClientTest.testApiTimeout(KafkaAdminClientTest.java:2642)
at 
org.apache.kafka.clients.admin.KafkaAdminClientTest.testDefaultApiTimeoutOverride(KafkaAdminClientTest.java:2595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.2-jdk8-old #208

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9192: fix NPE when for converting optional json schema in structs


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H26 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 8d839e7d8e1545985b7513f7b2e6b5735a1617ac 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 8d839e7d8e1545985b7513f7b2e6b5735a1617ac
Commit message: "KAFKA-9192: fix NPE when for converting optional json schema 
in structs (#7733)"
 > git rev-list --no-walk 5dc15db523189949f5cc7999da18f53a9fd2aabd # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins6665406421330220497.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins6665406421330220497.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=8d839e7d8e1545985b7513f7b2e6b5735a1617ac, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io


[jira] [Resolved] (KAFKA-9204) ReplaceField transformation fails when encountering tombstone event

2020-02-12 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9204.
--
Fix Version/s: 2.4.1
   2.3.2
   2.5.0
   2.2.3
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk`, `2.5`, `2.4`, `2.3`, and `2.2` branches.

> ReplaceField transformation fails when encountering tombstone event
> ---
>
> Key: KAFKA-9204
> URL: https://issues.apache.org/jira/browse/KAFKA-9204
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Georgios Kalogiros
>Priority: Major
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> When applying the {{ReplaceField}} transformation to a tombstone event, an 
> exception is raised:
>  
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:506)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Only Map objects 
> supported in absence of schema for [field replacement], found: null
>   at 
> org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.applySchemaless(ReplaceField.java:134)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.apply(ReplaceField.java:127)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>   ... 14 more
> {code}
> There was a similar bug for the InsertField transformation that got merged in 
> recently:
> https://issues.apache.org/jira/browse/KAFKA-8523
>  
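> For reference, a sketch of the guard such a fix would add (assumed shape, 
> mirroring the InsertField fix; not the merged patch):
> {code:java}
> @Override
> public R apply(R record) {
>     // pass tombstones (null values) straight through instead of handing
>     // a null to requireMap()/requireStruct()
>     if (operatingValue(record) == null) {
>         return record;
>     }
>     return operatingSchema(record) == null
>             ? applySchemaless(record)
>             : applyWithSchema(record);
> }
> {code}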



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk8 #4229

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6607: Commit correct offsets for transactional input data (#8091)

[github] KAFKA-9417: New Integration Test for KIP-447 (#8000)

[github] KAFKA-9192: fix NPE when for converting optional json schema in structs


--
[...truncated 2.87 MB...]
org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:compileTestJava
> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> Task :streams:upgrade-system-tests-0101:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0101:test
> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE

Build failed in Jenkins: kafka-2.2-jdk8-old #209

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[rhauch] allow ReplaceField SMT to handle tombstone records (#7731)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H35 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 266c367e6c83faabb9a8df956e5f97d5812a4426 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 266c367e6c83faabb9a8df956e5f97d5812a4426
Commit message: "allow ReplaceField SMT to handle tombstone records (#7731)"
 > git rev-list --no-walk 8d839e7d8e1545985b7513f7b2e6b5735a1617ac # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins1873545789954908088.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins1873545789954908088.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=266c367e6c83faabb9a8df956e5f97d5812a4426, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io


[jira] [Created] (KAFKA-9545) Flaky Test `RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted`

2020-02-12 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-9545:
--

 Summary: Flaky Test 
`RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted`
 Key: KAFKA-9545
 URL: https://issues.apache.org/jira/browse/KAFKA-9545
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Jason Gustafson


https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4678/testReport/org.apache.kafka.streams.integration/RegexSourceIntegrationTest/testRegexMatchesTopicsAWhenDeleted/

{code}
java.lang.AssertionError: Condition not met within timeout 15000. Stream tasks 
not updated
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
at 
org.apache.kafka.test.TestUtils.lambda$waitForCondition$5(TestUtils.java:367)
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:415)
at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:383)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:366)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:337)
at 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted(RegexSourceIntegrationTest.java:224)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
{code}
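
For reference, the failing wait is the standard TestUtils polling helper; a
minimal sketch of the call shape at RegexSourceIntegrationTest.java:224 (the
condition body here is assumed, not copied from the test):

{code:java}
TestUtils.waitForCondition(
    () -> assignedTopicsMatchRegexAfterDeletion(),  // hypothetical condition
    15000,                                          // timeout (ms), per the failure above
    "Stream tasks not updated");
{code}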



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-12 Thread Boyang Chen
Hey Sophie,

I'm satisfied with making enforceRebalance() throw no exception
other than the illegal state one. You could imagine this KIP as just making
the `rejoinNeededOrPending` logic external to user requests. Making it as
lightweight as possible makes sense.

Boyang

On Wed, Feb 12, 2020 at 2:14 PM Sophie Blee-Goldman 
wrote:

> Hey Guozhang, thanks for the thorough reply!
>
> I definitely went back and forth on whether to make it a blocking call,
> and ultimately went with blocking just to leave it open to potential future
> use cases (in particular non-Streams apps). But on second (or third)
> thought I think I agree that any such use case would be equally well covered
> by just calling poll() immediately after enforceRebalance(). It seems best to
> leave all rebalancing action within the scope of poll alone and avoid
> introducing unnecessary complexity -- happy to revert this then.
>
> I think that ends up addressing most of your other concerns, although
> there's one I would push back on: this method should still explicitly
> call out whether a rebalance is already in progress and the call is thus
> a no-op. If throwing a RebalanceInProgressException seems too
> heavy maybe we can just return a boolean indicating whether a new
> rebalance was triggered or not.
>
> The snippet you included does work around this, by checking the
> condition again in the rebalance listener. But I would argue that a) many
> applications don't use a rebalance listener, and shouldn't be forced to
> implement one just to fully use this new API (especially since the
> assignor's onAssignment method can probably achieve the same thing),
> and b) it adds unnecessary complexity -- as we've seen in Streams,
> the interactions between the rebalance callbacks and the main consumer
> can already get quite ugly.
>
> For simplicity's sake then, I'll propose to just return the boolean rather
> than throw the exception, and change the signature to
>
> /**
>  * @return Whether a new rebalance was triggered (false if a rebalance
>  *         was already in progress)
>  * @throws java.lang.IllegalStateException if the consumer does not use
>  *         group subscription
>  */
> boolean enforceRebalance();
>
> Thoughts?
>
> On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang  wrote:
>
> > Hello Sophie, thanks for bringing up this KIP, and the great write-up
> > summarizing the motivations of the proposal. Here are some comments:
> >
> > Minor:
> >
> > 1. If we want to make it a blocking call (I have some thoughts about this
> > below :), to be consistent we need to consider having two overloaded
> > functions, one without the timeout which then relies on `
> > DEFAULT_API_TIMEOUT_MS_CONFIG`.
> >
> > 2. Also I'd suggest that, again for API consistency, we a) throw
> > TimeoutException if the operation cannot be completed within the timeout
> > value, b) return false immediately if we cannot trigger a rebalance,
> > e.g. because the coordinator is unknown.
> >
> > Meta:
> >
> > 3. I'm not sure if we have a concrete scenario where we want to wait until
> > the rebalance is completed in KIP-441 / 268, rather than calling
> > "consumer.enforceRebalance(); consumer.poll()" consecutively and trying to
> > execute the rebalance in the poll call. If there's no valid motivation I'm
> > still a bit inclined to make it non-blocking (i.e. just setting a bit and
> > then executing the process in the later poll call), similar to our `seek`
> > functions. By doing this we can also make this function simpler as it would
> > never throw RebalanceInProgress or Timeout or even KafkaExceptions.
> >
> > 4. Re: the case "when a rebalance is already in progress", this may be
> > related to 3) above. I think we can simplify this case as well by just not
> > triggering a new rebalance and letting the caller handle it: for example in
> > KIP-441, in each iteration of the stream thread, we can check if a standby
> > task is ready, and if yes we call `enforceRebalance`; if there's already a
> > rebalance in progress (whether with the new subscription metadata, or not)
> > this call would be a no-op, and then in the next iteration we would just
> > call that function again, and eventually we would trigger the rebalance
> > with the new subscription metadata, the previous calls being no-ops and
> > hence incurring no cost anyways. I feel this would be simpler than letting
> > the caller capture RebalanceInProgressException:
> >
> >
> > mainProcessingLoop() {
> >     if (needsRebalance) {
> >         consumer.enforceRebalance();
> >     }
> >
> >     records = consumer.poll();
> >     ...
> >     // do some processing
> > }
> >
> > RebalanceListener {
> >     onPartitionsAssigned(...) {
> >         if (rebalanceGoalAchieved()) {
> >             needsRebalance = false;
> >         }
> >     }
> > }
> >
> >
> > WDYT?
> >
> >
> >
> >
> > On Tue, Feb 11, 2020 at 3:59 PM Sophie Blee-Goldman  >
> > wrote:
> >
> > > Hey Boyang,
> > >
> > > Originally I had it as a nonblocking call, but decided to change it to
> > > blocking with a timeout parameter.

Build failed in Jenkins: kafka-trunk-jdk11 #1153

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9417: New Integration Test for KIP-447 (#8000)

[github] KAFKA-9192: fix NPE when for converting optional json schema in structs

[github] allow ReplaceField SMT to handle tombstone records (#7731)


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED


Build failed in Jenkins: kafka-2.5-jdk8 #20

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix unnecessary metadata fetch before group assignment (#8095)

[jason] KAFKA-9499; Improve deletion process by batching more aggressively

[wangguoz] KAFKA-9447: Add new customized EOS model example (#8031)

[wangguoz] HOTFIX: Fix spotsbug failure in Kafka examples (#8051)

[wangguoz] KAFKA-9417: New Integration Test for KIP-447 (#8000)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED


Re: [DISCUSS] KIP-494 Connect REST Endpoint for Transformations (SMTs) and other Plugins

2020-02-12 Thread Christopher Egerton
Hi Cyrus,

Thanks for the KIP!

One quick question--I see the use case for being able to list per-connector
plugins (SMTs, converters, and header converters) via REST API, but I'm not
so sure about worker plugins (REST extensions and config providers). Since
those are configured at startup for the worker, is there any advantage to
being able to see the worker plugins available on a running worker? It's
not like they can be added during the lifetime of that worker, and that
information wouldn't be useful to anyone trying to create a connector as
they wouldn't be able to include worker plugins in a connector config
anyways.

And a small nit--it seems like the precedent set ever-so-slightly by the
/connector-plugins resource is to use a hyphen to separate a series of
lowercase words in an endpoint path as opposed to camel case with
capitalization. What do you think about changing the possible plugin types
to "connector", "converter", "header-converter", etc.?

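Concretely, the choice is between e.g. (hypothetical paths, just to
illustrate the two styles):

GET /plugins/header-converter   <- hyphenated, matching /connector-plugins
GET /plugins/headerConverter    <- camel case
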
One final question about forwards compatibility--the description for the
/plugins endpoint indicates that it "Returns plugin descriptions of all
types". If another type of pluggable component is added to the framework,
should we expect this endpoint to be updated to include that type of plugin
as well? And if so, should we also expect a matching /plugins/{pluginType}
endpoint to be added?

Cheers,

Chris

On Fri, Sep 6, 2019 at 11:48 AM Cyrus Vafadari  wrote:

> I've updated the KIP here to simplify and clarify the goal -- it's a pretty
> simple KIP to add a REST endpoint for SMTs and other plugins. This will
> make it easier to see what SMTs you've loaded.
>
> Hoping for some good discussion!
>
> Thanks, all
>
> On Sat, Jul 20, 2019 at 9:07 PM Cyrus Vafadari  wrote:
>
> > Hello, all,
> >
> > I'd like to start discussion on a new KIP I'm proposing to add HTTP
> > endpoints to Kafka Connect to give us some more insight into other
> > non-connector plugins, like Simple Message Transformations (SMTs). The
> KIP
> > would not change how Connect works in any meaningful way except to expose
> > some more data by the endpoint, analogous to existing
> > "ConnectorPluginsResource" class.
> >
> > Looking forward to getting some feedback.
> >
> > Cyrus
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-494%3A+Connect+REST+Endpoint+for+Transformations+%28SMTs%29+and+other+Plugins
> >
>


Build failed in Jenkins: kafka-2.2-jdk8-old #210

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[rhauch] MINOR: Small Connect integration test fixes (#8100)


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H38 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision b9ff91b3edaea46685723605cf6a1067b57cd300 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b9ff91b3edaea46685723605cf6a1067b57cd300
Commit message: "MINOR: Small Connect integration test fixes (#8100)"
 > git rev-list --no-walk 266c367e6c83faabb9a8df956e5f97d5812a4426 # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins4614910982506502891.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins4614910982506502891.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=b9ff91b3edaea46685723605cf6a1067b57cd300, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user nore...@github.com
Not sending mail to unregistered user j...@confluent.io
Not sending mail to unregistered user b...@confluent.io


Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-12 Thread Sophie Blee-Goldman
Thanks Boyang -- makes sense to me. I've optimistically updated the KIP
with this new signature and behavior.
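
A minimal usage sketch of the API as just updated in the KIP -- requesting a
rebalance and letting it execute inside the next poll(). This assumes the
`boolean enforceRebalance()` signature proposed below in this thread;
`needsRebalance` and "input-topic" are hypothetical application details:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class EnforceRebalanceLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "demo-group");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("input-topic"));
                boolean needsRebalance = true; // e.g. new subscription metadata is ready
                while (true) {
                    if (needsRebalance && consumer.enforceRebalance()) {
                        // A new rebalance was requested; it executes inside poll().
                        needsRebalance = false;
                    }
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                    // ... process records ...
                }
            }
        }
    }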


On Wed, Feb 12, 2020 at 4:27 PM Boyang Chen 
wrote:

> Hey Sophie,
>
> I'm satisfied with making enforceRebalance() not throw any exception
> other than an illegal state exception. You could imagine this KIP is just
> making `rejoinNeededOrPending` external to user requests. Making it as
> lightweight as possible makes sense.
>
> Boyang
>
> On Wed, Feb 12, 2020 at 2:14 PM Sophie Blee-Goldman 
> wrote:
>
> > Hey Guozhang, thanks for the thorough reply!
> >
> > I definitely went back and forth on whether to make it a blocking call,
> > and ultimately went with blocking just to leave it open to potential future
> > use cases (in particular non-Streams apps). But on second (or third)
> > thought I think I agree that any use case would be similarly covered by
> > just calling poll() immediately after enforceRebalance(). It seems best to
> > leave all rebalancing action within the scope of poll() alone and avoid
> > introducing unnecessary complexity -- happy to revert this then.
> >
> > I think that ends up addressing most of your other concerns, although
> > there's one I would push back on: this method should still explicitly
> > call out whether a rebalance is already in progress and the call is thus
> > a no-op. If throwing a RebalanceInProgressException seems too
> > heavy maybe we can just return a boolean indicating whether a new
> > rebalance was triggered or not.
> >
> > The snippet you included does work around this, by checking the
> > condition again in the rebalance listener. But I would argue that a) many
> > applications don't use a rebalance listener, and shouldn't be forced to
> > implement it to fully use this new API. More importantly, since you can
> > probably use the assignor's onAssignment method to achieve the same
> > thing, b) it adds unnecessary complexity, and as we've seen in Streams
> > the interactions between the rebalance callbacks and main consumer
> > can already get quite ugly.
> >
> > For simplicity's sake then, I'll propose to just return the bool over the
> > exception and change the signature to
> >
> > /**
> >  * @return Whether a new rebalance was triggered (false if a rebalance
> >  *         was already in progress)
> >  * @throws java.lang.IllegalStateException if the consumer does not use
> >  *         group subscription
> >  */
> > boolean enforceRebalance();
> >
> > Thoughts?
> >
> > On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang 
> wrote:
> >
> > > Hello Sophie, thanks for bringing up this KIP, and the great write-up
> > > summarizing the motivations of the proposal. Here are some comments:
> > >
> > > Minor:
> > >
> > > 1. If we want to make it a blocking call (I have some thoughts about this
> > > below :), to be consistent we need to consider having two overloaded
> > > functions, one without the timeout which then relies on
> > > `DEFAULT_API_TIMEOUT_MS_CONFIG`.
> > >
> > > 2. Also I'd suggest that, again for API consistency, we a) throw
> > > TimeoutException if the operation cannot be completed within the timeout
> > > value, and b) return false immediately if we cannot trigger a rebalance,
> > > for example because the coordinator is unknown.
> > >
> > > Meta:
> > >
> > > 3. I'm not sure we have a concrete scenario in KIP-441 / 268 where we
> > > want to wait until the rebalance is completed, rather than calling
> > > "consumer.enforceRebalance(); consumer.poll()" consecutively and trying to
> > > execute the rebalance in the poll call. If there are no valid motivations
> > > I'm still a bit inclined to make it non-blocking (i.e. just setting a bit
> > > and then executing the process in the later poll call), similar to our
> > > `seek` functions. By doing this we can also make this function simpler, as
> > > it would never throw RebalanceInProgress or Timeout or even KafkaExceptions.
> > >
> > > 4. Re: the case "when a rebalance is already in progress", this may be
> > > related to 3) above. I think we can simplify this case as well by just
> > > not triggering a new rebalance and letting the caller handle it: for
> > > example in KIP-441, in each iteration of the stream thread, we can check
> > > if a standby task is ready, and if yes we call `enforceRebalance`. If
> > > there's already a rebalance in progress (either with the new subscription
> > > metadata, or not) this call would be a no-op, and then in the next
> > > iteration we would just call that function again; eventually we would
> > > trigger the rebalance with the new subscription metadata, and the previous
> > > calls would be no-ops and hence incur no cost anyway. I feel this would be
> > > simpler than letting the caller capture RebalanceInProgressException:
> > >
> > >
> > > mainProcessingLoop() {
> > >     if (needsRebalance) {
> > >         consumer.enforceRebalance();
> > >     }
> > >
> > >     records = consumer.poll();
> > >     ...
> > >     // do some processing
> > > }
> > >
> > > R

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-12 Thread Guozhang Wang
Hi Sophie,

So just to clarify, with the updated API we would keep calling
enforceRebalance until it returns true for cases where we rely on it with
new subscription metadata?

Guozhang
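
Concretely, the retry pattern being asked about here would look roughly like
the sketch below, again assuming the proposed `boolean enforceRebalance()`
signature (note that the reply later in this thread argues only one extra
rebalance should ever be needed):

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    final class RebalanceRetry {
        // Sketch only: keep requesting a rebalance until a new one is actually
        // triggered; the proposed enforceRebalance() returns false while a
        // rebalance is already in flight.
        static void rebalanceUntilTriggered(KafkaConsumer<?, ?> consumer) {
            boolean triggered = false;
            while (!triggered) {
                triggered = consumer.enforceRebalance();
                consumer.poll(Duration.ofMillis(100)); // the rebalance executes inside poll()
            }
        }
    }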

On Wed, Feb 12, 2020 at 5:14 PM Sophie Blee-Goldman 
wrote:

> Thanks Boyang -- makes sense to me. I've optimistically updated the KIP
> with this new signature and behavior.
>
>
> On Wed, Feb 12, 2020 at 4:27 PM Boyang Chen 
> wrote:
>
> > Hey Sophie,
> >
> > I'm satisfied with making enforceRebalance() not throw any exception
> > other than an illegal state exception. You could imagine this KIP is just
> > making `rejoinNeededOrPending` external to user requests. Making it as
> > lightweight as possible makes sense.
> >
> > Boyang
> >
> > On Wed, Feb 12, 2020 at 2:14 PM Sophie Blee-Goldman  >
> > wrote:
> >
> > > Hey Guozhang, thanks for the thorough reply!
> > >
> > > I definitely went back and forth on whether to make it a blocking call,
> > > and ultimately went with blocking just to leave it open to potential
> > > future use cases (in particular non-Streams apps). But on second (or
> > > third) thought I think I agree that any use case would be similarly
> > > covered by just calling poll() immediately after enforceRebalance(). It
> > > seems best to leave all rebalancing action within the scope of poll()
> > > alone and avoid introducing unnecessary complexity -- happy to revert
> > > this then.
> > >
> > > I think that ends up addressing most of your other concerns, although
> > > there's one I would push back on: this method should still explicitly
> > > call out whether a rebalance is already in progress and the call is
> thus
> > > a no-op. If throwing a RebalanceInProgressException seems too
> > > heavy maybe we can just return a boolean indicating whether a new
> > > rebalance was triggered or not.
> > >
> > > The snippet you included does work around this, by checking the
> > > condition again in the rebalance listener. But I would argue that a)
> many
> > > applications don't use a rebalance listener, and shouldn't be forced to
> > > implement it to fully use this new API. More importantly, since you can
> > > probably use the assignor's onAssignment method to achieve the same
> > > thing, b) it adds unnecessary complexity, and as we've seen in Streams
> > > the interactions between the rebalance callbacks and main consumer
> > > can already get quite ugly.
> > >
> > > For simplicity's sake then, I'll propose to just return the bool over
> the
> > > exception and change the signature to
> > >
> > > /**
> > >  * @return Whether a new rebalance was triggered (false if a rebalance
> > >  *         was already in progress)
> > >  * @throws java.lang.IllegalStateException if the consumer does not use
> > >  *         group subscription
> > >  */
> > > boolean enforceRebalance();
> > >
> > > Thoughts?
> > >
> > > On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang 
> > wrote:
> > >
> > > > Hello Sophie, thanks for bringing up this KIP, and the great write-up
> > > > summarizing the motivations of the proposal. Here are some comments:
> > > >
> > > > Minor:
> > > >
> > > > 1. If we want to make it a blocking call (I have some thoughts about
> > > > this below :), to be consistent we need to consider having two
> > > > overloaded functions, one without the timeout which then relies on
> > > > `DEFAULT_API_TIMEOUT_MS_CONFIG`.
> > > >
> > > > 2. Also I'd suggest that, again for API consistency, we a) throw
> > > > TimeoutException if the operation cannot be completed within the
> > > > timeout value, and b) return false immediately if we cannot trigger a
> > > > rebalance, for example because the coordinator is unknown.
> > > >
> > > > Meta:
> > > >
> > > > 3. I'm not sure we have a concrete scenario in KIP-441 / 268 where we
> > > > want to wait until the rebalance is completed, rather than calling
> > > > "consumer.enforceRebalance(); consumer.poll()" consecutively and trying
> > > > to execute the rebalance in the poll call. If there are no valid
> > > > motivations I'm still a bit inclined to make it non-blocking (i.e. just
> > > > setting a bit and then executing the process in the later poll call),
> > > > similar to our `seek` functions. By doing this we can also make this
> > > > function simpler, as it would never throw RebalanceInProgress or
> > > > Timeout or even KafkaExceptions.
> > > >
> > > > 4. Re: the case "when a rebalance is already in progress", this may be
> > > > related to 3) above. I think we can simplify this case as well by just
> > > > not triggering a new rebalance and letting the caller handle it: for
> > > > example in KIP-441, in each iteration of the stream thread, we can
> > > > check if a standby task is ready, and if yes we call `enforceRebalance`.
> > > > If there's already a rebalance in progress (either with the new
> > > > subscription metadata, or not) this call would be a no-op, and then in
> > > > the next iteration we would just call that function again, and eventua

Build failed in Jenkins: kafka-2.4-jdk8 #142

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-9181; Maintain clean separation between local and group

[jason] MINOR: Fix unnecessary metadata fetch before group assignment (#8095)


--
[...truncated 2.76 MB...]
org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex PASSED

org.

Build failed in Jenkins: kafka-trunk-jdk8 #4230

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] allow ReplaceField SMT to handle tombstone records (#7731)

[github] MINOR: Small Connect integration test fixes (#8100)


--
[...truncated 2.87 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0100:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:compileTestJava
> Task :streams:upgrade-system-tests-0100:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:testClasses
> Task :streams:upgrade-system-tests-0100:checkstyleTest
> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :stream

Re: [DISCUSS] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-12 Thread Sophie Blee-Goldman
Ok, I think I see what you're getting at.

No, we obviously would only want to trigger one additional rebalance after
the current one completes since the next rebalance would of course have
the metadata updates we want. But if enforceRebalance returns false a
second time, we don't know whether it was due to a new rebalance or just
the original rebalance taking a while. Even if we could, it's possible a new
update came along at some point and it would get quite messy to keep track
of when we do/don't need to retry.

Except, of course, by checking the resulting assignment to see if it reflects
the latest metadata that we think it should. That check can simply be done in
the assignor#onAssignment method, since, as pointed out in the javadocs,
usage of this API implies a custom assignor implementation.

If we have to check the assignment anyway, I suppose there's no reason to
return anything. My one outstanding concern is that some apps might not
actually be able to detect whether the appropriate/latest metadata was used
based on the resulting assignment. Obviously the assign algorithm is known
to all members through their own assignor, but the cluster-wide metadata is
not. I'm not quite convinced that all possible assignments would be as
straightforward for each member to validate as in, for example, KIP-441
where
we just look for the active task.

Perhaps my imagination is running wild here, but I'm inclined to still return
whether a new rebalance was triggered or not. It can always be ignored.
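
To make the onAssignment-based check mentioned above concrete, a rough sketch
follows. Everything here is a labeled assumption -- the class name, the idea
of encoding a metadata version in the assignment userData, and the flag it
sets; it only shows where the validation hook lives:

    import java.nio.ByteBuffer;
    import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
    import org.apache.kafka.clients.consumer.ConsumerPartitionAssignor;

    // Hypothetical custom assignor: after a rebalance completes, check whether
    // the leader assigned based on the metadata version this member expected.
    public abstract class MetadataCheckingAssignor implements ConsumerPartitionAssignor {

        private volatile long expectedMetadataVersion; // set before calling enforceRebalance()
        private volatile boolean needsAnotherRebalance;

        @Override
        public void onAssignment(Assignment assignment, ConsumerGroupMetadata metadata) {
            ByteBuffer userData = assignment.userData();
            long versionUsed = (userData != null && userData.remaining() >= Long.BYTES)
                ? userData.getLong()
                : -1L;
            // If the assignment reflects stale metadata, the application loop
            // should call enforceRebalance() once more.
            needsAnotherRebalance = versionUsed < expectedMetadataVersion;
        }
    }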

On Wed, Feb 12, 2020 at 5:18 PM Guozhang Wang  wrote:

> Hi Sophie,
>
> So just to clarify, with the updated API we would keep calling
> enforceRebalance until it returns true for cases where we rely on it with
> new subscription metadata?
>
> Guozhang
>
> On Wed, Feb 12, 2020 at 5:14 PM Sophie Blee-Goldman 
> wrote:
>
> > Thanks Boyang -- makes sense to me. I've optimistically updated the KIP
> > with this new signature and behavior.
> >
> >
> > On Wed, Feb 12, 2020 at 4:27 PM Boyang Chen 
> > wrote:
> >
> > > Hey Sophie,
> > >
> > > I'm satisfied with making enforceRebalance() not throw any exception
> > > other than an illegal state exception. You could imagine this KIP is just
> > > making `rejoinNeededOrPending` external to user requests. Making it as
> > > lightweight as possible makes sense.
> > >
> > > Boyang
> > >
> > > On Wed, Feb 12, 2020 at 2:14 PM Sophie Blee-Goldman <
> sop...@confluent.io
> > >
> > > wrote:
> > >
> > > > Hey Guozhang, thanks for the thorough reply!
> > > >
> > > > I definitely went back and forth on whether to make it a blocking
> > > > call, and ultimately went with blocking just to leave it open to
> > > > potential future use cases (in particular non-Streams apps). But on
> > > > second (or third) thought I think I agree that any use case would be
> > > > similarly covered by just calling poll() immediately after
> > > > enforceRebalance(). It seems best to leave all rebalancing action
> > > > within the scope of poll() alone and avoid introducing unnecessary
> > > > complexity -- happy to revert this then.
> > > >
> > > > I think that ends up addressing most of your other concerns, although
> > > > there's one I would push back on: this method should still explicitly
> > > > call out whether a rebalance is already in progress and the call is
> > thus
> > > > a no-op. If throwing a RebalanceInProgressException seems too
> > > > heavy maybe we can just return a boolean indicating whether a new
> > > > rebalance was triggered or not.
> > > >
> > > > The snippet you included does work around this, by checking the
> > > > condition again in the rebalance listener. But I would argue that a)
> > many
> > > > applications don't use a rebalance listener, and shouldn't be forced
> to
> > > > implement it to fully use this new API. More importantly, since you
> can
> > > > probably use the assignor's onAssignment method to achieve the same
> > > > thing, b) it adds unnecessary complexity, and as we've seen in
> Streams
> > > > the interactions between the rebalance callbacks and main consumer
> > > > can already get quite ugly.
> > > >
> > > > For simplicity's sake then, I'll propose to just return the bool over
> > the
> > > > exception and change the signature to
> > > >
> > > > /**
> > > >  * @return Whether a new rebalance was triggered (false if a rebalance
> > > >  *         was already in progress)
> > > >  * @throws java.lang.IllegalStateException if the consumer does not use
> > > >  *         group subscription
> > > >  */
> > > > boolean enforceRebalance();
> > > >
> > > > Thoughts?
> > > >
> > > > On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang 
> > > wrote:
> > > >
> > > > > Hello Sophie, thanks for bringing up this KIP, and the great
> > > > > write-up summarizing the motivations of the proposal. Here are some
> > > > > comments:
> > > > >
> > > > > Minor:
> > > > >
> > > > > 1. If we want to make it a blocking call (I have some thoughts about
> > > > > this below :), to be consistent we need

Jenkins build is back to normal : kafka-2.2-jdk8 #27

2020-02-12 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.3-jdk8 #175

2020-02-12 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #1154

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Small Connect integration test fixes (#8100)


--
[...truncated 2.89 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.st

Build failed in Jenkins: kafka-2.5-jdk8 #21

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9192: fix NPE when for converting optional json schema in structs

[rhauch] allow ReplaceField SMT to handle tombstone records (#7731)

[rhauch] MINOR: Small Connect integration test fixes (#8100)


--
[...truncated 2.88 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled 

[jira] [Resolved] (KAFKA-9538) Flaky Test `testResetOffsetsExportImportPlan`

2020-02-12 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9538.

Resolution: Fixed

> Flaky Test `testResetOffsetsExportImportPlan`
> -
>
> Key: KAFKA-9538
> URL: https://issues.apache.org/jira/browse/KAFKA-9538
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> {code}
> 19:44:41 
> 19:44:41 kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsExportImportPlan FAILED
> 19:44:41 java.lang.AssertionError: expected: 2, bar2-1 -> 
> 2)> but was:
> 19:44:41 at org.junit.Assert.fail(Assert.java:89)
> 19:44:41 at org.junit.Assert.failNotEquals(Assert.java:835)
> 19:44:41 at org.junit.Assert.assertEquals(Assert.java:120)
> 19:44:41 at org.junit.Assert.assertEquals(Assert.java:146)
> 19:44:41 at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan(ResetConsumerGroupOffsetTest.scala:429)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.3-jdk8 #176

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9192: fix NPE when for converting optional json schema in structs

[rhauch] allow ReplaceField SMT to handle tombstone records (#7731)

[rhauch] MINOR: Small Connect integration test fixes (#8100)


--
[...truncated 2.95 MB...]
kafka.log.LogCleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.LogCleanerTest > testSegmentWithOffsetOverflow STARTED

kafka.log.LogCleanerTest > testSegmentWithOffsetOverflow PASSED

kafka.log.LogCleanerTest > testPartialSegmentClean STARTED

kafka.log.LogCleanerTest > testPartialSegmentClean PASSED

kafka.log.LogCleanerTest > testCommitMarkerRemoval STARTED

kafka.log.LogCleanerTest > testCommitMarkerRemoval PASSED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion 
STARTED

kafka.log.LogCleanerTest > testCleanSegmentsWithConcurrentSegmentDeletion PASSED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
STARTED

kafka.log.LogValidatorTest > testRecompressedBatchWithoutRecordsNotAllowed 
PASSED

kafka.log.LogValidatorTest > testCompressedV1 STARTED

kafka.log.LogValidatorTest > testCompressedV1 PASSED

kafka.log.LogValidatorTest > testCompressedV2 STARTED

kafka.log.LogValidatorTest > testCompressedV2 PASSED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
STARTED

kafka.log.LogValidatorTest > testDownConversionOfIdempotentRecordsNotPermitted 
PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV2NonCompressed PASSED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed STARTED

kafka.log.LogValidatorTest > testAbsoluteOffsetAssignmentCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV1 PASSED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeWithRecompressionV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV1 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV0ToV2 PASSED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 STARTED

kafka.log.LogValidatorTest > testCreateTimeUpConversionV1ToV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0Compressed PASSED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion STARTED

kafka.log.LogValidatorTest > testZStdCompressedWithUnavailableIBPVersion PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV1ToV2Compressed PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted STARTED

kafka.log.LogValidatorTest > 
testDownConversionOfTransactionalRecordsNotPermitted PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterUpConversionV0ToV1Compressed PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentNonCompressedV2 PASSED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients STARTED

kafka.log.LogValidatorTest > testControlRecordsNotAllowedFromClients PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV1 PASSED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 STARTED

kafka.log.LogValidatorTest > testRelativeOffsetAssignmentCompressedV2 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV1NonCompressed PASSED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 STARTED

kafka.log.LogValidatorTest > testLogAppendTimeNonCompressedV1 PASSED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConversionV2ToV0NonCompressed STARTED

kafka.log.LogValidatorTest > 
testOffsetAssignmentAfterDownConv

Build failed in Jenkins: kafka-2.4-jdk8 #143

2020-02-12 Thread Apache Jenkins Server
See 


Changes:

[rhauch] KAFKA-9192: fix NPE when for converting optional json schema in structs

[rhauch] allow ReplaceField SMT to handle tombstone records (#7731)

[rhauch] MINOR: Small Connect integration test fixes (#8100)

[github] MINOR: Fix poor backport that prevented compiling the MirrorMaker


--
[...truncated 5.56 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.