Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #376

2021-07-29 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2021-07-29 Thread John Roesler (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Roesler resolved KAFKA-4064.
-
Fix Version/s: 3.1.0
   Resolution: Fixed

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Roger Hoover
>Assignee: Patrick Stuedi
>Priority: Minor
>  Labels: needs-kip
> Fix For: 3.1.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can easily add two new methods, rangeUntil() and rangeFrom(), to support this.
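
A minimal Java sketch of the intended usage, assuming a ReadOnlyKeyValueStore obtained via an interactive query; whether the final API exposes new methods such as rangeFrom()/rangeUntil() or instead accepts open (null) bounds on the existing range() call is an assumption here, not something this thread confirms:

    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    final class OpenEndedRangeExample {
        // "From a certain key to the end": leave the upper bound open.
        static <V> void printFrom(ReadOnlyKeyValueStore<String, V> store, String startKey) {
            // Assumes a null upper bound is treated as "unbounded".
            try (KeyValueIterator<String, V> iter = store.range(startKey, null)) {
                while (iter.hasNext()) {
                    System.out.println(iter.next());
                }
            }
        }

        // "From the beginning up to a certain key": leave the lower bound open.
        static <V> void printUntil(ReadOnlyKeyValueStore<String, V> store, String endKey) {
            try (KeyValueIterator<String, V> iter = store.range(null, endKey)) {
                while (iter.hasNext()) {
                    System.out.println(iter.next());
                }
            }
        }
    }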



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #130

2021-07-29 Thread Apache Jenkins Server
See 


Changes:

[Jason Gustafson] KAFKA-13099; Transactional expiration should account for max batch size (#11098)


--
[...truncated 6.36 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #375

2021-07-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #50

2021-07-29 Thread Apache Jenkins Server
See 


Changes:

[Jason Gustafson] KAFKA-13099; Transactional expiration should account for max batch size (#11098)


--
[...truncated 3.11 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.0 #72

2021-07-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #374

2021-07-29 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-07-29 Thread Konstantine Karantasis
Thanks for reporting this issue Ryan.

I believe what you mention corresponds to the ticket you created here:
https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-13151

What happens if the configurations are present but the broker doesn't fail
at startup when configured to run in KRaft mode?
Asking to see whether we have any workarounds available to us.

Thanks,
Konstantine

On Thu, Jul 29, 2021 at 2:51 PM Ryan Dielhenn
 wrote:

> Hi,
>
> Disregard log.clean.policy being included in this blocker.
>
> Best,
> Ryan Dielhenn
>
> On Thu, Jul 29, 2021 at 2:38 PM Ryan Dielhenn 
> wrote:
>
> > Hey Konstantine,
> >
> > I'd like to report another bug in KRaft.
> >
> > log.cleanup.policy, alter.config.policy.class.name, and
> > create.topic.policy.class.name are all unsupported by KRaft but KRaft
> > servers allow them to be configured. I believe this should be considered
> a
> > blocker and that KRaft servers should fail startup if any of these are
> > configured. I do not have a PR yet but will soon.
> >
> > On another note, I have a PR for the dynamic broker configuration fix
> > here: https://github.com/apache/kafka/pull/11141
> >
> > Best,
> > Ryan Dielhenn
> >
> > On Wed, May 26, 2021 at 2:48 PM Konstantine Karantasis
> >  wrote:
> >
> >> Hi all,
> >>
> >> Please find below the updated release plan for the Apache Kafka 3.0.0
> >> release.
> >>
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177046466
> >>
> >> New suggested dates for the release are as follows:
> >>
> >> KIP Freeze is 09 June 2021 (same date as in the initial plan)
> >> Feature Freeze is 30 June 2021 (new date, extended by two weeks)
> >> Code Freeze is 14 July 2021 (new date, extended by two weeks)
> >>
> >> At least two weeks of stabilization will follow Code Freeze.
> >>
> >> The release plan is up to date and currently includes all the approved
> >> KIPs
> >> that are targeting 3.0.0.
> >>
> >> Please let me know if you have any objections with the recent extension
> of
> >> Feature Freeze and Code Freeze or any other concerns.
> >>
> >> Regards,
> >> Konstantine
> >>
> >
>


[jira] [Resolved] (KAFKA-13132) Upgrading to topic IDs in LISR requests has gaps introduced in 3.0

2021-07-29 Thread Jason Gustafson (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gustafson resolved KAFKA-13132.
-
Resolution: Fixed

> Upgrading to topic IDs in LISR requests has gaps introduced in 3.0
> --
>
> Key: KAFKA-13132
> URL: https://issues.apache.org/jira/browse/KAFKA-13132
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Blocker
> Fix For: 3.0.0
>
>
> With the change in 3.0 to how topic IDs are assigned to logs, a bug was 
> inadvertently introduced. Now, topic IDs will only be assigned on the load of 
> the log to a partition in LISR requests. This means we will only assign topic 
> IDs for newly created topics/partitions, on broker startup, or potentially 
> when a partition is reassigned.
>  
> In the case of upgrading from an IBP before 2.8, we may have a scenario where 
> we upgrade the controller to IBP 3.0 (or even 2.8) last. (Ie, the controller 
> is IBP < 2.8 and all other brokers are on the newest IBP) Upon the last 
> broker upgrading, we will elect a new controller but its LISR request will 
> not result in topic IDs being assigned to logs of existing topics. They will 
> only be assigned in the cases mentioned above.
> *Keep in mind, in this scenario, topic IDs will still be assigned in the 
> controller/ZK to all new and pre-existing topics and will show up in 
> metadata.* This means we are not ensured the same guarantees we had in 2.8. 
> *It is just the LISR/partition.metadata part of the code that is affected.*
>  
> The problem is two-fold:
>  1. We ignore LISR requests when the partition leader epoch has not increased 
> (previously we assigned the ID before this check).
>  2. We only assign the topic ID when we are associating the log with the 
> partition in ReplicaManager for the first time. However, in the scenario 
> described above, we have logs already associated with partitions that need to 
> be upgraded.
>  
> We should check whether the LISR request results in a topic ID addition and 
> add logic to assign the ID to logs already associated with partitions in 
> ReplicaManager.
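
A small, self-contained Java illustration of the missing step described above; the class and field names are placeholders and do not mirror the actual broker code (which is written in Scala). It only shows the idea of assigning an incoming topic ID to a log that is already associated with its partition:

    import java.util.Optional;
    import java.util.UUID;

    final class TopicIdAssignmentSketch {
        static final class Log {
            Optional<UUID> topicId = Optional.empty();
        }

        // Invoked while handling a LeaderAndIsr request for a log that is already
        // associated with its partition, even if the leader epoch has not increased.
        static void maybeAssignTopicId(Log existingLog, Optional<UUID> topicIdFromRequest) {
            if (existingLog.topicId.isEmpty() && topicIdFromRequest.isPresent()) {
                existingLog.topicId = topicIdFromRequest;
            }
        }
    }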



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-07-29 Thread Ryan Dielhenn
Hi,

Disregard log.clean.policy being included in this blocker.

Best,
Ryan Dielhenn

On Thu, Jul 29, 2021 at 2:38 PM Ryan Dielhenn 
wrote:

> Hey Konstantine,
>
> I'd like to report another bug in KRaft.
>
> log.cleanup.policy, alter.config.policy.class.name, and
> create.topic.policy.class.name are all unsupported by KRaft but KRaft
> servers allow them to be configured. I believe this should be considered a
> blocker and that KRaft servers should fail startup if any of these are
> configured. I do not have a PR yet but will soon.
>
> On another note, I have a PR for the dynamic broker configuration fix
> here: https://github.com/apache/kafka/pull/11141
>
> Best,
> Ryan Dielhenn
>
> On Wed, May 26, 2021 at 2:48 PM Konstantine Karantasis
>  wrote:
>
>> Hi all,
>>
>> Please find below the updated release plan for the Apache Kafka 3.0.0
>> release.
>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177046466
>>
>> New suggested dates for the release are as follows:
>>
>> KIP Freeze is 09 June 2021 (same date as in the initial plan)
>> Feature Freeze is 30 June 2021 (new date, extended by two weeks)
>> Code Freeze is 14 July 2021 (new date, extended by two weeks)
>>
>> At least two weeks of stabilization will follow Code Freeze.
>>
>> The release plan is up to date and currently includes all the approved
>> KIPs
>> that are targeting 3.0.0.
>>
>> Please let me know if you have any objections with the recent extension of
>> Feature Freeze and Code Freeze or any other concerns.
>>
>> Regards,
>> Konstantine
>>
>


Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.0 #71

2021-07-29 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-07-29 Thread Ryan Dielhenn
Hey Konstantine,

I'd like to report another bug in KRaft.

log.cleanup.policy, alter.config.policy.class.name, and
create.topic.policy.class.name are all unsupported by KRaft but KRaft
servers allow them to be configured. I believe this should be considered a
blocker and that KRaft servers should fail startup if any of these are
configured. I do not have a PR yet but will soon.

On another note, I have a PR for the dynamic broker configuration fix here:
https://github.com/apache/kafka/pull/11141

Best,
Ryan Dielhenn

On Wed, May 26, 2021 at 2:48 PM Konstantine Karantasis
 wrote:

> Hi all,
>
> Please find below the updated release plan for the Apache Kafka 3.0.0
> release.
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177046466
>
> New suggested dates for the release are as follows:
>
> KIP Freeze is 09 June 2021 (same date as in the initial plan)
> Feature Freeze is 30 June 2021 (new date, extended by two weeks)
> Code Freeze is 14 July 2021 (new date, extended by two weeks)
>
> At least two weeks of stabilization will follow Code Freeze.
>
> The release plan is up to date and currently includes all the approved KIPs
> that are targeting 3.0.0.
>
> Please let me know if you have any objections with the recent extension of
> Feature Freeze and Code Freeze or any other concerns.
>
> Regards,
> Konstantine
>


[jira] [Created] (KAFKA-13151) KRaft does not support Policies (e.g. AlterConfigPolicy). The server should fail startup if any are configured.

2021-07-29 Thread Ryan Dielhenn (Jira)
Ryan Dielhenn created KAFKA-13151:
-

 Summary: KRaft does not support Policies (e.g. AlterConfigPolicy). 
The server should fail startup if any are configured.
 Key: KAFKA-13151
 URL: https://issues.apache.org/jira/browse/KAFKA-13151
 Project: Kafka
  Issue Type: Task
Affects Versions: 3.0.0
Reporter: Ryan Dielhenn
Assignee: Ryan Dielhenn


log.cleanup.policy, alter.config.policy.class.name, and 
create.topic.policy.class.name are all unsupported by KRaft. KRaft servers 
should fail startup if any of these are configured.
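
A minimal sketch of the kind of startup validation the ticket asks for, assuming the server already knows whether it is running in KRaft mode; the class and method names are placeholders, not the actual broker validation code:

    import java.util.List;
    import java.util.Map;

    final class KRaftPolicyConfigCheck {
        private static final List<String> UNSUPPORTED_IN_KRAFT = List.of(
                "alter.config.policy.class.name",
                "create.topic.policy.class.name");

        // Throw during startup if an unsupported policy config is set in KRaft mode.
        static void validate(Map<String, ?> configs, boolean kraftMode) {
            if (!kraftMode) {
                return;
            }
            for (String key : UNSUPPORTED_IN_KRAFT) {
                if (configs.containsKey(key)) {
                    throw new IllegalArgumentException(
                            key + " is not supported when running in KRaft mode");
                }
            }
        }
    }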



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #373

2021-07-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #70

2021-07-29 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 420594 lines...]
[2021-07-29T19:32:44.276Z] > Task :raft:testClasses UP-TO-DATE
[2021-07-29T19:32:44.276Z] > Task :connect:json:testJar
[2021-07-29T19:32:45.197Z] > Task :connect:json:testSrcJar
[2021-07-29T19:32:45.197Z] > Task :metadata:compileTestJava UP-TO-DATE
[2021-07-29T19:32:45.197Z] > Task :metadata:testClasses UP-TO-DATE
[2021-07-29T19:32:45.197Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2021-07-29T19:32:45.197Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2021-07-29T19:32:45.197Z] 
[2021-07-29T19:32:45.197Z] > Task :streams:processMessages
[2021-07-29T19:32:45.197Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2021-07-29T19:32:45.197Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.1.1/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-07-29T19:32:45.197Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
[2021-07-29T19:32:45.197Z] 
[2021-07-29T19:32:45.197Z] > Task :streams:compileJava UP-TO-DATE
[2021-07-29T19:32:45.197Z] > Task :streams:classes UP-TO-DATE
[2021-07-29T19:32:45.197Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-07-29T19:32:46.202Z] > Task :streams:copyDependantLibs
[2021-07-29T19:32:46.202Z] > Task :streams:jar UP-TO-DATE
[2021-07-29T19:32:46.202Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-07-29T19:32:49.228Z] > Task :connect:api:javadoc
[2021-07-29T19:32:49.228Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-07-29T19:32:49.228Z] > Task :connect:api:jar UP-TO-DATE
[2021-07-29T19:32:49.228Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-07-29T19:32:49.228Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-07-29T19:32:49.228Z] > Task :connect:json:jar UP-TO-DATE
[2021-07-29T19:32:49.228Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-07-29T19:32:49.228Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-07-29T19:32:49.228Z] > Task :connect:json:publishToMavenLocal
[2021-07-29T19:32:50.237Z] > Task :connect:api:javadocJar
[2021-07-29T19:32:50.237Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-07-29T19:32:50.237Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-07-29T19:32:50.237Z] > Task :connect:api:testJar
[2021-07-29T19:32:50.237Z] > Task :connect:api:testSrcJar
[2021-07-29T19:32:51.411Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-07-29T19:32:51.411Z] > Task :connect:api:publishToMavenLocal
[2021-07-29T19:32:52.535Z] > Task :streams:javadoc
[2021-07-29T19:32:52.535Z] > Task :streams:javadocJar
[2021-07-29T19:32:53.623Z] > Task :clients:javadoc
[2021-07-29T19:32:53.623Z] > Task :clients:javadocJar
[2021-07-29T19:32:54.761Z] 
[2021-07-29T19:32:54.761Z] > Task :clients:srcJar
[2021-07-29T19:32:54.761Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-07-29T19:32:54.761Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/jenkins-agent/workspace/Kafka_kafka_3.0/clients/src/generated/java'.
 Reason: Task ':clients:srcJar' uses this output of task 
':clients:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.1.1/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-07-29T19:32:54.761Z] 
[2021-07-29T19:32:54.761Z] > Task :clients:testJar
[2021-07-29T19:32:55.958Z] > Task :clients:testSrcJar
[2021-07-29T19:32:55.958Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2021-07-29T19:32:55.958Z] > Task :clients:publishToMavenLocal
[2021-07-29T19:33:14.955Z] > Task :core:compileScala
[2021-07-29T19:34:24.330Z] > Task :core:classes
[2021-07-29T19:34:24.330Z] > Task :core:compileTestJava NO-SOURCE
[2021-07-29T19:34:53.007Z] > Task :core:compileTestScala
[2021-07-29T19:35:37.017Z] > Task :core:testClasses
[2021-07-29T19:35:53.127Z] > Task :streams:compileTestJava
[2021-07-29T19:35:53.127Z] > Task :streams:testClasses
[2021-07-29T19:35:53.127Z] > Task :streams:testJar
[2021-07-29T19:35:53.127Z] > Task :streams:testSrcJar
[2021-07-29T19:35:53.127Z] > Task 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #372

2021-07-29 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-13150) How is Kafkastream configured to consume data from a specified offset ?

2021-07-29 Thread A. Sophie Blee-Goldman (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-13150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

A. Sophie Blee-Goldman resolved KAFKA-13150.

Resolution: Invalid

> How is Kafkastream configured to consume data from a specified offset ?
> ---
>
> Key: KAFKA-13150
> URL: https://issues.apache.org/jira/browse/KAFKA-13150
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: wangjh
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9858) CVE-2016-3189 Use-after-free vulnerability in bzip2recover in bzip2 1.0.6 allows remote attackers to cause a denial of service (crash) via a crafted bzip2 file, related

2021-07-29 Thread Guozhang Wang (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guozhang Wang resolved KAFKA-9858.
--
Fix Version/s: 3.0.0
   Resolution: Fixed

> CVE-2016-3189  Use-after-free vulnerability in bzip2recover in bzip2 1.0.6 
> allows remote attackers to cause a denial of service (crash) via a crafted 
> bzip2 file, related to block ends set to before the start of the block.
> -
>
> Key: KAFKA-9858
> URL: https://issues.apache.org/jira/browse/KAFKA-9858
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.2, 2.3.1, 2.4.1
>Reporter: sihuanx
>Priority: Major
> Fix For: 3.0.0
>
>
> I'm not sure whether CVE-2016-3189 affects Kafka 2.4.1 or not. This 
> vulnerability is related to rocksdbjni-5.18.3.jar, which is compiled with 
> *bzip2*.
> Is there any task or plan to fix it?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Mirror Maker 2 - High Throughput Identity Mirroring

2021-07-29 Thread Ryanne Dolan
Jamie, this would depend on KIP-712 (or similar) aka "shallow mirroring".
This is a work in progress, but I'm optimistic it'll happen at some point.

ftr, "IdentityReplicationPolicy" has landed for the upcoming release, tho
"identity" in that context just means that topics aren't renamed.

Ryanne

On Thu, Jul 29, 2021, 11:37 AM Jamie  wrote:

> Hi All,
> This blog post:
> https://blog.cloudera.com/a-look-inside-kafka-mirrormaker-2/ mentions
> that "High Throughput Identity Mirroring" (when the compression is the same
> in both the source and destination cluster) will soon be coming to MM2
> which would avoid the MM2 consumer decompressing the data only for the MM2
> producer to then re-compress it again.
> Has this feature been implemented yet in MM2?
> Many Thanks,
> Jamie


Mirror Maker 2 - High Throughput Identity Mirroring

2021-07-29 Thread Jamie
Hi All, 
This blog post: https://blog.cloudera.com/a-look-inside-kafka-mirrormaker-2/ 
mentions that "High Throughput Identity Mirroring" (when the compression is the 
same in both the source and destination cluster) will soon be coming to MM2 
which would avoid the MM2 consumer decompressing the data only for the MM2 
producer to then re-compress it again.
Has this feature been implemented yet in MM2?
Many Thanks,
Jamie 

[jira] [Resolved] (KAFKA-13143) Disable Metadata endpoint for KRaft controller

2021-07-29 Thread Jason Gustafson (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-13143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Gustafson resolved KAFKA-13143.
-
Resolution: Fixed

> Disable Metadata endpoint for KRaft controller
> --
>
> Key: KAFKA-13143
> URL: https://issues.apache.org/jira/browse/KAFKA-13143
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Niket Goel
>Priority: Blocker
> Fix For: 3.0.0
>
>
> The controller currently implements Metadata incompletely. Specifically, it 
> does not return the metadata for any topics in the cluster. This may tend to 
> cause confusion to users. For example, if someone used the controller 
> endpoint by mistake in `kafka-topics.sh --list`, then they would see no 
> topics in the cluster, which would be surprising. It would be better for 3.0 
> to disable Metadata on the controller since we currently expect clients to 
> connect through brokers anyway.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


UI bug : Overlapping screen

2021-07-29 Thread soni goyal
Hi team,

I have been going through the Kafka documentation for my project requirements,
and when I navigated to this screen, the scrollbar of the left-hand section
overlapped the right content area, which I think shouldn't happen.
The screen size is 967px x 577px.

Can you please check if it's just me who's facing this issue?

[image: image.png]





Thanks
Soni


[jira] [Created] (KAFKA-13150) How is Kafkastream configured to consume data from a specified offset ?

2021-07-29 Thread wangjh (Jira)
wangjh created KAFKA-13150:
--

 Summary: How is Kafkastream configured to consume data from a 
specified offset ?
 Key: KAFKA-13150
 URL: https://issues.apache.org/jira/browse/KAFKA-13150
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Reporter: wangjh






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.0 #69

2021-07-29 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #60

2021-07-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #171

2021-07-29 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] KAFKA-13041: Enable connecting VS Code remote debugger (#10915)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@65749b7a,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@65749b7a,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@58292187,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@58292187,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@11903a14,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@11903a14,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1f6734cc,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@1f6734cc,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5e89c311,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5e89c311,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4fbde45e,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@4fbde45e,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5f6d2727, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5f6d2727, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@790ab952, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@790ab952, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@690b7c29, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@690b7c29, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@49b4fa42, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 

Build failed in Jenkins: Kafka » kafka-2.4-jdk8 #30

2021-07-29 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] MINOR: Fix `testResolveDnsLookup` by using a mocked dns resolver (#11091)


--
[...truncated 2.90 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldTerminateWhenUsingTaskIdling[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldTerminateWhenUsingTaskIdling[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #170

2021-07-29 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #129

2021-07-29 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #49

2021-07-29 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] MINOR: Fix `testResolveDnsLookup` by using a mocked dns resolver (#11091)


--
[...truncated 3.11 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED