Build failed in Jenkins: kafka-trunk-jdk8 #3230

2018-11-29 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7660: Fix child sensor memory leak (#5974)

--
[...truncated 2.64 MB...]

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered STARTED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered PASSED

kafka.utils.LoggingTest > testLogName STARTED

kafka.utils.LoggingTest > testLogName PASSED

kafka.utils.LoggingTest > testLogNameOverride STARTED

kafka.utils.LoggingTest > testLogNameOverride PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.CoreUtilsTest > testAbs STARTED

kafka.utils.CoreUtilsTest > testAbs PASSED

kafka.utils.CoreUtilsTest > testReplaceSuffix STARTED

kafka.utils.CoreUtilsTest > testReplaceSuffix PASSED

kafka.utils.CoreUtilsTest > testCircularIterator STARTED

kafka.utils.CoreUtilsTest > testCircularIterator PASSED

kafka.utils.CoreUtilsTest > testReadBytes STARTED

kafka.utils.CoreUtilsTest > testReadBytes PASSED

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > 

[jira] [Resolved] (KAFKA-7551) Refactor to create both producer & consumer in Worker

2018-11-29 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7551.
--
Resolution: Fixed

Merged [https://github.com/apache/kafka/pull/5842], my bad for screwing up 
closing the Jira along with the fix...

> Refactor to create both producer & consumer in Worker
> -
>
> Key: KAFKA-7551
> URL: https://issues.apache.org/jira/browse/KAFKA-7551
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Minor
> Fix For: 2.2.0
>
>
> In distributed mode,  the producer is created in the Worker and the consumer 
> is created in the WorkerSinkTask. The proposal is to refactor it so that both 
> of them are created in Worker. This will not affect any functionality and is 
> just a refactoring to make the code consistent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7690) Change disk allocation policy for multiple partitions on a broker when topic is created

2018-11-29 Thread haiyangyu (JIRA)
haiyangyu created KAFKA-7690:


 Summary: Change disk allocation policy for multiple partitions on 
a broker when topic is created
 Key: KAFKA-7690
 URL: https://issues.apache.org/jira/browse/KAFKA-7690
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.0.0, 1.0.0, 0.10.2.0
Reporter: haiyangyu


h3. *Background*

If the target topic has more partitions than there are brokers when the topic is 
created or partitions are added, a broker will be assigned more than one 
partition. If the disks are currently unbalanced, for example one disk holds one 
partition while another holds four due to topic deletion or other reasons, the 
multiple partitions will all be allocated to the first disk, and if the target 
topic carries heavy traffic it can easily saturate that disk's I/O.
h3. *Improvement strategy*

When multiple partitions are going to be allocated on a broker, the proposed 
strategy is as follows:

1. Calculate the target topic's partition count and the total partition count on 
each disk.

2. Sort the disks by the target topic's partition count in ascending order; if the 
target topic's partition count is equal, sort by the total partition count on 
each disk.
h3. *Example*
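
A minimal sketch of the proposed ordering (illustrative only; the DiskStats class 
and its fields are hypothetical, not existing Kafka code):

{code:java}
import java.util.Comparator;
import java.util.List;

// Hypothetical per-log-dir statistics; not an actual Kafka class.
class DiskStats {
    final String logDir;
    final int targetTopicPartitions; // partitions of the topic being created/expanded
    final int totalPartitions;       // all partitions currently on this disk

    DiskStats(String logDir, int targetTopicPartitions, int totalPartitions) {
        this.logDir = logDir;
        this.targetTopicPartitions = targetTopicPartitions;
        this.totalPartitions = totalPartitions;
    }
}

class DiskAllocationSketch {
    // Prefer the disk with the fewest partitions of the target topic,
    // breaking ties by the fewest total partitions.
    static void sortForAllocation(List<DiskStats> disks) {
        disks.sort(Comparator
                .comparingInt((DiskStats d) -> d.targetTopicPartitions)
                .thenComparingInt(d -> d.totalPartitions));
    }
}
{code}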



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: kafka Build fail - jdk 8 - #18041

2018-11-29 Thread Manikumar
Hi Srinivas,

These are mostly transient test failures. We can rerun the tests to confirm
the failures.
You can re-trigger the Jenkins tests by commenting "retest this please" in
the PR comments.


On Fri, Nov 30, 2018 at 12:04 PM Srinivas, Kaushik (Nokia - IN/Bangalore) <
kaushik.srini...@nokia.com> wrote:

> Hi All,
>
> Facing below build failure.
>
>
> 20:19:43
>
> 20:19:43 > Task :streams:test-utils:integrationTest
>
> 20:19:43 > Task :streams:upgrade-system-tests-0100:integrationTest
>
> 20:19:43 > Task :streams:upgrade-system-tests-0101:integrationTest
>
> 20:19:43 > Task :streams:upgrade-system-tests-0102:integrationTest
>
> 20:19:43 > Task :streams:upgrade-system-tests-0110:integrationTest
>
> 20:19:43 > Task :streams:upgrade-system-tests-10:integrationTest
>
> 20:19:43 > Task :streams:upgrade-system-tests-11:integrationTest
>
> 20:19:43 > Task :streams:upgrade-system-tests-20:integrationTest
>
> 20:19:43
>
> 20:19:43 FAILURE: Build failed with an exception.
>
> 20:19:43
>
> 20:19:43 * What went wrong:
>
> 20:19:43 Execution failed for task ':core:integrationTest'.
>
> 20:19:43 > There were failing tests. See the report at:
> file:///home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.11@2/core/build/reports/tests/integrationTest/index.html
>
> Build failure report :
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/18041/testReport/
>
> Need support.
>
> -kaushik
>


kafka Build fail - jdk 8 - #18041

2018-11-29 Thread Srinivas, Kaushik (Nokia - IN/Bangalore)
Hi All,

Facing below build failure.


20:19:43

20:19:43 > Task :streams:test-utils:integrationTest

20:19:43 > Task :streams:upgrade-system-tests-0100:integrationTest

20:19:43 > Task :streams:upgrade-system-tests-0101:integrationTest

20:19:43 > Task :streams:upgrade-system-tests-0102:integrationTest

20:19:43 > Task :streams:upgrade-system-tests-0110:integrationTest

20:19:43 > Task :streams:upgrade-system-tests-10:integrationTest

20:19:43 > Task :streams:upgrade-system-tests-11:integrationTest

20:19:43 > Task :streams:upgrade-system-tests-20:integrationTest

20:19:43

20:19:43 FAILURE: Build failed with an exception.

20:19:43

20:19:43 * What went wrong:

20:19:43 Execution failed for task ':core:integrationTest'.

20:19:43 > There were failing tests. See the report at: 
file:///home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.11@2/core/build/reports/tests/integrationTest/index.html

Build failure report : 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/18041/testReport/

Need support.

-kaushik


[jira] [Resolved] (KAFKA-7259) Remove deprecated ZKUtils usage from ZkSecurityMigrator

2018-11-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7259.
--
Resolution: Fixed

Issue resolved by pull request 5480
[https://github.com/apache/kafka/pull/5480]

> Remove deprecated ZKUtils usage from ZkSecurityMigrator
> ---
>
> Key: KAFKA-7259
> URL: https://issues.apache.org/jira/browse/KAFKA-7259
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
> Fix For: 2.2.0
>
>
> ZkSecurityMigrator code currently uses ZKUtils.  We can replace ZKUtils usage 
> with KafkaZkClient. Also remove usage of ZKUtils from various tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


kafka jenkins build fail - JDK 8 and Scala 2.11

2018-11-29 Thread Srinivas, Kaushik (Nokia - IN/Bangalore)
Hi All,

I have created a pull request with changes to the Kafka performance producer 
Java code.

But the build has failed with the below errors for JDK 8 and Scala 2.11:

kafka.api.SaslSslAdminClientIntegrationTest > testMinimumRequestTimeouts FAILED
19:20:31 java.lang.AssertionError: Expected an exception of type 
org.apache.kafka.common.errors.TimeoutException; got type 
org.apache.kafka.common.errors.SslAuthenticationException
19:20:31
19:20:31 kafka.api.SaslSslAdminClientIntegrationTest > testForceClose STARTED
19:21:10
19:21:10 kafka.api.SaslSslAdminClientIntegrationTest > testForceClose FAILED
19:21:10 java.lang.AssertionError: Expected an exception of type 
org.apache.kafka.common.errors.TimeoutException; got type 
org.apache.kafka.common.errors.SslAuthenticationException

Task :core:integrationTest FAILED

JDK 11 and Scala 2.12 Build is successful.

Any inputs or known issues on this would be really helpful.

-kaushik



Build failed in Jenkins: kafka-trunk-jdk8 #3229

2018-11-29 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Bump Gradle version to 5.0 (#5964)

--
[...truncated 2.65 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED


Re: [DISCUSS] KIP-394: Require member.id for initial join group request

2018-11-29 Thread Matthias J. Sax
Thanks! Makes sense.

I missed the fact that the member is only added to the group on the second
joinGroup request, i.e. the one that contains the `member.id`.

However, it seems there is another race condition for this design:

If two consumers join at the same time, it is possible that the broker
assigns the same `member.id` to both (because neither of them has joined
the group yet--ie, the second joinGroup request has not been sent yet--the
`member.id` is not stored broker side yet, and the broker cannot check for
duplicates when creating a new `member.id`).

The probability might be fairly low though. However, what Stanislav
proposed, to add the `member.id` directly and remove it after
`session.timeout.ms`, sounds like a safe option that avoids this issue.
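
For reference, a minimal self-contained sketch of the two-step handshake being
discussed (plain Java; the Coordinator interface and the exception are
hypothetical stand-ins, only the MEMBER_ID_REQUIRED idea comes from the KIP
draft):

interface Coordinator {
    // Stand-in for the broker-side group coordinator (hypothetical).
    String joinGroup(String memberId) throws MemberIdRequiredException;
}

class MemberIdRequiredException extends Exception {
    final String assignedMemberId;
    MemberIdRequiredException(String assignedMemberId) {
        this.assignedMemberId = assignedMemberId;
    }
}

class JoinHandshakeSketch {
    static final String UNKNOWN_MEMBER_ID = "";

    // The first joinGroup with UNKNOWN_MEMBER_ID is rejected with
    // MEMBER_ID_REQUIRED plus a freshly allocated member.id; the client
    // rejoins with that id, and only then is it added to the group metadata.
    static String join(Coordinator coordinator) {
        String memberId = UNKNOWN_MEMBER_ID;
        while (true) {
            try {
                return coordinator.joinGroup(memberId);
            } catch (MemberIdRequiredException e) {
                memberId = e.assignedMemberId;  // retry with the assigned id
            }
        }
    }
}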

Thoughts?


-Matthias

On 11/28/18 8:15 PM, Boyang Chen wrote:
> Thanks Matthias for the question, and Stanislav for the explanation!
> 
> For the scenario described, we will never let a member join the GroupMetadata 
> map
> if it uses UNKNOWN_MEMBER_ID. So the workflow will be like this:
> 
>   1.  Group is empty. Consumer c1 started. Join with UNKNOWN_MEMBER_ID;
>   2.  Broker rejects while allocating a member.id to c1 in response (c1 
> protocol version is current);
>   3.  c1 handles the error and rejoins with assigned member.id;
>   4.  Broker stores c1 in its group metadata;
>   5.  Consumer c2 started. Join with UNKNOWN_MEMBER_ID;
>   6.  Broker rejects while allocating a member.id to c2 in response (c2 
> protocol version is current);
>   7.  c2 fails to get the response/crashes in the middle;
>   8.  After certain time, c2 restarts a join request with UNKNOWN_MEMBER_ID;
> 
> As you can see, c2 will repeat steps 6~8 until it successfully sends back a join 
> group request with the allocated id.
> By then broker will include c2 within the broker metadata map.
> 
> Does this sound clear to you?
> 
> Best,
> Boyang
> 
> From: Stanislav Kozlovski 
> Sent: Wednesday, November 28, 2018 7:39 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-394: Require member.id for initial join group 
> request
> 
> Hey Matthias,
> 
> I think the notion is to have the `session.timeout.ms` start ticking
> when the broker responds with the member.id. Then, the broker would
> properly expire consumers and not hold too many stale ones.
> This isn't mentioned in the KIP though so it is worth waiting for Boyang to
> confirm
> 
> On Wed, Nov 28, 2018 at 3:10 AM Matthias J. Sax 
> wrote:
> 
>> Thanks for the KIP Boyang.
>>
>> I guess I am missing something, but I am still learning more details
>> about the rebalance protocol, so maybe you can help me out?
>>
>> Assume a client sends UNKNOWN_MEMBER_ID in its first joinGroup request.
>> The broker generates a `member.id` and sends it back via
>> `MEMBER_ID_REQUIRED` error response. This response might never reach the
>> client or the client fails before it can send the second joinGroup
>> request. Thus, a client would need to start over with a new
>> UNKNOWN_MEMBER_ID in its joinGroup request. Thus, the broker needs to
>> generate a new `member.id` again.
>>
>> So it seems the problem is moved, but not resolved? The motivation of
>> the KIP is:
>>
>>> The edge case is that if initial join group request keeps failing due to
>> connection timeout, or the consumer keeps restarting,
>>
>> From my understanding, this KIP moves the issue from the first to the
>> second joinGroup request (or broker joinGroup response).
>>
>> But maybe I am missing something. Can you help me out?
>>
>>
>> -Matthias
>>
>>
>> On 11/27/18 6:00 PM, Boyang Chen wrote:
>>> Thanks Stanislav and Jason for the suggestions!
>>>
>>>
 Thanks for the KIP. Looks good overall. I think we will need to bump the
 version of the JoinGroup protocol in order to indicate compatibility
>> with
 the new behavior. The coordinator needs to know when it is safe to
>> assume
 the client will handle the error code.

 Also, I was wondering if we could reuse the REBALANCE_IN_PROGRESS error
 code. When the client sees this error code, it will take the memberId
>> from
 the response and rejoin. We'd still need the protocol bump since older
 consumers do not have this logic.
>>>
>>> I will add the join group protocol version change to the KIP. Meanwhile
>> I feel for
>>> understandability it's better to define a separate error code since
>> REBALANCE_IN_PROGRESS
>>> is not the actual cause of the returned error.
>>>
 One small question I have is now that we have one and a half round-trips
 needed to join in a rebalance (1 full RT addition), is it worth it to
 consider increasing the default value of `
>> group.initial.rebalance.delay.ms`?
>>> I guess we could keep it for now. After KIP-345 and incremental
>> cooperative rebalancing
>>> work we should be safe to deprecate `group.initial.rebalance.delay.ms`.
>> Also one round trip
>>> shouldn't increase the latency too much IMO.
>>>
>>> Best,
>>> Boyang
>>> 

Build failed in Jenkins: kafka-trunk-jdk8 #3228

2018-11-29 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7660: fix parentSensors memory leak (#5953)

--
[...truncated 2.63 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-11-29 Thread Ryanne Dolan
Sönke, thanks for the feedback!

>  the renaming policy [...] can be disabled [...] The KIP itself does not
mention this

Good catch. I've updated the KIP to call this out.

> "MirrorMaker clusters" I am not sure I fully understand the issue you are
trying to solve

MirrorMaker today is not scalable from an operational perspective. Celia
Kung at LinkedIn does a great job of explaining this problem [1], which has
caused LinkedIn to drop MirrorMaker in favor of Brooklin. With Brooklin, a
single cluster, single API, and single UI controls replication flows for an
entire data center. With MirrorMaker 2.0, the vision is much the same.

If your data center consists of a small number of Kafka clusters and an
existing Connect cluster, it might make more sense to re-use the Connect
cluster with MirrorSource/SinkConnectors. There's nothing wrong with this
approach for small deployments, but this model also doesn't scale. This is
because Connect clusters are built around a single Kafka cluster -- what I
call the "primary" cluster -- and all Connectors in the cluster must either
consume from or produce to this single cluster. If you have more than one
"active" Kafka cluster in each data center, you'll end up needing multiple
Connect clusters there as well.

The problem with Connect clusters for replication is way less severe
compared to legacy MirrorMaker. Generally you need one Connect cluster per
active Kafka cluster. As you point out, MM2's SinkConnector means you can
get away with a single Connect cluster for topologies that center around a
single primary cluster. But each Connector within each Connect cluster must
be configured independently, with no high-level view of your replication
flows within and between data centers.

With MirrorMaker 2.0, a single MirrorMaker cluster manages replication
across any number of Kafka clusters. Much like Brooklin, MM2 does the work
of setting up connectors between clusters as needed. This
Replication-as-a-Service is a huge win for larger deployments, as well as
for organizations that haven't adopted Connect.
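
To make that concrete, the KIP describes a single top-level configuration that
names the clusters and the replication flows between them, roughly along these
lines (cluster names and property values here are illustrative):

clusters = primary, backup
primary.bootstrap.servers = primary-kafka:9092
backup.bootstrap.servers = backup-kafka:9092

# enable replication from primary to backup and select which topics flow
primary->backup.enabled = true
primary->backup.topics = .*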

[1]
https://www.slideshare.net/ConfluentInc/more-data-more-problems-scaling-kafkamirroring-pipelines-at-linkedin

Keep the questions coming! Thanks.
Ryanne

On Thu, Nov 29, 2018 at 3:30 AM Sönke Liebau 
wrote:

> Hi Ryanne,
>
> first of all, thanks for the KIP, great work overall and much needed I
> think!
>
> I have a small comment on the renaming policy, in one of the mails on this
> thread you mention that this can be disabled (to replicate topic1 in
> cluster A as topic1 on cluster B I assume). The KIP itself does not mention
> this, from reading just the KIP one might get the assumption that renaming
> is mandatory. It might be useful to add a sentence or two around renaming
> policies and what is possible here. I assume you intend to make these
> pluggable?
>
> Regarding the latest addition of "MirrorMaker clusters" I am not sure I
> fully understand the issue you are trying to solve and what exactly these
> scripts will do - but that may just be me being dense about it :)
> I understand the limitation to a single source and target cluster that
> Connect imposes, but isn't this worked around by the fact that you have
> MirrorSource- and MirrorSinkConnectors and one part of the equation will
> always be under your control?
> The way I understood your intention was that there is a (regular, not MM)
> Connect Cluster somewhere next to a Kafka Cluster A and if you deploy a
> MirrorSourceTask to that it will read messages from a remote cluster B and
> replicate them into the local cluster A. If you deploy a MirrorSinkTask it
> will read from local cluster A and replicate into cluster B.
>
> Since in both cases the configuration for cluster B will be passed into
> the connector in the ConnectorConfig contained in the rest request, what's
> to stop us from starting a third connector with a MirrorSourceTask reading
> from cluster C?
>
> I am a bit hesitant about the entire concept of having extra scripts to
> run an entire separate Connect cluster - I'd much prefer an option to use a
> regular connect cluster from an ops point of view. Is it maybe worth
> spending some time investigating whether we can come up with a change to
> connect that enables what MM would need?
>
> Best regards,
> Sönke
>
>
>
> On Tue, Nov 27, 2018 at 10:02 PM Ryanne Dolan 
> wrote:
>
>> Hey y'all, I'd like you draw your attention to a new section in KIP-382 re
>> MirrorMaker Clusters:
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382:+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-MirrorMakerClusters
>>
>> A common concern I hear about using Connect for replication is that all
>> SourceConnectors in a Connect cluster must use the same target Kafka
>> cluster, and likewise all SinkConnectors must use the same source Kafka
>> cluster. In order to use multiple Kafka clusters from Connect, there are
>> two possible approaches:
>>
>> 1) use an intermediate Kafka cluster, K. SourceConnectors (A, 

Re: [DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2018-11-29 Thread Sönke Liebau
This has been dormant for a while now, can I interest anybody in chiming in
here?

I think we need to come up with an idea of how to handle changes to ACLs
going forward, i.e. some sort of versioning scheme. Not necessarily what I
proposed in my previous mail, but something.
Currently this fairly simple change is stuck due to this being unsolved.

I am happy to move forward without addressing the larger issue (I think the
issue raised by Colin is valid but could be mitigated in the release
notes), but that would mean that the next KIP to touch ACLs would inherit
the issue, which somehow doesn't seem right.

Looking forward to your input :)

Best regards,
Sönke

On Tue, Jun 19, 2018 at 5:32 PM Sönke Liebau 
wrote:

> Picking this back up, now that KIP-290 has been merged..
>
> As Colin mentioned in an earlier mail this change could create a
> potential security issue if not all brokers are upgraded and a DENY
> Acl based on an IP range is created, as old brokers won't match this
> rule and still allow requests. As I stated earlier I am not sure
> whether for this specific change this couldn't be handled via the
> release notes (see also this comment [1] from Jun Rao on a similar
> topic), but in principle I think some sort of versioning system around
> ACLs would be useful. As seen in KIP-290 there were a few
> complications around where to store ACLs. To avoid adding ever new
> Zookeeper paths for future ACL changes a versioning system is probably
> useful.
>
> @Andy: I've copied you directly in this mail, since you did a bulk of
> the work around KIP-290 and mentioned potentially picking up the
> follow up work, so I think your input would be very valuable here. Not
> trying to shove extra work your way, I'm happy to contribute, but we'd
> be touching a lot of the same areas I think.
>
> If we want to implement a versioning system for ACLs I see the
> following todos (probably incomplete & missing something at the same
> time):
> 1. ensure that the current Authorizer doesn't pick up newer ACLs
> 2. add a version marker to new ACLs
> 3. change SimpleACLAuthorizer to know what version of ACLs it is
> compatible with and only load ACLs of this / smaller version
> 4. Decide how to handle if incompatible (newer version) ACLs are
> present: log warning, fail broker startup, ...
>
>
> Post-KIP-290 ACLs are stored in two places in Zookeeper:
> /kafka-acl-extended   - for ACLs with wildcards in the resource
> /kafka-acl   -  for literal ACLs without wildcards (i.e. * means * not
> any character)
>
> To ensure 1 we probably need to move to a new directory once more,
> call it /kafka-acl-extended-new for argument's sake. Any ACL stored
> here would get a version number stored with it, and only
> SimpleAuthorizers that actually know to look here would find these
> ACLs and also know to check for a version number. I think Andy
> mentioned moving the resource definition in the new ACL format to JSON
> instead of simple string in a follow up PR, maybe these pieces of work
> are best tackled together - and if a new znode can be avoided even
> better.
>
> This would allow us to recognize situations where ACLs are defined
> that not all Authorizers can understand, as those Authorizers would
> notice that there are ACLs with a larger version than the one they
> support (not applicable to legacy ACLs up until now). How we want to
> treat this scenario is up for discussion, I think make it
> configurable, as customers have different requirements around
> security. Some would probably want to fail a broker that encounters
> unknown ACLs so as to not create potential security risks, while others
> might be happy with just a warning in the logs. This should never
> happen, if users fully upgrade their clusters before creating new ACLs
> - but to counteract the situation that Colin described it would be
> useful.
>
> Looking forward, a migration option might be added to the kafka-acl
> tool to migrate all legacy ACLs into the new structure once the
> user is certain that no old brokers will come online again.
>
> If you think this sounds like a convoluted way to go about things ...
> I agree :) But I couldn't come up with a better way yet.
>
> Any thoughts?
>
> Best regards,
> Sönke
>
> [1] https://github.com/apache/kafka/pull/5079#pullrequestreview-124512689
>
> On Thu, May 3, 2018 at 10:57 PM, Sönke Liebau
>  wrote:
> > Technically I absolutely agree with you, this would indeed create
> > issues. If we were just talking about this KIP I think I'd argue that
> > it is not too harsh of a requirement for users to refrain from using
> > new features until they have fully upgraded their entire cluster. I
> > think in that case it could have been solved in the release notes -
> > similarly to the way a binary protocol change is handled.
> > However looking at the discussion on KIP-290 and thinking ahead to
> > potential other changes on ACLs it would really just mean putting off
> > a proper solution which is a versioning system for 

Re: [DISCUSS] KIP-360: Improve handling of unknown producer

2018-11-29 Thread Jason Gustafson
Hey Guozhang,

To clarify, the broker does not actually use the ApiVersion API for
inter-broker communications. The use of an API and its corresponding
version is controlled by `inter.broker.protocol.version`.

Nevertheless, it sounds like we're on the same page about removing
DescribeTransactionState. The impact of a dangling transaction is a little
worse than what you describe though. Consumers with the read_committed
isolation level will be stuck. Still, I think we agree that this case
should be rare and we can reconsider for future work. Rather than
preventing dangling transactions, perhaps we should consider options which
allows us to detect them and recover. Anyway, this needs more thought. I
will update the KIP.

Best,
Jason

On Tue, Nov 27, 2018 at 6:51 PM Guozhang Wang  wrote:

> 0. My original question is about the implementation details primarily,
> since current the handling logic of the APIVersionResponse is simply "use
> the highest supported version of the corresponding request", but if the
> returned response from APIVersionRequest says "I don't even know about the
> DescribeTransactionStateRequest at all", then we need additional logic for
> the falling back logic. Currently this logic is embedded in NetworkClient
> which is shared by all clients, so I'd like to avoid making this logic more
> complicated.
>
> As for the general issue that a broker does not recognize a producer with
> sequence number 0, here's my thinking: as you mentioned in the wiki, this
> is only a concern for transactional producer since for idempotent producer
> it can just bump the epoch and go. For transactional producer, even if the
> producer request from a fenced producer gets accepted, its transaction will
> never be committed and hence messages not exposed to read-committed
> consumers as well. The drawback is though, 1) read-uncommitted consumers
> will still read those messages, 2) unnecessary storage for those fenced
> produce messages, but in practice should not accumulate to a large amount
> since producer should soon try to commit and be told it is fenced and then
> stop, 3) there will be no markers for those transactional messages ever.
> Looking at the list and thinking about the likelihood it may happen
> assuming we retain the producer up to transactional.id.timeout (default is
> 7 days), I feel comfortable leaving it as is.
>
> Guozhang
>
> On Mon, Nov 26, 2018 at 6:09 PM Jason Gustafson 
> wrote:
>
> > Hey Guozhang,
> >
> > Thanks for the comments. Responses below:
> >
> > 0. The new API is used between brokers, so we govern its usage using
> > `inter.broker.protocol.version`. If the other broker hasn't upgraded, we
> > will just fallback to the old logic, which is to accept the write. This
> is
> > similar to how we introduced the OffsetsForLeaderEpoch API. Does that
> seem
> > reasonable?
> >
> > To tell the truth, after digging this KIP up and reading it over, I am
> > doubting how crucial this API is. It is attempting to protect a write
> from
> > a zombie which has just reset its sequence number after that producer had
> > had its state cleaned up. However, one of the other improvements in this
> > KIP is to maintain producer state beyond its retention in the log. I
> think
> > that makes this case sufficiently unlikely that we can leave it for
> future
> > work. I am not 100% sure this is the only scenario where transaction
> state
> > and log state can diverge anyway, so it would be better to consider this
> > problem more generally. What do you think?
> >
> > 1. Thanks, from memory, the term changed after the first iteration. I'll
> > make a pass and try to clarify usage.
> > 2. I was attempting to handle some edge cases since this check would be
> > asynchronous. In any case, if we drop this validation as suggested above,
> > then we can ignore this.
> >
> > -Jason
> >
> >
> >
> > On Tue, Nov 13, 2018 at 6:23 PM Guozhang Wang 
> wrote:
> >
> > > Hello Jason, thanks for the great write-up.
> > >
> > > 0. One question about the migration plan: "The new GetTransactionState
> > API
> > > and the new version of the transaction state message will not be used
> > until
> > > the inter-broker version supports it." I'm not so clear about the
> > > implementation details here: say a broker is on the newer version and
> the
> > > txn-coordinator is still on older version. Today the APIVersionsRequest
> > can
> > > only help upgrade / downgrade the request version, but not forbidding
> > > sending any. Are you suggesting we add additional logic on the broker
> > side
> > > to handle the case of "not sending the request"? If yes my concern is
> > that
> > > this will be some tech-debt code that will live long before being
> > removed.
> > >
> > > Some additional minor comments:
> > >
> > > 1. "last epoch" and "instance epoch" seem to be referring to the same
> > thing
> > > in your wiki.
> > > 2. "The broker must verify after receiving the response that the
> producer
> > > state is still unknown.": 

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-11-29 Thread John Roesler
Actually, I've been thinking more about my feedback #1 on the method name,
and I'm not so sure it's a good idea.

Following SQL, the existing join methods are named according to handedness
(inner/outer/left). All these joins are the same cardinality (1:1 joins). I
think it would be a mistake to switch up the naming scheme and introduce a
new method named for cardinality, like "manyToOneJoin".

Can we actually just add a new overload to the existing joins? The javadoc
could explain that it's a many-to-one join, and it would be differentiated
by the presence of the "keyExtractor".

Just to be safe, if we rename the proposed "joinOnForeignKey" to just
"join", then we could rename "keyExtractor" to "foreignKeyExtractor" for
clarity.

Just to be unambiguous, including the other API feedback I gave, I'm
proposing something like this:

KTable join(KTable other,
   ValueMapper foreignKeyExtractor,
   ValueJoiner joiner,
   ManyToOneJoined manyToOneJoined, // join config object
  );

The ManyToOneJoined config object would allow setting all 4 serdes and
configuring materialization.
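
With generics spelled out (the type parameter names are only illustrative;
ValueMapper and ValueJoiner are the existing Streams interfaces, and
ManyToOneJoined is the proposed config object), that overload would look
roughly like:

<VR, KO, VO> KTable<K, VR> join(KTable<KO, VO> other,
    ValueMapper<V, KO> foreignKeyExtractor,   // V -> foreign key KO
    ValueJoiner<V, VO, VR> joiner,            // combine left V with right VO
    ManyToOneJoined<K, V, KO, VO> manyToOneJoined);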

Thanks for your consideration,
-John

On Thu, Nov 29, 2018 at 8:14 AM John Roesler  wrote:

> Hi all,
>
> Sorry that this discussion petered out... I think the 2.1 release caused
> an extended distraction that pushed it off everyone's radar (which was
> precisely Adam's concern). Personally, I've also had some extended
> distractions of my own that kept (and continue to keep) me preoccupied.
>
> However, calling for a vote did wake me up, so I guess Jan was on the
> right track!
>
> I've gone back and reviewed the whole KIP document and the prior
> discussion, and I'd like to offer a few thoughts:
>
> API Thoughts:
>
> 1. If I read the KIP right, you are proposing a many-to-one join. Could we
> consider naming it manyToOneJoin? Or, if you prefer, flip the design around
> and make it a oneToManyJoin?
>
> The proposed name "joinOnForeignKey" disguises the join type, and it seems
> like it might trick some people into using it for a one-to-one join. This
> would work, of course, but it would be super inefficient compared to a
> simple rekey-and-join.
>
> 2. I might have missed it, but I don't think it's specified whether it's
> an inner, outer, or left join. I'm guessing an outer join, as (neglecting
> IQ), the rest can be achieved by filtering or by handling it in the
> ValueJoiner.
>
> 3. The arg list to joinOnForeignKey doesn't look quite right.
> 3a. Regarding Serialized: There are a few different paradigms in play in
> the Streams API, so it's confusing, but instead of three Serialized args, I
> think it would be better to have one that allows (optionally) setting the 4
> incoming serdes. The result serde is defined by the Materialized. The
> incoming serdes can be optional because they might already be available on
> the source KTables, or the default serdes from the config might be
> applicable.
>
> 3b. Is the StreamPartitioner necessary? The other joins don't allow
> setting one, and it seems like it might actually be harmful, since the
> rekey operation needs to produce results that are co-partitioned with the
> "other" KTable.
>
> 4. I'm fine with the "reserved word" header, but I didn't actually follow
> what Matthias meant about namespacing requiring "deserializing" the record
> header. The headers are already Strings, so I don't think that
> deserialization is required. If we applied the namespace at source nodes
> and stripped it at sink nodes, this would be practically no overhead. The
> advantage of the namespace idea is that no public API change wrt headers
> needs to happen, and no restrictions need to be placed on users' headers.
>
> (Although I'm wondering if we can get away without the header at all...
> stay tuned)
>
> 5. I also didn't follow the discussion about the HWM table growing without
> bound. As I read it, the HWM table is effectively implementing OCC to
> resolve the problem you noted with disordering when the rekey is
> reversed... particularly notable when the FK changes. As such, it only
> needs to track the most recent "version" (the offset in the source
> partition) of each key. Therefore, it should have the same number of keys
> as the source table at all times.
>
> I see that you are aware of KIP-258, which I think might be relevant in a
> couple of ways. One: it's just about storing the timestamp in the state
> store, but the ultimate idea is to effectively use the timestamp as an OCC
> "version" to drop disordered updates. You wouldn't want to use the
> timestamp for this operation, but if you were to use a similar mechanism to
> store the source offset in the store alongside the re-keyed values, then
> you could avoid a separate table.
>
> 6. You and Jan have been thinking about this for a long time, so I've
> probably missed something here, but I'm wondering if we can avoid the HWM
> tracking at all and resolve out-of-order during a 

[jira] [Created] (KAFKA-7689) Add Commit/List Offsets Operations to AdminClient

2018-11-29 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-7689:
-

 Summary: Add Commit/List Offsets Operations to AdminClient
 Key: KAFKA-7689
 URL: https://issues.apache.org/jira/browse/KAFKA-7689
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Mickael Maison
Assignee: Mickael Maison


Jira for KIP-396: 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7688) Allow byte array class for Decimal Logical Types to fix Debezium Issues

2018-11-29 Thread Eric C Abis (JIRA)
Eric C Abis created KAFKA-7688:
--

 Summary: Allow byte array class for Decimal Logical Types to fix 
Debezium Issues
 Key: KAFKA-7688
 URL: https://issues.apache.org/jira/browse/KAFKA-7688
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.1
Reporter: Eric C Abis
 Fix For: 1.1.1


Decimal Logical Type fields are failing with Kafka Connect sink tasks and 
showing this error:
{code:java}
Invalid Java object for schema type BYTES: class [B for field: "null"{code}
There is an issue tracker for the problem here in the Confluent Schema Registry 
tracker (it's all related):  
[https://github.com/confluentinc/schema-registry/issues/833]

I've created a fix for this issue and tested and verified it in our CF4 cluster 
here at Shutterstock.

Ultimately the issue boils down to the fact that in Avro, Decimal logical types 
store values as Base64-encoded byte arrays for the default values, and 
BigInteger byte arrays for the record values. I'd like to submit a PR that 
changes the SCHEMA_TYPE_CLASSES hash map in 
org.apache.kafka.connect.data.ConnectSchema to allow byte arrays for Decimal 
fields.
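
The kind of change described would amount to something like the following sketch 
(the real mapping lives inside org.apache.kafka.connect.data.ConnectSchema and 
its exact name and shape may differ between versions):

{code:java}
import java.math.BigDecimal;
import java.util.*;

import org.apache.kafka.connect.data.Decimal;

// Sketch only: a Decimal logical-type value would be accepted either as a
// BigDecimal (today's behaviour) or as a raw byte array (the proposed addition).
class DecimalValueClassesSketch {
    static final Map<String, List<Class<?>>> LOGICAL_TYPE_CLASSES = new HashMap<>();

    static {
        LOGICAL_TYPE_CLASSES.put(Decimal.LOGICAL_NAME,
                Arrays.asList(BigDecimal.class, byte[].class));
    }
}
{code}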

Separately I have a similar change in 
[io.confluent.connect.avro.AvroData|https://github.com/TheGreatAbyss/schema-registry/pull/1/files#diff-ac149179f9760319ccc772695cb21364]
 that I will submit a PR for as well.

I reached out to us...@kafka.apache.org 
to ask for GitHub permissions, but if there is somewhere else I need to reach 
out to, please let me know.


Thank You!

Eric



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7687) Print batch level information in DumpLogSegments when deep iterating

2018-11-29 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7687:
--

 Summary: Print batch level information in DumpLogSegments when 
deep iterating
 Key: KAFKA-7687
 URL: https://issues.apache.org/jira/browse/KAFKA-7687
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Jason Gustafson


It is often helpful to have the batch level information when deep iterating a 
segment. Currently you have to run DumpLogSegments with and without deep 
iteration and then correlate the results. It would be simpler to print both. We 
could even nest the individual messages to make the batch boundaries clear. For 
example:

{code}
baseOffset: 0 lastOffset: 1 magic: 2 position: 0 ...
- offset: 0 keysize: 5 ...
- offset: 1 keysize: 5 ...
baseOffset: 2 lastOffset: 4 magic: 2 position: 500 ...
- offset: 2 keysize: 5 ...
- offset: 3 keysize: 5 ...
- offset: 4 keysize: 5 ...
{code}
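
For context, the deep-iterated view is typically produced with something along 
these lines today (the file path is illustrative; flags may differ slightly 
across versions):

{code}
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/my-topic-0/00000000000000000000.log \
  --deep-iteration --print-data-log
{code}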



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: I'd like permissions to publish a KIP

2018-11-29 Thread Manikumar
Thanks for your interest. I have given you the wiki permissions.

On Thu, Nov 29, 2018 at 10:03 PM Noa Resare  wrote:

> Greetings friends,
>
> I have a proposal for a tiny change that would alter configuration
> semantics slightly, so it seems like something that should be handled
> through the KIP process. Could I get permission? My account on cwiki is
> ‘noa’
>
> cheers
> noa


Re: Requesting permission to create KIP

2018-11-29 Thread Manikumar
Thanks for your interest. I have given you the permissions.

On Thu, Nov 29, 2018 at 8:24 PM Brandon Kirchner 
wrote:

> Hello,
>
> Could you please grant me (brandon.kirchner) permission to create a KIP?
>
> Thanks!
> Brandon K.
>


I'd like permissions to publish a KIP

2018-11-29 Thread Noa Resare
Greetings friends,

I have a proposal for a tiny change that would alter configuration semantics 
slightly, so it seems like something that should be handled through the KIP 
process. Could I get permission? My account on cwiki is ‘noa’

cheers
noa

Requesting permission to create KIP

2018-11-29 Thread Brandon Kirchner
Hello,

Could you please grant me (brandon.kirchner) permission to create a KIP?

Thanks!
Brandon K.


Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-11-29 Thread John Roesler
Hi all,

Sorry that this discussion petered out... I think the 2.1 release caused an
extended distraction that pushed it off everyone's radar (which was
precisely Adam's concern). Personally, I've also had some extended
distractions of my own that kept (and continue to keep) me preoccupied.

However, calling for a vote did wake me up, so I guess Jan was on the right
track!

I've gone back and reviewed the whole KIP document and the prior
discussion, and I'd like to offer a few thoughts:

API Thoughts:

1. If I read the KIP right, you are proposing a many-to-one join. Could we
consider naming it manyToOneJoin? Or, if you prefer, flip the design around
and make it a oneToManyJoin?

The proposed name "joinOnForeignKey" disguises the join type, and it seems
like it might trick some people into using it for a one-to-one join. This
would work, of course, but it would be super inefficient compared to a
simple rekey-and-join.

2. I might have missed it, but I don't think it's specified whether it's an
inner, outer, or left join. I'm guessing an outer join, as (neglecting IQ),
the rest can be achieved by filtering or by handling it in the ValueJoiner.

3. The arg list to joinOnForeignKey doesn't look quite right.
3a. Regarding Serialized: There are a few different paradigms in play in
the Streams API, so it's confusing, but instead of three Serialized args, I
think it would be better to have one that allows (optionally) setting the 4
incoming serdes. The result serde is defined by the Materialized. The
incoming serdes can be optional because they might already be available on
the source KTables, or the default serdes from the config might be
applicable.

3b. Is the StreamPartitioner necessary? The other joins don't allow setting
one, and it seems like it might actually be harmful, since the rekey
operation needs to produce results that are co-partitioned with the "other"
KTable.

4. I'm fine with the "reserved word" header, but I didn't actually follow
what Matthias meant about namespacing requiring "deserializing" the record
header. The headers are already Strings, so I don't think that
deserialization is required. If we applied the namespace at source nodes
and stripped it at sink nodes, this would be practically no overhead. The
advantage of the namespace idea is that no public API change wrt headers
needs to happen, and no restrictions need to be placed on users' headers.

(Although I'm wondering if we can get away without the header at all...
stay tuned)

5. I also didn't follow the discussion about the HWM table growing without
bound. As I read it, the HWM table is effectively implementing OCC to
resolve the problem you noted with disordering when the rekey is
reversed... particularly notable when the FK changes. As such, it only
needs to track the most recent "version" (the offset in the source
partition) of each key. Therefore, it should have the same number of keys
as the source table at all times.

I see that you are aware of KIP-258, which I think might be relevant in a
couple of ways. One: it's just about storing the timestamp in the state
store, but the ultimate idea is to effectively use the timestamp as an OCC
"version" to drop disordered updates. You wouldn't want to use the
timestamp for this operation, but if you were to use a similar mechanism to
store the source offset in the store alongside the re-keyed values, then
you could avoid a separate table.

6. You and Jan have been thinking about this for a long time, so I've
probably missed something here, but I'm wondering if we can avoid the HWM
tracking at all and resolve out-of-order during a final join instead...

Let's say we're joining a left table (Integer K: Letter FK, (other data))
to a right table (Letter K: (some data)).

Left table:
1: (A, xyz)
2: (B, asd)

Right table:
A: EntityA
B: EntityB

We could do a rekey as you proposed with a combined key, but not
propagating the value at all..
Rekey table:
A-1: (dummy value)
B-2: (dummy value)

Which we then join with the right table to produce:
A-1: EntityA
B-2: EntityB

Which gets rekeyed back:
1: A, EntityA
2: B, EntityB

And finally we do the actual join:
Result table:
1: ((A, xyz), EntityA)
2: ((B, asd), EntityB)

The thing is that in that last join, we have the opportunity to compare the
current FK in the left table with the incoming PK of the right table. If
they don't match, we just drop the event, since it must be outdated.

In your KIP, you gave an example in which (1: A, xyz) gets updated to (1:
B, xyz), ultimately yielding a conundrum about whether the final state
should be (1: null) or (1: joined-on-B). With the algorithm above, you
would be considering (1: (B, xyz), (A, null)) vs (1: (B, xyz), (B,
EntityB)). It seems like this does give you enough information to make the
right choice, regardless of disordering.


7. Last thought... I'm a little concerned about the performance of the
range scans when records change in the right table. You've said that you've
been using the algorithm 

Re: [DISCUSS] KIP-382: MirrorMaker 2.0

2018-11-29 Thread Sönke Liebau
Hi Ryanne,

first of all, thanks for the KIP, great work overall and much needed I
think!

I have a small comment on the renaming policy: in one of the mails on this
thread you mention that this can be disabled (to replicate topic1 from
cluster A as topic1 on cluster B, I assume). The KIP itself does not mention
this, so from reading just the KIP one might assume that renaming is
mandatory. It might be useful to add a sentence or two about renaming
policies and what is possible here. I assume you intend to make these
pluggable?

Regarding the latest addition of "MirrorMaker clusters", I am not sure I
fully understand the issue you are trying to solve and what exactly these
scripts will do - but that may just be me being dense about it :)
I understand the limitation to a single source and target cluster that
Connect imposes, but isn't this worked around by the fact that you have
MirrorSource- and MirrorSinkConnectors and one part of the equation will
always be under your control?
The way I understood your intention was that there is a (regular, not MM)
Connect Cluster somewhere next to a Kafka Cluster A and if you deploy a
MirrorSourceTask to that it will read messages from a remote cluster B and
replicate them into the local cluster A. If you deploy a MirrorSinkTask it
will read from local cluster A and replicate into cluster B.

Since in both cases the configuration for cluster B will be passed into
the connector in the ConnectorConfig contained in the REST request, what's
to stop us from starting a third connector with a MirrorSourceTask reading
from cluster C?

I am a bit hesitant about the concept of having extra scripts to run an
entirely separate Connect cluster - from an ops point of view I'd much prefer
an option to use a regular Connect cluster. Is it maybe worth spending some
time investigating whether we can come up with a change to Connect that
enables what MM would need?

Best regards,
Sönke



On Tue, Nov 27, 2018 at 10:02 PM Ryanne Dolan  wrote:

> Hey y'all, I'd like to draw your attention to a new section in KIP-382 re
> MirrorMaker Clusters:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-382:+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-MirrorMakerClusters
>
> A common concern I hear about using Connect for replication is that all
> SourceConnectors in a Connect cluster must use the same target Kafka
> cluster, and likewise all SinkConnectors must use the same source Kafka
> cluster. In order to use multiple Kafka clusters from Connect, there are
> two possible approaches:
>
> 1) use an intermediate Kafka cluster, K. SourceConnectors (A, B, C) write
> to K and SinkConnectors (X, Y, Z) read from K. This enables flows like A ->
> K -> X but means that some topologies require extraneous hops, and means
> that K must be scaled to handle records from all sources and sinks.
>
> 2) use multiple Connect clusters, one for each target cluster. Each cluster
> has multiple SourceConnectors, one for each source cluster. This enables
> direct replication of A -> X but means there is a proliferation of Connect
> clusters, each of which must be managed separately.
>
> Both options are viable for small deployments involving a small number of
> Kafka clusters in a small number of data centers. However, neither is
> scalable, especially from an operational standpoint.
>
> KIP-382 now introduces "MirrorMaker clusters", which are distinct from
> Connect clusters. A single MirrorMaker cluster provides
> "Replication-as-a-Service" among any number of Kafka clusters via a
> high-level REST API based on the Connect API. Under the hood, MirrorMaker
> sets up Connectors between each pair of Kafka clusters. The REST API
> enables on-the-fly reconfiguration of each Connector, including updates to
> topic whitelists/blacklists.
>
> To configure MirrorMaker 2.0, you need a configuration file that lists
> connection information for each Kafka cluster (broker lists, SSL settings
> etc). At a minimum, this looks like:
>
> clusters=us-west, us-east
> cluster.us-west.broker.list=us-west-kafka-server:9092
> cluster.us-east.broker.list=us-east-kafka-server:9092
>
> You can specify topic whitelists and other connector-level settings here
> too, or you can use the REST API to remote-control a running cluster.
>
> I've also updated the KIP with minor changes to bring it in line with the
> current implementation.
>
> Looking forward to your feedback, thanks!
> Ryanne
>
> On Mon, Nov 19, 2018 at 10:26 PM Ryanne Dolan 
> wrote:
>
> > Dan, you've got it right. ACL sync will be done by MM2 automatically
> > (unless disabled) according to simple rules:
> >
> > - If a principal has READ access on a topic in a source cluster, the same
> > principal should have READ access on downstream replicated topics
> ("remote
> > topics").
> > - Only MM2 has WRITE access on "remote topics".
> >
> > This covers sync from upstream topics like "topic1" to downstream remote
> > topics like "us-west.topic1". What's missing from the 

Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by specifying member id

2018-11-29 Thread Boyang Chen
In fact I feel that it's more convenient for users to specify a list of instance
id prefixes, because for a general consumer application we can't always find a
single prefix that covers the consumers we want to remove. So we either add
list[instance id prefix], or we add two fields for clarity: an instance id prefix
and list[instance id]. As you know, the two options are equivalent, since a full
name is just a special case of a prefix.

Let me know your thoughts!

Boyang

From: Boyang Chen 
Sent: Thursday, November 29, 2018 3:39 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by 
specifying member id

Thanks Guozhang for the new proposal here!

So I'd like to propose a slightly modified version of LeaveGroupRequest:
instead of letting the static member consumer clients themselves send the
request (which means we still need to have some hidden configs to turn it
off like we do today), how about just letting any other client send
this request, since the LeaveGroupRequest only requires group.id and
member.id? So back to your operational scenarios: if some static member has
been found crashed and is not likely to come back, or we simply want to
shrink the size of the group by shutting down some static members, we can
use an admin client to send the LeaveGroupRequest after the instance has
been completely shut down (or crashed) to kick it out of the group and also
trigger the rebalance.

One issue, though, is that users may not know the member id required in the
LeaveGroupRequest. To work around it we can add the `group.instance.id`
along with the member id as well, and then allow member.id to be nullable.
The coordinator logic would then be modified as follows: 1) if member.id is
specified, ignore instance.id and always use member.id to find the member to
kick out, 2) otherwise, try the instance.id to find the corresponding
member.id and kick it out, 3) if none is found, reject with an error code.

So in sum the alternative changes are:

a) Modify LeaveGroupRequest to add group.instance.id
b) Modify coordinator logic to handle such request on the broker side.
c) Add a new API in AdminClient like "removeMemberFromGroup(groupId,
instanceId)" which will be translated as a LeaveGroupRequest.
d) [Optional] we can even batch the request by allowing
"removeMemberFromGroup(groupId, list[instanceId])" and then making the `member.id`
and `instance.id` fields of LeaveGroupRequest arrays instead of
single entries.
e) We can also remove the admin ConsumerRebalanceRequest for simplicity
(why not? I'd rather keep the number of request protocols as small as
possible :)), as it is not needed anymore with the above proposal.
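
For illustration, a rough sketch of what c) and d) might look like on the admin
side (hypothetical shape, not an existing AdminClient API):

import java.util.List;

// Hypothetical shape of options c) and d) above -- not part of the real AdminClient API.
interface StaticMembershipAdminSketch {
    // c) remove a single static member, addressed by its group.instance.id; the broker
    //    would translate this into a LeaveGroupRequest with a nullable member.id.
    void removeMemberFromGroup(String groupId, String instanceId);

    // d) batched variant, if member.id / instance.id become array fields of LeaveGroupRequest.
    void removeMemberFromGroup(String groupId, List<String> instanceIds);
}
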
I agree that reusing LeaveGroupRequest is actually a good idea: we only need to
iterate on an existing request format. Also, I found that we haven't discussed
how we want to enable this feature for Streams applications, which differ from
plain consumer applications in that a Streams app uses each stream thread as an
individual consumer.
For example, if the user specifies the client id, the stream consumer client id
will look like:
User client id + "-StreamThread-" + thread id + "-consumer"

So I'm thinking we should do something similar for defining group.instance.id in
Streams. We would define another config called `stream.instance.id` which would
be used as a prefix, and for each thread's consumer the formula will look like:
`group.instance.id` = `stream.instance.id` + "-" + thread id + "-consumer"
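
For example, a tiny sketch of the derivation (stream.instance.id is a hypothetical
config here, mirroring how the client id is built today):

final class StreamInstanceIdSketch {
    // group.instance.id = stream.instance.id + "-" + thread id + "-consumer"
    static String consumerInstanceId(String streamInstanceId, int threadId) {
        return streamInstanceId + "-" + threadId + "-consumer";
    }

    public static void main(String[] args) {
        // e.g. "pipeline-1-3-consumer" for thread 3 of the stream instance "pipeline-1"
        System.out.println(consumerInstanceId("pipeline-1", 3));
    }
}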

And for ease of use, the interface of the leave group request could include
`group.instance.id.prefix` instead of `group.instance.id`, so that we could batch
remove the consumers belonging to a single stream instance. This is more intuitive
and flexible, since specifying the names of 16~32 * n consumers (n = number of
stream instances to shut down) is not an easy job without client management
tooling.

How does this workaround sound?

Boyang

From: Guozhang Wang 
Sent: Thursday, November 29, 2018 2:38 AM
To: dev
Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by 
specifying member id

Hi Boyang,

I was thinking that with the optional static members in the admin
ConsumerRebalanceRequest it should be sufficient to kick out a static
member before its session timeout (arguably long in practice) has been
reached. But now I see your concern is that in some situations the admin
operators may not even know the full list of static members, but ONLY know
which static member has failed and hence which one they would like to kick
out of the group.

So I'd like to propose a slightly modified version of LeaveGroupRequest:
instead of letting the static member consumer clients themselves send the
request (which means we still need to have some hidden configs to turn it
off like we do today), how about just letting any other client send
this request, since the LeaveGroupRequest only requires group.id and
member.id? So back to your operational 

Re: [DISCUSS] KIP-388 Add observer interface to record request and response

2018-11-29 Thread Lincong Li
Hi everyone,

Thanks for all the feedback on this KIP. I have had some lengthy offline
discussions with Dong, Joel and other insightful developers. I updated KIP-388
and proposed a different way of recording each request and response. Here is a
summary of the change.

Instead of having interfaces as wrappers on AbstractRequest and
AbstractResponse, I provided an interface on the Struct class which
represents the Kafka protocol format ("wire format"). The interface is
called ObservableStruct and it provides a set of getters that allow users to
extract information from the internal Struct instance. Some other possible
approaches are discussed in the KIP as well. But after lots of thinking, I
think the currently proposed approach is the best one.

Why is this the best approach?
1. *It's the most general way* to intercept/observe each request/response
of any type, since each request/response must be materialized to a Struct
instance at some point in its life cycle.

2. *It's the easiest-to-maintain interface*. There is basically only one
interface (ObservableStruct) and its implementation (ObservableStructImp)
to maintain. Because this interface essentially defines a set of ways to get
field(s) from the Struct, even changes to the structure of the Struct
(wire format changes) are not going to cause the interface to change.

3. Struct represents the Kafka protocol format which is public. Expecting
users to have knowledge of the format of the kind of request/response they
are recording is reasonable. Moreover, *the proposed interfaces freeze the
least amount of internal implementation details into public APIs by only
exposing ways of extracting information on the Struct class*.
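
To give a feel for the general shape described above, here is an illustrative
sketch (the method names below are my guesses, not the actual KIP-388 interface;
please see the KIP for the real proposal):

// Illustrative guess at the shape only -- see KIP-388 for the actual interface.
interface ObservableStructSketch {
    // Look up a top-level field of the request/response by its schema field name.
    Object get(String fieldName);

    // Convenience accessors for common field types.
    String getString(String fieldName);
    Long getLong(String fieldName);

    // Nested structures (e.g. per-topic or per-partition data) are themselves observable.
    Iterable<? extends ObservableStructSketch> getStructArray(String fieldName);
}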

I am aware that adding this broker-side instrumentation would touch other
sensitive modules and trigger many discussions on design trade-offs, etc. I
appreciate all of your effort trying to make it happen and truly believe in
the benefits it can bring to the community.

Thanks,
Lincong Li


On Sun, Nov 18, 2018 at 9:35 PM Dong Lin  wrote:

> Hey Lincong,
>
> Thanks for the explanation. Here are my concerns with the current proposal:
>
> 1) The current APIs of RequestInfo/ResponseInfo only provide byte and
> count numbers for ProduceRequest/FetchResponse. With these limited APIs,
> developers will likely have to create a new KIP and make changes in Apache
> Kafka source code in order to implement more advanced observer plugins,
> which would considerably reduce the extensibility and customizability of
> observer plugins.
>
> Here are two use-cases that can be made possible if we can provide the raw
> request/response to the observer interface:
>
> - Get the number of bytes produced per source host. This is doable if the
> plugin can get the original ProduceRequest, deserialize the request into Kafka
> messages, and parse the messages based on the schema of the message payload.
>
> - Get the ApiVersion supported per producer/consumer IP. In the future we
> can add the version of the client library in ApiVersionsRequest, and the
> observer can monitor whether there are still client libraries using very old
> versions, and if so, what their IP addresses are.
>
> 2) It requires extra maintenance overhead for Apache Kafka developers to
> maintain the implementation of RequestInfo (e.g. bytes produced per topic),
> which would not be necessary if we can just provide the ProduceRequest to the
> observer interface.
>
> 3) It is not clear why RequestInfo/ResponseInfo need to be interfaces
> rather than classes. In general an interface is needed when we expect
> multiple different implementations of the interface. Can you provide some
> idea of why we need multiple implementations for RequestInfo?
>
>
> Thanks,
> Dong
>
>
> On Sun, Nov 18, 2018 at 12:50 AM Lincong Li 
> wrote:
>
>> Hi Dong and Patrick,
>>
>> Thank you very much for the feedback! Firstly I want to clarify that the
>> implementations of RequestInfo and ResponseInfo are *not* provided by the
>> user. Instead they will be provided as part of this KIP. In other words,
>> "RequestAdapter" and "ResponseAdapter" are implementations of "RequestInfo"
>> and "ResponseInfo" respectively. Users can figure out what information they
>> can use in their implementation of the observer, whereas how to extract
>> that information is already provided, so no internal interface is exposed. In
>> the future, if the implementation of the internal classes (Response/Request)
>> changes, "RequestAdapter" and "ResponseAdapter" should be changed accordingly.
>>
>> For Patrick,
>> Yes, it would take some effort to figure out exactly what information
>> should be exposed in order to find the best balance point between making
>> the getters generic enough that they do not hinder future changes to
>> internal classes and specific enough that most users' requirements
>> are satisfied. With that being said, I 

Build failed in Jenkins: kafka-trunk-jdk8 #3227

2018-11-29 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Fix ProducerPerformance bug when numRecords > 
Integer.MAX (#5956)

--
[...truncated 2.59 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest >