Re: President of India: Campaign to Abolish Article 370

2019-02-18 Thread Pawan Grover
Whatever the cause may be, using a mailing list like this for any such
campaign is itself a gross violation of professional conduct.

My request is to please refrain from sending any such mail in the future.

~Pawan

On Tue, 19 Feb 2019 at 09:09,  wrote:

> Hey,
>
> I just signed the petition "President of India: Campaign to Abolish Article
> 370" and wanted to see if you could help by adding your name.
>
> Our goal is to reach 150,000 signatures and we need more support. You can
> read more and sign the petition here:
>
> http://chng.it/x2PmTPpf9S
>
> Thanks!
> Anurag
>


Build failed in Jenkins: kafka-2.2-jdk8 #22

2019-02-18 Thread Apache Jenkins Server
See 


Changes:

[bill] MINOR: improve JavaDocs about auto-repartitioning in Streams DSL (#6269)

--
[...truncated 2.72 MB...]

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNonexistentPartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionErrorCodeFromCreateStates PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOfflineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
STARTED

kafka.controller.PartitionStateMachineTest > testUpdatingOfflinePartitionsCount 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion STARTED

kafka.controller.PartitionStateMachineTest > 
testUpdatingOfflinePartitionsCountDuringTopicDeletion PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted STARTED

kafka.controller.PartitionStateMachineTest > 
testNoOfflinePartitionsChangeForTopicsBeingDeleted PASSED

Re: [VOTE] KIP-412: Extend Admin API to support dynamic application log levels

2019-02-18 Thread Gwen Shapira
+1

On Mon, Feb 18, 2019, 3:48 AM Stanislav Kozlovski 
wrote:

> Hey everybody, I'm starting a VOTE thread for KIP-412. This feature should
> significantly improve the flexibility and ease of debugging Kafka at
> runtime.
>
> KIP-412 -
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels
>
>
> --
> Best,
> Stanislav
>


[jira] [Created] (KAFKA-7948) Feature to enable json field order retention in the JsonConverter

2019-02-18 Thread Jonathan Court (JIRA)
Jonathan Court created KAFKA-7948:
-

 Summary: Feature to enable json field order retention in the 
JsonConverter
 Key: KAFKA-7948
 URL: https://issues.apache.org/jira/browse/KAFKA-7948
 Project: Kafka
  Issue Type: New Feature
  Components: config, KafkaConnect
Affects Versions: 2.1.1
Reporter: Jonathan Court


We need to maintain the order of fields in JSON structures that pass through 
Kafka Connect. To achieve this, a new configuration item has been added to the 
JsonConverter to enable retention of field order between the input and the 
output.

While the JSON spec doesn't require fields to be ordered, ordering is required 
in instances where the JSON parsers are primitive and difficult to correct - 
i.e. our mainframe.

The new config item is:
{code:java}
json.field.order = none|retained{code}
where the default, none, maintains the current behaviour, and the option 
'retained' causes the underlying converter to use a LinkedHashMap in place of a 
HashMap, keeping the JSON fields in the order they were received during 
processing.
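The mechanism the ticket describes, swapping HashMap for LinkedHashMap, can be seen with a minimal stdlib sketch (the field names below are invented for illustration; this is not the JsonConverter's actual code):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldOrderDemo {
    // Fill a map with JSON-like fields in a fixed insertion order.
    static Map<String, String> fill(Map<String, String> m) {
        m.put("zeta", "1");
        m.put("alpha", "2");
        m.put("mid", "3");
        return m;
    }

    public static void main(String[] args) {
        // LinkedHashMap iterates in insertion order, so serialized field order is retained.
        System.out.println(fill(new LinkedHashMap<>()).keySet()); // [zeta, alpha, mid]
        // HashMap iteration order depends on hashing and is not, in general, insertion order.
        System.out.println(fill(new HashMap<>()).keySet());
    }
}
```

Because both classes implement `Map`, the converter can switch between them based on the proposed config without changing any other logic.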



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


President of India: Campaign to Abolish Article 370

2019-02-18 Thread anurag321423
Hey,

I just signed the petition "President of India: Campaign to Abolish Article
370" and wanted to see if you could help by adding your name.

Our goal is to reach 150,000 signatures and we need more support. You can
read more and sign the petition here:

http://chng.it/x2PmTPpf9S

Thanks!
Anurag


[jira] [Created] (KAFKA-7947) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest #shouldFollowLeaderEpochBasicWorkflow

2019-02-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7947:
--

 Summary: Flaky Test EpochDrivenReplicationProtocolAcceptanceTest 
#shouldFollowLeaderEpochBasicWorkflow
 Key: KAFKA-7947
 URL: https://issues.apache.org/jira/browse/KAFKA-7947
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0


To get stable nightly builds for the `2.2` release, I am creating tickets for 
all observed test failures.

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
{quote}java.lang.AssertionError: expected: but was:
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:834)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:144)
 at kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldFollowLeaderEpochBasicWorkflow(EpochDrivenReplicationProtocolAcceptanceTest.scala:101){quote}





[jira] [Created] (KAFKA-7946) Flaky Test DeleteConsumerGroupsTest#testDeleteNonEmptyGroup

2019-02-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7946:
--

 Summary: Flaky Test 
DeleteConsumerGroupsTest#testDeleteNonEmptyGroup
 Key: KAFKA-7946
 URL: https://issues.apache.org/jira/browse/KAFKA-7946
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0


To get stable nightly builds for the `2.2` release, I am creating tickets for 
all observed test failures.

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/17/]
{quote}java.lang.NullPointerException
 at kafka.admin.DeleteConsumerGroupsTest.testDeleteNonEmptyGroup(DeleteConsumerGroupsTest.scala:96){quote}





Jenkins build is back to normal : kafka-trunk-jdk11 #297

2019-02-18 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-2.1-jdk8 #129

2019-02-18 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3397

2019-02-18 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] MINOR: improve JavaDocs about auto-repartitioning in Streams DSL 
(#6269)

[github] KAFKA-7487: DumpLogSegments misreports offset mismatches (#5756)

--
[...truncated 2.32 MB...]
org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct STARTED

org.apache.kafka.connect.transforms.SetSchemaMetadataTest > 
updateSchemaOfNonStruct PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullWithSchema PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless 
STARTED

org.apache.kafka.connect.transforms.ExtractFieldTest > testNullSchemaless PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED


[jira] [Created] (KAFKA-7945) ExpiringCredentialRefreshingLogin - timeout value is negative

2019-02-18 Thread Denis Ogun (JIRA)
Denis Ogun created KAFKA-7945:
-

 Summary: ExpiringCredentialRefreshingLogin - timeout value is 
negative
 Key: KAFKA-7945
 URL: https://issues.apache.org/jira/browse/KAFKA-7945
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.1
Reporter: Denis Ogun


There was an issue with one of our Kafka consumers no longer sending a valid 
OAuth token. Looking at the logs, there seems to be an error in some of the 
math in the timestamp calculation:

 
{code:java}
14 Feb 2019 06:42:45,694 Expiring credential expires at 
2019-02-14T06:48:21.000+, so buffer times of 60 and 300 seconds at the 
front and back, respectively, cannot be accommodated. We will refresh at 
2019-02-14T06:01:39.078+.
14 Feb 2019 06:42:45,694 org.apache.kafka.common.utils.KafkaThread: Uncaught 
exception in thread
java.lang.IllegalArgumentException: timeout value is negative
at java.lang.Thread.sleep(Native Method) ~[?:1.8.0_202]
at org.apache.kafka.common.utils.SystemTime.sleep(SystemTime.java:45) 
~[kafka-clients-2.x.jar:?]
at 
org.apache.kafka.common.security.oauthbearer.internals.expiring.ExpiringCredentialRefreshingLogin$Refresher.run(ExpiringCredentialRefreshingLogin.java:86)
 ~[kafka-clients-2.x.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_202]{code}
 

At this point the refresh logic would never recover, so the client couldn't 
consume until we restarted the process.
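The log above shows the computed refresh time (06:01:39) falling before the current time (06:42:45), so the delay handed to `Thread.sleep()` is negative, which throws `IllegalArgumentException`. A minimal stdlib sketch of the failure and an obvious guard (the helper name is hypothetical, not Kafka's actual code):

```java
import java.time.Duration;
import java.time.Instant;

public class RefreshDelayDemo {
    // Hypothetical helper mirroring the bug: when buffer times cannot be
    // accommodated, the scheduled refresh time can land before "now",
    // yielding a negative duration that Thread.sleep() rejects.
    static long safeSleepMillis(Instant now, Instant refreshAt) {
        long delta = Duration.between(now, refreshAt).toMillis();
        return Math.max(0L, delta); // clamp so the refresher never sleeps a negative amount
    }

    public static void main(String[] args) {
        // Timestamps taken from the log excerpt above.
        Instant now = Instant.parse("2019-02-14T06:42:45.694Z");
        Instant refreshAt = Instant.parse("2019-02-14T06:01:39.078Z"); // already in the past
        long raw = Duration.between(now, refreshAt).toMillis();
        System.out.println("raw delay ms = " + raw);                   // negative: would throw in Thread.sleep
        System.out.println("clamped = " + safeSleepMillis(now, refreshAt)); // 0: refresh immediately
    }
}
```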





[DISCUSS] KIP-431: Support of printing additional ConsumerRecord fields in DefaultMessageFormatter

2019-02-18 Thread Mateusz Zakarczemny
Hi all,

I have created a KIP to support printing additional message fields in the
console consumer:
KIP-431 - Support of printing additional ConsumerRecord fields in
DefaultMessageFormatter


The main purpose of the proposed change is to allow printing the message
offset, partition, and headers in the console consumer. The changes are
backward compatible and impact only console consumer parameters.
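To make the intent concrete, here is a hypothetical sketch of the kind of line such a formatter might emit (the layout and field labels are invented; the actual output in the KIP may differ):

```java
public class RecordFormatDemo {
    // Hypothetical formatter output illustrating what KIP-431 proposes for the
    // console consumer: offset, partition, and headers alongside key and value.
    static String format(String topic, int partition, long offset,
                         String headers, String key, String value) {
        return "Partition:" + partition
             + "\tOffset:" + offset
             + "\tHeaders:" + headers
             + "\t" + key + "\t" + value;
    }

    public static void main(String[] args) {
        // Example record fields (made up for illustration).
        System.out.println(format("invoices", 3, 42L, "source=erp", "client-17", "{\"total\":100}"));
    }
}
```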

PR: https://github.com/apache/kafka/pull/4807
Jira ticket: KAFKA-6733 

I'm waiting for your feedback.

Regards,
Mateusz Zakarczemny


Build failed in Jenkins: kafka-trunk-jdk11 #296

2019-02-18 Thread Apache Jenkins Server
See 


Changes:

[mjsax] KAFKA-7895: Fix stream-time reckoning for suppress (#6278)

[rajinisivaram] KAFKA-7935: UNSUPPORTED_COMPRESSION_TYPE if 
ReplicaManager.getLogConfig

--
[...truncated 2.31 MB...]

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTLFirstCancelThenScheduleRestart PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testTransformNullConfiguration STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testTransformNullConfiguration PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariable STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariable PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTL STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTL PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitFailure 
STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitFailure 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitSuccessFollowedByFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitSuccessFollowedByFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testRewindOnRebalanceDuringPoll STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testRewindOnRebalanceDuringPoll PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #3396

2019-02-18 Thread Apache Jenkins Server
See 


Changes:

[mjsax] KAFKA-7895: Fix stream-time reckoning for suppress (#6278)

[rajinisivaram] KAFKA-7935: UNSUPPORTED_COMPRESSION_TYPE if 
ReplicaManager.getLogConfig

--
[...truncated 2.56 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

[jira] [Created] (KAFKA-7944) Add more natural Suppress test

2019-02-18 Thread John Roesler (JIRA)
John Roesler created KAFKA-7944:
---

 Summary: Add more natural Suppress test
 Key: KAFKA-7944
 URL: https://issues.apache.org/jira/browse/KAFKA-7944
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler


Can be integration or system test.

The idea is to test with tighter time bounds, e.g. windows of 30 seconds, and 
to use system time without adding any extra time for verification.

Currently, all the tests rely on artificially advancing system time, which 
should be reliable, but you never know. So we want to add a test that works 
exactly like production code.





[jira] [Created] (KAFKA-7943) Add Suppress system test with caching disabled.

2019-02-18 Thread John Roesler (JIRA)
John Roesler created KAFKA-7943:
---

 Summary: Add Suppress system test with caching disabled.
 Key: KAFKA-7943
 URL: https://issues.apache.org/jira/browse/KAFKA-7943
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler
Assignee: John Roesler








Re: [VOTE] KIP-412: Extend Admin API to support dynamic application log levels

2019-02-18 Thread Rajini Sivaram
Thanks for the KIP, Stanislav!

+1 (binding)

Regards,

Rajini


On Mon, Feb 18, 2019 at 1:54 PM Mickael Maison 
wrote:

> +1 (non-binding)
> We've used a tweaked JmxTool class to change log level for a while but
> this is a significant improvement!
> Thanks for the KIP
>
> On Mon, Feb 18, 2019 at 11:48 AM Stanislav Kozlovski
>  wrote:
> >
> > Hey everybody, I'm starting a VOTE thread for KIP-412. This feature
> > should significantly improve the flexibility and ease of debugging
> > Kafka at runtime.
> >
> > KIP-412 -
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels
> >
> >
> > --
> > Best,
> > Stanislav
>


Kafka Topic Volume and (possibly ACL) question

2019-02-18 Thread M. Manna
Hello,

We have a business requirement to publish certain data only to a specific set
of clients. For example, an invoice update shouldn't go to all clients, only to
the specific client, but a company remittance notice should be published to all
clients. Also, in some cases, a specific client changes some contract info,
which is published in a P2P fashion. We have about 8k clients.

What is the ideal way to control this flow?

1) specific topic per client
2) Some form of ACL?

For option 1, we are not 100% sure whether Kafka can handle 8k topics (or what
the resource impact would be, for that matter). Has anyone solved a similar
business problem? If so, would you mind sharing your solution?
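For what it's worth, option 1 usually comes down to a routing rule on the producer side. A sketch, with an invented naming scheme and event types (not a recommendation, just to make the option concrete):

```java
public class RoutingDemo {
    // Option 1 from the question: broadcast events share one topic, while
    // client-specific events go to a per-client topic. The topic naming scheme
    // and event types here are made up for illustration.
    static String topicFor(String eventType, String clientId) {
        if (eventType.equals("remittance")) {
            return "broadcast.updates";           // published once, readable by all clients
        }
        return "client." + clientId + ".updates"; // visible only to that client
    }

    public static void main(String[] args) {
        System.out.println(topicFor("invoice", "acme"));     // client.acme.updates
        System.out.println(topicFor("remittance", "acme"));  // broadcast.updates
    }
}
```

Option 2 keeps fewer topics and instead restricts reads with per-client ACLs; the trade-off is topic count and metadata overhead versus ACL management for 8k principals.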

Btw, we are not using the streams platform; it's simply pub-sub, because we
don't need real-time aggregation of various items. For us, it's key that the
synchronisation occurs with "exactly-once" semantics.

Thanks,


Re: [VOTE] KIP-412: Extend Admin API to support dynamic application log levels

2019-02-18 Thread Mickael Maison
+1 (non-binding)
We've used a tweaked JmxTool class to change log level for a while but
this is a significant improvement!
Thanks for the KIP

On Mon, Feb 18, 2019 at 11:48 AM Stanislav Kozlovski
 wrote:
>
> Hey everybody, I'm starting a VOTE thread for KIP-412. This feature should
> significantly improve the flexibility and ease of debugging Kafka at
> runtime.
>
> KIP-412 -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels
>
>
> --
> Best,
> Stanislav


[VOTE] KIP-412: Extend Admin API to support dynamic application log levels

2019-02-18 Thread Stanislav Kozlovski
Hey everybody, I'm starting a VOTE thread for KIP-412. This feature should
significantly improve the flexibility and ease of debugging Kafka at runtime.

KIP-412 -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels


-- 
Best,
Stanislav
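The core idea behind KIP-412, changing a logger's verbosity at runtime without a restart, can be sketched with the JDK's own logging API (an illustration only: the KIP itself routes the same kind of mutation through Kafka's Admin API to the broker's log4j loggers, and the logger name below is just an example):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DynamicLogLevelDemo {
    public static void main(String[] args) {
        // Obtain a named logger and dial its verbosity up for a debugging
        // session, then back down, all while the process keeps running.
        Logger logger = Logger.getLogger("kafka.request.logger");

        logger.setLevel(Level.WARNING);
        System.out.println(logger.getLevel()); // WARNING

        logger.setLevel(Level.FINE);           // temporarily verbose
        System.out.println(logger.getLevel()); // FINE
    }
}
```

The benefit the KIP targets is exactly this kind of live adjustment, but done remotely and uniformly across brokers instead of via JMX workarounds.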


Build failed in Jenkins: kafka-trunk-jdk8 #3395

2019-02-18 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6569: Move OffsetIndex/TimeIndex logger to companion object 

--
[...truncated 2.30 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-11:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-11:compileTestJava
> Task :streams:upgrade-system-tests-11:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:testClasses
> Task :streams:upgrade-system-tests-11:checkstyleTest
> Task :streams:upgrade-system-tests-11:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-11:test
> Task :streams:upgrade-system-tests-20:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-20:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-20:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-20:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-20:compileTestJava
> Task :streams:upgrade-system-tests-20:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-20:testClasses
> Task :streams:upgrade-system-tests-20:checkstyleTest
> Task :streams:upgrade-system-tests-20:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-20:test
> Task :streams:upgrade-system-tests-21:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-21:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-21:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:compileTestJava
> Task :streams:upgrade-system-tests-21:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-21:testClasses
> Task :streams:upgrade-system-tests-21:checkstyleTest
> Task :streams:upgrade-system-tests-21:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-21:test

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED


[jira] [Created] (KAFKA-7942) Monitoring consumer lag by time

2019-02-18 Thread Yining Liu (JIRA)
Yining Liu created KAFKA-7942:
-

 Summary: Monitoring consumer lag by time
 Key: KAFKA-7942
 URL: https://issues.apache.org/jira/browse/KAFKA-7942
 Project: Kafka
  Issue Type: Wish
  Components: consumer, metrics
Reporter: Yining Liu


Currently, we can collect the number of messages by which a consumer lags
behind the producer. However, if messages arrive in a partition sporadically,
it is not easy to define a consumer lag alert threshold. It would be helpful
to know how long the consumer has been lagging behind the latest message.

There are two options:
 # expose the per-partition lag time via consumer JMX metrics
 # expose the consumer lag time in kafka-consumer-groups.sh
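The sporadic-traffic problem behind this wish can be sketched in a few lines of Python (an illustration only, not Kafka code; the message list and timestamps are made up): the offset lag stays tiny while the time lag grows without bound.

```python
def offset_lag(log_end_offset, committed_offset):
    """Classic lag: number of messages the consumer still has to read."""
    return log_end_offset - committed_offset

def time_lag_ms(messages, committed_offset, now_ms):
    """Time-based lag: age of the oldest message not yet consumed.

    messages: list of (offset, timestamp_ms) pairs, ascending by offset.
    Returns 0 when the consumer is fully caught up.
    """
    pending = [ts for off, ts in messages if off >= committed_offset]
    if not pending:
        return 0
    return now_ms - pending[0]

# A sporadic partition: two old messages arrive, then nothing for an hour.
msgs = [(100, 1_000_000), (101, 1_001_000)]
now = 4_600_000  # one hour after the first pending message

print(offset_lag(log_end_offset=102, committed_offset=100))  # 2 messages behind
print(time_lag_ms(msgs, committed_offset=100, now_ms=now))   # 3600000 ms behind
```

With an offset-based alert, a threshold of "more than 2 messages" never fires here, yet the consumer is an hour behind; a time-based metric captures that directly.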



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-430 - Return Authorized Operations in Describe Responses

2019-02-18 Thread Rajini Sivaram
Hi Magnus,

Have your concerns been addressed in the KIP?

Thanks,

Rajini

On Wed, Feb 13, 2019 at 3:33 PM Satish Duggana 
wrote:

> Hi Rajini,
> That makes sense, thanks for the clarification.
>
> Satish.
>
> On Wed, Feb 13, 2019 at 7:30 PM Rajini Sivaram 
> wrote:
> >
> > Thanks for the reviews!
> >
> > Hi Satish,
> >
> > The authorised operations returned will use the same values as the
> > operations returned by the existing DescribeAclsResponse. AdminClient will
> > return these using the existing enum AclOperation.
> >
> > Hi Magnus,
> >
> > The MetadataResponse contains these two lines:
> >
> >    - Metadata Response => throttle_time_ms [brokers] cluster_id controller_id [topic_metadata] [authorized_operations]  <== ADDED authorized_operations
> >    - topic_metadata => error_code topic is_internal [partition_metadata] [authorized_operations]  <== ADDED authorized_operations
> >
> > The first is for the cluster's authorized operations and the second for
> > each topic. Did I misunderstand your question? The full set of operations
> > for each resource type is included in the subsection `AdminClient API
> > Changes`.
> >
> > Under `Rejected Alternatives` I have included addition of a separate
> > request to get this information rather than extend an existing one. The
> > rationale for including all the information in one request is to enable
> > clients to get all relevant metadata using a single API rather than have to
> > send multiple requests, get responses and combine the two while resources or
> > ACLs may have changed in between. It seems neater to use a single API since
> > a user getting authorized operations is almost definitely going to do a
> > Describe first and access control for both of these is controlled using
> > Describe access. If we add new resource types with a corresponding
> > Describe, we would simply need to add `authorized_operations` for their
> > Describe.
> >
> > Hi Manikumar,
> >
> > Added IdempotentWrite for Cluster, thanks for pointing that out! I was
> > thinking that if authorizer is not configured, we could return all
> > supported operations since the user can perform all operations. Added a
> > note to the KIP.
> >
> > Regards,
> >
> > Rajini
> >
> >
> >
> > On Wed, Feb 13, 2019 at 11:07 AM Manikumar 
> > wrote:
> >
> > > Hi,
> > >
> > > Thanks for the KIP.
> > >
> > > 1. Can't we include IdempotentWrite/ClusterResource Operations for the
> > > Cluster resource?
> > > 2. What will be the API behaviour when the authorizer is not
> > > configured? I assume we return an empty list.
> > >
> > > Thanks,
> > > Manikumar
> > >
> > > On Wed, Feb 13, 2019 at 12:33 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I have created a KIP to optionally request authorised operations on
> > > > resources when describing resources:
> > > >
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses
> > > >
> > > > This includes only information that users with Describe access can
> obtain
> > > > using other means and hence is consistent with our security model. It is
> > > > intended to make it easier for clients to obtain this information.
> > > >
> > > > Feedback and suggestions welcome.
> > > >
> > > > Thank you,
> > > >
> > > > Rajini
> > > >
> > >
>
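As a rough sketch of how a client might interpret the proposed `authorized_operations` field, the snippet below maps operation codes to readable names. The code-to-name table is an assumption modeled on Kafka's existing `AclOperation` enum; treat the exact numeric values as illustrative rather than the wire format the KIP finally settles on.

```python
# Assumed mapping of operation codes to names, modeled on Kafka's
# AclOperation enum; the authoritative codes are defined by the protocol.
ACL_OPERATIONS = {
    2: "ALL", 3: "READ", 4: "WRITE", 5: "CREATE", 6: "DELETE",
    7: "ALTER", 8: "DESCRIBE", 9: "CLUSTER_ACTION",
    10: "DESCRIBE_CONFIGS", 11: "ALTER_CONFIGS", 12: "IDEMPOTENT_WRITE",
}

def decode_authorized_operations(codes):
    """Translate an authorized_operations array into readable names."""
    return [ACL_OPERATIONS.get(c, "UNKNOWN") for c in codes]

# e.g. a topic on which the caller may read, write, and describe:
print(decode_authorized_operations([3, 4, 8]))  # ['READ', 'WRITE', 'DESCRIBE']
```

This mirrors the KIP's intent: after a single Describe-style request, the client already knows which follow-up operations it is allowed to attempt, without a separate DescribeAcls round trip.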


[jira] [Resolved] (KAFKA-6569) Reflection in OffsetIndex and TimeIndex construction

2019-02-18 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6569.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Reflection in OffsetIndex and TimeIndex construction
> 
>
> Key: KAFKA-6569
> URL: https://issues.apache.org/jira/browse/KAFKA-6569
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Kyle Ambroff-Kao
>Assignee: Kyle Ambroff-Kao
>Priority: Major
> Fix For: 2.3.0
>
> Attachments: after.png, before.png
>
>
> kafka.log.AbstractIndex uses the Logging mixin to lazily initialize loggers 
> for any concrete type that inherits from it. This works great, except that 
> the LazyLogging trait uses reflection to compute the logger name.
> When you have hundreds of thousands of log segments to load on startup, the 
> extra cost adds up.
> I've attached flame graphs from broker startup on a broker that has about 
> 12TB of log segments to load, and a second flame graph after changing 
> AbstractIndex to statically initialize a logger.
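The cost difference is easy to reproduce in miniature. The Python sketch below is only an analogue of the Scala change (it is not the Kafka patch): one class resolves its logger name via runtime type inspection every time an index object is created, while the other shares a logger resolved once, as a companion-object logger would be.

```python
import logging

class ReflectiveLoggerIndex:
    """Analogue of mixing in a Logging trait: the logger name is
    derived from the runtime type on every instantiation."""
    def __init__(self):
        self.logger = logging.getLogger(
            type(self).__module__ + "." + type(self).__qualname__)

# Resolved exactly once at module load, like a companion-object logger.
_STATIC_LOG = logging.getLogger("kafka.log.AbstractIndex")

class StaticLoggerIndex:
    """Analogue of the companion-object fix: all instances share one
    logger, so construction does no name computation at all."""
    logger = _STATIC_LOG

# With hundreds of thousands of segments, the per-instance name lookup
# adds up; the shared-logger variant skips it entirely.
a, b = StaticLoggerIndex(), StaticLoggerIndex()
print(a.logger is b.logger)  # True: a single shared logger instance
```

The real fix (moving the OffsetIndex/TimeIndex loggers to a companion object, per the commit referenced above) has the same shape: pay the logger-name cost once per class rather than once per index.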



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)