Build failed in Jenkins: kafka-trunk-jdk8 #3407

2019-02-21 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7763; Calls to commitTransaction and abortTransaction should not

[bbejeck] MINOR: Missing punctuation marks in quickstart (#5755)

--
[...truncated 4.62 MB...]
org.apache.kafka.connect.runtime.ConnectMetricsTest > testGettingGroupWithTags 
STARTED

org.apache.kafka.connect.runtime.ConnectMetricsTest > testGettingGroupWithTags 
PASSED

org.apache.kafka.connect.runtime.ConnectMetricsTest > testRecreateWithClose 
STARTED

org.apache.kafka.connect.runtime.ConnectMetricsTest > testRecreateWithClose 
PASSED

org.apache.kafka.connect.runtime.ConnectMetricsTest > testKafkaMetricsNotNull 
STARTED

org.apache.kafka.connect.runtime.ConnectMetricsTest > testKafkaMetricsNotNull 
PASSED

org.apache.kafka.connect.converters.ShortConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.ShortConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.ShortConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.ShortConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.ShortConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.ShortConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.ShortConverterTest > testNullToBytes STARTED

org.apache.kafka.connect.converters.ShortConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.ShortConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.ShortConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.ShortConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.ShortConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.ShortConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.ShortConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.FloatConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > testNullToBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue PASSED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.LongConverterTest > testBytesNullToNumber 
PASSED

Re: [VOTE] KIP-430 - Return Authorized Operations in Describe Responses

2019-02-21 Thread Gwen Shapira
+1

On Thu, Feb 21, 2019 at 2:28 AM Rajini Sivaram  wrote:
>
> I would like to start vote on KIP-430 to optionally obtain authorized
> operations when describing resources:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses
>
> Thank you,
>
> Rajini



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Build failed in Jenkins: kafka-trunk-jdk11 #307

2019-02-21 Thread Apache Jenkins Server
See 


Changes:

[bbejeck] MINOR: Missing punctuation marks in quickstart (#5755)

--
[...truncated 2.33 MB...]
org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix STARTED

org.apache.kafka.streams.StreamsConfigTest > 
testGetRestoreConsumerConfigsWithRestoreConsumerOverridenPrefix PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionIfMaxInFlightRequestsPerConnectionIsInvalidStringIfEosEnabled
 STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowConfigExceptionIfMaxInFlightRequestsPerConnectionIsInvalidStringIfEosEnabled
 PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfProducerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedGlobalConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedGlobalConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
consumerConfigShouldContainAdminClientConfigsForRetriesAndRetryBackOffMsWithAdminPrefix
 STARTED

org.apache.kafka.streams.StreamsConfigTest > 
consumerConfigShouldContainAdminClientConfigsForRetriesAndRetryBackOffMsWithAdminPrefix
 PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConifgsOnRestoreConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfGlobalConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfGlobalConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptExactlyOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
testGetGlobalConsumerConfigsWithGlobalConsumerOverridenPrefix STARTED

org.apache.kafka.streams.StreamsConfigTest > 
testGetGlobalConsumerConfigsWithGlobalConsumerOverridenPrefix PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInFlightRequestsGreaterThanFiveIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfMaxInFlightRequestsGreaterThanFiveIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetInternalLeaveGroupOnCloseConfigToFalseInConsumer PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedPropertiesThatAreNotPartOfConsumerConfig PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldForwardCustomConfigsWithNoPrefixToAllClients STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldForwardCustomConfigsWithNoPrefixToAllClients PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfRestoreConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfRestoreConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyCorrectValueSerdeClassOnError STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSpecifyCorrectValueSerdeClassOnError PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowToSpecifyMaxInFlightRequestsPerConnectionAsStringIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowToSpecifyMaxInFlightRequestsPerConnectionAsStringIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfGlobalConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfGlobalConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerIsolationLevelIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerIsolationLevelIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedGlobalConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldBeSupportNonPrefixedGlobalConsumerConfigs 

[jira] [Created] (KAFKA-7980) Flaky Test SocketServerTest#testConnectionRateLimit

2019-02-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7980:
--

 Summary: Flaky Test SocketServerTest#testConnectionRateLimit
 Key: KAFKA-7980
 URL: https://issues.apache.org/jira/browse/KAFKA-7980
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0


To get stable nightly builds for the `2.2` release, I am creating tickets for all 
observed test failures.

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
{quote}
java.lang.AssertionError: Connections created too quickly: 4
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.assertTrue(Assert.java:41)
	at kafka.network.SocketServerTest.testConnectionRateLimit(SocketServerTest.scala:1122)
{quote}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7979) Flaky Test PartitionTest#testDelayedFetchAfterAppendRecords

2019-02-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7979:
--

 Summary: Flaky Test 
PartitionTest#testDelayedFetchAfterAppendRecords
 Key: KAFKA-7979
 URL: https://issues.apache.org/jira/browse/KAFKA-7979
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0
 Attachments: error_2018_02_21.log

To get stable nightly builds for the `2.2` release, I am creating tickets for all 
observed test failures.

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]

Error log enclosed in file.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3406

2019-02-21 Thread Apache Jenkins Server
See 


Changes:

[vahid.hashemian] Update kafka-run-class.bat (#6291)

[github] MINOR: Subscribe/assign calls should be logged at info level (#6299)

[github] MINOR: Handle Metadata v0 all topics requests during parsing (#6300)

--
[...truncated 4.61 MB...]
org.apache.kafka.connect.data.ValuesTest > shouldConvertEmptyString STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertEmptyString PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMapWithIntValuesWithOnlyBlankEntries STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMapWithIntValuesWithOnlyBlankEntries PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertBooleanValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertBooleanValues PASSED

org.apache.kafka.connect.data.ValuesTest > canConsume STARTED

org.apache.kafka.connect.data.ValuesTest > canConsume PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithoutWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithoutWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringListWithExtraDelimitersAndReturnString STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringListWithExtraDelimitersAndReturnString PASSED

org.apache.kafka.connect.data.ValuesTest > shouldFailToParseStringOfEmptyMap 
STARTED

org.apache.kafka.connect.data.ValuesTest > shouldFailToParseStringOfEmptyMap 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToConvertToListFromStringWithExtraDelimiters STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToConvertToListFromStringWithExtraDelimiters PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > shouldParseStringsWithoutDelimiters 
STARTED

org.apache.kafka.connect.data.ValuesTest > shouldParseStringsWithoutDelimiters 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMapWithIntValuesWithBlankEntry STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMapWithIntValuesWithBlankEntry PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMapWithIntValuesWithBlankEntries STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMapWithIntValuesWithBlankEntries PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertMapWithStringKeysAndIntegerValues STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertMapWithStringKeysAndIntegerValues PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertDateValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertDateValues PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseInvalidBooleanValueString STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseInvalidBooleanValueString PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimestampValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimestampValues PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertNullValue STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertNullValue PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimeValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimeValues PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMalformedMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMalformedMap PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithoutWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithoutWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithStringValues 
STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithStringValues 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithEscapedDelimiters STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithEscapedDelimiters PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithOnlyNumericElementTypesIntoListOfLargestNumericType
 STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithOnlyNumericElementTypesIntoListOfLargestNumericType
 PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > 

[jira] [Created] (KAFKA-7978) Flaky Test SaslSslAdminClientIntegrationTest#testConsumerGroups

2019-02-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7978:
--

 Summary: Flaky Test 
SaslSslAdminClientIntegrationTest#testConsumerGroups
 Key: KAFKA-7978
 URL: https://issues.apache.org/jira/browse/KAFKA-7978
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0


To get stable nightly builds for the `2.2` release, I am creating tickets for all 
observed test failures.

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
{quote}
java.lang.AssertionError: expected:<2> but was:<0>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:834)
	at org.junit.Assert.assertEquals(Assert.java:645)
	at org.junit.Assert.assertEquals(Assert.java:631)
	at kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1157)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7977) Flaky Test ReassignPartitionsClusterTest#shouldOnlyThrottleMovingReplicas

2019-02-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7977:
--

 Summary: Flaky Test 
ReassignPartitionsClusterTest#shouldOnlyThrottleMovingReplicas
 Key: KAFKA-7977
 URL: https://issues.apache.org/jira/browse/KAFKA-7977
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0


To get stable nightly builds for the `2.2` release, I am creating tickets for all 
observed test failures.

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
{quote}
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /brokers/topics/topic1
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
	at kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:534)
	at kafka.zk.KafkaZkClient.$anonfun$getReplicaAssignmentForTopics$2(KafkaZkClient.scala:579)
	at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:244)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at scala.collection.TraversableLike.flatMap(TraversableLike.scala:244)
	at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:241)
	at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
	at kafka.zk.KafkaZkClient.getReplicaAssignmentForTopics(KafkaZkClient.scala:574)
	at kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:338)
	at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:209)
	at kafka.admin.ReassignPartitionsClusterTest.shouldOnlyThrottleMovingReplicas(ReassignPartitionsClusterTest.scala:343)
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7976) Flaky Test DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable

2019-02-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7976:
--

 Summary: Flaky Test 
DynamicBrokerReconfigurationTest#testUncleanLeaderElectionEnable
 Key: KAFKA-7976
 URL: https://issues.apache.org/jira/browse/KAFKA-7976
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0


To get stable nightly builds for the `2.2` release, I am creating tickets for all 
observed test failures.

[https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/28/]
{quote}java.lang.AssertionError: Unclean leader not elected
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.server.DynamicBrokerReconfigurationTest.testUncleanLeaderElectionEnable(DynamicBrokerReconfigurationTest.scala:488){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk11 #306

2019-02-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7975) Provide client API version to authorizer

2019-02-21 Thread Ying Zheng (JIRA)
Ying Zheng created KAFKA-7975:
-

 Summary: Provide client API version to authorizer
 Key: KAFKA-7975
 URL: https://issues.apache.org/jira/browse/KAFKA-7975
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Ying Zheng
Assignee: Ying Zheng






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.2-jdk8 #28

2019-02-21 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Subscribe/assign calls should be logged at info level (#6299)

--
[...truncated 2.67 MB...]

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered STARTED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered PASSED

kafka.utils.LoggingTest > testLogName STARTED

kafka.utils.LoggingTest > testLogName PASSED

kafka.utils.LoggingTest > testLogNameOverride STARTED

kafka.utils.LoggingTest > testLogNameOverride PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.CoreUtilsTest > testAbs STARTED

kafka.utils.CoreUtilsTest > testAbs PASSED

kafka.utils.CoreUtilsTest > testReplaceSuffix STARTED

kafka.utils.CoreUtilsTest > testReplaceSuffix PASSED

kafka.utils.CoreUtilsTest > testCircularIterator STARTED

kafka.utils.CoreUtilsTest > testCircularIterator PASSED

kafka.utils.CoreUtilsTest > testReadBytes STARTED

kafka.utils.CoreUtilsTest > testReadBytes PASSED

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED


[jira] [Resolved] (KAFKA-5510) Streams should commit all offsets regularly

2019-02-21 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5510.

Resolution: Won't Do

> Streams should commit all offsets regularly
> ---
>
> Key: KAFKA-5510
> URL: https://issues.apache.org/jira/browse/KAFKA-5510
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>
> Currently, Streams commits offsets only for partitions it has processed records 
> for. Thus, if a partition does not receive any data for longer than 
> {{offsets.retention.minutes}} (default 1 day), the latest committed offset 
> gets lost. On failure or restart, {{auto.offset.reset}} kicks in, potentially 
> resulting in reprocessing old data.
> Thus, Streams should commit _all_ offsets on a regular basis. Not sure what 
> the overhead of a commit is -- if it's too expensive to commit all offsets on 
> every regular commit, we could also have a second config that specifies a 
> "commit.all.interval".
> This relates to https://issues.apache.org/jira/browse/KAFKA-3806, so we 
> should sync to get a solid overall solution.
> At the same time, it might be better to change the semantics of 
> {{offsets.retention.minutes}} in the first place. It might be better to apply 
> this setting only if the consumer group is completely dead (and not on a "last 
> commit", "per partition" basis). Thus, this JIRA would be a workaround fix 
> if core cannot be changed quickly enough.
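
For context, a minimal sketch of the configurations the description refers to; the
values below are illustrative only, not recommendations:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class OffsetRetentionConfigsSketch {
    public static void main(String[] args) {
        // Broker side: how long committed offsets are retained (1 day was the default at the time).
        Properties brokerProps = new Properties();
        brokerProps.put("offsets.retention.minutes", "1440");

        // Streams side: commits happen on this interval, but only for partitions that
        // processed records; auto.offset.reset decides what happens once a committed
        // offset has expired on the broker.
        Properties streamsProps = new Properties();
        streamsProps.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, "30000");
        streamsProps.put(StreamsConfig.consumerPrefix(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG), "earliest");

        System.out.println(brokerProps);
        System.out.println(streamsProps);
    }
}
{code}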



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk11 #305

2019-02-21 Thread Apache Jenkins Server
See 


Changes:

[vahid.hashemian] Update kafka-run-class.bat (#6291)

[github] MINOR: Subscribe/assign calls should be logged at info level (#6299)

[github] MINOR: Handle Metadata v0 all topics requests during parsing (#6300)

--
[...truncated 2.31 MB...]

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithTimeout STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithTimeout PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentExecWithNormalExit PASSED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions STARTED

org.apache.kafka.trogdor.agent.AgentTest > testWorkerCompletions PASSED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks STARTED

org.apache.kafka.trogdor.agent.AgentTest > testAgentFinishesTasks PASSED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle STARTED

org.apache.kafka.trogdor.workload.ThrottleTest > testThrottle PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testConstantPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testConstantPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testSequentialPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testSequentialPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testNullPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGenerator PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > testPayloadIterator 
PASSED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes STARTED

org.apache.kafka.trogdor.workload.PayloadGeneratorTest > 
testUniformRandomPayloadGeneratorPaddingBytes PASSED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithSomePartitions STARTED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithSomePartitions PASSED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithNoPartitions STARTED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testMaterializeTopicsWithNoPartitions PASSED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testInvalidTopicNameRaisesExceptionInMaterialize STARTED

org.apache.kafka.trogdor.workload.ConsumeBenchSpecTest > 
testInvalidTopicNameRaisesExceptionInMaterialize PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramPercentiles 
PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramSamples PASSED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage STARTED

org.apache.kafka.trogdor.workload.HistogramTest > testHistogramAverage PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionNumbers PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testMaterialize PASSED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec STARTED

org.apache.kafka.trogdor.workload.TopicsSpecTest > testPartitionsSpec PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithFailedExit PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessNotFound PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessForceKillTimeout PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > testProcessStop 
PASSED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit STARTED

org.apache.kafka.trogdor.workload.ExternalCommandWorkerTest > 
testProcessWithNormalExit PASSED

org.apache.kafka.trogdor.common.WorkerUtilsTest > testCreatesNotExistingTopics 
STARTED

org.apache.kafka.trogdor.common.WorkerUtilsTest > 

[jira] [Created] (KAFKA-7974) KafkaAdminClient loses worker thread/enters zombie state when initial DNS lookup fails

2019-02-21 Thread Nicholas Parker (JIRA)
Nicholas Parker created KAFKA-7974:
--

 Summary: KafkaAdminClient loses worker thread/enters zombie state 
when initial DNS lookup fails
 Key: KAFKA-7974
 URL: https://issues.apache.org/jira/browse/KAFKA-7974
 Project: Kafka
  Issue Type: Bug
Reporter: Nicholas Parker


Version: kafka-clients-2.1.0

I have some code that creates a KafkaAdminClient instance and then 
invokes listTopics(). I was seeing the following stacktrace in the logs, after 
which the KafkaAdminClient instance became unresponsive:
{code:java}
ERROR [kafka-admin-client-thread | adminclient-1] 2019-02-18 01:00:45,597 
KafkaThread.java:51 - Uncaught exception in thread 'kafka-admin-client-thread | 
adminclient-1':
java.lang.IllegalStateException: No entry found for connection 0
    at 
org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
    at 
org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
    at 
org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
    at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
    at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:898)
    at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1113)
    at java.lang.Thread.run(Thread.java:748){code}
From looking at the code I was able to trace down a possible cause (a simplified 
sketch of this failure path follows the list below):
 * NetworkClient.ready() invokes this.initiateConnect() as seen in the above 
stacktrace
 * NetworkClient.initiateConnect() invokes 
ClusterConnectionStates.connecting(), which internally invokes 
ClientUtils.resolve() to resolve the host when creating an entry for the 
connection.
 * If this host lookup fails, an UnknownHostException can be thrown back to 
NetworkClient.initiateConnect() and the connection entry is not created in 
ClusterConnectionStates. This exception doesn't get logged so this is a guess 
on my part.
 * NetworkClient.initiateConnect() catches the exception and attempts to call 
ClusterConnectionStates.disconnected(), which throws an IllegalStateException 
because no entry had yet been created due to the lookup failure.
 * This IllegalStateException ends up killing the worker thread and 
KafkaAdminClient gets stuck, never returning from listTopics().
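
The following is a simplified, hypothetical model of the ordering problem described
above; the names mirror, but are not, the real ClusterConnectionStates/NetworkClient
classes:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;

// Simplified model of the suspected interaction; not the actual Kafka classes.
class ConnectionStatesModel {
    private final Map<String, String> states = new HashMap<>();

    void connecting(String id, String host) throws UnknownHostException {
        // DNS resolution happens before the entry is stored, so a failed lookup
        // leaves no entry behind for this connection id.
        InetAddress.getAllByName(host);
        states.put(id, "CONNECTING");
    }

    void disconnected(String id) {
        // Mirrors the IllegalStateException in the stack trace above when the
        // entry was never created because resolution failed.
        if (!states.containsKey(id)) {
            throw new IllegalStateException("No entry found for connection " + id);
        }
        states.put(id, "DISCONNECTED");
    }
}
{code}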



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-2.1-jdk8 #134

2019-02-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-307: Allow to define custom processor names with KStreams DSL

2019-02-21 Thread Bill Bejeck
Hi Florian,

Overall the KIP LGTM.  Once you've addressed the final comments from
Matthias I think we can put this up for a vote.

Thanks,
Bill

On Wed, Feb 20, 2019 at 9:42 PM Matthias J. Sax 
wrote:

> Florian,
>
> thanks for updating the KIP (and no worries for late reply -- 2.2
> release kept us busy anyway...). Overall LGTM.
>
> Just some nits:
>
>
> KStream-Table:
>
> Do we need to list the existing stream-globalTable join methods in the
> first table (I thought it should only contain new/changing methods)?
>
> typo: `join(GlobalKTbale, KeyValueMapper, ValueJoiner, Named)`
>
> `leftJoin(GlobalKTable, KeyValueMapper, ValueJoiner)` is missing the new
> `Named` parameter.
>
> `static Joined#named(final String name)`
>  -> should be `#as(...)` instead of `named(...)`
>
> flatTransform() is missing (cf. KIP-313)
>
>
>
> KTable-table:
>
> `Suppressed#withName(String)`
>  -> should we change this to `#as(...)` too (similar to `named()`)
>
>
>
> -Matthias
>
>
>
> On 1/25/19 9:49 AM, Matthias J. Sax wrote:
> > I was reading the KIP again, and there are still some open question and
> > inconsistencies:
> >
> > For example, for `KGroupedStream#count(Named)` the KIP says that only
> > the processor will be named, while the state store name will be `PREFIX
> > + COUNT` (ie, an auto-generated name). Additionally, for
> > `KGroupedStream#count(Named, Materialized)` the processor will be named
> > according to `Named` and the store will be named according to
> > `Materialized.as()`. So far so good. It implies that naming the
> > processor and naming the store are independent. (This pattern is applied
> > to all aggregation functions, for KStream and KTable).
> >
> > However, for `KTable#filter(Predicate, Named)` the KIP says the
> > processor name and the store name are set. This sounds wrong (ie,
> > inconsistent with the first paragraph from above), because there is also
> > `KTable#filter(Predicate, Named, Materialized)`. Also note that, for the
> > first operator, the store might not be materialized at all. (This
> > issue exists for all KTable operators -- stateless and stateful).
> >
> > Finally, there is the following statement in the KIP:
> >
> >> Also, note that for all methods accepting a Materialized argument, if
> no state store name is provided then the node name will be used to
> generate one. The state store name will be the node name suffixed with
> "-table".
> >
> >
> > This contradicts the non-naming of stores from the very beginning.
> >
> >
> > Also, the KIP still contains the question about `join(GlobalKTable,
> > KeyValueMapper, ValueJoiner)` and `leftJoin(GlobalKTable,
> > KeyValueMapper, ValueJoiner)`. I think a consistent approach would be to
> > add one overload each that takes a `Named` parameter.
> >
> >
> > Thoughts?
> >
> >
> > -Matthias
> >
> >
> > On 1/17/19 2:56 PM, Bill Bejeck wrote:
> >> +1 for me on Guozhang's proposal for changes to Joined.
> >>
> >> Thanks,
> >> Bill
> >>
> >> On Thu, Jan 17, 2019 at 5:55 PM Matthias J. Sax 
> >> wrote:
> >>
> >>> Thanks for all the follow up comments!
> >>>
> >>> As I mentioned earlier, I am ok with adding overloads instead of using
> >>> Materialized to specify the processor name. Seems this is what the
> >>> majority of people prefers.
> >>>
> >>> I am also +1 on Guozhang's suggestion to deprecate `static
> >>> Joined#named()` and replace it with `static Joined#as` for consistency
> >>> and to deprecate getter `Joined#name()` for removal and introduce
> >>> `JoinedInternal` to access the name.
> >>>
> >>> @Guozhang: the vote is already up :)
> >>>
> >>>
> >>> -Matthias
> >>>
> >>> On 1/17/19 2:45 PM, Guozhang Wang wrote:
>  Wow that's a lot of discussions in 6 days! :) Just catching up and
>  sharing my two cents here:
> 
>  1. Materialized: I'm inclined to not let Materialized extend Named and
>  add the overload as well. All the rationales have been very well
>  summarized before. Just to emphasize on John's points: Materialized is
>  considered as the control object being leveraged by the optimization
>  framework to determine if the state store should be physically
>  materialized or not. So let's say the user does not want to query the
>  store (hence it can just be locally materialized) but still wants to
>  name the processor; they would need to write either
>  "count(Materialized.as(null).withName("processorName"));" or
>  "count(Named.as("processorName"));", and either of these is a bit hard
>  to explain to users, hence an overload function with two parameters
>  seems easier to understand.
> 
>  2. As for `NamedOperation`: I've left a comment about it before, i.e.
>  "1) Regarding the interface / function name, I'd propose we call the
>  interface `NamedOperation` which would be implemented by Produced /
>  Consumed / Printed / Joined / Grouped / Suppressed (note I intentionally
>  exclude Materialized here 

[jira] [Resolved] (KAFKA-7763) KafkaProducer with transactionId endless waits when network is disconnection for 10-20s

2019-02-21 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7763.

   Resolution: Fixed
Fix Version/s: 2.3.0

> KafkaProducer with transactionId endless waits when network is disconnection 
> for 10-20s
> ---
>
> Key: KAFKA-7763
> URL: https://issues.apache.org/jira/browse/KAFKA-7763
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 2.1.0
>Reporter: weasker
>Assignee: huxihx
>Priority: Blocker
> Fix For: 2.3.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> When the client disconnects from the bootstrap server, a KafkaProducer with a 
> transactionId waits endlessly on commitTransaction; the issue is the same as 
> the one below:
> https://issues.apache.org/jira/browse/KAFKA-6446
> The reproduction steps are as follows:
> 1、producer.initTransactions();
> 2、producer.beginTransaction();
> 3、producer.send(record1); // set the breakpoint here
> Key step: stop at the breakpoint above in step 3, then disconnect the network 
> manually; after 10-20 seconds restore the network and continue the program by 
> removing the breakpoint.
> 4、producer.send(record2);
> 5、producer.commitTransaction(); // endless waits
>  
> I found that version 2.1.0 modified the initTransactions method, but not the 
> commitTransaction and abortTransaction methods; I think they have the same 
> issue as initTransactions...
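
A minimal sketch of the reproduction steps above; the broker address, topic name,
and the manual network drop between the two sends are assumptions for illustration:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalHangRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "repro-txn");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("test-topic", "k1", "v1"));
            // <-- breakpoint here: cut the network for 10-20 seconds, then restore it
            producer.send(new ProducerRecord<>("test-topic", "k2", "v2"));
            producer.commitTransaction(); // reported to block forever before the fix
        }
    }
}
{code}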



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3405

2019-02-21 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6161 Add default implementation to close() and configure() for

--
[...truncated 2.31 MB...]

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchDouble STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchDouble PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapKey STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapKey PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchString STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchString PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMatchingType 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMatchingType 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testArrayEquality STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testArrayEquality PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnlyValidForStructs 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnlyValidForStructs 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMatchingLogicalType STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMatchingLogicalType PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordAndCloneHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordAndCloneHeaders PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordUsingNewHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordUsingNewHeaders PASSED

org.apache.kafka.connect.source.SourceRecordTest > shouldModifyRecordHeader 
STARTED

org.apache.kafka.connect.source.SourceRecordTest > shouldModifyRecordHeader 
PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithHeaders PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithEmtpyHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithEmtpyHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordAndCloneHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordAndCloneHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithEmptyHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithEmptyHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordUsingNewHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordUsingNewHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > shouldModifyRecordHeader STARTED

org.apache.kafka.connect.sink.SinkRecordTest > shouldModifyRecordHeader PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithHeaders PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertEmptyListToListWithoutSchema STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertEmptyListToListWithoutSchema PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertStringWithQuotesAndOtherDelimiterCharacters STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertStringWithQuotesAndOtherDelimiterCharacters PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertListWithMixedValuesToListWithoutSchema STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertListWithMixedValuesToListWithoutSchema PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeys STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeys PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldParseStringOfMapWithStringValuesWithWhitespaceAsMap STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldParseStringOfMapWithStringValuesWithWhitespaceAsMap PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeysAndShortValues STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeysAndShortValues PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldParseStringOfMapWithStringValuesWithoutWhitespaceAsMap STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 

Re: [DISCUSS] KIP-236 Interruptible Partition Reassignment

2019-02-21 Thread Harsha
Thanks George. LGTM.
Jun & Tom, Can you please take a look at the updated KIP.
Thanks,
Harsha

On Wed, Feb 20, 2019, at 12:18 PM, George Li wrote:
> Hi,
> 
> After discussing with Tom, Harsha and I are picking up KIP-236. The work 
> focuses on safely/cleanly cancelling / rolling back pending reassignments 
> in a timely fashion. The pull request is #6296. Still working on more 
> integration/system tests. 
> 
> Please review and provide feedbacks/suggestions. 
> 
> Thanks,
> George
> 
> 
> On Saturday, December 23, 2017, 0:51:13 GMT, Jun Rao  
> wrote:
> 
> Hi, Tom,

Thanks for the reply.

10. That's a good thought. Perhaps it's better to get rid of
/admin/reassignment_requests
too. The window when a controller is not available is small. So, we can
just fail the admin client if the controller is not reachable after the
timeout.

13. With the changes in 10, the old approach is handled through ZK callback
and the new approach is through Kafka RPC. The ordering between the two is
kind of arbitrary. Perhaps the ordering can just be based on the order that
the reassignment is added to the controller request queue. From there, we
can either do the overriding or the prevention.

Jun


On Fri, Dec 22, 2017 at 7:31 AM, Tom Bentley  wrote:

> Hi Jun,
>
> Thanks for responding, my replies are inline:
>
> 10. You explanation makes sense. My remaining concern is the additional ZK
> > writes in the proposal. With the proposal, we will need to do following
> > writes in ZK.
> >
> > a. write new assignment in /admin/reassignment_requests
> >
> > b. write new assignment and additional metadata in
> > /admin/reassignments/$topic/$partition
> >
> > c. write old + new assignment  in /brokers/topics/[topic]
> >
> > d. write new assignment in /brokers/topics/[topic]
> >
> > e. delete /admin/reassignments/$topic/$partition
> >
> > So, there are quite a few ZK writes. I am wondering if it's better to
> > consolidate the info in /admin/reassignments/$topic/$partition into
> > /brokers/topics/[topic].
> > For example, we can just add some new JSON fields in
> > /brokers/topics/[topic]
> > to remember the new assignment and potentially the original replica count
> > when doing step c. Those fields will then be removed in step d. That way,
> > we can get rid of step b and e, saving 2 ZK writes per partition.
> >
>
> This seems like a great idea to me.
>
> It might also be possible to get rid of the /admin/reassignment_requests
> subtree too. I've not yet published the ideas I have for the AdminClient
> API for reassigning partitions, but given the existence of such an API, the
> route to starting a reassignment would be the AdminClient, and not
> zookeeper. In that case there is no need for /admin/reassignment_requests
> at all. The only drawback that I can see is that while it's currently
> possible to trigger a reassignment even during a controller
> election/failover that would no longer be the case if all requests had to
> go via the controller.
>
>
> > 11. What you described sounds good. We could potentially optimize the
> > dropped replicas a bit more. Suppose that assignment [0,1,2] is first
> > changed to [1,2,3] and then to [2,3,4]. When initiating the second
> > assignment, we may end up dropping replica 3 only to restart it
> > again.
> > In this case, we could only drop a replica if it's not going to be added
> > back again.
> >
>
> I had missed that, thank you! I will update the proposed algorithm to
> prevent this.
>
>
> > 13. Since this is a corner case, we can either prevent or allow
> > overriding with old/new mechanisms. To me, it seems that allowing is
> > simpler to implement: the order in /admin/reassignment_requests
> > determines the ordering of the override, whether that's initiated by
> > the new way or the
> > old way.
> >
>
> That makes sense except for the corner case where:
>
> * There is no current controller and
> * Writes to both the new and old znodes happen
>
> On election of the new controller, for those partitions with both a
> reassignment_request and in /admin/reassign_partitions, we have to decide
> which should win. You could use the modification time, though there are
> some very unlikely scenarios where that doesn't work properly, for example
> if both znodes have the same mtime, or the /admin/reassign_partitions was
> updated, but the assignment of the partition wasn't changed, like this:
>
> 0. /admin/reassign_partitions has my-topic/42 = [1,2,3]
> 1. Controller stops watching.
> 2. Create /admin/reassignment_requests/request_1234 to change the
> reassignment of partition my-topic/42 = [4,5,6]
> 3. Update /admin/reassign_partitions to add your-topic/12=[7,8,9]
> 4. New controller resumes
>
>
>
> > Thanks,
> >
> > Jun
> >
> > On Tue, Dec 19, 2017 at 2:43 AM, Tom Bentley 
> > wrote:
> >
> > > Hi Jun,
> > >
> > > 10. Another concern of mine is on consistency 

Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-02-21 Thread Boyang Chen
Great, thank you Guozhang! We got 3 binding votes for KIP-300. I will close this 
thread and work on the implementation. Thanks Matthias, Bill and Guozhang!

Get Outlook for iOS


From: Guozhang Wang 
Sent: Thursday, February 21, 2019 11:17 AM
To: dev
Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

Thanks for the update Boyang. +1 from me (binding).


Guozhang

On Thu, Feb 21, 2019 at 11:08 AM Boyang Chen  wrote:

> Thank you so much Bill!
>
> Get Outlook for iOS
>
> 
> From: Bill Bejeck 
> Sent: Thursday, February 21, 2019 10:45 AM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder
>
> Thanks for the KIP Boyang.
>
> +1(binding)
> -Bill
>
> On Thu, Feb 21, 2019 at 12:31 PM Boyang Chen  wrote:
>
> > Oh, good catch Guozhang. I just renamed the function to `sessionedTable`
> > to be consistent, thanks!
> >
> > 
> > From: Guozhang Wang 
> > Sent: Friday, February 22, 2019 1:12 AM
> > To: dev
> > Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder
> >
> > Hi Boyang,
> >
> > One quick question: for time windowed tables the API signature is "
> > windowedTable" while for session windowed tables the signature is "
> > sessionTable", which seems inconsistent (adj. v.s. noun). WDYT?
> >
> > Guozhang
> >
> >
> > On Wed, Feb 20, 2019 at 6:56 PM Matthias J. Sax 
> > wrote:
> >
> > > Thanks for the KIP Boyang!
> > >
> > >
> > > +1 (binding)
> > >
> > >
> > > -Matthias
> > >
> > > On 1/16/19 9:15 AM, Boyang Chen wrote:
> > > > Hey friends,
> > > >
> > > > I would like to start the vote thread for KIP-300 so that we could
> > agree
> > > on the high level API. Feel free to continue making comment on the
> > > discussion thread.
> > > >
> > > > Best,
> > > > Boyang
> > > >
> > >
> > >
> >
> > --
> > -- Guozhang
> >
>


--
-- Guozhang


Re: [VOTE] KIP-430 - Return Authorized Operations in Describe Responses

2019-02-21 Thread Harsha
+1 (binding).

Thanks,
Harsha

On Thu, Feb 21, 2019, at 2:49 AM, Satish Duggana wrote:
> Thanks for the KIP, +1 (non-binding)
> 
> ~Satish.
> 
> On Thu, Feb 21, 2019 at 3:58 PM Rajini Sivaram  
> wrote:
> >
> > I would like to start vote on KIP-430 to optionally obtain authorized
> > operations when describing resources:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses
> >
> > Thank you,
> >
> > Rajini
>


[jira] [Created] (KAFKA-7973) Allow a Punctuator to cancel its own schedule

2019-02-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7973:
--

 Summary: Allow a Punctuator to cancel its own schedule
 Key: KAFKA-7973
 URL: https://issues.apache.org/jira/browse/KAFKA-7973
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


As reported in this question on SO 
([https://stackoverflow.com/questions/54803124/cancel-punctuator-on-kafka-streams-after-is-triggered]),
if one registers a lot of punctuations, one needs to track them manually in case 
they should be cancelled at some point, which can be tedious.

It might be a good addition to allow a `Punctuator` to cancel itself. 
Something like:
{quote}void punctuate(long timestamp) {
    // do stuff
    if (...) {
        this.cancel();
    }
}{quote}
It's just a sketch of an idea. This ticket implies a public API change and 
requires writing a KIP: 
[https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]
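
For comparison, a sketch of the workaround available today: keeping the Cancellable
returned by ProcessorContext.schedule() and cancelling it from inside the punctuator.
The class name, someCondition() and the 10-second interval are placeholders.

{code:java}
import java.time.Duration;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.kafka.streams.processor.Cancellable;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;

class SelfCancellingPunctuation {
    void scheduleOnce(ProcessorContext context) {
        // Manual bookkeeping of the Cancellable is exactly what this ticket wants to avoid.
        AtomicReference<Cancellable> handle = new AtomicReference<>();
        handle.set(context.schedule(Duration.ofSeconds(10), PunctuationType.WALL_CLOCK_TIME,
            timestamp -> {
                // do stuff
                if (someCondition()) {
                    handle.get().cancel();
                }
            }));
    }

    private boolean someCondition() {
        return true; // placeholder for the application's own cancellation condition
    }
}
{code}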



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-02-21 Thread Guozhang Wang
Thanks for the update Boyang. +1 from me (binding).


Guozhang

On Thu, Feb 21, 2019 at 11:08 AM Boyang Chen  wrote:

> Thank you so much Bill!
>
> Get Outlook for iOS
>
> 
> From: Bill Bejeck 
> Sent: Thursday, February 21, 2019 10:45 AM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder
>
> Thanks for the KIP Boyang.
>
> +1(binding)
> -Bill
>
> On Thu, Feb 21, 2019 at 12:31 PM Boyang Chen  wrote:
>
> > Oh, good catch Guozhang. I just renamed the function to `sessionedTable`
> > to be consistent, thanks!
> >
> > 
> > From: Guozhang Wang 
> > Sent: Friday, February 22, 2019 1:12 AM
> > To: dev
> > Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder
> >
> > Hi Boyang,
> >
> > One quick question: for time windowed tables the API signature is "
> > windowedTable" while for session windowed tables the signature is "
> > sessionTable", which seems inconsistent (adj. v.s. noun). WDYT?
> >
> > Guozhang
> >
> >
> > On Wed, Feb 20, 2019 at 6:56 PM Matthias J. Sax 
> > wrote:
> >
> > > Thanks for the KIP Boyang!
> > >
> > >
> > > +1 (binding)
> > >
> > >
> > > -Matthias
> > >
> > > On 1/16/19 9:15 AM, Boyang Chen wrote:
> > > > Hey friends,
> > > >
> > > > I would like to start the vote thread for KIP-300 so that we could
> > agree
> > > on the high level API. Feel free to continue making comment on the
> > > discussion thread.
> > > >
> > > > Best,
> > > > Boyang
> > > >
> > >
> > >
> >
> > --
> > -- Guozhang
> >
>


-- 
-- Guozhang


Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-02-21 Thread Boyang Chen
Thank you so much Bill!

Get Outlook for iOS


From: Bill Bejeck 
Sent: Thursday, February 21, 2019 10:45 AM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

Thanks for the KIP Boyang.

+1(binding)
-Bill

On Thu, Feb 21, 2019 at 12:31 PM Boyang Chen  wrote:

> Oh, good catch Guozhang. I just renamed the function to `sessionedTable`
> to be consistent, thanks!
>
> 
> From: Guozhang Wang 
> Sent: Friday, February 22, 2019 1:12 AM
> To: dev
> Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder
>
> Hi Boyang,
>
> One quick question: for time windowed tables the API signature is "
> windowedTable" while for session windowed tables the signature is "
> sessionTable", which seems inconsistent (adj. v.s. noun). WDYT?
>
> Guozhang
>
>
> On Wed, Feb 20, 2019 at 6:56 PM Matthias J. Sax 
> wrote:
>
> > Thanks for the KIP Boyang!
> >
> >
> > +1 (binding)
> >
> >
> > -Matthias
> >
> > On 1/16/19 9:15 AM, Boyang Chen wrote:
> > > Hey friends,
> > >
> > > I would like to start the vote thread for KIP-300 so that we could
> agree
> > on the high level API. Feel free to continue making comment on the
> > discussion thread.
> > >
> > > Best,
> > > Boyang
> > >
> >
> >
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-432: Additional Broker-Side Opt-In for Default, Unsecure SASL/OAUTHBEARER Implementation

2019-02-21 Thread Ron Dagostino
My gut says a building block that takes a configuration as input and
exposes a representation of the potential issues found would be a good
starting point.  Then this could be leveraged as part of a command line
tool that emits the data to stdout, or it could be used during broker
startup such that any potential issues found are logged (or, more
aggressively, cause the broker to not start if that is desired).  The
return could be something as simple as an
EnumSet<PotentialConfigSecurityIssue>:

enum PotentialConfigSecurityIssue {
OAUTHBEARER_UNSECURE_TOKENS_ALLOWED, // when OAUTHBEARER is in
sasl.enabled.mechanisms but server callback handler is the unsecured
validator
PLAINTEXT_LISTENER, // if there is a PLAINTEXT listener
SSL_CLIENT_AUTH_NOT_REQUIRED, // when SSL listener is enabled but
ssl.client.auth is not required
ETC
}

Or it could be a Set of instances where each instance implements an
interface that returns one of these enum values via a getType() method;
this allows each instance to potentially be a different class and hold
additional information (like the PLAINTEXT listener's address in case that
is something that requires additional validation, though how the validation
would be plugged in is a separate issue that I can't figure out at the
moment).

It feels to me that the EnumSet<> approach is the simplest place to start,
and it might be best to allow anything more complicated to fall out as
experience with the simple starting point builds.  Making the EnumSet<>
part of the public API would then allow anyone to build upon it as they see
fit.
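
A minimal sketch of what such a check might look like, assuming a hypothetical
SecurityConfigChecker helper with deliberately simplified detection logic; only
the EnumSet shape and the issue names come from the idea above, and real broker
configs (listener-prefixed overrides, defaults, etc.) would need more care:

import java.util.EnumSet;
import java.util.Map;

enum PotentialConfigSecurityIssue {
    OAUTHBEARER_UNSECURE_TOKENS_ALLOWED,
    PLAINTEXT_LISTENER,
    SSL_CLIENT_AUTH_NOT_REQUIRED
}

final class SecurityConfigChecker {

    static EnumSet<PotentialConfigSecurityIssue> check(final Map<String, ?> brokerConfig) {
        final EnumSet<PotentialConfigSecurityIssue> issues =
                EnumSet.noneOf(PotentialConfigSecurityIssue.class);

        // A PLAINTEXT listener means unauthenticated, unencrypted access.
        if (asString(brokerConfig.get("listeners")).contains("PLAINTEXT://")) {
            issues.add(PotentialConfigSecurityIssue.PLAINTEXT_LISTENER);
        }

        // OAUTHBEARER enabled without an explicit server callback handler implies
        // the unsecured default validator.
        final boolean oauthEnabled =
                asString(brokerConfig.get("sasl.enabled.mechanisms")).contains("OAUTHBEARER");
        final boolean callbackConfigured = brokerConfig.keySet().stream()
                .anyMatch(key -> key.endsWith("oauthbearer.sasl.server.callback.handler.class"));
        if (oauthEnabled && !callbackConfigured) {
            issues.add(PotentialConfigSecurityIssue.OAUTHBEARER_UNSECURE_TOKENS_ALLOWED);
        }

        // SSL without client authentication still lets anyone connect.
        if (!"required".equals(brokerConfig.get("ssl.client.auth"))) {
            issues.add(PotentialConfigSecurityIssue.SSL_CLIENT_AUTH_NOT_REQUIRED);
        }
        return issues;
    }

    private static String asString(final Object value) {
        return value == null ? "" : value.toString();
    }
}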

Ron



On Thu, Feb 21, 2019 at 12:57 PM Stanislav Kozlovski 
wrote:

> I think that is a solid idea. The closest thing I can think of is David's
> PR about duplicate config key logging -
> https://github.com/apache/kafka/pull/6104
>
> We could either continue the pattern of checking on broker startup and
> logging a warning or create a separate tool that analyzes the configs.
>
> On Thu, Feb 21, 2019 at 3:16 PM Rajini Sivaram 
> wrote:
>
> > Hi Ron,
> >
> > Yes, a security sanity check tool could be quite useful. Let's see what
> > others think.
> >
> > Thanks,
> >
> > Rajini
> >
> >
> > On Thu, Feb 21, 2019 at 1:49 PM Ron Dagostino  wrote:
> >
> > > HI Rajini and Stan.  Thanks for the feedback.
> > >
> > > Stan, regarding the proposed config name, I couldn't think of anything
> > so I
> > > just threw in something outrageous in the hopes that it would give a
> > sense
> > > of what I was talking about while perhaps making folks chuckle a bit.
> > >
> > > Rajini, I definitely see your point.  It probably doesn't make sense to
> > > address this one particular issue (if we can even consider it an issue)
> > > when in fact it is part of a pattern that has been explicitly agreed
> upon
> > > as being appropriate.
> > >
> > > Maybe a security sanity check tool that scans the config and flags any
> of
> > > these items you mentioned, plus the OAUTHBEARER one and any others we
> can
> > > think of, would be useful?  That way the out-of-the-box experience can
> > > remain straightforward while some of the security risk that comes as a
> > > byproduct can be mitigated.
> > >
> > > Ron
> > >
> > > Ron
> > >
> > > On Thu, Feb 21, 2019 at 8:02 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi Ron,
> > > >
> > > > Thanks for the KIP. How is this different from other scenarios:
> > > >
> > > >1. Our default is to use a PLAINTEXT listener. If you forget to
> > change
> > > >that, anyone has access to your cluster
> > > >2. You may add a PLAINTEXT listener to the list of listeners in
> > > >production. May be you intended it for an interface that was
> > protected
> > > >using network segmentation, but entered the wrong address.
> > > >3. You are very security conscious and add an SSL listener. You
> must
> > > be
> > > >secure now right? Our default is `ssl.client.auth=none`, which
> means
> > > any
> > > >one can connect.
> > > >4. You use the built-in insecure PLAIN callback that stores
> > cleartext
> > > >passwords on the file system. Or enable SASL/PLAIN without SSL.
> > > >
> > > > At the moment, our defaults are intended to make it easy to get
> started
> > > > quickly. If we want to make brokers secure by default, we need an
> > > approach
> > > > that works across the board. I am not sure we have a specific issue
> > with
> > > > OAUTHBEARER apart from the fact that we don't provide a secure
> > > alternative.
> > > >
> > > >
> > > >
> > > > On Thu, Feb 21, 2019 at 12:05 PM Stanislav Kozlovski <
> > > > stanis...@confluent.io>
> > > > wrote:
> > > >
> > > > > Hey Ron, thanks for the KIP.
> > > > >
> > > > > I believe the proposed configuration setting
> > > > > `yes.virginia.i.really.do
> > > > >
> > > >
> > >
> >
> .want.to.allow.unsecured.oauthbearer.tokens.because.this.is.not.a.production.cluster`
> > > > > might be too verbose. I acknowledge that we do not want to enable
> > this
> > > in
> > > > > 

Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-02-21 Thread Bill Bejeck
Thanks for the KIP Boyang.

+1(binding)
-Bill

On Thu, Feb 21, 2019 at 12:31 PM Boyang Chen  wrote:

> Oh, good catch Guozhang. I just renamed the function to `sessionedTable`
> to be consistent, thanks!
>
> 
> From: Guozhang Wang 
> Sent: Friday, February 22, 2019 1:12 AM
> To: dev
> Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder
>
> Hi Boyang,
>
> One quick question: for time windowed tables the API signature is "
> windowedTable" while for session windowed tables the signature is "
> sessionTable", which seems inconsistent (adj. v.s. noun). WDYT?
>
> Guozhang
>
>
> On Wed, Feb 20, 2019 at 6:56 PM Matthias J. Sax 
> wrote:
>
> > Thanks for the KIP Boyang!
> >
> >
> > +1 (binding)
> >
> >
> > -Matthias
> >
> > On 1/16/19 9:15 AM, Boyang Chen wrote:
> > > Hey friends,
> > >
> > > I would like to start the vote thread for KIP-300 so that we could
> agree
> > on the high level API. Feel free to continue making comment on the
> > discussion thread.
> > >
> > > Best,
> > > Boyang
> > >
> >
> >
>
> --
> -- Guozhang
>


Re: [DISCUSS] KIP-432: Additional Broker-Side Opt-In for Default, Unsecure SASL/OAUTHBEARER Implementation

2019-02-21 Thread Stanislav Kozlovski
I think that is a solid idea. The closest thing I can think of is David's
PR about duplicate config key logging -
https://github.com/apache/kafka/pull/6104

We could either continue the pattern of checking on broker startup and
logging a warning or create a separate tool that analyzes the configs.

On Thu, Feb 21, 2019 at 3:16 PM Rajini Sivaram 
wrote:

> Hi Ron,
>
> Yes, a security sanity check tool could be quite useful. Let's see what
> others think.
>
> Thanks,
>
> Rajini
>
>
> On Thu, Feb 21, 2019 at 1:49 PM Ron Dagostino  wrote:
>
> > HI Rajini and Stan.  Thanks for the feedback.
> >
> > Stan, regarding the proposed config name, I couldn't think of anything
> so I
> > just threw in something outrageous in the hopes that it would give a
> sense
> > of what I was talking about while perhaps making folks chuckle a bit.
> >
> > Rajini, I definitely see your point.  It probably doesn't make sense to
> > address this one particular issue (if we can even consider it an issue)
> > when in fact it is part of a pattern that has been explicitly agreed upon
> > as being appropriate.
> >
> > Maybe a security sanity check tool that scans the config and flags any of
> > these items you mentioned, plus the OAUTHBEARER one and any others we can
> > think of, would be useful?  That way the out-of-the-box experience can
> > remain straightforward while some of the security risk that comes as a
> > byproduct can be mitigated.
> >
> > Ron
> >
> > Ron
> >
> > On Thu, Feb 21, 2019 at 8:02 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Ron,
> > >
> > > Thanks for the KIP. How is this different from other scenarios:
> > >
> > >1. Our default is to use a PLAINTEXT listener. If you forget to
> change
> > >that, anyone has access to your cluster
> > >2. You may add a PLAINTEXT listener to the list of listeners in
> > >production. May be you intended it for an interface that was
> protected
> > >using network segmentation, but entered the wrong address.
> > >3. You are very security conscious and add an SSL listener. You must
> > be
> > >secure now right? Our default is `ssl.client.auth=none`, which means
> > any
> > >one can connect.
> > >4. You use the built-in insecure PLAIN callback that stores
> cleartext
> > >passwords on the file system. Or enable SASL/PLAIN without SSL.
> > >
> > > At the moment, our defaults are intended to make it easy to get started
> > > quickly. If we want to make brokers secure by default, we need an
> > approach
> > > that works across the board. I am not sure we have a specific issue
> with
> > > OAUTHBEARER apart from the fact that we don't provide a secure
> > alternative.
> > >
> > >
> > >
> > > On Thu, Feb 21, 2019 at 12:05 PM Stanislav Kozlovski <
> > > stanis...@confluent.io>
> > > wrote:
> > >
> > > > Hey Ron, thanks for the KIP.
> > > >
> > > > I believe the proposed configuration setting
> > > > `yes.virginia.i.really.do
> > > >
> > >
> >
> .want.to.allow.unsecured.oauthbearer.tokens.because.this.is.not.a.production.cluster`
> > > > might be too verbose. I acknowledge that we do not want to enable
> this
> > in
> > > > production but we could maybe compromise on a more normal name.
> > > >
> > > > I am wondering whether it would be more worth it to replace the
> default
> > > > implementation with a secure one. Disabling it by default can be seen
> > as
> > > > just kicking the can down the road
> > > >
> > > > Best,
> > > > Stanislav
> > > >
> > > >
> > > >
> > > > On Wed, Feb 20, 2019 at 5:31 PM Ron Dagostino 
> > wrote:
> > > >
> > > > > Hi everyone. I created KIP-432: Additional Broker-Side Opt-In for
> > > > Default,
> > > > > Unsecure SASL/OAUTHBEARER Implementation
> > > > > <
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > > > > >
> > > > >  (
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > > > > ).
> > > > > The motivation for this KIP is as follows:
> > > > >
> > > > > The default implementation of SASL/OAUTHBEARER, as per KIP-255
> > > > > <
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75968876
> > > > > >,
> > > > > is unsecured.  This is useful for development and testing purposes,
> > and
> > > > it
> > > > > provides a great out-of-the-box experience, but it must not be used
> > in
> > > > > production because it allows the client to authenticate with any
> > > > principal
> > > > > name it wishes.  To enable the default unsecured SASL/OAUTHBEARER
> > > > > implementation on the broker side simply requires the addition of
> > > > > OAUTHBEARER to the sasl.enabled.mechanisms configuration value (for
> > > > > example:
> > > > >  sasl.enabled.mechanisms=GSSAPI,OAUTHBEARER instead of simply
> > > > > sasl.enabled.mechanisms=GSSAPI). To secure the implementation
> > requires
> > > > the
> > > > > explicit setting of the
> > > > >
> > > > >
> > > >
> > >
> >
> 

Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-02-21 Thread Boyang Chen
Oh, good catch Guozhang. I just renamed the function to `sessionedTable` to be 
consistent, thanks!


From: Guozhang Wang 
Sent: Friday, February 22, 2019 1:12 AM
To: dev
Subject: Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

Hi Boyang,

One quick question: for time windowed tables the API signature is "
windowedTable" while for session windowed tables the signature is "
sessionTable", which seems inconsistent (adj. v.s. noun). WDYT?

Guozhang


On Wed, Feb 20, 2019 at 6:56 PM Matthias J. Sax 
wrote:

> Thanks for the KIP Boyang!
>
>
> +1 (binding)
>
>
> -Matthias
>
> On 1/16/19 9:15 AM, Boyang Chen wrote:
> > Hey friends,
> >
> > I would like to start the vote thread for KIP-300 so that we could agree
> on the high level API. Feel free to continue making comment on the
> discussion thread.
> >
> > Best,
> > Boyang
> >
>
>

--
-- Guozhang


Re: [VOTE] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-02-21 Thread Guozhang Wang
Hi Boyang,

One quick question: for time windowed tables the API signature is
"windowedTable" while for session windowed tables the signature is
"sessionTable", which seems inconsistent (adj. vs. noun). WDYT?

Guozhang


On Wed, Feb 20, 2019 at 6:56 PM Matthias J. Sax 
wrote:

> Thanks for the KIP Boyang!
>
>
> +1 (binding)
>
>
> -Matthias
>
> On 1/16/19 9:15 AM, Boyang Chen wrote:
> > Hey friends,
> >
> > I would like to start the vote thread for KIP-300 so that we could agree
> on the high level API. Feel free to continue making comment on the
> discussion thread.
> >
> > Best,
> > Boyang
> >
>
>

-- 
-- Guozhang


[jira] [Resolved] (KAFKA-6161) Add default implementation to close() and configure() for Serializer, Deserializer and Serde

2019-02-21 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6161.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Add default implementation to close() and configure() for Serializer, 
> Deserializer and Serde
> 
>
> Key: KAFKA-6161
> URL: https://issues.apache.org/jira/browse/KAFKA-6161
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Evgeny Veretennikov
>Assignee: Chia-Ping Tsai
>Priority: Major
>  Labels: kip
> Fix For: 2.3.0
>
>
> {{Serializer}}, {{Deserializer}} and {{Serde}} interfaces have methods 
> {{configure()}} and {{close()}}. Pretty often one want to leave these methods 
> empty. For example, a lot of serializers inside 
> {{org.apache.kafka.common.serialization}} package have these methods empty:
> {code}
> @Override
> public void configure(Map<String, ?> configs, boolean isKey) {
> // nothing to do
> }
> @Override
> public void close() {
> // nothing to do
> }
> {code}
> To avoid such boilerplate, we may create new interfaces (like 
> {{UnconfiguredSerializer}}), in which we will define these methods empty.
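
A short sketch of what this enables, assuming the default no-op configure()/close()
methods added by this ticket (fix version 2.3.0); the UpperCaseSerializer name and
behaviour are illustrative only:
{code}
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Serializer;

// Only serialize() needs to be implemented; configure() and close() fall back
// to the new interface defaults instead of empty boilerplate overrides.
public class UpperCaseSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(final String topic, final String data) {
        return data == null ? null : data.toUpperCase().getBytes(StandardCharsets.UTF_8);
    }
}
{code}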



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7972) Replace SaslHandshake request/response with automated protocol

2019-02-21 Thread Mickael Maison (JIRA)
Mickael Maison created KAFKA-7972:
-

 Summary: Replace SaslHandshake request/response with automated 
protocol
 Key: KAFKA-7972
 URL: https://issues.apache.org/jira/browse/KAFKA-7972
 Project: Kafka
  Issue Type: Sub-task
Reporter: Mickael Maison
Assignee: Mickael Maison






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-411: Add option to make Kafka Connect task client ID values unique

2019-02-21 Thread Randall Hauch
Hi, Paul. Thanks for the update to KIP-411 to reflect adding defaults, and
creating/updating https://github.com/apache/kafka/pull/6097 to reflect this
approach.

Now that we've avoided adding a new config and have changed the default `
client.id` to include some context, the connector name, and task number, I
think it makes overriding the client ID via worker config `
producer.client.id` or `consumer.client.id` properties less valuable
because those overridden client IDs will be exactly the same for all
connectors and tasks.

On one hand, we can leave this as-is, and any users that include `
producer.client.id` and `consumer.client.id` in their worker configs keep
the same (sort of useless) behavior. In fact, most users would probably be
better off by removing these worker config properties and instead relying
upon the defaults.

On the other, similar to what Ewen suggested earlier (in a different
context), we could add support for users to optionally use
"${connectorName}" and ${task}" in their overridden client ID property and
have Connect replace these (if found) with the connector name and task
number. Any existing properties that don't use these variables would behave
as-is, but this way the users could define their own client IDs yet still
get the benefit of uniquely identifying each of the clients. For example,
if my worker config contained the following:

producer.client.id=connect-cluster-A-${connectorName}-${task}-producer
consumer.client.id=connect-cluster-A-${connectorName}-${task}-consumer

Thoughts?

Randall
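
As a rough illustration of the substitution being discussed (not Connect code;
the helper below and its behaviour are assumptions), expanding such a template
could be as simple as:

// Hypothetical helper: expands the ${connectorName} and ${task} placeholders
// mentioned above into the actual connector name and task number.
final class ClientIdTemplates {
    static String expandClientId(final String template, final String connectorName, final int taskId) {
        return template
                .replace("${connectorName}", connectorName)
                .replace("${task}", Integer.toString(taskId));
    }
}

// Example:
// ClientIdTemplates.expandClientId("connect-cluster-A-${connectorName}-${task}-producer", "my-sink", 3)
//   -> "connect-cluster-A-my-sink-3-producer"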

On Wed, Feb 20, 2019 at 3:18 PM Ryanne Dolan  wrote:

> Thanks Paul, this is great. This will make monitoring Connect a ton easier.
>
> Ryanne
>
> On Wed, Feb 20, 2019 at 1:24 PM Paul Davidson
>  wrote:
>
> > I have updated KIP-411 to propose changing the default client id - see:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-411%3A+Make+default+Kafka+Connect+worker+task+client+IDs+distinct
> >
> >
> > There is also an PR ready to go here:
> > https://github.com/apache/kafka/pull/6097
> >
> > On Fri, Jan 11, 2019 at 3:39 PM Paul Davidson 
> > wrote:
> >
> > > Hi everyone.  We seem to have agreement that the ideal approach is to
> > > alter the default client ids. Now I'm wondering about the best process
> to
> > > proceed. Will the change in default behaviour require a new KIP, given
> it
> > > will affect existing deployments?  Would I be best to repurpose this
> > > KIP-411, or am I best to  create a new KIP? Thanks!
> > >
> > > Paul
> > >
> > > On Tue, Jan 8, 2019 at 7:16 PM Randall Hauch  wrote:
> > >
> > >> Hi, Paul.
> > >>
> > >> I concur with the others, and I like the new approach that avoids a
> new
> > >> configuration, especially because it does not change the behavior for
> > >> anyone already using `producer.client.id` and/or `consumer.client.id
> `.
> > I
> > >> did leave a few comments on the PR. Perhaps the biggest one is whether
> > the
> > >> producer used for the sink task error reporter (for dead letter queue)
> > >> should be `connector-producer-`, and whether that is
> > >> distinct
> > >> enough from source tasks, which will be of the form
> > >> `connector-producer-`. Maybe it is fine. (The other
> > >> comments were minor.)
> > >>
> > >> Best regards,
> > >>
> > >> Randall
> > >>
> > >> On Mon, Jan 7, 2019 at 1:19 PM Paul Davidson <
> pdavid...@salesforce.com>
> > >> wrote:
> > >>
> > >> > Thanks all. I've submitted a new PR with a possible implementation:
> > >> > https://github.com/apache/kafka/pull/6097. Note I did not include
> the
> > >> > group
> > >> > ID as part of the default client ID, mainly to avoid the connector
> > name
> > >> > appearing twice by default. As noted in the original Jira (
> > >> > https://issues.apache.org/jira/browse/KAFKA-5061), leaving out the
> > >> group
> > >> > ID
> > >> > could lead to naming conflicts if multiple clusters run the same
> Kafka
> > >> > cluster. This would probably not be a problem for many (including
> us)
> > as
> > >> > metrics exporters can usually be configured to include a cluster ID
> > and
> > >> > guarantee uniqueness. Will be interested to hear your thoughts on
> > this.
> > >> >
> > >> > Paul
> > >> >
> > >> >
> > >> >
> > >> > On Mon, Jan 7, 2019 at 10:27 AM Ryanne Dolan  >
> > >> > wrote:
> > >> >
> > >> > > I'd also prefer to avoid the new configuration property if
> possible.
> > >> > Seems
> > >> > > like a lighter touch without it.
> > >> > >
> > >> > > Ryanne
> > >> > >
> > >> > > On Sun, Jan 6, 2019 at 7:25 PM Paul Davidson <
> > >> pdavid...@salesforce.com>
> > >> > > wrote:
> > >> > >
> > >> > > > Hi Konstantine,
> > >> > > >
> > >> > > > Thanks for your feedback!  I think my reply to Ewen covers most
> of
> > >> your
> > >> > > > points, and I mostly agree.  If there is general agreement that
> > >> > changing
> > >> > > > the default behavior is preferable to a config change I will
> > update
> > >> my
> > >> > PR
> > >> > > > to use  that approach.
> > >> > > 

Re: [DISCUSS] KIP-432: Additional Broker-Side Opt-In for Default, Unsecure SASL/OAUTHBEARER Implementation

2019-02-21 Thread Rajini Sivaram
Hi Ron,

Yes, a security sanity check tool could be quite useful. Let's see what
others think.

Thanks,

Rajini


On Thu, Feb 21, 2019 at 1:49 PM Ron Dagostino  wrote:

> HI Rajini and Stan.  Thanks for the feedback.
>
> Stan, regarding the proposed config name, I couldn't think of anything so I
> just threw in something outrageous in the hopes that it would give a sense
> of what I was talking about while perhaps making folks chuckle a bit.
>
> Rajini, I definitely see your point.  It probably doesn't make sense to
> address this one particular issue (if we can even consider it an issue)
> when in fact it is part of a pattern that has been explicitly agreed upon
> as being appropriate.
>
> Maybe a security sanity check tool that scans the config and flags any of
> these items you mentioned, plus the OAUTHBEARER one and any others we can
> think of, would be useful?  That way the out-of-the-box experience can
> remain straightforward while some of the security risk that comes as a
> byproduct can be mitigated.
>
> Ron
>
> Ron
>
> On Thu, Feb 21, 2019 at 8:02 AM Rajini Sivaram 
> wrote:
>
> > Hi Ron,
> >
> > Thanks for the KIP. How is this different from other scenarios:
> >
> >1. Our default is to use a PLAINTEXT listener. If you forget to change
> >that, anyone has access to your cluster
> >2. You may add a PLAINTEXT listener to the list of listeners in
> >production. May be you intended it for an interface that was protected
> >using network segmentation, but entered the wrong address.
> >3. You are very security conscious and add an SSL listener. You must
> be
> >secure now right? Our default is `ssl.client.auth=none`, which means
> any
> >one can connect.
> >4. You use the built-in insecure PLAIN callback that stores cleartext
> >passwords on the file system. Or enable SASL/PLAIN without SSL.
> >
> > At the moment, our defaults are intended to make it easy to get started
> > quickly. If we want to make brokers secure by default, we need an
> approach
> > that works across the board. I am not sure we have a specific issue with
> > OAUTHBEARER apart from the fact that we don't provide a secure
> alternative.
> >
> >
> >
> > On Thu, Feb 21, 2019 at 12:05 PM Stanislav Kozlovski <
> > stanis...@confluent.io>
> > wrote:
> >
> > > Hey Ron, thanks for the KIP.
> > >
> > > I believe the proposed configuration setting
> > > `yes.virginia.i.really.do
> > >
> >
> .want.to.allow.unsecured.oauthbearer.tokens.because.this.is.not.a.production.cluster`
> > > might be too verbose. I acknowledge that we do not want to enable this
> in
> > > production but we could maybe compromise on a more normal name.
> > >
> > > I am wondering whether it would be more worth it to replace the default
> > > implementation with a secure one. Disabling it by default can be seen
> as
> > > just kicking the can down the road
> > >
> > > Best,
> > > Stanislav
> > >
> > >
> > >
> > > On Wed, Feb 20, 2019 at 5:31 PM Ron Dagostino 
> wrote:
> > >
> > > > Hi everyone. I created KIP-432: Additional Broker-Side Opt-In for
> > > Default,
> > > > Unsecure SASL/OAUTHBEARER Implementation
> > > > <
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > > > >
> > > >  (
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > > > ).
> > > > The motivation for this KIP is as follows:
> > > >
> > > > The default implementation of SASL/OAUTHBEARER, as per KIP-255
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75968876
> > > > >,
> > > > is unsecured.  This is useful for development and testing purposes,
> and
> > > it
> > > > provides a great out-of-the-box experience, but it must not be used
> in
> > > > production because it allows the client to authenticate with any
> > > principal
> > > > name it wishes.  To enable the default unsecured SASL/OAUTHBEARER
> > > > implementation on the broker side simply requires the addition of
> > > > OAUTHBEARER to the sasl.enabled.mechanisms configuration value (for
> > > > example:
> > > >  sasl.enabled.mechanisms=GSSAPI,OAUTHBEARER instead of simply
> > > > sasl.enabled.mechanisms=GSSAPI). To secure the implementation
> requires
> > > the
> > > > explicit setting of the
> > > >
> > > >
> > >
> >
> listener.name.{sasl_plaintext|sasl_ssl}.oauthbearer.sasl.{login,server}.callback.handler.class
> > > >  properties on the broker.  The question then arises: what if someone
> > > > either accidentally or maliciously appended OAUTHBEARER to the
> > > > sasl.enabled.mechanisms configuration value?  Doing so would enable
> the
> > > > unsecured implementation on the broker, and clients could then
> > > authenticate
> > > > with any principal name they desired.
> > > >
> > > > This KIP proposes to add an additional opt-in configuration property
> on
> > > the
> > > > broker side for the default, unsecured SASL/OAUTHBEARER
> implementation
> > > such
> > > > that 

[jira] [Created] (KAFKA-7971) Producer in Streams environment

2019-02-21 Thread Maciej Lizewski (JIRA)
Maciej Lizewski created KAFKA-7971:
--

 Summary: Producer in Streams environment
 Key: KAFKA-7971
 URL: https://issues.apache.org/jira/browse/KAFKA-7971
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Maciej Lizewski


It would be nice to have Producers that can emit messages to a topic just like any 
producer, but that also have access to local stores from the Streams environment in 
Spring.

Consider this case: I have an event-sourced ordering process like this:
[EVENTS QUEUE] -> [MERGING PROCESS] -> [ORDERS CHANGELOG/KTABLE]

The merging process uses the local store "opened orders" to easily apply new changes.

Now I want to implement a process for closing abandoned orders (orders that were 
started, but had no change for too long and are stuck in the beginning 
status). The easiest way is to periodically scan the "opened orders" store and produce 
an "abandon event" for every order that meets the criteria. The only way to do this now is to 
create a Transformer with a punctuator and connect its output to [EVENTS QUEUE]. That 
is obvious, but the Transformer must also be connected to some input stream, and 
those records must be dropped because we want only the punctuator results. This 
causes unnecessary overhead in processing input messages (even though they are 
just dropped) and it is not very elegant.
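
A rough sketch of the workaround described above, assuming a connected store named
"opened-orders" that maps order IDs to last-change timestamps; the scan interval,
the 24-hour threshold, and the "ABANDONED" event value are illustrative assumptions:
{code}
import java.time.Duration;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class AbandonedOrderScanner implements Transformer<String, String, KeyValue<String, String>> {

    private ProcessorContext context;
    private KeyValueStore<String, Long> openedOrders; // orderId -> last-change timestamp

    @Override
    @SuppressWarnings("unchecked")
    public void init(final ProcessorContext context) {
        this.context = context;
        this.openedOrders = (KeyValueStore<String, Long>) context.getStateStore("opened-orders");
        // Periodically scan the store and emit an "abandon" event for stale orders.
        context.schedule(Duration.ofMinutes(10), PunctuationType.WALL_CLOCK_TIME, now -> {
            try (final KeyValueIterator<String, Long> orders = openedOrders.all()) {
                while (orders.hasNext()) {
                    final KeyValue<String, Long> order = orders.next();
                    if (now - order.value > Duration.ofHours(24).toMillis()) {
                        this.context.forward(order.key, "ABANDONED");
                    }
                }
            }
        });
    }

    @Override
    public KeyValue<String, String> transform(final String key, final String value) {
        // Input records only keep the task co-located with the store; nothing is
        // emitted here, which is the overhead the ticket complains about.
        return null;
    }

    @Override
    public void close() { }
}
{code}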



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7970) Missing topic causes service shutdown without exception

2019-02-21 Thread Jonny Heavey (JIRA)
Jonny Heavey created KAFKA-7970:
---

 Summary: Missing topic causes service shutdown without exception
 Key: KAFKA-7970
 URL: https://issues.apache.org/jira/browse/KAFKA-7970
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.0
Reporter: Jonny Heavey


When launching a KafkaStreams application that depends on a topic that doesn't 
exist, the streams application correctly logs an error such as:

" is unknown yet during rebalance, please make sure they have been 
pre-created before starting the Streams application."

The stream is then shut down; however, no exception is thrown to indicate that an 
error has occurred.

In our circumstances, we run our streams app inside a container. The streams 
service is shut down, but the process does not exit, meaning that the container 
does not crash (reducing visibility of the issue).

As no exception is thrown in the missing topic scenario described above, our 
application code has no way to determine that something is wrong that would 
then allow it to terminate the process.

 

Could the StreamsThread:onPartitionsAssigned method throw an exception when it 
decides to shut down the stream (somewhere around line 264)?
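
In the meantime, a common workaround (sketched below with placeholder application id,
bootstrap servers, and topology) is to register a state listener and terminate the
process when the instance reaches a terminal state; note that it also fires on an
intentional close():
{code}
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public final class StreamsExitOnShutdown {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-app");         // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder

        final StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");                     // placeholder topology

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.setStateListener((newState, oldState) -> {
            // The missing-topic case ends in a terminal state without an exception,
            // so exit explicitly to crash the container and surface the problem.
            if (newState == KafkaStreams.State.ERROR || newState == KafkaStreams.State.NOT_RUNNING) {
                System.exit(1);
            }
        });
        streams.start();
    }
}
{code}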



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-21 Thread Randall Hauch
Thanks, everyone!

On Sun, Feb 17, 2019 at 8:57 PM Becket Qin  wrote:

> Congratulations, Randall!
>
> On Sat, Feb 16, 2019 at 2:44 AM Matthias J. Sax 
> wrote:
>
> > Congrats Randall!
> >
> >
> > -Matthias
> >
> > On 2/14/19 6:16 PM, Guozhang Wang wrote:
> > > Hello all,
> > >
> > > The PMC of Apache Kafka is happy to announce another new committer
> > joining
> > > the project today: we have invited Randall Hauch as a project committer
> > and
> > > he has accepted.
> > >
> > > Randall has been participating in the Kafka community for the past 3
> > years,
> > > and is well known as the founder of the Debezium project, a popular
> > project
> > > for database change-capture streams using Kafka (https://debezium.io).
> > More
> > > recently he has become the main person keeping Kafka Connect moving
> > > forward, participated in nearly all KIP discussions and QAs on the
> > mailing
> > > list. He's authored 6 KIPs and authored 50 pull requests and conducted
> > over
> > > a hundred reviews around Kafka Connect, and has also been evangelizing
> > > Kafka Connect at several Kafka Summit venues.
> > >
> > >
> > > Thank you very much for your contributions to the Connect community
> > Randall
> > > ! And looking forward to many more :)
> > >
> > >
> > > Guozhang, on behalf of the Apache Kafka PMC
> > >
> >
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #3404

2019-02-21 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-432: Additional Broker-Side Opt-In for Default, Unsecure SASL/OAUTHBEARER Implementation

2019-02-21 Thread Ron Dagostino
Hi Rajini and Stan.  Thanks for the feedback.

Stan, regarding the proposed config name, I couldn't think of anything so I
just threw in something outrageous in the hopes that it would give a sense
of what I was talking about while perhaps making folks chuckle a bit.

Rajini, I definitely see your point.  It probably doesn't make sense to
address this one particular issue (if we can even consider it an issue)
when in fact it is part of a pattern that has been explicitly agreed upon
as being appropriate.

Maybe a security sanity check tool that scans the config and flags any of
these items you mentioned, plus the OAUTHBEARER one and any others we can
think of, would be useful?  That way the out-of-the-box experience can
remain straightforward while some of the security risk that comes as a
byproduct can be mitigated.

Ron

Ron

On Thu, Feb 21, 2019 at 8:02 AM Rajini Sivaram 
wrote:

> Hi Ron,
>
> Thanks for the KIP. How is this different from other scenarios:
>
>1. Our default is to use a PLAINTEXT listener. If you forget to change
>that, anyone has access to your cluster
>2. You may add a PLAINTEXT listener to the list of listeners in
>production. May be you intended it for an interface that was protected
>using network segmentation, but entered the wrong address.
>3. You are very security conscious and add an SSL listener. You must be
>secure now right? Our default is `ssl.client.auth=none`, which means any
>one can connect.
>4. You use the built-in insecure PLAIN callback that stores cleartext
>passwords on the file system. Or enable SASL/PLAIN without SSL.
>
> At the moment, our defaults are intended to make it easy to get started
> quickly. If we want to make brokers secure by default, we need an approach
> that works across the board. I am not sure we have a specific issue with
> OAUTHBEARER apart from the fact that we don't provide a secure alternative.
>
>
>
> On Thu, Feb 21, 2019 at 12:05 PM Stanislav Kozlovski <
> stanis...@confluent.io>
> wrote:
>
> > Hey Ron, thanks for the KIP.
> >
> > I believe the proposed configuration setting
> > `yes.virginia.i.really.do
> >
> .want.to.allow.unsecured.oauthbearer.tokens.because.this.is.not.a.production.cluster`
> > might be too verbose. I acknowledge that we do not want to enable this in
> > production but we could maybe compromise on a more normal name.
> >
> > I am wondering whether it would be more worth it to replace the default
> > implementation with a secure one. Disabling it by default can be seen as
> > just kicking the can down the road
> >
> > Best,
> > Stanislav
> >
> >
> >
> > On Wed, Feb 20, 2019 at 5:31 PM Ron Dagostino  wrote:
> >
> > > Hi everyone. I created KIP-432: Additional Broker-Side Opt-In for
> > Default,
> > > Unsecure SASL/OAUTHBEARER Implementation
> > > <
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > > >
> > >  (
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > > ).
> > > The motivation for this KIP is as follows:
> > >
> > > The default implementation of SASL/OAUTHBEARER, as per KIP-255
> > > <
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75968876
> > > >,
> > > is unsecured.  This is useful for development and testing purposes, and
> > it
> > > provides a great out-of-the-box experience, but it must not be used in
> > > production because it allows the client to authenticate with any
> > principal
> > > name it wishes.  To enable the default unsecured SASL/OAUTHBEARER
> > > implementation on the broker side simply requires the addition of
> > > OAUTHBEARER to the sasl.enabled.mechanisms configuration value (for
> > > example:
> > >  sasl.enabled.mechanisms=GSSAPI,OAUTHBEARER instead of simply
> > > sasl.enabled.mechanisms=GSSAPI). To secure the implementation requires
> > the
> > > explicit setting of the
> > >
> > >
> >
> listener.name.{sasl_plaintext|sasl_ssl}.oauthbearer.sasl.{login,server}.callback.handler.class
> > >  properties on the broker.  The question then arises: what if someone
> > > either accidentally or maliciously appended OAUTHBEARER to the
> > > sasl.enabled.mechanisms configuration value?  Doing so would enable the
> > > unsecured implementation on the broker, and clients could then
> > authenticate
> > > with any principal name they desired.
> > >
> > > This KIP proposes to add an additional opt-in configuration property on
> > the
> > > broker side for the default, unsecured SASL/OAUTHBEARER implementation
> > such
> > > that simply adding OAUTHBEARER to the sasl.enabled.mechanisms
> > configuration
> > > value would be insufficient to enable the feature.  This additional
> > opt-in
> > > broker configuration property would have to be explicitly set to true
> > > before the default unsecured implementation would successfully
> > authenticate
> > > users, and the name of this configuration property would explicitly
> > > indicate that the feature is not 

Re: [DISCUSS] KIP-432: Additional Broker-Side Opt-In for Default, Unsecure SASL/OAUTHBEARER Implementation

2019-02-21 Thread Rajini Sivaram
Hi Ron,

Thanks for the KIP. How is this different from other scenarios:

   1. Our default is to use a PLAINTEXT listener. If you forget to change
   that, anyone has access to your cluster
   2. You may add a PLAINTEXT listener to the list of listeners in
   production. Maybe you intended it for an interface that was protected
   using network segmentation, but entered the wrong address.
   3. You are very security conscious and add an SSL listener. You must be
   secure now, right? Our default is `ssl.client.auth=none`, which means
   anyone can connect.
   4. You use the built-in insecure PLAIN callback that stores cleartext
   passwords on the file system. Or enable SASL/PLAIN without SSL.

At the moment, our defaults are intended to make it easy to get started
quickly. If we want to make brokers secure by default, we need an approach
that works across the board. I am not sure we have a specific issue with
OAUTHBEARER apart from the fact that we don't provide a secure alternative.



On Thu, Feb 21, 2019 at 12:05 PM Stanislav Kozlovski 
wrote:

> Hey Ron, thanks for the KIP.
>
> I believe the proposed configuration setting
> `yes.virginia.i.really.do
> .want.to.allow.unsecured.oauthbearer.tokens.because.this.is.not.a.production.cluster`
> might be too verbose. I acknowledge that we do not want to enable this in
> production but we could maybe compromise on a more normal name.
>
> I am wondering whether it would be more worth it to replace the default
> implementation with a secure one. Disabling it by default can be seen as
> just kicking the can down the road
>
> Best,
> Stanislav
>
>
>
> On Wed, Feb 20, 2019 at 5:31 PM Ron Dagostino  wrote:
>
> > Hi everyone. I created KIP-432: Additional Broker-Side Opt-In for
> Default,
> > Unsecure SASL/OAUTHBEARER Implementation
> > <
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > >
> >  (
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> > ).
> > The motivation for this KIP is as follows:
> >
> > The default implementation of SASL/OAUTHBEARER, as per KIP-255
> > <
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75968876
> > >,
> > is unsecured.  This is useful for development and testing purposes, and
> it
> > provides a great out-of-the-box experience, but it must not be used in
> > production because it allows the client to authenticate with any
> principal
> > name it wishes.  To enable the default unsecured SASL/OAUTHBEARER
> > implementation on the broker side simply requires the addition of
> > OAUTHBEARER to the sasl.enabled.mechanisms configuration value (for
> > example:
> >  sasl.enabled.mechanisms=GSSAPI,OAUTHBEARER instead of simply
> > sasl.enabled.mechanisms=GSSAPI). To secure the implementation requires
> the
> > explicit setting of the
> >
> >
> listener.name.{sasl_plaintext|sasl_ssl}.oauthbearer.sasl.{login,server}.callback.handler.class
> >  properties on the broker.  The question then arises: what if someone
> > either accidentally or maliciously appended OAUTHBEARER to the
> > sasl.enabled.mechanisms configuration value?  Doing so would enable the
> > unsecured implementation on the broker, and clients could then
> authenticate
> > with any principal name they desired.
> >
> > This KIP proposes to add an additional opt-in configuration property on
> the
> > broker side for the default, unsecured SASL/OAUTHBEARER implementation
> such
> > that simply adding OAUTHBEARER to the sasl.enabled.mechanisms
> configuration
> > value would be insufficient to enable the feature.  This additional
> opt-in
> > broker configuration property would have to be explicitly set to true
> > before the default unsecured implementation would successfully
> authenticate
> > users, and the name of this configuration property would explicitly
> > indicate that the feature is not secure and must not be used in
> > production.  Adding this explicit opt-in is a breaking change; existing
> > uses of the unsecured implementation would have to update their
> > configuration to include this explicit opt-in property before their
> cluster
> > would accept unsecure tokens again.  Note that this would only result in
> a
> > breaking change in production if the unsecured feature is either
> > accidentally or maliciously enabled there; it is assumed that 1) this
> will
> > probably not happen to anyone; and 2) if it does happen to someone it
> > almost certainly would not impact sanctioned clients but would instead
> > impact malicious clients only (if there were any).
> >
> >
> > Ron
> >
>
>
> --
> Best,
> Stanislav
>


Re: [DISCUSS] KIP-432: Additional Broker-Side Opt-In for Default, Unsecure SASL/OAUTHBEARER Implementation

2019-02-21 Thread Stanislav Kozlovski
Hey Ron, thanks for the KIP.

I believe the proposed configuration setting
`yes.virginia.i.really.do.want.to.allow.unsecured.oauthbearer.tokens.because.this.is.not.a.production.cluster`
might be too verbose. I acknowledge that we do not want to enable this in
production but we could maybe compromise on a more normal name.

I am wondering whether it would be more worth it to replace the default
implementation with a secure one. Disabling it by default can be seen as
just kicking the can down the road

Best,
Stanislav



On Wed, Feb 20, 2019 at 5:31 PM Ron Dagostino  wrote:

> Hi everyone. I created KIP-432: Additional Broker-Side Opt-In for Default,
> Unsecure SASL/OAUTHBEARER Implementation
> <
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> >
>  (
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103091238
> ).
> The motivation for this KIP is as follows:
>
> The default implementation of SASL/OAUTHBEARER, as per KIP-255
> <https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75968876>,
> is unsecured.  This is useful for development and testing purposes, and it
> provides a great out-of-the-box experience, but it must not be used in
> production because it allows the client to authenticate with any principal
> name it wishes.  To enable the default unsecured SASL/OAUTHBEARER
> implementation on the broker side simply requires the addition of
> OAUTHBEARER to the sasl.enabled.mechanisms configuration value (for
> example:
>  sasl.enabled.mechanisms=GSSAPI,OAUTHBEARER instead of simply
> sasl.enabled.mechanisms=GSSAPI). To secure the implementation requires the
> explicit setting of the
>
> listener.name.{sasl_plaintext|sasl_ssl}.oauthbearer.sasl.{login,server}.callback.handler.class
>  properties on the broker.  The question then arises: what if someone
> either accidentally or maliciously appended OAUTHBEARER to the
> sasl.enabled.mechanisms configuration value?  Doing so would enable the
> unsecured implementation on the broker, and clients could then authenticate
> with any principal name they desired.
>
> This KIP proposes to add an additional opt-in configuration property on the
> broker side for the default, unsecured SASL/OAUTHBEARER implementation such
> that simply adding OAUTHBEARER to the sasl.enabled.mechanisms configuration
> value would be insufficient to enable the feature.  This additional opt-in
> broker configuration property would have to be explicitly set to true
> before the default unsecured implementation would successfully authenticate
> users, and the name of this configuration property would explicitly
> indicate that the feature is not secure and must not be used in
> production.  Adding this explicit opt-in is a breaking change; existing
> uses of the unsecured implementation would have to update their
> configuration to include this explicit opt-in property before their cluster
> would accept unsecure tokens again.  Note that this would only result in a
> breaking change in production if the unsecured feature is either
> accidentally or maliciously enabled there; it is assumed that 1) this will
> probably not happen to anyone; and 2) if it does happen to someone it
> almost certainly would not impact sanctioned clients but would instead
> impact malicious clients only (if there were any).
>
>
> Ron
>


-- 
Best,
Stanislav


Re: [VOTE] KIP-430 - Return Authorized Operations in Describe Responses

2019-02-21 Thread Satish Duggana
Thanks for the KIP, +1 (non-binding)

~Satish.

On Thu, Feb 21, 2019 at 3:58 PM Rajini Sivaram  wrote:
>
> I would like to start vote on KIP-430 to optionally obtain authorized
> operations when describing resources:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses
>
> Thank you,
>
> Rajini


[VOTE] KIP-430 - Return Authorized Operations in Describe Responses

2019-02-21 Thread Rajini Sivaram
I would like to start vote on KIP-430 to optionally obtain authorized
operations when describing resources:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-430+-+Return+Authorized+Operations+in+Describe+Responses

Thank you,

Rajini


Build failed in Jenkins: kafka-trunk-jdk8 #3403

2019-02-21 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-7283: Enable lazy mmap on index files and skip sanity check for

--
[...truncated 4.61 MB...]

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaJoin PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaSimple PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaAggregate PASSED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform STARTED

org.apache.kafka.streams.scala.TopologyTest > 
shouldBuildIdenticalTopologyInJavaNScalaTransform PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsMaterialized 
PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWordsJava PASSED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords STARTED

org.apache.kafka.streams.scala.WordCountTest > testShouldCountWords PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filter a KTable should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > filterNot a KTable should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables should join 
correctly records PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 

Re: [VOTE] KIP-412: Extend Admin API to support dynamic application log levels

2019-02-21 Thread Stanislav Kozlovski
Thanks for the interest everybody. This KIP has passed with 3 binding votes
(Harsha, Gwen, Rajini) and  5 non-binding votes (Mickael, Jonathan,
Dongjin, Satish, Andrew)

On Thu, Feb 21, 2019 at 9:00 AM Satish Duggana 
wrote:

> Thanks for the KIP
> +1 (non-binding)
>
> On Thu, Feb 21, 2019 at 9:38 AM Harsha  wrote:
> >
> > +1 (binding).
> >
> > Thanks,
> > Harsha
> >
> > On Tue, Feb 19, 2019, at 7:53 AM, Andrew Schofield wrote:
> > > Thanks for the KIP.
> > >
> > > +1 (non-binding)
> > >
> > > On 18/02/2019, 12:48, "Stanislav Kozlovski" 
> wrote:
> > >
> > > Hey everybody, I'm starting a VOTE thread for KIP-412. This
> feature should
> > > significantly improve the flexibility and ease in debugging Kafka
> in run
> > > time
> > >
> > > KIP-412 -
> > >
> > >
> https://nam03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcwiki.apache.org%2Fconfluence%2Fdisplay%2FKAFKA%2FKIP-412%253A%2BExtend%2BAdmin%2BAPI%2Bto%2Bsupport%2Bdynamic%2Bapplication%2Blog%2Blevelsdata=02%7C01%7C%7C69bc63a9d7864e25ec3c08d69596eec4%7C84df9e7fe9f640afb435%7C1%7C0%7C636860872825557120sdata=XAnMhy6EPC7JkB77NBBhLR%2FvE7XrTutuS5Rlt%2FDpwfU%3Dreserved=0
> > >
> > >
> > > --
> > > Best,
> > > Stanislav
> > >
> > >
> > >
>


-- 
Best,
Stanislav


[jira] [Resolved] (KAFKA-7968) Delete leader epoch cache files with old message format versions

2019-02-21 Thread Stanislav Kozlovski (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Kozlovski resolved KAFKA-7968.

Resolution: Duplicate

This is a duplicate of https://issues.apache.org/jira/browse/KAFKA-7959. 
Apologies for not realizing that was opened

> Delete leader epoch cache files with old message format versions
> 
>
> Key: KAFKA-7968
> URL: https://issues.apache.org/jira/browse/KAFKA-7968
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Major
>
> [KAFKA-7897 (Invalid use of epoch cache with old message format 
> versions)|https://issues.apache.org/jira/browse/KAFKA-7897] fixed a critical 
> bug where replica followers would inadequately use their leader epoch cache 
> for truncating their logs upon becoming a follower. [The root of the 
> issue|https://issues.apache.org/jira/browse/KAFKA-7897?focusedCommentId=16761049=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16761049]
>  was that a regression in KAFKA-7415 caused the leader epoch cache to be 
> populated upon becoming a follower, even if the message format was older.
> KAFKA-7897 fixed that problem by not updating the leader epoch cache if the 
> message format does not support it. It was merged all the way back to 1.1 but 
> due to significant branch divergence, the patches for 2.0 and below were 
> simplified. As said in the commit:
> Note this is a simplified fix than what was merged to trunk in #6232 since 
> the branches have diverged significantly. Rather than removing the epoch 
> cache file, we guard usage of the cache with the record version.
> This results in the same bug being hit at a different time. When the message 
> format gets upgraded to support the leader epoch cache, brokers start to make 
> use of it. Due to the previous problem, we still have the sparsely populated 
> epoch cache file present. This results in the same large truncations we saw 
> in KAFKA-7897.
> The key difference is that the patches for 2.1 and trunk *deleted* the 
> non-empty leader epoch cache files if the log message format did not support 
> it.
> We should update the earlier versions to do the same thing. That way, users 
> that have upgraded to 2.0.1 but are still using old message formats/protocol 
> will have their epochs cleaned up on the first roll that upgrades the 
> `inter.broker.protocol.version`



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-412: Extend Admin API to support dynamic application log levels

2019-02-21 Thread Satish Duggana
Thanks for the KIP
+1 (non-binding)

On Thu, Feb 21, 2019 at 9:38 AM Harsha  wrote:
>
> +1 (binding).
>
> Thanks,
> Harsha
>
> On Tue, Feb 19, 2019, at 7:53 AM, Andrew Schofield wrote:
> > Thanks for the KIP.
> >
> > +1 (non-binding)
> >
> > On 18/02/2019, 12:48, "Stanislav Kozlovski"  wrote:
> >
> > Hey everybody, I'm starting a VOTE thread for KIP-412. This feature 
> > should
> > significantly improve the flexibility and ease in debugging Kafka in run
> > time
> >
> > KIP-412 -
> >
> > https://nam03.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcwiki.apache.org%2Fconfluence%2Fdisplay%2FKAFKA%2FKIP-412%253A%2BExtend%2BAdmin%2BAPI%2Bto%2Bsupport%2Bdynamic%2Bapplication%2Blog%2Blevelsdata=02%7C01%7C%7C69bc63a9d7864e25ec3c08d69596eec4%7C84df9e7fe9f640afb435%7C1%7C0%7C636860872825557120sdata=XAnMhy6EPC7JkB77NBBhLR%2FvE7XrTutuS5Rlt%2FDpwfU%3Dreserved=0
> >
> >
> > --
> > Best,
> > Stanislav
> >
> >
> >


[jira] [Created] (KAFKA-7969) Flaky Test DescribeConsumerGroupTest#testDescribeOffsetsOfExistingGroupWithNoMembers

2019-02-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7969:
--

 Summary: Flaky Test 
DescribeConsumerGroupTest#testDescribeOffsetsOfExistingGroupWithNoMembers
 Key: KAFKA-7969
 URL: https://issues.apache.org/jira/browse/KAFKA-7969
 Project: Kafka
  Issue Type: Bug
  Components: admin, clients, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.2.0


To get stable nightly builds for `2.2` release, I create tickets for all 
observed test failures.

[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/24/]
{quote}java.lang.AssertionError: Expected no active member in describe group 
results, state: Some(Empty), assignments: Some(List()) at 
org.junit.Assert.fail(Assert.java:88) at 
org.junit.Assert.assertTrue(Assert.java:41) at 
kafka.admin.DescribeConsumerGroupTest.testDescribeOffsetsOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:278{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-378: Enable Dependency Injection for Kafka Streams handlers

2019-02-21 Thread Matthias J. Sax
Hi Wladimir,

what is the status of this KIP?

-Matthias

On 1/9/19 4:17 PM, Guozhang Wang wrote:
> Hello Wladimir,
> 
> Just checking if you are still working on this KIP. We have the 2.2 KIP
> freeze deadline on the 24th of this month, and it'd be great to complete this
> KIP by then so the 2.2.0 release could include this feature.
> 
> 
> Guozhang
> 
> On Mon, Dec 3, 2018 at 11:26 PM Guozhang Wang  wrote:
> 
>> Hello Wladimir,
>>
>> I've thought about the two options and I think I'm sold on the second
>> option, and actually I think it is better to generalize it so it can
>> potentially be used for other clients (producer, consumer) as well, since
>> they also have similar dependency injection requests for the metrics
>> reporter, partitioner, partition assignor, etc.
>>
>> So I'd suggest we add the following to AbstractConfig directly (note I
>> intentionally renamed the class to ConfiguredInstanceFactory to be used for
>> other clients as well):
>>
>> ```
>> AbstractConfig(ConfigDef definition, Map<?, ?> originals,
>> ConfiguredInstanceFactory factory, boolean doLog)
>> ```
>>
>> And then in StreamsConfig add:
>>
>> ```
>> StreamsConfig(Map<?, ?> props, ConfiguredInstanceFactory factory)
>> ```
>>
>> which would call the above AbstractConfig constructor (we can leave it to the
>> core team to decide when they want to add this for the producer and consumer);
>>
>> And in KafkaStreams / TopologyTestDriver we can add one overloaded
>> constructor each that includes all the parameters, including the
>> ConfiguredInstanceFactory. For those who only want the `factory` but not the
>> `client-suppliers`, for example, they can set it to `null` and the streams
>> library will just use the default one.
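To make the shape of that proposal more concrete, here is a rough sketch of what such a factory and a DI-backed implementation could look like. Only the name ConfiguredInstanceFactory comes from the mail above; the method signature and the Spring wiring are illustrative assumptions (and assume Spring is on the classpath):

```java
import java.util.Map;
import org.springframework.context.ApplicationContext;

// Hypothetical SPI matching the proposal: the config object delegates instance
// creation to this factory instead of reflectively instantiating classes itself.
interface ConfiguredInstanceFactory {
    <T> T getConfiguredInstance(Class<T> type, Map<String, Object> configs);
}

// Illustrative Spring-backed implementation: handlers are resolved as beans,
// so they can carry their own injected dependencies.
class SpringInstanceFactory implements ConfiguredInstanceFactory {
    private final ApplicationContext context;

    SpringInstanceFactory(ApplicationContext context) {
        this.context = context;
    }

    @Override
    public <T> T getConfiguredInstance(Class<T> type, Map<String, Object> configs) {
        return context.getBean(type);
    }
}
```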
>>
>>
>> Guozhang
>>
>>
>> On Sun, Dec 2, 2018 at 12:13 PM Wladimir Schmidt 
>> wrote:
>>
>>> Hello Guozhang,
>>> Sure, the first approach is very straightforward and requires minimal
>>> changes to the Kafka Streams API.
>>> On the other hand, the second approach with the interface implementation
>>> looks cleaner to me.
>>> I totally agree that this should be discussed first before it is
>>> implemented.
>>>
>>> Thanks, Wladimir
>>>
>>>
>>> On 17-Nov-18 23:37, Guozhang Wang wrote:
>>>
>>> Hello folks,
>>>
>>> I'd like to revive this thread for discussion. After reading the previous
>>> emails, I think I'm still leaning a bit towards re-enabling passing a
>>> StreamsConfig to the Kafka Streams constructors, rather than adding a
>>> ConfiguredStreamsFactory as an additional parameter to overloaded
>>> KafkaStreams constructors: although the former seems less clean, as it
>>> requires users to read through the usage of AbstractConfig to know how to
>>> use it in their frameworks, this to me is a solvable problem through
>>> documentation; plus, AbstractConfig is already a public interface, and hence
>>> the additional ConfiguredStreamsFactory really overlaps a bit with it
>>> in functionality.
>>>
>>>
>>> Guozhang
>>>
>>>
>>>
>>> On Sun, Oct 21, 2018 at 1:41 PM Wladimir Schmidt  
>>>  wrote:
>>>
>>>
>>> Hi Damian,
>>>
>>> The first approach was added only because it had been initially proposed
>>> in my pull request,
>>> which started a discussion and thus KIP-378 was born.
>>>
>>> Yes, I would like to have something "injectable". In this regard, a
>>> `ConfiguredStreamsFactory` (the name is subject to discussion)
>>> is a good option to introduce into the `KafkaStreams` constructor.
>>>
>>> Even though I consider the second approach to be cleaner, it involves a
>>> certain amount of refactoring of the streams library.
>>> The first approach, on the contrary, only adds constructors (or removes the
>>> deprecated annotation, if the method has not been removed yet), with
>>> considerably less intervention into the streams library and no changes that
>>> would break the API. Please see the pull request:
>>> https://github.com/apache/kafka/pull/5344
>>>
>>> Thanks
>>> Wladimir
>>>
>>> On 10-Oct-18 15:51, Damian Guy wrote:
>>>
>>> Hi Wladimir,
>>>
>>> Of the two approaches in the KIP, I feel the second approach is cleaner.
>>> However, am I correct in assuming that you want to have the
>>> `ConfiguredStreamsFactory` as a ctor arg in `StreamsConfig` so that Spring
>>> can inject this for you?
>>>
>>> Otherwise you could just put the ApplicationContext as a property in the
>>> config and then use that via the configure method of the appropriate
>>> handler to get your actual handler.
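As a sketch of that alternative (the config key and class names below are made up for illustration), a thin handler registered with Kafka Streams could pull the Spring context out of the config map in configure() and delegate to a bean:

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.springframework.context.ApplicationContext;

// Delegates to a Spring-managed handler bean that is looked up at configure() time.
public class DelegatingDeserializationExceptionHandler implements DeserializationExceptionHandler {
    // Hypothetical property under which the application registers its Spring context.
    public static final String APPLICATION_CONTEXT_CONFIG = "spring.application.context";

    private DeserializationExceptionHandler delegate;

    @Override
    public void configure(final Map<String, ?> configs) {
        final ApplicationContext context = (ApplicationContext) configs.get(APPLICATION_CONTEXT_CONFIG);
        delegate = context.getBean(DeserializationExceptionHandler.class);
    }

    @Override
    public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                 final ConsumerRecord<byte[], byte[]> record,
                                                 final Exception exception) {
        return delegate.handle(context, record, exception);
    }
}
```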
>>>
>>> Thanks,
>>> Damian
>>>
>>> On Tue, 9 Oct 2018 at 01:55, Guozhang Wang  
>>>  wrote:
>>>
>>>
>>> John, thanks for the explanation, now it makes much more sense to me.
>>>
>>> As for the concrete approach, to me it seems the first option requires
>>> fewer changes than the second (ConfiguredStreamsFactory-based) approach,
>>> whereas the second one requires an additional interface that overlaps with
>>> AbstractConfig.
>>>
>>> I'm aware that in KafkaProducer / KafkaConsumer we do not have public
>>> constructors for taking a ProducerConfig or ConsumerConfig 

Jenkins build is back to normal : kafka-trunk-jdk11 #302

2019-02-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7968) Delete leader epoch cache files with old message format versions

2019-02-21 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7968:
--

 Summary: Delete leader epoch cache files with old message format 
versions
 Key: KAFKA-7968
 URL: https://issues.apache.org/jira/browse/KAFKA-7968
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.0.1
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


[KAFKA-7897 (Invalid use of epoch cache with old message format 
versions)|https://issues.apache.org/jira/browse/KAFKA-7897] fixed a critical 
bug where replica followers would incorrectly use their leader epoch cache to 
truncate their logs upon becoming a follower. [The root of the 
issue|https://issues.apache.org/jira/browse/KAFKA-7897?focusedCommentId=16761049=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16761049]
 was that a regression in KAFKA-7415 caused the leader epoch cache to be 
populated upon becoming a follower, even if the message format was older.

KAFKA-7897 fixed that problem by not updating the leader epoch cache if the 
message format does not support it. It was merged all the way back to 1.1 but 
due to significant branch divergence, the patches for 2.0 and below were 
simplified. As stated in the commit:
Note this is a simplified fix compared to what was merged to trunk in #6232, since the 
branches have diverged significantly. Rather than removing the epoch cache 
file, we guard usage of the cache with the record version.
This results in the same bug being hit at a different time. When the message 
format gets upgraded to support the leader epoch cache, brokers start to make 
use of it. Due to the previous problem, we still have the sparsely populated 
epoch cache file present. This results in the same large truncations we saw in 
KAFKA-7897.

The key difference is that the patches for 2.1 and trunk *deleted* the 
non-empty leader epoch cache files if the log message format did not support it.
We should update the earlier versions to do the same thing. That way, users 
that have upgraded to 2.0.1 but are still using old message formats/protocols 
will have their epochs cleaned up on the first roll that upgrades 
`inter.broker.protocol.version`.
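A minimal sketch of the intended cleanup, assuming the record-version check is done elsewhere (this is not the actual broker code; only the checkpoint file name matches what Kafka writes into each log directory):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LeaderEpochCacheCleanup {
    // Checkpoint file Kafka keeps in each log directory for the leader epoch cache.
    private static final String LEADER_EPOCH_CHECKPOINT = "leader-epoch-checkpoint";

    // Sketch: when the log's message format does not support leader epochs
    // (pre-v2 record format), delete any stale checkpoint instead of merely ignoring it.
    static void maybeDeleteLeaderEpochCache(Path logDir, boolean formatSupportsLeaderEpochs) throws IOException {
        if (formatSupportsLeaderEpochs)
            return; // v2+ formats keep and use the cache as usual
        Path checkpoint = logDir.resolve(LEADER_EPOCH_CHECKPOINT);
        if (Files.deleteIfExists(checkpoint))
            System.out.println("Deleted stale leader epoch checkpoint at " + checkpoint);
    }
}
```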



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)