Build failed in Jenkins: kafka-trunk-jdk11 #1641

2020-07-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10002; Improve performances of StopReplicaRequest with large


--
[...truncated 3.20 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProd

Build failed in Jenkins: kafka-trunk-jdk8 #4714

2020-07-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10044 Deprecate ConsumerConfig#addDeserializerToConfig and Prod…

[github] KAFKA-10002; Improve performances of StopReplicaRequest with large


--
[...truncated 6.34 MB...]
org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameW

Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-07-13 Thread John Roesler
Thanks for the KIP, Sagar!

I’m +1 (binding)

-John

On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
> Hi All,
> 
> I would like to start a new voting thread for the below KIP to add prefix
> scan support to state stores:
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 614%3A+Add+Prefix+Scan+support+for+State+Stores
> 
> 
> Thanks!
> Sagar.
>
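
For readers following the vote, here is a hedged sketch of what a prefix scan over a key-value store could look like, assuming the method shape proposed in the KIP (a prefixScan taking a prefix plus a serializer for it). It is illustrative only, not the final committed interface.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

class PrefixScanSketch {
    // Print every entry whose key starts with "user-", assuming the store exposes
    // a prefixScan(prefix, prefixKeySerializer) method as proposed in the KIP.
    static void printUsersWithPrefix(ReadOnlyKeyValueStore<String, Long> store) {
        try (KeyValueIterator<String, Long> it =
                 store.prefixScan("user-", Serdes.String().serializer())) {
            while (it.hasNext()) {
                KeyValue<String, Long> kv = it.next();
                System.out.println(kv.key + " -> " + kv.value);
            }
        }
    }
}
```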


[jira] [Resolved] (KAFKA-10192) Flaky test BlockingConnectorTest#testBlockInConnectorStop

2020-07-13 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-10192.

Resolution: Fixed

> Flaky test BlockingConnectorTest#testBlockInConnectorStop
> -
>
> Key: KAFKA-10192
> URL: https://issues.apache.org/jira/browse/KAFKA-10192
> Project: Kafka
>  Issue Type: Bug
>Reporter: Boyang Chen
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
>
> h3. [https://builds.apache.org/job/kafka-pr-jdk14-scala2.13/1218/]
> h3. Error Message
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: Could not 
> execute PUT request. Error response: \{"error_code":500,"message":"Request 
> timed out"}
> h3. Stacktrace
> org.apache.kafka.connect.runtime.rest.errors.ConnectRestException: Could not 
> execute PUT request. Error response: \{"error_code":500,"message":"Request 
> timed out"} at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.putConnectorConfig(EmbeddedConnectCluster.java:346)
>  at 
> org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.configureConnector(EmbeddedConnectCluster.java:300)
>  at 
> org.apache.kafka.connect.integration.BlockingConnectorTest.createConnectorWithBlock(BlockingConnectorTest.java:185)
>  at 
> org.apache.kafka.connect.integration.BlockingConnectorTest.testBlockInConnectorStop(BlockingConnectorTest.java:140)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:564) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:564) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
>  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:119)
>  at java.base/j

[jira] [Resolved] (KAFKA-10240) Sink tasks should not throw WakeupException on shutdown

2020-07-13 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-10240.

Resolution: Fixed

> Sink tasks should not throw WakeupException on shutdown
> ---
>
> Key: KAFKA-10240
> URL: https://issues.apache.org/jira/browse/KAFKA-10240
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 
> 2.4.0, 2.3.1, 2.5.0, 2.4.1, 2.6.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.7.0
>
>
> * When a task is scheduled for shutdown, the framework [wakes up the 
> consumer|https://github.com/apache/kafka/blob/8a24da376b801c6eb6522ad4861b83f5beb5826c/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L159]
>  for that task.
>  * As is noted in the [Javadocs for that 
> method|https://github.com/apache/kafka/blob/8a24da376b801c6eb6522ad4861b83f5beb5826c/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L2348],
>  “If no thread is blocking in a method which can throw 
> {{org.apache.kafka.common.errors.WakeupException}}, the next call to such a 
> method will raise it instead.”
>  * It just so happens that, if the framework isn’t in the middle of a call to 
> the consumer and then the task gets stopped, the next call the framework will 
> make on the consumer may be to commit offsets, which will immediately throw a 
> {{WakeupException}}.
>  * Currently, the framework handles this by [immediately retrying the offset 
> commit|https://github.com/apache/kafka/blob/8a24da376b801c6eb6522ad4861b83f5beb5826c/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L337-L339]
>  until it either throws a different exception or succeeds, and then throwing 
> the original {{WakeupException}}. If this synchronous commit of offsets 
> occurs during task shutdown (as opposed to in response to a consumer 
> rebalance), it's unnecessary to throw the {{WakeupException}} back to the 
> caller, and doing so can cause alarming {{ERROR}}-level messages to be logged by the 
> worker.
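
As an illustration of the interaction described in the issue above, the following is a minimal sketch (not the Connect framework's actual code) of how a consumer woken up for shutdown surfaces a WakeupException on its next blocking call, and how that exception can be retried and swallowed when the task is stopping anyway:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class SinkTaskShutdownSketch {
    private final AtomicBoolean stopping = new AtomicBoolean(false);
    private final KafkaConsumer<byte[], byte[]> consumer;

    public SinkTaskShutdownSketch(Properties props) {
        this.consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("sink-topic"));
    }

    // Called from another thread when the task is scheduled for shutdown.
    public void stop() {
        stopping.set(true);
        consumer.wakeup(); // if nothing is blocking right now, the NEXT blocking call throws
    }

    // One iteration of the task's main loop.
    public void runOnce() {
        try {
            consumer.poll(Duration.ofSeconds(1));
            consumer.commitSync(); // may throw WakeupException if stop() ran in between
        } catch (WakeupException e) {
            if (!stopping.get()) {
                throw e; // woken up for some other reason: propagate
            }
            // Shutting down: retrying the commit (the wakeup flag was already consumed)
            // and swallowing the exception avoids the alarming ERROR-level log.
            consumer.commitSync();
        }
    }
}
```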



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #289

2020-07-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10002; Improve performances of StopReplicaRequest with large


--
[...truncated 3.20 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestD

Build failed in Jenkins: kafka-trunk-jdk14 #288

2020-07-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10044 Deprecate ConsumerConfig#addDeserializerToConfig and Prod…


--
[...truncated 3.20 MB...]

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > join 2 KTables with a 
Materialized should join correctly records and state store PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > windowed KTable#suppress 
should correctly suppress results using Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilWindowCloses STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > session windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilWindowCloses PASSED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilTimeLimit STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.kstream.KTableTest > non-windowed 
KTable#suppress should correctly suppress results using 
Suppressed.untilTimeLimit PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialized 
should create a Materialized with Serdes PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a store name should create a Materialized with Serdes and a store name 
PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a window store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a key value store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier STARTED

org.apache.kafka.streams.scala.kstream.MaterializedTest > Create a Materialize 
with a session store supplier should create a Materialized with Serdes and a 
store supplier PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on record

Build failed in Jenkins: kafka-trunk-jdk8 #4713

2020-07-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6453: Document how timestamps are computed for aggregations and


--
[...truncated 6.35 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

or

[jira] [Resolved] (KAFKA-10002) Improve performances of StopReplicaRequest with large number of partitions to be deleted

2020-07-13 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-10002.
-
Fix Version/s: 2.7.0
   Resolution: Fixed

merged the PR to trunk

> Improve performances of StopReplicaRequest with large number of partitions to 
> be deleted
> 
>
> Key: KAFKA-10002
> URL: https://issues.apache.org/jira/browse/KAFKA-10002
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.7.0
>
>
> I have noticed that StopReplicaRequests with partitions to be deleted are 
> extremely slow when there are more than 2000 partitions, which leads to hitting 
> the request timeout in the controller. A request with 2000 partitions to be 
> deleted still works, but performance degrades significantly as the number 
> increases. For example, a request with 3000 partitions to be deleted takes 
> approximately 60 seconds to be processed.
> A CPU profile shows that most of the time is spent in checkpointing log start 
> offsets and recovery offsets; almost 90% of the time is there. See attached. 
> When a partition is deleted, the replica manager calls 
> `ReplicaManager#asyncDelete`, which checkpoints recovery offsets and log start 
> offsets. As the checkpoints are per data directory, the checkpointing is done 
> for all the partitions in the directory of the partition to be deleted. In 
> our case, where we have only one data directory, deleting 1000 
> partitions ends up checkpointing the same things 1000 times, which is not 
> efficient.
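
To make the cost model above concrete, here is a hypothetical sketch (the Checkpointer interface and method names are invented for illustration, not ReplicaManager's real code) contrasting per-partition checkpointing with checkpointing each data directory once:

```java
import java.util.List;
import java.util.Map;

class CheckpointCostSketch {
    interface Checkpointer {
        // Rewrites the checkpoint files covering EVERY partition in the directory.
        void checkpointDirectory(String logDir);
    }

    // Behaviour described in the issue: one full checkpoint pass per deleted partition.
    static void deleteOneByOne(List<String> partitions,
                               Map<String, String> partitionToDir,
                               Checkpointer checkpointer) {
        for (String partition : partitions) {
            // ... delete the partition's logs, then:
            checkpointer.checkpointDirectory(partitionToDir.get(partition)); // O(dir size) every time
        }
    }

    // The improvement the fix points toward: checkpoint each affected directory once.
    static void deleteBatched(List<String> partitions,
                              Map<String, String> partitionToDir,
                              Checkpointer checkpointer) {
        partitions.stream()
                  .map(partitionToDir::get)
                  .distinct()
                  .forEach(checkpointer::checkpointDirectory); // one pass per directory
    }
}
```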



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #287

2020-07-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6453: Document how timestamps are computed for aggregations and


--
[...truncated 3.20 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

or

Re: Permissions to create and assign JIRA issues

2020-07-13 Thread Boyang Chen
Added. Happy contributing!

On Mon, Jul 13, 2020 at 2:48 PM Sankalp Bhatia 
wrote:

> Hi devs,
>
> I would like to contribute and will be grateful if someone can grant me the
> permissions to create and assign JIRA issues.
>
> my id is : sankalpbhatia
>
> Thanks in advance!
>


Permissions to create and assign JIRA issues

2020-07-13 Thread Sankalp Bhatia
Hi devs,

I would like to contribute and will be grateful if someone can grant me the
permissions to create and assign JIRA issues.

my id is : sankalpbhatia

Thanks in advance!


Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread Ron Dagostino
Hi Colin.  I wanted to explicitly identify a side-effect that I think
derives from having deletions separated out from the AlterScramUsersRequest
and put in their own DeleteScramUsersRequest. The command line invocation
of kafka-configs can specify alterations and deletions simultaneously: it
is entirely legal for that tool to accept and process both --add-config and
--delete-config (the current code removes any keys from the added configs
that are also supplied in the deleted configs, it grabs the
currently-defined keys, deletes the keys to be deleted, adds the ones to be
added, and then sets the JSON for the user accordingly).  If we split these
two operations into separate APIs then an invocation of kafka-configs that
specifies both operations can't complete atomically and could possibly have
one of them succeed but the other fail.  I am wondering if splitting the
deletions out into a separate API is acceptable given this possibility, and
if so, what the behavior should be.  Maybe the kafka-configs command would
have to prevent both from being specified simultaneously when
--bootstrap-server is used.  That would create an inconsistency with how
the tool works with --zookeeper, meaning it is conceivable that switching
over to --bootstrap-server would not necessarily be a drop-in replacement.
Am I missing/misunderstanding something? Thoughts?
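
For illustration, here is a hypothetical sketch of the merge behaviour described above for the ZooKeeper-backed path (names and types are invented; the real tool operates on its own config representations): keys that appear in both lists are dropped from the additions, then deletions and additions are applied to the currently defined keys and written back as a single update.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

class ConfigMergeSketch {
    static Map<String, String> merge(Map<String, String> current,
                                     Map<String, String> toAdd,
                                     Set<String> toDelete) {
        Map<String, String> additions = new HashMap<>(toAdd);
        additions.keySet().removeAll(toDelete);   // keys both added and deleted are removed from the adds

        Map<String, String> result = new HashMap<>(current);
        result.keySet().removeAll(toDelete);      // apply deletions to the currently-defined keys
        result.putAll(additions);                 // apply additions
        return result;                            // written back in one shot, hence atomically today
    }
}
```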

Also, separately, should the responses include a ThrottleTimeMs field?  I
believe so but would like to confirm.

Ron

On Mon, Jul 13, 2020 at 3:44 PM David Arthur  wrote:

> Thanks for the clarification, Colin. +1 binding from me
>
> -David
>
> On Mon, Jul 13, 2020 at 3:40 PM Colin McCabe  wrote:
>
> > Thanks, Boyang.  Fixed.
> >
> > best,
> > Colin
> >
> > On Mon, Jul 13, 2020, at 08:43, Boyang Chen wrote:
> > > Thanks for the update Colin. One nit comment to fix the RPC type
> > > for AlterScramUsersRequest as:
> > > "apiKey": 51,
> > > "type": "request",
> > > "name": "AlterScramUsersRequest",
> > > Other than that, +1 (binding) from me.
> > >
> > >
> > > On Mon, Jul 13, 2020 at 8:38 AM Colin McCabe 
> wrote:
> > >
> > > > Hi David,
> > > >
> > > > The API is for clients.  Brokers will still listen to ZooKeeper to
> load
> > > > the SCRAM information.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Mon, Jul 13, 2020, at 08:30, David Arthur wrote:
> > > > > Thanks for the KIP, Colin. The new RPCs look good to me, just one
> > > > question:
> > > > > since we don't return the password info through the RPC, how will
> > brokers
> > > > > load this info? (I'm presuming that they need it to configure
> > > > > authentication)
> > > > >
> > > > > -David
> > > > >
> > > > > On Mon, Jul 13, 2020 at 10:57 AM Colin McCabe 
> > > > wrote:
> > > > >
> > > > > > On Fri, Jul 10, 2020, at 10:55, Boyang Chen wrote:
> > > > > > > Hey Colin, thanks for the KIP. One question I have about
> > > > AlterScramUsers
> > > > > > > RPC is whether we could consolidate the deletion list and
> > alteration
> > > > > > list,
> > > > > > > since in response we only have a single list of results. The
> > further
> > > > > > > benefit is to reduce unintentional duplicate entries for both
> > > > deletion
> > > > > > and
> > > > > > > alteration, which makes the broker side handling logic easier.
> > > > Another
> > > > > > > alternative is to add DeleteScramUsers RPC to align what we
> > currently
> > > > > > have
> > > > > > > with other user provided data such as delegation tokens
> (create,
> > > > change,
> > > > > > > delete).
> > > > > > >
> > > > > >
> > > > > > Hi Boyang,
> > > > > >
> > > > > > It can't really be consolidated without some awkwardness.  It's
> > > > probably
> > > > > > better just to create a DeleteScramUsers function and RPC.  I've
> > > > changed
> > > > > > the KIP.
> > > > > >
> > > > > > >
> > > > > > > For my own education, the salt will be automatically generated
> > by the
> > > > > > admin
> > > > > > > client when we send the SCRAM requests correct?
> > > > > > >
> > > > > >
> > > > > > Yes, the client generates the salt before sending the request.
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > > > > > Best,
> > > > > > > Boyang
> > > > > > >
> > > > > > > On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram <
> > > > rajinisiva...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > +1 (binding)
> > > > > > > >
> > > > > > > > Thanks for the KIP, Colin!
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > >
> > > > > > > > Rajini
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe <
> > cmcc...@apache.org>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > I'd like to call a vote for KIP-554: Add a broker-side
> SCRAM
> > > > > > > > configuration
> > > > > > > > > API.  The KIP is here:
> > > > https://cwiki.apache.org/confluence/x/ihERCQ
> > > > > > > > >
> > > > > > > > > The previous discussion thread is here:
> > > > > > > > >
> > 

Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-07-13 Thread Jason Gustafson
Let me add a little more color to my question about the controller.id. I
have been assuming that each process would run a single Raft replication
thread for the metadata quorum, just that its role might be different. Some
would be voters and some would be observers. To me, that suggests that
broker.id and controller.id should be the same thing, but it sounds like
you are trying hard to make the controller process an isolated thing like
Zookeeper was. In that case, how would it work for brokers which have both
the "broker" and "controller" roles? Would they replicate the metadata log
twice? Or would the broker skip "broker" replication if the process
happens to also be a controller?

At a higher level, I'm not 100% sold on the hard lines you are trying to
draw between the broker and controller roles. It seems simpler to me from
an operational perspective if we can keep the configuration of all
processes as homogeneous as possible, perhaps with the only difference
being `process.roles`. It would help if you could motivate the need for
this separation a bit better and explain how it will affect operations.
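
To ground the question, here is a hypothetical sketch of how a single process carrying both roles might be configured under the KIP's proposal; the config names (`process.roles`, `controller.id`, `controller.listeners`) are the draft's proposals under discussion, not a released API:

```java
import java.util.Properties;

class CombinedRoleConfigSketch {
    // One JVM, both roles: does it replicate the metadata log once or twice?
    static Properties combinedProcess() {
        Properties props = new Properties();
        props.put("process.roles", "broker,controller");               // proposed role switch
        props.put("broker.id", "1");
        props.put("controller.id", "3001");                            // separate id per the current draft
        props.put("listeners", "PLAINTEXT://:9092,CONTROLLER://:9093");
        props.put("controller.listeners", "CONTROLLER");               // proposed controller-only listener
        return props;
    }
}
```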

-Jason

On Mon, Jul 13, 2020 at 11:08 AM Boyang Chen 
wrote:

> Hey Colin, some quick questions,
>
> 1. I looked around and didn't find a config for broker heartbeat interval,
> are we piggy-backing on some existing config?
> 2. We only mentioned that the lease time is 10X of the heartbeat interval,
> could we also include why we chose this value?
>
> On Mon, Jul 13, 2020 at 10:09 AM Jason Gustafson 
> wrote:
>
> > Hi Colin,
> >
> > Thanks for the proposal. A few initial comments/questions below:
> >
> > 1. I don't follow why we need a separate configuration for
> > `controller.listeners`. The current listener configuration already allows
> > users to specify multiple listeners, which allows them to define internal
> > endpoints that are not exposed to clients. Can you explain what the new
> > configuration gives us that we don't already have?
> > 2. What is the advantage of creating a separate `controller.id` instead
> of
> > just using `broker.id`?
> > 3. It sounds like you are imagining a stop-the-world approach to
> > snapshotting, which is why we need the controller micromanaging snapshots
> > on all followers. Alternatives include fuzzy snapshots which can be done
> > concurrently. If this has been rejected, can you add some detail about
> why?
> > 4. More of a nit, but should `DeleteBrokerRecord` be
> > `ShutdownBrokerRecord`? The broker is just getting removed from ISRs, but
> > it would still be present in the replica set (I assume).
> >
> > Thanks,
> > Jason
> >
> > On Sun, Jul 12, 2020 at 12:24 AM Colin McCabe 
> wrote:
> >
> > > Hi Unmesh,
> > >
> > > That's an interesting idea, but I think it would be best to strive for
> > > single metadata events that are complete in themselves, rather than
> > trying
> > > to do something transactional or EOS-like.  For example, we could have
> a
> > > create event that contains all the partitions to be created.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Fri, Jul 10, 2020, at 04:12, Unmesh Joshi wrote:
> > > > I was thinking that we might need something like multi-operation
> > > >  record in
> > > zookeeper
> > > > to atomically create topic and partition records when this multi
> record
> > > is
> > > > committed.  This way metadata will have both the TopicRecord and
> > > > PartitionRecord together always, and in no situation we can have
> > > > TopicRecord without PartitionRecord. Not sure if there are other
> > > situations
> > > > where multi-operation is needed.
> > > > 
> > > >
> > > > Thanks,
> > > > Unmesh
> > > >
> > > > On Fri, Jul 10, 2020 at 11:32 AM Colin McCabe 
> > > wrote:
> > > >
> > > > > Hi Unmesh,
> > > > >
> > > > > Yes, once the last stable offset advanced, we would consider the
> > topic
> > > > > creation to be done, and then we could return success to the
> client.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > On Thu, Jul 9, 2020, at 19:44, Unmesh Joshi wrote:
> > > > > > It still needs HighWaterMark / LastStableOffset to be advanced by
> > two
> > > > > > records? Something like following?
> > > > > >
> > > > > >
> > > > > >                  |                 |
> > > > > >  <---            |                 |  <-- HighWaterMark
> > > > > >  Response        | PartitionRecord |
> > > > > >                  |-----------------|
> > > > > >                  | TopicRecord     |
> > > > > >                  |-----------------|
> > > > > >  --->            |                 |  <-- Previous HighWaterMark
> > > > > >  CreateTopic     |                 |
> > > > > >                  |                 |
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread David Arthur
Thanks for the clarification, Colin. +1 binding from me

-David

On Mon, Jul 13, 2020 at 3:40 PM Colin McCabe  wrote:

> Thanks, Boyang.  Fixed.
>
> best,
> Colin
>
> On Mon, Jul 13, 2020, at 08:43, Boyang Chen wrote:
> > Thanks for the update Colin. One nit comment to fix the RPC type
> > for AlterScramUsersRequest as:
> > "apiKey": 51,
> > "type": "request",
> > "name": "AlterScramUsersRequest",
> > Other than that, +1 (binding) from me.
> >
> >
> > On Mon, Jul 13, 2020 at 8:38 AM Colin McCabe  wrote:
> >
> > > Hi David,
> > >
> > > The API is for clients.  Brokers will still listen to ZooKeeper to load
> > > the SCRAM information.
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Mon, Jul 13, 2020, at 08:30, David Arthur wrote:
> > > > Thanks for the KIP, Colin. The new RPCs look good to me, just one
> > > question:
> > > > since we don't return the password info through the RPC, how will
> brokers
> > > > load this info? (I'm presuming that they need it to configure
> > > > authentication)
> > > >
> > > > -David
> > > >
> > > > On Mon, Jul 13, 2020 at 10:57 AM Colin McCabe 
> > > wrote:
> > > >
> > > > > On Fri, Jul 10, 2020, at 10:55, Boyang Chen wrote:
> > > > > > Hey Colin, thanks for the KIP. One question I have about
> > > AlterScramUsers
> > > > > > RPC is whether we could consolidate the deletion list and
> alteration
> > > > > list,
> > > > > > since in response we only have a single list of results. The
> further
> > > > > > benefit is to reduce unintentional duplicate entries for both
> > > deletion
> > > > > and
> > > > > > alteration, which makes the broker side handling logic easier.
> > > Another
> > > > > > alternative is to add DeleteScramUsers RPC to align what we
> currently
> > > > > have
> > > > > > with other user provided data such as delegation tokens (create,
> > > change,
> > > > > > delete).
> > > > > >
> > > > >
> > > > > Hi Boyang,
> > > > >
> > > > > It can't really be consolidated without some awkwardness.  It's
> > > probably
> > > > > better just to create a DeleteScramUsers function and RPC.  I've
> > > changed
> > > > > the KIP.
> > > > >
> > > > > >
> > > > > > For my own education, the salt will be automatically generated
> by the
> > > > > admin
> > > > > > client when we send the SCRAM requests correct?
> > > > > >
> > > > >
> > > > > Yes, the client generates the salt before sending the request.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > > Best,
> > > > > > Boyang
> > > > > >
> > > > > > On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram <
> > > rajinisiva...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > +1 (binding)
> > > > > > >
> > > > > > > Thanks for the KIP, Colin!
> > > > > > >
> > > > > > > Regards,
> > > > > > >
> > > > > > > Rajini
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe <
> cmcc...@apache.org>
> > > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I'd like to call a vote for KIP-554: Add a broker-side SCRAM
> > > > > > > configuration
> > > > > > > > API.  The KIP is here:
> > > https://cwiki.apache.org/confluence/x/ihERCQ
> > > > > > > >
> > > > > > > > The previous discussion thread is here:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > >
> https://lists.apache.org/thread.html/r69bdc65bdf58f5576944a551ff249d759073ecbf5daa441cff680ab0%40%3Cdev.kafka.apache.org%3E
> > > > > > > >
> > > > > > > > best,
> > > > > > > > Colin
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > David Arthur
> > > >
> > >
> >
>


-- 
David Arthur


Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread Colin McCabe
Thanks, Boyang.  Fixed.

best,
Colin

On Mon, Jul 13, 2020, at 08:43, Boyang Chen wrote:
> Thanks for the update Colin. One nit comment to fix the RPC type
> for AlterScramUsersRequest as:
> "apiKey": 51,
> "type": "request",
> "name": "AlterScramUsersRequest",
> Other than that, +1 (binding) from me.
> 
> 
> On Mon, Jul 13, 2020 at 8:38 AM Colin McCabe  wrote:
> 
> > Hi David,
> >
> > The API is for clients.  Brokers will still listen to ZooKeeper to load
> > the SCRAM information.
> >
> > best,
> > Colin
> >
> >
> > On Mon, Jul 13, 2020, at 08:30, David Arthur wrote:
> > > Thanks for the KIP, Colin. The new RPCs look good to me, just one
> > question:
> > > since we don't return the password info through the RPC, how will brokers
> > > load this info? (I'm presuming that they need it to configure
> > > authentication)
> > >
> > > -David
> > >
> > > On Mon, Jul 13, 2020 at 10:57 AM Colin McCabe 
> > wrote:
> > >
> > > > On Fri, Jul 10, 2020, at 10:55, Boyang Chen wrote:
> > > > > Hey Colin, thanks for the KIP. One question I have about
> > AlterScramUsers
> > > > > RPC is whether we could consolidate the deletion list and alteration
> > > > list,
> > > > > since in response we only have a single list of results. The further
> > > > > benefit is to reduce unintentional duplicate entries for both
> > deletion
> > > > and
> > > > > alteration, which makes the broker side handling logic easier.
> > Another
> > > > > alternative is to add DeleteScramUsers RPC to align what we currently
> > > > have
> > > > > with other user provided data such as delegation tokens (create,
> > change,
> > > > > delete).
> > > > >
> > > >
> > > > Hi Boyang,
> > > >
> > > > It can't really be consolidated without some awkwardness.  It's
> > probably
> > > > better just to create a DeleteScramUsers function and RPC.  I've
> > changed
> > > > the KIP.
> > > >
> > > > >
> > > > > For my own education, the salt will be automatically generated by the
> > > > admin
> > > > > client when we send the SCRAM requests correct?
> > > > >
> > > >
> > > > Yes, the client generates the salt before sending the request.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > > > Best,
> > > > > Boyang
> > > > >
> > > > > On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > Thanks for the KIP, Colin!
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > Rajini
> > > > > >
> > > > > >
> > > > > > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe 
> > > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I'd like to call a vote for KIP-554: Add a broker-side SCRAM
> > > > > > configuration
> > > > > > > API.  The KIP is here:
> > https://cwiki.apache.org/confluence/x/ihERCQ
> > > > > > >
> > > > > > > The previous discussion thread is here:
> > > > > > >
> > > > > > >
> > > > > >
> > > >
> > https://lists.apache.org/thread.html/r69bdc65bdf58f5576944a551ff249d759073ecbf5daa441cff680ab0%40%3Cdev.kafka.apache.org%3E
> > > > > > >
> > > > > > > best,
> > > > > > > Colin
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > David Arthur
> > >
> >
>


[jira] [Reopened] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-5722:
--

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5722.
--
Resolution: Duplicate

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk11 #1638

2020-07-13 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5722.
--
Resolution: Fixed

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-5722:
--

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-trunk-jdk14 #286

2020-07-13 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #4711

2020-07-13 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-07-13 Thread Boyang Chen
Hey Colin, some quick questions,

1. I looked around and didn't find a config for the broker heartbeat interval;
are we piggy-backing on some existing configs?
2. We only mentioned that the lease time is 10X the heartbeat interval;
could we also include why we chose this value?

On Mon, Jul 13, 2020 at 10:09 AM Jason Gustafson  wrote:

> Hi Colin,
>
> Thanks for the proposal. A few initial comments/questions below:
>
> 1. I don't follow why we need a separate configuration for
> `controller.listeners`. The current listener configuration already allows
> users to specify multiple listeners, which allows them to define internal
> endpoints that are not exposed to clients. Can you explain what the new
> configuration gives us that we don't already have?
> 2. What is the advantage of creating a separate `controller.id` instead of
> just using `broker.id`?
> 3. It sounds like you are imagining a stop-the-world approach to
> snapshotting, which is why we need the controller micromanaging snapshots
> on all followers. Alternatives include fuzzy snapshots which can be done
> concurrently. If this has been rejected, can you add some detail about why?
> 4. More of a nit, but should `DeleteBrokerRecord` be
> `ShutdownBrokerRecord`? The broker is just getting removed from ISRs, but
> it would still be present in the replica set (I assume).
>
> Thanks,
> Jason
>
> On Sun, Jul 12, 2020 at 12:24 AM Colin McCabe  wrote:
>
> > Hi Unmesh,
> >
> > That's an interesting idea, but I think it would be best to strive for
> > single metadata events that are complete in themselves, rather than
> trying
> > to do something transactional or EOS-like.  For example, we could have a
> > create event that contains all the partitions to be created.
> >
> > best,
> > Colin
> >
> >
> > On Fri, Jul 10, 2020, at 04:12, Unmesh Joshi wrote:
> > > I was thinking that we might need something like multi-operation
> > >  record in
> > zookeeper
> > > to atomically create topic and partition records when this multi record
> > is
> > > committed.  This way metadata will always have both the TopicRecord and
> > > PartitionRecord together, and in no situation can we have a
> > > TopicRecord without a PartitionRecord. Not sure if there are other
> > situations
> > > where multi-operation is needed.
> > > 
> > >
> > > Thanks,
> > > Unmesh
> > >
> > > On Fri, Jul 10, 2020 at 11:32 AM Colin McCabe 
> > wrote:
> > >
> > > > Hi Unmesh,
> > > >
> > > > Yes, once the last stable offset advanced, we would consider the
> topic
> > > > creation to be done, and then we could return success to the client.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > > On Thu, Jul 9, 2020, at 19:44, Unmesh Joshi wrote:
> > > > > It still needs HighWaterMark / LastStableOffset to be advanced by
> two
> > > > > records? Something like following?
> > > > >
> > > > >
> > > > >||
> > > > > <--||   HighWaterMark
> > > > >Response|PartitionRecord |
> > > > >||
> > > > >-|
> > > > >| TopicRecord|  -
> > > > >||
> > > > > --->   --   Previous HighWaterMark
> > > > >CreateTopic ||
> > > > >||
> > > > >||
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Fri, Jul 10, 2020 at 1:30 AM Colin McCabe 
> > wrote:
> > > > >
> > > > > > On Thu, Jul 9, 2020, at 04:37, Unmesh Joshi wrote:
> > > > > > > I see that, when a new topic is created, two metadata records,
> a
> > > > > > > TopicRecord (just the name and id of the topic) and a
> > PartitionRecord
> > > > > > (more
> > > > > > > like LeaderAndIsr, with leader id and replica ids for the
> > partition)
> > > > are
> > > > > > > created.
> > > > > > > While creating the topic, log entries for both the records need
> > to be
> > > > > > > committed in RAFT core. Will it need something like a
> > > > > > MultiOperationRecord
> > > > > > > in zookeeper. Then, we can have a single log entry with both
> the
> > > > records,
> > > > > > > and  the create topic request can be fulfilled atomically when
> > both
> > > > the
> > > > > > > records are committed?
> > > > > > >
> > > > > >
> > > > > > Hi Unmesh,
> > > > > >
> > > > > > Since the active controller is the only node writing to the log,
> > there
> > > > is
> > > > > > no need for any kind of synchronization or access control at the
> > log
> > > > level.
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Unmesh
> > > > > > >
> > > > > > > On Wed, Jul 8, 2020 at 6:57 AM Ron Dagostino <

[jira] [Created] (KAFKA-10270) Add a broker to controller channel manager to redirect AlterConfig

2020-07-13 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10270:
---

 Summary: Add a broker to controller channel manager to redirect 
AlterConfig
 Key: KAFKA-10270
 URL: https://issues.apache.org/jira/browse/KAFKA-10270
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen
Assignee: Boyang Chen






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-07-13 Thread Jason Gustafson
Hi Colin,

Thanks for the proposal. A few initial comments/questions below:

1. I don't follow why we need a separate configuration for
`controller.listeners`. The current listener configuration already allows
users to specify multiple listeners, which allows them to define internal
endpoints that are not exposed to clients. Can you explain what the new
configuration gives us that we don't already have?
2. What is the advantage of creating a separate `controller.id` instead of
just using `broker.id`?
3. It sounds like you are imagining a stop-the-world approach to
snapshotting, which is why we need the controller micromanaging snapshots
on all followers. Alternatives include fuzzy snapshots which can be done
concurrently. If this has been rejected, can you add some detail about why?
4. More of a nit, but should `DeleteBrokerRecord` be
`ShutdownBrokerRecord`? The broker is just getting removed from ISRs, but
it would still be present in the replica set (I assume).

Thanks,
Jason

On Sun, Jul 12, 2020 at 12:24 AM Colin McCabe  wrote:

> Hi Unmesh,
>
> That's an interesting idea, but I think it would be best to strive for
> single metadata events that are complete in themselves, rather than trying
> to do something transactional or EOS-like.  For example, we could have a
> create event that contains all the partitions to be created.
>
> best,
> Colin
>
>
> On Fri, Jul 10, 2020, at 04:12, Unmesh Joshi wrote:
> > I was thinking that we might need something like multi-operation
> >  record in
> zookeeper
> > to atomically create topic and partition records when this multi record
> is
> > committed.  This way metadata will always have both the TopicRecord and
> > PartitionRecord together, and in no situation can we have a
> > TopicRecord without a PartitionRecord. Not sure if there are other
> situations
> > where multi-operation is needed.
> > 
> >
> > Thanks,
> > Unmesh
> >
> > On Fri, Jul 10, 2020 at 11:32 AM Colin McCabe 
> wrote:
> >
> > > Hi Unmesh,
> > >
> > > Yes, once the last stable offset advanced, we would consider the topic
> > > creation to be done, and then we could return success to the client.
> > >
> > > best,
> > > Colin
> > >
> > > On Thu, Jul 9, 2020, at 19:44, Unmesh Joshi wrote:
> > > > It still needs HighWaterMark / LastStableOffset to be advanced by two
> > > > records? Something like following?
> > > >
> > > >
> > > >||
> > > > <--||   HighWaterMark
> > > >Response|PartitionRecord |
> > > >||
> > > >-|
> > > >| TopicRecord|  -
> > > >||
> > > > --->   --   Previous HighWaterMark
> > > >CreateTopic ||
> > > >||
> > > >||
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Fri, Jul 10, 2020 at 1:30 AM Colin McCabe 
> wrote:
> > > >
> > > > > On Thu, Jul 9, 2020, at 04:37, Unmesh Joshi wrote:
> > > > > > I see that, when a new topic is created, two metadata records, a
> > > > > > TopicRecord (just the name and id of the topic) and a
> PartitionRecord
> > > > > (more
> > > > > > like LeaderAndIsr, with leader id and replica ids for the
> partition)
> > > are
> > > > > > created.
> > > > > > While creating the topic, log entries for both the records need
> to be
> > > > > > committed in RAFT core. Will it need something like a
> > > > > MultiOperationRecord
> > > > > > in zookeeper. Then, we can have a single log entry with both the
> > > records,
> > > > > > and  the create topic request can be fulfilled atomically when
> both
> > > the
> > > > > > records are committed?
> > > > > >
> > > > >
> > > > > Hi Unmesh,
> > > > >
> > > > > Since the active controller is the only node writing to the log,
> there
> > > is
> > > > > no need for any kind of synchronization or access control at the
> log
> > > level.
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Unmesh
> > > > > >
> > > > > > On Wed, Jul 8, 2020 at 6:57 AM Ron Dagostino 
> > > wrote:
> > > > > >
> > > > > > > HI Colin.  Thanks for the KIP.  Here is some feedback and
> various
> > > > > > > questions.
> > > > > > >
> > > > > > > "*Controller processes will listen on a separate port from
> brokers.
> > > > > This
> > > > > > > will be true even when the broker and controller are
> co-located in
> > > the
> > > > > same
> > > > > > > JVM*". I assume it is possible that the port numbers could be
> the
> > > same
> > > > > when
> > > > > > > using separate JVMs (i.e. broker uses port 9192 and controller
> also
> > > > > uses
> > > > > > > po

Re: [DISCUSS] KIP-595: A Raft Protocol for the Metadata Quorum

2020-07-13 Thread Jason Gustafson
Hi All,

Just a quick update on the proposal. We have decided to move quorum
reassignment to a separate KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-642%3A+Dynamic+quorum+reassignment.
The way this ties into cluster bootstrapping is complicated, so we felt we
needed a bit more time for validation. That leaves the core of this
proposal as quorum-based replication. If there are no further comments, we
will plan to start a vote later this week.

Thanks,
Jason

On Wed, Jun 24, 2020 at 10:43 AM Guozhang Wang  wrote:

> @Jun Rao 
>
> Regarding your comment about log compaction. After some deep-diving into
> this we've decided to propose a new snapshot-based log cleaning mechanism
> which would be used to replace the current compaction mechanism for this
> meta log. A new KIP will be proposed specifically for this idea.
>
> All,
>
> I've updated the KIP wiki a bit updating one config "
> election.jitter.max.ms"
> to "election.backoff.max.ms" to make it more clear about the usage: the
> configured value will be the upper bound of the binary exponential backoff
> time after a failed election, before starting a new one.
>
>
>
> Guozhang
>
>
>
> On Fri, Jun 12, 2020 at 9:26 AM Boyang Chen 
> wrote:
>
> > Thanks for the suggestions Guozhang.
> >
> > On Thu, Jun 11, 2020 at 2:51 PM Guozhang Wang 
> wrote:
> >
> > > Hello Boyang,
> > >
> > > Thanks for the updated information. A few questions here:
> > >
> > > 1) Should the quorum-file also update to support multi-raft?
> > >
> > > I'm neutral about this, as we don't know yet how the multi-raft modules
> > would behave. If
> > we have different threads operating different raft groups, consolidating
> > the `checkpoint` files seems
> > not reasonable. We could always add `multi-quorum-file` later if
> possible.
> >
> > 2) In the previous proposal, there's fields in the FetchQuorumRecords
> like
> > > latestDirtyOffset, is that dropped intentionally?
> > >
> > > I dropped the latestDirtyOffset since it is associated with the log
> > compaction discussion. This is beyond this KIP scope and we could
> > potentially get a separate KIP to talk about it.
> >
> >
> > > 3) I think we also need to elaborate in a bit more detail regarding when to
> > > send metadata and discover-brokers requests; currently we only discussed
> > > during bootstrap how these requests would be sent. I think the following
> > > scenarios would also need these requests
> > >
> > > 3.a) As long as a broker does not know the current quorum (including the
> > > leader and the voters), it should continue to periodically ask other brokers
> > > via "metadata".
> > > 3.b) As long as a broker does not know all the current quorum voters'
> > > connections, it should continue to periodically ask other brokers via
> > > "discover-brokers".
> > > 3.c) When the leader's fetch timeout has elapsed, it should send a metadata
> > > request.
> > >
> > > Make sense, will add to the KIP.
> >
> > >
> > > Guozhang
> > >
> > >
> > > On Wed, Jun 10, 2020 at 5:20 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > > wrote:
> > >
> > > > Hey all,
> > > >
> > > > follow-up on the previous email, we made some more updates:
> > > >
> > > > 1. The Alter/DescribeQuorum RPCs are also re-structured to use
> > > multi-raft.
> > > >
> > > > 2. We add observer status into the DescribeQuorumResponse as we see
> it
> > > is a
> > > > low hanging fruit which is very useful for user debugging and
> > > reassignment.
> > > >
> > > > 3. The FindQuorum RPC is replaced with DiscoverBrokers RPC, which is
> > > purely
> > > > in charge of discovering broker connections in a gossip manner. The
> > > quorum
> > > > leader discovery is piggy-back on the Metadata RPC for the topic
> > > partition
> > > > leader, which in our case is the single metadata partition for the
> > > version
> > > > one.
> > > >
> > > > Let me know if you have any questions.
> > > >
> > > > Boyang
> > > >
> > > > On Tue, Jun 9, 2020 at 11:01 PM Boyang Chen <
> > reluctanthero...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hey all,
> > > > >
> > > > > Thanks for the great discussions so far. I'm posting some KIP
> updates
> > > > from
> > > > > our working group discussion:
> > > > >
> > > > > 1. We will be changing the core RPCs from single-raft API to
> > > multi-raft.
> > > > > This means all protocols will be "batch" in the first version, but
> > the
> > > > KIP
> > > > > itself only illustrates the design for a single metadata topic
> > > partition.
> > > > > The reason is to "keep the door open" for future extensions of this
> > > piece
> > > > > of module such as a sharded controller or general quorum based
> topic
> > > > > replication, beyond the current Kafka replication protocol.
> > > > >
> > > > > 2. We will piggy-back on the current Kafka Fetch API instead of
> > > inventing
> > > > > a new FetchQuorumRecords RPC. The motivation is about the same as
> #1
> > as
> > > > > well as making the integration work easier, instead of letting two
> > > > similar
> > 
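To illustrate the semantics described above for election.backoff.max.ms: after a
failed election a candidate roughly doubles its wait per attempt, caps it at the
configured maximum, and randomizes within that bound. The sketch below is only an
assumption about the general shape of such a backoff, not the actual Raft
implementation; the method name and formula are illustrative.

// Hedged sketch of binary exponential backoff capped by election.backoff.max.ms.
import java.util.concurrent.ThreadLocalRandom;

public class ElectionBackoff {
    public static long backoffMs(int failedAttempts, long baseMs, long electionBackoffMaxMs) {
        long exp = baseMs * (1L << Math.min(failedAttempts, 20)); // doubles per failed election
        long capped = Math.min(exp, electionBackoffMaxMs);        // upper-bounded by the config
        return ThreadLocalRandom.current().nextLong(capped + 1);  // jitter within the bound
    }

    public static void main(String[] args) {
        for (int attempt = 1; attempt <= 5; attempt++) {
            System.out.println("attempt " + attempt + " -> "
                + backoffMs(attempt, 50, 1000) + " ms");
        }
    }
}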

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread Boyang Chen
Thanks for the update Colin. One nit comment to fix the RPC type
for AlterScramUsersRequest as:
"apiKey": 51,
"type": "request",
"name": "AlterScramUsersRequest",
Other than that, +1 (binding) from me.


On Mon, Jul 13, 2020 at 8:38 AM Colin McCabe  wrote:

> Hi David,
>
> The API is for clients.  Brokers will still listen to ZooKeeper to load
> the SCRAM information.
>
> best,
> Colin
>
>
> On Mon, Jul 13, 2020, at 08:30, David Arthur wrote:
> > Thanks for the KIP, Colin. The new RPCs look good to me, just one
> question:
> > since we don't return the password info through the RPC, how will brokers
> > load this info? (I'm presuming that they need it to configure
> > authentication)
> >
> > -David
> >
> > On Mon, Jul 13, 2020 at 10:57 AM Colin McCabe 
> wrote:
> >
> > > On Fri, Jul 10, 2020, at 10:55, Boyang Chen wrote:
> > > > Hey Colin, thanks for the KIP. One question I have about
> AlterScramUsers
> > > > RPC is whether we could consolidate the deletion list and alteration
> > > list,
> > > > since in response we only have a single list of results. The further
> > > > benefit is to reduce unintentional duplicate entries for both
> deletion
> > > and
> > > > alteration, which makes the broker side handling logic easier.
> Another
> > > > alternative is to add DeleteScramUsers RPC to align what we currently
> > > have
> > > > with other user provided data such as delegation tokens (create,
> change,
> > > > delete).
> > > >
> > >
> > > Hi Boyang,
> > >
> > > It can't really be consolidated without some awkwardness.  It's
> probably
> > > better just to create a DeleteScramUsers function and RPC.  I've
> changed
> > > the KIP.
> > >
> > > >
> > > > For my own education, the salt will be automatically generated by the
> > > admin
> > > > client when we send the SCRAM requests correct?
> > > >
> > >
> > > Yes, the client generates the salt before sending the request.
> > >
> > > best,
> > > Colin
> > >
> > > > Best,
> > > > Boyang
> > > >
> > > > On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 (binding)
> > > > >
> > > > > Thanks for the KIP, Colin!
> > > > >
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > > >
> > > > > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe 
> > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I'd like to call a vote for KIP-554: Add a broker-side SCRAM
> > > > > configuration
> > > > > > API.  The KIP is here:
> https://cwiki.apache.org/confluence/x/ihERCQ
> > > > > >
> > > > > > The previous discussion thread is here:
> > > > > >
> > > > > >
> > > > >
> > >
> https://lists.apache.org/thread.html/r69bdc65bdf58f5576944a551ff249d759073ecbf5daa441cff680ab0%40%3Cdev.kafka.apache.org%3E
> > > > > >
> > > > > > best,
> > > > > > Colin
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > David Arthur
> >
>


Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread Colin McCabe
Hi David,

The API is for clients.  Brokers will still listen to ZooKeeper to load the 
SCRAM information.

best,
Colin


On Mon, Jul 13, 2020, at 08:30, David Arthur wrote:
> Thanks for the KIP, Colin. The new RPCs look good to me, just one question:
> since we don't return the password info through the RPC, how will brokers
> load this info? (I'm presuming that they need it to configure
> authentication)
> 
> -David
> 
> On Mon, Jul 13, 2020 at 10:57 AM Colin McCabe  wrote:
> 
> > On Fri, Jul 10, 2020, at 10:55, Boyang Chen wrote:
> > > Hey Colin, thanks for the KIP. One question I have about AlterScramUsers
> > > RPC is whether we could consolidate the deletion list and alteration
> > list,
> > > since in response we only have a single list of results. The further
> > > benefit is to reduce unintentional duplicate entries for both deletion
> > and
> > > alteration, which makes the broker side handling logic easier. Another
> > > alternative is to add DeleteScramUsers RPC to align what we currently
> > have
> > > with other user provided data such as delegation tokens (create, change,
> > > delete).
> > >
> >
> > Hi Boyang,
> >
> > It can't really be consolidated without some awkwardness.  It's probably
> > better just to create a DeleteScramUsers function and RPC.  I've changed
> > the KIP.
> >
> > >
> > > For my own education, the salt will be automatically generated by the
> > admin
> > > client when we send the SCRAM requests correct?
> > >
> >
> > Yes, the client generates the salt before sending the request.
> >
> > best,
> > Colin
> >
> > > Best,
> > > Boyang
> > >
> > > On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram 
> > > wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > Thanks for the KIP, Colin!
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > > >
> > > > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to call a vote for KIP-554: Add a broker-side SCRAM
> > > > configuration
> > > > > API.  The KIP is here: https://cwiki.apache.org/confluence/x/ihERCQ
> > > > >
> > > > > The previous discussion thread is here:
> > > > >
> > > > >
> > > >
> > https://lists.apache.org/thread.html/r69bdc65bdf58f5576944a551ff249d759073ecbf5daa441cff680ab0%40%3Cdev.kafka.apache.org%3E
> > > > >
> > > > > best,
> > > > > Colin
> > > > >
> > > >
> > >
> >
> 
> 
> -- 
> David Arthur
>


Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread David Arthur
Thanks for the KIP, Colin. The new RPCs look good to me, just one question:
since we don't return the password info through the RPC, how will brokers
load this info? (I'm presuming that they need it to configure
authentication)

-David

On Mon, Jul 13, 2020 at 10:57 AM Colin McCabe  wrote:

> On Fri, Jul 10, 2020, at 10:55, Boyang Chen wrote:
> > Hey Colin, thanks for the KIP. One question I have about AlterScramUsers
> > RPC is whether we could consolidate the deletion list and alteration
> list,
> > since in response we only have a single list of results. The further
> > benefit is to reduce unintentional duplicate entries for both deletion
> and
> > alteration, which makes the broker side handling logic easier. Another
> > alternative is to add DeleteScramUsers RPC to align what we currently
> have
> > with other user provided data such as delegation tokens (create, change,
> > delete).
> >
>
> Hi Boyang,
>
> It can't really be consolidated without some awkwardness.  It's probably
> better just to create a DeleteScramUsers function and RPC.  I've changed
> the KIP.
>
> >
> > For my own education, the salt will be automatically generated by the
> admin
> > client when we send the SCRAM requests correct?
> >
>
> Yes, the client generates the salt before sending the request.
>
> best,
> Colin
>
> > Best,
> > Boyang
> >
> > On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram 
> > wrote:
> >
> > > +1 (binding)
> > >
> > > Thanks for the KIP, Colin!
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to call a vote for KIP-554: Add a broker-side SCRAM
> > > configuration
> > > > API.  The KIP is here: https://cwiki.apache.org/confluence/x/ihERCQ
> > > >
> > > > The previous discussion thread is here:
> > > >
> > > >
> > >
> https://lists.apache.org/thread.html/r69bdc65bdf58f5576944a551ff249d759073ecbf5daa441cff680ab0%40%3Cdev.kafka.apache.org%3E
> > > >
> > > > best,
> > > > Colin
> > > >
> > >
> >
>


-- 
David Arthur


Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread Colin McCabe
On Fri, Jul 10, 2020, at 10:55, Boyang Chen wrote:
> Hey Colin, thanks for the KIP. One question I have about AlterScramUsers
> RPC is whether we could consolidate the deletion list and alteration list,
> since in response we only have a single list of results. The further
> benefit is to reduce unintentional duplicate entries for both deletion and
> alteration, which makes the broker side handling logic easier. Another
> alternative is to add DeleteScramUsers RPC to align what we currently have
> with other user provided data such as delegation tokens (create, change,
> delete).
> 

Hi Boyang,

It can't really be consolidated without some awkwardness.  It's probably better 
just to create a DeleteScramUsers function and RPC.  I've changed the KIP.

>
> For my own education, the salt will be automatically generated by the admin
> client when we send the SCRAM requests correct?
> 

Yes, the client generates the salt before sending the request.

best,
Colin

> Best,
> Boyang
> 
> On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram 
> wrote:
> 
> > +1 (binding)
> >
> > Thanks for the KIP, Colin!
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to call a vote for KIP-554: Add a broker-side SCRAM
> > configuration
> > > API.  The KIP is here: https://cwiki.apache.org/confluence/x/ihERCQ
> > >
> > > The previous discussion thread is here:
> > >
> > >
> > https://lists.apache.org/thread.html/r69bdc65bdf58f5576944a551ff249d759073ecbf5daa441cff680ab0%40%3Cdev.kafka.apache.org%3E
> > >
> > > best,
> > > Colin
> > >
> >
>
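To make the salt point concrete: in SCRAM (RFC 5802) the salted password is
Hi(password, salt, iterations), which for SCRAM-SHA-256 is PBKDF2 with HMAC-SHA-256.
A minimal sketch of the client-side derivation follows; the iteration count and
variable names are illustrative assumptions, not the admin client's actual code.

// Hedged sketch: client-side salt generation and salted-password derivation
// for SCRAM-SHA-256. The cleartext password never needs to leave the client.
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class ScramSaltSketch {
    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);           // client-generated salt
        int iterations = 4096;                        // SCRAM's minimum default
        char[] password = "user-secret".toCharArray();
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        byte[] saltedPassword = SecretKeyFactory
            .getInstance("PBKDF2WithHmacSHA256")
            .generateSecret(spec)
            .getEncoded();
        // The salt, iteration count, and derived credential are what a request would carry.
        System.out.printf("salt=%d bytes, saltedPassword=%d bytes%n",
            salt.length, saltedPassword.length);
    }
}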


[jira] [Created] (KAFKA-10269) AdminClient ListOffsetsResultInfo/timestamp is always -1

2020-07-13 Thread Derek Troy-West (Jira)
Derek Troy-West created KAFKA-10269:
---

 Summary: AdminClient ListOffsetsResultInfo/timestamp is always -1
 Key: KAFKA-10269
 URL: https://issues.apache.org/jira/browse/KAFKA-10269
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.5.0
Reporter: Derek Troy-West


When using AdminClient/listOffsets, the resulting ListOffsetsResultInfo objects appear 
to always have a timestamp of -1.

I've run listOffsets against live clusters with multiple Kafka versions (from 
1.0 to 2.5), with both CreateTime and LogAppendTime for message.timestamp.type; 
every result has a -1 timestamp.

e.g. 

org.apache.kafka.clients.admin.ListOffsetsResult$ListOffsetsResultInfo 0x5c3a771
"ListOffsetsResultInfo(
  offset=23016, 
  timestamp=-1, 
  leaderEpoch=Optional[0])
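A minimal sketch reproducing the call described above; the bootstrap address, topic,
and partition are placeholders. With OffsetSpec.latest(), the timestamp() field of the
result is the value reported as -1.

// Hedged sketch of the listOffsets call from the report.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class ListOffsetsCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        TopicPartition tp = new TopicPartition("my-topic", 0);
        try (Admin admin = Admin.create(props)) {
            ListOffsetsResultInfo info = admin
                .listOffsets(Collections.singletonMap(tp, OffsetSpec.latest()))
                .partitionResult(tp)
                .get();
            // The reported issue: timestamp() comes back as -1 here.
            System.out.println("offset=" + info.offset() + " timestamp=" + info.timestamp());
        }
    }
}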

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: NotEnoughReplicasException: The size of the current ISR Set(2) is insufficient to satisfy the min.isr requirement of 3

2020-07-13 Thread Nag Y
Thanks Liam, so the phrase "current ISR set" in the warning refers to the
ISR set that is shown by the kafka-topics describe command?
And also, should the "maximum" value of "min.insync.replicas" be
(replication factor - 1)? I mean, min.insync.replicas should not be the same
as the "replication factor".

Please confirm

On Tue, Jul 7, 2020 at 1:22 PM Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:

> Hi Nag,
>
> ISR is the replicas that are in sync with the leader, and there's a
> different ISR set for each partition of a given topic. If you use
> `kafka/bin/kafka-topics --describe --topic ` it'll show you the replicas
> and ISR for each partition.
>
> min.insync.replicas and replication factor are all about preventing data
> loss. Generally I set min ISR to 2 for a topic with a replication factor of
> 3 so that one down or struggling broker doesn't prevent producers writing
> to topics, but I still have a replica of the data in case the broker acting
> as leader goes down - a new partition leader can only be elected from the
> insync replicas.
>
> On Tue, Jul 7, 2020 at 7:39 PM Nag Y  wrote:
>
> > I had the following setup: Brokers: 3, all up and running with
> > min.insync.replicas=3.
> >
> > I created a topic with the following configuration
> >
> > bin\windows\kafka-topics --zookeeper 127.0.0.1:2181 --topic
> topic-ack-all
> > --create --partitions 4 --replication-factor 3
> >
> > I triggered the producer with "ack = all" and the producer is able to send
> > the message. However, the problem starts when I start the consumer
> >
> > bin\windows\kafka-console-consumer --bootstrap-server
> > localhost:9094,localhost:9092 --topic topic-ack-all --from-beginning
> >
> > The error is
> >
> > NotEnoughReplicasException: The size of the current ISR Set(2) is
> > insufficient to satisfy the min.isr requirement of 3
> > NotEnoughReplicasException:The size of the current ISR Set(3) is
> > insufficient to satisfy the min.isr requirement of 3 for partition __con
> >
> > I see two kinds of errors here. I went through the documentation and also have
> > an understanding of "min.isr"; however, these error messages are not
> > clear.
> >
> >    1. What does "current ISR set" mean? Is it different for each topic,
> >    and what does it signify?
> >    2. I guess min.isr is the same as min.insync.replicas. I expect it should
> >    have a value at least the same as the "replication factor"?
> >
>
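For reference, the "current ISR Set(2)" in the error is the per-partition in-sync
replica list, the same list that kafka-topics --describe prints. A small sketch
(broker address and topic name are placeholders) that reads it through the AdminClient:

// Hedged sketch: print replica and ISR counts per partition for a topic.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ShowIsr {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription desc = admin
                .describeTopics(Collections.singletonList("topic-ack-all"))
                .values().get("topic-ack-all").get();
            desc.partitions().forEach(p ->
                System.out.println("partition " + p.partition()
                    + " replicas=" + p.replicas().size()
                    + " isr=" + p.isr().size()));
        }
    }
}

With a replication factor of 3 and min.insync.replicas=3, any single replica falling
out of the ISR is enough to fail acks=all produces, which is why min.insync.replicas
is normally set below the replication factor (for example 2 with a replication factor
of 3), as Liam describes above.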


Re: [DISCUSS] KIP-363

2020-07-13 Thread Tom Bentley
Hi Magnus,

Sorry for the late reply. In the specific case you mentioned of an
OffsetCommitResponse with 100 partitions across 10 topics, we end up with
-200 bytes (no error) and +200 bytes (any error) in absolute terms. Precise
numbers depend on topic names too, of course, but I did a little test using
a hypothetical version 9 response and got these numbers:

Version=8 no error size=806
Version=9 no error size=606
Version=8 any error size=806
Version=9 any error size=1006

So about 25% less in the no-error case and 25% more in the error case,
compared with version 8.

Kind regards,

Tom





On Wed, Jul 8, 2020 at 8:47 PM Magnus Edenhill  wrote:

> Hi Tom,
>
> I think it would be useful with some real world (or made up!) numbers on
> how much relative/% space is saved for
> the most error-dense protocol requests.
> E.g., an OffsetCommitResponse with 10 topics and 100 failing partitions
> would reduce the overall size by % bytes.
>
> Thanks,
> Magnus
>
>
> Den tis 7 juli 2020 kl 17:01 skrev Colin McCabe :
>
> > Hi Tom,
> >
> > Thanks for this.  I think the tough part is probably the few messages
> that
> > are still using manual serialization, which can't be easily converted to
> > using this.  So we will probably have to upgrade them to using automatic
> > generation, or accept a little inconsistency for a while until they are
> > upgraded.
> >
> > best,
> > Colin
> >
> >
> > On Wed, Jul 1, 2020, at 09:21, Tom Bentley wrote:
> > > Hi all,
> > >
> > > Following a suggestion from Colin in the KIP-625 discussion thread, I'd
> > > like to start discussion on a much smaller KIP which proposes to make
> > error
> > > codes and messages tagged fields in all RPCs.
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-363%3A+Make+RPC+error+codes+and+messages+tagged+fields
> > >
> > > I'd be grateful for any feedback you may have.
> > >
> > > Kind regards,
> > >
> > > Tom
> > >
> >
>


Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-07-13 Thread Tom Bentley
+1 (non-binding), thanks for the KIP.

On Fri, Jul 10, 2020 at 7:02 PM Boyang Chen 
wrote:

> Hey Colin, thanks for the KIP. One question I have about AlterScramUsers
> RPC is whether we could consolidate the deletion list and alteration list,
> since in response we only have a single list of results. The further
> benefit is to reduce unintentional duplicate entries for both deletion and
> alteration, which makes the broker side handling logic easier. Another
> alternative is to add DeleteScramUsers RPC to align what we currently have
> with other user provided data such as delegation tokens (create, change,
> delete).
>
> For my own education, the salt will be automatically generated by the admin
> client when we send the SCRAM requests correct?
>
> Best,
> Boyang
>
> On Fri, Jul 10, 2020 at 8:10 AM Rajini Sivaram 
> wrote:
>
> > +1 (binding)
> >
> > Thanks for the KIP, Colin!
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Thu, Jul 9, 2020 at 8:49 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to call a vote for KIP-554: Add a broker-side SCRAM
> > configuration
> > > API.  The KIP is here: https://cwiki.apache.org/confluence/x/ihERCQ
> > >
> > > The previous discussion thread is here:
> > >
> > >
> >
> https://lists.apache.org/thread.html/r69bdc65bdf58f5576944a551ff249d759073ecbf5daa441cff680ab0%40%3Cdev.kafka.apache.org%3E
> > >
> > > best,
> > > Colin
> > >
> >
>


[jira] [Created] (KAFKA-10268) dynamic config like "--delete-config log.retention.ms" does not work

2020-07-13 Thread zhifeng.peng (Jira)
zhifeng.peng created KAFKA-10268:


 Summary: dynamic config like "--delete-config log.retention.ms" 
does not work
 Key: KAFKA-10268
 URL: https://issues.apache.org/jira/browse/KAFKA-10268
 Project: Kafka
  Issue Type: Bug
  Components: log, log cleaner
Affects Versions: 2.1.1
Reporter: zhifeng.peng
 Attachments: server.log.2020-07-13-14

After I set "log.retention.ms=301000" to clean the data, I used the cmd 
"bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
brokers --entity-default --alter --delete-config log.retention.ms" to reset it to 
the default. The static broker configuration log.retention.hours is 168h, and there 
is no topic-level configuration such as retention.ms.

But it did not actually take effect, although server.log prints the broker 
configuration like this:

 log.retention.check.interval.ms = 30
 log.retention.hours = 168
 log.retention.minutes = null
 log.retention.ms = null
 log.roll.hours = 168
 log.roll.jitter.hours = 0
 log.roll.jitter.ms = null
 log.roll.ms = null
 log.segment.bytes = 1073741824
 log.segment.delete.delay.ms = 6

 

Then, from the server.log, we can see that the retention time is still 301000ms and 
segments have been deleted.

[2020-07-13 14:30:00,958] INFO [Log partition=test_retention-2, 
dir=/data/kafka_logs-test] Found deletable segments with base offsets 
[5005329,6040360] due to retention time 301000ms breach (kafka.log.Log)
[2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005329, size 
1073741222] for deletion. (kafka.log.Log)
[2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040360, size 
1073728116] for deletion. (kafka.log.Log)
[2020-07-13 14:30:00,959] INFO [Log partition=test_retention-2, 
dir=/data/kafka_logs-test] Incrementing log start offset to 7075648 
(kafka.log.Log)
[2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
dir=/data/kafka_logs-test] Found deletable segments with base offsets 
[5005330,6040410] due to retention time 301000ms breach (kafka.log.Log)
[2020-07-13 14:30:00,960] INFO [Log partition=test_retention-0, 
dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 5005330, size 
1073732368] for deletion. (kafka.log.Log)
[2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
dir=/data/kafka_logs-test] Scheduling log segment [baseOffset 6040410, size 
1073735366] for deletion. (kafka.log.Log)
[2020-07-13 14:30:00,961] INFO [Log partition=test_retention-0, 
dir=/data/kafka_logs-test] Incrementing log start offset to 7075685 
(kafka.log.Log)
[2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
dir=/data/kafka_logs-test] Deleting segment 5005329 (kafka.log.Log)
[2020-07-13 14:31:00,959] INFO [Log partition=test_retention-2, 
dir=/data/kafka_logs-test] Deleting segment 6040360 (kafka.log.Log)
[2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
dir=/data/kafka_logs-test] Deleting segment 5005330 (kafka.log.Log)
[2020-07-13 14:31:00,961] INFO [Log partition=test_retention-0, 
dir=/data/kafka_logs-test] Deleting segment 6040410 (kafka.log.Log)
[2020-07-13 14:31:01,144] INFO Deleted log 
/data/kafka_logs-test/test_retention-2/06040360.log.deleted. 
(kafka.log.LogSegment)
[2020-07-13 14:31:01,144] INFO Deleted offset index 
/data/kafka_logs-test/test_retention-2/06040360.index.deleted. 
(kafka.log.LogSegment)
[2020-07-13 14:31:01,144] INFO Deleted time index 
/data/kafka_logs-test/test_retention-2/06040360.timeindex.deleted. 
(kafka.log.LogSegment)

 

Here are a few steps to reproduce it.

1. Set log.retention.ms=301000:

bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
brokers --entity-default --alter --add-config log.retention.ms=301000

2. Produce messages to the topic:

bin/kafka-producer-perf-test.sh --topic test_retention --num-records 1000 
--throughput -1 --producer-props bootstrap.servers=10.129.104.15:9092 
--record-size 1024

3. Reset log.retention.ms to the default:

bin/kafka-configs.sh --bootstrap-server 10.129.104.15:9092 --entity-type 
brokers --entity-default --alter --delete-config log.retention.ms
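
The reset in step 3 can also be issued programmatically; a minimal sketch of the
equivalent AdminClient call follows (the bootstrap address is taken from the report,
everything else is illustrative). It deletes the dynamic cluster-default override so
the static broker value applies again:

// Hedged sketch of the AdminClient equivalent of step 3.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DeleteRetentionOverride {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.129.104.15:9092");
        // Empty resource name addresses the cluster-wide broker default ("--entity-default").
        ConfigResource brokerDefaults = new ConfigResource(ConfigResource.Type.BROKER, "");
        AlterConfigOp deleteOp = new AlterConfigOp(
            new ConfigEntry("log.retention.ms", ""), AlterConfigOp.OpType.DELETE);
        try (Admin admin = Admin.create(props)) {
            admin.incrementalAlterConfigs(Collections.singletonMap(
                brokerDefaults, Collections.singletonList(deleteOp))).all().get();
        }
    }
}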

 

I have attached server.log. You can see the log from row 238 to row 731. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)