Build failed in Jenkins: kafka-trunk-jdk11 #1593

2020-06-23 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10126: Add a warning message for ConsumerPerformance (#8845)

[github] KAFKA-10169: swallow non-fatal KafkaException and don't abort


--
[...truncated 6.36 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 

Re: [DISCUSS] KIP-487: Automatic Topic Creation on Producer

2020-06-23 Thread Boyang Chen
Hey Justine and Jiamei,

I read the KIP and skimmed over the discussion. One thing I'm not fully
convinced of is why we need to deprecate the server-side auto topic
creation logic, which seems orthogonal to whether a client wants to
create the topic or not. Wouldn't it be more natural to assume that a
topic gets created only when both server and client agree on turning on
the switch?

Some clarifications would also be appreciated:

1. Could we include a link to KIP-464 and explain its relation to KIP-487?
It's very hard to read through the proposal when readers only have a
reference number to a KIP that is not summarized.

2. The KIP suggests, "In the producer, auto-creation of a topic will occur
through a specific request rather than through a side effect of requesting
metadata." Could we be specific about whether we are going to introduce a
separate RPC or just send another CreateTopicsRequest?

Boyang

On Wed, Jun 17, 2020 at 8:51 AM jiamei xie  wrote:

> Hi all,
> The KIP at
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-487%3A+Client-side+Automatic+Topic+Creation+on+Producer
> has not been updated for a long time, so I made some updates, which have
> been pushed to https://github.com/apache/kafka/pull/8831
>
> MetadataRequest has the method Builder(List<String> topics, boolean
> allowAutoTopicCreation), by which we can set whether to enable
> allowAutoTopicCreation from the producer.
> By default, allowAutoTopicCreation on the producer is true, and a topic
> can be auto-created only when allowAutoTopicCreation is true on both the
> broker and the producer.
>
> Besides, the test cases were changed:
> There are 4 combinations of brokerAutoTopicCreationEnable and
> producerAutoCreateTopicsPolicy, and we check whether the topic is created
> in each of the four cases:
>  if brokerAutoTopicCreationEnable and producerAutoCreateTopicsPolicy
> are both true:  assertTrue(topicCreated)
>  else: intercept[ExecutionException]
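The both-sides-must-agree rule and the four-case test matrix described above can be sketched in plain Java. This is an illustrative sketch only; the names (`topicAutoCreated`, `AutoCreatePolicy`) are hypothetical and are not Kafka's actual API.

```java
// Sketch of the decision described in the mail: a topic is auto-created
// only when BOTH the broker's auto.create.topics.enable and the
// producer's allow-auto-create flag are true. Names are illustrative.
public class AutoCreatePolicy {

    // Returns true only when broker and producer both permit auto-creation.
    static boolean topicAutoCreated(boolean brokerAutoCreate, boolean producerAutoCreate) {
        return brokerAutoCreate && producerAutoCreate;
    }

    public static void main(String[] args) {
        boolean[] flags = {false, true};
        // Enumerate the four broker/producer combinations, mirroring the
        // test matrix above: only (true, true) creates the topic; in the
        // other three cases the real test expects an ExecutionException.
        for (boolean broker : flags) {
            for (boolean producer : flags) {
                System.out.printf("broker=%b producer=%b -> topicCreated=%b%n",
                        broker, producer, topicAutoCreated(broker, producer));
            }
        }
    }
}
```

The truth table makes explicit why deprecating the broker-side config is unnecessary for this behavior: the conjunction already gives the broker a veto.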
>
> Looking forward to your feedback and comments. Thanks.
>
> Best wishes
> Jiamei Xie
>
> On 2019/08/12 15:50:22, Harsha Chintalapani  wrote:
> > On Fri, Aug 09, 2019 at 11:12 AM, Ismael Juma  wrote:
> >
> > > Hi all,
> > >
> > > A few points:
> > >
> > > 1. I think the way backwards compatibility is being used here is not
> > > correct. Any functionality that is only enabled if set via a config is
> > > backwards compatible. People may disagree with the functionality or the
> > > config, but it's not a backwards compatibility issue.
> > >
> >
> > We are treating the broker and producer as a single entity, run by the
> > same team/users. Allowing a newer producer to create topics on an older
> > broker when auto.create.topics.enable is set to false breaks the
> > server-side contract that this config has offered from the beginning.
> > IMO, it clearly isn't backward compatible. The user who sets
> > auto.create.topics.enable on the broker will not necessarily be the same
> > one who turns it on on the producer side.
> >
> >
> > > 2. It's an interesting question if auto topic creation via the producer
> > > should be a server driven choice or not. I can see the desire to have a
> > > server-driven default, but it seems like this is often application
> > > specific. Because the functionality is trivially available via
> AdminClient
> > > (released 2 years ago), it's not quite possible to control what
> > > applications do without the use of ACLs or policies today.
> > >
> > >
> > >
> > Producers & consumers are the majority of the clients in the Kafka
> > ecosystem. Just because the AdminClient shipped a while back doesn't
> > mean all users have adopted it. To this day, far more users are aware of
> > the Producer & Consumer APIs and run them in production than are aware
> > of the AdminClient.
> >
> >
> > > 3. Changing the create topics request in this way is highly
> unintuitive in
> > > my opinion and it relies on every client to pass the new field. For
> > > example, if librdkafka added auto create functionality by relying on
> their
> > > AdminClient, it would behave differently than what is proposed here.
> > > Forcing every client to implement this change when calling auto create
> from
> > > the producer specifically seems odd
> > >
> >
> > I am not sure why it's unintuitive; protocols change. We add to or
> > upgrade the existing protocols all the time.
> >
> >
> > Thanks,
> > Harsha
> >
> > .
> > >
> > > Ismael
> > >
> > > On Thu, Aug 8, 2019 at 11:51 AM Jun Rao  wrote:
> > >
> > > Hi, Justine,
> > >
> > > Thanks for the KIP. Overall, it seems to be a good improvement.
> > >
> > > However, I think Harsha's point seems reasonable. We had
> > > auto.create.topics.enable config on the broker to allow admins to
> disable
> > > topic creation from the producer/consumer clients before we had the
> > > security feature. The need for that config is reduced with the security
> > > feature, but may still be present since not all places have security
> > > enabled. It's true that a non-secured environment is vulnerable to some
> > > additional attacks, but 

Build failed in Jenkins: kafka-trunk-jdk8 #4664

2020-06-23 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10126: Add a warning message for ConsumerPerformance (#8845)


--
[...truncated 6.31 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED


[jira] [Resolved] (KAFKA-9678) Introduce bounded exponential backoff in clients

2020-06-23 Thread Sanjana Kaundinya (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanjana Kaundinya resolved KAFKA-9678.
--
Resolution: Duplicate

> Introduce bounded exponential backoff in clients
> 
>
> Key: KAFKA-9678
> URL: https://issues.apache.org/jira/browse/KAFKA-9678
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin, consumer, producer 
>Reporter: Guozhang Wang
>Assignee: Sanjana Kaundinya
>Priority: Major
>  Labels: needs-kip
>
> In all clients (consumer, producer, admin, and streams) we have retry 
> mechanisms with a fixed backoff to handle transient connection issues with 
> brokers. However, with a small backoff (many default to 100ms) we could send 
> tens of requests per second to the broker, and if the connection issue is 
> prolonged this means a huge overhead.
> We should consider introducing upper-bounded exponential backoff universally 
> in those clients to reduce the number of retry requests during periods of 
> connection partitioning.
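A minimal, self-contained sketch of the upper-bounded exponential backoff the ticket asks for. Class and method names here are hypothetical, not Kafka's actual client code; a production implementation would typically also add random jitter to avoid synchronized retry storms.

```java
// Illustrative bounded exponential backoff: the delay grows by a constant
// multiplier per retry attempt and is capped at a maximum, so a prolonged
// broker outage no longer triggers tens of requests per second.
public class BoundedBackoff {
    private final long initialMs;
    private final long maxMs;
    private final double multiplier;

    public BoundedBackoff(long initialMs, long maxMs, double multiplier) {
        this.initialMs = initialMs;
        this.maxMs = maxMs;
        this.multiplier = multiplier;
    }

    // Backoff before the given retry attempt (0-based), capped at maxMs.
    public long backoffMs(int attempt) {
        double backoff = initialMs * Math.pow(multiplier, attempt);
        return (long) Math.min(backoff, maxMs);
    }

    public static void main(String[] args) {
        // With initial=100ms, cap=1000ms, multiplier=2:
        // attempts 0..5 -> 100, 200, 400, 800, 1000, 1000 ms
        BoundedBackoff b = new BoundedBackoff(100, 1000, 2.0);
        for (int attempt = 0; attempt < 6; attempt++) {
            System.out.println("attempt " + attempt + " -> " + b.backoffMs(attempt) + " ms");
        }
    }
}
```

With the 100ms-fixed default, 10 seconds of outage means ~100 retries; with this schedule it is roughly a dozen, which is the overhead reduction the ticket describes.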



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #241

2020-06-23 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10126: Add a warning message for ConsumerPerformance (#8845)

[github] KAFKA-10169: swallow non-fatal KafkaException and don't abort


--
[...truncated 3.18 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: kafka-2.6-jdk8 #59

2020-06-23 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-10135: Extract Task#executeAndMaybeSwallow to be a general 
utility

[wangguoz] KAFKA-10169: swallow non-fatal KafkaException and don't abort


--
[...truncated 3.14 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Re: [DISCUSS] KIP-578: Add configuration to limit number of partitions

2020-06-23 Thread Ismael Juma
Thanks for the KIP. A couple of questions:

1. Have we considered reusing the existing PolicyViolation error code and
renaming it? This would make it simpler to handle on the client.

2. What version was used for the perf section? I think master should do
better than what's described there.

Ismael

On Wed, Apr 1, 2020, 8:28 AM Gokul Ramanan Subramanian 
wrote:

> Hi.
>
> I have opened KIP-578, intended to provide a mechanism to limit the number
> of partitions in a Kafka cluster. Kindly provide feedback on the KIP which
> you can find at
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-578%3A+Add+configuration+to+limit+number+of+partitions
>
> I want to specially thank Stanislav Kozlovski who helped in formulating
> some aspects of the KIP.
>
> Many thanks,
>
> Gokul.
>
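The guard rail KIP-578 proposes can be illustrated as a simple admission check at topic-creation or partition-expansion time. Everything below (class names, semantics) is a hypothetical sketch to convey the idea, not the KIP's actual design.

```java
// Hypothetical sketch of a cluster-wide partition limit: reject a
// create/expand request that would push the total partition count past a
// configured maximum. A real broker would return an error code to the
// client (the thread below discusses a RESOURCE_LIMIT_REACHED-style code)
// rather than a boolean.
public class PartitionLimitGuard {
    private final int maxPartitions;   // e.g. a broker-level config
    private int currentPartitions;

    public PartitionLimitGuard(int maxPartitions, int currentPartitions) {
        this.maxPartitions = maxPartitions;
        this.currentPartitions = currentPartitions;
    }

    // Admits the request and records it if it fits under the limit;
    // returns false (i.e. "reject") otherwise.
    public boolean tryAdd(int requestedPartitions) {
        if (currentPartitions + requestedPartitions > maxPartitions) {
            return false;
        }
        currentPartitions += requestedPartitions;
        return true;
    }

    public int currentPartitions() {
        return currentPartitions;
    }
}
```

The later replies in this thread probe exactly the edge cases such a check leaves open: whether internal topics are exempt, and how the limit interacts with rack-aware assignment when some brokers are "full".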


[jira] [Created] (KAFKA-10196) Add missing '--version' option to Kafka command producer-performance

2020-06-23 Thread jiamei xie (Jira)
jiamei xie created KAFKA-10196:
--

 Summary: Add missing '--version' option to Kafka command 
producer-performance
 Key: KAFKA-10196
 URL: https://issues.apache.org/jira/browse/KAFKA-10196
 Project: Kafka
  Issue Type: Bug
  Components: producer , tools
Reporter: jiamei xie
Assignee: jiamei xie


The '--version' option is missing from the Kafka producer-performance command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-578: Add configuration to limit number of partitions

2020-06-23 Thread Boyang Chen
Hi Gokul,

Thanks for the excellent KIP. I was recently driving the rollout of KIP-590
and proposed to fix the hole where the topic creation policy is bypassed
during Metadata-driven auto topic creation. As far as I understand, this
KIP will take care of that problem specifically with respect to the
partition limit, so I was wondering whether it makes sense to integrate
this logic with the topic creation policy and just enforce that entirely
in the Metadata RPC auto topic creation case?

Also, for the API exception part, could we state clearly that we are going
to bump the versions of the mentioned RPCs so that new clients can handle
the new error code RESOURCE_LIMIT_REACHED? Besides, I believe we are also
going to enforce this rule against older clients; could you describe the
corresponding error code returned to them in the API exception part as
well?

Let me know if my proposal makes sense, thank you!

Boyang

On Fri, Jun 12, 2020 at 2:57 AM Alexandre Dupriez <
alexandre.dupr...@gmail.com> wrote:

> Hi Gokul,
>
> Thank you for the answers and the data provided to illustrate the use case.
> A couple of additional questions.
>
> 904. If multi-tenancy is addressed in a future KIP, how smooth would
> the upgrade path be? For example, the introduced configuration
> parameters would still apply, right? And we would still maintain a
> first-come first-served pattern when topics are created?
>
> 905. The current built-in assignment tool prioritises balance between
> racks over brokers. In the version you propose, the limit on partition
> count would take precedence over attempts to balance between racks.
> Could it lead to a situation where it results in all partitions of a
> topic being assigned in a single data center, if brokers in other
> racks are "full"? Since it can potentially weaken the availability
> guarantees for that topic (and maybe durability and/or consumer
> performance with additional cross-rack traffic), how would we want to
> handle the case? It may be worth warning users that the resulting
> guarantees differ from what an "unlimited" assignment plan would offer
> in such cases. Also, let's keep in mind that some plans generated by
> existing rebalancing tools could become invalid (w.r.t. the configured
> limits).
>
> 906. The limits do not apply to internal topics. What about
> framework-level topics from other tools and extensions? (connect,
> streams, confluent metrics, tiered storage, etc.) Is blacklisting
> possible?
>
> 907. What happens if one of the dynamic limits is violated at update
> time? (Sorry if it's already explained in the KIP; I may have missed it.)
>
> Thanks,
> Alexandre
>
> Le dim. 3 mai 2020 à 20:20, Gokul Ramanan Subramanian
>  a écrit :
> >
> > Thanks Stanislav. Apologies about the long absence from this thread.
> >
> > I would prefer having per-user max partition limits in a separate KIP. I
> > don't see this as an MVP for this KIP. I will add this as an alternative
> > approach into the KIP.
> >
> > I was in two minds about whether or not to impose the partition limit
> > on internal topics as well. I can be convinced either way. On the one
> > hand, internal topics should be purely internal, i.e. users of a
> > cluster should not have to care about them. In this sense, the
> > partition limit should not apply to internal topics. On the other hand,
> > Kafka allows configuring internal topics by specifying their
> > replication factor etc., so they don't feel all that internal to me. In
> > any case, I'll modify the KIP to exclude internal topics.
> >
> > I'll also add to the KIP the alternative approach Tom suggested around
> > using topic policies to limit partitions, and explain why it does not
> help
> > to solve the problem that the KIP is trying to address (as I have done
> in a
> > previous correspondence on this thread).
> >
> > Cheers.
> >
> > On Fri, Apr 24, 2020 at 4:24 PM Stanislav Kozlovski <
> stanis...@confluent.io>
> > wrote:
> >
> > > Thanks for the KIP, Gokul!
> > >
> > > I like the overall premise - I think it's more user-friendly to have
> > > configs for this than to have users implement their own config policy,
> > > so unless it's very complex to implement, it seems worth it.
> > > I agree that having the topic policy on the CreatePartitions path makes
> > > sense as well.
> > >
> > > Multi-tenancy was a good point. It would be interesting to see how
> easy it
> > > is to extend the max partition limit to a per-user basis. Perhaps this
> can
> > > be done in a follow-up KIP, as a natural extension of the feature.
> > >
> > > I'm wondering whether there's a need to enforce this on internal
> topics,
> > > though. Given they're internal and critical to the function of Kafka, I
> > > believe we'd rather always ensure they're created, regardless if over
> some
> > > user-set limit. It brings up the question of forward compatibility -
> 
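As a rough illustration of the cluster-wide partition limit under discussion, here is a hedged sketch. The class name, the limit value, and the internal-topic exemption set are invented for illustration and are not the KIP's actual implementation; the exemption mirrors the decision above to exclude internal topics.

```java
import java.util.Set;

// Hypothetical sketch of a cluster-wide partition limit check, as discussed
// in this thread. Internal topics are exempted, matching the KIP's decision
// to exclude them from the limit.
public class PartitionLimitCheck {
    private final int maxPartitions;
    private final Set<String> internalTopics;

    PartitionLimitCheck(int maxPartitions, Set<String> internalTopics) {
        this.maxPartitions = maxPartitions;
        this.internalTopics = internalTopics;
    }

    /** Returns null if the request is allowed, otherwise a violation message. */
    String validate(String topic, int currentPartitions, int requestedPartitions) {
        if (internalTopics.contains(topic)) {
            return null; // internal topics are excluded from the limit
        }
        if (currentPartitions + requestedPartitions > maxPartitions) {
            return "Creating " + requestedPartitions
                    + " partitions would exceed the cluster limit of " + maxPartitions;
        }
        return null;
    }
}
```

Note that a check like this has to see cluster-wide state (the current partition count), which is part of why a per-broker topic policy alone does not solve the problem the KIP targets.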

Jenkins build is back to normal : kafka-trunk-jdk14 #240

2020-06-23 Thread Apache Jenkins Server
See 




Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Adam Bellemare
Just adding my congratulations, Boyang! Thank you for all your
contributions and effort!

On Tue, Jun 23, 2020 at 9:14 PM Kowshik Prakasam 
wrote:

> Congrats, Boyang! :)
>
>
> Cheers,
> Kowshik
>
> On Tue, Jun 23, 2020 at 8:43 AM Aparnesh Gaurav 
> wrote:
>
> > Congrats Boyang.
> >
> > On Tue, 23 Jun, 2020, 9:07 PM Vahid Hashemian, <
> vahid.hashem...@gmail.com>
> > wrote:
> >
> > > Congrats Boyang!
> > >
> > > --Vahid
> > >
> > > On Tue, Jun 23, 2020 at 6:41 AM Wang (Leonard) Ge 
> > > wrote:
> > >
> > > > Congrats Boyang! This is a great achievement.
> > > >
> > > > On Tue, Jun 23, 2020 at 10:33 AM Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Congrats Boyang! Well deserved
> > > > >
> > > > > On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley 
> > > wrote:
> > > > > >
> > > > > > Congratulations Boyang!
> > > > > >
> > > > > > On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna <
> br...@confluent.io>
> > > > > wrote:
> > > > > >
> > > > > > > Congrats, Boyang!
> > > > > > >
> > > > > > > Best,
> > > > > > > Bruno
> > > > > > >
> > > > > > > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > Congrats, Boyang!
> > > > > > > >
> > > > > > > > -Konstantine
> > > > > > > >
> > > > > > > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > > > > > > >  wrote:
> > > > > > > >
> > > > > > > > > Many Congratulations Boyang. Very well deserved.
> > > > > > > > >
> > > > > > > > > Regards,Navinder
> > > > > > > > >
> > > > > > > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> > > > > > > wang...@163.com>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > >  Congratulations, Boyang!
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Matt Wang
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On 06/23/2020 07:59, Boyang Chen wrote:
> > > > > > > > > Thanks a lot everyone, I really appreciate the recognition,
> > and
> > > > > hope to
> > > > > > > > > make more solid contributions to the community in the
> future!
> > > > > > > > >
> > > > > > > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax <
> > > > mj...@apache.org>
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Congrats! Well deserved!
> > > > > > > > >
> > > > > > > > > -Matthias
> > > > > > > > >
> > > > > > > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > > > > > > Congratulations Boyang! Well deserved.
> > > > > > > > >
> > > > > > > > > -Bill
> > > > > > > > >
> > > > > > > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe <
> > > cmcc...@apache.org
> > > > >
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Congratulations, Boyang!
> > > > > > > > >
> > > > > > > > > cheers,
> > > > > > > > > Colin
> > > > > > > > >
> > > > > > > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > > > > > > The PMC for Apache Kafka has invited Boyang Chen as a
> > committer
> > > > > and we
> > > > > > > > > are
> > > > > > > > > pleased to announce that he has accepted!
> > > > > > > > >
> > > > > > > > > Boyang became active in the Kafka community more than two
> > > years
> > > > > ago.
> > > > > > > > > Since then he has presented his experience operating with
> > Kafka
> > > > > Streams
> > > > > > > > > at
> > > > > > > > > Pinterest as well as several feature developments including
> > > > > rebalance
> > > > > > > > > improvements (KIP-345) and exactly-once scalability
> > > improvements
> > > > > > > > > (KIP-447)
> > > > > > > > > in various Kafka Summit and Kafka Meetups. More recently
> he's
> > > > also
> > > > > been
> > > > > > > > > participating in Kafka broker development including
> > > > post-Zookeeper
> > > > > > > > > controller design (KIP-500). Besides all the code
> > > contributions,
> > > > > Boyang
> > > > > > > > > has
> > > > > > > > > also helped review even more PRs and KIPs than his own.
> > > > > > > > >
> > > > > > > > > Thanks for all the contributions Boyang! And look forward
> to
> > > more
> > > > > > > > > collaborations with you on Apache Kafka.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > >
> > > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Leonard Ge
> > > > Software Engineer Intern - Confluent
> > > >
> > >
> > >
> > > --
> > >
> > > Thanks!
> > > --Vahid
> > >
> >
>


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Kowshik Prakasam
Congrats, Boyang! :)


Cheers,
Kowshik

On Tue, Jun 23, 2020 at 8:43 AM Aparnesh Gaurav 
wrote:

> Congrats Boyang.
>
> On Tue, 23 Jun, 2020, 9:07 PM Vahid Hashemian, 
> wrote:
>
> > Congrats Boyang!
> >
> > --Vahid
> >
> > On Tue, Jun 23, 2020 at 6:41 AM Wang (Leonard) Ge 
> > wrote:
> >
> > > Congrats Boyang! This is a great achievement.
> > >
> > > On Tue, Jun 23, 2020 at 10:33 AM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > wrote:
> > >
> > > > Congrats Boyang! Well deserved
> > > >
> > > > On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley 
> > wrote:
> > > > >
> > > > > Congratulations Boyang!
> > > > >
> > > > > On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna 
> > > > wrote:
> > > > >
> > > > > > Congrats, Boyang!
> > > > > >
> > > > > > Best,
> > > > > > Bruno
> > > > > >
> > > > > > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> > > > > >  wrote:
> > > > > > >
> > > > > > > Congrats, Boyang!
> > > > > > >
> > > > > > > -Konstantine
> > > > > > >
> > > > > > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > Many Congratulations Boyang. Very well deserved.
> > > > > > > >
> > > > > > > > Regards,Navinder
> > > > > > > >
> > > > > > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> > > > > > wang...@163.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > >  Congratulations, Boyang!
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > Best,
> > > > > > > > Matt Wang
> > > > > > > >
> > > > > > > >
> > > > > > > > On 06/23/2020 07:59, Boyang Chen wrote:
> > > > > > > > Thanks a lot everyone, I really appreciate the recognition,
> and
> > > > hope to
> > > > > > > > make more solid contributions to the community in the future!
> > > > > > > >
> > > > > > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax <
> > > mj...@apache.org>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > Congrats! Well deserved!
> > > > > > > >
> > > > > > > > -Matthias
> > > > > > > >
> > > > > > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > > > > > Congratulations Boyang! Well deserved.
> > > > > > > >
> > > > > > > > -Bill
> > > > > > > >
> > > > > > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe <
> > cmcc...@apache.org
> > > >
> > > > > > wrote:
> > > > > > > >
> > > > > > > > Congratulations, Boyang!
> > > > > > > >
> > > > > > > > cheers,
> > > > > > > > Colin
> > > > > > > >
> > > > > > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > > > > > The PMC for Apache Kafka has invited Boyang Chen as a
> committer
> > > > and we
> > > > > > > > are
> > > > > > > > pleased to announce that he has accepted!
> > > > > > > >
> > > > > > > > Boyang became active in the Kafka community more than two
> > years
> > > > ago.
> > > > > > > > Since then he has presented his experience operating with
> Kafka
> > > > Streams
> > > > > > > > at
> > > > > > > > Pinterest as well as several feature developments including
> > > > rebalance
> > > > > > > > improvements (KIP-345) and exactly-once scalability
> > improvements
> > > > > > > > (KIP-447)
> > > > > > > > in various Kafka Summit and Kafka Meetups. More recently he's
> > > also
> > > > been
> > > > > > > > participating in Kafka broker development including
> > > post-Zookeeper
> > > > > > > > controller design (KIP-500). Besides all the code
> > contributions,
> > > > Boyang
> > > > > > > > has
> > > > > > > > also helped review even more PRs and KIPs than his own.
> > > > > > > >
> > > > > > > > Thanks for all the contributions Boyang! And look forward to
> > more
> > > > > > > > collaborations with you on Apache Kafka.
> > > > > > > >
> > > > > > > >
> > > > > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > >
> > > > > >
> > > >
> > >
> > >
> > > --
> > > Leonard Ge
> > > Software Engineer Intern - Confluent
> > >
> >
> >
> > --
> >
> > Thanks!
> > --Vahid
> >
>


Re: [VOTE] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-06-23 Thread Boyang Chen
Thanks for the clarification, Colin and Ismael. Personally, I also feel
Option A is the better way to prioritize fixing the gap. Just to be clear, the
proposed solution would be:

1. Bump the Metadata RPC version to return POLICY_VIOLATION. At the
application level, we should replace the error message with the actual failure
reason such as "violation of topic creation policy when attempting to auto
create internal topic through MetadataRequest."

2. For older Metadata RPC, return AUTHORIZATION_FAILED to fail fast.

I will also address the other points we discussed in the KIP; let me know if
you have further questions.

Thanks,
Boyang
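The version-gated error mapping proposed above could look roughly like the following sketch. The class, the cut-over version constant, and the error enum are illustrative stand-ins, not Kafka's actual internals; the real version number would be fixed by the KIP.

```java
// Hedged sketch of the proposal: new Metadata RPC versions surface
// POLICY_VIOLATION (with the real failure reason in the message), while older
// versions fall back to AUTHORIZATION_FAILED so that old clients still fail
// fast. FIRST_VERSION_WITH_POLICY_VIOLATION is a made-up placeholder.
public class MetadataErrorMapper {
    static final short FIRST_VERSION_WITH_POLICY_VIOLATION = 10;

    enum ErrorCode { POLICY_VIOLATION, AUTHORIZATION_FAILED }

    static ErrorCode errorForAutoCreateFailure(short requestVersion) {
        return requestVersion >= FIRST_VERSION_WITH_POLICY_VIOLATION
                ? ErrorCode.POLICY_VIOLATION       // new clients: precise error + message
                : ErrorCode.AUTHORIZATION_FAILED;  // old clients: fail fast on a fatal error
    }
}
```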

On Tue, Jun 23, 2020 at 10:41 AM Ismael Juma  wrote:

> Option A is basically what I was thinking. But with a slight adjustment:
>
> New versions of MetadataResponse return POLICY_VIOLATION, old versions
> return AUTHORIZATION_FAILED. The latter works correctly with old Java
> clients (i.e. the client fails fast and propagates the error), I've tested
> it. Adjust new clients to treat POLICY_VIOLATION like AUTHORIZATION_FAILED,
> but propagate the custom error message.
>
> Ismael
>
> On Mon, Jun 22, 2020 at 11:00 PM Colin McCabe  wrote:
>
> > > > > On Fri, Jun 19, 2020 at 3:18 PM Ismael Juma 
> > wrote:
> > > > >
> > > > > > Hi Colin,
> > > > > >
> > > > > > The KIP states in the Compatibility section (not Future work):
> > > > > >
> > > > > > "To support the proxy of requests, we need to build a channel for
> > > > > > brokers to talk directly to the controller. This part of the
> design
> > > > > > is internal change only and won’t block the KIP progress."
> > > > > >
> > > > > > I am clarifying that this is not internal only due to the config.
> > If we
> > > > > > say that this KIP depends on another KIP before we can merge
> > > > > > it, that's fine although it feels a bit unnecessary.
> > > > > >
> >
> > Hi Ismael,
> >
> > I didn't realize there was still a reference to the separate controller
> > channel in the "Compatibility, Deprecation, and Migration Plan"
> section.  I
> > agree that it doesn't really belong there.  Given that this is creating
> > confusion, I would suggest that we just drop this from the KIP entirely.
> > It really is orthogonal to what this KIP is about-- we don't need a
> > separate channel to implement redirection.
> >
> > Boyang wrote:
> >
> > >
> > > We are only opening the doors for specific internal topics (offsets,
> txn
> > > log), which I assume the client should have no possibility to mutate
> the
> > > topic policy?
> > >
> >
> > Hi Boyang,
> >
> > I think you and Ismael are talking about different scenarios.  You are
> > describing the scenario where the broker is auto-creating the transaction
> > log topic or consumer offset topic.  This scenario indeed should not
> happen
> > in a properly-configured cluster.  However, Ismael is describing a
> scenario
> > where the client is auto-creating some arbitrary non-internal topic just
> by
> > sending a metadata request.
> >
> > As far as I can see, there are two solutions here:
> >
> > A. Close the hole in CreateTopicsPolicy immediately.  In new versions,
> > allow MetadataResponse to return AUTHORIZATION_FAILED if we tried to
> > auto-create a topic and failed.  Find some other error code to return for
> > existing versions.
> >
> > B. Keep the hole in CreateTopicsPolicy and add some configuration to
> allow
> > admins to gradually migrate to closing it.  In practice, this probably
> > means a configuration toggle that enables direct ZK access, that starts
> off
> > as enabled.  Then we can eventually default it to false and then remove
> it
> > entirely over time.
> >
> > best,
> > Colin
> >
>


Jenkins build is back to normal : kafka-trunk-jdk11 #1591

2020-06-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #4662

2020-06-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk14 #239

2020-06-23 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: correct the doc of transaction.timeout.ms (#8901)


--
[...truncated 3.17 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED


Re: [VOTE] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-06-23 Thread Ismael Juma
Option A is basically what I was thinking. But with a slight adjustment:

New versions of MetadataResponse return POLICY_VIOLATION, old versions
return AUTHORIZATION_FAILED. The latter works correctly with old Java
clients (i.e. the client fails fast and propagates the error), I've tested
it. Adjust new clients to treat POLICY_VIOLATION like AUTHORIZATION_FAILED,
but propagate the custom error message.

Ismael
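On the client side, the adjustment Ismael describes could be sketched as below. The enum and helper names are assumptions for illustration, not the actual Kafka client code: POLICY_VIOLATION is treated as fatal exactly like AUTHORIZATION_FAILED, but the broker-supplied message is preserved.

```java
// Sketch: both errors propagate to the caller instead of being retried, and
// the custom failure reason is kept when the broker supplied one.
public class AutoCreateErrorHandling {
    enum Error { POLICY_VIOLATION, AUTHORIZATION_FAILED, LEADER_NOT_AVAILABLE }

    /** Fatal errors fail fast; retriable ones (like LEADER_NOT_AVAILABLE) do not. */
    static boolean isFatal(Error error) {
        return error == Error.POLICY_VIOLATION || error == Error.AUTHORIZATION_FAILED;
    }

    /** Prefer the broker's custom failure reason when one is present. */
    static String messageFor(Error error, String brokerMessage) {
        return brokerMessage != null ? brokerMessage : error.name();
    }
}
```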

On Mon, Jun 22, 2020 at 11:00 PM Colin McCabe  wrote:

> > > > On Fri, Jun 19, 2020 at 3:18 PM Ismael Juma 
> wrote:
> > > >
> > > > > Hi Colin,
> > > > >
> > > > > The KIP states in the Compatibility section (not Future work):
> > > > >
> > > > > "To support the proxy of requests, we need to build a channel for
> > > > > brokers to talk directly to the controller. This part of the design
> > > > > is internal change only and won’t block the KIP progress."
> > > > >
> > > > > I am clarifying that this is not internal only due to the config.
> If we
> > > > > say that this KIP depends on another KIP before we can merge
> > > > > it, that's fine although it feels a bit unnecessary.
> > > > >
>
> Hi Ismael,
>
> I didn't realize there was still a reference to the separate controller
> channel in the "Compatibility, Deprecation, and Migration Plan" section.  I
> agree that it doesn't really belong there.  Given that this is creating
> confusion, I would suggest that we just drop this from the KIP entirely.
> It really is orthogonal to what this KIP is about-- we don't need a
> separate channel to implement redirection.
>
> Boyang wrote:
>
> >
> > We are only opening the doors for specific internal topics (offsets, txn
> > log), which I assume the client should have no possibility to mutate the
> > topic policy?
> >
>
> Hi Boyang,
>
> I think you and Ismael are talking about different scenarios.  You are
> describing the scenario where the broker is auto-creating the transaction
> log topic or consumer offset topic.  This scenario indeed should not happen
> in a properly-configured cluster.  However, Ismael is describing a scenario
> where the client is auto-creating some arbitrary non-internal topic just by
> sending a metadata request.
>
> As far as I can see, there are two solutions here:
>
> A. Close the hole in CreateTopicsPolicy immediately.  In new versions,
> allow MetadataResponse to return AUTHORIZATION_FAILED if we tried to
> auto-create a topic and failed.  Find some other error code to return for
> existing versions.
>
> B. Keep the hole in CreateTopicsPolicy and add some configuration to allow
> admins to gradually migrate to closing it.  In practice, this probably
> means a configuration toggle that enables direct ZK access, that starts off
> as enabled.  Then we can eventually default it to false and then remove it
> entirely over time.
>
> best,
> Colin
>


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Aparnesh Gaurav
Congrats Boyang.

On Tue, 23 Jun, 2020, 9:07 PM Vahid Hashemian, 
wrote:

> Congrats Boyang!
>
> --Vahid
>
> On Tue, Jun 23, 2020 at 6:41 AM Wang (Leonard) Ge 
> wrote:
>
> > Congrats Boyang! This is a great achievement.
> >
> > On Tue, Jun 23, 2020 at 10:33 AM Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> > > Congrats Boyang! Well deserved
> > >
> > > On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley 
> wrote:
> > > >
> > > > Congratulations Boyang!
> > > >
> > > > On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna 
> > > wrote:
> > > >
> > > > > Congrats, Boyang!
> > > > >
> > > > > Best,
> > > > > Bruno
> > > > >
> > > > > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> > > > >  wrote:
> > > > > >
> > > > > > Congrats, Boyang!
> > > > > >
> > > > > > -Konstantine
> > > > > >
> > > > > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > > > > >  wrote:
> > > > > >
> > > > > > > Many Congratulations Boyang. Very well deserved.
> > > > > > >
> > > > > > > Regards,Navinder
> > > > > > >
> > > > > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> > > > > wang...@163.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > >  Congratulations, Boyang!
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > > Best,
> > > > > > > Matt Wang
> > > > > > >
> > > > > > >
> > > > > > > On 06/23/2020 07:59, Boyang Chen wrote:
> > > > > > > Thanks a lot everyone, I really appreciate the recognition, and
> > > hope to
> > > > > > > make more solid contributions to the community in the future!
> > > > > > >
> > > > > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax <
> > mj...@apache.org>
> > > > > wrote:
> > > > > > >
> > > > > > > Congrats! Well deserved!
> > > > > > >
> > > > > > > -Matthias
> > > > > > >
> > > > > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > > > > Congratulations Boyang! Well deserved.
> > > > > > >
> > > > > > > -Bill
> > > > > > >
> > > > > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe <
> cmcc...@apache.org
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > Congratulations, Boyang!
> > > > > > >
> > > > > > > cheers,
> > > > > > > Colin
> > > > > > >
> > > > > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > > > > The PMC for Apache Kafka has invited Boyang Chen as a committer
> > > and we
> > > > > > > are
> > > > > > > pleased to announce that he has accepted!
> > > > > > >
> > > > > > > Boyang became active in the Kafka community more than two
> years
> > > ago.
> > > > > > > Since then he has presented his experience operating with Kafka
> > > Streams
> > > > > > > at
> > > Pinterest as well as several feature developments including
> > > rebalance
> > > > > > > improvements (KIP-345) and exactly-once scalability
> improvements
> > > > > > > (KIP-447)
> > > > > > > in various Kafka Summit and Kafka Meetups. More recently he's
> > also
> > > been
> > > > > > > participating in Kafka broker development including
> > post-Zookeeper
> > > > > > > controller design (KIP-500). Besides all the code
> contributions,
> > > Boyang
> > > > > > > has
> > > > > > > also helped review even more PRs and KIPs than his own.
> > > > > > >
> > > > > > > Thanks for all the contributions Boyang! And look forward to
> more
> > > > > > > collaborations with you on Apache Kafka.
> > > > > > >
> > > > > > >
> > > > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > >
> > > > >
> > >
> >
> >
> > --
> > Leonard Ge
> > Software Engineer Intern - Confluent
> >
>
>
> --
>
> Thanks!
> --Vahid
>


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Vahid Hashemian
Congrats Boyang!

--Vahid

On Tue, Jun 23, 2020 at 6:41 AM Wang (Leonard) Ge  wrote:

> Congrats Boyang! This is a great achievement.
>
> On Tue, Jun 23, 2020 at 10:33 AM Mickael Maison 
> wrote:
>
> > Congrats Boyang! Well deserved
> >
> > On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley  wrote:
> > >
> > > Congratulations Boyang!
> > >
> > > On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna 
> > wrote:
> > >
> > > > Congrats, Boyang!
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> > > >  wrote:
> > > > >
> > > > > Congrats, Boyang!
> > > > >
> > > > > -Konstantine
> > > > >
> > > > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > > > >  wrote:
> > > > >
> > > > > > Many Congratulations Boyang. Very well deserved.
> > > > > >
> > > > > > Regards,Navinder
> > > > > >
> > > > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> > > > wang...@163.com>
> > > > > > wrote:
> > > > > >
> > > > > >  Congratulations, Boyang!
> > > > > >
> > > > > >
> > > > > > --
> > > > > >
> > > > > > Best,
> > > > > > Matt Wang
> > > > > >
> > > > > >
> > > > > > On 06/23/2020 07:59, Boyang Chen wrote:
> > > > > > Thanks a lot everyone, I really appreciate the recognition, and
> > hope to
> > > > > > make more solid contributions to the community in the future!
> > > > > >
> > > > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax <
> mj...@apache.org>
> > > > wrote:
> > > > > >
> > > > > > Congrats! Well deserved!
> > > > > >
> > > > > > -Matthias
> > > > > >
> > > > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > > > Congratulations Boyang! Well deserved.
> > > > > >
> > > > > > -Bill
> > > > > >
> > > > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe  >
> > > > wrote:
> > > > > >
> > > > > > Congratulations, Boyang!
> > > > > >
> > > > > > cheers,
> > > > > > Colin
> > > > > >
> > > > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > > > The PMC for Apache Kafka has invited Boyang Chen as a committer
> > and we
> > > > > > are
> > > > > > pleased to announce that he has accepted!
> > > > > >
> > > > > > Boyang became active in the Kafka community more than two years
> > ago.
> > > > > > Since then he has presented his experience operating with Kafka
> > Streams
> > > > > > at
> > > > > > Pinterest as well as several feature developments including
> > rebalance
> > > > > > improvements (KIP-345) and exactly-once scalability improvements
> > > > > > (KIP-447)
> > > > > > in various Kafka Summit and Kafka Meetups. More recently he's
> also
> > been
> > > > > > participating in Kafka broker development including
> post-Zookeeper
> > > > > > controller design (KIP-500). Besides all the code contributions,
> > Boyang
> > > > > > has
> > > > > > also helped review even more PRs and KIPs than his own.
> > > > > >
> > > > > > Thanks for all the contributions Boyang! And look forward to more
> > > > > > collaborations with you on Apache Kafka.
> > > > > >
> > > > > >
> > > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > >
> > > >
> >
>
>
> --
> Leonard Ge
> Software Engineer Intern - Confluent
>


-- 

Thanks!
--Vahid


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Wang (Leonard) Ge
Congrats Boyang! This is a great achievement.

On Tue, Jun 23, 2020 at 10:33 AM Mickael Maison 
wrote:

> Congrats Boyang! Well deserved
>
> On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley  wrote:
> >
> > Congratulations Boyang!
> >
> > On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna 
> wrote:
> >
> > > Congrats, Boyang!
> > >
> > > Best,
> > > Bruno
> > >
> > > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> > >  wrote:
> > > >
> > > > Congrats, Boyang!
> > > >
> > > > -Konstantine
> > > >
> > > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > > >  wrote:
> > > >
> > > > > Many Congratulations Boyang. Very well deserved.
> > > > >
> > > > > Regards,Navinder
> > > > >
> > > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> > > wang...@163.com>
> > > > > wrote:
> > > > >
> > > > >  Congratulations, Boyang!
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Best,
> > > > > Matt Wang
> > > > >
> > > > >
> > > > > On 06/23/2020 07:59, Boyang Chen wrote:
> > > > > Thanks a lot everyone, I really appreciate the recognition, and
> hope to
> > > > > make more solid contributions to the community in the future!
> > > > >
> > > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax 
> > > wrote:
> > > > >
> > > > > Congrats! Well deserved!
> > > > >
> > > > > -Matthias
> > > > >
> > > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > > Congratulations Boyang! Well deserved.
> > > > >
> > > > > -Bill
> > > > >
> > > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe 
> > > wrote:
> > > > >
> > > > > Congratulations, Boyang!
> > > > >
> > > > > cheers,
> > > > > Colin
> > > > >
> > > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > > The PMC for Apache Kafka has invited Boyang Chen as a committer
> and we
> > > > > are
> > > > > pleased to announce that he has accepted!
> > > > >
> > > > > Boyang became active in the Kafka community more than two years
> ago.
> > > > > Since then he has presented his experience operating with Kafka
> Streams
> > > > > at
> > > > > Pinterest as well as several feature developments including
> rebalance
> > > > > improvements (KIP-345) and exactly-once scalability improvements
> > > > > (KIP-447)
> > > > > in various Kafka Summit and Kafka Meetups. More recently he's also
> been
> > > > > participating in Kafka broker development including post-Zookeeper
> > > > > controller design (KIP-500). Besides all the code contributions,
> Boyang
> > > > > has
> > > > > also helped review even more PRs and KIPs than his own.
> > > > >
> > > > > Thanks for all the contributions Boyang! And look forward to more
> > > > > collaborations with you on Apache Kafka.
> > > > >
> > > > >
> > > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > >
> > >
>


-- 
Leonard Ge
Software Engineer Intern - Confluent


[jira] [Created] (KAFKA-10195) Move offset management codes from ConsumerCoordinator to a new class

2020-06-23 Thread dengziming (Jira)
dengziming created KAFKA-10195:
--

 Summary: Move offset management codes from ConsumerCoordinator to 
a new class
 Key: KAFKA-10195
 URL: https://issues.apache.org/jira/browse/KAFKA-10195
 Project: Kafka
  Issue Type: Improvement
  Components: clients, consumer
Reporter: dengziming
Assignee: dengziming


ConsumerCoordinator has 2 main functions:
 # partitions assignment
 # offset management

We are adding some new features to it; for example, KAFKA-9657 adds a field 
`throwOnFetchStableOffsetsUnsupported` which is only used in offset management.

The two functions barely interact with each other, so it is not ideal to keep 
this code in a single class; we should try to move the offset management code 
to a new class.

For example, the fields below are only used in offset management:

```
// can be moved to another class directly
private final OffsetCommitCallback defaultOffsetCommitCallback;
private final ConsumerInterceptors interceptors;
private final AtomicInteger pendingAsyncCommits;
private final ConcurrentLinkedQueue completedOffsetCommits;
private AtomicBoolean asyncCommitFenced;
private final boolean throwOnFetchStableOffsetsUnsupported;
private PendingCommittedOffsetRequest pendingCommittedOffsetRequest = null;

// used in `onJoinComplete` but can also be moved out
private final boolean autoCommitEnabled;
private final int autoCommitIntervalMs;
private Timer nextAutoCommitTimer;
```

So we can create a new class, `OffsetManageCoordinator`, and move the related 
code into it.
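As a rough illustration of the proposed split (the class, method, and element 
types below are simplified placeholders, not Kafka's actual classes), the 
offset-commit state listed above could be grouped into its own class that 
ConsumerCoordinator delegates to:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: gathers the offset-management fields into one
// class, so the coordinator would only keep partition-assignment logic.
class OffsetCommitState {
    private final AtomicInteger pendingAsyncCommits = new AtomicInteger(0);
    // Placeholder element type; the real queue holds commit completions.
    private final ConcurrentLinkedQueue<Runnable> completedOffsetCommits =
            new ConcurrentLinkedQueue<>();
    private final AtomicBoolean asyncCommitFenced = new AtomicBoolean(false);
    private final boolean autoCommitEnabled;

    OffsetCommitState(boolean autoCommitEnabled) {
        this.autoCommitEnabled = autoCommitEnabled;
    }

    // Track an in-flight async commit; returns the new in-flight count.
    int beginAsyncCommit() {
        return pendingAsyncCommits.incrementAndGet();
    }

    // Record a finished commit whose callback runs later on the caller thread.
    void completeAsyncCommit(Runnable callback) {
        pendingAsyncCommits.decrementAndGet();
        completedOffsetCommits.add(callback);
    }

    // Drain queued completions, mirroring how the coordinator invokes
    // commit callbacks on the application thread today.
    int invokeCompletedCommits() {
        int invoked = 0;
        Runnable cb;
        while ((cb = completedOffsetCommits.poll()) != null) {
            cb.run();
            invoked++;
        }
        return invoked;
    }

    boolean autoCommitEnabled() {
        return autoCommitEnabled;
    }
}
```

With such a class, ConsumerCoordinator would hold one `OffsetCommitState` field 
instead of the seven or so fields quoted above.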

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Mickael Maison
Congrats Boyang! Well deserved

On Tue, Jun 23, 2020 at 8:20 AM Tom Bentley  wrote:
>
> Congratulations Boyang!
>
> On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna  wrote:
>
> > Congrats, Boyang!
> >
> > Best,
> > Bruno
> >
> > On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
> >  wrote:
> > >
> > > Congrats, Boyang!
> > >
> > > -Konstantine
> > >
> > > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> > >  wrote:
> > >
> > > > Many Congratulations Boyang. Very well deserved.
> > > >
> > > > Regards,Navinder
> > > >
> > > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> > wang...@163.com>
> > > > wrote:
> > > >
> > > >  Congratulations, Boyang!
> > > >
> > > >
> > > > --
> > > >
> > > > Best,
> > > > Matt Wang
> > > >
> > > >
> > > > On 06/23/2020 07:59,Boyang Chen wrote:
> > > > Thanks a lot everyone, I really appreciate the recognition, and hope to
> > > > make more solid contributions to the community in the future!
> > > >
> > > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax 
> > wrote:
> > > >
> > > > Congrats! Well deserved!
> > > >
> > > > -Matthias
> > > >
> > > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > > Congratulations Boyang! Well deserved.
> > > >
> > > > -Bill
> > > >
> > > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe 
> > wrote:
> > > >
> > > > Congratulations, Boyang!
> > > >
> > > > cheers,
> > > > Colin
> > > >
> > > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > > The PMC for Apache Kafka has invited Boyang Chen as a committer and we
> > > > are
> > > > pleased to announce that he has accepted!
> > > >
> > > > Boyang has been active in the Kafka community for more than two years.
> > > > Since then he has presented his experience operating Kafka Streams at
> > > > Pinterest, as well as several feature developments including rebalance
> > > > improvements (KIP-345) and exactly-once scalability improvements
> > > > (KIP-447), at various Kafka Summits and Kafka Meetups. More recently
> > > > he has also been participating in Kafka broker development, including
> > > > the post-ZooKeeper controller design (KIP-500). Besides all the code
> > > > contributions, Boyang has also helped review even more PRs and KIPs
> > > > than his own.
> > > >
> > > > Thanks for all the contributions, Boyang! We look forward to more
> > > > collaborations with you on Apache Kafka.
> > > >
> > > >
> > > > -- Guozhang, on behalf of the Apache Kafka PMC
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> >
> >


Re: [DISCUSS] KIP-629: Use racially neutral terms in our codebase

2020-06-23 Thread Bruno Cadonna
Hi Xavier,

Thank you very much for starting this initiative!
Not only for the changes to the code base but also for showing me
where and how we can use more appropriate terms in general.

Best,
Bruno

On Tue, Jun 23, 2020 at 4:17 AM John Roesler  wrote:
>
> Hi Xavier,
>
> I think your approach made a lot of sense; I definitely didn’t mean to 
> criticize. Thanks for the update! The new names look good to me.
>
> -John
>
> On Mon, Jun 22, 2020, at 18:50, Matthias J. Sax wrote:
> > Great initiative!
> >
> > I liked the proposed names, too.
> >
> >
> > -Matthias
> >
> >
> > On 6/22/20 4:48 PM, Guozhang Wang wrote:
> > > Xavier, thanks for the KIP! The proposed names make sense to me.
> > >
> > > Guozhang
> > >
> > > On Mon, Jun 22, 2020 at 4:24 PM Xavier Léauté  wrote:
> > >
> > >> Please check the list for updated config / argument names.
> > >>
> > >> I also added a proposal to replace the term "blackout" with "backoff",
> > >> which is used internally in the replication protocol.
> > >>
> > >> On Mon, Jun 22, 2020 at 3:10 PM Xavier Léauté  wrote:
> > >>
> > >>> I agree we could improve on some of the config names. My thinking here 
> > >>> is
> > >>> that unless we had some precedent for a different name, it seemed
> > >>> relatively straightforward to follow the approach other open source
> > >>> projects have taken. It also makes migration for users easy if we are
> > >>> consistent in the renaming, so we should find terms we can use across 
> > >>> the
> > >>> board.
> > >>>
> > >>> A cursory search indicates we already use include/exclude for topic
> > >>> creation config in Connect, so I think it makes sense to align on that.
> > >>> I'll update the KIP accordingly.
> > >>>
> > >>> On Sat, Jun 20, 2020 at 11:37 AM Ryanne Dolan 
> > >>> wrote:
> > >>>
> >  Xavier, I'm dismayed to see some of these instances are my fault. Fully
> >  support your plan.
> > 
> >  John, I had the same thought -- "list" is extraneous here. In the case
> > >> of
> >  "topics.whitelist" we already have precedent to just use "topics".
> > 
> >  Ryanne
> > 
> >  On Sat, Jun 20, 2020, 12:43 PM John Roesler 
> > >> wrote:
> > 
> > > Thanks Xavier!
> > >
> > > I’m +1 on this idea, and I’m glad this is the extent of what needs to
> > >>> be
> > > changed. I recall when I joined the project being pleased at the lack
> > >>> of
> > > common offensive terminology. I hadn’t considered
> > >> whitelist/blacklist,
> >  but
> > > I can see the argument.
> > >
> > > Allowlist/blocklist are kind of a mouthful, though.
> > >
> > > What do you think of just “allow” and “deny” instead? This is common
> > > terminology in ACLs for example, and it doesn’t really seem necessary
> > >>> to
> > > say “list” in the config name.
> > >
> > > Alternatively, looking at the actual configs, it seems like
> > >> “include”,
> > > “include-only” (or “only”) and “exclude” might be more appropriate in
> > > context.
> > >
> > > I hope this doesn’t kick off a round of bikeshedding. I’m really fine
> > > either way; I doubt it matters much. I just wanted to see if we can
> > >>> name
> > > these configs without making up new multi-syllable words.
> > >
> > > Thanks for bringing it up!
> > > -John
> > >
> > > On Sat, Jun 20, 2020, at 09:31, Ron Dagostino wrote:
> > >> Yes.  Thank you.
> > >>
> > >>> On Jun 20, 2020, at 12:20 AM, Gwen Shapira 
> >  wrote:
> > >>>
> > >>> Thank you so much for this initiative. Small change, but it makes
> > >>> our
> > >>> community more inclusive.
> > >>>
> > >>> Gwen
> > >>>
> >  On Fri, Jun 19, 2020, 6:02 PM Xavier Léauté 
> >  wrote:
> > 
> >  Hi Everyone,
> > 
> >  There are a number of places in our codebase that use racially
> >  charged
> >  terms. I am proposing we update them to use more neutral terms.
> > 
> >  The KIP lists the ones I have found and proposes alternatives.
> > >> If
> >  you
> > > see
> >  any I missed or did not consider, please reply and I'll add
> > >> them.
> > 
> > 
> > 
> > >
> > 
> > >>>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase
> > 
> >  Thank you,
> >  Xavier
> > 
> > >>
> > >
> > 
> > >>>
> > >>
> > >
> > >
> >
> >
> > Attachments:
> > * signature.asc
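For renames like whitelist → include discussed in this thread, projects 
typically keep the old key working during a deprecation window. The helper 
below is a minimal, hypothetical sketch of that fallback (not the KIP's actual 
mechanism, and not Kafka's config machinery): it prefers the new key and falls 
back to the deprecated one.

```java
import java.util.Map;

// Hypothetical helper: resolve a renamed config, preferring the new key
// (e.g. "topics.include") and falling back to the deprecated one
// (e.g. "topics.whitelist") so existing deployments keep working.
final class ConfigRename {
    static String resolve(Map<String, String> props,
                          String newKey, String deprecatedKey) {
        String value = props.get(newKey);
        if (value != null) {
            return value;
        }
        String legacy = props.get(deprecatedKey);
        if (legacy != null) {
            // A real implementation would log a deprecation warning here.
            return legacy;
        }
        return null;
    }

    // Small demo: one config using only the new key, one using only the
    // deprecated key; both resolve to a value.
    static String[] demo() {
        Map<String, String> onlyNew = Map.of("topics.include", "a");
        Map<String, String> onlyOld = Map.of("topics.whitelist", "b");
        return new String[] {
            resolve(onlyNew, "topics.include", "topics.whitelist"),
            resolve(onlyOld, "topics.include", "topics.whitelist")
        };
    }
}
```

Being consistent about this pattern across components is what makes the 
migration easy for users, as noted above.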


Build failed in Jenkins: kafka-trunk-jdk11 #1590

2020-06-23 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10135: Extract Task#executeAndMaybeSwallow to be a general 
utility


--
[...truncated 1.90 MB...]

kafka.api.SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig STARTED

kafka.api.GroupAuthorizerIntegrationTest > testUnauthorizedProduceAndConsume 
PASSED

kafka.api.GroupAuthorizerIntegrationTest > testAuthorizedProduceAndConsume 
STARTED

kafka.api.GroupAuthorizerIntegrationTest > testAuthorizedProduceAndConsume 
PASSED

kafka.api.DescribeAuthorizedOperationsTest > testClusterAuthorizedOperations 
STARTED

kafka.api.SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig PASSED

kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls STARTED

kafka.api.DescribeAuthorizedOperationsTest > testClusterAuthorizedOperations 
PASSED

kafka.api.DescribeAuthorizedOperationsTest > testTopicAuthorizedOperations 
STARTED

kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclAuthorizationDenied STARTED

kafka.api.DescribeAuthorizedOperationsTest > testTopicAuthorizedOperations 
PASSED

kafka.api.DescribeAuthorizedOperationsTest > 
testConsumerGroupAuthorizedOperations STARTED

kafka.api.DescribeAuthorizedOperationsTest > 
testConsumerGroupAuthorizedOperations PASSED

kafka.api.TransactionsTest > testBumpTransactionalEpoch STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclAuthorizationDenied PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations STARTED

kafka.api.TransactionsTest > testBumpTransactionalEpoch PASSED

kafka.api.TransactionsTest > testSendOffsetsWithGroupMetadata STARTED

kafka.api.TransactionsTest > testSendOffsetsWithGroupMetadata PASSED

kafka.api.TransactionsTest > testBasicTransactions STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations2 STARTED

kafka.api.TransactionsTest > testBasicTransactions PASSED

kafka.api.TransactionsTest > testSendOffsetsWithGroupId STARTED

kafka.api.TransactionsTest > testSendOffsetsWithGroupId PASSED

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations2 PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclDelete STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction 
STARTED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction PASSED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions STARTED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclDelete PASSED

kafka.api.SaslSslAdminIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
PASSED

kafka.api.TransactionsTest > testFencingOnSend STARTED

kafka.api.TransactionsTest > testFencingOnSend PASSED

kafka.api.TransactionsTest > testFencingOnCommit STARTED

kafka.api.TransactionsTest > testFencingOnCommit PASSED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.SaslSslAdminIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.SaslSslAdminIntegrationTest > testAuthorizedOperations STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testCommitTransactionTimeout STARTED

kafka.api.TransactionsTest > testCommitTransactionTimeout PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.SaslSslAdminIntegrationTest > testAuthorizedOperations PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 

Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Tom Bentley
Congratulations Boyang!

On Tue, Jun 23, 2020 at 8:11 AM Bruno Cadonna  wrote:

> Congrats, Boyang!
>
> Best,
> Bruno
>
> On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
>  wrote:
> >
> > Congrats, Boyang!
> >
> > -Konstantine
> >
> > On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
> >  wrote:
> >
> > > Many Congratulations Boyang. Very well deserved.
> > >
> > > Regards,Navinder
> > >
> > > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang <
> wang...@163.com>
> > > wrote:
> > >
> > >  Congratulations, Boyang!
> > >
> > >
> > > --
> > >
> > > Best,
> > > Matt Wang
> > >
> > >
> > > On 06/23/2020 07:59,Boyang Chen wrote:
> > > Thanks a lot everyone, I really appreciate the recognition, and hope to
> > > make more solid contributions to the community in the future!
> > >
> > > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax 
> wrote:
> > >
> > > Congrats! Well deserved!
> > >
> > > -Matthias
> > >
> > > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > > Congratulations Boyang! Well deserved.
> > >
> > > -Bill
> > >
> > > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe 
> wrote:
> > >
> > > Congratulations, Boyang!
> > >
> > > cheers,
> > > Colin
> > >
> > > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > > The PMC for Apache Kafka has invited Boyang Chen as a committer and we
> > > are
> > > pleased to announce that he has accepted!
> > >
> > > Boyang has been active in the Kafka community for more than two years.
> > > Since then he has presented his experience operating Kafka Streams at
> > > Pinterest, as well as several feature developments including rebalance
> > > improvements (KIP-345) and exactly-once scalability improvements
> > > (KIP-447), at various Kafka Summits and Kafka Meetups. More recently he
> > > has also been participating in Kafka broker development, including the
> > > post-ZooKeeper controller design (KIP-500). Besides all the code
> > > contributions, Boyang has also helped review even more PRs and KIPs than
> > > his own.
> > >
> > > Thanks for all the contributions, Boyang! We look forward to more
> > > collaborations with you on Apache Kafka.
> > >
> > >
> > > -- Guozhang, on behalf of the Apache Kafka PMC
> > >
> > >
> > >
> > >
> > >
> > >
>
>


Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-23 Thread Bruno Cadonna
Congrats, Boyang!

Best,
Bruno

On Tue, Jun 23, 2020 at 7:50 AM Konstantine Karantasis
 wrote:
>
> Congrats, Boyang!
>
> -Konstantine
>
> On Mon, Jun 22, 2020 at 9:19 PM Navinder Brar
>  wrote:
>
> > Many Congratulations Boyang. Very well deserved.
> >
> > Regards,Navinder
> >
> > On Tuesday, 23 June, 2020, 07:21:23 am IST, Matt Wang 
> > wrote:
> >
> >  Congratulations, Boyang!
> >
> >
> > --
> >
> > Best,
> > Matt Wang
> >
> >
> > On 06/23/2020 07:59,Boyang Chen wrote:
> > Thanks a lot everyone, I really appreciate the recognition, and hope to
> > make more solid contributions to the community in the future!
> >
> > On Mon, Jun 22, 2020 at 4:50 PM Matthias J. Sax  wrote:
> >
> > Congrats! Well deserved!
> >
> > -Matthias
> >
> > On 6/22/20 4:38 PM, Bill Bejeck wrote:
> > Congratulations Boyang! Well deserved.
> >
> > -Bill
> >
> > On Mon, Jun 22, 2020 at 7:35 PM Colin McCabe  wrote:
> >
> > Congratulations, Boyang!
> >
> > cheers,
> > Colin
> >
> > On Mon, Jun 22, 2020, at 16:26, Guozhang Wang wrote:
> > The PMC for Apache Kafka has invited Boyang Chen as a committer and we
> > are
> > pleased to announce that he has accepted!
> >
> > Boyang has been active in the Kafka community for more than two years.
> > Since then he has presented his experience operating Kafka Streams at
> > Pinterest, as well as several feature developments including rebalance
> > improvements (KIP-345) and exactly-once scalability improvements
> > (KIP-447), at various Kafka Summits and Kafka Meetups. More recently he
> > has also been participating in Kafka broker development, including the
> > post-ZooKeeper controller design (KIP-500). Besides all the code
> > contributions, Boyang has also helped review even more PRs and KIPs than
> > his own.
> >
> > Thanks for all the contributions, Boyang! We look forward to more
> > collaborations with you on Apache Kafka.
> >
> >
> > -- Guozhang, on behalf of the Apache Kafka PMC
> >
> >
> >
> >
> >
> >


[jira] [Created] (KAFKA-10194) run the reset tool between stopping StreamsOptimizedTest and starting the new one

2020-06-23 Thread Chia-Ping Tsai (Jira)
Chia-Ping Tsai created KAFKA-10194:
--

 Summary: run the reset tool between stopping StreamsOptimizedTest 
and starting the new one
 Key: KAFKA-10194
 URL: https://issues.apache.org/jira/browse/KAFKA-10194
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai


inspired by [~ableegoldman]

{quote}
So, that's exactly what we should do in this test. We need to

1. call KafkaStreams#cleanup before starting the application up the second time 
(ie when we turn optimizations on)
2. run the reset tool between stopping the original application and starting 
the new one
I think that 1. alone would technically be enough to stop this test from 
failing, but we should really be doing both. 
{quote}

Comment 1 is addressed by KAFKA-10191; this issue aims to address the other.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-06-23 Thread Colin McCabe
> > > On Fri, Jun 19, 2020 at 3:18 PM Ismael Juma  wrote:
> > >
> > > > Hi Colin,
> > > >
> > > > The KIP states in the Compatibility section (not Future work):
> > > >
> > > > "To support the proxy of requests, we need to build a channel for
> > > > brokers to talk directly to the controller. This part of the design
> > > > is internal change only and won’t block the KIP progress."
> > > >
> > > > I am clarifying that this is not internal only due to the config. If we
> > > > say that this KIP depends on another KIP before we can merge 
> > > > it, that's fine although it feels a bit unnecessary.
> > > >

Hi Ismael,

I didn't realize there was still a reference to the separate controller channel 
in the "Compatibility, Deprecation, and Migration Plan" section. I agree that it 
doesn't really belong there. Given that this is creating confusion, I would 
suggest that we just drop it from the KIP entirely. It really is orthogonal to 
what this KIP is about: we don't need a separate channel to implement 
redirection.

Boyang wrote:

>
> We are only opening the doors for specific internal topics (the offsets and
> txn log topics), for which I assume the client has no way to mutate the
> topic policy?
> 

Hi Boyang,

I think you and Ismael are talking about different scenarios.  You are 
describing the scenario where the broker is auto-creating the transaction log 
topic or consumer offset topic.  This scenario indeed should not happen in a 
properly-configured cluster.  However, Ismael is describing a scenario where 
the client is auto-creating some arbitrary non-internal topic just by sending a 
metadata request.

As far as I can see, there are two solutions here:

A. Close the hole in CreateTopicsPolicy immediately.  In new versions, allow 
MetadataResponse to return AUTHORIZATION_FAILED if we tried to auto-create a 
topic and failed.  Find some other error code to return for existing versions.

B. Keep the hole in CreateTopicsPolicy and add some configuration to allow 
admins to migrate gradually toward closing it. In practice, this probably means 
a configuration toggle that enables direct ZK access and starts off enabled. 
Then we can eventually default it to false and remove it entirely over time.
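As a rough sketch, option B amounts to a guarded check in the topic 
auto-creation path. The class, method, and flag names below are hypothetical 
illustrations of the idea, not anything proposed in the KIP:

```java
// Hypothetical sketch of option B: a toggle that keeps the legacy
// CreateTopicsPolicy hole open until operators opt into closing it.
final class AutoCreatePolicyGate {
    // Would start out defaulting to true, then flip to false over time.
    private final boolean allowLegacyAutoCreate;

    AutoCreatePolicyGate(boolean allowLegacyAutoCreate) {
        this.allowLegacyAutoCreate = allowLegacyAutoCreate;
    }

    // Returns true if an auto-create triggered by a metadata request may
    // bypass the create-topics policy.
    boolean mayBypassPolicy(boolean isInternalTopic) {
        if (isInternalTopic) {
            // Offsets/txn-log topics are always broker-created.
            return true;
        }
        return allowLegacyAutoCreate;
    }
}
```

Under option A, by contrast, `mayBypassPolicy` would immediately return false 
for non-internal topics and the broker would surface AUTHORIZATION_FAILED (or 
another error code on older versions) instead.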

best,
Colin