回复:[Discuss] KIP-571: Add option to force remove members in StreamsResetter

2020-02-14 Thread feyman2009
Hi, Boyang
You can call me Feyman :)
Thanks for your quick reply with great advices!
I have updated the KIP-571 , would you mind to see if it looks good ? 
Thanks !


--
发件人:Boyang Chen 
发送时间:2020年2月14日(星期五) 08:35
收件人:dev ; feyman2009 
主 题:Re: [Discuss] KIP-571: Add option to force remove members in StreamsResetter

Thanks for driving this change Feyman! Hope this is a good name to call you :)

The motivation of the KIP looks good, and I have a couple of initial thoughts:
1. I guess the reason to use setters instead of adding a new constructor to 
MemberToRemove class is because we have two String members. Could you point 
that out upfront so that people are not asking why not adding new constructor?
2. KIP discussion usually focuses on the public side changes, so you don't need 
to copy-paste the entire class. Just the new APIs you are adding should be 
suffice
3. Add the description of new flag inside Public API change, whose name could 
be simplified as `--force` and people would just read the description. An edge 
case I could think of is that some ongoing applications are not closed when the 
reset tool applies, which causes more unexpected rebalances. So it's important 
to warn users to use the flag wisely and be responsible to shutdown old 
applications first.
4. It would be good to mention in the Compatibility section which version of 
broker and admin client we need to be able to use this new feature. Also what's 
the expected behavior when the broker is not supporting the new API.
5. What additional feedback for users using the new flag? Are we going to 
include a list of successfully deleted members, and some failed members?
6. We could separate the proposed change and public API section. In the 
proposed change section, we could have a mention of which API we are going to 
use to get the members of the stream application.

And some minor style advices:
1. Remove the link on `member.id` inside Motivation section;
2. Use a code block for the new MemberToRemove and avoid unnecessary coloring;
3. Please pay more attention to style, for example `ability to  force removing` 
has double spaces. 

Boyang


On Thu, Feb 13, 2020 at 1:48 AM feyman2009  
wrote:
Hi, all
 In order to make it possible for StreamsResetter to reset stream even when 
there are active members, since we currently only have the ability to remove 
static members, so we basically need the ability to remove dynamic members, 
this involves some public interfaces change in 
org.apache.kafka.clients.admin.MemberToRemove.

 KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-571%3A+Add+option+to+force+remove+members+in+StreamsResetter
 JIRA: https://issues.apache.org/jira/browse/KAFKA-9146

 Any comments would be highly appreciated~
 Thanks !



[jira] [Created] (KAFKA-9562) Streams not making progress under heavy failures with EOS enabled on 2.5 branch

2020-02-14 Thread John Roesler (Jira)
John Roesler created KAFKA-9562:
---

 Summary: Streams not making progress under heavy failures with EOS 
enabled on 2.5 branch
 Key: KAFKA-9562
 URL: https://issues.apache.org/jira/browse/KAFKA-9562
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.5.0
Reporter: John Roesler
Assignee: Boyang Chen
 Fix For: 2.5.0


During soak testing in preparation for the 2.5.0 release, we have discovered a 
case in which Streams appears to stop making progress. Specifically, this is a 
failure-resilience test in which we inject network faults separating the 
instances from the brokers roughly every twenty minutes.

On 2.4, Streams would obviously spend a lot of time rebalancing under this 
scenario, but would still make progress. However, on the current 2.5 branch, 
Streams effectively stops making progress except rarely.

This appears to be a severe regression, so I'm filing this ticket as a 2.5.0 
release blocker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Possible to create Scrum board under Kafka project in JIRA?

2020-02-14 Thread Harsha Chintalapani
All the dependency JIRAs are created and linked to the EPIC here
https://issues.apache.org/jira/browse/KAFKA-7739 . Lets drive it through
that.
-Harsha


On Fri, Feb 14, 2020 at 2:28 AM, Alexandre Dupriez <
alexandre.dupr...@gmail.com> wrote:

> Good morning,
>
> Would it be possible to allow the the Apache Kafka project in JIRA to be
> included in a new Scrum board?
>
> I can see there is already a Kanban board for Cloudera and tried to create
> a Scrum board for Tiered-Storage but don't have the permissions to include
> Apache Kafka.
>
> Thank you,
> Alexandre
>


Build failed in Jenkins: kafka-trunk-jdk8 #4237

2020-02-14 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9557: correct thread process-rate sensor to measure throughput

[github] KAFKA-9556; Fix two issues with KIP-558 and expand testing coverage


--
[...truncated 2.87 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task 

[jira] [Created] (KAFKA-9561) Update task input partitions when topic metadata changes

2020-02-14 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-9561:
--

 Summary: Update task input partitions when topic metadata changes
 Key: KAFKA-9561
 URL: https://issues.apache.org/jira/browse/KAFKA-9561
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Boyang Chen


With https://issues.apache.org/jira/browse/KAFKA-9545, we exposed a possibility 
that a task could have been alive throughout the rebalance, while the input 
partitions actually change. For example, a regex subscribed source could have 
different topics when partitions are added/removed. We need to consider adding 
the support to expand/shrink the partitions across rebalance to keep task 
information consistent with subscription data.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9560) Connector::validate is utilized concurrently by the framework, but not documented as thread-safe

2020-02-14 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-9560:


 Summary: Connector::validate is utilized concurrently by the 
framework, but not documented as thread-safe
 Key: KAFKA-9560
 URL: https://issues.apache.org/jira/browse/KAFKA-9560
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Chris Egerton


Requests to the {{PUT /connector-plugins/\{connectorType}/config/validate}} 
endpoint are [delegated to the 
herder|https://github.com/apache/kafka/blob/16ee326755e3f13914a0ed446c34c84e65fc0bc4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorPluginsResource.java#L81],
 which [caches connector 
instances|https://github.com/apache/kafka/blob/16ee326755e3f13914a0ed446c34c84e65fc0bc4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L536-L544]
 that are used [during config 
validation|https://github.com/apache/kafka/blob/16ee326755e3f13914a0ed446c34c84e65fc0bc4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L310].
 This has the effect that, should concurrent requests to that endpoint occur 
for the same connector type, the same connector instance may be responsible for 
[validating those 
configurations|https://github.com/apache/kafka/blob/16ee326755e3f13914a0ed446c34c84e65fc0bc4/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L334]
 concurrently _(may_ instead of _will_ because there is also a race condition 
in the {{AbstractHerder::getConnector}} method that potentially fails to detect 
that an instance of the connector has already been created and, as a result, 
creates a second instance).

This is slightly problematic because the 
[Connector::validate|https://github.com/apache/kafka/blob/16ee326755e3f13914a0ed446c34c84e65fc0bc4/connect/api/src/main/java/org/apache/kafka/connect/connector/Connector.java#L122-L127]
 method is not marked as thread-safe. However, because a lot of connectors out 
there tend to use the default implementation for that method, it's probably not 
super urgent that we patch this immediately.

A couple of options are:
 # Update the docs for that method to specify that it must be thread-safe
 # Rewrite the connector validation logic in the framework to avoid 
concurrently invoking {{Connector::validate}} on the same instance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9536) Integration tests for KIP-558

2020-02-14 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9536.
---
Resolution: Fixed

 

Fixed with [https://github.com/apache/kafka/pull/8085]

> Integration tests for KIP-558
> -
>
> Key: KAFKA-9536
> URL: https://issues.apache.org/jira/browse/KAFKA-9536
> Project: Kafka
>  Issue Type: Test
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Major
>
> Extend testing coverage for 
> [KIP-558|https://cwiki.apache.org/confluence/display/KAFKA/KIP-558%3A+Track+the+set+of+actively+used+topics+by+connectors+in+Kafka+Connect]
>  with integration tests and additional unit tests. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9559) Change the default "default serde" from ByteArraySerde to null

2020-02-14 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9559:
--

 Summary: Change the default "default serde" from ByteArraySerde to 
null
 Key: KAFKA-9559
 URL: https://issues.apache.org/jira/browse/KAFKA-9559
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Sophie Blee-Goldman
 Fix For: 3.0.0


The current default "default serde" is not particularly useful, and in almost 
all cases is intended to be overwritten either by setting the config explicitly 
or by specifying serdes directly in the topology. If a user does not set the 
config and misses specifying a serde they will get a runtime ClassCastException 
once they start processing unless they are in fact processing only bytes.

We should change the default default to null, so that an exception will instead 
be thrown immediately on startup if a user failed to specify a serde somewhere 
in their topology and it falls back to the unset default.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9557) Thread-level "process" metrics are computed incorrectly

2020-02-14 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-9557.
-
Fix Version/s: 2.6.0
   Resolution: Fixed

> Thread-level "process" metrics are computed incorrectly
> ---
>
> Key: KAFKA-9557
> URL: https://issues.apache.org/jira/browse/KAFKA-9557
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.6.0
>
>
> Among others, Streams reports the following two thread-level "process" 
> metrics:
> "process-rate": The average number of process calls per second.
> "process-total": The total number of process calls across all tasks.
> See the docs: 
> https://kafka.apache.org/documentation/#kafka_streams_thread_monitoring
> There's some surprising ambiguity in these definitions that has led to 
> Streams actually reporting something different than what most people would 
> probably expect. Specifically, it's not defined what a "process call" is.
> A reasonable definition of a "process call" is processing a record or 
> processing a task (both of which are publicly facing concepts, and both of 
> which are the same, since tasks process records one at a time). However, we 
> currently measure number of invocations to a private, internal `process()` 
> method, which would actually process more than one record at a time. Thus, 
> the current metric is under-counting the throughput, in an esoteric and 
> confusing way.
> Instead, we should simply change the rate and total metrics to measure the 
> (rate and total) of _record_ processing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk11 #1160

2020-02-14 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9512: Flaky Test


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #4236

2020-02-14 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-9512: Flaky Test


--
[...truncated 2.87 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

> Task :streams:upgrade-system-tests-0100:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0100:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0100:classes UP-TO-DATE
> Task 

[jira] [Created] (KAFKA-9558) getListOffsetsCalls doesn't update node in case of leader change

2020-02-14 Thread Sanjana Kaundinya (Jira)
Sanjana Kaundinya created KAFKA-9558:


 Summary: getListOffsetsCalls doesn't update node in case of leader 
change
 Key: KAFKA-9558
 URL: https://issues.apache.org/jira/browse/KAFKA-9558
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.5.0
Reporter: Sanjana Kaundinya
Assignee: Sanjana Kaundinya


As seen here:
[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/KafkaAdminClient.java#L3810]

In handling the response in the `listOffsets` call, if there are errors in the 
topic partition that require a metadata refresh, it simply passes the call 
object as `this`. This produces incorrect behavior if there was a leader 
change, because the call object never gets its leader node updated. This will 
result in a tight loop of list offsets being called to the same old leader and 
not resulting in offsets, even though the metadata was correctly updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-02-14 Thread John Roesler
Hi Dongjin,

Thanks for the KIP!

Can you explain more about why the internal data structures of suppression 
should be queriable? The motivation just says that users might want to do it, 
which seems like it could justify literally anything :)

One design point of Suppression is that if you wanted to query the “final 
state”, you can Materialize the suppress itself (which is why it needs the 
variant); if you wanted to query the “intermediate state”, you can materialize 
the operator immediately before the suppress.

Example:

...count(Materialized.as(“intermediate”))
  .supress(untilWindowClosed(), Materialized.as(“final”))

I’m not sure what use case would require actually fetching from the internal 
buffers. 

Thanks,
John


On Fri, Feb 14, 2020, at 07:55, Dongjin Lee wrote:
> Hi devs,
> 
> I'd like to reboot the discussion on KIP-508, which aims to support a
> Materialized variant of KTable#suppress. It was initially submitted several
> months ago but closed by the inactivity.
> 
> - KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
> - Jira: https://issues.apache.org/jira/browse/KAFKA-8403
> 
> All kinds of feedback will be greatly appreciated.
> 
> Best,
> Dongjin
> 
> -- 
> *Dongjin Lee*
> 
> *A hitchhiker in the mathematical world.*
> *github:  github.com/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck: speakerdeck.com/dongjin
> *
>


Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-14 Thread Sophie Blee-Goldman
Thanks all!

This KIP passes with 3 binding votes (John, Bill, and Guozhang) and
3 non-binding votes (Navinder, Konstantine, Boyang, and Bruno).

I'll call for review on a PR shortly.

Sophie



On Fri, Feb 14, 2020 at 12:25 AM Bruno Cadonna  wrote:

> Thanks!
>
> +1 (non-binding)
>
> Best,
> Bruno
>
> On Fri, Feb 14, 2020 at 1:57 AM Boyang Chen 
> wrote:
> >
> > +1 (non-binding)
> >
> > On Thu, Feb 13, 2020 at 4:45 PM Guozhang Wang 
> wrote:
> >
> > > +1 (binding).
> > >
> > >
> > > Guozhang
> > >
> > > On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang 
> wrote:
> > >
> > > > Hi Sophie,
> > > >
> > > > Thanks for the KIP, I left some comments on the DISCUSS thread.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Tue, Feb 11, 2020 at 3:25 PM Bill Bejeck 
> wrote:
> > > >
> > > >> Thanks for the KIP Sophie.
> > > >>
> > > >> It's a +1 (binding) for me.
> > > >>
> > > >> -Bill
> > > >>
> > > >> On Tue, Feb 11, 2020 at 4:21 PM Konstantine Karantasis <
> > > >> konstant...@confluent.io> wrote:
> > > >>
> > > >> > The KIP reads quite well for me now and I think this feature will
> > > enable
> > > >> > even more efficient load balancing for specific use cases.
> > > >> >
> > > >> > I'm also +1 (non-binding)
> > > >> >
> > > >> > - Konstantine
> > > >> >
> > > >> > On Tue, Feb 11, 2020 at 9:35 AM Navinder Brar
> > > >> >  wrote:
> > > >> >
> > > >> > > Thanks Sophie, much required.
> > > >> > > +1 non-binding.
> > > >> > >
> > > >> > >
> > > >> > > Sent from Yahoo Mail for iPhone
> > > >> > >
> > > >> > >
> > > >> > > On Tuesday, February 11, 2020, 10:33 PM, John Roesler <
> > > >> > vvcep...@apache.org>
> > > >> > > wrote:
> > > >> > >
> > > >> > > Thanks Sophie,
> > > >> > >
> > > >> > > I'm +1 (binding)
> > > >> > >
> > > >> > > -John
> > > >> > >
> > > >> > > On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
> > > >> > > > Hey all,
> > > >> > > >
> > > >> > > > I'd like to start the voting on KIP-568. It proposes the new
> > > >> > > > Consumer#enforceRebalance API to facilitate triggering
> efficient
> > > >> > > rebalances.
> > > >> > > >
> > > >> > > > For reference, here is the KIP link again:
> > > >> > > >
> > > >> > >
> > > >> >
> > > >>
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> > > >> > > >
> > > >> > > > Thanks!
> > > >> > > > Sophie
> > > >> > > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> >
> > > >>
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
>


Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-02-14 Thread Konstantine Karantasis
Hi David,

I want to confirm what Randall mentions above. The code fixes for
KAFKA-9556 were in place before code freeze on Wed, but we spent a bit more
time hardening the conditions of the integration tests and fixing some
jenkins branch builders to run the test on repeat.

Best,
Konstantine


On Fri, Feb 14, 2020 at 7:42 AM Randall Hauch  wrote:

> Hi, David.
>
> I just filed https://issues.apache.org/jira/browse/KAFKA-9556 that
> identifies two pretty minor issues with the new KIP-558 that adds new
> Connect REST API endpoints to get the list of topics used by a connector.
> The impact is high: the feature cannot be fully disabled, and Connect does
> not automatically reset the topic set when a connector is deleted.
> https://github.com/apache/kafka/pull/8085 includes the two fixes, and also
> adds more unit and integration tests for this feature. Although I just
> created the blocker this AM, Konstantine has actually be working on the fix
> for four days. Risk of merging this PR is low, since a) the new integration
> tests add significant coverage and we've run the new tests numerous times,
> and b) the fixes help gate the new feature even more and allow the feature
> to be completely disabled.
>
> I'd like approve to merge https://github.com/apache/kafka/pull/8085
>
> Thanks!
> Randall
>
> On Mon, Feb 10, 2020 at 11:31 AM David Arthur  wrote:
>
> > Just a friendly reminder that this Wednesday, February 12th, is the code
> > freeze for the 2.5.0 release. After this time we will only accept blocker
> > bugs onto the release branch.
> >
> > Thanks!
> > David
> >
> > On Fri, Jan 31, 2020 at 5:13 PM David Arthur  wrote:
> >
> > > Thanks! I've updated the list.
> > >
> > > On Thu, Jan 30, 2020 at 5:48 PM Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > >> Hi David,
> > >>
> > >> thanks for driving the release.
> > >>
> > >> Please also remove KIP-158 from the list of KIPs that you plan to
> > include
> > >> in 2.5
> > >> KIP-158 has been accepted, but the implementation is not yet final. It
> > >> will be included in the release that follows 2.5.
> > >>
> > >> Regards,
> > >> Konstantine
> > >>
> > >> On 1/30/20, Matthias J. Sax  wrote:
> > >> > Hi David,
> > >> >
> > >> > the following KIP from the list did not make it:
> > >> >
> > >> >  - KIP-216 (no PR yet)
> > >> >  - KIP-399 (no PR yet)
> > >> >  - KIP-401 (PR not merged yet)
> > >> >
> > >> >
> > >> > KIP-444 should be included as we did make progress, but it is still
> > not
> > >> > fully implement and we need to finish in in 2.6 release.
> > >> >
> > >> > KIP-447 is partially implemented in 2.5 (ie, broker and
> > >> > consumer/producer changes -- the Kafka Streams parts slip)
> > >> >
> > >> >
> > >> > -Matthias
> > >> >
> > >> >
> > >> > On 1/29/20 9:05 AM, David Arthur wrote:
> > >> >> Hey everyone, just a quick update on the 2.5 release.
> > >> >>
> > >> >> I have updated the list of planned KIPs on the release wiki page
> > >> >>
> > >>
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=143428858
> > >> .
> > >> >> If I have missed anything, or there are KIPs included in this list
> > >> which
> > >> >> should *not* be included in 2.5, please let me know.
> > >> >>
> > >> >> Based on the release schedule, the feature freeze is today, Jan
> 29th.
> > >> Any
> > >> >> major feature work that is not already complete will need to push
> out
> > >> to
> > >> >> 2.6. I will work on cutting the release branch during the day
> > tomorrow
> > >> >> (Jan
> > >> >> 30th).
> > >> >>
> > >> >> If you have any questions, please feel free to reach out to me
> > directly
> > >> >> or
> > >> >> in this thread.
> > >> >>
> > >> >> Thanks!
> > >> >> David
> > >> >>
> > >> >> On Mon, Jan 13, 2020 at 1:35 PM Colin McCabe 
> > >> wrote:
> > >> >>
> > >> >>> +1.  Thanks for volunteering, David.
> > >> >>>
> > >> >>> best,
> > >> >>> Colin
> > >> >>>
> > >> >>> On Fri, Dec 20, 2019, at 10:59, David Arthur wrote:
> > >>  Greetings!
> > >> 
> > >>  I'd like to volunteer to be release manager for the next
> time-based
> > >> >>> feature
> > >>  release which will be 2.5. If there are no objections, I'll send
> > out
> > >>  the
> > >>  release plan in the next few days.
> > >> 
> > >>  Thanks,
> > >>  David Arthur
> > >> 
> > >> >>>
> > >> >>
> > >> >>
> > >> >
> > >> >
> > >>
> > >
> > >
> > > --
> > > David Arthur
> > >
> >
> >
> > --
> > David Arthur
> >
>


Re: [DISCUSS] KIP-570: Add leader epoch in StopReplicaRequest

2020-02-14 Thread Jason Gustafson
Hey David,

Thanks, it makes sense to prevent reordering, especially for the case of
reassignment. When a topic is deleted, however, I am not sure we will have
a bumped epoch to send. I guess for that case, we could send a sentinel
which would take the existing semantics of overriding any existing epoch?

-Jason

On Tue, Feb 11, 2020 at 12:48 PM David Jacot  wrote:

> Hi all,
>
> I've put together a very small KIP which proposes to add the leader epoch
> in the
> StopReplicaRequest in order to make it robust to reordering:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-570%3A+Add+leader+epoch+in+StopReplicaRequest
>
> Please take a look at the KIP and let me know what you think.
>
> Best,
> David
>


[jira] [Created] (KAFKA-9557) Thread-level "process" metrics are computed incorrectly

2020-02-14 Thread John Roesler (Jira)
John Roesler created KAFKA-9557:
---

 Summary: Thread-level "process" metrics are computed incorrectly
 Key: KAFKA-9557
 URL: https://issues.apache.org/jira/browse/KAFKA-9557
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: John Roesler
Assignee: John Roesler


Among others, Streams reports the following two thread-level "process" metrics:

"process-rate": The average number of process calls per second.
"process-total": The total number of process calls across all tasks.

See the docs: 
https://kafka.apache.org/documentation/#kafka_streams_thread_monitoring

There's some surprising ambiguity in these definitions that has led to Streams 
actually reporting something different than what most people would probably 
expect. Specifically, it's not defined what a "process call" is.

A reasonable definition of a "process call" is processing a record or 
processing a task (both of which are publicly facing concepts, and both of 
which are the same, since tasks process records one at a time). However, we 
currently measure number of invocations to a private, internal `process()` 
method, which would actually process more than one record at a time. Thus, the 
current metric is under-counting the throughput, in an esoteric and confusing 
way.

Instead, we should simply change the rate and total metrics to measure the 
(rate and total) of _record_ processing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.5.0 release

2020-02-14 Thread Randall Hauch
Hi, David.

I just filed https://issues.apache.org/jira/browse/KAFKA-9556 that
identifies two pretty minor issues with the new KIP-558 that adds new
Connect REST API endpoints to get the list of topics used by a connector.
The impact is high: the feature cannot be fully disabled, and Connect does
not automatically reset the topic set when a connector is deleted.
https://github.com/apache/kafka/pull/8085 includes the two fixes, and also
adds more unit and integration tests for this feature. Although I just
created the blocker this AM, Konstantine has actually be working on the fix
for four days. Risk of merging this PR is low, since a) the new integration
tests add significant coverage and we've run the new tests numerous times,
and b) the fixes help gate the new feature even more and allow the feature
to be completely disabled.

I'd like approve to merge https://github.com/apache/kafka/pull/8085

Thanks!
Randall

On Mon, Feb 10, 2020 at 11:31 AM David Arthur  wrote:

> Just a friendly reminder that this Wednesday, February 12th, is the code
> freeze for the 2.5.0 release. After this time we will only accept blocker
> bugs onto the release branch.
>
> Thanks!
> David
>
> On Fri, Jan 31, 2020 at 5:13 PM David Arthur  wrote:
>
> > Thanks! I've updated the list.
> >
> > On Thu, Jan 30, 2020 at 5:48 PM Konstantine Karantasis <
> > konstant...@confluent.io> wrote:
> >
> >> Hi David,
> >>
> >> thanks for driving the release.
> >>
> >> Please also remove KIP-158 from the list of KIPs that you plan to
> include
> >> in 2.5
> >> KIP-158 has been accepted, but the implementation is not yet final. It
> >> will be included in the release that follows 2.5.
> >>
> >> Regards,
> >> Konstantine
> >>
> >> On 1/30/20, Matthias J. Sax  wrote:
> >> > Hi David,
> >> >
> >> > the following KIP from the list did not make it:
> >> >
> >> >  - KIP-216 (no PR yet)
> >> >  - KIP-399 (no PR yet)
> >> >  - KIP-401 (PR not merged yet)
> >> >
> >> >
> >> > KIP-444 should be included as we did make progress, but it is still
> not
> >> > fully implement and we need to finish in in 2.6 release.
> >> >
> >> > KIP-447 is partially implemented in 2.5 (ie, broker and
> >> > consumer/producer changes -- the Kafka Streams parts slip)
> >> >
> >> >
> >> > -Matthias
> >> >
> >> >
> >> > On 1/29/20 9:05 AM, David Arthur wrote:
> >> >> Hey everyone, just a quick update on the 2.5 release.
> >> >>
> >> >> I have updated the list of planned KIPs on the release wiki page
> >> >>
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=143428858
> >> .
> >> >> If I have missed anything, or there are KIPs included in this list
> >> which
> >> >> should *not* be included in 2.5, please let me know.
> >> >>
> >> >> Based on the release schedule, the feature freeze is today, Jan 29th.
> >> Any
> >> >> major feature work that is not already complete will need to push out
> >> to
> >> >> 2.6. I will work on cutting the release branch during the day
> tomorrow
> >> >> (Jan
> >> >> 30th).
> >> >>
> >> >> If you have any questions, please feel free to reach out to me
> directly
> >> >> or
> >> >> in this thread.
> >> >>
> >> >> Thanks!
> >> >> David
> >> >>
> >> >> On Mon, Jan 13, 2020 at 1:35 PM Colin McCabe 
> >> wrote:
> >> >>
> >> >>> +1.  Thanks for volunteering, David.
> >> >>>
> >> >>> best,
> >> >>> Colin
> >> >>>
> >> >>> On Fri, Dec 20, 2019, at 10:59, David Arthur wrote:
> >>  Greetings!
> >> 
> >>  I'd like to volunteer to be release manager for the next time-based
> >> >>> feature
> >>  release which will be 2.5. If there are no objections, I'll send
> out
> >>  the
> >>  release plan in the next few days.
> >> 
> >>  Thanks,
> >>  David Arthur
> >> 
> >> >>>
> >> >>
> >> >>
> >> >
> >> >
> >>
> >
> >
> > --
> > David Arthur
> >
>
>
> --
> David Arthur
>


[jira] [Created] (KAFKA-9556) KIP-558 cannot be fully disabled and when enabled topic reset not working on connector deletion

2020-02-14 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-9556:


 Summary: KIP-558 cannot be fully disabled and when enabled topic 
reset not working on connector deletion
 Key: KAFKA-9556
 URL: https://issues.apache.org/jira/browse/KAFKA-9556
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.0
Reporter: Randall Hauch
Assignee: Konstantine Karantasis
 Fix For: 2.5.0


According to KIP-558 for the new Connect topic tracking feature, Connect should 
not write the topic status records when the topic is disabled. However, 
currently that is not the case: when the new topic tracking (KIP-558) feature 
is disabled, Connect still writes topic status records to the internal status 
topic. 

Also, according to the KIP, Connect should automatically reset the topic status 
when a connector is deleted, but that is not happening.

It'd be good to increase test coverage on the new feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-02-14 Thread Dongjin Lee
Hi devs,

I'd like to reboot the discussion on KIP-508, which aims to support a
Materialized variant of KTable#suppress. It was initially submitted several
months ago but closed by the inactivity.

- KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
- Jira: https://issues.apache.org/jira/browse/KAFKA-8403

All kinds of feedback will be greatly appreciated.

Best,
Dongjin

-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*
*github:  github.com/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin
*


[jira] [Resolved] (KAFKA-9554) Define the SPI for Tiered Storage framework

2020-02-14 Thread Satish Duggana (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana resolved KAFKA-9554.
---
Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/KAFKA-9548

> Define the SPI for Tiered Storage framework
> ---
>
> Key: KAFKA-9554
> URL: https://issues.apache.org/jira/browse/KAFKA-9554
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core
>Reporter: Alexandre Dupriez
>Assignee: Alexandre Dupriez
>Priority: Major
>
> The goal of this task is to define the SPI (service provider interfaces) 
> which will be used by vendors to implement plug-ins to communicate with 
> specific storage system.
> Done means:
>  * Package with interfaces and key objects available and published for review.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-02-14 Thread Nikolay Izhikov
Hello, Kafka team.

I ran system tests that use SSL for the TLSv1.3. 
You can find the results of the tests in the Jira ticket [1], [2], [3], [4].

I also, need a changes [5] in `security_config.py` to execute system tests with 
TLSv1.3(more info in PR description).
Please, take a look.

Test environment:
• openjdk11
• trunk + changes from my PR [5].

Full system tests results have volume 15gb.
Should I share full logs with you?

What else should be done before we can enable TLSv1.3 by default?

[1] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927

[2] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928

[3] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929

[4] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930

[5] 
https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51

> 29 янв. 2020 г., в 15:27, Nikolay Izhikov  написал(а):
> 
> Hello, Rajini.
> 
> Thanks for the feedback.
> 
> I’ve searched tests by the «ssl» keyword and found the following tests:
> 
> ./test/kafkatest/services/kafka_log4j_appender.py
> ./test/kafkatest/services/listener_security_config.py
> ./test/kafkatest/services/security/security_config.py
> ./test/kafkatest/tests/core/security_test.py
> 
> Is this all tests that need to be run with the TLSv1.3 to ensure we can 
> enable it by default?
> 
>> 28 янв. 2020 г., в 14:58, Rajini Sivaram  
>> написал(а):
>> 
>> Hi Nikolay,
>> 
>> Not sure of the total space required. But you can run a collection of tests 
>> at a time instead of running them all together. That way, you could just run 
>> all the tests that enable SSL. Details of running a subset of tests are in 
>> the README in tests.
>> 
>> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov  wrote:
>> Hello, Rajini.
>> 
>> I’m tried to run all system tests but failed for now.
>> It happens, that system tests generates a lot of logs.
>> I had a 250GB of the free space but it all was occupied by the log from half 
>> of the system tests.
>> 
>> Do you have any idea what is summary disc space I need to run all system 
>> tests?  
>> 
>>> 7 янв. 2020 г., в 14:49, Rajini Sivaram  
>>> написал(а):
>>> 
>>> Hi Nikolay,
>>> 
>>> There a couple of things you could do:
>>> 
>>> 1) Run all system tests that use SSL with TLSv1.3. I had run a subset, but
>>> it will be good to run all of them. You can do this locally using docker
>>> with JDK 11 by updating the files in tests/docker. You will need to update
>>> tests/kafkatest/services/security/security_config.py to enable only
>>> TLSv1.3. Instructions for running system tests using docker are in
>>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
>>> 2) For integration tests, we run a small number of tests using TLSv1.3 if
>>> the tests are run using JDK 11 and above. We need to do this for system
>>> tests as well. There is an open JIRA:
>>> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign this
>>> to yourself if you have time to do this.
>>> 
>>> Regards,
>>> 
>>> Rajini
>>> 
>>> 
>>> On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков  wrote:
>>> 
 Hello, Rajini.
 
 Can you, please, clarify, what should be done?
 I can try to do tests by myself.
 
> 6 янв. 2020 г., в 21:29, Rajini Sivaram 
 написал(а):
> 
> Hi Brajesh.
> 
> No one is working on this yet, but will follow up with the Confluent
 tools
> team to see when this can be done.
> 
> On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar 
 wrote:
> 
>> Hello Rajini,
>> 
>> What is the plan to run system tests using JDK 11? Is someone working on
>> this?
>> 
>> On Mon, Jan 6, 2020 at 3:00 PM Rajini Sivaram 
>> wrote:
>> 
>>> Hi Nikolay,
>>> 
>>> We can leave the KIP open and restart the discussion once system tests
>> are
>>> running.
>>> 
>>> Thanks,
>>> 
>>> Rajini
>>> 
>>> On Mon, Jan 6, 2020 at 2:46 PM Николай Ижиков 
>> wrote:
>>> 
 Hello, Rajini.
 
 Thanks, for the feedback.
 
 Should I mark this KIP as declined?
 Or just wait for the system tests results?
 
> 6 янв. 2020 г., в 17:26, Rajini Sivaram 
 написал(а):
> 
> Hi Nikolay,
> 
> Thanks for the KIP. We currently run system tests using JDK 8 and
>> hence
 we
> don't yet have full system test results with TLS 1.3 which requires
>> JDK
 11.
> We should wait until that is done before enabling TLS1.3 by 

Possible to create Scrum board under Kafka project in JIRA?

2020-02-14 Thread Alexandre Dupriez
Good morning,

Would it be possible to allow the the Apache Kafka project in JIRA to
be included in a new Scrum board?

I can see there is already a Kanban board for Cloudera and tried to
create a Scrum board for Tiered-Storage but don't have the permissions
to include Apache Kafka.

Thank you,
Alexandre


[jira] [Created] (KAFKA-9555) Topic-based implementation for the RLMM

2020-02-14 Thread Alexandre Dupriez (Jira)
Alexandre Dupriez created KAFKA-9555:


 Summary: Topic-based implementation for the RLMM
 Key: KAFKA-9555
 URL: https://issues.apache.org/jira/browse/KAFKA-9555
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Alexandre Dupriez






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9554) Define the SPI for Tiered Storage framework

2020-02-14 Thread Alexandre Dupriez (Jira)
Alexandre Dupriez created KAFKA-9554:


 Summary: Define the SPI for Tiered Storage framework
 Key: KAFKA-9554
 URL: https://issues.apache.org/jira/browse/KAFKA-9554
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, core
Reporter: Alexandre Dupriez
Assignee: Alexandre Dupriez


The goal of this task is to define the SPI (service provider interfaces) which 
will be used by vendors to implement plug-ins to communicate with specific 
storage system.



Done means:
 * Package with interfaces and key objects available and published for review.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.5-jdk8 #22

2020-02-14 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #1159

2020-02-14 Thread Apache Jenkins Server
See 


Changes:

[github] HOTFIX: Fix breakage in `ConsumerPerformanceTest` (#8113)


--
[...truncated 2.89 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED


Re: [VOTE] KIP-568: Explicit rebalance triggering on the Consumer

2020-02-14 Thread Bruno Cadonna
Thanks!

+1 (non-binding)

Best,
Bruno

On Fri, Feb 14, 2020 at 1:57 AM Boyang Chen  wrote:
>
> +1 (non-binding)
>
> On Thu, Feb 13, 2020 at 4:45 PM Guozhang Wang  wrote:
>
> > +1 (binding).
> >
> >
> > Guozhang
> >
> > On Tue, Feb 11, 2020 at 5:29 PM Guozhang Wang  wrote:
> >
> > > Hi Sophie,
> > >
> > > Thanks for the KIP, I left some comments on the DISCUSS thread.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Tue, Feb 11, 2020 at 3:25 PM Bill Bejeck  wrote:
> > >
> > >> Thanks for the KIP Sophie.
> > >>
> > >> It's a +1 (binding) for me.
> > >>
> > >> -Bill
> > >>
> > >> On Tue, Feb 11, 2020 at 4:21 PM Konstantine Karantasis <
> > >> konstant...@confluent.io> wrote:
> > >>
> > >> > The KIP reads quite well for me now and I think this feature will
> > enable
> > >> > even more efficient load balancing for specific use cases.
> > >> >
> > >> > I'm also +1 (non-binding)
> > >> >
> > >> > - Konstantine
> > >> >
> > >> > On Tue, Feb 11, 2020 at 9:35 AM Navinder Brar
> > >> >  wrote:
> > >> >
> > >> > > Thanks Sophie, much required.
> > >> > > +1 non-binding.
> > >> > >
> > >> > >
> > >> > > Sent from Yahoo Mail for iPhone
> > >> > >
> > >> > >
> > >> > > On Tuesday, February 11, 2020, 10:33 PM, John Roesler <
> > >> > vvcep...@apache.org>
> > >> > > wrote:
> > >> > >
> > >> > > Thanks Sophie,
> > >> > >
> > >> > > I'm +1 (binding)
> > >> > >
> > >> > > -John
> > >> > >
> > >> > > On Mon, Feb 10, 2020, at 20:54, Sophie Blee-Goldman wrote:
> > >> > > > Hey all,
> > >> > > >
> > >> > > > I'd like to start the voting on KIP-568. It proposes the new
> > >> > > > Consumer#enforceRebalance API to facilitate triggering efficient
> > >> > > rebalances.
> > >> > > >
> > >> > > > For reference, here is the KIP link again:
> > >> > > >
> > >> > >
> > >> >
> > >>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-568%3A+Explicit+rebalance+triggering+on+the+Consumer
> > >> > > >
> > >> > > > Thanks!
> > >> > > > Sophie
> > >> > > >
> > >> > >
> > >> > >
> > >> > >
> > >> > >
> > >> >
> > >>
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
> > --
> > -- Guozhang
> >