Re: KAFKA-10145

2020-06-10 Thread lqjacklee
Thanks Paul, I will check it.

On Thu, Jun 11, 2020 at 1:01 PM Paul Whalen  wrote:

> Perhaps I’m misunderstanding, but this looks like the cogroup feature:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup.
> Do you think that covers your use case?
>
> Paul
>
> > On Jun 10, 2020, at 10:13 PM, lqjacklee  wrote:
> >
> > Dear team,
> >
> >   I have created the JIRA
> > https://issues.apache.org/jira/browse/KAFKA-10145.
> > I want to enhance the join feature to support multiple joins/aggregations
> > on streams. If anyone is interested or has questions or ideas, please
> > let me know. Thanks.
>


[jira] [Resolved] (KAFKA-9216) Enforce connect internal topic configuration at startup

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9216.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk` the second PR, which enforces the `cleanup.policy` topic 
setting on Connect's three internal topics, and cherry-picked it to the `2.6` 
branch (for the upcoming 2.6.0). However, merging to earlier branches would 
require too many changes in integration tests.

> Enforce connect internal topic configuration at startup
> ---
>
> Key: KAFKA-9216
> URL: https://issues.apache.org/jira/browse/KAFKA-9216
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Evelyn Bayes
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Users sometimes configure Connect's internal config topic with more than one 
> partition. Exactly one partition is expected, however, and using more than 
> one leads to odd behavior that is sometimes not easy to spot.
> Here's one example of a log message:
> {noformat}
> "textPayload": "[2019-11-20 11:12:14,049] INFO [Worker clientId=connect-1, 
> groupId=td-connect-server] Current config state offset 284 does not match 
> group assignment 274. Forcing rebalance. 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:942)\n"
> {noformat}
> Would it be possible to add a check in the KafkaConfigBackingStore and 
> prevent the worker from starting if the Connect config topic's partition 
> count != 1?
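
For illustration, a minimal sketch (not the merged implementation) of the
startup check the description asks for, using the Admin client; the method
name and arguments are placeholders:

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.config.ConfigException;

// Sketch only: fail fast at startup if the Connect config topic has more than
// one partition, instead of limping along with confusing rebalance behavior.
static void verifyConfigTopic(Properties workerProps, String configTopic) throws Exception {
    try (Admin admin = Admin.create(workerProps)) {
        TopicDescription desc = admin.describeTopics(Collections.singleton(configTopic))
            .all().get().get(configTopic);
        if (desc.partitions().size() != 1) {
            throw new ConfigException("Connect config topic '" + configTopic
                + "' must have exactly one partition, but has " + desc.partitions().size());
        }
    }
}
{code}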



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9845) plugin.path property does not work with config provider

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9845.
--
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk`, and backported to `2.6` (for upcoming 2.6.0), `2.5` (for 
upcoming 2.5.1), and `2.4` (for future 2.4.2).

> plugin.path property does not work with config provider
> ---
>
> Key: KAFKA-9845
> URL: https://issues.apache.org/jira/browse/KAFKA-9845
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Minor
> Fix For: 2.6.0, 2.4.2, 2.5.1, 2.7.0
>
>
> The config provider mechanism doesn't work if used for the {{plugin.path}} 
> property of a standalone or distributed Connect worker. This is because the 
> {{Plugins}} instance which performs plugin path scanning is created using the 
> raw worker config, pre-transformation (see 
> [ConnectStandalone|https://github.com/apache/kafka/blob/371ad143a6bb973927c89c0788d048a17ebac91a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java#L79]
>  and 
> [ConnectDistributed|https://github.com/apache/kafka/blob/371ad143a6bb973927c89c0788d048a17ebac91a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java#L91]).
> Unfortunately, because config providers are loaded as plugins, there's a 
> circular dependency issue here. The {{Plugins}} instance needs to be created 
> _before_ the {{DistributedConfig}}/{{StandaloneConfig}} is created in order 
> for the config providers to be loaded correctly, and the config providers 
> need to be loaded in order to perform their logic on any properties 
> (including the {{plugin.path}} property).
> There is no clear fix for this issue in the code base, and the only known 
> workaround is to refrain from using config providers for the {{plugin.path}} 
> property.
> A couple improvements could potentially be made to improve the UX when this 
> issue arises:
>  #  Alter the config logging performed by the {{DistributedConfig}} and 
> {{StandaloneConfig}} classes to _always_ log the raw value for the 
> {{plugin.path}} property. Right now, the transformed value is logged even 
> though it isn't used, which is likely to cause confusion.
>  # Issue a {{WARN}}- or even {{ERROR}}-level log message when it's detected 
> that the user is attempting to use config providers for the {{plugin.path}} 
> property, which states that config providers cannot be used for that specific 
> property, instructs them to change the value for the property accordingly, 
> and/or informs them of the actual value that the framework will use for that 
> property when performing plugin path scanning (sketched below).
> We should _not_ throw an error on startup if this condition is detected, as 
> this could cause previously-functioning, benignly-misconfigured Connect 
> workers to fail to start after an upgrade.
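
As a rough illustration of improvement #2 (not the actual patch), the worker
could flag the problem at startup; treating a literal "${" in the raw value as
config provider syntax is an assumption of this sketch:

{code:java}
// Sketch of improvement #2 above, not the actual fix. Assumes the raw
// (untransformed) worker config map and an SLF4J logger are passed in.
static void warnIfPluginPathUsesConfigProvider(java.util.Map<String, String> rawWorkerProps,
                                               org.slf4j.Logger log) {
    String rawPluginPath = rawWorkerProps.get("plugin.path");
    if (rawPluginPath != null && rawPluginPath.contains("${")) {
        log.warn("plugin.path value '{}' appears to reference a config provider, but "
            + "config providers cannot be applied to plugin.path; the raw value will "
            + "be used as-is for plugin path scanning", rawPluginPath);
    }
}
{code}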



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: KAFKA-10145

2020-06-10 Thread Paul Whalen
Perhaps I’m misunderstanding, but this looks like the cogroup feature: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup.
 Do you think that covers your use case?

Paul

> On Jun 10, 2020, at 10:13 PM, lqjacklee  wrote:
> 
> Dear team,
> 
>   I have created the JIRA https://issues.apache.org/jira/browse/KAFKA-10145.
> I want to enhance the join feature to support multiple joins/aggregations
> on streams. If anyone is interested or has questions or ideas, please let
> me know. Thanks.


[jira] [Resolved] (KAFKA-6942) Connect connectors api doesn't show versions of connectors

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-6942.
--
Resolution: Invalid

I'm going to close this as INVALID because the versions are available in the 
API, as noted above.
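
For reference, this presumably refers to the Connect REST API's connector
plugins listing, which already includes a version field; the output below is
illustrative, not taken from the reporter's cluster:

{noformat}
$ curl -s http://localhost:8083/connector-plugins
[{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"2.5.0"},
 ...]
{noformat}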

> Connect connectors api doesn't show versions of connectors
> --
>
> Key: KAFKA-6942
> URL: https://issues.apache.org/jira/browse/KAFKA-6942
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Antony Stubbs
>Priority: Minor
>  Labels: needs-kip
>
> Would be very useful to have the connector list API response also return the 
> version of the installed connectors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10115) Incorporate errors.tolerance with the Errant Record Reporter

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10115.
---
  Reviewer: Randall Hauch
Resolution: Fixed

Accidentally merged to `2.6` rather than `trunk`, then cherry-picked to `trunk`.

> Incorporate errors.tolerance with the Errant Record Reporter
> 
>
> Key: KAFKA-10115
> URL: https://issues.apache.org/jira/browse/KAFKA-10115
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Aakash Shah
>Assignee: Aakash Shah
>Priority: Major
> Fix For: 2.6.0
>
>
> The errors.tolerance config is currently not honored when using the Errant 
> Record Reporter. If errors.tolerance is none, then the task should fail after 
> sending the errant record to the DLQ in Kafka.
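
A minimal sketch of the intended behavior (hypothetical wiring, not the merged
patch):

{code:java}
import java.util.concurrent.Future;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.runtime.errors.ToleranceType;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;

// Sketch only: fail the task after the errant record reporter has written the
// record to the DLQ when errors.tolerance=none.
void reportAndMaybeFail(ErrantRecordReporter reporter, SinkRecord record,
                        Throwable error, ToleranceType tolerance) throws Exception {
    Future<Void> dlqWrite = reporter.report(record, error);  // async DLQ write
    if (tolerance == ToleranceType.NONE) {
        dlqWrite.get();  // make sure the record reached the DLQ before failing
        throw new ConnectException("errors.tolerance=none: failing task after report", error);
    }
}
{code}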



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9066) Kafka Connect JMX : source & sink task metrics missing for tasks in failed state

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9066.
--
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk`, and backported to `2.6` (for upcoming 2.6.0). I'll file a 
separate issue to backport this to `2.5` (since we're in-progress on releasing 
2.5.1) and `2.4`.

> Kafka Connect JMX : source & sink task metrics missing for tasks in failed 
> state
> 
>
> Key: KAFKA-9066
> URL: https://issues.apache.org/jira/browse/KAFKA-9066
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.1.1
>Reporter: Mikołaj Stefaniak
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
>
> h2. Overview
> Kafka Connect exposes various metrics via JMX. Those metrics can be exported 
> i.e. by _Prometheus JMX Exporter_ for further processing.
> One of the crucial attributes is the connector's *task status*.
> According to the official Kafka docs, the status is available as the +status+ 
> attribute of the following MBean:
> {quote}kafka.connect:type=connector-task-metrics,connector="\{connector}",task="\{task}"
> status - The status of the connector task. One of 'unassigned', 'running', 
> 'paused', 'failed', or 'destroyed'.
> {quote}
> h2. Issue
> Generally, +connector-task-metrics+ are exposed properly for tasks in +running+ 
> status, but they are not exposed at all if a task is +failed+.
> A failed task *appears* properly, with failed status, when queried via the *REST API*:
>  
> {code:java}
> $ curl -X GET -u 'user:pass' 
> http://kafka-connect.mydomain.com/connectors/customerconnector/status
> {"name":"customerconnector","connector":{"state":"RUNNING","worker_id":"kafka-connect.mydomain.com:8080"},"tasks":[{"id":0,"state":"FAILED","worker_id":"kafka-connect.mydomain.com:8080","trace":"org.apache.kafka.connect.errors.ConnectException:
>  Received DML 'DELETE FROM mysql.rds_sysinfo .."}],"type":"source"}
> $ {code}
>  
> A failed task *doesn't appear* as a bean of the +connector-task-metrics+ type 
> when queried via *JMX*:
>  
> {code:java}
> $ echo "beans -d kafka.connect" | java -jar 
> target/jmxterm-1.1.0-SNAPSHOT-uber.jar -l localhost:8081 -n -v silent | grep 
> connector=customerconnector
> kafka.connect:connector=customerconnector,task=0,type=task-error-metrics
> kafka.connect:connector=customerconnector,type=connector-metrics
> $
> {code}
> h2. Expected result
> It is expected that a bean of the +connector-task-metrics+ type will also 
> appear for tasks that have failed.
> Below is an example of how beans are properly registered for tasks in the 
> RUNNING state:
>  
> {code:java}
> $ echo "get -b 
> kafka.connect:connector=sinkConsentSubscription-1000,task=0,type=connector-task-metrics
>  status" | java -jar target/jmxterm-1.1.0-SNAPSHOT-uber.jar -l 
> localhost:8081 -n -v silent
> status = running;
> $
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10146) Backport KAFKA-9066 to 2.5 and 2.4 branches

2020-06-10 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10146:
-

 Summary: Backport KAFKA-9066 to 2.5 and 2.4 branches
 Key: KAFKA-10146
 URL: https://issues.apache.org/jira/browse/KAFKA-10146
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.4.2, 2.5.2


KAFKA-9066 was merged on the same day we were trying to release 2.5.1, so this 
was not backported at the time. However, once 2.5.1 is out the door, the 
`775f0d484` commit on `trunk` should be backported to the `2.5` and `2.4` 
branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.6-jdk8 #36

2020-06-10 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk14 #208

2020-06-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #1559

2020-06-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10014; Always try to close all channels in Selector#close (#8685)


--
[...truncated 1.89 MB...]

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl PASSED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclDescribe STARTED

kafka.api.DescribeAuthorizedOperationsTest > 
testConsumerGroupAuthorizedOperations PASSED

kafka.api.TransactionsTest > testBumpTransactionalEpoch STARTED

kafka.api.TransactionsTest > testBumpTransactionalEpoch PASSED

kafka.api.TransactionsTest > testSendOffsetsWithGroupMetadata STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclDescribe PASSED

kafka.api.SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed STARTED

kafka.api.TransactionsTest > testSendOffsetsWithGroupMetadata PASSED

kafka.api.TransactionsTest > testBasicTransactions STARTED

kafka.api.TransactionsTest > testBasicTransactions PASSED

kafka.api.TransactionsTest > testSendOffsetsWithGroupId STARTED

kafka.api.TransactionsTest > testSendOffsetsWithGroupId PASSED

kafka.api.TransactionsTest > testFencingOnSendOffsets STARTED

kafka.api.TransactionsTest > testFencingOnSendOffsets PASSED

kafka.api.TransactionsTest > testFencingOnAddPartitions STARTED

kafka.api.SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed PASSED

kafka.api.SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig STARTED

kafka.api.TransactionsTest > testFencingOnAddPartitions PASSED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration STARTED

kafka.api.TransactionsTest > testFencingOnTransactionExpiration PASSED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction STARTED

kafka.api.TransactionsTest > testDelayedFetchIncludesAbortedTransaction PASSED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction 
STARTED

kafka.api.TransactionsTest > testOffsetMetadataInSendOffsetsToTransaction PASSED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions STARTED

kafka.api.SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig PASSED

kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls STARTED

kafka.api.TransactionsTest > testConsecutivelyRunInitTransactions PASSED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
STARTED

kafka.api.TransactionsTest > testReadCommittedConsumerShouldNotSeeUndecidedData 
PASSED

kafka.api.TransactionsTest > testFencingOnSend STARTED

kafka.api.TransactionsTest > testFencingOnSend PASSED

kafka.api.TransactionsTest > testFencingOnCommit STARTED

kafka.api.TransactionsTest > testFencingOnCommit PASSED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader STARTED

kafka.api.SaslSslAdminIntegrationTest > testAttemptToCreateInvalidAcls PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclAuthorizationDenied STARTED

kafka.api.TransactionsTest > testMultipleMarkersOneLeader PASSED

kafka.api.TransactionsTest > testCommitTransactionTimeout STARTED

kafka.api.TransactionsTest > testCommitTransactionTimeout PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclAuthorizationDenied PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideLowerQuota PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations2 STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.SaslSslAdminIntegrationTest > testAclOperations2 PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserClientIdQuotaTest > testThrottledRequest STARTED

kafka.api.UserClientIdQuotaTest > testThrottledRequest PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.api.SaslSslAdminIntegrationTest > testAclDelete PASSED

kafka.api.SaslSslAdminIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.SaslSslAdminIntegrationTest > testCreateDeleteTopics 

[jira] [Resolved] (KAFKA-9653) Duplicate tasks on workers after rebalance

2020-06-10 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9653.
---
Resolution: Duplicate

> Duplicate tasks on workers after rebalance
> --
>
> Key: KAFKA-9653
> URL: https://issues.apache.org/jira/browse/KAFKA-9653
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0, 2.3.2
>Reporter: Agam Brahma
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Verified the following:
>  * The observed issue goes away when `connect.protocol` is switched from 
> `compatible` to `eager` (see the worker config snippet below)
>  * Debug logs show `WorkerSourceTask` on two different nodes referencing the 
> same task-id
>  * Debug logs show the node referring to the task as part of both 
> `Configured assignments` and `Lost assignments`
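
For anyone hitting this before picking up the fix, the workaround in the first
bullet is a one-line worker config change:

{noformat}
# Connect worker config: fall back from incremental cooperative rebalancing
# ("compatible") to eager rebalancing until the fixed version is deployed.
connect.protocol=eager
{noformat}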



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


KAFKA-10145

2020-06-10 Thread lqjacklee
Dear team,

   I have created the JIRA https://issues.apache.org/jira/browse/KAFKA-10145.
I want to enhance the join feature to support multiple joins/aggregations
on streams. If anyone is interested or has questions or ideas, please let
me know. Thanks.


[jira] [Created] (KAFKA-10145) Enhance to support the multiple join operation

2020-06-10 Thread lqjacklee (Jira)
lqjacklee created KAFKA-10145:
-

 Summary: Enhance to support the multiple join operation
 Key: KAFKA-10145
 URL: https://issues.apache.org/jira/browse/KAFKA-10145
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: lqjacklee


Currently Kafka Streams supports joining two streams, where the join 
relationship is explicit. However, in some cases the data comes from multiple 
sources/streams, and the relationship between those sources is not fixed.

For example:

Suppose an end user visits a website or clicks an item he or she is interested 
in. Each time an event occurs, the system posts it to a Kafka topic, and we 
calculate results based on the click stream and the view stream.

1. A Click Event comes from the click stream.
2. A View Event comes from the view stream.

In the end we only care about the ClickView aggregation domain object. 
When a click event occurs, we update the click side of the aggregation 
object; when a view event occurs, we update the view side.

The ClickView aggregation object will be persisted, and it is only updated by 
the click events and the view events. Once both have been applied, its 
complete() method will return true.
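
For reference, this use case looks close to what the KIP-150 cogroup API
already provides. A minimal sketch, assuming hypothetical ClickEvent,
ViewEvent, and ClickView classes whose withClick/withView methods merge one
event into the aggregate:

{code:java}
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

StreamsBuilder builder = new StreamsBuilder();
KGroupedStream<String, ClickEvent> clicks =
    builder.<String, ClickEvent>stream("clicks").groupByKey();
KGroupedStream<String, ViewEvent> views =
    builder.<String, ViewEvent>stream("views").groupByKey();

KTable<String, ClickView> clickViews = clicks
    .cogroup((String key, ClickEvent click, ClickView agg) -> agg.withClick(click))
    .cogroup(views, (String key, ViewEvent view, ClickView agg) -> agg.withView(view))
    .aggregate(ClickView::new,  // empty aggregate per key; store below is persistent
        Materialized.<String, ClickView, KeyValueStore<Bytes, byte[]>>as("clickview-store"));
{code}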





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.5-jdk8 #150

2020-06-10 Thread Apache Jenkins Server
See 


Changes:

[konstantine] KAFKA-9969: Exclude ConnectorClientConfigRequest from class 
loading


--
[...truncated 2.93 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 

[jira] [Created] (KAFKA-10144) Corrupted standby tasks are not always cleaned up

2020-06-10 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-10144:
---

 Summary: Corrupted standby tasks are not always cleaned up
 Key: KAFKA-10144
 URL: https://issues.apache.org/jira/browse/KAFKA-10144
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman
Assignee: Sophie Blee-Goldman
 Fix For: 2.6.0


Thread death on the 2.6-eos-beta soak was due to re-registration of a standby 
task changelog that was already registered. The root cause was that the task 
had been marked corrupted, but `commit` threw a TaskMigratedException before we 
could get to calling TaskManager#handleCorruption and properly clean up the 
task.

For corrupted active tasks this is not a problem, since #handleLostAll will 
then finish the cleanup. But we intentionally don't clear standby tasks on 
TaskMigratedException, leaving the task corrupted and partially registered.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Can't disable kafka broker on springboot application

2020-06-10 Thread ignacio gioya
Hi,
I have an application that uses a Kafka broker, but when I run JUnit tests,
Spring Boot tries to start the Kafka listeners, so I am forced to run a Kafka
broker unnecessarily.

My question is: how can I disable Kafka when the tests are executed?

I have tried @TypeExcludeFilters(KafkaListenersTypeExcludeFilter.class)
on each test class, but it does not work.

Sorry for my English, it is not my native language, hehe.

Thank you!
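
One common approach (a sketch, not the only option) is to make listener
auto-startup property-driven so tests can switch it off; the property name
kafka.listeners.enabled is an arbitrary choice for this example:

{code:java}
// Production listener: auto-startup is controlled by a property.
@KafkaListener(topics = "orders", autoStartup = "${kafka.listeners.enabled:true}")
public void onMessage(String message) {
    // handle the message
}

// In tests, disable the listeners so no broker is needed:
@SpringBootTest(properties = "kafka.listeners.enabled=false")
class MyServiceTest { /* ... */ }
{code}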


Re: [DISCUSS] KIP-595: A Raft Protocol for the Metadata Quorum

2020-06-10 Thread Boyang Chen
Hey all,

follow-up on the previous email, we made some more updates:

1. The Alter/DescribeQuorum RPCs are also restructured to use multi-raft.

2. We added observer status to the DescribeQuorumResponse, as it is
low-hanging fruit that is very useful for user debugging and reassignment.

3. The FindQuorum RPC is replaced with a DiscoverBrokers RPC, which is purely
in charge of discovering broker connections in a gossip manner. Quorum
leader discovery is piggy-backed on the Metadata RPC for the topic partition
leader, which in our case is the single metadata partition in version one.

Let me know if you have any questions.

Boyang
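
(As an aside, a rough sketch of the staggered election back-off described in
point 3 of the quoted update below; every name and constant here is
illustrative, not from the KIP:)

{code:java}
// Illustrative only. Voters earlier in the leader-provided list (sorted by
// replicated offset, descending) wait less before becoming candidates.
int rank = sortedVoters.indexOf(localVoterId);     // 0 = most caught-up voter
long backoffMs = (rank == 0)
    ? 0L                                           // top voter campaigns immediately
    : baseBackoffMs * (1L << Math.min(rank, 10))   // exponential in rank
        + random.nextInt(jitterMs);                // jitter to break ties
scheduler.schedule(this::becomeCandidate, backoffMs, TimeUnit.MILLISECONDS);
{code}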

On Tue, Jun 9, 2020 at 11:01 PM Boyang Chen 
wrote:

> Hey all,
>
> Thanks for the great discussions so far. I'm posting some KIP updates from
> our working group discussion:
>
> 1. We will be changing the core RPCs from single-raft API to multi-raft.
> This means all protocols will be "batch" in the first version, but the KIP
> itself only illustrates the design for a single metadata topic partition.
> The reason is to "keep the door open" for future extensions of this piece
> of the module, such as a sharded controller or general quorum-based topic
> replication beyond the current Kafka replication protocol.
>
> 2. We will piggy-back on the current Kafka Fetch API instead of inventing
> a new FetchQuorumRecords RPC. The motivation is about the same as #1 as
> well as making the integration work easier, instead of letting two similar
> RPCs diverge.
>
> 3. In the EndQuorumEpoch protocol, instead of only sending the request to
> the most caught-up voter, we shall broadcast the information to all voters,
> with a sorted voter list in descending order of their corresponding
> replicated offset. In this way, the top voter will become a candidate
> immediately, while the other voters shall wait for an exponential back-off
> to trigger elections, which helps ensure the top voter gets elected, and
> the election eventually happens when the top voter is not responsive.
>
> Please see the updated KIP and post any questions or concerns on the
> mailing thread.
>
> Boyang
>
> On Fri, May 8, 2020 at 5:26 PM Jun Rao  wrote:
>
>> Hi, Guozhang and Jason,
>>
>> Thanks for the reply. A couple of more replies.
>>
>> 102. Still not sure about this. How is the tombstone issue addressed in
>> the
>> non-voter and the observer.  They can die at any point and restart at an
>> arbitrary later time, and the advancing of the firstDirty offset and the
>> removal of the tombstone can happen independently.
>>
>> 106. I agree that it would be less confusing if we used "epoch" instead of
>> "leader epoch" consistently.
>>
>> Jun
>>
>> On Thu, May 7, 2020 at 4:04 PM Guozhang Wang  wrote:
>>
>> > Thanks Jun. Further replies are in-lined.
>> >
>> > On Mon, May 4, 2020 at 11:58 AM Jun Rao  wrote:
>> >
>> > > Hi, Guozhang,
>> > >
>> > > Thanks for the reply. A few more replies inlined below.
>> > >
>> > > On Sun, May 3, 2020 at 6:33 PM Guozhang Wang 
>> wrote:
>> > >
>> > > > Hello Jun,
>> > > >
>> > > > Thanks for your comments! I'm replying inline below:
>> > > >
>> > > > On Fri, May 1, 2020 at 12:36 PM Jun Rao  wrote:
>> > > >
>> > > > > 101. Bootstrapping related issues.
>> > > > > 101.1 Currently, we support auto broker id generation. Is this
>> > > supported
>> > > > > for bootstrap brokers?
>> > > > >
>> > > >
>> > > > The vote ids would just be the broker ids. "bootstrap.servers"
>> would be
>> > > > similar to what client configs have today, where "quorum.voters"
>> would
>> > be
>> > > > pre-defined config values.
>> > > >
>> > > >
>> > > My question was on the auto generated broker id. Currently, the broker
>> > can
>> > > choose to have its broker Id auto generated. The generation is done
>> > through
>> > > ZK to guarantee uniqueness. Without ZK, it's not clear how the broker
>> id
>> > is
>> > > auto generated. "quorum.voters" also can't be set statically if broker
>> > ids
>> > > are auto generated.
>> > >
>> > > Jason has explained some ideas that we've discussed so far, the
>> reason we
>> > intentional did not include them so far is that we feel it is out-side
>> the
>> > scope of KIP-595. Under the umbrella of KIP-500 we should definitely
>> > address them though.
>> >
>> > On the high-level, our belief is that "joining a quorum" and "joining
>> (or
>> > more specifically, registering brokers in) the cluster" would be
>> > de-coupled a bit, where the former should be completed before we do the
>> > latter. More specifically, assuming the quorum is already up and
>> running,
>> > after the newly started broker found the leader of the quorum it can
>> send a
>> > specific RegisterBroker request including its listener / protocol / etc,
>> > and upon handling it the leader can send back the uniquely generated
>> broker
>> > id to the new broker, while also executing the "startNewBroker"
>> callback as
>> > the controller.
>> >
>> >
>> > >
>> > > > > 102. Log compaction. One weak spot of 

[jira] [Resolved] (KAFKA-7833) StreamsBuilder should throw an exception if addGlobalStore and addStateStore is called for the same store builder

2020-06-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7833.

Fix Version/s: 2.6.0
   Resolution: Fixed

> StreamsBuilder should throw an exception if addGlobalStore and addStateStore 
> is called for the same store builder
> -
>
> Key: KAFKA-7833
> URL: https://issues.apache.org/jira/browse/KAFKA-7833
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Sam Lendle
>Assignee: Rob Meng
>Priority: Major
>  Labels: newbie
> Fix For: 2.6.0
>
>
> {code:java}
> StoreBuilder<KeyValueStore<Object, Object>> storeBuilder =
>     Stores.keyValueStoreBuilder(Stores.inMemoryKeyValueStore("global-store"), null, null);
> builder.addGlobalStore(
>     storeBuilder,
>     "global-topic",
>     Consumed.with(null, null),
>     new KTableSource(storeBuilder.name())
> );
> builder.addStateStore(storeBuilder);
> builder.build();
> {code}
>  
> Does not throw an exception.
>  
> cc [~mjsax]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.6-jdk8 #35

2020-06-10 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-9991: Fix flaky unit tests (#8843)


--
[...truncated 3.13 MB...]
org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED


Build failed in Jenkins: kafka-trunk-jdk14 #207

2020-06-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10014; Always try to close all channels in Selector#close (#8685)


--
[...truncated 6.31 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

[jira] [Created] (KAFKA-10143) Can no longer change replication throttle with reassignment tool

2020-06-10 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10143:
---

 Summary: Can no longer change replication throttle with 
reassignment tool
 Key: KAFKA-10143
 URL: https://issues.apache.org/jira/browse/KAFKA-10143
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
 Fix For: 2.6.0


Previously we could use --execute with the --throttle option in order to change 
the quota of an active reassignment. We seem to have lost this with KIP-455. 
The code has the following comment:
```
val reassignPartitionsInProgress = zkClient.reassignPartitionsInProgress()
if (reassignPartitionsInProgress) {
  // Note: older versions of this tool would modify the broker quotas here (but not
  // topic quotas, for some reason).  This behavior wasn't documented in the --execute
  // command line help.  Since it might interfere with other ongoing reassignments,
  // this behavior was dropped as part of the KIP-455 changes.
  throw new TerseReassignmentFailureException(cannotExecuteBecauseOfExistingMessage)
}
```
Seems like it was a mistake to change this because it breaks compatibility. We 
probably have to revert.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-623: Add "internal-topics" option to streams application reset tool

2020-06-10 Thread Joel Wee
Hi all

I would like to start the discussion for KIP-623 which proposes adding an 
“internal-topics” option to the streams application reset tool:

https://cwiki.apache.org/confluence/x/YQt4CQ

Thanks

Joel


[GitHub] [kafka-site] mjsax commented on pull request #267: Updating upgrade docs for KIP-535 and KIP-562

2020-06-10 Thread GitBox


mjsax commented on pull request #267:
URL: https://github.com/apache/kafka-site/pull/267#issuecomment-642270882


   Thanks @brary!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] mjsax merged pull request #267: Updating upgrade docs for KIP-535 and KIP-562

2020-06-10 Thread GitBox


mjsax merged pull request #267:
URL: https://github.com/apache/kafka-site/pull/267


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-9182) Flaky Test org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled

2020-06-10 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9182.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

Resolved as part of https://issues.apache.org/jira/browse/KAFKA-9991

> Flaky Test 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled
> -
>
> Key: KAFKA-9182
> URL: https://issues.apache.org/jira/browse/KAFKA-9182
> Project: Kafka
>  Issue Type: Test
>  Components: streams
>Reporter: Bill Bejeck
>Priority: Major
>  Labels: flaky-test, tests
> Fix For: 2.6.0
>
>
> Failed in 
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/26571/testReport/junit/org.apache.kafka.streams.integration/KTableSourceTopicRestartIntegrationTest/shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled/]
>  
> {noformat}
> Error Messagejava.lang.AssertionError: Condition not met within timeout 
> 3. Table did not read all valuesStacktracejava.lang.AssertionError: 
> Condition not met within timeout 3. Table did not read all values
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:24)
>   at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:369)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:368)
>   at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:356)
>   at 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.assertNumberValuesRead(KTableSourceTopicRestartIntegrationTest.java:187)
>   at 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled(KTableSourceTopicRestartIntegrationTest.java:141)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> 

[jira] [Resolved] (KAFKA-9991) Flaky Test KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosAlphaEnabled

2020-06-10 Thread Guozhang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-9991.
--
Fix Version/s: 2.6.0
   Resolution: Fixed

> Flaky Test 
> KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosAlphaEnabled
> -
>
> Key: KAFKA-9991
> URL: https://issues.apache.org/jira/browse/KAFKA-9991
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Guozhang Wang
>Priority: Major
>  Labels: flaky-test, unit-test
> Fix For: 2.6.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/6280/testReport/junit/org.apache.kafka.streams.integration/KTableSourceTopicRestartIntegrationTest/shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosAlphaEnabled/]
>  
> h3. Stacktrace
> java.lang.AssertionError: Condition not met within timeout 3. Table did 
> not read all values at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26) at 
> org.apache.kafka.test.TestUtils.lambda$waitForCondition$5(TestUtils.java:381) 
> at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:429)
>  at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:397)
>  at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:378) at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:368) at 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.assertNumberValuesRead(KTableSourceTopicRestartIntegrationTest.java:205)
>  at 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled(KTableSourceTopicRestartIntegrationTest.java:159)
>  at 
> org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosAlphaEnabled(KTableSourceTopicRestartIntegrationTest.java:143)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10142) Worker can be disabled by blocks in connector version, taskConfigs, or taskClass methods

2020-06-10 Thread Chris Egerton (Jira)
Chris Egerton created KAFKA-10142:
-

 Summary: Worker can be disabled by blocks in connector version, 
taskConfigs, or taskClass methods
 Key: KAFKA-10142
 URL: https://issues.apache.org/jira/browse/KAFKA-10142
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Chris Egerton


https://issues.apache.org/jira/browse/KAFKA-9374 details how workers could be 
disabled by blocks in any connector method, such as {{start}}, {{stop}}, 
{{validate}}, etc. The [fix PR|https://github.com/apache/kafka/pull/8069] for 
that issue is mostly successful but does not include coverage for blocks in 
{{Connector::version}}, {{Connector::taskConfigs}}, or 
{{Connector::taskClass}}. It is still possible that a worker will be disabled 
if a connector hangs in any of these methods; however, the risk of this is 
lower than the risk of hanging in, for example, {{start}} or {{stop}}.
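
For illustration, a sketch of the general mitigation (not the actual PR):
bound a potentially blocking Connector call with a timeout so a hung connector
cannot stall the worker thread; the timeout value and fallback are illustrative:

{code:java}
import java.util.concurrent.*;

// Sketch only -- not the actual PR. Bound a potentially blocking
// Connector::version call so a hung connector cannot stall the worker.
String safeVersion(org.apache.kafka.connect.connector.Connector connector) throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<String> version = executor.submit(connector::version);
    try {
        return version.get(5, TimeUnit.SECONDS);  // timeout value is illustrative
    } catch (TimeoutException e) {
        version.cancel(true);  // interrupt the stuck call
        return "unknown";      // degrade gracefully instead of hanging
    } finally {
        executor.shutdownNow();
    }
}
{code}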



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10133) Cannot compress messages in destination cluster with MM2

2020-06-10 Thread Steve Jacobs (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Jacobs resolved KAFKA-10133.
--
Resolution: Not A Problem

> Cannot compress messages in destination cluster with MM2
> 
>
> Key: KAFKA-10133
> URL: https://issues.apache.org/jira/browse/KAFKA-10133
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.4.0, 2.5.0, 2.4.1
> Environment: kafka 2.5.0 deployed via the strimzi operator 0.18
>Reporter: Steve Jacobs
>Priority: Minor
>
> When configuring MirrorMaker 2 using Kafka Connect, it does not seem possible 
> to configure things such that messages are compressed in the destination 
> cluster. kafka-dump-log shows that batching is occurring, but no compression. 
> If this is possible, then this is a documentation bug, because I can find no 
> documentation on how to do this.
> baseOffset: 4208 lastOffset: 4492 count: 285 baseSequence: -1 lastSequence: 
> -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: 
> false isControl: false position: 239371 CreateTime: 1591745894859 size: 16362 
> magic: 2 compresscodec: NONE crc: 1811507259 isvalid: true
>  
>  
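
Since this was resolved as Not A Problem, one plausible route (an assumption,
not confirmed in this thread) is Connect's per-connector client overrides from
KIP-458:

{noformat}
# Connect worker config -- required before per-connector overrides are allowed:
connector.client.config.override.policy=All

# MirrorSourceConnector config -- compress batches written to the target cluster:
producer.override.compression.type=gzip
{noformat}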



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-06-10 Thread Boyang Chen
That sounds great, thanks Jun!

On Wed, Jun 10, 2020 at 11:44 AM Jun Rao  wrote:

> Hi, Boyang,
>
> Thanks for the reply.
>
> For the metric, we just need to define a metric of meter type and of name
> NumRequestsForwardingToControllerPerSec. Meter exposes a few standard JMX
> attributes including an accumulated total and rates (
>
> https://metrics.dropwizard.io/2.2.0/apidocs/com/yammer/metrics/core/Meter.html
> ).
>
> Jun
>
> On Wed, Jun 10, 2020 at 10:38 AM Boyang Chen 
> wrote:
>
> > Thanks Jun for the suggestions! I have addressed suggestion and
> simplified
> > the metrics part.
> >
> > On Tue, Jun 9, 2020 at 5:46 PM Jun Rao  wrote:
> >
> > > Hi, Boyang,
> > >
> > > Thanks for the KIP. Just a few comments on the metrics.
> > >
> > > 1. It would be useful to document the full JMX metric names (package,
> > type,
> > > etc) of the new metrics. Also, for rates on the server side, we
> > > typically use Yammer Meter.
> > >
> > >  Sounds good.
> >
> > 2. For num-messages-redirected-rate, would num-requests-redirected-rate
> be
> > > better?
> > >
> > > Actually for the scope of this KIP, we are no longer needing to have a
> > separate tracking
> > of forwarded request rate, because the Envelope RPC is dropped.
> >
> >
> > > 3. num-client-forwarding-to-controller-rate: Is that based on clientId,
> > > client IP, client request version or sth else? How do you plan to
> > implement
> > > that since it seems to require tracking the current unique client set
> > > somehow. An alternative approach is to maintain a
> > > num-requests-redirected-rate metric with a client tag.
> > >
> > The clientId tag approach makes sense, will add to the KIP.
> >
> > Jun
> > >
> > >
> > >
> > > On Mon, Jun 8, 2020 at 9:36 AM Boyang Chen  >
> > > wrote:
> > >
> > > > Hey there,
> > > >
> > > > If no more question is raised, I will go ahead and start the vote
> > > shortly.
> > > >
> > > > On Thu, Jun 4, 2020 at 12:39 PM Boyang Chen <
> > reluctanthero...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hey there,
> > > > >
> > > > > bumping this thread for any further KIP-590 discussion, since it's
> > been
> > > > > quiet for a while.
> > > > >
> > > > > Boyang
> > > > >
> > > > > On Thu, May 21, 2020 at 10:31 AM Boyang Chen <
> > > reluctanthero...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > >> Thanks David, I agree the wording here is not clear, and the
> fellow
> > > > >> broker should just send a new CreateTopicRequest in this case.
> > > > >>
> > > > >> In the meantime, we had some offline discussion about the Envelope
> > API
> > > > as
> > > > >> well. Although it provides certain privileges like data embedding
> > and
> > > > >> principal embedding, it creates a security hole by letting a
> > malicious
> > > > user
> > > > >> impersonate any forwarding broker, thus pretending to be any admin
> > > user.
> > > > >> Passing the principal around also increases the vulnerability,
> > > compared
> > > > >> with other standard ways such as passing a verified token, but it
> is
> > > > >> unfortunately not fully supported with Kafka security.
> > > > >>
> > > > >> So for the security concerns, we are abandoning the Envelope
> > approach
> > > > and
> > > > >> fallback to just forward the raw admin requests. The
> authentication
> > > will
> > > > >> happen on the receiving broker side instead of on the controller,
> so
> > > > that
> > > > >> we are able to strip off the principal fields and only include
> > the
> > > > >> principal in header as optional field for audit logging purpose.
> > > > >> Furthermore, we shall propose adding a separate endpoint for
> > > > >> broker-controller communication which should be recommended to
> > enable
> > > > >> secure connections so that a malicious client could not pretend to
> > be
> > > a
> > > > >> broker and perform impersonation.
> > > > >>
> > > > >> Let me know your thoughts.
> > > > >>
> > > > >> Best,
> > > > >> Boyang
> > > > >>
> > > > >> On Tue, May 19, 2020 at 12:17 AM David Jacot  >
> > > > wrote:
> > > > >>
> > > > >>> Hi Boyang,
> > > > >>>
> > > > >>> I've got another question regarding the auto topic creation case.
> > The
> > > > KIP
> > > > >>> says: "Currently the target broker shall just utilize its own ZK
> > > client
> > > > >>> to
> > > > >>> create
> > > > >>> internal topics, which is disallowed in the bridge release. For
> > above
> > > > >>> scenarios,
> > > > >>> non-controller broker shall just forward a CreateTopicRequest to
> > the
> > > > >>> controller
> > > > >>> instead and let controller take care of the rest, while waiting
> for
> > > the
> > > > >>> response
> > > > >>> in the meantime." There will be no request to forward in this
> case,
> > > > >>> right?
> > > > >>> Instead,
> > > > >>> a CreateTopicsRequest is created and sent to the controller node.
> > > > >>>
> > > > >>> When the CreateTopicsRequest is sent as a side effect of the
> > > > >>> MetadataRequest,
> > > > >>> it would be good to know the principal and the clientId in 

[jira] [Resolved] (KAFKA-10119) StreamsResetter fails with TimeoutException for older Brokers

2020-06-10 Thread Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-10119.
-
Resolution: Duplicate

Closing this as a duplicate; the issue is tracked by KAFKA-10123.

> StreamsResetter fails with TimeoutException for older Brokers
> -
>
> Key: KAFKA-10119
> URL: https://issues.apache.org/jira/browse/KAFKA-10119
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.6.0, 2.5.1
>Reporter: Bruno Cadonna
>Priority: Blocker
> Fix For: 2.6.0, 2.5.1
>
>
> Since somewhere after commit 2d37c8c84 in Apache Kafka, the streams resetter 
> consistently fails with brokers of version confluent-5.0.1. 
> The following exception is thrown:
> {code:java}
> org.apache.kafka.common.errors.TimeoutException: Timeout of 60000ms expired 
> before the position for partition test-0 could be determined
> {code} 
> which comes from this line within the {{StreamsResetter}} class:
> {code:java}
> System.out.println("Topic: " + p.topic() + " Partition: " + p.partition() + " 
> Offset: " + client.position(p));
> {code}
> The exception is not thrown with brokers of version confluent-5.5.0. I have 
> not tried brokers of other versions.
> The bug can be reproduced with the following steps:
> 1. check out commit dc8f8ffd2ad from Apache Kafka
> 2. build with {{./gradlew clean -PscalaVersion=2.13 jar}}
> 3. start a confluent-5.0.1 broker. 
> 4. create a topic with {{bin/kafka-topics.sh --create --bootstrap-server 
> localhost:9092 --replication-factor 1 --partitions 1 --topic test}}
> 5. start streams resetter with {{bin/kafka-streams-application-reset.sh 
> --application-id test --bootstrap-servers localhost:9092 --input-topics test}}
> Streams resetter should output:
> {code}
> ERROR: org.apache.kafka.common.errors.TimeoutException: Timeout of 60000ms 
> expired before the position for partition test-0 could be determined
> org.apache.kafka.common.errors.TimeoutException: Timeout of 60000ms expired 
> before the position for partition test-0 could be determined
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10141) Add more detail to log segment deletion message

2020-06-10 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10141:
---

 Summary: Add more detail to log segment deletion message
 Key: KAFKA-10141
 URL: https://issues.apache.org/jira/browse/KAFKA-10141
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Sanjana Kaundinya


It would be helpful to include as much information as possible when we delete 
segments. For example, if the retention time was breached, what was the last 
timestamp that was used? Was it the last modified time or was it the max 
timestamp?
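
A sketch of what a more detailed message could look like (the names are placeholders for whatever the deletion path has in scope, not the final wording):

{code:java}
// Illustrative only: say which policy fired and which timestamp it was judged against
log.info("Deleting segment {} due to retention time {}ms breach based on the largest "
    + "record timestamp in the segment ({}ms)", segment, retentionMs, largestTimestamp);
{code}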



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10014) Always try to close all channels in Selector#close

2020-06-10 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10014.
-
Resolution: Fixed

> Always try to close all channels in Selector#close
> --
>
> Key: KAFKA-10014
> URL: https://issues.apache.org/jira/browse/KAFKA-10014
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> {code:java}
> public void close() {
> List<String> connections = new ArrayList<>(channels.keySet());
> try {
> for (String id : connections)
> close(id); // this line
> } finally {
> {code}
> KafkaChannel has a lot of releasable objects so we ought to try to close all 
> channels.
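
A minimal sketch of the suggested fix, keeping the existing structure but making each per-channel close best-effort (the logger name is illustrative):

{code:java}
public void close() {
    List<String> connections = new ArrayList<>(channels.keySet());
    try {
        for (String id : connections) {
            try {
                close(id);
            } catch (Throwable t) {
                // Keep going: one misbehaving channel must not leak the others
                log.error("Failed to close channel {}", id, t);
            }
        }
    } finally {
        // release the nioSelector, sensors, etc. as before
    }
}
{code}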



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-597: MirrorMaker2 internal topics Formatters

2020-06-10 Thread David Jacot
Hi Mickael,

Thanks for the KIP. That looks very useful.

I have few small comments/suggestions:

1. I was about to make a suggestion similar to Konstantine's regarding
requiring old formatters to be recompiled. While the formatters are not
directly part of the public API, I think we can argue that, as they
are accepted by the console consumer, they are somehow part of it. It is a
bit of a gray zone. With this in mind, I lean towards supporting both
interfaces by instantiating one or the other instead of making the old one
implement the new one. It is nicer for our users and requires a similar
effort to implement overall (see the sketch after these points).

2. Should the interface implement Configurable and Closeable with default
empty implementations? That would make it similar to our other interfaces
but I am not entirely sure that works with the properties.

3. Should we directly move the existing Formatters to using the new
interface?
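
To make point 1 concrete, a rough sketch of the dual-instantiation idea; the
adapter class is hypothetical and exception handling is elided:

    Object instance = Class.forName(className).getDeclaredConstructor().newInstance();
    final org.apache.kafka.common.MessageFormatter formatter;
    if (instance instanceof org.apache.kafka.common.MessageFormatter) {
        formatter = (org.apache.kafka.common.MessageFormatter) instance;
    } else {
        // wraps the deprecated kafka.common.MessageFormatter
        formatter = new DeprecatedMessageFormatterAdapter(instance);
    }
    formatter.configure(configs);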

Regards,
David





On Wed, Jun 10, 2020 at 7:13 PM Maulin Vasavada  wrote:

> +1 (non-binding)
>
> On Wed, Jun 10, 2020 at 9:47 AM Mickael Maison 
> wrote:
>
> > Bumping this thread. Let me know if you have any questions or feedback.
> >
> > So far, we have 2 binding and 5 non-binding votes.
> >
> > Thanks
> >
> > On Tue, May 19, 2020 at 10:56 AM Manikumar 
> > wrote:
> > >
> > > +1 (binding)
> > >
> > > Thanks for the KIP.
> > >
> > > On Tue, May 19, 2020 at 2:57 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > >
> > > > Thanks Konstantine for the feedback (and vote)!
> > > >
> > > > 1) I've added example commands using the new formatters
> > > >
> > > > 2) I updated the Compatibility section to be more explicit about the
> > > > need for recompilation
> > > >
> > > > 3) Good point, updated
> > > >
> > > > On Tue, May 19, 2020 at 3:18 AM Konstantine Karantasis
> > > >  wrote:
> > > > >
> > > > > Thanks Michael.
> > > > > I think it's useful to enable specialized message formatters by
> > adding
> > > > this
> > > > > interface to the public API.
> > > > >
> > > > > You have my vote: +1 (binding)
> > > > >
> > > > > Just a few optional comments below:
> > > > >
> > > > > 1. Would you mind adding the equivalent command line example in the
> > > > places
> > > > > where you have an example output?
> > > > >
> > > > > Something equivalent to
> > > > > ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
> > --topic
> > > > > __consumer_offsets --from-beginning --formatter
> > > > >
> > > >
> >
> "kafka.coordinator.group.GroupMetadataManager\$GroupMetadataMessageFormatter"
> > > > >
> > > > > but with the equivalent formatter classes and expected topic names.
> > > > >
> > > > > 2. I have to note that breaking old formatters by requiring
> > recompilation
> > > > > could be avoided if we didn't change kafka.common.MessageFormatter
> to
> > > > > extend the new org.apache.kafka.common.MessageFormatter. We could
> > > > maintain
> > > > > both, while the old one would remain deprecated and we could
> attempt
> > to
> > > > > instantiate one or the other type when reading the config and use
> > either
> > > > of
> > > > > the two different types in the few places in ConsoleConsumer that a
> > > > > formatter is used. But I admit that for this use case, it's not
> worth
> > > > > maintaining both types. The interface wasn't public before anyways.
> > > > >
> > > > > Given that, my small request would be to rephrase in the
> > compatibility
> > > > > section to say something like:
> > > > > 'Existing MessageFormatters implementations will require no changes
> > > > beyond
> > > > > recompilation.' or similar. Because, to be precise, existing
> > formatters
> > > > > _won't_ work if they are given as a parameter to a 2.6 console
> > consumer,
> > > > > without recompilation as you mention.
> > > > >
> > > > > 3. Finally, a minor comment on skipping the use of the `public`
> > specifier
> > > > > in the interface because it's redundant.
> > > > >
> > > > > Best regards,
> > > > > Konstantine
> > > > >
> > > > > On Mon, May 18, 2020 at 3:26 PM Maulin Vasavada <
> > > > maulin.vasav...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > On Mon, May 18, 2020 at 8:49 AM Mickael Maison <
> > > > mickael.mai...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Bumping this thread as KIP freeze is approaching.
> > > > > > >
> > > > > > > It's a pretty small change and I have a PR ready:
> > > > > > > https://github.com/apache/kafka/pull/8604
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > On Mon, May 4, 2020 at 5:26 PM Ryanne Dolan <
> > ryannedo...@gmail.com>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > +1, non-binding
> > > > > > > >
> > > > > > > > On Mon, May 4, 2020, 9:24 AM Christopher Egerton <
> > > > chr...@confluent.io>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1 (non-binding)
> > > > > > > > >
> > > > > > > > > On Mon, May 4, 2020 at 5:02 AM Edoardo Comar <
> > eco...@uk.ibm.com>
> > > > > > > 

Build failed in Jenkins: kafka-2.6-jdk8 #34

2020-06-10 Thread Apache Jenkins Server
See 


Changes:

[vvcephei] KAFKA-10079: improve thread-level stickiness (#8775)


--
[...truncated 6.25 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

[jira] [Created] (KAFKA-10140) Incremental config api excludes plugin config changes

2020-06-10 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10140:
---

 Summary: Incremental config api excludes plugin config changes
 Key: KAFKA-10140
 URL: https://issues.apache.org/jira/browse/KAFKA-10140
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
 Fix For: 2.6.0


I was trying to alter the jmx metric filters using the incremental alter config 
api and hit this error:
```
java.util.NoSuchElementException: key not found: metrics.jmx.blacklist
at scala.collection.MapLike.default(MapLike.scala:235)
at scala.collection.MapLike.default$(MapLike.scala:234)
at scala.collection.AbstractMap.default(Map.scala:65)
at scala.collection.MapLike.apply(MapLike.scala:144)
at scala.collection.MapLike.apply$(MapLike.scala:143)
at scala.collection.AbstractMap.apply(Map.scala:65)
at kafka.server.AdminManager.listType$1(AdminManager.scala:681)
at 
kafka.server.AdminManager.$anonfun$prepareIncrementalConfigs$1(AdminManager.scala:693)
at 
kafka.server.AdminManager.prepareIncrementalConfigs(AdminManager.scala:687)
at 
kafka.server.AdminManager.$anonfun$incrementalAlterConfigs$1(AdminManager.scala:618)
at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:154)
at scala.collection.TraversableLike.map(TraversableLike.scala:273)
at scala.collection.TraversableLike.map$(TraversableLike.scala:266)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at 
kafka.server.AdminManager.incrementalAlterConfigs(AdminManager.scala:589)
at 
kafka.server.KafkaApis.handleIncrementalAlterConfigsRequest(KafkaApis.scala:2698)
at kafka.server.KafkaApis.handle(KafkaApis.scala:188)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:78)
at java.base/java.lang.Thread.run(Thread.java:834)
```

It looks like we are only allowing changes to the keys defined in `KafkaConfig` 
through this API. This excludes config changes to any plugin components such as 
`JmxReporter`. 

Note that I was able to use the regular `alterConfig` API to change this config.
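
For anyone reproducing this, a minimal sketch of the two calls (the config value is
just an example; `admin` is an `Admin` client):

```
import java.util.Collections;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.config.ConfigResource;

ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
ConfigEntry entry = new ConfigEntry("metrics.jmx.blacklist", "kafka.network.*");

// Fails server-side with the NoSuchElementException above:
admin.incrementalAlterConfigs(Collections.singletonMap(broker,
    Collections.singleton(new AlterConfigOp(entry, AlterConfigOp.OpType.SET)))).all().get();

// Succeeds via the deprecated API:
admin.alterConfigs(Collections.singletonMap(broker,
    new Config(Collections.singleton(entry)))).all().get();
```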



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-06-10 Thread Jun Rao
Hi, Boyang,

Thanks for the reply.

For the metric, we just need to define a metric of meter type and of name
NumRequestsForwardingToControllerPerSec. Meter exposes a few standard JMX
attributes including an accumulated total and rates (
https://metrics.dropwizard.io/2.2.0/apidocs/com/yammer/metrics/core/Meter.html
).
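
For illustration, a minimal sketch against the yammer metrics-core 2.2.0 API (the
owning class below is a placeholder; in the broker the naming would go through
KafkaMetricsGroup):

    // Registers an MBean exposing Count, MeanRate and One/Five/FifteenMinuteRate
    com.yammer.metrics.core.Meter forwarded = com.yammer.metrics.Metrics.newMeter(
        ForwardingManager.class,  // placeholder owner
        "NumRequestsForwardingToControllerPerSec",
        "requests",
        java.util.concurrent.TimeUnit.SECONDS);

    forwarded.mark();  // call once per forwarded request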

Jun

On Wed, Jun 10, 2020 at 10:38 AM Boyang Chen 
wrote:

> Thanks Jun for the suggestions! I have addressed the suggestions and simplified
> the metrics part.
>
> On Tue, Jun 9, 2020 at 5:46 PM Jun Rao  wrote:
>
> > Hi, Boyang,
> >
> > Thanks for the KIP. Just a few comments on the metrics.
> >
> > 1. It would be useful to document the full JMX metric names (package,
> type,
> > etc) of the new metrics. Also, for rates on the server side, we
> > typically use Yammer Meter.
> >
> >  Sounds good.
>
> 2. For num-messages-redirected-rate, would num-requests-redirected-rate be
> > better?
> >
> Actually, for the scope of this KIP we no longer need separate
tracking of the forwarded request rate, because the Envelope RPC is dropped.
>
>
> > 3. num-client-forwarding-to-controller-rate: Is that based on clientId,
> > client IP, client request version or sth else? How do you plan to
> implement
> > that since it seems to require tracking the current unique client set
> > somehow. An alternative approach is to maintain a
> > num-requests-redirected-rate metric with a client tag.
> >
> The clientId tag approach makes sense, will add to the KIP.
>
> Jun
> >
> >
> >
> > On Mon, Jun 8, 2020 at 9:36 AM Boyang Chen 
> > wrote:
> >
> > > Hey there,
> > >
> > > If no more question is raised, I will go ahead and start the vote
> > shortly.
> > >
> > > On Thu, Jun 4, 2020 at 12:39 PM Boyang Chen <
> reluctanthero...@gmail.com>
> > > wrote:
> > >
> > > > Hey there,
> > > >
> > > > bumping this thread for any further KIP-590 discussion, since it's
> been
> > > > quiet for a while.
> > > >
> > > > Boyang
> > > >
> > > > On Thu, May 21, 2020 at 10:31 AM Boyang Chen <
> > reluctanthero...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > >> Thanks David, I agree the wording here is not clear, and the fellow
> > > >> broker should just send a new CreateTopicRequest in this case.
> > > >>
> > > >> In the meantime, we had some offline discussion about the Envelope
> API
> > > as
> > > >> well. Although it provides certain privileges like data embedding
> and
> > > >> principal embedding, it creates a security hole by letting a
> malicious
> > > user
> > > >> impersonate any forwarding broker, thus pretending to be any admin
> > user.
> > > >> Passing the principal around also increases the vulnerability,
> > compared
> > > >> with other standard ways such as passing a verified token, but it is
> > > >> unfortunately not fully supported with Kafka security.
> > > >>
> > > >> So for the security concerns, we are abandoning the Envelope
> approach
> > > and
> > > >> fallback to just forward the raw admin requests. The authentication
> > will
> > > >> happen on the receiving broker side instead of on the controller, so
> > > that
> > > >> we are able to strip off the principal fields and only include
> the
> > > >> principal in header as optional field for audit logging purpose.
> > > >> Furthermore, we shall propose adding a separate endpoint for
> > > >> broker-controller communication which should be recommended to
> enable
> > > >> secure connections so that a malicious client could not pretend to
> be
> > a
> > > >> broker and perform impersonation.
> > > >>
> > > >> Let me know your thoughts.
> > > >>
> > > >> Best,
> > > >> Boyang
> > > >>
> > > >> On Tue, May 19, 2020 at 12:17 AM David Jacot 
> > > wrote:
> > > >>
> > > >>> Hi Boyang,
> > > >>>
> > > >>> I've got another question regarding the auto topic creation case.
> The
> > > KIP
> > > >>> says: "Currently the target broker shall just utilize its own ZK
> > client
> > > >>> to
> > > >>> create
> > > >>> internal topics, which is disallowed in the bridge release. For
> above
> > > >>> scenarios,
> > > >>> non-controller broker shall just forward a CreateTopicRequest to
> the
> > > >>> controller
> > > >>> instead and let controller take care of the rest, while waiting for
> > the
> > > >>> response
> > > >>> in the meantime." There will be no request to forward in this case,
> > > >>> right?
> > > >>> Instead,
> > > >>> a CreateTopicsRequest is created and sent to the controller node.
> > > >>>
> > > >>> When the CreateTopicsRequest is sent as a side effect of the
> > > >>> MetadataRequest,
> > > >>> it would be good to know the principal and the clientId in the
> > > controller
> > > >>> (quota,
> > > >>> audit, etc.). Do you plan to use the Envelope API for this case as
> > well
> > > >>> or
> > > >>> to call
> > > >>> the regular API directly? Another way to phrase it would be: Shall
> > the
> > > >>> internal
> > > >>> CreateTopicsRequest be sent with the original metadata (principal,
> > > >>> clientId, etc.)
> > > >>> of the 

Re: [VOTE] KIP-572: Improve timeouts and retries in Kafka Streams

2020-06-10 Thread Matthias J. Sax
Thanks!

+1 (binding) from myself.


I am closing the vote as accepted with 3 binding and 3 non-binding votes.

binding:
 - John
 - Guozhang
 - Matthias

non-binding:
 - Sophie
 - Boyang
 - Bruno



-Matthias

On 6/4/20 5:26 PM, Matthias J. Sax wrote:
> Guozhang,
> 
> what you propose makes sense, but this seems to get into implementation
> detail territory already?
> 
> Thus, if there are nor further change requests to the KIP wiki page
> itself, I would like to proceed with the VOTE.
> 
> 
> -Matthias
> 
> On 5/20/20 12:30 PM, Guozhang Wang wrote:
>> Thanks Matthias,
>>
>> I agree with you on all the bullet points above. Regarding the admin-client
>> outer-loop retries inside partition assignor, I think we should treat error
>> codes differently from those two blocking calls:
>>
>> Describe-topic:
>> * unknown-topic (3): add this topic to the to-be-created topic list.
>> * leader-not-available (5): do not try to create, retry in the outer loop.
>> * request-timeout: break the current loop and retry in the outer loop.
>> * others: fatal error.
>>
>> Create-topic:
>> * topic-already-exists: retry in the outer loop to validate the
>> num.partitions match expectation.
>> * request-timeout: break the current loop and retry in the outer loop.
>> * others: fatal error.
>>
>> And in the outer-loop, I think we can have a global timer for the whole
>> "assign()" function, not only for the internal-topic-manager, and the timer
>> can be hard-coded with, e.g. half of the rebalance.timeout to get rid of
>> the `retries`; if we cannot complete the assignment before the timeout runs
>> out, we can return just the partial assignment (e.g. if there are two
>> tasks, but we can only get the topic metadata for one of them, then just do
>> the assignment for that one only) while encoding in the error-code field to
>> request for another rebalance.
>>
>> Guozhang
>>
>>
>>
>> On Mon, May 18, 2020 at 7:26 PM Matthias J. Sax  wrote:
>>
>>> No worries Guozhang, any feedback is always very welcome! My reply is
>>> going to be a little longer... Sorry.
>>>
>>>
>>>
 1) There are some inconsistent statements in the proposal regarding what
>>> to
 deprecate:
>>>
>>> The proposal of the KIP is to deprecate `retries` for producer, admin,
>>> and Streams. Maybe the confusion is about the dependency of those
>>> settings within Streams and that we handle the deprecation somewhat
>>> differently for plain clients vs Streams:
>>>
>>> For plain producer/admin the default `retries` is set to MAX_VALUE. The
>>> config will be deprecated but still be respected.
>>>
>>> For Streams, the default `retries` is set to zero, however, this default
>>> retry does _not_ affect the embedded producer/admin clients -- both
>>> clients stay on their own default of MAX_VALUE.
>>>
>>> Currently, this introduces the issue that if a user wants to increase
>>> Streams retries, she might by accident reduce the embedded client
>>> retries, too. To avoid this issue, she would need to set
>>>
>>> retries=100
>>> producer.retries=MAX_VALUE
>>> admin.retries=MAX_VALUE
>>>
>>> This KIP will fix this issue only in the long term though, ie, when
>>> `retries` is finally removed. Short term, using `retries` in
>>> StreamsConfig would still affect the embedded clients, but Streams, as
>>> well as both client would log a WARN message. This preserves backward
>>> compatibility.
>>>
>>> Within Streams, `retries` is ignored and the new `task.timeout.ms` is
>>> used instead. This increases the default resilience of Kafka Streams
>>> itself. We could also achieve this by still respecting `retries` and to
>>> change its default value. However, because we deprecate `retries` it
>>> seems better to just ignore it and switch to the new config directly.
>>>
>>> I updated the KIPs with some more details.
>>>
>>>
>>>
 2) We should also document the related behavior change in
>>> PartitionAssignor
 that uses AdminClient.
>>>
>>> This is actually a good point. Originally, I looked into this only
>>> briefly, but it raised an interesting question on how to handle it.
>>>
>>> Note that `TimeoutExceptions` are currently not handled in this retry
>>> loop. Also note that the default retries value for other errors would be
>>> MAX_VALUE be default (inherited from `AdminClient#retries` as mentioned
>>> already by Guozhang).
>>>
>>> Applying the new `task.timeout.ms` config does not seem to be
>>> appropriate because the AdminClient is used during a rebalance in the
>>> leader. We could introduce a new config just for this case, but it seems
>>> to be a little bit too much. Furthermore, the group-coordinator applies
>>> a timeout on the leader anyway: if the assignment is not computed within
>>> the timeout, the leader is removed from the group and another rebalance
>>> is triggered.
>>>
>>> Overall, we make multiple admin client calls and thus we should keep
>>> some retry logic and not just rely on the admin client internal retries,
>>> as those would fall short to retry 

Re: [VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-06-10 Thread Matthias J. Sax
Yes, it does.

I guess many people are busy wrapping up the 2.6 release. Today is code freeze.


-Matthias


On 6/10/20 12:11 AM, Chia-Ping Tsai wrote:
> hi Matthias,
> 
> Does this straightforward KIP still need 3 votes? 
> 
> On 2020/06/05 21:27:52, "Matthias J. Sax"  wrote: 
>> +1 (binding)
>>
>> Thanks for the KIP!
>>
>>
>> -Matthias
>>
>> On 6/4/20 11:25 PM, Chia-Ping Tsai wrote:
>>> hi All,
>>>
>>> I would like to start the vote on KIP-620:
>>>
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
>>>
>>> --
>>> Chia-Ping
>>>
>>
>>





[jira] [Created] (KAFKA-10139) Add operational guide for failure recovery

2020-06-10 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10139:
---

 Summary: Add operational guide for failure recovery
 Key: KAFKA-10139
 URL: https://issues.apache.org/jira/browse/KAFKA-10139
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Boyang Chen


In the first released version, we should include an operational guide covering 
feature versioning failure cases, such as:

1. broker crash due to violation of feature versioning

2. ZK data corruption (rare)

We need to ensure this work gets reflected in the AK documentation after the 
implementation and thorough testing are done.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-trunk-jdk14 #206

2020-06-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10079: improve thread-level stickiness (#8775)


--
[...truncated 3.16 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: [DISCUSS] KIP-590: Redirect Zookeeper Mutation Protocols to The Controller

2020-06-10 Thread Boyang Chen
Thanks Jun for the suggestions! I have addressed the suggestions and simplified
the metrics part.

On Tue, Jun 9, 2020 at 5:46 PM Jun Rao  wrote:

> Hi, Boyang,
>
> Thanks for the KIP. Just a few comments on the metrics.
>
> 1. It would be useful to document the full JMX metric names (package, type,
> etc) of the new metrics. Also, for rates on the server side, we
> typically use Yammer Meter.
>
>  Sounds good.

2. For num-messages-redirected-rate, would num-requests-redirected-rate be
> better?
>
> Actually, for the scope of this KIP we no longer need separate
tracking of the forwarded request rate, because the Envelope RPC is dropped.


> 3. num-client-forwarding-to-controller-rate: Is that based on clientId,
> client IP, client request version or sth else? How do you plan to implement
> that since it seems to require tracking the current unique client set
> somehow. An alternative approach is to maintain a
> num-requests-redirected-rate metric with a client tag.
>
The clientId tag approach makes sense, will add to the KIP.

Jun
>
>
>
> On Mon, Jun 8, 2020 at 9:36 AM Boyang Chen 
> wrote:
>
> > Hey there,
> >
> > If no more question is raised, I will go ahead and start the vote
> shortly.
> >
> > On Thu, Jun 4, 2020 at 12:39 PM Boyang Chen 
> > wrote:
> >
> > > Hey there,
> > >
> > > bumping this thread for any further KIP-590 discussion, since it's been
> > > quiet for a while.
> > >
> > > Boyang
> > >
> > > On Thu, May 21, 2020 at 10:31 AM Boyang Chen <
> reluctanthero...@gmail.com
> > >
> > > wrote:
> > >
> > >> Thanks David, I agree the wording here is not clear, and the fellow
> > >> broker should just send a new CreateTopicRequest in this case.
> > >>
> > >> In the meantime, we had some offline discussion about the Envelope API
> > as
> > >> well. Although it provides certain privileges like data embedding and
> > >> principal embedding, it creates a security hole by letting a malicious
> > user
> > >> impersonate any forwarding broker, thus pretending to be any admin
> user.
> > >> Passing the principal around also increases the vulnerability,
> compared
> > >> with other standard ways such as passing a verified token, but it is
> > >> unfortunately not fully supported with Kafka security.
> > >>
> > >> So for the security concerns, we are abandoning the Envelope approach
> > and
> > >> fallback to just forward the raw admin requests. The authentication
> will
> > >> happen on the receiving broker side instead of on the controller, so
> > that
> > >> we are able to strip off the principal fields and only include the
> > >> principal in header as optional field for audit logging purpose.
> > >> Furthermore, we shall propose adding a separate endpoint for
> > >> broker-controller communication which should be recommended to enable
> > >> secure connections so that a malicious client could not pretend to be
> a
> > >> broker and perform impersonation.
> > >>
> > >> Let me know your thoughts.
> > >>
> > >> Best,
> > >> Boyang
> > >>
> > >> On Tue, May 19, 2020 at 12:17 AM David Jacot 
> > wrote:
> > >>
> > >>> Hi Boyang,
> > >>>
> > >>> I've got another question regarding the auto topic creation case. The
> > KIP
> > >>> says: "Currently the target broker shall just utilize its own ZK
> client
> > >>> to
> > >>> create
> > >>> internal topics, which is disallowed in the bridge release. For above
> > >>> scenarios,
> > >>> non-controller broker shall just forward a CreateTopicRequest to the
> > >>> controller
> > >>> instead and let controller take care of the rest, while waiting for
> the
> > >>> response
> > >>> in the meantime." There will be no request to forward in this case,
> > >>> right?
> > >>> Instead,
> > >>> a CreateTopicsRequest is created and sent to the controller node.
> > >>>
> > >>> When the CreateTopicsRequest is sent as a side effect of the
> > >>> MetadataRequest,
> > >>> it would be good to know the principal and the clientId in the
> > controller
> > >>> (quota,
> > >>> audit, etc.). Do you plan to use the Envelope API for this case as
> well
> > >>> or
> > >>> to call
> > >>> the regular API directly? Another way to phrase it would be: Shall
> the
> > >>> internal
> > >>> CreateTopicsRequest be sent with the original metadata (principal,
> > >>> clientId, etc.)
> > >>> of the MetadataRequest or as an admin request?
> > >>>
> > >>> Best,
> > >>> David
> > >>>
> > >>> On Fri, May 8, 2020 at 2:04 AM Guozhang Wang 
> > wrote:
> > >>>
> > >>> > Just to adds a bit more FYI here related to the last question from
> > >>> David:
> > >>> > in KIP-595 while implementing the new requests we are also adding a
> > >>> > "KafkaNetworkChannel" which is used for brokers to send vote /
> fetch
> > >>> > records, so besides the discussion on listeners I think
> > implementation
> > >>> wise
> > >>> > we can also consider consolidating a lot of those into the same
> > >>> call-trace
> > >>> > as well -- of course this is not related to public APIs so maybe
> just
> > >>> needs
> > >>> > 

Build failed in Jenkins: kafka-trunk-jdk11 #1558

2020-06-10 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10079: improve thread-level stickiness (#8775)


--
[...truncated 3.16 MB...]

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithCustomTimestampAndIncrementsAndNotAdvanceTime
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithTimestampWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKeyAndDefaultTimestamp
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairsWithTimestampAndIncrements PASSED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis STARTED

org.apache.kafka.streams.MockTimeTest > shouldGetNanosAsMillis PASSED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime STARTED

org.apache.kafka.streams.MockTimeTest > shouldSetStartTime PASSED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldNotAllowNegativeSleep PASSED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep STARTED

org.apache.kafka.streams.MockTimeTest > shouldAdvanceTimeOnSleep PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED


Re: [VOTE] KIP-597: MirrorMaker2 internal topics Formatters

2020-06-10 Thread Maulin Vasavada
+1 (non-binding)

On Wed, Jun 10, 2020 at 9:47 AM Mickael Maison 
wrote:

> Bumping this thread. Let me know if you have any questions or feedback.
>
> So far, we have 2 binding and 5 non-binding votes.
>
> Thanks
>
> On Tue, May 19, 2020 at 10:56 AM Manikumar 
> wrote:
> >
> > +1 (binding)
> >
> > Thanks for the KIP.
> >
> > On Tue, May 19, 2020 at 2:57 PM Mickael Maison  >
> > wrote:
> >
> > > Thanks Konstantine for the feedback (and vote)!
> > >
> > > 1) I've added example commands using the new formatters
> > >
> > > 2) I updated the Compatibility section to be more explicit about the
> > > need for recompilation
> > >
> > > 3) Good point, updated
> > >
> > > On Tue, May 19, 2020 at 3:18 AM Konstantine Karantasis
> > >  wrote:
> > > >
> > > > Thanks Michael.
> > > > I think it's useful to enable specialized message formatters by
> adding
> > > this
> > > > interface to the public API.
> > > >
> > > > You have my vote: +1 (binding)
> > > >
> > > > Just a few optional comments below:
> > > >
> > > > 1. Would you mind adding the equivalent command line example in the
> > > places
> > > > where you have an example output?
> > > >
> > > > Something equivalent to
> > > > ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
> --topic
> > > > __consumer_offsets --from-beginning --formatter
> > > >
> > >
> "kafka.coordinator.group.GroupMetadataManager\$GroupMetadataMessageFormatter"
> > > >
> > > > but with the equivalent formatter classes and expected topic names.
> > > >
> > > > 2. I have to note that breaking old formatters by requiring
> recompilation
> > > > could be avoided if we didn't change kafka.common.MessageFormatter to
> > > > extend the new org.apache.kafka.common.MessageFormatter. We could
> > > maintain
> > > > both, while the old one would remain deprecated and we could attempt
> to
> > > > instantiate one or the other type when reading the config and use
> either
> > > of
> > > > the two different types in the few places in ConsoleConsumer that a
> > > > formatter is used. But I admit that for this use case, it's not worth
> > > > maintaining both types. The interface wasn't public before anyways.
> > > >
> > > > Given that, my small request would be to rephrase in the
> compatibility
> > > > section to say something like:
> > > > 'Existing MessageFormatters implementations will require no changes
> > > beyond
> > > > recompilation.' or similar. Because, to be precise, existing
> formatters
> > > > _won't_ work if they are given as a parameter to a 2.6 console
> consumer,
> > > > without recompilation as you mention.
> > > >
> > > > 3. Finally, a minor comment on skipping the use of the `public`
> specifier
> > > > in the interface because it's redundant.
> > > >
> > > > Best regards,
> > > > Konstantine
> > > >
> > > > On Mon, May 18, 2020 at 3:26 PM Maulin Vasavada <
> > > maulin.vasav...@gmail.com>
> > > > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > On Mon, May 18, 2020 at 8:49 AM Mickael Maison <
> > > mickael.mai...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Bumping this thread as KIP freeze is approaching.
> > > > > >
> > > > > > It's a pretty small change and I have a PR ready:
> > > > > > https://github.com/apache/kafka/pull/8604
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > On Mon, May 4, 2020 at 5:26 PM Ryanne Dolan <
> ryannedo...@gmail.com>
> > > > > wrote:
> > > > > > >
> > > > > > > +1, non-binding
> > > > > > >
> > > > > > > On Mon, May 4, 2020, 9:24 AM Christopher Egerton <
> > > chr...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > +1 (non-binding)
> > > > > > > >
> > > > > > > > On Mon, May 4, 2020 at 5:02 AM Edoardo Comar <
> eco...@uk.ibm.com>
> > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1 (non-binding)
> > > > > > > > > Thanks Mickael
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > >
> > > > > > > > > Edoardo Comar
> > > > > > > > >
> > > > > > > > > Event Streams for IBM Cloud
> > > > > > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > From:   Mickael Maison 
> > > > > > > > > To: dev 
> > > > > > > > > Date:   04/05/2020 11:45
> > > > > > > > > Subject:[EXTERNAL] [VOTE] KIP-597: MirrorMaker2
> > > internal
> > > > > > topics
> > > > > > > > > Formatters
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > I'd like to start a vote on KIP-597:
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > >
> > > > >
> > >
> https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_KIP-2D597-253A-2BMirrorMaker2-2Binternal-2Btopics-2BFormatters=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ=r-_T9EFUWNEUGi1GuX7klXNZIH2sJmxGTtySV3lAjoQ=iyBSxabuEi1h7ksmzoXgJT8jJoMR0xKYsJy_MpvtCRQ=
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > 

[jira] [Created] (KAFKA-10138) Prefer --bootstrap-server for reassign_partitions command in ducktape tests

2020-06-10 Thread Vinoth Chandar (Jira)
Vinoth Chandar created KAFKA-10138:
--

 Summary: Prefer --bootstrap-server for reassign_partitions command 
in ducktape tests
 Key: KAFKA-10138
 URL: https://issues.apache.org/jira/browse/KAFKA-10138
 Project: Kafka
  Issue Type: Sub-task
Reporter: Vinoth Chandar
Assignee: Vinoth Chandar






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-597: MirrorMaker2 internal topics Formatters

2020-06-10 Thread Mickael Maison
Bumping this thread. Let me know if you have any questions or feedback.

So far, we have 2 binding and 5 non-binding votes.

Thanks

On Tue, May 19, 2020 at 10:56 AM Manikumar  wrote:
>
> +1 (binding)
>
> Thanks for the KIP.
>
> On Tue, May 19, 2020 at 2:57 PM Mickael Maison 
> wrote:
>
> > Thanks Konstantine for the feedback (and vote)!
> >
> > 1) I've added example commands using the new formatters
> >
> > 2) I updated the Compatibility section to be more explicit about the
> > need for recompilation
> >
> > 3) Good point, updated
> >
> > On Tue, May 19, 2020 at 3:18 AM Konstantine Karantasis
> >  wrote:
> > >
> > > Thanks Michael.
> > > I think it's useful to enable specialized message formatters by adding
> > this
> > > interface to the public API.
> > >
> > > You have my vote: +1 (binding)
> > >
> > > Just a few optional comments below:
> > >
> > > 1. Would you mind adding the equivalent command line example in the
> > places
> > > where you have an example output?
> > >
> > > Something equivalent to
> > > ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
> > > __consumer_offsets --from-beginning --formatter
> > >
> > "kafka.coordinator.group.GroupMetadataManager\$GroupMetadataMessageFormatter"
> > >
> > > but with the equivalent formatter classes and expected topic names.
> > >
> > > 2. I have to note that breaking old formatters by requiring recompilation
> > > could be avoided if we didn't change kafka.common.MessageFormatter to
> > > extend the new org.apache.kafka.common.MessageFormatter. We could
> > maintain
> > > both, while the old one would remain deprecated and we could attempt to
> > > instantiate one or the other type when reading the config and use either
> > of
> > > the two different types in the few places in ConsoleConsumer that a
> > > formatter is used. But I admit that for this use case, it's not worth
> > > maintaining both types. The interface wasn't public before anyways.
> > >
> > > Given that, my small request would be to rephrase in the compatibility
> > > section to say something like:
> > > 'Existing MessageFormatters implementations will require no changes
> > beyond
> > > recompilation.' or similar. Because, to be precise, existing formatters
> > > _won't_ work if they are given as a parameter to a 2.6 console consumer,
> > > without recompilation as you mention.
> > >
> > > 3. Finally, a minor comment on skipping the use of the `public` specifier
> > > in the interface because it's redundant.
> > >
> > > Best regards,
> > > Konstantine
> > >
> > > On Mon, May 18, 2020 at 3:26 PM Maulin Vasavada <
> > maulin.vasav...@gmail.com>
> > > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > On Mon, May 18, 2020 at 8:49 AM Mickael Maison <
> > mickael.mai...@gmail.com>
> > > > wrote:
> > > >
> > > > > Bumping this thread as KIP freeze is approaching.
> > > > >
> > > > > It's a pretty small change and I have a PR ready:
> > > > > https://github.com/apache/kafka/pull/8604
> > > > >
> > > > > Thanks
> > > > >
> > > > > On Mon, May 4, 2020 at 5:26 PM Ryanne Dolan 
> > > > wrote:
> > > > > >
> > > > > > +1, non-binding
> > > > > >
> > > > > > On Mon, May 4, 2020, 9:24 AM Christopher Egerton <
> > chr...@confluent.io>
> > > > > > wrote:
> > > > > >
> > > > > > > +1 (non-binding)
> > > > > > >
> > > > > > > On Mon, May 4, 2020 at 5:02 AM Edoardo Comar 
> > > > > wrote:
> > > > > > >
> > > > > > > > +1 (non-binding)
> > > > > > > > Thanks Mickael
> > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > Edoardo Comar
> > > > > > > >
> > > > > > > > Event Streams for IBM Cloud
> > > > > > > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > From:   Mickael Maison 
> > > > > > > > To: dev 
> > > > > > > > Date:   04/05/2020 11:45
> > > > > > > > Subject:[EXTERNAL] [VOTE] KIP-597: MirrorMaker2
> > internal
> > > > > topics
> > > > > > > > Formatters
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > I'd like to start a vote on KIP-597:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.apache.org_confluence_display_KAFKA_KIP-2D597-253A-2BMirrorMaker2-2Binternal-2Btopics-2BFormatters=DwIBaQ=jf_iaSHvJObTbx-siA1ZOg=EzRhmSah4IHsUZVekRUIINhltZK7U0OaeRo7hgW4_tQ=r-_T9EFUWNEUGi1GuX7klXNZIH2sJmxGTtySV3lAjoQ=iyBSxabuEi1h7ksmzoXgJT8jJoMR0xKYsJy_MpvtCRQ=
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Unless stated otherwise above:
> > > > > > > > IBM United Kingdom Limited - Registered in England and Wales
> > with
> > > > > number
> > > > > > > > 741598.
> > > > > > > > Registered office: PO Box 41, North Harbour, Portsmouth,
> > Hampshire
> > > > > PO6
> > > > > > > 3AU
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> 

[jira] [Created] (KAFKA-10137) Clean-up retain Duplicate logic in Window Stores

2020-06-10 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-10137:
-

 Summary: Clean-up retain Duplicate logic in Window Stores
 Key: KAFKA-10137
 URL: https://issues.apache.org/jira/browse/KAFKA-10137
 Project: Kafka
  Issue Type: Task
  Components: streams
Affects Versions: 2.5.0
Reporter: Bruno Cadonna


Stream-stream joins use the regular `WindowStore` implementation but with
`retainDuplicates` set to true. To allow for duplicates while using the same
unique-key underlying stores, we just wrap the key with an incrementing
sequence number before inserting it.

The logic to maintain and append the sequence number is present in multiple
locations, namely in the changelogging window store and in its underlying
window stores. We should consolidate this code into a single location.
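
For intuition, a minimal sketch of the seqnum-wrapping idea described above
(the real helpers live in the window store internals; the names here are
illustrative):

  import java.nio.ByteBuffer;

  final class DuplicateKeyWrapper {
      private int seqnum = 0;

      // Append a 4-byte incrementing sequence number so duplicate keys
      // become unique in the underlying unique-key store.
      byte[] wrap(byte[] rawKey) {
          return ByteBuffer.allocate(rawKey.length + Integer.BYTES)
                  .put(rawKey)
                  .putInt(seqnum++)
                  .array();
      }
  }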



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : kafka-2.5-jdk8 #149

2020-06-10 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-599: Throttle Create Topic, Create Partition and Delete Topic Operations

2020-06-10 Thread David Jacot
Hi Colin and Jun,

I have no problem if we have to rewrite part of it when the new controller
comes
out. I will be more than happy to help out.

Regarding KIP-590, I think that we can cope with a principal as a string
when the time comes. The user entity name is defined as a string already.

Regarding the name of the error, you have made a good point. I do agree
that it
is important to differentiate the two cases. I propose the following two
errors:
- THROTTLING_QUOTA_EXCEEDED - Throttling is slightly better than rate as
we have quotas which are not rates (e.g. the request quota). This one is
retryable once the throttle time has passed.
- LIMIT_QUOTA_EXCEEDED - This one would indicate that the limit has been
reached and is a final error.
We only need the former in this KIP. What do you think?
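
For illustration, a client could then handle the retryable error roughly
like this (a minimal sketch; the exception type and its accessor are
assumptions based on this proposal, not a final API):

  // Assumes org.apache.kafka.clients.admin.{Admin, NewTopic} and a
  // hypothetical ThrottlingQuotaExceededException#throttleTimeMs().
  void createWithRetry(Admin admin, NewTopic topic) throws Exception {
      while (true) {
          try {
              admin.createTopics(java.util.List.of(topic)).all().get();
              return;
          } catch (java.util.concurrent.ExecutionException e) {
              if (e.getCause() instanceof ThrottlingQuotaExceededException) {
                  long waitMs = ((ThrottlingQuotaExceededException)
                          e.getCause()).throttleTimeMs();
                  Thread.sleep(waitMs); // retry once the throttle has passed
              } else {
                  throw e;
              }
          }
      }
  }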

Jun, I have added a few examples in the KIP. The new name works exactly like
the existing one once it is added to the accepted dynamic configs for the
user and the client entities. I have added a "Kafka Config Command" chapter
in the KIP. I will also open a Jira so we don't forget to update the AK
documentation once the KIP gets merged.

Thanks,
David

On Wed, Jun 10, 2020 at 3:03 AM Jun Rao  wrote:

> Hi, Colin,
>
> Good point. Maybe sth like THROTTLING_QUOTA_VIOLATED will make this clear.
>
> Hi, David,
>
> We added a new quota name in the KIP. You chose not to bump up the version
> of DESCRIBE_CLIENT_QUOTAS and ALTER_CLIENT_QUOTAS, which seems ok since the
> quota name is represented as a string. However, the new quota name can be
> used in client tools for setting and listing the quota (
> https://kafka.apache.org/documentation/#quotas). Could you document how
> the
> new name will be used in those tools?
>
> Thanks,
>
> Jun
>
> On Tue, Jun 9, 2020 at 3:37 PM Colin McCabe  wrote:
>
> > On Tue, Jun 9, 2020, at 05:06, David Jacot wrote:
> > > Hi Colin,
> > >
> > > Thank you for your feedback.
> > >
> > > Jun has summarized the situation pretty well. Thanks Jun! I would like
> to
> > > complement it with the following points:
> > >
> > > 1. Indeed, when the quota is exceeded, the broker will reject the topic
> > > creations, partition creations and topic deletions that exceed the quota
> > > with the new QUOTA_VIOLATED error. The ThrottleTimeMs field will
> > > be populated accordingly to let the client know how long it must wait.
> > >
> > > 2. I do agree that we actually want a mechanism to apply back pressure
> > > to the clients. The KIP basically proposes a mechanism to control and
> > > to limit the rate of operations before entering the controller. I think
> > > that it is similar to your thinking but is enforced based on a defined
> > > quota instead of relying on the number of pending items in the
> > > controller.
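> > >
> > > For illustration, such a defined quota is typically enforced with
> > > something like a token bucket (a sketch only, with invented names, not
> > > the actual broker code):
> > >
> > >   // Per-principal limiter: returns 0 if admitted, else throttle ms.
> > >   final class MutationQuota {
> > >       private final double rate;   // permitted mutations per second
> > >       private final double burst;  // maximum accumulated tokens
> > >       private double tokens;
> > >       private long lastNanos = System.nanoTime();
> > >
> > >       MutationQuota(double rate, double burst) {
> > >           this.rate = rate; this.burst = burst; this.tokens = burst;
> > >       }
> > >
> > >       synchronized long tryAcquire(int mutations) {
> > >           long now = System.nanoTime();
> > >           tokens = Math.min(burst,
> > >                   tokens + (now - lastNanos) / 1e9 * rate);
> > >           lastNanos = now;
> > >           if (tokens >= mutations) { tokens -= mutations; return 0; }
> > >           return (long) ((mutations - tokens) / rate * 1000);
> > >       }
> > >   }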
> > >
> > > 3. You mentioned an alternative idea in your comments that, if I
> > > understood correctly, would bound the queue to limit the overload and
> > > reject work if the queue is full. I have been thinking about this as
> > > well but I don't think that it works well in our case.
> > > - The first reason is the one mentioned by Jun. We actually want to be
> > > able to limit specific clients (the misbehaving ones) in a multi-tenant
> > > environment.
> > > - The second reason is that, at least in our current implementation,
> > > the length of the queue is not really a good characteristic to estimate
> > > the load. Coming back to your example of the CreateTopicsRequest: it
> > > creates a path in ZK for each newly created topic, which triggers a
> > > ChangeTopic event in the controller. That single event could be for a
> > > single topic in some cases or for a thousand topics in others.
> > > These two reasons aside, bounding the queue also introduces a knob
> > > which requires some tuning and thus suffers from all the points you
> > > mentioned as well, doesn't it? The right value for one version may not
> > > be right for another.
> > >
> > > 4. Regarding the integration with KIP-500. The implementation of this
> > > KIP will span across the ApiLayer and the AdminManager. To be honest,
> > > we can influence the implementation to work best with what you have in
> > > mind for the future controller. If you could shed some more light on
> > > the future internal architecture, I can take this into account during
> > > the implementation.
> > >
> >
> > Hi David,
> >
> > Good question.  The approach we are thinking of is that in the future,
> > topic creation will be a controller RPC.  In other words, rather than
> > changing ZK and waiting for the controller code to notice, we'll go
> through
> > the controller code (which may change ZK, or may do something else in the
> > ZK-free environment).
> >
> > I don't think there is a good way to write this in the short term that
> > avoids having to rewrite in the long term.  That's probably OK though, as
> > long as we keep in mind that we'll need to.
> >
> > >
> > > 5. Regarding KIP-590. For the create topics request, create 

Jenkins build is back to normal : kafka-trunk-jdk11 #1557

2020-06-10 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-06-10 Thread William Bottrell
Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext


I am extremely new to Kafka, but thank you to John Roesler and Matthias J.
Sax for pointing me in the right direction. I accept any and all feedback.
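
For concreteness, here is a minimal sketch of the proposed additions
(method names as in the subject line; the semantics are my assumption of
the intent and are open to feedback):

  public interface ProcessorContext {
      // ... existing methods elided ...

      // Current wall-clock (system) time in milliseconds.
      long currentSystemTimeMs();

      // Current stream time, i.e. the maximum record timestamp
      // observed so far by this task, in milliseconds.
      long currentStreamTimeMs();
  }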

Thanks,
Will


[jira] [Resolved] (KAFKA-9849) Fix issue with worker.unsync.backoff.ms creating zombie workers when incremental cooperative rebalancing is used

2020-06-10 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9849.
---
Resolution: Fixed

> Fix issue with worker.unsync.backoff.ms creating zombie workers when 
> incremental cooperative rebalancing is used
> 
>
> Key: KAFKA-9849
> URL: https://issues.apache.org/jira/browse/KAFKA-9849
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.1, 2.5.0, 2.4.1
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> {{worker.unsync.backoff.ms}} is a property that was introduced a while ago 
> when eager (stop-the-world) rebalancing was the only option for Connect 
> workers. The goal of this property is to avoid triggering consecutive 
> rebalances when a worker fails to catch up with the config topic in time and 
> therefore voluntarily leaves the group with a {{LeaveGroupRequest}}.
> With incremental cooperative rebalancing, this backoff 
> ({{worker.unsync.backoff.ms}}), whose default value is equal to the default 
> value of {{scheduled.rebalance.max.delay.ms}} (5 min), might end up turning a 
> worker into a zombie worker that retains its tasks but stays out of the 
> group. By backing off from rebalancing, this worker leaves the leader of the 
> group no option but to reassign to other members the missing tasks that are 
> presumed lost, if the backing-off worker does not return before 
> {{scheduled.rebalance.max.delay.ms}} expires. 
> Clearly, {{worker.unsync.backoff.ms}} was introduced to avoid rebalancing 
> storms in the presence of intermittent connectivity issues with eager 
> rebalancing. However, when incremental cooperative rebalancing is used, this 
> property might inadvertently turn workers into zombie workers that keep 
> running tasks while they are out of the group.
> Of course, a good tradeoff needs to be made between avoiding making the 
> protocol too eager again and, at the same time, avoiding turning workers 
> into zombies when the connection to the broker coordinator is not lost for 
> too long.
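>
> For intuition, the interplay can be seen directly in the worker config (a
> sketch with purely illustrative values): the zombie window opens when the
> back-off is close to, or larger than, the scheduled delay, so keeping it
> well below gives a backing-off worker time to rejoin before its tasks are
> treated as lost.
>
>   # illustrative values only, not recommendations
>   worker.unsync.backoff.ms=30000
>   scheduled.rebalance.max.delay.ms=300000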



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9848) Avoid triggering scheduled rebalance delay when task assignment fails but Connect workers remain in the group

2020-06-10 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9848.
---
Resolution: Fixed

> Avoid triggering scheduled rebalance delay when task assignment fails but 
> Connect workers remain in the group
> -
>
> Key: KAFKA-9848
> URL: https://issues.apache.org/jira/browse/KAFKA-9848
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.1, 2.5.0, 2.4.1
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> There are cases where a Connect worker does not receive its task assignments 
> successfully after a rebalance but still remains in the group. For example, 
> when a SyncGroup response is lost, a worker will not get its expected 
> assignments but will rejoin the group immediately and trigger another 
> rebalance. 
> With incremental cooperative rebalancing, task assignments that are computed 
> and sent by the leader but are not received by some of the members are marked 
> as lost assignments in the subsequent rebalance. The presence of lost 
> assignments activates the scheduled rebalance delay property, and the 
> missing tasks are not assigned until this delay expires.
> This situation can be improved in two cases (see the sketch below): 
> a) When it's the leader that failed to receive the new assignments from the 
> broker coordinator (for example, if the SyncGroup request or response was 
> lost). If this worker remains the leader of the group in the subsequent 
> rebalance round, it can detect that the previous assignment was not 
> successfully applied by checking the expected generation.
> b) If one or more regular members did not receive their assignments 
> successfully but have joined the latest round of rebalancing, they can 
> immediately be assigned the tasks that remain unassigned from the previous 
> assignment, without these tasks being marked as lost. The leader can detect 
> this by checking that some tasks seem lost since the previous assignment but 
> also that the number of workers is unchanged between the two rounds of 
> rebalancing. In this case, the leader can go ahead and assign the missing 
> tasks as new tasks immediately.
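>
> A minimal sketch of the two checks (all names invented for illustration,
> not the actual assignor code):
>
>   // (a) leader detects its own assignment was never applied;
>   // (b) no member left, so previously "lost" tasks can be assigned now.
>   static boolean canAssignImmediately(boolean isLeader,
>                                       int lastAppliedGeneration,
>                                       int expectedGeneration,
>                                       int lostTaskCount,
>                                       int currentWorkers,
>                                       int previousWorkers) {
>       boolean leaderMissedAssignment =
>           isLeader && lastAppliedGeneration != expectedGeneration;
>       boolean sameWorkersRejoined =
>           lostTaskCount > 0 && currentWorkers == previousWorkers;
>       return leaderMissedAssignment || sameWorkersRejoined;
>   }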



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-620 Deprecate ConsumerConfig#addDeserializerToConfig(Properties, Deserializer, Deserializer) and ProducerConfig#addSerializerToConfig(Properties, Serializer, Serializer)

2020-06-10 Thread Chia-Ping Tsai
hi Matthias,

Does this straightforward KIP still need 3 votes? 

On 2020/06/05 21:27:52, "Matthias J. Sax"  wrote: 
> +1 (binding)
> 
> Thanks for the KIP!
> 
> 
> -Matthias
> 
> On 6/4/20 11:25 PM, Chia-Ping Tsai wrote:
> > hi All,
> > 
> > I would like to start the vote on KIP-620:
> > 
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=155749118
> > 
> > --
> > Chia-Ping
> > 
> 
> 


[jira] [Created] (KAFKA-10136) Make option threads of ConsumerPerformance work

2020-06-10 Thread jiamei xie (Jira)
jiamei xie created KAFKA-10136:
--

 Summary: Make option threads of ConsumerPerformance work
 Key: KAFKA-10136
 URL: https://issues.apache.org/jira/browse/KAFKA-10136
 Project: Kafka
  Issue Type: Bug
Reporter: jiamei xie
Assignee: jiamei xie


Make option threads of ConsumerPerformance work



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-595: A Raft Protocol for the Metadata Quorum

2020-06-10 Thread Boyang Chen
Hey all,

Thanks for the great discussions so far. I'm posting some KIP updates from
our working group discussion:

1. We will be changing the core RPCs from a single-raft API to multi-raft.
This means all protocols will be "batch" in the first version, but the KIP
itself only illustrates the design for a single metadata topic partition.
The reason is to "keep the door open" for future extensions of this module,
such as a sharded controller or general quorum-based topic replication,
beyond the current Kafka replication protocol.

2. We will piggy-back on the current Kafka Fetch API instead of inventing a
new FetchQuorumRecords RPC. The motivation is about the same as #1 as well
as making the integration work easier, instead of letting two similar RPCs
diverge.

3. In the EndQuorumEpoch protocol, instead of only sending the request to
the most caught-up voter, we shall broadcast the information to all voters,
with a sorted voter list in descending order of their corresponding
replicated offset. In this way, the top voter will become a candidate
immediately, while the other voters shall wait for an exponential back-off
to trigger elections, which helps ensure the top voter gets elected, and
the election eventually happens when the top voter is not responsive.
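
For illustration, the rank-based back-off could be computed roughly like
this (a sketch only; the base and cap values are invented):

  // rank 0 = most caught-up voter in the sorted list sent with
  // EndQuorumEpoch; higher ranks wait exponentially longer.
  static long electionBackoffMs(int rank, long baseMs, long maxMs) {
      if (rank == 0) {
          return 0; // the top voter becomes a candidate immediately
      }
      return Math.min(baseMs * (1L << Math.min(rank, 16)), maxMs);
  }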

Please see the updated KIP and post any questions or concerns on the
mailing thread.

Boyang

On Fri, May 8, 2020 at 5:26 PM Jun Rao  wrote:

> Hi, Guozhang and Jason,
>
> Thanks for the reply. A couple of more replies.
>
> 102. Still not sure about this. How is the tombstone issue addressed for the
> non-voter and the observer? They can die at any point and restart at an
> arbitrary later time, and the advancing of the firstDirty offset and the
> removal of the tombstone can happen independently.
>
> 106. I agree that it would be less confusing if we used "epoch" instead of
> "leader epoch" consistently.
>
> Jun
>
> On Thu, May 7, 2020 at 4:04 PM Guozhang Wang  wrote:
>
> > Thanks Jun. Further replies are in-lined.
> >
> > On Mon, May 4, 2020 at 11:58 AM Jun Rao  wrote:
> >
> > > Hi, Guozhang,
> > >
> > > Thanks for the reply. A few more replies inlined below.
> > >
> > > On Sun, May 3, 2020 at 6:33 PM Guozhang Wang 
> wrote:
> > >
> > > > Hello Jun,
> > > >
> > > > Thanks for your comments! I'm replying inline below:
> > > >
> > > > On Fri, May 1, 2020 at 12:36 PM Jun Rao  wrote:
> > > >
> > > > > 101. Bootstrapping related issues.
> > > > > 101.1 Currently, we support auto broker id generation. Is this
> > > > > supported for bootstrap brokers?
> > > > >
> > > >
> > > > The voter ids would just be the broker ids. "bootstrap.servers" would
> > > > be similar to what client configs have today, where "quorum.voters"
> > > > would be pre-defined config values.
> > > >
> > > >
> > > My question was on the auto generated broker id. Currently, the broker
> > > can choose to have its broker id auto generated. The generation is done
> > > through ZK to guarantee uniqueness. Without ZK, it's not clear how the
> > > broker id is auto generated. "quorum.voters" also can't be set
> > > statically if broker ids are auto generated.
> > >
> > Jason has explained some ideas that we've discussed so far; the reason we
> > intentionally did not include them is that we feel they are outside the
> > scope of KIP-595. Under the umbrella of KIP-500 we should definitely
> > address them though.
> >
> > At a high level, our belief is that "joining a quorum" and "joining (or
> > more specifically, registering brokers in) the cluster" would be
> > de-coupled a bit, where the former should be completed before we do the
> > latter. More specifically, assuming the quorum is already up and running,
> > after the newly started broker finds the leader of the quorum, it can
> > send a specific RegisterBroker request including its listener / protocol
> > / etc., and upon handling it the leader can send back the uniquely
> > generated broker id to the new broker, while also executing the
> > "startNewBroker" callback as the controller.
> >
> >
> > >
> > > > > 102. Log compaction. One weak spot of log compaction is for the
> > > > > consumer to deal with deletes. When a key is deleted, it's retained
> > > > > as a tombstone first and then physically removed. If a client misses
> > > > > the tombstone (because it's physically removed), it may not be able
> > > > > to update its metadata properly. The way we solve this in Kafka is
> > > > > based on a configuration (log.cleaner.delete.retention.ms) and we
> > > > > expect a consumer having seen an old key to finish reading the
> > > > > deletion tombstone within that time. There is no strong guarantee
> > > > > for that since a broker could be down for a long time. It would be
> > > > > better if we can have a more reliable way of dealing with deletes.
> > > > >
> > > >
> > > > We propose to capture this in the "FirstDirtyOffset" field