[jira] [Resolved] (KAFKA-9587) Producer configs are omitted in the documentation

2020-10-13 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis resolved KAFKA-9587.
---
Resolution: Fixed

> Producer configs are omitted in the documentation
> -
>
> Key: KAFKA-9587
> URL: https://issues.apache.org/jira/browse/KAFKA-9587
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, documentation
>Affects Versions: 2.4.0
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
> Fix For: 2.7.0
>
>
> As of 2.4, [the KafkaProducer 
> documentation|https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html]
>  states:
> {quote}If the request fails, the producer can automatically retry, though 
> since we have specified retries as 0 it won't.
> {quote}
> {quote}... in the code snippet above, likely all 100 records would be sent in 
> a single request since we set our linger time to 1 millisecond.
> {quote}
> However, the code snippet (below) does not include any configuration for 
> '{{retries}}' or '{{linger.ms}}':
> {quote}Properties props = new Properties();
>  props.put("bootstrap.servers", "localhost:9092");
>  props.put("acks", "all");
>  props.put("key.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
>  props.put("value.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
> {quote}
> The same documentation in [version 
> 2.0|https://kafka.apache.org/20/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html]
>  includes the configs; however, 
> [2.1|https://kafka.apache.org/21/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html]
>  only includes '{{linger.ms}}' and 
> [2.2|https://kafka.apache.org/22/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html]
>  includes none. It seems the configs were removed somewhere between these releases.
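For reference, a snippet that would match the surrounding Javadoc prose might look like the one below (a sketch only, in the style of the 2.0 Javadoc example; the retries and linger.ms values are the ones the quoted text refers to, not a recommendation):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
// The two configs the Javadoc prose refers to but the current snippet omits:
props.put("retries", 0);
props.put("linger.ms", 1);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<>(props);
for (int i = 0; i < 100; i++)
    producer.send(new ProducerRecord<>("my-topic", Integer.toString(i), Integer.toString(i)));
producer.close();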



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10608) Add support for rolling upgrade with topology changes

2020-10-13 Thread Ashish Surana (Jira)
Ashish Surana created KAFKA-10608:
-

 Summary: Add support for rolling upgrade with topology changes
 Key: KAFKA-10608
 URL: https://issues.apache.org/jira/browse/KAFKA-10608
 Project: Kafka
  Issue Type: Improvement
Reporter: Ashish Surana


We observed that if we modify the topology, we can't do a rolling upgrade, as 
it seems all instances must have the exact same topology.

Every time we upgrade the topology we have to incur downtime, so we should explore 
whether this can be supported fully; if not fully, it should at least be doable for 
some subset of topology changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #140

2020-10-13 Thread Apache Jenkins Server
See 


Changes:

[cshapi] MINOR internal KIP-629 changes to methods and variables

[github] KAFKA-10573 Update connect transforms configs for KIP-629 (#9403)


--
[...truncated 6.91 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1e15654d, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@1e15654d, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@41651f95, 
timestamped = false, caching = true, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@41651f95, 
timestamped = false, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@71b412fa, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@71b412fa, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@192dff9b, 
timestamped = false, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@192dff9b, 
timestamped = false, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2c27cefe, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2c27cefe, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4557e558, 
timestamped = false, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4557e558, 
timestamped = false, caching = false, logging = false] PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #10

2020-10-13 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10573 Update connect transforms configs for KIP-629 
(#9403)


--
[...truncated 3.42 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #135

2020-10-13 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-13 Thread Jun Rao
Hi, Colin,

Thanks for the reply. A few more comments below.

80.1 controller.listener.names only defines the name of the listener. The
actual listener, including host/port/security protocol, is typically defined
in advertised.listeners. Does that mean advertised.listeners is a required
config now?

83.1 broker state machine: It seems that we should transition from FENCED
=> INITIAL since only INITIAL generates a new broker epoch?

83.5. It's true that the controller node doesn't serve metadata requests.
However, there are admin requests, such as topic creation/deletion, that are sent
to the controller directly. So, it seems that the client needs to know
the controller host/port?

85. "I was hoping that we could avoid responding to requests when the
broker was fenced." This issue is that if we don't send a response, the
client won't know the reason and can't act properly.

88. CurMetadataOffset: I was thinking that we may want to
use CurMetadataOffset to compute the MetadataLag. Since HWM is exclusive,
it's more convenient if CurMetadataOffset is also exclusive.

90. It would be useful to add a rejected-alternatives section on why separate
controller and broker IDs are preferred over just a broker ID. For example, the
following are some potential reasons. (a) We can guard against a duplicated
brokerID, but it's hard to guard against a duplicated controllerId. (b) brokerID
can be auto-assigned in the future, but controllerId is hard to generate
automatically.

Thanks,

Jun

On Mon, Oct 12, 2020 at 11:14 AM Colin McCabe  wrote:

> On Tue, Oct 6, 2020, at 16:09, Jun Rao wrote:
> > Hi, Colin,
> >
> > Thanks for the reply. Made another pass of the KIP. A few more comments
> > below.
> >
>
> Hi Jun,
>
> Thanks for the review.
>
> > 55. We discussed earlier why the current behavior where we favor the
> > current broker registration is better. Have you given this more thought?
> >
>
> Yes, I think we should favor the current broker registration, as you
> suggested earlier.
>
> > 80. Config related.
> > 80.1 Currently, each broker only has the following 3 required configs. It
> > will be useful to document the required configs post KIP-500 (in both the
> > dedicated and shared controller mode).
> > broker.id
> > log.dirs
> > zookeeper.connect
>
> For the broker, these configs will be required:
>
> broker.id
> log.dirs
> process.roles
> controller.listener.names
> controller.connect
>
> For the controller, these configs will be required:
>
> controller.id
> log.dirs
> process.roles
> controller.listener.names
> controller.connect
>
> For broker+controller, it will be the union of these two, which
> essentially means we need both broker.id and controller.id, but all
> others are the same as standalone.
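[Editor's aside: a combined broker+controller configuration implied by the lists above might look roughly like the sketch below. This is a sketch only: the config names follow this thread and the KIP draft, the values and the controller.connect format are assumptions, and all of it may change before the KIP is finalized.]

import java.util.Properties;

Properties cfg = new Properties();
cfg.put("process.roles", "broker,controller");  // shared (broker+controller) mode
cfg.put("broker.id", "0");
cfg.put("controller.id", "3000");               // illustrative value; no default per the discussion above
cfg.put("log.dirs", "/tmp/kafka-logs");
cfg.put("controller.listener.names", "CONTROLLER");
cfg.put("controller.connect", "3000@host1:9093,3001@host2:9093,3002@host3:9093");  // value format assumed from quorum.voters in KIP-595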
>
> > 80.2 It would be useful to document all deprecated configs post KIP-500.
> > For example, all zookeeper.* are obviously deprecated. But there could be
> > others. For example, since we don't plan to support auto broker id
> > generation, it seems broker.id.generation.enable is deprecated too.
> > 80.3 Could we make it clear that controller.connect replaces
> quorum.voters
> > in KIP-595?
>
> OK.  I added a comment about this in the table.
>
> > 80.4 Could we document that broker.id is now optional?
>
> OK.  I have added a line for broker.id.
>
> > 80.5 The KIP suggests that controller.id is optional on the controller
> > node. I am concerned that this can cause a bit of confusion in 2 aspects.
> > First, in the dedicated controller mode, controller.id is not optional
> > (since broker.id is now optional). Second, in the shared controller
> mode,
> > it may not be easy for the user to figure out the default value of
> > controller.id to set controller.connect properly.
>
> I got rid of the default value of controller.id.  I agree it was a bit
> confusing to explain.
>
> > 80.6 Regarding the consistency of config names, our metrics already
> include
> > controller. So, prefixing all controller related configs with
> "controller"
> > may be more consistent. If we choose to do that, could we rename all new
> > configs here and in KIP-595 consistently?
> >
>
> I added the new names for the KIP-595 configurations to the configuration
> table.  I think prefixing these configurations with "controller.quorum" is
> the best option since it makes it clear that they are related to the
> controller quorum.
>
> > 81. Metrics
> > 81.1 kafka.controller:type=KafkaController,name=MetadataSnapshotLag: This
> > is now redundant since KIP-630 already has
> > kafka.controller:type=KafkaController,name=SnapshotLag.
>
> Thanks for bringing up the metric name issue.  I think it would be good to
> clarify that this is the lag for the metadata snapshot specifically.  So I
> prefer the name MetadataSnapshotLag.  I'll add a note to the table.
>
> > 81.2 Do we need both kafka.controller:type=KafkaServer,name=MetadataLag
> and
> > kafka.controller:type=KafkaController,name=MetadataLag since in the
> shared
> > controller mode, the metadata log is shared?
>
> Yes, both 

[jira] [Resolved] (KAFKA-10521) Remove ZK watch for completing partition reassignment

2020-10-13 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10521.
-
Fix Version/s: 2.7.0
   Resolution: Fixed

> Remove ZK watch for completing partition reassignment
> -
>
> Key: KAFKA-10521
> URL: https://issues.apache.org/jira/browse/KAFKA-10521
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 2.7.0
>Reporter: David Arthur
>Assignee: Jason Gustafson
>Priority: Minor
> Fix For: 2.7.0
>
>
> This is a follow-on from KAFKA-8836.
> Currently we have a ZK watch on the partition "/state" znode which fires a 
> handler in the controller to check if a reassignment can be completed due to 
> replicas catching up with the leader. This is located in 
> KafkaController#processPartitionReassignmentIsrChange
> Following the change to update the ISR via the new AlterIsr RPC, we would like 
> to remove this ZK watch and replace it with a direct call when the controller 
> writes out a new ISR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10607) Ensure the error counts contains the NONE

2020-10-13 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10607:
---

 Summary: Ensure the error counts contains the NONE
 Key: KAFKA-10607
 URL: https://issues.apache.org/jira/browse/KAFKA-10607
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Boyang Chen


The RPC errorCounts() implementations behave inconsistently relative to the 
default implementation; for example, certain RPCs filter out Errors.NONE during 
the map generation. We should make this consistent by including Errors.NONE in 
errorCounts() for all RPCs.
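As a rough sketch of the consistent behavior being proposed, counting every error including Errors.NONE might look like the following (an illustrative helper, not the actual AbstractResponse code):

import java.util.EnumMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.protocol.Errors;

// Illustrative helper: count every error in a response, including Errors.NONE,
// so that all RPCs report error counts the same way.
static Map<Errors, Integer> errorCounts(List<Errors> errors) {
    Map<Errors, Integer> counts = new EnumMap<>(Errors.class);
    for (Errors error : errors)
        counts.merge(error, 1, Integer::sum);
    return counts;
}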



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #9

2020-10-13 Thread Apache Jenkins Server
See 


Changes:

[Matthias J. Sax] KAFKA-10494: Eager handling of sending old values (#9415)

[cshapi] KAFKA-10570; Rename JMXReporter configs for KIP-629

[cshapi] MINOR rename kafka.utils.Whitelist to IncludeList

[cshapi] backport KAFKA-10571

[cshapi] MINOR internal KIP-629 changes to methods and variables


--
[...truncated 3.42 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 

Re: [VOTE] KIP-584: Versioning scheme for features

2020-10-13 Thread Kowshik Prakasam
Hi all,

I wanted to let you know that I have made the following minor changes to
the `kafka-features` CLI tool description in the KIP-584 write-up. The
purpose is to ensure the design is correct for a few things that came up
during implementation:

1. The CLI tool now produces tab-formatted output instead of JSON. This
aligns with the format produced by Kafka's other admin CLI tools,
e.g. `kafka-topics`.
2. Whenever feature updates are performed, the output of the CLI tool shows
the result of each feature update that was applied.
3. The CLI tool accepts an optional argument `--dry-run` which lets the
user preview the feature updates before applying them.

The following section of the KIP has been updated with the above changes:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport
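[Editor's aside: for readers following the implementation, reading the finalized features programmatically would look roughly like the sketch below, assuming the Admin#describeFeatures API proposed in the KIP; the final method and result type names may differ from what is shown here.]

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (Admin admin = Admin.create(props)) {
    // describeFeatures() and featureMetadata() are the names proposed in KIP-584 (assumption).
    System.out.println(admin.describeFeatures().featureMetadata().get());
}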

Please let me know if you have any questions.


Cheers,
Kowshik


On Thu, Oct 8, 2020 at 1:12 AM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> This is a very good point. I have updated the feature version deprecation
> section mentioning the same:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
> .
>
> Thank you for the suggestion.
>
>
> Cheers,
> Kowshik
>
>
> On Tue, Oct 6, 2020 at 5:30 PM Jun Rao  wrote:
>
>> Hi, Kowshik,
>>
>> Thanks for the follow up. Both look good to me.
>>
>> For 2, it would be useful to also add that an admin should make sure that
>> no clients are using a deprecated feature version (e.g. using the client
>> version metric) before deploying a release that deprecates it.
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Oct 6, 2020 at 3:46 PM Kowshik Prakasam 
>> wrote:
>>
>> > Hi Jun,
>> >
>> > I have added the following details in the KIP-584 write up:
>> >
>> > 1. Deployment, IBP deprecation and avoidance of double rolls. This
>> section
>> > talks about the various phases of work that would be required to use
>> this
>> > KIP to eventually avoid Broker double rolls in the cluster (whenever IBP
>> > values are advanced). Link to section:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Deployment,IBPdeprecationandavoidanceofdoublerolls
>> > .
>> >
>> > 2. Feature version deprecation. This section explains the idea for
>> feature
>> > version deprecation (using highest supported feature min version) which
>> you
>> > had proposed during code review:
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
>> > .
>> >
>> > Please let me know if you have any questions.
>> >
>> >
>> > Cheers,
>> > Kowshik
>> >
>> >
>> > On Tue, Sep 29, 2020 at 11:07 AM Jun Rao  wrote:
>> >
>> > > Hi, Kowshik,
>> > >
>> > > Thanks for the update. Regarding enabling a single rolling restart in
>> the
>> > > future, could we sketch out a bit how this will work by treating IBP
>> as a
>> > > feature? For example, IBP currently uses the release version and this
>> KIP
>> > > uses an integer for versions. How do we bridge the gap between the
>> two?
>> > > Does min.version still make sense for IBP as a feature?
>> > >
>> > > Thanks,
>> > >
>> > > Jun
>> > >
>> > > On Fri, Sep 25, 2020 at 5:57 PM Kowshik Prakasam <
>> kpraka...@confluent.io
>> > >
>> > > wrote:
>> > >
>> > > > Hi Colin,
>> > > >
>> > > > Thanks for the feedback. Those are very good points. I have made the
>> > > > following changes to the KIP as you had suggested:
>> > > > 1. Included the `timeoutMs` field in the `UpdateFeaturesRequest`
>> > schema.
>> > > > The initial implementation won't be making use of the field, but we
>> can
>> > > > always use it in the future as the need arises.
>> > > > 2. Modified the `FinalizedFeaturesEpoch` field in
>> `ApiVersionsResponse`
>> > > to
>> > > > use int64. This is to avoid overflow problems in the future once ZK
>> is
>> > > > gone.
>> > > >
>> > > > I have also incorporated these changes into the versioning write
>> path
>> > PR
>> > > > that is currently under review:
>> > > https://github.com/apache/kafka/pull/9001.
>> > > >
>> > > >
>> > > > Cheers,
>> > > > Kowshik
>> > > >
>> > > >
>> > > >
>> > > > On Fri, Sep 25, 2020 at 4:57 PM Kowshik Prakasam <
>> > kpraka...@confluent.io
>> > > >
>> > > > wrote:
>> > > >
>> > > > > Hi Jun,
>> > > > >
>> > > > > Thanks for the feedback. It's a very good point. I have now
>> modified
>> > > the
>> > > > > KIP-584 write-up "goals" section a bit. It now mentions one of the
>> > > goals
>> > > > as
>> > > > > enabling rolling upgrades using a single restart (instead of 2).
>> > Also I
>> > > > > have removed the text explicitly aiming for deprecation of IBP.
>> Note
>> > > that
>> > > > > previously under "Potential features in Kafka" the IBP was
>> mentioned
>> > > > under
>> > > > > point (4) 

[jira] [Resolved] (KAFKA-10573) Rename connect transform configs for KIP-629

2020-10-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10573.
---
Fix Version/s: (was: 2.8.0)
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` for inclusion in 2.8.0 (or whatever major/minor release 
follows 2.7.0), and to the `2.7` branch for inclusion in 2.7.0.

> Rename connect transform configs for KIP-629
> 
>
> Key: KAFKA-10573
> URL: https://issues.apache.org/jira/browse/KAFKA-10573
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.7.0
>
>
> Part of the implementation for 
> [KIP-629|https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10606) Auto create non-existent topics when fetching metadata for all topics

2020-10-13 Thread Lincong Li (Jira)
Lincong Li created KAFKA-10606:
--

 Summary: Auto create non-existent topics when fetching metadata 
for all topics
 Key: KAFKA-10606
 URL: https://issues.apache.org/jira/browse/KAFKA-10606
 Project: Kafka
  Issue Type: Bug
Reporter: Lincong Li


The "allow auto topic creation" flag is hardcoded to be true for the 
fetch-all-topic metadata request:

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java#L37

In the code below, the annotation claims that "*This never causes auto-creation*". 
This is NOT true: auto topic creation still gets triggered under some 
circumstances. So, this is a bug that needs to be fixed.

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/requests/MetadataRequest.java#L68


For example, the bug could be manifested in the below situation:

A topic T is being deleted and a request to fetch metadata for all topics gets 
sent to one broker. The broker reads names of all topics from its metadata 
cache (shown below).

https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L1196

Then the broker authorizes all topics and makes sure that they are allowed to 
be described. Then the broker tries to get metadata for every authorized topic 
by reading the metadata cache again, once for every topic (shown below).

https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/KafkaApis.scala#L1240

However, the metadata cache could have been updated while the broker was 
authorizing all topics: topic T and its metadata may no longer exist in the 
cache, since the topic got deleted and the metadata update requests eventually 
propagated from the controller to all brokers. So, at this point, when the 
broker tries to get metadata for topic T from its cache, it finds that the 
topic does not exist, and the broker tries to "auto create" topic T, since the 
allow-auto-topic-creation flag was set to true in the fetch-all-topics 
metadata request.

I think this bug has existed since "*metadataRequest.allowAutoTopicCreation*" was 
introduced.
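To make the race concrete, here is a toy simulation of the sequence described above (illustrative only; the real logic lives in KafkaApis and the metadata cache, and the names used here are made up):

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AutoCreateRaceSketch {
    // Hardcoded to true for fetch-all-topics metadata requests, per this report.
    static final boolean ALLOW_AUTO_TOPIC_CREATION = true;

    public static void main(String[] args) {
        Map<String, String> metadataCache = new HashMap<>();
        metadataCache.put("T", "metadata-of-T");

        // 1. The broker reads the names of all topics from its metadata cache.
        Set<String> allTopics = new HashSet<>(metadataCache.keySet());

        // 2. While the broker authorizes the topics, topic T is deleted and the cache updates.
        metadataCache.remove("T");

        // 3. The broker looks each topic up again, one at a time; a missing topic now
        //    triggers "auto create" because the flag was set to true on the request.
        for (String topic : allTopics) {
            if (!metadataCache.containsKey(topic) && ALLOW_AUTO_TOPIC_CREATION) {
                System.out.println("Would auto-create deleted topic " + topic);
            }
        }
    }
}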



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #8

2020-10-13 Thread Apache Jenkins Server
See 


Changes:

[John Roesler] KAFKA-10437: Implement new PAPI support for test-utils (#9396)


--
[...truncated 6.84 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #134

2020-10-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10533; KafkaRaftClient should flush log after appends (#9352)

[github] KAFKA-10437: Implement new PAPI support for test-utils (#9396)


--
[...truncated 6.84 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED


Re: [VOTE] KIP-516: Topic Identifiers

2020-10-13 Thread Jun Rao
Hi, Justine,

Thanks for starting the vote. Just a few minor comments.

1. It seems that we should remove the topic field from the
StopReplicaResponse below?
StopReplica Response (Version: 4) => error_code [topics]
  error_code => INT16
topics => topic topic_id* [partitions]

2. "After controller election, upon receiving the result, assign the
metadata topic its unique topic ID". Will the UUID for the metadata topic
be written to the metadata topic itself?

3. The vote request is designed to support multiple topics, each of which
may require a different sentinel ID. Should we reserve more than one
sentinel ID for future usage?

4. UUID.randomUUID(): Could we clarify whether this method returns any
sentinel ID? Also, how do we expect the user to use it?
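[Editor's aside on point 4: one illustrative way for ID generation to avoid reserved sentinel values is sketched below, using java.util.UUID; the KIP's actual ID type and reserved values may differ.]

import java.util.Set;
import java.util.UUID;

// Illustrative reserved sentinel IDs; the KIP reserves e.g. an all-zero ID (assumption).
static final Set<UUID> RESERVED_SENTINELS = Set.of(new UUID(0L, 0L));

static UUID randomTopicId() {
    UUID id;
    do {
        id = UUID.randomUUID();  // re-draw in the (vanishingly unlikely) case a sentinel comes up
    } while (RESERVED_SENTINELS.contains(id));
    return id;
}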

Thanks,

Jun

On Mon, Oct 12, 2020 at 9:54 AM Justine Olshan  wrote:

> Hi all,
>
> After further discussion and changes to this KIP, I think we are ready to
> restart this vote.
>
> Again, here is the KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
>
> The discussion thread is here:
>
> https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
>
> Please take a look and vote if you have a chance.
>
> Thanks,
> Justine
>
> On Tue, Sep 22, 2020 at 8:52 AM Justine Olshan 
> wrote:
>
> > Hi all,
> >
> > I'd like to call a vote on KIP-516: Topic Identifiers. Here is the KIP:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers
> >
> > The discussion thread is here:
> >
> >
> https://lists.apache.org/thread.html/7efa8cd169cadc7dc9cf86a7c0dbbab1836ddb5024d310fcebacf80c@%3Cdev.kafka.apache.org%3E
> >
> > Please take a look and vote if you have a chance.
> >
> > Thank you,
> > Justine
> >
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #167

2020-10-13 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-671: Add method to Shutdown entire Streams Application

2020-10-13 Thread Walker Carlson
Those are good points, Sophie and Matthias. I specified the defaults in the
KIP and standardized the names of the handlers to make them a bit more
readable.

Thanks for the suggestions,
Walker
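[Editor's aside: for readers following along, the handler shape being converged on in this thread looks roughly like the sketch below; names and defaults follow the messages above and are not the final API.]

public interface StreamsUncaughtExceptionHandler {

    enum StreamThreadExceptionResponse { REPLACE_THREAD, SHUTDOWN_CLIENT, SHUTDOWN_APPLICATION }

    enum GlobalThreadExceptionResponse { SHUTDOWN_CLIENT }

    // Invoked when a StreamThread dies with an uncaught exception.
    StreamThreadExceptionResponse handleStreamThreadException(Throwable exception);

    // Invoked when the GlobalThread dies; per the discussion, closing the client is
    // currently its only option, so that is the default.
    default GlobalThreadExceptionResponse handleGlobalThreadException(Throwable exception) {
        return GlobalThreadExceptionResponse.SHUTDOWN_CLIENT;
    }
}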

On Tue, Oct 13, 2020 at 12:24 PM Sophie Blee-Goldman 
wrote:

> Super nit: can we standardize the method & enum names?
>
> Right now we have these enums:
> StreamsUncaughtExceptionHandlerResponse
> StreamsUncaughtExceptionHandlerResponseGlobalThread
>
> and these callbacks:
> handleUncaughtException()
> handleExceptionInGlobalThread()
>
> The method names have different syntax, which is a bit clunky. I don't have
> any
> strong opinions on what grammar they should follow, just that it should be
> the
> same for each. I also think that we should specify "StreamThread" somewhere
> in the name of the StreamThread-specific callback, now that we have a
> second
> callback that specifies it's for the GlobalThread. Something like
> "*handleStreamThreadException()*" and "*handleGlobalThreadException*"
>
> The enums are ok, although I think we should include "StreamThread"
> somewhere
> like with the callbacks. And we can probably shorten them a bit. For
> example
> "*StreamThreadExceptionResponse*" and "*GlobalThreadExceptionResponse*"
>
>
>
> On Tue, Oct 13, 2020 at 11:48 AM Matthias J. Sax  wrote:
>
> > Thanks Walker.
> >
> > Overall, LGTM. However, I am wondering if we should have default
> > implementations for both handler methods? Before the latest change,
> > there was only one method and having a default was not necessary.
> > However, forcing people to implement both methods might not be the best
> > user experience: for example, if there is no global thread, one should
> > not need to implement the global handler method (and the other way
> around).
> >
> > Thus, it might be good to add default for both methods. If we add
> > defaults, we should explain the default behavior to the KIP.
> >
> > -Matthias
> >
> > On 10/12/20 2:32 PM, Walker Carlson wrote:
> > > Hello all,
> > >
> > > I just wanted to let you know that I had to make 2 minor updates to the
> > KIP.
> > >
> > > 1) I changed the behavior of the shutdown client to not leave the
> client
> > in
> > > Error but instead close directly because this aligns better with our
> > > state machine.
> > >
> > > 2) I added a separate call back for the global thread as it does not
> have
> > > all the options as a streamThread does. i.e. replace. The default will
> be
> > > to close the client. that will also be the only option as that is the
> > > current behavior for the global thread.
> > >
> > > you can find the diff here:
> > >
> >
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=158876566=21=23
> > >
> > > If you have any problems with these changes let me know and we can
> > discuss
> > > them further
> > >
> > > Thank you,
> > > Walker
> > >
> > > On Wed, Sep 30, 2020 at 7:33 AM Walker Carlson 
> > > wrote:
> > >
> > >>
> > >> Bruno Cadonna 
> > >> 4:51 AM (2 hours ago)
> > >> to dev
> > >> Thank you all for voting!
> > >>
> > >> This KIP is accepted with +3 binding (Guozhang, Bill, Matthias) and +2
> > >> non-binding (Bruno, Leah).
> > >>
> > >> Matthias, we will take care of  the global threads, and for the
> > >> replacement that will depend on Kip-663.
> > >>
> > >> Best,
> > >>
> > >> On Wed, Sep 30, 2020 at 4:59 AM Bruno Cadonna 
> > wrote:
> > >>
> > >>> Thanks a lot Walker!
> > >>>
> > >>> +1 (non-binding)
> > >>>
> > >>> Best,
> > >>> Bruno
> > >>>
> > >>> On 30.09.20 03:10, Matthias J. Sax wrote:
> >  Thanks Walker. The proposed API changes LGTM.
> > 
> >  +1 (binding)
> > 
> >  One minor nit: you should also mention the global-thread that also
> > needs
> >  to be shutdown if requested by the user.
> > 
> >  Minor side question: should we actually terminate a thread and
> create
> > a
> >  new one, or instead revive the existing thread (reusing its existing
> > >>> ID)?
> > 
> > 
> >  -Matthias
> > 
> >  On 9/29/20 2:39 PM, Bill Bejeck wrote:
> > > Thanks for the KIP Walker.
> > >
> > > +1 (binding)
> > >
> > > -Bill
> > >
> > > On Tue, Sep 29, 2020 at 4:59 PM Guozhang Wang 
> > >>> wrote:
> > >
> > >> +1 again on the KIP.
> > >>
> > >> On Tue, Sep 29, 2020 at 1:51 PM Leah Thomas  >
> > >>> wrote:
> > >>
> > >>> Hey Walker,
> > >>>
> > >>> Thanks for the KIP! I'm +1, non-binding.
> > >>>
> > >>> Cheers,
> > >>> Leah
> > >>>
> > >>> On Tue, Sep 29, 2020 at 1:56 PM Walker Carlson <
> > >>> wcarl...@confluent.io>
> > >>> wrote:
> > >>>
> >  Hello all,
> > 
> >  I made some changes to the KIP the descriptions are on the
> > >>> discussion
> >  thread. If you have already voted I would ask you to confirm
> your
> > >>> vote.
> > 
> >  Otherwise please vote so we can get this feature in.
> > 
> >  Thanks,
> >  Walker

[jira] [Resolved] (KAFKA-10570) Rename JMXReporter configs for KIP-629

2020-10-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-10570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xavier Léauté resolved KAFKA-10570.
---
Fix Version/s: 2.7.0
   Resolution: Fixed

> Rename JMXReporter configs for KIP-629
> --
>
> Key: KAFKA-10570
> URL: https://issues.apache.org/jira/browse/KAFKA-10570
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
> Fix For: 2.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-671: Add method to Shutdown entire Streams Application

2020-10-13 Thread Sophie Blee-Goldman
Super nit: can we standardize the method & enum names?

Right now we have these enums:
StreamsUncaughtExceptionHandlerResponse
StreamsUncaughtExceptionHandlerResponseGlobalThread

and these callbacks:
handleUncaughtException()
handleExceptionInGlobalThread()

The method names have different syntax, which is a bit clunky. I don't have
any
strong opinions on what grammar they should follow, just that it should be
the
same for each. I also think that we should specify "StreamThread" somewhere
in the name of the StreamThread-specific callback, now that we have a second
callback that specifies it's for the GlobalThread. Something like
"*handleStreamThreadException()*" and "*handleGlobalThreadException*"

The enums are ok, although I think we should include "StreamThread"
somewhere
like with the callbacks. And we can probably shorten them a bit. For example
"*StreamThreadExceptionResponse*" and "*GlobalThreadExceptionResponse*"



On Tue, Oct 13, 2020 at 11:48 AM Matthias J. Sax  wrote:

> Thanks Walker.
>
> Overall, LGTM. However, I am wondering if we should have default
> implementations for both handler methods? Before the latest change,
> there was only one method and having a default was not necessary.
> However, forcing people to implement both methods might not be the best
> user experience: for example, if there is no global thread, one should
> not need to implement the global handler method (and the other way around).
>
> Thus, it might be good to add default for both methods. If we add
> defaults, we should explain the default behavior to the KIP.
>
> -Matthias
>
> On 10/12/20 2:32 PM, Walker Carlson wrote:
> > Hello all,
> >
> > I just wanted to let you know that I had to make 2 minor updates to the
> KIP.
> >
> > 1) I changed the behavior of the shutdown client to not leave the client
> in
> > Error but instead close directly because this aligns better with our
> > state machine.
> >
> > 2) I added a separate call back for the global thread as it does not have
> > all the options as a streamThread does. i.e. replace. The default will be
> > to close the client. that will also be the only option as that is the
> > current behavior for the global thread.
> >
> > you can find the diff here:
> >
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=158876566=21=23
> >
> > If you have any problems with these changes let me know and we can
> discuss
> > them further
> >
> > Thank you,
> > Walker
> >
> > On Wed, Sep 30, 2020 at 7:33 AM Walker Carlson 
> > wrote:
> >
> >>
> >> Bruno Cadonna 
> >> 4:51 AM (2 hours ago)
> >> to dev
> >> Thank you all for voting!
> >>
> >> This KIP is accepted with +3 binding (Guozhang, Bill, Matthias) and +2
> >> non-binding (Bruno, Leah).
> >>
> >> Matthias, we will take care of  the global threads, and for the
> >> replacement that will depend on Kip-663.
> >>
> >> Best,
> >>
> >> On Wed, Sep 30, 2020 at 4:59 AM Bruno Cadonna 
> wrote:
> >>
> >>> Thanks a lot Walker!
> >>>
> >>> +1 (non-binding)
> >>>
> >>> Best,
> >>> Bruno
> >>>
> >>> On 30.09.20 03:10, Matthias J. Sax wrote:
>  Thanks Walker. The proposed API changes LGTM.
> 
>  +1 (binding)
> 
>  One minor nit: you should also mention the global-thread that also
> needs
>  to be shutdown if requested by the user.
> 
>  Minor side question: should we actually terminate a thread and create
> a
>  new one, or instead revive the existing thread (reusing its existing
> >>> ID)?
> 
> 
>  -Matthias
> 
>  On 9/29/20 2:39 PM, Bill Bejeck wrote:
> > Thanks for the KIP Walker.
> >
> > +1 (binding)
> >
> > -Bill
> >
> > On Tue, Sep 29, 2020 at 4:59 PM Guozhang Wang 
> >>> wrote:
> >
> >> +1 again on the KIP.
> >>
> >> On Tue, Sep 29, 2020 at 1:51 PM Leah Thomas 
> >>> wrote:
> >>
> >>> Hey Walker,
> >>>
> >>> Thanks for the KIP! I'm +1, non-binding.
> >>>
> >>> Cheers,
> >>> Leah
> >>>
> >>> On Tue, Sep 29, 2020 at 1:56 PM Walker Carlson <
> >>> wcarl...@confluent.io>
> >>> wrote:
> >>>
>  Hello all,
> 
>  I made some changes to the KIP the descriptions are on the
> >>> discussion
>  thread. If you have already voted I would ask you to confirm your
> >>> vote.
> 
>  Otherwise please vote so we can get this feature in.
> 
>  Thanks,
>  Walker
> 
>  On Thu, Sep 24, 2020 at 4:36 PM John Roesler  >
> >>> wrote:
> 
> > Thanks for the KIP, Walker!
> >
> > I’m +1 (binding)
> >
> > -John
> >
> > On Mon, Sep 21, 2020, at 17:04, Guozhang Wang wrote:
> >> Thanks for finalizing the KIP. +1 (binding)
> >>
> >>
> >> Guozhang
> >>
> >> On Mon, Sep 21, 2020 at 1:38 PM Walker Carlson <
> >>> wcarl...@confluent.io>
> >> wrote:
> >>
> >>> Hello all,
> 

Re: [VOTE] KIP-671: Add method to Shutdown entire Streams Application

2020-10-13 Thread Matthias J. Sax
Thanks Walker.

Overall, LGTM. However, I am wondering if we should have default
implementations for both handler methods? Before the latest change,
there was only one method and having a default was not necessary.
However, forcing people to implement both methods might not be the best
user experience: for example, if there is no global thread, one should
not need to implement the global handler method (and the other way around).

Thus, it might be good to add default for both methods. If we add
defaults, we should explain the default behavior to the KIP.

-Matthias

On 10/12/20 2:32 PM, Walker Carlson wrote:
> Hello all,
> 
> I just wanted to let you know that I had to make 2 minor updates to the KIP.
> 
> 1) I changed the behavior of the shutdown client to not leave the client in
> Error but instead close directly because this aligns better with our
> state machine.
> 
> 2) I added a separate call back for the global thread as it does not have
> all the options as a streamThread does. i.e. replace. The default will be
> to close the client. that will also be the only option as that is the
> current behavior for the global thread.
> 
> you can find the diff here:
> https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=158876566=21=23
> 
> If you have any problems with these changes let me know and we can discuss
> them further
> 
> Thank you,
> Walker
> 
> On Wed, Sep 30, 2020 at 7:33 AM Walker Carlson 
> wrote:
> 
>>
>> Bruno Cadonna 
>> 4:51 AM (2 hours ago)
>> to dev
>> Thank you all for voting!
>>
>> This KIP is accepted with +3 binding (Guozhang, Bill, Matthias) and +2
>> non-binding (Bruno, Leah).
>>
>> Matthias, we will take care of  the global threads, and for the
>> replacement that will depend on Kip-663.
>>
>> Best,
>>
>> On Wed, Sep 30, 2020 at 4:59 AM Bruno Cadonna  wrote:
>>
>>> Thanks a lot Walker!
>>>
>>> +1 (non-binding)
>>>
>>> Best,
>>> Bruno
>>>
>>> On 30.09.20 03:10, Matthias J. Sax wrote:
 Thanks Walker. The proposed API changes LGTM.

 +1 (binding)

 One minor nit: you should also mention the global-thread that also needs
 to be shutdown if requested by the user.

 Minor side question: should we actually terminate a thread and create a
 new one, or instead revive the existing thread (reusing its existing
>>> ID)?


 -Matthias

 On 9/29/20 2:39 PM, Bill Bejeck wrote:
> Thanks for the KIP Walker.
>
> +1 (binding)
>
> -Bill
>
> On Tue, Sep 29, 2020 at 4:59 PM Guozhang Wang 
>>> wrote:
>
>> +1 again on the KIP.
>>
>> On Tue, Sep 29, 2020 at 1:51 PM Leah Thomas 
>>> wrote:
>>
>>> Hey Walker,
>>>
>>> Thanks for the KIP! I'm +1, non-binding.
>>>
>>> Cheers,
>>> Leah
>>>
>>> On Tue, Sep 29, 2020 at 1:56 PM Walker Carlson <
>>> wcarl...@confluent.io>
>>> wrote:
>>>
 Hello all,

 I made some changes to the KIP the descriptions are on the
>>> discussion
 thread. If you have already voted I would ask you to confirm your
>>> vote.

 Otherwise please vote so we can get this feature in.

 Thanks,
 Walker

 On Thu, Sep 24, 2020 at 4:36 PM John Roesler 
>>> wrote:

> Thanks for the KIP, Walker!
>
> I’m +1 (binding)
>
> -John
>
> On Mon, Sep 21, 2020, at 17:04, Guozhang Wang wrote:
>> Thanks for finalizing the KIP. +1 (binding)
>>
>>
>> Guozhang
>>
>> On Mon, Sep 21, 2020 at 1:38 PM Walker Carlson <
>>> wcarl...@confluent.io>
>> wrote:
>>
>>> Hello all,
>>>
>>> I would like to start a thread to vote for KIP-671 to add a
>> method
>>> to
> close
>>> all clients in a kafka streams application.
>>>
>>> KIP:
>>>
>>>
>

>>>
>>
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Shutdown+Streams+Application+when+appropriate+exception+is+thrown
>>>
>>> Discussion thread: *here
>>> <
>>>
>

>>>
>>
>>> https://mail-archives.apache.org/mod_mbox/kafka-dev/202009.mbox/%3CCAC55fuh3HAGCxz-PzxTJraczy6T-os2oiCV328PBeuJQSVYASg%40mail.gmail.com%3E
 *
>>>
>>> Thanks,
>>> -Walker
>>>
>>
>>
>> --
>> -- Guozhang
>>
>

>>>
>>
>>
>> --
>> -- Guozhang
>>
>
>>>
>>
> 


Re: confluence access

2020-10-13 Thread Matthias J. Sax
Done.

On 10/13/20 8:51 AM, Dániel Urbán wrote:
> Hi,
> 
> I'd like to access the wiki and contribute to KIPs, please add me. My
> username is urbandan.
> 
> Thanks in advance,
> Daniel
> 


[GitHub] [kafka-site] mjsax merged pull request #302: Updating title on videos page

2020-10-13 Thread GitBox


mjsax merged pull request #302:
URL: https://github.com/apache/kafka-site/pull/302


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] scott-confluent opened a new pull request #302: Updating title on videos page

2020-10-13 Thread GitBox


scott-confluent opened a new pull request #302:
URL: https://github.com/apache/kafka-site/pull/302


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (KAFKA-10215) MockProcessorContext doesn't work with SessionStores

2020-10-13 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10215.
--
Resolution: Fixed

Fixed in the new processor API.

> MockProcessorContext doesn't work with SessionStores
> 
>
> Key: KAFKA-10215
> URL: https://issues.apache.org/jira/browse/KAFKA-10215
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.7.0
>
>
> The recommended pattern for testing custom Processor implementations is to 
> use the test-utils MockProcessorContext. If a Processor implementation needs 
> a store, the store also has to be initialized with the same context. However, 
> the existing (in-memory and persistent) Session store implementations perform 
> internal casts that result in class cast exceptions if you attempt to 
> initialize them with the MockProcessorContext.
> A workaround is to embed the processor in an application and use the 
> TopologyTestDriver instead.
> The fix is the same as for KAFKA-10200.
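> For illustration, a rough sketch of the TopologyTestDriver workaround. The processor,
> store, and topic names below are made up for the example, and the processor body is
> only a stand-in for whatever custom logic needs the session store:
> 
> final StoreBuilder<SessionStore<String, Long>> storeBuilder = Stores.sessionStoreBuilder(
>     Stores.inMemorySessionStore("session-store", Duration.ofMinutes(5)),
>     Serdes.String(), Serdes.Long());
> 
> final Topology topology = new Topology();
> topology.addSource("source", new StringDeserializer(), new LongDeserializer(), "input-topic");
> topology.addProcessor("proc", () -> new Processor<String, Long>() {
>     private SessionStore<String, Long> store;
> 
>     @Override
>     public void init(final ProcessorContext context) {
>         store = (SessionStore<String, Long>) context.getStateStore("session-store");
>     }
> 
>     @Override
>     public void process(final String key, final Long value) {
>         // Stand-in logic: just read the existing sessions for this key.
>         try (final KeyValueIterator<Windowed<String>, Long> sessions = store.fetch(key)) {
>             while (sessions.hasNext()) {
>                 sessions.next();
>             }
>         }
>     }
> 
>     @Override
>     public void close() { }
> }, "source");
> topology.addStateStore(storeBuilder, "proc");
> 
> final Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "session-store-test");
> props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
> 
> // The driver initializes the session store itself, so the processor never hits the
> // class cast exception described above.
> try (final TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
>     driver.createInputTopic("input-topic", new StringSerializer(), new LongSerializer())
>           .pipeInput("key", 1L);
> }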



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10605) KIP-478: deprecate the replaced Processor API members

2020-10-13 Thread John Roesler (Jira)
John Roesler created KAFKA-10605:


 Summary: KIP-478: deprecate the replaced Processor API members
 Key: KAFKA-10605
 URL: https://issues.apache.org/jira/browse/KAFKA-10605
 Project: Kafka
  Issue Type: Sub-task
Reporter: John Roesler
Assignee: John Roesler
 Fix For: 2.7.0


This is a minor task, but we shouldn't do the release without it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10562) KIP-478: Delegate the store wrappers to the new init method

2020-10-13 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10562.
--
Resolution: Fixed

> KIP-478: Delegate the store wrappers to the new init method
> ---
>
> Key: KAFKA-10562
> URL: https://issues.apache.org/jira/browse/KAFKA-10562
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Blocker
> Fix For: 2.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10437) KIP-478: Implement test-utils changes

2020-10-13 Thread John Roesler (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Roesler resolved KAFKA-10437.
--
Resolution: Fixed

> KIP-478: Implement test-utils changes
> -
>
> Key: KAFKA-10437
> URL: https://issues.apache.org/jira/browse/KAFKA-10437
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.7.0
>
>
> In addition to implementing the KIP, search for and resolve these todos:
> "TODO will be fixed in KAFKA-10437"
> Also, add unit tests in test-utils making sure we can initialize _all_ the 
> kinds of stores with the MPC and MPC.getSSC.
>  
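> A rough sketch of one such test, expanding the abbreviations above to MockProcessorContext
> and getStateStoreContext() (the exact class and method names in test-utils are an
> assumption here, based on the new processor API from KIP-478):
> 
> final KeyValueStore<String, Long> store = Stores.keyValueStoreBuilder(
>         Stores.inMemoryKeyValueStore("test-store"),
>         Serdes.String(), Serdes.Long())
>     .withLoggingDisabled()
>     .build();
> 
> final MockProcessorContext<Void, Void> context = new MockProcessorContext<>();
> // The same pattern should work for window and session stores, timestamped or not.
> store.init(context.getStateStoreContext(), store);
> 
> store.put("a", 1L);
> assertEquals(Long.valueOf(1L), store.get("a"));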



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10533) Add log flush semantics to simulation test

2020-10-13 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10533.
-
Resolution: Fixed

> Add log flush semantics to simulation test
> --
>
> Key: KAFKA-10533
> URL: https://issues.apache.org/jira/browse/KAFKA-10533
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> In order to do KAFKA-10526, it is useful to add support for flush semantics 
> to `MockLog` and to use them in `RaftSimulationTest`. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


confluence access

2020-10-13 Thread Dániel Urbán
Hi,

I'd like to access the wiki and contribute to KIPs, please add me. My
username is urbandan.

Thanks in advance,
Daniel


[jira] [Created] (KAFKA-10604) The StreamsConfig.STATE_DIR_CONFIG's default value does not reflect the JVM parameter or OS-specific settings

2020-10-13 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-10604:
---

 Summary: The StreamsConfig.STATE_DIR_CONFIG's default value does 
not reflect the JVM parameter or OS-specific settings
 Key: KAFKA-10604
 URL: https://issues.apache.org/jira/browse/KAFKA-10604
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Dongjin Lee
Assignee: Dongjin Lee


I found this problem working for 
[KAFKA-10585|https://issues.apache.org/jira/browse/KAFKA-10585].

The JVM's temporary directory location differs per OS, and the JVM allows changing 
it with the `java.io.tmpdir` system property. On Linux, it defaults to `/tmp`.

The problem is that the default value of StreamsConfig.STATE_DIR_CONFIG 
(`state.dir`) is hard-coded to `/tmp/kafka-streams`. For this reason, it does not 
change if the application runs on an OS other than Linux or if the user specifies 
the `java.io.tmpdir` system property.

It should be `{temp-directory}/kafka-streams`, not `/tmp/kafka-streams`.
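
A minimal sketch of the intended behavior, deriving the default from the JVM's temporary 
directory instead of hard-coding `/tmp` (class and variable names are illustrative only):

import java.nio.file.Paths;

public class DefaultStateDirExample {
    public static void main(final String[] args) {
        // Respects -Djava.io.tmpdir=... as well as the per-OS default
        // (/tmp on Linux, e.g. C:\Users\<user>\AppData\Local\Temp on Windows).
        final String tmpDir = System.getProperty("java.io.tmpdir");
        final String defaultStateDir = Paths.get(tmpDir, "kafka-streams").toString();
        System.out.println(defaultStateDir);
    }
}

In the meantime, users can always set the location explicitly via 
StreamsConfig.STATE_DIR_CONFIG, e.g. 
`props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");`.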



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #34

2020-10-13 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10574: Fix infinite loop in Values::parseString (#9375)


--
[...truncated 6.31 MB...]
org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureRecordsOutputToChildByName PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCapturePunctuator 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED