Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #202

2020-11-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #194

2020-11-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: KIP-584: Remove admin client facility to read features from 
controller (#9536)


--
[...truncated 3.42 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #193

2020-11-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10471 Mark broker crash during log loading as unclean shutdown 
(#9364)

[github] KAFKA-10632; Raft client should push all committed data to state 
machines (#9482)


--
[...truncated 3.42 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #201

2020-11-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10632; Raft client should push all committed data to state 
machines (#9482)


--
[...truncated 3.45 MB...]
org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@53696421,
 timestamped = true, caching = true, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c9d78f7, 
timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@c9d78f7, 
timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5d89bd89,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5d89bd89,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6ac5e0b9,
 timestamped = true, caching = false, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@6ac5e0b9,
 timestamped = true, caching = false, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7215d9ac,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@7215d9ac,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@15284dd9,
 timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@15284dd9,
 timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5dddf00, 
timestamped = true, caching = false, logging = false] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.TimestampedWindowStoreBuilder@5dddf00, 
timestamped = true, caching = false, logging = false] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@40d8272a, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@40d8272a, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cdc4e24, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cdc4e24, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4bcd9ee3, 
timestamped = false, caching = true, logging = true] STARTED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4bcd9ee3, 
timestamped = false, caching = true, logging = true] PASSED

org.apache.kafka.streams.test.MockProcessorContextStateStoreTest > 
shouldEitherInitOrThrow[builder = 

[jira] [Resolved] (KAFKA-10343) Add IBP based ApiVersion constraint tests

2020-11-02 Thread Boyang Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-10343.
-
Resolution: Won't Fix

We are planning to work on a longer-term fix for this issue, see 

> Add IBP based ApiVersion constraint tests
> -
>
> Key: KAFKA-10343
> URL: https://issues.apache.org/jira/browse/KAFKA-10343
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.8.0
>
>
> We need to add ApiVersion constraint tests based on IBP to remind future 
> developers to bump it when a new RPC version is developed.
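
To illustrate the kind of constraint test the description asks for, here is a 
minimal, hypothetical JUnit sketch (the API names and version numbers below are 
placeholders, not Kafka's actual values):

{code:java}
import static org.junit.Assert.assertEquals;

import java.util.Map;
import org.junit.Test;

public class ApiVersionConstraintTest {

    // Hypothetical snapshot of the highest RPC versions allowed under the latest IBP.
    private static final Map<String, Integer> EXPECTED_MAX_VERSIONS =
            Map.of("Produce", 8, "Fetch", 12);

    // Hypothetical lookup of the max version currently declared for an RPC.
    private int latestVersion(String apiName) {
        if (apiName.equals("Produce")) return 8;
        if (apiName.equals("Fetch")) return 12;
        throw new IllegalArgumentException("Unknown API: " + apiName);
    }

    @Test
    public void bumpingAnRpcVersionMustUpdateTheIbpConstraint() {
        // If someone bumps an RPC version without revisiting the IBP gating,
        // this fails and points them at the constraint that needs updating.
        for (Map.Entry<String, Integer> entry : EXPECTED_MAX_VERSIONS.entrySet()) {
            assertEquals("Max version changed for " + entry.getKey()
                            + "; update the IBP constraint and this snapshot",
                    (int) entry.getValue(), latestVersion(entry.getKey()));
        }
    }
}
{code}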



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10677) Complete fetches in purgatory immediately after raft leader resigns

2020-11-02 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10677:
---

 Summary: Complete fetches in purgatory immediately after raft 
leader resigns
 Key: KAFKA-10677
 URL: https://issues.apache.org/jira/browse/KAFKA-10677
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


The current logic does not complete fetches in purgatory immediately after the 
leader has resigned. The idea was that there was no point in doing so until the 
election had completed because clients would just have to retry. However, the 
fetches in purgatory might correspond to requests from other voters, so the 
concern is that this might delay a leader election. For example, the voter 
might be trying to send a Vote request on the same socket that is blocking on a 
pending Fetch.
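
A minimal sketch of the proposed behaviour, assuming a purgatory-like structure
holding parked fetches (the FetchPurgatory class below is a simplified
placeholder, not the actual broker or raft implementation):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Simplified stand-in for a fetch purgatory: parked fetches that normally
// complete when new data arrives or the wait time expires.
class FetchPurgatory {
    private final List<CompletableFuture<Void>> pending = new ArrayList<>();

    CompletableFuture<Void> await() {
        CompletableFuture<Void> future = new CompletableFuture<>();
        pending.add(future);
        return future;
    }

    // On leader resignation, complete everything immediately so that a voter
    // blocked on a pending Fetch can reuse the socket to send a Vote request.
    void completeAllOnResignation() {
        List<CompletableFuture<Void>> toComplete = new ArrayList<>(pending);
        pending.clear();
        toComplete.forEach(f -> f.complete(null));
    }
}

public class ResignationExample {
    public static void main(String[] args) {
        FetchPurgatory purgatory = new FetchPurgatory();
        CompletableFuture<Void> voterFetch = purgatory.await();
        // ... leader resigns ...
        purgatory.completeAllOnResignation();
        System.out.println("voter fetch completed: " + voterFetch.isDone()); // true
    }
}
{code}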



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10676) Decide whether Raft listener callback errors are fatal

2020-11-02 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-10676:
---

 Summary: Decide whether Raft listener callback errors are fatal
 Key: KAFKA-10676
 URL: https://issues.apache.org/jira/browse/KAFKA-10676
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


If a `RaftClient.Listener` callback fails, we need to decide how to handle it. 
The current code assumes that these errors are fatal and that exceptions will get 
propagated. This might be what we want long term. With KIP-631, there will be 
one listener for the broker and one listener for the controller. If one of them 
fails, we should probably shut down the server rather than remain in a 
half-fenced state. However, we should reconsider this once we get closer to 
integration.
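
A rough sketch of the "treat callback errors as fatal" option (the Listener
interface and shutdown hook below are simplified placeholders, not the real
RaftClient API):

{code:java}
import java.util.List;

public class ListenerErrorHandlingSketch {

    // Simplified placeholder for RaftClient.Listener; the real interface
    // receives batches of committed records rather than plain strings.
    interface Listener {
        void handleCommit(List<String> committedRecords);
    }

    private final List<Listener> listeners;
    private final Runnable shutdownServer;

    ListenerErrorHandlingSketch(List<Listener> listeners, Runnable shutdownServer) {
        this.listeners = listeners;
        this.shutdownServer = shutdownServer;
    }

    void deliverCommit(List<String> records) {
        for (Listener listener : listeners) {
            try {
                listener.handleCommit(records);
            } catch (RuntimeException e) {
                // Treat the failure as fatal: with one listener for the broker and
                // one for the controller, continuing with only one of them working
                // risks leaving the server in a half-fenced state.
                shutdownServer.run();
                throw e;
            }
        }
    }
}
{code}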




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10632) Raft client should push all committed data to listeners

2020-11-02 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10632.
-
Resolution: Fixed

> Raft client should push all committed data to listeners
> ---
>
> Key: KAFKA-10632
> URL: https://issues.apache.org/jira/browse/KAFKA-10632
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> We would like to move to a push model for sending committed data to the state 
> machine. This simplifies the state machine a bit, since it does not need to 
> track its own position and poll for new data. It also allows the raft layer 
> to ensure that all committed data up to the start of a leader epoch has been 
> sent before allowing the state machine to begin sending writes. Finally, it 
> allows us to take advantage of optimizations: for example, we can avoid 
> re-reading writes that have been sent to the leader; instead, we can retain 
> the data in memory and push it to the state machine once it becomes 
> committed.
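
For context, a minimal sketch of the push model being described, assuming an
illustrative handleCommit callback rather than the actual RaftClient.Listener
signature:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative push-style state machine: the raft layer pushes committed updates,
// so the state machine never tracks its own fetch position or polls for new data.
public class PushedStateMachine {

    private final Map<String, String> state = new ConcurrentHashMap<>();
    private long lastCommittedOffset = -1L;

    // Hypothetical callback invoked by the raft layer once records up to
    // endOffset are committed; the real API delivers serialized log batches.
    public synchronized void handleCommit(long endOffset, Map<String, String> committedUpdates) {
        state.putAll(committedUpdates);
        lastCommittedOffset = endOffset;
    }

    public synchronized long lastCommittedOffset() {
        return lastCommittedOffset;
    }

    public String get(String key) {
        return state.get(key);
    }
}
{code}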



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #199

2020-11-02 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10675) Error message from ConnectSchema.validateValue() should include the name of the schema.

2020-11-02 Thread Alexander Iskuskov (Jira)
Alexander Iskuskov created KAFKA-10675:
--

 Summary: Error message from ConnectSchema.validateValue() should 
include the name of the schema.
 Key: KAFKA-10675
 URL: https://issues.apache.org/jira/browse/KAFKA-10675
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Alexander Iskuskov


The following error message
{code:java}
org.apache.kafka.connect.errors.DataException: Invalid Java object for schema 
type INT64: class java.lang.Long for field: "moderate_time"
{code}
can be confusing because {{java.lang.Long}} is an acceptable type for schema 
{{INT64}}. In fact, in this case the {{org.apache.kafka.connect.data.Timestamp}} 
schema is used, but this information is not included in the message.
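
A hedged sketch of how the message could include the schema name (this is an
illustration built on the public Connect data API, not the actual
ConnectSchema.validateValue implementation):

{code:java}
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Timestamp;
import org.apache.kafka.connect.errors.DataException;

public class ValidateValueMessageSketch {

    // Sketch of a validation failure message that includes the schema name, so a
    // logical type such as Timestamp is distinguishable from a plain INT64 schema.
    static void failValidation(String field, Schema schema, Object value) {
        String schemaName = schema.name() == null ? "(unnamed)" : schema.name();
        throw new DataException("Invalid Java object for schema \"" + schemaName
                + "\" of type " + schema.type() + ": " + value.getClass()
                + " for field: \"" + field + "\"");
    }

    public static void main(String[] args) {
        try {
            // With the schema name included, the message points at
            // org.apache.kafka.connect.data.Timestamp rather than just INT64.
            failValidation("moderate_time", Timestamp.SCHEMA, 1604275200000L);
        } catch (DataException e) {
            System.out.println(e.getMessage());
        }
    }
}
{code}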



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10674) Brokers should know the active controller ApiVersion after enabling KIP-590 forwarding

2020-11-02 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10674:
---

 Summary: Brokers should know the active controller ApiVersion 
after enabling KIP-590 forwarding
 Key: KAFKA-10674
 URL: https://issues.apache.org/jira/browse/KAFKA-10674
 Project: Kafka
  Issue Type: Improvement
  Components: clients, core
Reporter: Boyang Chen


Admin clients send an ApiVersions request to the broker when the first connection 
is established. The tricky part once forwarding is enabled is that, for 
forwardable APIs, the admin client needs to know a commonly agreed range of 
ApiVersions among the handling broker, the active controller, and itself.

Right now the inter-broker APIs are guaranteed by IBP constraints, but the 
forwardable APIs are not. A compromise solution would be to put all forwardable 
APIs under the IBP, which is brittle and makes consistency hard to maintain.

Instead, any broker connecting to the active controller should send an 
ApiVersions request from the beginning, so it is easy to compute that information 
and send it back to admin clients when they issue their own ApiVersions requests. 
Any roll of the active controller will trigger a reconnection between broker and 
controller, which guarantees refreshed ApiVersions between the two. This approach 
avoids the tight coupling with the IBP, and the broker can simply close the 
connection to the admin client to trigger its retry logic and a refresh of the 
ApiVersions. Since this failure should be rare, the two round trips and timeout 
delays are well compensated by the reduced engineering work.
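
To make the commonly agreed range concrete, a small illustrative sketch of
intersecting version ranges across the handling broker, the active controller,
and the client (the VersionRange type is a placeholder, not Kafka's ApiVersions
classes):

{code:java}
import java.util.Optional;

public class CommonApiVersionRange {

    // Illustrative version range for a single forwardable RPC: [min, max].
    static final class VersionRange {
        final short min;
        final short max;
        VersionRange(short min, short max) {
            this.min = min;
            this.max = max;
        }
        @Override
        public String toString() {
            return "[" + min + ", " + max + "]";
        }
    }

    // The commonly agreed range is the intersection of what the handling broker,
    // the active controller, and the admin client each support; empty if they do not overlap.
    static Optional<VersionRange> intersect(VersionRange broker, VersionRange controller, VersionRange client) {
        short min = (short) Math.max(broker.min, Math.max(controller.min, client.min));
        short max = (short) Math.min(broker.max, Math.min(controller.max, client.max));
        return min <= max ? Optional.of(new VersionRange(min, max)) : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(intersect(new VersionRange((short) 0, (short) 7),
                                     new VersionRange((short) 2, (short) 9),
                                     new VersionRange((short) 1, (short) 6)));
        // Prints: Optional[[2, 6]]
    }
}
{code}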



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #46

2020-11-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10454 / (2.6) Update copartitionSourceGroups when optimization 
algorithm is triggered (#9468)


--
[...truncated 6.33 MB...]

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0101:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:testClasses
> Task :streams:upgrade-system-tests-0101:checkstyleTest
> 

Build failed in Jenkins: Kafka » kafka-2.7-jdk8 #50

2020-11-02 Thread Apache Jenkins Server
See 


Changes:

[Manikumar Reddy] KAFKA-10669: Make CurrentLeaderEpoch field ignorable and set 
MaxNumOffsets field default to 1


--
[...truncated 6.87 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

[jira] [Created] (KAFKA-10673) ConnectionQuotas should cache interbroker listener name

2020-11-02 Thread David Mao (Jira)
David Mao created KAFKA-10673:
-

 Summary: ConnectionQuotas should cache interbroker listener name
 Key: KAFKA-10673
 URL: https://issues.apache.org/jira/browse/KAFKA-10673
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.7.0
Reporter: David Mao
Assignee: David Mao
 Fix For: 2.8.0


ConnectionQuotas.protectedListener calls `config.interBrokerListenerName`. This 
is a surprisingly expensive call that creates a copy of all properties set on 
the config. Given that this method is called multiple times for every connection 
created, this is not ideal.
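
A minimal sketch of the caching idea, assuming the inter-broker listener name
does not change at runtime (the names below are illustrative; the real
ConnectionQuotas lives in the broker's Scala code):

{code:java}
public class ConnectionQuotasSketch {

    // Illustrative stand-in for KafkaConfig: interBrokerListenerName() is expensive
    // because it copies all configured properties on every call.
    interface Config {
        String interBrokerListenerName();
    }

    // Cache the value once instead of re-deriving it for every accepted connection.
    private final String interBrokerListenerName;

    ConnectionQuotasSketch(Config config) {
        this.interBrokerListenerName = config.interBrokerListenerName();
    }

    // Called on the hot path for each new connection; now just a string comparison.
    boolean isProtectedListener(String listenerName) {
        return interBrokerListenerName.equals(listenerName);
    }
}
{code}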



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Permission for contribution

2020-11-02 Thread Matthias J. Sax
Done.

On 11/2/20 11:04 AM, Rohit Deshpande wrote:
> Hi Boyang,
> My wiki id is rohitdeshaws
> 
> Thanks,
> Rohit
> 
> On Thu, Oct 29, 2020 at 1:59 PM Rohit Deshpande 
> wrote:
> 
>> I don't know how to create wiki id. I am following this documentation:
>> https://kafka.apache.org/contributing
>>
>> On Thu, Oct 29, 2020 at 1:40 PM Boyang Chen 
>> wrote:
>>
>>> Added you to the JIRA. What's your id for wiki?
>>>
>>> On Thu, Oct 29, 2020 at 1:23 PM rohit deshpande <
>>> rohitmdeshpa...@gmail.com>
>>> wrote:
>>>
 I forgot to provide details.

 My email id: rohitmdeshpa...@gmail.com
 This request is regarding code contribution.
 Thanks,
 Rohit

 On Thu, Oct 29, 2020 at 11:58 AM rohit deshpande <
 rohitmdeshpa...@gmail.com>
 wrote:

> Hi,
> I would like to get permission for the contribution.
> My jira id: rohitdeshaws
>
> Thanks,
> Rohit
>

>>>
>>
> 


Re: Permission for contribution

2020-11-02 Thread Rohit Deshpande
Hi Boyang,
My wiki id is rohitdeshaws

Thanks,
Rohit

On Thu, Oct 29, 2020 at 1:59 PM Rohit Deshpande 
wrote:

> I don't know how to create wiki id. I am following this documentation:
> https://kafka.apache.org/contributing
>
> On Thu, Oct 29, 2020 at 1:40 PM Boyang Chen 
> wrote:
>
>> Added you to the JIRA. What's your id for wiki?
>>
>> On Thu, Oct 29, 2020 at 1:23 PM rohit deshpande <
>> rohitmdeshpa...@gmail.com>
>> wrote:
>>
>> > I forgot to provide details.
>> >
>> > My email id: rohitmdeshpa...@gmail.com
>> > This request is regarding code contribution.
>> > Thanks,
>> > Rohit
>> >
>> > On Thu, Oct 29, 2020 at 11:58 AM rohit deshpande <
>> > rohitmdeshpa...@gmail.com>
>> > wrote:
>> >
>> > > Hi,
>> > > I would like to get permission for the contribution.
>> > > My jira id: rohitdeshaws
>> > >
>> > > Thanks,
>> > > Rohit
>> > >
>> >
>>
>


[jira] [Resolved] (KAFKA-10669) ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set MaxNumOffsets field to 1

2020-11-02 Thread Manikumar (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-10669.
---
Resolution: Fixed

Issue resolved by pull request 9540
[https://github.com/apache/kafka/pull/9540]

> ListOffsetRequest: make CurrentLeaderEpoch field ignorable and set 
> MaxNumOffsets field to 1
> ---
>
> Key: KAFKA-10669
> URL: https://issues.apache.org/jira/browse/KAFKA-10669
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 2.7.0
>Reporter: Manikumar
>Priority: Blocker
> Fix For: 2.7.0
>
>
> A couple of failures were observed after KAFKA-9627 (Replace ListOffset 
> request/response with automated protocol, 
> [https://github.com/apache/kafka/pull/8295]):
> 1. The latest consumer fails to consume from 0.10.0.1 brokers. The following 
> system tests are failing:
>  
> kafkatest.tests.client.client_compatibility_features_test.ClientCompatibilityFeaturesTest
>  
> kafkatest.tests.client.client_compatibility_produce_consume_test.ClientCompatibilityProduceConsumeTest
> 2. In some scenarios, the latest consumer fails with the error below when 
> connecting to a Kafka cluster that contains both newer and older (<=2.0) brokers:
>  org.apache.kafka.common.errors.UnsupportedVersionException: Attempted to 
> write a non-default currentLeaderEpoch at version 3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-11-02 Thread Bill Bejeck
Hi Manikumar,

Thanks for the heads-up.

-Bill

On Sat, Oct 31, 2020 at 4:56 PM Manikumar  wrote:

> Hi Bill,
>
> We have found a couple of issues related to ListOffsetRequest:
> https://issues.apache.org/jira/browse/KAFKA-10669
> These were introduced when we migrated ListOffsetRequest to use
> the auto-generated protocol.
> Since these are regressions, I marked the JIRA as a blocker.
>
> I have opened a PR to fix it: https://github.com/apache/kafka/pull/9540
>
> Thanks,
>
>
>
> On Sat, Oct 31, 2020 at 6:06 PM Bill Bejeck  wrote:
>
> > All,
> >
> > Just a quick update on the 2.7.0 release.
> >
> > All blockers have been resolved at this point, so I'm planning on cutting
> > an RC early Monday morning.
> >
> > Thanks,
> > Bill
> >
> > On Fri, Oct 30, 2020 at 11:53 AM Bill Bejeck  wrote:
> >
> > > Hi Sophie,
> > >
> > > Thanks for the update.
> > >
> > > -Bill
> > >
> >
>


Re: [VOTE] KIP-661: Expose task configurations in Connect REST API

2020-11-02 Thread Tom Bentley
+1 non-binding

Thanks Mickael.

Tom

On Fri, Oct 16, 2020 at 8:54 PM Gwen Shapira  wrote:

> I definitely needed this capability a few times before. Thank you.
>
> +1 (binding)
>
> On Thu, Sep 24, 2020 at 7:54 AM Mickael Maison 
> wrote:
> >
> > Hi,
> >
> > I'd like to start a vote on KIP-661:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-661%3A+Expose+task+configurations+in+Connect+REST+API
> >
> > Thanks
>
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>
>


Re: [DISCUSS] Apache Kafka 2.6.1 release

2020-11-02 Thread Mickael Maison
Hi,

Quick update on 2.6.1:

Last week, a bunch of fixes went in for KAFKA-10651 and KAFKA-10515.
John also backported the Jenkinsfile so tests can run on 2.6.

Now, we only have 1 PR waiting to be merged:
https://github.com/apache/kafka/pull/9468 (KAFKA-10454).
Once this is merged, I'll start the release process.

Thanks

On Sat, Oct 31, 2020 at 4:27 AM Ismael Juma  wrote:
>
> Hi Mickael,
>
> Any updates on this release?
>
> Ismael
>
> On Thu, Oct 1, 2020 at 7:40 AM Mickael Maison 
> wrote:
>
> > Hi,
> >
> > I'd like to volunteer to be the release manager for the next bugfix
> > release, 2.6.1.
> > I created the release plan on the wiki:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.6.1
> >
> > Thanks
> >