Build failed in Jenkins: kafka-trunk-jdk11 #683

2019-07-09 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Use `Topic::isInternalTopic` instead of directly checking (#7047)

--
[...truncated 2.96 MB...]
org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMapWithIntValuesWithBlankEntries PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertMapWithStringKeysAndIntegerValues STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertMapWithStringKeysAndIntegerValues PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertDateValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertDateValues PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseInvalidBooleanValueString STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseInvalidBooleanValueString PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimestampValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimestampValues PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertNullValue STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertNullValue PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimeValues STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertTimeValues PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMalformedMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToParseStringOfMalformedMap PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithoutWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithShortValuesWithoutWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithStringValues 
STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithStringValues 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithEscapedDelimiters STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithEscapedDelimiters PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithOnlyNumericElementTypesIntoListOfLargestNumericType
 STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithOnlyNumericElementTypesIntoListOfLargestNumericType
 PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithWhitespaceAsMap STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringOfMapWithIntValuesWithWhitespaceAsMap PASSED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithIntegerValues 
STARTED

org.apache.kafka.connect.data.ValuesTest > shouldConvertListWithIntegerValues 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithMixedElementTypesIntoListWithDifferentElementTypes 
STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldConvertStringOfListWithMixedElementTypesIntoListWithDifferentElementTypes 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldEscapeStringsWithEmbeddedQuotesAndBackslashes STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldEscapeStringsWithEmbeddedQuotesAndBackslashes PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringListWithMultipleElementTypesAndReturnListWithNoSchema STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringListWithMultipleElementTypesAndReturnListWithNoSchema PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToConvertToListFromStringWithNonCommonElementTypeAndBlankElement 
STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldFailToConvertToListFromStringWithNonCommonElementTypeAndBlankElement 
PASSED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithMultipleDelimiters STARTED

org.apache.kafka.connect.data.ValuesTest > 
shouldParseStringsWithMultipleDelimiters PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testStructEquality STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testStructEquality PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapValue STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapValue PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchStructWrongSchema STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchStructWrongSchema PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapSomeValues STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapSomeValues PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchBoolean STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchBoolean PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnStructSchema 
STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #3775

2019-07-09 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-: Adds RoundRobinPartitioner with tests (#6771)

[jason] MINOR: Use `Topic::isInternalTopic` instead of directly checking (#7047)

--
[...truncated 2.57 MB...]

kafka.server.DeleteTopicsRequestWithDeletionDisabledTest > 
testDeleteRecordsRequest STARTED

kafka.server.DeleteTopicsRequestWithDeletionDisabledTest > 
testDeleteRecordsRequest PASSED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
PASSED

kafka.server.ReplicaManagerQuotasTest > 
testCompleteInDelayedFetchWithReplicaThrottling STARTED

kafka.server.ReplicaManagerQuotasTest > 
testCompleteInDelayedFetchWithReplicaThrottling PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
PASSED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncryption STARTED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncryption PASSED

kafka.server.DynamicBrokerConfigTest > testSecurityConfigs STARTED

kafka.server.DynamicBrokerConfigTest > testSecurityConfigs PASSED

kafka.server.DynamicBrokerConfigTest > testSynonyms STARTED

kafka.server.DynamicBrokerConfigTest > testSynonyms PASSED

kafka.server.DynamicBrokerConfigTest > 
testDynamicConfigInitializationWithoutConfigsInZK STARTED

kafka.server.DynamicBrokerConfigTest > 
testDynamicConfigInitializationWithoutConfigsInZK PASSED

kafka.server.DynamicBrokerConfigTest > testConfigUpdateWithSomeInvalidConfigs 
STARTED

kafka.server.DynamicBrokerConfigTest > testConfigUpdateWithSomeInvalidConfigs 
PASSED

kafka.server.DynamicBrokerConfigTest > testDynamicListenerConfig STARTED

kafka.server.DynamicBrokerConfigTest > testDynamicListenerConfig PASSED

kafka.server.DynamicBrokerConfigTest > testReconfigurableValidation STARTED

kafka.server.DynamicBrokerConfigTest > testReconfigurableValidation PASSED

kafka.server.DynamicBrokerConfigTest > testConnectionQuota STARTED

kafka.server.DynamicBrokerConfigTest > testConnectionQuota PASSED

kafka.server.DynamicBrokerConfigTest > testConfigUpdate STARTED

kafka.server.DynamicBrokerConfigTest > testConfigUpdate PASSED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncoderSecretChange 
STARTED

kafka.server.DynamicBrokerConfigTest > testPasswordConfigEncoderSecretChange 
PASSED

kafka.server.DynamicBrokerConfigTest > 
testConfigUpdateWithReconfigurableValidationFailure STARTED

kafka.server.DynamicBrokerConfigTest > 
testConfigUpdateWithReconfigurableValidationFailure PASSED

kafka.server.ThrottledChannelExpirationTest > testThrottledChannelDelay STARTED

kafka.server.ThrottledChannelExpirationTest > testThrottledChannelDelay PASSED

kafka.server.ThrottledChannelExpirationTest > 
testCallbackInvocationAfterExpiration STARTED

kafka.server.ThrottledChannelExpirationTest > 
testCallbackInvocationAfterExpiration PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions STARTED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition STARTED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.DescribeLogDirsRequestTest > testDescribeLogDirsRequest STARTED

kafka.server.DescribeLogDirsRequestTest > testDescribeLogDirsRequest PASSED

kafka.server.DelegationTokenRequestsTest > testDelegationTokenRequests STARTED

kafka.server.DelegationTokenRequestsTest > testDelegationTokenRequests PASSED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testBrokerMetadataOnIdCollision PASSED


Re: [DISCUSS] KIP-213: Second follow-up on Foreign Key Joins

2019-07-09 Thread Adam Bellemare
I know what I posted was a bit of a wall of text, but here are three
follow-up thoughts:

1) Is it possible to enforce exactly-once for a portion of the topology? I
was trying to think about how to process my proposal with at-least-once
processing (or at-most-once processing) and I came up empty-handed.

2) A very deft solution is to also just support left-joins but not
inner-joins. Practically speaking, either INNER or LEFT join as it
currently is would support all of my use-cases.

3) Accept that there may be some null tombstones (though this makes me want
to just go with LEFT only instead of LEFT and PSEUDO-INNER).

In my experience (obviously empirical) it seems that many people just want
the ability to join on foreign keys for the sake of handling all the
relational data in their event streams and extra tombstones don't matter at
all. This has been my own experience from our usage of our internal
implementation at my company, and that of many others who have reached out
to me.

What would help most at this point is if someone can come up with a
scenario where sending unnecessary tombstones actually poses a downstream
problem beyond that of confusing behaviour, as I cannot think of one
myself.  With that being said, I am far more inclined to actually then
support just option #2 above and only have LEFT joins, forgoing INNER
completely since it would not be a true inner join.

Adam

On Thu, Jul 4, 2019 at 8:50 AM Adam Bellemare 
wrote:

> Hi Matthias
>
> A thought about a variation of S1 that may work - it has a few moving
> parts, so I hope I explained it clearly enough.
>
> When we change keys on the LHS:
> (k,a) -> (k,b)
> (k,a,hashOf(b),PROPAGATE_OLD_VALUE) goes to RHS-0
> (k,b,PROPAGATE_NEW_VALUE) goes to RHS-1
>
> A) When the (k,a,hashOf(b),PROPAGATE_OLD_VALUE) hits RHS-0, the following
> occurs:
>   1) Store the current (CombinedKey, Value=(Hash, ForeignValue)) in
> a variable
>   2) Delete the key from the store
>   3) Publish the event from step A-1 downstream with an instruction:
> (eventType = COMPARE_TO_OTHER) (or whatever)
> *  (key, (hashOf(b),wasForeignValueNull, eventType))*
> //Don't need the old hashOf(b) as it is guaranteed to be out of date
> //We do need the hashOf(b) that came with the event to be passed
> along. Will be used in resolution.
> //Don't need the actual value as we aren't joining or comparing the
> values, just using it to determine nulls. Reduces payload size.
>
> B) When (k,b,PROPAGATE_NEW_VALUE) hits RHS-1, the following occurs:
>   1) Store it in the prefix state store (as we currently do)
>   2) Get the FK-value (as we currently do)
>   3) Return the normal SubscriptionResponse payload (eventType = UPDATE)
> (or whatever)
> * (key, (hashOf(b), foreignValue, eventType))*
>
>
> C) The Resolver Table is keyed on (as per our example):
> key = CombinedKey, value =
> NullValueResolution<(wasForeignValueNull set by RHS-0, foreignValue set by RHS-1)>
>
> Resolution Steps per event:
>
> When either of the output events from A (eventType ==
> COMPARE_TO_OTHER) or B (eventType == UPDATE) is received
> 1) Check if this event matches the current hashOf(b). If not, discard it,
> for it is stale and no longer matters.  Additionally, delete entry
> CombinedKey from the Resolver Table.
>
> 2) Lookup event in table on its CombinedKey:
>   - If it's not in the table, create the  NullValueResolution value,
> populate the field related to the eventType, and add it to the table.
>   - If it already IS in the table, get the existing NullValueResolution
> object and finish populating it:
>
> 3) If the NullValueResolution is fully populated, move on to the
> resolution logic below.
>
> Format:
> (wasForeignValueNull, foreignValue) -> Result
> If:
> ( false  , Null ) -> Send tombstone. Old value was not null, new one is,
> send tombstone.
> (  true  , Null ) -> Do nothing.  See * below for more details.
> ( false  , NewValue ) -> Send the new result
> (  true  , NewValue ) -> Send the new result
>
> * wasForeignValueNull may have been false at some very recent point, but
> only just transitioned to true (race condition). In this case, the RHS table
> was updated and the value was set to null due to an RHS update of (a,
> oldVal) -> (a, null). This event on its own will propagate a delete event
> through to the resolver (of a different eventType), so we don't need to
> handle this case from the LHS and doing nothing is OK.
>
> In the case that it's truly (true, Null), we also don't need to send a
> tombstone because wasForeignValueNull == true means that a tombstone was
> previously sent.
>
> 4) Check the hashOf(b) one last time before sending the resolved message
> out. If the hash is old, discard it.
>
> 5) Delete the row from the Resolver Table.
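The resolution table above can be sketched as a small pure function. This is a sketch of the semantics described in the email, not actual Kafka Streams code; all names (`NullResolutionSketch`, `Action`, `resolve`) are hypothetical, and an empty `Optional` stands in for a null RHS lookup result:

```java
import java.util.Optional;

// Pure-function sketch of the resolution table above (hypothetical names;
// an empty foreignValue stands for a null RHS lookup result).
public class NullResolutionSketch {
    enum Action { SEND_TOMBSTONE, DO_NOTHING, SEND_RESULT }

    static Action resolve(boolean wasForeignValueNull, Optional<String> foreignValue) {
        if (foreignValue.isEmpty()) {
            // Old value existed but the new FK matches nothing: emit a tombstone.
            // If the old value was already null, a tombstone was sent earlier.
            return wasForeignValueNull ? Action.DO_NOTHING : Action.SEND_TOMBSTONE;
        }
        // A new joined value exists: forward it regardless of the old state.
        return Action.SEND_RESULT;
    }

    public static void main(String[] args) {
        assert resolve(false, Optional.empty()) == Action.SEND_TOMBSTONE;
        assert resolve(true, Optional.empty()) == Action.DO_NOTHING;
        assert resolve(false, Optional.of("v")) == Action.SEND_RESULT;
        assert resolve(true, Optional.of("v")) == Action.SEND_RESULT;
    }
}
```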
>
>
> Takeaways:
> 1) I believe this is only necessary for INNER joins when we transition
> from (k, non-Null-val) -> (k, non-Null-new-val)
> 2) We can maintain state until we get both events back from RHS-0 and
> RHS-1, at which point we delete it and clean 

Jenkins build is back to normal : kafka-trunk-jdk11 #682

2019-07-09 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-8642) Send LeaveGroupRequest for static members when reaching `max.poll.interval.ms`

2019-07-09 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-8642.

Resolution: Invalid

Synced with [~hachikuji]: the session timeout should always be smaller than 
max.poll.interval.ms. If we can tolerate a long unavailability for a consumer, 
such as 10 minutes, it makes no sense to expect it to make progress 
every 5 minutes.
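In consumer-configuration terms, the invariant described here is simply that `session.timeout.ms` (liveness via heartbeats) stays below `max.poll.interval.ms` (progress via `poll()`). A minimal illustration, with example values only:

```java
import java.util.Properties;

// Illustrative check of the invariant: session.timeout.ms (liveness via
// heartbeats) should stay below max.poll.interval.ms (progress via poll()).
// The values below are examples, not recommendations.
public class ConsumerTimeoutsSketch {
    static boolean timeoutsConsistent(Properties props) {
        long session = Long.parseLong(props.getProperty("session.timeout.ms"));
        long maxPollInterval = Long.parseLong(props.getProperty("max.poll.interval.ms"));
        return session < maxPollInterval;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("session.timeout.ms", "10000");     // 10 seconds
        props.put("max.poll.interval.ms", "300000");  // 5 minutes (the default)
        assert timeoutsConsistent(props);
    }
}
```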

> Send LeaveGroupRequest for static members when reaching `max.poll.interval.ms`
> --
>
> Key: KAFKA-8642
> URL: https://issues.apache.org/jira/browse/KAFKA-8642
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Static members don't leave the group explicitly. However, when a static 
> member is making little progress, it might be favorable to let it leave the 
> group so that a rebalance can shuffle assignments and restore progress.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] KIP-480 : Sticky Partitioner

2019-07-09 Thread Justine Olshan
Hello all,

I'd like to start the vote for KIP-480 : Sticky Partitioner.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-480%3A+Sticky+Partitioner

Thank you,
Justine Olshan


[jira] [Created] (KAFKA-8644) The Kafka protocol generator should allow null defaults for bytes and array fields

2019-07-09 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8644:
--

 Summary: The Kafka protocol generator should allow null defaults 
for bytes and array fields
 Key: KAFKA-8644
 URL: https://issues.apache.org/jira/browse/KAFKA-8644
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


The Kafka protocol generator should allow null defaults for bytes and array 
fields.  Currently, null defaults are only allowed for string fields.
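For context, Kafka protocol messages are declared in JSON spec files; the request is to permit a null default on nullable bytes and array fields, along the lines of the following field declaration (the field name and message are invented for illustration, not taken from a real spec):

```json
{
  "name": "ObservedBrokerIds",
  "type": "[]int32",
  "versions": "0+",
  "nullableVersions": "0+",
  "default": "null",
  "about": "Illustrative nullable array field; a null default is what KAFKA-8644 asks the generator to accept."
}
```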



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7788) Support null defaults in KAFKA-7609 RPC specifications

2019-07-09 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-7788.

Resolution: Duplicate

> Support null defaults in KAFKA-7609 RPC specifications
> --
>
> Key: KAFKA-7788
> URL: https://issues.apache.org/jira/browse/KAFKA-7788
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Minor
>
> It would be nice if we could support null values as defaults in the 
> KAFKA-7609 RPC specification files.  null defaults should be allowed only if 
> the field is nullable in all supported versions of the field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8643) Incompatible MemberDescription constructor change

2019-07-09 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8643:
--

 Summary: Incompatible MemberDescription constructor change
 Key: KAFKA-8643
 URL: https://issues.apache.org/jira/browse/KAFKA-8643
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Boyang Chen
Assignee: Boyang Chen


An existing public constructor of MemberDescription was accidentally deleted. 
The old constructors need to be brought back for compatibility.
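A common remedy for this kind of compatibility break, sketched here with hypothetical fields rather than the real MemberDescription code, is to reintroduce the removed constructor and have it delegate to the new one:

```java
import java.util.Optional;

// Hypothetical sketch (not the actual MemberDescription code): restore a
// removed public constructor by delegating to the new, wider one, so that
// existing callers keep compiling and binary compatibility is preserved.
public class MemberDescriptionSketch {
    private final String memberId;
    private final Optional<String> groupInstanceId;

    // New constructor that replaced the old one.
    public MemberDescriptionSketch(String memberId, Optional<String> groupInstanceId) {
        this.memberId = memberId;
        this.groupInstanceId = groupInstanceId;
    }

    // Old signature brought back, delegating with a sensible default.
    public MemberDescriptionSketch(String memberId) {
        this(memberId, Optional.empty());
    }

    public String memberId() { return memberId; }

    public Optional<String> groupInstanceId() { return groupInstanceId; }

    public static void main(String[] args) {
        MemberDescriptionSketch m = new MemberDescriptionSketch("member-1");
        assert m.memberId().equals("member-1");
        assert m.groupInstanceId().isEmpty();
    }
}
```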



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-459: Improve KafkaStreams#close

2019-07-09 Thread Dongjin Lee
Hi Matthias,

Have you thought about this issue?

Thanks,
Dongjin

On Wed, Jun 19, 2019 at 5:07 AM Dongjin Lee  wrote:

> Hello.
>
> I just uploaded the draft implementation of the three proposed
> alternatives.
>
> - Type A: define a close timeout constant -
> https://github.com/dongjinleekr/kafka/tree/feature/KAFKA-7996-a
> - Type B: Provide a new configuration option, 'close.wait.ms' -
> https://github.com/dongjinleekr/kafka/tree/feature/KAFKA-7996-b
> - Type C: Extend KafkaStreams constructor to support a close timeout
> parameter -
> https://github.com/dongjinleekr/kafka/tree/feature/KAFKA-7996-c
>
> As you can see in the branches, Type B and C are a little bit more
> complicated than A, since they provide an option to control the timeout to
> close AdminClient and Producer. To provide that functionality, B and C
> share a refactoring commit, which replaces KafkaStreams#create with
> KafkaStreams.builder. That is why they consist of two commits.
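The shared idea behind Types B and C can be sketched as follows. All names here are illustrative only (this is not the Kafka Streams API): the point is that KafkaStreams would hand a bounded timeout to each internal client on close instead of relying on their `Long.MAX_VALUE` defaults:

```java
import java.time.Duration;

// Hypothetical sketch of the "Type B/C" idea from the branches above:
// KafkaStreams would pass an explicit, bounded close timeout down to its
// internal clients (Producer, AdminClient) instead of relying on their
// Long.MAX_VALUE defaults. All names are illustrative.
public class BoundedCloseSketch {
    interface InternalClient {
        void close(Duration timeout);
    }

    // Close every internal client with the same bounded timeout.
    static String closeAll(Duration closeWait, InternalClient... clients) {
        StringBuilder log = new StringBuilder();
        for (InternalClient client : clients) {
            client.close(closeWait); // bounded, so close() can no longer hang
            log.append("closed(").append(closeWait.toMillis()).append("ms);");
        }
        return log.toString();
    }

    public static void main(String[] args) {
        InternalClient producer = timeout -> { };
        InternalClient admin = timeout -> { };
        String log = closeAll(Duration.ofSeconds(30), producer, admin);
        assert log.equals("closed(30000ms);closed(30000ms);");
    }
}
```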
>
> Please have a look when you are free.
>
> Thanks,
> Dongjin
>
> On Thu, May 30, 2019 at 12:26 PM Dongjin Lee  wrote:
>
>> I just updated the KIP document reflecting what I found about the clients
>> API inconsistency and Matthias's comments. Since it is now obvious that
>> modifying the default close timeout for the client is not feasible, the
>> updated document proposes totally different alternatives. (see Rejected
>> Alternatives section)
>>
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-459%3A+Improve+KafkaStreams%23close
>>
>> Please have a look when you are free. All kinds of feedback are welcome!
>>
>> Thanks,
>> Dongjin
>>
>> On Fri, May 24, 2019 at 1:07 PM Matthias J. Sax 
>> wrote:
>>
>>> Thanks for digging into the back ground.
>>>
>>> I think it would be good to get feedback from people who work on
>>> clients, too.
>>>
>>>
>>> -Matthias
>>>
>>>
>>> On 5/19/19 12:58 PM, Dongjin Lee wrote:
>>> > Hi Matthias,
>>> >
>>> > I investigated the inconsistencies between `close` semantics of
>>> `Producer`,
>>> > `Consumer`, and `AdminClient`. And I found that this inconsistency was
>>> > intended. Here are the details:
>>> >
>>> > The current `KafkaConsumer#close`'s default timeout, 30 seconds, was
>>> > introduced in KIP-102 (0.10.2.0)[^1]. According to the document, there
>>> are
>>> > two differences between `Consumer` and `Producer`;
>>> >
>>> > 1. `Consumer`s don't have large requests.
>>> > 2. `Consumer#close` is affected by consumer coordinator, whose close
>>> > operation is affected by `request.timeout.ms`.
>>> >
>>> > For the above reasons, the Consumer's default timeout was set a little bit
>>> > differently.[^3] (This was done by Rajini.)
>>> >
>>> > At the initial proposal, I proposed to change the default timeout
>>> value of
>>> > `[Producer, AdminClient]#close` from `Long.MAX_VALUE` into another one;
>>> > However, since it is now clear that the current implementation is
>>> totally
>>> > reasonable, *it seems like changing the approach to just providing a
>>> > close timeout to the clients used by KafkaStreams is a more suitable
>>> > one.*[^4]
>>> > This approach has the following advantages:
>>> >
>>> > 1. The problem described in KAFKA-7996 is now resolved, since the Producer
>>> > doesn't hang during its `close` operation.
>>> > 2. We don't have to change the semantics of `Producer#close`,
>>> > `AdminClient#close` nor `KafkaStreams#close`. As you pointed out, these
>>> > kinds of changes are hard for users to reason about.
>>> >
>>> > What do you think?
>>> >
>>> > Thanks,
>>> > Dongjin
>>> >
>>> > [^1]:
>>> >
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-102+-+Add+close+with+timeout+for+consumers
>>> > [^2]: "The existing close() method without a timeout will attempt to
>>> close
>>> > the consumer gracefully with a default timeout of 30 seconds. This is
>>> > different from the producer default of Long.MAX_VALUE since consumers
>>> don't
>>> > have large requests."
>>> > [^3]: 'Rejected Alternatives' section explains it.
>>> > [^4]: In the case of Streams reset tool, `KafkaAdminClient`'s close
>>> timeout
>>> > is 60 seconds (KIP-198):
>>> https://github.com/apache/kafka/pull/3927/files
>>> >
>>> > On Fri, Apr 26, 2019 at 5:16 PM Matthias J. Sax wrote:
>>> >
>>> >> Thanks for the KIP.
>>> >>
>>> >> Overall, I agree with the sentiment of the KIP. The current semantics
>>> of
>>> >> `KafkaStreams#close(timeout)` are not well defined. Also the general
>>> >> client inconsistencies are annoying.
>>> >>
>>> >>
>>> >>> This KIP makes no changes to public interfaces; however, it makes a
>>> >> subtle change to the existing API's semantics. If this KIP is
>>> accepted,
>>> >> documenting these semantics with as much detail as possible may be much
>>> better.
>>> >>
>>> >> I am not sure if I would call this change "subtle". It might actually
>>> be
>>> >> rather big impact and hence I am also wondering about backward
>>> >> compatibility (details below). Overall, I am not sure if documenting
>>> the
>>> >> change would 

Re: [DISCUSS] KIP-482: The Kafka Protocol should Support Optional Fields

2019-07-09 Thread Jose Armando Garcia Sancio
Thanks Colin for the KIP. For my own edification why are we doing this
"Optional fields can have any type, except for an array of structures."?
Why can't we have an array of structures?

-- 
-Jose


Re: [DISCUSS] KIP-466: Add support for List serialization and deserialization

2019-07-09 Thread John Roesler
Ah, my apologies, I must have just overlooked it. Thanks for the update,
too.

Just one more super-small question, do we need this variant:

> New method public static <Inner> Serde<List<Inner>> ListSerde() in
org.apache.kafka.common.serialization.Serdes class (infers list
implementation and inner serde from config file)

It seems like this situation implies my config file is already set up for
the list serde, so passing this serde (e.g., in Produced) would have the
same effect as not specifying it.

I guess that it could be the case that you have the
`default.key/value.serde` set to something else, like StringSerde, but you
still have the `default.key/value.list.serde.impl/element` set. This seems
like it would result in more confusion than convenience, so my gut instinct
is maybe we shouldn't introduce the `ListSerde()` variant until people
actually request it later on.

Thus, we'd just stick with fully config-driven or fully source-code-driven,
not half/half.

What do you think?

Thanks,
-John


On Tue, Jul 9, 2019 at 9:58 AM Development  wrote:
>
> Hi John,
>
> I hope everyone had a great long weekend.
>
> Regarding Java interfaces, I may not understand you correctly, but I
think I already listed them:
>
> So for Produced, you would use it in the following fashion, for example:
Produced.keySerde(Serdes.ListSerde(ArrayList.class, Serdes.Integer()))
>
> I also updated the KIP, and added a section “Serialization Strategy”
where I describe our logic of conditional serialization based on the type
of an inner serde.
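A typical wire format for such a list serde is an element count followed by length-prefixed element payloads. The sketch below is not the KIP's actual code; it illustrates the shape of the format, and the "conditional serialization" mentioned above could skip the per-element length prefix for fixed-width inner types:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of a typical wire format for a list serde (not the KIP's actual
// implementation): an element count, then length-prefixed element payloads.
// Fixed-width inner types (Integer, Long, ...) could omit the length prefix.
public class ListSerdeSketch {
    static byte[] serialize(List<byte[]> elements) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.writeBytes(ByteBuffer.allocate(4).putInt(elements.size()).array());
        for (byte[] element : elements) {
            out.writeBytes(ByteBuffer.allocate(4).putInt(element.length).array());
            out.writeBytes(element);
        }
        return out.toByteArray();
    }

    static List<byte[]> deserialize(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        int count = buf.getInt();
        List<byte[]> elements = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            byte[] element = new byte[buf.getInt()];
            buf.get(element);
            elements.add(element);
        }
        return elements;
    }

    public static void main(String[] args) {
        List<byte[]> in = List.of(new byte[]{1, 2}, new byte[]{3});
        List<byte[]> out = deserialize(serialize(in));
        assert out.size() == 2 && out.get(0)[1] == 2 && out.get(1)[0] == 3;
    }
}
```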
>
> Thank you!
>
> Best,
> Daniyar Yeralin
>
> On Jun 26, 2019, at 11:44 AM, John Roesler  wrote:
>
> Thanks for the update, Daniyar!
>
> In addition to specifying the config interface, can you also specify
> the Java interface? Namely, if I need to pass an instance of this
> serde in to the DSL directly, as in Produced, Materialized, etc., what
> constructor(s) would I have available? Likewise with the Serializer
> and Deserailizer. I don't think you need to specify the implementation
> logic, since we've already discussed it here.
>
> If you also want to specify the serialized format of the data records
> in the KIP, it could be useful documentation, as well as letting us
> verify the schema for forward/backward compatibility concerns, etc.
>
> Thanks,
> John
>
> On Wed, Jun 26, 2019 at 10:33 AM Development  wrote:
>
>
> Hey,
>
> Finally made updates to the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-466%3A+Add+support+for+List%3CT%3E+serialization+and+deserialization
<
https://cwiki.apache.org/confluence/display/KAFKA/KIP-466:+Add+support+for+List%3CT%3E+serialization+and+deserialization
>
> Sorry for the delay :)
>
> Thank You!
>
> Best,
> Daniyar Yeralin
>
> On Jun 22, 2019, at 12:49 AM, Matthias J. Sax 
wrote:
>
> Yes, something like this. I did not think about good configuration
> parameter names yet. I am also not sure if I understand all proposed
> configs atm. But all configs should be listed and explained in the KIP
> anyway, and we can discuss further after you have updated the KIP (I can
> ask more detailed question if I have any).
>
>
> -Matthias
>
> On 6/21/19 2:05 PM, Development wrote:
>
> Yes, you are right. ByteSerializer is not what I need to have in a list
> of primitives.
>
> As for the default constructor and configurability, just want to make
> sure. Is this what you have on your mind?
>
> Best,
> Daniyar Yeralin
>
>
>
> On Jun 21, 2019, at 2:51 PM, Matthias J. Sax wrote:
>
> Thanks for the update!
>
> I think that `ListDeserializer`, `ListSerializer`, and `ListSerde`
> should have a default constructor and it should be possible to pass in
> the `Class listClass` information via a configuration. Otherwise,
> KafkaStreams cannot use it as default serde.
>
>
> For the primitive serializers: `BytesSerializer` is not primitive IMHO,
> as it is for `byte[]` with variable length -- it's for arrays, not for
> single `byte` (note, that `Bytes` is a Kafka class wrapping `byte[]`).
>
>
> For tests, we can comment on the PR. No need to do this in the KIP
> discussion.
>
>
> Can you also update the KIP?
>
>
>
> -Matthias
>
>
>
>
>
> On 6/21/19 11:29 AM, Development wrote:
>
> I made and pushed necessary commits, so we could review the final
> version under PR https://github.com/apache/kafka/pull/6592
>
> I also need some advice on writing tests for this new serde. So far I
> only have two test cases (roundtrip and empty payload), I’m not sure
> if it is enough.
>
> Thank y’all for your help in this KIP :)
>
> Best,
> Daniyar Yeralin
>
>
> On Jun 21, 2019, at 1:44 PM, John Roesler wrote:
>
> Hey Daniyar,
>
> Looks good to me! Thanks for considering it.
>
> Thanks,
> -John
>
> On Fri, Jun 21, 2019 at 9:04 AM Development wrote:
> Hey John and Matthias,
>
> Yes, now I see it all. I’m storing lots of redundant information.
> Here is my final idea. Yes, now a user should pass a 

Re: Nag!! KAFKA-8629 - Some feedback wanted

2019-07-09 Thread John Roesler
Hey Andy,

Thanks for looking into this. I've been curious about it for some
time. I'm glad to hear that the gap to get there is so small.

You mentioned potentially switching off the JMX stuff with a config
option. I'm not sure hiding the JMX features behind a config flag
would be good enough... Can you elaborate on the nature of the
incompatibility? I.e., is it a compile-time problem, or a run-time one?

It wouldn't really be straightforward to determine if your alternative
to the reflection code is acceptable until you send a PR... By all
means, feel free to create a PR and just start the title off with
`[POC]`, so that everyone knows it's not a final proposal.

FWIW, I think if we're going to really state that Streams works on
GraalVM, we do need to build some verification of this statement into
the build and test cycle. So, once you get out of POC phase and start
making a serious proposal, be sure to consider how we can ensure we
_remain_ compatible going forward.

Thanks again for considering this!
-John

On Tue, Jul 9, 2019 at 3:34 PM Ismael Juma  wrote:
>
> I think it would be awesome to support GraalVM native images for Kafka
> Streams and CLI tools.
>
> Ismael
>
> On Tue, Jul 9, 2019, 12:40 PM Andy Muir 
> wrote:
>
> > Hi
> >
> > I hope you can have a look at
> > https://issues.apache.org/jira/browse/KAFKA-8629 <
> > https://issues.apache.org/jira/browse/KAFKA-8629> and perhaps give me
> > some feedback? I’d like to decide if this is worth pursuing!
> >
> > I believe that apps built on top of Kafka Streams can benefit
> > hugely from the use of GraalVM. I’d like to help as much as I can.
> >
> > PS: I can’t change the assignee of the ticket to myself! :(
> >
> > Regards
> >
> > Andy Muir
> > muira...@yahoo.co.uk
> > @andrewcmuir


Re: [DISCUSS] KIP-478 Strongly Typed Processor API

2019-07-09 Thread John Roesler
Ah, good catch!

I didn't mean to include the <Void, Void> for Transform itself. I must
have just glossed over it when I was writing the KIP.

It would apply to TransformValues, since forwarding is disabled. But
for Transform, it should be bounded to the result type.

The Transformer interface actually presents a minor
challenge, because it needs type bounds on both its (single) return
type and (key, value) forward types. In practice, these types are
related, but it's not so easy to express it. The places you can use a
Transformer either ask for a return type of KeyValue<K1, V1> _or_ an
Iterable<KeyValue<K1, V1>>, but this can't be expressed at the level of
the interface. Ideally, we'd also bound the forward types to K1 and
V1, but then the interface would look like Transformer<K, V, K1, V1, R>, which leaves something to be desired...
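The typing tension can be sketched with simplified, hypothetical interfaces (these are not the real Kafka Streams types): a transformer both returns a key-value pair and forwards through its context, and ideally both channels would share the output bounds K1/V1:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, hypothetical types (not the real Kafka Streams interfaces)
// showing the tension: a Transformer both returns a KeyValue and forwards
// through its context, and ideally both channels share the bounds K1/V1 --
// at the cost of a long type-parameter list.
public class TypedTransformSketch {
    static class KeyValue<K, V> {
        final K key;
        final V value;
        KeyValue(K key, V value) { this.key = key; this.value = value; }
    }

    // Typed forwarding context: only (K1, V1) pairs can be forwarded.
    interface ProcessorContext<K1, V1> {
        void forward(K1 key, V1 value);
    }

    // Bounding both the return type and the forward types requires carrying
    // all four parameters -- the ergonomic cost described above.
    interface Transformer<K, V, K1, V1> {
        void init(ProcessorContext<K1, V1> context);
        KeyValue<K1, V1> transform(K key, V value);
    }

    static String runExample() {
        List<String> forwarded = new ArrayList<>();
        Transformer<String, Integer, String, String> t = new Transformer<>() {
            ProcessorContext<String, String> ctx;
            public void init(ProcessorContext<String, String> context) { ctx = context; }
            public KeyValue<String, String> transform(String key, Integer value) {
                ctx.forward(key, "side-" + value);                     // typed forward
                return new KeyValue<>(key, String.valueOf(value * 2)); // typed return
            }
        };
        t.init((k, v) -> forwarded.add(k + ":" + v));
        KeyValue<String, String> out = t.transform("a", 21);
        return forwarded.get(0) + "|" + out.key + ":" + out.value;
    }

    public static void main(String[] args) {
        assert runExample().equals("a:side-21|a:42");
    }
}
```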

I think what I'd like to propose right now is just to leave the
Transformer interface alone and then consider revising it separately
in the scope of https://issues.apache.org/jira/browse/KAFKA-8396 .

I've updated the KIP to specifically call this out, stating that the
Transformer interface will not be touched right now.

What do you think about this?

Thanks,
-John


On Sun, Jul 7, 2019 at 1:27 PM Paul Whalen  wrote:
>
> First of all, +1 on the whole idea, my team has run into (admittedly minor,
> but definitely annoying) issues because of the weaker typing.  We're heavy
> users of the PAPI and have Processors that, while not hundreds of lines
> long, are certainly quite hefty and call context.forward() in many places.
>
> After reading the KIP and discussion a few times, I've convinced myself
> that any initial concerns I had aren't really concerns at all (state store
> types, for one).  One thing I will mention:  changing *Transformer* to have
> ProcessorContext gave me pause, because I have code that does
> context.forward in transformers.  Now that "flat transform" is a specific
> part of the API it seems okay to steer folks in that direction (to never
> use context.forward in a transformer), but it should be called out
> explicitly in javadocs.  Currently Transformer (which is used for both
> transform() and flatTransform() ) doesn't really call out the ambiguity:
> https://github.com/apache/kafka/blob/ca641b3e2e48c14ff308181c775775408f5f35f7/streams/src/main/java/org/apache/kafka/streams/kstream/Transformer.java#L75-L77,
> and for migrating users (from before flatTransform) it could be confusing.
>
> Side note, I'd like to plug KIP-401 (there is a discussion thread and a
> voting thread) which also relates to using the PAPI.  It seems like there
> is some interest and it is in a votable state with the majority of
> implementation complete.
>
> Paul
>
> On Fri, Jun 28, 2019 at 2:02 PM Bill Bejeck  wrote:
>
> > Sorry for coming late to the party.
> >
> > As for the naming I'm in favor of RecordProcessor as well.
> >
> > I agree that we should not take on doing all of the package movements as
> > part of this KIP, especially as John has pointed out, it will be an
> > opportunity to discuss some clean-up on individual classes which I envision
> > becoming another somewhat involved process.
> >
> > For the end goal, if possible, here's what I propose.
> >
> >1. We keep the scope of the KIP the same, *but we only implement* *it in
> >phases*
> >2. Phase one could include what Guozhang had proposed earlier, namely:
> >   > 1.a) modifying ProcessorContext only with the output types on
> >   > forward.
> >   > 1.b) modifying Transformer signature to have generics of
> >   > ProcessorContext,
> >   > and then lift the restriction of not using punctuate: if user did
> >   > not follow the enforced typing and just code without generics, they
> >   > will get warning at compile time and get run-time error if they
> >   > forward wrong-typed records, which I think would be acceptable.
> >3. Then we could tackle other pieces in an incremental manner as we see
> >what makes sense
> >
> > Just my 2cents
> >
> > -Bill
> >
> > On Mon, Jun 24, 2019 at 10:22 PM Guozhang Wang  wrote:
> >
> > > Hi John,
> > >
> > > Yeah I think we should not do all the repackaging as part of this KIP as
> > > well (we can just do the movement of the Processor / ProcessorSupplier),
> > > but I think we need to discuss the end goal here since otherwise we may
> > do
> > > the repackaging of Processor in this KIP, but only later on realizing
> > that
> > > other re-packagings are not our favorite solutions.
> > >
> > >
> > > Guozhang
> > >
> > > On Mon, Jun 24, 2019 at 3:06 PM John Roesler  wrote:
> > >
> > > > Hey Guozhang,
> > > >
> > > > Thanks for the idea! I'm wondering if we could take a middle ground
> > > > and take your proposed layout as a "roadmap", while only actually
> > > > moving the classes that are already involved in this KIP.
> > > >
> > > > The reason I ask is not just to control the scope of this KIP, but
> > > > also, I think that if we move other classes to new 

[jira] [Created] (KAFKA-8642) Send LeaveGroupRequest for static members when reaching `max.poll.interval.ms`

2019-07-09 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8642:
--

 Summary: Send LeaveGroupRequest for static members when reaching 
`max.poll.interval.ms`
 Key: KAFKA-8642
 URL: https://issues.apache.org/jira/browse/KAFKA-8642
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen
Assignee: Boyang Chen


Static members don't leave the group explicitly. However, when a static 
member's progress is low, it might be favorable to let it leave the group to 
leverage a rebalance to shuffle the assignment and become progressive again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Nag!! KAFKA-8629 - Some feedback wanted

2019-07-09 Thread Ismael Juma
I think it would be awesome to support GraalVM native images for Kafka
Streams and CLI tools.

Ismael

On Tue, Jul 9, 2019, 12:40 PM Andy Muir 
wrote:

> Hi
>
> I hope you can have a look at
> https://issues.apache.org/jira/browse/KAFKA-8629 <
> https://issues.apache.org/jira/browse/KAFKA-8629> and perhaps give me
> some feedback? I’d like to decide if this is worth pursuing!
>
> I believe that apps built on top of Kafka Streams can benefit
> hugely from the use of GraalVM. I’d like to help as much as I can.
>
> PS: I can’t change the assignee of the ticket to myself! :(
>
> Regards
>
> Andy Muir
> muira...@yahoo.co.uk
> @andrewcmuir


Re: Nag!! KAFKA-8629 - Some feedback wanted

2019-07-09 Thread Adam Bellemare
Hi Andy

Can you elaborate on two things for clarity’s sake?

1) Are there any alternatives we can use for JMX instead of commenting it out?

2) Are there any other limitations with Graal VM that may impede future 
development?

The second one is more difficult to answer I suspect because it relies on us 
adopting GraalVM support as a requirement for future releases of Kafka Streams. 
I think we would need a better idea of the impacts of this and if it should be 
scoped out beyond Kafka Streams or just limited as such. 



> On Jul 9, 2019, at 7:39 AM, Andy Muir  wrote:
> 
> Hi
> 
> I hope you can have a look at 
> https://issues.apache.org/jira/browse/KAFKA-8629 
>  and perhaps give me some 
> feedback? I’d like to decide if this is worth pursuing!
> 
> I believe that apps built on top of Kafka Streams can benefit hugely 
> from the use of GraalVM. I’d like to help as much as I can.
> 
> PS: I can’t change the assignee of the ticket to myself! :(
> 
> Regards
> 
> Andy Muir
> muira...@yahoo.co.uk
> @andrewcmuir


Nag!! KAFKA-8629 - Some feedback wanted

2019-07-09 Thread Andy Muir
Hi

I hope you can have a look at https://issues.apache.org/jira/browse/KAFKA-8629 
 and perhaps give me some 
feedback? I’d like to decide if this is worth pursuing!

I believe that apps built on top of Kafka Streams can benefit hugely 
from the use of GraalVM. I’d like to help as much as I can.

PS: I can’t change the assignee of the ticket to myself! :(

Regards

Andy Muir
muira...@yahoo.co.uk
@andrewcmuir

Re: [DISCUSS] KIP-435: Internal Partition Reassignment Batching

2019-07-09 Thread Stanislav Kozlovski
Hey Viktor,

 I think it is intuitive if they are on a global level...If we applied
> them on every batch then we
> couldn't really guarantee any limits as the user would be able to get
> around it by submitting lots of reassignments.


Agreed. Could we mention this explicitly in the KIP?

Also if I understand correctly, AlterPartitionAssignmentsRequest would be a
> partition level batching, isn't it? So if you submit 10 partitions at once
> then they'll all be started by the controller immediately as per my
> understanding.


Yep, absolutely

I've raised the ordering problem on the discussion of KIP-455 in a bit
> different form and as I remember the verdict there was that we shouldn't
> expose ordering as an API. It might not be easy as you say and there might
> be much better strategies to follow (like disk or network utilization
> goals). Therefore I'll remove this section from the KIP.


Sounds good to me.

I'm not sure I get this scenario. So are you saying that after they
> submitted a reassignment they also submit a preferred leader change?
> In my mind what I would do is:
> i) make auto.leader.rebalance.enable to obey the leader movement limit as
> this way it will be easier to calculate the reassignment batches.
>

Sorry, this is my fault -- I should have been more clear.
First, I didn't think through this well enough at the time, I don't think.
If we have replicas=[1, 2, 3] and we reassign them to [4, 5, 6], it is
obvious that a leader shift will happen. Your KIP proposes we shift
replicas 1 and 4 first.

I meant the following (maybe rare) scenario - we have replicas [1, 2, 3] on
a lot of partitions and the user runs a massive rebalance to change them
all to [3, 2, 1]. In the old behavior, I think that this would not do
anything but simply change the replica set in ZK.
Then, the user could run kafka-preferred-replica-election.sh on a given set
of partitions to make sure the new leader 3 gets elected.

ii) the user could submit preferred leader election but it's basically a
> form of reassignment so it would fall under the batching criteria. If the
> batch they submit is smaller than the internal limit, then it'd be executed
> immediately but otherwise it would be split into more batches. It might be
> a different behavior as it may not be executed in one batch, but I think
> it isn't a problem because we'll default the batches to Int.MAX and,
> since it's a form of reassignment, I think it makes sense
> to apply the same logic.


The AdminAPI for that is "electPreferredLeaders(Collection<TopicPartition>
partitions)" and the old zNode is "{"partitions": [{"topic": "a",
"partition": 0}]}" so it is a bit less explicit than our other reassignment
API, but the functionality is the same.
You're 100% right that it is a form of reassignment, but I hadn't thought
of it like that and I suspect some other people wouldn't have either.
If I understand correctly, you're suggesting that we count the
"reassignment.parallel.leader.movements" config against such preferred
elections. I think that makes sense. If we are to do that we should
explicitly mention that this KIP touches the preferred election logic as
well. Would that config also bound the number of leader movements the auto
rebalance inside the broker would do as well?

Then again, the "reassignment.parallel.leader.movements" config has a bit
of a different meaning when used in the context of a real reassignment
(where data moves from the leader to N more replicas) versus in the
preferred election switch (where all we need is two lightweight
LeaderAndIsr requests), so I am not entirely convinced we may want to use
the same config for that.
It might be easier to limit the scope of this KIP and not place a bound on
preferred leader election.

Thanks,
Stanislav


On Mon, Jul 8, 2019 at 4:28 PM Viktor Somogyi-Vass 
wrote:

> Hey Stanislav,
>
> Thanks for the thorough look at the KIP! :)
>
> > Let me first explicitly confirm my understanding of the configs and the
> > algorithm:
> > * reassignment.parallel.replica.count - the maximum total number of
> > replicas that we can move at once, *per partition*
> > * reassignment.parallel.partition.count - the maximum number of
> partitions
> > we can move at once
> > * reassignment.parallel.leader.movements - the maximum number of leader
> > movements we can have at once
>
> Yes.
>
> > As far as I currently understand it, your proposed algorithm will
> naturally
> > prioritize leader movement first. e.g if
> > reassignment.parallel.replica.count=1 and
> >
>
> reassignment.parallel.partition.count==reassignment.parallel.leader.movements,
> > we will always move one, the first possible, replica in the replica set
> > (which will be the leader if part of the excess replica set (ER)).
> > Am I correct in saying that?
>
> Yes.
>
> > 1. Does it make sense to add `max` somewhere in the configs' names?
>
> If you imply that it's not always an exact number (because the last batch
> could be smaller) then I think it's a good 
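As a side note for readers following along, the batching semantics confirmed in this thread (limits applied globally, partitions started up to the parallel-partition limit) can be sketched in a few lines. The config names come from KIP-435; the data model and method names below are invented for the example and are not Kafka code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReassignmentBatching {
    // Start at most `parallelPartitionCount` pending partitions per batch,
    // mirroring reassignment.parallel.partition.count applied as a global
    // limit (not per submitted request).
    static List<String> nextBatch(List<String> pending, int parallelPartitionCount) {
        int n = Math.min(parallelPartitionCount, pending.size());
        return new ArrayList<>(pending.subList(0, n));
    }

    public static void main(String[] args) {
        List<String> pending = Arrays.asList("t-0", "t-1", "t-2", "t-3", "t-4");
        // Hypothetical reassignment.parallel.partition.count = 2: even if the
        // user submits all five partitions at once, only two start immediately.
        List<String> batch = nextBatch(pending, 2);
        if (!batch.equals(Arrays.asList("t-0", "t-1")))
            throw new AssertionError("unexpected batch: " + batch);
        System.out.println("first batch = " + batch);
    }
}
```

Because the limit is enforced where batches are formed rather than per request, submitting many small reassignments cannot get around it, which is the guarantee discussed above.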

Re: [DISCUSS] KIP-480 : Sticky Partitioner

2019-07-09 Thread Colin McCabe
Hi M,

The RoundRobinPartitioner added by KIP-369 doesn't interact with this KIP.  If 
you configure your producer to use RoundRobinPartitioner, then the 
DefaultPartitioner will not be used.  And the "sticky" behavior is implemented 
only in the DefaultPartitioner.

regards,
Colin


On Tue, Jul 9, 2019, at 05:12, M. Manna wrote:
> Hello Justine,
> 
> I have one item I wanted to discuss.
> 
> We are currently in review stage for KAFKA- where we can choose always
> RoundRobin regardless of null/usable key.
> 
> If I understood this KIP motivation correctly, you are still honouring how
> the hashing of key works for DefaultPartitioner. Would you say that having
> an always "Round-Robin" partitioning with "Sticky" assignment (efficient
> batching of records for a partition) doesn't deviate from your original
> intention?
> 
> Thanks,
> 
> On Tue, 9 Jul 2019 at 01:00, Justine Olshan  wrote:
> 
> > Hello all,
> >
> > If there are no more comments or concerns, I would like to start the vote
> > on this tomorrow afternoon.
> >
> > However, if there are still topics to discuss, feel free to bring them up
> > now.
> >
> > Thank you,
> > Justine
> >
> > On Tue, Jul 2, 2019 at 4:25 PM Justine Olshan 
> > wrote:
> >
> > > Hello again,
> > >
> > > Another update to the interface has been made to the KIP.
> > > Please let me know if you have any feedback!
> > >
> > > Thank you,
> > > Justine
> > >
> > > On Fri, Jun 28, 2019 at 2:52 PM Justine Olshan 
> > > wrote:
> > >
> > >> Hi all,
> > >> I made some changes to the KIP.
> > >> The idea is to clean up the code, make behavior more explicit, provide
> > >> more flexibility, and to keep default behavior the same.
> > >>
> > >> Now we will change the partition in onNewBatch, and specify the
> > >> conditions for this function call (non-keyed values, no explicit
> > >> partitions) in willCallOnNewBatch.
> > >> This clears up some of the issues with the implementation. I'm happy to
> > >> hear further opinions and discuss this change!
> > >>
> > >> Thank you,
> > >> Justine
> > >>
> > >> On Thu, Jun 27, 2019 at 2:53 PM Colin McCabe 
> > wrote:
> > >>
> > >>> On Thu, Jun 27, 2019, at 01:31, Ismael Juma wrote:
> > >>> > Thanks for the KIP Justine. It looks pretty good. A few comments:
> > >>> >
> > >>> > 1. Should we favor partitions that are not under replicated? This is
> > >>> > something that Netflix did too.
> > >>>
> > >>> This seems like it could lead to cascading failures, right?  If a
> > >>> partition becomes under-replicated because there is too much traffic,
> > >>> the producer stops sending to it, which puts even more load on the
> > >>> remaining partitions, which are even more likely to fail then, etc.
> > >>> It also will create unbalanced load patterns on the consumers.
> > >>>
> > >>> >
> > >>> > 2. If there's no measurable performance difference, I agree with
> > >>> Stanislav
> > >>> > that Optional would be better than Integer.
> > >>> >
> > >>> > 3. We should include the javadoc for the newly introduced method that
> > >>> > specifies it and its parameters. In particular, it would be good to
> > >>> > specify if it gets called when an explicit partition id has been
> > >>> > provided.
> > >>>
> > >>> Agreed.
> > >>>
> > >>> best,
> > >>> Colin
> > >>>
> > >>> >
> > >>> > Ismael
> > >>> >
> > >>> > On Mon, Jun 24, 2019, 2:04 PM Justine Olshan 
> > >>> wrote:
> > >>> >
> > >>> > > Hello,
> > >>> > > This is the discussion thread for KIP-480: Sticky Partitioner.
> > >>> > >
> > >>> > >
> > >>> > >
> > >>>
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-480%3A+Sticky+Partitioner
> > >>> > >
> > >>> > > Thank you,
> > >>> > > Justine Olshan
> > >>> > >
> > >>> >
> > >>>
> > >>
> >
>
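For readers new to the thread, the "sticky" behavior being discussed can be sketched compactly: for records with no key and no explicit partition, keep using one partition until a batch completes, then pick a new random one. This is a minimal illustration of the idea described in the KIP, not the actual DefaultPartitioner implementation:

```java
import java.util.concurrent.ThreadLocalRandom;

public class StickySketch {
    private final int numPartitions;
    private int stickyPartition = -1;

    StickySketch(int numPartitions) { this.numPartitions = numPartitions; }

    int partition(byte[] keyBytes) {
        if (keyBytes != null) {
            // Keyed records still hash to a stable partition, as today.
            return Math.floorMod(java.util.Arrays.hashCode(keyBytes), numPartitions);
        }
        if (stickyPartition < 0) {
            stickyPartition = ThreadLocalRandom.current().nextInt(numPartitions);
        }
        return stickyPartition; // stick here so keyless records fill one batch
    }

    // Called when the producer completes a batch and starts a new one.
    void onNewBatch() { stickyPartition = -1; }

    public static void main(String[] args) {
        StickySketch p = new StickySketch(6);
        int first = p.partition(null);
        // All keyless records before a new batch land on the same partition.
        for (int i = 0; i < 10; i++) {
            if (p.partition(null) != first) throw new AssertionError("not sticky");
        }
        p.onNewBatch(); // after this, a fresh partition may be chosen
        System.out.println("sticky partition before new batch = " + first);
    }
}
```

This also shows why the sticky behavior and a RoundRobinPartitioner don't interact: stickiness only applies on the keyless path of the default partitioner, and configuring a different Partitioner bypasses it entirely.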


Build failed in Jenkins: kafka-trunk-jdk11 #681

2019-07-09 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Fixes AK config typos (#7046)

--
[...truncated 2.55 MB...]
org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > withSchema PASSED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless STARTED

org.apache.kafka.connect.transforms.HoistFieldTest > schemaless PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeDateRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessBooleanTrue PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigInvalidTargetType 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation STARTED

org.apache.kafka.connect.transforms.CastTest > 
testConfigMixWholeAndFieldTransformation PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessUnsupportedType PASSED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
STARTED

org.apache.kafka.connect.transforms.CastTest > castWholeRecordKeySchemaless 
PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaFloat64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty STARTED

org.apache.kafka.connect.transforms.CastTest > testConfigEmpty PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueSchemalessInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless STARTED

org.apache.kafka.connect.transforms.CastTest > castFieldsSchemaless PASSED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType STARTED

org.apache.kafka.connect.transforms.CastTest > testUnsupportedTargetType PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt16 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt32 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaInt64 PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString STARTED

org.apache.kafka.connect.transforms.CastTest > 
castWholeBigDecimalRecordValueWithSchemaString PASSED

org.apache.kafka.connect.transforms.CastTest > 
castWholeRecordValueWithSchemaBooleanFalse STARTED


Jenkins build is back to normal : kafka-trunk-jdk8 #3773

2019-07-09 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #680

2019-07-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8591; WorkerConfigTransformer NPE on connector configuration

--
[...truncated 6.46 MB...]

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls STARTED

kafka.security.auth.ZkAuthorizationTest > testConsumerOffsetPathAcls PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 

Jenkins build is back to normal : kafka-2.3-jdk8 #58

2019-07-09 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-480 : Sticky Partitioner

2019-07-09 Thread M. Manna
Hello Justine,

I have one item I wanted to discuss.

We are currently in review stage for KAFKA- where we can choose always
RoundRobin regardless of null/usable key.

If I understood this KIP motivation correctly, you are still honouring how
the hashing of key works for DefaultPartitioner. Would you say that having
an always "Round-Robin" partitioning with "Sticky" assignment (efficient
batching of records for a partition) doesn't deviate from your original
intention?

Thanks,

On Tue, 9 Jul 2019 at 01:00, Justine Olshan  wrote:

> Hello all,
>
> If there are no more comments or concerns, I would like to start the vote
> on this tomorrow afternoon.
>
> However, if there are still topics to discuss, feel free to bring them up
> now.
>
> Thank you,
> Justine
>
> On Tue, Jul 2, 2019 at 4:25 PM Justine Olshan 
> wrote:
>
> > Hello again,
> >
> > Another update to the interface has been made to the KIP.
> > Please let me know if you have any feedback!
> >
> > Thank you,
> > Justine
> >
> > On Fri, Jun 28, 2019 at 2:52 PM Justine Olshan 
> > wrote:
> >
> >> Hi all,
> >> I made some changes to the KIP.
> >> The idea is to clean up the code, make behavior more explicit, provide
> >> more flexibility, and to keep default behavior the same.
> >>
> >> Now we will change the partition in onNewBatch, and specify the
> >> conditions for this function call (non-keyed values, no explicit
> >> partitions) in willCallOnNewBatch.
> >> This clears up some of the issues with the implementation. I'm happy to
> >> hear further opinions and discuss this change!
> >>
> >> Thank you,
> >> Justine
> >>
> >> On Thu, Jun 27, 2019 at 2:53 PM Colin McCabe 
> wrote:
> >>
> >>> On Thu, Jun 27, 2019, at 01:31, Ismael Juma wrote:
> >>> > Thanks for the KIP Justine. It looks pretty good. A few comments:
> >>> >
> >>> > 1. Should we favor partitions that are not under replicated? This is
> >>> > something that Netflix did too.
> >>>
> >>> This seems like it could lead to cascading failures, right?  If a
> >>> partition becomes under-replicated because there is too much traffic,
> >>> the producer stops sending to it, which puts even more load on the
> >>> remaining partitions, which are even more likely to fail then, etc.
> >>> It also will create unbalanced load patterns on the consumers.
> >>>
> >>> >
> >>> > 2. If there's no measurable performance difference, I agree with
> >>> Stanislav
> >>> > that Optional would be better than Integer.
> >>> >
> >>> > 3. We should include the javadoc for the newly introduced method that
> >>> > specifies it and its parameters. In particular, it would be good to
> >>> > specify if it gets called when an explicit partition id has been
> >>> > provided.
> >>>
> >>> Agreed.
> >>>
> >>> best,
> >>> Colin
> >>>
> >>> >
> >>> > Ismael
> >>> >
> >>> > On Mon, Jun 24, 2019, 2:04 PM Justine Olshan 
> >>> wrote:
> >>> >
> >>> > > Hello,
> >>> > > This is the discussion thread for KIP-480: Sticky Partitioner.
> >>> > >
> >>> > >
> >>> > >
> >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-480%3A+Sticky+Partitioner
> >>> > >
> >>> > > Thank you,
> >>> > > Justine Olshan
> >>> > >
> >>> >
> >>>
> >>
>


Re: PR review

2019-07-09 Thread M. Manna
Hello Colin,

I appreciate the time for reviewing this PR. I have now incorporated your
(and Matthias's) comments and the PR has been resubmitted. Please let me
know if there is more that we need to change.

Thanks again,

On Mon, 8 Jul 2019 at 23:50, Colin McCabe  wrote:

> Hi M. Manna,
>
> I left a review.  Take a look.
>
> Sorry for the delays.
>
> best,
> Colin
>
>
> On Mon, Jul 8, 2019, at 14:38, M. Manna wrote:
> > Hello,
> >
> > A few requests have been sent already. Could this please be reviewed? Our
> > business implementation is on hold due to this change.
> >
> >
> >
> > On Thu, 4 Jul 2019 at 13:33, M. Manna  wrote:
> >
> > > https://github.com/apache/kafka/pull/6771
> > >
> > > Could this be reviewed please ?
> > >
> > > On Wed, 3 Jul 2019 at 11:35, M. Manna  wrote:
> > >
> > >> https://github.com/apache/kafka/pull/6771
> > >>
> > >> Bouncing both users and dev to get some activity going. We have been
> > >> waiting a while to get this KIP PR merged.
> > >>
> > >> Could someone please review?
> > >>
> > >> Thanks,
> > >>
> > >> On Sun, 30 Jun 2019 at 08:59, M. Manna  wrote:
> > >>
> > >>> https://github.com/apache/kafka/pull/6771
> > >>>
> > >>> Hello,
> > >>>
> > >>> Could the above PR can be reviewed? This has been waiting for a long
> > >>> time.
> > >>>
> > >>> Just to mention, the package name should have "internal". Round-robin
> > >>> partitioning should have been supported with/without a key from the
> > >>> beginning. It provides users a guaranteed round-robin partitioning
> > >>> without having to regard key values (e.g. null/not null). From our
> > >>> business side, this is Kafka-internal logic. Hence, the placement
> > >>> inside the "internal" package.
> > >>>
> > >>> Thanks,
> > >>>
> > >>
> >
>


Re: [DISCUSS] KIP-455 Create an Admin API for Replica Reassignments

2019-07-09 Thread Stanislav Kozlovski
Hey there everybody,

I've edited the KIP. Here is a diff of the changes -
https://cwiki.apache.org/confluence/pages/diffpagesbyversion.action?pageId=112820260=13=11
Specifically,
- new AdminAPI for listing ongoing reassignments for _a given list of topics_.
- made AlterPartitionReassignmentsRequest#Topics and
AlterPartitionReassignmentsRequest#Partitions nullable fields, in
order to make the API more flexible in large cancellations
- made ListPartitionReassignmentsRequest#PartitionIds nullable as
well, to support listing every reassigning partition for a given topic
without the need to specify each partition
- explicitly defined what a reassignment cancellation means
- laid out the algorithm for changing the state in ZK (this was
spelled out by Colin in the previous thread)
- mention expected behavior when reassigning during software upgrade/downgrades
- mention the known edge case of using both APIs at once during a
controller failover

I look forward to feedback.

Hey George,
> Regardless of KIP-236 or KIP-455,  I would like to stress the importance of 
> keeping the original replicas info before reassignments are kicked off.  This 
> original replicas info will allow us to distinguish what replicas are 
> currently being reassigned, so we can rollback to its original state.
> Also this will opens up possibility to separate the ReplicaFetcher traffic of 
> normal follower traffic from Reassignment traffic,  also the metrics 
> reporting URP, MaxLag, TotalLag, etc. right now, Reassignment traffic and 
> normal follower traffic shared the same ReplicaFetcher threads pool.

Thanks for the reminder. A lot of your suggestions are outlined in the
"Future Work" section of KIP-455. The pointer towards different
ReplicaFetcher thread pools is interesting -- do you think there's
much value in that? My intuition is that having appropriate quotas for
the reassignment traffic is the better way to separate the two,
whereas a separate thread pool might provide less of a benefit.

With regards to keeping the original replicas info before
reassignments are kicked off - this KIP proposes that we store the
`targetReplicas` in a different collection and thus preserve the
original replicas info until the reassignment is fully complete. It
should allow you to implement rollback functionality. Please take a
look at the KIP and confirm if that is the case. It would be good to
synergize both KIPs.


Thanks,
Stanislav


On Tue, Jul 9, 2019 at 12:43 AM George Li
 wrote:
>
> > Now that we support multiple reassignment requests, users may execute
> > them incrementally. Suppose something goes horribly wrong and they want to
> > revert as quickly as possible - they would need to run the tool with
> > multiple rollback JSONs. I think that it would be useful to have an easy
> > way to stop all ongoing reassignments in emergency situations.
>
> KIP-236: Interruptible Partition Reassignment aims exactly at cancelling
> pending reassignments cleanly and safely in a timely fashion. It is possible
> to cancel/roll back reassignments that have not yet completed if the original
> replicas before reassignment are saved somewhere, e.g. the
> /admin/reassign_partitions znode or the Controller's ReassignmentContext
> in-memory struct.
>
> I think a command line option like "kafka-reassign-partitions.sh --cancel"
> would make it easier for the user to cancel whatever pending reassignments are
> going on right now, with no need to find the rollback JSON files and re-submit
> them as reassignments.
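>
As a minimal model of why saving the original replica list enables a clean cancel (assumed names; this is not the actual controller code): record each partition's pre-reassignment replicas when a reassignment starts, and cancellation simply restores them.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of rollback via saved original replicas: when a reassignment
// starts we remember the pre-reassignment replica list, so a cancel can
// restore every pending partition without any user-supplied rollback JSON.
public class ReassignmentRollbackSketch {
    private final Map<String, List<Integer>> current = new HashMap<>();
    private final Map<String, List<Integer>> originals = new HashMap<>();

    public void setInitial(String partition, List<Integer> replicas) {
        current.put(partition, replicas);
    }

    public void startReassignment(String partition, List<Integer> target) {
        // Save the original assignment only once per pending reassignment.
        originals.putIfAbsent(partition, current.get(partition));
        current.put(partition, target);
    }

    public void cancelAll() {
        // Restore every pending partition to its saved original replicas.
        originals.forEach(current::put);
        originals.clear();
    }

    public List<Integer> replicasOf(String partition) {
        return current.get(partition);
    }
}
```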
>
> Regardless of KIP-236 or KIP-455, I would like to stress the importance of
> keeping the original replicas info before reassignments are kicked off. This
> original replicas info will allow us to distinguish which replicas are
> currently being reassigned, so we can roll back to the original state.
> This also opens up the possibility of separating normal follower traffic
> from reassignment traffic in the ReplicaFetcher, as well as in the metrics
> reporting URP, MaxLag, TotalLag, etc. Right now, reassignment traffic and
> normal follower traffic share the same ReplicaFetcher thread pool.
>
> Thanks,
> George
>
> On Tuesday, July 2, 2019, 10:47:55 AM PDT, Stanislav Kozlovski 
>  wrote:
>
>  Hey there, I need to start a new thread on KIP-455. I think there might be
> an issue with the mailing server. For some reason, my replies to the
> previous discussion thread could not be seen by others. After numerous
> attempts, Colin suggested I start a new thread.
>
> Original Discussion Thread:
> https://sematext.com/opensee/m/Kafka/uyzND1Yl7Er128CQu1?subj=+DISCUSS+KIP+455+Create+an+Administrative+API+for+Replica+Reassignment
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-455%3A+Create+an+Administrative+API+for+Replica+Reassignment
> Last Reply of Previous Thread:
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201906.mbox/%3C679a4c5b-3da6-4556-bb89-e680d8cbb705%40www.fastmail.com%3E
>
> The following is my reply:
> 
> Hi again,
>
> This 

[jira] [Created] (KAFKA-8641) Invalid value ogg_kafka_test_key for configuration value.deserializer: Class ogg_kafka_test_key could not be found.

2019-07-09 Thread chen qiang (JIRA)
chen qiang created KAFKA-8641:
-

 Summary: Invalid value ogg_kafka_test_key for configuration 
value.deserializer: Class ogg_kafka_test_key could not be found.
 Key: KAFKA-8641
 URL: https://issues.apache.org/jira/browse/KAFKA-8641
 Project: Kafka
  Issue Type: Bug
Reporter: chen qiang


Invalid value ogg_kafka_test_key for configuration value.deserializer: Class 
ogg_kafka_test_key could not be found.

 

at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:715)
 at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:460)
 at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:453)
 at org.apache.kafka.common.config.AbstractConfig.&lt;init&gt;(AbstractConfig.java:62)
 at org.apache.kafka.common.config.AbstractConfig.&lt;init&gt;(AbstractConfig.java:75)
 at 
org.apache.kafka.clients.consumer.ConsumerConfig.&lt;init&gt;(ConsumerConfig.java:481)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.&lt;init&gt;(KafkaConsumer.java:635)
 at 
org.apache.kafka.clients.consumer.KafkaConsumer.&lt;init&gt;(KafkaConsumer.java:617)
 at 
cc.ewell.datatools.consumer.ConsumerInitRunnable.&lt;init&gt;(ConsumerInitRunnable.java:59)
 at 
cc.ewell.datatools.consumer.ConsumerGroup.addInitConsumer(ConsumerGroup.java:80)
 at 
cc.ewell.datatools.service.impl.DataPushLinkServiceImpl.init(DataPushLinkServiceImpl.java:618)
 at 
cc.ewell.datatools.controller.DataPushLinkController.init(DataPushLinkController.java:180)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497)

for (int i = 0; i < 10; i++) {
    CacheUtils.consumerGroup().push("CONSUMER_" + topic + "_" + i, consumerInitThread);
    new Thread(consumerInitThread, "CONSUMER_" + topic + "_" + i).start();
}
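
The root cause is that `value.deserializer` must name a class on the classpath, and `ogg_kafka_test_key` is not one (it looks like a value intended for a different property). The class-resolution step behind the error can be reproduced without kafka-clients; the fix, presumably, is a fully-qualified deserializer class such as `org.apache.kafka.common.serialization.StringDeserializer`:

```java
// Reproduces the resolution step behind the error above without kafka-clients:
// ConfigDef ultimately tries to load the configured string as a class, and
// "ogg_kafka_test_key" is not a loadable class name.
public class DeserializerConfigCheck {
    public static boolean isLoadableClass(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```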



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.2-jdk8 #144

2019-07-09 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-8591; WorkerConfigTransformer NPE on connector configuration

--
[...truncated 1.92 MB...]
kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartString STARTED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartString PASSED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartWithEmbeddedSeparators 
STARTED

kafka.security.auth.ResourceTest > shouldParseOldTwoPartWithEmbeddedSeparators 
PASSED

kafka.security.auth.ResourceTest > 
shouldThrowOnTwoPartStringWithUnknownResourceType STARTED

kafka.security.auth.ResourceTest > 
shouldThrowOnTwoPartStringWithUnknownResourceType PASSED

kafka.security.auth.ResourceTest > shouldThrowOnBadResourceTypeSeparator STARTED

kafka.security.auth.ResourceTest > shouldThrowOnBadResourceTypeSeparator PASSED

kafka.security.auth.ResourceTest > shouldParseThreePartString STARTED

kafka.security.auth.ResourceTest > shouldParseThreePartString PASSED

kafka.security.auth.ResourceTest > shouldRoundTripViaString STARTED

kafka.security.auth.ResourceTest > shouldRoundTripViaString PASSED

kafka.security.auth.ResourceTest > shouldParseThreePartWithEmbeddedSeparators 
STARTED

kafka.security.auth.ResourceTest > shouldParseThreePartWithEmbeddedSeparators 
PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAuthorizeWithPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testLocalConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnWildcardResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyDeletionOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclInheritance PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 

Re: [DISCUSS] KIP-221: Repartition Topic Hints in Streams

2019-07-09 Thread Levani Kokhreidze
Hello,

In order to move this KIP forward, I’ve updated the Confluence page with the new
proposal:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+KStream+with+Connecting+Topic+Creation+and+Repartition+Hint

I’ve also filled in the “Rejected Alternatives” section.

Looking forward to discussing this KIP :)

Kind regards,
Levani


> On Jul 3, 2019, at 1:08 PM, Levani Kokhreidze  wrote:
> 
> Hello Matthias,
> 
> Thanks for the feedback and ideas. 
> I like the idea of introducing dedicated `Topic` class for topic 
> configuration for internal operators like `groupedBy`.
> Would be great to hear others opinion about this as well.
> 
> Kind regards,
> Levani 
> 
> 
>> On Jul 3, 2019, at 7:00 AM, Matthias J. Sax  wrote:
>> 
>> Levani,
>> 
>> Thanks for picking up this KIP! And thanks for summarizing everything.
>> Even if some points may have been discussed already (can't really
>> remember), it's helpful to get a good summary to refresh the discussion.
>> 
>> I think your reasoning makes sense. With regard to the distinction
>> between operators that manage topics and operators that use user-created
>> topics: Following this argument, it might indicate that leaving
>> `through()` as-is (as an operator that uses user-defined topics) and
>> introducing a new `repartition()` operator (an operator that manages
>> topics itself) might be good. Otherwise, there is one operator
>> `through()` that sometimes manages topics but sometimes not; a different
>> name, ie, new operator would make the distinction clearer.
>> 
>> About adding `numOfPartitions` to `Grouped`. I am wondering if the same
>> argument as for `Produced` does apply and adding it is semantically
>> questionable? Might be good to get opinions of others on this, too. I am
>> not sure myself what solution I prefer atm.
>> 
>> So far, KS uses configuration objects that allow configuring a certain
>> "entity" like a consumer, producer, or store. If we assume that a topic is
>> a similar entity, I wonder if we should have a
>> `Topic#withNumberOfPartitions()` class and method instead of a plain
>> integer? This would allow us to add other configs, like replication
>> factor, retention-time etc, easily, without the need to change the "main
>> API".
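>>
As a hypothetical sketch of the configuration-object idea above (assumed names, not the real Streams API): a small immutable `Topic`-style class would let topic-level settings such as replication factor be added later without touching the DSL surface.

```java
// Hypothetical sketch of the Topic configuration-object idea: an immutable
// builder-style class so more topic-level settings (replication factor,
// retention, ...) can be added later without changing the "main API".
public class TopicSketch {
    private final Integer numPartitions;    // null = inherit from source topic
    private final Short replicationFactor;  // null = broker default

    private TopicSketch(Integer numPartitions, Short replicationFactor) {
        this.numPartitions = numPartitions;
        this.replicationFactor = replicationFactor;
    }

    public static TopicSketch withNumberOfPartitions(int numPartitions) {
        return new TopicSketch(numPartitions, null);
    }

    public TopicSketch withReplicationFactor(short rf) {
        return new TopicSketch(numPartitions, rf);
    }

    public Integer numPartitions() { return numPartitions; }
    public Short replicationFactor() { return replicationFactor; }
}
```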
>> 
>> Just want to give some ideas. Not sure if I like them myself. :)
>> 
>> 
>> -Matthias
>> 
>> 
>> 
>> On 7/1/19 1:04 AM, Levani Kokhreidze wrote:
>>> Actually, giving it more thought - maybe enhancing Produced with num of 
>>> partitions configuration is not the best approach. Let me explain why:
>>> 
>>> 1) If we enhance the Produced class with this configuration, it will also 
>>> affect the KStream#to operation. Since KStream#to is the final sink of the 
>>> topology, it seems a reasonable assumption that the user needs to 
>>> manually create the sink topic in advance. In that case, a num of 
>>> partitions configuration doesn’t make much sense. 
>>> 
>>> 2) Looking at the Produced class, based on the API contract, it seems Produced 
>>> is designed to be explicitly for the producer (key serializer, 
>>> value serializer, and partitioner are all producer-specific 
>>> configurations), while num of partitions is a topic-level configuration. I 
>>> don’t think mixing topic- and producer-level configurations together in one 
>>> class is a good approach.
>>> 
>>> 3) Looking at the KStream interface, it seems the Produced parameter is for 
>>> operations that work with non-internal topics (i.e. not topics created and 
>>> managed internally by Kafka Streams), and I think we should leave it as it 
>>> is in that case.
>>> 
>>> Taking all these things into account, I think we should distinguish between 
>>> DSL operations where Kafka Streams should create and manage internal 
>>> topics (KStream#groupBy) vs topics that should be created in advance (e.g. 
>>> KStream#to).
>>> 
>>> To sum it up, I think adding a numPartitions configuration to Produced will 
>>> result in mixing topic- and producer-level configuration in one class, and 
>>> it will break existing API semantics.
>>> 
>>> Regarding making the topic name optional in KStream#through - I think the 
>>> underlying idea is very useful, and giving users the possibility to specify 
>>> num of partitions there is even more useful :) Considering the arguments 
>>> against adding num of partitions to the Produced class, I see two options here:
>>> 1) Add the following method overloads:
>>> * through() - topic will be auto-generated and num of partitions will 
>>> be taken from the source topic
>>> * through(final int numOfPartitions) - topic will be auto-generated 
>>> with the specified num of partitions
>>> * through(final int numOfPartitions, final Produced produced) - 
>>> topic will be generated with the specified num of partitions and 
>>> configuration taken from the produced parameter.
>>> 2) Leave KStream#through as it is and introduce 
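
The option (1) overloads could be sketched as plain signatures (hypothetical; `Produced` is stubbed here, not the real Streams class):

```java
// Hypothetical signature sketch of option (1)'s through() overloads; the
// real Kafka Streams Produced class is stubbed for self-containment.
class ProducedStub<K, V> { }

interface KStreamSketch<K, V> {
    // Auto-generated topic; partition count inherited from the source topic.
    KStreamSketch<K, V> through();
    // Auto-generated topic with an explicit partition count.
    KStreamSketch<K, V> through(int numOfPartitions);
    // Auto-generated topic with explicit partitions plus producer config.
    KStreamSketch<K, V> through(int numOfPartitions, ProducedStub<K, V> produced);
}
```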

[jira] [Resolved] (KAFKA-8591) NPE when reloading connector configuration using WorkerConfigTransformer

2019-07-09 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8591.

   Resolution: Fixed
Fix Version/s: 2.3.1
   2.2.2

> NPE when reloading connector configuration using WorkerConfigTransformer
> 
>
> Key: KAFKA-8591
> URL: https://issues.apache.org/jira/browse/KAFKA-8591
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
>Reporter: Nacho Munoz
>Assignee: Robert Yokota
>Priority: Major
> Fix For: 2.2.2, 2.3.1
>
>
> When a connector uses a ConfigProvider and sets a given TTL in the returned 
> ConfigData, it is expected that WorkerConfigTransformer will periodically 
> reload the connector configuration. The problem is that when the TTL expires 
> a NPE is raised. 
> [2019-06-17 14:34:12,320] INFO Scheduling a restart of connector 
> workshop-incremental in 6 ms 
> (org.apache.kafka.connect.runtime.WorkerConfigTransformer:88)
>  [2019-06-17 14:34:12,321] ERROR Uncaught exception in herder work thread, 
> exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:227)
>  java.lang.NullPointerException
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$19.onCompletion(DistributedHerder.java:1187)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$19.onCompletion(DistributedHerder.java:1183)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:273)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:219)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:834)
> The reason is that WorkerConfigTransformer is passing a null callback to the 
> herder's restartConnector method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)